“Fus Ro Dah”

An interesting post, this one by my course mate Chris, in which he asks whether we could use acoustics as a form of weaponry. Deterrents such as the “Long Range Acoustic Device” (LRAD) and “The Mosquito” are possible and have been used in society; however, unfortunately, as Chris discovers, acoustic weaponry like that classic “Fus Ro Dah” attack is but a nice feature in a game (see Robert Bungay’s post).

Notes:

[1] Could an Acoustic Weapon Make Your Head Explode? – http://cperryacoustics.wordpress.com/2014/04/10/could-an-acoustic-weapon-make-your-head-explode/

[2] See how many decibels it would take to tear apart the Universe? Have a look! – http://rbungay.wordpress.com/2014/05/06/see-how-many-decibels-it-would-take-to-tear-apart-the-universe-have-a-look/

Pyracoustics

After seeing this video last week, building one is definitely on my list of things to do this year! I’m currently unsure of my exact design, or whether I will create a Ruben’s tube instead, but nonetheless it is still a really cool idea (and probably the most interesting visualisation of acoustics)!

One of my current ideas is to create a few smaller Ruben’s tubes of varying lengths, each driven by a loudspeaker driver playing the same source. Because of their different lengths, they will resonate at different frequencies, each producing a different pattern of flames from the same source. If these were placed together (maybe stacked in a pyramid), they would make a really cool visualisation as music plays.
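As a rough sanity check of the varying-lengths idea, here is a short Python sketch (my own illustration, with made-up example lengths) estimating the resonant frequencies of tubes of different lengths, modelling each as a simple pipe open at both ends:

```python
# Rough sketch (my own illustration, not from the video): estimate the
# resonant frequencies of tubes of different lengths, modelling each as
# a pipe open at both ends, where f_n = n * c / (2 * L).

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def resonant_frequencies(length_m, n_modes=3):
    """First few resonant frequencies (Hz) of an open-open tube."""
    return [n * SPEED_OF_SOUND / (2 * length_m) for n in range(1, n_modes + 1)]

# Three stacked tubes of different lengths respond to different notes
for length in (0.5, 1.0, 2.0):
    freqs = resonant_frequencies(length)
    print(f"{length} m tube resonates near: {[round(f) for f in freqs]} Hz")
```

The shortest tube only lights up for the higher notes, which is what would make the stacked arrangement act like a crude spectrum display.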

What do you think of the pyro board? Have you seen anything like it before? If so, let me know, or likewise if you have any other ideas for things I could try to build.

Wingardium Leviosa

Here’s a short video you may hopefully find interesting: it’s a more extreme visualisation of the standing waves I wrote about creating in some lab sessions a few weeks ago. As shown in the video, it works by using four lines of loudspeakers, all facing each other and playing the same simple tone. As the sound waves interact, they create a standing wave, meaning parts of the waves interfere constructively and others destructively. This basically means that some parts of the wave move a lot and some don’t move at all.

As the small objects are dropped in (they must be small enough to fit into the wave and light enough for the wave to oppose gravity), they become trapped at the parts of the wave that move the most. As the air particles move up and down (about 1,500 times a second), they manage to oppose gravity and hold the object so that it levitates.

You may notice that the polystyrene pieces are spread out and not all evenly spaced; this is because the nodes of the standing wave (the parts with destructive interference, where the air does not move up and down) have no strength against gravity. When the pattern moves around in the air, it is because the pattern from the loudspeakers changes, causing the standing wave to move.
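For a rough feel of that spacing, here is a quick Python sketch (my own calculation; the 1,500 Hz figure is taken from the vibration rate mentioned above) estimating how far apart the trapped pieces sit, given that trapping points in a standing wave lie half a wavelength apart:

```python
# Quick check of trap spacing in the levitation demo. In a standing wave,
# the trapping points sit half a wavelength apart. The tone frequency is
# the roughly 1,500 Hz figure quoted in the text above.

c = 343.0           # speed of sound in air, m/s
f = 1500.0          # tone frequency, Hz
wavelength = c / f  # roughly 0.23 m
node_spacing = wavelength / 2

print(f"Wavelength: {wavelength * 100:.1f} cm")
print(f"Polystyrene pieces trapped every ~{node_spacing * 100:.1f} cm")
```

So at that frequency the pieces should sit roughly 11 cm apart, which is the kind of regular spacing visible in the video.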

You may now be wondering: if we had big enough speakers, could we levitate a person? Theoretically I would say yes, I can’t see why not; unfortunately, in practice, I think by the time you’d reached a loud enough sound pressure level, your ears would have exploded and you’d probably be halfway to mashed potato! As nice a thought as it is, if you want to fly, it’s probably best to just go to a wind tunnel instead!

Hearing for the First Time

You may have seen this video during the week: it’s a lovely story of a lady called Joanne Milne (40), who was deaf from birth and whose hearing was restored with a cochlear implant. The video, shot by her mum, shows the moment she hears for the first time, in tears as the nurse recites the days of the week to her. Watching this made me appreciate how much of a difference my ears make to the world I experience every day. Some people (including myself) have recently been talking about the sound of F1 and how the cars don’t sound as nice as they used to, but imagine that with no sound at all, and I think it really puts things into perspective!

It is said that if you have shortfalls in one aspect of life, you more than make up for them in another, and that is certainly true in this case! Think about it: she has been deaf her whole life, and partially blind too, and as soon as she can hear, she can start talking! This is the power of lip reading, something she has very clearly mastered over her life, working out a whole language even though she had never heard it. I bet none of us could go abroad, having never learnt the local language, and pick it up just by lip reading!

Someone who has been through the same thing is a lady called Kate Locke, who has written a blog as she has gone through the process of getting a cochlear implant. If the video interested you then it is definitely something to check out! She writes very well about what a cochlear implant is and how it works, which I am going to try and briefly summarise here!

Cochlear Implant

There are three parts to the ear: outer, middle and inner. Joanne’s condition was sensorineural hearing loss, which is damage to the inner ear. Damage to the outer or middle ear is easier to repair (e.g. a perforated eardrum); however, the inner ear is the part that sends electrical impulses to the brain, which is what makes it harder to heal. Other types of inner-ear damage include presbycusis (age-related hearing loss) and noise-induced hearing loss.

Whereas hearing aids work simply by amplifying sound, a cochlear implant is different in that it attempts to mimic the natural hearing system instead. It first captures sound using a small microphone, similar to a hearing aid, but instead of amplifying it, the sound is converted into a digital code and then into electrical impulses. These impulses travel down into the ear and are passed through an array of electrodes in the cochlea that stimulate it. Normally, in a functioning ear, there are tiny hair cells in the cochlea, and when the sound wave causes them to move, an electrical signal is sent to the brain; in this case the electrodes trigger this instead, as the hair cells are lost or damaged.
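To make the idea a little more concrete, here is a hedged Python sketch of the general principle only (not any real device’s algorithm): split the incoming sound into frequency bands and treat each band’s energy as the stimulation level for one electrode, since different electrodes sit at different positions along the cochlea and so correspond to different frequencies:

```python
import numpy as np

# Hedged sketch of the idea behind cochlear implant processing (not any
# real device's algorithm): split the microphone signal into frequency
# bands and use each band's energy as the level for one 'electrode'.

def band_energies(signal, sample_rate, band_edges_hz):
    """Return one energy value per band, i.e. one per 'electrode'."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    energies = []
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        energies.append(spectrum[mask].sum())
    return energies

# A 1 kHz test tone should light up the band containing 1 kHz
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t)
edges = [100, 500, 1000, 2000, 4000, 8000]  # 5 bands, i.e. 5 'electrodes'
energies = band_energies(tone, sr, edges)
print("Loudest band:", energies.index(max(energies)))  # band 2: 1000-2000 Hz
```

Real implants use far more sophisticated processing, but the band-splitting idea is the common thread with the description above.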

As an aside, this is why you get a ringing in your ears when loud noise is heard over a prolonged period of time. As sound enters the ear, the hair cells are bent, sending signals to the brain; when this is prolonged, the hairs stay bent, causing the brain to interpret this as a constant noise. Eventually the hairs return to their natural position and the ringing stops, although repeated exposure will eventually cause lasting damage (i.e. noise-induced hearing loss or tinnitus).

There are many aspects of life where sound is the best part and many others where we just take it for granted. What was your reaction to the video and what sounds would you miss most if you couldn’t hear?

Notes

[1] News Article containing the video – http://www.telegraph.co.uk/health/10728006/Moment-a-deaf-woman-could-hear-for-first-time.html

[2] Kate Locke’s Blog – http://katelocke.wordpress.com

[3] How Cochlear Implants work – http://www.cochlear.com/wps/wcm/connect/us_OLD/home/understand/explore-your-options-to-treat-hearing-loss/cochlear-implants/how-do-cochlear-implants-work/

Stay Tuned…

Bridgewater Hall Organ

I saw an article this week about the Royal Festival Hall organ, which has been refurbished and has returned to the hall after nine years; now that’s a long refurbishment! It was an interesting piece about the history of the organ and why it was being refurbished; however, as I was reading, one sentence caught my attention, as it seemed a slightly odd thing to point out. It read as follows:

And at full tilt, it makes the very air shake, leaving you feeling as if your molecular structure has been re-ordered.

This made me smile somewhat, as the writer of the article clearly didn’t quite grasp how sound travels, or how an organ works. “At full tilt”? The air vibrates at any volume you play the organ at; that’s how sound travels from an organ, or from anything else!

The rest of the article did make sense, although it reminded me of when our course had the privilege of visiting Bridgewater Hall and seeing the organ there in all its glory. The video below gives an insight into the Bridgewater Hall organ.

Organs are made up of many pipes, which work on the basis of sound resonating inside them, like when you blow over a bottle and it makes a tone. Pipes of different sizes resonate (the air inside them vibrates) at different frequencies, like when you fill your bottle with some water and blow over it again. The sound from each pipe is called a pure tone, as it is not altered once inside the pipe; however, when many of these pure tones are played together, they create different tones.
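As a rough illustration of how pipe length sets pitch (the example length is my own, not a measurement of any particular organ), here is a short Python sketch using the standard formulas for an open pipe and a stopped (closed-end) pipe:

```python
# Rough illustration (my own example numbers): the pitch of an organ pipe
# follows from its length. An open pipe resonates at f = c / (2L); a
# stopped (closed-end) pipe at f = c / (4L), an octave lower for the
# same length.

c = 343.0  # speed of sound in air, m/s

def open_pipe_pitch(length_m):
    return c / (2 * length_m)

def stopped_pipe_pitch(length_m):
    return c / (4 * length_m)

# An 8-foot (about 2.44 m) pipe, a classic organ rank length
length = 2.44
print(f"Open pipe:    {open_pipe_pitch(length):.0f} Hz")
print(f"Stopped pipe: {stopped_pipe_pitch(length):.0f} Hz")
```

This is why stopping the end of a pipe is a cheap way for organ builders to get a deeper note without doubling the pipe length.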

Organs are unique in the instrument world in that they can sound like other (classical) instruments, e.g. brass or woodwind, by using different combinations of pipes; when all the pipes are combined, you get the classic organ sound, which is similar to a whole orchestra playing the same note at once. The many switches and buttons on an organ open and close pipes, meaning the different instrument sounds can be played.

This process of adding several pure tones together to imitate other instruments is not uncommon, however: it is the same theory that your electric keyboard or synthesiser is based on. In synthesis, sine waves (the simplest sound wave) are added together using some clever maths discovered by a mathematician called Fourier, turning them into instrument sounds. This is one reason a more expensive electric keyboard can sound better: more sine waves are used to give greater detail to each note.
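Here is a minimal additive-synthesis sketch of that Fourier idea in Python; the 1/n harmonic amplitudes are purely illustrative, not taken from any real instrument:

```python
import numpy as np

# Minimal additive-synthesis sketch: summing sine waves at integer
# multiples of a fundamental builds a richer tone. The 1/n amplitudes
# are illustrative only, not modelled on a real instrument.

def additive_tone(fundamental_hz, n_harmonics, sample_rate=44100, seconds=1.0):
    t = np.arange(int(sample_rate * seconds)) / sample_rate
    tone = np.zeros_like(t)
    for n in range(1, n_harmonics + 1):
        tone += (1.0 / n) * np.sin(2 * np.pi * n * fundamental_hz * t)
    return tone / np.max(np.abs(tone))  # normalise to avoid clipping

pure = additive_tone(220, 1)  # a lone sine wave: a pure tone
rich = additive_tone(220, 8)  # 8 harmonics: closer to an instrument sound
print("Samples generated:", len(rich))
```

Writing `rich` to a WAV file and comparing it with `pure` makes the difference obvious: the same pitch, but a much fuller timbre.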

So next time you are playing your keyboard, thank the organ for the sound you can hear! And if that isn’t enough for you, then have a look at this interesting take on an organ. Let me know what you think!

In the Mode

Yesterday we had a lab experiment in the university’s lab facilities, specifically the small reverberation chamber; this lab was the creation and measurement of room modes. In acoustics, everything has a frequency at which it naturally vibrates, like a guitar string or the air in a bottle when you blow across it; the same is true for a room.

If a loudspeaker is placed in a room and a frequency is found where an integer number of half-wavelengths of the sound wave (produced by the loudspeaker) fits exactly into one of the room’s dimensions, then what is known as a standing wave is set up. The wave is reflected back and forth between the two walls and appears to stand still rather than travel; it creates ‘nodes’ and ‘antinodes’, which are pressure minimum and maximum points respectively. As this wave is set up, the wave from the speaker interacts with the wave reflected from the wall: the equal and opposite signals cancel each other out at the nodes and double at the antinodes. A graphic representation is given in the diagram below. As the frequency is doubled, twice as many half-wavelengths fit between the walls, as the diagram also shows.

Standing wave

You may not notice room modes often in everyday scenarios, as spaces are often acoustically designed to avoid them: absorption (as mentioned in one of my previous posts) reduces reverberation, and thus the reflected signal and the room mode. That is why we carried out this experiment in the reverberation chamber, where modes are easily set up. Rooms with matching or integer-multiple dimensions (e.g. a cube, or a 2 m x 4 m x 8 m room) are a bad idea, as the modes along all the axes then resonate at the same frequencies and become more prominent as a result; in this case four times louder than in just one dimension.

The video shows how the tone is loud at one wall, quietens in the middle and gets louder again at the other wall; this is the effect of the room mode. You might have expected, if the theory held exactly, that there would be no sound at all in the middle; this is not the case because sound waves are also propagating along the other axes, as the room was not cubic.

You may find it hard to hear on your speakers (with footsteps and speech being more audible); this is because I shot this video on my iPhone 5. The graph below shows the frequency response of an iPhone microphone: 0 dB on the y-axis means the mic picks up the input signal at the same level it is heard, anything below that is picked up less, and anything above 0 dB is amplified. You will see that it mainly boosts mid/high frequencies, because this is roughly the frequency range of speech, the main requirement for a phone microphone! Low frequencies are reduced because they are not needed as much (e.g. to reduce that annoying wind howl in the background when you are trying to make a phone call).

iPhone Mic Response

This is why you cannot hear it very clearly in the recording, even though in reality the speaker was producing up to 100 dB, and why we needed to wear ear defenders.

Loud Noise

It was an interesting concept to experience, and one I recommend: if you have a spare few minutes and a reasonably good set of speakers, try out this Tone Generator and try to find the resonant frequency of the dimensions of your room. Note that a different mode will be set up for each different dimension of the room.

To find the right frequency, you can use the simple equation: Frequency = 340 ÷ (2 × Dimension in metres); this is the lowest mode, where half a wavelength fits between the walls. You can also use integer multiples of this frequency (e.g. 2 or 3 times it) for the higher modes. To get you started, here are some example values so that you can get an idea of the frequency region you will need:

Room Modes
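The same calculation can be sketched in a few lines of Python (the example room dimensions are my own, not those of the lab), using the standing-wave condition that an integer number of half-wavelengths fits each dimension:

```python
# Small calculator for axial room-mode frequencies, using the
# standing-wave condition: an integer number of half-wavelengths fits
# the dimension, so f_n = n * c / (2 * L).

c = 340.0  # speed of sound in air, m/s (as used in the text)

def axial_modes(dimension_m, n_modes=3):
    """First few axial mode frequencies (Hz) for one room dimension."""
    return [n * c / (2 * dimension_m) for n in range(1, n_modes + 1)]

# Example: a 5 m x 4 m x 2.5 m room has a separate mode series per axis
for dim in (5.0, 4.0, 2.5):
    print(f"{dim} m axis: {[round(f, 1) for f in axial_modes(dim)]} Hz")
```

Each axis gives its own series of frequencies, which is why a non-cubic room spreads its modes out rather than piling them up at the same frequency.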

Once you find a frequency that seems to be resonant, have a walk round and listen to how the sound varies at different points in the room, then comment below and let me know what you heard!

Notes

[1] https://www.soundonsound.com/sos/dec07/articles/acoustics.htm

[2] iPhone mic and speaker responses http://blog.faberacoustical.com/2010/ios/iphone/iphone-4-audio-and-frequency-response-limitations/

Water in your Ear

My attention was recently brought to this article on the BBC website about a new concept for concert-goers: an underwater concert! At first it seems a really bizarre concept, but it holds some strengths over a standard concert. So without further ado, let’s dive straight in… (sorry, I couldn’t resist)!

Mainly, the listener is in charge of their own concert experience, moving around and finding the positions that sound best to them, giving them a sense of control. The main problem, though, is that as the swimmers move around, so does the water; and since the sound is travelling through the water, the sound waves get moved around as well, distorting and altering the sound. This is most probably the main reason for the strange sound created. A similar, though less extreme, effect could be created on land with sudden changes in the temperature of the air the sound is propagating through, causing the sound to bend and move with the air temperature (as sound travels at different speeds at different temperatures).

In the video, the swimmers say that they can mainly hear high frequencies. It is also mentioned that the speed of sound in water is different: about 1,481 m/s* in water compared with 343 m/s in air, meaning sound travels roughly 4.3 times faster in water. For the same frequency, this also makes the wavelength in water about 4.3 times longer than in air. Sonar systems rely on very high frequencies, partly because their short wavelengths give fine detail and partly because humans cannot hear frequencies that high.
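A quick Python check of those numbers, using the speeds quoted above and an example 1 kHz tone:

```python
# Checking the speed-of-sound figures quoted in the text: the same tone
# has a much longer wavelength in water because sound travels faster there.

c_air = 343.0     # m/s, in air
c_water = 1481.0  # m/s, in water (as quoted above)

freq = 1000.0     # an example 1 kHz tone
print(f"Speed ratio: {c_water / c_air:.1f}x")
print(f"Wavelength in air:   {c_air / freq:.2f} m")
print(f"Wavelength in water: {c_water / freq:.2f} m")
```

The ratio works out at about 4.3, matching the figure in the article.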

There is only one way to experience what this concert may sound like, and that is to go to one; however, if you don’t fancy that, it could also be simulated on a computer. Normally, in a concert hall, an impulse response** (IR) is recorded; this can then be convolved with a ‘dry’ recording of some music (one with no reverb added), and the output file sounds as the music would if it were played in that space. This technique is called convolution and is common practice in concert hall design and testing. Trevor Cox made a particularly interesting example of this when he recently measured the longest reverb ever. My plan was to find an underwater IR and try this to find out what it actually sounds like; however, I could not find one online. Please comment below if you have one, or know where I can get one, as I think this would be an interesting test.
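For anyone who wants to try the convolution technique themselves, here is a minimal Python sketch; since I couldn’t find a real underwater IR, the IR here is a made-up decaying noise burst standing in for a measured one:

```python
import numpy as np

# Minimal convolution-reverb sketch of the technique described above:
# convolve a 'dry' signal with an impulse response (IR). The IR here is
# a made-up decaying noise burst standing in for a measured one.

rng = np.random.default_rng(0)
sr = 8000  # sample rate, Hz

# Fake 0.5 s IR: decaying noise, crudely imitating a room's reverb tail
t = np.arange(int(0.5 * sr)) / sr
ir = rng.standard_normal(len(t)) * np.exp(-6 * t)

# 'Dry' source: a short 440 Hz tone burst
t_dry = np.arange(int(0.25 * sr)) / sr
dry = np.sin(2 * np.pi * 440 * t_dry)

wet = np.convolve(dry, ir)  # the convolution itself
print("Dry length:", len(dry), "Wet length:", len(wet))
# wet is longer than dry: the reverb tail rings on after the source stops
```

Swapping the fake `ir` for a measured impulse response (underwater or otherwise) is all it would take to hear how that space colours the music.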

The concept of an underwater concert is a very interesting one, and one I would be interested to try, although not one that would give the high-quality sound of a state-of-the-art concert hall! Would you ever try an underwater concert, and if so, who would you like to hear played underwater?

Notes

*m/s stands for metres per second, i.e. the distance a sound wave travels in one second in the specified medium

**An impulse response is a recording of an impulse in a space, such as a gunshot, a balloon bursting or a hand clap

References

[1] http://www.bbc.co.uk/news/uk-26445429

[2] http://www.bbc.co.uk/newsround/26445927