Warning! This one is going to be a little maths-y and a little science-y! Don’t worry though, it’s a lot simpler than you expect.
HOW CAN WE DESCRIBE SOUND?
Sound waves have a set of complex characteristics that determine what they sound like. However, the only two we need to know for now are Frequency and Amplitude.
If you pluck a guitar string, you can see it vibrate back and forth and listen to the sound it makes. The faster the string vibrates, the higher the pitch of the sound. The more violently the string vibrates, moving further as it shakes, the louder it will be. These characteristics of a guitar string correspond to the Frequency and Amplitude of a sound wave.
Frequency, measured in Hertz (Hz), is how we describe how many times per second the wave completes a cycle. You can think of a cycle in this case as when the wave returns to the same value again. A cycle is marked in the illustration above by the blue lambda symbol (λ) – this is also known as the Wavelength. The shorter the distance between repeats, the higher the frequency. Frequency determines the pitch of the sound – the higher the frequency, the higher the pitch, and vice versa. In the diagram above, if the total amount of time along the x axis is 1 second, the frequency of the wave would be about 2 Hz (two repeats per second).
Amplitude is how we describe the ‘height’ of the wave. The amplitude of a sound wave at any given time can be either a positive number, a negative number, or zero. As you can see in the diagram above, the wave goes from zero on the y axis, up to a positive value (we can assume this value is 1), then down to a negative value (-1), then it repeats. Amplitude determines how ‘loud’ a sound is.
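If you like to see things in code, here is a minimal sketch of those two characteristics using Python’s standard library. The function name and the sample rate are my own choices for illustration, not anything from the post.

```python
import math

def sine_wave(frequency_hz, amplitude, duration_s, sample_rate=1000):
    """Sample amplitude * sin(2*pi*f*t) at sample_rate points per second."""
    n = int(duration_s * sample_rate)
    return [amplitude * math.sin(2 * math.pi * frequency_hz * i / sample_rate)
            for i in range(n)]

# A 2 Hz wave sampled over 1 second completes two full cycles,
# just like the diagram above. Its peak amplitude is 1.
wave = sine_wave(2, 1.0, 1.0)
print(max(wave))  # peak amplitude of approximately 1
```

Changing the first argument squeezes more cycles into the same second (higher pitch); changing the second scales the peaks up or down (louder or quieter).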
POSITIVE AND NEGATIVE NUMBERS
When you play multiple sounds at the same time, the waves for each sound are added together to create a new soundwave. You can do this manually, quite easily. For example, say we have two identical sound waves playing at the same time - let’s pretend they both look exactly the same as the diagram above. At the furthest point to the left on the x axis, both waves are at zero on the y axis. This means they both have an amplitude of zero.
0 + 0 = 0.
So, the new wave’s amplitude at that point in time is also zero. If we move forwards to the next point along, both waves have a value of approximately 0.5.
0.5 + 0.5 = 1.
So, the new wave would have an amplitude of 1 at this point in time. If we continue this trend of adding each value together along the entire waveform, we will end up with a wave with double the amplitude of the originals!
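That point-by-point addition is easy to check in code. This is just a sketch of the process described above – the helper name is mine.

```python
import math

def sine(frequency_hz, samples=1000):
    """One second of a sine wave with peak amplitude 1."""
    return [math.sin(2 * math.pi * frequency_hz * i / samples)
            for i in range(samples)]

wave_a = sine(2)
wave_b = sine(2)  # an identical wave, perfectly aligned with the first

# Add the amplitude values together, one point in time at a time.
combined = [a + b for a, b in zip(wave_a, wave_b)]

print(max(wave_a))    # the originals peak at 1.0
print(max(combined))  # the combined wave peaks at 2.0 - double the amplitude
```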
SET PHASERS TO STUN
So, what is phase?
Phase, rather confusingly, has many different meanings. The type of phase we’re interested in here is the “expression of relative displacement between two features of waves having the same frequency”. In simpler terms, if you have two of the same wave (same frequency and amplitude, just as we had before) and you nudge one forwards or backwards in time compared to the other, the phase is the difference between the two. This is sometimes called ‘Phase Shift’ and can be expressed in degrees – like an angle.
If we have two waves and they are aligned on the x axis perfectly, like the diagram below, they would be described as ‘in phase’ with each other. If they are not aligned perfectly, they are ‘out of phase’.
There is a special case where two waves can be effectively ‘perfectly out of phase’. In the diagram below, you can see the two waves are exact opposites, because of how far they have been phase shifted. This is a phase shift of 180°.
Now what happens if you add these out-of-phase waves together?
Take each pair of amplitude values from the two waves and add them together, and you will always get zero.
1 + (-1) = 0
The two waves, when played together, cancel each other out – complete silence!
This is phase cancellation!
WHY DOES THIS MATTER?
Getting phase right is a very big deal in a music production environment.
In the studio, phase cancellation can make instruments sound weak and less impactful. When recording music, producers and musicians love to layer up similar sounds to create lush textures. There can be cases where multiple layers of the same sound end up interfering with each other via phase cancellation.
Another example, one that I’ve personally experienced, is when recording a drum kit. The snare drum is often recorded with two microphones – one on the top and one on the bottom. The stick hits the drum on the top skin right next to the top microphone giving us a lovely ‘pop’ to the sound. The microphone on the bottom skin captures the rattle of the snares that give the drum its distinctive sound. Can you spot the phase problem? It takes more time for the sound of the impact to reach the bottom mic than the top mic. So, when you listen to both recordings at the same time, this tiny delay can cause phase cancellation that can badly affect the body of the drum hit, leaving you with a thin sounding snare!
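You can even estimate how bad the problem is. A time delay becomes a phase shift that depends on frequency: phase (in degrees) = delay × frequency × 360. The figures below are hypothetical – the post doesn’t give the actual mic spacing – but they show how a small extra path to the bottom mic lands some frequencies near the worst-case 180°.

```python
SPEED_OF_SOUND = 343.0  # metres per second, in air at roughly 20 degrees C

def phase_shift_degrees(extra_distance_m, frequency_hz, speed=SPEED_OF_SOUND):
    """Phase shift caused by sound travelling an extra distance to one mic."""
    delay_s = extra_distance_m / speed
    return (delay_s * frequency_hz * 360) % 360

# Hypothetical setup: the bottom mic sits about 0.3 m further from the impact.
# Near 570 Hz that extra path is half a wavelength - close to full cancellation.
print(phase_shift_degrees(0.3, 571.7))  # roughly 180 degrees
```

Engineers typically fix this by flipping the polarity of the bottom mic or nudging one recording in time until the two line up.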
WHY SHOULD I CARE?
That’s what I thought too. The music you listen to every day can be affected by phase cancellation, both when it’s being made and recorded, and when you’re listening to it.
It is important to understand that sound is the vibration of air. It’s essentially a fluctuation of air pressure – the wave we can draw is a simple way to visualize this. Everything above zero in a waveform is positive air pressure, and everything below is negative air pressure. Everything on the zero line is normal air with no change in pressure. The air pressure is created by the speaker cone moving back and forth.
If you have two speakers, pushing air back and forth in perfect synchronisation, the sound they produce will be strong and full of energy. However, if the speakers are pushing air ‘out of phase’ with each other, the sound will be lacking. This is phase cancellation in reality – not just theoretical graphs!
During research and development on an upcoming speaker here at Bayan Audio, we discovered a phase problem in our design. In the design, we have two speaker cones positioned one above the other, both facing forwards. The top speaker produces the high frequencies, and the bottom the low. Naturally, and to provide a balanced sound, there is a crossover range of frequencies that both speakers will produce. We discovered that there was some phase interference occurring between the two speakers in the crossover range. How do you solve a problem like this? Simple! Recess the top speaker into the body of the cabinet by half of the wavelength of the misbehaving frequency. This small change meant that the out of phase frequency was ‘shifted’ back into phase!
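The recess depth is simple to work out: wavelength = speed of sound ÷ frequency, and the recess is half of that. The 3000 Hz used below is purely illustrative – the post doesn’t say which frequency was misbehaving in the Bayan Audio design.

```python
SPEED_OF_SOUND = 343.0  # metres per second, in air at roughly 20 degrees C

def half_wavelength(frequency_hz, speed=SPEED_OF_SOUND):
    """Distance to recess a driver: half the wavelength of the problem frequency."""
    wavelength = speed / frequency_hz  # metres
    return wavelength / 2

# For an illustrative 3000 Hz problem frequency, the recess works out
# to roughly 57 mm.
print(half_wavelength(3000) * 1000)  # recess depth in millimetres
```

Notice that the fix is frequency-specific: a recess that re-aligns one frequency shifts every other frequency by a different fraction of its wavelength, which is why this works best on a narrow crossover range.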
THAT’S PRETTY INTERESTING
So, there we have it; a crash course in phase cancellation and a bit of sound wave theory. It’s not as complicated as you expected, I hope. If you didn’t quite get it – try drawing two waves on a graph and adding together the values of their amplitudes over time. Plot the new amplitudes onto a new graph, then draw a line between each point. Ta-Da! You’ve got a shiny new wave – the wave that would be produced if you played the two original waves together.
This post was written by Jack Chapman.