The Use and Abuse of Signal Processing

Signal processing, in audio speak, is the act of modifying sound that has been converted into an analog or digital signal. It is in just about every aspect of recorded or amplified music. When you adjust the bass and treble on your car stereo, you are performing signal processing. You are not changing the source material, but you are changing how it sounds by running the signal through the equalizer in your radio. Signal processing can be used for subtle effects, as in the car example, to make up for the fact that your car is not the environment in which the music was created. It can also be used for much more drastic mutations, where the processing becomes as much a part of the sound as the sound itself... or more. An electric guitar's actual acoustic sound isn't much; if you turn it off and just put your ear up to it, its sound is, to say the least, absolutely nothing like what you are used to hearing from an electric guitar. So how does that thin, twangy plucking sound turn into the vast number of ways we hear electric guitars in music? You got it... signal processing. In some cases, signal processing is the only thing we actually hear. A guitar amplifier (or a distortion pedal) is a signal processor that distorts the guitar's little plucky-sounding signal into harsh, gritty tones, often run through various other effects, and that processed result is all we hear of the actual guitar. In mixing music, all sorts of signal processing help us get the sound just right, or just wrong. Almost everything we hear is being signal processed in some way. In this article we will get into what the major kinds of signal processing are and how they are used, misused, and intentionally misused for cool effects.
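
To make the amp-as-signal-processor idea concrete, here is a minimal sketch of the kind of waveshaping a distortion pedal does, written in Python with numpy. The function name and the drive value are invented for illustration, not taken from any real pedal: the signal gets boosted and then its peaks get squashed, and that squashing is what adds the harmonics we hear as grit.

```python
import numpy as np

def soft_clip(signal, drive=8.0):
    """Crude stand-in for a distortion pedal (illustrative, not any real unit).
    Boost the signal, then squash the peaks with tanh; the squashing adds the
    harmonics that turn a thin, plucky signal into something gritty."""
    return np.tanh(signal * drive) / np.tanh(drive)
```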

Dynamics control: Dynamics means how a sound's volume changes over time. Dynamics control is the act of keeping a sound's volume in a desirable range between its softest possible volume and its loudest possible volume. In English, this means dynamics control helps keep a sound (like a singer) from being much louder or softer in some parts than in others. The main process that provides this is called compression. Compression lowers the relative volume of any part of the sound that exceeds a certain level, called a threshold. How much the excess above the threshold gets reduced is set by a proportion called a ratio: at 4:1, for example, a part of the signal that goes 8 dB over the threshold comes out only 2 dB over. We are talking about relative volumes here, not the actual volume you are hearing. Compression compresses the difference between loud and soft. Basically, it brings the louder parts of the signal down compared to the softer parts, making the sound more even in volume. Then the overall volume can be increased with an amplifier (make-up gain), allowing you to raise the volume of the soft parts. Compression was originally intended for this purpose of keeping volume in check; however, it has some interesting side effects. When you compress a signal heavily, it brings the louder parts down, and if the signal is then amplified enough, details that were barely audible are pulled up into the range of hearing. If compression is used drastically, this can produce some very unnatural results. If you are just going for dynamics control, back down a little on the compression. However... overdoing compression can be really cool. The result is a very heavy, thick, and often aggressive sound. Abusing compression can make a sound seem as if it were about to rip your head off. It can also make a drum's tail as loud as its initial attack, and make a guitar note sustain almost indefinitely.
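
As a rough sketch of the threshold-and-ratio idea above, here's a bare-bones compressor in Python/numpy. It's deliberately simplified (no attack or release smoothing, which any real compressor has), and numbers like threshold_db and makeup_db are just example values.

```python
import numpy as np

def compress(signal, threshold_db=-20.0, ratio=4.0, makeup_db=6.0):
    """Bare-bones compressor sketch: any part of the signal above the threshold
    gets its overshoot reduced by the ratio, then make-up gain raises the whole
    thing back up. No attack/release smoothing, so this is not studio-grade."""
    eps = 1e-12                                         # avoid log of zero
    level_db = 20.0 * np.log10(np.abs(signal) + eps)    # instantaneous level in dB
    over_db = np.maximum(level_db - threshold_db, 0.0)  # how far over the threshold
    gain_db = -over_db * (1.0 - 1.0 / ratio) + makeup_db
    return signal * 10.0 ** (gain_db / 20.0)
```

At 4:1, a peak 12 dB over the threshold comes out only 3 dB over; crank the ratio and the make-up gain and you get the squashed, aggressive character described above.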

Digital delay effects: include reverb and delay. Reverb is what you hear when you are in the audience at, for example, a play. The performer's voice, after being sent through the speakers, is bounced around by the surfaces that make up the interior of the theater. Since sound does not travel instantaneously, you hear the direct sound from the speakers first, and then you hear it again as it bounces off the walls and seats. This after-sound consists of tons of little echoes so close together that they sound like one constant, blurry wash. That blurry after-sound is known as reverb. Depending on the room, you will often also hear a few more distinct, periodic repeats of the sound. Digital delay effects try to emulate a sound being in a room by delaying and blurring it in various ways, with the result heard beneath (and a few milliseconds after) the original sound. Reverb is yet another fun tool to misuse. If you overdo it, it may not sound like a real room, but that doesn't have to be what you're going for. It can be great for pushing sounds into the background, or for making a synthesizer envelop you in a lush soundscape. You can crank up those distinct repeats mentioned above and set them to recur at times that are rhythmically meaningful to the music. You can play the delays as if they were part of the instrument. You can also get too carried away and muddy up the balance of the music. With reverb and delay, a little can often go a long way.
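
Here's a minimal sketch of the "distinct repeats" half of this: a single feedback delay in Python/numpy. The 375 ms default is just an example value (it happens to line up with a dotted eighth note at 120 BPM), and a real reverb would be a much denser pile of these plus filtering, not one tap.

```python
import numpy as np

def feedback_delay(dry, sample_rate=44100, delay_ms=375.0, feedback=0.4, mix=0.3):
    """Single echo that feeds back on itself: each repeat is a quieter copy of
    the previous one, so the echoes trail off. All parameter values are examples."""
    d = int(sample_rate * delay_ms / 1000.0)
    out = np.concatenate([dry, np.zeros(8 * d)])   # leave room for the tail to ring out
    wet = np.zeros_like(out)
    for i in range(d, len(out)):
        # each repeat = the input from one delay ago + a quieter previous repeat
        wet[i] = out[i - d] + feedback * wet[i - d]
    return (1.0 - mix) * out + mix * wet
```

Raise feedback and mix and the repeats become a rhythmic part of the performance; leave them low and the effect just adds a sense of space.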

EQ: amplifies or reduces the volume of separate frequencies within the spectrum of a sound. The easiest example is the one mentioned at the beginning of this article: your car stereo. If you turn up the bass, you are boosting the low frequencies. This is good for compensating for your car's listening environment and speaker system being different from the creator's. The job of EQ in music mixing is to make the overall composite sound of every instrument and voice musically pleasing and balanced. When EQ is used on each sound individually, it can give sounds more presence in the frequency areas they are musically suited to and de-emphasize frequency areas that are displeasing or that do not mix well with other instruments. For this purpose, EQ is best used subtly, to make up for the source sound not being perfect to begin with. For instance, if the acoustic guitar sounds muddy, you can either give it a little low-frequency reduction and high-frequency boost, or you can reposition the microphone for a better source. Often the best EQ is natural EQ, meaning very sparing EQ from signal processing and a lot of getting the right sound from mic placement; you can change a sound's frequency content drastically by moving a mic sometimes the slightest amount. EQ is used mostly to make small compensations in the various sounds of a mix so they all fit nicely with each other. Only use EQ drastically when you want to produce a sound that is drastically different in frequency content than it is capable of naturally, or in a pinch at a live show where you can't just ask the drummer to stop playing so you can move his mics around.
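
As a small sketch of the "muddy acoustic guitar" fix, here is a gentle low cut plus a little extra high end in Python, using scipy's standard Butterworth filters. The corner frequencies and the amount of top end added are invented for illustration; the point is only that the moves stay small.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def de_mud(signal, sample_rate=44100, low_cut_hz=120.0, sparkle_hz=6000.0, sparkle_gain=0.4):
    """Illustrative EQ move: trim the lows a bit, then mix back in a little of the
    content above sparkle_hz. Gentle 2nd-order filters keep the change subtle."""
    highpass = butter(2, low_cut_hz, btype='highpass', fs=sample_rate, output='sos')
    trimmed = sosfilt(highpass, signal)      # reduce the low-frequency mud
    top = butter(2, sparkle_hz, btype='highpass', fs=sample_rate, output='sos')
    sparkle = sosfilt(top, trimmed)          # isolate just the top end
    return trimmed + sparkle_gain * sparkle  # add a touch of it back (a crude high shelf)
```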

The rest of 'em: There's no way I could fit a section on every type of signal processing out there into this article. I could easily fill the whole magazine with paragraphs on flanging, chorus, tube distortion versus solid state, tape saturation, and bit reduction. Too bad, too, 'cause there are some fun toys out there to mangle or beautify (or beautifully mangle) with. The art of mixing is knowing when to turn a knob and when not to. Done well, it is a unique instrument in its own right, with many creative possibilities. A good mix and good choices in signal processing can mean the difference between the music sounding amazing and sounding so-so to the audience, even if the artists played the best they ever had. Good sound is also a key ingredient in the musicians playing well: they will generally only play the best they ever have if they sound as good as they are playing.