BCC Rapid Cycle Improvement for Chronic Kidney Disease Discussion

Unformatted Attachment Preview

Chapter 2: Audio

Overview

In this chapter we will discuss the nature of working with audio in live and recording contexts. We will describe the common components used, including input sources like microphones and electric instruments and output systems like PA speaker systems, amplifiers, and computer recording systems. Some discussion of signal types is included to help you better understand the many different kinds of cables and connectors that are used in professional audio situations. By the end of this chapter you should have a firm understanding of how to set up audio equipment to record or amplify your performance. This understanding will include the types of hardware and cabling you need, how and where to plug in the cables, and some basic recording and microphone positioning techniques.

Before We Begin

Discussing audio in the abstract may again seem like more of an exercise in science than music making. Working with audio is complicated because many different components can be involved and there are endless variations in the types of microphones, cables, and other associated equipment that can be used. Additionally, the concept of converting something we hear with our ears into something we observe with a computer can be difficult to grasp. However, at the very least, a basic understanding of the ways in which audio technology works will help you to avoid and troubleshoot common problems when capturing live performances and working with digital audio. Read this chapter slowly and try to conceptualize what is being described. There are lots of terms introduced here, and thinking through "what plugs into what" can be confusing. Take breaks and reread sections if necessary. You are encouraged to consult the resources available on the companion website for this book to help you understand the concepts introduced in this chapter.

You might ask, "What are the right settings to use for the equipment I have?" or, "Can you tell me what equipment I should buy in order to record my band or ensemble so it sounds right?" The truth is that different instruments playing different notes produce different sounds in different rooms when captured with different equipment. It's best to develop good listening skills, know what you'd like the music you're working with to sound like, know how the technology you're using works in concept, and then use the technology to facilitate the recording that you want to make. Developing a good set of "listening ears" is probably the most difficult and the most important skill to acquire, and it's certainly one of the most sought-after skills for a producer, engineer, composer, or performer. If you feel comfortable with your skills as a listener, you should be relieved, because learning how the technology works and how to work it is relatively easy once you know what your music should sound like. We'll begin by discussing the basic concept of working with audio and then describe, in some detail, many of the individual components involved.

The Signal Path

Based on our understanding of sound from Chapter 1, we know that sound refers to waves traveling through the air. When these waves "hit" a microphone, it converts their oscillations into an electrical signal. This electrical signal is transmitted through a cable that begins at the base of the microphone and travels until it eventually comes out of a speaker.
The signal will typically travel to a number of other audio devices along the way in what is commonly referred to as the signal path, signal chain, or signal flow. One signal path in a live sound situation might be as follows. (First read the numbered elements of the list, and then, once you understand the signal path, read the lettered elements.)

1. Signal source origin: a person is singing.
2. A microphone is placed in front of the person singing.
   a. The acoustic sound of the singer is converted into an electrical signal.
3. A cable connects the microphone to a mixing board.
   a. The electrical signal from the microphone travels to the mixing board through the cable.
   b. The mixing board adds some strength to the signal using a built-in preamplifier, or preamp.
4. The signal from the mixer's preamplifier is sent through a cable to a power amplifier to make the signal even stronger.
5. The signal from the power amplifier is sent through cables to one or more speakers.
   a. The speakers, by pushing air back and forth, convert the signal back into a sound that we can hear with our ears.
      i. Sometimes the amplifier is even built into the speaker cabinet.

When a signal reaches a device (like a microphone or preamp), it is called the input; when it leaves that device, it is called the output. The input (original signal) will be changed in some way before it leaves the device as output. For example, as we have noted, the microphone changes the input (sound waves) into a different output (an electrical signal).
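To make the input/output idea concrete, here is a minimal Python sketch (not from the book) that models each device in the path above as a function whose output becomes the next device's input. The device names mirror the list above, but all gain values are invented purely for illustration:

```python
# Conceptual model of a signal path: each stage takes an input signal
# and returns a changed output signal. Gain values are made up.

def microphone(sound_pressure):
    """Convert acoustic sound into a weak electrical signal."""
    return sound_pressure * 0.01      # mic output is a very small voltage

def preamp(signal, gain=50.0):
    """Strengthen the mic-level signal toward line level."""
    return signal * gain

def power_amp(signal, gain=20.0):
    """Strengthen the line-level signal enough to drive a speaker."""
    return signal * gain

def speaker(signal):
    """Convert the electrical signal back into moving air (idealized)."""
    return signal * 1.0

# The signal path: the output of each stage is the input of the next.
sound = 1.0                           # arbitrary unit of air pressure
out = speaker(power_amp(preamp(microphone(sound))))
print(out)                            # 10.0 in these made-up units
```

The nesting of the function calls is the whole point: nothing reaches the speaker except what the power amplifier hands it, and so on back up the chain.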
Figure 2.1: Signal flow from microphone to mixer to an amplifier to speakers

Numerous variations can be applied to this signal path depending on your application. For example, if you were recording, instead of a mixer the signal would likely be sent to an audio interface that converts the signal into a digital format so that it can be captured on a computer. However, at some point the signal will ultimately return to its nonelectrical state when it passes through the speakers.

Microphones

The most important part of a recording is the quality of the performance taking place in front of the microphone. However, we rely on microphones to capture this performance and accurately convert the air pressure waves (sound) to an electrical signal.

Figure 2.2: Some typical components in a signal path on a stage

Think of the microphone as a person with a really good hearing range who will listen to the sounds taking place in front of him or her and then describe them in a very articulate way (by converting them to electrical signals). As the first component in the signal path, if your microphone is not of high quality, the representation, when you listen back to the recording, will not sound the same as what you heard with your ears. There are ways to manipulate a poor recording in the studio to make it sound like a more accurate representation of the original sound source, but it's easiest and best to use the proper equipment in the proper way from the start. (We'll discuss the different types of microphones later in this chapter.)

Although quality microphones can be very expensive, as can speakers, both microphones and speakers tend to retain their value through the years and remain mostly unchanged by advancements in technology. Whereas digital devices like effects and recording technologies change rapidly, many of the microphones and speakers used in professional studios today were designed decades ago.

Keeping a Uniform Signal

Once the sound source has been converted into a signal by a microphone, we want to keep the signal clean and strong wherever it goes. To ensure this, it's necessary to understand what each piece of equipment in the signal chain (reverb unit, mixer, amplifier, stompbox, cable, adapter, etc.) is doing to the signal. Some devices, like effects, sound better placed at different "stages" in the signal path, while others, like amplifiers, absolutely need to be placed in the proper order or else your equipment can be damaged. Knowing "what goes where" can be very confusing but, in most cases, can be determined by reading the manuals that come with each piece of equipment you own. The goal of any signal path is to keep the signal moving smoothly between gear. The manual for each piece of equipment will give you some insight into the type of signal each device receives at the input stage and delivers at the output stage.

Impedance

For example, a synthesizer keyboard typically produces a type of signal strength said to be at line level. A digital device like a synthesizer contains what is essentially a computer to produce and amplify its synthesized sounds and, as such, is able to output a relatively strong signal; a line level signal is a strong signal. In comparison, there are weaker signals said to be at microphone level, or "Lo-Z," and instrument level, or "Hi-Z." With these weaker-level signals, the symbol Z refers to the concept of signal resistance, or impedance, with regard to signal strength and flow. In layman's terms: the "Hi" in Hi-Z refers to high impedance; the signal flow is "impeded," denoting a weak signal flow. Low impedance, or Lo-Z, means low resistance and denotes a stronger output.

Electric guitars, basses, and their associated accessories all typically transmit Hi-Z signals, which is why Hi-Z signals are commonly referred to as instrument level. The term instrument level distinguishes Hi-Z inputs from Lo-Z inputs, such as microphones, which, as you may have guessed, commonly transmit Lo-Z signals. For this reason, Lo-Z signals are commonly referred to as microphone level. The amount of resistance, or the "Z," is measured in ohms. Links to more technical resources are available on the companion website for this book.

Microphones and other electronic components are built to different standards and have varying input and output levels of resistance. In modern audio equipment, the impedance types are, for the most part, clearly spelled out for you. A mixer, for example, may have a number of inputs and outputs labeled "Hi-Z," "Lo-Z," or even "instrument" and "microphone." Sometimes they also have graphic icons representing keyboards, guitars, mics, amps, and so on printed on the inputs and outputs themselves.

Figure 2.3: A few microphone level and line level input and output classifications

The purpose of knowing about impedance ratings is to ensure that the signal types are matched as they enter each new device in the chain.
This might seem confusing, but it all boils down to ensuring that the signal is strong enough, but not too strong, as it enters each device in the signal path. A mismatch in impedance type can result in unpleasant distortion or a very low signal, so it's important to make sure that you have the right signal strength going to each device in the signal path. For example, some mixing boards and audio interfaces only accept either line level input or Lo-Z microphone input. Plugging a Hi-Z instrument like an electric guitar into a device that is expecting a Lo-Z or line level input could produce a very weak signal.

As you might suspect, numerous devices exist to convert from one impedance type to another. A device known as a direct box, or DI, may be used to take a Hi-Z instrument level signal at its input and convert it into a Lo-Z microphone level signal. A direct box can take a Hi-Z instrument level output, like that of an acoustic-electric guitar, and convert the signal into a Lo-Z microphone level signal that can be connected to a mixing board. Direct boxes typically also have a Thru output that allows an additional feed of the unchanged input signal to be sent to other Hi-Z inputs like that of a guitar amplifier (Figure 2.4). In this manner, the acoustic guitar can be sent to both the mixer as a Lo-Z signal and the stage amp as a Hi-Z signal.

Cables

One way to visibly note the impedance difference used by each component can be the cable type. Microphones typically use a three-prong XLR cable, while instruments typically use a ¼″ cable. However, important exceptions may include synthesizer outputs that may use ¼″ jacks for signals at the instrument level or microphone level.

XLR Cables

The three prongs at the end of an XLR cable connect to three wires inside the cable: a "ground" wire (sometimes called "earth"), a positive "hot" signal, and a negative "cold" signal, which is an inverted copy of the hot signal.

Figure 2.4: A guitar signal plugged into a DI box, simultaneously routed to a snake and a stage amp

To put it succinctly, inverting the hot signal (changing its polarity) allows the cold signal to cancel out unwanted hum and noise while boosting the desirable signal. A cable with three conductors functioning in this manner is said to be balanced. Additionally, the ground wire is useful because it can carry voltage to power certain microphones that require voltage in order to work.

Balanced and Unbalanced Signals

Balancing a signal is important because it helps reduce external noise from entering or existing in the signal path. All Lo-Z microphones with an XLR output typically produce balanced signals. Instruments like guitar and bass, with their Hi-Z outputs, are typically unbalanced and are susceptible to the same noise and interference as any unbalanced signal.

Figure 2.5: An XLR cable with the female end on left and the male end on right. (Courtesy abzee/iStock.)

However, a guitar or bass signal coming through a microphone that is placed in front of an amplifier, or through the XLR output of a direct box, is a balanced Lo-Z output; a microphone signal is balanced, and a direct box balances unbalanced instrument signals.
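The arithmetic behind balancing is easy to demonstrate. The following toy sketch (not from the book; it assumes NumPy is available, and the signal and hum values are invented) shows that when the same interference lands on both conductors, subtracting cold from hot cancels the hum while the wanted signal survives:

```python
import numpy as np

t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 5 * t)        # the wanted audio
noise = 0.3 * np.sin(2 * np.pi * 60 * t)  # hum picked up along the cable run

hot = signal + noise     # "+" conductor: signal plus interference
cold = -signal + noise   # "-" conductor: inverted signal plus the SAME interference

# Differential receiver: (s + n) - (-s + n) = 2s, so the hum cancels
received = hot - cold
print(np.allclose(received, 2 * signal))  # True: noise gone, signal doubled
```

This is why balanced lines can run the length of a venue without accumulating audible hum, while an unbalanced guitar cable picks up everything along the way.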
Balanced ¼″ Tip, Ring, and Sleeve (TRS) Cables

Like XLR cables, balanced ¼″ cables commonly transmit three things: a ground and two copies of the signal with opposite polarities. Instead of having three prongs like the XLR cable, the ¼″ cable, with its reduced size, is able to transmit the ground and signals using the parts of the cable identified as tip, ring, and sleeve. As shown in Figure 2.6, the sleeve carries the ground, while the tip and ring carry the signals. For this reason, a ¼″ cable like this is referred to as a ¼″ TRS (tip, ring, sleeve) cable. A ¼″ TRS cable is also said to be balanced.

Unbalanced Tip and Sleeve (TS) Cables

Another type of ¼″ cable is the unbalanced tip and sleeve (TS) cable. Instruments like electric guitars, bass guitars, and keyboards all typically use ¼″ output jacks to transmit the signal but don't all produce a balanced signal. Electric guitars and basses, for example, output an unbalanced signal consisting of just a tip and sleeve. As a result, a TS cable, commonly referred to as an instrument cable, is used to transmit the signal from a guitar to other devices that also accept unbalanced signals, like guitar stomp boxes and guitar amplifiers. Since a TRS cable has the same tip and ring conductors as a TS cable, in many cases it may be used if a TS cable is unavailable, but this will not make the unbalanced signal balanced; use a direct box for that. The output levels and connections of keyboards may be any combination, including balanced ¼″, XLR, or unbalanced line level. This information would be available in the manual for the instrument and is sometimes engraved directly into the device itself near the output jacks.

Figure 2.6: A balanced ¼″ TRS cable above an unbalanced ¼″ TS "instrument" cable: 1, sleeve; 2, ring; 3, tip; 4, insulating rings. (From Pedersen, 2005.)

In situations where there is a great distance between the sound source and the mixing board, it's common to use an audio snake to route audio from one location to another. "Snakes" are, in layman's terms, a bundle of cables wrapped together. Snakes usually have some combination of XLR and ¼″ TRS connections, with the "male" pronged end given as a cable and the "female" receiving end given as a jack fixed on top of a metal box.

Figure 2.7: A "12 × 4" snake with 12 female XLR jacks and 4 male XLR jacks. (From VK1LW, 2012.)

Balanced audio signals can travel greater distances than unbalanced signals without picking up noise and interference along the way. A snake helps to consolidate the number of cables that are run at great distances by essentially wrapping numerous cables into one large cable. A microphone on the stage would connect to the snake, and the snake would run the distance to the back of the venue and connect to the mixing board.

Other Cables

Many other types of cables exist, with varying connectors at the end. For example, computers and portable devices commonly use 1/8″ TRS connections through headphone jacks. Unlike ¼″ TRS cables, 1/8″ TRS ones typically send two different audio signals, referred to as channels, on the tip and ring wires. Two-channel signals like this are referred to as stereo; this can be thought of as two separate audio signals, one for the left speaker and one for the right speaker, simultaneously coming through your stereo sound system at home. Another cable used in consumer products and some audio products is the RCA connection.
This cable comes in the "white and red" type for audio devices and in the "white, red, and yellow" type for consumer TVs and video devices, sometimes referred to as a composite cable. In these cases, the white connector carries an audio channel for the left speaker, red carries an audio channel for the right speaker, and yellow carries a video signal.

Figure 2.8: An RCA cable with three wires. (Courtesy Krasyuk/iStock.)

Mixers

A mixing board, also known as a mixer, mixing console, or mixing desk, is a device that can receive multiple channels of audio through XLR or ¼″ TRS inputs. The purpose of a mixer is to allow a sound engineer to create a balanced mix of the different audio signals and send that mixed signal to speakers, recorders, or other output destinations.

Figure 2.9: A mixing board. (From Ten, 2013.)

Audio signals enter the mixer via XLR and TRS jacks on the mixer itself. Most mixers allow engineers to increase or decrease the amplitude, or signal strength, of the incoming signal using a built-in preamplifier, or preamp. The amount of boost (strengthening) or cut (weakening) applied as the signal enters is measured as gain. The goal at this point in the mixer, called the gain stage, is to ensure that the incoming signal is strong, but not too strong. A weak signal will make it difficult to use the features of the mixer for that channel, which can include effects, tone shaping, and routing the audio to different speakers. A signal that is too strong will clip, resulting in unpleasant and undesired distortion. Gain can be adjusted using the knobs on the mixer commonly labeled "gain" or "trim." Additionally, some mixers have buttons labeled "pad" or "boost" that will decrease or increase, respectively, the incoming signal by some amount.
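Clipping is easy to see numerically. In this sketch (not from the book; the gain values are invented), a digital system can only represent values up to full scale, here ±1.0, so a signal pushed past that limit has its peaks flattened into a harsher, squared-off wave:

```python
import numpy as np

t = np.linspace(0, 1, 44100, endpoint=False)
signal = np.sin(2 * np.pi * 440 * t)   # a 440 Hz tone at a safe level

too_hot = 2.5 * signal                 # too much gain at the gain stage
clipped = np.clip(too_hot, -1.0, 1.0)  # everything beyond full scale is flattened

print(clipped.max(), clipped.min())    # 1.0 -1.0: the peaks are now squared off
```

The flattened peaks add frequency content that was never in the original tone, which is exactly the "unpleasant and undesired distortion" described above.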
Past the gain stage, mixers vary in terms of their controls and features. Some mixers will have sound effect processors and "tone-shaping" controls built into the board. All will have at least one volume slider, called a fader, or rotary knob that controls the level of the signal that will be sent to the main speaker outputs facing the audience. Some mixers will have the option to send the signal to additional output destinations, such as smaller monitors on the stage that the performers use to hear the mix, in-ear monitors, recorders, or other places. The knobs on the mixer that allow a signal to be sent to these other locations are generally labeled Aux (Aux 1, Aux 2, etc.), short for "auxiliary."

Imagine that a rock band is performing on a stage. The audio engineer will balance the individual instrument levels coming into the soundboard using the volume sliders. Suppose that the singer wants to hear his or her voice louder on the stage but is overpowered by the level of the instruments. Placing a small speaker on stage, referred to as a stage monitor, will allow the engineer to send a special mix that is different from the main mix in that only the singer's microphone signal is present. The special mix comes from adjusting the levels of the Aux knobs and routing the Aux output from the mixer to that speaker through a cable. A singer might then ask the sound engineer to add a little bit of the guitar or keyboard signal to the stage monitor/aux mix.

The overall number of aux mixes available is determined by the size of the mixer. The available features vary from mixer to mixer and may include the ability to group similar channels together onto a single channel, called "subgroups" or "submixes," add internal effects and processing, route signals to and from "outboard" external effects processors, and more. A mixer that is said to be 16 × 4 has 16 input channels and 4 subgroups. If there are four members in the band, each member can have his or her own mix routed to individual stage monitors, in-ear monitors, and so on using an aux send or a subgroup.

Did You Know? Performing Live without Sophisticated Sound Equipment

Imagine playing with your band on stage and not being able to hear the other instruments clearly. This was the case for most bands in the '60s. The screaming fans and the general noise made it impossible for most groups to hear themselves. The Beatles, for example, took to miming their vocals and guitar movements because neither they, nor their fans, could hear what they were doing anyway! Stage amplification was primitive, and the sound that the audience heard was far from ideal. Later bands like the Grateful Dead took to carting their own sound equipment and employing their own technicians and engineers in order to ensure the sound quality of their shows was up to their standards.

Go to the Software Lesson: Mixer

In the FMT application, the Mixer lesson shows a simulated view of a soundboard. The application allows you to play back 5 independent channels of audio and work with them as you would with a hardware mixer.

At the orange circled 1:
• Choose an output destination. Remember that the main outputs are what the audience hears, whereas the aux mixes are what the musicians hear through stage monitors and personal in-ear monitoring systems. The first aux, Aux 1, is an example of a pre-fader aux, in which the signal sent to the aux is unaffected by the volume fader on the channel.

At the orange circle 2:
• Press play to begin playing the 5 tracks. Each of the individual instrument tracks, or stems, has been included for this song. Note that not all of the tracks will sound simultaneously.

As the music plays:
• Adjust the volume sliders, known as faders, to increase signal strength.
• Adjust the aux knobs on various channels to increase the level of signal strength sent to the auxes, then change the output destination to observe the balance of the auxes compared to the main mix. It is recommended that you lower the level of all instruments as they play simultaneously and slowly match the relative balance between each instrument using your ears. It is often preferred to match the levels of the bass and drums first before adding other instruments.
• Adjust the preamp gain, pan, and other elements carefully so as not to hurt your ears through loud bursts of sound.

The U stands for unity gain, the level of gain at which no volume is being added to or taken away from the signal. The -inf stands for negative infinity decibels, which means "no sound." These levels can be recalled by clicking on the letters U and -inf within the software.
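The relationship between a fader's decibel markings and the actual multiplier applied to the signal follows the standard amplitude convention (gain = 10^(dB/20)). The FMT software's exact fader law isn't specified here, so treat this Python sketch as a general illustration of why "U" means unchanged and "-inf" means silence:

```python
import math

def fader_gain(db):
    """Convert a fader position in decibels to a linear amplitude multiplier.
    0 dB is unity gain (signal unchanged); -inf dB is silence."""
    if db == float("-inf"):
        return 0.0
    return 10 ** (db / 20)   # standard amplitude/decibel relationship

print(fader_gain(0.0))              # 1.0   -> unity gain ("U")
print(fader_gain(-6.0))             # ~0.5  -> roughly half the amplitude
print(fader_gain(float("-inf")))    # 0.0   -> "-inf", no sound
```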
If you have access to other multitrack stems, you may use them within this software lesson by dragging and dropping the audio files onto the track header or label beneath each slider where "Input Channel" is specified. Bands and artists will sometimes release songs as stems or multitracks in order to encourage remixes, and these audio file collections would be ideal for this activity.

Did You Know? Feedback

It is important to note that if a microphone is placed too close to a speaker that is outputting that microphone's signal, it will produce a piercing sound known as feedback. In essence, the speaker is amplifying its own signal being picked up through the microphone. To prevent feedback, keep sound levels moderate if the microphone is being used in close proximity to a speaker, or simply move the main speakers in front of the singers, facing the audience. Be aware that stage monitors facing the musicians are also susceptible to feedback and that engineers in the back of the room may be unaware that feedback is starting on stage until it becomes noticeable to the audience.

Audio Interfaces

Instead of using a mixing board, you might just want to record the live signals directly into your computer. To do so, you would use an audio interface, also known as a sound card or recording interface, to convert the audio signal into a digital format so that a computer can understand it. This conversion is said to be from "analog to digital," or A to D. In this sense, the word "analog" refers to the electrical signal captured by the microphone travelling in the signal flow, which is "analogous" to, or a representation of, the sound source originally captured by the microphone. The word "digital" refers to the conversion of the signal into numbers that the computer can understand.

Figure 2.10: A USB digital audio interface for recording. (From Kamilbaranski, 2010.)

In digital recordings, sampling refers to the rate at which an audio signal (for example, a violinist sustaining a note, which is a continuous flow of audio) is broken down into "discrete" and evenly spaced signals. Converting audio we hear with our ears into a series of digits means that each of the discrete signals is given a numerical value to represent the continuous signal that was broken down. Each of these values is referred to as a sample.

Did You Know? Discrete Signals

Consider a flip-book drawing or an animation in which a series of still images are grouped together and shown sequentially. The number of images used and the rate at which they are shown will influence the way the audience experiences a sense of flow. If there are too few images, the animation will appear choppy. Increasing the number of images will help ensure that there are no gaps in representing the flow of motion in the animation. If there are many images but the rate at which they are shown is too slow (i.e., the person flipping the flip-book is doing a poor job), the animation will, again, feel wrong. Additionally, if the transitions between the motions of the characters in the animation are not gradual, there will be breaks in the flow. To this end, there needs to be a large number of high-quality individual images, drawn with gradual transitions in the animated character motions, shown at a rate that feels natural to the way we experience motion in real life. In the same regard, we capture or "sample" audio at a quality or "resolution" that captures as many audio frequencies as we can, at a very fast rate (many times a second), so that when we hear the audio played back it sounds as close as possible to the way we hear sounds in real life.
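The flip-book analogy maps directly onto code. This short sketch (not from the book; it assumes NumPy) takes evenly spaced "snapshots" of a continuous tone, exactly the way a sound card does, and confirms that one second of audio at 44,100 samples per second yields 44,100 numbers:

```python
import numpy as np

sample_rate = 44100          # samples ("snapshots") per second
duration = 1.0               # seconds
frequency = 440.0            # e.g., a violinist sustaining an A

# Evenly spaced sample times, then one numerical value per time point
t = np.arange(int(sample_rate * duration)) / sample_rate
samples = np.sin(2 * np.pi * frequency * t)

print(len(samples))          # 44100 discrete values for 1 second of audio
```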
There are different numerical rates at which someone can sample audio. For example, many computer sound cards can record 44,100 samples per second. This means that for a 1-second recording of a violinist sustaining a note, there are 44,100 samples representing that 1 second of continuous audio. Some more advanced sound cards have the ability to use more samples per second in the sampling process.

Higher Sampling Rates

There is much debate as to whether humans can perceive a noticeable increase in audio quality with higher sample rates. Remember, the human hearing range stops at around 20 kHz. The default sampling rate for many devices is 44.1 kHz, a little more than twice the highest frequency that our ears can hear. According to what is known as the Nyquist–Shannon sampling theorem, audio must be sampled at a rate that is at least two times that of the highest frequency that is to be present in the audio source. Sampling at 44.1 kHz therefore accurately captures frequencies up to 22.05 kHz. Compact discs play back audio at 44.1 kHz, so recordings made with a higher sampling rate like 96 or 192 kHz must be downsampled to this rate if they are intended to be put on CD. Some feel that recording audio at sample rates higher than 44.1 kHz captures a lot of information that humans can't perceive. Is this recording practice a waste of hard drive space? Some might contend that audio interface manufacturers use these advanced sampling capabilities as a marketing ploy.
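The theorem also explains what goes wrong when a frequency above half the sampling rate sneaks in: it "folds back" and masquerades as a lower frequency, a phenomenon called aliasing. The following sketch (not from the book; it assumes NumPy) shows that at 44.1 kHz, the samples of a 30,000 Hz tone are indistinguishable (up to polarity) from those of a 14,100 Hz tone:

```python
import numpy as np

sample_rate = 44100
nyquist = sample_rate / 2
print(nyquist)               # 22050.0 Hz: the highest frequency captured faithfully

# A tone above the Nyquist frequency aliases down to (sample_rate - f)
t = np.arange(1000) / sample_rate
high = np.sin(2 * np.pi * 30000 * t)                    # 30,000 Hz tone
alias = np.sin(2 * np.pi * (sample_rate - 30000) * t)   # 14,100 Hz tone

# The two sets of samples coincide (with the sign flipped), so once sampled,
# the 30 kHz tone is indistinguishable from a 14.1 kHz one.
print(np.allclose(high, -alias))                        # True
```

This is why converters filter out content above the Nyquist frequency before sampling, rather than letting it fold back into the audible range.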
Compressed Audio

Audio formats like MP3 use methods of compression in order to reduce the file size by sacrificing some of the audio quality. If you've ever been to a website like YouTube, you've noticed that higher-quality videos look and sound better but take longer to load. This is also true for compressed and uncompressed audio. Compression as an audio effect (a different notion altogether) will be discussed in Chapter 9, but in this capacity we can think of it as file compression, a process that reduces the quality of the audio in order to achieve a smaller file size. A 5-minute uncompressed WAV or AIF audio file may be about 50 megabytes in size or greater, whereas a compressed MP3 audio file may be only about 5 megabytes in size with little perceived difference in quality. Some parts of the audio can be compressed or completely thrown away if they are outside the range of what a human being can actually hear or perceive. The "throwaway" part can be part of the overall dynamic range of the recording, the full-spectrum range (content above the frequency range of what our ears can actually hear), or more. In this manner, MP3 compression is referred to as lossy compression, because audio content has been removed from or modified within the audio file and cannot be retrieved. File sizes are decreased with, ideally, little or no perceived loss of quality depending on the type of compression used, making such files suitable for digital transmission on websites and by other means, compared to high-quality audio that is larger in file size. By contrast, a zip file is a lossless type of compression, in which files are identical prior to compression and once they are uncompressed.

Did You Know? Perception of Quality

Vast numbers of perception studies have been conducted seeking to determine whether the average person can tell the difference between a high-resolution uncompressed audio file, a CD-quality audio file, and a compressed MP3 audio file of the same piece. You can easily conduct a similar study with your friends. As we will discuss in Chapter 8, the field of psychoacoustics deals with these types of issues directly, among other concepts. Being able to detect the difference may seem obvious, but to what extent have online video sites, low-quality speakers in restaurants, and built-in speakers in portable electronic devices conditioned our ears to accept lower-fidelity audio as an acceptable standard of quality? Additionally, would a poor-quality instrument timbre recorded at 44.1k sound noticeably "better" if it was recorded at 192k? Can you notice the difference between a high-quality MP3 and an uncompressed format? An AAC, FLAC, OGG, or other compression format? The discussion continues.

Bit Depth and Sampling Rate

The sampling rate is different from the bit depth, which refers to the number of binary digits, or bits for short, that can be used to represent the data being recorded. As you may know, "binary code" refers to a counting system using only the numbers 1 and 0, where 1 represents that something is "on" and 0 represents that something is "off." Think about this in terms of your light switch having two states: the switch is on, state one; the switch is off, state zero. The more 1s and 0s you string together, the longer the number will be and the more states you will be able to represent. For example, the 16-character binary "word" 1011101011110001 can represent many more "on/off" states (65,536 possible combinations of 0 and 1) than the 4-character word 1011 with its 16 possible states, and certainly more than a 1-character word with only two states: 0 or 1. A longer binary word can better represent something than a shorter one simply because it has more numbers to work with. When a binary word is 16 characters long, it is said to be 16-bit; 32 characters is 32-bit, and so on. A 1-bit word has only one character and two possible states: 0 or 1. A 2-bit word has two characters and 4 possible combinations of 1s and 0s. An 8-bit word has 8 characters and can represent 256 possible values. Recording something at 24-bit means that you are using more numbers to digitally represent the amplitude fluctuations of the recorded analog signal than if you recorded it at 16-bit. The sampling rate, as described earlier, refers to how frequently you take "snapshots" (samples) of a continuous audio signal. Audio interfaces are compared to each other according to the quality of their A-to-D converters. In short: if you have the equipment necessary to record at a higher bit depth, do so!
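Both ideas, and the uncompressed file sizes quoted in the compression section above, come straight out of the same arithmetic. A quick sketch (not from the book):

```python
# Each extra bit doubles the number of amplitude values available
# to describe a sample:
for bits in (1, 2, 8, 16, 24):
    print(bits, "bits ->", 2 ** bits, "possible values")
# 16 bits -> 65536 values; 24 bits -> 16777216 values

# The same numbers explain uncompressed file sizes. One minute of
# CD-quality stereo audio: 44,100 samples/s x 2 bytes (16 bits) x
# 2 channels x 60 seconds:
bytes_per_minute = 44100 * 2 * 2 * 60
print(bytes_per_minute / 1e6, "MB")   # about 10.6 MB per minute
```

At roughly 10.6 MB per minute, a 5-minute stereo WAV lands near the 50 MB figure mentioned earlier, which is exactly the bulk that MP3 compression is designed to shrink.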
Audio Interface I/Os

Like mixers, audio interfaces have a number of input channels, commonly with XLR and ¼″ connections; some are even shaped like mixers or double as both live mixers and audio interfaces. The audio interface will probably also have built-in preamps, but that is where the similarities to the mixing board end. Once the signal is preamped, it is converted to digital and sent to a connected computer via some digital connection (typically USB or FireWire). Given the popularity of the electric guitar, it's not uncommon for one of the input channels on the sound card to accept a Hi-Z "instrument level" input signal so that a guitarist can connect directly to the audio interface using a standard ¼″ instrument cable.

Computers, especially portable computers, typically have built-in (sometimes called "on-board") sound cards that allow 1/8″ stereo cables to be plugged into the computer as a line-in recording input, in addition to microphones that are also built in. These built-in microphones, though convenient, are of lesser quality than would typically be used on a serious audio recording.

Digital Audio Workstations

A computer communicates with an audio interface using recording/editing/playback software programs. These programs are referred to as digital audio workstations, or DAWs. Many DAWs are used for recording, including Pro Tools, Logic, GarageBand, Audacity, Cubase, FL Studio, and Live. Though they differ in terms of their appearance on the screen and some functionality, they are all able to receive digital audio from an audio interface and play it back in some manner through speakers. From within the DAW, the audio interfaces connected to your computer and their input channels will be accessible from the software. Clicking the Record button from within the software will allow signals received at the inputs of the audio interfaces to be imported digitally in real time within the DAW. In Chapter 3 we will examine the common features available when working with audio in a DAW.

Did You Know? When Should You Convert the Analog Signal to Digital?

Once the signal is digital, the only limit to the amount of processing that can be done to the sound source is determined by what the computer's processing power can handle without crashing and by what the DAW software allows; literally millions of audio effects can be added, analyses can be calculated, and so on, all in real time. For this reason, some recording engineers prefer to convert the signal to digital as early in the signal flow as possible. The idea is to capture the live performance, convert it to the digital domain immediately, and then use the computer to handle all of the effects processing. Eventually, the digital signal is converted back to an analog signal when it is sent from the sound card to the speakers. An engineer following this approach might even use a digital snake (like the snake described earlier, but carrying analog signals that have been converted into digital signals), if necessary, to take multiple inputs from a distant location and route them directly into the recording DAW being used. Another school of thought is to get the analog signal sounding "just right" and then simply record what is heard into the digital domain at the last stage of the signal flow. Some engineers prefer to record the sound of a choir in a reverberant room, while others prefer to record it in a "dry" room and add digital reverb within a DAW. An engineer may run a vocal track through a dozen external hardware vocal effects processors before the signal ever reaches the recording console. The advantage of the former approach is that, for example, the amount and type of reverb can be changed freely in software, whereas reverb cannot be removed, only masked, if it is already present on a recording.

Go to the Software Lesson: Audio Recorder

Using the FMT companion software, choose the lesson Audio Recorder. If necessary, slowly adjust the main volume slider at the top left of the software. To begin, let's demonstrate the basic function of all DAWs:
• Click the play button near the orange circle 4, or press the space bar.
Notice that the red playback line scrolls through the track. All DAWs operate in this fashion: a playback line moves forward at some tempo, playing back anything in its path. Currently, there's nothing on this track, so let's record something. First:
• Ensure that the microphone input channel from your sound card or audio interface is selected in the menu near the orange circle 1; for notebook computers, this is probably input channel 1 or 2.

Next:
• Make sure the track is "armed" by observing the indicator near the orange circle 2; DAWs use track "arming" as a way to distinguish tracks that will be recorded onto from tracks that will just play back. If the track is armed, you should be able to see activity in the meter above the "arm" button. Make sure that your input is not so loud that it clips.

Finally:
• Press the record button near the orange circle 3. This will allow you to record on this track for 10 seconds. After 10 seconds, recording will automatically stop.

After recording:
• Press the play button again by clicking near orange circle 4 or by pressing the spacebar.

Consider the simplicity of the steps above, and understand that most DAWs are actually no different. What makes an unfamiliar DAW intimidating is the layout of the controls and other elements related to its operation, not the basic concepts of setting input channels, arming tracks, and clicking the master record button.

Speakers and Amplifiers

When the signal leaves the mixer, it is sent to speakers at the front of the stage, facing the audience. The speakers are the final stop in the signal path. The electrical signal causes the speaker to move, pushing air back and forth, which reproduces the sound. The signal being sent to the speakers must first be increased using an amplifier. Powered speakers, also known as active speakers, have amplifiers built into the speaker cabinet, while passive speakers require the signal to first be sent to a dedicated power amplifier before connecting to the speaker.

Speakers have the difficult task of authentically reproducing the frequencies captured from the original sound source at the other end of the signal path. Instead of one single speaker having to represent all of these frequencies, the frequency range is distributed so that certain groups of frequencies, referred to as bands, are sent to speakers that are better suited to reproduce them accurately. For example, a subwoofer is a dedicated speaker used to play back low frequencies. When the signal from the mixer hits the amplifier in the speakers, a component known as a crossover filters out the low-frequency content below a certain specified frequency (in Hz) and sends it to the subwoofer, which is designed to reproduce these low frequencies accurately. Even though speakers might be rated as "full range," meaning that they can accurately reproduce all of the audible frequency content that is sent to them, a subwoofer helps to ease some of this responsibility by handling the lowest frequencies. In general, a good-sounding speaker setup, be it a PA, an instrument amplifier, or a home theater, will faithfully represent the full frequency range of the audio content being sent through it. Poor-quality speakers may have "gaps" or "holes" in the playback where bands of frequencies are attenuated, or lost, due to the speakers' inability to accurately reproduce those frequencies.
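The crossover's job of splitting a signal into bands can be sketched with a very simple filter. Real crossovers use much steeper analog or DSP filters than the one-pole low-pass below, and the 100 Hz split point is an invented example, so treat this Python/NumPy sketch purely as a conceptual stand-in:

```python
import numpy as np

def one_pole_lowpass(signal, cutoff_hz, sample_rate=44100):
    """Toy one-pole low-pass filter: lets lows through, attenuates highs.
    A stand-in for the low-pass half of a crossover, not a real design."""
    dt = 1.0 / sample_rate
    rc = 1.0 / (2 * np.pi * cutoff_hz)
    alpha = dt / (rc + dt)
    out = np.zeros_like(signal)
    acc = 0.0
    for i, x in enumerate(signal):
        acc += alpha * (x - acc)   # smooth the signal: fast wiggles average out
        out[i] = acc
    return out

# A mix containing a 50 Hz bass component and a 2 kHz component
t = np.arange(4410) / 44100
mix = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 2000 * t)

to_subwoofer = one_pole_lowpass(mix, cutoff_hz=100)  # lows go to the sub
to_mains = mix - to_subwoofer                        # the remainder goes to the mains
```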
A typical "house public address (PA)," or "front of house (FOH)," sound system involves at least two main speakers. If the sound system is set up so that each of the main speakers is capable of playing back different audio content, the system is said to be in "stereo." If the same audio content comes out of both speakers, the system is said to be in "mono." Running a PA in stereo can improve clarity in the overall sound. Imagine a band with two guitarists. If each guitarist's signal comes primarily through only one of the two main speakers, the listener will likely experience some sense of spatiality in the mix, whereas if the same audio content comes from each speaker, the individual guitar parts might become hard to distinguish. The downside of using a stereo configuration is that some people seated on the right side of the venue might not get the same sound experience as the people on the left side. It's not uncommon for large venues to have multiple stereo pairs of speakers placed in clusters around the hall.

Multi-channel speaker configurations are also popular in theaters. For example, a configuration with three speakers in front of the audience (left, center, and right), two speakers behind the audience (rear left and rear right), and one subwoofer is known as a "5.1" configuration; the "5" indicates the number of speakers and the "1" indicates the number of subwoofers. A "10.2" configuration would mean 10 speakers and 2 subwoofers. Note that a subwoofer can typically be placed anywhere in the room, since the low-frequency waves it produces are not as easily blocked or absorbed by people in the venue as other frequencies are.

Microphone Types and Basic Placement

Microphones, or mics, come in many different varieties. In concept, they are all the same: they capture sounds as they travel through the air and convert them into electrical signals. Microphones are labeled as being able to capture sounds within some specified range of frequencies and are designed to have different directional patterns by which they pick up sound. For example, some microphones are said to be omnidirectional, meaning that they pick up sound in all directions, which can be useful with choirs or for recording sounds in nature. Unidirectional microphones record in one pattern and are ideal for use directly in front of a sound source. Some microphones have the ability to switch the directional pattern and will often depict the "polar pattern" in a graphic within the manual. An engineer should consider the sound source being recorded and choose the ideal pattern; for example, if you're recording a vocalist, it's probably not necessary to use an omnidirectional microphone pattern.

Dynamic microphones are popular for use in live and studio applications. In practice, these mics are best placed close to the sound source being mic'd so as to prevent other sounds from "bleeding" into the mic. Dynamic microphones are known to increase the volume of the bass frequencies when placed in close proximity to the sound source. For example, a vocalist who puts the microphone very close to, even touching, his or her mouth while singing will produce a warmer, bassier tone on the receiving end of the mic line, even if his or her tone is actually somewhat thin. Windscreens are soft foam covers used on mics to filter pops and other unwanted noises before they pass through to the mic.
This is especially useful when mic'ing singers, for whom plosive consonants often cause the signal level to clip in the microphone. For example, words beginning with the letter P tend to cause a pop that can be softened with the use of a windscreen.

Figure 2.11: The popular Shure SM58 dynamic mic. (From Fergusson, 2006b.)

Wireless microphones are typically just dynamic microphones with an additional wireless transmitter component through which the signal is sent via the airwaves to a receiver connected to the mixer. Because of the inherent loss of signal quality during the wireless transmission, wireless microphones are not used in studio situations, but they are common in live performance situations where such quality loss is less noticeable.

Condenser microphones are another type of microphone, ideal for live and studio situations. Unlike dynamic microphones, condenser mics are much more sensitive to sound and, as a rule of thumb, should not be placed in any location where you wouldn't feel comfortable putting or leaving your ears; for example, not directly on a kick drum or guitar cabinet. Instead, a good starting place for a condenser microphone is some distance away from the sound source, at the spot where it is agreed the sound is most pleasant to the ears. Unlike dynamic mics, condenser mics require power in order to operate. For this reason, most mixers and audio interfaces are equipped with a "48 V" switch, referred to as "phantom power," which sends power to the microphone.

Figure 2.12: An AKG C414 condenser microphone. (From Fergusson, 2006a.)

Microphone Placement

There are numerous techniques for and approaches to achieving great-sounding audio recordings and live mixes, much of which is contextual. Proper microphone placement can be approached scientifically, by observing the construction of the room and accounting for the ways in which the room design will change the sound source. However, there is also something to be said for the real-world experimentation with mic placement that tends to happen "on the fly." For example, the balance of the ensemble you're recording may be so poor that it's better to mic each individual instrument, called "spot mic'ing," and rely on the engineer to balance the mix using a mixer or from within the computer DAW. On the other hand, the ensemble balance may be so perfect that just two mics are needed to capture the live mix of a performance. Sometimes it's nice to use a combination of both approaches, in which an overall balanced mix is mic'ed while each individual instrument, or group of instruments, is additionally spot mic'ed in case it needs to be boosted in the mix.

One popular technique for capturing a balanced mix is to use two identical (referred to as "matched") condenser microphones placed near each other in order to simulate the way our ears capture a single sound source. The mics are usually placed very high in the room using microphone stands and are positioned so that the mics themselves are at angles facing each other, almost touching, in what is known as an XY pattern. In concept, the sound arrives at each of the microphones at the same time from different angles.
Typically, on the receiving side (i.e., at the mixer or the DAW), each respective mic signal would then be sent to either the left or the right speaker to represent the manner in which the audio was captured with a "left" and "right" microphone.

Figure 2.13: The X/Y microphone position. (From Fergusson, 2007.)

Panning

Adjusting the relative volume that an audio signal has in a particular speaker is called panning. An audio track can be "panned" left to make it more present in the left speaker, panned center with an even distribution of the signal in both speakers, or panned right. Panning a signal all the way to the left speaker, for example, with no representation of the signal in the right speaker, is called panning the signal "hard left." Panning "hard left" and "hard right" is common in simple stereo recording situations like the XY pattern recording technique that we just discussed.
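Under the hood, a pan knob is just a pair of gain values, one per speaker. Mixers and DAWs differ in the exact pan law they apply, so the constant-power curve below is one common choice rather than a universal rule; the sketch (not from the book) maps a pan position to left and right gains:

```python
import math

def constant_power_pan(pan):
    """Return (left_gain, right_gain) for a pan position in [-1, 1],
    where -1 is hard left, 0 is center, and +1 is hard right.
    Constant-power law: total perceived loudness stays roughly even."""
    angle = (pan + 1) * math.pi / 4     # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

print(constant_power_pan(-1))   # (1.0, 0.0)         hard left
print(constant_power_pan(0))    # (~0.707, ~0.707)   center, equal power
print(constant_power_pan(1))    # (~0.0, 1.0)        hard right
```

Note that center is not 0.5/0.5: the ~0.707 values keep the summed acoustic power constant as the signal sweeps across the stereo field.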
Other Technologies and Techniques

There are various mic'ing techniques in existence for recording and live situations, many of which are discussed in online forums and professional musician publications. Oftentimes, new techniques come about through experimentation. Even if you only have one microphone, you might be surprised to learn that there are specialized techniques and approaches to mic'ing drum sets, ensembles, choirs, and more. An engineer recording an electric guitar 4 × 12 cabinet might place two mics against the speaker (one offset from the amp logo, one placed behind it), or more mics 25 feet away, and so on, and then mix these tracks to create a unique sound that is, perhaps, even better than the original. Use your ears as your guide.

Did You Know? USB Microphones

Some microphones marketed as "USB microphones" are part microphone and part audio interface in a single device. It is important to remember that although the cost of these hybrid microphones may seem like a great savings over equally priced microphones without USB connectivity, you are paying for both a microphone and an audio interface. For example, the quality of the actual microphone part of a $600 USB microphone might not be the same as that of a standard $600 microphone. The same is typically true of anything labeled "portable," "compact," or "mobile"; the convenience of something smaller often comes with a higher price tag than the larger version, without an increase in quality.

A pickup, found in electric guitars and other electroacoustic instruments, is a small transducer used to convert the acoustic sound of a host instrument into an electrical signal. The source signal is "picked up" through vibration of the instrument, although the actual mechanism will vary depending on the type of pickup being used. In electric guitars, magnetic pickups are installed inside the body of the guitar and, depending on the location of the pickup beneath the strings, will produce a different timbre given the way the strings resonate at that location. There are also guitar pickups that convert the analog signal to digital just as USB microphones do.

Figure 2.14: Electric guitar pickups. (Courtesy MarianoR/iStock.)

Many types of audio equipment exist, and it is easier to think about each component in terms of "what you need" as opposed to "what is available." A choir needs a good set of stereo microphones, a small PA or recording system, and likely little else. A 19-piece jazz fusion ensemble with electronic instruments may need much more: direct boxes, microphones, amplifiers, XLR cables, ¼″ cables, stage monitors, in-ear monitors, effects processors, and so on. Consider the music you like and the types of music you intend to record and mix, and determine the types of equipment you may need to obtain. Do you really need to mic every instrument in your drum kit, or will a few mics positioned properly do the trick for your purposes? To what extent does the technology facilitate your live sound? Does it dictate it? Does it make it possible? Does it make it better or worse?

Summary

This chapter contains a broad spectrum of information regarding the types of equipment used in audio production. In general, the goal in using this equipment is to keep the musical material at the start of the signal chain "pure" right through the entire path until it comes out of the speakers or goes into the computer. In order to preserve the signal's integrity, we need to understand the nature of the signal being transmitted or recorded in terms of balanced and unbalanced signals, and so on; many of the devices and cable types differ with regard to how they can help you preserve the signal without introducing artifacts that would degrade the signal quality. Once you are working with high-quality audio in a "clean" signal path that is free of "hum," "buzzes," "hiss," and other annoying sounds, you are free to mix and record your musical material in any way you see fit.

Key Concepts

• It is important to keep the signal path uniform.
• You can make better recordings if you understand the various cable connector types, including XLR and ¼″.
• Dynamic microphones are good for mic'ing loud sounds up close.
• Condenser microphones are sensitive and shouldn't go directly against or near loud sounds.
• Mixers receive multiple channels of audio.
• Audio interfaces allow for analog-to-digital conversion.
• Bit depth and sampling rate relate to the conversion from an analog signal to a digital one and the way the signal is captured.

Key Terms

1/8″ TRS cables, analog-to-digital (A-to-D) conversion, audio interface/sound card/recording interface, audio snake, band (frequency), bit, bit depth, boost, channel, clip, compression, condenser microphone, crossover, cut, digital audio workstation (DAW), direct box/DI, dynamic microphone, feedback, gain, gain stage, Hi-Z (high impedance), -inf (negative infinity decibels), input, instrument level signal, line level signal, lossy/lossless compression, Lo-Z (low impedance), mic (microphone), microphone level signal, mixing board/mixer/mixing console, mono, omnidirectional microphone, output, panning, pickup, power amplifier, preamplifier/preamp, RCA connection/cable, resistance/impedance, sample, sampling rate, signal, signal path/signal chain/signal flow, speakers, stage monitor, stems, stereo, Thru output, tracks, TRS (tip, ring, sleeve) ¼″ cable, TS (tip and sleeve) ¼″ cable/instrument cable, U/unity gain, unidirectional microphone, XLR cable, Z (symbol)

Explanation & Answer


Sound
The 44,100 Hz (44.1 kHz) sampling rate has been the recording standard for a long time, for several reasons. Given the limits of human hearing and the technology we have, I think it is an appropriate default sampling rate. Early formats such as the compact disc could only play back audio sampled at 44.1 kHz. Over time, many technological advancements were made and the 48 kHz sampling rate was introduced; however, means of converting such recordings back down to 44.1 kHz were also developed. The human hearing range spans roughly 20 Hz to 20,000 Hz (Prado et al., 2015), and according to the Nyquist–Shannon sampling theorem, audio must be sampled at more than twice the highest frequency that is to be captured. A rate of 44.1 kHz is just above twice the maximum frequency we can hear, so it allows the full audible range to be recorded and listened to.
Sampling below this rate discards some of the high-frequency content produced by the instruments, so sound quality is reduced. This is similar to what happens when lossy compression is applied. It is also vital to note that ...

