SWU Instruments Week Four Terms 2019
Terms in this set (40)
A type of connection found on guitars equipped with hexaphonic or "divided" pickup systems, specifically those compatible with Roland equipment. The 13-pin output of the guitar can be connected to a guitar-to-MIDI converter, a guitar synthesizer, or a guitar modeling system. The 13-pin connector carries an individual output for each of the six strings from the hex pickup, the signal from the guitar's regular pickups, several control signals, and "phantom" power (assuming the connected device can provide it). Note that the 13-pin connection itself does not carry MIDI information. An ancillary box must be used to convert the analog signals created by the hex pickup into MIDI note and control data.
In acoustics (as opposed to paper towels), the opposite of reflection. Sound waves are "absorbed" or soaked up by soft materials they encounter. Studio designers put this fact to work to control the problem of reflections coming back to the engineer's ear and interfering with the primary audio coming from the monitors. The absorptive capabilities of various materials are rated with an "Absorption Coefficient," which is a measure of the relative amount of sound energy absorbed by that material when a sound strikes its surface. (See also WFTD "Anechoic")
Acoustically treating a room is necessary in audio production because very few "spaces" have the physical qualities that make for accurate monitoring or desirable recording. There are many things that can be done to a space before and during construction to optimize its acoustic behavior, including the shape of the space, its isolation, and the surface materials. Once a room is already constructed, acoustic treatment mostly consists of treating the surfaces. There are two primary elements to consider: absorption and diffusion. Acoustic foam is well suited to alleviate slap and flutter echo, the two most common problems in rooms not specifically designed for music recording and performance. In fact, foam can turn even the most cavernous warehouse or gymnasium into a suitable acoustic environment. Diffusion keeps sound waves from grouping, so there are no hot spots or nulls in a room. In conjunction with absorption, diffusion can effectively turn virtually any space into one that is appropriate and useful for the purpose of recording or monitoring sound with a high degree of accuracy.
A device, piece of software, or plug-in that emulates the coloration added to a signal by an amplifier, particularly an instrument amplifier such as those used with guitars and bass guitars. Amplifier simulators will typically emulate the effects of an amp's preamp section, power amp section, any effects (such as built-in spring reverb), and the connected speaker and its enclosure or cabinet. Convolution or modeling is often used to generate the most accurate emulation.
This term is used to describe an image on a TV or movie theater screen, and is defined as the width of the image divided by the height. In the case of a standard TV with a full-screen image, it is 4:3, or 1.33:1 once the division is calculated. Movie theater images are usually 1.85:1 or 2.35:1, sometimes called "widescreen" or "letterbox." When widescreen images are shown on a regular TV in their original aspect ratio, they leave a blank area at the top and bottom of the screen.
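As a quick illustrative sketch of the division described above (the function name is ours, not from the text):

```python
def aspect_ratio(width: float, height: float) -> float:
    """Aspect ratio = image width divided by image height."""
    return width / height

# Standard-definition TV: 4 units wide by 3 units high
assert round(aspect_ratio(4, 3), 2) == 1.33
```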
A device used to help acoustically tune a room. Enclosed spaces all have resonant frequencies based upon the various dimensions of the space. As a room becomes energized with sound certain frequencies will build up or be cancelled at various locations around the room based upon its shape and dimensions. A bass trap is a low frequency sound absorber used to reduce the effects of standing waves in a room. They are usually placed in corners or along wall joints where low frequency energy tends to build up. The absorption qualities of bass traps prevent low frequencies from interfering with each other throughout the rest of the room, which results in much more accurate response in the listening area. Bass traps come in many shapes and sizes and employ a variety of construction techniques. Some are tuned to kill a narrow band of frequencies while others are designed to cover a broad range.
Abbreviation for Beats Per Minute, it is the standard way in which musical tempos are denoted, especially for use in electronic music composition tools like sequencers. 120 BPM means that in one minute there will be 120 musical beats regardless of any other variables such as time signature.
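Because BPM fixes the number of beats per minute regardless of other variables, the duration of one beat follows directly. A minimal sketch (function name is illustrative):

```python
def seconds_per_beat(bpm: float) -> float:
    """At a given tempo, each beat lasts 60/BPM seconds."""
    return 60.0 / bpm

assert seconds_per_beat(120) == 0.5  # 120 BPM -> one beat every half second
```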
Breath controller (a.k.a. controller number 2) is a MIDI continuous controller command set aside for parameters lending themselves to breath control. To fully understand why something seemingly this obscure has a designated controller number, one has to go back to the early days of MIDI, when the Yamaha DX-7 came out. The DX-7 utilized a breath control device to add realism to certain types of sounds such as brass and woodwind instruments. The breath controller itself was a small device that connected to a proprietary port on the back of the DX-7. A musician could insert it into the mouth (like a whistle) and blow through it. The air velocity was measured and turned into control data inside the DX. The control data could then be used to open a filter or manipulate some other assigned parameter without the player having to do anything special with his/her hands or feet. MIDI was in its infancy at the time, and the DX-7 was an extremely popular and groundbreaking instrument in a number of respects. As such, it seemed likely that breath control would become a common way of manipulating synth parameters in real time, so it made sense for a controller to have this function. In reality there is nothing unique about controller #2 compared to most of the other controllers. It can be used for any common continuous controller command so long as you set up the transmitting and receiving devices accordingly. You will simply see it referred to as breath controller fairly frequently in documentation. While breath controllers aren't as popular today as we once thought they would be, there are quite a few players who use them.
CoreMIDI refers to the built-in MIDI support available in Mac OS X. It allows you to set up which devices are attached to any interfaces that have Mac OS X drivers. You can assign device names and attributes, such as which channels they work with and what other features they support, such as MIDI Clock and/or MIDI Machine Control. CoreMIDI is essentially a built-in system that gives you the same power and flexibility in your MIDI setup that OMS and/or FreeMIDI provide under OS 9. Another advantage is that currently any CoreMIDI-enabled application under OS X can utilize MIDI Time Stamping (MTS) with any of the MOTU MIDI interfaces.
A phenomenon in the propagation of waves where the direction of a wave front (either a sound wave or an electromagnetic [light] wave) is altered when passing by an object or through a small aperture in a large surface. At shorter wavelengths relative to the obstacle, sound (and light) will tend to reflect off the surface more and bend around it less (which partially explains why you can hear, but not see, the stage at a concert when someone is standing in front of you). Waves will also bend to fill an opening behind a surface (which partly explains why you can hear someone talking in the next room through an open door even though you can't see them).
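The wavelength-versus-obstacle comparison above can be made concrete with a quick calculation. Assuming a speed of sound of roughly 343 m/s in air (the constant and function names are ours):

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate, in air at room temperature

def wavelength_m(frequency_hz: float) -> float:
    """Wavelength = speed of sound / frequency."""
    return SPEED_OF_SOUND_M_S / frequency_hz

# Low frequencies have wavelengths far larger than a person standing
# in front of you, so they diffract (bend) around the obstacle easily;
# high frequencies are much shorter and tend to be blocked or reflected.
print(wavelength_m(100))    # ~3.43 m
print(wavelength_m(10000))  # ~0.034 m
```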
Diffusion is the process of spreading or dispersing radiated energy so it is less direct or coherent. A Diffuser is a device that does this. The plastic covers over fluorescent lights in many office environments are diffusers. They make the light spread out in a more randomized way so it is less harsh. In audio, diffusion is a characteristic of any enclosed (or partially enclosed) space. It is caused by sound waves reflecting off of many complex surfaces. For example, a flat concrete wall produces a pretty distinct echo when sound reflects off of it. However a brick wall, while still pretty reflective, tends to diffuse the sound reflections and produces a much less distinct echo. This is due to both the surface of the brick itself and the mortar between the bricks (more specifically the edge diffraction of the joint between the two). All surfaces will of course differ and it is usually a variety of surfaces that create the most randomized diffusion of sound. Diffusion is a very important consideration in acoustics because it minimizes coherent reflections that cause problems. It also tends to make an enclosed space sound larger than it is. Diffusion is an excellent alternative or complement to absorption in acoustic treatment because it doesn't really remove much energy, which means it can be used to effectively reduce reflections while still leaving an ambient or live sounding space.
Abbreviation for DirectX Instrument. A platform for virtual synthesizer and sampler plug-ins that integrate with sequencer programs using Windows DirectX drivers. These instrument plug-ins are launched from within the sequencer and can be played via an external MIDI source or from recorded MIDI tracks. They can also be subsequently patched through effects plug-ins available to audio tracks of the sequencer from its mixer window.
According to standard definitions, early reflections are sounds that arrive at the listener after being reflected once or twice from surfaces of the listening space, such as the walls, ceiling, and floor. They arrive later than the direct sound, often in a range from 5 to 100 milliseconds, but before the onset of full reverberation. Early reflections give your brain information about the size of the room and the distance of sound sources within it, and they play an important role in determining the general character and sound of the room.
Abbreviation for "Electronic Dance Music."
In MIDI sequencers, an event list is a way to look at a written index of all the recorded MIDI messages, or events. While not used often in today's graphics-heavy software sequencer environments, event lists give users the ability to edit MIDI events precisely and comprehensively. Event lists are one of three ways to view or edit messages; the others are graphic editing, which is the most commonly used today, and notation editing, which is not available in all sequencers.
A sophisticated (and esoteric) form of additive synthesis (see WFTD archive Additive Synthesis) combining sound elements called "grains," which are used to make up sonic "events." Events are time sliced into "screens" that end up containing the amplitude and frequency dimensions of hundreds of events. Very complex sounds can be created using this technique, but the computational power required to generate them is so great that it has not been practical to use this form of synthesis in any commercially available hardware machines.
In audio a harmonic is sort of the opposite of a fundamental, though technically the fundamental is also considered a harmonic. Pretty confused? Harmonics of a particular waveform are multiples of its fundamental frequency. The first multiple is obtained by multiplying the fundamental frequency by one (1). Therefore in a strict sense the first harmonic is the same value (frequency) as the fundamental. The rest of the "harmonic series" (2x, 3x, 4x, etc.) of a sound make up the basic character, or timbre, of the sound based upon all of their relative amplitudes (levels).
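The "multiples of the fundamental" relationship above is simple enough to sketch directly. Using A440 as an example fundamental (our choice, not from the text):

```python
def harmonic_series(fundamental_hz: float, count: int) -> list[float]:
    """Harmonics are integer multiples of the fundamental frequency;
    the 1st harmonic (1x) is the fundamental itself."""
    return [fundamental_hz * n for n in range(1, count + 1)]

assert harmonic_series(440.0, 4) == [440.0, 880.0, 1320.0, 1760.0]
```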
Acronym for "High-Definition Multimedia Interface." HDMI is the first and only industry-supported, uncompressed, all-digital audio/video interface. It was designed to deliver crystal-clear, digital audio and video via a single cable, thereby dramatically simplifying cabling and providing consumers with the highest-quality home theater experience.
Based on OS X, iOS is an operating system for mobile devices developed by Apple, initially created for the iPhone. (The original name for the operating system was "iPhone OS.") iOS is now used in a variety of Apple mobile devices, including various generations of iPhone, iPod touch, iPad, and Apple TV.
iOS supports third-party applications, and its user interface supports multi-touch gestures such as tap, swipe, pinch, reverse pinch, and more. iOS is space efficient, requiring just 74 MB of storage.
Refers to a function in modern keyboards and synthesizers that use sample data for raw sounds. The keymap is what defines or assigns each sample to a particular key or key range (or each key to a sample, depending on how you look at it). This is sometimes confused with a zone, but in most keyboards zones are distinct and separate from keymaps. It depends on the architecture of the specific instrument, but keymaps are usually at a much lower level of the hierarchy than zones. If you were to make a sample-based piano program, for example, one of the first steps would be to assign your individual samples to a specific set of keys that will trigger them. This will cause the proper samples to play over the range of the keyboard. In most modern instruments this "sound" can then be layered with other sounds or routed through effects and filters to create the final program or patch. To make things confusing, keymaps are not always called keymaps, though the word keymap is by far the most descriptive of what they are. Some brands of keyboards refer to it as Key Group, Voice, Multisample, or Wave.
A MIDI timing reference signal used to synchronize pieces of equipment together. MIDI clock runs at a rate of 24 ppqn (pulses per quarter note). This means that the actual speed of the MIDI clock varies with the tempo of the clock generator (as contrasted with time code, which runs at a constant rate). Also note that MIDI clock does not carry any location information - the receiving device does not know what measure or beat it should be playing at any given time, just how fast it should be going.
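The tempo-dependent clock rate described above is easy to compute. A minimal sketch (function name is illustrative):

```python
PPQN = 24  # MIDI clock pulses per quarter note, per the MIDI spec

def clock_pulses_per_second(bpm: float) -> float:
    """BPM quarter notes per minute, times 24 pulses each, over 60 seconds."""
    return bpm * PPQN / 60.0

assert clock_pulses_per_second(120) == 48.0  # faster tempo -> faster clock
assert clock_pulses_per_second(60) == 24.0
```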
This is one of those terms that has been bandied about in the industry over the years and has come to have several subtly different meanings. The original meaning of MIDI delay refers to the time it takes for any active MIDI circuit to handle the signal. Just passing MIDI into, and then directly out of, any device (even without doing anything to it) takes some finite amount of time because of the electronics involved in managing and buffering the signal. This is MIDI delay, and in most cases it is well under 5 ms. The delay is cumulative, though, so if you pass your signal through several devices it may be significantly delayed by the time it reaches the last device. Some people also refer to the time it takes an instrument to respond to MIDI commands as MIDI delay. While true MIDI delay is one component of this, there are other factors, such as the speed of the processor in the device. Some instruments react more slowly as they are asked to do more (for example, play more notes at once), but this is technically not MIDI delay. Some musicians claim to be able to hear/feel MIDI delay and do not like performing in situations where MIDI is used. While it's pointless to dispute what a person says they can perceive, it is important to note that, given the speed of sound in air, the sound leaving a speaker cabinet on one side of a 20-foot-wide stage would take about 20 ms to reach the ear of a player on the other side.
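The stage-width comparison at the end can be checked with a quick calculation. Assuming a speed of sound of roughly 1,125 ft/s in air (the constant and function names are ours):

```python
SPEED_OF_SOUND_FT_S = 1125.0  # approximate speed of sound in air

def acoustic_delay_ms(distance_ft: float) -> float:
    """Time for sound to travel the given distance, in milliseconds."""
    return distance_ft / SPEED_OF_SOUND_FT_S * 1000.0

# Sound crossing a 20-foot stage takes roughly 18 ms -- several times
# a typical single-hop MIDI delay of well under 5 ms.
print(round(acoustic_delay_ms(20), 1))  # -> 17.8
```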
MIDI Implementation Chart
MIDI implementation refers to the specific MIDI messages and signals a piece of gear can recognize; a MIDI implementation chart is therefore a listing of the messages a particular device can transmit and recognize. This can be very useful when attempting to determine if a device can send and/or receive various types of channel or system messages. Normally found in the back of the device's manual, its MIDI implementation chart will consist of a list of available MIDI messages, whether the device incorporates those messages, and any special notes or limitations on how it deals with those messages. For example, the chart will list the MIDI channels and modes, note numbers, and continuous controllers the device can respond to. Support for aftertouch, velocity, pitch bend (often with bit resolution), and program change will be indicated. Also listed will be recognition of system exclusive, system real time (clock commands), system common (song position, song select, etc.) and aux messages (local on/off, all notes off, active sensing, and so on).
MIDI Log Jam
When too much MIDI data is present in a single MIDI cable, or between a MIDI interface and the host computer, timing anomalies can occur. This phenomenon, often called "MIDI log jam," is the result of the MIDI processor having too many time-sensitive events to fit into a serialized communication stream. Eventually the data gets dense enough that some bytes must wait in a buffer before being sent. If the wait is long enough, you will notice timing problems. If you do, it is usually a good idea to "thin out" your MIDI data by removing extraneous continuous controller data, or any other types of information that can generate lots of data.
MIDI Machine Control (MMC)
A part of the MIDI spec that allows MIDI devices to control hardware devices, MIDI Machine Control is commonly used to send transport control messages to hardware recorders. Play, Stop, and Locate are examples of MMC messages. Note that MMC does not include synchronization information, although MIDI sync info could also be sent to/from the device that MMC is addressing. MMC allows you to centralize control of your studio from a MIDI source (often a sequencer). A common scenario: Pressing play on a MIDI sequencer sends an MMC play message to a connected multitrack recorder, which begins playing. As the deck plays, it generates MIDI Time Code (MTC), which the sequencer then synchronizes to (chases). When "stop" is pressed on the sequencer, the deck also stops and ceases to send out MTC. When MTC stops, the sequencer stops chasing. Locating to a point within the sequence will cause the deck to fast forward or rewind to the corresponding location on tape.
MIDI Time Code (MTC)
A form of time code representing real time in Hours: Minutes: Seconds: Frames: Subframes, and transmitted over MIDI.
MTC can also be described as a way of sending SMPTE time code over MIDI cables. Like all forms of time code, MTC is designed to allow various pieces (in this case MIDI-equipped) of equipment to synchronize together.
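A minimal sketch of rendering the position format described above (function name is ours; subframes are omitted here for brevity):

```python
def format_timecode(hours: int, minutes: int, seconds: int, frames: int) -> str:
    """Render an MTC/SMPTE-style position as HH:MM:SS:FF."""
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

assert format_timecode(1, 2, 3, 29) == "01:02:03:29"
```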
Similar in concept to a harmonic. Overtones are tones produced by an instrument (or sound source) that are higher in frequency than the fundamental. They may or may not coincide with the frequencies of a harmonic series (harmonics), but they usually do. The difference is that harmonics are always musically related to the fundamental in that they are integer multiples of it. Overtones of a sound are often exactly the same as its harmonics except the first overtone is considered the second harmonic because the first harmonic is the fundamental. Overtones are also sometimes called partials (more on them later).
A type of PAR lamp measuring eight inches in diameter. (PAR lamp diameter can be derived by dividing the PAR number by eight; in this case, 64/8 gives us 8 inches.) PAR 64 lamps have standard power ratings of 250, 500, and 1,000 watts and are available with 110-volt or 220-volt ratings.
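The diameter rule stated above is a one-line calculation (function name is illustrative):

```python
def par_diameter_inches(par_number: int) -> float:
    """PAR lamp diameter in inches = PAR number divided by 8."""
    return par_number / 8

assert par_diameter_inches(64) == 8.0  # a PAR 64 is eight inches across
```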
The housing into which a PAR lamp is inserted; the PAR housing typically resembles a can with one end closed (where the lamp is inserted) and one end open (where the light beam is emitted). Many PAR cans have a frame or bracket on the open end for holding a colored gel or special effects media. PAR cans also typically have an integral clamp for mounting the can to a lighting bar or truss.
Any one of a series of tones which usually accompany the prime tone (fundamental) produced by a string, an organ pipe, the human voice, etc. The fundamental is the tone produced by the vibration of the whole string, or the entire column of air in the pipe; the partial tones are produced by the vibration of fractional parts of that string or air column. Harmonic tones such as these can also be obtained on any stopped stringed instrument (guitar, violin, zither) by lightly touching a nodal point of a string.
A function of some MIDI devices, patch mapping allows an incoming MIDI Program Change message to be assigned to call up any of the receiving device's internal program numbers. For example, an incoming MIDI Program Change message with a value of 44 might be mapped to call up internal program number 95, MIDI Program number 67 might call up internal program number 14, and so on. There are a variety of reasons for using this function. Just a few:
When layering sounds in a live situation, a single program change message from a controller can simultaneously call up several sounds located in different MIDI modules at different patch numbers.
On devices which have more than 128 patch locations, but don't respond to bank select messages, internal programs higher than #128 can be accessed using mapping (E-mu's Proteus family comes to mind as one example).
The most basic reason: convenience! Rather than copying patches around to put them in the order you want or need, simply use patch mapping to establish the desired order, and call them up through MIDI.
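A patch map is essentially a lookup table. A minimal sketch using the numbers from the example above (the dictionary and function names are ours, illustrative only):

```python
# Hypothetical patch map: incoming MIDI Program Change -> internal program.
patch_map = {44: 95, 67: 14}

def resolve_program(incoming_pc: int) -> int:
    """Map an incoming Program Change; unmapped values pass through unchanged."""
    return patch_map.get(incoming_pc, incoming_pc)

assert resolve_program(44) == 95
assert resolve_program(67) == 14
assert resolve_program(1) == 1  # no mapping defined -> pass through
```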
Physical Modeling Synthesis
A type of sound synthesis performed by computer models of instruments. These models are sets of complex equations that describe the physical properties of an instrument (such as the shape of the bell and the density of the material) and the way a musician interacts with it (blow, pluck, or hit, for example).
A type of audio circuit that combines two audio signals and outputs the sum and difference of their frequencies. The frequencies found in the original signals are not passed through to the output. For example, if two sine waves (single-frequency waveforms containing no overtones) are input, one with a frequency of 1000 Hz and the second at 400 Hz, the ring modulator will output two frequencies: 600 Hz and 1400 Hz. With more complex waveforms (which contain many more overtone frequencies) ring modulators produce a clangorous, "metallic" result often used for special effects, in synth programming, and so on. One popular use has been to process vocals, which produces sci-fi sounding "robotic" voices.
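For pure sine inputs, the output frequencies follow the product-to-sum identity sin(a)·sin(b) = ½[cos(a − b) − cos(a + b)]. A minimal sketch of the frequency arithmetic from the example above (function name is ours):

```python
def ring_mod_frequencies(f1_hz: float, f2_hz: float) -> tuple[float, float]:
    """For two sine-wave inputs, a ring modulator outputs the
    difference and sum of their frequencies."""
    return (abs(f1_hz - f2_hz), f1_hz + f2_hz)

assert ring_mod_frequencies(1000.0, 400.0) == (600.0, 1400.0)
```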
Sample Dump Standard (SDS)
The MIDI Sample Dump Standard is a method of sending digital audio sample data from one machine to another via MIDI connections. Due to the bandwidth limitations of MIDI, SDS transfers can be quite slow, but are an effective way to share sample data between samplers, or between samplers and computer-based sample editing software.
One of the categories of MIDI messages, System Exclusive (Sys Ex) is data intended for, and understood by, only one particular piece of gear. Normally, this data is used to communicate with and control parameters specific to that item. For example, all of the proprietary data in a Roland D-110 synthesizer representing RAM patches might be sent as a "sys ex dump" to a computer librarian. When the computer sends this data back out over MIDI, the only device recognizing and responding to it will be a D-110; all other synths and MIDI devices will ignore it. Other uses for sys ex? MIDI control of parameters not supported by continuous controllers, remote patch editing, patch bank select, and more. Uses depend on, and can be tailored for, each specific piece of MIDI gear: that's the beauty of sys ex!
A stage lighting technique where a person or object is lit from below, contrasting with the more standard lighting from above or in front. Uplighting produces a different shadow-and-light combination than other lighting techniques and can create dramatic effects.
An acronym for Variable Architecture Synthesis Technology. VAST was developed by Kurzweil's Research & Development Institute prior to the release of the original K2000 (1991). Back when most synthesizers utilized one main configuration of oscillators, envelope generators, and filters to produce all their sounds (which is still largely true of many synths today), the idea was to make a synthesizer whose individual building blocks could be changed and/or connected in different configurations (which Kurzweil calls Algorithms). This, of course, was not a new concept. Modular synthesizers have always had this flexibility. But the problem with modular synths is that you have to patch each component manually, which not only takes time, but also requires a great deal of knowledge (experience) in predicting the outcomes. Kurzweil simplified the process by putting 31 useful algorithms under computer control and building the functionality to easily utilize them into the OS. VAST is basically all of those architecture choices, plus the ability to modulate their parameters from a wide list of control sources.
A computer program that emulates the performance of an analog or digital synthesizer, a sampler, or an acoustic instrument. Virtual instruments earn this name because they operate entirely as software, with no dedicated physical "box": rather than the dedicated, proprietary hardware of the keyboards and synthesizers we've been used to over the years, they utilize the host computer's CPU and internal or external audio hardware to generate sounds. Virtual instruments can be of relatively simple design, such as a collection of samples with a playback engine, or they can use complex modeling algorithms to emulate analog synths of the past (called "virtual analog" synths). Most of these instruments will respond to MIDI continuous controller messages in the same manner as a hardware synthesizer.
A software based musical instrument such as a synthesizer or sampler that works in Steinberg's VST environment. These are referred to as VST Instruments in Steinberg and other software applications.
A type, or style, of keys found on a keyboard that does not have protruding lips or edges. Waterfall-style keys (also known as "Square Front") are best recognized as the keys found on the famous Hammond B-3 organ. Many B-3 players perform using the palm of their hands on the base of the keys in a "wipe" motion up the keyboard (glissando). This, and other performance styles, are made possible by waterfall keys. Common, sharp-edged synth keys would make this performance style difficult or impossible.