Advantages Of MIDI
There are two main advantages to MIDI: it's an easily edited/manipulated form of data, and it's a compact form of data (ie, it produces relatively small data files).
Because MIDI is a digital signal, it's very easy to interface electronic instruments to computers, and then do things with that MIDI data on the computer with software. For example, software can store MIDI messages to the computer's disk drive. The software can also play back MIDI messages upon all 16 channels with the same rhythms as the human who originally caused the instrument(s) to generate those messages. So, a musician can digitally record his musical performance and store it on the computer (to be played back by the computer). He does this not by digitizing the actual audio coming out of all of his electronic instruments, but rather by "recording" the MIDI OUT (ie, those MIDI messages) of all of his instruments. Remember that the MIDI messages for all of those instruments go over one run of cables, so if you put the computer at the end, it "hears" the messages from all instruments over just one incoming cable. The great advantage of MIDI is that the "notes" and other musical actions, such as moving the pitch wheel or pressing the sustain pedal, remain separate messages, each tagged with its own channel. So the musician can store the messages generated by many instruments in one file, and yet the messages can easily be pulled apart on a per-instrument basis, because each instrument's MIDI messages are on a different MIDI channel. In other words, when using MIDI, a musician never loses control over any individual action that he made upon each instrument, from playing a particular note at a particular point to pushing the sustain pedal at a certain time. The data is all there, but it's put together in such a way that every single musical action can be easily examined and edited.
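To make that per-channel separation concrete, here's a minimal Python sketch. It's an illustration, not any particular sequencer's code; the function and variable names are mine. It relies on the fact that every channel voice message carries its channel number in the low nibble of its status byte:

```python
# Minimal sketch: pull a combined MIDI stream apart by channel.
# Channel voice messages have status bytes 0x80-0xEF, where the
# low nibble of the status byte is the channel number (0-15).

def split_by_channel(messages):
    """Group channel voice messages by the channel in their status byte."""
    by_channel = {}
    for status, *data in messages:
        if 0x80 <= status <= 0xEF:          # channel voice message
            channel = status & 0x0F         # low nibble = channel 0-15
            by_channel.setdefault(channel, []).append((status, *data))
    return by_channel

# Two instruments playing at once: one on channel 0, one on channel 1.
stream = [
    (0x90, 60, 100),   # Note On, channel 0, middle C, velocity 100
    (0x91, 64, 90),    # Note On, channel 1, the E above middle C
    (0x80, 60, 0),     # Note Off, channel 0
    (0x81, 64, 0),     # Note Off, channel 1
]
print(split_by_channel(stream))  # each instrument's messages, now separate
```

Real MIDI streams also involve running status and system messages, but the channel tagging shown here is the mechanism that keeps each instrument's part recoverable from one combined stream.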
Contrast this with digitizing the audio output of all of those electronic instruments. If you've got a system that has 16 stereo digital audio tracks, then you can keep each instrument's output separate. But if, as is typical, you have only 2 digital audio tracks, then you've got to mix the audio signals together before you digitize them. Those instruments' audio outputs don't produce digital signals; they're analog. Once you mix the analog signals together, it would take massive amounts of computation to later filter out separate instruments, and the process would undoubtedly be far from perfect. So ultimately, you lose control over each instrument's output, and if you want to edit a certain note of one instrument's part, that's even less feasible.
Furthermore, it typically takes much more storage to digitize the audio output of an instrument than it does to record an instrument's MIDI messages. Why? Let's take an example. Say that you want to record a whole note. With MIDI, there are only 2 messages involved. There's a Note On message when you sound the note, and then the next message doesn't happen until you finally release the note (ie, a Note Off message). Each message is just 3 bytes (a status byte plus two data bytes), so that's 6 bytes total. In fact, you could hold down that note for an hour, and you're still going to have only 6 bytes: a Note On and a Note Off message. By contrast, if you want to digitize that whole note, you have to be recording the entire time that the note is sounding. So, for as long as you hold down the note, the computer is storing literally thousands of bytes of "waveform" data representing the sound coming out of the instrument's AUDIO OUT. You see, with MIDI a musician records his actions (ie, movements). He presses the note down. Then, he does nothing until he releases the note. With digital audio, you record the instrument's sound, so while the instrument is making sound, it must be recorded.
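Here's the arithmetic behind that comparison, as a small Python sketch. The note length, sample rate, bit depth, and channel count are just typical CD-quality assumptions chosen for illustration, not anything mandated by MIDI or WAVE:

```python
# Back-of-the-envelope storage comparison for one whole note held
# for 2 seconds (four beats at 120 BPM in 4/4 time).

NOTE_SECONDS = 2.0           # duration the note is held
SAMPLE_RATE = 44_100         # samples per second (CD quality, assumed)
BYTES_PER_SAMPLE = 2         # 16-bit audio (assumed)
CHANNELS = 2                 # stereo (assumed)

midi_bytes = 3 + 3           # one Note On + one Note Off, 3 bytes each
audio_bytes = int(NOTE_SECONDS * SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS)

print(midi_bytes)            # 6
print(audio_bytes)           # 352800 -- nearly 60,000 times as much data
```

And remember: the audio figure grows with every second the note is held, while the MIDI figure stays at 6 bytes no matter how long you hold the key down.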
So why not always "record" and "play" MIDI data instead of WAVE data, if the former offers so many advantages? OK, for electronic instruments that's a great idea. But what if you want to record someone singing? You can strip-search the person, but you're not going to find a MIDI OUT jack on his body. (Of course, I anxiously await the day when scientists will be able to offer "human MIDI retrofits". I'd love to have a built-in MIDI OUT jack on my body, interpreting every one of my motions and thoughts into MIDI messages. I'd have it installed at the back of my neck, beneath my hairline. Nobody would ever see it, but when I needed to use it, I'd just push back my hair and plug in the cable.) So, to record that singing, you're going to have to record the sound, and digitizing it into a WAVE file is the best digital option right now. That's why sequencer programs exist that record and play both MIDI and WAVE data, in sync.