Concept: MIDI - Audio - Hardware - Software

This is an article directed at the creative process. I will attempt to introduce some very basic and fundamental concepts available to you with the Motif XF Music Production Synthesizer. Some of this is review, and some of it is simply background on what tools are available. The XF combines a keyboard controller with an Advanced Wave Memory 2 (AWM2) synthesizer tone engine and an Integrated Sampling Sequencer.

MIDI
MIDI data is generated any time you press a key or move a controller on your Motif XF. These gestures generate coded messages (Note-ons, Note-offs, velocity, volume, pan, reverb send, pitch bend, aftertouch, knob or slider movements, etc.). These coded messages can be recorded to a device called a SEQUENCER. Once these messages are captured on a TRACK, you can edit the data, correcting mistakes and shaping the musical performance. These messages are not audible; they do not make any sound. They must be played back to the Motif XF’s tone generator (synth engine) for the messages to be interpreted and turned back into audible sound. The advantage of recording a musical performance as MIDI data can be easily understood as you attempt to emulate different non-keyboard instruments. Mimicking the phrasing of a flute or trumpet can take some skill when you are doing it with a keyboard interface. Playing a convincing guitar lick is not a simple matter of applying your traditional keyboard technique. The XF introduces some specific controllers to help you better articulate the subtle musical gestures necessary to really perform like the instrument you are emulating. In addition to the pitch bend and modulation wheels, you will find the XF’s XA CONTROL functions that help you perform legato phrasing on lead Voices.
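To make the idea of “coded messages” concrete, here is a minimal sketch in plain Python showing the raw bytes behind a few of those messages. The status and data values come from the MIDI 1.0 specification; the function names are just for illustration.

```python
# Status bytes from the MIDI 1.0 spec; the low nibble carries the channel.
NOTE_ON, NOTE_OFF, CONTROL_CHANGE = 0x90, 0x80, 0xB0

def note_on(channel: int, note: int, velocity: int) -> bytes:
    return bytes([NOTE_ON | channel, note, velocity])

def note_off(channel: int, note: int, velocity: int = 0) -> bytes:
    return bytes([NOTE_OFF | channel, note, velocity])

def control_change(channel: int, controller: int, value: int) -> bytes:
    return bytes([CONTROL_CHANGE | channel, controller, value])

# Middle C (note 60) on channel 1 at velocity 100, then released;
# Controller #64 is the sustain pedal (value >= 64 means "pedal down").
events = [
    note_on(0, 60, 100),
    control_change(0, 64, 127),   # sustain pedal down
    note_off(0, 60),
    control_change(0, 64, 0),     # sustain pedal up
]
print([e.hex() for e in events])
```

None of those bytes make a sound by themselves; they are only instructions until a tone generator interprets them.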

Audio
Audio is just what you know it to be… in this case, the sound generated by the synthesizer engine or passing through its audio system. We are only talking about it here to separate it from MIDI data. When you are connected within a system, you have MIDI data triggering the synth engine, and the synth engine generating the audio signal. It is basic and simple, but it is a very important thing to grasp. Ultimately, it is audio data that becomes the finished product. It is what you can take with you - and play for your friends.

SEQUENCER: When what you hear is not what you get:
Say you are playing the XF piano sound using an FC3 sustain pedal. You can record yourself generating continuous controller messages from the FC3 pedal to the sequencer. But what plays back is always going to be a direct result of what was recorded. If you step on the sustain pedal a few seconds before the sequencer begins recording, it will sound like the sustain pedal is working while you play; but the EVENT generated when you initially stepped on the pedal was never registered on the SEQUENCER track, so when you play this back the sustain ON message never occurs.

They call it a SEQUENCER because events are placed one after the other. The order of recorded events is what primarily determines the playback. 
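To see why that one missing event matters, here is a toy sketch in Python of a track recorded after the pedal was already down. The event names are hypothetical, not the XF’s internal format; the point is that playback can only interpret events that are actually in the list.

```python
# Events are (tick, message) pairs in time order. Recording started
# AFTER the pedal was pressed, so there is no ("cc", 64, 127)
# sustain-ON event anywhere in the track.
recorded_track = [
    (0,   ("note_on", 60, 100)),
    (480, ("note_off", 60, 0)),
    (960, ("note_on", 64, 90)),
]

def play(track):
    sustain = False                      # pedal state defaults to OFF
    for tick, msg in track:
        if msg[0] == "cc" and msg[1] == 64:
            sustain = msg[2] >= 64       # only an event can change it
        print(tick, msg, "| sustain:", sustain)

play(recorded_track)                     # sustain never turns on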

Hardware Processing
Allocation of the Effects is a part of the creative process in music production. How to approach this area of your synthesizer is the topic of this article. This analogy can help make it clear: when we say the SYSTEM EFFECTS recreate the “outer environment” - that is, the room acoustics of the space in which the band is assembled - you can picture this in your mind’s eye.

Typically, a “live” band will all play on the same (one) stage at the same time, so there is only one reverberation chamber that contains all the performers. Picture your XF Voice selections in MIXING mode as an actual musical ensemble. When you close your eyes and envision the loudness balance and stereo positioning of these different instruments on a stage, you want to be able to find the bass player, the guitar, where the keyboards are, and where the drummer is sitting. If you are at a live performance (in the room with the actual performers), your ear/brain does this for you automatically. You do not hear different room acoustics for each player; rather, it is the room acoustics that let you position, in your mind’s eye, exactly where each of the players is in the room before you.

Say you are seated in row 12, center of house, listening to a live performance in a club. With your eyes closed you are listening to a MIXING setup (a combination of instrument parts playing music in a stereo field). It should not sound like everyone is in the middle, sitting on top of each other. Because you are working in stereo, you get a stereo image: you can start to position the players with the PAN parameter and with the amount of SEND to the Reverb processor. Those things you want up close, clear, and punchy would have little or no reverberation; those you want to place farther back get a bit more reverberation, and so on.

The fact that the drummer is farther away from you and the saxophone is closer is determined by their positioning on the stage and how your brain interprets the sound: the volume, the amount of delay, and the reflections of the sound in the room you are in.

If the Insertion Effects are the personal gear of the individual player, the System Effects are those that recreate the outer environment. The Insertion Effects can be assigned to real-time controllers and can be ‘performed’ while emulating the instrument part. The System Effects are not real-time controllable, even though each PART of the MIX has a send amount. This makes sense, since the size and shape of the room rarely change during a performance, but you can reposition a player (closer or farther away) by adjusting the SEND amount. This can greatly influence how you hear a particular instrument within your mix. Reverb Send amount is always in relationship to the other musical instruments playing. Your sense of distance and position is greatly influenced by reverberation, and it is worthy of constant study. (In other words, you should listen, again and again, to well-engineered recordings to get a sense of how reverb is used. It is on most everything you hear; by the time you notice it, you have a lot. Learn to listen to tracks with and without reverberation - learn to understand what “less is more” means.) The System Effects are shared by all PARTS of the system via a very typical SEND/RETURN mixer arrangement.
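The SEND/RETURN idea is easy to see in code. Below is a minimal sketch in Python with NumPy - my own illustration, not the XF’s actual DSP - in which every PART taps a share of its signal onto one shared reverb bus, and a single return is mixed back in. Turning a PART’s send up or down moves that player in the room; the room itself never changes.

```python
import numpy as np

def apply_reverb(bus: np.ndarray) -> np.ndarray:
    # Stand-in for a real reverb: a crude feedback-delay tail.
    out = bus.copy()
    delay, feedback = 2205, 0.4            # ~50 ms at 44.1 kHz
    for i in range(delay, len(out)):
        out[i] += feedback * out[i - delay]
    return out

def mix(parts, sends, pans):
    """parts: list of mono signals; sends: 0..1 reverb SEND per part;
    pans: 0 (left) .. 1 (right). One shared reverb = one 'room'."""
    n = max(len(p) for p in parts)
    left, right, bus = np.zeros(n), np.zeros(n), np.zeros(n)
    for p, send, pan in zip(parts, sends, pans):
        sig = np.pad(p, (0, n - len(p)))
        left  += sig * (1 - pan)           # dry signal, panned
        right += sig * pan
        bus   += sig * send                # tapped off to the shared bus
    ret = apply_reverb(bus)                # one RETURN serves every part
    return left + 0.5 * ret, right + 0.5 * ret
```

Notice there is exactly one apply_reverb call no matter how many parts you mix: that is the SEND/RETURN economy, and it is why the send amount per PART is the per-player control you have.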

That said, you can copy the REVERB from a particular VOICE to your MIX (REVERB is the effect most responsible for reflected audio within the environment) separately from the CHORUS processor (the effect responsible for specialty time delays). You are offered a dialog box in which you select which effect you want to copy. You can copy the Reverb from one Voice and the Chorus from another Voice.

Think about this: the Chorus processor is going to be assigned to some type of DELAY effect - be it chorus, phaser, flanger, delay, echo, etc. How many sounds in your mix actually require this? The answer is typically not that many. For example, you have an acoustic piano - do you use the Chorus processor on it? Mostly no. An acoustic guitar? Mostly no. Strings? It can vary. Drums? No… Think about how the Effects are going to be used, and try not to think “it must sound exactly like it does in VOICE mode” - start making it sound how YOU want it to sound in the current mix of things. Nothing in an ensemble sounds the same as it does individually. Your ear/brain interprets its contribution entirely differently. Trust me, it does. Seek always to make your instrument sound good in the current surroundings.

Alone, you might put more reverb on a guitar than you do when you place that guitar in with other instruments. Alone, it is the only sound in the environment, and your attention to it - and your mind’s-eye position for it - is one thing. When you place it with a bass, a piano, and some strings, the same amount of reverb might place it at the back of the stage rather than in the lead. It can vary.

Is there a way to set one System Effect for each Voice in MIXING mode - I mean, different System Effects for each Voice?

Yes, there is. And again the analogy applies. In the real world, you have a band in the studio and you want to record each instrument in its own separate room acoustics. You could, of course, put the drummer in a large ambient room (many recording studios used to have rooms with 22-foot ceilings to get a nice ambient drum ROOM sound). Acoustic guitar rooms were also used to enhance the acoustics of that particular instrument. So room acoustics have long been manipulated by recording engineers during the tracking process.

You can work the same way. The “live” band thing is completely altered when the band goes to a studio. The studio is a series of functions that break the music into separate parts so that it can be reassembled in a specific fashion. This often includes recording things in isolation - not just in isolation booths, but on totally separate days, using the entire studio room to record one instrument (alone).

You can use the powerful INSERTION and SYSTEM EFFECTS to “print” any PART as an audio clip. Remember, any of your 16 MIXING PARTS can contain a musical instrument sound (a normal or drum VOICE), or it can contain audio clips you sample/resample using the Integrated Sampling Sequencer (a user sample VOICE).

This allows you, during an overdub session, to isolate any PART in the XF’s Insertion Effects and print the results as an audio clip. Each PART designated as a USER SAMPLE VOICE (SMPL or SP) in your MIXING setup can hold as many as 128 audio clips, each individually triggerable by its own MIDI Note-On event. In other setups you can isolate PARTS completely and use the XF’s System Effects to process audio in conjunction with your DAW software.
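As a hypothetical illustration of that keymap idea (the clip names and function below are mine, not the XF’s internal format), a user sample Voice behaves roughly like a table of up to 128 clips indexed by MIDI note number:

```python
# One clip per MIDI note number (0-127); only two keys are mapped here.
clips = {60: "verse_guitar.wav", 62: "chorus_guitar.wav"}

def handle_midi(status: int, note: int, velocity: int):
    # 0x90 = Note On; velocity 0 is treated as Note Off per the MIDI spec.
    if status & 0xF0 == 0x90 and velocity > 0:
        clip = clips.get(note)
        if clip:
            print(f"trigger {clip} at velocity {velocity}")

handle_midi(0x90, 60, 100)   # plays the verse clip
handle_midi(0x90, 61, 100)   # unmapped key: nothing plays
```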

Once you commit a track as audio via a RESAMPLE, it will sound exactly as it did when it was MIDI data triggering the tone engine… only now it will be an audio clip triggered for playback. In this fashion you can ‘print’ each PART as necessary.

You can work with the internal SDRAM (128MB of RAM), using it to render any of the internal PARTS as audio. Once a PART is transferred to audio, you are free to reallocate the hardware. The Integrated Sampling Sequencer will automatically create the MIDI Events to trigger playback - so printing PARTS as audio tracks is a very powerful weapon in the arsenal of the XF! Once the track is rendered as audio in SDRAM, you can trim it and use the full editing power of an XF VOICE to maximize its sound, then COPY the audio data to your FLASH BOARD - or you can export it to your computer DAW (if you are using Cubase, it can be transferred via an ALL data file: “IMPORT Motif XF Song”).

Working with an external DAW (this, too, is a very viable way to work):
1) Create your tracks initially as MIDI data (easily editable and great for constructing difficult to play things like drums, etc.)
2) Edit your MIDI Tracks to perfect your performance
3) Prepare the tone generator PARTS for transfer to your internal SDRAM or to your DAW as audio tracks. Customize the processing, bringing the full resources of the XF synth engine (including Effects) to bear.
4) Adjust audio output levels to facilitate proper record levels in your DAW, compensating for MIDI dynamic range versus audio record dynamic range (see the sketch after this list).
5) Print the PART as audio to your DAW as an “audio stem”  
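As a rough, hypothetical illustration of that scale difference: MIDI expresses loudness as a velocity from 1-127, while your DAW meters in dBFS. The square-law velocity curve below is one common convention among synths, not documented XF behavior, but it shows why a “medium” velocity does not land anywhere near the middle of your DAW’s meter.

```python
import math

def velocity_to_dbfs(velocity: int, full_scale_vel: int = 127) -> float:
    # Square-law curve: a common (assumed) velocity-to-amplitude mapping.
    amplitude = (velocity / full_scale_vel) ** 2
    return 20 * math.log10(amplitude)

for v in (32, 64, 100, 127):
    print(f"velocity {v:3d} -> {velocity_to_dbfs(v):6.1f} dBFS")
# velocity 64 ("half") lands around -12 dBFS, not -6; hence step 4's
# advice to check record levels rather than trust the MIDI numbers.
```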

As you do this, all PARTS play back in perfect synchronization (whether they are the original MIDI data triggering the XF tone engine or the audio tracks you rendered from them). Once you commit a PART as audio to your internal SDRAM or to your DAW, you can MUTE the XF MIDI track and work with the audio track version.

I often move the MIDI track to an empty SONG (which means, yes, I save it, because it is the “original source data”), and if later I am unhappy with a PART I’ve committed to audio, I still have the original source data - I can recall it, edit it, and fix the part… and then print a new audio version.

Once you have a PART as AUDIO:
1) you can transfer it from SDRAM to your FLASH board - where you can always access it
2) you can play it as audio from your DAW, muting the MIDI track (the source data)

This will allow you to reallocate your Motif XF hardware: you have an empty track and/or a free PART. On large projects where there are going to be in excess of 16 PARTS, this is a way in which you never run out of polyphony, never run out of effects processing, and never run out of available musical PARTS.

MIDI and Audio, Hardware and Software - it is really the best of both worlds.


