This forum is for Tips and Tricks. Please do not post any questions in this forum. It is only for information.
5 posts
Page 1 of 1
I was watching a promotional video from the brilliant John Mitchell (Lonely Robot, Frost*, It Bites) about a track mixdown for his new Lonely Robot album, and okay - he uses Cubase - but I was intrigued that he mixes down/renders all his VST instruments to wave files before doing a song mixdown.
I'm trying to decide if this is a good idea. Obviously, you're committing to your drum/keyboard/synth programming somewhat. Do you think that many plugins, like compressors and EQs, will provide a better result inserted over the rendered wave file rather than over the VST Instrument track itself?

He also has separate Outs from Addictive Drums 2. I'm in two minds about whether you can process VST drum tracks (individually) better outside the VST instrument GUI (AD2) compared to the mixer environment inside software like AD2.

Thoughts?

Bag.

Windows 11 64 bit, 12th gen i5 eight core CPU, 32GB RAM, 1810C interface, SSD drive (system) and USB SSD for audio and samples.
Studio One 6.5, Latest UC driver
by Lokeyfly on Sat Sep 26, 2020 8:20 am
Hey Bag,
Yes, mixing is typically easier once tracks are converted to audio than when they're left as a VSTi. There may be times when the two are pretty much neck and neck until near-final mixing or pre-mastering comes into play.

One might tend to think, "Well, just automate the instrument level as you would audio levels," but drawing expression out of a printed track is a somewhat different process. It's also not a compression or dynamic-processing fix either, though interesting results can come out of that and shouldn't be ignored on both instrument and audio tracks. When it all starts to sound like finality is taking place, that's when it's time to print everything.

As to a drum instrument like AD, or others, it's largely just fine if that VSTi has a reasonably good mixer, effects processing, etc. Again, it's fine up until that final-mix territory, where printing, and the various means of bussing it enables, opens up a whole other level of control over the main mix. This may include all kinds of tweaks, from dynamics to sibilance, to movement within that drum mix relative to the rest of the tracks in the song.

Can that be achieved before printing to audio for that final mix? Many times yes, it can, so go for it! If the feel or dynamic is there early on, it's only going to add to the excitement that much sooner. However, again, I'd eventually place that groove into individual or grouped tracks to be printed so that wonderful groove (already there) can sit best in the mix. Those might be only subtle changes, but they can be much more expressive when grouped for best overall control and feel. Remember: you still have the individual control as well.

The short of it is I rarely print an instrument track to audio until I am almost completely convinced the notes and timing are where I want them. Typically, the synths end up being printed first, and invariably the drums/percussion are last. Then I'll ride each instrument's level as if I were that musician playing in real time with the rest of the band. This creates not only more of a musician's feel but, in the larger order of things, a better production and a more complete realization of the composer's idea.

S1-6.2.1, HP Omen 17" i7 10th Gen, 32 GB,512 GB TLC M.2 (SSD),1 TB SSD. Win10 Pro, Audient iD14 MkII, Roland JV90, NI S49 MkII, Atom SQ, FP 8, Roland GR-50 & Octapad. MOTU MIDI Express XT. HR824, Yamaha HS-7, NS-1000M, Yamaha Promix 01, Rane HC-6, etc.

New song "Our Time"
https://youtu.be/BqOZ4-0iY1w?si=_uwmgRBv3N4VwJlq

Visit my YouTube Channel
https://youtube.com/@jamesconraadtucker ... PA5dM01GF7

Latest song releases on Bandcamp -
 
Latest albums on iTunes

All works registered copyright ©️
by Jemusic on Sun Sep 27, 2020 4:26 pm
It certainly is a great idea. I don't think the sound quality is any different applying plugins over a VST instrument track versus the rendered audio file. The main benefit is CPU: resources are considerably freed up when you mix all the virtual parts as rendered audio. Some instruments are heavy on CPU, and that load is still running while you are mixing. (Arturia's OB-Xa, for example, can get up to 60% CPU use with one instance alone. Diva is a CPU beast, I believe!)

Sessions will open faster as well with all VSTs rendered. Waves Grand Rhapsody, for example, takes quite a while to load its samples every time you boot it up.

External instruments are best rendered too, because then you don't have to turn them on and set up their sounds, etc.

If you are collaborating, you can send these rendered tracks to others as well. They then don't need the same VSTs in order to hear the parts.

MIDI files can remain behind, muted and popped into a folder. It's very easy to un-render a file, make changes to the MIDI, and re-render if you need to.

There are a lot of great reasons for doing this.

Specs i5-2500K 3.5 Ghz-8 Gb RAM-Win 7 64 bit - ATI Radeon HD6900 Series - RME HDSP9632 - Midex 8 Midi interface - Faderport 2/8 - Atom Pad/Atom SQ - HP Laptop Win 10 - Studio 24c interface -iMac 2.5Ghz Core i5 - High Sierra 10.13.6 - Focusrite Clarett 2 Pre & Scarlett 18i20. Studio One V5.5 (Mac and V6.5 Win 10 laptop), Notion 6.8, Ableton Live 11 Suite, LaunchPad Pro
by GMHague on Mon Sep 28, 2020 7:05 pm
Thanks guys, great answers and pretty much in line with what I've experienced through my dabbling so far. I've got a couple of observations:
Likewise to what's said here about VSTs and CPU usage, rendering everything to wave files offers an odd, and I'll admit somewhat illogical, "peace of mind" when you're mixing down. Everything is very straightforward, there are no VST Instruments chugging away in the background ... and yes, it's quick to tweak any issues MIDI-wise and re-render, although ideally you've programmed things to the nth degree before rendering anyway.
The other thing, slightly different in topic, is that I've found no real benefit in splitting and rendering individual drum channels from VSTs like Addictive Drums 2. Considering the GUIs of these VSTs are purpose-built and the in-house compressors, etc., are designed for the job, plus you often end up grouping them to a buss anyway ... maybe separating the snare or kick drum can allow individual effects if the song requires it, but on the whole a rendered stereo wave file of drums is as good as it will get compared to separate channels.
Do you agree?
Cheers!

Windows 11 64 bit, 12th gen i5 eight core CPU, 32GB RAM, 1810C interface, SSD drive (system) and USB SSD for audio and samples.
Studio One 6.5, Latest UC driver
by Baphometrix on Thu Oct 01, 2020 7:23 am
GMHague wrote...The other thing, slightly different in topic, is that I've found no real benefit in splitting and rendering individual drum channels from VSTs like Addictive Drums 2. Considering the GUIs of these VSTs are purpose-built and the in-house compressors, etc., are designed for the job, plus you often end up grouping them to a buss anyway ... maybe separating the snare or kick drum can allow individual effects if the song requires it, but on the whole a rendered stereo wave file of drums is as good as it will get compared to separate channels.
Do you agree?
Cheers!


There can be several advantages to bursting out the individual drum sounds from a multi-output drum plugin (or even from Impact XT) into separate channels, and then merging those channels into some bussing structure. But it all depends on your goals and your technical skill.

There's no question that it can be "more hassle" to create complex routing structures like this. But there's also no question that it gives you flexibility in your mixing decisions and techniques.

One typical example is for those who work in louder electronic "dance" or "bass music" genres. Emphasis on "loud". Mixing in a way that can yield the cleanest-possible-sounding LOUD master requires you to think about the sequencing and arrangement of your sounds in a very different way compared to someone working in a quieter genre. TL;DR - you need to sequence a lot of very loud "slices" of audio right NEXT to each other, instead of layering too many sounds on TOP of each other.

So in this world, your kick drum and snare drum sounds are often squeezed to within an inch of their lives with as much RMS as possible. They're usually the loudest sounds in your entire mix, and everything is balanced against them. You often don't want ANY other sound actually summing with the kick or the snare, because doing so would cause transient peaks to be higher than the headroom of the total song (project) allows for, which means the final mastering limiter has to hit those summed peaks very hard, which will destroy your mix.

So in this world, it is a routine practice to "sidechain duck" pretty much every other sound that happens at the same moment as the kick and snare, so that literally nothing is summing on top of the kick and snare. The classic example is the kick making the bass/sub sound duck out of the way every time the kick hits. But there are other nuances here too, and this is where complex drum routing comes into play.
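As a toy numerical sketch of that classic kick-ducks-the-sub move (my own illustration, not from the post; the signals, the 55 Hz note, and the 90% duck depth are all made-up numbers):

```python
import numpy as np

sr = 44100
t = np.arange(sr // 5) / sr                           # 200 ms around one kick hit
kick = np.sin(2 * np.pi * 55 * t) * np.exp(-t * 25)   # toy decaying 55 Hz kick thump
sub = 0.8 * np.sin(2 * np.pi * 55 * t)                # sustained sub-bass note

def peak_db(x):
    """Peak level of a signal, in dB relative to full scale."""
    return 20 * np.log10(np.max(np.abs(x)))

# No sidechain: the sub sums constructively with the kick's transient.
summed = kick + sub

# Sidechain duck: push the sub down ~90% at the hit, recovering as the kick decays.
duck_env = 1.0 - 0.9 * np.exp(-t * 25)
ducked = kick + sub * duck_env

print(f"no duck: {peak_db(summed):.1f} dBFS peak, ducked: {peak_db(ducked):.1f} dBFS peak")
```

The ducked version leaves the summed transient close to the kick alone, which is exactly the headroom the mastering limiter needs.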

For example, we often have a "room reverb" of sorts on at least some sounds in a drum kit, right? Like, usually, we like a subtle amount of reverb on the snare, at the very least. Putting reverb on more elements in the drum kit also helps the kit sound more natural and organic and "real" instead of dry and artificial and disconnected.

But what happens when the reverb signal is summed with the kick and the snare? That's right, you've just increased the crest factor (peak-to-RMS ratio) of the kick and the snare. So a producer working in loud genres who understands this will choose to duck the room reverb signal against the kick and snare.

Now let's think about the "drum top" sounds: hats, cymbals, shakers, percs, toms, etc. Many top patterns will have certain sounds hitting more or less at the same time as any one kick or snare hit. When a hat hits at the same moment as a kick or snare, the summed transient (crest factor) is suddenly much higher than when the kick or snare hits alone by itself. This is not necessarily a big deal in quieter genres, but in loud genres this simple combination can ruin your headroom, make your mastering limiter work WAY too hard, and will destroy your mix every time the limiter clamps down on big summed transients like that. So again, a person working in a louder genre might choose to route ALL top sounds (hat, ride, shaker, crash, perc, tom, etc.) to a sub-bus and then duck literally that entire sub-bus against the kick and the snare.
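To make the crest-factor arithmetic concrete, here's a toy sketch (my own, not from the post; the two signals are synthetic stand-ins, not real drum samples) showing how a coincident hit raises the summed peak much faster than it raises the RMS:

```python
import numpy as np

sr = 44100
t = np.arange(sr // 10) / sr                          # 100 ms window around one hit
kick = np.sin(2 * np.pi * 60 * t) * np.exp(-t * 30)   # toy decaying 60 Hz kick thump
hat = 0.4 * np.exp(-t * 200)                          # toy click-like hat transient

def crest_db(x):
    """Crest factor: peak level over RMS level, in dB."""
    return 20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2)))

# The coincident hat pushes the summed peak up proportionally more than the
# average (RMS) level, so the crest factor of the sum climbs.
print(f"kick alone: {crest_db(kick):.1f} dB, kick + hat: {crest_db(kick + hat):.1f} dB")
```

That extra crest factor is exactly the headroom cost the limiter has to pay at mastering time.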

Hopefully all that was easy to follow?

Here are two screenshots of my standard project template, showing my drum channel burst-outs and the bus routing they move through on the way up to the mixbus. I work in loud and quieter genres. The same routing works for both. I make heavy use of this routing in a loud song so that I can do the type of ducking I described above, as well as other insert processing that is helpful when working in a very limited total dynamic range. For example, applying surgical transient shaping on individual kicks and snares, or running dynamic resonance controllers on the cymbals and shakers (in the CYM.SHAK sub-bus) to reduce the harshness that creeps into these sounds when they have a very low dynamic range, and so on. By contrast, when I'm working on a more "relaxed" and quiet/dynamic song, I simply don't use the clippers and duckers you see sitting on these busses by default.

StudioOneTemplateDrumOuts.jpg


StudioOneBusRouting.jpg

Studio One 5.2.x Pro (Sphere) | Bitwig 4.x | Ableton 10.x
Faderport 8 | ATOM SQ | MOTU M4 | Windows 10 | i9 9900K | 64 GB RAM | Geforce RTX 2070

