Discuss Notion Music Composition Software here.
4 posts
Page 1 of 1
I've never really known the answer to this question and it has bugged me. If I use a VST with pan and reverb controls, how does it work with the Notion pan and reverb, exactly? If I create a violin section out of separate instruments in GPO, I can pan each instrument in the GPO mixer to get a little more textured effect. If I haven't touched the pan in Notion, I hear the GPO pan. But if I pan in a different direction, it seems Notion overrides the GPO pan.
When does the Notion control override the VST's control? If I want to use the VST control, should I somehow disable the Notion controls? And is there some kind of hierarchy by which reverbs take effect?
by Surf.Whammy on Thu Jun 28, 2018 12:08 pm
thomasbaxter wrote: I've never really known the answer to this question and it has bugged me. If I use a VST with pan and reverb controls, how does it work with the Notion pan and reverb, exactly? If I create a violin section out of separate instruments in GPO, I can pan each instrument in the GPO mixer to get a little more textured effect. If I haven't touched the pan in Notion, I hear the GPO pan. But if I pan in a different direction, it seems Notion overrides the GPO pan.
When does the Notion control override the VST's control? If I want to use the VST control, should I somehow disable the Notion controls? And is there some kind of hierarchy by which reverbs take effect?


They are separate and independent. The VSTi virtual instrument does its work and generates audio for the respective instrument(s), which it sends to NOTION. Then you can make additional changes in the NOTION Mixer . . . :)

THOUGHTS

(1) As a general rule, I think it's best not to use the native NOTION 6 reverb. The reason is that there are more sophisticated, third-party reverberation effects plug-ins . . .

(2) The effects in a VSTi virtual instrument are separate and independent from the effects plug-ins you are using on the track in NOTION 6, which includes any effects plug-ins (native or VST) on the NOTION 6 Master Stereo Track . . .

(3) Digital Audio Workstation (DAW) stereo tracks do not have true panning controls. Instead, they are balance controls, and all they do is raise or lower the respective left-channel and right-channel volume levels, which is not true panning . . .
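To make the distinction concrete, this is a minimal Python sketch of a DAW-style stereo balance control (the function name and values are only for illustration): turning it toward one side only lowers the opposite channel's volume, so material that exists only in the attenuated channel gets quieter and eventually disappears rather than moving across the stereo field . . .

```python
# Minimal sketch of a DAW-style stereo "balance" control (NOT true panning).
# pos ranges from -1.0 (far left) to +1.0 (far right).
# All it does is lower the volume of the opposite channel of an existing
# stereo pair; it never moves left-channel material into the right channel.

def balance(left, right, pos):
    """Apply a balance control to one stereo sample pair."""
    if pos > 0:            # turned toward the right: lower the left channel
        left *= (1.0 - pos)
    elif pos < 0:          # turned toward the left: lower the right channel
        right *= (1.0 + pos)
    return left, right

# A stereo pair with different material in each channel, balanced hard right:
l, r = balance(0.8, 0.5, +1.0)
# The left channel is silenced and the right channel is unchanged, so
# whatever existed only in the left channel simply disappears.
print(l, r)   # -> 0.0 0.5
```

True panning of a monaural source is different: there the same signal is distributed between the two channels, as sketched in the pan-law example further below . . .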

(4) Monaural tracks have true panning controls, but NOTION 6 does not have monaural tracks . . .

(5) DAW applications usually have an effect plug-in that can be used to separate a stereo track into two monaural tracks for the purpose of providing true panning. In Studio One, this is done by the Dual Pan effect plug-in . . .

[NOTE: There are various "panning laws" that determine the way panning is done, and the Studio One "Dual Pan" supports them. The easiest way to understand panning rules is to imagine that you are listening with studio quality headphones to a monaural instrument playing a continuous note at a constant volume level on a monaural track. When you pan the instrument to the far-left position, you only hear the instrument at far-left, which means that what you hear is being presented by one side of the headphones. When you pan it to the far-right position, you only hear the instrument with the other headphone, at far-right. But when it is panned to top-center, you hear the instrument in both ears. Unless you apply one of the panning rules, what happens is that the perceived loudness when the instrument is panned top-center will be greater than the perceived loudness when the instrument is panned either (a) far-left or (b) far-right. The various panning rules (a.k.a., "pan laws") are designed to solve this problem, so that regardless of where the instrument is panned on what I call the "Rainbow Panning Arc", the perceived loudness is constant . . . ]
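The constant-loudness idea behind the pan laws can be sketched in Python; this is an illustrative -3 dB "constant power" pan law for a monaural source (the sine/cosine taper shown here is one common choice among the various pan laws, not the only one):

```python
import math

# Illustrative -3 dB "constant power" pan law for a MONAURAL source.
# pos: -1.0 = far left, 0.0 = top-center, +1.0 = far right.
# Mapping pos onto an angle from 0 to pi/2 gives left/right gains whose
# squares (power) always sum to 1.0, so the perceived loudness stays
# constant everywhere along the panning arc.

def constant_power_pan(pos):
    theta = (pos + 1.0) * math.pi / 4.0    # 0 .. pi/2
    return math.cos(theta), math.sin(theta)

for pos in (-1.0, -0.5, 0.0, 0.5, 1.0):
    lg, rg = constant_power_pan(pos)
    print(f"pos={pos:+.1f}  L={lg:.3f}  R={rg:.3f}  power={lg*lg + rg*rg:.3f}")
```

At top-center both gains are about 0.707 (that is, -3 dB), so no position on the arc sounds louder than any other . . .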

[Image: Dual Pan Effect Plug-In ~ Studio One]

[Image: "Rainbow Panning Arc" diagram]

(6) In contrast to every DAW and most other audio applications that have stereo tracks, NOTION 6 almost has true stereo panning controls. The exception that maps to "almost" is that the NOTION 6 volume controls are stereo and operate equally on the left-channel and right-channel. If NOTION 6 had volume controls with separate volume sliders for the left-channel and right-channel, then the NOTION 6 panning controls would be true stereo panning controls.

Nevertheless, this is useful when you want to position sounds very specifically in conjunction with a DAW application like Studio One . . .

For example, I developed a strategy in NOTION that I call "Sparkling"; and it involves spreading individual notes across as many as eight staves--all with the same instrument (native or VSTi) . . .

If the instrument being "Sparkled" is an electric guitar, then I will create as many as eight staves of electric guitar and then pan each one to a specific location . . .

When there are eight staves, I set the panning on the stereo track for each instrument to a specific location, as identified in the diagram of the "Rainbow Panning Arc" (see above) . . .

If I want a note to appear at far-left, then I put the note on the staff that has its stereo track panned in NOTION to far-left; if I want the note to appear at far-right, then I put the note on the far-right panned staff; and so forth . . .

It takes about an hour to "Sparkle" about three minutes of an instrument at a moderate tempo; and then I fine-tune the panning when I record it in the DAW application in a ReWire session . . .

This is an example of a "Sparkled" psaltery harp, and it's easier to hear the panning motion when you listen with studio quality headphones like the SONY MDR-7506 (a personal favorite) . . .

[NOTE: There is a deep bass synthesizer that is panned top-center for the entire song. The psaltery harp is "Sparkled" . . . ]

[video]


[NOTE: This is two measures of the "Sparkled" psaltery harp, which is labeled "Synth" because it's played by SampleMoog (IK Multimedia) or Kontakt 5 (Native Instruments). I did this about four years ago, and at the moment I don't remember what I used for the psaltery harp . . . ]

[Image: two measures of the "Sparkled" psaltery harp]

(7) Until recently, the panning controls in the NOTION Mixer were set to full-spread; but more recently the default setting has changed to both left and right "dots" pegged to top-center. You can do some experiments to determine whether this changes anything; but when you move the left "dot" to far-left and the right "dot" to far-right, I think this maps to "full-spread". Whether this is the case is another matter, but (a) it appears logical and (b) it's a bit of a puzzle that the default panning position is with both "dots" at top-center . . .

(8) Panning to top-center is easy . . .

(9) Panning to specific locations is difficult, and it's an advanced activity--especially when you are limited to working with (a) stereo source material and balance controls rather than (b) monaural source material and true panning controls . . .

(10) If you want to control panning and other effects with the VSTi virtual instrument and its onboard effects, then (a) set the NOTION tracks to full-spread and (b) avoid using the native NOTION reverb; because stereo reverberation (native or VST) on the Master Stereo Track destroys precise individual instrument and voice panning . . .

SUMMARY

What happens in a VSTi virtual instrument and its engine is separate and independent from whatever you decide to do in the NOTION Mixer . . .

As an example, in one of my current projects in this forum, I am having a bit of FUN with a song called "Surf Zot"; and it has rhythm guitars that are panned far-left and far-right . . .

[NOTE: This is best enjoyed when you listen with studio quality headphones . . . ]

[video]


Project: ReWire ~ NOTION + Studio One Professional (PreSonus NOTION Forum)

The rhythm guitars are done in SampleTank 3 (IK Multimedia), and they are panned far-left and far-right in SampleTank 3. Additionally, I use the balance controls in Studio One to emphasize the far-left and far-right panning. So far, I have not used the Dual Pan control in the Studio One ".song" for the rhythm guitars, because the rhythm guitars already are isolated in SampleTank 3 to far-left and far-right, respectively . . .

Explained another way, since the rhythm guitar at far-left only has audio for the far-left coming from SampleTank 3, it is not necessary to isolate it more than it already is isolated . . .

Ideally, it would be preferable to have monaural electric guitars; but for practical purposes nearly all VSTi virtual instruments have stereo samples--which from the perspective of producing and mixing is annoying . . .

It's annoying, because (a) when producing and mixing you want to be the one who specifies and controls where each instrument and vocalist is heard in the sonic landscape and (b) even when you use the add-on "true stereo panning" effect plug-ins, the source material is stereo when you are using AUi (Mac only) and VSTi virtual instruments . . .

When you are recording real instruments and real vocalists, (a) you will use monaural microphones and (b) even when you use several monaural microphones, each one has its own track with true panning . . .

For reference, even though this now is obvious here in the sound isolation studio, it took me several years to discover why it was nearly impossible to locate a VSTi virtual instrument rhythm guitar at far-left very precisely . . .

In retrospect, I think that years ago all the folks who create and sell VSTi virtual instruments and sampled sound libraries had a secret meeting to which no producers and audio engineers were allowed . . .

They decided that "stereo" is a powerful word and that it would be cool if everything was "stereo" . . .

There were marketing folks at the meeting, and they agreed that "We can sell 'stereo'. It's a winner!" . . .

A few of the folks asked, "What is 'stereo'?"; but nobody actually knew the answer; so they said it was "power", "really big", and "We like it!" . . .

Technically, if you want to make a stereo recording, then you put a monaural microphone on the left side of the room; and you put a monaural microphone on the right side of the room . . .

There are stereo microphones, typically for the same reason (see above regarding the "secret meeting"), although with a few caveats; but for practical purposes, nobody uses them . . .

Lots of FUN! :)

P. S. This is perhaps the definitive song for exquisite producing using effects and precise monaural panning in a headphone mix, which is fabulous . . .

[video]


Fabulous! :+1
Last edited by Surf.Whammy on Mon Jul 02, 2018 11:42 pm, edited 2 times in total.

The Surf Whammys

Sinkhorn's Dilemma: Every paradox has at least one non-trivial solution!
by thomasbaxter on Thu Jun 28, 2018 2:24 pm
You're the ace, S-Wham. I will proceed in accordance with Thought (10). Are you specifically saying don't use the Notion reverb in the master, or any reverb in the master?

TB
by Surf.Whammy on Thu Jun 28, 2018 11:32 pm
thomasbaxter wrote: You're the ace, S-Wham. I will proceed in accordance with Thought (10). Are you specifically saying don't use the Notion reverb in the master, or any reverb in the master?

TB

As a general rule, yes . . . :)

Surf.Whammy wrote: (10) If you want to control panning and other effects with the VSTi virtual instrument and its onboard effects, then (a) set the NOTION tracks to full-spread and (b) avoid using the native NOTION reverb; because stereo reverberation (native or VST) on the Master Stereo Track destroys precise individual instrument and voice panning . . .

THOUGHTS

There are times when you might want to use a reverberation unit (native or VST effects plug-in) on the Master Stereo Track, but it depends . . .

Les Paul's perspective on reverberation was that it was better to use it on the Master Track so that it applied to everything, where his thinking was that if you just used reverberation on a few selected tracks, then the result would not sound like a real musical group performing in a venue (nightclub, concert hall, arena); and there is a bit of logic to this perspective . . .

However, I think the best way to put the concept of reverberation--and to an often equal extent, echoes--into perspective is that it jumbles everything by bouncing the various sounds off a set of virtual walls, floor, and ceiling . . .

At the extreme, you have the reverberation in the Taj Mahal, which is so massive that you cannot distinguish speech uttered by someone shouting at full volume just a few feet away from where you are standing . . .

There are different types of reverberation units, but in the digital music production universe all of them are virtual . . .

Some of them create a reverberation effect using algorithms, while others are based on doing very precise impulse measurements in specific concert halls and other venues ("convolution reverberation") . . .

No matter how it's done, the goal is to bounce the sounds around what at best is a fuzzy, textured sonic landscape; and the more the reverberation unit bounces the sounds, the less distinct spatial cues become, where at the extreme a listener cannot determine whence the original sound first appeared . . .

I am not suggesting that reverberation and echo units never should be used--quite the contrary--but I am suggesting that they need to be used in very specific ways . . .

For practical purposes, there are two primary ways to establish the originating spatial location for a sound:

(1) panning

(2) reverberation and echoes

Reverberation includes echoes, and from this perspective one can suggest that reverberation is a set of rapid echoes of varying distance and duration; but (a) there are other ways echoes are used and (b) there are different types of echoes . . .
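The "set of rapid echoes of varying distance and duration" idea can be sketched in Python as a multi-tap delay line; the tap times and gains below are made-up illustration values, not measurements of any real room, and 44.1 kHz is an assumed sample rate:

```python
# Sketch of "reverberation as a set of rapid echoes": a multi-tap delay
# that adds progressively quieter, later copies of a dry signal.
# The tap times and gains are made-up illustration values, and 44.1 kHz
# is an assumed sample rate -- this is not a production reverb.

SAMPLE_RATE = 44100

def multi_tap_echo(dry, taps):
    """taps: list of (delay_seconds, gain) pairs. Returns dry + echoes."""
    max_offset = max(int(d * SAMPLE_RATE) for d, _ in taps)
    out = [0.0] * (len(dry) + max_offset)
    for i, sample in enumerate(dry):
        out[i] += sample                       # the dry signal itself
    for delay_s, gain in taps:
        offset = int(delay_s * SAMPLE_RATE)
        for i, sample in enumerate(dry):
            out[i + offset] += sample * gain   # a quieter, later copy
    return out

# A single impulse ("clap") followed by three rapidly decaying echoes:
clap = [1.0] + [0.0] * 10
wet = multi_tap_echo(clap, [(0.023, 0.5), (0.041, 0.3), (0.059, 0.2)])
```

A real reverberation unit uses far more taps (or a feedback network, or a measured impulse response), which is exactly why the individual spatial cues get smeared together . . .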

There is a particular type of echo that is fascinating; and it is one-time, short-duration digital delay . . .

There is an acoustic phenomenon called the "Haas Effect" (a.k.a., "Precedence Effect"), and it occurs when the same sound is repeated very rapidly with the repeat time being in the range of approximately 5 milliseconds to perhaps 25 milliseconds, although I extend the upper range to as much as 75 milliseconds . . .

What happens is that the audio perceptual apparatus of the human brain merges the two identical sounds and treats them for perceptual purposes as if they were a single but distinctly louder sound . . .

[NOTE: The Darwinian perspective on this is that by merging two, nearly identical rapidly occurring sounds into one single but louder sound, this provided an extra few seconds for early humans to notice the sounds made by the paws of a rapidly approaching tiger on the leaves and branches on the ground in enough time either (a) to jump out of the way or (b) to perform some type of defensive maneuver, thereby contributing to the perpetuation of the human species. The humans whose perceptual apparatuses were not able to connect the respective dots quite so quickly did not survive to create or raise progeny. There generally are logical reasons for unique perceptual behaviors, and in this instance I think it's the relic of a survival mechanism, which continues to be useful today in the early-21st century . . . ]
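A one-time, short-duration digital delay in the Haas range is easy to sketch in Python; the function names and the 44.1-kHz sample rate are assumptions for illustration:

```python
# Sketch of a one-time, short-duration digital delay in the Haas range:
# the same signal summed with a copy of itself delayed by roughly
# 5 to 25 milliseconds. 44.1 kHz is an assumed sample rate, and the
# function names are only for illustration.

SAMPLE_RATE = 44100

def haas_delay_samples(delay_ms):
    """Convert a Haas-range delay in milliseconds to whole samples."""
    return round(delay_ms * SAMPLE_RATE / 1000)

def haas(dry, delay_ms, gain=1.0):
    """Sum the dry signal with a delayed copy of itself."""
    offset = haas_delay_samples(delay_ms)
    out = [0.0] * (len(dry) + offset)
    for i, sample in enumerate(dry):
        out[i] += sample                  # the original
        out[i + offset] += sample * gain  # the rapidly following repeat
    return out

# A 10 ms Haas delay at 44.1 kHz is 441 samples:
print(haas_delay_samples(10))   # -> 441
```

Within the Haas window the two copies fuse into one apparently louder sound; once the delay grows to several tens of milliseconds, the repeat starts to be heard as a distinct echo instead . . .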

The most obvious use of the Haas Effect is in the audio for advertisements, where, for reasons usually unknown to the audience, the announcer's voice in an advertisement appears to be much louder than the show or other regular entertainment material . . .

There are two ways this is done; and one of them involves using the Haas Effect, while the other literally maps to increasing the volume level for a short time . . .

There are Federal Communications Commission (FCC) rules regarding broadcast audio, and not following the rules in the US can result in the loss of a broadcasting license. The EU has similar rules; and now there are voluntary standards that are more extreme than the legally enforced broadcast audio rules . . .

Haas Effect (Wikipedia)

This is an interesting plug-in from Waves Audio that I plan to get in the near future . . .

[Image: Waves Audio loudness metering plug-in]

It supports the various audio loudness standards; and the information it provides is interesting . . .

[NOTE: Loudness is perceived; volume levels are physical. The general rule is that for a sound to be perceived as being twice as loud, the volume level needs to be increased 10 times; and this is the reason that volume levels are measured in logarithmic units called "decibels" . . . ]
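The arithmetic in the note can be sketched in Python: a 10x increase in power is +10 dB, and +10 dB is perceived as roughly twice as loud (the function names are only for illustration):

```python
import math

# Sketch of the loudness arithmetic: decibels are logarithmic, a 10x
# increase in POWER is +10 dB, and +10 dB is perceived as roughly
# twice as loud.

def power_ratio_to_db(ratio):
    return 10.0 * math.log10(ratio)

def db_to_power_ratio(db):
    return 10.0 ** (db / 10.0)

print(power_ratio_to_db(10))    # 10x the power  -> +10 dB (about twice as loud)
print(power_ratio_to_db(100))   # 100x the power -> +20 dB (about four times as loud)
print(round(db_to_power_ratio(3), 2))   # +3 dB is roughly a doubling of power
```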

If the Taj Mahal is the extreme of massive reverberation and echoes, then an anechoic chamber is the extreme other side of the scale; since as the name implies, there are no echoes and no reverberation in an anechoic chamber . . .

[NOTE: The sonic energy absorbing baffles in the walls, floor, and ceiling eliminate all reverberation, echoes, and sonic reflections, which makes an anechoic chamber the definitively extreme "dry" room . . . ]

[Image: Anechoic Chamber]

The Taj Mahal is "wet"; but an anechoic chamber is "dry" . . .

The way one uses reverberation and echoes depends in part on whether the intended listening device is (a) calibrated, full-range studio monitors or (b) studio quality headphones . . .

Headphones are more precise, since each ear has its own separate and independent loudspeaker--it's a tiny loudspeaker but a loudspeaker nevertheless . . .

Studio monitors tend to blur sounds, because the sounds are able to bounce around the listening room--unless you have a specifically designed and often expensive listening room . . .

Here in the sound isolation studio, everything is carefully designed and controlled; but it's done on a budget, which is fine with me . . .

It has a fully-floated floor; and it's a room within a room within a room--complete with innermost floor, walls, and ceiling that among other things are Helmholtz resonating panels . . .

The primary purpose of the outer rooms is to isolate the innermost room from external sounds, so that it's very quiet in the innermost room, which is excellent for recording and listening . . .

However, there is a standing wave at approximately 70-Hz; so I added several rolls of fiberglass insulation and cubes of compressed cellulose to zap the standing wave, with the result that the low-frequency response is "punchy" with no annoyingly sustained low-frequency standing waves . . .

[NOTE: There are four small rolls of fiberglass (shown), two large rolls of fiberglass insulation on the floor, and five cubes of compressed cellulose, also on the floor, which makes the sound isolation studio acoustically dry and punchy . . . ]

[Image: acoustic treatment in the sound isolation studio]

The sound isolation studio is approximately 6 feet wide by 7 feet tall and 12 feet long; so it's not a large room . . .

It's about the same size as Les Paul's 1949 recording studio--maybe a little smaller, but not by much . . .

[Image: Les Paul in his recording studio (Teaneck, NJ circa 1949)]

[SOURCE: Les Paul (History of Recording) ]

[NOTE: One of the more fascinating aspects of the sound isolation studio is that you can build one approximately this size in an apartment, house, or garage. I assembled it with Deck Mate® wood screws; and it's not anchored physically to the floor, ceiling, or walls. It sits atop the existing floor; and while it might appear to be tiny, it's very nice for everything except a drumkit and grand piano; but there is an adjacent room where I have the Really Bigger Drumkit and a Marshall stack; and I have a KORG Triton Workstation (88 weighted, piano style keys) in the sound isolation studio; so it's all good. I can run the calibrated, full-range studio monitor system at 85 dB SPL measured with a dBA weighting (90 dB SPL with a dBC weighting), and when the doors are closed, you can't hear anything outside the room, which means that the sound can be as loud as at a concert inside the sound isolation studio but outside nobody hears it, although these days "popular" music concert sound reinforcement systems tend to be dangerously loud and well above 85 to 90 dB SPL. However, the fully floated floor does not eliminate all the subsonic vibrations emanating from the innermost room. It isolates the ones from outside coming inside, but not the other direction. Subsonic and low-frequency sound waves travel through just about everything, so isolating them is not easy. The sound isolation studio sits atop a layer of rubber mats made from ground truck tires, which are good isolators; but completely isolating subsonic vibrations requires a more elaborate treatment; so if you do this in an apartment, the building might vibrate subsonically; but so what . . . :P ]

I use a digital sound pressure level meter to ensure the maximum listening level is around 85 dB SPL measured with a dBA weighting, but I tend to like a bit more low-frequency; so I boost it a bit . . .

The frequency response is measured using a combination of external devices and the ARC System (IK Multimedia) . . .

[NOTE: I use the ARC System to measure everything after the external devices have adjusted the frequency response. I do not use the ARC System as an effects plug-in on the Master Stereo Output Track (NOTION or DAW). The goal here in the sound isolation studio is that the frequency response of the calibrated, full-range studio monitor system is a flat curve (equal loudness for all reproducible frequencies) running from 20-Hz to 20,000-Hz, although I extend it downward into the subsonic range at 10-Hz. Done this way, I don't need to use the ARC System as an effects plug-in to make corrections; because the calibrated, full-range studio monitor system already is correct. The key bit of information is that the only way you can trust your ears is when you are listening to music played through a calibrated, full-range studio monitor system. It's a verified FACT . . . ]

ARC System (IK Multimedia)

In so-called "popular" music, there are a few distinct vocal sounds; and the classic vocal producing style from the late-1950s and early-1960s is the combination of a condenser microphone, vacuum tube based compressor-limiter, and some type of reverberation unit (which often was a tiled room but later was a "plate" style reverberation unit) . . .

[NOTE: It's best to listen to this song with studio quality headphones to understand how the producer, audio engineer, and Bobby Darin are "working" the condenser microphone, compressor-limiter, and reverberation unit . . . ]

[video]


Skilled vocalists discover how to "work" a condenser microphone, compressor-limiter, and reverberation unit; and it's interesting to watch when you know it's happening. Soon after they do a bit of work in a recording studio, they learn that they can control reverberation by either (a) singing softer but closer to the microphone or (b) singing louder but not so close to the microphone. Done skillfully, it's a way to increase sustain and the perception of being "larger than life" (or "big", if you prefer the simpler terminology) . . .

[NOTE: Until you know the background on the Beatles and the military-style recording rules imposed at Abbey Road Studios, you might not think the Beatles are doing anything other than playing guitars, drums, and singing; but there's more to it. The university-degreed electrical engineers at Abbey Road Studios had a well-defined, fully-documented, and enforced step-by-step set of procedures and rules for everything--including the procedures technicians used to set up microphone stands and, somewhat humorously, the rules for "pop singers", microphones, and "dancing". Specifically, "pop singers" were not allowed to do unnecessary "dancing", "fidgeting", and other frivolous motions. When one "pop singer" was singing, the "pop singer" was to stand and sing into the microphone from a distance no greater than six inches from the microphone, but closer distances were allowed when necessary. When two "pop singers" were sharing a single microphone, they were instructed to stand at 45-degree angles to the microphone and to maintain the same distance range from the microphone. And there was a geometric configuration to be used when three "pop singers" were sharing a single microphone. John Lennon's strategy in this respect was to pretend he was riding a horse; and once you know about this aspect, it's easy to observe what they are doing. Ultimately, the reason they agreed to follow all the rules is that the rules ensure the best quality sound recording. There is not a lot of reverberation on the singing, which is unique at this point in their recording career . . . ]

[video]


As the composer and arranger, you control the music and the instrumentation; but when you switch roles to producing and audio engineering, you control the presentation . . .

These are different roles; and when you are producing and audio engineering, sounds appear in specific locations in the sonic landscape because you put them in the specific locations . . .

It's a "control thing", and the more you control the sonic landscape, the better the results . . .

Lots of FUN! :)

The Surf Whammys

Sinkhorn's Dilemma: Every paradox has at least one non-trivial solution!

