Discuss Notion Music Composition Software here.
Hello everyone,
My name is Andy.
If this question has already been addressed, I apologize, because I did not find the answer.
Is there any way to use the NTempo function to control my DAW?
I have symphonic scores with all the subtleties that cannot be incorporated into Notion, and I would like to use these in a real-time performance situation.
I am not new to music, but I am quite new to using technology.
Thank you for the help.
Andy
by michaelmyers1 on Wed Feb 01, 2023 9:42 pm
I don't know of any way to use Notion's nTempo function to control a DAW in real time, but there might be a workaround (depending on what you want to do, it might not help at all).

It's pretty easy to record an nTempo performance in Notion and then export it as MIDI. It's especially easy if your DAW is Studio One, as there is a direct export function that transfers both the note information and the MIDI tempo map into S1's tempo map, so your DAW will reflect all the timing from the nTempo performance.
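If you want to double-check what timing information actually lands in the exported file, a small sketch like this will list the tempo changes. It's just a minimal illustration using the third-party Python "mido" library, and the file name is a placeholder for whatever you exported from Notion:

Code:

import mido  # third-party library: pip install mido

# "ntempo_performance.mid" is a placeholder name for a MIDI file
# exported from Notion after recording an nTempo performance.
mid = mido.MidiFile("ntempo_performance.mid")

# Tempo changes live in 'set_tempo' meta messages (microseconds per beat).
for track in mid.tracks:
    tick = 0
    for msg in track:
        tick += msg.time  # delta times accumulate into absolute ticks
        if msg.type == "set_tempo":
            print(f"tick {tick}: {mido.tempo2bpm(msg.tempo):.2f} BPM")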

iMac (Retina 5K 27", 2019) 3.6 ghz I9 8-core 64 gb RAM Fusion Drive
with small AOC monitor for additional display
macOS Ventura 13.4
2 - 500 gb + 2 - 1 tb external SSD for sample libraries
M Audio AirHub audio interface
Nektar Panorama P1 control surface
Nektar Impact 49-key MIDI keyboard
Focal CMS40 near-field monitors
JBL LSR310S subwoofer
Notion 6 + Studio One 5 Pro

http://www.tensivity.com
by Surf.Whammy on Sat Feb 04, 2023 1:23 pm
Something like this might work . . . :)

THOUGHTS

Starting with Studio One Professional 6, there is no ReWire; but it doesn't matter because even with the earlier versions of Studio One Professional--up to and including v5.5--that support ReWire, NOTION cannot be the ReWire host controller, since Studio One does not act as a ReWire helper device; hence using NTempo with Studio One will not work . . .

Prior to Studio One Professional 6, I composed songs in ReWire sessions with Studio One Professional as the ReWire host controller and NOTION as the ReWire helper device; but now I do it all in Studio One Professional 6 without NOTION; so there is no ReWire and no NOTION--only Studio One Professional 6 . . .

Specifically, there is a version of NOTION embedded in Studio One Professional 6; and it does everything I need to do with music notation . . . :+1

For what I need to do and my somewhat eccentric but practical perspectives, this is excellent!

Eccentric? :roll:

(1) Unless I am using Realivox Blue (RealiTone)--my favorite VSTi virtual female soprano--I do everything on soprano treble staves, which I set via Transposition to play notes as notated or one or two octaves lower or higher than notated: for bass I set Transposition to two octaves lower than notated, and for guitar I set Transposition to one octave lower than notated . . .

(2) I avoid using articulations, dynamics, playing styles, and other nonsense in music notation, because (a) it clutters the music notation and (b) it is better done with VST effects plug-ins . . .

There are other reasons, of course, but these are the primary reasons . . .

[NOTE: This might appear to be an Obsessive-Compulsive Delusion (OCD) of a lunatic, but it's highly logical, and thoroughly supported in vast detail in my posts to this forum spanning over a decade when the original Notion Music forum is included, as it is in the archived version in this PreSonus NOTION forum. For example, articulation, dynamics, and playing style enthusiasts might be inclined to suggest that my assertion about not using tremolo or vibrato playing styles clearly brands me as a fool; but when such folks are using a "diatonically" sampled-sound library rather than a "chromatically" sampled-sound library, Surf.Whammy is not the fool. Why? In a diatonically sampled-sound library only every other note actually was played by the trained, typically virtuoso musician, which means that half of the notes are computed using logarithmic interpolation based on starting with either a lower or higher pitched, actually sampled note. Consider a simple example where the playing style is tremolo--fluctuating volume--and the fluctuation rate is 182 beats per minute (BPM) and "Middle C" (C4 in Scientific Pitch Notation) is a sampled note, but C#4 is not a sampled note, hence C#4 must be computed. Using Middle C as the basis for the computation, C#4 is computed using logarithmic interpolation to increase its pitch from C4, which works reasonably well--except as the pitch is increased, so is the embedded tremolo rate, which increases the tremolo rate of the computed C#4 and introduces a perhaps subtle but nevertheless discernible error which itself will vary for any other computed rather than sampled notes. The better strategy is to use a dry set of sampled sounds and then to apply a VST effects plug-in that does tremolo to all the notes, which results in all the notes having consistent tremolo regardless of whether notes actually were sampled or were computed. This is a matter of production and audio engineering, and the same goal is achieved but in a more precise way. I like articulations, dynamics, playing styles, and all that stuff; but I do not like to clutter music notation with it. When the music notation is played by virtuoso musicians using sheet music under the direction of a skilled conductor, it matters; but when the music is played through headphones, Apple AirPods, or a sound-reinforcement system, it's more like playing it through a "fuzz box" or distortion effects pedal in terms of destroying all the otherwise pristine sounds, textures, nuances, and emotions virtuoso musicians and singers evoke. It might be a perfect, pristine recording; but headphones, Apple AirPods, and sound-reinforcement amplifiers and loudspeakers are imperfect . . . ]
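As a quick back-of-the-envelope sketch of that tremolo-rate drift--plain Python, nothing fancy, using the same numbers as the example above:

Code:

# Resampling a note up one semitone multiplies every rate embedded in the
# sample--including a baked-in tremolo rate--by the same semitone factor.
SEMITONE = 2 ** (1 / 12)      # equal-temperament semitone ratio, ~1.0595

c4_pitch_hz = 261.63          # "Middle C" (C4), the actually sampled note
tremolo_bpm = 182.0           # tremolo rate recorded into the C4 sample

csharp4_pitch_hz = c4_pitch_hz * SEMITONE
csharp4_tremolo_bpm = tremolo_bpm * SEMITONE

print(f"C#4 pitch:   {csharp4_pitch_hz:.2f} Hz")      # ~277.18 Hz, as intended
print(f"C#4 tremolo: {csharp4_tremolo_bpm:.2f} BPM")  # ~192.8 BPM, no longer 182

That roughly 11 BPM drift is the "perhaps subtle but nevertheless discernible error" in question . . .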

[embedded video]


(3) Here in the sound isolation studio, there are 12 notes and 10 octaves, 2 octaves of which humans cannot hear, hence are provided to entertain bats, birds, cats, dogs, dolphins, porpoises, sea turtles, and whales. This is the case even for babies if they have attended a KISS, Metallica, Miley Cyrus, or Rammstein concert and sat anywhere near the sound system loudspeaker arrays . . .

[embedded video]


USE A MIDI CONTROLLER

The Novation LaunchControl XL MIDI controller can be configured to control the tempo of Ableton Live; and it probably can be configured similarly to do this with Studio One Professional 6, although I have not done this and do not know if it can be done--mostly because at present I do not have a Novation LaunchControl XL MIDI controller . . .

[NOTE: This YouTube video shows how to configure a Novation LaunchControl XL MIDI controller to control tempo for Ableton Live. Observe that at the end of the video the presenter makes a glaring mistake by confusing volume level with tempo--it's tempo, not volume level. Mistakes like this happen, and I have done something similar a few times, where for some unknown reason I get confused and say something stupid. Sometimes I correct it, but other times I leave it and expect that listeners will understand whatever I said was a mistake . . . ]

[embedded video]


This wanders into the "advanced technical activity" realm; but if being able to adjust tempo in one-beat per minute increments in real-time will provide a solution, then (a) it's possible when Ableton Live is the Digital Audio Workstation (DAW) and (b) it might be possible when Studio One Professional 6 is the DAW . . .
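For anyone curious what the "advanced technical activity" looks like, here is a minimal sketch of mapping a MIDI controller knob to a BPM value. It uses the third-party Python "mido" library, and the port name and CC number are assumptions for illustration--not the LaunchControl XL's actual factory template--so treat it as a sketch rather than a recipe . . .

Code:

import mido  # third-party library: pip install mido

PORT_NAME = "Launch Control XL"   # assumed name; check mido.get_input_names()
TEMPO_CC = 13                     # assumed CC number for the chosen knob
MIN_BPM = 60                      # 128 CC steps -> 1 BPM per step in 60..187

with mido.open_input(PORT_NAME) as port:
    for msg in port:
        if msg.type == "control_change" and msg.control == TEMPO_CC:
            bpm = MIN_BPM + msg.value      # CC value 0..127 maps to 60..187 BPM
            print(f"target tempo: {bpm} BPM")
            # A real setup would forward this value to the DAW, for example
            # via the DAW's own MIDI-mapping feature or a remote-control API.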

I am curious about this, so I might get a Novation LaunchControl XL in a month or so and see if it works with Studio One Professional 6 . . .

PreSonus might have something that does this, so that's another possibility . . .

Combined with Studio One Professional 6 "Show" capabilities, this could work for performing my songs about Flying Saucers and Angela Gossow's Underpants at a local nightclub or senior retirement village on "Bingo Night" . . . :P

[NOTE: As with everything I post, these are best enjoyed when listening with studio-quality headphones like SONY MDR-7506 headphones and Apple AirPods, since I do stereo headphone mixes and have lots of panning, cascading echoes, Haas delay effects, and motion effects specific to headphone listening . . . ]

[embedded video]


[NOTE: This was recorded 13 years ago when I was doing everything with real instruments here in the sound isolation studio . . . ]

[embedded video]


Lots of FUN! :)

Surf.Whammy's YouTube Channel

The Surf Whammys

Sinkhorn's Dilemma: Every paradox has at least one non-trivial solution!
by andregauthier on Sat Feb 04, 2023 10:51 pm
Thank you for the answers so far.
Please be advised that it may sometimes be difficult to understand exactly my way of thinking. English is not my first language, and I apologize if my sentences are sometimes wrong.

I understand the concept of creating a soundtrack but this is not what I am trying to achieve here.
The situation is this.
1- I make orchestrations for a symphony orchestra to accompany choirs and soloists in live performances.
2- I am conducting 35 musicians and, most of the time, a quite large choir (around 120 singers).
3- Due to budget limitations, I complete the ensemble with a musician who specializes in using NTempo in a stage setup. Let us say I have only one bassoon and one oboe. I also have only 2 horns and one trumpet, and only 14 strings. Many percussion accessories are also missing. This is where Notion is very helpful.
4- The « Notion guy » completes the string section and the missing winds, with their exact parts written in his score.
Of course this demands some adjustments to the real musicians' parts and also to the Notion parts.
It is almost impossible to make a group this size follow a click track or a recorded track.
But it is very easy, although still challenging, for the « Notion guy » to follow the conductor with all the rubato and subtle changes made in performance.
It is clear to me that the concept of the NTempo track was to do just that: to provide an orchestra for a live performance.
I use this all year long in conducting sessions, and it makes rehearsals with the choir much more productive because they can hear the exact orchestrations.

Now, I think many would agree that the performance sounds provided in the software are very limited. This is where sound sets are welcome. The problem is that the really good sample libraries are too demanding on the CPU for live performances when used inside Notion. I tried to use the Opus Orchestra but I can't even figure out how to do that.
By the way, I do not understand how to make a sound set for a given library. I see that you guys are very knowledgeable in this. If there were clear tutorials for beginners, I would attend the class.
What is available in the documentation presupposes knowledge way too advanced for me.
I do understand my limitations in this field and I can admire the work you are doing.

Now, it seems to me that the same samples can be managed more easily inside a DAW.
So I am trying to create tracks inside the DAW, with all the subtleties related to each track, and I wish to control the tempo changes with NTempo.
Maybe there is a simpler way. But I don’t know that way.

The tap control that is useful to sync different software and hardware is not precise enough to do the job.
Notion's NTempo can control this down to the 1/64th note, and if the « Notion Guy » is good, the interpretation can be wonderful.
By the way this is not just a wish, NTempo is very precise.
Thank you for reading.
Andy
Waiting for enlightenment.
by Surf.Whammy on Sun Feb 05, 2023 7:33 am
andregauthier wrote: NTempo is very precise.

Based on what you described, you need to use NOTION as it is intended to be used . . . :)

THOUGHTS

I think it's accurate to suggest that quite a few of us understand exactly what you are doing and the best way to do it . . .

As you correctly observed in one way or another, there is no other way to do it than to use the NOTION native instruments and NTempo, with no VST effects plug-ins and no NOTION reverb--you did not state this, but I suspect you already sense it . . .

First, it's a matter of computing power and reliability . . .

Third-party sampled-sound libraries are superior to the NOTION native or bundled sounds, as well as the add-on NOTION sampled-sounds; but while third-party sampled-sound libraries are superior, they only work with their respective VSTI instrument engines running in the background . . .

Consider Vienna Symphonic Library (VSL) sampled-sound libraries that are chromatically sampled where every note is played by a virtuoso musician . . .

These are among the highest quality sampled-sound libraries available, but they are what mainframe computer programmers like myself call "core hogs" and "processor sinkholes", which is an arcane way of saying (a) they use too much system memory; (b) they use too many processor cycles; and (c) they have the distinct ability to cause the computer to fail catastrophically by overloading it with code and system memory requests . . .

It will not break the computer, but it will cause the music to stop . . .

There is a reason NOTION has its own instruments, and these are the same instruments it has had for over a decade . . .

They are tried and true, which means they work with near absolute reliability in performances . . .

The same is true for NOTION itself and, of course, NTempo . . .

The key word in this regard is performing . . .

Consider a crude analogy, metaphor, or simile: your underpants . . .

There you are on stage performing for an auditorium of people . . .

This is not a good time for your underpants to fall off . . .

Or your ballet tights and codpiece . . .

NOTION is your parachute, and you do not want it to fail . . .

The only way you can ensure this is to use NOTION with the native and bundled NOTION sampled-sound instruments . . .

There is no other alternative . . .

A DIFFERENT PERSPECTIVE AND SOLUTION

In this discussion and in similar discussions I do not find anyone other than me focusing on the sound-reinforcement system; and this probably is the result of few people doing sound-reinforcement work and studying acoustic physics . . .

The reality is that whatever software is used to create digital music via a computer in real-time for an audience ultimately will be reproduced using a combination of physical amplifiers and loudspeakers . . .

It's like the adage about a chain being only as strong as its weakest link . . .

At the extreme, if you use two Pignose guitar amplifiers as your sound-reinforcement system, then the music will sound like it's played through a distortion pedal or fuzz box . . .

[image]

People tend to be a bit too quick to dismiss the NOTION native instruments as not so good; and while in a sense this might be true, it's only true in certain respects and only from perspectives that tend not to be so accurate . . .

I say essentially the same things, but my perspective in this regard is subject to conditions that probably are very different . . .

One of these conditions is that I have a studio monitor system that is calibrated, full-range, and has a flat equal-loudness curve running from 20-Hz to 20,000-Hz (the full-range of normal human hearing) . . .

[NOTE: For reference, this chart is logarithmic. What it shows is that for the low-pitch "E" string of a contrabass to be perceived by a human listener as being as loud as a "Middle C" played on a clarinet, the string bass note must be 90 dB SPL, while the clarinet note only needs to be 40 dB SPL. That 50 dB difference corresponds to a sound-pressure ratio of roughly 316 to 1 (and a sound-intensity ratio of 100,000 to 1) for both instruments to be perceived as being equally loud. The general rule is that for a sound to be perceived as being twice as loud, its sound intensity needs to increase roughly 10 times, which is an increase of about 10 dB. Loudness is perceived, but volume level is like temperature and is a physical unit, not an emotional or perceptual phenomenon. This research was done at Bell Laboratories in the 1930s to determine the optimal and least expensive way to provide clear telephone service sufficient for human conversations over long distances. The midrange "dip" in the frequency range is what the Bell folks decided to use for telephones, since human hearing is most sensitive to sounds in the midrange "dip", hence lower physical costs for the various telephone lines and equipment, including the telephone receivers and transmitters . . . ]

[image: equal-loudness contours chart]
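For reference, the decibel arithmetic behind that example works out like this (plain Python; the 90 dB and 40 dB figures are the ones read off the chart):

Code:

db_difference = 90 - 40                       # 50 dB

pressure_ratio = 10 ** (db_difference / 20)   # ~316x the sound pressure
intensity_ratio = 10 ** (db_difference / 10)  # 100,000x the sound intensity

# Rule of thumb: +10 dB is perceived as roughly "twice as loud",
# so +50 dB is roughly 2**5 = 32 times as loud perceptually.
perceived_ratio = 2 ** (db_difference / 10)

print(f"pressure ratio:  {pressure_ratio:,.0f}")          # ~316
print(f"intensity ratio: {intensity_ratio:,.0f}")         # 100,000
print(f"perceived ratio: about {perceived_ratio:.0f}x as loud")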

I use Kustom PA self-powered loudspeakers and Kustom PA self-powered deep-bass subwoofers, where the amplifiers for the loudspeakers are Class A, while the amplifiers for the deep-bass subwoofers are Class D . . .

[NOTE: Although it is not shown in these photos, there are 10 rolls of fiberglass insulation and 6 cubes of compressed cellulose insulation in various locations within the sound isolation studio. These absorb a gnarly standing wave at approximately 70-Hz and make the sound crisp and accurate. The sound isolation studio is a room within a room within a room, and the innermost room sits atop a 1/2" mat of ground truck tires, hence is floated. There is an air space of approximately 9" between the innermost room and the next room, including the ceiling; and there are two layers of sheetrock of differing thickness. The general rule in this respect is that sound does not like to travel through materials of different thickness; and with the rubber mats, the insulation, and only one hole in the wall to bring in electric service, when the insulated inner door is closed it's very quiet. I tested this by having a friend run a Stihl gasoline chain saw in the kitchen; and other than feeling low-frequency vibrations on the floor, nothing was heard by me in the sound isolation studio. Sounds around 10-Hz are felt as vibrations, not actually heard. It's literally a sound isolation studio . . . ]

[photos of the sound isolation studio]

The output of the computer is sent to a MOTU 828mk3 Hybrid external digital audio and MIDI interface, and from there it goes to a Behringer equalizer and then to a Behringer loudspeaker management interface, finally to the Kustom PA units . . .

The Behringer equalizer provides pink-noise and a calibrated condenser microphone that listens to the generated pink-noise and uses a computer algorithm to set the equalizer to achieve a flat equal-loudness curve in the sound isolation studio, which for reference is 6 feet wide by 7 feet tall and 12 feet long . . .

I check it with software and another calibrated microphone from IK Multimedia, and all the while I am doing this, I wear OSHA-approved hearing protection like airport workers wear when loading and unloading jet aircraft on the tarmac at airports . . .

[NOTE: In the US, the Occupational Safety and Health Administration (OSHA) is tasked with determining and setting safe levels and practices for various activities . . . ]

I wear the OSHA-approved hearing protection, because the Kustom PA units are sufficient to play music in a small nightclub . . .

In total there are two 15" loudspeakers, two high-frequency horns, and two 12" deep-bass subwoofers . . .

I set the maximum volume level to 85 dB SPL or 90 dB SPL with a dBC weighting on the digital SPL meter I use . . .

I use Kustom PA loudspeakers and deep-bass subwoofers, (a) because they are good quality and (b) they are not expensive . . .

Normal people do not know this--especially composers, musicians, and singers--so it's mostly a "sound guy" type of knowledge, since "sound guys" are all about physics, technical specifications, and for the most part are not impressed by marketing blurbs like "Justin Bieber says it's super fabulous" or "Elvis would use it" . . .

That's all a bunch of nonsense, and it's as meaningless as it is useless . . .

All I need to know is (a) the technical specifications and (b) that those specifications are honest and accurate . . .

Due to the potential to do permanent damage to hearing when using my strategy, for folks who do not understand the physics and do not have the various equipment and OSHA-approved hearing protection, I recommend the PreSonus Sceptre S8 studio monitors and Temblor T10 deep bass subwoofers--a pair of each . . .

Look at the technical specifications, and you will see that the Temblor T10 deep bass subwoofers are required if you expect to hear what Paul McCartney is playing on his Beatle bass . . .

They cost more than the Kustom PA units, but they are safer for "normal" folks to use . . .

Most importantly from my perspective, the PreSonus specifications are accurate and honest . . .

To the point, when I listen to the NOTION native instruments through a calibrated, full-range studio monitor system with a flat equal-loudness curve running from 20-Hz to 20,000-Hz at 85 dB SPL or perhaps 90 dB SPL measured with a dBC weighting, I can tell a difference in the way the NOTION native or bundled instruments sound compared to high-quality instruments from Native Instruments, IK Multimedia, EW ComposerCloud X, and so forth, but so what . . .

[NOTE: The "flat equal-loudness curve" means that absent any dynamic marks and when played at "normal" volume by the musicians, all the notes for all the instruments sound equally loud, more or less, depending only on the particular characteristics of each instrument, where it's always possible that some instruments tend to self-amplify in certain ranges, but so what . . . ]

Being less psychic than technically trained and knowledgeable, I am reasonably confident suggesting that the sound-reinforcement system you are using is not calibrated and full-range with a flat equal-loudness curve--or, if it is, that it's not recalibrated under software control when the auditorium or concert hall is filled with people . . .

Next time you attend a concert, listen for some very short duration but loud "chirps" as the audience arrives . . .

These "chirps" are used to recalibrate the sound-reinforcement system so it's accurate when the audience is present . . .

SUMMARY

I suggest the hypothesis that the reason for your concern is not the quality of the native NOTION instruments . . .

Instead, I suggest you are concerned due to a less than adequate sound-reinforcement system that is not properly configured, probably is not full-range, and probably does not have a flat equal-loudness curve . . .

Explained another way, my hypothesis is that the virtual music sounds bad because it's played through a bad, improperly configured sound-reinforcement system . . .

Consequently, I suggest focusing on improving the quality of the sound-reinforcement system first . . .

NOTION has been used to augment and enhance live performances for well over a decade at this point, and it's designed to do this . . .

[embedded video]


[embedded video]


If it doesn't sound good, then I suggest it's a matter of the sound-reinforcement system being used to reproduce the music . . .

Lots of FUN! :)

P. S. Regarding evidence of my veracity, I used the NOTION native instruments to have a bit of FUN with the "Chinese Dance" sample score that comes with NOTION . . .

I used some VST effects plug-ins, a different reverb unit, and fine-tuned the volume levels, but otherwise it's the sample score that comes with NOTION using the NOTION instruments . . .

When it was done, I posted it to my YouTube channel; and a few days later I got a notice from YouTube stating that a French company had filed a copyright dispute claiming my recording actually was a famous Russian conductor and the orchestra for the national ballet . . .

YouTube said I could agree that I was a copyright violator, or I could protest; but if I protested and lost, then my YouTube privileges could be revoked . . .

[image]

For me losing my YouTube channel would be bad; but I did nothing wrong, hence I protested, and this is the result of the protest:

[image]

[embedded video]


FACT: A supercomputer in France thought I was a famous Russian conductor and an equally famous orchestra for the national ballet, all because I used the NOTION score sample with NOTION instruments but a different reverb unit and a compressor-limiter.

This does not guarantee that my hypothesis and advice to focus on the sound-reinforcement system are good; but it strongly suggests they are at least educated and based on experience . . .

Keep us updated on the progress solving this puzzle, which when solved will make me very happy, which is fabulous . . .

Fabulous! :)

Surf.Whammy's YouTube Channel

The Surf Whammys

Sinkhorn's Dilemma: Every paradox has at least one non-trivial solution!
by acequantum on Mon Feb 06, 2023 6:09 pm
Hi Andy,

This sounds like a simple setup after reading your explanation, though there is nuance in Notion that you will only get by using Notion.

1. Set up the tracks in your DAW with the appropriate VST instruments that you want Notion to play.
2. Set these tracks to receive on specific MIDI channels (Violins on MIDI channel 1, Bassoon on MIDI channel 2, etc.).
3. In Notion, load your score.
4. For each staff in Notion, change the instrument to External MIDI and choose an appropriate MIDI channel and port that matches what you set up in your DAW. This removes the VST playback from Notion, which is what you are worried about for CPU overhead. The playback will be handled by your DAW as long as you have set up the MIDI channels correctly.

That's it. As long as you have an NTempo staff in your Notion score and your MIDI channels are set up correctly, you can tap and it will play the instruments back in your DAW. I'm not sure how Studio One handles playing back multiple tracks at once without the transport running, but usually you can just "arm" tracks in a DAW and they will receive their input without recording.
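If you want to sanity-check the channel routing before involving Notion at all, a tiny test script can send one note per channel so you can see which DAW track responds. This is just a sketch using the third-party Python "mido" library; the port name is an assumption (on a Mac it would typically be an IAC bus), so list your own ports first:

Code:

import time
import mido  # third-party library: pip install mido

print(mido.get_output_names())     # see which MIDI outputs your system offers

# "IAC Driver Bus 1" is an assumed macOS loopback port; substitute your own.
with mido.open_output("IAC Driver Bus 1") as port:
    for channel in range(4):       # e.g., Violins = channel 1, Bassoon = 2, ...
        # mido numbers channels 0-15, so "MIDI channel 1" is channel=0 here.
        port.send(mido.Message("note_on", channel=channel, note=60, velocity=80))
        time.sleep(0.5)
        port.send(mido.Message("note_off", channel=channel, note=60, velocity=0))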

The only caveat is that techniques in the score may not be transmitted over MIDI, as a lot of those are built into Notion itself for playback.
by Surf.Whammy on Tue Feb 07, 2023 3:15 am
acequantum wrote: This sounds like a simple setup after reading your explanation, though there is nuance in Notion that you will only get by using Notion. [...]

The idea of using a DAW as a smart synthesizer is intriguing . . . :reading:

THOUGHTS

I did a few experiments with Studio One Professional 6 (no ReWire) and Studio One Professional 5.5 (with ReWire but started after NOTION 6.6), and it did not work; but I plan to do a few more experiments . . .

After the experiments with two versions of Studio One Professional, I did some experiments only with NOTION 6.6, SampleTank 4 (IK Multimedia), and Kontakt 6 (Native Instruments), except that the two synthesizers were running in standalone mode, not as VSTi virtual instruments in NOTION . . .

The standalone version of SampleTank 4 had two instruments (synthesizer and synth bass), while the standalone version of Kontakt 6 had one instrument (tenor saxophone) . . .

The music notation in NOTION was assigned to External MIDI staves, each with a specific channel; and the standalone versions of SampleTank 4 and Kontakt 6 also had specific channel assignments for their respective instruments . . .

This worked nicely, and NOTION was able to use NTempo to control the standalone synthesizers . . .
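A quick way to verify which port and channel each External MIDI staff actually transmits on is to run a small MIDI monitor while NOTION plays. This is a minimal sketch with the third-party Python "mido" library, and the port name is an assumption--pick whichever port NOTION is sending to from the list the script prints . . .

Code:

import mido  # third-party library: pip install mido

print(mido.get_input_names())      # list the MIDI inputs visible on this system

with mido.open_input("NOTION MIDI Out") as port:   # assumed port name
    for msg in port:
        if msg.type in ("note_on", "note_off"):
            # mido channels are 0-15; add 1 to match the 1-16 numbering used
            # in NOTION's staff settings and in the synthesizer's channel list.
            print(f"channel {msg.channel + 1}: {msg.type} note {msg.note}")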

Based on the rule that doing something one time is the specific case, while doing it two times is the general case, this variation of your suggested strategy might work--although subject to the same or similar computer processing constraints . . .

In this strategy, NOTION only needs to generate and send the MIDI and do the NTempo work, which is lower processing overhead . . .

Since NOTION is running the show, NOTION is handling the implied or real synchronizing, which is another constraint . . .

There also is the matter of the synthesizer engines (SampleTank 4 and Kontakt 6 in my experiments) and the computer processing they need to do, which no matter how it's configured does not happen instantly, which might be another constraint on the timeliness of the generated audio . . .

If this strategy works, then I think it will work best if the standalone synthesizers are preloaded and used repeatedly as the NOTION scores change, because it will take too long to load different versions of the standalone synthesizers for each song or section of a symphony . . .

There also is the matter of standalone synthesizers not being as intimately fine-tuned for real-time performing as the NOTION instruments . . .

It's useful to observe that IK Multimedia has a product called "Miroslav Philharmonik" that is a full orchestra and has its own standalone player but also runs in SampleTank 4 . . .

NOTION has four MIDI ports, and each MIDI port has 16 channels, for a total of 64 MIDI channels . . .

When NOTION is being used to augment real instruments and singers, perhaps 64 MIDI channels will be sufficient?

I plan to do some experiments with Digital Performer (MOTU) and perhaps Ableton Live and Reaper, but no telling when . . .

I also plan to make a YouTube video of the NOTION, NTempo, SampleTank 4, and Kontakt 6 experiment . . .

SUMMARY

If the problem is the NOTION native instruments, then within the constraints of running standalone synthesizers, this might be a solution; but it will need to be verified to confirm it is suitable for live performances . . .

I think the problem is the sound-reinforcement system, not NOTION . . .

It's one thing to listen to elaborately notated music in a sound isolation studio with a calibrated full-range studio monitor system with a flat equal-loudness curve running from 20-Hz to 20,000-Hz or studio-quality headphones like the SONY MDR-7506 headphones (a personal favorite); but when you play the music through a sound-reinforcement system suitable for a 100-person or 1,000-person venue, then most if not all the articulations, dynamics, playing styles, and so forth tend to be just a bunch of useless nonsense, because the laws of acoustic physics clearly state that the sound-reinforcement system will destroy every possible nuance, no matter how obvious or subtle . . .

As an example, I suggest only a handful of people ever have heard Celine Dion singing naturally in a small room the size of a walk-in closet, and ditto for Elvis, Beatles, Rolling Stones, Frank Sinatra, and everyone else . . .

Instead, we hear them through typically cheap radios, televisions, iPods, iPads, iPhones, computers with marginal-quality loudspeakers, and so forth . . .

These singers and musical groups are famous primarily because their voices and instruments sound good and genetically are attuned to existing technologies and new technologies as they appear . . .

And like Bing Crosby, they tend to have perfect pitch and can sing a song perfectly on demand, which in the days when there were no computers and so forth made such singers very cost effective, since doing live recordings required orchestras that were costly and had union minimum session times . . .

Celine Dion certainly is a skilled singer, but she is not an operatic soprano--too skinny and unlikely to be heard naturally singing over a symphonic or operatic orchestra . . .

Stated another way, she needs a condenser microphone and a sound-reinforcement system; and her voice and singing style are attuned accordingly, probably via serendipity . . .

Ultimately, Elvis Presley was an operatic baritone, all self-taught based on rare genetics; and like Celine Dion, his voice sailed when captured with a condenser microphone and reproduced by a sound-reinforcement system, vinyl record, cheap car radio, and so forth . . .

Stated another way, you cannot dismiss or ignore the playback media (iPod, headphones, studio monitor system, sound-reinforcement system for concerts, car radio, and so forth) . . .

With a few, typically rare exceptions, every famous singer and musical group is heard through amplified loudspeakers--even when it's headphone loudspeakers the size of a dime or the Apple AirPods' miniature loudspeakers the size of an English pea . . .

FACT: If the amplifiers and loudspeakers are garbage, then the music will sound like garbage, although it might be good garbage, but so what.

Lots of FUN! :)

Surf.Whammy's YouTube Channel

The Surf Whammys

Sinkhorn's Dilemma: Every paradox has at least one non-trivial solution!
by acequantum on Tue Feb 07, 2023 10:54 am
Surf.Whammy wrote: ...The standalone version of SampleTank 4 had two instruments (synthesizer and synth bass), while the standalone version of Kontakt 6 had one instrument (tenor saxophone) . . .


I was focusing on the DAW--but good catch about using the standalone players for the VSTis. That may cure some of the CPU overhead.

However, DAWs are often designed for instancing--where they don't necessarily reload an instrument for each separate track but reuse an instance of it to save resources.

Andy, if you understand the process, try both methods.

If your orchestral synthesizers have standalone programs, do the process below but choose those programs instead of using the DAW. Also test using the DAW for the same purpose and see which gives you the best results. The advantage of the DAW is of course setting levels, panning, and effects processing per track or project--but at the cost of perhaps more CPU.

1. Set up the tracks in your DAW with the appropriate VST instruments that you want Notion to play.
2. Set these tracks to receive on specific MIDI channels (Violins on MIDI channel 1, Bassoon on MIDI channel 2, etc.).
3. In Notion, load your score.
4. For each staff in Notion, change the instrument to External MIDI and choose an appropriate MIDI channel and port that matches what you set up in your DAW. This removes the VST playback from Notion, which is what you are worried about for CPU overhead. The playback will be handled by your DAW as long as you have set up the MIDI channels correctly.
by Surf.Whammy on Tue Feb 07, 2023 3:48 pm
acequantum wrote: The advantage of the DAW is of course setting levels, panning, and effects processing per track or project--but at the cost of perhaps more CPU.

Most standalone virtual instrument engines provide the ability to set volume levels, panning, effects, and so forth for each instrument . . . :)

THOUGHTS

I used "most" only because I do not have every virtual instrument engine available . . .

The ones I use here in the sound isolation studio all have mixers that provide the ability to set volume levels, panning, effects, and so forth . . .

SampleTank 3 and SampleTank 4 (IK Multimedia) have extensive per instrument control for volume levels, panning, effects, and so forth . . .

IK Multimedia also sells VST effects plug-ins, and these can be used with the IK Multimedia standalone virtual instrument engines . . .

[image: SampleTank 4 Mixer]

[image: SampleTank 4 MULTI Editor]

[NOTE: The IK Multimedia FREE software includes a lot of useful stuff--more than enough to do the instrumentation for a Pop or Heavy Metal song, when used with a DAW application . . . ]

IK Multimedia FREE software, including SampleTank 4 Custom Shop (IK Multimedia)

This is the Kontakt 6 Mixer Panel and Info Panel for a Tenor Saxophone . . .

[NOTE: Native Instruments also sells VST effects plug-ins, and when you purchase them they appear in the list of Effects available in the Kontakt Mixer panel. If you subscribe to the Native Instruments newsletter, they occasionally provide free VST effects plug-ins and other good stuff, where for example the Replika reverb unit was a free gift for email subscribers, and it's a nice unit that does different types of reverb but also echoes like the Haas Effect. Native Instruments provides a useful gift every three or so months, which is nice, and they also provide notice of discount sales, which is very important when you want to get the most VSTi virtual instruments, sampled-sound collections, and VST effects plug-ins at the lowest cost. Over the past few years, they have had 50-percent discount sales of Kontakt at least once a year, typically during Thanksgiving Holiday week or month, which saves about $200 USD on the full price. And like with SampleTank, you can create MULTIS with Kontakt . . . ]

[image: Kontakt 6 Mixer Panel and Info Panel for Tenor Saxophone]

[NOTE: Komplete Start is a FREE bundle featuring a lot of stuff, including synthesizers, players, instruments, and so forth--more than enough stuff to do the instrumentation for a Pop or Heavy Metal song, when used with a DAW application . . . ]

Native Instruments FREE software, including Kontakt 7 Player and Komplete Start (Native Instruments)

[NOTE: The FREE software (IK Multimedia and Native Instruments) requires you to create and register an account (FREE); and this also adds you to their email lists, so you get notifications of discount sales, more FREE stuff, and so forth. They do not flood you with emails--just occasional emails with useful information, perhaps one email every week or two or perhaps one email a month, depending on whether there are updates, discount sales, and so forth. It's a smart way to learn about discount sales and product updates, which over time saves money to spend on getting more stuff. Many digital music production software companies have FREE stuff and email newsletters, plus occasional discount sales. It's the smart way to get the most stuff with the money in your budget at the lowest prices . . . ]

SUMMARY

As noted, I think most standalone virtual instruments have options for setting volume levels, panning, effects, and so forth, as well as saving groups of instruments as a MULTI, including each instrument's specific volume levels, panning, and effects . . .

Depending on the specific VSTi virtual instrument and its standalone version, a MULTI can have as many as 64 instruments, although it's more standard to create MULTIS with up to 16 instruments . . .

The advantage is that you can create a MULTI for an ensemble of woodwinds, brass, strings, and so forth, where for example one instance of SampleTank 4 can be a MULTI for an ensemble of woodwinds, with 16 woodwinds each having its own channel . . .

The more elaborate a MULTI is, the more processing overhead it incurs; but doing a set of experiments should help determine whether the additional processing is a problem for real-time, live performances where NOTION is running the show for augmenting, enhancing, or replacing real instruments with a set of virtual instruments . . .

Again, the same caution applies, which specifically is that the combination of NOTION and its native instruments is designed specifically to work with NTempo in live performances . . .

NOTION is optimized for this specific usage, which includes optimizing the NOTION native instruments for efficient and flawless processing in live performances . . .

Yet, I am not suggesting other standalone synthesizers like SampleTank 4 and Kontakt 6 are not optimized for live performance usage . . .

Nevertheless, I am suggesting two things about NOTION:

(1) NOTION is designed specifically for use in live performances.

(2) NOTION is optimized, tested, and verified for this purpose.

Of course, NOTION does other things; but NOTION is unique in its ability to augment, enhance, and replace real musicians and instruments with music notation and virtual instruments under precise control of NTempo . . .

Lots of FUN! :)

P. S. I strongly recommend doing this on the fastest Mac you can afford; and I recommend the Mac because it is more stable than a Windows computer for this purpose--especially if you are unable to configure a Windows machine to prevent show-stopping annoyances. In fairness, there also are things that need to be configured when you are using a Mac--more so a new Mac with Apple Silicon and all the enhanced security nonsense--when, during a tranquil musical interlude performed before a live audience, you want to avoid having Siri ask whether you changed your underpants or something equally absurd, which is fabulous . . .

Fabulous! :P

P. P. S. Regarding creating your own sampled-sound libraries, SampleTank and Kontakt do this . . .

For example, if I want to create a sampled-sound library for my whammy guitar sounds, then I begin by recording each note, one-at-a-time per pitch for several typical durations ranging from 1/32nd to a double whole note with all points in between . . .

For example, start with "Middle C" (C4 in Scientific Pitch Notation) and record a C4 pitch note for each duration {1/32nd, 1/16th, 1/8th, . . . , 1, 2, 3, 4, 8} . . .

The in-between durations can be constructed by addition or actually recorded to provide more detail . . .

There can be "whammy" notes that start at standard pitch but go downward or upward in various motion patterns . . .

Whether it's elaborate or simple depends on what you want to do; and it can include upward and downward string bends like the native NOTION electric guitar . . .

Then, using Audacity, edit each note sample (audio clip) and trim before the attack and after the release to focus the resulting audio clips on the actual note recordings, excluding leading and trailing studio noise and so forth . . .

Save each note with a specific name that includes information about the pitch and duration of each note; and put all the edited audio clips in a common folder . . .
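As a small illustration of the naming step, here is a sketch in plain Python that indexes a folder of edited clips by pitch and duration. The naming pattern (for example "whammy_C4_quarter.wav") is just one possible convention for keeping the clips organized--it is not something SampleTank or Kontakt requires . . .

Code:

import re
from pathlib import Path

# Assumed naming convention for the edited clips, e.g. "whammy_C4_quarter.wav"
PATTERN = re.compile(r"^whammy_([A-G]#?\d)_(\w+)\.wav$")

def index_samples(folder):
    """Map (pitch, duration) -> file path for every clip in the folder."""
    index = {}
    for path in Path(folder).glob("*.wav"):
        match = PATTERN.match(path.name)
        if match:
            pitch, duration = match.groups()
            index[(pitch, duration)] = path
        else:
            print(f"skipping {path.name}: name does not match the convention")
    return index

samples = index_samples("whammy_clips")    # assumed folder of edited audio clips
print(f"{len(samples)} clips indexed")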

Next, import the audio clips to SampleTank, and SampleTank makes the sampled-sound library, which then can be used in SampleTank when I want to have virtual electric guitar whammy sounds . . .

In fact, this is why the product is called "SampleTank", where "Sample" is a clue to one of the capabilities of the software . . .

Kontakt extends this capability by providing scripting for sampled-sound libraries; and Realivox Blue (RealiTone) is an example of this, where it's a third-party product that runs in Kontakt and provides a virtual female soprano who sings the phonetic scripts you specify . . .

[image: Realivox Blue (RealiTone)]

[NOTE: This is the song I used to make sense of Realivox Blue scripting and vocal singing control; and it's best enjoyed when listening with studio-quality headphones or Apple AirPods. The theme is a parody of the classic Nirvana song "Smells Like Teen Spirit", which was inspired by a teenage antiperspirant. In the parody, sometime in the future cyborgs are so lifelike that the only way to distinguish them from humans is to taste their skin, since it tastes like AXE® "Anarchy" antiperspirant, deodorant, and body wash. The pattern for the lyrics is "<pronoun> taste like Anarchy", where <pronoun> is {she, you, I, we}. This took about three months, and getting the word "taste" to sound human required using the Melodyne Editor (Celemony) to add and reinforce leading and trailing consonants, sibilants, and all the stuff, which is fully documented in the "Project: Realivox Blue" topic in this forum (see link below). For reference, I focused on specific words that are difficult to humanize, even with Realivox Blue's intimate phonetic scripting and the Melodyne Editor. Along the way, I identified a complete set of rules for humanizing Realivox Blue, and now I can do this with precision. As you listen to the song, the key is to focus on the words when they are perfect, since even though every word is not so perfect, it was the perfected words that led to identifying the humanizing rules, which was all I needed to identify. Since the Melodyne Editor has its own Clipboard, it's important to append various vowels, consonants, sibilants, and so forth to the end of the Realivox Blue singing so when you switch to the Melodyne Editor, you have starts and ends of words available in the Melodyne Editor Clipboard. Instead of continuing the experiment until every word was perfect, I stopped when I was able to render a perfect version of each word two times, since by then I had determined the rules. As explained in the topic linked below, "taste" was one of the difficult words, and after listening to it phonetically I realized that it sounded best with an added ending ("taste-tuh"), which is something the Everly Brothers and other singers from the 1950s did to add clarity to the lyrics they sang. Perhaps they discovered this on their own, but it's more likely a producer or vocal coach taught them how to sing for clarity. Listen carefully during Don Everly's vocal solo starting at 0:37 for "tall-ah", "crawl-ah", and "all-ah" . . . ]

[embedded video]


[embedded video]


Project: Realivox Blue (PreSonus NOTION Forum)

Yes, I composed and recorded a song about pronouns five years ago before pronouns were "like so today, OMG", which is super . . .

Super! :P

Surf.Whammy's YouTube Channel

The Surf Whammys

Sinkhorn's Dilemma: Every paradox has at least one non-trivial solution!
