I have been using Notion with the EastWest Quantum Leap sample library for just over a year now. I'm slowly making my way through several of the manuals and have also watched quite a few video tutorials.
I have some classical composition and scoring education from college. One of the main reasons I chose Notion was its ability to play well with sound sample libraries, since I am a fairly new orchestral/choral composer and wanted immediate feedback on my pieces as much as possible. However, I realize that notation programs are far from perfect in their playback abilities, and I do not consider their performances a definitive source for scoring education.
Before I launch into the rest of this post, I especially want to thank any and all who read or respond. It's extremely helpful and beneficial when new users are privileged to glean the experience and wisdom of seasoned users who have spent countless hours solving these kinds of problems and then benevolently grace us all with a simplified distillation of that knowledge. So a BIG thank you for that. My computer hardware specs are listed at the bottom of this post.
Which now leads me to several technical questions about problems I have experienced using Notion and EWQL. For two of my inquiries I will link pieces via SoundCloud where you can hear the problem. I don't know how to embed tracks in this forum yet, so I've just provided SoundCloud links; sorry about that.
1) Sometimes I get sound glitches or pops in specific places of my compositions during playback, even when my resources are barely pushed. In this organ chorale, just before the 1st and 2nd endings, you'll hear the Rice Krispies boys chime in singing snap, crackle, and pop... jk, it's just one crackle, but it happens twice: at 0:58 and at 1:20 into the piece.
https://soundcloud.com/user-138445636/m ... pipe-organ
2) After exporting (not during software playback) there is an additional triggered instrument hit heard in the composition. This one is immediately on the downbeat after the crescendo of the 1st repeat. It sounds like a pizz. string is stuck again, though it is not in the score. The sound anomaly occurs at 1:01 into the chamber piece.
https://soundcloud.com/user-138445636/s ... rchestrati
3) I believe I have read in the forums, and seen in a Notion video tutorial, that a crescendo or decrescendo between two tied notes cannot be heard in playback unless its destination is the end of a bar line or some marking other than the tied note itself. Has this been fixed, or is there another workaround? This does make the score look a bit wonky.
4) Since I just recently started attempting to score choral music this one is a two-parter:
a) In Notion's vocal section (not using EWQL vocals), if you choose "voice", which I assume is for a single singer, it only plays a piano sample in its place. I assume I am doing something incorrectly, because when I choose any of the other vocal samples from the vocal pane I do hear their respective vocal qualities as listed.
b) When I chose a choral section from Notion's list and wrote quarter notes with staccatos, they were ignored altogether. I haven't tried other articulations yet because I was hoping to sort out the "solo voice" thing first. Are there specific articulations that the choral group samples do not perform?
5) I'm grateful for Notion's presets for some of EWQL's orchestral samples, which work quite well together with minimal tweaking according to the composition's style. But if I want to use other EWQL orchestral samples, or their various choral libraries like Opera, Passion, etc. which do not have Notion presets, can I integrate them favorably via ReWire? I've been reading up on this recently but wanted to ask about other users' success and satisfaction before I embark on this endeavor myself.
Here is my current hardware setup; I know this setup is limited in its abilities with EW, Notion, etc.:
My current IMac Specs
Model Name: iMac
Model Identifier: iMac12,1
Processor Name: Intel Core i5
Processor Speed: 2.5 GHz
Number of Processors: 1
Total Number of Cores: 4
L2 Cache (per Core): 256 KB
L3 Cache: 6 MB
Memory: 32 GB
OWC SSD 1TB Drive (for EW samples)
OWC Dual Drive Dock (for SSD 1TB Drive via lightning cable)
I'm not an EW user, so I can't address anything specific regarding that player or library, but I do have long experience with the same vintage iMac that you have. You didn't mention which OS you're running, but I don't think that's likely the cause of your problems.
Regarding pops: you're pulling samples from an SSD, which was my first thought, so that's good. As I said, I don't have experience with the EW player, so I can't comment on which settings are ideal, but I'd suggest looking at the buffer settings in both Notion and the EW Player. I set my Notion buffer to the max (always have, even on the similar-age iMac that you have). That might help some of the issues.
If there is a latency setting in the EW player or any other plug-ins you use, try adjusting it for longer latency. A short or minimal latency is only needed when you are recording live, and it drives up performance problems because of the CPU demands. I used to get problems with Vienna MIR Pro on my older machine when I set the latency too low; that would cause pops. Setting a higher latency always seemed to take care of the issue.
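For reference, the trade-off between buffer size and latency is simple arithmetic: a larger buffer means more latency but gives the CPU more headroom between deadlines, which is why raising it tends to cure pops. A minimal Python sketch (the buffer sizes shown are common examples, not EW-specific values):

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int = 44100) -> float:
    """One-way latency in milliseconds introduced by an audio buffer:
    the buffer must fill before it can be played back."""
    return buffer_samples / sample_rate_hz * 1000.0

# Typical buffer sizes offered by audio software:
for size in (64, 128, 256, 512, 1024, 2048):
    print(f"{size:5d} samples -> {buffer_latency_ms(size):6.1f} ms")
```

At 2048 samples the latency is around 46 ms, far too slow for live playing but perfectly fine for notation playback, where nothing is being performed in real time.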
On VERY large mock-ups (50-60 tracks) I sometimes had to split the instruments into sections and record them one at a time. Not ideal, but it can work. I've since retired the older iMac to the living room and got a new 2019 iMac for music. No issues since...
32 GB of memory is probably getting to be a challenge as well. Use the Mac Activity Monitor to check RAM usage and see if it's pushing past 32 GB. If the program starts having to page out memory, it's pretty much game over...
Others that are EW users may have more specific troubleshooting thoughts, hope these help and welcome!
macOS Catalina 10.15 with small AOC monitor for additional display
2 - 500 gb + 1 tb external SSD for sample libraries
M Audio AirHub audio interface
Nektar Panorama P1 control surface
Nektar Impact 49-key MIDI keyboard
Focal CMS40 near-field monitors
JBL LSR310S subwoofer
Notion 6 + Studio One 5 Pro
I don't have any EastWest/Soundsonline products, but so what . . .
In general there are two ways to use NOTION:
(1) live performance
(2) virtual music
These are two very different uses, and for live performance it's best to use the native NOTION instruments, because they are optimized and extensively tested for that purpose. This is the case for two reasons: (a) the music is played through a sound system, typically at a higher volume, hence will not be so "high quality" anyway; and (b) when you are doing this with an audience, you do not want the computer to have problems--so the focus is on reliable music generation more than on the highest possible fidelity. Another important factor is that in a live performance the focus is on the humans rather than on the computer-generated accompaniment . . .
The way I explain this is that when there is a singer, (a) the focus is on the singer but (b) the musicians are there to accompany the singer . . .
In other words, the band plays to Elvis, rather than vice-versa . . .
The other side of the coin occurs when the focus is on the highest quality sounds where there are no particular time constraints with respect to how long it takes to create the finished song . . .
In this strategy, the goal is to get the best possible sounds, which ultimately requires constructing songs in layers rather than all at once in what becomes a huge score, with this being the case because the best quality virtual instruments require a lot of computing, plus you typically will want to include some real instruments and real singing . . .
Explained another way, in this strategy you use the same techniques as were used starting in the mid-1960s, where musical groups tended to record parts in a series of efforts rather than the way it was done in the mid-1950s and earlier when everybody played and the singer(s) sang all at the same time, since there only were one-track and perhaps two-track recording machines . . .
For example, in early Elvis Presley songs, each song was recorded live as many times as necessary to satisfy Elvis, who in addition to being the greatest operatic tenor of the 20th century had stellar musical judgment . . .
[NOTE: Listen with studio quality headphones like the SONY MDR-7506 (a personal favorite) and observe the uvular trill on "hound" at 2:00, which is mind-boggling. How did Elvis know about uvular trills? Did a producer provide a bit of guidance? Regardless, it's profoundly amazing . . . ]
[NOTE: Les Paul was doing multitrack recording with extensive overdubbing starting in the late-1940s, but he was the exception rather than the rule . . . ]
In the early years, the Beatles could do this; but a few years later they switched to creating songs in layers . . .
CREATING SONGS IN LAYERS
My primary focus is on what one might call "popular music"; but I like all genres and styles, and usually can create consistent patterns . . .
Specific to your questions, after considerable experimentation and a bit of stellar advice from "Magic Dave" at MOTU, I decided that the best way to create songs in layers with NOTION is to do it in ReWire sessions, where the VSTi and AUi virtual instruments are hosted in the Digital Audio Workstation (DAW) application running as the ReWire host controller, and NOTION only provides the music notation and its generated MIDI, which is sent to the ReWire host controller to "play" the VSTi and AUi virtual instruments . . .
Specific to computer science and software engineering, there are a few very important bits of information:
Nearly all music software is not multitasked and multithreaded in a highly complex way, which means that whatever multitasking and multithreading occurs is done automagically by the operating system and the processor rather than explicitly in the application code, which you can verify on the Mac by observing processor activity via the Activity Monitor utility application . . .
Play a song in NOTION and compare (a) what you see in Activity Monitor to (b) what you see when you run a multitasked and multithreaded application like ScreenFlow (Telestream) or Final Cut (Apple), both of which will grab and use every processor (and all its "cores") by design when rendering video . . .
[NOTE: I am simplifying this a lot, but it's sufficiently precise for purposes of this conversation. Some of everything occurs (multiprogramming, multitasking, and multithreading), but conceptually it's easier to understand when you consider it to be linear. NOTION and Studio One Professional are programs, so when both are running, there is multiprogramming being done by the operating system, which includes multitasking and multithreading . . . ]
Explained another way, it doesn't accomplish anything useful to start throwing processors and cores at digital music applications; and while it's a problem when you only have 4GB of system memory, from 8GB to 16GB is where you start getting optimal bang, with 32GB being a bit much when you are creating songs in layers . . .
The practical perspective is that it's better to have a fast computer than a slow computer, since the processor speed is more important than the number of processors and cores, as well as the amount of system memory over 16GB--although there is a bit more bang when you have 32GB and are creating songs in layers . . .
Another way to understand this is that there is a finite amount of stuff each digital music application and component can do, and when you ask them to do more than they can do, they become overloaded and then annoying things start happening, which includes skips, pops, noise, and generally deteriorating audio quality . . .
Consider all this stuff from the perspective of managing an orchestra, where you put all the musicians and singers in the same room and tell them to start playing and singing, which is the smart and easy way to do it . . .
On the computer, this maps to having everything in the same workspace, which is controlled ruthlessly in a primarily linear way, although there probably is a bit of multithreading happening at various times but in a carefully controlled and constrained way . . .
Compare it to putting each musician and singer in a different room and then devising a way to synchronize everything and so forth, which is vastly more complex and requires more helpers, which at some point becomes too time-consuming and expensive to be practical . . .
You want an orchestra in the same music hall being controlled and managed by one conductor . . .
There are ways to do this like an air traffic control system at an airport where there are a group of air traffic controllers, each managing and controlling a set of airplanes; but this requires more resources, more people, and costs more . . .
If it's necessary, which it is for air traffic managing and controlling, then you do it and have elaborate computing systems, which also require significant resources--human and machine . . .
These are two, very different strategies; and at some point years ago, the big question was "how can we do more stuff?" . . .
One answer was "hire more software engineers and devise a totally complex way to do parallel computing and asynchronous multiprocessing, which will cost a lot of money and probably will bankrupt the company" . . .
The other answer was, "In a few months, Intel is releasing a new processor which is twice as fast as the current processor, so let's wait a few months and the problem solves itself without us needing to do anything" . . .
Occasionally, Intel, Apple, and Microsoft do things that require more work by software engineers and cannot be avoided--like moving solely to 64-bit computing--but so what . . .
Yeah, it's a hassle for a while--sometimes for a year or two--but it must be done; there are benefits for customers; and you get the revenue stream from upgrades; so it works over the long run . . .
Vienna Symphonic Library (VSL) solves the problem for folks who need to do vastly elaborate digital music production by delivering a separate system which can run on a set of computers and allows work to be done elsewhere or offloaded to additional computers . . .
Vienna Ensemble Pro 7 (Vienna Symphonic Library)
[NOTE: I have not examined any of the code for all this stuff, and it's highly likely that there is a lot of multitasking, multithreading, and distributed computing occurring--including parallel computing--but so what. This isn't a software engineering discussion group. It's easier to conceptualize all this computer science and software engineering stuff as being essentially linear, meaning that you click on "Play" and audio is generated in a step-wise way that produces music where what you want to hear at each instant in time is heard like a clock ticking in perfect time. It's easier to work with Audio Tracks than with VSTi and AUi virtual instruments on Instrument Tracks, but when you are composing with music notation, this is what needs to happen until you are happy with the layer, at which time you commit it to Audio Tracks and move on to the next one . . . ]
You can verify what I call the "linear aspect" by doing a simple series of experiments, where for example you start with one instance of Kontakt 5/6 (Native Instruments) or MachFive 3 (MOTU) and then add more instances until the generated audio quality and playback performance deteriorates . . .
In particular, MachFive 3 has some chromatically sampled, highly computing-intensive and system memory-intensive instruments, and in some scenarios getting the highest quality audio requires limiting a score for a layer to only one or two instances of MachFive 3 . . .
For reference, "chromatically sampled" maps to each note being sampled, as contrasted to what I call "diatonically sampled", where only every other note actually is sampled and the in-between notes are computed by logarithmic interpolation: a lower sampled note is used to compute the value of the next higher note, or a higher sampled note is used to compute the value of a lower non-sampled note. This is a good reason not to use motion-based playing styles like vibrato and tremolo (Fender terminology on both) with such libraries, because if the vibrato or tremolo is embedded in the "every-other" samples, then computing the in-between (not actually sampled) notes also raises or lowers the rate of the motion-based effect . . .
If you want consistent tremolo or vibrato on an electric guitar, then do the tremolo or vibrato using an effects plug-in so that it will be consistent no matter whether you are using a chromatically or diatonically sampled sound library . . .
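The effect described above can be shown with a toy model: resampling a recorded note by a semitone ratio scales everything baked into the sample by the same factor, including any embedded vibrato. A minimal sketch (the function name and numbers are purely illustrative):

```python
SEMITONE = 2 ** (1 / 12)  # equal-temperament frequency ratio between adjacent notes

def shifted_rates(base_pitch_hz: float, vibrato_hz: float, semitones: int):
    """Toy model of resampling: shifting a recorded note by N semitones
    scales BOTH its pitch and any vibrato baked into the sample."""
    ratio = SEMITONE ** semitones
    return base_pitch_hz * ratio, vibrato_hz * ratio

# A note sampled at A4 = 440 Hz with 5 Hz vibrato, resampled up one semitone:
pitch, vibrato = shifted_rates(440.0, 5.0, 1)
print(f"pitch: {pitch:.2f} Hz, vibrato: {vibrato:.2f} Hz")
```

The vibrato speeds up by about 6% per semitone of shift, which is exactly why an effects plug-in applied after the sampler gives a consistent rate regardless of the note.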
It doesn't matter if you (a) are transcribing and orchestrating an existing score or (b) are composing a new song, because when you do everything yourself, you only can do one thing at a time . . .
I do everything on a 2.8-GHz (8-core) Mac Pro (Early 2008) with 32GB of system memory here in the sound isolation studio; and while it's 10 years-old, it's still a peppy supercomputer, especially since I offload a good bit of the audio generation to a MOTU 828mk3 external digital audio and MIDI interface . . .
My current strategy is to use ReWire MIDI staves in NOTION and to host the VSTi and AUi virtual instruments and effects plug-ins in Studio One Professional, where Studio One Professional is the ReWire host controller and NOTION is the ReWire slave application, which among other things maps to Studio One Professional being responsible for the audio . . .
NOTION provides the music notation and generated MIDI which is sent to Studio One Professional where the VSTi and AUi virtual instruments are hosted . . .
I do 10 instruments per layer, which works nicely; and once I am happy with the instruments for a layer, I record the generated audio to Audio Tracks in Studio One Professional; save the ".song"; remove the current set of 10 instruments; save the ".song" with a new version number; and add another 10 instruments--repeating this in steps and layers until the song is finished . . .
Along the way, I consolidate Audio Tracks into subgroups and create yet another version of the ".song", where the goal is to keep everything as simple as possible visually, since this makes it easier to do the producing, audio engineering, and mixing . . .
The song might have hundreds of instruments and individual tracks, but even if you have a wall of display monitors you cannot see all of them at the same time in any detailed and precise way--so you consolidate and mix sections with the goal being to have one sub-grouped track for the violins, one for the cellos, and perhaps just one or two tracks for all the strings . . .
Otherwise, it's just too much stuff to manage at a high level . . .
Once I have the drumkit nicely mixed, I move it to a subgroup and then work with it as one, two-channel track--although for drums I like to keep the primary snare drum and kick drums as separate tracks, at least until everything is recorded, which is the same for the electric bass guitar tracks . . .
In what I call "popular music" everything rides on the basic rhythm section, which is drums, bass, and rhythm guitar or synthesizer playing chords and as such needs to be produced in a specific way, where perhaps the best current example is Keith Urban's now signature hit song, "Blue Ain't Your Color" . . .
There are a few instruments that are more important from a producing and mixing perspective, so it's usually best to keep them as individual tracks as contrasted to combining them into one stereo subgroup track . . .
I like to have three kick drums (panned far-left, top-center, far-right) and three snare drums panned the same way--each being on a separate ReWire MIDI staff in NOTION, since this makes it possible to put the kick drums and snare drums in motion; and I do this for a few more instruments but at minimum usually have two ReWire MIDI staves for each instrument . . .
By saving everything in version-numbered steps, you can go backward; but overall it's better to go forward, which you can do once you make sense of creating songs in layers . . .
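The version-numbered saving described above can be sketched as a small naming helper. The file-naming scheme here is purely illustrative (Studio One has no such convention built in; you would simply apply it by hand when doing "Save As"):

```python
from pathlib import Path

def next_version_name(song_name: str, existing: list[str]) -> str:
    """Return the next version-numbered filename, e.g. 'MySong-v3.song',
    given the filenames of the versions already saved for this song."""
    prefix = f"{song_name}-v"
    versions = [
        int(Path(name).stem[len(prefix):])
        for name in existing
        if Path(name).stem.startswith(prefix)
        and Path(name).stem[len(prefix):].isdigit()
    ]
    return f"{song_name}-v{max(versions, default=0) + 1}.song"

print(next_version_name("MySong", ["MySong-v1.song", "MySong-v2.song"]))
# MySong-v3.song
```

The point is not the code but the discipline: every layer committed to Audio Tracks gets its own numbered snapshot, so any earlier state of the song remains recoverable.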
This strategy is explained in great detail in one of my projects in this forum . . .
Project: ReWire ~ NOTION + Studio One Professional (PreSonus NOTION Forum)
It's also important to ensure that NOTION, Studio One Professional, and all the various VSTi and AUi virtual instruments are configured consistently, which includes matching sample rate, buffer size, global tuning, processor usage, and system memory usages . . .
In particular, NOTION is not set by default to standard reference tuning, so you need to set the global tuning value to 440-Hz (a.k.a., "Concert A") to be consistent with everything else . . .
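For reference, standard reference tuning pins every equal-tempered note to a frequency derived from A4 = 440 Hz; a quick sketch using MIDI note numbers (A4 is MIDI note 69):

```python
def note_frequency_hz(midi_note: int, concert_a_hz: float = 440.0) -> float:
    """Equal-temperament frequency for a MIDI note number (A4 = note 69)."""
    return concert_a_hz * 2 ** ((midi_note - 69) / 12)

# A4 and middle C (C4 = MIDI note 60):
print(f"A4: {note_frequency_hz(69):.2f} Hz")  # 440.00 Hz
print(f"C4: {note_frequency_hz(60):.2f} Hz")  # 261.63 Hz
```

If one application is tuned to 440 Hz and another to, say, 442 Hz, every note they generate together will beat against each other, which is why the global tuning values must match.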
44.1-kHz is the standard sample rate for audio, while 48-kHz is the standard sample rate for audio with video . . .
I set everything to 44.1-kHz, since it's the standard for audio, which is another conversation--although my perspective is that nobody over perhaps 5 years-old can hear much over perhaps 10-kHz, which includes 5 year-olds who have attended a KISS or Metallica concert and sat anywhere in the front rows . . .
44.1-kHz is completely sufficient to reproduce the complete range of human-perceivable audio perfectly (20-Hz to 20-kHz) . . .
The rule is that you divide the sample rate by two to get the upper value of the reproducible frequency range (the Nyquist limit); hence when the sample rate is 44.1-kHz, the highest frequency which is reproducible with the highest possible fidelity is approximately 22-kHz (22.05-kHz, precisely), which is above the range of normal human hearing, and there you are . . .
The higher the sample rate, the more computing is required, which overall wastes valuable computing resources for no discernible benefit or reason . . .
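The Nyquist rule above can be checked numerically: a tone above half the sample rate cannot be represented and instead "folds back" (aliases) to a lower frequency. A small sketch:

```python
def nyquist_hz(sample_rate_hz: float) -> float:
    """Highest frequency faithfully representable at a given sample rate."""
    return sample_rate_hz / 2

def aliased_frequency_hz(freq_hz: float, sample_rate_hz: float) -> float:
    """Frequency actually heard when sampling a pure tone: anything above
    the Nyquist limit folds back into the 0..Nyquist range."""
    f = freq_hz % sample_rate_hz
    return sample_rate_hz - f if f > sample_rate_hz / 2 else f

print(nyquist_hz(44100))                   # 22050.0
print(aliased_frequency_hz(30000, 44100))  # 14100.0 -- a 30 kHz tone folds to 14.1 kHz
```

Since 22.05-kHz already exceeds the audible range, raising the sample rate buys no audible headroom, only more numbers to compute per second.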
Do some experiments with the various configuration options to determine the "happy place", and then do the experiment where you start a NOTION score with one instrument and continue adding more VSTi and AUi virtual instruments--one instance per staff--until you overload NOTION, to determine what NOTION can handle in one score on your Mac . . .
When you switch to what I call the "ReWire MIDI" strategy, NOTION hosts no instruments and is focused solely on music notation and generating MIDI to send to Studio One Professional, where all the VSTi and AUi instruments are hosted--but in layers where you work with 10 at a time and then record the generated audio to Audio Tracks and, after saving version-numbered NOTION scores and Studio One Professional ".songs", start on a new layer of another 10 instruments . . .
NOTION is doing a lot of work simply to do the music notation and MIDI generating; so in this strategy NOTION is focused specifically on what it does best . . .
Studio One Professional is focused on hosting VSTi and AUi virtual instruments and handling Audio Tracks and effects plug-ins, which is what it does best; so it also is focused efficiently . . .
Explained another way, when the goal is to produce the highest quality audio, you want to avoid asking NOTION and Studio One Professional (or any other DAW application) to do more work than they can do smoothly without being overloaded . . .
Remember that on top of everything NOTION and Studio One Professional are doing, the various VSTi and AUi virtual instrument engines also are doing a lot of computing . . .
Lots of FUN!
P. S. On the computer science and software engineering side of digital music production, among other things I am an Apple developer and a Reason Rack Extension developer and have been working on a digital-delay Rack Extension for several years--making a tiny bit of progress every so often--where the key is that once you make sense of digital delay algorithms, you can create hundreds of effects, most of which do not exist currently because there are not so many lead guitar players who also are software designers and engineers and have enough understanding of advanced mathematics and physics to imagine stuff . . .
The biggest conceptual hurdle for me is making intuitive sense of samples (or "lollipops" as I call them), where a song is constructed from streams of samples that arrive at a rate of 44,100 per second (at 44.1-kHz) . . .
I can listen to a song and nearly instantly get a detailed sense of all the instruments and singing, but doing this from the perspective of streams of individual samples is not so easy, although it's starting to make sense . . .
The problem is that everything all the instruments and voices are doing at any instant in time is contained in the corresponding sample for that instant, which boils down to a single amplitude value per channel, the arrival time being implicit in the sample's position in the stream . . .
How do you look at a single sample and then map it to everything an orchestra or musical group is doing at a particular microsecond or whatever?
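The sample stream in question can be generated directly: at 44.1-kHz each sample is one amplitude number, and the time axis is implicit in the sample's position. A minimal sine-wave sketch:

```python
import math

SAMPLE_RATE = 44100  # samples per second

def sine_samples(freq_hz: float, n_samples: int, amplitude: float = 1.0):
    """Generate n_samples of a pure sine tone; each sample is one amplitude value,
    and the sample's index divided by SAMPLE_RATE is its implicit time."""
    return [
        amplitude * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
        for i in range(n_samples)
    ]

# One second of A440 is 44,100 numbers -- the "lollipops":
samples = sine_samples(440.0, SAMPLE_RATE)
print(len(samples))          # 44100
print(round(samples[0], 3))  # 0.0 -- the sine starts at zero amplitude
```

A full mix is just the per-sample sum of every such stream, which is what makes reading a single sample in isolation so uninformative: all the instruments have already been collapsed into one number.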
This is with a university degree in Computer Science and 40 years of doing GUI and transaction designing and programming, but none of it focused on streams of individual samples--well, except for large-scale, high-volume transaction processing, which at least is similar . . .
NOTION and Studio One Professional dumbfound me, and while I have a reasonable sense of what is done via the internal, low-level code, it's simply mind-boggling . . .
I think that if I worked on it 18 hours a day for 150 years, I probably could come up with a primitive version of NOTION 3 . . .
When I write that NOTION and Studio One Professional are doing a lot of stuff, I am not exaggerating, which is fabulous . . .
Last edited by Surf.Whammy on Sun Feb 16, 2020 5:03 pm, edited 1 time in total.
The Surf Whammys
Sinkhorn's Dilemma: Every paradox has at least one non-trivial solution!
First of all, thank you to MM1 and Surf.Whammy for responding and offering advice and insight.
michaelmyers1 wrote: "You didn't mention what OS you're using but I don't think that's probably an issue with your problems."
Yes, sorry about that. I am on macOS 10.13.6 High Sierra.
I have experimented with my EW buffer but not with Notion's. I figured the sample program would be the culprit, but I will look into adjusting Notion's as well, and, as SW mentioned, check the other crucial operating parameters to ensure they are optimized. Though I understand most of the relationships between the two programs that need to be in place for the most efficient operation, I certainly may have overlooked or been unaware of something.
Yes, 32 GB is the max capacity for this machine. The funny thing is, I have experimented with adding EW instruments in Notion until I get blatant dropouts or pops, but when I checked Activity Monitor my CPU load maybe hit 30-50% and my RAM barely pushed above a third of its capacity. And this was with my EW latency at the highest setting.
Lastly, thanks SW for your thorough reply and for the workaround given my machine's limitations. I will look through it and apply what I can. I am really grateful for the functionality of Notion and its friendly play with other sample programs, which I'm sure is no easy task to achieve, but I definitely look forward to the day when scoring software will become not only more accurate in performance but may even speed up the production of orchestral scores for those who prefer to write them. They've certainly come a long, long way. Thanks to PreSonus for their excellent software!
So for question #4
4) Since I just recently started attempting to score choral music this one is a three-parter:
gonna keep this simple.
1) Choosing the vocal pane in Notion's score setup only produces a piano sample, regardless of the vocal range.
2) The choirs work but seem not to perform staccatos or staccatissimos. I have not tried all the articulations yet, but these I noticed immediately.
3) Tied-note dynamics seem to cut off after "f" on a swell with the choir voices, say from ppp - f - ppp.
I tried linking and embedding the audio and the image but was unsuccessful. My use of these features is somewhat lacking atm; I've done it before on other sites, but only a few times. If I figure it out soon, I will still post them.
Doing realistic virtual singing is possible, but it requires a bit of work . . .
Here in the sound isolation studio my focus is on a female soprano--at first with the goal of being able to add a female voice to my science fiction radio plays, but then after a bit of experimenting realizing that this also makes it possible to add female singers to songs . . .
I do this with Realivox Blue (RealiTone), and it's working nicely . . .
Realivox Blue (RealiTone)
Realivox Blue requires Kontakt (Native Instruments) . . .
At this level of realism, two things are required:
(1) phonetic scripting
(2) keyswitches
Products like Realivox Blue have elaborate phonemes that form the vocal language the virtual singer(s) sing; and keyswitches are used to provide additional programming cues and controls . . .
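To illustrate the keyswitch idea in the abstract (this is a hypothetical sketch, NOT Realivox Blue's actual keyswitch map; consult the library's manual for the real one): a keyswitched vocal library pairs each sung note with a low "trigger" note that selects the phoneme, along these lines:

```python
# Hypothetical phoneme keyswitch table -- illustrative only, not taken
# from any real library's documentation.
PHONEME_KEYSWITCHES = {
    "ah": 24,  # C1
    "ee": 25,  # C#1
    "oh": 26,  # D1
    "oo": 27,  # D#1
}

def keyswitch_events(lyric_phonemes, sung_notes):
    """Pair each sung MIDI note with the keyswitch note selecting its phoneme."""
    return [
        (PHONEME_KEYSWITCHES[phoneme], note)
        for phoneme, note in zip(lyric_phonemes, sung_notes)
    ]

# "ah-ee-oh" sung on C5, D5, E5 (MIDI 72, 74, 76):
print(keyswitch_events(["ah", "ee", "oh"], [72, 74, 76]))
# [(24, 72), (25, 74), (26, 76)]
```

The keyswitch notes sit below the singable range, so in the score they act purely as programming cues rather than sounding pitches.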
It's important to read and understand the licenses, where for example Realivox Blue must be used in a melodic way--as contrasted to being used for spoken word (which is prohibited) . . .
At first I thought this might be a problem for using Realivox Blue in my science fiction radio plays, but after pondering it for a while, I remembered that there is extensive music in my science fiction radio plays--something I do intentionally to make them music albums, hence protected ruthlessly from copying, renting, and so forth . . .
The rules are different for audio books, which once purchased can be rented over-and-over without the author being paid royalties . . .
It's the same with movies--buy once, rent many times with no royalties required . . .
Music folks are smart; not so much the audio book and movie folks . . .
So, the key to using Realivox Blue productively is to ensure she is melodic and has a bit of at least occasional accompaniment . . .
This is my project in this forum on Realivox Blue, and it explains a good bit of the advanced work required to create the most realism possible--a work in progress but coming along nicely . . .
Project: Realivox Blue (PreSonus NOTION Forum)
This is one of the versions where it's just electric bass and Realivox Blue . . .
[NOTE: These are complex phonemes, and the theme of the song is that it's a parody of "Smells Like Teen Spirit" (Nirvana), which was based on an antiperspirant popular among teenagers in the 1990s or whatever. In my song, it's based on the taste of Anarchy by AXE and the concept that sometime in the future cyborgs will be so realistic that the only way to distinguish them from humans is by taste--hence Realivox Blue is a cyborg singing for all cyborgs in the collective "we taste like ANARCHY" . . . ]
There are other virtual vocalists, and some of them are choirs--like Voices of Prague (Virharmonic), which has its own player and does not require Kontakt . . .
Voices of Prague ~ Choral Series (Virharmonic)
At one point, I was considering hiring a female soprano; but (a) it would be expensive and (b) considering that the sound isolation studio is 6 feet wide by 7 feet tall and 12 feet long, there's not a lot of room for two people, hence it might be a bit awkward to coach the voice-over lady through some of the more risqué bits of my science fiction radio plays . . .
In fact, for the first few weeks, I explored the limits of how risqué I could make Realivox Blue; and once you make sense of her phonetic language, she can be a very naughty lady . . .
For advanced choral singing, there are other products, of course, and folks in this forum use them, hence probably can provide more information on them . . .
One way to explain phonetic singing is with something I learned when I was in a liturgical boys choir singing in a cathedral that had reverberation and echoes reminiscent of the Taj Mahal . . .
The choirmaster instructed us to sing "Excelsius" as "egg-shell-sea-us", over-enunciating each phoneme, since that was the only way to cut through all the reverberation and echoes . . .
It's a bit of work to get a practical understanding of phonetic scripting and singing, but the result is that (a) you have a soloist or a choir and (b) they sound realistic . . .
I did experiments with Realivox Blue for about 6 months, and now I have a virtual female soprano, so it's all good . . .
Lots of FUN!
The Surf Whammys
Sinkhorn's Dilemma: Every paradox has at least one non-trivial solution!
I appreciate your effort, SW, but you didn't really answer the original question. Can anyone directly address my original question regarding the Notion solo voice and choir sounds?
I would like to work with the Notion vocals, even if only marginally, but if they don't play basic articulations that is going to render them mostly unusable.
Yes, I am aware of Solo Voices of Prague, and I have also been looking at some of Audiobro's vocal software; I really like what I see. But I must say I was hoping for at least some basic functionality beyond singing the scored vocal note for strictly its base duration, then game over.
So if anyone can address this I would appreciate it.
Articulations seem to work pretty well with the orchestral instruments, so I can't see why they wouldn't with solo voices or choirs.
https://soundcloud.com/user-138445636/n ... al/s-G2miJ
I tried to embed the audio but I'm apparently doing something incorrectly; I honestly haven't done much posting of these kinds of things. Anyway, thanks for all the suggestions. I understand that there are other programs that can deliver more realistic vocal performances from a score. I just figured the solo and choir voices that come stock in Notion would be a little more functional than what I am currently experiencing, hence why I thought I was overlooking something or doing something incorrectly.
Thanks again for all your suggestions and help.