Discuss Notion Music Composition Software here.
Dude! I don't know if it's my imagination because you planted the suggestion of the changes you made in my brain, but the clarity of the instruments is so much better. When I listened to the first version I was like "hmmm, the strings sound a bit like a synth pad because they are lost in all of the sound..." but now I feel I can hear the bowing. I was tricked at the section starting around 1:20. I was listening to the low strings, which were up front, and saying "oh, there they are!" and then in the background, sneaking in at around 1:26, 1:28, whispering the melody, were the horns. Nice!

I don't know what's going on, but my experience with this piece is almost like night and day. I can hear everything so much more clearly. Oh, and I was tricked again at 4:06 - the teeter-totter woodwinds in dissonance. Didn't hear that before, or at least it didn't jump out at me. When I say tricked, I mean I was expecting something else to happen but was pleasantly surprised.

So if there is anything to be said about MIDI music, with dynamics and nuances being controlled and influenced by digital parameters, sound settings, playing, articulations, and playing styles, this piece is an excellent example of the kinds of changes that breathe "life" into the sound.

Question: is the reproduction just from notation in Notion, or did you actually play some of the passages on your MIDI keyboard (with the appropriate VSTi engaged)? If it's just from notation and then adjusting controls, and ultimately DAW production, that's very encouraging.
Last edited by acequantum on Sun Nov 14, 2021 7:00 pm, edited 1 time in total.
by acequantum on Sun Nov 14, 2021 6:38 pm
Surf.Whammy said:
Although it appears we are coming from opposite directions, in a general sense we are saying the same things but in different ways . . .

When one is working with what might be called an "unsophisticated or non-advanced" VSTi virtual instrument, engine, and sampled-sound library, simply increasing the MIDI volume level from 60 to 127 does not do what a skilled trumpet player does when playing at mezzopiano and then switching to fortississimo . . .

All it does is increase the volume level without increasing or decreasing anything else . . .

On the other side of the equation, if the engine of an advanced VSTi virtual instrument does all this extra work using a cornucopia of sampled-sounds and elaborate textural sample-changing algorithms, then the big question becomes, "Why are specific articulations and all that stuff needed?" . . .


Though your post was directed at MM, I would say you've answered your own question with the statement and question itself. A trumpet is a good example because you have lips, breath, vocal cords (for growls), physical position (shakes), valves, and various mutes you put in the bell or over the bell; all of these things contribute to the trumpet's sound and volume.

In a practical sense, you couldn't, or rather wouldn't want to, program all of these responses into the single strike or instance of a MIDI key. You might want to control changes in volume, timbre, and maybe even articulations with one strike. But things like portamento, vibrato, breathiness, valve click - you might not want these to appear when you are just playing most of the time. You need a separate set of controls to introduce these things on command. You almost never shake a trumpet in classical music, though you still play loud or soft; but you might shake the trumpet at any given moment in jazz or Latin playing. Falls in classical? Once in a while. Jazz - often. Doits? Can't recall hearing one in classical.

Now, if you put together an entire library consisting of more than one instrument, you'll want some consistency in how to reproduce the playing styles, articulations, and dynamics for all instruments. Like, maybe the MOD wheel always controls expression. Or press C1 and you go into legato mode.
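Just to make that concrete, here's a rough sketch (in Python, using the mido library) of what that kind of convention looks like as raw MIDI messages. The CC number and the C1 keyswitch are only the hypothetical mapping I described above - every library documents its own assignments, so treat this as an illustration rather than anyone's actual spec.

```python
# Sketch of the "one consistent control per nuance" convention described above,
# built with the mido library (pip install mido). The mod-wheel-as-expression
# and C1-keyswitch-for-legato assignments are hypothetical, not any real
# library's documented map.
import mido

CC_MOD_WHEEL = 1        # mod wheel; here assumed to drive expression/dynamics
KEYSWITCH_LEGATO = 24   # "C1" if middle C (60) is labeled C4; conventions vary

messages = [
    # Tap the keyswitch to drop the instrument into legato mode
    mido.Message('note_on',  note=KEYSWITCH_LEGATO, velocity=1),
    mido.Message('note_off', note=KEYSWITCH_LEGATO, velocity=0),

    # Hold a note while riding the mod wheel from soft to loud; a library that
    # crossfades dynamic layers on CC1 changes timbre as well as level here
    mido.Message('note_on', note=60, velocity=80),
    *[mido.Message('control_change', control=CC_MOD_WHEEL, value=v)
      for v in range(20, 128, 16)],
    mido.Message('note_off', note=60, velocity=0),
]

for msg in messages:
    print(msg)   # in a live setup you would send these via mido.open_output()
```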

The cornucopia of sampled sounds and algorithms might be in there but distributed to other controls that have to have a means of being reproduced.

MIDI MPE tries to address nuance at various levels by expanding what MIDI can do, but it comes down to whether and how all of those physical manipulations of, say, the trumpet can be reproduced on an electronic device.
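For what it's worth, the core of MPE is simply "one note per channel," so a per-note gesture doesn't smear across the whole part. A minimal sketch of that shape, again with mido; the channel numbers and bend amounts are invented for the example:

```python
# Rough illustration of the MPE idea: each sounding note gets its own MIDI
# channel, so a bend (think of a trumpet "shake") touches only that note.
# Channel numbers and bend values are arbitrary, chosen just for the demo.
import mido

msgs = [
    mido.Message('note_on', channel=1, note=60, velocity=90),   # first note, channel 2
    mido.Message('note_on', channel=2, note=67, velocity=90),   # second note, channel 3
    # Bend only the first note up and back; the second note is untouched
    mido.Message('pitchwheel', channel=1, pitch=2048),
    mido.Message('pitchwheel', channel=1, pitch=0),
    mido.Message('note_off', channel=1, note=60, velocity=0),
    mido.Message('note_off', channel=2, note=67, velocity=0),
]

for m in msgs:
    print(m)
```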

In this discussion it seems we're in the realm of emulating acoustic instruments. If you jump into modular synthesis, the discussion takes on an entirely different flavor, because it's not about reproducing the dynamics, playing styles, and articulations of existing acoustic instruments; it's about creating altogether new and wild sounds and instrument possibilities. These instruments do not have to follow the conventions or expectations of acoustic instruments or how they are played.
by Surf.Whammy on Sun Nov 14, 2021 9:26 pm
michaelmyers1 wrote: Here is a quote from Paul Gilreath from his book The Guide to MIDI Orchestration (my emphasis):

"Brass players use air and lip pressure to change dynamics. With these changes in volume also come changes in timbre. Louder dynamics have an abundance of high overtones, which make the tone much more brilliant. Notes played in softer dynamics have fewer overtones, making them softer and mellower. Consequently, when a brass musician plays a note that changes from a p to a fff, not only does the volume change, but the timbre changes as well."


I read this several times over the past few days, and I think it will be helpful to analyze what Paul Gilreath is explaining . . . :)

THOUGHTS

"Brass players use air and lip pressure to change dynamics.

With these changes in volume also come changes in timbre.

As I read these two statements, Gilreath is equating (a) dynamics to (b) volume; but he provides the additional information that changes in dynamics (a.k.a., "volume") travel with changes in timbre.

[NOTE: Among other things, timbre is what distinguishes a violin from a trumpet, trombone, clarinet, oboe, and so forth; but an instrument or voice can have different types of timbre, so it's not limited to distinguishing one instrument or voice from another instrument or voice . . . ]

I think Gilreath is providing the information that a brass player can control the timbre of the respective brass instrument by the amount of "air and lip pressure" being applied . . .

This is not the only way a brass player can control timbre: for example, a trumpet player can use a metal cone or a plunger to change timbre; and in particular I like the effect a metal cone has on the tone, texture, or timbre . . .

Louder dynamics have an abundance of high overtones, which make the tone much more brilliant.

Notes played in softer dynamics have fewer overtones, making them softer and mellower.

Here, Gilreath is explaining how the quantities of "air and lip pressure" affect timbre with respect to overtones (more overtones at stronger "air and lip pressure", fewer at softer "air and lip pressure") . . .
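To put a rough number on "more overtones when played louder," here is a toy additive-synthesis sketch in Python/numpy: it builds a tone whose upper harmonics grow faster than the fundamental as the "dynamic" goes up. The roll-off curve is invented purely for illustration; it is not measured brass data . . .

```python
# Toy illustration of the Gilreath point: raising the "dynamic" both raises the
# level and flattens the harmonic roll-off, so the tone gets brighter as well
# as louder. The roll-off formula is assumed, not measured from a real brass
# instrument.
import numpy as np

SR = 44100
t = np.linspace(0, 1.0, SR, endpoint=False)

def brass_like_tone(f0=233.0, dynamic=0.5, n_harmonics=12):
    """dynamic in 0..1; louder settings give the upper harmonics more weight."""
    rolloff = 2.5 - 1.8 * dynamic                       # assumed brightness curve
    tone = sum((1.0 / k ** rolloff) * np.sin(2 * np.pi * k * f0 * t)
               for k in range(1, n_harmonics + 1))
    return dynamic * tone / np.max(np.abs(tone))        # scale overall level last

def high_overtone_share(x, f0=233.0):
    """Fraction of spectral magnitude sitting above the 4th harmonic."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / SR)
    return mag[freqs > 4 * f0].sum() / mag.sum()

print("p   high-overtone share:", round(high_overtone_share(brass_like_tone(dynamic=0.3)), 2))
print("fff high-overtone share:", round(high_overtone_share(brass_like_tone(dynamic=1.0)), 2))
```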

If you watch electric guitar players, you will observe they tend to have their amplifier and loudspeaker rig set at a fixed volume level and other characteristics (bass, midrange, treble, presence, gain, and so forth); but they make adjustments on the guitar itself, using a combination of the toggle-switch for pickup selection, the guitar volume control, and the guitar tone control(s), as well as other switches, the location where they strum or pick the strings, and the intensity with which they strum and pick the strings at various locations . . .

Effectively, this is the way a guitar player does the equivalent of varying "air and lip pressure" to change timbre, although there are effects pedals or "stomp boxes" that change timbre and other types of textures, echoes, and so forth . . .

It's a bit different for acoustic guitar, but the principles are the same . . .

I agree with Gilreath, as well as with you . . .

The difference in our perspectives on dynamics in some respects is a combination of (a) semantics and (b) the capabilities of VSTi virtual engines and their sets of sampled sounds . . .

Consider the violin data from the Miroslav Philharmonik 2 screen capture . . .

[Image: Miroslav Philharmonik 2 violin articulations screen capture]

Staccato and Spiccato are bowing techniques, but (a) in staccato the bow stays on the string while (b) in spiccato the bow comes off the string, which I know but mostly forgot so "reminded" myself with a bit of web searching . . . :reading:

Whether these are (a) articulations or (b) playing styles is another matter; but (a) in NOTION they are considered to be articulations and (b) the standard generally-accepted definition says they are articulations . . .

Presuming each of the staccato and spiccato items in the list from Miroslav Philharmonik 2 was played, recorded, and digitized separately by a trained violinist as notated or described, then if you want "Violin Spiccato mp", this is the one you select; but if you want "Violin Spiccato mf", then that will be the one you select . . .

The articulation is the same, but the analog of a brass player's "air and lip pressure" is different based on the dynamic at which the strings were played . . .

Here, I think these are two separate performances--played, recorded, and digitized as described and labeled . . .

If a phrase begins (a) spiccato mezzopiano but then switches to (b) spiccato mezzoforte, then I think the set of samples needs to change . . .

I claim no particular expertise in Miroslav Philharmonik 2; but the only way I know to do this is via a MULTI and switching from one channel to another in the MULTI to select the specific set of samples, (a) vs. (b) . . .

Gilreath is saying that to add more high overtones, the brass player blows harder and tighter . . .

This maps to an increase in volume and a change in timbre . . .

In this regard, absent a set of rules, I do not think Miroslav Philharmonik 2 is "advanced" in the sense of being able automagically to switch from one set of samples to another based on articulation marks . . .

Explained another way, I think there are two separate conditions:

(1) dynamics only affecting the MIDI volume level number from the set {0, 1, 2, . . . , 127}

(2) advanced or "intelligent" dynamics that affect the MIDI volume level number, MIDI velocity number, and MIDI expression number, as well as the specific set of samples, based on the articulation, playing style, and so forth
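In raw MIDI terms, the difference between (1) and (2) looks something like this sketch (Python, mido again). The MULTI layout is hypothetical - just "Spiccato mp" loaded on one channel and "Spiccato mf" on another - since every engine organizes and addresses its sample sets differently:

```python
# Sketch of the two conditions above as raw MIDI (mido). The MULTI layout,
# "Violin Spiccato mp" on channel 1 and "Violin Spiccato mf" on channel 2,
# is hypothetical; engines organize and address sample sets differently.
import mido

# (1) Dynamics as volume only: same samples, just a bigger CC7 number
condition_1 = [
    mido.Message('control_change', channel=0, control=7, value=70),   # mp-ish level
    mido.Message('note_on',  channel=0, note=64, velocity=70),
    mido.Message('control_change', channel=0, control=7, value=100),  # louder, same timbre
    mido.Message('note_off', channel=0, note=64, velocity=0),
]

# (2) "Intelligent" dynamics: switch to the mf sample set AND raise the numbers
condition_2 = [
    mido.Message('note_on',  channel=0, note=64, velocity=70),        # Spiccato mp samples
    mido.Message('note_off', channel=0, note=64, velocity=0),
    mido.Message('note_on',  channel=1, note=64, velocity=95),        # Spiccato mf samples
    mido.Message('control_change', channel=1, control=11, value=100), # expression up too
    mido.Message('note_off', channel=1, note=64, velocity=0),
]

for msg in condition_1 + condition_2:
    print(msg)
```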

As you suggest, some of the Vienna Symphonic Library (VSL) engine(s) and sets of sampled sounds in various articulations, dynamics, playing styles, and so forth do (2) rather than only (1), based on some type of instruction, perhaps by specifying articulations, dynamics, and so forth; and if this capability exists--which I presume it does--then it revises and extends the otherwise standard definition of "dynamics" . . .

In other words, if you set the Miroslav Philharmonik 2 sample set to "Violin Spiccato mp", which was played, recorded, and digitized as described by a trained violinist, but then apply a mezzoforte dynamic mark to the music notation, then I suggest the result is a mezzopiano recording which has increased in volume--but nothing else, no different timbre, and so forth--to be louder . . .

It has the played, recorded, and digitized timbre of a mezzopiano performance but for practical purposes it's "pumped", although without the various nuances and fine-tuned controls of a compressor-limiter to provide a bit of graciousness to the "pumping" . . .
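That "pumped" result is easy to show numerically: multiplying a mezzopiano waveform by a flat gain raises every harmonic by exactly the same amount, so the spectral balance (the timbre) does not move at all. A quick check with a synthetic tone standing in for a real sample:

```python
# Numeric check of the "pumped" point: a flat gain changes the level of a
# mezzopiano-style tone but leaves its harmonic balance (timbre) untouched.
# The tone is synthetic, standing in for a recorded mp sample.
import numpy as np

SR = 44100
t = np.linspace(0, 1.0, SR, endpoint=False)
mp_tone = sum((1.0 / k ** 2.0) * np.sin(2 * np.pi * k * 233.0 * t) for k in range(1, 9))

pumped = 1.8 * mp_tone                       # "mezzoforte" by volume number only

def spectral_shape(x):
    mag = np.abs(np.fft.rfft(x))
    return mag / mag.sum()                   # spectrum with the overall level removed

print("level change (dB):", round(20 * np.log10(1.8), 1))
print("timbre unchanged:", np.allclose(spectral_shape(mp_tone), spectral_shape(pumped)))
```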

Lots of FUN! :)

Surf.Whammy's YouTube Channel

The Surf Whammys

Sinkhorn's Dilemma: Every paradox has at least one non-trivial solution!
by Surf.Whammy on Sun Nov 14, 2021 10:03 pm
acequantum wrote: In this discussion it seems we're in the realm of emulating acoustic instruments. If you jump into modular synthesis, the discussion takes on an entirely different flavor, because it's not about reproducing the dynamics, playing styles, and articulations of existing acoustic instruments; it's about creating altogether new and wild sounds and instrument possibilities. These instruments do not have to follow the conventions or expectations of acoustic instruments or how they are played.


I agree, but I include played, recorded, and digitized electric guitar in "acoustic instruments", although a more precise label would be "instruments recorded with microphones" . . . :)

THOUGHTS

I think everything is different with a synthesizer when the VSTi virtual instrument engine, sampled sounds, scripts, and controls are present . . .

Here, it's more complex and is not limited in the way instruments played, recorded with microphones, and digitized are limited, although it depends on the specifics . . .

As you know, there are physical synthesizers that create sounds based on electronic components, processors, and so forth; and if this is done, then the result is not limited by what was recorded by a microphone and then digitized . . .

I have an Alesis ION Analog Modeling Synthesizer, and it constructs and modifies sounds electronically, with a bit of computing along the way; and I got it about 20 years ago to make "outer space" noises for my science fiction radio plays . . .

At the time, I was using a Fender American Deluxe Stratocaster to compose and play music to accompany the dialogue; and the fascinating aspect I soon discovered is that all the white keys sounded "good" with the already recorded guitar track, which might be a stellar instance of serendipity . . .

[Image: Alesis ION Analog Modeling Synthesizer]

Later, I got a drumkit and added more instruments--including an elaborate set of effects pedals for the electric guitar--but I continued to use the Alesis ION for "outer space" sounds . . .

Some of the MachFive 3 (MOTU) instruments respond to MIDI wheels if you have a suitable MIDI keyboard; and this also is the case with some of the instruments for Kontakt (Native Instruments) from Bolder Sounds and other third-party companies . . .

There probably is a way to emulate a MIDI wheel in Studio One Professional; but I have not explored this . . .

I do a bit of automation in Studio One Professional but mostly to vary volume levels; to do panning effects; and to do fades . . .

It's an extra level of work, but when it's necessary I do it . . .

One of my new additions to SampleTank 4 (IK Multimedia) is the Resonator guitar; and what I like about it is the ability to specify the way slides occur automagically via keyswitches on a grand staff in music notation rather than trying to do it with a MIDI keyboard . . .

I can play keyboards but I'm not a pianist, hence am limited in this regard; so for me, presets and a bit of automagical help work best, although I am comfortable using keyswitches on a grand staff in NOTION, which is the way I guide Realivox Blue (RealiTone) in her singing . . .

In this regard, I think the combination of SampleTank 4 and the Resonator qualifies as an advanced VSTi virtual instrument--probably not "super advanced" but advanced enough to do some things automagically with presets, and more so when keyswitches are used . . .

Lots of FUN! :)
Last edited by Surf.Whammy on Sun Nov 14, 2021 10:47 pm, edited 1 time in total.

Surf.Whammy's YouTube Channel

The Surf Whammys

Sinkhorn's Dilemma: Every paradox has at least one non-trivial solution!
by michaelmyers1 on Sun Nov 14, 2021 10:09 pm
acequantum wrote: Question: is the reproduction just from notation in Notion, or did you actually play some of the passages on your MIDI keyboard (with the appropriate VSTi engaged)? If it's just from notation and then adjusting controls, and ultimately DAW production, that's very encouraging.

Thanks for the kind words! This recording is made strictly using Notion as a starting point and exporting into S1. I've posted a number of videos regarding my methods. Check out my YouTube Channel under the user Tensivity.

I don't typically play anything into the DAW. I can't play keyboards, really. I can hammer on notes and get an occasional one right. I have little patience for the "play it in or die" types that you encounter so often on forums out there regarding orchestration. :)

You can do a lot with your ears and notation and a DAW!

iMac (Retina 5K 27", 2019) 3.6 ghz I9 8-core 64 gb RAM Fusion Drive
with small AOC monitor for additional display
macOS Sonoma 14.4
2 - 500 gb + 2 - 1 tb external SSD for sample libraries
M Audio AirHub audio interface
Nektar Panorama P1 control surface
Nektar Impact 49-key MIDI keyboard
Focal CMS40 near-field monitors
JBL LSR310S subwoofer
Notion 6/Notion Mobile + Studio One 6 Pro

http://www.tensivity.com
by michaelmyers1 on Sun Nov 14, 2021 10:31 pm
acequantum wrote: So if there is anything to be said about MIDI music, with dynamics and nuances being controlled and influenced by digital parameters, sound settings, playing, articulations, and playing styles, this piece is an excellent example of the kinds of changes that breathe "life" into the sound.

This piece has an interesting history. It was written as a musical setting of a Rudyard Kipling poem of the same name by Grainger as a young man (17) in 1899, for his architect father's amateur orchestra in Western Australia. They couldn't play it; it was too difficult for them. There's a lot of back and forth between the parts, and the timing can be a challenge as one section of the orchestra hands off to the next. Not to mention control of dynamics and timbre, plus articulations (there's even a passage which uses mutes on the strings)! :)

iMac (Retina 5K 27", 2019) 3.6 ghz I9 8-core 64 gb RAM Fusion Drive
with small AOC monitor for additional display
macOS Sonoma 14.4
2 - 500 gb + 2 - 1 tb external SSD for sample libraries
M Audio AirHub audio interface
Nektar Panorama P1 control surface
Nektar Impact 49-key MIDI keyboard
Focal CMS40 near-field monitors
JBL LSR310S subwoofer
Notion 6/Notion Mobile + Studio One 6 Pro

http://www.tensivity.com
by Surf.Whammy on Sun Nov 14, 2021 11:06 pm
michaelmyers1 wrote: You can do a lot with your ears and notation and a DAW!

Profoundly brilliant observation! :+1

THOUGHTS

I play drums, electric bass, and electric guitar, plus a little bit of keyboards; but when I am composing in NOTION, it's all "by ear" with a lot of experimenting . . .

More often than not, I put a measure or two of notes on a staff using the mouse and then listen to it with focus on identifying "bad" notes or notes that are "good" but don't fit with the other instruments . . .

Then I make some changes and listen to it a few more times, repeating this until I am happy with the notes . . .

If it were a Gilbert Chemistry Set, then I would have blown myself up years ago . . . :P

Lots of FUN! :)

Surf.Whammy's YouTube Channel

The Surf Whammys

Sinkhorn's Dilemma: Every paradox has at least one non-trivial solution!
by Surf.Whammy on Sat Nov 20, 2021 2:44 am
michaelmyers1 wrote: This recording is made strictly using Notion as a starting point and exporting into S1. I've posted a number of videos regarding my methods. Check out my YouTube Channel under the user Tensivity.

I am intrigued by Spitfire Audio . . . :)

THOUGHTS

After your post on Spitfire Audio strings and its advanced "dynamic" features, I am thinking about Spitfire Solo Strings but am waiting for a discount sale . . .

As a youngster, I took Violin lessons for a few months but thought it was a "girly" instrument; tried Clarinet for a while but got thumb blisters; and then switched to Contrabass when I was 12 years old . . .

The Violin lessons mostly were 10 minutes of me saying, "This makes no sense" and then 20 minutes of the teacher playing fantastic violin solos; so even though I did not learn how to play violin, I learned how a Stradivarius violin can sound up-close and personal when played by a virtuoso; hence overall, it was a productive learning experience . . .

This was in Wichita, Kansas, and the school supplied instruments, mostly due to funding from The Boeing Company, I think . . .

The contrabass (a.k.a., "string bass") had a wheeled cart designed by one of the mathematics teachers who played contrabass in the local symphonic orchestra; and I would wheel it around the hallway thinking, "Hey girls, I've got a big one." . . . :P

It was stupid, but I was young and had no sense . . .

Decades later, when I started teaching myself to play lead guitar, I realized that in some respects getting an electric guitar to sound like a Stradivarius violin, viola, or cello is the "Holy Grail" of electric guitar tone and texture . . .

Seeing the Spitfire Audio strings in your Kontakt MULTI--as well as the custom rules and so forth--was very helpful and convincing . . . :+1

[NOTE: I like the first audio example, "Violin (Virtuoso)" for Spitfire Solo Strings . . . ]

Spitfire Solo Strings (Spitfire Audio)

In the electric guitar arena, this is the reason folks pay $250,000+ USD for vintage Gibson Les Paul and Fender Stratocaster guitars from the 1950s . . .

Instead, I use a variety of distortion and fuzz pedals . . .

The concept is the same but costs less . . .

I have Gypsy Jazzy (UVI), and I like the tone and textures of Gypsy violin . . .

[NOTE: The "Sad Violin" audio example is good . . . ]

Gypsy Jazzy (UVI)

I also have "Gypsy" (EW ComposerCloud X), and I like its violin . . .

[NOTE: I like the "Legato" and "Fuorigrotta" audio examples . . . ]

Gypsy (EW ComposerCloud X)

Historically, it's Gypsy Jazz, developed in the mid-1930s in Paris, France by Django Reinhardt and Stéphane Grappelli . . .

[NOTE: Due to injuries sustained in a fire, only two of Django Reinhardt's left-hand fingers worked; so he relearned how to play guitar with two fingers for fretboard playing, which is mind-boggling . . . ]

Lots of FUN! :)

Surf.Whammy's YouTube Channel

The Surf Whammys

Sinkhorn's Dilemma: Every paradox has at least one non-trivial solution!
