Admiral Bumblebee, the guy behind http://www.admiralbumblebee.com and the DAW Feature Chart, has written an article discussing artifacts that appear when working with automation in various DAWs. Some DAWs perform well and others not so well. Logic, S1 and Waveform appear to be the worst among the ones he tested, with a bunch of intermodulation distortion added to the audio signal. How would we approach it if we wanted PreSonus to fix this? Am I right in viewing this as a bug, or would we be directed to the Answers database and told to make a Feature Request?

There's a video attached to his article as well, check it out:
[embedded video]


He also did part one of the DAW vs. DAW series, discussing rendering anomalies, but due to the amount of work involved only Logic, Cubase and Pro Tools were featured in part one. Well worth a look, and very interesting for anyone with an opinion about whether DAWs sound different or not.
by Nip on Tue Mar 12, 2019 8:54 am
Using anything involving stretching, modulation or automation, I would expect anomalies.
Somewhere in the settings there is also an automation resolution option - that affects things by definition.

Some software will obviously produce more anomalies the more modulation features it offers.

By definition, modulation effects like chorus sound fatter because they generate harmonics that were not there from the start - meaning distortion.
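To see what I mean, here is a minimal sketch (assuming Python with numpy - this is the general principle, not any particular plugin): wobbling the level of a pure tone creates sidebands, i.e. frequencies that were not in the source at all.

[code]
# Amplitude-modulating a 1 kHz sine adds sidebands at 1 kHz +/- 5 Hz.
import numpy as np

fs = 48000
t = np.arange(fs) / fs                          # one second of samples
carrier = np.sin(2 * np.pi * 1000 * t)          # 1 kHz test tone
wobble = 1.0 + 0.5 * np.sin(2 * np.pi * 5 * t)  # slow, chorus-like level wobble

spectrum = np.abs(np.fft.rfft(carrier * wobble * np.hanning(fs)))
freqs = np.fft.rfftfreq(fs, 1 / fs)
print(freqs[spectrum > spectrum.max() / 100])   # bins near 995, 1000, 1005 Hz
[/code]

A volume fade is the same math, just much slower modulation, so the sidebands hug the carrier far more tightly.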

I stopped using Reaper partly because of the horrible things it does when anything triggers resampling - like playback speeds other than 1.0000. I discovered it by accident, you might say: I run projects at 48k and had a second tab with reference audio at 44.1k. Switching over to the reference audio to compare, I found it went over - by 1.6 dB. (Reaper does not switch sample rate when you change tabs; it just resamples instead.)

I reported it as a bug - and all I got was an answer along the lines of "these are intersample peaks" and things like that. But this is what math can do to your audio. I started going through the various math options for stretch/resampling, and as I increased the quality (and the CPU cost), the overs came down to 0.6 dB. But the original does not go over at all - so there you have it.
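For anyone wondering what "intersample peaks" means in practice, here is a rough sketch (assuming Python with numpy/scipy - not Reaper's actual resampler): the stored samples can sit well below where the reconstructed waveform actually goes, and resampling exposes that.

[code]
# Samples of an fs/4 tone at 45-degree phase all land at 0.707 (-3 dBFS),
# but the continuous waveform between them reaches 1.0 (0 dBFS).
import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * (fs / 4) * t + np.pi / 4)

print("sample peak: %.2f dBFS" % (20 * np.log10(np.abs(x).max())))  # ~ -3.01
y = resample_poly(x, 4, 1)  # 4x oversampling approximates reconstruction
print("true peak:   %.2f dBFS" % (20 * np.log10(np.abs(y).max())))  # ~ 0.00
[/code]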

It's basically self-inflicted if you use these options - and I would not worry about it.

You worry about these things, and then you use analog warmifiers - and again, that is what happens there: anomalies and added harmonics that were not there from the start.

These levels in the video - are they even in the audible range?
- Yes, you can see them in a graph - but what does that tell you?

Probably below -100 dB or something; I did not hear a comment on exactly how much.

So just use your ears - if it sounds good, and you have spent enough time mixing to be able to hear anomalies you don't want, just use what sounds good to you.

If you are on a deadline and find you need to stretch music a bit to fit a scene, it's your choice whether to redo it or trust the math that does the stretching for you.

Here you can check out what SRC alone does to your audio:
http://src.infinitewave.ca/
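If you want to run the same kind of test yourself, a minimal imitation (a sketch assuming Python with scipy - infinitewave's exact method may differ) is to resample a sine sweep and look for anything in the spectrogram that is not the sweep itself:

[code]
# Resample a 20 Hz - 48 kHz log sweep from 96k to 44.1k and inspect the result.
import numpy as np
from scipy.signal import chirp, resample_poly, spectrogram

fs_in, fs_out = 96000, 44100
t = np.arange(8 * fs_in) / fs_in
sweep = chirp(t, f0=20, t1=8.0, f1=fs_in / 2, method="logarithmic")

converted = resample_poly(sweep, fs_out, fs_in)  # the SRC under test
f, tt, Sxx = spectrogram(converted, fs=fs_out, nperseg=4096)
# A clean converter leaves one rising line; aliasing shows up as extra
# mirror-image traces folding back down from Nyquist.
[/code]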

Cubase was probably the worst in class for over 10 years - in Cubase 10 they upgraded the SRC algorithms to what Wavelab has had forever. I never used SRC in Cubase; I rendered at 48k, same as the project, and used r8brain to do the SRC.

*** Windows 7 Pro * i7-860 2.8 Ghz 16 GB ram * RME HDSP 9632+AI4S+Audient ASP 800 * GT730 ***
by Skaperverket on Tue Mar 12, 2019 9:39 am
Thanks for answering, Nip. I understand what you mean. It's all relative, of course, and the performance and composition are naturally the most important factors, but all these things add up. I'd still like PreSonus to fix this. Admiral Bumblebee writes in the article that the distortion is clearly audible in the Reason file, and the Logic, S1 and Waveform files look a lot worse than the Reason one.
by matthewgorman on Tue Mar 12, 2019 11:01 am
Didn't watch the vid yet, but curious: Did they have MixFX on?

Matt

Lenovo ThinkServer TS140 Win 10 64bit, 8GB RAM, Intel Xeon
Lenovo Thinkpad E520, Windows 7 64bit, 8 GB RAM, Intel i5 Processor

S1Pro V5
by Nip on Tue Mar 12, 2019 11:38 am
If he had bothered, as on Infinitewave, to zoom in for even one short frame to show what level each color represents, he would be somewhat trustworthy. Not one frame did he care to show.

He claims you can actually hear it - the color scale of the analyzer display could tell us.
The mix of cyan to amber on Studio One - what level was that?
How much did this guy get from the analyzer software devs for promoting this?

He virtually namedropped every DAW out there - without a proper comment on any of them.
I would not go by one YT channel and think I have the truth about anything.

Did he do an SSL4000 console equivalent or something?
For Studio One, did he go with the 64-bit engine?
Did he have the tempo at 120 in the audio file as well as in Studio One when he imported?
Was the option to stretch audio on import, like in Song Setup, on?
If the tempo was different at import, Studio One will stretch on import - and then stretch again when you change the tempo?

I had to uncheck that option for stretching audio files to song tempo.
Did he have it on?

Many things are on by default in a DAW that might be better turned off.

Did he try realtime rendering, just moving the fader down by hand as a test, and compare - is it automation at all that is the culprit?
It should work equally well to test that way.

I have more questions than answers. He clearly has an agenda, it seems to me.

And there is no audio in the video, since YT is so awful - but that was his chance: if YT made this audible, really adding distortion with its low bitrates, it would possibly be even more audible through that conversion. At least that's what I think - it would prove his point even more. But no, he did not - why?

Is it because even through the destruction of YT it is not audible?

If you think there is something to be fixed, put a question to PreSonus via AskAQuestion at the top of the forum.
And don't just refer to a video - explain clearly what it is you want fixed.

I have a lot of professional recordings that I feel suck in terms of audio quality in the production - vocal harmonies mixed in a way that clearly has cancellation problems. This passed all the professional ears through mastering and out to the consumer.

I have plenty of examples among remasters of old stuff I have on vinyl - these too passed all the professional ears through mastering. One Led Zeppelin III where the vocals are so distorted that I wonder how they missed it - almost bitcrushing, and it's not there on the vinyl, which I also have.

Sometimes I really understand why they put lo-fi effects in there, with vinyl static and such - so as not to have to pretend it's high fidelity.

So yes, digital has flaws if we push it into every corner it offers - it goes way beyond what analog can do, since analog is constrained by natural limitations. You can squeeze in frequencies at the high end that make every tweeter in our homes squeal. Math can do anything to our precious audio.

I use the DAW as a tape recorder, as straight up as possible. I would not even think of all this fancy modulation stuff, or stretching, or tempo-following audio for that matter.
Last edited by Nip on Tue Mar 12, 2019 12:36 pm, edited 1 time in total.

*** Windows 7 Pro * i7-860 2.8 Ghz 16 GB ram * RME HDSP 9632+AI4S+Audient ASP 800 * GT730 ***
by Skaperverket on Tue Mar 12, 2019 12:23 pm
matthewgorman wrote: Didn't watch the vid yet, but curious: Did they have MixFX on?


I'm pretty sure he did not. Admiral Bumblebee is very knowledgeable when it comes to these things, and has used all these DAWs pretty extensively, so it would surprise me. Especially since it's off by default. Also, Logic, which performs almost as badly, does not have MixFX.

Nip wrote: He claims you can actually hear it - the color scale of the analyzer display could tell us.
The mix of cyan to amber on Studio One - what level was that?
How much did this guy get from the analyzer software devs for promoting this?


I'm not sure what level it was. But I am very confident that he did not get any money or anything else from any analyzer software developers. But thanks for asking, Nip. I'll try to respond to more of your questions and statements.

Nip wrote: He virtually namedropped every DAW out there - without a proper comment on any of them.
I would not go by one YT channel and think I have the truth about anything.


He clearly states exactly this in his videos and articles: don't trust people on YT, test it yourself, it's easy - he even shows you how to test it yourself.

Nip wrote: Did he do an SSL4000 console equivalent or something?


Not that I'm aware of. This is a test of automated native volume fader movements in DAWs.

Nip wrote: For Studio One, did he go with the 64-bit engine?


I am not sure, but it should not matter. In my opinion something as simple as a change of volume should be clean and without distortion in a DAW.

Nip wrote: Did he try realtime rendering, just moving the fader down by hand as a test, and compare - is it automation at all that is the culprit?
It should work equally well to test that way.


I am not sure. But different DAWs react differently when performing these automated fades. He is comparing the results between different DAWs.

Nip wrote: I have more questions than answers. He clearly has an agenda, it seems to me.


His agenda seems to be to find out whether there are differences between the various DAWs in the resulting audio. And perhaps to educate users, or help them educate themselves. And to give developers a heads-up.

Nip wrote: And there is no audio in the video, since YT is so awful - but that was his chance: if YT made this audible, really adding distortion with its low bitrates, it would possibly be even more audible through that conversion. At least that's what I think - it would prove his point even more. But no, he did not - why?

Is it because even through the destruction of YT it is not audible?


He does explain this a little. I think it is because some artifacts will be audible via YT while others won't. He also encourages us to do the tests ourselves. Remember that a sine is a very simple signal with relatively simple intermodulation distortion artifacts. Once a complex audio signal like music passes through this, the resulting intermodulation will be a lot more complex and chaotic.
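For anyone who wants to run his test, the core of it is just a null test. A sketch (assuming Python with numpy and soundfile; "daw_fade.wav" is a hypothetical mono render you make yourself of a 1 kHz sine with a full-length linear fade):

[code]
# Subtract an ideal fade from the DAW's render; whatever remains is what
# the DAW added. Assumes a mono file and a test tone starting at phase zero.
import numpy as np
import soundfile as sf

daw, fs = sf.read("daw_fade.wav")   # hypothetical: your own DAW render
n = np.arange(daw.size)
ideal = np.sin(2 * np.pi * 1000 * n / fs) * np.linspace(1.0, 0.0, daw.size)

residual = daw - ideal
print("added artifacts peak at %.1f dBFS"
      % (20 * np.log10(np.abs(residual).max() + 1e-12)))
[/code]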

Nip wrote: If you think there is something to be fixed, put a question to PreSonus via AskAQuestion at the top of the forum.
And don't just refer to a video - explain clearly what it is you want fixed.


Thanks! How would you word this to make it clearer than the video does?

Nip wrote: I have a lot of professional recordings that I feel suck in terms of audio quality in the production - vocal harmonies mixed in a way that clearly has cancellation problems. This passed all the professional ears through mastering and out to the consumer.

I have plenty of examples among remasters of old stuff I have on vinyl - these too passed all the professional ears through mastering. One Led Zeppelin III where the vocals are so distorted that I wonder how they missed it - almost bitcrushing, and it's not there on the vinyl, which I also have.

Sometimes I really understand why they put lo-fi effects in there, with vinyl static and such - so as not to have to pretend it's high fidelity.

So yes, digital has flaws if we push it into every corner it offers - it goes way beyond what analog can do, since analog is constrained by natural limitations. You can squeeze in frequencies at the high end that make every tweeter in our homes squeal. Math can do anything to our precious audio.

I use the DAW as a tape recorder, as straight up as possible. I would not even think of all this fancy modulation stuff, or stretching, or tempo-following audio for that matter.


Some people work with classical music, mastering or audio archiving, where a pristine signal path is desired. Something as seemingly basic as volume automation especially should be clean, don't you think?
by Jemusic on Tue Mar 12, 2019 12:27 pm
Change the meter scale in Sonic Visualiser and all these results look completely different. Studio One can look perfect depending on what meter scale you choose. He has not explained why he used the meter scale he did. Read the manual for Sonic Visualiser carefully as well.

The fact is, it all really depends on how they all sound doing the fade-out. I bet they all sound the same. You can get all carried away with what sort of distortions arise in any system. Tape has 1 to 2% distortion involved. If you put a square wave into a reel-to-reel tape machine, you will be horrified at what comes out. Yet it worked for a very long time. And sounded great too. In many other areas, most modern DAWs have levels of performance that are incredible.

You have got to put things into perspective: how loud the artefacts are compared to the signal. At very low levels the signal gets quiet, and yes, the artefacts may increase relative to the signal. But after listening to the signal at a reference monitoring volume, e.g. 85 dB SPL, what can you hear 50 dB down? Not much.

Specs i5-2500K 3.5 Ghz-8 Gb RAM-Win 7 64 bit - ATI Radeon HD6900 Series - RME HDSP9632 - Midex 8 Midi interface - Faderport 2/8 - Atom Pad/Atom SQ - HP Laptop Win 10 - Studio 24c interface -iMac 2.5Ghz Core i5 - High Sierra 10.13.6 - Focusrite Clarett 2 Pre & Scarlett 18i20. Studio One V5.5 (Mac and V6.5 Win 10 laptop), Notion 6.8, Ableton Live 11 Suite, LaunchPad Pro
by Nip on Tue Mar 12, 2019 1:07 pm
Skaperverket wrote: He clearly states exactly this in his videos and articles: don't trust people on YT, test it yourself, it's easy - he even shows you how to test it yourself.


And did you do that - before coming here?
You put all your trust in this guy and defend him in every way - on what grounds?
He lives on "look what I found!"

Nip wrote: If you think there is something to be fixed, put a question to PreSonus via AskAQuestion at the top of the forum.
And don't just refer to a video - explain clearly what it is you want fixed.

Skaperverket wrote: Thanks! How would you word this to make it clearer than the video does?

Because the devs won't look through a video to see whether what you claim is anything to bother with.
And nobody voting on requests will either - I know I don't.
If people explain things in a short and consistent way, I might vote.

And make sure every setting is OK, and turn off stretching on import in Song Setup - doing this in the wrong order would stretch the test pattern twice before it is even rendered.

If, like me, you happened to save a default for new projects with the tempo at 108, new projects are created with 108 as a start.

He said you must set it to 120 bpm, but that must be done first, before importing, if the default of stretching audio on import is on. If the audio file is 120 and the project starts at 108, Studio One stretches the audio on import. Then, when you change the project to 120 bpm, the audio is stretched again.

There you have plenty of anomalies before rendering even starts. I don't think people realize the amount of stuff you insert into audio doing this. It's not in your face at moderate use, but it can become audible when processed further. You see it clearly in analyzer software, though.

Explain to them how you did this test. How to reproduce it.

You trust this guy, but who else does?
To me he is just one guy, out of millions of others, trying to make a living by making everybody raise their eyebrows.

We must not be so gullible that we just buy everything we watch.

These are the days of the internet - nothing is regulated, though they try.
There is no editor-in-chief responsible for everything put out there, able to be prosecuted if something is illegal.
In the days of printed media, the editor-in-chief was responsible for everything published.
The same went for TV and radio.

Politicians are trying to catch up with legislation, but will probably destroy the entire idea of the internet in doing so.
Last edited by Nip on Tue Mar 12, 2019 1:27 pm, edited 1 time in total.

*** Windows 7 Pro * i7-860 2.8 Ghz 16 GB ram * RME HDSP 9632+AI4S+Audient ASP 800 * GT730 ***
by Jemusic on Tue Mar 12, 2019 1:26 pm
When you mouse over the signal at, say, -14 dB RMS, the artefacts are around -60 dB in my results. Even when the signal is down at -40 dB, the artefacts are still around -60 dB - i.e. 20 dB below the signal. So it won't be very audible. If the signal is at 85 dB SPL at its normal volume in the room, then when the signal is 40 dB down in the fade it will only be at 45 dB SPL in the room. That is damn quiet. And the artefacts will be at 25 dB SPL (lower than the ambient noise level in your room!).

What this means is that if you are automating at normal track reference levels and making changes of 3 to 6 dB etc., you are not going to hear any artefacts. End of story. He says you can hear it. In a blind test comparing various DAWs fading from a reference level (at 85 dB SPL) down to zero, this guy would not have a hope in hell.
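Spelling out that arithmetic (nothing assumed beyond the 85 dB SPL monitor calibration above):

[code]
# Room SPL tracks dBFS one-for-one once the monitor level is calibrated.
monitor_ref_spl = 85   # dB SPL with the signal at the reference level
signal_dbfs = -40      # signal late in the fade, relative to reference
artefact_dbfs = -60    # measured artefact level, relative to reference

print("signal in room:    ", monitor_ref_spl + signal_dbfs, "dB SPL")    # 45
print("artefacts in room: ", monitor_ref_spl + artefact_dbfs, "dB SPL")  # 25
[/code]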

Specs i5-2500K 3.5 Ghz-8 Gb RAM-Win 7 64 bit - ATI Radeon HD6900 Series - RME HDSP9632 - Midex 8 Midi interface - Faderport 2/8 - Atom Pad/Atom SQ - HP Laptop Win 10 - Studio 24c interface -iMac 2.5Ghz Core i5 - High Sierra 10.13.6 - Focusrite Clarett 2 Pre & Scarlett 18i20. Studio One V5.5 (Mac and V6.5 Win 10 laptop), Notion 6.8, Ableton Live 11 Suite, LaunchPad Pro
by Lawrence on Tue Mar 12, 2019 1:52 pm
Long story short: if you're stressing over things that you can't hear, it's a waste of time. An oldie but a goodie - grab some popcorn, it's a long video.

[embedded video]
by Jemusic on Tue Mar 12, 2019 2:03 pm
I mentioned in one of my previous posts that if you are automating around the track reference level and making the usual level changes, you won't hear any artefacts.

I just did a test fading from -14 dB RMS to zero in Cool Edit Pro (now Adobe Audition). I got a totally perfect result - literally zero artefacts, even right at the end of the fade.

This is how I fade out a full mix, by the way (which I don't do very often, but I do fade endings this way, either fast tight endings or slower ring-outs etc.). I never do this in the DAW. I do it after mastering, right at the very last minute, in the editor.

Just goes to show that the editors do it perfectly, and they are in fact the best place to do this type of fade - not the DAW. That is the only thing this particular test actually confirms for me.

Specs i5-2500K 3.5 Ghz-8 Gb RAM-Win 7 64 bit - ATI Radeon HD6900 Series - RME HDSP9632 - Midex 8 Midi interface - Faderport 2/8 - Atom Pad/Atom SQ - HP Laptop Win 10 - Studio 24c interface -iMac 2.5Ghz Core i5 - High Sierra 10.13.6 - Focusrite Clarett 2 Pre & Scarlett 18i20. Studio One V5.5 (Mac and V6.5 Win 10 laptop), Notion 6.8, Ableton Live 11 Suite, LaunchPad Pro
by Nip on Tue Mar 12, 2019 2:31 pm
Jemusic wrote: Just goes to show that the editors do it perfectly, and they are in fact the best place to do this type of fade - not the DAW. That is the only thing this particular test actually confirms for me.


Thanks for your tests and sharing.

In Studio One:
Is it the same doing a realtime render with automation?
Is it the same manually fading the main fader in realtime, without automation?


What you are saying pretty much confirms the OP is right that this has to be looked into.

I'm thinking of narrowing down whether it is actually the automation, or the processing of any fader change at all.

For MIDI there is a CC interval for automation - default 10 ms.

Is there something similar for audio, somewhere to alter?

Some interpolation done at too-large intervals, or something.
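To picture what I mean - purely an assumption about how an engine might work, not a claim about Studio One's actual code - updating the gain once per block instead of per sample leaves a stepped residue:

[code]
# Per-sample fade vs. the same fade with gain held constant over each
# 512-sample block; the difference between them is the "zipper" artifact.
import numpy as np

fs = 48000
n = np.arange(10 * fs)
sine = np.sin(2 * np.pi * 1000 * n / fs)
gain = np.linspace(1.0, 0.0, n.size)

smooth = sine * gain
block_gain = np.repeat(gain[::512], 512)[: n.size]  # hold gain per block
stepped = sine * block_gain

residual = stepped - smooth
print("residual peak: %.1f dBFS" % (20 * np.log10(np.abs(residual).max())))
[/code]

With these numbers the residual peaks around -59 dBFS - close to the fixed level Jemusic measured, though that proves nothing about what any DAW actually does.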

As for your test - that the level of artifacts is fixed regardless of volume level is weird as well.

This is what dithering does!!!!! - is that box unchecked in preferences?

If doing things in realtime, I gather shortcuts have to be taken - there is a lot to process before the next sample goes out.

But with offline rendering, the DAW has all the time in the world to do things right.

*** Windows 7 Pro * i7-860 2.8 Ghz 16 GB ram * RME HDSP 9632+AI4S+Audient ASP 800 * GT730 ***
by leosutherland on Wed Mar 13, 2019 7:52 am
Personally, I use a lot of heavily reverbed fades (the fading-into-the-far-distance-in-a-vast-space type of fade, if you understand what I mean) - I have not heard any artifacts; all the way down, the fades sound perfect.

Ahem, Halo hastens to add that the music probably isn't perfect, but the sound is :D

...said Halo

Studio One Pro v4.6.2 / v5.5.2 / v6.5.2
Serum, Diva, Repro, Synthmaster, Syntronik Bully, MTron Pro, Kontakt 6/7, AIR synths, Cherry Audio synths, Battery 4, PPG Wave 3.V, Generate

3XS SQ170 Music Studio PC
Windows 10 x64 (22H2)
i7 8700 Hexcore 3.2GHz, 16GB, 2TB 970 EVO+ M.2 NVMe SSD + 1TB SATA HD
Scarlett 2i4 (G2), Korg Taktile 25, Faderport 2018, ATOM


Beyerdynamic DT990 Pros, JBL 305P II speakers
by dgkenney on Wed Mar 13, 2019 10:43 am
I knew it. There had to be a reason my songs weren't winning Grammys. Now I know why. :punk:

It's all down hill from what's in front of the microphone. - Bob Olhsson

Studio One 6.5x Pro
***Optoplex i7 @ 3.4 ghz 32 gig mem running Windows 10 x64 Pro
*** Levono E520 with Win 10 x64 Pro
***Studio rig - MOTU Ultralite AVB*** Audient ASP/880***
***Mobile rig - Antelope Zen Q***
***Soundtoys/Plugin Alliance/Izotope/Slate/Nomad Factory plugs***
and....Lots of Outboard gear cause Pipeline is your friend
by darrenporter1 on Wed Mar 13, 2019 10:48 am
In 30 years, will we have "vintage emulation" plugins that simulate the artifacts introduced during fades by today's DAWs?


Studio One Professional 5.whatever, Harrison MixBus 32c v.6
i5-8400, 16GB RAM, 512 GB SSD, 2TB HD, Win10 Pro
UA Apollo QUAD, QUAD Satellite, PCIe DUO
FaderPort 8, Softube Console 1, JBL 306P Mk.II Monitors
by matthewgorman on Wed Mar 13, 2019 12:07 pm
Just for discussion, but at the end of it all, what's the difference? Going back to the old days, every preamp, console, and tape machine added artifacts (for lack of a better term) that were not present in the "original" signal. Things like crosstalk, saturation, etc. were considered "character", and many times had a bearing on where an artist chose to record. It's only in the computer age that this became an issue.

Matt

Lenovo ThinkServer TS140 Win 10 64bit, 8GB RAM, Intel Xeon
Lenovo Thinkpad E520, Windows 7 64bit, 8 GB RAM, Intel i5 Processor

S1Pro V5
by mikemanthei on Wed Mar 13, 2019 3:56 pm
Lawrence wrote: ... An oldie but a goodie - grab some popcorn, it's a long video.


Thanks for that. Yup, it's a classic. Ethan has been a rare voice of sanity in a world of hype. He pops into my head every time I hear the word "null", even when it's not used in the audio arena.

Studio One v2, 3, and 4 Professional
Presonus 1818VSL / Focusrite 18i20 / StudioLive 32S
24-core Ryzen 9. 32 GB RAM
Tascam US-2400
Faderport 8
StudioLive 32S
by sirmonkey on Wed Mar 13, 2019 9:49 pm
Of all the things I worry about, this is right around the 64,382nd item on my list.

And my list only has 25 or 30 things on it.

Atari 5200, 64K RAM S1PRO Radio Shack Cassette Recorder w/internal Mic, and too many plugins.
by Skaperverket on Thu Mar 14, 2019 12:35 am
Is there really no one else who finds this topic interesting? Perhaps especially in light of years and years of discussions about DAW A sounding better than DAW B, etc. I remember PreSonus using Teddy Riley in their promotional material, claiming that S1 sounded better than the other DAWs (probably not the sound of fades, though), and plenty of jokes have been made on the topic. Personally, I thought that promotion was a bit over the top. But, then again, people are sensitive to different things: some identify pitch fluctuations, some pitch correction, some the quality of transients, some dynamic envelopes, some harmonics, aliasing or distortion, some timing or tempo variations, some frequency balance, etc. Just because one person cannot hear a phenomenon does not mean that other people can't, or that it's not there. It's amazing what some people are able to identify (repeatedly).

Regarding the prioritizing of features and bug fixes, I think it is a good idea to fix this one. It also affects the reputation of the software and the developer.

Anyway, the responses here tell me that a Feature Request is perhaps not the most efficient way of getting this fixed, as it would probably just get a few votes. We did, however, manage to get PreSonus to fix the issue where automation timing changed when you switched buffer settings. It just took a while. Now we can trust that the timing is right.

Ari, are you here?
by Nip on Thu Mar 14, 2019 1:47 am
Isn't it a good idea to really establish whether there is a problem first?

I downloaded and installed it on my laptop yesterday, so as not to clutter the DAW machine with software that does not belong there.

a) Import the file provided by the YT channel - 16-bit.
What happens when you import 16-bit into a DAW and process it as 32-bit float?
As I see it, you have to interpolate to make a perfect fade, since compared to 24-bit, 16-bit already has 256 times fewer levels.
Every step in the source file spans 256 steps at the resolution the processing has to handle.
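The step arithmetic, for the record (just powers of two, nothing DAW-specific):

[code]
# Each 16-bit quantisation step spans 256 steps at 24-bit resolution,
# so a "perfect" fade of 16-bit material must interpolate between them.
print(2**24 // 2**16)  # 256
[/code]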

I happened to have Sonar on the laptop, and exported as 32-bit float.
But I did not do as suggested in the YT video and create a fade right from the start.
I left a couple of seconds untouched, and then faded down to infinity.

Looking at this, I saw no change whatsoever between what was faded and what was not.
I could see a weak blue line at the very node where the fading started - otherwise it looked the same to me, all the way through the fade.
With the reservation that I may not have zoomed in enough in Sonic Visualiser.
And I did not go down to 16-bit again - I exported as 32-bit float.

I will continue - this is work in progress - and I will see what Studio One does.
And I'll be sure not to have dither applied in Studio One (the default in preferences). I suspect - given what Jemusic got, a fixed -60 dB level of artifacts regardless of audio level - that dither is involved, since dither inserts a fixed level of noise regardless of signal. Dither is just there so our ears don't spot the truncation when audio becomes 16-bit again. If you are doing measurements, that dither noise shows up as artifacts.
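Where that dither noise actually sits is easy to check - a sketch assuming Python with numpy and standard TPDF dither of +/-1 LSB (what Studio One applies internally is an assumption I cannot verify):

[code]
# RMS level of 16-bit TPDF dither (triangular PDF, +/-1 LSB peak).
import numpy as np

lsb = 1.0 / 2**15   # one 16-bit step, with full scale = 1.0
dither = (np.random.rand(10**6) - np.random.rand(10**6)) * lsb
print("%.1f dBFS RMS" % (20 * np.log10(np.std(dither))))  # around -98 dBFS
[/code]

That sits far below the -60 dB region, so if the artifacts really are fixed at -60, dither alone would not explain them - something to verify rather than assume.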

My current assumption is that the YT channel is not aware of what he is doing. "Hey, look what I found."
He has 600+ subscribers, so he is not eligible for anything from Google yet (I think the threshold is 1000 subscribers), so he points to Patreon, as I recall, hoping for contributions.
He is new to this - even though he seems like a good guy.

*** Windows 7 Pro * i7-860 2.8 Ghz 16 GB ram * RME HDSP 9632+AI4S+Audient ASP 800 * GT730 ***
