Believing people on the internet ... always take it with a grain of salt. I can only base my opinions on the facts I know :)

Sure would be nice if it were different. I'd LOVE to get a Threadripper and just laugh at the CPU use :lol:

Bye......:roll:
by PreAl on Wed Jul 14, 2021 4:35 am
reggie1979beatz wrote: That's fine, but I've heard too many conflicting reports to just accept it. On my machine with the plugs I use, it's certainly not evenly distributed. No, I'm not saying it doesn't use other cores, but it CLEARLY uses the first one the most. Unless Task Manager is just completely wrong.


An interesting experiment is to get Studio One to run without using the first core (if you're using hyperthreading, have it ignore the first core's virtual sibling as well). You then get a more realistic picture, away from the inner workings of the OS, which generally uses the first core, and the OS can run without being disturbed (I use "disturbed" for want of a better term).

I had to do this on my laptop because ACPI.SYS (and its associated drivers) killed my latency whatever I did (I managed to work this out thanks to LatencyMon). It now works great.

Once you are not sharing a core with the OS, the load across cores will appear more evenly balanced (at the expense of losing a core, but that's no big deal unless you have a low-spec machine).

Google: "CPU affinity". When I first discovered this a very long time ago, it was a game changer on how I optimized my environment.

Intel i9 9900K (Gigabyte Z390 DESIGNARE motherboard), 32GB RAM, EVGA Geforce 1070 (Nvidia drivers).
Dell Inspiron 7591 (2 in 1) 16Gb.
Studio One Pro 6.x, Windows 11 Pro 64 bit, also running it on Mac OS Catalina via dual boot (experimental).
Presonus Quantum 2626, Presonus Studio 26c, Focusrite Saffire Pro 40, Faderport Classic (1.45), Atom SQ, Atom Pad, Maschine Studio, Octapad SPD-30, Roland A300, a number of hardware synths.
by stephenhoubart on Thu Jul 22, 2021 4:39 pm
Having performed extensive testing of Studio One 5 on a multi-core Windows machine, I have found the following to be true.

Once an effect or instrument is loaded, Studio One often forces that plugin to run on a single core. If there are multiple plugs stacked on the same channel, all those plugs can be forced onto the same single core. If there are multiple plugs across several channels, these may well be distributed across multiple cores. Compared with a REAPER installation on the same machine, Studio One is thread-crippled in its handling of effect and instrument plugs because it forces instances to run on single cores.

The problem of forced single-core usage only becomes a deal-breaker with heavily loaded instruments. Generally, effect plugs can be bounced down, if need be multiple times, to evade the limitation of being forced onto a single core, particularly high-CPU effects such as most of the Acustica Audio range. However, some VST instruments with high loads, particularly when handling high-resolution synthesis at high sample rates and high polyphony (some heavyweight Kontakt instruments, AAS Chromaphone 3, Serum and Arturia Pigments, to name but four), cannot be handled this way. You just can't design a sound when you must bounce down after each and every change because the plug needs more than a single core can deliver. And the comparison with REAPER shows that this is unambiguously a Studio One limitation: REAPER does not force plugs onto a single core once loaded.
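This is easy to check yourself rather than taking my word for it: load a single heavy instrument, hold a dense chord, and watch the per-core figures. Task Manager's per-core graphs will do, or a rough sketch like the one below (Python with psutil; note it shows system-wide load, not just Studio One's, and the 85% threshold is arbitrary):

Code:

import psutil

# Print per-core CPU usage once a second. With one heavy plugin playing you can
# see whether a single core sits near 100% while the others stay mostly idle.
try:
    while True:
        per_core = psutil.cpu_percent(interval=1.0, percpu=True)
        line = " ".join(f"{i}:{p:5.1f}%" for i, p in enumerate(per_core))
        hot = [i for i, p in enumerate(per_core) if p > 85.0]  # arbitrary threshold
        print(line, "| hot cores:", hot if hot else "none")
except KeyboardInterrupt:
    pass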

A good solution when using instruments is to load a client REAPER, Bidule or other ReWire client instance from within Studio One via a ReWire instrument. Because the client REAPER instance runs in a separate Win64 process, Studio One plays no part in its core allocation and the load of most heavy-duty instruments is correctly shared out across the available cores. The limitation is that a real-time recording must be made of the ReWired instrument, as an offline bounce often induces core failures and sound break-up in Studio One.

ReWire can only be used with instruments. Some users on GearSpace report that using AudioGridder to host a 'local' VST grid (also in a separate Win64 process) offers uncrippled thread handling of all gridded plugs.
by TonalDynamics on Fri May 26, 2023 11:06 am
Funkybot wrote:
SkylineUK wrote: When using SampleTank 3 I find I can only use lower latencies (e.g. for a guitar processor plugin) if I load a separate instance of ST3 for every MIDI track, instead of one instance using channels within ST3; otherwise crackling occurs.


This is a fairly complex subject. Let me try to explain some basics, as best as I understand them as a fellow layperson...

...

Hope that helps explain some of how all of this works. It's not easy to wrap your head around and at times can be counterintuitive. The best thing is to have a combination of the fastest per-core speed you can get (to prevent per-core overloads as much as possible) and a large number of cores (to allow lots of tracks to run on separate cores), along with projects that are set up to favor parallel processing and load balancing by the DAW.


Hey Funky, I know this is an old post and you only got two thanks for this topic, but you deserved a hundred. That's about the best summation of how processing actually works inside S1 that I've ever seen, and I've seen a lot of posts. Frankly it needs to be stickied for all to see, because the information is critical enough that it should affect how Studio One power users set up their entire mix templates and/or workflows.
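To put some rough numbers on the per-core point for anyone skimming: when a serial FX chain (or one heavy instrument) has to finish on a single core every buffer, what glitches you is the busiest core, not the average. The plugin loads below are completely made up, purely to illustrate:

Code:

# Toy model: each channel's serial chain must complete on one core per buffer.
# Loads are invented percentages of a single core's time per audio buffer.
chains = {
    "HeavySynth": [110],          # one monster instrument -> over one core's budget
    "Vocal":      [10, 15, 5],    # serial FX on one channel share that core
    "Drums":      [20, 10],
    "Guitar":     [25, 10, 10],
    "Bus FX":     [15, 10],
}

per_core = {name: sum(loads) for name, loads in chains.items()}
average = sum(per_core.values()) / len(per_core)

print("Per-core load:", per_core)
print(f"Average across {len(per_core)} cores: {average:.0f}%  (meter looks comfortable)")
print(f"Busiest core: {max(per_core.values())}%  (over budget -> crackles and dropouts)")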

Funkybot wrote:


6. There are other factors as well. Does the DAW use a dual-buffer design or anticipative FX processing? Studio One, when in Green Z mode, will make a copy of any FX chains on input-monitored channels and run those at a lower latency at the expense of CPU. Great for latency. Terrible for CPU.



Incidentally, do you have any idea why in the hell the green-Z is designed this way? Because to me it makes zero sense. When we activate green-Z, monitor a track, and play it back, the 'project buffer' channel is muted whenever the low-latency monitoring runs at a smaller buffer size than the project (dropout protection set to anything above 'minimum'). So there should in theory be ZERO processing on that channel while green-Z low-latency monitoring is on, yet it literally doubles the CPU overhead.

Even if there does have to be processing on both channels (the low-latency version and the project-buffer version), it should certainly NOT be on the same core, because the muted copy is entirely taken out of playback!

Like am I wrong here or is this just a horrible implementation of this otherwise golden feature?
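For anyone trying to picture why that dual-buffer copy is "great for latency, terrible for CPU", here's a back-of-the-envelope sketch. The sample rate, buffer sizes and the idea that the monitored chain's load roughly doubles are my assumptions, not anything Presonus has documented:

Code:

# Back-of-the-envelope: buffer latency on the project path vs the low-latency
# monitoring path, plus the rough cost of running the monitored chain twice.
SAMPLE_RATE    = 48_000   # Hz (assumption)
PROJECT_BUFFER = 2048     # samples, set by dropout protection (assumption)
MONITOR_BUFFER = 64       # samples, green-Z low-latency path (assumption)

def buffer_ms(samples, rate=SAMPLE_RATE):
    return 1000.0 * samples / rate

chain_load = 30.0  # % of one core for the monitored FX chain (made up)

print(f"Project buffer: {buffer_ms(PROJECT_BUFFER):5.1f} ms per buffer")
print(f"Monitor buffer: {buffer_ms(MONITOR_BUFFER):5.1f} ms per buffer")
# If the same chain runs once on each path, its work roughly doubles:
print(f"Monitored chain on both paths: ~{chain_load:.0f}% + ~{chain_load:.0f}% "
      f"= ~{2 * chain_load:.0f}% of a core")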

--------------------------------------------------------------------
Composer for Media, Producer, Noodler.

System Specs:

Studio One Professional v6.5

Windows 10 LTSC 21H2, i9 10850K, 128GB RAM, 6TB SSD + 6TB HDD, RME Fireface 800
