Just wondering: do different sample rates for the (MIDI) interface and the song affect the MIDI timing accuracy? What does S1's resampling do to the MIDI clock data?
niles wrote:
> admiralbumblebee wrote:
> > I went through and did more testing.
> I bet you tested with an equal or bigger buffer than 256 samples. Here's more on that: viewtopic.php?p=144701#p144701

Yes, that's correct. However, all tests were done with the same audio devices, latest drivers (which vary between OSes), same projects, same buffer sizes, same sample rates, etc. So the macOS and Windows behaviours are different regarding the larger buffer size affecting it.
admiralbumblebee wrote:
> Windows - Fresh install, fresh installs of Studio One. Instances of Studio One were not run in their own systems, so they may have automatically imported settings from earlier versions.

This result kind of begs the question: are you and roland1 (the OP) discussing the same thing? I find it interesting that he's experiencing such persistent inaccuracies when monitoring at 1024 samples but your current findings say otherwise on Windows.

roland1 wrote:
> I switched over to a 64 audio buffer size, the MIDI notes began to line up almost perfectly with my audio recorded hits.

So if everything is fine at a buffer size of 64 (which can be achieved easily with Green Z monitoring), why do all these tests seem to revolve around monitoring at high buffers of 512, 1024, and 2048? Is this somehow a requirement for your and roland1's workflows, or is it purely for academic purposes?
robertgray3 wrote:
> This result kind of begs the question: are you and roland1 (the OP) discussing the same thing? I find it interesting that he's experiencing such persistent inaccuracies when monitoring at 1024 samples but your current findings say otherwise on Windows.

I completely agree. I mentioned in my post above that there might be some setting on Windows which affects this, or some driver complication. It's also possible that, in my recent testing, this setting was changed by later versions importing settings from earlier versions (also mentioned). On macOS, this is easily reproducible. On Windows it seems to be more complex.

robertgray3 wrote:
> roland1 wrote:
> > I switched over to a 64 audio buffer size, the MIDI notes began to line up almost perfectly with my audio recorded hits.

Everything is not fine at any buffer size. The effect is reduced at lower buffer sizes and higher sample rates, but not eliminated. I began investigating this after 3 different people e-mailed me this year with fuzzy evidence of S1's live MIDI timing. They were using "normal" setups and picked up on it through listening. You can google around and find many people complaining about S1's MIDI timing over the years, and I suspect that this is a significant factor, since the actual recorded MIDI is fine. This doesn't matter for my workflow, but a lot of people look to me when they have issues like this and I think it's important to work as an advocate when I can.
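As an aside, the claim that the effect shrinks at lower buffer sizes and higher sample rates is consistent with simple arithmetic: if a live MIDI event can only be rendered at the next audio-buffer boundary, the worst-case timing error is one buffer's worth of samples. A minimal sketch (these are theoretical bounds, not measurements of Studio One):

```python
def worst_case_jitter_ms(buffer_samples: int, sample_rate: int) -> float:
    """Worst-case timing error if a MIDI event is only picked up
    at the next audio-buffer boundary: one full buffer."""
    return buffer_samples / sample_rate * 1000.0

for sr in (44100, 96000):
    for size in (64, 256, 1024):
        print(f"{size:5d} samples @ {sr} Hz -> up to "
              f"{worst_case_jitter_ms(size, sr):5.2f} ms")
```

At 64 samples / 44.1 kHz the bound is about 1.45 ms (inaudible for most players); at 1024 samples it is about 23 ms, on the order of the deviations reported in this thread.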
admiralbumblebee wrote:
> So the macOS and Windows behaviours are different regarding the larger buffer size affecting it.

I don't use macOS, so I can't tell. I can only tell what I can measure on Windows, and that's what I told you. Can you do the attached test (preferably 2 cycles per buffer size) and post the movie here? Like this (I did it on 4.6.2 because I don't know if you have 5). That would give me an idea of what happens on macOS.
admiralbumblebee wrote:
> Everything is not fine at any buffer size. The effect is reduced at lower buffer sizes and higher sample rates, but not eliminated.

Yes, and roland1 said in his quote that it was fine at 64. And on my setup (macOS) it is reduced to a level that is fine. Also, you just said there was no jitter in your new tests on Windows, so I'll assume you're only talking about macOS then. Yes, I can see jitter starting at around 512 or so on Mac. At 1024 my round-trip latency is 70 ms, at which point the jitter is the least of my concerns for live input.

admiralbumblebee wrote:
> This doesn't matter for my workflow, but a lot of people look to me when they have issues like this and I think it's important to work as an advocate when I can.

The person you're advocating for said it was fine at 64. I'm really struggling to see the practical reason to monitor an input at 1024. I also assumed all this work you did was out of some concern for your own personal workflow. In an academic sense, any jitter is bad, but in a practical sense, there is going to be an acceptable level/tolerance, right?
Last edited by robertgray3 on Tue Jul 14, 2020 3:30 pm, edited 5 times in total.
roland1 wrote:

Sorry, but you are incorrect here. Explain the fact that in the very first post the Admiral did, he actually was not even recording MIDI at all. He was feeding a live MIDI input signal (from a hardware sequencer) and driving a sound directly from Sample One. This is all about the consistency with which virtual instruments respond - jitter, they call it. The incoming notes were all an exact distance apart, yet Sample One was not outputting the sounds evenly. (Note: NO MIDI recording or playback taking place.)

I do agree, though, that some people may not get the super-accurate MIDI recording accuracy I am getting. This is because I am using a MIDI interface that is driven over the PCI bus. And when I send those accurate MIDI recordings out to an external instrument, I hear playback exactly as I recorded it. But as soon as I insert a virtual instrument, even with my setup, the jitter appears and the playback from the virtual instrument shows a higher-than-normal amount of jitter. Ableton does not perform the same way, though; it is very consistent even with virtual instruments.

However, some people are getting less-than-perfect recording and playback of MIDI data over, say, USB. So in their case it may be a bit of both (which would make things even worse). But even in my case, with perfect recording and playback of MIDI data, I am still hearing and seeing the jitter in the virtual instrument response. It IS virtual-instrument related.
Specs i5-2500K 3.5 Ghz-8 Gb RAM-Win 7 64 bit - ATI Radeon HD6900 Series - RME HDSP9632 - Midex 8 Midi interface - Faderport 2/8 - Atom Pad/Atom SQ - HP Laptop Win 10 - Studio 24c interface -iMac 2.5Ghz Core i5 - High Sierra 10.13.6 - Focusrite Clarett 2 Pre & Scarlett 18i20. Studio One V5.5 (Mac and V6.5 Win 10 laptop), Notion 6.8, Ableton Live 11 Suite, LaunchPad Pro
niles wrote:
> admiralbumblebee wrote:
> > So the macOS and Windows behaviours are different regarding the larger buffer size affecting it.
> I don't use macOS so I can't tell. I can only tell what I can measure on Windows and that's what I told you.

Here's that test on macOS. First, with Dropout Protection at Max, my preferred way of working. Second, with Dropout Protection at Min. I tested from 64 - 4096 because 64 is generally the lowest buffer setting that produces stable overall performance with Dropout Protection. Like I said, I think this is fine. I'm never monitoring live input at > 256 block size - when would that be advantageous?

@niles, since you know more about the history of DP: if I'm not mistaken, there are other reasons for the slight difference between Green Z and non-Green Z when using a MIDI loopback in this case, right?
admiralbumblebee wrote:
> Here's the test as I believe you requested.

No, this is not what I requested. You are switching the device block size while having Dropout Protection enabled. It doesn't make any sense. Please: 44.1 kHz, no Dropout Protection (Minimum), like my video. Let's keep it simple.
@robertgray3: What if you have a MIDI track recorded and it is all quantised to the grid, and you are feeding a virtual instrument that you have not rendered yet, but the session is fairly big, so you need an audio buffer of 1024 or higher in order to get it to play well? The virtual instrument will not generate sounds with perfect timing consistency.
Also, your video shows me you have also missed the point. All you are showing is the delay with increased buffer sizes. What you are not showing is that each sound generated from a virtual instrument does not line up perfectly with its MIDI note-on at a higher buffer setting. Some sounds will be right on the money with the MIDI note-on, others early, some late, etc. - by as much as 15 ms, which is quite noticeable. People here are not understanding the issue fully.
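The "some on the money, others early, some late" pattern is what you would expect if incoming notes are snapped to buffer boundaries while the note spacing is not a multiple of the buffer period. A hypothetical simulation (not a measurement of Studio One) of evenly spaced notes against a 2048-sample buffer clock:

```python
SR = 44100

def quantize_to_buffer(event_times_s, buffer_samples, sample_rate=SR):
    """Delay each event to the next audio-buffer boundary, as a host
    might if live MIDI is only serviced once per buffer."""
    period = buffer_samples / sample_rate
    return [(int(t / period) + 1) * period for t in event_times_s]

# Eight notes exactly 0.5 s apart, arriving at an arbitrary phase
# relative to the buffer clock.
notes = [0.123 + i * 0.5 for i in range(8)]
played = quantize_to_buffer(notes, 2048)
delays_ms = [(p - t) * 1000 for p, t in zip(played, notes)]
print([round(d, 1) for d in delays_ms])  # delays wander across the ~46 ms buffer period
```

The per-note delay drifts across the full buffer period even though the input is perfectly even, so relative to the average delay some notes land early and some late - the same kind of spread described above.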
niles wrote:
> admiralbumblebee wrote:
> > Here's the test as I believe you requested.
> No, this is not what I requested. You are switching the device block size while having Dropout Protection enabled. It doesn't make any sense.

EDIT: I tested both in mine on macOS to show the difference.
Last edited by robertgray3 on Tue Jul 14, 2020 4:19 pm, edited 2 times in total.
Jemusic wrote:
> Also your video shows me you have also missed the point. All you are showing is the delay with increased buffer sizes. What you are not showing is each generated sound form a virtual instrument is not lining up perfectly with each midi note on at a higher buffer setting.

I think you are missing the point. From the video I made, you can measure the consistency of the monitored sound relative to the audio signal (the fixed value) and calculate its average deviation.
robertgray3 wrote:
> niles wrote:
> > admiralbumblebee wrote:
> > > Here's the test as I believe you requested.
> > No this is not what I requested. You are switching the device block size while having Dropout protection enabled. It doesn't make any sense.

It clouds the results. Let's first start without DP enabled, then with DP. DP brings a lot of other factors on the table. BTW, I see an output latency of 216 ms with a buffer size of 64 samples in that video.
Last edited by niles on Tue Jul 14, 2020 4:21 pm, edited 1 time in total.
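For context on why a 216 ms output latency at a 64-sample buffer is suspicious: the buffer size alone accounts for only a few milliseconds. A naive round-trip estimate (ignoring driver safety offsets and any Dropout Protection pre-rendering, so real values will always be higher) looks like this:

```python
def nominal_round_trip_ms(buffer_samples: int, sample_rate: int = 44100) -> float:
    """Textbook lower bound: one buffer of input latency plus one of output.
    Drivers, converters, and features like Dropout Protection all add to this."""
    return 2 * buffer_samples / sample_rate * 1000.0

print(nominal_round_trip_ms(64))    # ~2.9 ms - nowhere near 216 ms
print(nominal_round_trip_ms(1024))  # ~46.4 ms
```

This is why Dropout Protection clouds the results: a measured 216 ms at 64 samples must be coming from somewhere other than the device buffer itself.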
niles wrote:
> Let's first start without DP enabled, then with DP. DP brings a lot of other factors on the table.

Then the second half of my video (starting at 1:28) is for you, bro! macOS, Minimum, 64-4096.
robertgray3 wrote:
> niles wrote:
> > Let's first start without DP enabled, then with DP. DP brings a lot of other factors on the table.

Wait, sorry Robert, I totally missed your video. All my comments are directed to the admiralbumblebee video. Going to check your video right away.
niles wrote:
> All my comments are directed to the admiralbumblebee video. Going to check your video right away.

@niles My bad too, I kind of thought you were replying to both of us at once. Smart-a** comments rescinded!

Jemusic wrote:
> you need an audio buffer of 1024 or higher in order to get it to play well. Virtual instrument will not generate sounds with perfect timing consistency.

@Jemusic Playing back a recorded track? I'll believe it when I see it. Everybody else is talking about monitoring, and all the other tests involved monitoring live input of some kind.
Last edited by robertgray3 on Tue Jul 14, 2020 4:47 pm, edited 2 times in total.
robertgray3 wrote:
> Here's that test on macOS.

Thanks! When I quickly take the 128-sample part (no DP) and compare it to the PC, latency is a bit higher on macOS (an average of 12 ms). You can easily measure it yourself by recording the audio of the video and measuring the distance between the right channel (fixed audio) and the left channel (monitored signal). The distance is roughly the latency (not taking MIDI jitter into account, but I suspect that's marginal). I also measured 512 from the robertgray3 video and there is a fixed latency of 27 ms. Different figures than Windows, same behavior!
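The measurement described above (distance between the fixed right channel and the monitored left channel) amounts to finding the lag that best aligns the two channels. A brute-force cross-correlation sketch on synthetic data (hypothetical signals, not the actual forum videos):

```python
import math

def estimate_lag(reference, delayed, max_lag):
    """Return the shift (in samples) that maximizes the correlation
    between `reference` and `delayed` - a brute-force search that is
    fine for short clips like these test recordings."""
    n = len(reference)
    best_lag, best_score = 0, float("-inf")
    for lag in range(max_lag + 1):
        score = sum(reference[i] * delayed[i + lag] for i in range(n - max_lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

SR = 44100
# A short 1 kHz click as the test signal.
click = [math.sin(2 * math.pi * 1000 * i / SR) for i in range(100)]
right = click + [0.0] * 2000                       # fixed reference channel
left = [0.0] * 512 + click + [0.0] * (2000 - 512)  # monitored channel, 512 samples late
lag = estimate_lag(right, left, max_lag=1024)
print(f"{lag} samples = {lag / SR * 1000:.1f} ms")  # 512 samples = 11.6 ms
```

In practice one would do this with an FFT-based correlation on the extracted stereo audio, but the principle is the same: the offset of the correlation peak is the monitoring latency.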
niles wrote:
> robertgray3 wrote:
> > Here's that test on macOS.

I'd imagine your interface has different latency figures than the PreSonus Quantum 2. I typically use 64 for VIs and 64 or 32 for audio.