Controlling a sound delivery onset through the sd.OutputStream object for human research #555
Comments
And for completeness, a version with
Looks like I should look into https://github.com/spatialaudio/python-rtmixer
Yes, I know about them but I'm not an active user. See #37, psychopy/psychopy#1312, psychopy/psychopy#2065. I just saw that there was even a "mega-study": https://peerj.com/articles/9414/
I have the feeling that those values can be quite inaccurate, maybe depending on host API, drivers and physical hardware. See also https://stackoverflow.com/questions/78869667/sounddevice-in-python-time-vector-of-input-and-output-signal-synchronized and #148 (comment).
Yes, the blocking API is definitely bad for jitter, since it quantizes the start times to block boundaries (AFAIK).
The variability might be due to the inaccurate time values mentioned above.
I'm not sure how much overhead is caused by the NumPy conversions, but if you don't need NumPy arrays, it's probably better to use the "raw" streams.
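For illustration, a minimal sketch of what that could look like with a raw stream (sample rate, channel count and the silent placeholder stimulus are arbitrary choices here):

```python
import sounddevice as sd

samplerate = 48000
channels = 2
bytes_per_frame = 4 * channels            # float32 samples

# Pre-rendered stimulus as raw bytes (1 s of silence, just to stay self-contained).
stimulus = bytes(samplerate * bytes_per_frame)
position = 0

def callback(outdata, frames, time, status):
    """Copy raw bytes straight into the output buffer, no NumPy conversion."""
    global position
    if status:
        print(status)
    chunk = stimulus[position:position + len(outdata)]
    position += len(chunk)
    outdata[:len(chunk)] = chunk
    outdata[len(chunk):] = b'\x00' * (len(outdata) - len(chunk))  # zero-pad the tail

with sd.RawOutputStream(samplerate=samplerate, channels=channels,
                        dtype='float32', callback=callback):
    sd.sleep(1200)
```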
Yeah, that's what I was suggesting in my conversation with the PsychoPy devs back in the day, e.g. psychopy/psychopy#2065 (comment). Note that it is still quite experimental (many TODOs: https://github.com/search?q=repo%3Aspatialaudio%2Fpython-rtmixer+TODO%3A&type=code) and there wasn't much development happening in recent times, but if it seems promising to you, I'm willing to put some more work into it.
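If it helps to evaluate it, a pre-scheduled playback could look roughly like this (a sketch based on my reading of the rtmixer API; the exact keyword arguments of play_buffer() and the stream settings should be double-checked against the rtmixer docs):

```python
import numpy as np
import rtmixer

samplerate = 48000
# Placeholder stimulus: 200 ms of quiet noise, float32 and interleaved as rtmixer expects.
stimulus = (0.1 * np.random.randn(int(0.2 * samplerate), 2)).astype('float32')

mixer = rtmixer.Mixer(channels=2, blocksize=0, samplerate=samplerate, latency='low')
with mixer:
    onset = mixer.time + 0.5                       # schedule 500 ms ahead on the stream clock
    action = mixer.play_buffer(stimulus, channels=2, start=onset)
    mixer.wait(action)                             # block until playback has finished
```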
BTW, what I want to say with this is that maybe you should consider writing your own callback function in C.
Hello @mgeier! First of all, thank you for the continuous development and maintenance of sounddevice, it's amazing 😃

Short version

How could I further improve the callback, the argument settings of OutputStream, ... to precisely start delivering an auditory stimulus at a given time, for a sounddevice backend used to deliver auditory stimuli in human neuroscience research? Code at the end.

Long version
You might know of Psychtoolbox and PsychoPy, tools to design psychological, behavioral and neuroscience experiments. I work primarily with the latter, where delivering an auditory stimulus is very often used to elicit a neurological or behavioral response in a participant. Historically, Psychtoolbox offers an excellent interface to portaudio with which I can deliver an audio stimulus simultaneously with a trigger (an event marking the audio onset, usually delivered via parallel port) with excellent precision (delay between the event and the audio onset measured at < 1 ms).

In Python, this interface is accessible through PsychoPy and its SoundPTB object. PsychoPy also offers other backends, including SoundDeviceSound. Sadly, the other backends do not match the performance of SoundPTB (the delay between the event marking the sound onset and the actual sound onset is non-zero and variable). On top of that, PsychoPy development tends to regularly break the most basic features, including sound delivery in their latest 2024.2.0 and 2024.2.1 versions.

Which brings me to today's affairs: I gave a shot at implementing a Python audio delivery backend matching SoundPTB performance using sounddevice. The objective is to package this backend independently from PsychoPy to avoid breakage, and to replace the existing PsychoPy SoundDeviceSound object with a wrapper around this new backend.

The key element which makes the delivery precise in SoundPTB is a scheduling mechanism. Typically, this is how a sound and a trigger (event) would be delivered:
I tried replicating this scheduling idea with sounddevice, compensating for delays with the time.outputBufferDacTime and time.currentTime values obtained in the callback function, and I got very close. This callback approach, compared to a blocking stream.write, brings the delay between the event and the audio onset down to ~3 ms ± 1 ms. It is visibly slightly worse than the SoundPTB object, especially with a higher variability.

I'm a bit out of ideas on how I could continue to improve this attempt and would love to hear your input on how I could improve the MWE below and how I could set the different OutputStream arguments. Without further ado, here is the current working state yielding ~3 ms ± 1 ms.

Note: I measure the delay between the sound onset and the trigger through an EEG amplifier at 1 kHz. I can increase the sampling rate to 16 kHz if needed.
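In outline, the approach is along these lines (a simplified, illustrative sketch rather than the exact MWE; the stimulus, the stream settings and send_trigger() are placeholders):

```python
import numpy as np
import sounddevice as sd

def send_trigger(value):
    """Placeholder for the parallel-port / EEG trigger call."""
    pass

fs = 48000
# Placeholder stimulus: 200 ms of quiet noise.
stimulus = (0.1 * np.random.randn(int(0.2 * fs), 2)).astype('float32')

onset_time = None   # stream time (in seconds) at which the stimulus should start
idx = 0             # frames of the stimulus already written

def callback(outdata, frames, time, status):
    global idx
    if status:
        print(status)
    outdata.fill(0)
    if onset_time is None:
        return
    dac = time.outputBufferDacTime                    # DAC time of this buffer's first frame
    if idx == 0:
        if dac + frames / fs < onset_time:
            return                                    # onset lies beyond this buffer
        offset = max(0, round((onset_time - dac) * fs))   # sample-accurate start within the buffer
    else:
        offset = 0
    n = min(frames - offset, len(stimulus) - idx)
    outdata[offset:offset + n] = stimulus[idx:idx + n]
    idx += n

stream = sd.OutputStream(samplerate=fs, channels=2, dtype='float32',
                         latency='low', callback=callback)
with stream:
    onset_time = stream.time + 0.5                    # schedule 500 ms into the future
    while stream.time < onset_time:                   # busy-wait on the same clock
        pass
    send_trigger(1)                                   # mark the scheduled onset
    sd.sleep(1000)
```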