AudioServer to have function to access microphone buffer directly #113288
Conversation
Ivorforce left a comment
Looks like a great start! This is about the amount of complexity I was hoping to see :)
I think this might be ready to merge (though I want to test before approving). Any reason it's still in draft?
(force-pushed e9e4d3f to 0bc133a)
I think this PR is as ready to merge as it will ever get. Great job!
I'm not on the audio team, so it would be nice to get confirmation on its mergeability as well. I'm hoping to fit it into 4.6, since it's a pretty low-risk, potentially high-impact change.
Since this PR has quite some history, I want to lay out my reasons for endorsing this form of the feature.
Full disclosure, I have been part of a few audio meetings discussing this, and further had dedicated meetings with @goatchurchprime discussing this feature in detail. This PR is the outcome of these discussions.
First off, why is this feature needed?
To cut it short, the current way to accept microphone data is inherently flawed. It is done through AudioEffectCapture, after it has experienced a lot of travel through the audio systems.
The main problem with this is that audio input is accepted asynchronously to the audio server's clock. Even when they have the same theoretical sampling rate, they inevitably drift apart. This can lead to either too many samples being produced to consume, or too few. In our codebase, this leads to audio stutters and gaps (previously, it led to capture ending abruptly), making audio capturing like this problematic. A way to fix it in place might be to resample on the fly, but this is expensive and difficult, and more importantly, it is not needed.
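The clock drift described above can be made concrete with a toy calculation. This is a standalone sketch, not Godot code; the 0.05% clock error is an assumed, illustrative figure:

```cpp
// Toy model (not Godot code): a microphone driver produces frames at
// `input_rate` while the audio server consumes them at `mix_rate`.
// Both are nominally 44100 Hz, but real hardware clocks differ slightly,
// so the buffer fill level drifts over time.
double drift_frames(double input_rate, double mix_rate, double seconds) {
    // Frames produced minus frames consumed over the given duration.
    return (input_rate - mix_rate) * seconds;
}
```

With the microphone clock running just 0.05% fast, `drift_frames(44100.0 * 1.0005, 44100.0, 3600.0)` accumulates on the order of 79,000 surplus frames (nearly two seconds of audio) per hour of capture, enough to overflow any fixed-size buffer or force samples to be dropped.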
The above example shows that handling input data on an audio system designed for playback is inherently error prone. On the bright side, most games do not need to play back the audio. They just need to capture it and send it elsewhere, also known as voice over IP (or use it as an input, à la One Hand Clapping). So a system is needed to capture the audio before it enters the server at an incorrect sampling rate.
The three previous PRs were various attempts at this.
The first PR injected an alternative usage path into AudioStreamPlaybackMicrophone, which is the class that handles the microphone input for playback in Godot's audio playback system. I can see the reasoning, but in my opinion it makes the class ambiguous, and adds unnecessary complexity to it.
The second PR was created after maintainer feedback which suggested handling the microphone more like other control inputs, such as mouse, keyboard and controllers, and adding it to the Input class. That PR ended up similar to this PR. It's not bad, but the kind of data microphones generate is very different from the data Input usually deals with. For that reason, I believe AudioServer is a more suitable class for working with it appropriately.
The third PR was created after feedback that the system proposed in Input could not deal with multiple microphones at once, and was therefore underdesigned.
However, I believe this feedback was misguided. By our contribution guidelines, we should strive to solve only problems that people actually have. Using multiple microphones at once is not a feature that has been requested by users. Even dedicated audio software often does not support capturing from multiple audio inputs simultaneously, probably because it's so rarely needed in actual projects.

@goatchurchprime's PR indulging the feedback, adding a microphone server and feeds, ended up far more complicated than this one, with multiple new classes exposed and functionality skeletons added for functions that are unlikely to be needed by anyone anytime soon. An alternative implementation by @adamscott attempted to implement the feature wholesale, but ended up even more complicated.

In contrast, this PR adds and exposes barely any features, yet solves the microphone needs of most games perfectly. I think this clearly shows why we should add demand-driven infrastructure, rather than infrastructure designed for a potential future case that is unlikely to be needed by most Godot engine users. (On that note, the addition of the AudioStreamPlaybackMicrophone idea was probably premature in the first place, since it's unlikely many games want to play back the player's voice to them from their own speakers.)
Even if people run into limitations with the system added here, we can deprecate this system and add a better one that solves whatever problem users run into at that time. Since this is an extremely small solution, it's easy to sculpt or remove completely when it outlives its designed purpose.
All in all, I believe this PR is the appropriate solution for the problem at hand.
The PR is low risk, especially with @goatchurchprime's rigorous testing with his bespoke tool.
It is also potentially high-impact, since for the first time in Godot's history, it will be possible to have stable voice-over-IP technology in Godot-made games. Demand for this is evident from the popularity of @goatchurchprime's extension two-voip, which is attracting users even in the current state of the Godot interface.
Due to the reasons given above, I'd like to restate my initial comment that I think this should be merged into 4.6.
(force-pushed 5f60c23 to 68a73aa)
I just want to add that there are a number of users of my twovoip plugin who are waiting on this issue (they've submitted tickets about the sound quality on my repo that are actually due to this Godot problem), and who would probably begin using this feature as soon as it's in a development release. Some have already tested that it's good with an earlier PR, but not everyone is set up to compile PRs easily -- especially the export templates. The important point is that I expect this is going to get tested by users on different pieces of hardware right away, so we're going to find out about any problems I've not anticipated very quickly, and I am committed to getting on top of them asap.
lyuma left a comment
The code looks good. The only risk is that this API is designed for a single consumer, since it has only one global offset variable and shares the same start/stop state as AudioStreamMicrophone.
Some day, I would prefer a higher-level object-oriented API, also used by AudioStreamMicrophone, that holds the offset internally and can increment/decrement an "active counter" controlling start/stop, so you can have two consumers of the audio data without interference. For the object-oriented idea, we'd need a Vector version of the get_input_frames function (which AudioStreamPlaybackMicrophone::_mix_internal would call).
As for my review, the code should do what it says so I'll approve.
I consider this to be a low-level API which ideally users wouldn't use directly. If there were more time, I'd really prefer an object-oriented API that would have exactly the same performance but avoid the single global offset variable stored in AudioServer, which limits our ability to improve things in the future.
Right, this was also my primary reservation.
I also think it's likely we'll have some more comprehensive API eventually, and this code will be deprecated.
With multiple maintainers expecting this to be deprecated in the future: should we mark the API as experimental? This way users can expect to need to update their code to a new API in the future (and if they don't need to, then that's just a bonus to them).
Direct access to the audio data in the microphone buffer is necessary to overcome the design flaw of connecting the Audio Input stream (microphone) to the Audio Output stream under the mistaken assumption that they will always proceed at exactly the same rate for all time on every platform.
A diagram of the current design and the improved situation can be seen here.
The following issues are illustrative of the flaw in the current design.
This PR is a re-implementation of three previous attempts to find an appropriate place for a `get_microphone_frames()` function:

- `AudioStreamPlaybackMicrophone` to access the microphone `AudioDriver::input_buffer` independently of the rest of the Audio system #100508
- `Input` #105244

This also resolves godotengine/godot-proposals#11347 (proposal).
A comprehensive demo is at:
https://github.com/goatchurchprime/godot-demo-projects/tree/gtch/mic_input/audio/mic_input
This PR adds four new functions to `AudioServer`:

- In the class `AudioEffectCapture`, the function that is equivalent to `get_input_frames()` is known as `get_buffer()`.
- The function `get_buffer_length_frames()` is necessary to predict when the overflow is likely to occur.
- The function `set_input_device_active()` provides direct control of exactly when the microphone stream is turned on and off, rather than it currently being done by the existence of an active `AudioStreamMicrophone` playback object.

This could mean that we never have to call `AudioServer.set_input_device()` while the input device is active, which leads to problematic situations where the error case of a failure to start the new device cannot be properly handled. Instead you would call `set_input_device_active(false)`, then change the device, before calling `set_input_device_active(true)`, which can return an error if it doesn't work.

Stuff to do
- [ ] Review the function names, and whether it needs `get_input_device_active()`, the member variable `input_device_active`, and functions to be const.
- [ ] Write the docs
- [ ] Make `AudioStreamMicrophonePlayback` use the `input_device_active` interface, and check it still works as before (if none of these new functions are used).
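The deactivate-switch-reactivate sequence described in the PR description can be sketched as a standalone C++ toy model. The struct, the "Broken Mic" failure case, and the error enum are invented for illustration; the real calls would be the proposed `AudioServer` methods:

```cpp
#include <string>

// Toy model: names mirror the PR's proposed API, but this is not Godot code.
// The point is that a failed device start surfaces as a returned error
// instead of being swallowed by an implicit switch-while-active.
enum Error { OK, ERR_CANT_OPEN };

struct FakeAudioServer {
    bool active = false;
    std::string device = "Default";

    Error set_input_device_active(bool p_active) {
        if (p_active && device == "Broken Mic") {
            return ERR_CANT_OPEN; // simulated failure to start the new device
        }
        active = p_active;
        return OK;
    }
    void set_input_device(const std::string &p_device) { device = p_device; }
};

// The safe switching pattern: stop capture, switch, restart, report errors.
Error switch_input_device(FakeAudioServer &srv, const std::string &new_device) {
    srv.set_input_device_active(false); // stop capture before switching
    srv.set_input_device(new_device);
    return srv.set_input_device_active(true); // caller can handle the error
}
```

If reactivation fails, the server is left inactive rather than in an undefined half-switched state, which is exactly the error-handling gap the description attributes to calling `set_input_device()` while active.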