
Conversation

@goatchurchprime (Contributor) commented Nov 28, 2025

Direct access to the audio data in the microphone buffer is necessary to overcome a design flaw: the Audio Input stream (microphone) is connected to the Audio Output stream under the mistaken assumption that the two will always proceed at exactly the same rate, on every platform, indefinitely.

A diagram of the current design and the improved situation can be seen here.

The following issues are illustrative of the flaw in the current design.

This PR is a re-implementation of three previous attempts to find an appropriate place for a get_microphone_frames() function:

This also resolves godotengine/godot-proposals#11347 (proposal).

A comprehensive demo is at:
https://github.com/goatchurchprime/godot-demo-projects/tree/gtch/mic_input/audio/mic_input

This PR adds four new functions to AudioServer:

set_input_device_active(active: bool) -> int
get_input_buffer_length_frames() -> int
get_input_frames_available() -> int
get_input_frames(frames: int) -> PackedVector2Array

The equivalent of get_input_frames() in the class AudioEffectCapture is called get_buffer().

The function get_input_buffer_length_frames() is needed to predict when an overflow is likely to occur.
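For illustration, here is a minimal GDScript sketch of the intended polling pattern, assuming the four functions listed above behave as described (the 90% threshold and the consumer hook are hypothetical choices, not part of this PR):

```gdscript
extends Node

func _ready() -> void:
    var err := AudioServer.set_input_device_active(true)
    if err != OK:
        push_error("Could not activate the input device (error %d)" % err)

func _process(_delta: float) -> void:
    var available := AudioServer.get_input_frames_available()
    # If the buffer is nearly full, an overflow (dropped frames) is imminent.
    if available > AudioServer.get_input_buffer_length_frames() * 0.9:
        push_warning("Microphone buffer nearly full; consume frames faster.")
    if available > 0:
        var frames := AudioServer.get_input_frames(available)
        _handle_frames(frames)  # e.g. encode and send over the network (VoIP).

func _handle_frames(frames: PackedVector2Array) -> void:
    pass  # Placeholder consumer; each frame is a stereo Vector2.
```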

The function set_input_device_active() provides direct control over exactly when the microphone stream is turned on and off, rather than, as now, having that tied to the existence of an active AudioStreamMicrophone playback object.

This means we would never have to call AudioServer.set_input_device() while the input device is active, a situation in which a failure to start the new device cannot be handled properly. Instead, you would call set_input_device_active(false), change the device, and then call set_input_device_active(true), which can return an error if the new device fails to start.
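A sketch of that flow, again assuming this PR's API (the helper function and its error handling are illustrative, not part of the PR):

```gdscript
func switch_input_device(device_name: String) -> int:
    AudioServer.set_input_device_active(false)  # Stop the current stream first.
    AudioServer.set_input_device(device_name)   # Safe to change while inactive.
    var err := AudioServer.set_input_device_active(true)
    if err != OK:
        # The new device failed to start; this is now a handleable error case.
        push_error("Failed to start input device '%s' (error %d)" % [device_name, err])
    return err
```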

Stuff to do

  • Review the function names, and decide whether we need get_input_device_active(), a member variable input_device_active, and whether the functions should be const.
  • Write the docs
  • Make AudioStreamPlaybackMicrophone use the input_device_active interface, and check that it still works as before when none of the new functions are used.

@Ivorforce (Member) left a comment


Looks like a great start! This is about the amount of complexity I was hoping to see :)

@Ivorforce (Member) left a comment


I think this might be ready to merge (though I want to test before approving). Any reason it's still in draft?

@goatchurchprime marked this pull request as ready for review November 30, 2025 11:57
@goatchurchprime requested review from a team as code owners November 30, 2025 11:57
@Ivorforce (Member) left a comment


I think this PR is as ready to merge as it will ever get. Great job!
I'm not on the audio team, so it would be nice to get confirmation of its mergeability as well. I'm hoping to fit it into 4.6, since it's a fairly low-risk, potentially high-impact change.

Since this PR has quite a history, I want to lay out my reasons for endorsing this form of the feature.
Full disclosure, I have been part of a few audio meetings discussing this, and further had dedicated meetings with @goatchurchprime discussing this feature in detail. This PR is the outcome of these discussions.

First off, why is this feature needed?
To cut it short, the current way to accept microphone data is inherently flawed: it is done through AudioEffectCapture, after the data has travelled a long way through the audio system.
The main problem with this is that audio input is accepted asynchronously to the audio server's clock. Even when the two have the same theoretical sampling rate, they inevitably drift apart. This means either more samples are produced than are consumed, or too few. In our codebase, this leads to audio stutters and gaps (previously, it led to capture ending abruptly), making audio capture like this problematic. A way to fix it in place might be to resample on the fly, but that is expensive and difficult, and more importantly, it is not needed.
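To put rough numbers on the drift (illustrative figures, not from this PR): if the input and output clocks are both nominally 48 kHz but differ by just 50 ppm, the input side gains or loses about 2.4 frames per second relative to the output, roughly 144 frames per minute, so any fixed-size buffer connecting the two must eventually overflow or underrun no matter how large it is.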

The above example shows that handling input data on an audio system designed for playback is inherently error-prone. On the bright side, most games do not need to play the audio back; they just need to capture it and send it elsewhere (voice over IP), or use it as a control input, à la One Hand Clapping. So a system is needed to capture the audio before it enters the server at an incorrect sampling rate.

The three previous PRs were various attempts at this.
The first PR injected an alternative usage path into AudioStreamPlaybackMicrophone, the class that handles microphone input in Godot's audio playback system. I can see the reasoning, but in my opinion it makes the class ambiguous and adds unnecessary complexity to it.

The second PR was created after maintainer feedback suggesting that microphone input be handled more like other control inputs (mouse, keyboard, controllers) and added to the Input class. That PR ended up similar to this one. It's not bad, but the kind of data microphones generate is very different from the data Input usually deals with. For that reason, I believe AudioServer is a more suitable home for it.

The third PR was created after feedback that the system proposed in Input could not deal with multiple microphones at once, and was therefore underdesigned.
However, I believe this feedback was misguided. By our contribution guidelines, we should strive to only solve problems that people actually have. Using multiple microphones at once is not a feature that has been requested by users; much of the time, even dedicated audio software does not support capturing from multiple inputs simultaneously, probably because it is so rarely needed in actual projects. @goatchurchprime's PR indulging that feedback, which added a microphone server and feeds, ended up far more complicated than this one, with multiple new classes exposed and skeletons added for functionality that is unlikely to be needed by anyone anytime soon. An alternative implementation by @adamscott attempted to implement the feature wholesale, and ended up even more complicated.

In contrast, this PR adds and exposes barely any new surface, and solves the microphone needs of most games. I think this clearly shows why we should add demand-driven infrastructure rather than infrastructure designed for a potential future case that most Godot users are unlikely to need. (On that note, the AudioStreamPlaybackMicrophone idea was probably premature in the first place, since few games want to play the player's voice back to them through their own speakers.)
Even if people run into limitations with the system added here, we can deprecate this system and add a better one that solves whatever problem users run into at that time. Since this is an extremely small solution, it's easy to sculpt or remove completely when it outlives its designed purpose.

All in all, I believe this PR is the appropriate solution to the problem at hand.
The PR is low-risk, especially given @goatchurchprime's rigorous testing with his bespoke tool.
It is also potentially high-impact: for the first time in Godot's history, it will be possible to have stable voice-over-IP technology in games made with Godot. The demand for this is evident from the popularity of @goatchurchprime's extension two-voip, which is attracting users even in the current state of the Godot interface.

For the reasons given above, I'd like to restate my initial comment: I think this should be merged into 4.6.

@Ivorforce modified the milestones: 4.x, 4.6 Nov 30, 2025
@goatchurchprime (Contributor, Author) commented

I just want to add that a number of users of my twovoip plugin are waiting on this fix (they've submitted tickets about sound quality on my repo that are actually due to this Godot problem), and they would probably begin using this feature as soon as it lands in a development release.

Some have already verified that it works well with an earlier PR, but not everyone is set up to compile PRs easily, especially the export templates.

The important point is that I expect this will get tested by users on different hardware right away, so we're going to find out about any problems I haven't anticipated very quickly, and I am committed to getting on top of them ASAP.

@lyuma (Contributor) left a comment


The code looks good. The only risk is that this API is designed for a single consumer, since it has only one global offset variable and shares the same start/stop state as AudioStreamMicrophone.

Some day, I would prefer a higher-level, object-oriented API that would also be used by AudioStreamMicrophone. It would hold the offset internally and could increment/decrement an "active counter" to control start/stop, so you could have two consumers of the audio data without interference. For the object-oriented idea, we'd need a Vector version of the get_input_frames function (which AudioStreamPlaybackMicrophone::_mix_internal would call).

As for my review, the code does what it says, so I'll approve.

I consider this a low-level API that ideally users wouldn't use directly. If there were more time, I'd prefer an object-oriented API with exactly the same performance that avoids the single global offset variable stored in AudioServer, which limits our ability to improve things in the future.
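For illustration, a minimal script-level sketch of that fan-out idea, built on this PR's API (the helper and its names are hypothetical, not part of this PR): a single reader drains AudioServer once per frame and copies the frames to each registered consumer, so consumers never contend for the single global offset.

```gdscript
extends Node

var _consumer_buffers: Array[PackedVector2Array] = []

func register_consumer() -> int:
    # Returns an id the consumer later uses to fetch its own copy of the frames.
    _consumer_buffers.append(PackedVector2Array())
    return _consumer_buffers.size() - 1

func take_frames(consumer_id: int) -> PackedVector2Array:
    var frames := _consumer_buffers[consumer_id]
    _consumer_buffers[consumer_id] = PackedVector2Array()
    return frames

func _process(_delta: float) -> void:
    # Drain the global buffer exactly once per frame, then fan the data out.
    var available := AudioServer.get_input_frames_available()
    if available > 0:
        var frames := AudioServer.get_input_frames(available)
        for i in _consumer_buffers.size():
            # Packed arrays are copy-on-write, so write the result back explicitly.
            var buf := _consumer_buffers[i]
            buf.append_array(frames)
            _consumer_buffers[i] = buf
```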

@Ivorforce (Member) commented Dec 2, 2025

> The code looks good. The only risk is that this API is designed for a single consumer, since it has only one global offset variable and shares the same start/stop state as AudioStreamMicrophone.

Right, this was also my primary reservation.
However, after some consideration, I think we don't need to address this concern now, at least not until people actually raise it as a limitation. I think the vast majority of games will only need a single consumer of this microphone data. In particular, AudioStreamMicrophone will probably not see much use at all once this API is available, because playing the player's voice back to them from their own speakers is not very useful for most games.

> Some day, I would prefer a higher-level, object-oriented API that would also be used by AudioStreamMicrophone. It would hold the offset internally and could increment/decrement an "active counter" to control start/stop, so you could have two consumers of the audio data without interference. For the object-oriented idea, we'd need a Vector version of the get_input_frames function (which AudioStreamPlaybackMicrophone::_mix_internal would call).
>
> I consider this a low-level API that ideally users wouldn't use directly. If there were more time, I'd prefer an object-oriented API with exactly the same performance that avoids the single global offset variable stored in AudioServer, which limits our ability to improve things in the future.

I also think it's likely we'll have some more comprehensive API eventually, and this code will be deprecated.
However, as mentioned above, I don't think we're at a point yet where we can confidently design that API. If we were to design it now, it would probably look like #108773 and add a ton of complexity most people don't need.
As players start using the microphone input for VoIP or game controls, I'm sure they'll raise whatever limitations they run into, and we can design the future API around those concerns.

@akien-mga requested a review from adamscott December 2, 2025 13:13
@allenwp (Contributor) commented Dec 2, 2025

> I also think it's likely we'll have some more comprehensive API eventually, and this code will be deprecated.

With multiple maintainers expecting this to be deprecated in the future: should we mark the API as experimental? That way, users can expect to need to update their code to a new API in the future (and if they don't need to, that's just a bonus for them).
