Creating real-time audio applications with software synthesizers

I am developing software that makes the keyboard act like a piano (for example, the user presses the "W" key and the speakers play a D note). I will probably use OpenAL. I understand the basics of digital audio, but playing sound in real time in response to keystrokes poses some problems I'm having trouble solving.

Here is the problem: suppose I have 10 audio buffers, and each buffer stores one second of audio data. If I need to fill the buffers before they are played through the speakers, I would be filling each buffer one or two seconds before it is played. That means whenever the user tries to play a note, there will be a one- to two-second delay between pressing the key and hearing the note.
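To make the delay concrete, here is a small back-of-the-envelope sketch (the numbers are the ones from this question; nothing here is tied to OpenAL):

```cpp
#include <iostream>

int main() {
    // The setup described above: big buffers, filled well in advance.
    const double bufferSeconds = 1.0;  // one second of audio per buffer
    const int buffersAhead = 2;        // buffers already filled and queued

    // A key pressed now can only affect the first buffer that has not
    // been filled yet, so the delay is everything already queued.
    std::cout << "Delay: " << buffersAhead * bufferSeconds << " s\n";  // 2 s

    // Shrinking the buffers shrinks the delay proportionally.
    std::cout << "With 10 ms buffers: "
              << buffersAhead * 0.010 * 1000.0 << " ms\n";             // 20 ms
    return 0;
}
```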

How do you get around this problem? Do you just make the buffers as small as possible and fill them as late as possible? Is there some trick I'm missing?

+3

3 answers

Yes: make the buffers as small as you can get away with, and fill each one at the last possible moment.

Don't think of it as pre-rendering notes; think of it as a continuous output stream that you keep topped up with freshly generated audio (silence when no key is held, the mixed notes when keys are down).

A newly pressed key then shows up in the very next buffer you fill, so the worst-case delay is the amount of audio you have queued ahead (plus the driver's own latency).

In practice:

Use buffers on the order of 10 ms rather than one second, and keep only a few of them queued.

With 10 ms buffers and two or three queued, the worst-case response is a few tens of milliseconds instead of seconds. Latency around 10 ms feels effectively instantaneous.
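Since the question mentions OpenAL, here is a minimal sketch of that just-in-time pattern using OpenAL's streaming API (renderAudio is a hypothetical synth callback; device/context setup and error handling are omitted):

```cpp
#include <AL/al.h>
#include <algorithm>
#include <cstdint>
#include <vector>

const int SAMPLE_RATE    = 44100;
const int BUFFER_SAMPLES = 441;  // 10 ms of mono 16-bit audio
const int NUM_BUFFERS    = 3;    // ~30 ms queued -> ~30 ms worst-case delay

// Placeholder synth: a real one would mix the currently held notes here.
// This stand-in just writes silence.
void renderAudio(int16_t* out, int samples) {
    std::fill(out, out + samples, int16_t{0});
}

void streamLoop(ALuint source) {
    std::vector<int16_t> pcm(BUFFER_SAMPLES);
    ALuint buffers[NUM_BUFFERS];
    alGenBuffers(NUM_BUFFERS, buffers);

    // Prime the queue with a few small buffers, then start playback.
    for (ALuint buf : buffers) {
        renderAudio(pcm.data(), BUFFER_SAMPLES);
        alBufferData(buf, AL_FORMAT_MONO16, pcm.data(),
                     BUFFER_SAMPLES * sizeof(int16_t), SAMPLE_RATE);
        alSourceQueueBuffers(source, 1, &buf);
    }
    alSourcePlay(source);

    for (;;) {  // real code needs an exit condition and a short sleep
        ALint processed = 0;
        alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);
        while (processed-- > 0) {
            // Recycle each finished buffer: refill it with whatever the
            // user is playing *right now* and queue it again.
            ALuint buf;
            alSourceUnqueueBuffers(source, 1, &buf);
            renderAudio(pcm.data(), BUFFER_SAMPLES);
            alBufferData(buf, AL_FORMAT_MONO16, pcm.data(),
                         BUFFER_SAMPLES * sizeof(int16_t), SAMPLE_RATE);
            alSourceQueueBuffers(source, 1, &buf);
        }
    }
}
```

One caveat: if the source ever runs out of queued buffers it stops, so real code also checks AL_SOURCE_STATE and calls alSourcePlay again after an underrun.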

+6

Plain WinAPI audio gives you roughly 40-50 ms of latency, which is too sluggish for a live instrument. Use ASIO instead; with a generic driver such as Asio4All you can get down to about 5 ms, at which point the response feels immediate: you press a key and hear the note at once.

That is what professional audio software such as FL Studio uses.
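Those millisecond figures map directly onto buffer sizes; a quick sketch of the arithmetic, assuming a 44.1 kHz sample rate:

```cpp
#include <cstdio>

int main() {
    const double rate = 44100.0;  // samples per second

    // latency = buffer size in samples / sample rate
    std::printf("40 ms buffer at 44.1 kHz = %.0f samples\n", rate * 0.040);  // 1764
    std::printf(" 5 ms buffer at 44.1 kHz = %.0f samples\n", rate * 0.005);  // ~221
    return 0;
}
```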

+1

The trick is not to pre-render anything. Keep the output stream running continuously and synthesize each small buffer (a few milliseconds of samples) just before the hardware needs it; when a key goes down, you start mixing that note in from the next buffer onward. A total latency of around 10 ms is a reasonable target.

You don't have to build all of that plumbing from scratch, either.

Take a look at JUCE: it is a free, cross-platform C++ framework for audio applications, and it comes with software-synthesizer examples (SoftSynths). Its source is well worth reading. Studying those demos will show you how real-time synthesis is usually structured.
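To give a flavor of what such examples do internally, here is a framework-agnostic sketch of a synth render callback (all names are mine, not JUCE's; a real JUCE app would put this logic in its audio callback):

```cpp
#include <cmath>
#include <mutex>
#include <vector>

const double kTwoPi = 6.283185307179586;

// One sounding voice: frequency in Hz, oscillator phase in radians.
struct Note { double freq; double phase = 0.0; };

std::mutex g_notesMutex;
std::vector<Note> g_activeNotes;  // modified by the key-event thread

// Key handler, e.g. 'W' pressed: D above middle C is ~293.66 Hz.
void noteOn(double freq) {
    std::lock_guard<std::mutex> lock(g_notesMutex);
    g_activeNotes.push_back({freq});
}

// Called by the audio backend whenever it needs the next small buffer.
// It mixes whatever is held down at this instant, or silence if nothing is.
void renderCallback(float* out, int numSamples, double sampleRate) {
    std::lock_guard<std::mutex> lock(g_notesMutex);
    for (int i = 0; i < numSamples; ++i) {
        double mix = 0.0;
        for (Note& n : g_activeNotes) {
            mix += 0.2 * std::sin(n.phase);           // simple sine voice
            n.phase += kTwoPi * n.freq / sampleRate;  // advance oscillator
        }
        out[i] = static_cast<float>(mix);
    }
}
```

Locking a mutex inside the audio callback is only acceptable in a sketch; production code passes note events to the audio thread through a lock-free queue so the callback can never block.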

0
