I was able to successfully get the stream of audio data coming into the output device (speaker) using NAudio:
private void OnDataAvailable(object sender, WaveInEventArgs e)
{
    var buffer = e.Buffer;
    var bytesRecorded = e.BytesRecorded;
    Debug.WriteLine($"Bytes {bytesRecorded}");
}
And an example output:
Bytes 19200
Bytes 19200
Bytes 19200
Bytes 19200
Bytes 19200
Bytes 19200
Bytes 19200
Bytes 19200
Bytes 19200
Bytes 19200
Bytes 19200
Bytes 23040
Bytes 19200
Bytes 19200
Bytes 19200
Bytes 19200
Bytes 19200
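For context, the capture is set up along these lines (a minimal sketch, assuming NAudio's WasapiLoopbackCapture for the speaker loopback; _capture and StartCapture are just illustrative names):

using NAudio.Wave;

// Minimal capture setup (sketch). WasapiLoopbackCapture records whatever is
// being played on the default output device. Note that it usually delivers
// 32-bit IEEE float samples rather than 16-bit PCM.
private WasapiLoopbackCapture _capture;

private void StartCapture()
{
    _capture = new WasapiLoopbackCapture();
    _capture.DataAvailable += OnDataAvailable; // fires for every new buffer
    _capture.StartRecording();
}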
Then I convert the data with an FFT to x and y values, following https://stackoverflow.com/a/3186269:
var buffer = e.Buffer;
var bytesRecorded = e.BytesRecorded;
With sample output:
x: -9.79634E-05, y: -9.212703E-05
x: 6.897306E-05, y: 2.489315E-05
x: 0.0002080683, y: 0.0004317867
x: -0.0001720883, y: -6.681971E-05
x: -0.0001245111, y: 0.0002880402
x: -0.0005751926, y: -0.0002682915
x: -5.280507E-06, y: 7.297558E-05
x: -0.0001143928, y: -0.0001156801
x: 0.0005231025, y: -0.000153206
x: 0.0001011164, y: 7.681748E-05
x: 0.000330695, y: 0.0002293986
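The conversion follows the linked answer roughly along these lines (a simplified sketch, not my exact code; the 1024-point FFT, the Hamming window and the 48 kHz sample rate are assumptions, and it treats the buffer as 16-bit PCM):

using NAudio.Dsp; // Complex, FastFourierTransform

// Sketch of the FFT step. Each Complex ends up with the X (real) and
// Y (imaginary) values printed above.
private void ComputeFft(byte[] buffer, int bytesRecorded)
{
    const int fftLength = 1024; // must be a power of two
    var fftBuffer = new Complex[fftLength];

    for (int i = 0; i < fftLength && (i * 2 + 1) < bytesRecorded; i++)
    {
        // Two little-endian bytes -> 16-bit sample, scaled to -1..1 and windowed
        short sample = (short)((buffer[i * 2 + 1] << 8) | buffer[i * 2]);
        fftBuffer[i].X = (sample / 32768f) * (float)FastFourierTransform.HammingWindow(i, fftLength);
        fftBuffer[i].Y = 0;
    }

    // In-place FFT; the second argument is log2 of the FFT length
    FastFourierTransform.FFT(true, (int)Math.Log(fftLength, 2.0), fftBuffer);

    // Dominant frequency = index of the largest-magnitude bin * sampleRate / fftLength
    int maxIndex = 0;
    float maxMagnitude = 0;
    for (int i = 0; i < fftLength / 2; i++) // only the first half of the bins is meaningful
    {
        float magnitude = (float)Math.Sqrt(fftBuffer[i].X * fftBuffer[i].X + fftBuffer[i].Y * fftBuffer[i].Y);
        if (magnitude > maxMagnitude) { maxMagnitude = magnitude; maxIndex = i; }
    }
    Debug.WriteLine($"Dominant frequency ~ {maxIndex * 48000.0 / fftLength} Hz"); // assuming 48 kHz
}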
I'm not sure if this is even possible, or if I just misunderstand what the stream is returning, but I would like to get the frequency of the audio stream and use it to do something with Philips Hue. The x, y values above seem far too small to use in the CIE color space. Am I doing something wrong, or do I not fully understand what data is in the buffer in OnDataAvailable?
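To make the goal concrete, once I have a dominant frequency I want to map it to Hue light values roughly like this (just a sketch; the 20 Hz–20 kHz range and the logarithmic mapping are arbitrary choices on my part):

// Sketch of the intended mapping. The Hue API takes hue 0-65535,
// saturation 0-254 and brightness 1-254.
private static (int hue, int brightness) FrequencyToHue(double frequencyHz, double amplitude)
{
    // Map the audible range logarithmically onto the hue wheel
    double clamped = Math.Min(Math.Max(frequencyHz, 20.0), 20000.0);
    double normalized = Math.Log(clamped / 20.0) / Math.Log(20000.0 / 20.0); // 0..1
    int hue = (int)(normalized * 65535);

    // Use the signal amplitude (0..1) for brightness
    int brightness = (int)(Math.Min(Math.Abs(amplitude), 1.0) * 253) + 1; // 1..254
    return (hue, brightness);
}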
Thanks!
Edit:
I modified my OnDataAvailable code based on the comments and on the autotune tutorial; the updated code is below:
private void OnDataAvailable(object sender, WaveInEventArgs e)
{
    var buffer = e.Buffer;
    float sample32 = 0;

    for (var index = buffer.Length > 1024 ? buffer.Length - 1024 : buffer.Length; index < e.BytesRecorded; index += 2)
    {
        var sample = (short)((buffer[index + 1] << 8) | buffer[index + 0]);
        sample32 = sample / 32768f;
        Debug.WriteLine(sample32);
        LightsController.SetLights(Convert.ToByte(Math.Abs(sample32) * 255));
        _sampleAggregator.Add(sample32);
    }

    var floats = BytesToFloats(buffer);
    if (sample32 != 0.0f)
    {
        var pitchDetect = new FftPitchDetector(sample32);
        var pitch = pitchDetect.DetectPitch(floats, floats.Length);
        Debug.WriteLine($"Pitch {pitch}");
    }
}
The intent is to use only the last chunk of elements from the buffer, since the buffer does not seem to be cleared and I am only interested in the most recent data when getting the frequency of the current audio. However, I still get an index exception when the DetectPitch method is called. Where am I going wrong? I was hoping to use the frequency to change the colour and brightness of the lights.
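For completeness, BytesToFloats is a small helper that converts the raw buffer to float samples; a simplified version (assuming 16-bit PCM, little-endian data) looks like this:

// Simplified version of the helper: two bytes -> one 16-bit sample -> float in -1..1
private static float[] BytesToFloats(byte[] bytes)
{
    var floats = new float[bytes.Length / 2];
    for (int i = 0; i < floats.Length; i++)
    {
        short sample = (short)((bytes[i * 2 + 1] << 8) | bytes[i * 2]);
        floats[i] = sample / 32768f;
    }
    return floats;
}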