Android - How to add my own audio codec to AudioRecord?

I currently have a loopback program for testing audio on Android devices.

It uses AudioRecord and AudioTrack to record PCM audio from the microphone and play it back through the headphones.

Here is the code:

    import android.media.AudioFormat;
    import android.media.AudioManager;
    import android.media.AudioRecord;
    import android.media.AudioTrack;
    import android.media.MediaRecorder;

    public class Record extends Thread {

        private volatile boolean isRecording;
        private AudioRecord arec;
        private AudioTrack atrack;

        @Override
        public void run() {
            isRecording = true;
            android.os.Process.setThreadPriority(
                    android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);

            int buffersize = AudioRecord.getMinBufferSize(11025,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO,
                    AudioFormat.ENCODING_PCM_16BIT);

            arec = new AudioRecord(MediaRecorder.AudioSource.MIC,
                    11025,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO,
                    AudioFormat.ENCODING_PCM_16BIT,
                    buffersize);

            atrack = new AudioTrack(AudioManager.STREAM_VOICE_CALL,
                    11025,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO,
                    AudioFormat.ENCODING_PCM_16BIT,
                    buffersize,
                    AudioTrack.MODE_STREAM);

            atrack.setPlaybackRate(11025);

            byte[] buffer = new byte[buffersize];
            arec.startRecording();
            atrack.play();

            // Loop back: whatever the mic captures goes straight to playback.
            while (isRecording) {
                arec.read(buffer, 0, buffersize);
                atrack.write(buffer, 0, buffer.length);
            }
        }
    }

So, as you can see, when creating the AudioTrack and AudioRecord the encoding is specified through AudioFormat, which only allows 16-bit or 8-bit PCM.

Now I have my own implementation of a G711 codec, and I want to encode the sound from the microphone and decode it for playback through the earpiece. I have encode(short[] lin, int offset, byte[] enc, int frames) and decode(byte[] enc, short[] lin, int frames), but I'm not sure how to use them to encode and decode the audio from AudioRecord and AudioTrack.

Can someone help me or point me in the right direction?

2 answers

Modify the arec.read(buffer, 0, buffersize) call to use the ByteBuffer read() overload of AudioRecord.

Once you have your bytes in a ByteBuffer object, you can simply insert your call to your G711 implementation for encoding, using the ByteBuffer.asShortBuffer() method to get the captured PCM data into the encoder.
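
In practice the loop could look roughly like this. This is only a sketch: G711Codec is a stand-in name for your own class with the encode/decode signatures from the question, and arec, atrack, buffersize and isRecording are assumed to be set up as in the original code, 16-bit mono PCM throughout:

    // Sketch only: read -> encode -> decode -> write.
    // G711Codec is a hypothetical wrapper around the question's own
    // encode()/decode() methods; it is not part of the Android SDK.
    G711Codec codec = new G711Codec();

    // AudioRecord's ByteBuffer overload requires a direct buffer,
    // and the samples arrive in native byte order.
    ByteBuffer pcmBytes = ByteBuffer.allocateDirect(buffersize)
            .order(ByteOrder.nativeOrder());
    short[] pcm     = new short[buffersize / 2]; // 2 bytes per 16-bit sample
    byte[]  encoded = new byte[pcm.length];      // G.711: one byte per sample
    short[] decoded = new short[pcm.length];

    while (isRecording) {
        pcmBytes.clear();
        int bytesRead = arec.read(pcmBytes, buffersize); // ByteBuffer overload
        if (bytesRead <= 0) continue;
        int frames = bytesRead / 2;

        pcmBytes.asShortBuffer().get(pcm, 0, frames); // bytes -> PCM samples
        codec.encode(pcm, 0, encoded, frames);        // PCM -> G.711
        codec.decode(encoded, decoded, frames);       // G.711 -> PCM
        atrack.write(decoded, 0, frames);             // short[] overload
    }

Note that G.711 halves the data rate (16 bits of PCM in, 8 bits per sample out), which is why encoded only needs one byte per sample here.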

This solves your stated question without writing that third-party library for you. (This answer is for the future people who come across this question.)

My question is why?

In the above code, you record PCM data from the microphone and write it directly to the playback buffer.

In your implementation, it makes no sense to follow the path of PCM -> G711 (encode) -> G711 (decoding) -> PCM. Everything you do is unnecessary processing and delay. Now, if you are going to write encoded data to a file instead of trying to play it through a piece of ear, this will be a different story, but your current code is really not very useful for encoding PCM data.

Plugging your own codec in here only makes sense in the context of writing the compressed voice data to a file (for example, recording call data in compressed form), sending it over the network, or something like that.
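
For the file case, the capture loop could dump the encoded bytes instead of playing them, something like this sketch (same hypothetical G711Codec as above; the helper name and the assumption that arec, buffersize and isRecording exist as in the question are illustrative only):

    // Hypothetical helper: capture, encode to G.711, and write to a file
    // instead of playing back.
    private void recordToFile(File dest, G711Codec codec) throws IOException {
        ByteBuffer pcmBytes = ByteBuffer.allocateDirect(buffersize)
                .order(ByteOrder.nativeOrder());
        short[] pcm = new short[buffersize / 2];
        byte[] encoded = new byte[pcm.length];

        FileOutputStream out = new FileOutputStream(dest);
        try {
            while (isRecording) {
                pcmBytes.clear();
                int bytesRead = arec.read(pcmBytes, buffersize);
                if (bytesRead <= 0) continue;
                int frames = bytesRead / 2;
                pcmBytes.asShortBuffer().get(pcm, 0, frames);
                codec.encode(pcm, 0, encoded, frames);
                out.write(encoded, 0, frames); // half the size of the raw PCM
            }
        } finally {
            out.close();
        }
    }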


I realize this is a pretty old post. Did you ever get your own G711 working? My initial thought was to use a natively compiled lib (e.g., built with the NDK) and call it via JNI, roughly as sketched below.
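
On the Java side, that JNI route could look something like this (all names here are hypothetical; the actual encode/decode would be implemented in C and built into libg711.so with the NDK):

    // Hypothetical JNI binding to a natively compiled G.711 library.
    public class G711Native {
        static {
            System.loadLibrary("g711"); // loads libg711.so
        }

        // Implemented in C; same signatures as the Java methods in the question.
        public static native int encode(short[] lin, int offset, byte[] enc, int frames);
        public static native int decode(byte[] enc, short[] lin, int frames);
    }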


Source: https://habr.com/ru/post/1303656/

