Understanding the AudioFormat constructor, AudioInputStream, and the start method

I tried to write a program that plays a sound file, but so far I have not been successful. I cannot understand some parts of the code:

```java
InputStream is = new FileInputStream("sound file");
// I don't understand this constructor:
AudioFormat af = new AudioFormat(float sampleRate, int sampleSizeInBits,
        int channels, boolean signed, boolean bigEndian);
long length; // length in sample frames -- how can I know the number of frames?
AudioInputStream ais = new AudioInputStream(is, af, length);
// open(ais);
// start playing by invoking the start method
```
  • In the AudioFormat constructor, how can I find out the sample rate and sample size in advance, and what do the channels parameter and the two boolean parameters at the end mean?
  • How can I get the length in sample frames?
  • Also, how do I call the start method? I don't need data from a line, but from a file stored in a folder (for example, as a Clip).
2 answers

In addition to the encoding, the audio format includes other properties that further specify the exact arrangement of the data. These include the number of channels, sample rate, sample size, byte order, frame rate, and frame size.

Sounds can have different numbers of audio channels: one for mono, two for stereo. The sample rate measures how many "snapshots" (samples) of the sound pressure are taken per second, per channel. (If the sound is stereo rather than mono, two samples are actually measured at each instant of time: one for the left channel and one for the right. However, the sample rate still measures the number per channel, so the rate is independent of the number of channels. This is the standard use of the term.) The sample size indicates how many bits are used to store each sample; 8 and 16 are typical values. For 16-bit samples (or any other sample size larger than a byte), byte order matters: the bytes in each sample are arranged in either little-endian or big-endian order.

For encodings such as PCM, a frame consists of the set of samples for all channels at a given moment in time, so the frame size (in bytes) is always the sample size (in bytes) multiplied by the number of channels. However, with some other kinds of encodings, a frame can contain a bundle of compressed data for a whole series of samples, as well as additional, non-sample data. For such encodings, the sample rate and sample size refer to the data after it is decoded into PCM, and so they are entirely distinct from the frame rate and frame size.
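As a concrete sketch of the properties above: the constructor parameters from the question can be filled in for CD-quality PCM audio, and the frame length can then be derived from the file size. The byte count below is a hypothetical value standing in for the size of a raw PCM file.

```java
import javax.sound.sampled.AudioFormat;

public class FormatDemo {
    public static void main(String[] args) {
        // CD-quality PCM: 44100 samples/sec, 16-bit samples, 2 channels
        // (stereo), signed sample values, little-endian byte order.
        AudioFormat af = new AudioFormat(44100f, 16, 2, true, false);

        // For PCM: frame size = (sampleSizeInBits / 8) * channels
        int frameSize = af.getFrameSize();           // 16/8 * 2 = 4 bytes
        System.out.println("frame size = " + frameSize);

        // Length in sample frames for a raw PCM file of known byte length
        // (hypothetical file size used for illustration):
        long fileBytes = 1_764_000;
        long lengthInFrames = fileBytes / frameSize; // 441000 frames = 10 s
        System.out.println("frames = " + lengthInFrames);
    }
}
```

Note that this arithmetic only works for uncompressed PCM data; for compressed encodings the frame size and frame rate differ from the sample size and sample rate, as explained above.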

Link


Probably the best way to approach this is via the "Play Clip" source code shown on the Java Sound info page. It makes most of the questions redundant, since we don't need to worry about those small details when using a Clip.
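As a minimal sketch of that approach: `AudioSystem.getAudioInputStream(File)` reads the format and frame length from the file header itself, so none of the constructor parameters from the question need to be supplied by hand. The path `"sound.wav"` is a placeholder for your own file.

```java
import java.io.File;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Clip;

public class PlayClip {
    public static void main(String[] args) throws Exception {
        // "sound.wav" is a placeholder; any format AudioSystem supports
        // (e.g. a PCM WAV file) will work.
        AudioInputStream ais =
                AudioSystem.getAudioInputStream(new File("sound.wav"));

        // AudioSystem derives the AudioFormat and frame length from the
        // file header, so they never need to be constructed manually.
        Clip clip = AudioSystem.getClip();
        clip.open(ais);   // loads the whole sound into memory
        clip.start();     // playback runs on a background thread

        // Keep the JVM alive until the clip has finished playing.
        Thread.sleep(clip.getMicrosecondLength() / 1000);
        clip.close();
    }
}
```

Because `Clip.start()` returns immediately, a short-lived program has to wait (here with `Thread.sleep`) or the JVM may exit before any sound is heard.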

If you have any further questions after checking the source, let me know.


Source: https://habr.com/ru/post/893802/

