I am seeing really weird behaviour in Java and I cannot tell whether it happens by accident or by design.
I have a Socket connection to a server that sends me a response to a request. I am reading this response from the Socket with the following loop, which is wrapped in a try-with-resources statement.
try (BufferedInputStream remoteInput = new BufferedInputStream(remoteSocket.getInputStream())) {
    final byte[] response = new byte[512];
    int bytes_read;
    while ((bytes_read = remoteInput.read(response, 0, response.length)) != -1) {
        // process bytes_read bytes from response
    }
}
According to my understanding, the "read" method fills as many bytes as possible into the byte array; the limiting factor is either the number of bytes received or the size of the array.
Unfortunately, this is not what happens: the protocol I am using answers my request with several smaller responses that are sent one after the other over the same socket connection.
In my case, the "read" method only ever returns one of these smaller responses per call. The length of the responses varies, but the 512 bytes of the array are always more than enough. This means that my array always contains only one message, and the remaining, unneeded part of the array stays untouched.
If I deliberately define a byte array smaller than my messages, read returns several completely filled arrays and one last, partially filled array containing the remaining bytes of the message.
(A response of 100 bytes with an array length of 30 returns three completely filled arrays and one using only 10 bytes)
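For illustration, here is a minimal sketch of the kind of loop I assume I would need if a single read were allowed to return fewer bytes than requested (the readExactly helper and the class name are made up for this example):

import java.io.IOException;
import java.io.InputStream;

public final class ReadExactly {

    // Keeps calling read until exactly 'length' bytes have been collected
    // or the stream ends. A single read may return fewer bytes than requested,
    // so the loop accumulates into the buffer at increasing offsets.
    static byte[] readExactly(InputStream in, int length) throws IOException {
        byte[] buffer = new byte[length];
        int offset = 0;
        while (offset < length) {
            int read = in.read(buffer, offset, length - offset);
            if (read == -1) {
                throw new IOException("Stream ended after " + offset + " of " + length + " bytes");
            }
            offset += read;
        }
        return buffer;
    }
}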
Is this the intended behaviour of InputStream, or am I just getting lucky? The documentation of read only says that it blocks until input data is available and returns the number of bytes read, which does not explain why each call lines up exactly with one message.
If it is intended, can I rely on every call returning exactly one complete message, or is this just a timing coincidence that could break at any moment?
In case it matters, the protocol I am implementing over this connection is LDAP, and I need to process the responses message by message.
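If it turns out that I cannot rely on one read per message, my assumption is that I would have to do the message framing myself, roughly along these lines (a simplified sketch that only handles definite-length BER encoding, as LDAP uses, and assumes the stream is positioned at a message boundary):

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public final class MessageFraming {

    // Reads one complete BER-encoded message (tag + length + value) from the stream.
    // Simplified: handles only definite lengths and assumes the length fits in an int.
    static byte[] readBerMessage(InputStream in) throws IOException {
        DataInputStream data = new DataInputStream(in);
        int tag = data.readUnsignedByte();      // e.g. 0x30 (SEQUENCE) for an LDAPMessage
        int first = data.readUnsignedByte();    // short-form length, or count of length octets
        byte[] lengthOctets = new byte[0];
        int length;
        if ((first & 0x80) == 0) {
            length = first;                     // short form: the byte itself is the length
        } else {
            lengthOctets = new byte[first & 0x7F];
            data.readFully(lengthOctets);       // long form: read the length octets
            length = 0;
            for (byte b : lengthOctets) {
                length = (length << 8) | (b & 0xFF);
            }
        }
        int headerLength = 2 + lengthOctets.length;
        byte[] message = new byte[headerLength + length];
        message[0] = (byte) tag;
        message[1] = (byte) first;
        System.arraycopy(lengthOctets, 0, message, 2, lengthOctets.length);
        data.readFully(message, headerLength, length);  // blocks until the whole value has arrived
        return message;
    }
}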