I have a program that uses a buffer pool to reduce allocations in several performance-sensitive sections of code.
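For context, the kind of pool I mean is typically built on `sync.Pool` — a minimal sketch, assuming fixed-capacity byte buffers (the names `bufferPool` and the size 20 are just illustrative, not my real code):

```go
package main

import (
	"fmt"
	"sync"
)

// bufferPool hands out fixed-capacity byte slices for reuse,
// so hot paths avoid repeated allocations.
var bufferPool = sync.Pool{
	New: func() any { return make([]byte, 20) },
}

func main() {
	buf := bufferPool.Get().([]byte)
	fmt.Println(len(buf), cap(buf)) // 20 20
	// ... use buf ...
	bufferPool.Put(buf) // return it for later reuse
}
```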
Something like this:
    // some file or any data source
    var r io.Reader = bytes.NewReader([]byte{1, 2, 3})

    // initialize slice to max expected capacity
    dat := make([]byte, 20)

    // read some data into it, then trim to the actual length
    n, err := r.Read(dat)
    handle(err)
    dat = dat[:n]

    // now I want to reuse it:
    for len(dat) < cap(dat) {
        dat = append(dat, 0)
    }
    log.Println(len(dat))

    // add it to the free list for reuse later
    // bufferPool.Put(dat)
I always allocate buffers with a fixed length that is guaranteed to be larger than the maximum expected data size. To use the buffer I need to trim the slice down to the actual data length, but I also need it back at its full length so I can read into it again the next time.
The only way I know to grow the slice back is with append, so that is what I use. But the loop feels dirty, and potentially inefficient. My benchmarks show it is not terrible, but it seems like there must be a better way.
I only know a little about the internal representation of slices, but if I could somehow reset the length field without actually appending data, that would be ideal. I don't need to zero the buffer or anything like that.
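To make the goal concrete: I would love it if a plain re-slice were the idiomatic answer. It seems to work, since a slice expression may extend the length up to the capacity without copying anything:

```go
package main

import "fmt"

func main() {
	dat := make([]byte, 20)

	dat = dat[:3] // pretend we read 3 bytes and trimmed
	fmt.Println(len(dat), cap(dat)) // 3 20

	dat = dat[:cap(dat)] // restore full length; no data is copied
	fmt.Println(len(dat), cap(dat)) // 20 20
}
```

Is this safe and idiomatic, or does it rely on behavior I shouldn't count on?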
Is there a better way to do this?