When does Go allocate memory when converting a string to bytes?

package hash_test

import (
    "bytes"
    "testing"

    "github.com/spaolacci/murmur3" // provides Sum32 (see link below)
)

var testString = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

//var testString = "ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZ"

func BenchmarkHashing900000000(b *testing.B) {
    var bufByte = bytes.Buffer{}
    for i := 0; i < b.N; i++ {
        bufByte.WriteString(testString)
        murmur3.Sum32(bufByte.Bytes())
        bufByte.Reset()
    }
}

func BenchmarkHashingWithNew900000000(b *testing.B) {
    for i := 0; i < b.N; i++ {
        bytStr := []byte(testString)
        murmur3.Sum32(bytStr)
    }
}

Test results:

With testString = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
BenchmarkHashing900000000-4         50000000            35.2 ns/op         0 B/op          0 allocs/op
BenchmarkHashingWithNew900000000-4  50000000            30.9 ns/op         0 B/op          0 allocs/op

With testString = "ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZ"
BenchmarkHashing900000000-4         30000000            46.6 ns/op         0 B/op          0 allocs/op
BenchmarkHashingWithNew900000000-4  20000000            73.0 ns/op        64 B/op          1 allocs/op

Why is there an allocation in the case of BenchmarkHashingWithNew900000000 when the string is long, but none when the string is short?
Sum32: https://gowalker.org/github.com/spaolacci/murmur3
I am using go1.6
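
For reference, the question does not show the command used: the B/op and allocs/op columns appear when the benchmarks are run with allocation reporting enabled, presumably something like:

    go test -bench=. -benchmem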

2 answers

Your benchmarks are hitting a curious optimization in the Go compiler (I am looking at version 1.8).

You can see the change from Dmitry Vyukov here:

https://go-review.googlesource.com/c/go/+/3120

The change allows the compiler to allocate the temporary buffer for a string-to-[]byte conversion on the stack instead of the heap when escape analysis shows that the result does not escape. The reasoning and details are laid out in the PR description.
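
To make the mechanism concrete, here is a simplified sketch modeled on stringtoslicebyte and tmpBuf in runtime/string.go; details are trimmed, so treat it as an illustration rather than the real implementation:

    package main

    // Sketch modeled on runtime/string.go (Go 1.8); simplified for
    // illustration, not the actual runtime code.

    const tmpStringBufSize = 32

    type tmpBuf [tmpStringBufSize]byte

    // The compiler passes a non-nil buf (a stack-allocated array) when
    // escape analysis proves the resulting []byte does not escape.
    func stringtoslicebyte(buf *tmpBuf, s string) []byte {
        var b []byte
        if buf != nil && len(s) <= len(buf) {
            // Small string, non-escaping result: reuse the caller's
            // stack buffer, no heap allocation.
            b = buf[:len(s)]
        } else {
            // Long string, or a result that escapes: heap allocation.
            b = make([]byte, len(s))
        }
        copy(b, s)
        return b
    }

    func main() {
        var buf tmpBuf // in real code the compiler materializes this on the stack
        println(string(stringtoslicebyte(&buf, "ABCDEFGHIJKLMNOPQRSTUVWXYZ")))
    }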

You can see a stripped-down pair of benchmarks demonstrating both cases here:

https://gist.github.com/fmstephe/f0eb393c4ec41940741376ab08cbdf7e

In BenchmarkHashingWithNew900000000, the interesting line is the conversion

bytStr := []byte(testString)

This converts testString into a []byte. The compiler can prove that bytStr does not escape the call to Sum32, so it can place the bytes in a temporary buffer on the stack rather than on the heap. That buffer is 32 bytes long, which is why only strings of 32 bytes or fewer can be converted from string to []byte without allocating.
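
If you want to see the 32-byte threshold directly, a minimal pair of benchmarks in the spirit of the gist above (the names and string contents here are mine) would look like this:

    package demo

    import "testing"

    var str32 = "0123456789abcdef0123456789abcdef"  // exactly 32 bytes
    var str33 = "0123456789abcdef0123456789abcdef!" // 33 bytes

    var sink byte // keeps the compiler from discarding the conversion

    func BenchmarkConvert32(b *testing.B) {
        for i := 0; i < b.N; i++ {
            bs := []byte(str32) // fits the 32-byte stack buffer: expect 0 allocs/op
            sink = bs[0]
        }
    }

    func BenchmarkConvert33(b *testing.B) {
        for i := 0; i < b.N; i++ {
            bs := []byte(str33) // one byte over the limit: expect 1 alloc/op
            sink = bs[0]
        }
    }

Run them with go test -bench=. -benchmem; go build -gcflags='-m' will also print the escape-analysis decisions behind this.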

As far as I can tell this optimization is not documented anywhere, so if your code depends on it, be aware that the behavior may change in a future version of Go.


The bytes.Buffer version shows zero allocations because the buffer's internal storage is reused across iterations. bytes.Buffer.Reset does not free the underlying array; it only truncates the buffer to zero length, so every subsequent WriteString writes into memory that has already been allocated.

Any allocation the buffer does make happens at most once, and averaged over 50000000 iterations it rounds down to 0 allocs/op. This reuse works because bufByte is declared outside the for loop, so its storage survives from one iteration to the next.
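
To check that Reset really keeps the underlying storage, here is a small illustration using only the documented bytes.Buffer API (Len and Cap):

    package main

    import (
        "bytes"
        "fmt"
    )

    func main() {
        var buf bytes.Buffer
        buf.WriteString("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
        fmt.Println(buf.Len(), buf.Cap()) // 26 and a capacity of at least 26

        buf.Reset() // length drops to zero, capacity is retained
        fmt.Println(buf.Len(), buf.Cap()) // 0 and the same capacity as before

        buf.WriteString("ABCDEFGHIJKLMNOPQRSTUVWXYZ") // reuses the existing array
    }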


Source: https://habr.com/ru/post/1648952/

