One common way to add managed parallelism to a loop like this is to create a pool of worker goroutines that read jobs from a channel. The runtime.NumCPU function can help you decide how many workers it makes sense to run (make sure GOMAXPROCS is set so the runtime actually uses those CPUs; since Go 1.5 it defaults to the number of CPUs). Then you simply send tasks on the channel, and the workers will process them.
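For example, a minimal sketch of picking the worker count (the explicit GOMAXPROCS call is only needed on Go versions before 1.5, where the default was 1):

nworkers := runtime.NumCPU()
runtime.GOMAXPROCS(nworkers) // redundant since Go 1.5, where this is already the default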
In this case, each task initializes one element of a shared slice, so a channel of *Individual pointers makes sense. Something like this:
ch := make(chan *Individual)
for i := 0; i < nworkers; i++ {
    go initIndividuals(individualSize, ch)
}
population := make([]Individual, populationSize)
for i := 0; i < len(population); i++ {
    ch <- &population[i]
}
close(ch)
A worker goroutine will look something like this:
func initIndividuals(size int, ch <-chan *Individual) {
    for individual := range ch {
        *individual = createIndividual(size) // fill the slot this pointer refers to
    }
}
Since the tasks are not handed out in advance, it does not matter that createIndividual takes a variable amount of time: each worker picks up a new task only after finishing its previous one, and exits when no tasks are left (because the channel has been closed by then).
But how do we know when the work is done? The sync.WaitGroup type can help here. The code that creates the workers can be changed as follows:
ch := make(chan *Individual)
var wg sync.WaitGroup
wg.Add(nworkers)
for i := 0; i < nworkers; i++ {
    go initIndividuals(individualSize, ch, &wg)
}
The initIndividuals function is also modified to accept the extra parameter and to run defer wg.Done() as its first statement. Now a call to wg.Wait() will block until all the workers have finished, after which you can return the fully constructed population slice.
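Putting it together, the modified worker and the final wait would look something like this (a sketch; createIndividual is assumed to be your existing constructor taking the individual's size):

func initIndividuals(size int, ch <-chan *Individual, wg *sync.WaitGroup) {
    defer wg.Done() // signal completion when this worker exits
    for individual := range ch {
        *individual = createIndividual(size)
    }
}

// ... after sending all the tasks:
close(ch)          // no more tasks: the workers' range loops terminate
wg.Wait()          // block until every worker has called Done
return population  // the slice is now fully initialized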