A simple and efficient pool that recycles goroutines instead of creating new ones on the fly. It supports aggregating data into a buffered channel: the caller waits for all the workers to complete and then reads every worker's response from that channel.
To start using goroutine-pool, install Go and run go get:
$ go get -u github.com/vaibhavmew/goroutine-pool
Check the following examples to understand the usage: submit and aggregate.
- Get the response from p.Submit() once the request is submitted.
- Aggregate data from all the workers: pass a buffered channel to the pool and the data will be inserted into it.
- Supports any data type in the request and response. Just modify the request and response structs.
- Supports timeouts, so no worker is blocked indefinitely.
- The pool doesn't use queues, sync.Pool, or any other mechanism to fetch and return workers.
- No use of interface{} or its conversions anywhere.
- Instead of passing a function into the pool, we pass the request body.
Increase the pool size at runtime using Tune:
p, err := pool.New(5)
if err != nil {
panic(err)
}
err = p.Tune(10) // 10 is the new pool size
if err != nil {
panic(err)
}
Close the pool once the workers are done:
err = p.Close()
if err != nil {
panic(err)
}
Check the error returned by a worker:
response := p.Submit(request)
if response.Err != nil {
fmt.Println(response.Err)
}