Go: Transferring Functions Through Channels

I am trying to rate-limit the functions that I call by placing them in a queue to be processed later. Below is the queue I created, where the requestHandler function processes each request at a fixed rate.

I want it to accept all kinds of functions with different parameter types, hence the interface{} type.

How can I pass functions through a channel and call them successfully?

```go
type request struct {
	function interface{}
	channel  chan interface{}
}

var requestQueue []request

func pushQueue(f interface{}, ch chan interface{}) {
	req := request{f, ch}
	// push
	requestQueue = append(requestQueue, req)
}

func requestHandler() {
	for {
		if len(requestQueue) > 0 {
			// pop
			req := requestQueue[len(requestQueue)-1]
			requestQueue = requestQueue[:len(requestQueue)-1]
			req.channel <- req.function
		}
		<-time.After(1200 * time.Millisecond)
	}
}
```

Here is an example of what I'm trying to achieve (GetLeagueEntries(string, string) and GetSummonerName(int, int) are functions):

```go
ch := make(chan interface{})

pushQueue(l.GetLeagueEntries, ch)
pushQueue(l.GetSummonerName, ch)

leagues, _ := <-ch(string1, string2)
summoners, _ := <-ch(int1, int2)
```
3 answers

Ok, here is the code: https://play.golang.org/p/XZvb_4BaJF

Please note that this is not ideal. You have a queue that is polled once per second. If the queue is empty and a new item is added, the new item can wait almost a full second before it is executed.

But this should be very close to what you need :)

This code can be divided into 3 sections:

  • A rate-limited queue executor that I call a server (I'm terrible at naming things). The server knows nothing about the functions themselves. All it does is run an endless goroutine that pops the oldest function off the queue once per second and calls it. The problem I mentioned above lives in this section of the code, BTW, and I could help you fix it if you want.
  • A button-click function - shows how each button click can call 3 different functions (you could obviously make more/fewer calls) through the server while guaranteeing that they run 1 second apart. You can even add a sleep to any of the functions (to fake delay) and they will still be called 1 second apart. This is the only place you need channels, because you want to start all the function calls as quickly as possible (if the first function takes 5 seconds, you only want to wait 1 second before calling the second), and then wait for them all to finish, so you need to know when everything is done.
  • A button-click simulation (the main function) - just shows that 3 button clicks work as expected. You could also put them in goroutines to simulate 3 users pressing the button at the same time, and it would still work.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

const (
	requestFreq = time.Second
)

type (
	// A single request
	request func()

	// The server that will hold a queue of requests and make them once per requestFreq
	server struct {
		// This will tick once per requestFreq
		ticker *time.Ticker

		requests []request
		// Mutex for working with the request slice
		sync.RWMutex
	}
)

var (
	createServerOnce sync.Once
	s                *server
)

func main() {
	// Multiple button clicks:
	ButtonClick()
	ButtonClick()
	ButtonClick()

	fmt.Println("Done!")
}

// BUTTON LOGIC:

// Calls 3 functions and returns 3 different values.
// Each function is called at least 1 second apart.
func ButtonClick() (val1 int, val2 string, val3 bool) {
	iCh := make(chan int)
	sCh := make(chan string)
	bCh := make(chan bool)

	go func() {
		Server().AppendRequest(func() {
			t := time.Now()
			fmt.Println("Calling func1 (time: " + t.Format("15:04:05") + ")")
			// do some stuff
			iCh <- 1
		})
	}()
	go func() {
		Server().AppendRequest(func() {
			t := time.Now()
			fmt.Println("Calling func2 (time: " + t.Format("15:04:05") + ")")
			// do some stuff
			sCh <- "Yo"
		})
	}()
	go func() {
		Server().AppendRequest(func() {
			t := time.Now()
			fmt.Println("Calling func3 (time: " + t.Format("15:04:05") + ")")
			// do some stuff
			bCh <- true
		})
	}()

	// Wait for all 3 calls to come back
	for count := 0; count < 3; count++ {
		select {
		case val1 = <-iCh:
		case val2 = <-sCh:
		case val3 = <-bCh:
		}
	}

	return
}

// SERVER LOGIC

// Factory function that will only ever create a single server
func Server() *server {
	// Only one server for the entire application
	createServerOnce.Do(func() {
		s = &server{
			ticker:   time.NewTicker(requestFreq),
			requests: []request{},
		}
		// Start a goroutine to make requests.
		go s.makeRequests()
	})
	return s
}

func (s *server) makeRequests() {
	if s == nil || s.ticker == nil {
		return
	}

	// This will keep going once per requestFreq
	for range s.ticker.C {
		var r request

		// You can't just access s.requests, because you are in a goroutine
		// here while someone could be adding new requests outside of the
		// goroutine, so you have to use locks.
		s.Lock()
		if len(s.requests) > 0 {
			// We have the lock here, which blocks all other operations,
			// so just shift the first request out, save it, and give
			// the lock back before doing any work.
			r = s.requests[0]
			s.requests = s.requests[1:]
		}
		s.Unlock()

		if r != nil {
			// make the request!
			r()
		}
	}
}

func (s *server) AppendRequest(r request) {
	if s == nil {
		return
	}
	s.Lock()
	s.requests = append(s.requests, r)
	s.Unlock()
}
```

First I would write this as:

```go
leagues := server.GetLeagueEntries()
summoners := server.GetSummoners()
```

And put a rate limit on the server, using one of the rate-limiting libraries.

However, you can use an interface to unify the requests, and use a func type to allow closures (as with http.HandlerFunc):

```go
type Command interface {
	Execute(server *Server)
}

type CommandFunc func(server *Server)

func (fn CommandFunc) Execute(server *Server) {
	fn(server)
}

type GetLeagueEntries struct {
	Leagues []League
}

func (entries *GetLeagueEntries) Execute(server *Server) {
	// ...
}

func GetSummonerName(id int, result *string) CommandFunc {
	return CommandFunc(func(server *Server) {
		*result = "hello"
	})
}

get := GetLeagueEntries{}
requests <- &get

requests <- CommandFunc(func(server *Server) {
	// ... handle stuff here
})
```

Of course, this requires some synchronization.


I would have thought it would be easier to use some kind of semaphore or a worker pool. That way you have a limited number of workers that can do the calls. It is also possible to have several worker pools.

Do you need any of these calls to be concurrent/asynchronous? If not, and they can be called in order, then you could have a configurable sleep between them (a nasty hack, mind you).

Try a worker pool or a semaphore rather than a channel of functions.


Source: https://habr.com/ru/post/1240057/
