I've read about the various timeouts available for an HTTP request, and all of them seem to act as hard deadlines on the total duration of the request.
I'm starting an HTTP download and I don't want to enforce a hard timeout after the initial handshake, since I know nothing about my users' connections and don't want to time out the slow ones. Ideally, I'd like the request to fail after a period of inactivity (when nothing has been downloaded for x seconds). Is there a built-in way to do this, or do I need to interrupt the download myself based on stat-ing the file?
The working code is a little hard to isolate, but I believe these are the relevant parts. There is another loop that stats the file to measure progress, but I would need to rework that a bit to use it to interrupt the download:
func HttpsClientOnNetInterface(interfaceIP []byte, httpsProxy *Proxy) (*http.Client, error) {
	log.Printf("Got IP addr : %s\n", string(interfaceIP))
	tcpAddr := &net.TCPAddr{
		IP: interfaceIP,
	}
	netDialer := net.Dialer{
		LocalAddr: tcpAddr,
	}
	var proxyURL *url.URL
	var err error
	if httpsProxy != nil {
		proxyURL, err = url.Parse(httpsProxy.String())
		if err != nil {
			return nil, fmt.Errorf("error parsing proxy connection string: %s", err)
		}
	}
	httpTransport := &http.Transport{
		Dial:  netDialer.Dial,
		Proxy: http.ProxyURL(proxyURL),
	}
	httpClient := &http.Client{
		Transport: httpTransport,
	}
	return httpClient, nil
}
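As an aside, Transport.Dial is deprecated in modern Go in favor of DialContext, which also lets cancellation propagate into the dial itself. Here is a minimal sketch of the same client construction using DialContext; the buildClient name and the 30-second connect timeout are my own choices, and the proxy handling from the helper above is omitted for brevity. Note the Dialer.Timeout only bounds the connection attempt, not the whole request:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
	"time"
)

// buildClient is a sketch of the helper above using the non-deprecated
// DialContext hook. localIP may be nil, in which case the OS picks the
// source address.
func buildClient(localIP net.IP) *http.Client {
	dialer := &net.Dialer{
		LocalAddr: &net.TCPAddr{IP: localIP},
		Timeout:   30 * time.Second, // bounds only the dial, not the download
	}
	transport := &http.Transport{
		DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
			return dialer.DialContext(ctx, network, addr)
		},
	}
	return &http.Client{Transport: transport}
}

func main() {
	c := buildClient(nil)
	fmt.Println(c.Transport != nil) // prints "true"
}
```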
func StartDownloadWithProgress(interfaceIP []byte, httpsProxy *Proxy, srcURL, dstFilepath string) (*Download, error) {
	httpClient, err := HttpsClientOnNetInterface(interfaceIP, httpsProxy)
	if err != nil {
		return nil, err
	}
	headResp, err := httpClient.Head(srcURL)
	if err != nil {
		log.Printf("error on head request (download size): %s", err)
		return nil, err
	}
	size, err := strconv.Atoi(headResp.Header.Get("Content-Length"))
	if err != nil {
		headResp.Body.Close()
		return nil, err
	}
	headResp.Body.Close()
	errChan := make(chan error)
	doneChan := make(chan struct{})
	go func(httpClient *http.Client, srcURL, dstFilepath string, errChan chan error, doneChan chan struct{}) {
		resp, err := httpClient.Get(srcURL)
		if err != nil {
			errChan <- err
			return
		}
		defer resp.Body.Close()
		outFile, err := os.Create(dstFilepath)
		if err != nil {
			errChan <- err
			return
		}
		defer outFile.Close()
		log.Println("starting copy")
		_, err = io.Copy(outFile, resp.Body)
		if err != nil {
			log.Printf("\n Download Copy Error: %s \n", err.Error())
			errChan <- err
			return
		}
		doneChan <- struct{}{}
	}(httpClient, srcURL, dstFilepath, errChan, doneChan)
	return (&Download{
		updateFrequency: time.Microsecond * 500,
		total:           size,
		errRecieve:      errChan,
		doneRecieve:     doneChan,
		filepath:        dstFilepath,
	}).Start(), nil
}
Update
Thanks to everyone who contributed to this.
I accepted JimB's answer as it seems like a perfectly viable approach and is more general than the solution I chose (and probably more useful for anyone who finds their way here).
In my case, I already had a loop monitoring the file size, so I return a named error if it hasn't changed in x seconds. It was much easier for me to pick up that named error in my existing error handling and retry the download from there.