select() always returns 1; problems with connected TCP sockets in C++

I am working on a C++ project that requires the server to spawn a new thread to handle each connection whenever accept() returns a new socket descriptor. I use select() to detect when a connection attempt has been made, and also when the client has sent data on the newly created client socket (the one accept() returns). So there are two functions and two select() calls: one polling the socket that listens for connections, and one polling the socket created for each successfully accepted connection.

The first case behaves as I expect: FD_ISSET() returns true for my listening socket's descriptor only when a connection is being requested, and false until the next connection attempt. The second case does not work, even though the code is identical apart from using a different fd_set and socket descriptor. I am wondering whether this is because it is a connected TCP socket. Do such sockets always report as readable when polled with select() because of their streaming nature?

//working snippet
struct timeval tv;
tv.tv_sec = 0;
tv.tv_usec = 500000;
fd_set readfds;
FD_ZERO(&readfds);
FD_SET(sid, &readfds);
//start server loop
for(;;){
    //check if listening socket has any client requests, timeout at 500 ms
    int numsockets = select(sid+1, &readfds, NULL, NULL, &tv);
    if(numsockets == -1){
        if(errno == 4){
            printf("SIGINT received in select\n");
            FD_ZERO(&readfds);
            myhandler(SIGINT);
        }else{
            perror("server select");
            exit(1);
        }
    }
    //check if listening socket is ready to be read after select returns
    if(FD_ISSET(sid, &readfds)){
        int newsocketfd = accept(sid, (struct sockaddr*)&client_addr, &addrsize);
        if(newsocketfd == -1){
            if(errno == 4){
                printf("SIGINT received in accept\n");
                myhandler(SIGINT);
            }else{
                perror("server accept");
                exit(1);
            }
        }else{
            s->forkThreadForClient(newsocketfd);
        }
    }

//non working snippet
//setup client socket with select functionality
struct timeval ctv;
ctv.tv_sec = 0;
ctv.tv_usec = 500000;
fd_set creadfds;
FD_ZERO(&creadfds);
FD_SET(csid, &creadfds);
for(;;){
    //check if client socket has any data, timeout at 500 ms
    int numsockets = select(csid+1, &creadfds, NULL, NULL, &ctv);
    if(numsockets == -1){
        if(errno == 4){
            printf("SIGINT received in client select\n");
            FD_ZERO(&creadfds);
            myhandler(SIGINT);
        }else{
            perror("server select");
            exit(1);
        }
    }else{
        printf("Select returned %i\n", numsockets);
    }
    if(FD_ISSET(csid, &creadfds)){
        //read header
        unsigned char header[11];
        for(int i = 0; i < 11; i++){
            if(recv(csid, rubyte, 1, 0) != 0){
                printf("Received %X from client\n", *rubyte);
                header[i] = *rubyte;
            }
        }

Any help would be appreciated.


Thanks for the answers, but I do not believe the problem is with where the timeout value sits relative to the loop. I tested it, and even with tv reset and the fd_set re-initialized on every iteration of the server loop, select() still returns 1 immediately. I feel there is a problem with how select() handles my TCP socket: every time I set up the fd_set and the highest descriptor number to include my TCP socket, select() returns immediately with that socket marked as ready. In addition, the client is not sending anything; it simply connects.

+4
2 answers

One thing you should do is reset the tv value to your desired timeout every time before you call select(). The select() function modifies the values in tv to indicate how much of the timeout was left when it returned. If you do not reset it, your select() calls will eventually end up using a zero timeout, which is inefficient.

Some other operating systems implement select() differently and do not modify the tv value. Linux does modify it, so you must reset it before each call.
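A minimal sketch of that pattern, reusing the sid descriptor and the 500 ms timeout from the question (the EINTR handling is simplified to a plain retry here instead of calling the question's myhandler):

 for(;;){
     // re-arm both the fd_set and the timeout on every iteration:
     // select() overwrites readfds with the ready descriptors and,
     // on Linux, decrements tv by the time it already waited
     fd_set readfds;
     FD_ZERO(&readfds);
     FD_SET(sid, &readfds);

     struct timeval tv;
     tv.tv_sec  = 0;
     tv.tv_usec = 500000;   // 500 ms

     int numsockets = select(sid + 1, &readfds, NULL, NULL, &tv);
     if(numsockets == -1){
         if(errno == EINTR) continue;   // interrupted by a signal, just retry
         perror("server select");
         exit(1);
     }
     if(numsockets > 0 && FD_ISSET(sid, &readfds)){
         // the listening socket is readable, so accept() will not block here
     }
 }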

+6

Move

  FD_ZERO(&creadfds);
  FD_SET(csid, &creadfds);

into the loop. select() reports its result in this structure, overwriting the set you passed in, so it has to be rebuilt before every call. You already read that result with

 FD_ISSET(csid,&creadfds); 
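
Applied to the non-working snippet, the client loop would look roughly like this (same csid, ctv, and creadfds names as in the question, with the timeout reset from the other answer folded in as well):

 for(;;){
     // rebuild the read set and the timeout before every select() call,
     // since select() overwrites both of them
     FD_ZERO(&creadfds);
     FD_SET(csid, &creadfds);
     ctv.tv_sec  = 0;
     ctv.tv_usec = 500000;   // 500 ms

     int numsockets = select(csid + 1, &creadfds, NULL, NULL, &ctv);
     if(numsockets == -1){
         perror("client select");
         exit(1);
     }
     if(numsockets > 0 && FD_ISSET(csid, &creadfds)){
         // the client socket has data (or EOF) ready for recv()
     }
 }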
+3

