EOF is -1 because that is how it is defined. The name is provided by the standard library headers that you #include. They make it equal to -1 because it must be a value that cannot be mistaken for an actual byte read by getchar(). getchar() reports actual bytes using non-negative numbers (0 to 255 inclusive), so -1 works well for this.
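For reference, EOF comes from <stdio.h>. Strictly speaking the C standard only requires it to be some negative int, but -1 is what essentially every implementation uses. A minimal sketch to see its value on your own system:

    #include <stdio.h>

    int main(void)
    {
        /* EOF is defined in <stdio.h>; this prints -1 on typical implementations */
        printf("EOF is defined as %d\n", EOF);
        return 0;
    }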
The != operator means "not equal". 0 means false, and everything else means true. So what happens is: we call the getchar() function and compare the result with -1 (EOF). If the result is not equal to EOF, the comparison is true, because things that are not equal are not equal. If the result is equal to EOF, the comparison is false, because things that are equal are not "not equal".
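Here is a tiny sketch of that comparison in isolation (the variable names are mine, not from the book):

    #include <stdio.h>

    int main(void)
    {
        int c = getchar();           /* int, not char, so EOF fits alongside 0..255 */
        int not_at_end = (c != EOF); /* != yields 1 (true) or 0 (false) */

        printf("getchar() returned %d; (c != EOF) evaluated to %d\n", c, not_at_end);
        return 0;
    }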
The getchar() call returns EOF when you reach the "end of file". As far as C is concerned, the "standard input" (the data you pass to your program by typing in the command window) is just like a file. Of course, you can always type more, so you need an explicit way to say "I am done". On Windows systems, that is control-Z. On Unix systems, it is control-D.
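Putting the pieces together, here is a sketch of the classic read-until-EOF loop: run it, type some text, then press control-D (Unix) or control-Z followed by Enter (Windows) to end the input.

    #include <stdio.h>

    int main(void)
    {
        long count = 0;
        int c;

        /* Keep reading until getchar() reports the end of input */
        while ((c = getchar()) != EOF) {
            ++count;
        }
        printf("Read %ld bytes before EOF\n", count);
        return 0;
    }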
The example in the book is not "wrong". It depends on what you actually want to do. Reading until EOF means you read everything, until the user says "I am done", after which there is nothing more to read. Reading until '\n' means you read a single line of input. Reading until '\0' is a bad idea if you expect the user to type the input, because it is difficult or impossible to produce that byte from the keyboard at the command line :)
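For contrast, a sketch of the read-one-line variant: it stops at '\n' instead of EOF (with an EOF check as well, in case the input ends without a newline):

    #include <stdio.h>

    int main(void)
    {
        int c;

        /* Echo one line back: stop at the newline, or at EOF if the
           input ends without one */
        while ((c = getchar()) != '\n' && c != EOF) {
            putchar(c);
        }
        putchar('\n');
        return 0;
    }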
Karl Knechtel Dec 05 '10 at 12:26