Objective-C: dividing two ints

I am trying to produce a float by dividing two ints in my program. Here is what I expect:

1/120 = 0.00833

Here is the code I'm using:

 float a = 1 / 120; 

However, this does not give me the result that I would expect. When I print it, I get the following:

 inf 
4 answers

Do the following:

 float a = 1./120.; 

You need to indicate that you want to use floating point math.

There are several ways to do this:

  • If you really are just dividing two constants, you can request floating-point math by making the first constant a float (or double). All that is required is a decimal point.

     float a = 1./120; 

    You do not need to make the second constant floating point as well, though it doesn't hurt anything.

  • Honestly, a bare decimal point is pretty easy to miss, so I suggest adding a trailing zero and some spacing:

     float a = 1.0 / 120; 
  • If you really need to do the math with an integer variable, you can cast it:

     float a = (float)i/120; 
 float a = 1/120;
 float b = 1.0/120;
 float c = 1.0/120.0;
 float d = 1.0f/120.0f;
 NSLog(@"Value of A:%f B:%f C:%f D:%f", a, b, c, d);

Output:

 Value of A:0.000000 B:0.008333 C:0.008333 D:0.008333

For the variable a: int / int yields an int, which is assigned to the float and printed as 0.000000.

For the variable b: floating point / int yields a floating-point result, which is assigned to the float and printed as 0.008333.

For the variable c: floating point / floating point yields a floating-point result, so again 0.008333.

The last one, d, does the arithmetic entirely in float. The earlier expressions are computed in double: floating-point constants are of type double unless the value is followed by an f suffix, which specifically marks it as a float rather than a double.


In C (and therefore also in Objective-C), expressions are almost always evaluated regardless of the context in which they appear.

The expression 1/120 is a division of two int operands, so it yields an int result. Integer division truncates, so 1/120 yields 0. The fact that the result is used to initialize a float object does not change how 1/120 is evaluated.

This can be counterintuitive at times, especially if you are used to the way calculators usually work (they generally carry all results in floating point).

As the other answers say, to get a result close to 0.00833 (which cannot be represented exactly, BTW), you need to perform floating-point division rather than integer division, by making one or both operands floating point. If one operand is floating point and the other is an integer, the integer operand is converted to floating point first; there is no direct division of a floating-point value by an integer.

Note that, as @0x8badf00d's comment says, the result should be 0, not inf; something else must be wrong for it to print inf. If you can show us more code, preferably a small complete program, we can help figure that out.

(There are languages in which integer division yields a floating-point result. Even in those languages, evaluation does not necessarily depend on context. Python 3 is one such language; C, Objective-C, and Python 2 are not.)


Source: https://habr.com/ru/post/903530/
