What is the point of a width smaller than the precision in printf()?
I came across some code with a line similar to:
    fprintf(fd, "%4.8f", ptr->myFlt);

The code happened to be C++, but I looked up the documentation on printf and its ilk and learned that in this case 4 is the "width" and 8 is the "precision". Width is defined as the minimum number of characters the output will occupy, padded with leading spaces if necessary.
In that case, I can't see what the point of a pattern like "%4.8f" would be, since the 8 (zero-padded if necessary) decimal places after the point already guarantee that the width of 4 is met and exceeded. So I wrote a small test program in Visual C++:
    // Formatting width test
    #include "stdafx.h"

    int _tmain(int argc, _TCHAR* argv[])
    {
        printf("Need width when decimals are smaller: >%4.1f<\n", 3.4567);
        printf("Seems unnecessary when decimals are greater: >%4.8f<\n", 3.4567);
        printf("Doesn't matter if argument has no decimal places: >%4.8f<\n", (float)3);
        return 0;
    }

which gives the following output:
    Need width when decimals are smaller: > 3.5<
    Seems unnecessary when decimals are greater: >3.45670000<
    Doesn't matter if argument has no decimal places: >3.00000000<

In the first case, the precision is less than the specified width, and indeed a leading space is added. When the precision is greater, however, the width seems to be superfluous.
Is there a reason for this format?
The width specifier only affects the output if the converted number would otherwise be narrower than the specified width. Obviously, that can never happen when the precision is set greater than or equal to the width, so the width specification is useless in this case.
Here is the relevant passage from the MSDN documentation; the last sentence explains this.
Missing or small field widths do not cause field truncation; if the conversion result is wider than the field width, the field expands to contain the conversion result.
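To see that rule in action, here is a small stand-alone snippet (the values are just illustrative):

    #include <stdio.h>

    int main(void)
    {
        /* Width 10 is wider than the converted result "3.4567",
           so the field is padded with leading spaces to 10 characters. */
        printf(">%10.4f<\n", 3.4567);   /* prints >    3.4567< */

        /* Width 4 is narrower than the 10-character result "3.45670000",
           so it is simply ignored: the field expands, nothing is truncated. */
        printf(">%4.8f<\n", 3.4567);    /* prints >3.45670000< */

        return 0;
    }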
Perhaps a programmer error? Maybe they swapped the digits and meant %8.4f, or they actually intended %12.8f, or even %012.8f:
    #include <stdio.h>

    int main()
    {
        printf("Seems unnecessary when decimals are greater: >%4.8f<\n", 3.4567);
        printf("Seems unnecessary when decimals are greater: >%8.4f<\n", 3.4567);
        printf("Seems unnecessary when decimals are greater: >%12.4f<\n", 3.4567);
        printf("Seems unnecessary when decimals are greater: >%012.4f<\n", 3.4567);
        return 0;
    }

Output:
    Seems unnecessary when decimals are greater: >3.45670000<
    Seems unnecessary when decimals are greater: >  3.4567<
    Seems unnecessary when decimals are greater: >      3.4567<
    Seems unnecessary when decimals are greater: >0000003.4567<

Probably just a guess, but: precision gives you decimal values of a consistent length, which will not be exceeded even if more decimal places are available. Width, in turn, keeps your number from taking up less space than intended. If you picture some kind of table of numbers, you get uniform columns when every entry in a column has the same width, regardless of the value it contains.
So precision is needed for things like prices, e.g. 10.00 €, where you always want exactly 2 decimal places.
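A minimal sketch of that idea (the prices and the column width of 10 are made up for illustration): a fixed precision plus a width large enough for the biggest expected value keeps a column of numbers lined up.

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical prices; the point is the format, not the data. */
        double prices[] = { 3.5, 10.0, 1234.567 };

        /* "%10.2f": always exactly 2 decimal places (precision), and every
           value is padded to the same 10-character column (width), so the
           decimal points line up. */
        for (int i = 0; i < 3; i++)
            printf("|%10.2f EUR|\n", prices[i]);

        return 0;
    }

which prints:

    |      3.50 EUR|
    |     10.00 EUR|
    |   1234.57 EUR|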
As for your specific line: I guess you are right about the redundancy of the width specifier in this special case.