In my program, the user enters the number of decimal places they want in the output. I save this entry as deci. How can I use this deci variable as a precision modifier?
Example:
Enter a number: 23.6788766
Input decimal places: 2
Output: 23.67
If it is C, you can do:
float floatnumbervalue = 42.3456;
int numberofdecimals = 2;
printf("%.*f", numberofdecimals, floatnumbervalue); /* prints 42.35 */
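Applied to the question's scenario, a minimal sketch might look like the following (assuming the number and deci are read with scanf; the variable name deci comes from the question). Note that printf rounds rather than truncates, so 23.6788766 printed with 2 decimal places comes out as 23.68, not 23.67.

#include <stdio.h>

int main(void)
{
    double value;
    int deci;

    printf("Enter a number: ");
    if (scanf("%lf", &value) != 1)
        return 1;

    printf("Input decimal places: ");
    if (scanf("%d", &deci) != 1)
        return 1;

    /* '*' takes the precision from the int argument placed before the value */
    printf("Output: %.*f\n", deci, value);
    return 0;
}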
In C, for example, to change the default precision of 6 digits to 8:
int precision = 8;
printf("%.*f\n", precision, 1.23456789);
The precision argument must be of type int.
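If the precision happens to be stored in a wider type, cast it to int before passing it, because printf reads that variadic argument as an int. A small illustrative sketch (the size_t variable here is hypothetical, not from the question):

#include <stdio.h>
#include <stddef.h>

int main(void)
{
    size_t places = 3;          /* hypothetical: precision held in a size_t */
    double x = 3.14159265;

    /* cast to int: the '*' precision argument is read as an int */
    printf("%.*f\n", (int)places, x);   /* prints 3.142 */
    return 0;
}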
You can use * modifiers, as shown in the following Wikipedia examples:
printf("%*d", 5, 10) will print "10" with a total width of 5 characters, and printf("%.*s", 3, "abcdef") will result in "abc".
http://en.wikipedia.org/wiki/Printf_format_string
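For reference, here is a small compilable version of those two Wikipedia examples:

#include <stdio.h>

int main(void)
{
    /* width taken from the argument: prints "   10" (right-aligned in 5 columns) */
    printf("%*d\n", 5, 10);

    /* precision taken from the argument: prints at most 3 characters, i.e. "abc" */
    printf("%.*s\n", 3, "abcdef");
    return 0;
}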
Source: https://habr.com/ru/post/1500211/