Would decimal or double be better for transformations that need to be accurate to .00001?

I am an inspector in a machine shop. I have an HTML report created by another inspector that has some problems I need to fix. This is not the first time, and I need something better than PowerShell and regex. (Don't worry, Internet warriors: I know I shouldn't parse HTML with regex. I use HtmlAgilityPack now.)

I know there are plenty of discussions like this on SO and around the Internet, but I haven't found anything specific enough. I can write some small experimental applications to test some of this (and I plan to), but I want a sense of whether it will be safe going forward before I fully implement it. Although I am not a programmer by profession, I am well versed in the concepts involved, so don't worry about talking over my head.

Over the course of a series of transformations, will I accumulate an error greater than .0001? What about .00001?

- If report alignment is off, I may need to rotate and translate it several times.
- Right now I only rotate and translate, but I plan to add more transformations, which could increase the number and complexity of operations.
- The integer component can run into the thousands.
- Our instruments are certified to .0001. Normal significant-figure rules for scientific measurements apply.

Will decimal's overhead and hand-rolled trig functions be prohibitively time-consuming (edit: at runtime)?

- Typically, a report contains from 100 to 100 points. Each point is really two points: Nominal (as modeled) and Actual (as measured).
- This is the easiest part to test, but I want some idea before implementing math functions for decimal.

Side question:
I have a point class, Point3D, containing x, y and z. Since each data point is two of these (Nominal and Actual), I have a MeasuredPoint class with two Point3D instances. There must be a better name than MeasuredPoint that isn't annoyingly long.
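For what it's worth, here is a minimal sketch of that pairing (written in Java for illustration since C# isn't available here; the C# version with two small classes or records is analogous, and the name PointPair is just one suggestion, not an established convention):

```java
// Sketch of the Nominal/Actual pairing described in the question.
// Java records stand in for the C# classes; "PointPair" is a
// hypothetical name -- "NominalActual" or similar would also work.
public class PointPairDemo {
    record Point3D(double x, double y, double z) {}

    // One measured feature: where it should be vs. where it is.
    record PointPair(Point3D nominal, Point3D actual) {
        // Per-axis deviation, the number an inspection report cares about.
        Point3D deviation() {
            return new Point3D(actual.x() - nominal.x(),
                               actual.y() - nominal.y(),
                               actual.z() - nominal.z());
        }
    }

    public static void main(String[] args) {
        var p = new PointPair(new Point3D(1.0, 2.0, 3.0),
                              new Point3D(1.0001, 2.0, 2.9999));
        Point3D d = p.deviation();
        System.out.printf("dx=%.4f dy=%.4f dz=%.4f%n", d.x(), d.y(), d.z());
    }
}
```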

Oh yes, it's C#/.NET. Thanks.

+4
6 answers

Do not do trig functions with decimal! There is a reason the standard library does not provide them: if you are doing trig, decimal gives you no additional benefit.

Since you will be working in radians anyway, your values are defined as multiples or ratios of π, which cannot be represented exactly in any base. Forcing the representation into base ten is more likely to increase rounding error than to reduce it.
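A quick illustration of the point (sketched in Java, with BigDecimal standing in for C#'s decimal; the situation in .NET is the same, since System.Math trig functions only take double):

```java
import java.math.BigDecimal;

public class DecimalTrigDemo {
    public static void main(String[] args) {
        BigDecimal angle = new BigDecimal("0.5000"); // radians, 4 decimal places
        // No sine exists for the decimal type. The only practical route
        // is to drop down to double first, so decimal's extra base-ten
        // precision is discarded before the trig function ever runs.
        double s = Math.sin(angle.doubleValue());
        System.out.println(s);
    }
}
```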

If accuracy (minimizing error in ulps) matters to your application, then you should read What Every Computer Scientist Should Know About Floating-Point Arithmetic by David Goldberg. That paper explains it far better than I can.

The upshot, though, is that if your desired precision is only 5 decimal places, even a 32-bit float (IEEE-754 single precision) will be plenty. A 64-bit IEEE-754 double will give you a smaller error term, but a 128-bit base-10 floating-point value is pure overkill and will almost certainly not improve the accuracy of your results one iota.
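To put rough numbers on that, here is a small experiment (in Java; its double is the same IEEE-754 binary64 type as C#'s, and the coordinates and angle are arbitrary values made up for the test) that applies 200 rotations to a point with coordinates in the thousands and measures the round-trip error:

```java
public class RotationErrorDemo {
    // Rotate (x, y) about the origin by 'radians'.
    static double[] rotate(double x, double y, double radians) {
        double c = Math.cos(radians), s = Math.sin(radians);
        return new double[] { x * c - y * s, x * s + y * c };
    }

    public static void main(String[] args) {
        // Arbitrary test values: coordinates in the thousands,
        // as the question describes, and a made-up angle.
        double x = 1234.5678, y = -987.6543;
        double theta = 0.7123; // radians

        // 100 rotations forward, then 100 back: the point should land
        // exactly where it started, so any difference is accumulated
        // floating-point error.
        double[] p = { x, y };
        for (int i = 0; i < 100; i++) p = rotate(p[0], p[1], theta);
        for (int i = 0; i < 100; i++) p = rotate(p[0], p[1], -theta);

        double err = Math.max(Math.abs(p[0] - x), Math.abs(p[1] - y));
        System.out.printf("error after 200 rotations: %.2e%n", err);
        // In practice this comes out well below 1e-9 -- orders of
        // magnitude tighter than the .00001 requirement.
    }
}
```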

+5

If you need precision that must be maintained over several operations, you should really consider decimal. While a float may be fine for storing a number briefly, no IEEE-754-backed float format can hold its value exactly as the number of operations performed on it increases.

+3

Try to find a library that suits your needs. I stumbled upon W3b.sine in a half-hearted search; I have definitely run into others in the past.

+1

Since you are talking about rotations and translations, and hence trigonometric functions, it seems safe to assume the values you are dealing with are not exact multiples of 0.0001.

Based on this assumption:

  • With decimal values, you will essentially be rounding to 0.0001 (or your chosen precision) after each step, and those rounding errors will accumulate.

  • Double values will usually be more accurate: you store them internally with all available precision and round to four decimal places only when displaying results.

For example, suppose that as a result of a rotation or transformation you want to move a distance of 1/3 (0.3333...), and you want to repeat that move three times.

If you store the distance as a decimal with four decimal places (0.3333), the sum will be 0.9999, an error of 0.0001.

If you store it as a double, you can achieve much greater accuracy, and as a bonus, performance will be better.
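That 1/3 example can be checked directly. A sketch in Java, using BigDecimal for the fixed-four-places decimal path (C#'s decimal behaves the same way for this particular example):

```java
import java.math.BigDecimal;

public class ThirdsDemo {
    public static void main(String[] args) {
        // Decimal path: 1/3 stored to four places, summed three times.
        BigDecimal step = new BigDecimal("0.3333");
        BigDecimal decSum = step.add(step).add(step);
        System.out.println("decimal sum: " + decSum);    // 0.9999

        // Double path: full binary precision throughout, rounded
        // to four places only for display.
        double d = 1.0 / 3.0;
        double dblSum = d + d + d;
        System.out.printf("double sum:  %.4f%n", dblSum); // 1.0000
    }
}
```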

In practice, decimals are generally used only for financial calculations, where results must be rounded exactly to a fixed number of base-ten decimal places.

+1

Floats and doubles are fast approximations.

Beyond values like 0.0 and 1.0, you won't even get exact representations for most simple constants (e.g. 0.1). So if you need to guarantee an exact precision, floating-point arithmetic is not an option.
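A couple of lines make this concrete (Java shown; C# doubles behave identically, since both are IEEE-754 binary64):

```java
public class TenthDemo {
    public static void main(String[] args) {
        // 0.1 has no finite binary representation, so the stored
        // double is only the nearest representable value:
        System.out.printf("%.20f%n", 0.1); // 0.10000000000000000555...
        // The classic consequence:
        System.out.println(0.1 + 0.2 == 0.3); // false
        // ...but the error is tiny, far below a .00001 tolerance:
        System.out.println(Math.abs((0.1 + 0.2) - 0.3) < 1e-9); // true
    }
}
```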

But if the goal is to achieve a certain accuracy, give or take a tiny bit, then double should do. Just watch out for loss of significance.

0

Honestly, I think float was a wrong turn in data processing. Because people work in the decimal system and output is always converted to decimal, float just causes persistent problems. I have used float when a variable could hold a wide range of values, say 1E-9 to 1E9, and otherwise used integers with a fixed number of decimal places managed in code. With Java's BigDecimal class and similar functionality in other languages these days, there is almost no reason to use float. Perhaps in an environment where you do a lot of computation and performance is the concern, you would accept the rounding problems. I don't think I have used float in a program in at least ten years.

0

Source: https://habr.com/ru/post/1299577/

