As a personal project, I am working on adding an arbitrary-precision number type to my favorite project.
I already know about the popular, well-tested libraries that do this; I want to build my own solution as a self-improvement exercise.
Specifically, I am trying to figure out whether there is a way to roughly predict that an operation will overflow before actually performing the calculation. False positives are acceptable.
I want to use the minimum storage needed for each calculation: if the result stays within the native integer range, I will keep it there.
For example, multiplying two 64-bit integers will overflow if each operand is large enough. I want to detect this case and convert the operands to my arbitrary-precision type only when the result can exceed 64 bits of resolution. I will be working with signed numbers in this experiment. A rough sketch of the kind of pre-check I have in mind is below.
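To illustrate, here is a conservative pre-check for signed 64-bit multiplication based on counting significant bits. This is only a sketch of one possible approach, not a finished solution; the function name is mine, and it assumes a compiler with GCC/Clang's `__builtin_clzll`:

```cpp
#include <cstdint>
#include <cstdlib>

// Conservative pre-check for signed 64-bit multiplication.
// May report false positives (flagging products that actually fit),
// but never misses a real overflow.
bool mul_might_overflow(int64_t a, int64_t b) {
    if (a == 0 || b == 0) return false;

    // |INT64_MIN| does not fit in int64_t, so treat it as "might overflow".
    if (a == INT64_MIN || b == INT64_MIN) return true;
    uint64_t ua = static_cast<uint64_t>(std::llabs(a));
    uint64_t ub = static_cast<uint64_t>(std::llabs(b));

    // Number of significant bits in each magnitude (GCC/Clang builtin,
    // only valid for nonzero arguments, which we have ensured above).
    int bits_a = 64 - __builtin_clzll(ua);
    int bits_b = 64 - __builtin_clzll(ub);

    // The product's magnitude needs at most bits_a + bits_b bits;
    // a signed 64-bit value holds 63 magnitude bits.
    return bits_a + bits_b > 63;
}
```

For comparison, GCC and Clang also provide `__builtin_mul_overflow(a, b, &result)`, which gives an exact answer rather than a conservative one, at the cost of actually performing the multiplication.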
What is the most efficient and effective way to detect overflow / underflow?