I am sure you have heard the aphorism that "in theory there is no difference between theory and practice, but in practice there is."
In this case, there are differences in theory, but all of these systems deal with the same finite amount of addressable memory, so in practice there is no difference.
EDIT:
Assuming that you can represent a natural number in any of these systems, you can represent addition in any of them. If the constraints you are worried about do not allow representing a natural number, then you cannot represent the computation Nat * Nat -> Nat either.
Think of a natural number as a pair: a heuristic bound on its maximum bit size, and a lazily evaluated list of bits.
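A minimal Haskell sketch of that pair (the names are invented for illustration, and the bound is kept as a plain Int for brevity, whereas the scheme described below would make the bound a natural number itself):

```haskell
-- A natural number: a heuristic bound on its maximum bit size, plus a
-- lazily evaluated list of bits, least significant bit first.
data Nat = Nat
  { bitBound :: Int     -- heuristic bound on the bit count
  , bits     :: [Bool]  -- lazy bit list, LSB first
  }
```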
In the lambda calculus, you can think of the list as a function that, when called with true, returns the first bit, and when called with false, returns a function that does the same for the second bit, and so on.
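In a typed language you can approximate that self-describing function with a recursive wrapper; a sketch, assuming the bit lists above (the untyped lambda calculus needs no such type, and the names are invented here):

```haskell
-- A bit list as a function: call it with True for the current bit,
-- with False for another such function covering the remaining bits.
newtype BitFun = BitFun (Bool -> Either Bool BitFun)

-- Build the functional encoding from an ordinary lazy bit list
-- (missing high bits read as 0).
toBitFun :: [Bool] -> BitFun
toBitFun bs = BitFun pick
  where
    pick True  = Left (headOr False bs)        -- the current bit
    pick False = Right (toBitFun (drop 1 bs))  -- the rest of the list
    headOr d []      = d
    headOr _ (b : _) = b
```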
Addition is then an operation applied to the zip of these two lazy lists, propagating the carry bit.
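As a sketch, that is a ripple-carry full adder over the lazy LSB-first lists (addBits and addNat are names invented here):

```haskell
-- Add two lazy bit lists (LSB first), threading the carry bit through.
addBits :: Bool -> [Bool] -> [Bool] -> [Bool]
addBits c [] [] = [True | c]  -- emit the final carry, if any
addBits c xs ys = s : addBits c' (drop 1 xs) (drop 1 ys)
  where
    x  = headOr False xs
    y  = headOr False ys
    s  = x /= (y /= c)                -- sum bit: x XOR y XOR carry
    c' = (x && y) || (c && (x /= y))  -- carry out: majority of x, y, c
    headOr d []      = d
    headOr _ (b : _) = b

-- Lift to Nat, bumping the bound by one for a possible final carry.
addNat :: Nat -> Nat -> Nat
addNat (Nat m xs) (Nat n ys) = Nat (max m n + 1) (addBits False xs ys)
```

For example, addBits False [True] [True] is [False, True], i.e. 1 + 1 = 2 with the carry rippling into a new high bit.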
Of course, you must represent the maximum-bit-size heuristic as a natural number itself, but as long as you only create numbers whose bit count is strictly less than the number they represent, and your operators do not violate this heuristic, the bit size is inductively smaller than the numbers you want to manipulate, so the operations terminate.
As for the convenience of accounting for edge cases, C will give you very little help. You can return special values to represent overflow/underflow, and even try to make them infectious (like IEEE-754 NaN), but you won't get any complaints at compile time if you fail to check for them. You could try to overload the SIGFPE signal or something similar to trap problems.
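For contrast, here is the same "infectious special value" idea sketched in Haskell, where the type system does complain if you ignore the special case: Nothing plays the role of NaN and propagates through Maybe (checkedAdd and sumChecked are invented names, not library functions):

```haskell
import Control.Monad (foldM)

-- Addition on machine Ints that yields Nothing on overflow/underflow.
checkedAdd :: Int -> Int -> Maybe Int
checkedAdd x y
  | y > 0 && x > maxBound - y = Nothing  -- would overflow
  | y < 0 && x < minBound - y = Nothing  -- would underflow
  | otherwise                 = Just (x + y)

-- Nothing is infectious: a single overflow poisons the whole sum.
sumChecked :: [Int] -> Maybe Int
sumChecked = foldM checkedAdd 0
```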
Note that you cannot conclude that y is zero from x + y = x: if x is bottom, then x + y = x for any y.
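A quick illustration with the lazy lists above, using Haskell's undefined as bottom: forcing any bit of the sum diverges exactly as forcing x itself would, so x + y is indistinguishable from x.

```haskell
-- The first addend is bottom, so inspecting any bit of the sum diverges.
bottomSum :: [Bool]
bottomSum = addBits False undefined [True, False]
```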
If you want to do symbolic manipulation, Matlab and Mathematica are implemented in C and C++. Python, however, has a well-optimized bigint implementation that is used for all its integer types; it is probably not a bad choice for representing really, really big numbers.
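Haskell's Integer is the analogous thing: an arbitrary-precision integer (GMP-backed in GHC) used transparently, so very large values just work:

```haskell
-- 2^10000 computed exactly as an Integer of about 3011 decimal digits.
big :: Integer
big = 2 ^ (10000 :: Int)

main :: IO ()
main = print (length (show big))  -- prints 3011
```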