It depends. You did not specify an RDBMS, so I can only speak for SQL Server, but data types have different storage costs associated with them: ints range from 1 to 8 bytes, decimals take 5 to 17 bytes, and floats take 4 or 8 bytes.
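As a quick sanity check, `DATALENGTH` reports the bytes SQL Server actually uses to store a value. A minimal sketch (the variable names are mine):

```sql
DECLARE @t TINYINT       = 255,
        @i INT           = 2147483647,
        @b BIGINT        = 9223372036854775807,
        @d DECIMAL(38,0) = 12345678901234567890,
        @f FLOAT         = 1.0;

SELECT DATALENGTH(@t) AS tinyint_bytes,    -- 1
       DATALENGTH(@i) AS int_bytes,        -- 4
       DATALENGTH(@b) AS bigint_bytes,     -- 8
       DATALENGTH(@d) AS decimal38_bytes,  -- 17 (precision 29-38 costs 17 bytes)
       DATALENGTH(@f) AS float_bytes;      -- 8 (float(24), i.e. real, would be 4)
```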
An RDBMS needs to read data pages from disk to find your data (in the worst case), and it can only fit so many rows on an 8 KB data page. So if you store your numbers as 17-byte decimals when, after correctly evaluating your data, a 1-byte tinyint would have sufficed, each page holds roughly 1/17th as many rows, and you read roughly 17 times as many pages from disk for the same data.
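One way to see this on SQL Server is a small A/B test (table and column names here are made up): load the same row count into a 1-byte column and a 17-byte column and compare page counts. The ratio gets diluted by fixed per-row overhead, but the trend shows.

```sql
-- Hypothetical test tables: same data, different column width.
CREATE TABLE dbo.T_tiny (val TINYINT       NOT NULL);
CREATE TABLE dbo.T_dec  (val DECIMAL(38,0) NOT NULL);

INSERT INTO dbo.T_tiny (val)
SELECT TOP (100000) 1
FROM sys.all_objects AS a CROSS JOIN sys.all_objects AS b;

INSERT INTO dbo.T_dec (val)
SELECT TOP (100000) 1
FROM sys.all_objects AS a CROSS JOIN sys.all_objects AS b;

-- Compare pages used: the decimal table needs many more pages for the same rows.
SELECT OBJECT_NAME(ps.object_id) AS table_name,
       ps.page_count,
       ps.avg_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ps
WHERE ps.object_id IN (OBJECT_ID('dbo.T_tiny'), OBJECT_ID('dbo.T_dec'));
```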
This storage cost has a cascading effect when you sort your data. The engine will try to sort in memory, but if you have a bazillion rows and are starved for memory, it will resort to temporary disk storage (tempdb) for the intermediate sort runs, and you pay that I/O cost again and again.
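Continuing the hypothetical tables above, you can watch for this yourself: run a sort over the wide table and check the actual execution plan, where a spill shows up as a Sort Warnings indicator on the Sort operator.

```sql
-- Under memory pressure, the Sort operator writes intermediate runs to tempdb;
-- SSMS flags this with a warning icon on the Sort operator in the actual plan.
SELECT val
FROM dbo.T_dec
ORDER BY val
OPTION (MAXDOP 1);  -- single-threaded, just to keep the plan simple to read
```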
Indexes can help, because the data can be stored pre-sorted, but even then, pulling that data into memory is less efficient for obese data types.
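For example (still using the hypothetical tables above), an index on the sort key lets the engine read rows already in key order and skip the Sort operator entirely, though a wide key still means more pages to scan:

```sql
-- With this index, the plan becomes an ordered index scan: no Sort operator,
-- no risk of a tempdb spill, but 17-byte keys still fill pages faster than 1-byte ones.
CREATE INDEX IX_T_dec_val ON dbo.T_dec (val);

SELECT val
FROM dbo.T_dec
ORDER BY val;
```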
[edit]
@Bohemian makes an accurate point about the raw CPU cost of integer versus floating-point comparisons, but it is surprisingly rare for the CPU to be the bottleneck on a database server. Most likely you are limited by the disk I/O subsystem and memory, so my answer focuses on the cost of getting the data into the engine for the sort operation rather than on the CPU cost of the comparisons themselves.