This may be a simple question. I have an input decimal .12345 and I want to get 12345. The value can change, so I don't want to just multiply it by 100,000. For example, if the input value were .123 I'd want 123; if it were .3219 I'd want 3219.
Here's something to play with (assuming your input is a numeric type, double or single):
f=@(x) x*10^sum(floor(x*10.^[0:16])~=x*10.^[0:16])
The sum counts how many powers of ten it takes before x*10^k becomes an integer, i.e. the number of decimal digits; this assumes floating-point precision of up to 16 digits.
Try f(.123).
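For instance, a quick session check might look like this (a sketch; the exact display depends on your format settings):

f = @(x) x*10^sum(floor(x*10.^[0:16]) ~= x*10.^[0:16]);
f(.123)    % returns 123
f(.3219)   % returns 3219
f(.12345)  % returns 12345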
Note that floating-point error can leave the result slightly off an integer. To force an integer result, wrap the call as uint64(f(x)).
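A minimal sketch of that wrapper (g is a hypothetical name; uint64 rounds to the nearest integer, which absorbs small floating-point residue):

g = @(x) uint64(f(x));
g(.12345)   % returns 12345 as a uint64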
Alternatively, convert the number to a string with num2str, strip the leading 0., and convert the remaining digits back with str2num:
f=@(x) str2num(subsref(num2str(x),struct('type','()','subs',{{1,3:numel(num2str(x))}})));
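For readability, the same string trick can be written as an ordinary function instead of pushing the indexing through subsref, which the one-liner needs only because an anonymous function cannot index an intermediate result directly (frac2int is a hypothetical name):

function n = frac2int(x)
    s = num2str(x);          % e.g. 0.123 -> '0.123'
    n = str2num(s(3:end));   % drop the leading '0.' and parse the digits
end

One caveat either way: num2str's default precision may round long fractions, so passing an explicit precision, e.g. num2str(x, 16), could be a safeguard (an assumption, worth checking on your MATLAB version).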
Source: https://habr.com/ru/post/1693709/