Dynamic allocation in C

I am writing a program and I have the following problem:

    char *tmp;
    sprintf(tmp, "%ld", (long)time_stamp_for_file_name);

Can someone explain how much memory should be allocated for tmp here?

How many characters does a long variable take when printed?

Thanks,

I would also appreciate a link to an exhaustive resource on this topic.

thanks

UPDATE:

Using your examples, I ran into the following problem:

    root@- [/tmp]$cat test.c
    #include <stdio.h>

    int main()
    {
        int len;
        long time = 12345678;
        char *tmp;

        len = snprintf(NULL, 0, "%ld", time);
        printf("Lunghezza:di %ld %d\n", time, len);
        return 0;
    }
    root@- [/tmp]$gcc test.c
    root@- [/tmp]$./a.out
    Lunghezza:di 12345678 -1
    root@- [/tmp]$

So the len returned by snprintf is -1. I compiled on Solaris 9 with the standard compiler.

Please help me!

+4
7 answers

If your compiler complies with C99, you should be able to do this:

    char *tmp;
    int req_bytes = snprintf(NULL, 0, "%ld", (long)time_stamp_for_file_name);

    tmp = malloc(req_bytes + 1);   /* add +1 for the terminating null */
    if (!tmp) {
        die_horrible_death();
    }

    if (snprintf(tmp, req_bytes + 1, "%ld", (long)time_stamp_for_file_name) != req_bytes) {
        die_horrible_death();
    }

Relevant parts of the standard (from the draft document):

  • 7.19.6.5.2: If n is zero, nothing is written, and s may be a null pointer.
  • 7.19.6.5.3: The snprintf function returns the number of characters that would have been written had n been sufficiently large, not counting the terminating null character, or a negative value if an encoding error occurred. Thus, the null-terminated output has been completely written if and only if the returned value is nonnegative and less than n.

If this does not work, I assume your compiler/libc does not support this part of C99, or you may need to enable it explicitly. Running your example (with gcc version 4.5.0 20100610 (prerelease), Linux 2.6.34-ARCH), I get:

    $./example
    Lunghezza:di 12345678 8

+6

It is difficult to say in advance, but I think you can assume it will be no more than 64 bits, so "18,446,744,073,709,551,615" should be the largest possible value. That is 2 + 6*3 = 20 digits; the commas are normally not included. Make it 21 for a negative number. So go with 32 bytes as a nice round size.

It would be better to combine this with snprintf(), so you won't get a buffer overflow if your estimate is off.
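For example (a minimal sketch, using the variable from the question):

    char tmp[32];   /* generous fixed-size buffer, per the 64-bit estimate above */
    snprintf(tmp, sizeof tmp, "%ld", (long)time_stamp_for_file_name);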

+5

The number of characters needed clearly depends on the value: if time_stamp_for_file_name is 0, you only really need 2 bytes. If in doubt, you can use snprintf, which tells you how much space you need:

    int len = snprintf(0, 0, "%ld", (long)time_stamp_for_file_name) + 1;
    char *tmp = malloc(len);
    if (tmp == 0) { /* handle error */ }
    snprintf(tmp, len, "%ld", (long)time_stamp_for_file_name);

Beware of implementations where snprintf returns -1 for insufficient space rather than the required space.
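A minimal sketch of one way to guard against that, reusing the variable from the question, is to fall back to a conservative worst-case size whenever snprintf reports a negative length:

    int len = snprintf(NULL, 0, "%ld", (long)time_stamp_for_file_name);
    if (len < 0)
        len = 3 * sizeof(long) + 1;   /* conservative worst case: digits plus sign */
    char *tmp = malloc(len + 1);      /* +1 for the terminating '\0' */
    if (tmp != NULL)
        snprintf(tmp, len + 1, "%ld", (long)time_stamp_for_file_name);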

As Paul P says, you can determine a fixed upper bound based on the size of long for your implementation. That way you avoid dynamic allocation entirely. For instance:

 #define LONG_LEN (((sizeof(long)*CHAR_BIT)/3)+2) 

(This relies on the fact that log base 2 of 10 is greater than 3.) The +2 gives you 1 for the minus sign and 1 because the integer division rounds down. You will need one more for the nul terminator.
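For instance, a sketch of how it might be used with the sprintf call from the question (time_stamp_for_file_name is the asker's variable):

    #include <limits.h>   /* CHAR_BIT */
    #include <stdio.h>    /* sprintf */

    #define LONG_LEN (((sizeof(long)*CHAR_BIT)/3)+2)   /* as defined above */

    char tmp[LONG_LEN + 1];   /* +1 for the nul terminator, as noted above */
    sprintf(tmp, "%ld", (long)time_stamp_for_file_name);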

Or:

    #define STRINGIFY(ARG) #ARG
    #define EXPAND_AND_STRINGIFY(ARG) STRINGIFY(ARG)

    #define VERBOSE_LONG EXPAND_AND_STRINGIFY(LONG_MIN)
    #define LONG_LEN sizeof(VERBOSE_LONG)

    char tmp[LONG_LEN];
    sprintf(tmp, "%ld", (long)time_stamp_for_file_name);

VERBOSE_LONG may be a slightly larger string than you really need. On my compiler it is (-2147483647L-1). I'm not sure whether LONG_MIN could expand to something like a hexadecimal literal or a compiler built-in, but if so it might be too short and this trick won't work. It is simple enough to unit test, though.
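A sketch of such a unit test might simply format LONG_MIN and check that it fits in the buffer:

    #include <assert.h>
    #include <limits.h>
    #include <stdio.h>

    #define STRINGIFY(ARG) #ARG
    #define EXPAND_AND_STRINGIFY(ARG) STRINGIFY(ARG)
    #define VERBOSE_LONG EXPAND_AND_STRINGIFY(LONG_MIN)
    #define LONG_LEN sizeof(VERBOSE_LONG)

    int main(void)
    {
        char buf[LONG_LEN];
        int n = snprintf(buf, sizeof buf, "%ld", LONG_MIN);
        assert(n > 0 && (size_t)n < sizeof buf);   /* fails if LONG_LEN came out too small */
        return 0;
    }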

If you want a tight upper bound to cover all the possibilities within the standard, up to a certain limit, you could try something like this:

    #if LONG_MAX <= 2147483647L
    #define LONG_LEN 11
    #else
    #if LONG_MAX <= 4294967295L
    #define LONG_LEN 11
    #else
    #if LONG_MAX <= 8589934591L
    ... etc, add more clauses as new architectures are invented with bigger longs
    #endif
    #endif
    #endif

But I doubt it is worth it: it's better to simply define it in some portability header and manually configure it for new platforms.
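A sketch of what such a hand-maintained portability header might look like (the header name and the clauses here are just an illustration):

    /* hypothetical port.h, tuned by hand for each platform */
    #include <limits.h>

    #if LONG_MAX <= 2147483647L
    #define LONG_LEN 11            /* "-2147483648", excluding the nul */
    #elif LONG_MAX <= 9223372036854775807L
    #define LONG_LEN 20            /* "-9223372036854775808", excluding the nul */
    #else
    #error "define LONG_LEN for this platform"
    #endif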

+5

It depends on how big long is on your system. Assuming the worst case of 64 bits, you need at most 22 characters: that allows for 20 digits, the leading minus sign, and the terminating \0. Of course, if you feel extravagant, you can always spare a little more and round it up to something like 32.
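In code, that could simply be (a sketch, with the variable from the question):

    char tmp[22];   /* 20 digits + sign + '\0' for a 64-bit long */
    sprintf(tmp, "%ld", (long)time_stamp_for_file_name);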

+3

log2(10) bits (~3.32) are required to represent one decimal digit; thus, you can calculate the number of digits as follows:

    #include <limits.h>
    #include <math.h>

    long time;
    double bitsPerDigit = log10(10.0) / log10(2.0);   /* or log2(10.0) in C99 */
    size_t digits = ceil((sizeof time * (double) CHAR_BIT) / bitsPerDigit);
    char *tmp = malloc(digits + 2);   /* or simply "char tmp[digits+2];" in C99 */

The "+2" accounts for the sign and the terminating 0.

+2

Octal requires one character per three bits. You are printing in base ten, which never gives more digits than octal for the same number. Therefore, reserve one character for every three bits.

sizeof(long) gives you the number of bytes at compile time. Multiply this by 8 to get bits. Add two before dividing by three to get a ceiling instead of a floor. Remember that C strings want a terminating zero byte at the end, so add one to the result. (And one more for the minus sign, as described in the comments.)

    char tmp[(sizeof(long)*8+2)/3+2];
    sprintf(tmp, "%ld", (long)time_stamp_for_file_name);
+1

3*sizeof(type)+2 is a safe general rule for the number of bytes needed to format an integer of type type as a decimal string. The reason is that 3 is an upper bound on log10(256), and an n-byte integer has n digits in base 256 and therefore at most ceil(log10(256^n)) == ceil(n*log10(256)) digits in base 10. The +2 accounts for the terminating NUL byte and the possible minus sign if type is very small.

If you want to be pedantic and support DSPs etc. with CHAR_BIT != 8, use 3*sizeof(type)*((CHAR_BIT+7)/8)+2. (Note that for POSIX systems this does not matter, since POSIX requires UCHAR_MAX == 255 and CHAR_BIT == 8.)
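As a macro, the rule might look like this (INT_STR_SIZE is a name I made up, not anything standard), again using the variable from the question:

    #include <stdio.h>   /* sprintf */

    #define INT_STR_SIZE(type) (3 * sizeof(type) + 2)

    char buf[INT_STR_SIZE(long)];
    sprintf(buf, "%ld", (long)time_stamp_for_file_name);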

0

Source: https://habr.com/ru/post/1310099/

