The `*` argument before the `.` is the field width, and the `*` argument after the `.` is the precision.
The field width is the minimum number of bytes the conversion will produce; if fewer bytes result, the output is padded (by default with spaces on the left, but zero padding and padding on the right are flag-controlled alternatives). A negative `*` argument for the width is interpreted as the corresponding positive width together with the `-` flag, which moves the padding to the right (i.e., left-justifies the field).
Precision, on the other hand, has a meaning that varies with the conversion being performed. A negative precision is treated as if no precision had been specified at all. For integers, it is the minimum number of digits (not the overall output width); if fewer digits would be produced, zeros are added on the left. An explicit precision of 0 means that no digits are produced when the value is 0 (instead of a single 0). For strings, precision limits the number of bytes written, truncating the string if necessary (and permitting a longer, non-null-terminated input array). For floating-point conversions, precision controls the number of digits printed, either after the decimal point (for %f) or as total significant digits (for other formats).
In your examples:
printf("%*.*d\n", -6, 7, 20000);
Here the field is left-justified (padding on the right) with a minimum width of 6, but the field comes out wider than that, so the width has no effect. The precision of 7 means the integer is printed with at least 7 digits, so you get 0020000 as the contents of the converted field, which already exceeds the width.
In the other:
printf("%*.*d\n", 5, -6, 2000);
Here the field width is 5 with default (right) justification, so the padding is spaces on the left. The negative precision is ignored as if it had not been specified, so the contents of the converted field are 2000, only 4 bytes, which is padded out to the 5-byte width with one leading space.