Print Functions
For the print functions, literal text or white space in a format string generates characters that match the characters in the format string. A print conversion specification typically generates characters by converting the next argument value to a corresponding text sequence. A print conversion specification has the format:

% flags field-width . precision conversion-specifier
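As a quick illustration, the sketch below (with arbitrary values) shows literal text copied through unchanged while each conversion specification converts its matching argument:

```c
#include <stdio.h>

int main(void)
{
    /* "item ", ": ", and the newline are literal text, generated as-is;
       %d and %s are conversion specifications that consume the next
       arguments in order. */
    printf("item %d: %s\n", 42, "widget");   /* prints "item 42: widget" */
    return 0;
}
```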

Following the percent character (%) in the format string, you can write zero or more format flags:
- - — to left-justify a conversion
- + — to generate a plus sign for signed values that are not negative
- space — to generate a space for signed values that have neither a plus nor a minus sign
- # — to prefix 0 on an o conversion, to prefix 0x on an x conversion, to prefix 0X on an X conversion, or to generate a decimal point and fraction digits that are otherwise suppressed on a floating-point conversion
- 0 — to pad a conversion with leading zeros after any sign or prefix, in the absence of a minus (-) format flag or a specified precision
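A brief sketch of how these flags alter the generated text, using arbitrary values:

```c
#include <stdio.h>

int main(void)
{
    printf("[%-6d]\n", 42);           /* - : left-justified in the field -> "[42    ]" */
    printf("[%+d]\n", 42);            /* + : plus sign on a positive value -> "[+42]" */
    printf("[% d]\n", 42);            /* space : space where a sign would go -> "[ 42]" */
    printf("[%#o] [%#x]\n", 8, 255);  /* # : 0 and 0x prefixes -> "[010] [0xff]" */
    printf("[%#.0f]\n", 3.0);         /* # : decimal point kept with no fraction digits -> "[3.]" */
    printf("[%06d]\n", 42);           /* 0 : pad with leading zeros -> "[000042]" */
    return 0;
}
```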
Following any format flags, you can write a field width that specifies the minimum number of characters to generate for the conversion. Unless altered by a format flag, the default behavior is to pad a short conversion on the left with space characters. If you write an asterisk (*) instead of a decimal number for a field width, then a print function takes the value of the next argument (which must be of type int) as the field width. If the argument value is negative, it supplies a - format flag and its magnitude is the field width.
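A short sketch of field widths, including the asterisk form that takes the width from an int argument (values here are arbitrary):

```c
#include <stdio.h>

int main(void)
{
    printf("[%6d]\n", 42);      /* minimum width 6, padded on the left -> "[    42]" */
    printf("[%-6d]\n", 42);     /* - flag: padded on the right instead -> "[42    ]" */
    printf("[%*d]\n", 6, 42);   /* * : width taken from the next int argument -> "[    42]" */
    printf("[%*d]\n", -6, 42);  /* negative width acts as a - flag with width 6 -> "[42    ]" */
    return 0;
}
```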
Following any field width, you can write a dot (.) followed by a precision that specifies one of the following:
- the minimum number of digits to generate on an integer conversion
- the number of fraction digits to generate on an e, E, or f conversion
- the maximum number of significant digits to generate on a g or G conversion
- the maximum number of characters to generate from a C string on an s conversion
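Each of these precision meanings is shown in the sketch below, again with arbitrary values:

```c
#include <stdio.h>

int main(void)
{
    printf("[%.5d]\n", 42);        /* integer conversion: at least 5 digits -> "[00042]" */
    printf("[%.2f]\n", 3.14159);   /* f: 2 fraction digits -> "[3.14]" */
    printf("[%.3e]\n", 31415.9);   /* e: 3 fraction digits -> "[3.142e+04]" */
    printf("[%.3g]\n", 3.14159);   /* g: at most 3 significant digits -> "[3.14]" */
    printf("[%.3s]\n", "abcdef");  /* s: at most 3 characters from the string -> "[abc]" */
    return 0;
}
```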
If you write an * instead of a decimal number for a precision, a print function takes the value of the next argument (which must be of type int) as the precision. If the argument value is negative, the default precision applies. If you do not write either an * or a decimal number following the dot, the precision is zero.
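A final sketch of the asterisk precision, again with arbitrary values:

```c
#include <stdio.h>

int main(void)
{
    printf("[%.*f]\n", 2, 3.14159);   /* precision taken from the next int argument -> "[3.14]" */
    printf("[%.*f]\n", -2, 3.14159);  /* negative precision: the default (6 for f) applies -> "[3.141590]" */
    printf("[%.f]\n", 3.14159);       /* dot with no number: precision is zero -> "[3]" */
    return 0;
}
```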