# 2.2.5.5.1.6 Decimals and Numerics

Decimal or Numeric is defined as **decimal**(*p*, *s*)
or **numeric**(*p*, *s*), where *p* is the precision and *s*
is the scale. The value is represented in the following sequence:

One 1-byte unsigned integer that represents the sign of the decimal value as follows:

- 0 means negative.
- 1 means nonnegative.

One 4-, 8-, 12-, or 16-byte signed integer that represents the decimal value multiplied by 10^*s*. The maximum size of this integer is determined based on *p* as follows:

- 4 bytes if 1 <= *p* <= 9.
- 8 bytes if 10 <= *p* <= 19.
- 12 bytes if 20 <= *p* <= 28.
- 16 bytes if 29 <= *p* <= 38.

The actual value of this integer may require fewer bytes than the maximum size, depending on the value. Even so, the serialized integer part MUST be exactly 4, 8, 12, or 16 bytes.
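The layout above (one sign byte followed by a fixed-width scaled integer) can be sketched in Python. This is an illustrative sketch, not part of the specification: the function names are invented here, and the little-endian byte order is an assumption based on how TDS encodes multi-byte integers on the wire.

```python
from decimal import Decimal

def encode_tds_decimal(value: Decimal, p: int, s: int) -> bytes:
    """Sketch: encode a Decimal as sign byte + fixed-width scaled integer."""
    # Integer width (4, 8, 12, or 16 bytes) determined by the precision p.
    if 1 <= p <= 9:
        width = 4
    elif 10 <= p <= 19:
        width = 8
    elif 20 <= p <= 28:
        width = 12
    elif 29 <= p <= 38:
        width = 16
    else:
        raise ValueError("precision out of range")
    sign = 0 if value < 0 else 1            # 0 = negative, 1 = nonnegative
    scaled = int(abs(value).scaleb(s))      # |value| * 10^s as an integer
    # Assumption: the integer is serialized little-endian.
    return bytes([sign]) + scaled.to_bytes(width, "little")

def decode_tds_decimal(data: bytes, s: int) -> Decimal:
    """Sketch: invert the encoding above for a given scale s."""
    sign = 1 if data[0] == 1 else -1
    magnitude = int.from_bytes(data[1:], "little")
    return Decimal(sign * magnitude).scaleb(-s)
```

For example, a **decimal**(9, 2) value of 12.34 would be stored as the sign byte 1 followed by the 4-byte integer 1234 (12.34 × 10^2).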