NUMBER Type Syntax and Usage
The syntax for the NUMBER type is straightforward:
NUMBER( p,s )
P and S are optional and are used to specify the following:
• Precision, or the total number of digits: By default, the precision is 38, and valid values are in the range of 1 to 38. The character * may be used to represent 38 as well.
• Scale, or the number of digits to the right of the decimal point: Valid values for the scale are –84 to 127, and its default value depends on whether or not the precision is specified. If no precision is specified, the scale defaults to the maximum range. If a precision is specified, the scale defaults to 0 (no digits to the right of the decimal point). So, for example, a column defined as NUMBER stores floating-point numbers (with decimal places), whereas NUMBER(38) stores only integer data (no decimals), since the scale defaults to 0 in the second case.
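The effect of the defaulted scale can be sketched in Python using the decimal module (a hypothetical illustration, not Oracle code; ROUND_HALF_UP mirrors Oracle's round-half-away-from-zero behavior):

```python
from decimal import Decimal, ROUND_HALF_UP

# A bare NUMBER keeps the fractional digits; NUMBER(38) implies scale 0,
# so the fractional part is rounded away before the value is stored.
x = Decimal('123.456')

as_number = x                                                    # NUMBER: stored as is
as_number_38 = x.quantize(Decimal('1'), rounding=ROUND_HALF_UP)  # NUMBER(38): scale 0

print(as_number)     # 123.456
print(as_number_38)  # 123
```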
You should consider the precision and scale to be edits for your data—data integrity tools in a way. The precision and scale do not affect at all how the data is stored on disk, only what values are permitted and how numbers are to be rounded. For example, if a value exceeds the precision permitted, Oracle returns an error:
SQL> create table t ( num_col number(5,0) );
Table created.

SQL> insert into t (num_col) values ( 12345 );
1 row created.

SQL> insert into t (num_col) values ( 123456 );
insert into t (num_col) values ( 123456 )
                                *
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column
So, you can use the precision to enforce some data integrity constraints. In this case, NUM_COL is a column that is not allowed to have more than five digits.
The scale, on the other hand, is used to control rounding of the number. For example:
SQL> create table t ( msg varchar2(10), num_col number(5,2) );
Table created.

SQL> insert into t (msg,num_col) values ( '123.45', 123.45 );
1 row created.

SQL> insert into t (msg,num_col) values ( '123.456', 123.456 );
1 row created.
Notice how the number 123.456, with more than five digits, succeeded this time. That is because the scale we used in this example was used to round 123.456 to two digits, resulting in 123.46, and then 123.46 was validated against the precision, found to fit, and inserted. However, if we attempt the following insert, it fails because the number 1234.00 has more than five digits in total:
SQL> insert into t (msg,num_col) values ( '1234', 1234 );
insert into t (msg,num_col) values ( '1234', 1234 )
                                             *
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column
When you specify a scale of 2, at most three digits may be to the left of the decimal point and two to the right. Hence, that number does not fit. The NUMBER(5,2) column can hold all values between –999.99 and 999.99.
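This round-then-check behavior can be sketched in a few lines of Python (a hypothetical helper, not Oracle's actual code; Decimal's ROUND_HALF_UP matches Oracle's round-half-away-from-zero rule):

```python
from decimal import Decimal, ROUND_HALF_UP

def number_edit(value, p, s):
    """Mimic the NUMBER(p,s) edit: round value to scale s, then verify
    that at most p significant digits remain. Returns the value Oracle
    would store, or None if it exceeds the precision. (A sketch only.)"""
    q = Decimal(str(value)).quantize(Decimal(1).scaleb(-s),
                                     rounding=ROUND_HALF_UP)
    if len(q.as_tuple().digits) > p:
        return None                  # Oracle would raise ORA-01438 here
    return q

print(number_edit(123.456, 5, 2))   # 123.46  (rounded first, then it fits)
print(number_edit(1234,    5, 2))   # None    (1234.00 needs six digits)
```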
It may seem strange to allow the scale to vary from –84 to 127. What purpose could a negative scale fulfill? It allows you to round values to the left of the decimal place. Just as the NUMBER(5,2) rounded values to the nearest .01, so a NUMBER(5,-2) would round to the nearest 100, for example:
SQL> create table t ( msg varchar2(10), num_col number(5,-2) );
Table created.

SQL> insert into t (msg,num_col) values ( '123.45', 123.45 );
1 row created.

SQL> insert into t (msg,num_col) values ( '123.456', 123.456 );
1 row created.
SQL> select * from t;

MSG           NUM_COL
---------- ----------
123.45            100
123.456           100
The numbers were rounded to the nearest 100. We still have five digits of precision, but there are now seven digits (including the trailing two 0s) permitted to the left of the decimal point:
SQL> insert into t (msg,num_col) values ( '1234567', 1234567 );
1 row created.
SQL> select * from t;

MSG           NUM_COL
---------- ----------
123.45            100
123.456           100
1234567       1234600
So, the precision dictates how many digits are permitted in the number after rounding, using the scale to determine how to round. The precision is an integrity constraint, whereas the scale is an edit.
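Python's built-in round() also accepts a negative digit count, which makes it handy for demonstrating the effect of a negative scale (with the caveat that Python rounds exact ties to the even neighbor, whereas Oracle rounds ties away from zero; none of the values below sit on a tie):

```python
# Rounding to the nearest 100, as a NUMBER(5,-2) column would:
print(round(123.45,  -2))   # 100.0
print(round(123.456, -2))   # 100.0
print(round(1234567, -2))   # 1234600
```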
It is interesting and useful to note that the NUMBER type is, in fact, a variable-length datatype on disk and will consume between 0 and 22 bytes of storage. Many times, programmers consider a numeric datatype to be a fixed-length type—that is what they typically see when programming with 2- or 4-byte integers and 4- or 8-byte floats. The Oracle NUMBER type is similar to a variable-length character string. We can see what happens with numbers that contain differing amounts of significant digits.
We'll create a table with two NUMBER columns and populate the first column with numbers that have 2, 4, 6, … 28 significant digits. Then, we'll simply add 1 to each of them (one way to generate such data is shown here):

SQL> create table t ( x number, y number );
Table created.

SQL> insert into t ( x )
  2  select to_number( rpad( '9', rownum*2, '9' ) )
  3    from dual connect by level <= 14;
14 rows created.

SQL> update t set y = x+1;
14 rows updated.
Now, if we use the built-in VSIZE function that shows how much storage the column takes, we can review the size differences between the two numbers in each row:
SQL> set numformat 99999999999999999999999999999
SQL> column v1 format 99
SQL> column v2 format 99
SQL> select x, y, vsize(x) v1, vsize(y) v2
  2    from t order by x;
We can see that as we added significant digits to X, the amount of storage required grew: every two significant digits added another byte of storage. But a number just one larger (Y, a single significant digit followed by zeros) consistently took 2 bytes. When Oracle stores a number, it stores as little as it can to represent that number: the significant digits, an exponent used to place the decimal point, and information about the sign of the number (positive or negative). So, the more significant digits a number contains, the more storage it consumes.
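The documented internal format (one exponent byte, one byte per pair of significant decimal digits in base 100, and a trailing terminator byte for negative numbers) can be turned into a rough VSIZE estimator. The sketch below handles whole numbers only and approximates the format; it is not Oracle's code:

```python
def estimated_vsize(n):
    """Estimate VSIZE for an integer stored in an Oracle NUMBER column.
    Oracle keeps one exponent byte plus one byte per pair of significant
    decimal digits (base 100), dropping trailing zero pairs; negative
    numbers carry one extra terminator byte. A sketch for whole numbers
    only, based on the documented internal format."""
    if n == 0:
        return 1                      # zero is stored as a single byte
    digits = str(abs(n))
    if len(digits) % 2:               # left-pad to an even length so the
        digits = '0' + digits         # pairs align on base-100 boundaries
    pairs = [digits[i:i + 2] for i in range(0, len(digits), 2)]
    while pairs and pairs[-1] == '00':
        pairs.pop()                   # trailing zero pairs are not stored
    size = 1 + len(pairs)             # exponent byte + mantissa bytes
    return size + 1 if n < 0 else size

print(estimated_vsize(9999))    # 3   (two base-100 digits + exponent)
print(estimated_vsize(10000))   # 2   (one base-100 digit + exponent)
```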
That last fact explains why it is useful to know that numbers are stored in varying width fields. When attempting to size a table (e.g., to figure out how much storage 1,000,000 rows would need in a table), you have to consider the NUMBER fields carefully. Will your numbers take 2 bytes or 20 bytes? What is the average size? This makes accurately sizing a table without representative test data very hard. You can get the worst-case size and the best-case size, but the real size will likely be some value in between.
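For a back-of-the-envelope bound, multiplying out the extremes (hypothetical figures: roughly 2 bytes for a small value, 22 bytes at the maximum) shows how wide the range can be for a single NUMBER column over 1,000,000 rows:

```python
rows = 1_000_000
best_case  = rows * 2      # small values: about 2 bytes each
worst_case = rows * 22     # maximum NUMBER storage: 22 bytes each

print(best_case, worst_case)   # 2000000 22000000
```

The real footprint falls somewhere between these bounds, which is why representative sample data beats either extreme when sizing.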