Dates, Timestamps, and Interval Types
The native Oracle datatypes of DATE, TIMESTAMP, and INTERVAL are closely related. The DATE and TIMESTAMP types store fixed date/time values with varying degrees of precision. The INTERVAL type makes it easy to store an amount of time, such as “8 hours” or “30 days.” The result of subtracting two timestamps is an interval, and the result of adding an interval of eight hours to a TIMESTAMP is a new TIMESTAMP that is eight hours later.
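For instance (a quick illustration of my own that you can run in SQL*Plus; the timestamp literals are arbitrary):

Select systimestamp + interval '8' hour from dual;
Select timestamp '2010-01-02 08:00:00' - timestamp '2010-01-01 00:00:00' from dual;

The first query returns a timestamp eight hours in the future; the second returns an INTERVAL DAY TO SECOND value representing “one day and eight hours.”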
The DATE datatype has been part of Oracle for many releases—as far back as my experience with Oracle goes, which means at least back to version 5 and probably before. The TIMESTAMP and INTERVAL types are relative newcomers to the scene by comparison. For this simple reason, you will find the DATE datatype to be the most prevalent type for storing date/time information. But many new applications are using the TIMESTAMP type for two reasons: it has support for fractions of seconds (the DATE type does not), and it has support for time zones (something the DATE type also does not have).
We’ll take a look at each type after discussing DATE/TIMESTAMP formats and their uses.
Formats
I am not going to attempt to cover all of the DATE, TIMESTAMP, and INTERVAL formats here. This is well covered in the Oracle Database SQL Language Reference manual, which is freely available to all. A wealth of formats is available to you, and a good understanding of what they are is vital. I strongly recommend that you investigate them.
I’d like to discuss what the formats do here, as there are a great many misconceptions surrounding this topic. The formats are used for two things:
•	To format the data on the way out of the database in a style that pleases you
•	To tell the database how to convert an input string into a DATE, TIMESTAMP, or INTERVAL
And that is all. The common misconception I’ve observed over the years is that the format used somehow affects what is stored on disk and how the data is actually saved. The format has no effect at all on how the data is stored. The format is only used to convert the single binary format used to store a DATE into a string or to convert a string into the single binary format that is used to store a DATE. The same is true for TIMESTAMPs and INTERVALs.
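You can see this for yourself by presenting the same stored value in radically different styles without changing a single byte on disk (the masks here are arbitrary choices of mine):

Select to_char( sysdate, 'DD-MON-YYYY' ) from dual;
Select to_char( sysdate, 'YYYY/MM/DD HH24:MI:SS' ) from dual;

Both queries read the identical internal binary DATE value; only the string representation differs.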
My advice on formats is simply this: use them. Use them when you send a string to the database that represents a DATE, TIMESTAMP, or INTERVAL. Do not rely on default date formats—defaults can and probably will at some point in the future be changed by someone.
Note Refer back to Chapter 1 for a really good security reason to never use TO_CHAR/TO_DATE without an explicit format. In that chapter, I described a SQL injection attack that was available to an end user simply because the developer forgot to use an explicit format. Additionally, performing date operations without using an explicit date format can and will lead to incorrect answers. In order to appreciate this, just tell me what date this string represents: ‘01-02-03’. Whatever you say it represents, I’ll tell you that you are wrong. Never rely on defaults!
If you rely on a default date format and it changes, your application may be negatively affected. It might raise an error back to the end user if the date cannot be converted, expose a serious security flaw, or, just as bad, silently insert the wrong data. Consider the following INSERT statement, which relies on a default date mask:
Insert into t ( date_column ) values ( '01/02/03' );
Suppose the application was relying on a default date mask of DD/MM/YY to be in place. That would be February 1, 2003 (assuming that code was executed after the year 2000, but we’ll visit the implications of that in a moment). Now, say someone decides the correct date format should be MM/DD/YY. All of a sudden, that previous date changes to January 2, 2003. Or someone decides YY/MM/DD is right, and now you have February 3, 2001. In short, without a date format to accompany that date string, there are many ways to interpret it. That INSERT statement should be
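You can watch this happen in your own session simply by changing the default mask yourself (this is just a demonstration; the TO_CHAR on the way out uses an explicit mask so the result is unambiguous):

Alter session set nls_date_format = 'DD/MM/YY';
Select to_char( to_date( '01/02/03' ), 'FMMonth DD, YYYY' ) from dual;  -- February 1, 2003
Alter session set nls_date_format = 'MM/DD/YY';
Select to_char( to_date( '01/02/03' ), 'FMMonth DD, YYYY' ) from dual;  -- January 2, 2003
Alter session set nls_date_format = 'YY/MM/DD';
Select to_char( to_date( '01/02/03' ), 'FMMonth DD, YYYY' ) from dual;  -- February 3, 2001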
Insert into t ( date_column ) values ( to_date( '01/02/03', 'DD/MM/YY' ) );

And if you want my opinion, it has to be

Insert into t ( date_column ) values ( to_date( '01/02/2003', 'DD/MM/YYYY' ) );
That is, it must use a four-character year. Several years ago, our industry learned the hard way how much time and effort was spent remedying software that attempted to “save” 2 bytes. We seem to have lost that lesson over time.
There is no excuse nowadays not to use a four-character year! Just because the year 2000 has come and gone does not mean you can now use a two-character year. Think about birth dates, for example. If you enter a birth date using

Insert into t ( DOB ) values ( to_date( '01/02/10', 'DD/MM/YY' ) );

is that Feb 1, 2010, or Feb 1, 1910? Either one is a valid value; you cannot just pick one to be correct.
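Run the conversion yourself and you will see the database quietly pick a century for you (the output shown assumes the query runs in the twenty-first century):

Select to_char( to_date( '01/02/10', 'DD/MM/YY' ), 'DD-Mon-YYYY' ) from dual;  -- 01-Feb-2010

A 1910 birth date is simply lost with a two-digit mask; only a four-digit year removes the ambiguity.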
This same discussion applies to data leaving the database. If you execute SELECT DATE_COLUMN FROM T and fetch that column into a string in your application, then you should apply an explicit date format to it. Whatever format your application is expecting should be explicitly applied there. Otherwise, at some point in the future when someone changes the default date format, your application may break or behave incorrectly.
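For example, rather than fetching DATE_COLUMN and hoping the default mask matches what the application expects, spell it out (the mask here is just one possibility):

Select to_char( date_column, 'DD-Mon-YYYY HH24:MI:SS' ) date_string from t;

Now the string format is under your control, not under the control of whoever changes NLS_DATE_FORMAT next.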
Next, let’s look at the datatypes themselves in more detail.