NTP Timescale and Leap Seconds
from Alice's Adventures in Wonderland, Lewis Carroll
The Mad Hatter and the March Hare are discussing whether the Teapot serial number should have two or four digits.
In the year 2001 the Network Time Protocol (NTP) has been in use for over two decades and remains the longest running, continuously operating application protocol in the Internet. There was some concern, especially in government and financial institutions, that NTP might cause Internet applications to misbehave in terrible ways on the epoch of the new century, but this didn't happen. However, how NTP reckons the time is important when considering the relationship between NTP time and conventional civil time.
This document presents an analysis of the NTP timescale, in particular the metrication relative to the conventional civil timescale and when the NTP timescale rolls over in 2036. These issues are also important with respect to the Unix timescale, but that rollover will not happen until 2038. This document does not establish a standard, nor does it present specific algorithms which metricate the NTP timescale with respect to other timescales.
The NTP Timescale
It will be helpful in understanding the issues raised in this document to consider the concept of a universal timescale. The conventional civil timescale used in most parts of the world is based on Coordinated Universal Time (UTC) (sic), formerly known as Greenwich Mean Time (GMT). UTC is based on International Atomic Time (TAI sic), which is derived from hundreds of cesium clocks in the national standards laboratories of many countries. Deviations of UTC from TAI are implemented in the form of leap seconds, which occur on average every eighteen months.
For almost every computer application today, UTC represents the universal timescale extending into the indefinite past and indefinite future. We know of course that the UTC timescale did not exist prior to 1972, the Gregorian calendar did not exist prior to 1582, the Julian calendar did not exist prior to 54 BC and we cannot predict exactly when the next leap second will occur. Nevertheless, most folks would prefer that, even if we can't get future seconds numbering right beyond the next leap second, at least we can get the days numbering right until the end of reason.
The universal timescale can be implemented using a binary counter of indefinite width and with the unit seconds bit placed somewhere in the middle. The counter is synchronized to UTC such that it runs at the same rate (also the rate of TAI) and the units increment coincides with the UTC seconds tick. The NTP timescale is constructed from 64 bits of this counter, of which 32 bits number the seconds and 32 bits represent the fraction. With this design, the counter runs in 136-year cycles, called eras, the latest of which began with a counter value of zero at 0h 1 January 1900. The next era will begin when the seconds counter rolls over sometime in 2036. The design assumption is that further low order bits, if required, are provided by local interpolation, while further high order bits, when required, are provided by external means.
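The era arithmetic can be sketched in a few lines of Python (an illustration of the description above; the function name is ours, not part of any NTP implementation):

```python
# Split an indefinite-width seconds counter into a 32-bit NTP era
# number and the 32-bit seconds value within that era (names are
# illustrative, not from the NTP reference implementation).

NTP_ERA_SECONDS = 2**32  # one 136-year era

def ntp_era(seconds_since_1900):
    """Return (era, seconds within era) for a count of seconds
    measured from 0h 1 January 1900."""
    return divmod(seconds_since_1900, NTP_ERA_SECONDS)

# Era 1 begins when the 32-bit seconds counter rolls over in 2036:
print(ntp_era(2**32))           # (1, 0)
print(ntp_era(3_155_673_600))   # (0, 3155673600) -- 1 January 2000
```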
The important point to be made here is that the high order bits must ultimately be provided by astronomers and disseminated to the population by international means. Ultimately, should a need exist to align a particular NTP era to the current calendar, the operating system in which NTP is embedded must provide the necessary high order bits, most conveniently from the file system or flash memory.
With respect to the recent year 2000 issue, the most important thing to observe about the NTP timescale is that it knows nothing about days, years or centuries, only the seconds since the beginning of the current era which began on 1 January 1900. On 1 January 1970 when Unix life began, the NTP timescale showed 2,208,988,800 and on 1 January 1972 when UTC life began, it showed 2,272,060,800. On the last second of the year 1999, the NTP timescale showed 3,155,673,599 and one second later on the first second of the next century showed 3,155,673,600. Other than this observation, the NTP timescale has no knowledge of or provision for any of these eclectic seconds.
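The seconds counts quoted above can be checked with Python's datetime module, which, like NTP, reckons in days of exactly 86,400 seconds (a sketch; the helper name is ours):

```python
from datetime import datetime, timezone

NTP_EPOCH = datetime(1900, 1, 1, tzinfo=timezone.utc)

def ntp_seconds(dt):
    """Seconds since 0h 1 January 1900, ignoring leap seconds,
    as the NTP timescale itself does."""
    return int((dt - NTP_EPOCH).total_seconds())

print(ntp_seconds(datetime(1970, 1, 1, tzinfo=timezone.utc)))   # 2208988800
print(ntp_seconds(datetime(1972, 1, 1, tzinfo=timezone.utc)))   # 2272060800
print(ntp_seconds(datetime(2000, 1, 1, tzinfo=timezone.utc)))   # 3155673600
```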
Conversion to Other Timescales
The NTP timescale is almost never used directly by system or application programs. The generic Unix kernel keeps time in seconds and microseconds (or nanoseconds) to provide both time of day and interval timer functions. In order to synchronize the Unix clock, NTP must convert to and from NTP representation and Unix representation. Unix kernels implement the time of day function using two 32-bit counters, one representing the signed seconds since Unix life began and the other the microseconds or nanoseconds of the second. In principle, the seconds counter will change sign in 2038. How the particular Unix semantics interprets the counter values is of concern, but is beyond the scope of discussion here.
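Because both timescales count days of exactly 86,400 seconds, conversion between NTP and Unix second counts is a fixed offset of 2,208,988,800 seconds (a minimal sketch; fractional parts and 32-bit overflow are ignored):

```python
# NTP era-0 seconds count at the Unix epoch, 0h 1 January 1970.
NTP_UNIX_OFFSET = 2_208_988_800

def ntp_to_unix(ntp_seconds):
    """Convert an NTP era-0 seconds count to Unix seconds."""
    return ntp_seconds - NTP_UNIX_OFFSET

def unix_to_ntp(unix_seconds):
    """Convert Unix seconds to an NTP era-0 seconds count."""
    return unix_seconds + NTP_UNIX_OFFSET

print(ntp_to_unix(2_272_060_800))  # 63072000 -> 0h 1 January 1972
```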
While incorrect NTP time values are unlikely in a properly configured subnet using strong cryptography, redundant sources and diverse network paths, hazards remain due to incorrect software external to NTP. These include the Unix kernel and library routines which convert NTP time to and from Unix time and to and from conventional civil time in seconds, minutes, hours, days and years. Although NTP uses these routines to format monitoring data displays, they are not used to read or set the NTP clock. They may in fact cause problems with certain application programs, but this is not an issue which concerns NTP correctness.
It is possible that some external source to which NTP synchronizes may produce a discontinuity which could then induce an NTP discontinuity. The NTP primary (stratum 1) time servers, which are the ultimate time references for the entire NTP population, obtain time from various sources, including radio and satellite receivers and telephone modems. Not all sources provide year information and not all of these provide time in four-digit form. In point of fact, the NTP reference implementation does not use the year information, even if available. Instead, the year information is provided from the file system, which itself depends on the Unix clock.
Most computers include a time-of-year (TOY) clock chip which maintains the time when the power is off. When the operating system is booted, the system clock is set from the chip. As the chip does not record the year, this value is determined from the datestamp on a system configuration file. For this to be correct, the file stamp must be updated at least once each year. The NTP protocol specification requires the apparent NTP time derived from external servers to be compared to the system time before the clock is set. If the discrepancy is over 1000 seconds, an error alarm is raised requiring manual intervention. This makes it very unlikely that even a clique of seriously corrupted NTP servers will result in grossly incorrect time values. When the system clock is synchronized to NTP, the TOY chip is corrected to system time on a regular basis.
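The 1000-second sanity check can be sketched as follows (the threshold comes from the text; the function and exception are illustrative names, not the reference implementation's interface):

```python
PANIC_THRESHOLD = 1000  # seconds; beyond this, manual intervention

def check_offset(server_time, system_time):
    """Compare apparent server time against system time and raise
    an alarm if the discrepancy exceeds the panic threshold
    (a sketch of the check described above)."""
    offset = server_time - system_time
    if abs(offset) > PANIC_THRESHOLD:
        raise RuntimeError(
            f"clock offset {offset:+.0f} s exceeds panic threshold; "
            "manual intervention required")
    return offset
```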
Timescale Resolution and the Tick Interval
Modern computer clocks use a hardware counter to generate processor interrupts at tick intervals in the order of a few milliseconds. At each tick the processor increments the software system clock by the number of microseconds or nanoseconds in the tick. The software resolution of the system clock is defined as the tick interval. Most modern processors implement some kind of high resolution hardware counter that can be used to interpolate the interval between the most recent tick and the actual clock reading. The hardware resolution of the system clock is defined as the time between increments of this counter. However, the actual reading latency due to the kernel interface and interpolation code can range from a few tens of microseconds in older processors to under a microsecond in modern processors.
System clock correctness principles require that clock readings must always be monotonically increasing, so that no two clock readings will be the same. As long as the reading latency exceeds the hardware resolution, this behavior is guaranteed. With reading latencies dropping below a microsecond in modern processors, the system clock in modern operating systems runs in nanoseconds, rather than the microseconds used in the original Unix kernel. With processor speeds exceeding 1 GHz, this assumption may be in jeopardy.
The International Earth Rotation Service (IERS) uses astronomical observations provided by USNO and other observatories to determine UTC, which is syntonic (identical frequency) with TAI but offset by an integral number of seconds. Starting from apparent mean solar time as observed, the UT0 timescale is determined using corrections for Earth orbit and inclination (the Equation of Time, as used by sundials), the UT1 (navigator's) timescale by adding corrections for polar migration and the UT2 timescale by adding corrections for known periodicity variations. UTC is based on UT1, which is presently slowing relative to TAI by a fraction of a second per year. Since the UTC timescale runs at the TAI rate, when the magnitude of the UT1 correction approaches 0.5 second, a leap second is inserted or deleted in the UTC timescale on the last day of June or December.
For the most precise coordination and timestamping of events since 1972, it is necessary to know when leap seconds are implemented in UTC and how the seconds are numbered. The insertion of leap seconds into UTC is currently the responsibility of the IERS, which is located at the Paris Observatory. As specified in CCIR Report 517, a leap second is inserted following second 23:59:59 on the last day of June or December and becomes second 23:59:60 of that day. A leap second would be deleted by omitting second 23:59:59 on one of these days, although this has never happened. A table of historic leap seconds and the NTP time when each occurred is available via FTP from any NIST NTP server.
The UTC timescale thus ticks in standard (atomic) seconds and was set to an initial offset of 10 seconds relative to TAI at 0h MJD 41,317.0 according to the Julian calendar or 0h on 1 January 1972 according to the Gregorian calendar. This established the first tick of the UTC era and its reckoning with these calendars. Subsequently, the UTC timescale has marched backward relative to the TAI timescale exactly one second on scheduled occasions recorded in the institutional memory of our civilization. Note in passing that leap second adjustments affect the number of seconds per day and thus the number of seconds per year. Apparently, should we choose to worry about it, the UTC clock, Gregorian calendar and various cosmic oscillators will inexorably drift apart with time until rationalized by some future papal bull.
Reckoning with NTP and UTC Leap Seconds
The NTP timescale is based on the UTC timescale, but not necessarily always coincident with it. At the first tick of the UTC era, which began at 0h on 1 January 1972 (MJD 41,317.0), the NTP clock read 2,272,060,800, representing the number of standard seconds since the beginning of the NTP era at 0h on 1 January 1900 (MJD 15,020.0) according to the Gregorian calendar. The insertion of leap seconds in UTC and subsequently into NTP does not affect the UTC or NTP oscillator frequency, only the conversion between NTP network time and UTC civil time. However, since the only institutional memory available to NTP are the UTC broadcast services, the NTP timescale is in effect reset to UTC as each broadcast timecode is received. Thus, when a leap second is inserted in UTC and subsequently in NTP, knowledge of all previous leap seconds is lost.
Another way to describe this is to say there are as many NTP timescales as historic leap seconds. In effect, a new timescale is established after each new leap second. Thus, all previous leap seconds, not to mention the apparent origin of the timescale itself, lurch forward one second as each new timescale is established. If a clock synchronized to NTP in early 2001 was used to establish the UTC epoch of an event that occurred in early 1972 without correction, the event would appear 22 seconds late. However, NTP primary time servers resolve the epoch using the broadcast timecode, so that the NTP clock is set to the broadcast value on the current timescale. As a result, for the most precise determination of epoch relative to the historic Gregorian calendar and UTC timescale, the user must subtract from the apparent NTP epoch the offsets derived from the NIST table. This is a feature of almost all present day time distribution mechanisms.
The obvious question raised by this scenario is what happens during the leap second when NTP time stops and the clock remains unchanged. If the precision time kernel modifications have been implemented, the kernel includes a state machine that implements the actions required by the scenario. At the exact instant of the leap, the logical clock is stepped backward one second. However, the routine that actually reads the clock is constrained never to step backwards, unless the step is significantly larger than one second, which might occur due to explicit operator direction.
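The constrained clock-reading rule can be sketched as follows (illustrative Python, not the precision-time kernel code): once the logical clock has been stepped back at the leap, each reading returns at least the previous reading plus one nanosecond, so time never appears to run backward.

```python
class MonotoneReader:
    """Sketch of a clock-read routine that never steps backward
    (illustrative; the real logic lives in the kernel, which also
    permits steps much larger than one second)."""

    def __init__(self):
        self.last_ns = 0  # most recent value handed to a caller

    def read(self, raw_ns):
        # raw_ns is the logical clock, which may have been stepped
        # back one second at the instant of the leap.
        if raw_ns <= self.last_ns:
            self.last_ns += 1       # advance one nanosecond per read
        else:
            self.last_ns = raw_ns   # actual time has caught up
        return self.last_ns

clock = MonotoneReader()
clock.read(5_000_000_000)  # normal reading: 5 s
clock.read(4_000_000_000)  # clock stepped back: returns 5 s + 1 ns
```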
In this design time stands still during the leap second, but is correct commencing with the next second. Since clock readings must be positive monotonic, the apparent time will increase by one nanosecond for each reading. At the end of the second the apparent time may be ahead of the actual time, depending on how many times the clock was read during the second. Eventually, the actual time will catch up with the apparent time and operation continues normally.

David L. Mills <[email protected]>
What is Unix Time?
UNIX time, also known as UNIX Epoch Time, is the number of seconds since January 1, 1970 UTC. It ignores leap seconds and treats all days as exactly 86,400 seconds in length.
The Year 2038 Problem
Historically, many computers stored UNIX timestamps in a signed 32-bit integer. While 32 bits seems like plenty of space, such a counter will eventually 'run out' of seconds, much as storing two-digit years led to the Y2K problem.
A signed 32-bit integer can represent a maximum value of 2,147,483,647 seconds.
And January 1, 1970 + 2,147,483,647 seconds = 03:14:07 AM UTC on January 19, 2038. (Negative values represent times before 1970.)
Beyond that instant, seconds since 1970 cannot be represented in a 32-bit signed integer. Representations that use 64-bit (or wider) integers can handle the foreseeable future with ease.
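The rollover instant can be verified directly (Python's datetime, like Unix time, ignores leap seconds):

```python
from datetime import datetime, timedelta, timezone

INT32_MAX = 2**31 - 1  # 2,147,483,647, the largest signed 32-bit value

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
rollover = epoch + timedelta(seconds=INT32_MAX)
print(rollover)  # 2038-01-19 03:14:07+00:00
```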
Using the Date to UNIX Time Calculator
To use the Date to UNIX epoch calculator, simply enter a date and time in the Date and Time to Convert field. Next, hit the Calculate Timestamp from Date button and we'll calculator how many seconds your input is after (or before) January 1, 1970.
We'll show an answer assuming your entry was UTC or Universal Time, as well as an answer for your local time zone.
Next, enjoy some other calculators and tools.