Unix time is the plumbing beneath almost every timestamp you will ever touch: database column "created_at", JWT expiry, log-line time, file modification date, HTTP header. Despite being ubiquitous, it has quirks that surprise developers every year — the seconds-vs-milliseconds split, timezone footguns, the looming 2038 rollover. This guide covers what Unix time actually is and how to work with it safely.

What it is

Unix time is the count of seconds since 00:00:00 UTC on 1 January 1970, called the Unix epoch. Every instant in time becomes a single integer — a number line anchored at 1970 that extends forward (positive) and, less usefully, backward (negative). Today, Unix time is around 1.78 billion and climbing.
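That integer is one function call away in most languages; a minimal Python sketch:

```python
import time

# Current Unix time: seconds elapsed since 1970-01-01 00:00:00 UTC.
# time.time() returns a float with sub-second precision; casting to int
# gives the classic integer timestamp.
now = int(time.time())
print(now)  # a 10-digit number in the current era, e.g. 1712345678
```

The same value comes back from `date +%s` in a shell or `SELECT extract(epoch from now())` in PostgreSQL: different tools, one number line.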

Why 1970

Unix was under development at Bell Labs when the team needed a zero-point for their new time functions. The obvious candidates — a year that felt historically significant, a round century — all brought baggage. The pragmatic choice was a recent round date: 1 January 1970, 00:00:00 UTC. Nothing in particular happened that day. Its value was that it was close enough to now to keep numbers small, and far enough in the past that existing files wouldn't need negative timestamps.

Seconds or milliseconds

The original Unix time is seconds. Every POSIX system — Linux, macOS, databases, Python's time.time() when cast to int — returns seconds. But JavaScript's Date object has counted milliseconds since the language appeared in 1995, and many APIs built in its orbit (Java's System.currentTimeMillis(), browser code, JSON-heavy services) follow. The result: two conventions live side by side. A 10-digit timestamp is almost always seconds; a 13-digit timestamp is milliseconds. When you build an integration, check which side of that line the other system sits on. Bugs from mixing the two are silent: read a seconds value as milliseconds and every date collapses to January 1970; read milliseconds as seconds and dates land tens of thousands of years in the future.
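The digit-count rule above can be turned into a defensive check. This is a heuristic, not a standard-library function — the name epoch_unit is invented here for illustration:

```python
def epoch_unit(ts: int) -> str:
    """Guess whether an epoch timestamp is seconds or milliseconds
    by digit count. Holds for present-era dates (roughly 2001-2286
    for seconds), which is where real-world timestamps live."""
    digits = len(str(abs(ts)))
    if digits <= 10:
        return "seconds"
    if digits == 13:
        return "milliseconds"
    return "unknown"

print(epoch_unit(1712345678))     # seconds
print(epoch_unit(1712345678000))  # milliseconds
```

Useful at integration boundaries where the other system's convention is undocumented; inside your own code, pick one unit and convert at the edges.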

Timezones and what Unix time really represents

Unix time is always UTC. It does not know about timezones. When a log says "created 1712345678", that is an exact point in global time — converting it to a human-readable string is where the timezone enters. This is actually a virtue: your servers, users, and databases can each render the same instant in their own local time without ever disagreeing on when.
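The "one instant, many renderings" property can be seen directly. A sketch using Python's datetime and zoneinfo (the zoneinfo module needs the system tzdata, which is present on most Linux and macOS installs):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

ts = 1712345678  # one exact instant in global time

# The timezone only enters when rendering the instant for humans.
utc = datetime.fromtimestamp(ts, tz=timezone.utc)
tokyo = datetime.fromtimestamp(ts, tz=ZoneInfo("Asia/Tokyo"))
new_york = datetime.fromtimestamp(ts, tz=ZoneInfo("America/New_York"))

print(utc.isoformat())       # 2024-04-05T19:34:38+00:00
print(tokyo.isoformat())     # 2024-04-06T04:34:38+09:00
print(new_york.isoformat())  # 2024-04-05T15:34:38-04:00

# Three different strings, one underlying instant.
assert utc == tokyo == new_york
```

Note that Tokyo is already on the next calendar day — which is why comparing timestamps as local-time strings goes wrong, and comparing them as Unix integers never does.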

The Y2038 problem

On 19 January 2038 at 03:14:07 UTC, a signed 32-bit integer holding Unix seconds overflows. The next second jumps to a negative number, interpreted as 13 December 1901. Any system still storing Unix time in int32 will see dates scramble. Embedded devices, old binary file formats, and long-lived C programs are the main risk — your web app is almost certainly fine, because every mainstream language and database moved to 64-bit long ago. A 64-bit Unix timestamp won't overflow for ~292 billion years.
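The wraparound can be simulated without waiting until 2038. Python integers don't overflow, so the sketch below reproduces two's-complement int32 arithmetic by hand:

```python
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1  # 2147483647, the last second a signed 32-bit field can hold

last_good = datetime.fromtimestamp(INT32_MAX, tz=timezone.utc)
print(last_good)  # 2038-01-19 03:14:07+00:00

# Simulate the overflow: one more second wraps to the most negative int32.
wrapped = (INT32_MAX + 1) - 2**32  # -2147483648
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))  # 1901-12-13 20:45:52+00:00
```

Any int32 field holding Unix seconds makes exactly this jump, whether it lives in a C struct, a binary file format, or an old database column.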

Why it's still dominant

Every other timestamp format needs a timezone, a calendar system, or a text parser to make sense. Unix time is one number: interpretable without context, sortable without parsing, compact in storage, cheap to compare. Its companions — ISO 8601 and RFC 3339 for display, monotonic clocks for elapsed-time measurement — cover what Unix time alone doesn't. But for "what instant in global time did this happen", the 1970 epoch won and is unlikely to be replaced.