The first time we ran an ORAN deployment with a default chrony setup, the radio side started dropping users at random intervals about 30 seconds after we activated the cell. The DU's logs were full of warnings about timing-source drift exceeding tolerance. The tolerance, we learned, was ±100 nanoseconds, and chrony syncing NTP over the public internet pool was giving us roughly ±2 milliseconds — four orders of magnitude wider.

That ±100 ns target is not a comfortable budget. It is the actual ORAN specification for fronthaul synchronisation, and it is justified — at sub-microsecond timing tolerances, the precoding for MIMO and the timing alignment for symmetric uplink/downlink transmission stop working in interesting ways. Miss it and the radio still transmits, but with steadily degrading performance until the DU eventually flags the source as unusable and stops sending.

The standard answer for getting under that target is PTP — Precision Time Protocol — over a dedicated timing network with hardware-assisted timestamping at every hop. The standard answer is correct. The interesting question is what the topology should look like.
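Hardware timestamping is a NIC capability, not just a daemon setting. As a quick sketch (not the authoritative check — `ethtool -T <iface>` gives the definitive per-interface answer), a Linux box that can do hardware-assisted PTP exposes a PTP hardware clock as a `/dev/ptpN` device node:

```python
import glob

# A NIC driver that supports hardware timestamping exposes a PTP hardware
# clock (PHC) as a /dev/ptpN character device. If none exist, the box
# cannot do hardware-assisted PTP no matter how the daemon is configured.
phcs = sorted(glob.glob("/dev/ptp*"))
print(phcs if phcs else "no PTP hardware clocks found")
```

This only tells you a PHC exists; whether a given port actually timestamps in hardware still has to be confirmed per interface.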


The architecture we settled on has a Stratum-1 GNSS-disciplined oscillator at each region (in our case three regions, each with its own redundant GNSS pair), distributing PTP to one or more Stratum-2 boundary clocks per site. Each ORAN site has its own boundary clock, drawing time from at least two upstream Stratum-1 sources for redundancy. The site's boundary clock then distributes PTP onward to the DU and the RUs. Per-site holdover is provided by an OCXO that can hold ±1 ppb for several minutes if the upstream PTP path is interrupted.
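As a concrete sketch of the per-site boundary clock, this shape maps naturally onto linuxptp's ptp4l running the ITU-T G.8275.1 telecom profile. The interface names below are placeholders, and option names may vary by linuxptp version — treat this as an illustration of the topology, not a drop-in config:

```ini
# /etc/ptp4l.conf — sketch of a per-site boundary clock (linuxptp, G.8275.1-style)
[global]
time_stamping        hardware
network_transport    L2
domainNumber         24
dataset_comparison   G.8275.x
G.8275.defaultDS.localPriority 128

# Listing multiple ports is what makes ptp4l behave as a boundary clock:
# it syncs to the best upstream source and re-times PTP out the other ports.
[uplink0]
[uplink1]
[fronthaul0]
```

The two uplink ports are what give you the "at least two upstream sources" redundancy; the fronthaul port carries re-timed PTP down to the DU and RUs.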

[Figure: Sync architecture · Stratum-1 GNSS to ORAN sites — redundant GNSS Stratum-1 sources feeding Stratum-2 PTP boundary clocks at three sites. Measured offsets: Site A ±48 ns, Site B ±62 ns, Site C ±55 ns.]
Stratum-2 boundary clocks let each site stay under the ±100 ns ORAN tolerance even with rare GNSS outages.

The OCXO holdover is the part that took us the longest to size correctly. Cheap holdover oscillators drift fast enough that even a 60-second GNSS outage will push you out of tolerance. The OCXO we ended up with is rated for ±1 ppb over 10 minutes — overkill on paper, but the field reality is that GNSS outages cluster (a passing thunderstorm, multipath in a temporary urban canyon during construction) and you want headroom you don't immediately need.
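The sizing arithmetic is simple enough to do on a napkin, so here it is as a back-of-envelope model. It assumes the oscillator sits at a constant worst-case frequency offset for the whole outage — real oscillators also age and respond to temperature, so this is a floor, not a full error budget:

```python
def holdover_error_ns(freq_offset_ppb: float, outage_s: float) -> float:
    """Worst-case time error accumulated during holdover, assuming a
    constant frequency offset. 1 ppb of frequency offset accumulates
    1 ns of time error per second of outage."""
    return freq_offset_ppb * outage_s

# At ±1 ppb, a 60-second outage costs at most 60 ns -- inside ±100 ns.
print(holdover_error_ns(1.0, 60))   # 60.0
# The same oscillator exhausts the entire ±100 ns budget at 100 seconds,
# which is why outage clustering matters more than the mean outage length.
print(holdover_error_ns(1.0, 100))  # 100.0
```

A 10 ppb oscillator, by the same arithmetic, burns the whole budget in 10 seconds — which is the "cheap holdover oscillators drift fast" failure mode in numbers.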

Three takeaways:

A public NTP pool will not get you under ORAN's ±100 ns target. Don't try. PTP with hardware timestamping is the only path.

Boundary clocks per site, not per region. Centralised timing distribution dies the moment any link in the path between region and site passes through equipment without hardware timestamping.

Spec your holdover for outages that cluster. The mean is fine; the tail is what hurts. Pick your OCXO accordingly.

Subscribe at edgesignal.example — see also our piece on running ORAN in three Beirut microcells, where timing was the cause of one of our most expensive bugs.