Nobody predicted 20 years ago that we would ever run into a fiber capacity crunch, if not quite a crisis, across the whole spectrum of applications from cable access networks through mobile backhaul right to inter-continental submarine cables. Major fiber transmission technology developers have been caught on the hop with solutions to the capacity shortage on the horizon, but not yet commercially available. Even when new technologies do emerge, many of them, including some of the spatial division multiplexing techniques from Nokia and others that we discussed last week, will be confined to greenfield deployments.
A major cost of fiber infrastructure is installation, especially for access networks, and so upgrading with new cable is often unfeasible. The idea of installing extra ducts and blowing fiber through later to increase capacity and take advantage of improvements in the physical media itself emerged almost 40 years ago in an attempt to resolve the labor cost problem. In fact, blown fiber was first developed in 1982 by UK telco BT while it was still government owned, and has been around ever since, but – while not quite a complete failure – it never gained much traction, for reasons including reliability concerns and a lack of agreed standards.
This has left roughly three categories of fiber network looking for other solutions to the capacity problem. Firstly, there are submarine cables, which by definition are greenfield deployments: each cable is a one-off project that can take advantage of the latest technologies, although existing cables cannot be upgraded. The second category is data center cabling, which is greenfield when new sites are being constructed, as they are on an ongoing basis by the likes of Google and Amazon, often in remote places and therefore highly reliant on high capacity fiber communications. The third category comprises terrestrial networks serving as regional trunks, broadband or mobile backhaul, and cable access networks linking the head end with the fiber node.
Blown fiber was designed particularly for this last category, but by far the most important innovation for fiber capacity over the last few decades has been Wavelength Division Multiplexing (WDM) in increasingly dense configurations. That too is hitting the buffers, and with video traffic increasing at ever faster rates, the pay TV industry, especially cable operators, has run out of patience and lost faith in the fiber industry’s ability to come up with a solution for them.
For this reason, CableLabs launched its Coherent Optics specification early in 2017, having become convinced of its potential to yield the substantial increases in capacity over existing fiber plant that were needed. There was no question of waiting for the exotic new technologies in the wings under the SDM (Spatial Division Multiplexing) banner. Mind you, coherent optics is exotic enough, and CableLabs has now revealed clearer details in an upgraded version of the original specification, called Full Duplex Coherent Optics.
In truth this latest enhancement is small beer compared with the benefits of coherent optics itself, just doubling capacity by enabling two-way traffic over a single fiber. CableLabs is talking of spectacular capacity gains of up to 100 times from coherent optics itself.
The need for such substantial performance gains can be seen just by calculating the traffic requirements of typical MSOs, CableLabs contends. Capacity demands have been amplified by the rollout of OTT services and by the migration toward full duplex DOCSIS, with its trend towards more symmetric transport, which relies on what is called a node plus zero amplifiers (N+0) architecture. The point is that this multiplies demand on the fibers to the node by creating more deep nodes.
A typical node in HFC networks today delivers services to 500 households. When converted to the N+0 architecture, 12 to 18 deeper N+0 nodes are created, each capable of supplying 10 Gbps to as many as 500 residential subscribers. In theory the total load could be as high as 60 Tbps.
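The scale of that demand multiplication can be sketched with back-of-envelope arithmetic using the figures above (the hub scale needed to reach 60 Tbps is inferred, not stated in the article):

```python
# Back-of-envelope N+0 demand arithmetic using the article's figures.
N0_NODES_PER_LEGACY_NODE = (12, 18)  # range quoted for an N+0 node split
GBPS_PER_N0_NODE = 10

# Demand on the fiber feeding one legacy node's service area:
low = N0_NODES_PER_LEGACY_NODE[0] * GBPS_PER_N0_NODE   # 120 Gbps
high = N0_NODES_PER_LEGACY_NODE[1] * GBPS_PER_N0_NODE  # 180 Gbps

# Reaching the article's theoretical 60 Tbps ceiling implies thousands of
# N+0 nodes behind a hub: 60,000 Gbps / 10 Gbps per node.
nodes_for_60_tbps = 60_000 // GBPS_PER_N0_NODE
print(low, high, nodes_for_60_tbps)  # 120 180 6000
```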
Furthermore, fiber demand for the business services and wireless backhaul served by MSOs is also increasing rapidly. So given CableLabs’ assumption that costly fiber re-trenching from hub to original fiber node must be avoided, a different solution has to be found, and only coherent technology has stepped up to the plate so far.
Coherent technology works by carrying information as modulations, that is variations, of the light wave’s amplitude (the size of its oscillation) and also its phase (the timing of the waveform’s cycle from peak to trough and back to peak). The key point is that it is already well proven, having been widely deployed for years in submarine cables, where it too is running out of steam. However, for terrestrial cables coherent technology promises to yield at least 100-fold increases in capacity, which would resolve the bandwidth crunch for the foreseeable future.
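The amplitude-plus-phase idea can be sketched as a constellation mapping, the digital side of coherent modulation. This is a minimal illustration using QPSK (where only phase varies; higher-order QAM varies amplitude too), not the mapping of any specific standard:

```python
import cmath

# Minimal sketch: QPSK maps bit pairs onto complex symbols that differ in
# phase. Higher-order formats (e.g. 16-QAM) also vary amplitude. The
# constellation points below are illustrative.
qpsk = {
    (0, 0): complex( 1,  1),
    (0, 1): complex(-1,  1),
    (1, 1): complex(-1, -1),
    (1, 0): complex( 1, -1),
}

for bits, sym in qpsk.items():
    amp = abs(sym)            # amplitude: size of the carrier's oscillation
    phase = cmath.phase(sym)  # phase: timing offset of the waveform cycle
    print(bits, round(amp, 3), round(phase, 3))
```

All four QPSK points share the same amplitude (√2 here) and are distinguished purely by phase, which is why QPSK is robust over long, noisy spans; packing more bits per symbol means spacing points in both amplitude and phase.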
What CableLabs has done is refine the technology for the much shorter distances of access networks, typically 100 times less than submarine, at a maximum of 50 kilometers rather than 5,000 kilometers.
As CableLabs Distinguished Technologist Alberto Campos wrote in a recent blog, the shorter fiber lengths greatly reduce dispersion of the optical signal, and since no in-line amplification is needed at such ranges, non-linear distortion and noise are also significantly reduced. This increases the link margin and means that much lower cost optical transceivers can be used.
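The dispersion advantage of short spans can be quantified roughly. The calculation below assumes a standard single-mode fiber chromatic dispersion coefficient of about 17 ps/(nm·km), a typical G.652 figure not given in the article:

```python
# Rough accumulated chromatic dispersion: access span vs submarine span.
# Assumes ~17 ps/(nm*km) for standard single-mode fiber (typical G.652
# value; not from the article).
D_PS_PER_NM_KM = 17.0
ACCESS_KM = 50
SUBMARINE_KM = 5_000

access = D_PS_PER_NM_KM * ACCESS_KM        # 850 ps/nm accumulated
submarine = D_PS_PER_NM_KM * SUBMARINE_KM  # 85,000 ps/nm accumulated
print(access, submarine, submarine / access)  # 850.0 85000.0 100.0
```

A 100-times shorter span accumulates 100-times less dispersion to compensate, which is what lets the receiver electronics, and hence the transceivers, be far cheaper.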
So far CableLabs has achieved 256 Gbps at 80 kilometers over a single wavelength with minimal need for dispersion compensation. That is 26 times the current capacity of optical carriers fully loaded with 1.2 GHz worth of DOCSIS 3.1 signals. Furthermore, CableLabs has multiplexed eight of these wavelengths to reach 2,048 Gbps, which is 50 times what can be achieved over four traditional analog optical carriers each carrying 10 Gbps of DOCSIS 3.1 payload.
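The multiples quoted above can be sanity-checked with simple arithmetic (the ~10 Gbps figure for a fully loaded 1.2 GHz DOCSIS 3.1 carrier is an approximation implied by the article's ratios):

```python
# Sanity-checking the article's "26 times" and "50 times" multiples.
coherent_per_wavelength = 256  # Gbps demonstrated over 80 km
docsis31_carrier = 10          # ~Gbps from 1.2 GHz of DOCSIS 3.1 (approx.)

print(round(coherent_per_wavelength / docsis31_carrier))  # 25.6, ~26x

wavelengths = 8
aggregate = wavelengths * coherent_per_wavelength  # 2,048 Gbps
analog_baseline = 4 * docsis31_carrier             # four 10 Gbps carriers
print(aggregate, round(aggregate / analog_baseline))  # 2048, ~50x
```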
That is a great achievement and suggests the target of a 100-fold increase in capacity per fiber is not unrealistic. Reaching it will depend on further improvements in capacity per wavelength, which could be gained by increasing the symbol rate or the modulation order. The trick will be to achieve this without raising the cost of the terminating electronics too far.
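The two levers mentioned above combine multiplicatively with polarization in a simple capacity relation. The parameter values below are assumed for illustration, though 64 Gbaud dual-polarization QPSK happens to reproduce the 256 Gbps per-wavelength figure:

```python
# Per-wavelength raw capacity = symbol rate * bits per symbol * polarizations.
# Parameter values are illustrative assumptions, not from the article.
def wavelength_capacity_gbps(gbaud, bits_per_symbol, polarizations=2):
    return gbaud * bits_per_symbol * polarizations

print(wavelength_capacity_gbps(64, 2))  # QPSK at 64 GBd, dual-pol: 256 Gbps
print(wavelength_capacity_gbps(64, 4))  # 16-QAM doubles it: 512 Gbps
print(wavelength_capacity_gbps(96, 4))  # faster symbol rate: 768 Gbps
```

Raising the modulation order needs less optical bandwidth but more signal-to-noise margin, while raising the symbol rate pushes the speed, and cost, of the DACs/ADCs in the terminating electronics, which is exactly the trade-off the article flags.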