There is no knocking Nokia’s achievement in getting closer to the theoretical maximum performance possible over a single optical fiber, increasing capacity by 25% over the most recently deployed systems and up to 65% over older networks.
Yet this aspect of Nokia’s new Photonic Service Engine (PSE) chipset is less useful for operators than some of its other features, which increase flexibility. It also highlights the value of the Shannon limit in defining what is possible over optical networks, and therefore in identifying what other measures must be taken to meet exploding demand for data capacity across fiber infrastructures, from metro networks to trunk undersea cables. The answer is Spatial Division Multiplexing (SDM), on which Nokia itself is working hard.
Casual observers might be surprised to see Nokia at the vanguard of optical fiber innovation, given that it pulled out of the field in 2012, when the then Nokia Siemens Networks (NSN) sold its optical networks division to Marlin Equity Partners, ostensibly to focus on its core mobile broadband business. But in April 2015 it acquired Alcatel-Lucent for €15.6 billion, and Alcatel-Lucent’s Bell Labs has long been a leader in fixed-line technologies generally; it is there that the work on photonic chipsets approaching the Shannon limit originated.
The Shannon limit dates back to 1948, when Claude Shannon derived a rule governing the maximum rate at which information can be transmitted error free over a channel of given bandwidth and background noise. It assumes the use of error correction, and so by implication requires efficient coding to get close to that maximum. In practice the Shannon limit defines a tradeoff between reach and signal-to-noise ratio for any network, whether optical, copper or wireless.
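Shannon’s rule can be stated as a simple formula, C = B·log2(1 + SNR), where C is the maximum error-free rate, B the channel bandwidth and SNR the linear signal-to-noise ratio. A minimal sketch, using illustrative figures (a 50 GHz channel at 20 dB SNR) that are assumptions for the example, not Nokia’s numbers:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: maximum error-free bit rate over a noisy channel."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative numbers: a 50 GHz optical channel at 20 dB SNR.
snr_db = 20.0
snr_linear = 10 ** (snr_db / 10)          # 20 dB -> linear ratio of 100
capacity = shannon_capacity_bps(50e9, snr_linear)
print(f"{capacity / 1e9:.0f} Gbps")       # ~333 Gbps
```

The formula makes the reach/SNR tradeoff concrete: as a signal travels further and the SNR falls, the achievable rate drops logarithmically.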
The key innovation enabling Nokia to approach the limit is a technique called probabilistic constellation shaping (PCS), which sounds complex but is in principle very simple. Nokia’s PSE employs QAM (Quadrature Amplitude Modulation) over optical fibers, the same mathematical technique used over cable TV networks, whereby bits are encoded in symbols created through variations in some property, or degree of freedom, of the carrier wave.
These symbols are normally arranged and transmitted in square constellations, so that for example an 8×8 constellation offers 64 possible symbol values and can encode 6 bits per symbol, since 2^6 = 64. That is 64 QAM. Similarly, 256 QAM uses a 16×16 constellation to encode 8 bits per symbol.
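The relationship between constellation size and bits per symbol is just a base-2 logarithm, as this small sketch shows:

```python
import math

def qam_bits_per_symbol(m: int) -> int:
    """A square M-QAM constellation with M points encodes log2(M) bits per symbol."""
    bits = math.log2(m)
    assert bits.is_integer(), "M must be a power of two"
    return int(bits)

for m in (16, 64, 256):
    side = math.isqrt(m)  # side length of the square grid
    print(f"{m} QAM: {side}x{side} grid -> {qam_bits_per_symbol(m)} bits/symbol")
# 64 QAM: 8x8 grid -> 6 bits/symbol; 256 QAM: 16x16 grid -> 8 bits/symbol
```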
The property of the carrier wave used to create symbols can be frequency, phase or amplitude, and as the connecting electronic components become more sensitive it becomes possible to distinguish ever smaller differences in these properties and so increase the constellation size. But larger constellations pack symbols closer together, reducing the margin against noise and making it harder to approach the Shannon limit. PCS manipulates the constellation statistically, transmitting the lower-amplitude symbols, which require less energy and are less susceptible to transmission noise, more often than the outer, higher-amplitude ones, and as a result reduces the average error rate. Applied at such high symbol rates, this statistical shaping irons out fluctuations and enables capacity to be increased.
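A common way to implement this shaping, described in the PCS literature, is to draw symbols from a Maxwell-Boltzmann distribution, in which the probability of a symbol falls off exponentially with its energy. The following is a simplified one-dimensional sketch over the amplitude levels of one 64-QAM axis; the shaping parameter `nu` is an arbitrary illustrative value, not a figure from Nokia:

```python
import math

# One-dimensional amplitude levels of one axis of a 64-QAM constellation,
# standing in for the full 2-D grid.
amplitudes = [-7, -5, -3, -1, 1, 3, 5, 7]
nu = 0.05  # shaping strength (illustrative assumption); nu = 0 means uniform

# Maxwell-Boltzmann shaping: low-energy (inner) symbols get higher probability.
weights = [math.exp(-nu * a * a) for a in amplitudes]
total = sum(weights)
probs = [w / total for w in weights]

entropy = -sum(p * math.log2(p) for p in probs)        # bits carried per symbol
avg_energy = sum(p * a * a for p, a in zip(probs, amplitudes))
uniform_energy = sum(a * a for a in amplitudes) / len(amplitudes)

print(f"entropy: {entropy:.2f} bits/symbol (uniform would be 3.00)")
print(f"mean symbol energy: {avg_energy:.1f} vs {uniform_energy:.1f} unshaped")
```

The tradeoff is visible in the numbers: shaping carries slightly fewer bits per symbol than the uniform constellation, but at a markedly lower average transmit energy, which is what buys the improved noise tolerance.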
It is only relatively recently that there has been such intense focus on optical performance. In the early days of optical transmission in the 1980s and 1990s it was regarded as an almost bottomless pit, with little need for research effort to extract extra capacity. But that all changed with the arrival of broadband services over both fixed and mobile backhaul networks, as well as video at various stages of its lifecycle, traffic that is increasingly carried over optical networks.
For a while various techniques in addition to QAM itself enabled optical capacity to keep ahead of demand. These include low-loss single-mode fiber and polarization division multiplexing (PDM), which creates two channels by modulating data onto waves oscillating at right angles to each other so that they do not interfere. The most important by far, because it increased capacity by a factor of 100 or more, is wavelength division multiplexing (WDM), which encodes data over different wavelengths, or “colors”, of light. But even WDM is running out of headroom, since the wavelengths used cannot be packed much closer together without interfering, and in that context the extra capacity of Nokia’s PSE looks relatively slight.
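The way these techniques multiply together can be seen in a back-of-envelope capacity calculation. All the figures below are illustrative assumptions chosen for the example, not any specific product’s specification:

```python
# Stacking the multiplexing techniques: total capacity is the product of
# the channel count from each degree of freedom.
wdm_channels = 96        # wavelengths ("colors") on the fiber
polarizations = 2        # PDM: two orthogonal polarizations
bits_per_symbol = 6      # 64 QAM
symbol_rate_baud = 32e9  # 32 GBd per carrier (assumed)

capacity_bps = wdm_channels * polarizations * bits_per_symbol * symbol_rate_baud
print(f"{capacity_bps / 1e12:.1f} Tbps per fiber")  # 36.9 Tbps
```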
As it happens, one of the benefits of the PSE that will appeal to many operators is the chipset’s ability to fine tune wavelength capacity, ramping it up or down smoothly in the range 100 Gbps to 600 Gbps per wavelength, which as Nokia correctly claims will greatly simplify dynamic network operations and planning. It will help cloud service providers cope with varying demand and peak traffic levels. Also attractive is the 60% reduction in power consumption enabled by the PCS technology, and Nokia is right that the performance improvement will extend the life of existing fibers.
The more important innovation in fiber is SDM, the last untapped degree of freedom for optical data transmission. The concept is familiar in the wireless world, where multi-antenna systems create extra channels through spatial separation; it works well there given all the room in the air, with scope even for exploiting interference constructively. For copper, SDM is problematic because of electromagnetic interference between conductors. In fiber, however, interference is less of an issue, arising from imperfections in the material rather than being intrinsic to the medium.
In this case the concept is simpler in theory than in practice. The principle is to exploit the physical space within a fiber or fiber bundle to transmit multiple rays of light, but the implementation is complex, which is one reason SDM remains work in progress, with some of the more advanced concepts still a few years away from commercial deployment.
There are two fundamental approaches to SDM for optical transmission, multicore and multimode, which are subtly different. Multicore simply means embedding multiple cores in a single cladding, each able to carry distinct light signals but not entirely isolated, so that there is some crosstalk between them. Such crosstalk and signal loss limit the potential gain in capacity achievable with multicore fibers, which is why there is greater optimism over the future of multimode techniques. These exploit the spatial patterns in which light rays travel, allowing multiple rays, or “modes”, to be carried inside a single fiber if there is enough room. This is not the same as WDM, where data is modulated over different wavelengths, or “colors”, of a single signal or mode. In fact WDM can be applied over each of these multiple modes within a single fiber.
Multimode optical transmission has in fact been around for years, but confined to short distances because of the greater light dispersion that occurs in larger fiber cores. It became popular in data center and campus applications because the wider core allows lower-precision, cheaper terminating electronics.
Now multimode techniques are being developed for longer-distance transmission, with Nokia working on a radically new approach involving hollow fibers comprising just glass cladding around an air core. Whereas light is transmitted in conventional fibers by confinement through reflection off the cladding, in a hollow core the cladding acts as a guide. This has huge potential because light transmission is less impeded, and spectral efficiency could potentially be increased 1,000-fold. Another benefit is even lower latency, since light travels at virtually full speed through air, compared with around 65% of that speed through glass, which Nokia speculates could be a competitive advantage in hyper-latency-sensitive applications such as financial trading and perhaps even gaming. But there are challenges in realizing this potential in practice, and Nokia admits hollow-core deployments will not occur for some time, without specifying how long.
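The latency advantage is easy to quantify from the speed figures above. A minimal sketch, assuming an illustrative 6,000 km transatlantic route (the distance is an assumption for the example):

```python
C_VACUUM_KM_S = 299_792.458   # speed of light in vacuum, km/s
GLASS_FRACTION = 0.65          # fraction of that speed in solid glass fiber

def one_way_latency_ms(distance_km: float, speed_fraction: float) -> float:
    """Propagation delay over the route at the given fraction of c."""
    return distance_km / (C_VACUUM_KM_S * speed_fraction) * 1000

route_km = 6000                # assumed transatlantic route length
solid = one_way_latency_ms(route_km, GLASS_FRACTION)
hollow = one_way_latency_ms(route_km, 1.0)  # air: virtually full speed
print(f"solid core: {solid:.1f} ms, hollow core: {hollow:.1f} ms, "
      f"saving: {solid - hollow:.1f} ms one way")
```

A saving on the order of 10 ms one way over such a route is substantial by the standards of financial trading, where microseconds are fought over.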
However, Nokia has already demonstrated what it claims is a record for single-mode transmission in a hollow core: 24 Tbps, using 96 wavelength channels each carrying dual-polarization 32QAM modulation.
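Those figures are internally consistent: dividing the total rate by the channel count, the two polarizations and the 5 bits per symbol of 32QAM gives the per-carrier symbol rate the demonstration implies (the source does not state it, so this is a derived figure, not a quoted one):

```python
total_bps = 24e12          # 24 Tbps demonstrated
channels = 96              # wavelength channels
polarizations = 2          # dual polarization
bits_per_symbol = 5        # 32QAM encodes log2(32) = 5 bits

implied_baud = total_bps / (channels * polarizations * bits_per_symbol)
print(f"implied symbol rate: {implied_baud / 1e9:.0f} GBd per carrier")  # 25 GBd
```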
Of course, hollow core would only appeal for greenfield deployments, because the cost of replacing fibers within existing networks is prohibitive. This would preclude the use of hollow fibers in terrestrial fiber networks for the foreseeable future, until their time for replacement comes around, which could be anywhere between 20 and 100 years after installation.
Yet as Nokia’s Director for Optical Transmission Subsystems Research, Peter Winzer, pointed out, there are far more greenfield cases than is commonly realized. Whenever Google, Apple, Microsoft, Amazon or anybody else builds new data centers, usually in the middle of nowhere, short-distance fiber interconnects have to be provided. At the other end of the distance spectrum, undersea cables, which are being laid continuously, are by definition greenfield, and there hollow fiber would be particularly attractive given its high capacity and low latency.
What is clear is that the telecommunications industry is waiting on SDM, especially multimode and ultimately hollow core, to yield the significant increases in capacity that will be driven in particular by ultra HD video services.