Nokia achieves 500Gbps, getting close to Shannon limit for transport

Nokia has completed the first field trial of its latest fiber optic technology, designed to improve the flexibility and economics of long-haul data transmission at the regional, national and submarine levels.

The company has achieved 500Gbps over a single wavelength using a technology called Probabilistic Constellation Shaping (PCS), inherited from Bell Labs through the 2016 acquisition of Alcatel-Lucent. The trial was conducted with German operator M-Net in preparation for the operator's commercial roll-out of its regional fiber network based on DWDM (Dense Wavelength Division Multiplexing).

This comes as fiber networks, once seemingly infinite in their capacity, are getting close to running out of road, with earlier technologies having been driven about as far as they economically can be. The most important of those was WDM itself, which as it morphed into DWDM increased capacity a hundredfold or more by encoding data over multiple wavelengths, or 'colors', of light. Even that is running out of scope for further improvement, as the wavelengths used become too closely spaced to avoid interference.

In this context Nokia's PCS, embodied in the Photonic Service Engine (PSE) used in the M-Net trial, is not in itself the answer to the capacity crunch driven above all by proliferating video traffic and cellular backhaul, because it enables only a modest increase in bit rate per fiber.

As Nokia points out, the PSE increases capacity per optical channel by 25% over the most recently deployed systems and by up to 65% over older networks, reaching 500Gbps in the M-Net trial. This aspect of the PSE is arguably less valuable to operators than the features that increase flexibility and extend the distance covered without having to regenerate the signal, which matters particularly for long distance transmission, and especially for submarine links, given that repeaters have to be fed with electric power. With the latest PCS technology the optical range without repeaters increases greatly, for example from 100km to 700km at 400Gbps, a far bigger gain than the capacity improvement itself.

PCS is also notable for extracting almost as much capacity as is theoretically possible over a given optical channel. That ceiling is defined by the Shannon Limit, a rule developed in 1948 by Claude Shannon specifying the maximum rate at which information can be transmitted virtually error-free over a channel of given bandwidth and background noise. It assumes the use of error correction, which means that getting as close as possible to the limit requires efficient coding.
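For reference, the limit itself is a single formula. For a channel of bandwidth B hertz with signal power S and noise power N, the maximum rate C at which information can be carried virtually error-free, in bits per second, is:

```latex
C = B \log_2\!\left(1 + \frac{S}{N}\right)
```

As a purely illustrative calculation of our own, not a figure from the trial: a 50GHz optical channel with a signal to noise ratio of 20dB (a factor of 100) would top out at around 50GHz x log2(101), or about 333Gbps per polarization.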

Now, PCS gets closer to the Shannon Limit by exploiting the underlying principle of QAM (Quadrature Amplitude Modulation) over optical fibers, the same mathematical technique used in cable TV networks. Under QAM, bits are encoded in symbols created through variations in some property, or degree of freedom, of the carrier wave. These symbols are normally arranged and transmitted in square constellations, so that for example an 8×8 constellation with 64 possible points can encode 6 bits per symbol, since 2^6 = 64. That is 64 QAM. Similarly, 256 QAM with a 16×16 constellation encodes 8 bits per symbol.
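As a quick illustration of that geometry, here is a minimal Python sketch of our own, purely for illustration, that builds a square M-QAM constellation and confirms the bits-per-symbol count:

```python
import numpy as np

def square_qam(m):
    """Return the complex points of a square m-QAM constellation."""
    side = int(np.sqrt(m))                      # 8 for 64 QAM, 16 for 256 QAM
    levels = np.arange(side) * 2 - (side - 1)   # symmetric amplitude levels, e.g. -7..7
    i, q = np.meshgrid(levels, levels)          # in-phase and quadrature axes
    return (i + 1j * q).ravel()                 # one complex point per symbol

for m in (64, 256):
    points = square_qam(m)
    print(f"{m} QAM: {len(points)} points, {int(np.log2(m))} bits per symbol")
# 64 QAM: 64 points, 6 bits per symbol
# 256 QAM: 256 points, 8 bits per symbol
```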

The property of the carrier wave used to create symbols can be its frequency, phase or amplitude, or a combination of these. Given advances in signal processing and electronics, the components connecting fiber segments have become sensitive enough to transmit and detect ever smaller differences in these carrier wave properties, allowing constellation sizes to increase.

But packing constellation points ever closer together makes them more susceptible to noise and so makes it harder to approach the Shannon limit. PCS manipulates the constellations by assigning the lower-energy symbols, which are less susceptible to transmission noise, to the more common data, so that they are sent more often. This means that at bit rates below the maximum PCS can support, not all the constellation points are actually transmitted: only those with lower energy are sent. Only a stream at the maximum data rate, which is 600Gbps for Nokia's latest PSE version 3, uses all the constellation points.
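Nokia's exact implementation is proprietary, but the standard approach in the research literature is to draw symbols from a Maxwell-Boltzmann distribution, where each point x is sent with probability proportional to exp(-λ|x|²). The sketch below, our assumption rather than Nokia's published method, shows how raising λ trades bits per symbol against average transmit energy on a 64 QAM constellation:

```python
import numpy as np

def square_qam(m):
    side = int(np.sqrt(m))
    levels = np.arange(side) * 2 - (side - 1)
    i, q = np.meshgrid(levels, levels)
    return (i + 1j * q).ravel()

def shape(points, lam):
    """Maxwell-Boltzmann shaping: P(x) proportional to exp(-lam * |x|^2)."""
    weights = np.exp(-lam * np.abs(points) ** 2)
    p = weights / weights.sum()
    bits = -np.sum(p * np.log2(p))               # entropy: bits carried per symbol
    energy = np.sum(p * np.abs(points) ** 2)     # average transmit energy
    return bits, energy

points = square_qam(64)
for lam in (0.0, 0.02, 0.05):
    bits, energy = shape(points, lam)
    print(f"lambda={lam:.2f}: {bits:.2f} bits/symbol, average energy {energy:.1f}")
# lambda=0 reproduces uniform 64 QAM (6.00 bits/symbol at full energy); larger
# lambda favors the low-energy points, cutting both the rate and the energy.
```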

This gets far closer to the Shannon Limit than any other optical system, in fact to within just a few tenths of a decibel, compared with around two decibels for most other modern deployments. The decibel in this context measures the signal to noise ratio used in Shannon's equation.
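To put those decibels in context, the snippet below, with a nominal 20dB channel SNR chosen purely for illustration, shows how much spectral efficiency a 2dB gap from the limit costs compared with a 0.2dB gap:

```python
import math

def spectral_efficiency(snr_db):
    """Shannon spectral efficiency in bits/s/Hz at a given SNR in dB."""
    return math.log2(1 + 10 ** (snr_db / 10))

CHANNEL_SNR_DB = 20.0                    # illustrative channel SNR
for gap_db in (0.0, 0.2, 2.0):           # distance from the Shannon limit
    se = spectral_efficiency(CHANNEL_SNR_DB - gap_db)
    print(f"gap {gap_db:.1f} dB -> {se:.2f} bits/s/Hz")
# The 2 dB gap costs roughly 0.65 bits/s/Hz here, i.e. tens of Gbps
# on a 50 GHz channel once both polarizations are counted.
```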

As a result of the error reduction enabled by Nokia's PSE, capacity is increased, and because more of the data is carried by lower-energy symbols, energy consumption is reduced, on average by around 60% – and this is on top of power savings through needing fewer repeaters.

Another significant benefit of the PSE that will appeal to many operators is its ability to fine-tune wavelength capacity, scaling it up or down smoothly between 100Gbps and 600Gbps per wavelength, which as Nokia correctly points out will greatly simplify dynamic network operations and planning. It will help cloud service providers cope with varying demand and peak traffic levels.

We can see, then, that the PSE is a much bigger deal than the headline gain in capacity suggests, once the energy savings and flexibility improvements are taken into account. But it does little to stave off the impending bandwidth crunch: maximum capacity per fiber has been increasing by around 25% a year on average over the last decade, and much faster before that, so on this basis the PSE adds only about a year's worth of capacity growth.

Given that the PSE is rubbing up against the Shannon Limit per channel, suggesting that 600Gbps is close to the theoretical maximum, the only way of extracting substantially greater capacity from a single fiber is to multiplex more channels down it. Even DWDM itself is getting to the point where multiplexing many more channels is not economically feasible, largely because squeezing more wavelengths in reduces the channel bandwidth available to each one, and with spectral efficiency constrained by the Shannon Limit there is no offsetting gain. That imposes a practical limit on the number of wavelengths that can be accommodated in a given amount of optical spectrum, as the back-of-envelope sketch below illustrates.
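Assuming a usable optical band of roughly 4.8THz and a spectral efficiency pinned near 6 bits/s/Hz by the Shannon Limit, both round figures of our choosing, tightening the wavelength grid multiplies channels only by shrinking each channel's bandwidth, leaving total fiber capacity almost unchanged:

```python
C_BAND_HZ = 4.8e12   # assumed usable optical spectrum, ~4.8 THz
SE = 6.0             # assumed spectral efficiency near the Shannon Limit, bits/s/Hz

for spacing_ghz in (100, 50, 25):
    channels = int(C_BAND_HZ / (spacing_ghz * 1e9))
    per_channel_gbps = spacing_ghz * SE          # narrower channel, lower ceiling
    total_tbps = channels * per_channel_gbps / 1000
    print(f"{spacing_ghz} GHz grid: {channels} channels x "
          f"{per_channel_gbps:.0f} Gbps = {total_tbps:.1f} Tbps")
# Halving the spacing doubles the channel count but halves each channel's
# bandwidth, so the total stays pinned at the same 28.8 Tbps.
```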

This leaves two other options. The first is multi-level modulation, which has already been employed to increase the spectral efficiency of a wavelength by effectively inserting two or more channels into its spectrum. It can exploit polarization division multiplexing (PDM), which allows two channels of information to be transmitted on the same carrier frequency using waves oscillating at right angles to each other, so that they do not interfere. However, like PCS this has limited scope for increasing capacity, and it brings a trade-off with transmission distance as well as the need to maintain backward compatibility with existing systems, which is essential in fiber optic communication where so much of the cost is in laying the cable.
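A toy example makes the orthogonality point concrete. Representing the two polarization states as perpendicular Jones vectors, a receiver can project the combined field onto each axis and recover the two streams with no crosstalk; this is an idealized sketch that ignores the polarization drift real systems must track:

```python
import numpy as np

# Orthogonal polarization states as Jones vectors (x- and y-polarized)
pol_x = np.array([1.0, 0.0])
pol_y = np.array([0.0, 1.0])

# Two independent QPSK symbols, one per polarization (illustrative values)
sym_x = (1 + 1j) / np.sqrt(2)
sym_y = (-1 + 1j) / np.sqrt(2)

# The fiber carries the superposition of both polarization components
field = sym_x * pol_x + sym_y * pol_y

# Projecting onto each basis vector separates the streams cleanly
print(np.isclose(field @ pol_x, sym_x))  # True: x stream recovered
print(np.isclose(field @ pol_y, sym_y))  # True: y stream recovered
```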

So there is just one option remaining for great improvements in fiber transmission capacity, and that is Spatial Division Multiplexing (SDM). Fortunately, that looks promising with early demonstrations suggesting there is scope to boost capacity per fiber by further orders of magnitude.

The concept of SDM is familiar from radio networks, where multi-antenna systems create extra channels through spatial separation; it works well there given the capacity available in the air, with scope even for exploiting interference constructively. In copper networks SDM cannot readily be employed, because electromagnetic interference between closely spaced conductors still operates. In the case of fiber, however, interference is less of an issue, arising only through imperfections in the material rather than because the signal paths are physically too close together, as in copper.

The principle of SDM in fiber optics is to exploit the physical space within a fiber pipe or bundle to transmit multiple rays of light. Although the science is simple, the challenge lies in the engineering, which is one reason why SDM remains work in progress, with some of the more advanced concepts still a few years away from commercial deployment.

There are two fundamental approaches to SDM for optical transmission, multicore and multimode, which are subtly different. Multicore means embedding multiple cores in a single cladding, each core carrying distinct light signals; the cores are not entirely isolated, so there is some crosstalk between them. That crosstalk, along with signal loss, limits the potential capacity gain achievable with multicore fibers, and so there is greater optimism over the future of the alternative, multimode approach.

This exploits the distinct spatial patterns in which light can propagate along a fiber, allowing multiple rays, or "modes", to be carried inside a single fiber if there is enough room. This is not the same as WDM, where data is modulated over different frequencies or "colors" within a single signal or mode. In fact, WDM can be applied to each of the multiple modes within a single fiber – and has been in trials.

Multimode optical transmission has in fact been around for years, but confined to short distances because of the greater light dispersion that occurs in larger fiber cores. It became popular in data center and campus applications because, with a wider core, the terminating electronics can be lower precision and cheaper.

Now multimode techniques are being developed for longer distance transmission, with Nokia working on a radically new approach involving hollow fibers with just glass cladding. While light is transmitted in conventional fibers through confinement by reflection off the cladding, in a hollow core the cladding acts as a guide. This has huge potential because light transmission is impeded less, resulting in greater spectral efficiency by a factor of up to 1,000.

Another benefit is even lower latency, since light in a hollow core travels at virtually the maximum speed allowed by Einstein's theory of relativity, compared with about 65% of that speed through glass. This could be significant in hyper latency-sensitive applications such as financial trading and perhaps even gaming. But there are challenges in actually realizing this potential, and so hollow-core deployments will not occur for some time.
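The latency arithmetic is simple to check. Taking the 65% figure for solid glass and assuming, for illustration, that a hollow core runs at close to vacuum speed:

```python
C_KM_S = 299_792.458     # speed of light in vacuum, km/s

def one_way_ms(distance_km, fraction_of_c):
    """One-way propagation delay in milliseconds."""
    return distance_km / (C_KM_S * fraction_of_c) * 1000

DISTANCE_KM = 1000       # illustrative route length
glass = one_way_ms(DISTANCE_KM, 0.65)    # conventional solid-core fiber
hollow = one_way_ms(DISTANCE_KM, 0.997)  # hollow core, near vacuum speed
print(f"solid core: {glass:.2f} ms, hollow core: {hollow:.2f} ms, "
      f"saving {glass - hollow:.2f} ms one way")
# ~5.13 ms vs ~3.35 ms: roughly 1.8 ms saved per 1,000 km, one way
```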

Yet some early records have been set, with Nokia in 2017 demonstrating single mode transmission in a hollow core fiber at 24Tbps using WDM. This involved 96 wavelength channels, each carrying dual polarization 32 QAM modulation.

Then in 2018 NICT's Network System Research Institute and Fujikura developed an optical fiber combining three modes, capable of wide-band wavelength multiplexing combined with SDM. The team demonstrated transmission over 1,045km at a data rate of 159Tbps. The key advance lay in overcoming the different propagation delays of the optical signals in each of the three modes, which had previously prevented multimode fibers from being used for long distance transmission at such high data rates.

NICT also applied SDM over multicore fibers at short range for all-optical switching at 53.3Tbps, claiming a world record switching capacity for short-reach data center networks. This all-optical approach would simultaneously boost switching capacity in a data center and reduce the energy cost per bit.

With SDM left as the last major degree of freedom yet to be exploited in optical communications, it is not surprising that research has ballooned over the last few years. This has led inevitably to the term DSDM (Dense SDM), defined in this case as 30 spatial channels or more per fiber. Multiplying the maximum number of wavelength and spatial channels by the maximum channel capacity gives an idea of the potential capacity per fiber over the nearer term: around 120 x 30 x 600Gbps, or 2.16Pbps, over 100 times the current limit. Several papers have already proposed schemes in this territory that would take single fibers beyond 1Pbps. That would sustain current rates of fiber capacity increase, which admittedly have slowed a lot since the early years in the 1980s, for another 15 to 20 years.
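Spelling out that multiplication, with each factor an upper-bound assumption rather than a deployed figure:

```python
WAVELENGTHS = 120        # assumed maximum DWDM channels per fiber
SPATIAL_CHANNELS = 30    # the DSDM threshold: 30+ modes or cores
PER_CHANNEL_GBPS = 600   # PSE-class maximum per wavelength

total_gbps = WAVELENGTHS * SPATIAL_CHANNELS * PER_CHANNEL_GBPS
print(f"{total_gbps / 1e6:.2f} Pbps per fiber")  # 2.16 Pbps
```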
