
Blending copper and fiber

Apr 27, 2023

Copper and fiber cables are evolving to meet the needs of data centers, but both will have a place in the future of networks

Not a week goes by without a new data center opening somewhere, or a large hosting provider expanding its existing facilities. Recent research from iXConsulting backs up that trend. Its 14th Data Center Survey polled companies which between them control around 25 million square feet of data center space in Europe, including owners, operators, developers, investors, consultants, design and build specialists, large corporates, telcos, systems integrators, colocation companies and cloud service providers.

All expressed a desire and intention to build out their current data center footprint, both in-house and through third parties, with 60 percent saying they would increase in-house capacity in 2017 and 38 percent in 2018. Over a third (35 percent) said they would expand their third party hosting capacity by 2019.

More than any other part of the market, it is the hyperscale cloud service providers that appear to be driving that expansion. Canalys suggests that the big four cloud players on their own - Amazon Web Services (AWS), Google, IBM and Microsoft - represented 55 percent of the cloud infrastructure services market (including IaaS and PaaS) by value in the second quarter of 2017, a market worth US$14bn in total and growing 47 percent year on year.

Irrespective of the size of the hosting facilities being owned and maintained, the unrelenting growth in the volume of data and virtualized workloads being stored, processed and transmitted as those data centers expand will put significant strain on the underlying data center infrastructure. And that is especially true for internal networks and underlying cabling systems that face an acute lack of bandwidth and capacity for future expansion with current technology and architectural approaches.

In each individual data center the choice of cabling will depend on a number of different factors beyond just capacity, including compatibility with existing wiring, transmission distances, space restrictions and budget.
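
As a rough illustration of how those factors interact, the sketch below shows a copper-versus-fiber rule of thumb in Python. The thresholds are illustrative assumptions drawn from the headline distances and data rates quoted later in this article, not from any standard.

```python
def suggest_medium(required_gbps: float, link_length_m: float) -> str:
    """Very rough rule of thumb for choosing a cabling medium.

    Illustrative thresholds only: real decisions also weigh existing
    wiring, pathway space, termination skills and budget.
    """
    if required_gbps <= 10 and link_length_m <= 100:
        return "copper (Cat6/Cat7 twisted pair)"
    if required_gbps <= 40 and link_length_m <= 30:
        return "copper (Cat8, top-of-rack/end-of-row reach)"
    if link_length_m <= 150:
        return "multi-mode fiber (OM3/OM4)"
    return "single-mode fiber (longer runs, higher cost per port)"


if __name__ == "__main__":
    for gbps, metres in [(10, 90), (40, 20), (100, 120), (100, 1500)]:
        print(f"{gbps} Gbps over {metres} m -> {suggest_medium(gbps, metres)}")
```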

Unshielded (UTP) and shielded (STP) twisted pair copper cabling has been widely deployed in data centers over the past 40 years, and many owners and operators will remain reluctant to completely scrap existing investments.

As well as being cheaper to buy, copper cabling has relatively low deployment costs because there is no need to buy additional hardware, and it can be terminated quickly and simply by engineers on site.

Fiber needs additional transceivers to connect to switches, and also requires specialist termination. By contrast, copper cables use the same RJ-45 interfaces as, and are backwards compatible with, previous copper cabling specifications, which simplifies installation and allows gradual migration over a longer period. Standards for copper cabling have evolved to ensure this continuity (see box: copper standards evolve).

Data center networks that currently rely on a combination of 1Gbps and/or 10Gbps connections at the server, switch and top-of-rack layers are likely to see 25/40Gbps as the next logical upgrade. But in order to avoid bottlenecks in the aggregation and backbone layers, operators will also need to consider the best approach to boosting capacity elsewhere, particularly over the longer distances which copper cables (even Cat8) are ill equipped to support.
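
A short back-of-the-envelope calculation shows why the aggregation layer becomes the pressure point once server ports are upgraded. The rack and uplink counts below are hypothetical, chosen only to illustrate how the oversubscription ratio worsens.

```python
# Hypothetical top-of-rack example: 48 servers moving from 10Gbps to 25Gbps
# ports, uplinked to the aggregation layer over four 100Gbps links.
servers_per_rack = 48
uplinks = 4
uplink_gbps = 100

for server_gbps in (10, 25):
    downstream = servers_per_rack * server_gbps  # traffic the rack can offer
    upstream = uplinks * uplink_gbps             # capacity toward aggregation
    ratio = downstream / upstream
    print(f"{server_gbps} Gbps servers: {downstream} Gbps offered, "
          f"{upstream} Gbps of uplink -> {ratio:.1f}:1 oversubscription")
```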

Many data center operators and hosting companies have plans to deploy networks able to support data rates of 100Gbps and beyond in the aggregation and core layers, for example.

That capacity will have to cope with the internal data transmission requirements created by the hundreds of thousands, or even millions, of VMs expected to run on data center servers in 2018/2019, and most operators are actively seeking solutions that will lay the basis for migration to 400Gbps in the future.

Where that sort of bandwidth over longer cable runs is required, the only realistic choice is fiber - either multi-mode fiber (MMF) or single-mode fiber (SMF). MMF is cheaper, but supports lower bandwidth and shorter cable runs. It was first deployed in telecommunications networks in the early 1980s and quickly advanced into enterprise local and wide area (LAN/WAN) networks, storage area networks (SANs) and backbone links within server farms and data centers that required more capacity than copper cabling could support.

Meanwhile, telecoms networks moved on to single-mode fiber, which is more expensive but allows greater throughput and longer distances. Most in-building fiber is still multi-mode, and the network industry has developed a series of enhancements to the fiber standards in order to maximize the data capacity of those installations (see box: making multi-mode do more).

As data centers have continued to expand, however, the distance limitations of current MMF specifications have proved restrictive for some companies. This is particularly true for hyperscale cloud service providers and those storing massive volumes of data, such as Facebook, Microsoft and Google, which have constructed large campus facilities spanning multiple kilometers. Social media giant Facebook, for example, runs several large data centers across the globe, each of which links hundreds of thousands of servers together in a single virtual fabric spanning one site. The same is true for Microsoft, Google and other cloud service providers, for whom east-west network traffic (i.e. between different servers in the same data center) requirements are particularly high.

What these companies ideally wanted was single-mode fiber in a form that was compatible with the needs and budget of data centers: a 100Gbps fiber cabling specification with a single-mode interface that was cost competitive with existing multi-mode alternatives, had minimal optical signal loss and supported transmission distances of between 500m and 2km. Four possible specifications were created by different groups of network vendors. Facebook backed the 100G specification from the CWDM4-MSA, which was submitted to the Open Compute Project (OCP) and adopted as part of OCP in 2011.

Facebook shifted to single-mode because it designed and built its own proprietary data center fabric, and was hitting significant limitations with existing cabling solutions. Its engineers calculated that to reach 100m at 100Gbps using standard optical transceivers and multi-mode fiber, it would have to re-cable with OM4 MMF. This was workable inside smaller data centers but gave no flexibility for longer link lengths in larger facilities, and it wasn't future proof: there was no likelihood of bandwidth upgrades beyond 100Gbps.

Whilst Facebook wanted fiber cabling that would last the lifetime of the data center itself, and support multiple interconnect technology lifecycles, available single-mode transceivers supporting link lengths of over 10km were overkill. They provided unnecessary reach and were too expensive for its purposes.

So Facebook modified the 100G-CWDM4 MSA specification to suit its own needs for reach and throughput. It also decreased the temperature range, as the data center environment is more controlled than the outdoor or underground environments encountered by telecoms fiber.

It also set more suitable expectations for service life for cables installed within easy reach of engineers.

The OCP now has almost 200 members including Apple, Intel and Rackspace. Facebook also continues to work with Equinix, Google, Microsoft and Verizon to align efforts around an optical interconnect standard using duplex SMF, and has released the CWDM4-OCP specification which builds on the effort of CWDM4-MSA and is available to download from the OCP website.

The arrival of better multi-mode fiber (OM5 MMF) and the lower-cost single-mode fiber being pushed by Facebook could change the game significantly, and prompt some large-scale providers to go all-fiber within their hosting facilities, especially where they can use their buying power to drive down the cost of transceivers.

In reality, few data centers are likely to rely exclusively on either copper or fiber cabling – the optimal solution for most will inevitably continue to rely on a mix of the two in different parts of the network infrastructure for the foreseeable future.

The use of fiber media converters adds a degree of flexibility too, interconnecting different cabling formats and extending the reach of copper-based Ethernet equipment over SMF/MMF links spanning much longer distances.

So while future upgrades of the existing Cat6/7 estate to Cat8 cabling supporting 25/40Gbps data rates will handle increased capacity requirements over short-reach connections at the server, switch and top-of-rack level for some years to come, data center operators can then aggregate that traffic over much higher capacity MMF/SMF fiber backbones for core interconnect and cross-campus links.

Copper standards evolve

Most facilities currently rely on a mixture of Category 6 (Cat6) and Cat7 copper cabling that supports 10Gbps bandwidth over 100m, and higher data rates of up to 40Gbps over much shorter distances. But the evolution of those copper cabling specifications is now fundamental to meeting the requirements of not only hyperscale cloud service providers, but also larger enterprises and telcos with big ambitions to expand their use or delivery of either private or hybrid cloud hosted applications and services.

In 2016, the Telecommunications Industry Association (TIA) TR-42 Telecommunications Cabling Systems Engineering Committee approved the next stage in that evolution - Cat8, which supports 25/40GBase-T over short runs of 5 to 30m of shielded twisted pair cabling with a standard RJ-45 Ethernet interface. Due to its relatively short reach, Cat8 is, for the moment, targeted at switch-to-server connections in top of rack or end of row topologies.
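
The headline copper figures quoted above can be summarized in a small lookup structure; this is only a sketch of the numbers as described in this article, not a substitute for the TIA specifications.

```python
# Copper cabling categories as described in the text above.
# Figures are the headline numbers quoted in this article, not full spec tables.
copper_categories = {
    "Cat6/Cat7": {"max_gbps": 10, "reach_m": 100,
                  "note": "up to 40Gbps only over much shorter runs"},
    "Cat8":      {"max_gbps": 40, "reach_m": 30,
                  "note": "25/40GBase-T, 5-30m shielded runs, RJ-45, ToR/EoR use"},
}

for cat, spec in copper_categories.items():
    print(f"{cat}: {spec['max_gbps']}Gbps up to {spec['reach_m']}m - {spec['note']}")
```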

Making multi-mode do more

Defined by their core and cladding diameters, multi-mode fiber types are designated by the IEC as OM1 through to OM4. When bandwidth requirements surpassed the 100Mbps that OM1 could deliver, the 62.5 µm core diameter was reduced to 50 µm (OM2) to improve capacity to 1Gbps, and even 10Gbps over shorter link lengths of up to 82m.

That was boosted again with OM3 (or laser-optimized multimode fiber - LOMMF) in the 1990s. OM3 used vertical cavity surface emitting laser (VCSEL) rather than LED-based equipment to increase the reach of OM2, supporting transmission rates of 10Gbps over 300m.

Various enhancements to OM3 pushed bandwidth and reach to 40/100Gbps over distances up to 100m, but the arrival of OM4 (which uses the same 50 µm diameter and VCSEL equipment) extended 10Gbps bandwidth to 550m and allowed 100Gbps data rates over 150m. All four types of MMF cabling are still found in many of today's data centers, but OM3/4 predominate due to their higher bandwidth, longer reach and VCSEL compatibility.

A fifth implementation - OM5, previously known as wideband MMF (WBMMF) - uses shortwave wavelength division multiplexing (SWDM) and was published as the TIA-492AAAE standard in 2016. It uses the same 50 µm diameter and VCSEL equipment as OM3/4 and is fully backward compatible with its predecessors, but increases the capacity of each fiber by a factor of four, supporting data rates of up to 100Gbps over duplex fiber connections and, in the future, 400Gbps over 8-fiber MPO interfaces.
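
As a quick sketch of what that factor of four means in practice, the snippet below captures the multi-mode generations described above and the arithmetic behind a 100Gbps SWDM link. The 25Gbps per-wavelength rate is an assumption typical of SWDM4-style optics, and the reach figures are the ones quoted here rather than complete standard tables.

```python
# Multi-mode fiber generations as described above (illustrative figures only).
mmf_generations = {
    "OM1": {"core_um": 62.5, "headline": "100Mbps-class links"},
    "OM2": {"core_um": 50.0, "headline": "1Gbps, 10Gbps to ~82m"},
    "OM3": {"core_um": 50.0, "headline": "10Gbps to 300m (VCSEL-optimized)"},
    "OM4": {"core_um": 50.0, "headline": "10Gbps to 550m, 100Gbps to 150m"},
    "OM5": {"core_um": 50.0, "headline": "SWDM: four wavelengths per fiber"},
}

# What SWDM over OM5 buys: four wavelengths per fiber multiply the capacity
# of a duplex pair without adding fibers.
wavelengths = 4
lane_gbps = 25  # assumed per-wavelength rate for a 100G SWDM4-style optic
print(f"OM5 duplex pair with SWDM: {wavelengths} x {lane_gbps}Gbps = "
      f"{wavelengths * lane_gbps}Gbps")

for om, spec in mmf_generations.items():
    print(f"{om}: {spec['core_um']} µm core - {spec['headline']}")
```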

There has been little OM5 deployment in data centers to date, largely because few manufacturers have produced appropriate transceivers in any volume. Suppliers only formed the SWDM MSA group in March 2017, whilst Finisar announced the following November that it had started to produce QSFP28 SWDM transceivers supporting 100Gbps over a single pair of fibers. There is little doubt that OM5 will rapidly become the de facto MMF implementation for new data centers in 2018, whilst operators will also begin to upgrade existing facilities with new cabling and transmission equipment as required.

Recognizing the gap in provision and the potential size of the market opportunity, several network cabling suppliers formed multi-source agreements (MSAs) to collaborate on delivering single-mode fiber in a form usable in data centers. Four potential candidates for a suitable specification have emerged in the last few years.

The 100G CLR4 Alliance spearheaded by Intel and Arista Networks aimed to create a low power, 100G-CWDM solution in QSFP form factor supporting 100Gbps bandwidth over duplex SMF at distances up to 2km.

The OpenOptics 100 Gigabit Ethernet MSA was jointly founded by Mellanox Technologies and optical start-up Ranovus. It proposed a 100GbE specification and 1550nm QSFP28 optical transceiver with a 2km reach, using a combination of SMF and silicon photonics to offer capacity of 100G/400G and beyond based on WDM. Supporters include Ciena, Vertilas, MultiPhy and cloud service provider Oracle.

The CWDM4-MSA also targets 100G optical interfaces for 2km cable runs, using four 25Gbps wavelengths multiplexed over duplex SMF. The five founding members were Avago Technologies, Finisar, Oclaro, JDSU and Sumitomo Electric, with additional members including Brocade, Juniper Networks and Mitsubishi Electric. Though an interface was not specified by the consortium, the expectation is that the QSFP28 form factor will be applied.

The Parallel Single Mode 4-Lane (PSM4) MSA defined a specification with a minimum 500m reach that transmits 100Gbps over eight single-mode fibers (four transmit and four receive), each carrying 25Gbps, and supports QSFP28 optical transceivers. Original members included Avago, Brocade, Finisar, JDSU, Juniper Networks, Luxtera, Microsoft, Oclaro and Panduit.
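
Laid side by side, the four proposals differ mainly in fiber count, multiplexing scheme and reach. The comparison below is a sketch that simply mirrors the descriptions above; where a detail is left open by the consortium (such as the CWDM4 form factor) it is marked as expected rather than confirmed.

```python
# The four 100G single-mode MSA proposals as characterized above.
msa_proposals = [
    {"name": "100G CLR4",       "reach": "up to 2km", "fibers": 2,
     "scheme": "low-power 100G-CWDM over duplex SMF", "form_factor": "QSFP"},
    {"name": "OpenOptics 100G", "reach": "up to 2km", "fibers": 2,
     "scheme": "1550nm WDM with silicon photonics",   "form_factor": "QSFP28"},
    {"name": "CWDM4-MSA",       "reach": "up to 2km", "fibers": 2,
     "scheme": "4 x 25Gbps CWDM over duplex SMF",     "form_factor": "QSFP28 (expected)"},
    {"name": "PSM4",            "reach": "min 500m",  "fibers": 8,
     "scheme": "4 x 25Gbps parallel (4 Tx + 4 Rx)",   "form_factor": "QSFP28"},
]

for msa in msa_proposals:
    print(f"{msa['name']}: {msa['scheme']}, {msa['fibers']} fibers, "
          f"{msa['reach']}, {msa['form_factor']}")
```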

This article appeared in the December/January issue of DCD Magazine.