Data Center Architecture

Modern data center architecture has evolved from purely on-premises infrastructure to one that connects on-premises systems with cloud infrastructures, where networks, applications, and workloads are virtualized across multiple private and public clouds. This evolution has changed how data centers are architected: the components of a data center are no longer necessarily co-located and may only be able to reach one another over the public Internet.

Let’s take a look at the latest advancements in data center networking, compute and storage technologies.


Data Center Computing

Advancements in the virtualization of infrastructure components enable staff to quickly spin up systems and applications as needed to meet demand. For example, hypervisor environments isolate the compute and memory resources used by virtual machines (VMs) from those of the underlying bare-metal hardware. Container systems provide virtualized operating system environments for applications to run in. Both VMs and containerized applications are portable and can run on-premises or in a public cloud as needed.
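As a rough illustration of that portability (a minimal sketch, not from the original article), the snippet below uses Python's standard library to launch the same container image either on a local Docker engine or on a remote engine reachable over TCP; the image name and the remote engine address are placeholders.

```python
import subprocess

IMAGE = "nginx:latest"                   # placeholder image; any containerized app works the same way
REMOTE_ENGINE = "tcp://10.0.0.20:2375"   # placeholder address of a cloud-hosted Docker engine

def run_container(name: str, remote: bool = False) -> None:
    """Start the same container image locally or on a remote engine."""
    cmd = ["docker"]
    if remote:
        # -H points the Docker CLI at a different engine; the image itself is unchanged.
        cmd += ["-H", REMOTE_ENGINE]
    cmd += ["run", "-d", "--name", name, "-p", "8080:80", IMAGE]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run_container("web-onprem")               # runs on the local (on-premises) engine
    # run_container("web-cloud", remote=True) # same image, public-cloud engine
```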

 

While virtual machines and containers enable faster delivery of infrastructure and applications, edge computing solves a different problem: it moves compute resources to the edge of the network, where the data resides, reducing the latency and bandwidth costs of transporting that data.

 

A primary use case for edge computing is processing the data generated by remote Internet of Things (IoT) devices. Real-time applications, such as the video processing and analytics used in self-driving cars and robotics, need that processing done near the edge. This has given rise to micro data centers: compact, distributed units that gather, process, analyze, and store data close to the edge devices that collect it and that need the results of the analysis in real time.
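As a simple sketch of this pattern (a hypothetical example with made-up sensor data and endpoint names), an edge node can reduce raw readings to compact summaries locally and send only those summaries upstream:

```python
import json
import statistics
from urllib import request

CENTRAL_ENDPOINT = "https://datacenter.example.com/ingest"  # placeholder central data center URL

def summarize(readings: list[float]) -> dict:
    """Reduce a burst of raw sensor readings to a small summary at the edge."""
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
        "min": min(readings),
    }

def ship_summary(summary: dict) -> None:
    """Send only the summary upstream, instead of every raw reading."""
    body = json.dumps(summary).encode()
    req = request.Request(CENTRAL_ENDPOINT, data=body,
                          headers={"Content-Type": "application/json"})
    request.urlopen(req, timeout=5)

if __name__ == "__main__":
    raw = [20.1, 20.4, 19.8, 35.2, 20.0]   # e.g., one second of temperature samples
    ship_summary(summarize(raw))            # a few bytes cross the WAN, not the full stream
```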

 

The microprocessors we’re familiar with today, which pack multiple CPU cores onto a single chip, have come a long way since their invention in the early 1970s. Over time the processing speed of general-purpose CPUs has increased, benefiting from Moore’s law, which forecast a doubling of the number of transistors on a microchip roughly every two years. But the structure of CPUs is not well suited to every task.
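That doubling rule is easy to express directly; the short sketch below simply evaluates the Moore’s-law formula for an illustrative starting point (roughly 2,300 transistors in 1971, the Intel 4004 era):

```python
def transistors(start_count: float, start_year: int, year: int) -> float:
    """Moore's law: the transistor count doubles roughly every two years."""
    return start_count * 2 ** ((year - start_year) / 2)

# Illustrative only: ~2,300 transistors around 1971.
for y in (1971, 1981, 1991, 2001, 2011, 2021):
    print(y, f"{transistors(2300, 1971, y):,.0f}")
```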

 

With the emergence of artificial intelligence (AI) and deep learning, it has been found that graphics processing units (GPUs) can be as much as 250 times faster than CPUs at training deep neural networks. Their structure makes them more efficient than general-purpose CPUs for algorithms that process large blocks of data in parallel.
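A rough way to see that parallelism is to time a large matrix multiply, the kind of dense math deep learning relies on. This is a minimal sketch assuming PyTorch is installed and a CUDA GPU is available; actual speedups vary widely by hardware and workload.

```python
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    """Time one large matrix multiplication on the given device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    c = a @ b
    if device == "cuda":
        torch.cuda.synchronize()   # wait for the GPU to actually finish
    return time.perf_counter() - start

cpu_t = time_matmul("cpu")
print(f"CPU: {cpu_t:.3f}s")
if torch.cuda.is_available():
    gpu_t = time_matmul("cuda")
    print(f"GPU: {gpu_t:.3f}s (~{cpu_t / gpu_t:.0f}x faster on this hardware)")
```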

Data Center Storage

Storing data – both its own and that of its customers – is a core part of a data center’s duties. As storage becomes cheaper and more efficient, the use of local and remote backups becomes more common, further increasing the amount of data stored.

 

Data center owners have disaster recovery plans in place to recover lost data. Backup techniques include saving data to a physical medium stored locally or remotely, transferring the data directly to another site, or uploading the data to the cloud. For example, data is often distributed across multiple, physically separated data centers. That way, if one data center is compromised by a wildfire, earthquake, or other natural disaster, the lost information can be restored from the others.
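Below is a minimal sketch of the “copy to a separate location and verify” idea. The paths are placeholders, and a production setup would use dedicated replication or backup tooling rather than plain file copies.

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Fingerprint a file so each copy can be verified after transfer."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def replicate(source: Path, targets: list[Path]) -> None:
    """Copy one backup file to several separate locations and verify each copy."""
    original = sha256(source)
    for target in targets:
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(source, target)
        assert sha256(target) == original, f"corrupt copy at {target}"

# Placeholder paths: a local backup volume and mounts pointing at remote sites.
replicate(Path("/backups/db-2024-06-01.dump"),
          [Path("/mnt/site-b/db-2024-06-01.dump"), Path("/mnt/cloud/db-2024-06-01.dump")])
```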

 

Advancements in data storage technologies such as Software-defined Storage (SDS), NVMe, and NVMe-oF are changing how data centers store, manage, and use data. Managing storage through a software abstraction layer (SDS) enables automation and lowers the cost of managing data.
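As a toy illustration of that abstraction (the class and backend names here are entirely hypothetical), a software layer can pick a storage backend from policy rather than having applications target specific hardware:

```python
from dataclasses import dataclass

@dataclass
class Volume:
    name: str
    size_gb: int
    tier: str          # e.g. "fast" for NVMe-backed pools, "capacity" for HDD pools
    backend: str

class SoftwareDefinedStorage:
    """Hypothetical SDS control layer: applications ask for capacity and a policy,
    and software decides which physical pool actually serves it."""

    POLICY = {"fast": "nvme-pool-1", "capacity": "hdd-pool-2"}

    def provision(self, name: str, size_gb: int, tier: str = "capacity") -> Volume:
        backend = self.POLICY[tier]
        # A real SDS layer would call the backend's API here; we just record the decision.
        return Volume(name, size_gb, tier, backend)

sds = SoftwareDefinedStorage()
print(sds.provision("analytics-scratch", 500, tier="fast"))
```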

 

NVM Express (NVMe) and Solid State Drives (SSDs) are replacing legacy spinning disks and the SATA and SAS interfaces used to access them, offering lower latency and better performance. While NVMe applies to drives attached over PCI Express within a storage system, NVMe over Fabrics (NVMe-oF) allows one computer to access block-level storage devices attached to another computer via remote direct memory access over the network. This enables organizations to create high-performance storage networks with very low latencies.
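For a sense of how an NVMe-oF connection is typically established on a Linux host, here is a hedged sketch: it assumes the nvme-cli tool is installed and an RDMA-capable target is reachable, the target address and NVMe Qualified Name are placeholders, and exact option names can vary between nvme-cli versions.

```python
import subprocess

TARGET_ADDR = "10.0.0.50"                                # placeholder storage target IP
TARGET_NQN = "nqn.2024-01.com.example:storage.volume1"   # placeholder NVMe Qualified Name

def connect_nvme_of() -> None:
    """Attach a remote NVMe namespace over RDMA so it appears as a local block device."""
    subprocess.run(
        ["nvme", "connect",
         "--transport", "rdma",   # NVMe-oF over RDMA; TCP is another common transport
         "--traddr", TARGET_ADDR,
         "--trsvcid", "4420",     # conventional NVMe-oF port (assumed here)
         "--nqn", TARGET_NQN],
        check=True,
    )

if __name__ == "__main__":
    connect_nvme_of()   # afterwards the namespace shows up as a local /dev/nvme* device
```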

Data Center Networks

Data center bandwidth requirements are driven by applications and by the number of internal and external systems and users connected to the network. Peak utilization needs to be monitored across Storage Area Networks (SANs), Local Area Networks (LANs), and external and Internet links, using monitoring tools to gauge when a move to the next circuit size is needed – for example, when a link regularly hits 50% capacity.
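A trivial sketch of that rule of thumb follows; the link capacity and sample data are made up, and real monitoring would pull counters from SNMP or flow telemetry rather than a hard-coded list.

```python
def needs_upgrade(samples_mbps: list[float], capacity_mbps: float,
                  threshold: float = 0.5, min_hits: int = 3) -> bool:
    """Flag a link when its peak utilization regularly crosses the threshold."""
    hits = sum(1 for s in samples_mbps if s / capacity_mbps >= threshold)
    return hits >= min_hits

# Daily peak readings (Mbps) for a hypothetical 10 Gbps Internet uplink.
daily_peaks = [4200, 5300, 5100, 4900, 6100, 3800, 5600]
if needs_upgrade(daily_peaks, capacity_mbps=10_000):
    print("Plan a move to the next circuit size.")
```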

 

Bottlenecks in traffic flow can occur at any connection point. In particular, managers should ensure that firewalls, load balancers, intrusion prevention systems (IPS), and web application firewalls (WAFs) can support overall throughput requirements. For WAN connectivity, managers should plan for enough bandwidth to absorb occasional traffic spikes so that voice and video demands, Internet access, MPLS, and SD-WAN service requirements are met. Bandwidth is a small expense compared to the cost of a bad user experience.

 

One common data center network architecture is a tree-based topology made up of three layers of network switches. The access layer is the lowest layer, where servers connect to an edge switch.

 

The aggregation layer is the mid-level layer that interconnects multiple access layer switches. Aggregation layer switches are connected to each other through top-level core layer switches. A common practice is to deploy firewalls, load balancers, and application acceleration boards at this layer.

 

Core layer switches connect the data center to the Internet. Core switches have high switching capacity and can handle bursts of traffic. To minimize the number of individual server addresses that core switches need to handle, core switches route traffic toward pods, encoding data packets in such a way that the core only needs to know which pod to direct traffic to rather than handling individual server requests.
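A toy model of that three-tier layout and pod-based addressing is sketched below; all switch and server names, and the address scheme, are invented for illustration.

```python
# Hypothetical three-tier topology: core switches only know pods,
# aggregation switches fan out to access switches, which front the servers.
TOPOLOGY = {
    "pod-1": {"agg": ["agg-1a", "agg-1b"],
              "access": {"acc-1a": ["srv-101", "srv-102"],
                         "acc-1b": ["srv-103", "srv-104"]}},
    "pod-2": {"agg": ["agg-2a", "agg-2b"],
              "access": {"acc-2a": ["srv-201", "srv-202"]}},
}

def core_next_hop(dest: str) -> str:
    """The core only decides which pod a packet belongs to; it never tracks servers.
    Here the pod is encoded in the server name (e.g. srv-2xx -> pod-2)."""
    pod_id = dest.split("-")[1][0]
    return f"pod-{pod_id}"

def access_switch_for(dest: str) -> str:
    """Inside a pod, the aggregation/access layers resolve the exact switch."""
    pod = TOPOLOGY[core_next_hop(dest)]
    for switch, servers in pod["access"].items():
        if dest in servers:
            return switch
    raise KeyError(dest)

print(core_next_hop("srv-202"))       # the core forwards toward pod-2
print(access_switch_for("srv-202"))   # the pod delivers it via acc-2a
```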

 

One of the latest advancements in data center networks is hyperscale network security. Hyperscale refers to an architecture’s ability to scale appropriately as demand on the system grows: more resources can be allocated dynamically, resulting in a robust, scalable, and distributed system.

 

Modern data center networks also use Software-defined Networking (SDN), which enables network managers to configure, manage, secure, and optimize network resources in software. SDN separates a network infrastructure into an application layer, a control plane, and a data plane. Control of the network then becomes directly programmable, enabling automated provisioning and policy-based management of network resources. Benefits of SDN include reduced operating costs, centralized operational control, and the ability to scale services such as security when needed.
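As a rough sketch of policy-based management (a hypothetical, controller-agnostic example; real SDN controllers expose their own APIs, for instance via OpenFlow or REST), a high-level intent can be compiled into per-switch rules in software:

```python
# A high-level intent: which application tiers may talk to each other and on what port.
POLICY = [
    {"src": "web-tier", "dst": "app-tier", "port": 8443, "action": "allow"},
    {"src": "app-tier", "dst": "db-tier",  "port": 5432, "action": "allow"},
    {"src": "web-tier", "dst": "db-tier",  "port": "any", "action": "deny"},
]

# Hypothetical mapping from logical tiers to the switches that front them.
TIER_SWITCHES = {"web-tier": ["acc-1a"], "app-tier": ["acc-1b"], "db-tier": ["acc-2a"]}

def compile_flow_rules(policy: list[dict]) -> dict[str, list[str]]:
    """Translate the high-level policy into per-switch rules a controller would push."""
    rules: dict[str, list[str]] = {}
    for entry in policy:
        for switch in TIER_SWITCHES[entry["src"]]:
            rules.setdefault(switch, []).append(
                f"{entry['action']} {entry['src']} -> {entry['dst']} port {entry['port']}"
            )
    return rules

for switch, flows in compile_flow_rules(POLICY).items():
    print(switch, flows)
```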

The Data Center Architecture Evolution is Ongoing

Changes in compute, storage, and network technologies have had a dramatic impact on how data centers are architected and operate. As technology continues to evolve, organizations need to ensure that they have solutions in place to secure their ever-shifting digital attack surfaces.
