Data gravity at the edge: How ‘data hubs’ and edge data centers can help

By Bill Severn, Executive Vice President of 1623 Farnam

Is data gravity on your radar? It should be. Many of tomorrow's IT challenges will stem from data gravity, and there is no well-defined best practice for addressing it. Why is data gravity such a challenge to overcome? How might it affect IT strategy? And how might rethinking cloud storage and edge data centers create a path forward?

With more users and devices entering networks every day, there are more applications, more services, and more data. As data piles up, applications and services inevitably move closer to the data sets: that's data gravity, and it's a big deal. It has already been identified as a key megatrend; in some industries, its intensity is expected to double within the next couple of years.

Businesses in every industry need to care about data gravity because ignoring it will create problems down the road, problems that affect an organization's entire IT infrastructure. Optimally, data should be readily accessible to its related applications and services, and easily managed, analyzed and activated regardless of location. That means traffic must flow easily everywhere across a company's network footprint: from private cloud to public cloud to on-prem, from the core to the edge, from the public internet to every private point of presence for the business.

However, the gravity of massive data sets can lock applications and services into one particular operational location. Whether or not that location is ideal, the stored data is, in essence, trapped there because it can't be made useful anywhere else. It's a problem of centralization that affects every other aspect of the system.

The path to success for IT teams, then, is to make data gravity a key strategic concern. When mapping out data management plans, a main goal should be to ensure that no data set grows so large that it overwhelms IT capacity and becomes unmanageable. But how? Unfortunately, the answers aren't clear and the variables are many, so each organization will need to devise its own solutions.

The considerations include the volume of data being generated and consumed, the number and type of places where data is stored and used, the way data is distributed across those places, and the speed at which it is transmitted. The good news is that managing data gravity effectively can become a competitive differentiator. Companies with poorly designed infrastructure won't extract maximum value from their data, and they won't provide the best possible experience for customers. In short, data gravity affects a company's ability to be innovative and agile with its IT, which can be either a blessing or a curse depending on the approach.
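As a rough illustration of how those variables interact, the sketch below combines them into a single score. This heuristic is borrowed from publicly described approaches such as the Data Gravity Index methodology, not from this article, and the numbers are hypothetical.

```python
# Illustrative sketch only: a rough "data gravity intensity" score in the spirit of
# published heuristics (e.g., the Data Gravity Index), not a formula from this article.
def gravity_intensity(data_mass_gb, data_activity_ops, bandwidth_gbps, latency_ms):
    """Higher scores suggest data that is harder to move away from where it sits."""
    return (data_mass_gb * data_activity_ops * bandwidth_gbps) / (latency_ms ** 2)

# Hypothetical example: a 500 TB data set with heavy activity on a 10 Gbps, 5 ms link
print(gravity_intensity(data_mass_gb=500_000, data_activity_ops=2_000,
                        bandwidth_gbps=10, latency_ms=5))
```

The intuition the score captures: the more data accumulates in one place and the more it is used there, the stronger its pull on nearby applications and services.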

Even with individualized variables, the main thrust of addressing data gravity is two-fold. First, teams need to maintain multiple centers where data processing takes place. Second, teams must design an architecture in which applications, compute and storage resources can move efficiently within and between those centers.

At first blush, this may seem like an easy job for the cloud to solve, until teams confront the cost and time involved in moving data around once their IT elements are decentralized. Unanswered questions around scale can let transaction and egress fees pile up and leave vendor lock-in to cause headaches.
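To make the cost concern concrete, here is a back-of-the-envelope sketch. The per-gigabyte rate is an assumed list price (cloud egress commonly falls in the $0.05 to $0.09 per GB range), not a figure quoted by any specific provider or by the article.

```python
# Back-of-the-envelope egress estimate; the rate is an assumed list price, not a quote.
EGRESS_RATE_PER_GB = 0.09   # assumed USD per GB leaving a cloud region
data_set_tb = 100           # hypothetical data set that must be pulled to another site
moves_per_month = 2         # how often the data is relocated or re-read elsewhere

monthly_cost = data_set_tb * 1_000 * EGRESS_RATE_PER_GB * moves_per_month
print(f"Estimated egress cost: ${monthly_cost:,.0f} per month")   # about $18,000
```

Even at modest rates, repeatedly moving large data sets between locations becomes a recurring line item, which is why architecture choices around where data sits matter so much.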

One solution can be found in hybrid or multi-cloud setups built on colocation data centers. If they sit near a cloud region, they can facilitate solutions from multiple clouds, eliminating data duplication and reducing latency at the same time. The right colocation provider can offer cross-connects, private circuit options and hyperscale-ready onramps. The primary question, then, becomes one of geographical distribution.

It may seem that the best way to deploy capacity geographically is to double down on major urban centers, creating an emphasis on expanding interconnected ecosystems around existing data gravity.

However, there is another approach: organizations can differentiate themselves by focusing on data hubs in multi-tenant data centers near cloud campuses at the edge. In this way of thinking, placing data closer to the end user solves the latency problem. Further, the ability to process some data close to cloud computing applications can solve the problem of data sets that have grown too dense to move.

What does this mean for edge data centers? For starters, it means they need to prepare for architectures in which tenants increasingly want hybrid or multi-cloud solutions. Edge data centers must work closely alongside cloud-based models, where storage capacity at the edge can reduce the size of otherwise centralized data sets by cutting away unneeded data and compressing the vital stuff. So, to accommodate distributed data, edge data centers probably need to look a little more like cloud data centers, with additional considerations like control systems and IoT devices.

They'll also need to prepare for increased bandwidth, as more and more processing will take place under their roofs. Of course, not all data can persist at the edge, but edge data centers can serve as the first stop for processing before data moves to the cloud. In fact, the trend is well underway. According to some predictions, 75 percent of data will be processed at the edge within three years, including 30 percent of total workloads.
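As a minimal sketch of that "first stop" idea, assume a simple pipeline in which an edge site filters out unneeded records and compresses the rest before forwarding them to central cloud storage. The function, field names and threshold here are hypothetical, not part of any described system.

```python
import gzip
import json

def preprocess_at_edge(raw_records):
    """Keep only the vital records, then compress them before shipping to the cloud."""
    # The 'priority' field and its threshold are hypothetical stand-ins for whatever
    # rule decides which data is worth persisting centrally.
    vital = [r for r in raw_records if r.get("priority", 0) >= 3]
    payload = json.dumps(vital).encode("utf-8")
    return gzip.compress(payload)   # shrink the data before it leaves the edge site

# Hypothetical example: sensor readings generated at an edge location
readings = [
    {"sensor": "door-17", "priority": 1, "value": 0.2},
    {"sensor": "turbine-4", "priority": 5, "value": 97.3},
]
compressed = preprocess_at_edge(readings)
print(f"Forwarding {len(compressed)} bytes to central cloud storage")
```

The design point is simply that trimming and compressing at the edge shrinks what must cross the network, easing both the bandwidth and egress-cost pressures described above.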

Looking into the industry crystal ball, it's hard to predict exactly how data gravity will influence the way networks work and look. That said, solving data gravity in the future will require intermeshed collaboration between parties such as content providers, edge data centers and enterprise IT teams.

Tactical use of cloud solutions will be integral, yes, but so will carrier-neutral data centers in lower-tier markets, especially those located near major cloud provider facilities. To accommodate data gravity, the industry's pressure points may need to change.

About the author

Bill Severn is Executive Vice President of 1623 Farnam, a provider of data center services based in Omaha, Nebraska. Severn oversees all marketing activities and day-to-day business operations. He is also Executive Vice President of Berk's Group, a division of the privately held News Press & Gazette Company, and has been leading business development in the technology infrastructure sector for Bradley Holdings since 2012. Prior to his current roles, Severn was COO of NPG Cable, Inc.

DISCLAIMER: Guest posts are submitted content. The views expressed in this post are those of the author and don't necessarily reflect the views of Edge Industry Review (EdgeIR.com).
