By Scott Loughmiller, CPO and Co-founder of Scale Computing
In 1977 NASA launched the Voyager Program, a mission of staggering ambition and foresight in which its two robotic probes would give scientists our closest look at the outer planets and, some 40 years later, one of which (Voyager 1) would become the first human-made object to leave our solar system. One of the unheralded technical aspects that made Voyager a particularly novel mission was that it represented the first in a new class of NASA spacecraft that could be remotely reprogrammed from Mission Control—even while hurtling through space at more than 35,000 miles per hour. The ability to deploy updated code to an aging piece of hardware was a game-changer for NASA, enabling the agency to maximize the utility and lifespan of these priceless probes.
Back here on Earth, our infrastructure challenges might be more mundane, yet they are no less daunting. For all of our technological advances, provisioning and configuring a single server remains a repetitive, time-consuming, and error-prone process. For the distributed enterprise with hundreds of satellite stores or remote office/branch office (ROBO) sites and little to no technical staff on-site, the challenge not only scales linearly with the number of sites—the remote logistics quickly become overwhelming.
While we’ve managed to automate many operational tasks within the highly controlled confines of the traditional data center, bringing that same level of scalability and efficiency to the network edge—where expert IT resources are scarce and connected devices are proliferating—continues to be an elusive goal.
Understanding the Edge Challenge
The term edge computing has become something of a catch-all to describe a broad set of IT infrastructure use cases that are not adequately served by conventional data centers or the cloud due to a variety of environmental or operational constraints. Unlike sanitized data centers, which are purpose-built with specific space, power, and temperature requirements in mind, no two edge deployments are alike. Whether it’s a server plugged in at the back of a broom closet or installed in the midst of a bustling factory floor, edge deployments are often best characterized by their disparities.
Consequently, the form factor of edge equipment matters a great deal. Since standard data center equipment doesn’t account for the less-than-ideal environments of edge deployments, it’s vital that edge equipment be rugged enough to handle the wider range of conditions commonly found at the edge. Gear designed to operate in a tightly controlled data center environment is far more likely to develop reliability issues when located in a poorly ventilated storage closet at an edge installation.
What all these diverse edge use cases do have in common is a need to place compute resources in close proximity to where data is generated and consumed by the business. These organizations can afford neither the latency of pushing data back and forth to the cloud nor the dedicated, fit-for-purpose data center space required to host their own equipment. For instance, a chain of grocery or retail stores with critical revenue-producing systems like point-of-sale terminals needs to keep those systems on-site. A steel manufacturer, meanwhile, might operate an arc furnace that requires immediate access to its equipment’s sensor data in order to run at peak performance; deploying an edge system close to the source thus becomes a matter of operational necessity driven by latency.
Finally, while the rapid proliferation of IIoT sensors and other connected devices across industries promises to drive greater efficiency and actionable business insights, it also comes with a cost: all of these connected things generate reams of machine data which, to be of use to the business, must be processed and analyzed in real time or near real time. Even taking bandwidth constraints out of the equation, offloading these workloads to the cloud is neither technically viable nor economically feasible. And with every new device and sensor added to an IT footprint, the complexity of the environment itself compounds.
Zeroing in on the Zero-Touch Future
To fully appreciate the transformative impact that a zero-touch provisioning future holds, it’s instructive to contrast it with the current state of ITOps. Imagine the IT leader of a retailer with a hundred or so stores operating in malls across the country. These stores all have a few things in common: they have very limited space to host equipment, the space they do have is poorly suited to IT gear, and more often than not the person maintaining the physical infrastructure is a store manager with little to no technical experience.
When corporate headquarters wants to deploy updated equipment in this scenario, its options are extremely limited. It often has little choice but to ship its own IT staff around to visit each remote site to manage the process, or to hire a high-priced local IT consultant—who would still require expert assistance from the home office. Neither option is economically feasible or operationally scalable, which is why so many of these organizations remain reluctant to modernize their infrastructure.
But the remote provisioning of infrastructure represents only one slice of the zero-touch calculus. To meet the diverse and evolving requirements of edge deployments, we also need a new hardware paradigm to pair with it. Hyperconverged infrastructure (HCI), which integrates compute, storage, networking, and virtualization functions into a single consolidated unit, represents the next stage in the IT lifecycle. The latest generation of HCI is rugged, can be held in the palm of your hand, and can be installed practically anywhere—in a storage closet, in the hold of a ship, or even at the top of a windmill.
For that same IT leader of a retailer with a hundred edge locations requiring new or updated infrastructure, a zero-touch provisioning model represents a true transformational shift. In this paradigm, an IT administrator could automate the configuration and provisioning of each individual unit from the home office, ship the units off to each location, and the only thing the store manager would need to do is plug the unit in. All of these remote systems could then be centrally provisioned, managed, monitored, and administered as a unified fleet by trained professionals who fully understand the nuances of their IT environment—from anywhere in the world.
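To make the workflow concrete, here is a minimal sketch of the pre-staging step described above: generating one site-specific configuration per store from a single fleet-wide template at headquarters, before the units ship. All names here (`SiteConfig`, `build_fleet_configs`, the `ntp.hq.example.com` server) are hypothetical illustrations, not any vendor's actual API.

```python
# Hypothetical sketch: at HQ, stamp out one provisioning payload per store
# so each unit only needs power and a network cable once it arrives on-site.

from dataclasses import dataclass

@dataclass
class SiteConfig:
    site_id: str
    hostname: str
    mgmt_vlan: int
    ntp_server: str

def build_fleet_configs(site_ids, base_vlan=100, ntp_server="ntp.hq.example.com"):
    """Generate a per-site config from one fleet-wide template."""
    return [
        SiteConfig(
            site_id=sid,
            hostname=f"edge-{sid}",   # deterministic naming aids central monitoring
            mgmt_vlan=base_vlan,      # same management VLAN fleet-wide
            ntp_server=ntp_server,    # every unit syncs to the HQ time source
        )
        for sid in site_ids
    ]

# Pre-stage configs for a hundred stores in one pass.
fleet = build_fleet_configs([f"store-{n:03d}" for n in range(1, 101)])
print(len(fleet), fleet[0].hostname)  # → 100 edge-store-001
```

In a real deployment the payloads would be written to each unit (or fetched by it on first boot) rather than printed, but the key point is the same: the per-site work is automated centrally, so the on-site step is reduced to plugging the box in.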
As NASA demonstrated with its Voyager program, you can never fully anticipate your future requirements. But as the IT leaders of today can surely appreciate, the more flexible your deployment is now, the more likely you are to extend the horizon of its value and utility.
About the Author
Scott Loughmiller is the Chief Product Officer and Co-founder of Scale Computing, a market leader in edge computing, virtualization, and hyperconverged solutions.
DISCLAIMER: Guest posts are submitted content. The views expressed in this blog are that of the author, and don’t necessarily reflect the views of Edge Industry Review (EdgeIR.com).
device management | edge server | HCI | ITOps | provisioning | Scale Computing