Edge Computing: What It Is, and What It Isn't

It’s time to hit reset and bring some long-overdue clarity to the edge conversation.

Ask 10 different people to define edge computing and you’ll probably get 10 different definitions.

The reason is that edge computing is not, whatever many might say, a technology. Instead, it’s a concept or a philosophy relating to the management and use of data.

Cloudflare defines it as such: “Edge computing is a networking philosophy focused on bringing computing as close to the source of data as possible in order to reduce latency and bandwidth use.”

Processing data closer to where it’s collected makes huge sense from a cost, speed and security point of view. Unfortunately, edge computing is in real danger of becoming the new “blockchain” of enterprise tech: overhyped and underutilized. Marketers and analysts have distracted many organizations from thinking clearly about edge computing by convincing them that every company needs an edge strategy. They don’t.

So it’s time to hit reset. Because if approached in the right way, the edge computing philosophy, and the outcomes it encourages, can drive powerful transformations in how companies work with data.

Why should filtering out the edge computing hype matter to IT leaders?

Businesses are already generating huge amounts of data, with exponential growth every year as more devices come online — including many items that previously had no business being anywhere near the internet: toothbrushes, doorbells, coffee machines, smart speakers, watches, and – infamously – juicers. As the World Economic Forum reports: “The world produces 2.5 quintillion bytes a day, and 90% of all data has been produced in just the last two years.”

This has coincided with the growth in hyperscale cloud adoption. Edge computing solves today’s cloud-era challenges around moving and managing data, ensuring it generates maximum value at minimum cost and can be processed and acted on in a timely fashion.

Edge’s advantages can be summarized like this:

  • Lower latency, since it’s quicker and more reliable to send data to a local point-of-presence than it is to send it to a hyperscale cloud provider’s data center.
  • Decreased bandwidth use and associated costs.
  • Decreased need for large server resources and associated costs.
  • Data that previously wasn’t feasible to send to a centralized data center can now be analyzed to drive new functionality or insights.

Defining the foundations of edge computing

To put the philosophy to work and unlock these advantages, we first need to define the foundational elements of edge computing.

Illustrative edge computing use cases include sensors in a factory or medical setting, a remote oil pipeline, or surveillance cameras designed to recognize security threats. The high volume of data generated by these devices drives time-sensitive decisions or actions. It simply wouldn’t be ideal or cost-effective to send it all to the cloud for processing and storage.

In cases like these we’re looking to create the fewest number of hops between the device generating the data and the first – but not the only or last – “thing” that will do something with that data.

On this basis, the key components of an edge computing network are:

  • Public cloud: Data centers run by hyperscale cloud providers. These offer very high availability and durability, but also higher latency.
  • The compute edge: A small data center containing racks of servers, located on the same site as the devices generating the data. These offer lower latency, but you have to build out, maintain and secure them yourself.
  • The device edge: A collection of small servers, potentially ruggedized and situated very close to the sensors generating the data. Sometimes referred to as a nano data center, its small compute footprint means low maintenance requirements, but it may need to hand off some analytics to the cloud.
  • The sensor: The device generating the data, and the reason we’re here. If there’s zero onboard compute capability, then any analytics must be run in one of the other three locations in the edge computing network. Where onboard compute is present (as in self-driving cars), it may have to be significant, with the device typically processing a lot of data in real time and communicating with centralized services to share information about location, faults or collisions. (The advent of fast data networks, including 5G, is a big part of this capability.)

Looking at this list, it’s clear that many enterprise models typically encompass at least two of these things – cloud plus one other – whether you formally recognize them as having an edge component or not. So it’s a diversion to focus too much on “edge,” or to hype edge computing as inherently and automatically transformative.

Instead, the intended business outcome – and the practical application of technology to deliver on it – should always be your strategic driver. Not edge computing for edge computing’s sake.

The right way to discern what edge computing means to your organization

To understand the extent to which you need to integrate an edge computing philosophy into your tech strategy, you first need to identify your data types. How critical is your data? Is it time-sensitive? How big is it?

You also need to understand the level of compute required. Processing data in the cloud can quickly get expensive. So can pulling it back out or making repeat queries if your initial ones are wrong or need following up on. So do you really need to send it to the cloud? And even if you can afford to, is the latency involved acceptable for your use case?
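A back-of-envelope calculation makes that question concrete. The sketch below compares the monthly volume and transfer cost of shipping raw sensor data to the cloud against filtering it at the edge first; every figure (fleet size, data rates, per-GB price, filter ratio) is a made-up assumption, not real cloud-provider pricing:

```python
# Back-of-envelope: monthly data volume and transfer cost for a sensor
# fleet, with and without edge filtering. All figures are illustrative
# assumptions, not real pricing.

SENSORS = 500
BYTES_PER_READING = 200
READINGS_PER_SECOND = 10
SECONDS_PER_MONTH = 30 * 24 * 3600

COST_PER_GB = 0.09       # assumed transfer/ingest cost, USD per GB
EDGE_KEEP_RATIO = 0.02   # assume edge filtering forwards only 2% of raw data

raw_gb = SENSORS * BYTES_PER_READING * READINGS_PER_SECOND * SECONDS_PER_MONTH / 1e9
filtered_gb = raw_gb * EDGE_KEEP_RATIO

print(f"Raw to cloud:  {raw_gb:,.0f} GB/month, ~${raw_gb * COST_PER_GB:,.0f}")
print(f"Edge-filtered: {filtered_gb:,.0f} GB/month, ~${filtered_gb * COST_PER_GB:,.0f}")
```

Even with these modest assumed numbers, the raw feed works out to roughly 2.5 TB a month; the exercise is less about the exact dollar figure than about forcing the “do we really need to send it all?” question early.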

Edge computing is not just a computing challenge either. It’s often about short- or medium-term storage of data for reuse, for example. So the level and location of storage you need requires careful consideration.

Another key question, and one that demands some creative thinking, is whether or not processing your data nearer to the point it is generated would unlock new functionality for your business or customers.

And then, as always, there’s the need for security. While a well-designed edge solution is as secure as any other system, the biggest consideration is the increase in exploitable attack vectors that comes with a proliferation of edge devices. Most edge-specific security issues relate to poorly implemented and maintained code on those devices. There have been instances of such devices being hijacked into botnets to orchestrate DDoS attacks, for example. You need to adopt a security-first mindset, but that’s no different to working with any other technology.

I must sound one note of caution, however. Services running in non-centralized locations for no good reason are not an edge computing challenge or strategy. That’s just things in the wrong place. What do I mean? Think about an important intranet site which started small but now contains essential documentation. That really should be moved into the redundant, centralized cloud-provider space and out of the local comms room. The same goes for mail servers and most classes of object storage.

Edge computing and the art of questioning

I started out by noting that edge computing is a philosophy more than anything else, and philosophies are often born out of questions. In the case of edge computing, every company should start by asking itself, “Is my data in the right place?” rather than “What is my edge computing strategy?”

As data volumes have grown, more companies are beginning to think harder about where to put data for optimal cost and performance. And the importance of a solid data strategy will only grow as the IoT revolution brings more devices online, fueling an explosion in data processing and storage requirements.

If you’re one of those companies, to get to the right answers about your data processing requirements and where that should be happening, you first need to ask the right questions.

The answer you get won’t always be edge computing. And that’s OK, no matter what anybody else tells you.

 

About the Authors

Ed Randall

Consulting Architect

With nearly 20 years of managed-services experience, Ed Randall has thrived across a variety of IT sectors, from financial services and retail ecommerce to defense and education. With a background in Linux, Ed specializes in operating systems, virtualization, networking and hyperscale cloud platforms. He has also developed a passion for strategy development that facilitates cost optimization, end-to-end supportability and the implementation of the proper technological solutions to drive business outcomes and meet organizational goals. As a Consulting Cloud Architect on the Rackspace Technology Professional Services Team, Ed brings his considerable experience to bear in the financial services space to help drive internal adoption of Google Cloud Platform. Outside of work, Ed enjoys reading, mountain biking, working on technology projects and spending time with his two young daughters.
