What is Edge Computing and Why You Should Care About It

The underlying concept of Edge Computing is simple: it’s the idea that pushing “intelligence” to the edge of the network will reduce the load on the core servers and on the network itself, and make the network more efficient overall.  

Edge Computing, like many other technological advancements, arose as a response to an issue. The issue? A massive overload on servers caused by a “surplus” of data generated by networked devices incapable of making decisions on their own.

If you’ve read The Hottest IT Trends Of Our Time, you probably recall the security camera example: a scenario in which you have a few security cameras recording footage and sending it to a centralized server for as long as they are turned on.

This, of course, is not an issue if you have just a few cameras. But when you get into a situation like that of the city of London, which has over 400,000 cameras, you run into a huge issue: an overload on the main server and network that can, and probably will, break things even if you have the biggest pipe in the world.

In a hypothetical case similar to London’s, ideally you’d want to filter the information that is being sent to the network’s main servers. This, as you might already know, would require those data-generating devices to be capable of making decisions—of identifying which information is relevant and which is irrelevant.

There are many more applications of Edge Computing than surveillance cameras. However, surveillance cameras are one of the biggest use cases for Edge Computing because they require a lot of bandwidth at peak operation. Also, there are several things that can now be done with data collected from surveillance cameras that weren’t realistically possible even a few years ago, when most cameras were still “dumb” devices.

For example, you could be a British government agency hunting down a criminal in London. So, you scan a picture of the criminal and create a biometric map of his/her face. Then, you can configure your security cameras to only report back to the main server once they detect someone that matches that biometric map (obviously this would require facial recognition software).

This, of course, is only possible if security cameras sitting on the edge of the network are capable of making decisions. Otherwise, the network’s main servers would go nuts processing data, most of which would be garbage, from over 400,000 cameras streaming 24/7!
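
To make that concrete, here is a minimal sketch of what that edge-side matching might look like, using the open-source face_recognition Python library. The capture_frame() and report_to_server() helpers are hypothetical stand-ins for a camera’s firmware API, so treat this as an illustration rather than a production design:

```python
import face_recognition  # open-source facial recognition library

def capture_frame():
    """Grab the next video frame as a numpy array (hypothetical helper)."""
    ...

def report_to_server(frame):
    """Send a frame back to the central server (hypothetical helper)."""
    ...

# Build the "biometric map": a face encoding computed from one photo.
suspect_image = face_recognition.load_image_file("suspect.jpg")
suspect_encoding = face_recognition.face_encodings(suspect_image)[0]

while True:
    frame = capture_frame()
    # Encode every face in the frame locally, on the camera itself.
    for encoding in face_recognition.face_encodings(frame):
        # Only phone home when someone matches the suspect's encoding.
        if face_recognition.compare_faces([suspect_encoding], encoding)[0]:
            report_to_server(frame)
            break
```

The point isn’t this specific library; it’s that the matching happens on the camera, so the main server only ever hears about the handful of frames that actually matter.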

The overload problem wasn’t an easy one to solve. But now that sensors and software are incorporated into “edge” devices (devices living on the edge of the network), network engineers can configure these devices to send only relevant information to the network’s main servers. In other words, these devices can now make decisions without relying on anything other than their own computational power, making things better for everybody.

Related: How to become a network engineer in less than a year

Edge Computing not only helps to reduce network loads, but also increases efficiency, device functionality, and the speed of information processing, since data doesn’t have to travel far to be analyzed. But it’s not all good news: there are problems that arise from incorporating Edge Computing, as well as smaller issues that can affect businesses’ operations. Let’s take a look at the pros and cons of Edge Computing:

Pros of Edge Computing

  1. Reduces network load

Let’s go back to the London security camera example for a second. Pretend you’re running security for the entire city, and that you have decided to upgrade all of your cameras to stream in 4K definition to make better use of recordings…

A 4K stream consumes somewhere around 25 Mbps. So, if all of these cameras were only capable of sending footage back, your network would have to smoothly process the streams coming from over 400,000 cameras every second. In that case, you’d have to wait for technology to catch up to your needs, my friend!
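
The back-of-the-envelope arithmetic shows just how hopeless that is. Here it is as a quick Python sketch, using the rough 25 Mbps estimate from above:

```python
cameras = 400_000       # CCTV cameras, per the London figure above
stream_mbps = 25        # rough bitrate of a single 4K video stream

total_mbps = cameras * stream_mbps
print(f"{total_mbps:,} Mbps = {total_mbps / 1_000_000:.0f} Tbps")
# 10,000,000 Mbps = 10 Tbps of raw video hitting the core, nonstop
```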

Edge Computing makes these types of crazy but quite common scenarios possible. With Edge Computing, networks can scale to meet the demands of the IoT world without having to worry about overconsumption of resources on the network and servers, or about wasting resources on processing irrelevant data.

  2. Functionality

Network engineers can program Edge Computing devices to perform several different kinds of functions. I’ve covered the example of filtering data before sending it over the network, but Edge Computing devices are capable of doing many more things. Since they have their own software and can process their own data, they can be configured to handle edge data in ways that have not yet been imagined.

With the new capabilities presented by leveraging data at the edges, networks will inevitably have more functionality and will hopefully become increasingly more efficient as well.

  3. Efficiency (real-time access to analytics)

Another benefit of Edge Computing is that it enables real-time data analysis performed on the spot, which is a big deal for businesses.

For example, if you’re the manager of one plant in a manufacturing business with several plants, you could greatly benefit from analyzing your plant’s data as it is being recorded, rather than having to wait for that data to travel to a central server, be analyzed, and then be sent back to your plant.

Such speed translates into immediate action, which ultimately results in cost reductions and/or revenue increases (the main things businesses try to achieve).

Imagine being the manager of a manufacturing plant, hitting an issue with your production process, and having to wait for your data to be analyzed by the company’s main server. With Edge Computing, you could find out what that issue is virtually immediately, saving significant resources!
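
As a rough illustration of on-the-spot analysis, here is a minimal sketch of the kind of check an edge device could run against its own sensor readings. The window size and three-sigma threshold are illustrative assumptions, not a prescription:

```python
from collections import deque
from statistics import mean, stdev

WINDOW = 100                      # how many recent readings to remember
readings = deque(maxlen=WINDOW)

def is_anomalous(reading: float) -> bool:
    """Flag a sensor reading that drifts far from recent history."""
    anomaly = False
    if len(readings) == WINDOW:
        mu, sigma = mean(readings), stdev(readings)
        # Flag readings more than 3 standard deviations from the mean.
        anomaly = bool(sigma) and abs(reading - mu) > 3 * sigma
    readings.append(reading)
    return anomaly
```

Because the check runs on the device itself, the alert can fire the moment the reading arrives, with no round trip to a central server.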

Cons of Edge Computing

  1. Security

The main negative aspect of Edge Computing is security. Before edge devices became “intelligent,” they weren’t a vulnerable part of the network—they were just “dumb” devices performing very limited tasks.

However, by adding advanced software and hardware to these devices, and by empowering them to analyze their data locally, all of these devices become more vulnerable to malicious attacks.

With networked devices analyzing their own data on the spot, it is now even more likely for any of them to be infected with malware and begin distributing it across your network.

The security problem isn’t going away anytime soon. As of right now, and for several years to come, building full-blown security into every endpoint of a network is not feasible, which makes conventional security virtually impossible to accomplish. Therefore, networks utilizing Edge Computing will have to rely more on security through the network itself. [More on that topic here]
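
What might “security through the network itself” look like? One common approach is for the gateway to allowlist where each edge device may send traffic, rather than trying to harden every endpoint. The sketch below is a hypothetical illustration, with made-up device IDs and addresses:

```python
# Per-device allowlists enforced at the gateway, not on the endpoint.
ALLOWED_DESTINATIONS = {
    "camera-0042": {"10.0.0.5"},  # this camera may only reach the ingest server
}

def inspect_flow(device_id: str, dest_ip: str) -> bool:
    """Permit a flow only if the device is allowed to reach that address."""
    if dest_ip in ALLOWED_DESTINATIONS.get(device_id, set()):
        return True
    # Anything else is suspicious: a compromised camera probing the network.
    print(f"ALERT: {device_id} tried to reach {dest_ip}; quarantining device")
    return False
```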

  2. Increased risk of human error

With several new intelligent devices connected to a network, configuring all of these devices correctly becomes a challenge. Network engineers can now go into each one of those little devices and perform several configurations, which makes it easy to make a mistake.

Going back to the security camera example, imagine something as simple as setting up one of your cameras to record during the day rather than at night. This, of course, isn’t inherently caused by Edge Computing, but by human error. However, the likelihood of such misconfigurations certainly grows as more Edge Computing devices are incorporated.
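
One way to blunt this risk is to validate configurations in software instead of trusting manual, per-device setup. Here is a minimal sketch, assuming a hypothetical schedule format:

```python
# Policy: cameras must record overnight, from 18:00 to 06:00.
POLICY_HOURS = ("18:00", "06:00")

def validate(configs: dict) -> list:
    """Return a human-readable error for every camera violating policy."""
    errors = []
    for camera, hours in configs.items():
        if hours != POLICY_HOURS:
            errors.append(f"{camera}: records {hours[0]}-{hours[1]}, "
                          f"expected {POLICY_HOURS[0]}-{POLICY_HOURS[1]}")
    return errors

# A day/night mix-up like the one described above is caught before deployment.
print(validate({"camera-7": ("06:00", "18:00")}))
# ['camera-7: records 06:00-18:00, expected 18:00-06:00']
```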

  3. Cost of functionality

The advancement of Edge Computing devices makes it possible for vendors to modify their business models. Nowadays, for example, instead of selling you a device, they sell you the device plus the ability to use certain key functionalities, only if you pay extra, of course. It is very easy to pretty much double the cost of the hardware just by adding additional functionalities.

Related: How the Network Intuitive will change the future of IT professionals

This is something to watch out for if you’re deploying Edge Computing into your network because, as you know, the IT industry revolves around money so you must keep your priorities in check. Make sure that you read the fine print every time you purchase new technology so that you’re only paying for what you’re truly going to use, and aren’t being charged for functionalities that you don’t need.

Why should you care?  

Edge Computing is enabling the Internet of Things to take over the world. According to Cisco Systems, by 2020 there will be tens of billions of devices connected to the Internet. Even if all of these devices were to send text files all day every day, we’d still need Edge Computing technologies to avoid big issues.

This means that every single network in the near future will use Edge Computing to operate. Hence, if you start digging into the weeds of Edge Computing now, you’ll be at the forefront of the industry—at least until the next big change comes by.

Nonetheless, you must watch out for costs, since they can skyrocket in the blink of an eye; you must go the extra mile to protect the integrity of your data, since Edge Computing could make it quite vulnerable to malicious attacks; and, you must think about better methods for configuration management and orchestration of network devices as more and more computing and intelligence is deployed to the edge of the network!

