What Is Edge Computing and Why It Matters

Edge computing places compute resources as close as possible to where data are created. Although the concept is not new, it has become a buzzword recently, and the reason lies in the rapid growth of generated data. With IoT devices producing data around the clock, the volume keeps flooding upward. Research by Statista.com confirms this trend (Figure 1). From 2010 to 2017, the worldwide volume of data grew from 2 zettabytes to 33 zettabytes at an annual growth rate of 44%, and it is expected to reach 181 zettabytes in 2025 at an annual growth rate of 27%. The volume of data and information created has grown fast and will keep growing. Under this pressure, the traditional computing architecture, a centralized data center reached over the internet, struggles to handle data at this unprecedented scale.

To understand why edge computing is drawing more and more attention, it helps to look at where data are generated and where computation happens. Let's take a look at how data are handled in the traditional computing architecture (Figure 2). Data are the byproducts of interactions between digital devices, or between humans and digital devices. For example, when we press a button on an internet-connected refrigerator, data are generated. The data are then transferred to a central data center over the internet, the data center performs the computation, and the results are sent back to the original device. In this architecture, data must travel back and forth over the internet.
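As a rough illustration of this round trip, the sketch below sends one raw reading to a hypothetical central endpoint and waits for the computed result. The endpoint URL, payload fields, and response shape are assumptions for illustration, not a real API.

```python
# Minimal sketch of the traditional, centralized flow: the device ships its
# raw data to a central data center and waits for the result to come back.
# The endpoint URL and payload fields are hypothetical placeholders.
import json
import urllib.request

CENTRAL_ENDPOINT = "https://datacenter.example.com/analyze"  # hypothetical

def send_to_data_center(reading: dict) -> dict:
    """Send one raw reading over the internet and return the computed result."""
    request = urllib.request.Request(
        CENTRAL_ENDPOINT,
        data=json.dumps(reading).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # Every reading pays the full round-trip cost: bandwidth plus latency.
    with urllib.request.urlopen(request, timeout=5) as response:
        return json.loads(response.read())

if __name__ == "__main__":
    result = send_to_data_center({"device_id": "fridge-01", "button": "ice"})
    print(result)
```

Note how even a trivial button press travels the full distance to the data center and back before anything useful happens at the device.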

When data volumes were modest, the traditional architecture was not a big issue. However, the ever-growing volume of data has congested the network, the vessel that delivers it. The problems caused by this congestion include limited bandwidth, network latency, and unstable connectivity. This so-called digital data dyspepsia is driving the spread of edge computing. Edge computing moves compute and storage out of the central data center and places them near, or at, the data source. Data are first processed and analyzed at the edge, and only the results are sent to the central data center for more complicated workloads or human review. Since the raw data no longer have to traverse the network to a data center, transmission problems such as latency and unstable connections are greatly reduced. Examples of edge locations include connected vehicles that report vehicle status and support automated driving, or an analytics system on a steel-producing machine that collects and analyzes defect data so that the machine can be shut down or repaired as soon as possible.
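A minimal sketch of this pattern, loosely based on the steel-machine example: readings are analyzed locally with a simple threshold rule, and only a compact summary is forwarded. The threshold, field names, and the forward_to_data_center stub are assumptions for illustration, not a real defect-detection algorithm.

```python
# Minimal sketch of edge-side processing: analyze readings where they are
# produced and forward only a compact summary to the central data center.
from statistics import mean

DEFECT_THRESHOLD = 0.8  # hypothetical vibration level suggesting a defect

def analyze_at_edge(vibration_readings: list[float]) -> dict:
    """Run the analysis locally and return only the summary worth uploading."""
    defects = [v for v in vibration_readings if v > DEFECT_THRESHOLD]
    return {
        "samples": len(vibration_readings),
        "mean_vibration": round(mean(vibration_readings), 3),
        "defect_count": len(defects),
        "shutdown_recommended": len(defects) > 3,
    }

def forward_to_data_center(summary: dict) -> None:
    """Placeholder for the (much smaller) upload to the central data center."""
    print("uploading summary:", summary)

if __name__ == "__main__":
    readings = [0.2, 0.4, 0.9, 1.1, 0.3, 0.95, 0.87, 0.1]
    forward_to_data_center(analyze_at_edge(readings))
```

Instead of shipping every raw reading, the edge node uploads a few summary fields; the heavy lifting and the bulk of the data stay local.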

Direct benefits of edge computing include relief from network limitations, namely 1) limited bandwidth and latency and 2) instability in the network connection, as well as 3) gains in data security and data sovereignty.

  1. Bandwidth and Latency: All networks have limited bandwidth, which means only a limited amount of data can be moved to a central data center, and the physical distance between the origin of the data and the data center adds latency. Because of these limits, only a small portion of captured data is actually used in many cases. As an illustration, a 2015 McKinsey & Company study (The Internet of Things: Mapping the Value Beyond the Hype) found that less than 1% of the data captured by 30,000 sensors was used to make decisions, a clear underutilization of captured data. Because edge computing generally operates over a local network (LAN), it offers ample bandwidth and low latency and equips us with far more data-driven analytical power (see the sketch after this list for a simple example of filtering data at the edge).
  2. Instability in a network connection: Long-distance connectivity is not always stable. Outages can occur because of natural disasters, congestion, and so on. Whatever the cause, when an outage occurs, a system that relies on the central data center for analytics and processing loses that capability. Edge computing can keep working locally during the outage, improving the reliability of the whole system (the sketch after this list also buffers data while the link is down).
  3. Data security and Data sovereignty: Even though cloud giants such as AWS, Azure, and GCP provide excellent security, that protection is confined to the data center itself; data that have left their origin and are in transit are exposed to security risks. Furthermore, data sometimes have to cross national borders to reach a data center, which can raise legal issues about how the data are managed. For instance, AWS does not have a region in Vietnam, so AWS users there must send their data to the closest region, Singapore. Vietnam's information security law prohibits businesses from storing sensitive data outside the country, which in turn makes it impossible to utilize AWS fully. In both situations, edge computing can be a great solution.
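To make the first two benefits concrete, here is a small sketch of an edge agent that filters raw readings locally, so only the interesting fraction crosses the WAN, and buffers filtered records whenever the uplink is unavailable. The threshold, the uplink_send stand-in, and the field names are hypothetical; a real agent would use an HTTP or MQTT client for the upload.

```python
# Sketch of an edge agent that (1) filters raw readings locally so only a
# small fraction travels over the WAN, and (2) buffers records locally when
# the uplink is down, flushing them once connectivity returns.
from collections import deque

ALERT_THRESHOLD = 75.0                       # hypothetical "worth sending" cutoff
buffer: deque[dict] = deque(maxlen=10_000)   # bounded local store for outages

def uplink_send(record: dict) -> bool:
    """Stand-in for the real WAN upload; returns False when the link is down."""
    print("sent to data center:", record)
    return True

def handle_reading(sensor_id: str, value: float) -> None:
    """Filter at the edge, then send or buffer depending on connectivity."""
    if value < ALERT_THRESHOLD:
        return  # uninteresting data never leaves the edge
    record = {"sensor": sensor_id, "value": value}
    if not uplink_send(record):
        buffer.append(record)  # keep it locally until the outage ends

def flush_buffer() -> None:
    """Retry buffered records once the connection is back."""
    while buffer and uplink_send(buffer[0]):
        buffer.popleft()

if __name__ == "__main__":
    for sensor, value in [("temp-1", 22.0), ("temp-1", 81.5), ("temp-2", 90.2)]:
        handle_reading(sensor, value)
    flush_buffer()
```

The design choice is simple: decide at the edge what is worth sending, and never let a temporary outage turn into lost data or a stalled system.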

The concepts and advantages of edge computing have been explained above. However, there are still many problems edge computing has to address before it becomes mainstream, including limited processing power, maintaining a stable minimum level of connectivity, and security at device scale. Edge computing will surely grow, and grow fast, but only after these issues are properly addressed will we be able to observe its true potential.
