What Is Edge Computing, and Why Does It Matter?

SukYeon Jung
4 min read · Jun 21, 2021

Edge computing places the computation unit as close as possible to where the data is created. The concept itself is not new, but it has recently become a buzzword because of the exponential growth in data production, driven primarily by the spread of IoT devices. These devices continuously produce vast amounts of data, creating a data flood.

Data from Statista confirms this trend (Figure 1). Between 2010 and 2017, the worldwide volume of data grew from 2 zettabytes to 33 zettabytes, an annual growth rate of 44%. Projections indicate that by 2025 the volume will reach a staggering 181 zettabytes, growing 27% per year. The sheer volume of data and information being created has surged rapidly and is expected to keep growing.

As a result, the traditional computing architecture, which relies on centralized data centers and the internet, faces significant challenges in handling data at this unprecedented scale.

Figure 1: Volume of data/information created, captured, copied, and consumed worldwide from 2010 to 2025 (Source: Statista.com)

To understand why edge computing is receiving increasing attention, it helps to look at where and how data is generated and where computation is performed. Let’s walk through how data is handled in the traditional computing architecture (Figure 2). Data is produced by interactions between digital devices, or between humans and digital devices. For instance, when we press a button on an internet-connected refrigerator, data is generated. This data is then transferred over the internet to a central data center, where computations are executed, and the results are sent back to the original device. In this architecture, data must travel back and forth across the internet.

Figure 2: Traditional Computing Architecture
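
As a rough illustration, here is a minimal sketch of that round trip in Python. The endpoint URL, payload shape, and device name are hypothetical placeholders, not a real API:

```python
# A minimal sketch of the traditional round trip. The endpoint and
# payload are made up for illustration.
import requests  # pip install requests

def handle_button_press(event: dict) -> dict:
    # 1. Data is generated on the device (e.g., a connected refrigerator).
    # 2. The raw event travels over the internet to a central data center.
    response = requests.post(
        "https://datacenter.example.com/analyze",  # hypothetical endpoint
        json=event,
        timeout=5,
    )
    response.raise_for_status()
    # 3. The computed result travels all the way back to the device.
    return response.json()

result = handle_button_press({"device": "fridge-42", "action": "button_press"})
```

Every step of this flow depends on the network in the middle, which is exactly where the trouble starts.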

When the amount of data was limited, this architecture was not a significant issue. With the endlessly growing volume of data, however, congestion in the network, the vessel that carries the data, has become a problem. This congestion leads to bandwidth limitations, network latency, and unstable connectivity, sometimes collectively referred to as “digital data dyspepsia.” This is why the concept of edge computing is gaining popularity.

Edge computing moves compute engines and storage units out of the central data center and places them near, or at, the origin of the data. The data is first processed and analyzed at the edge, and only the results are sent to the central data center for more complex workloads or human review. Because data no longer has to traverse a network to reach a central data center, transmission issues such as latency and unstable connections are greatly reduced.

Examples of edge locations include connected vehicles that report real-time vehicle status and enable automated driving, and an analytics system on a steel-producing machine that detects and analyzes defects so the machine can be shut down or repaired promptly. Edge computing allows data to be processed and analyzed close to its source, improving performance and reducing network-related problems, as the sketch after Figure 3 illustrates.

Figure 3: Edge Computing Architecture
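
To make the pattern concrete, here is a minimal sketch of the steel-machine scenario, assuming made-up sensor names and thresholds (the real system is not specified in the article). The analysis runs on the edge node itself; only a small summary is ever forwarded upstream:

```python
# A minimal sketch of edge-side processing (hypothetical names and
# thresholds throughout).
import statistics
from typing import Optional

DEFECT_THRESHOLD = 0.8  # assumed defect score above which we act

def process_at_edge(defect_scores: list[float]) -> Optional[dict]:
    """Analyze raw readings locally, on the machine itself."""
    peak = max(defect_scores)
    if peak > DEFECT_THRESHOLD:
        stop_machine()  # prompt local action, no round trip required
        # Forward only a compact summary for heavier analysis or human
        # review, instead of streaming every raw reading upstream.
        return {
            "peak": peak,
            "mean": statistics.mean(defect_scores),
            "samples": len(defect_scores),
        }
    return None  # nothing unusual: the raw data never leaves the edge

def stop_machine() -> None:
    print("defect detected: shutting down for inspection")
```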

The direct benefits of edge computing are improvements on three network limitations: 1) limited bandwidth, 2) latency, and 3) instability in the network connection.

Bandwidth and Latency: All networks have limited bandwidth, which means only a limited amount of data can be traversed to a central data center. Additionally, the physical distance between the data source and the data center increases latency. Due to these limitations, only a small portion of the captured data is utilized in many cases. For instance, a Mckinsey & Company study (The Internet of Things: Mapping the Value Beyond the Hype) published in 2015 found that less than 1% of the data captured by 30,000 sensors is used to make decisions. This underutilization of data is a concern that edge computing can address, as it operates using local area networks (LAN) to provide ample bandwidth and low latency, enabling more data-driven analytical capabilities.
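
To put rough numbers on the latency point (assumed figures, not from the article): a signal in fiber travels at roughly two-thirds the speed of light, about 200,000 km/s, which sets a hard floor on round-trip time regardless of how fast the servers are.

```python
# Back-of-the-envelope propagation delay, ignoring queuing, routing,
# and processing overhead entirely (assumed numbers).
SPEED_IN_FIBER_KM_PER_S = 200_000  # ~2/3 of c, a common rule of thumb

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time for one request/response pair."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_S * 1_000

print(f"Data center 2,000 km away: {min_round_trip_ms(2_000):.1f} ms floor")  # 20.0 ms
print(f"Edge node on the LAN, 1 km: {min_round_trip_ms(1):.3f} ms floor")     # 0.010 ms
```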

Instability in a Network Connection: Long-distance connectivity is not always stable; outages can occur due to natural disasters, congestion, and other causes. When an outage happens, systems that rely on a central data center for analytics and processing lose those capabilities. By keeping processing local, edge computing can improve the reliability of the entire system.

Data Security and Data Sovereignty: While data centers and cloud giants like AWS, Azure, and GCP provide excellent security within their premises, data traveling from its origin to the data center is exposed to security risks. Additionally, data may need to cross national borders to reach a data center, which can raise legal issues concerning data management and sovereignty. Edge computing can address both concerns by keeping data close to where it is produced.
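
For illustration, one pattern here is to redact or hash sensitive fields at the edge so raw identifiers never cross the network or a border. The field names below are invented for the example:

```python
# A minimal edge-side sanitization sketch (invented field names).
import hashlib

SENSITIVE_FIELDS = {"user_id", "license_plate"}

def sanitize(record: dict) -> dict:
    """Hash sensitive values locally before anything leaves the site."""
    cleaned = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            # One-way hash: records remain correlatable upstream without
            # the raw identifier ever traveling over the network.
            cleaned[key] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            cleaned[key] = value
    return cleaned
```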

Although the concepts and advantages of edge computing are clear, there are still many challenges it must address before becoming mainstream, including limited processing power at the edge, ensuring a stable minimum level of connectivity, and maintaining security at the scale of individual devices. As edge computing continues to grow, it will need to tackle these issues to fully realize its potential.

