
The Future of Computing is on the Edge


Computing began in an age of consolidation. For 50 years, mainframes, minicomputers, and their cousins ruled the computing landscape. That was until the dawn of the PC, which marked the birth of the powerful endpoint supported by the server, rather than the terminal solely dependent on a centralized compute resource. By the aughts, the cloud and the smartphone had begun their rise as two faces of the same coin: the smartphone miniaturized a significant portion of a PC’s functionality, while the cloud supplemented it, bringing their collective compute capability back almost on par with the client-server model of the PC era. Edge computing takes this concept to its next evolution, bringing the capabilities of the cloud to the endpoint. Scaling the endpoint down even further, to the point that compute can be placed nearly anywhere, provides value in unexpected and previously overlooked areas.


Edge computing already exists around us in unseen places. Content delivery networks, or CDNs, like Cloudflare bring content closer to the end consumer by caching a version served by the originating host; changes to the content periodically propagate back out to the edge caches. With these caches spread across the globe, content loads perceptibly faster. Caching also reduces strain on the network links back to the origin, since consumers no longer download the original each time.
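The caching behavior described above can be sketched in a few lines. This is an illustrative toy, not how Cloudflare actually works: the `EdgeCache` class, the `ORIGIN` dictionary standing in for the origin server, and the TTL value are all invented for the example.

```python
import time

ORIGIN = {"/index.html": "<html>v1</html>"}  # stands in for the originating host

class EdgeCache:
    """Toy CDN edge node: serve cached copies, re-fetch when stale."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}  # path -> (content, fetched_at)

    def fetch_origin(self, path):
        # In a real CDN this is an HTTP request back to the origin.
        return ORIGIN[path]

    def get(self, path):
        entry = self.store.get(path)
        if entry is not None:
            content, fetched_at = entry
            if time.time() - fetched_at < self.ttl:
                return content, "HIT"   # served from the edge, no origin trip
        # Cache miss or stale entry: re-fetch from the origin and re-cache.
        content = self.fetch_origin(path)
        self.store[path] = (content, time.time())
        return content, "MISS"

cache = EdgeCache(ttl_seconds=60)
print(cache.get("/index.html"))  # ('<html>v1</html>', 'MISS')
print(cache.get("/index.html"))  # ('<html>v1</html>', 'HIT')
```

The second request never touches the origin, which is exactly the latency and bandwidth win the paragraph describes.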


Caching content helps load times by bringing data closer to the consumer, but sometimes the data originates at the edge itself. Sensors have long existed in the environment, measuring fuel levels, rainfall, and temperature. Five years ago, cameras were more often connected to a VCR than to the Internet. Driven by ready access to sufficient bandwidth and small, cheap compute, these sensors can now be paired with a modest processor and a network connection. Perhaps more importantly, they are now in the price range of the average consumer.


This revolution in Internet of Things (IoT) devices has produced a disruptive opportunity: the small amount of computing available on the IoT device itself can run computations and deliver preliminary results without first sending data up to the cloud, and without the need for a traditional PC.


Two different flavors of edge computing are just reaching production. Leveraging serverless scripting technologies like AWS Lambda, a code function designed to run in the cloud can run equally well on cloud-edge CDNs or on IoT devices themselves. Cloud-edge offerings like AWS Lambda@Edge offload work traditionally done by servers or virtual machines in the cloud out to the CDN edge. This decentralizes the work onto optimized, purpose-built infrastructure and complements the caching function of CDNs, providing similar benefits by computing closer to the consumer. The second flavor takes these same cloud functions and runs them against sensor data on the IoT device itself. This might take the form of a rain gauge recognizing an unexpected dry spell and triggering the sprinklers to run longer than previously scheduled. The Raspberry Pi and AWS Greengrass are popular with home tinkerers building homebrewed solutions around these cloud functions. Home security cameras likewise use onboard compute to distinguish a fluttering curtain from a person, and record or not accordingly. Cell phones, packed with sensors, can themselves act as very capable IoT devices.
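The rain-gauge example might look something like the Lambda-style handler below. The event shape, the thresholds, and the run-time values are all hypothetical; on a real device (say, under AWS Greengrass) the return value would actuate a sprinkler controller rather than just be printed.

```python
# Hypothetical thresholds for the example; a real deployment would tune these.
DRY_SPELL_THRESHOLD_MM = 2.0   # weekly rainfall below this counts as a dry spell
BASE_RUN_MINUTES = 10
EXTENDED_RUN_MINUTES = 20

def handler(event, context=None):
    """Lambda-style function run on-device against local rain-gauge data.

    `event` carries the last week of readings in millimeters (assumed shape).
    """
    readings = event["rainfall_mm_last_7_days"]
    total = sum(readings)
    if total < DRY_SPELL_THRESHOLD_MM:
        minutes = EXTENDED_RUN_MINUTES  # unexpected dry spell: water longer
    else:
        minutes = BASE_RUN_MINUTES      # normal week: keep the usual schedule
    # The decision is made locally; it could still be reported to the cloud
    # later, asynchronously, rather than shipping every reading upstream.
    return {"sprinkler_minutes": minutes, "total_rainfall_mm": total}

print(handler({"rainfall_mm_last_7_days": [0, 0, 1, 0, 0, 0, 0]}))
# {'sprinkler_minutes': 20, 'total_rainfall_mm': 1}
```

The point is that the raw sensor stream never leaves the device; only the preliminary result does.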


Edge computing has some distinct advantages over other models of compute. Caching data closer to the consumer is discussed above, but this paradigm can also dictate the fundamental structure of an application. Code can be designed to run at the endpoint or consumer device, and can consume otherwise idle cycles there; SETI@home may be the most famous example of this technique. Bitcoin has also been in the news lately, but underlying the cryptocurrency is distributed ledger technology. Distributed ledger technology, or DLT, leverages each consumer device to create a trusted, tamper-resistant fabric on which to run any number of transactions, from smart contracts to elections. DLT is a great fit for cases where integrity and public verifiability are key.
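The tamper-resistance of a ledger comes from hash-chaining: each block commits to the hash of the one before it. The toy below shows only that mechanism; real DLTs add consensus across many nodes, and every name here is invented for illustration.

```python
import hashlib
import json

def block_hash(tx, prev):
    # Hash the transaction together with the previous block's hash, so each
    # block commits to the entire history before it.
    payload = json.dumps({"tx": tx, "prev": prev}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain, tx):
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"tx": tx, "prev": prev, "hash": block_hash(tx, prev)})

def verify(chain):
    prev = "0" * 64
    for block in chain:
        # Any edited transaction or broken link makes verification fail.
        if block["prev"] != prev or block["hash"] != block_hash(block["tx"], prev):
            return False
        prev = block["hash"]
    return True

chain = []
append(chain, "alice pays bob 5")
append(chain, "bob pays carol 2")
print(verify(chain))                   # True
chain[0]["tx"] = "alice pays bob 500"  # tamper with recorded history
print(verify(chain))                   # False
```

Because every participant can recompute the hashes, anyone can publicly verify that history has not been rewritten, which is the integrity property the paragraph highlights.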


Edge computing also excels where models can be trained in the cloud and then downloaded to the endpoint for use. A great example is the Where’s the Bear? project out of the University of California, Santa Barbara. The team trains a machine learning model on Google Cloud Platform and pushes it out to a resource-constrained sensor network to watch for and record bear encounters; results are then added to a central database periodically and asynchronously. GPU-intensive applications are another great fit for edge computing: virtual reality applications like Google Expeditions make extensive use of a phone’s GPU to unpack a pre-rendered virtual environment.
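The train-in-the-cloud, infer-at-the-edge pattern can be sketched without any ML library at all. The "model" below is a trivial brightness threshold standing in for the deep model the real project trains on GCP; `train`, `EdgeClassifier`, and the JSON artifact format are all assumptions made for the example.

```python
import json

# --- cloud side: train a model and export a small artifact ----------------
def train(samples):
    """samples: list of (brightness, is_bear) pairs.

    Picks the midpoint threshold between the classes; a real pipeline would
    train a deep model for hours on cloud GPUs and export its weights.
    """
    bears = [b for b, is_bear in samples if is_bear]
    others = [b for b, is_bear in samples if not is_bear]
    threshold = (min(bears) + max(others)) / 2
    return json.dumps({"threshold": threshold})  # artifact shipped to the edge

# --- edge side: load the artifact, classify locally, report lazily --------
class EdgeClassifier:
    def __init__(self, artifact):
        self.threshold = json.loads(artifact)["threshold"]
        self.pending = []  # detections to sync when connectivity allows

    def observe(self, brightness):
        if brightness >= self.threshold:
            self.pending.append(brightness)  # keep locally, upload later
            return True
        return False  # discarded on-device; nothing sent upstream

artifact = train([(0.9, True), (0.8, True), (0.2, False), (0.3, False)])
cam = EdgeClassifier(artifact)
print(cam.observe(0.85), cam.observe(0.1))  # True False
```

The expensive training happens where compute is plentiful; the cheap per-frame decision happens where the data is, and only interesting results travel back, periodically and asynchronously.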


Edge computing is made possible by evolutions in cloud and mobile technology, but it is fundamentally an architectural lens through which to view new models of compute. IoT’s presence in the general cultural lexicon exposes the benefits of compute miniaturization and decentralization to the general population, even if its full potential remains hidden from the average consumer.

Over the last 10 years, Sara has held numerous program management, engineering, and operations roles. She is a vocal advocate for DevOps, microservices, and the cloud as tools to create better products and services.

