Machine Learning @ Edge




Edge refers to computing resources hosted outside the cloud, closer to IoT devices. Traditionally, edge devices were dumb gateways meant to establish network connectivity for devices without radio or wireline communication capability. Lately, however, they have evolved into smarter form factors on which even small machine learning algorithms can run for specific use cases.


In terms of applicability, the intelligent edge suits multiple IoT use cases, ranging from self-driving cars to remote healthcare devices to video surveillance of retail stores and warehouses. Notably, all such use cases depend on real-time decision-making based on data acquired from end devices. It is therefore safe to assume that real-time analysis of streaming data is a primary concern of many IoT systems. Consequently, edge intelligence sits at the very core of the evolving IoT ecosystem, across use cases and industry domains.


Decentralized computing operates through collaboration among local nodes and does not depend on a central node for processing. The smarter edge is in line with the recent decentralization trend, much like blockchain, where a decentralized ledger is used to create smart contracts that simplify and automate use cases such as license or lease renewals.


Edge intelligence can be seen as the confluence of two trends: decentralization and edge-efficient machine learning. It brings inherent benefits to the world of IoT, including:


  • Latency for real-time analytics is reduced, as round trips to cloud-based IoT platforms are minimized. This also, indirectly, reduces the amount of data transferred to the cloud.

  • Operational data is analyzed in real time, enabling critical functions like alarms and alerts.

  • Compliance with regulations on data security and data residency is strengthened, since local data does not cross on-premises boundaries.

  • The system can keep functioning under impaired or intermittent connectivity, as local computing does not require an "always on" network, thereby improving resiliency.

  • Monthly compute cost of cloud components is optimized, as fewer data packets are sent to the cloud backend.

  • Analytical workload is decentralized across multiple edge locations, thereby enabling economies of scale.



Architecture


Most platform-based IoT solutions involve a cloud layer and an edge layer. The cloud handles device data ingestion and storage, and is where models are built and trained, using a component such as AWS SageMaker or Azure ML. Once trained, these models can be deployed to the edge layer to handle data analytics. The edge layer therefore needs to refresh its models at periodic intervals while keeping network connectivity to the cloud layer to a minimum. A generic architecture for the intelligent edge is depicted below:
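The refresh cycle just described can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's API: `fetch_version` and `download` are hypothetical stand-ins for platform-specific deployment calls (for example, a Greengrass or IoT Edge deployment mechanism).

```python
class EdgeModelManager:
    """Keeps a local model in sync with the latest version trained in the cloud."""

    def __init__(self, fetch_version, download):
        self.fetch_version = fetch_version  # returns latest model version id in the cloud
        self.download = download            # returns the model payload for a version
        self.current_version = None
        self.model = None

    def sync(self):
        """Pull a new model only when the cloud has a newer version."""
        latest = self.fetch_version()
        if latest != self.current_version:
            self.model = self.download(latest)
            self.current_version = latest
            return True   # model was upgraded
        return False      # already current; no model transfer over the network
```

Because `sync()` transfers a model only on a version change, periodic polling keeps cloud traffic minimal while still delivering upgrades.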





Cloud layer - ingests continuous data feeds for the purpose of training machine learning models. These models are upgraded periodically and shipped to the edge layer. Models trained for the cloud may not be directly usable on devices, given the low memory and compute capacity at the edge layer.


There are different techniques to compress ML models for the edge layer. Quantization reduces storage size and can lower the latency of running a single inference at the edge. Pruning keeps only the essential model parameters by eliminating those with minor impact. It is important to note that compression techniques affect model accuracy and need careful deliberation as part of the application design.
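To make the two techniques concrete, here is an illustrative sketch using plain Python lists of weights. Real toolchains (for example, TensorFlow Lite's post-training quantization) operate on whole model graphs; the arithmetic below only shows the core idea.

```python
def quantize_int8(weights):
    """Map float weights to int8 range [-127, 127] with a single scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale  # store one byte per weight plus a single float scale


def dequantize(q, scale):
    """Recover approximate float weights for inference."""
    return [v * scale for v in q]


def prune_by_magnitude(weights, fraction):
    """Zero out the given fraction of weights with the smallest magnitude."""
    k = int(len(weights) * fraction)
    threshold = sorted(abs(w) for w in weights)[k] if k > 0 else 0.0
    return [0.0 if abs(w) < threshold else w for w in weights]
```

Both functions trade accuracy for size: dequantized weights are only approximately equal to the originals, and pruned weights are lost entirely, which is why the impact on model accuracy must be validated after compression.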


Google and Facebook have tailored their popular machine learning frameworks, TensorFlow and PyTorch, to match this resource scarcity, specifically for edge devices. Microsoft is working on the Embedded Learning Library (ELL), a framework for edge devices.


Edge layer - needs sufficient resources to generate inferences in real time by running lightweight models. Edge devices also need to aggregate, filter, and transform data before shipping it to the platform, and they need buffering and caching ability to gracefully handle intermittent connectivity scenarios.
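The buffering behaviour mentioned above can be sketched as a simple store-and-forward queue. This is an assumption-laden illustration, not a production design: `send_batch` stands in for the real uplink (for example, an MQTT publish), and the bounded deque drops the oldest readings if the link stays down too long.

```python
import collections


class StoreAndForwardBuffer:
    """Buffers readings locally and flushes them upstream when connectivity allows."""

    def __init__(self, send_batch, max_size=1000):
        self.send_batch = send_batch
        # Bounded deque: oldest readings are discarded once capacity is hit
        self.queue = collections.deque(maxlen=max_size)

    def record(self, reading):
        self.queue.append(reading)

    def flush(self, connected):
        """Ship everything buffered if the network is up; otherwise keep it."""
        if not connected or not self.queue:
            return 0
        batch = list(self.queue)
        self.send_batch(batch)
        self.queue.clear()
        return len(batch)
```

Aggregation and filtering can be applied in `record` (for example, keeping only readings outside a normal range), which reduces both buffer pressure and the volume of data eventually sent to the cloud.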



Platform Offerings


  • AWS offers Greengrass as an edge runtime for on-premises processing. It provides a built-in runtime (Neo DLR) for running inferences locally. Amazon SageMaker can compile models from popular frameworks like PyTorch into the Neo runtime, which can be executed by any Greengrass device. The Neo runtime also handles model optimization and tuning to suit the resource constraints of edge devices. In addition to ML inference, AWS extends serverless constructs to the edge layer to enable local responses in real time.


  • Microsoft has Azure IoT Edge to handle on-premises device management. Azure extends cloud services like Azure Stream Analytics and Azure Functions to build the intelligent edge. Machine learning and complex event processing modules developed on the Azure cloud can be deployed to IoT Edge as well. Additionally, Azure offers AI toolkits targeted specifically at IoT Edge.


  • Google Cloud Platform has a purpose-built Edge TPU (Tensor Processing Unit) chip to handle inference at the edge layer. The Edge TPU, in combination with Google Cloud services like Cloud ML, Cloud Datalab, and Data Studio, provides end-to-end infrastructure for deploying AI capabilities. GCP also has Cloud IoT Edge, which connects with Cloud Pub/Sub and other services to collect, filter, and send data to cloud applications.



Conclusion


Ubiquitous connectivity and low-cost computing will continue to lower entry barriers for intelligent edge computing. It is particularly pertinent for use cases where latency can be minimized by saving network trips, for example using computer vision algorithms to detect anomalies in manufacturing production processes, or employing augmented reality to promote products in smart retail.


Having said that, it should not be viewed as a "one size fits all" approach: it may not be applicable where local computation takes as long as, or longer than, a network round trip, or where data from multiple edge devices must be aggregated at a central location for analytics. With the exception of these situations, the edge intelligence paradigm is significant in almost all IoT use cases across industry domains.


The emergence of the smarter edge and its widespread adoption, supported by investments from all major platform providers, is testimony to the fact that edge intelligence is leading the evolution of the IoT ecosystem.

