Serverless computing is a paradigm in which developers focus on code while the cloud provider manages the underlying infrastructure, typically handling availability and scalability concerns for serverless components. However, serverless is not a silver bullet for every use case; there are limits to its usage, especially given the complex service environment of IoT projects. Here are some reference patterns to consider when employing serverless constructs.
Light backend for Internet of Things (IoT) - Device management APIs tend to be stateless, since devices are expected to send identity details in each request. Such API traffic may not justify a dedicated virtual machine if CPU cores stay under-utilized for long stretches, so the data ingestion layer can use serverless functions for orchestration tasks. For instance, device messages are sent to AWS Kinesis or Azure IoT Hub, which triggers an AWS Lambda or Azure Function to perform device management and routing tasks.
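As a minimal sketch of this ingestion pattern, the following Kinesis-triggered Lambda handler decodes device messages and picks a routing target. The `deviceId` and `severity` fields and the `route_target` rule are illustrative assumptions, not part of any real device protocol; a production handler would publish to a downstream service rather than just returning counts.

```python
import base64
import json

def route_target(payload):
    # Hypothetical routing rule for illustration: high-severity
    # messages go to an alerts stream, the rest to telemetry.
    return "alerts" if payload.get("severity") == "high" else "telemetry"

def handler(event, context):
    """Sketch of a Kinesis-triggered Lambda performing device routing.

    Kinesis delivers records base64-encoded under record["kinesis"]["data"].
    """
    routed = []
    for record in event.get("Records", []):
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        device_id = payload["deviceId"]  # devices send identity in every request
        routed.append({"deviceId": device_id, "target": route_target(payload)})
    # A real function would forward each message (e.g. to SNS/SQS);
    # here we only report how many records were processed.
    return {"routed": len(routed)}
```

Because the function holds no state between invocations, the platform can scale it horizontally with the incoming message rate.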
Entity Synchronization - Tracking the status of shared entities like a customer or user is cumbersome in the microservices world, given its characteristic separate datastore per microservice. Additionally, point-to-point REST calls tend to increase coupling between services. As an alternative, each service can publish events, and domain-specific state machines built with serverless components can simplify synchronization of entity status. In this case, an AWS Step Functions state machine combined with Lambda functions can perform this role.
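The core of such a state machine can be sketched in a few lines. The entity states and event names below are hypothetical, chosen only to mirror the kind of transition table a Step Functions definition would encode for a shared "customer" entity:

```python
# Hypothetical status transitions for a shared "customer" entity.
# Each key is (current_status, event_type); the value is the next status.
TRANSITIONS = {
    ("PENDING", "customer.verified"): "ACTIVE",
    ("ACTIVE", "customer.suspended"): "SUSPENDED",
    ("SUSPENDED", "customer.reinstated"): "ACTIVE",
}

def apply_event(status, event_type):
    """Return the next entity status; irrelevant events leave status unchanged."""
    return TRANSITIONS.get((status, event_type), status)
```

Centralizing these transitions means no service needs to poll its peers: each one emits events, and the state machine is the single place where entity status is resolved.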
Event enrichment - SaaS applications can push entity change notifications to registered consumers; e.g. Zendesk can send a notification for every ticket created. An application can stream entity changes to an event bus, but consumers may need more attributes than the event object carries. In such cases, serverless components can be wired to the event bus to enrich events and publish customized events to consumers. Cloud platforms usually make it easy to integrate serverless components with an event bus, with push or pull configurations. With this approach, applications stay loosely coupled to the notification process, adding the agility to customize events per consumer.
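A bare-bones enrichment function might look like the following. The `category`/`priority` attributes and the in-memory lookup table are assumptions standing in for a real datastore or SaaS API call; a deployed version would republish the enriched event to the bus instead of returning it.

```python
# Hypothetical lookup table standing in for a datastore or SaaS API call.
TICKET_PRIORITIES = {"billing": "high", "general": "low"}

def enrich(event):
    """Add consumer-facing attributes missing from the raw notification."""
    detail = dict(event["detail"])  # copy so the raw event stays untouched
    detail["priority"] = TICKET_PRIORITIES.get(detail.get("category"), "low")
    return {**event, "detail": detail}

def handler(event, context):
    enriched = enrich(event)
    # In a real deployment this would publish back to the event bus
    # (e.g. EventBridge put_events); here we simply return the result.
    return enriched
```

Because enrichment lives in its own function, each consumer-specific variant can be added or changed without touching the producing application.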
Authorization filter - Typically, a single service handles authentication in the microservices world. Other services reuse this authentication service and end up duplicating glue code, and changes to that glue logic need careful coordination across teams. This logic can be abstracted into a serverless component associated with an API endpoint. In AWS, for example, APIs exposed through AWS API Gateway can attach a Lambda as a custom authorizer to secure endpoints.
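Sketched below is the shape of such a custom authorizer: it receives the caller's token and returns an IAM policy allowing or denying the invocation. The `VALID_TOKENS` set is a placeholder assumption; real code would verify a JWT or call the central authentication service.

```python
# Placeholder token store for illustration only; a real authorizer
# would validate a signed token or query the authentication service.
VALID_TOKENS = {"secret-demo-token"}

def handler(event, context):
    """Sketch of an API Gateway Lambda (custom) authorizer."""
    token = event.get("authorizationToken", "")
    effect = "Allow" if token in VALID_TOKENS else "Deny"
    return {
        "principalId": "device-client",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event.get("methodArn", "*"),
            }],
        },
    }
```

Once attached to the gateway, every endpoint gets the same authorization check without any service duplicating the glue code.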
Event based scaling - Applications follow various scaling and auto-scaling mechanisms, and many of them may not suit every situation given the time it takes to scale resources; there is a risk of losing traffic if auto-scaling takes too long to trigger a scaling strategy. If scaling patterns can be predicted from a sequence of events, serverless components can automate the decision to trigger a scaling strategy. For example, the last day of the month can be an important event for banks, which must be ready for employees checking the salaries credited to their accounts. Such predictable events offer sufficient time to over-provision cloud infrastructure and prepare applications for the anticipated spike in load, relieving some burden from operational teams.
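The decision logic for the month-end example can be sketched as a small scheduled function. The baseline and surge instance counts are illustrative assumptions, not recommendations; a real function would call the cloud provider's scaling API with the computed capacity.

```python
import datetime

def is_month_end(today):
    """True on the last day of the month - the hypothetical high-load trigger."""
    return (today + datetime.timedelta(days=1)).month != today.month

def desired_capacity(today, baseline=4, surge=12):
    # baseline/surge counts are illustrative placeholders; a scheduled
    # function would pass the result to an auto-scaling API call.
    return surge if is_month_end(today) else baseline
```

Running this on a daily schedule gives the platform hours, rather than minutes, to provision capacity ahead of the predicted spike.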
At the outset, serverless seems to solve many pressing concerns of custom services deployed in an IoT solution. However, there are constraints to its usage as well. Serverless components are priced by how functions are consumed, generally defined by the number of invocations of a function in a month, the execution time of each invocation and the memory consumed by each invocation.
Each cloud provider prices serverless components slightly differently, which is why constraints and limits vary per cloud.
In terms of allowed invocations per region, AWS Lambda has a limit of 3000 invocations per second, though this limit varies for some regions. Google Cloud Functions are limited to 1 million per second, while Azure Functions sets no such limit.
Similarly, the permitted memory per function varies quite a bit: AWS Lambda limits it to 3008 MB, Azure to 1500 MB and Google constrains it to 2048 MB per function invocation.
Finally, the execution time allowed per invocation is 15 minutes for AWS Lambda, 10 minutes for Azure Functions and 9 minutes for Google Cloud Functions.
Disclaimer: The numbers presented here are for indicative comparison purposes only. These limits are updated from time to time and can also be modified with dedicated virtual machines or by moving across tiers.
Serverless certainly amplifies developer productivity, but it can also inflate cloud spend. As described above, serverless pricing combines compute and memory consumption; it is a finer-grained form of pay-per-use, effectively pay-per-execution-time. This is why CPU and memory consumption needs careful attention in a serverless architecture. Business metrics, such as cost per tenant, can be a good measure for comparing serverless with container-based options. Ultimately, architects need to balance these forces and ensure serverless stays within monthly budgets while delivering scalability and developer convenience.
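The three billed dimensions described above (invocations, execution time, memory) combine into a simple cost estimate. The per-GB-second and per-request rates below are illustrative placeholders, not current list prices from any provider:

```python
def monthly_cost(invocations, avg_ms, memory_mb,
                 rate_per_gb_s=0.0000166667, rate_per_request=0.0000002):
    """Estimate monthly serverless cost from the three billed dimensions.

    Rates are illustrative placeholders, not any provider's actual prices.
    """
    # Compute usage in GB-seconds: duration (s) x memory (GB) per invocation.
    gb_seconds = invocations * (avg_ms / 1000.0) * (memory_mb / 1024.0)
    return gb_seconds * rate_per_gb_s + invocations * rate_per_request
```

For instance, a million invocations a month at 200 ms and 512 MB each works out to roughly a couple of dollars under these assumed rates; dividing such an estimate by tenant count gives the cost-per-tenant metric suggested above for comparison with container-based options.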