When most people talk about serverless architecture, what first comes to mind are cloud-based services such as Lambda. In the cloud, serverless lets you run virtually unlimited numbers of functions on-demand, using a cost-efficient payment model.
However, not all serverless frameworks reside in the cloud. There are some that can be deployed on-premises, such as OpenWhisk and Fission.io.
Why would you deploy serverless on-premises? And which special considerations do you need to take into account when doing this? Let’s explore those questions in this article.
Difference between on-premises and cloud-based serverless
With cloud-based serverless architecture, application code is deployed to a cloud provider such as Amazon Web Services Lambda or Microsoft Azure Functions, and is then triggered by specified events, ranging from HTTP requests to GitHub webhooks, depending on what the cloud provider supports. Cloud-based serverless architecture is useful because it reduces costs through its pay-as-you-use model.
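To make "triggered by specified events" concrete, a cloud function is typically just a handler that receives the event payload and returns a response. The sketch below assumes a Python handler in the AWS Lambda style, invoked with an HTTP-like event; the event fields and greeting logic are illustrative, not any provider's exact contract.

```python
import json

def handler(event, context):
    # The platform invokes this function with the event payload
    # (e.g., an HTTP request routed through an API gateway).
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The same handler code runs whether one request arrives per day or thousands per second; the platform, not the application, decides how many instances to spin up.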
In contrast, with on-premises serverless architectures, serverless functions are hosted on local infrastructure instead of being run as a hosted service in the cloud. Application code is deployed to local servers and is triggered into running through a specified event.
On-premises serverless is available in different forms for different companies. Cloud management solutions such as Platform9 allow users to deploy managed serverless platforms on top of their managed cloud: Platform9 provides a multi-tenant management layer as a cloud service but does not run the workloads itself, which allows users to run their workloads on-premises. Open source options such as OpenWhisk are also deployable on-premises.
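As a concrete illustration of the open source route, an OpenWhisk action is an ordinary function that accepts a dictionary of parameters and returns a dictionary. A minimal Python action might look like the following (the greeting logic is purely illustrative):

```python
def main(args):
    # OpenWhisk passes invocation parameters as a dict
    # and expects a JSON-serializable dict in return.
    name = args.get("name", "world")
    return {"greeting": f"Hello, {name}!"}
```

Once your on-premises OpenWhisk installation is running, the action can be registered and invoked with the wsk CLI, e.g. `wsk action create hello hello.py` followed by `wsk action invoke hello --result --param name dev` — the same workflow you would use against a hosted OpenWhisk service.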
Why I would choose on-premises
Although not having to manage physical servers and having virtually unlimited scalability are the main appeals of cloud-based serverless architecture, there are some significant benefits that come from on-premises serverless.
- Avoiding cloud vendor lock-in: With cloud-based serverless, your application is completely dependent on a third-party provider, which means that you are counting on that provider’s continued availability, and accounting for their costs (which might be subject to change). Changing your provider would almost certainly result in major changes to your application. With an on-premises implementation, however, the risk that comes from vendor lock-in is reduced as the workload is run locally.
- Security: For companies that handle very sensitive data, a cloud-based solution may not be optimal. Most service providers are multi-tenant, which means that they run software for different customers on the same physical server. Even if workloads are isolated through virtual machines or containers, security flaws or failures in neighboring applications may negatively affect the availability and overall performance of your application code. Running your workload on a local on-premises server reduces these risks and keeps your data under your direct control.
- Efficiency and overhead cost: Running a workload on a dedicated local server may be much less expensive in the long run than running long or frequent tasks on cloud-based serverless architecture. In addition, on-premises serverless reduces under-utilization of infrastructure when application code runs only intermittently, since multiple applications can share a small number of servers.
Conclusion
While on-premises serverless architectures might not receive as much attention as cloud-based serverless deployments, they are a viable option worth considering once your IT team has the expertise to build and operate a serverless platform. So when you’re planning your serverless strategy, don’t limit yourself to the cloud; consider whether an on-premises serverless deployment could deliver security and cost benefits for your organization.