How to push PaaS usage beyond 12-factor apps

Platform-as-a-service (PaaS) emerged as a leading force in the ever-evolving quest to streamline software development. PaaS dates back to 2006 with Force.com, followed by Heroku, AWS Elastic Beanstalk, and DotCloud, which later transformed into Docker.

While the PaaS sector accounts for a substantial $170 billion share of the cloud market, companies still grapple with manual deployment and workload life-cycle management today. So why isn’t platform-as-a-service more widely adopted?

Providing a PaaS experience across all workloads

PaaS platforms could be more versatile, and I am not speaking of language and framework compatibility. While PaaS is often defined as a one-stop shop for deploying any application, there is a catch: by “applications,” what is usually meant is 12-factor applications.

However, many workloads don’t neatly fit the mold of typical web apps; they come with unique requirements, such as batch processing jobs, high-performance computing (HPC) workloads, GPU-intensive tasks, data-centric applications, or even quantum computing workloads.

I won’t go over all the advantages that PaaS provides. Still, companies should be able to manage all their workloads in the easiest way possible, and abstracting away their deployment and management is the way to get there.

A shift is needed. First, companies embracing the PaaS paradigm must recognize that there won’t be a one-size-fits-all workload solution. In a recent discussion on the topic, former Google engineer Kelsey Hightower reinforced this notion, arguing that a single, all-encompassing PaaS remains improbable.


He also used the term “workload API” to designate a tool that provides this seamless “here is my app, run it for me” experience. I like the term because it clearly states the intent: to run a specific workload. “Platform-as-a-service,” by contrast, is less precise and feeds the confusion that PaaS is a silver bullet for running anything. I will use “workload API” for the remainder of the article.

The second change for companies wanting to provide a seamless deployment and management experience for all their workloads is accepting that each workload type should have its own workload API. For example, AWS Lambda could be used for batch jobs, Vercel for front ends, Vertex AI for machine learning models, and Korifi for web apps.
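To make the idea concrete, here is a minimal sketch of that per-workload routing in Python. The class and workload names are hypothetical stand-ins for thin wrappers around real services (AWS Lambda for batch, Korifi for web apps, and so on), not actual SDK calls:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    kind: str  # e.g., "batch", "web-app", "ml-model"

class WorkloadAPI:
    """One 'here is my app, run it for me' entry point per workload type."""
    def deploy(self, workload: Workload) -> str:
        raise NotImplementedError

class BatchAPI(WorkloadAPI):
    # In practice this would wrap a service like AWS Lambda.
    def deploy(self, workload: Workload) -> str:
        return f"batch job '{workload.name}' submitted"

class WebAppAPI(WorkloadAPI):
    # In practice this would wrap a platform like Korifi.
    def deploy(self, workload: Workload) -> str:
        return f"web app '{workload.name}' deployed"

# Each workload type maps to its own workload API.
ROUTES: dict[str, WorkloadAPI] = {
    "batch": BatchAPI(),
    "web-app": WebAppAPI(),
}

def deploy(workload: Workload) -> str:
    api = ROUTES.get(workload.kind)
    if api is None:
        raise ValueError(f"no workload API registered for '{workload.kind}'")
    return api.deploy(workload)
```

The point of the sketch is the shape, not the implementation: developers see one `deploy` verb, while the routing table hides which provider handles which workload type.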

Now, let’s explore how to pick workload APIs.

Managing the lock-in concern

While vendor lock-in can be a valid worry, it is more manageable than it appears.

Look for platforms that follow standards. Many providers built their services on a shared foundation: BOSH, SAP Business Technology Platform, and VMware Tanzu are all based on open source Cloud Foundry, which makes transitioning between these platforms more manageable.

Another option is picking a tool that works with containers, which act as standardized building blocks, making it easier to migrate between workload APIs that support containers without encountering significant roadblocks.

Look for workload APIs that are GitOps-based, or at least GitOps-compatible, because any codebase, no matter the workload type, generally lives in a Git repository. Examples of GitOps platforms include Weaveworks and Kubefirst.
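The GitOps model underlying these platforms is simple to state: the Git repository holds the desired state, and a reconciler continuously converges the running state toward it. A minimal sketch, with both states reduced to plain dictionaries rather than real cluster objects:

```python
def reconcile(desired: dict[str, str], actual: dict[str, str]) -> list[str]:
    """Return the actions needed to make `actual` match `desired`.

    `desired` comes from the Git repository (the source of truth);
    `actual` is what is currently running. Real GitOps engines run
    this loop continuously against a cluster; here it is one pass.
    """
    actions = []
    for name, version in desired.items():
        if name not in actual:
            actions.append(f"create {name}@{version}")
        elif actual[name] != version:
            actions.append(f"update {name} to {version}")
    for name in actual:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions
```

Because the desired state is just files in Git, every change to a workload is reviewable, revertible, and auditable, which is exactly why GitOps compatibility reduces lock-in.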

For machine learning models, look for workload APIs that support common ML frameworks like open source TensorFlow, PyTorch, or scikit-learn. Also, make sure they accept open source formats like ONNX or PMML. Finally, another bonus is compatibility with platforms designed to help manage the machine learning life cycle, such as open source MLflow and Kubeflow.
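Before committing to an ML workload API, that framework-and-format checklist can be turned into a mechanical screen. A sketch with entirely made-up provider names and capability sets (the real data would come from each provider’s documentation):

```python
# What the model being migrated actually needs.
REQUIRED = {"framework": "pytorch", "format": "onnx"}

# Illustrative capability matrix; "provider-a" and "provider-b"
# are hypothetical, not real vendors.
PROVIDERS = {
    "provider-a": {"frameworks": {"tensorflow", "pytorch"}, "formats": {"onnx"}},
    "provider-b": {"frameworks": {"scikit-learn"}, "formats": {"pmml"}},
}

def compatible(caps: dict) -> bool:
    """True if the provider supports the required framework and format."""
    return (REQUIRED["framework"] in caps["frameworks"]
            and REQUIRED["format"] in caps["formats"])

matches = [name for name, caps in PROVIDERS.items() if compatible(caps)]
```

The same screen extends naturally to the life-cycle tooling mentioned above: add an `mlflow` or `kubeflow` capability flag and filter on it too.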

While I cannot review every possible workload, integration with open source solutions and open standards is the common thread.

Finally, if you still need to decide between a few options, look at the exit routes. Workload API providers want your business and often build migration tools to import your workloads from another platform. Make sure to consider this in your selection process.

Making it cost-effective

While workload API provider costs are constantly decreasing, they can be higher than running your own servers. However, consider the equation with the TCO (total cost of ownership) in mind.

As mentioned above, paying for a workload API service will likely be more expensive than running your own infrastructure. Still, you would also need to hire a team large enough to build, maintain, improve, and secure that infrastructure. Also, companies specializing in workload APIs will provide a better solution, with less downtime and better scalability and performance, leading to more cost reductions down the road.
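The TCO argument is easy to illustrate with arithmetic. All the figures below are invented for the example; the point is only that the infrastructure line item is not the whole equation:

```python
def tco(infra: float, team: float, downtime: float) -> float:
    """Total cost of ownership: infrastructure + people + downtime."""
    return infra + team + downtime

# Illustrative annual figures (made up for this sketch):
# the managed option has a higher sticker price, but needs only a
# fraction of a platform engineer; self-hosting needs a small ops team.
managed = tco(infra=120_000, team=30_000, downtime=5_000)
self_hosted = tco(infra=60_000, team=240_000, downtime=25_000)
```

With these numbers, the managed option doubles the infrastructure bill yet still comes out far cheaper once people and downtime are included. Your figures will differ, but the structure of the comparison should not.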

For some workloads, such as machine learning or HPC (high-performance computing), the cost of buying the hardware can be prohibitive, or the hardware might not be available at all. For example, with the rise of LLMs, there has been strong demand for machines that can run GPU-based workloads, leading to shortages. Workload API providers, on the other hand, may own or reserve the computing capacity they need to serve their customers, meaning higher availability.

You should also bake in the potential revenue impact of opportunity loss. Your competition might be using workload APIs, increasing the velocity at which they can test and deploy new features and provide a smoother customer experience, ultimately driving more business to their product.

Finally, using open source options — such as Dokku and Korifi for web app workloads — is an alternative solution. They allow you to control the infrastructure and operation costs without paying for a software license. Large organizations with specific needs can also modify the code as they see fit or use the software as building blocks.

Understanding what is under the hood

While abstracting the complexity from end users is essential, understanding the inner workings of your chosen workload API is equally crucial. Ensure it aligns with all your technical requirements, spanning scalability, security, and reliability. A deep understanding will also allow you to troubleshoot effectively when things go awry.

And because workload APIs are, by nature, straightforward to set up and use, do not hesitate to try them out to verify if they can accommodate the workload you are trying to migrate or if they fit the requirements you have in mind. It will often be faster than going through the marketing material and documentation.

Watch out for data compliance

With 70% of countries having data compliance legislation in place, this is not a topic that can be ignored. If you are not hosting your PaaS yourself, ensure that the provider’s data compliance and security standards match the regulations you must abide by. While every jurisdiction has its own requirements, looking at data security and encryption, residency, privacy policies, and retention is a good start.

Once you have multiple workload APIs in place, you can take control of some of the underlying parts — for example, by putting an infrastructure-as-code (IaC) layer between your workload APIs and the public cloud to provision resources. This way, you can ensure that any security or compliance requirements are met across your workloads. Tools such as Terraform, OpenTofu, and Pulumi are popular choices.
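One concrete guardrail that layer can enforce: every resource any workload API provisions must carry the tags your regulations require. A minimal sketch in Python, with assumed tag names (real IaC tools express this as policy-as-code, but the check is the same):

```python
# Tag names are assumptions for this example; substitute whatever
# your jurisdiction and internal policies actually require.
REQUIRED_TAGS = {"data-residency", "owner", "retention-policy"}

def violations(resources: list[dict]) -> list[str]:
    """Return the names of resources missing any required compliance tag."""
    return [r["name"] for r in resources
            if not REQUIRED_TAGS <= set(r.get("tags", {}))]
```

Running a check like this in CI, before the IaC layer applies any plan, catches compliance gaps uniformly across every workload API instead of relying on each provider’s own controls.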

Come up with a strategy

Once you’ve picked the tools, migrate progressively. Start with a noncritical workload to mitigate the impact of potential instability. Learn from these initial experiences before advancing to mission-critical workloads.

Once you’ve deployed a workload API, you must gauge user satisfaction — that is, developer experience. A tool is only as good as its adoption rate. If practitioners don’t like it, they may stop using it and find an alternative. This is especially true in large organizations.

Finally, ensure that the migrated workload has been through multiple life cycles to provide comprehensive coverage. For instance, verify that the workload API can handle sudden load spikes and that a backup and restoration operation ran successfully. This progressive approach ensures that you learn from potential failures and keep those lessons in mind when you migrate your other workloads.
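The progressive-migration gate described above can be made explicit: a workload is only promoted to a more critical tier once it has passed every life-cycle check on the new workload API. A sketch, with an illustrative (not exhaustive) check list:

```python
# Illustrative life-cycle checks; extend with whatever your
# requirements demand (failover, key rotation, cost review, ...).
LIFECYCLE_CHECKS = ("deploy", "load-spike", "backup", "restore")

def ready_to_promote(passed: set[str]) -> bool:
    """True only once every life-cycle check has passed on the new platform."""
    return set(LIFECYCLE_CHECKS) <= passed
```

A noncritical workload that has only been deployed and load-tested stays where it is; only a workload that has also survived a backup and restore cycle graduates, carrying its lessons to the next migration.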

The effort is worth it

Moving to workload APIs is worth it. Cost savings, faster development, better scalability, reduced downtime, and improved productivity are benefits that organizations are experiencing. It’s time to apply this concept to all your organization’s workloads.