4 enterprise developer trends that will shape 2021

Technology has dramatically changed over the last decade, and so has how we build and deliver enterprise software.

Ten years ago, “modern computing” meant teams of network admins managing data centers, running one application per server and deploying monolithic services through waterfall, manual releases managed by QA and release managers.

Today, we have multi and hybrid clouds, serverless services, continuous integration and infrastructure-as-code.

SaaS has grown from a nascent 2% of the $450B enterprise software market in 2009 to 23% in 2020, crossing $100B in revenue. PaaS and IaaS represent another $50B in revenue, expected to double to $100B by 2022.

With 77% of the enterprise software market — over $350B in annual revenue — still on legacy and on-premise systems, modern SaaS, PaaS and IaaS eating into the legacy market alone could grow the market 3x-4x over the next decade.

As the shift to cloud accelerates across the platform and infrastructure layers, here are four trends starting to emerge that will change how we develop and deliver enterprise software for the next decade.

1. The move to “everything as code”

Companies are building more dynamic, multiplatform, complex infrastructures than ever. We are seeing the “-aaS” of the application, data, runtime and virtualization layers. Modern architectures have to be extensible enough to work with any number of mixed-and-matched services.

Traditionally, we have relied on automation and configuration tools such as scripts and cron jobs to manage and orchestrate workflows between different systems and services, e.g., running a data pipeline or provisioning a new server. These tools are built on a series of daisy-chained rules that lack real versioning, testing or self-healing, and they require constant DevOps intervention to keep running.

With limited engineering budgets and resource constraints, CTOs and VPs of Engineering are now looking for ways to free up their teams: moving from manual, time-consuming, repetitive work to programmatic workflows, where infrastructure is written as code and owned by developers.

Companies like HashiCorp, BridgeCrew and exciting new open-source projects are introducing new ways of building, managing and operating every layer of the developer stack using the same version-controlled, immutable, maintainable, programmatic patterns we have grown so accustomed to in software development.
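
To make the contrast with scripts and cron jobs concrete, here is a minimal, self-contained sketch of the declarative, desired-state pattern these tools popularize. It is written in plain Python for readability; the resource names and the in-memory “actual state” are illustrative assumptions, not any real provider’s API.

```python
# A toy sketch of "infrastructure as code": desired state lives in version
# control as data, and a reconciliation loop converges actual state toward it.
# Everything here is illustrative; real tools call cloud provider APIs instead
# of printing.

from __future__ import annotations
from dataclasses import dataclass


@dataclass(frozen=True)
class Server:
    name: str
    size: str


# Desired state: declarative, reviewable, testable -- checked into git.
desired_servers = {
    Server(name="api-1", size="medium"),
    Server(name="worker-1", size="small"),
}

# Actual state: what is currently "running" (stubbed for this sketch).
actual_servers = {
    Server(name="api-1", size="medium"),
    Server(name="old-batch-1", size="large"),
}


def reconcile(desired: set[Server], actual: set[Server]) -> None:
    """Diff desired vs. actual state and act on the difference."""
    for server in desired - actual:
        print(f"create {server.name} ({server.size})")  # provider API call in real life
    for server in actual - desired:
        print(f"destroy {server.name}")                 # provider API call in real life


if __name__ == "__main__":
    reconcile(desired_servers, actual_servers)
    # Prints:
    #   create worker-1 (small)
    #   destroy old-batch-1
```

The point is less the toy diff than the properties it inherits: the desired state is data that can be code-reviewed, versioned and unit-tested, and the reconciliation loop replaces the daisy-chained scripts that otherwise need constant babysitting.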

2. Death of the DevOps engineer

DevOps has been a transformational approach to the modern developer stack. Teams can develop, deploy and rapidly iterate without the traditional friction of release managers, waterfall builds, DBAs, siloed departments and more. It has led to innovations powering faster, more scalable software delivery.

The ethos behind DevOps was always to bring developers and operations together. Any changes to infrastructure were to be developed and released as code and made available for use by any developer. If done correctly, operating the infrastructure and releasing software could be treated the same as any other codebase.

DevOps was always intended to be an approach, not a role. Unfortunately, we went awry somewhere. Today we have teams of DevOps engineers as large as the application or data teams. The bifurcation into a role has led to a number of unfortunate side effects. DevOps engineers have become gatekeepers of the infrastructure, and much of their role is taken up by just keeping it running. Setting up new clusters, environments or pipelines still requires a DevOps engineer, and deploying new infrastructure means scaling up DevOps headcount. Whether it’s the pace of change in the modern developer stack, some level of role preservation or a combination of the two, the result is a widening division between developers and operations.

That is about to change. We need to go back to DevOps as a way of bringing the building and delivering of software together, not separate.

Similar to how the QA role folded into the developer role as functional and unit tests became standard, DevOps will follow the same path. In a resource- and budget-challenged engineering organization, every available headcount will go to developers pushing code and building robust software. That doesn’t necessarily mean the positions will be eliminated, but the ethos and definitions of the roles will change.

The result will be a return to having only developers: application developers who build and monitor new services, infrastructure developers who deploy and monitor new infrastructure, and data developers who create and monitor new data flows, all enabled by tools and services that let developers leverage infrastructure as code and manage “operations” as a feature. The future will bring “DevOps” back to its original ethos and give birth to the infrastructure engineer, focused on building infrastructure through code.

3. Introduction of the virtual private cloud as-a-Service

The long-standing challenge enterprises face with managed services (SaaS/PaaS/IaaS) run on public clouds is their multitenant nature.

An enterprise’s data is increasingly one of its most strategic assets. The risks of data leakage, security breaches, regulatory exposure and cost have driven enterprises to adopt hybrid environments. Hybrid clouds are combinations of public clouds for specific services and storage, and private clouds, which are configurable pools of shared resources kept on-premise by the enterprise.

The virtual private cloud (VPC) has emerged as an alternative for meeting the data security and performance challenges enterprises face. A VPC is an isolated environment within a public cloud, effectively a private cloud inside a public cloud without the IT and operational overhead of bare metal and resource management. With VPCs, enterprises get the public cloud benefits of on-demand infrastructure and reduced operational overhead while maintaining data, resource and network isolation.
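
To give a sense of what that isolation looks like in practice, here is a rough sketch using AWS’s boto3 SDK for Python. The region, CIDR ranges and names below are illustrative assumptions, it assumes AWS credentials are configured, and a real deployment would layer routing, private endpoints and IAM on top; the point is simply that the service runs inside address space and security boundaries the enterprise controls.

```python
# Rough sketch: carve out an isolated VPC inside a public cloud account,
# with a dedicated subnet and a security group that only allows traffic
# originating inside the VPC. Requires AWS credentials; values are examples.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Isolated private address space inside the public cloud.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# A subnet for the managed service's data plane.
subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

# Security group that only permits traffic from within the VPC itself.
sg_id = ec2.create_security_group(
    GroupName="vpc-internal-only",
    Description="Allow traffic only from inside the VPC",
    VpcId=vpc_id,
)["GroupId"]

ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "-1",                       # all protocols
        "IpRanges": [{"CidrIp": "10.0.0.0/16"}],  # only the VPC's own address range
    }],
)

print(f"VPC {vpc_id}, subnet {subnet_id}, security group {sg_id}")
```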

But VPCs aren’t the end-all solution. VPCs often don’t have all the features or capabilities of the regular public cloud SaaS offering, given that each instance runs in isolation. Monitoring and uptime are specific to each instance, meaning that if your VPC is down, the same is likely not true for other customers, which often leads to slower resolution times. The age-old “cattle, not pets” methodology for managing servers relies on dispensable systems with immediate redundancy and failover. It only works for a system with resource flexibility, which a VPC by nature does not have.

Welcome to the era of the VPC-as-a-Service: a fully managed environment and service, offering the performance, reliability and scale of a multitenant public cloud service, but with the data security, namespaces and isolation of a virtual or on-premise private cloud. These services will offer network isolation, role-based access management, bring-your-own SSO/SAML and end-to-end encryption, but operate like cattle, not pets. Offerings such as MongoDB’s Atlas are becoming the reference architecture for enterprise-friendly “-aaS” offerings: performant, reliable, scalable and ultra-secure.

4. The new age of open-source infrastructure unicorns

As technologies and architectures shift or grow stagnant, the open-source community is often the catalyst for new methods and approaches. The last 10 years have seen a remarkable focus and reinvention of the data, runtime, middleware, OS and virtualization layers. As a result, the open-source community is responsible for creating many billion-dollar companies, including Confluent, Databricks, MuleSoft, Elastic, MongoDB, Cockroach Labs, Kong, Acquia, HashiCorp, Couchbase, Puppet Labs, WP Engine, Mapbox, Fastly, DataStax and Pivotal.

Today the cloud still accounts for only a fraction of the $450B enterprise software market. As the cloud continues to evolve, so do the reference architectures that sit on top of it. Five years ago, we couldn’t conceive of serverless services running in containers, spanning clouds, segmented into microservices and scaled on demand.

As architectures shift, so do budgets. In the coming months and years, we will see a new wave of migrations off legacy, proprietary and on-premise systems that are becoming choke points or single points of failure.

These migrations will focus specifically on replacing the infrastructure, network, storage and data flow layers (the systems that power our services and haven’t been touched in years) with IaaS and PaaS offerings: from legacy ETL systems, data services, gateways, network and storage management, CASB and WAF, to the reinvention of vertical services from the likes of Oracle, SAP, SAS and IBM. IaaS and PaaS will represent the fastest-growing segments of the cloud.

Enterprises will demand the freedom to mix and match managed and hosted services. They will demand a lower total cost of ownership from services that are cloud-native by design, fit modularly into any environment and are written as code.

Open-source projects (and their commercial developers) such as Druid (Imply), Arrow (Dremio), Flink (Ververica) and others will emerge as the open-source leaders to power the infrastructure for the next generation of enterprise software.

2021 and beyond

Companies such as HashiCorp and MongoDB are just scratching the surface of these emerging trends. As the Fortune 5000 accelerates its spend on the cloud, a new generation of companies will emerge to modernize the way software is delivered. We will finally say goodbye to the days of complex multi and hybrid cloud orchestration, segmented DevOps teams, manual remediation and constant operator intervention. We will say hello to programmatic infrastructures, expressed in end goals and desired states, built by developers for developers.

It’s time to shift focus from delivering software to delighting customers. Welcome to a new era of modern software delivery.