Evolution of business logic from monoliths through microservices, to functions
The whole point of running application software is to deliver business value of some sort. That business value is delivered by creating business logic and operating it so it can provide a service to some users.
The time between creating business logic and providing service to users with that logic is the time to value. The cost of providing that value is the cost of creation plus the cost of delivery.
In the past, costs were high and efficiency concerns dominated, with a long time to value regarded as the normal state of affairs. Today, as organizations measure and optimize their activities, time to value is becoming a dominant metric, driven by competitive pressures and enabled by advances in technology and reductions in cost.
Put another way, to increase return on investment you need to find ways to increase the return, start returning value earlier, or reduce the investment. When costs dominate, that’s where the focus is, but as costs reduce and software impact increases, the focus flips towards getting the return earlier.
Monolithic application
As technology has progressed over the last decade, we've seen an evolution from monolithic applications to microservices, and are now seeing the rise of serverless event driven functions, led by AWS Lambda. What factors have driven this evolution? Low latency messaging enabled the move from monoliths to microservices, and low latency provisioning enabled the move to Lambda.
To start with, ten years ago, a monolithic application was the best way to deliver business logic, for the constraints of the time. Those constraints changed, and about five years ago the best option shifted to microservices. New applications began to be built on a microservices architecture, and over the last few years, tooling and development practices changed to support the microservices evolution.
Today, another shift is taking place to event driven functions — because the underlying constraints have changed, costs have reduced, and radical improvements in time to value are possible.
In what follows, we’ll look at different dimensions of change in detail: delivery technology, hardware capabilities, and organizational practices, and see how they have combined to drive this evolution.
The early days of process optimization
At the start of this journey, the cost of delivery dominated. It took a long time to procure, configure and deploy hardware, and software installations were hand crafted projects in their own right.
To optimize delivery, the best practice was to amortize this high cost over a large amount of business logic in each release, and to release relatively infrequently, with a time to value measured in months for many organizations. Given long lead times for infrastructure changes, it was necessary to pre-provision extra capacity in advance, and this led to very low average utilization.
The first steps to reduce cost of delivery focused on process automation. Many organizations developed custom scripts to deploy new hardware, and to install and update applications.
The DevOps movement
Eventually common frameworks like Puppet and Chef became popular, and “infrastructure as code” sped up delivery of updates. The DevOps movement began when operations teams adopted agile software development practices and worked closely with developers to reduce time to value from months to days.
Scripts can change what’s already there, but fast growing businesses or those with unpredictable workloads struggled to provision new capacity quickly. The introduction of self service API calls to automatically provision cloud capacity using Amazon EC2 solved this problem.
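To make this concrete, here is a minimal sketch of that kind of self-service provisioning call using boto3, the AWS SDK for Python; the AMI ID, instance type, and tags are illustrative placeholders rather than anything from a real workflow.

```python
# A minimal sketch of self-service capacity provisioning with boto3,
# the AWS SDK for Python. The AMI ID, instance type, and tag values
# are placeholders, not recommendations.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=4,  # ask for more instances when the workload grows
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "team", "Value": "checkout"}],
    }],
)
print([instance.id for instance in instances])
```

A call like this returns running capacity in minutes rather than weeks, which is what made just-in-time provisioning practical.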
When developers got the ability to directly automate many operations tasks using web services, a second wave of DevOps occurred. Operations teams built and ran highly automated API driven platforms on top of cloud services, providing self service deployments and autoscaled capacity to development teams.
The ability to deploy capacity just-in-time, and pay by the hour for what was actually needed, allowed far higher average utilization, and automatically handled unexpected spikes in workloads.
The age of containers
Another wave of optimization arrived when Docker made containers easy enough for everyone to use. Docker containers provide a convenient bundled package format that includes a fixed set of dependencies, a runtime that gives more isolation than a process but less than a virtual machine instance, startup times measured in seconds, and a substantial saving in memory footprint.
By packing many containers onto an instance, and rounding off run times to minutes or seconds instead of hours, even higher utilization is possible. Container based continuous delivery tooling also sped up the work of developers and reduced time to value.
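As a rough illustration of that packing (not taken from the article itself), here is a sketch using the Docker SDK for Python; the image name, container names, and resource limits are hypothetical.

```python
# A minimal sketch using the Docker SDK for Python (the 'docker'
# package); the image, names, and resource limits are placeholders.
import docker

client = docker.from_env()

# Pack several small, resource-capped containers onto one host. Each
# starts in seconds and shares the host kernel, so the memory
# footprint is far smaller than running one virtual machine apiece.
for i in range(4):
    client.containers.run(
        "nginx:alpine",
        name=f"web-{i}",
        detach=True,
        mem_limit="64m",        # cap memory per container
        nano_cpus=250_000_000,  # roughly a quarter of one CPU each
    )

print([c.name for c in client.containers.list()])
```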
When there's a reasonably predictable amount of work coming in, containers can be run at high utilization levels; however, many workloads are spiky or drop to zero for extended periods. For example, applications used in the workplace may only be active for 40 of the 168 hours in a week.
To maintain high availability, it’s usual to spread application instances over three availability zones, and even to require more than one instance per zone. The minimum footprint for a service is thus six instances. If we want to scale down to zero, we need a way to fire up part of an application when an event happens, and shut it down when it’s done.
This is a key part of AWS Lambda's functionality: it transforms spiky and low-usage workloads to effectively 100% utilization by charging only for the capacity that is actually used, in 0.1 second increments, and it scales from zero to very high capacity as needed. There's no need to think about or provision servers, which is why this is often called the serverless pattern.
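To show how little code is involved, here is a minimal sketch of an event driven function, assuming the standard AWS Lambda Python handler signature; the event fields are hypothetical.

```python
# A minimal sketch of an event driven function, assuming the standard
# AWS Lambda Python handler signature. Nothing runs (and nothing is
# billed) until an event arrives.
import json

def handler(event, context):
    # 'event' carries the trigger payload (an API request, a queue
    # message, a file upload notification, and so on); 'context'
    # carries runtime metadata such as the remaining execution time.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```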
Advances in delivery technology provide stepping stones for improvements in time to value, but there are other underlying changes that have caused a series of transitions in best practices over the last decade.
Advances in CPU and network technology
The optimal size for a bundle of business logic depends upon the relative costs in both dollars and access time of CPU, network, memory and disk resources, combined with the latency goal for the service.
For the common case of human end users waiting for some business logic to provide a service, the total service time requirement hasn’t changed much. Perception and expectations haven’t changed as much as the underlying technology has over the last decade or so.
CPU speed has increased fairly slowly over the last decade, as the clock rate hit a wall at a few GHz; however, on-chip caches are much larger, and the number of cores has increased instead. Memory speed and size have also made relatively slow progress.
Networks are now radically faster, common deployments have moved from 1Gbit to 10Gbit and now 25Gbit, and software protocols are far more efficient. When common practice was sending XML payloads over 1Gbit networks, the communication overhead constrained business logic to be co-located in large monolithic services, directly connected to databases.
A decade later, encodings that are at least an order of magnitude more efficient are being sent over networks that are 25 times faster, meaning that the cost of communication has been reduced by more than two orders of magnitude (roughly 10 times 25, or about 250x).
In other words, it’s possible to send 100 to 1000 messages between services in the same amount of time as communicating and processing one message would take a decade ago. This is a key enabler for the move away from monolithic applications.
Advances in storage & database technology
Storage and databases have also gone through a revolution over the last decade. Monolithic applications map their business logic to transactions against complex relational database (RDBMS) schemas that link together all the tables and allow coordinated atomic updates.
A decade ago, best practice was to implement a small number of large, centralized relational databases, connected via storage area networks to expensive disk arrays built from magnetic disks and fronted by large caches.
Today cached magnetic disks have been replaced by solid state disks. The difference is that reads move from slow, expensive, and unpredictable (as the cache hit rate varies) to consistently fast and almost unlimited. Writes and updates move from being fast on cached disks to unpredictable on solid state disks, due to wear-leveling algorithms and other effects.
New “NoSQL” database architectures have become popular for several reasons — but the differences that concern us here are that they have simple schema models and take advantage of the characteristics of solid state storage. Simple schemas force separation of the tables of data that would be linked together in the same relational database, into multiple independent NoSQL databases, driving decentralization of the business logic.
The Amazon DynamoDB datastore service was designed from the beginning to run only on solid state disk, providing extremely consistent low latency for requests. Apache Cassandra’s storage model generates a large number of random reads, and does infrequent large writes with no updates, which is ideally suited to solid state disks.
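As a hedged illustration of the simple schema model, here is a minimal sketch using boto3 against DynamoDB; the table and attribute names are hypothetical and not tied to any particular application.

```python
# A minimal sketch of the simple schema model, using boto3 against
# DynamoDB. The table and attribute names are hypothetical.
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
orders = dynamodb.Table("orders")  # one simple table owned by one service

# Items are written and read by key; there are no joins, so data that
# a relational schema would link together lives in separate tables
# (or separate databases) owned by the services that use it.
orders.put_item(Item={"order_id": "o-1001", "status": "SHIPPED"})
item = orders.get_item(Key={"order_id": "o-1001"}).get("Item")
print(item)
```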
Compared to relational databases, NoSQL databases are simple but extremely cost effective, highly available, and scalable, with very low latency. The growth in popularity of NoSQL databases is another key enabler for the move away from monolithic schemas and monolithic applications. The remaining relational core schemas are cleaned up, easier to scale, and are being migrated to services such as Amazon RDS and Aurora.
Moving from project to product
It's common to talk about "people, process, and technology" when we look at changes in IT. We've just seen how technology has taken utilization and speed of deployment to the limit with AWS Lambda: effectively 100% utilization, with deployments happening in a fraction of a second.
It has also made it efficient to break the monolithic code base into hundreds of microservices and functions, and to denormalize the monolithic RDBMS into many simple, scalable, and highly available NoSQL and relational data stores.
There have also been huge changes in "people and process" over the last decade. Let's consider a hypothetical monolith built by 100 developers working together. To coordinate, manage, test, and deliver updates to this monolith every few months, it's common to have more people running the process than writing the code.
Twice as many project managers, testers, DBAs, operators, and so on, organized in silos, driven by tickets, with a management hierarchy demanding that everyone write weekly reports and attend lots of status meetings as well as find time to code the actual business logic!
The combination of DevOps practices, microservices architectures, and cloud deployments went hand in hand with continuous delivery processes, cell-based "two-pizza team" organizations, and a big reduction in tickets, meetings, and management overhead. Small groups of developers and product managers independently code, test, and deploy their own microservices whenever they need to.
The ratio of developers to overhead reverses — with 100 developers to 50 managers. Each developer is spending less time in meetings and waiting for tickets, getting twice as much done with a hundred times better time to value.
A common shorthand for this change is a move from project to product. A large number of project managers are replaced with far fewer product managers. In my somewhat contrived example, 150 people are producing twice the output that 300 people used to. Double the return a hundred times sooner, on half the investment. Many organizations have been making this kind of transition and there are real examples of similar improvements.
The early days of functions
Lambda based applications are constructed from individual event driven functions that are almost entirely business logic — and there’s much less boilerplate and platform code to manage. It’s early days, but this appears to be driving another radical change.
Small teams of developers are building production ready applications from scratch in just a few days. They are using short simple functions and events to glue together robust API driven data stores and services. The finished applications are already highly available and scalable, high utilization, low cost and fast to deploy.
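As an illustrative sketch of that glue style (not a specific application from this article), here is a hypothetical function that reacts to S3 upload events by recording them in a DynamoDB table; the bucket, table, and attribute names are placeholders.

```python
# A hypothetical "glue" function: triggered by S3 "object created"
# events, it records each upload in a DynamoDB table. The table and
# attribute names are placeholders.
import boto3

uploads = boto3.resource("dynamodb").Table("uploads")  # placeholder table

def handler(event, context):
    # For each newly uploaded object, store its key and size so that
    # other functions and services can query them later.
    records = event.get("Records", [])
    for record in records:
        obj = record["s3"]["object"]
        uploads.put_item(Item={
            "object_key": obj["key"],
            "size_bytes": obj["size"],
        })
    return {"processed": len(records)}
```

There is no server, queue worker, or scaling policy to manage here; the availability, scaling, and pay-per-use behavior come from the managed services being glued together.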
As an analogy, think how long it would take to make a model house starting with a ball of clay, compared to a pile of Lego bricks. Given enough time you could make almost anything from the clay; it's expressive and creative, and there's even an anti-pattern for monolithic applications called the "big ball of mud".
The Lego bricks fit together to make a constrained, blocky model house that is also very easy to extend and modify, in a tiny fraction of the time. In addition, there are other bricks somewhat like Lego bricks, but they aren't popular enough to matter, and any kind of standard brick-based system will be much faster than custom-formed clay.
If an order of magnitude increase in developer productivity is possible, then my example 100-developer monolith could be rewritten from scratch and replaced by a team of ten developers in a few weeks. Even if you doubt that this would work, it's a cheap experiment to try it out. The invocation latency for event driven functions is one of the key limitations that constrains complex applications, but over time those latencies are reducing.
The real point I’m making is that the ROI threshold for whether existing monolithic applications should be moved unchanged into the cloud or rewritten depends a lot on how much work it is to rewrite them. A typical datacenter to cloud migration would pick out the highly scaled and high rate of change applications to re-write from monoliths to microservices, and forklift the small or frozen applications intact.
I think AWS Lambda changes the equation: it is likely to become the default way new and experimental applications are built, and it also makes it worth looking at doing a lot more rewrites.
I’m very interested in your experiences, so please let me know how you see time to value evolving in your environments.