The 5 S's of Shipping Software: Key Metrics That Matter | Pluralsight
Shipping more software doesn't always equal better quality! Learn what Richard Seroter, vice president of product marketing at Pivotal, considers to be the top 5 S's of shipping software effectively.
Apr 22, 2019 • 3 Minute Read
When it comes to shipping software, more doesn’t necessarily translate to better. Shortening your release cycle could potentially slow down your modernization or transformation, so it’s important to know what metrics can actually indicate how well you’re doing and whether you’re improving.
Richard Seroter, vice president of product marketing at Pivotal, shared the “five S’s” used by his company to think about outcomes and measure them in ways that are genuinely useful. Here’s an explanation of each of them and the questions you should be asking yourself and your team. Watch Richard’s full webinar for free.
Speed
You can go fast and ship awful products. The actual goal of developing with speed is shipping fast in small batches so you can learn, iterate and implement changes.
Speed for the sake of speed is usually just a sign of chaos and disorganization. The best companies take less than a day to go from “I have a change to make” to “that change is in production.”
Deployment frequency can be a vanity metric if you’re not operating with awareness. It’s easy to fall into the trap of setting a numerical deployment goal, checking in little changes to a style sheet and going to production — which isn’t a full deployment. Make sure your measurement of deployment frequency isn’t meaningless by basing it on the actual outcome, not just a number.
When thinking about speed, get specific about how you’ll measure it. For example, if you’re going to measure infrastructure provisioning time, ask yourself: How long will it take? Is your lead time for changes measured in hours, quarters or years? If you open a ticket for a new dev test environment or a performance test environment before production, what will those be measured in?
Scale
One of the best things about cloud computing is that you can spin up a variety of secondary environments. It’s also one of the worst parts of cloud computing, because it means there’s more stuff to patch, install, maintain, shut down and pay for. As you scale, the number of environments you have to manage grows with you.
And if you’re like a lot of larger enterprises, you’re often taking internal systems that were never built for APIs, mobile clients or crazy volumes in the first place and exposing them to exactly that. The question becomes: How do you handle this scale without overtaxing your teams with non-productive work?
How often your devs are actually coding is a valuable measurement for product managers and team leads. The best companies are at around 80%, but a Forrester report showed that average dev productivity is only around 40%. Think about that — going from 40% to 80% is like hiring another person!
Dev productivity isn’t about making people do more work—it’s about freeing them to do the work they were hired to do in the first place, which means reducing meaningless meetings and wasted time spent fiddling with infrastructure.
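The productivity math above is worth spelling out. Here’s a minimal sketch of the arithmetic, where the 40-hour week and the five-person team are assumptions for illustration; only the 40% and 80% figures come from the numbers quoted above:

```python
HOURS_PER_WEEK = 40  # assumed working week for the sketch
TEAM_SIZE = 5        # hypothetical team

def coding_hours(team_size, productive_fraction):
    """Effective hours per week the team spends actually coding."""
    return team_size * HOURS_PER_WEEK * productive_fraction

average = coding_hours(TEAM_SIZE, 0.40)  # Forrester average: 40%
best = coding_hours(TEAM_SIZE, 0.80)     # best-in-class: 80%

print(average)  # 80.0 effective coding hours per week
print(best)     # 160.0 -- double the output without hiring anyone
```

Same headcount, same payroll, twice the effective coding time: that’s why raising the productive fraction is like adding people.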
Stability
A lot of people think they’ll be more stable if they avoid making changes, but the data says otherwise. Speed and automation, combined with the right practices, are what enable you to move faster and be more stable at the same time. Speed and stability don’t have to compromise each other.
One of the first things you should look at when measuring stability is impact minutes. For example, if you're doing site reliability engineering and are thinking of error budgets, how much time are your end users being impacted by stability problems?
Any decently sized distributed system is always in a state of continuous partial failure—if you’ve got a thousand-node cluster, something's always going wrong. There’ll never be a day where everything’s just humming. But if you architect systems correctly, an end user never feels that. Impact minutes don’t necessarily relate to component downtime — you can have zero impact minutes even if there’s a tire fire happening in a quarter of your data center.
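To make the error-budget idea concrete, here’s a minimal sketch of how an availability target translates into allowed impact minutes. The 99.9% target and the 30-day window are assumptions for illustration, not figures from the webinar:

```python
def error_budget_minutes(slo, window_days=30):
    """Minutes of user-facing impact an SLO allows per window."""
    total_minutes = window_days * 24 * 60
    return (1 - slo) * total_minutes

budget = error_budget_minutes(0.999)  # hypothetical 99.9% target
print(budget)  # roughly 43.2 minutes of impact per 30-day window
```

Note that the budget counts minutes where end users actually feel the problem. A failed node in a well-architected cluster burns component uptime but zero budget.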
You should also be looking at your downtime during deployment. If you’re good at software, deployments are boring events that don’t call for celebratory cake. Ideally, you should be experiencing zero downtime during deployments. Maintenance windows should be a thing of the past, because most people use technology constantly, and there’s no real safe time to take down core systems.
So instead of trying to architect out component failure, accept failure and engineer through the overarching lens of stability and impact minutes. Look at mean time to recovery, not just mean time between failure. By measuring metrics related to incidents and displaying them prominently on corporate dashboards, you can learn to overcome the pain of failure while also showcasing progress to executives.
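Mean time to recovery is also simple to compute once you log incident start and recovery times. A minimal sketch, with entirely made-up incident timestamps:

```python
from datetime import datetime

# Hypothetical incident log: (started, recovered) pairs
incidents = [
    (datetime(2019, 4, 1, 9, 0), datetime(2019, 4, 1, 9, 20)),
    (datetime(2019, 4, 8, 14, 0), datetime(2019, 4, 8, 14, 5)),
    (datetime(2019, 4, 15, 3, 0), datetime(2019, 4, 15, 3, 35)),
]

def mttr_minutes(incidents):
    """Mean time to recovery across incidents, in minutes."""
    total_seconds = sum(
        (recovered - started).total_seconds()
        for started, recovered in incidents
    )
    return total_seconds / len(incidents) / 60

print(mttr_minutes(incidents))  # 20.0
```

Tracking this number over time, rather than just counting failures, rewards teams for recovering fast instead of punishing them for failures that are inevitable anyway.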
Security
Patch time isn’t a vanity metric — it’s a business success metric. And your ability to patch unpatched systems quickly will be critical to your success.
When measuring security, there are three “Rs” you should always keep in mind:
• Repair: How fast can you repair the operating system, platform and application code? Being able to do it quickly will limit your threat surface.
• Rotate: How often are you rotating credentials, and do you have the technology to make it possible?
• Repave: How often are you repaving the environment? If you look at how most threat actors get into systems, they’re finding a box that nobody ever restarts or patches, squirreling away on a drive somewhere and sniffing for network credentials that haven’t changed in years.
Sustainability
One of the most important things you can do is get all your apps on deployment pipelines. I know what some of you are saying already: "I don't need all my apps to get deployed all the time. Putting them on a pipeline seems like a waste because my app doesn't change that often—I'm not deploying it 1,000 times a day."
But there's a lot of under-the-surface value in automating delivery.
It’s not that you have to change the application all the time. It’s about getting your applications onto some sort of continuous delivery pipeline so you can update and repair any environment in an automated, lightweight fashion.
Look at your test coverage. It doesn’t have to be 100%, but you definitely want it to be high to ensure your code’s more sustainable. It’s not heroic to jam it into production and have it be brittle. If someone adds new functionality, what will happen?
You can’t have pipelines if you don’t have tests. And without tests, you can’t have real confidence in your code, and you’ll fall back into manual processes. People should be out of the equation when it comes to software deployment.
Here’s a tip if you want to know how you’re doing in these five areas: Look at your employee net promoter score (eNPS). Companies that are good at software attract and retain good people and, not surprisingly, have great eNPS scores; their employees are 1.8 times more likely to recommend their team to a friend because they’re motivated by their work. They also have dev-to-operator ratios of around 200:1, which lets them put more money into innovation.
You shouldn’t be celebrating 100-hour work weeks. You should be celebrating actual accomplishment outcomes — like moving faster so you’re able to learn, scaling in an organized manner, becoming more stable so your customers are happier with your services, treating security as a critical business need and writing sustainable code.