Over the last twenty years, we’ve seen an impressive transformation in the perceived value and deployment of software in the enterprise. We’ve gone from “IT doesn’t matter” to “software is eating the world” to mastering software development being a top priority for every kind of business today.
It’s hard to escape the feeling that the pace of this transformation has only accelerated. Frankly, things seem to be moving so fast that relatively new innovations can feel like they have been around forever. As just one small example, it’s not uncommon to see companies looking for candidates with 12+ years of Kubernetes experience when Kubernetes was only released in 2014!
The technological transformations that we’ve seen have also been accompanied by a steady stream of management trends aimed at harnessing technology’s undeniable power. As an engineer, I must say it’s easy to be cynical or dismissive of these trends. At the same time, I think we need to give credit where credit is due:
- The Agile Manifesto has made lasting contributions to our profession, even if the number of questionable adoptions of Scrum and SAFe suggests the opposite.
- Microservices architectures are not the silver bullet they are often claimed to be, but so long as they are understood and applied properly, they can support fast growth and shorten time to market.
- The so-called “Spotify model,” despite frequent misuse, remains relevant – and sets the stage for innovations like backstage.io.
With change comes adaptation and evolution. And through a combination of time and natural selection, I believe we’re gradually discovering the truly best of the best practices for software development.
This brings me to the latest hot topic in our industry: value stream management. Top technology influencers such as Eveline Oehrlich, former Chief Research Director at the DevOps Institute, believe that this lean business practice is a welcome driving force for accelerating Agile and DevOps success at scale. Flint Brenton, CEO of Syntellis Performance Solutions, also thinks that value stream management has become ever more critical as CEOs delve deeper into “the value software innovations bring to the company.”
Not surprisingly, many engineers are skeptical. It’s easy for them to view value stream management as little more than a new management fad, one borrowed from decades-old Lean manufacturing practices. Indeed, they may even see it as a stealthy way to revert to outdated, top-down development practices the industry has left behind.
I understand the skepticism. Nevertheless, software development faces real challenges, and I believe value stream management and lean software practitioners can address them.
One common problem I see is that software engineers don’t have sufficient hard data and end-to-end visibility into their organizations to identify bottlenecks and optimize value across the software delivery pipeline. In customer-centric organizations especially, collecting this information to calculate and evaluate the value of work done across backlog, integration, testing, delivery, and insights stages can be tedious and time-consuming.
Knowledge regarding services, APIs, ownership, and dependencies is often siloed, making the critical task of diagnosing bugs or optimizing features a huge challenge. And lack of insight into feature usage and NPS scores can make it difficult to clearly connect business strategy and IT.
As a result, many engineering managers I know end up relying on gut feeling when making decisions aimed at improving development operations. They know that inefficiencies and waste exist across their dev pipelines, but they don’t have the critical data required to effectively address, prioritize, or eliminate these issues.
Difficulties such as these mean that engineering teams can too often serve as poor partners to business teams, particularly when it comes to quickly dealing with feature requests made and bugs reported by paying customers.
We need to be honest when talking about the value of software delivery. Engineering leaders and business leaders seem to lack a common language and common reference points, and quite often the organization lacks common ground because it lacks the data.
The good news is that we do know what kinds of data are needed and where that data lives: CI/CD pipelines, source code repositories, Kubernetes clusters, public cloud providers, and so on. The key is collecting this data and storing it in a dynamic inventory where engineering leadership and DevOps teams can use it to do the following:
- Establish end-to-end visibility across the software development stack, thus breaking down knowledge silos.
- Analyze DORA and flow metrics to uncover bottlenecks and improve delivery performance.
- Assess security and compliance vulnerabilities, while prioritizing work items in a way that increases quality without disrupting productivity.
- Set principles and standards to track deviations, ensure governance, and implement compliance guardrails.
- Provide data-driven insights to efficiently allocate resources and balance innovation velocity with product reliability.
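To make the metrics point above concrete, here is a minimal sketch of computing two DORA metrics (deployment frequency and lead time for changes) from deployment records, such as those exported from a CI/CD pipeline. The record structure and field names (`committed_at`, `deployed_at`) are assumptions for illustration, not any particular tool’s API:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: each entry pairs a commit timestamp
# with the timestamp of the deploy that shipped it. In practice this
# data would be pulled from the CI/CD pipeline and source repository.
deployments = [
    {"committed_at": datetime(2024, 3, 1, 9, 0), "deployed_at": datetime(2024, 3, 1, 15, 0)},
    {"committed_at": datetime(2024, 3, 2, 10, 0), "deployed_at": datetime(2024, 3, 3, 11, 0)},
    {"committed_at": datetime(2024, 3, 5, 8, 30), "deployed_at": datetime(2024, 3, 5, 9, 30)},
]

def deployment_frequency(deploys, window_days=7):
    """Average number of production deploys per day over the observed window."""
    return len(deploys) / window_days

def median_lead_time(deploys):
    """Median time from commit to production deploy (DORA 'lead time for changes')."""
    lead_times = sorted(d["deployed_at"] - d["committed_at"] for d in deploys)
    mid = len(lead_times) // 2
    if len(lead_times) % 2 == 1:
        return lead_times[mid]
    return (lead_times[mid - 1] + lead_times[mid]) / 2

print(f"Deploys/day: {deployment_frequency(deployments):.2f}")   # 3 deploys over 7 days
print(f"Median lead time: {median_lead_time(deployments)}")      # median of 1h, 6h, 25h
```

Even a rough aggregation like this, run continuously over a dynamic inventory, replaces gut feeling with trend lines that engineering and business leaders can discuss on common ground.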