Why You Should Add DevOps Metrics to Your Data Fabric


Making DevOps data more visible and easily accessible helps teams improve their software delivery and gives them an edge over their competitors.

According to a recently released analysis of data collected from more than 250,000 developers worldwide, developers code a median of 52 minutes per day on workdays. That translates to a total of four hours and 21 minutes from Monday to Friday, only about 11% of a standard 40-hour workweek. The report also found that fewer than 10% of developers spend more than two hours per day coding.

These findings suggest that developers frequently face interruptions that keep them from coding, such as excessive meetings and time spent waiting on processes and systems, all of which hamper engineering teams' ability to ship software quickly and efficiently.

The first step to improving software delivery is applying observability to development — making DevOps performance data more visible and accessible to teams. With the right data, teams can more quickly diagnose inefficiencies and choose the right tools, automations, and processes to improve engineering productivity.

The most valuable DevOps data is hidden in silos

Today, most data about DevOps performance is hidden in proprietary tools and data silos, such as code repositories, CI/CD pipelines, and issue trackers. It’s complex and cumbersome to extract and transform this data into usable, shareable analytics.

Stitching data together across tools is time-consuming and expensive. It requires a clearly defined ETL process, the transformation of raw data into quantifiable metrics, and analysis in a business intelligence tool. Teams must also continuously update their pipelines as the APIs of their data sources change over time.
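To make the stitching step concrete, the sketch below normalizes events exported from two hypothetical sources, a code repository and a CI/CD pipeline, into one common schema so they can be queried as a single timeline. All field names and sample records here are invented for illustration; real exports would come from each tool's API.

```python
from datetime import datetime

def normalize_pr(record):
    # Map a hypothetical code-repository export to a common event schema.
    return {
        "source": "repo",
        "event": "pr_merged",
        "id": record["number"],
        "timestamp": datetime.fromisoformat(record["merged_at"]),
    }

def normalize_ci_run(record):
    # Map a hypothetical CI/CD pipeline export to the same schema.
    return {
        "source": "ci",
        "event": "deploy" if record["job"] == "deploy" else "build",
        "id": record["run_id"],
        "timestamp": datetime.fromisoformat(record["finished"]),
    }

# Invented sample records standing in for real tool exports.
prs = [{"number": 101, "merged_at": "2023-05-01T10:00:00+00:00"}]
runs = [{"run_id": "r-1", "job": "deploy", "finished": "2023-05-01T12:30:00+00:00"}]

events = [normalize_pr(r) for r in prs] + [normalize_ci_run(r) for r in runs]
events.sort(key=lambda e: e["timestamp"])  # one unified, queryable timeline
```

Once events share a schema, downstream metrics (lead time, deploy frequency) reduce to simple queries over the combined list, which is exactly the analytics step the ETL process exists to enable.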

As a result, many of the biggest engineering bottlenecks — slow reviews, long delays, and inefficient tooling — can be difficult to quantify. But without the data to clearly point to these bottlenecks, they can silently destroy engineering productivity.

Better DevOps data improves software delivery

The first step to improving software delivery is making DevOps data visible by collecting data from across the stack, from version control activity to deployment workflows. Better visibility into the development lifecycle empowers engineering and operations teams to discover and share new insights about engineering efficiency.

Data should also be self-service, flexible, modular, and accessible to anyone, enabling teams to explore their engineering metrics while protecting individual privacy.

DevOps metrics can provide unique insights into the software delivery pipeline by measuring the flow of work from code to production. Here are a few examples of where DevOps data can help:

  • If there’s resource contention, teams can invest in self-service tooling and automated environment setup, and then measure the impact on lead time.
  • If the success rate of production deployments drops below 99%, DevOps teams can invest more time in hardening the deployment pipeline so failures become rare.
  • If mean time to recovery is trending up, teams can invest in application performance monitoring (APM), alerting, and tools to streamline communication during incidents.
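Two of the checks above, deployment success rate and mean time to recovery, can be computed directly from normalized event records. The following is a minimal sketch with invented sample data; the record fields (`status`, `opened`, `resolved`) are assumptions, not any particular tool's schema.

```python
from datetime import datetime

def success_rate(deploys):
    """Fraction of production deployments that succeeded."""
    ok = sum(1 for d in deploys if d["status"] == "success")
    return ok / len(deploys)

def mean_time_to_recovery(incidents):
    """Average hours between an incident opening and its resolution."""
    hours = [
        (datetime.fromisoformat(i["resolved"])
         - datetime.fromisoformat(i["opened"])).total_seconds() / 3600
        for i in incidents
    ]
    return sum(hours) / len(hours)

# Invented sample data: 98 successful deploys out of 100,
# and two incidents lasting 2 hours and 1 hour.
deploys = [{"status": "success"}] * 98 + [{"status": "failed"}] * 2
incidents = [
    {"opened": "2023-05-01T10:00:00", "resolved": "2023-05-01T12:00:00"},
    {"opened": "2023-05-02T09:00:00", "resolved": "2023-05-02T10:00:00"},
]

rate = success_rate(deploys)             # 0.98, below a 99% target
mttr = mean_time_to_recovery(incidents)  # 1.5 hours
```

Tracking these numbers over time, rather than as one-off snapshots, is what turns them into the trend signals (a dropping success rate, a rising MTTR) that the examples above describe.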

The ultimate goal of incorporating DevOps data into a data fabric is to provide a 360-degree view of development. Investing in tooling that stitches this data together will pay dividends for any company on the path to becoming a high-performing engineering organization.

About the Author

Brett Stevens is the co-founder of Software.com, a DevOps metrics platform that helps teams measure and improve their organization’s DevOps performance. He holds a Bachelor of Science in Mechanical Engineering from Brown University and currently resides in Brooklyn.
