Metrics and Feedback

Let’s talk about metrics and feedback. We’re closing out that feedback loop in DevOps: we’ve gone through the cycle, we’ve gone to production, and now we’re pushing data and information back in to give us feedback so we can make better decisions later. Now, what have we been doing, currently and in the past? If you look at what we’ve done, a lot of it’s been reporting: ad-hoc, after-the-fact reporting. A lot of the time, it’s been manual. Sometimes automated, but historical reporting nonetheless. And very often, with these metrics and feedback, we’ve been collecting the wrong metrics. We’ve been looking at things that don’t necessarily drive change in our organisations. You may be collecting the right data.
I’m not saying you’re not. However, if you’re collecting the wrong metrics, you end up with a very high friction environment where the metrics are being used potentially to punish individuals. Worse, the monitoring we’re doing is really reactive. We’re looking at what’s out there. We’re getting a report. Maybe it’s the end of the week we get a report that we’re looking at. And we’re forced to react to things that have occurred in the past. It’s backward-looking data. And that backward-looking data shows what went wrong, not what is going wrong or what will go wrong. And that’s one of the fundamental problems with the way we gather and report data today.
Very often, we don’t gather the appropriate data for usage, availability, and performance. And we start to blame each other. It was the requirements. It was the development. Testers didn’t validate. And this blame game starts to trump the learning that we need to do as an organisation. That’s what we’ve done in the past. And we need to transition to doing new things, different things, in the future. And that is based on telemetry, where our reports, our feedback, all of these things are based on real-time metrics that are collected in production. We may be collecting some metrics from a dev environment or from a test environment. That’s OK.
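The real-time production telemetry described here can be sketched as a small event emitter. This is a minimal, hypothetical illustration: the event names, the in-memory sink, and the `emit_event` helper are all assumptions for this example; a real system would forward events to a monitoring service such as an APM agent.

```python
import json
import time

def emit_event(sink, name, properties=None):
    """Append a structured telemetry event to a sink.

    Here the sink is just a list; in production it would be an
    agent or monitoring service that ships events off the box.
    """
    event = {
        "name": name,
        "timestamp": time.time(),
        "properties": properties or {},
    }
    sink.append(json.dumps(event))
    return event

# Instrumented application code emits events as users act.
events = []
emit_event(events, "feature_used", {"feature": "export_pdf", "user": "u123"})
emit_event(events, "page_load", {"duration_ms": 182})
print(len(events))  # 2
```

Because each event is structured (a name plus properties and a timestamp), the same stream can later drive usage, availability, and performance reports rather than a single fixed report.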
But today, I want to focus on what we’re getting out of production. And it’s a proactive monitoring stance. And we want to focus on accelerating the detection and remediation of problems, and the improvement of our application. And frankly, we want to get rid of some of those vanity metrics we’ve used in the past. How many hits on our web page? Well, useful? Not useful? How many followers do we have on Facebook? Eh, maybe useful. Maybe not useful, not if someone’s paid for those followers on Facebook, for instance. We want to use the data that’s gathered in production to identify the usage patterns of our users.
Frankly, we can go out there and see what features of our application are being used most and how they’re being used. Secondly, if you slid in some brand new feature that’s really brilliant, and you thought it would be widely used and you find out it’s not, you either have a problem with the feature itself or the discoverability of that feature. And it allows you to respond very, very rapidly and do some experimentation to find out if the feature really isn’t as valuable as you thought or simply wasn’t as discoverable as you needed that feature to be. We want to discover performance problems early on, identify those, remediate them.
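The check described above — spotting a new feature that isn’t being used as expected — can be sketched by aggregating a production event log per feature. The event log, feature names, and the threshold are hypothetical; the point is that low counts for a feature flag either a value problem or a discoverability problem worth an experiment.

```python
from collections import Counter

# Hypothetical production event log: (feature, user) pairs
# collected from telemetry.
event_log = [
    ("search", "u1"), ("search", "u2"), ("search", "u1"),
    ("export_pdf", "u3"),
    ("new_dashboard", "u1"),  # the brand-new feature we expected to be popular
]

def usage_by_feature(log):
    """Count how often each feature appears in the event log."""
    return Counter(feature for feature, _ in log)

counts = usage_by_feature(event_log)

# Features used far less than the baseline suggest a value or
# discoverability problem; an arbitrary threshold stands in for
# a real statistical comparison.
underused = [feature for feature, n in counts.items() if n < 2]
print(counts.most_common())
print(underused)  # ['export_pdf', 'new_dashboard']
```

From here, an A/B experiment (for example, making the feature more prominent for half of users) would distinguish “not valuable” from “not discoverable”.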
And we want to get some insight into what we should build next. And I think this is one of the key values of closing that loop– using application performance monitoring and telemetry to let us know what’s being used today to point us in the direction of what we can build tomorrow.

In the previous step, we looked at application performance and the various types and benefits of application monitoring. In this step, Steven Borg discusses how to ‘close the feedback loop’ by pushing a product out to production and using the data returned to improve it.

Remember to engage with peers or share your experience in the comment section below. When you are happy with your contribution, click on Mark as complete, and in the following step we will explore when to use RUM (Real User Monitoring) vs synthetic transactions.

This article is from the free online

Microsoft Future Ready: Fundamentals of DevOps and Azure Pipeline

Created by
FutureLearn - Learning For Life