139 points | by yagizdegirmenci 4 days ago
https://commoncog.com/becoming-data-driven-first-principles/
https://commoncog.com/the-amazon-weekly-business-review/
(It took that long because a) there was an NDA, and b) it takes time to put the ideas into practice and understand them, and then teach them to other business operators!)
The ideas presented in this particular essay are really attributable to W. Edwards Deming, Donald Wheeler, and Brian Joiner (who created Minitab; ‘Joiner’s Rule’, the variant of Goodhart’s Law cited in the link above, is attributed to him).
Most of these ideas were developed in manufacturing, in the post WW2 period. The Amazon-style WBR merely adapts them for the tech industry.
I hope you will enjoy these essays — and better yet, put them into practice. Multiple executives have told me the series of posts has completely changed the way they see and run their businesses.
It then discusses ways that the factory might cheat to get higher numbers.
But it doesn't even mention what I suspect the most likely outcome is: they achieve the target by sacrificing something else that isn't measured, such as quality of the product (perhaps by shipping defective widgets that should have been discarded, or working faster which results in more defects, or cutting out parts of the process, etc.), or safety of the workers, or making the workers work longer hours, etc.
Business leaders like to project success and promise growth there is no evidence they will or can achieve, and then put it on workers to deliver. When there's no way to achieve the outcome other than to cheat the numbers, the workers will cheat (and will have to).
At some point businesses stopped treating outperforming the previous year's quarter as over-delivering, and made it an expectation, regardless of what is actually doable.
As "Goodhart's law" is used here, in contrast, the focus is on side effects of a policy. The goal in this situation is not to make the target useless, as it is if you're doing central bank policy correctly.
Here's the thing: there's no fixing Goodhart's Law. You just can't measure anything directly; even measuring with a ruler is a proxy for a meter without infinite precision. This gets much harder as the environment changes under you and each metric's utility shifts with time.
That said, much of the advice is good: making it hard to hack and giving people flexibility. It's a bit obvious that flexibility is needed if you're interpreting Goodhart's as "every measure is a proxy", "no measure is perfectly aligned", or "every measure can be hacked".
They need something they can check easily so the team can get back to work. It's hard to find metrics that are both meaningful to the business and track with the work being asked of the team.
You can look at revenue and decide "hey, we have a problem here" and go research what's causing the problem. That's a perfectly valid use for a KPI.
You can make a change via something like the Toyota process: say "we will improve X", make the change, and track X to see whether you must revert or keep it. That is another perfectly valid use for a KPI.
What you can't do is use them to judge people.
It's easy to fake one metric; it's harder to consistently game a hundred of them.
(But then they're probably no longer KPIs, as whoever looks at the data needs to recognise that details and nuance are important.)
Do you have enough KPIs that you can be sure that these targets also serve as useful metrics for the org as a whole? Do you randomize the assignment every quarter?
As I talk through this ... have you considered keeping some "hidden KPIs"?
can't say what the deep idea in this case is per se (haha (maybe the other commenter can shed light on that part)), but i guess if you have enough KPIs to be able to rotate them you have yourself a perpetual motion machine of the same nature as the one that some genius carried down from the mountain on stone tablets that we can sustain maximum velocity ad infinitum by splitting our work into two week chunks and calling them "sprints"... why haven't marathoners thought of this? (h/t Rich Hickey, the source of that amazing joke that i butcher here)
maybe consciousness itself is nothing more than the brain managing to optimize all of its KPIs at the same time.
A 90-day target is questionable, but regularly changing the metrics is a good way to keep people from gaming them.
I might be wrong, but I feel like the WBR treats variation (looking at the measure and saying "it has changed") as a trigger for investigation rather than a conclusion.
In that case, let's say you do something silly and measure lines of code committed. Let's also say you told everyone, it will factor into performance reviews, and the company is known for stack ranking.
You introduce the LOC measure. All employees watch it like a hawk. While working they add useless blocks of code and so on.
LOC committed goes up and looks significant on the XmR chart.
Option 1: grab champagne, pay exec bonus, congratulate yourself.
Option 2: investigate
Option 2 is better of course. But it is such a mindset shift. Option 2 lets you see if goodhart happened or not. It lets you actually learn.
(a) All processes have some natural variation, and for as long as outputs fall in the range of natural process variation, we are looking at the same process.
(b) Some processes apparently exhibit outputs outside of their natural variation. When this happens, something specific has occurred, and it is worth trying to find out what.
In the second case, there are many possible reasons for exceptional outputs:
- Measurement error,
- Failure of the process,
- Two interleaved processes masquerade as one,
- A process improvement has permanently shifted the level of the output,
- etc.
SPC tells us that we should not waste effort on investigating natural variation, and should not make blind assumptions about exceptional variation.
It says outliers are the most valuable signals we have, because they tell us we are not only looking at what we thought we were, but at something ... else as well.
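The two SPC rules above can be sketched in a few lines. Below is a minimal, illustrative XmR (individuals and moving range) computation in Python, the chart type mentioned upthread; the sample data is invented, and 2.66 is the standard XmR scaling constant for moving ranges of consecutive pairs:

```python
def xmr_limits(values):
    """Natural process limits for an individuals (X) chart,
    derived from the average moving range (the XmR method)."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    # 2.66 = 3 / d2 for subgroups of size 2 (standard XmR constant)
    return mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

def exceptional_points(values):
    """Points outside the natural limits: worth investigating.
    Everything inside them is routine variation, not a signal."""
    lo, hi = xmr_limits(values)
    return [(i, v) for i, v in enumerate(values) if v < lo or v > hi]

weekly_output = [210, 198, 205, 202, 195, 208, 200, 420]
print(exceptional_points(weekly_output))  # [(7, 420)]
```

Only the final spike falls outside the computed limits; under rule (a), the week-to-week wiggle among the first seven values is routine and not worth a meeting.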
If companies knew how to make it difficult to distort the system/data, don't you think they would have done it already? This feels like telling a person learning a new language that they should try to sound more fluent.
* Create a finance department that's independent in both their reporting and ability to confirm metrics reported by other departments
* Provide a periodic meeting (for executives/managers) that reviews all metrics and allows them to be altered if need be
* Don't try to provide a small number of measurable metrics or a "north star" single metric
The idea being that the review meeting of 500+ gives a better potential model. Further, even though 500+ metrics is a lot to review, each should be reviewed briefly, with most of them being "no change, move on" but allows managers to get a holistic feel for the model and identify metrics that are or are becoming outliers (positively or negatively correlated).
The independent finance department means that reporting bad data is discouraged, and that department, coupled with the WBR and its empowerment, provides the facilities to change the system.
The three main points (make it difficult to distort the system, make it difficult to distort the data, and provide facilities for change) all need to be implemented to have an effect. Providing only the "punishment" (making it difficult to distort the system/data) without any facility for change puts on too much pressure without any relief.
> that **false** proxies are game-able.
You say this like there are measures that aren't proxies. Tbh I can't think of a single one, even a trivial one. All measures are proxies and all measures are gameable. If you are uncertain, host a prize and you'll see how creative people get.
https://en.wikipedia.org/wiki/Net_present_value#Disadvantage...
Second off, you don't think it's possible to hide costs, exaggerate revenue, ignore risks, and/or optimize for short-term revenue at the expense of the long term? If you think something isn't hackable, you just aren't looking hard enough.
But I'm suspicious of a claim that it can't be hacked. I've done a lot of experimental physics, and I can tell you that you can hack as simple and obvious a metric as measuring something with a ruler. This is because it's still a proxy. Your ruler is still an approximation of a meter, and if you look at all the rulers, calipers, tape measures, etc. you have, you will find they are not exactly identical, though likely fairly close. But people happily round, or are very willing to overlook errors/mistakes, when the result makes sense or is nice. That's a pretty trivial system, and it's still hacked.
With more abstract things it's far easier to hack, specifically by not digging deep enough. When your metrics are aggregations of other metrics (as in your example), you have to look at every single metric and understand how it proxies what you're really after. If we're keeping with economics, GDP might be a great example. It is often used to say how "rich" a country is, but that means very little in and of itself. It's generally true that it's easier to increase this number when you have many wealthy companies or individuals, but from this alone you shouldn't be able to distinguish two countries of equal size where all the wealth is held by a single person or where wealth is equally distributed among all people.
The point is that there are always baked-in priors. Baked-in assumptions. If you want to find how to hack a metric, then hunt down all the assumptions (this will not be in a list you can look up, unfortunately) and find where those assumptions break. A very famous math example is the Banach–Tarski paradox. All required assumptions (including the axiom of choice) appear straightforward and obvious. But the thing is, as long as you have an axiomatic system (you do), somewhere those assumptions break down. Finding them isn't always easy, but hey, give it scale and Goodhart's will do its magic.
Exactly (btw. very nice way to put it)
> stop wasting time on tracking false proxies
Sometimes a proxy is much cheaper. (Medical analogy of limited depth: instead of doing surgery to see things in person, one might opt to check some ratios in the blood first.)
This would not count as a false proxy however. The problem in software is, it is very hard to construct meaningful proxy metrics. Most of the time it ends up being tangential to value.
I agree in principle, just want to add a bit of nuance.
Let's take the famous "lines of code" metric.
It would be counterproductive to reward it (as a proxy for productivity). But it is a good metric to know.
For the same reason why it’s good to know the weight of ships you produce.
The value in tracking false proxies like lines of code, accrues to the tracker, not the customer, business or anyone else. The tracker is able to extract value from the business, essentially by tricking it (and themselves) into believing that such metrics are valuable. It isn't a good use of time in my opinion, but probably a low stress / chill occupation if that is your objective.
In theory you can return to the metrics later for shorter intervals.
From a programming standpoint, and off the top of my head, I would include TDD, code coverage, and anything that comes out of a root cause analysis.
When junior devs ask, I tell them to spend a little more time on every task than they think necessary, trying to raise their game. When doing a simple task you should practice all of your best intentions and new intentions, to build the muscle memory.
I don't know how to track TDD, but for me, code coverage is an example of the same old false proxies that people used to track in the 2000s.
Before creating a metric and policing it, make sure you can rigorously defend its relationship to NPV. If you can't do this, find something else to track.
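For readers unfamiliar with the term, NPV discounts future cashflows back to the present; here is a minimal sketch with invented numbers:

```python
def npv(rate, cashflows):
    """Net present value: each cashflow at period t is discounted
    by (1 + rate) ** t; period 0 is today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Invest 1000 today, receive 400 per year for three years, 10% discount rate:
print(round(npv(0.10, [-1000, 400, 400, 400]), 2))  # -5.26, slightly value-destroying
```

The parent comment's point in these terms: a metric is only worth policing if moving the metric plausibly moves a number like this one.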
Can't it? Amazon may be an exception, but most of the time running without numbers or quantitative goals seems to work better than having them.
This seems to pop up in a lot of areas and I find myself asking is X thing a thing I really desire or is it something that is a natural side effect of some other processes.
Once you start looking for these things that are done for their own sake (or really to gain respect in a community) you notice how pervasive they are and how different they can be for two people next to each other.
I recommend Gregory's Savage Money on the subject. My review here: https://entropicthoughts.com/book-review-savage-money
You can also ask what is life about?
This is hard to do because the conclusion may need to break moulds, leading to family estrangement and losing friends.
I suspect people who end up having a TED talk in them are people who had the ability, through courage or their inherited neural makeup, to go it alone despite dissenting voices. Or they were raised to be encouraged to do so.
However, it's still better to recognise a problem, so you can at least look into ways of improving the situation.
This is wrong, and the wrongness of it undermines the whole piece, I think:
- A fourth way people respond is to oppose the choice of target and/or metric; to question its value and lobby to change it.
- A fifth way people respond is to oppose the whole idea of incentives on the basis of metrics (perhaps by citing Goodhart's Law... which is a use of Goodhart's Law).
Goodhart's Law is useful not just because it reminds us that making a metric a target may incentivize behavior that makes THAT metric a poor indicator of a good system, but also because choosing ANY metric as a target changes everyone's relationship with ALL metrics-- it spells the end of inquiry and the beginning of what might be called compliance anxiety.
Your proposed fourth and fifth response behaviours, on the other hand, are neither. Most importantly, they are transient (at least ideally). Either the workforce and the management come to an agreement and metrics continue (or are discontinued), or they don't and the business stays in limbo. It is an emergency (or some other word with lower impact; incident?). There isn't a covert resistance by some teams specifically working against the metric and lowering it while also hiding themselves from notice.
I am bemused that you deride them, given that they are, in fact, how I have responded to metrics in technical projects since I first developed a metrics program for Borland, in ‘93. (I championed inquiry metrics and opposed control metrics.)
That sounds interesting, how did you do that? What different inquiry metrics did you use if I can ask, what did the others think, how did it all work out?
"I really appreciated this piece, as designing good metrics is a problem I think about in my day job a lot. My approach to thinking about this is similar in a lot of ways, but my thought process for getting there is different enough that I wanted to throw it out there as food for thought.
One school of thought (https://www.simplilearn.com/tutorials/itil-tutorial/measurem...) I have trained in is that metrics are useful to people in 4 ways:
1. Direct activities to achieve goals
2. Intervene in trends that are having negative impacts
3. Justify that a particular course of action is warranted
4. Validate that a decision that was made was warranted
My interpretation of Goodhart’s Law has always centered more around the duration of metrics for these purposes. The chief warning is that regardless of the metric used, sooner or later it will become useless as a decision aid. I often work with people who think about metrics as a “do it right the first time, so you won’t have to ever worry about it again” exercise. This is the wrong mentality, and Goodhart’s Law is a useful way to reach many folks with this mindset.

The implication is that the goal is not to find the “right” metrics, but instead to find the most useful metrics to support the decisions that are most critical at the moment. After all, once you pick a metric, 1 of 3 things will happen:
1. The metric will improve until it reaches a point where you are not improving it anymore, at which point it provides no more new information.
2. The metric doesn’t improve at all, which means you’ve picked something you aren’t capable of influencing and is therefore useless.
3. The metric gets worse, which means there is feedback that swamps whatever you are doing to improve it.
Thus, if we are using metrics to improve decision making, we’re always going to need to replace metrics with new ones relevant to our goals. If we are going to have to do that anyway, we might as well be regularly assessing our metrics for ones that serve our purposes more effectively. Thus, a regular cadence of reviewing the metrics used, deprecating ones that are no longer useful, and introducing new metrics that are relevant to the decisions now at hand, is crucial for ongoing success.

One other important point to make is that for many people, the purpose of metrics is not to make things better. It is instead to show that they are doing a good job and to persuade others to do what they want. Metrics that show this are useful, and those that don’t are not. In this case, of course, a metric may indeed be useful “forever” if it serves these ends. The implication is that some level of psychological safety is needed for metric use to be more aligned with supporting the mission and less aligned with making people look good."
A jaded interpretation of data science is that it exists to find evidence supporting predetermined decisions, which is unfair to all. Having the capability to always generate new internal tools for Just In Time Reporting (JITR) would be nice, ideally reproducible ones.
This encourages adhoc and scrappy starts, which can be iterated on as formulas in source control. Instead of a gold standard of a handful of metrics, we are empowered to draw conclusions from all data in context.
One good book on the positive impact of a metric that everyone on a team or organization understands is "The Great Game of Business" by Jack Stack https://www.amazon.com/Great-Game-Business-Expanded-Updated-... I reviewed it at https://www.skmurphy.com/blog/2010/03/19/the-business-is-eve...
Here is a quote to give you a flavor of his philosophy:
"A business should be run like an aquarium, where everybody can see what's going on--what's going in, what's moving around, what's coming out. That's the only way to make sure people understand what you're doing, and why, and have some input into deciding where you are going. Then, when the unexpected happens, they know how to react and react quickly. "
Jack Stack in "Great Game of Business."
GGOB, by 1. involving employees in decision-making and teaching them about metrics, 2. giving them a line-of-sight for how their contribution impacts the overall business, and 3. providing a stake in the outcome
creates collective accountability and success, and reduces the likelihood of metric manipulation.
Bottom line: GGOB recognizes that business success takes everyone, at all levels, and values the input of each employee, right down to the part-time janitor. The metrics are used as tools, like the scoreboard in baseball, to guide decision making and establish what winning as a team looks like. It all comes down to education and getting everyone aligned and pulling in the same direction.
https://commoncog.com/the-amazon-weekly-business-review/
Over the past year, Roger and I have been talking about the difficulty of spreading these ideas. The WBR works, but as the essay shows, it is an interlocking set of processes that solves for a bunch of socio-technical problems. It is not easy to get companies to adopt such large changes.
As a companion to the essay, here is a sequence of cases about companies putting these ideas to practice:
https://commoncog.com/c/concepts/data-driven/
The common thing in all these essays is that it doesn’t stop at high-falutin’ (or conceptual) recommendation, but actually dives into real world application and practice. Yes, it’s nice to say “let’s have a re-evaluation date.” But what does it actually look like to get folks to do that at scale?
Well, the WBR is one way that works in practice, at scale, and with some success in multiple companies. And we keep finding nuances in our own practice: https://x.com/ejames_c/status/1849648179337371816
Reality has a lot of detail. It’s nice to quote books about goals. It’s a different thing entirely to achieve them in practice with a real business.
As to Jack Stack's book, I think the genius of his approach is communicating simple decision rules to the folks on the front line instead of trying to establish a complex model at the executive level that can become more removed from day-to-day realities. In my experience, which involves working in a variety of roles in startups and multi-billion dollar businesses over the better part of five decades, simple rules updated based on your best judgment risk "extinction by instinct" but outperform the "analysis paralysis" that comes from trying to develop overly complex models.
Reasonable men may differ.
My two questions (a) and (b) were not rhetorical. Let’s get concrete.
a) You are advising a company to “check back after a certain period”. After the certain period, they come back to you with the following graph:
https://commoncog.com/content/images/2024/01/prospect_calls_...
“How did we do? Did we improve?”
How do you answer? Notice that this is a problem regardless of whether you are a big company or a small company.
b) 3 months later, your client comes back and asks: “we are having trouble with customer support. How do we know that it’s not related to this change we made?” With your superior experience working with hundreds of startups, you are able to tell them if it is or isn’t after some investigation. Your client asks you: “how can we do that for ourselves without calling on you every time we see something weird?”
How do you answer?
(My answers are in the WBR essay and the essay that comes immediately before that, natch)
It is a common excuse to wave away these ideas with “oh, these are big company solutions, not applicable to small businesses.” But a) I have applied these ideas to my own small business and doubled revenue; also b) in 1992 Donald Wheeler applied these methods to a small Japanese night club and then wrote a whole book about the results: https://www.amazon.sg/Spc-Esquire-Club-Donald-Wheeler/dp/094...
Wheeler wanted to prove, (and I wanted to verify), that ‘tools to understand how your business ACTUALLY works’ are uniformly applicable regardless of company size.
If anyone reading this is interested in being able to answer confidently to both questions, I recommend reading my essays to start with (there’s enough in front of the paywall to be useful) and then jump straight to Wheeler. I recommend Understanding Variation, which was originally developed as a 1993 presentation to managers at DuPont (which means it is light on statistics).
- Use not one, but many metrics (article mentioned 600)
- Recognize that some metrics you control directly (input metrics) and others you want to move but can’t control directly (output metrics).
- Constantly refine metrics and your causal model between inputs and outputs. (Article mentions weekly 60-90min reviews)
Edit: the crucial part is that all consumers of these metrics (all leadership) are in this meeting.
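A toy sketch of that weekly review loop (this is not Amazon's actual tooling; the metric names, values, and limits are all invented for illustration):

```python
# Each metric carries its natural range of variation; the weekly review
# skips anything inside its range ("no change, move on") and surfaces
# only the exceptions for discussion.
metrics = [
    {"name": "prospect_calls",  "kind": "input",  "value": 310, "lo": 250, "hi": 350},
    {"name": "signups",         "kind": "output", "value": 90,  "lo": 60,  "hi": 120},
    {"name": "support_tickets", "kind": "output", "value": 210, "lo": 80,  "hi": 150},
]

def weekly_review(metrics):
    """Names of metrics outside their natural limits, i.e. worth discussing."""
    return [m["name"] for m in metrics
            if not (m["lo"] <= m["value"] <= m["hi"])]

print(weekly_review(metrics))  # ['support_tickets']
```

With hundreds of metrics, most weeks most entries fall inside their limits, which is what makes reviewing 500+ of them in a 60-90 minute meeting tractable.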
Is Goodhart's Law as useful as you think?
https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...