"How to Measure Anything" is a 2010 book by Douglas Hubbard, an esteemed expert in measurement methodologies. The book offers applicable methodologies for organizations.
If I distill a key message from the book, it would be this: everything can be measured, even traditionally deemed "soft" subjects. The crucial steps involve defining the purpose of the measurement (the decision to be made), clarifying the subject's significance in the decision's context, and executing the measurement.
The book covers the following topics:
Perception of measurement
Preparation and planning
Modeling the decision
Evaluation and measurement
Measurement methods
This is not a typical Business Intelligence (BI) book about establishing a data environment and generating reports. Its primary emphasis is on decisions, often one-time decisions that call for the best possible data-driven choice, and on collecting data specifically to support them. The book includes principles, mathematical calculations, formulas, and models, although these are not detailed in this summary. The summary captures most of the messages and principles without requiring mathematical proficiency.
For those seeking further understanding or delving into mathematical models, an invitation is extended to read the book and explore its appendices and the accompanying website, howtomeasureanything.com. The website provides documents and files aiding in the application of these models. Happy reading!
Perception of measurement
Many managers operate under the assumption that numerous issues are inherently immeasurable. Intangible aspects such as service value, quality, or employee ethics are often considered "not measurable," leading to decisions by managers that rely less on data than anticipated. If suboptimal decisions result, a cost is incurred regarding expenses, resource allocation, rejection of viable ideas, and more. The central concept proposed in the book is that everything can be measured when approached correctly.
The book introduces the premise that measurements are intended to facilitate better decision-making rather than being perceived as part of an ongoing routine. Sometimes, a more straightforward measurement can yield insights into the subject under consideration. An illustrative case from the book involves a ten-year-old girl named Emily who sought to measure the efficacy of healers possessing a "therapeutic touch." She conducted a simple experiment with 21 healers, demonstrating that even a seemingly complex attribute could be evaluated statistically. Her findings, published in the prestigious academic journal JAMA, earned her accolades.
Another example in the book illustrates measurement through a transfer technique: a measurement taken in a different context that can be projected onto the desired one. The book references the well-known example of Eratosthenes, a Greek of the third century BCE, who calculated the size of the Earth by measuring shadows in different locations at the same time. The text also emphasizes the concept of decomposition, asserting that precise measurement is not always necessary; bounding the result can suffice. The classic illustration is Fermi's estimate of the number of piano tuners in Chicago, derived from figures such as the number of pianos in the city, how often each is tuned, and how many tunings a tuner can perform per day.
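As a rough illustration of this kind of decomposition (my own sketch, not taken from the book), the classic piano-tuner estimate can be written out in a few lines; every input figure below is an assumption chosen only to show the arithmetic.

```python
# Fermi-style decomposition: rough estimate of piano tuners in a large city.
# All input figures below are illustrative assumptions, not data from the book.

population = 2_700_000          # people in the city
people_per_household = 2.5      # average household size
households_with_piano = 1 / 20  # share of households owning a piano
tunings_per_piano_per_year = 1  # each piano tuned about once a year
tunings_per_tuner_per_day = 4   # a tuner services roughly 4 pianos a day
working_days_per_year = 250

pianos = population / people_per_household * households_with_piano
tunings_needed = pianos * tunings_per_piano_per_year
tunings_per_tuner = tunings_per_tuner_per_day * working_days_per_year

print(f"Estimated piano tuners: {tunings_needed / tunings_per_tuner:.0f}")
# Decomposing the question bounds the answer to the right order of magnitude
# without measuring anything directly.
```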
The underlying idea is straightforward: measurement can take various forms, including direct measurement, inference, and interdisciplinary assessment, as long as it reduces uncertainty, either partially or entirely, leading to better decision-making. In this context, measurement is defined as a process that mitigates uncertainty based on examination and observation.
Additional insights provided in the text include:
Success hinges not on the ability to measure everything, but on a clear understanding of what needs to be measured and proficiency in measurement techniques.
Typically, we possess more primary data than we estimate.
Expert evaluations often fall short of expectations.
A substantial amount of data is usually unnecessary for effective measurement.
Caution is advised against exclusively measuring subjects that are easy to measure.
Preparation and planning
Because Hubbard, the book's author, does not deal with routine, ongoing measurement, preliminary preparation and planning become imperative for one-off questions such as whether upgrading an existing organizational system is worthwhile, or how satisfied customers are with a product.
The preparation encompasses several key aspects:
Resources: Understand the financial scope and significance of the decision, including delineating a feasible budget for measurement.
Learning: Conduct preliminary research about the field to streamline the measurement process, especially if it is undertaken by a measurement expert. This involves reviewing existing reports and conducting interviews.
Experts: Identify a select group of experts to aid the measurement process. Typically, 4-5 experts (with a maximum of 20) are recommended. While experts may not estimate as accurately as good statistical models, their role is crucial in defining measurement components and assigning weights to indices.
Work Plan: Establish a comprehensive work plan (process) and prepare for the subsequent stage—workshops for modeling the decision. This includes coordinating individuals and schedules to ensure a smooth progression.
Modeling the decision
This decision modeling phase is pivotal in making measurement applicable, because it pins down precisely what will be measured. The primary challenge in decision-making, especially in "soft" areas, does not arise from any inherent difficulty of measuring them due to their "softness," but rather from our side: because of their abstract nature, these areas are usually insufficiently defined.
The decision modeling stage is dedicated to dissecting a decision from an abstract concept into tangible factors. For instance, referencing Deming's practical definitions becomes valuable if the goal is to measure quality. Deming, being a statistician, defined quality as the degree of consistency with expectations. This definition shapes the approach to examining the quality of a production process, measured by the frequency of deviations from the defined norm in various product functions.
To break down the field into its components, it is imperative to revisit the sought-after decision. Why are we measuring? What understanding/achievement/decision do we seek? Connecting the field to a decision facilitates the analysis of various components, enabling the prioritization of the most critical ones, those most pertinent to the decision. The recommended implementation of this methodology involves a series of workshops (typically 3-5) with managers, experts, and other stakeholders in the field to:
Define the desired decision.
Define domain components.
Determine existing knowledge about the topic and identify uncertainties.
Prioritize issues crucial to measure due to their substantial impact.
As an illustration, let's consider information security. The decision to be supported is whether it is worthwhile to invest in enhanced security systems to reduce the frequency and impact of computer virus attacks. Without going into all the details of the learning process, the outcomes include:
Agency-wide virus outbreaks per year: 90% probability between 2 and 4.
People affected by each attack: 90% probability between 25,000 and 65,000.
Decrease in output during each attack: 90% probability between 15% and 60%.
Duration of impact for each attack: 90% probability between 4 and 12 hours.
Average annual cost per employee: 90% probability between $50,000 and $100,000 (based on 2,080 working hours per employee per year).
This comprehensive breakdown provides a reliable estimate of the cost of loss, enabling an informed assessment of the justification for investing in systems to mitigate damage. The subsequent step involves evaluating how much each solution reduces these losses.
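To make the arithmetic concrete, here is a minimal Monte Carlo sketch of my own (not the book's model) that turns the 90% ranges above into a distribution of annual loss. Purely for illustration it treats each range as a uniform distribution; the book's approach uses properly calibrated distributions.

```python
import random

# Monte Carlo sketch of the annual loss from virus outbreaks.
# Assumption (for illustration only): each 90% range is treated as uniform.

HOURS_PER_YEAR = 2_080
N = 100_000

losses = []
for _ in range(N):
    attacks = random.uniform(2, 4)                        # outbreaks per year
    people = random.uniform(25_000, 65_000)               # employees affected per outbreak
    output_drop = random.uniform(0.15, 0.60)              # fraction of output lost
    hours_down = random.uniform(4, 12)                    # duration of each outbreak
    cost_per_employee = random.uniform(50_000, 100_000)   # annual cost per employee

    hourly_cost = cost_per_employee / HOURS_PER_YEAR
    losses.append(attacks * people * output_drop * hours_down * hourly_cost)

losses.sort()
print(f"Mean annual loss: ${sum(losses) / N:,.0f}")
print(f"90% interval: ${losses[int(0.05 * N)]:,.0f} - ${losses[int(0.95 * N)]:,.0f}")
```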
Evaluation and measurement
A sound measurement can be a close estimate; it need not be perfectly accurate. The preceding section demonstrated 90% confidence estimates rather than exhaustive precision.
When undertaking a measurement, the primary condition is that the assessment be calibrated rather than subjective. People are inherently biased, most often toward overconfidence, sometimes toward underconfidence. Studies indicate that assessments by bookmakers and laypeople tend to be better calibrated than those of senior managers. However, people can be trained to improve their assessments, making them calibrated and valuable.
While detailed calibration steps are not explored here, some general methods to enhance accuracy include:
Repeating assessment exercises with feedback provided in between.
Determining additional parallel estimates for each evaluation, enabling comparisons and inferences.
Identifying at least two reasons the assessment might be right and two reasons it might be wrong.
Avoiding misleading anchor numbers.
Defining broad assessment areas and gradually refining them until reaching an accurate assessment.
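One simple way to check whether a person's 90% intervals are actually calibrated is to score them against questions with known answers: over many questions, roughly 90% of the true values should fall inside the stated ranges. A minimal sketch of such a check, with made-up questions and estimates:

```python
# Score a set of 90% confidence intervals against known answers.
# The intervals and true values below are made-up illustrations.

estimates = [
    # (lower bound, upper bound, true value)
    (1900, 1950, 1912),   # e.g. "In what year did the Titanic sink?"
    (300, 600, 431),
    (5, 20, 26),          # true value outside the range -> a miss
    (100, 400, 212),
    (10, 80, 42),
]

hits = sum(low <= truth <= high for low, high, truth in estimates)
rate = hits / len(estimates)
print(f"{hits}/{len(estimates)} intervals contained the true value ({rate:.0%})")
# A calibrated estimator hits about 90% over many questions; far below that
# indicates overconfidence, far above it indicates underconfidence.
```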
To achieve an optimal measurement, quantitatively assessing the uncertainty (level of risk) associated with the measurement is essential. Quantitative risk analysis is recommended using the Monte Carlo model, a statistical model not detailed in this summary. The book's website contains Excel resources for those interested in running the model.
Additionally, the value of gathering further information must be evaluated. Formulas exist both for estimates of an exact number and for assessing a range of numbers (between X and Y).
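As a rough illustration of the information-value idea (my own sketch, not the book's exact formulas), the value of perfect information for a simple go/no-go decision is the expected cost of the mistakes you would make without it. All numbers below are assumptions:

```python
import random

# Sketch of the Expected Value of Perfect Information (EVPI) for a go/no-go choice.
# Numbers and the uniform distribution are illustrative assumptions.

N = 100_000
investment = 400_000  # cost of the proposed security upgrade

# Uncertain benefit of the upgrade (annual loss avoided), modeled as uniform.
benefits = [random.uniform(200_000, 900_000) for _ in range(N)]

expected_benefit = sum(benefits) / N
best_without_info = max(expected_benefit - investment, 0)  # invest only if EV > 0

# With perfect information we would invest exactly when the benefit exceeds the cost.
with_info = sum(max(b - investment, 0) for b in benefits) / N

print(f"EVPI: ${with_info - best_without_info:,.0f}")
# Spending more than this amount on further measurement cannot pay off.
```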
Additional insights include:
Careful consideration of metrics chosen for measurement, avoiding common errors like measuring what is easy rather than essential.
Uncertainty drops rapidly with the first few measurements (eight measurements usually provide at least 90% certainty for the assessment; see the sketch after this list).
Iterativeness is crucial; refrain from heavy investment in learning to measure a highly uncertain subject. Preliminary measurement, evaluation, and a reassessment process should be employed to determine the need for additional measurements to inform decision-making.
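The arithmetic behind claims like this is simple. For any population, the median lies outside the range of a random sample only if every draw lands on the same side of it, so a handful of observations already pins the median down with high confidence. A short sketch of that principle (my own illustration, not the book's exact figures):

```python
# Why a handful of samples says a lot about the median:
# the median falls outside [min(sample), max(sample)] only if every draw
# lands on the same side of it, which happens with probability 2 * (1/2)**n.

for n in range(2, 9):
    confidence = 1 - 2 * (0.5 ** n)
    print(f"{n} samples -> {confidence:.1%} chance the interval contains the median")
# Five samples already give ~93.8%; the first few observations do most of the work.
```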
Measurement methods
How do you measure? The process involves deciding on the measurement method and subsequently carrying out the measurement.
Deciding on the measurement method involves the following steps:
Selection of Decision Components: Identify components of the decision where uncertainty exists.
Defining Observation: Examine how to define observations and determine their contribution to the desired measurement.
Scope of Measurement: Decide on the scope to ensure a high probability of a reliable result.
Error Analysis: Conduct a preliminary analysis of potential sources of errors and strategies to avoid them.
Tool Selection: Choose tools for conducting the measurement.
Statistical Model: Decide on a statistical model for analyzing the data.
Advantages of using measurement tools over human assessment include:
Recognition: Tools can recognize aspects that humans might find challenging.
Consistency: Measurements are consistent.
Calibration: Tools can be calibrated to minimize bias.
Defined Measurement: Tools measure something defined and confined, unlike human assessments that may examine the entire picture.
Efficiency: Measurement tools are cost-effective and fast.
However, human questioning also has advantages and should not be categorically dismissed.
Types of errors in measurement include:
Systematic Errors: Fixed biases favoring a particular outcome, such as those arising from an unrepresentative sample or flawed observations.
Random Errors: Unpredictable variations in individual observations. Systematic errors must be identified and reduced, while random errors can largely be ignored because they average out over many observations.
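A small simulation (my own illustration, not from the book) shows why the two types are treated differently: random noise averages out as observations accumulate, while a systematic bias does not.

```python
import random

# Random error averages out over many observations; systematic error does not.
# All numbers are illustrative.

TRUE_VALUE = 100.0
BIAS = 5.0          # systematic error, e.g. from an unrepresentative sample
NOISE = 10.0        # spread of the random error on each observation

for n in (5, 50, 500, 5000):
    samples = [TRUE_VALUE + BIAS + random.gauss(0, NOISE) for _ in range(n)]
    mean = sum(samples) / n
    print(f"n={n:5d}  mean={mean:7.2f}  error={mean - TRUE_VALUE:+.2f}")
# The error shrinks toward the fixed bias (+5), never toward zero:
# more data cannot fix a systematic error; only better measurement design can.
```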
Data analysis involves using various models to improve measurement:
Unstructured information and non-objective processes → Little improvement
Structured information, consistently represented, and informal evaluation → Little improvement
Calibrated Standard Model (z-score) → Significant improvement
Using a Lens model or the Rasch model for standard results → Significant improvement
Objective models, which require historical data, are another option.
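As one concrete piece of this, here is a minimal sketch of my own of the z-score idea: standardizing each evaluator's ratings so that differences in personal scale (harsh versus lenient raters) do not masquerade as differences in the thing being measured. The ratings are invented for illustration.

```python
from statistics import mean, stdev

# z-score standardization of two raters who use the rating scale differently.
# Each rater scores the same five items; the numbers are illustrative.

ratings = {
    "lenient_rater": [8, 9, 7, 9, 8],
    "harsh_rater":   [4, 6, 2, 6, 3],
}

for rater, scores in ratings.items():
    m, s = mean(scores), stdev(scores)
    z = [(x - m) / s for x in scores]
    print(rater, [f"{v:+.2f}" for v in z])
# After standardization both raters' scores sit on a common scale, so items can
# be compared across raters despite their different personal baselines.
```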
Additional insights:
Check whether published research or external measurements on the topic already exist. Starting from an existing measurement and adjusting it to your decision is better than starting from a completely blank slate.
If one method fails, don't despair. You can always transition to another method.
Investing 2-10% of the expected value in the measurement is usually sufficient for a good result.
Recommendations for defining measurement questions include keeping them short, precise, and not composed of multiple parts. Questions should be neutral, avoid negative phrasing, and be paired with their opposites to verify the consistency of the results.