A Data Lover Reads 'Telling Training's Story' (Part One)


TL;DR — first in a read-along series; there's a tension between the high worth of training and development and the low esteem in which L&D is held, sustained by the difficulty of communicating results effectively; existing complicated methods make that even harder even when they work, but that doesn't mean giving up on data.

I recently read Robert Brinkerhoff’s Telling Training’s Story for the first time, some months after flipping through and being impressed by another of his books at a conference. The book details Brinkerhoff’s Success Case Method as a framework for L&D program evaluation, including step-by-step processes, then explicates it via case studies. I really like how the book’s strong theoretical model rests on very practical foundations that integrate the need to check data for accuracy, construct a viable business narrative, and motivate specific steps for improvement. Many other theoretical models for learning are weak in exactly these areas, which hamstrings their applied use.


I come to the book with a background, acquired in the quantitative social sciences, in working with data, especially the practical analysis of data to answer concrete questions. For the last several years I’ve been working closely with L&D departments as they figure out how to transform themselves into more capable organizations, typically using an Experience API Learning Record Store and the data it enables. You can read a high-level perspective on that in my eLearning Guild blog post on Making Fast.

This isn’t a review post, but the first in a series of read-along posts. The book has a lot of ideas I want to talk about, and I’m going to take the time to talk about some of them. I don’t know exactly how many posts there will be, as I’m going to follow a simple format: as I read, I’ll write the current post, discussing specific quotes and ideas from the part I’m reading. Some of my thoughts will be more free-associated than others. When the post is long enough and at a suitable wrap-up point, I’ll put down the book for a bit and come back to it in the next post in the series, which will work the same way.

So, without further overly wordy ado, Telling Training’s Story (Preface, p. xi).

“Most of us in the training and development profession know in our guts that what we do is valuable and worthwhile—we wouldn’t have stuck with this job if we didn’t believe we were doing good. The problem is that often our clients and customers are highly skeptical, and when there is pressure on resources, we usually get the short end of the budget stick. Customers and senior executives want proof, but most of us can only offer promises.”

That’s what starts the whole book, the very first paragraph in the preface. I see this dilemma come up over and over again when talking with L&D practitioners. What isn’t stated here, and often isn’t stated elsewhere, though it comes up regularly in surveys about business attitudes toward employee learning/development, is that those same skeptical “customers and senior executives” asking for proof deeply agree that there are huge, valuable, important things to be done in a company that fit the L&D remit. I often wonder if criticisms of L&D’s output stem in part from the critic imagining, somewhere underneath, a glorious wave of improvement that makes even solidly created experiences feel small in comparison.

The next quotation is from Chapter One, Getting to the Heart of Training Impact, pp. 7-8.

“Experimental methods with randomized, double-blind treatment and control groups are considered the “gold standard” when it comes to determining the effects of interventions and making causal claims. But they are far too impractical and costly for use in the typical organizational setting.
…utility analysis or time-series designs… …are very complex, require sophisticated research and measurement skills, and their statistical manipulations and reports are difficult to comprehend.
Simpler methods such as the return-on-investment (ROI) methods… …can be time consuming and expensive. More importantly, they leave many questions unanswered and involve statistical calculations and extrapolations that raise serious doubts among report audiences.”

Brinkerhoff is surveying the state of learning program assessment, and if anything he’s being kind. Starting at the end, when he says ROI methods “raise serious doubts among report audiences”, read that as “when business hears L&D make a case for ROI, they almost always view that as strong evidence that L&D doesn’t understand the business’s needs”. Businesses do not operate primarily on ROI calculations, even when those ROI calculations are a part of the process. In many cases ROI calculations are more about poking at the moving parts of a business model than about the actual numbers, and L&D’s frequent tendency to focus on the ROI number as if it justifies a program entirely misses the point.
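To make that concrete, here is a minimal, hypothetical sketch of the kind of ROI arithmetic L&D often presents (the formula, figures, and parameter names are all mine for illustration, not anything from the book), showing how much the headline number swings with the assumptions feeding it:

```python
# Hypothetical training ROI sketch. Every figure below is invented; the
# point is that the headline percentage is dominated by soft assumptions,
# which is why the moving parts matter more than the final number.

def training_roi(cost, learners, hours_saved_per_week, hourly_value,
                 weeks_sustained, attribution):
    """ROI = (benefit - cost) / cost, as commonly presented."""
    benefit = (learners * hours_saved_per_week * hourly_value
               * weeks_sustained * attribution)
    return (benefit - cost) / cost

cost = 50_000  # total program cost, dollars

# Two sets of equally defensible-sounding assumptions:
optimistic = training_roi(cost, learners=200, hours_saved_per_week=1.0,
                          hourly_value=40, weeks_sustained=26, attribution=0.8)
conservative = training_roi(cost, learners=200, hours_saved_per_week=0.25,
                            hourly_value=40, weeks_sustained=12, attribution=0.3)

print(f"optimistic ROI:   {optimistic:.0%}")    # roughly +230%
print(f"conservative ROI: {conservative:.0%}")  # roughly -85%
```

The same program “proves” a triple-digit return or a substantial loss depending on inputs nobody can pin down precisely, which is part of why a skeptical executive reads the number as evidence that L&D doesn’t understand the business rather than as proof.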

But the even more important part of the quote is about the difficulty of applying traditional data science research methods to L&D program evaluation. Even the methods that are possible at all require

  1. data L&D generally doesn’t have on hand and must gather at expense.
  2. techniques that must be applied by quantitative experts using domain knowledge to do things like check for model violations (see the sketch after this list); this is why almost all quality applications of quantitative data analysis to real L&D data are done by Industrial-Organizational Psychologists. There’s also a modest amount of research from schools of education, but it rarely deals with actual L&D data.
  3. an ability to communicate complex model results to business stakeholders, which is difficult even for experts with deep understanding of the models.
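As a toy illustration of point two (my own hypothetical sketch on simulated data, not anything from the book or a real program), here is the kind of assumption checking an analyst does almost reflexively before trusting even a simple regression of, say, performance scores against training hours, and that any automated tool would need to replicate:

```python
# Hypothetical sketch: fit a simple regression of post-training performance
# on training hours, then check a couple of assumptions an analyst would
# look at before believing the coefficient. The data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
hours = rng.uniform(1, 20, size=120)
# Simulated scores with non-constant variance, a common real-world violation.
performance = 50 + 1.5 * hours + rng.normal(0, hours, size=120)

slope, intercept, r, p, stderr = stats.linregress(hours, performance)
residuals = performance - (intercept + slope * hours)

# Check 1: residuals should look roughly normal.
normality_p = stats.shapiro(residuals).pvalue

# Check 2: residual spread shouldn't grow with the predictor
# (a crude heteroscedasticity check via |residuals| vs. hours).
hetero_r, hetero_p = stats.pearsonr(hours, np.abs(residuals))

print(f"slope = {slope:.2f}, p = {p:.3g}")
print(f"residual normality p = {normality_p:.3g}")
print(f"|residual| vs. hours: r = {hetero_r:.2f}, p = {hetero_p:.3g}")
# A naive report stops at the slope and its p-value; an expert sees the
# heteroscedasticity and reaches for a different model or robust errors.
```

None of that belongs on a business-facing slide, but skipping it is how analyses quietly go wrong.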

The good news is, there are ways to automate the most expensive parts, steps one and two. The bad news is, this isn’t as easy as it sounds. For example, quantitative experts will generally try to use the simplest method possible. Sounds easy, right? Just have a bunch of simple methods that can be applied, and some guidance on how to pick among them. Except “simplest” is predicated on a deep understanding of tradeoffs and the nature of the specific data, involving an intense feedback loop between the data and the researcher. That feedback loop extends all the way back to questions of data collection and cleaning, not just algorithm choice.

(Figure: algorithm analysis. Image credit: Jonah Sol Gabry via andrewgelman.com)
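To make the point less abstract, here is a small, hypothetical sketch (my own, with illustrative names and thresholds) of what “pick the simplest method” already involves for something as basic as comparing scores between two groups; every branch depends on properties of the data you only discover by looking:

```python
# Hypothetical sketch: even "just compare the two groups" branches on what
# the data look like. Thresholds and choices here are illustrative only.
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Return (method_used, p_value) for comparing two score samples."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    a, b = a[~np.isnan(a)], b[~np.isnan(b)]              # cleaning decision

    if min(len(a), len(b)) < 20:
        # Small samples: normality checks have little power, so many
        # analysts go straight to a rank-based test.
        return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue

    looks_normal = (stats.shapiro(a).pvalue > alpha and
                    stats.shapiro(b).pvalue > alpha)
    equal_var = stats.levene(a, b).pvalue > alpha        # variance check

    if looks_normal:
        # Student's t if variances look equal, Welch's t otherwise.
        return "t-test", stats.ttest_ind(a, b, equal_var=equal_var).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue
```

Each of those branches encodes a judgment call a researcher would normally revisit against the actual data, which is exactly why hard-coding “the simplest method” is harder than it sounds.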

Additionally, the “simplest” model is often complex in ways that make step three, communicating results to business stakeholders, more difficult for anyone except an expert in the model. If the analysis is being done automatically by a non-expert, there is no expert handy to do that communication.

Believe me, I know perfect is the enemy of good. But the naive use of quantitative algorithms almost never gives “technically flawed but serviceable” results; it gives “unusably bad, and often dangerously misleading” ones.
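For one made-up example of “dangerously misleading” (a standard confounding story sketched in code, not data from any real program): a naive comparison of trained versus untrained employees that ignores who opts into training can make a modest effect look enormous.

```python
# Hypothetical confounding sketch: experienced employees opt into training
# more often AND perform better regardless of training, so a naive
# comparison credits their experience to the course. All numbers invented.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
experience = rng.uniform(0, 10, n)            # years on the job (confounder)
trained = rng.random(n) < experience / 12     # veterans opt in more often
true_effect = 2.0                             # true training effect, points
performance = (40 + 3.0 * experience + true_effect * trained
               + rng.normal(0, 5, n))

naive_gap = performance[trained].mean() - performance[~trained].mean()
print(f"naive trained-vs-untrained gap: {naive_gap:.1f} points")  # ~10, not 2

# Crudely adjusting for experience (comparing within experience bands)
# gets much closer to the true 2-point effect.
bands = np.digitize(experience, bins=np.arange(0, 10, 2))
within_band_gaps = [
    performance[(bands == b) & trained].mean()
    - performance[(bands == b) & ~trained].mean()
    for b in np.unique(bands)
    if trained[bands == b].any() and (~trained[bands == b]).any()
]
print(f"experience-adjusted gap: {np.mean(within_band_gaps):.1f} points")
```

The naive number isn’t just noisy; it points decision-makers at the wrong conclusion with great confidence.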

But as I said, there are ways to automate and reduce complexity. It is possible to make tools that use more complex algorithms to reduce the data preparation, algorithmic knowledge, and explanation complexity needed. Seems counterintuitive that adding complexity reduces all those things, right? For a good-but-imperfect analogy, consider Google search. The actual search algorithms Google uses are hideously complex (they’ve long since moved past naive PageRank), but with them Google has managed to create a (mostly) automated process that goes from the raw Internet to typing a question in a box and getting back useful answers. Back when Google applied a much simpler algorithm (or worse, back before Google), searching for many simple things required much more expertise, which is why sites like ChaCha, which made searching even easier for users, could flourish; as Google has become better at anticipating typical needs, those sites have declined.

Whew. I’ve already gone on longer than I intended to for any one quote, but I’m glad for the chance to talk about some of this in a more extended fashion. For those who made it this far and want to talk even more, there’s a comment section below, or even better there’s Twitter, where I’m @fugu13. This post will be updated with links to further posts in the series as I write them. At this rate… I won’t be done with the read-along for an appreciable while. I’ll try not to write quite so much about quite so short a span of pages quite so often as I go.

Interested to learn more? Read “Telling Training’s Story - Part 2”.
