TL;DR — second in read-along series; human algorithmic thinking matters (read Papert); L&D’s systemic connections abound
You can read Part 1 here
I left this series for longer than I hoped, but better to continue now than never. Last time I reached page eight of Telling Training's Story, and ended with a discussion of how solving complex problems well for non-experts generally means moving complexity into algorithms. One thing I didn't cover in much detail is the nature of algorithms themselves. A reasonable reader might think I only meant things computers do; I do not. Algorithms can be implemented by humans.
Learning to apply algorithmic thinking, and specific algorithms, to problems is entirely possible for humans from a very early age. It can lead to dramatically impressive insights, and it integrates with the full depth of other human faculties — it is not merely replicating what a computer does. Seymour Papert is the greatest thinker in this area. Read his Mindstorms as a starting place.
Brinkerhoff’s own Success Case Method is an excellent example of a human-implemented algorithmic approach. Some steps are mechanistic, but many of them use human judgment, within an algorithmic framework, to accomplish far stronger outcomes than other approaches can achieve. “Human-implemented algorithms” have a rich history in L&D: they’re commonly laid out as “job aids” and the like. So, back to the text.
The section starting on page nine is titled, "Demonstrating Impact, One Trainee at a Time". Here, Brinkerhoff starts his explication of both the reasoning behind and the process for the Success Case Method. After guiding the reader through a stylized evaluation approach and criticisms of it, he lays out a remarkably underappreciated thesis:
“[W]e systematically raise and test the answers to… [Basic Impact Claim] questions… We do this by asking questions directly… we ask other people… We also look for evidence that would substantiate claims of impact learning, performance, and outcomes in documents and records. In addition, we test alternative explanations, such as whether a change in office procedures or market conditions may lead to equally significant performance improvement. If we find that [someone] really learned and used something from the training, and we could not find evidence that any alternative explanation… was valid, we have to conclude that this training probably did work.”
Chapter One, Getting to the Heart of Training Impact, pp. 11-12.
I say underappreciated because I often see evaluation veering the other way: seeking things to add to the attributed effects, rather than trying to rule out as much as possible and leaving only the clearest outcomes behind. But, beyond implementing the text directly, observe the analytical, algorithmic thinking underlying the process: "systematically raise and test", "look for evidence", "test alternative explanations". This style of thinking, married with professional expertise and ethics around the practice of L&D, will support L&D as it takes a greater role in the success of organizations and the individuals within them. As Clark Quinn said recently, "if L&D were truly enabling optimal execution as well as facilitating continual innovation (read: learning), then they'd be as critical to the organization as IT."
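The logic Brinkerhoff describes — raise a claim, substantiate it with evidence, then try to rule out every alternative explanation before concluding anything — can be sketched as a small decision procedure. To be clear, this is purely my illustration of that reasoning pattern, not anything from the book; the class, function, and field names are all invented:

```python
from dataclasses import dataclass, field

@dataclass
class ImpactClaim:
    """One claimed chain from training to a business outcome (hypothetical structure)."""
    description: str
    evidence: list = field(default_factory=list)      # corroborating sources: interviews, records
    alternatives: dict = field(default_factory=dict)  # alternative explanation -> ruled out (True/False)

def evaluate_claim(claim: ImpactClaim) -> str:
    # Step 1: the claim must be substantiated by at least one evidence source.
    if not claim.evidence:
        return "unsupported"
    # Step 2: every plausible alternative explanation must be ruled out.
    if not all(claim.alternatives.values()):
        return "confounded"
    # Only then do we conclude the training *probably* worked.
    return "probably worked"

supported = ImpactClaim(
    description="Rep used new questioning technique to retain a key account",
    evidence=["interview with rep", "interview with manager", "sales records"],
    alternatives={"market conditions changed": True, "new incentive plan": True},
)

confounded = ImpactClaim(
    description="Team's error rate dropped after training",
    evidence=["quality dashboard"],
    alternatives={"office procedures also changed": False},
)
```

Note how the procedure is biased toward ruling things out: a single alternative explanation left standing blocks the conclusion, which mirrors the "leave only the clearest outcomes behind" stance rather than the additive one.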
Quinn emphasizes that L&D’s greater role would mean:
“facilitating productive engagement and interaction throughout the workflow”.

L&D’s activity isn’t isolated; no organizational function is. A panoply of systems comes together to make any business outcome happen. But that doesn’t mean we can’t learn about the contributions of those systems, which is Brinkerhoff’s next point:
“Training alone is *never* the sole factor in bringing about improved performance, and is often not even the major contributor. Given this, we never try to make an impact claim for training alone. Nor do we try, as some popular evaluation methods and models do, to estimate, isolate, or tease out the difference that training alone might have contributed…

…we are very content with being able to show that training made a difference, and an important difference, and that the training contributed to valuable outcomes. In fact, the Success Case Method has the additional goal of pinpointing exactly what additional factors played a role in the success of the training such as a manager’s commitment or a new incentive. Training is *always* dependent upon the interaction of these other performance system factors in the improvement of performance.”
I’ve added emphasis in two places. Brinkerhoff is not given to absolute declarations, preferring to qualify and moderate, as with any good analyst, but here we see two big absolutes in quick succession, emphasizing the systemic nature of L&D outcomes. I think that’s very important.
I’ve said that no organizational function is isolated, but I’m going to argue for something stronger: L&D should be the least isolated organizational function. Take a look at major trends in L&D, such as social networks, performance support, and employee-generated content. Ultimately, these are about helping the systems in play at work become better forms of themselves. That’s radically different from sales, marketing, et cetera. Other functions connect a purpose to the business, but L&D’s purpose is the business. All those L&D trends are about being an active agent of change in how people relate to people, how people relate to systems, how systems relate to other systems, and how change propagates in an organization. The L&D of the future must focus on feedback loops, wicked problems, complex interactions, and much more.
And that’s as good a point as any to pause. At this rate I’ll finish sometime next decade, but I expect the pace to pick up as the book shifts from laying out core ideas to operationalizing them, and then to case studies. Please, share some of your thoughts on what I’ve talked about, especially where I don’t have it right. The best place is Twitter, where I’m @fugu13. Thanks for reading :)
The referenced book from Papert can be found here.