Developing effective learning products should be a central goal of all learning businesses, and one way to gauge effectiveness and then refine learning products is with learner surveys. In this episode of the Leading Learning Podcast, co-host Jeff Cobb talks with a true expert in the domains of learner surveys and learning effectiveness: Dr. Will Thalheimer. Will is a consultant, speaker, researcher, and author of Performance-Focused Learner Surveys: Using Distinctive Questioning to Get Actionable Data and Guide Learning Effectiveness.
Jeff and Will talk about four pillars of training effectiveness, learner surveys and smile sheets, Will’s Learning Transfer Evaluation Model (LTEM), distinctive questioning for evaluations, the role of evaluation in making learning products more effective, ways to get more learners to respond to surveys, and the important role of translating research into practical recommendations.
To tune in, listen below. To make sure you catch all future episodes, be sure to subscribe via RSS, Apple Podcasts, Spotify, Stitcher Radio, iHeartRadio, PodBean, or any podcatcher service you may use (e.g., Overcast). And, if you like the podcast, be sure to give it a tweet.
Listen to the Show
Access the Transcript
Read the Show Notes
Will Thalheimer: [00:00:00] A lot of times we don’t ask about that. So we’re not asking about learning effectiveness. We’re asking about did you like the coffee? Did you like the course? Did you like the instructor? Things like that. The better data, the more precise, the more it’s related to learning effectiveness, the more successful we’re going to be.
Celisa Steele: [00:00:21] I’m Celisa Steele.
Jeff Cobb: [00:00:23] I’m Jeff Cobb, and this is the Leading Learning Podcast.
Celisa Steele: [00:00:31] Developing effective learning products should be a central goal of all learning businesses, and one way to gauge effectiveness and then refine learning products is with learner surveys. In this episode, number 379, we talk with a true expert in the domains of learner surveys and learning effectiveness: Dr. Will Thalheimer. Will is a consultant, speaker, researcher, and he does his work through Work-Learning Research. He’s also the author of Performance-Focused Learner Surveys: Using Distinctive Questioning to Get Actionable Data and Guide Learning Effectiveness. Jeff and Will talk about four pillars of training effectiveness, learner surveys and smile sheets, Will’s Learning Transfer Evaluation Model, distinctive questioning and questions to use in your evaluations, the role of evaluation in making learning products more effective, ways to get more learners to respond to surveys, and the important role Will and others play in translating research into practical recommendations. Jeff and Will spoke in September 2023.
A Research-Backed Approach
Jeff Cobb: [00:01:48] Could you say a bit more about how you go about your work? Because I know you read a lot. You’re very into the research, so just say a little bit more about what’s involved in you being able to be the expert you are in the areas that you are an expert in.
Will Thalheimer: [00:02:04] Well, sure. So really there are two aspects of it. One, I’ve been a practitioner for a long time. So I’ve been an instructional designer, a simulation architect—I had that title once. That was actually on my business card. I’ve been a trainer. I’ve been a project manager, ran a leadership development product line. I’ve been a consultant for a while. Speaker. So I’ve got all that. But I also focus on the learning sciences. What I’ve done over the years, some years—actually many years—is read over 200 articles from the scientific, refereed journals on learning, memory, and instruction and then translate those into practical recommendations. There’s a lot of jargon in them. There are a lot of statistics. There’s a lot of this and that, but I try to cull the most important stuff. And I’ve used that in my work over the years, doing what I call learning audits. I’d go in, and I would research-benchmark a learning program. I’d say, “Hey, this is how you’re aligned with the research. This is how you’re misaligned.” Or “These are the recommendations for going forward.” But also, along the way, I got very interested in learning evaluation. I noticed that some of what we did in learning evaluation just didn’t make sense from a learning science perspective.
Will Thalheimer: [00:03:19] What we knew about learning wasn’t aligned. I’ll give you an example. Oftentimes, when we measure learning, we measure right at the end of learning. Well, the issue with that is that people forget. And so, if we measure them right at the end of learning, everything is top of mind. And so we’re getting a really biased view. We’re not getting a complete view. And we might get some misinformation about how well we’re doing because, as learning professionals, we need to not just help people learn, but we also need to help them not forget or to remember and be able to apply it as well. So just some biases like that. I got really fascinated. I started writing research-to-practice reports on learning evaluation, and then I saw this statistic once from a meta-analysis, a scientific study of many other scientific studies, and it said that our smile sheets were correlated with learning results at 0.09. Anything below 0.30 is a weak correlation. So 0.09 is virtually no correlation at all. My first instinct, when I saw that, was, wait a minute, maybe we should just not use these things. And then I realized, wait a minute, we’ve been doing this for decades. It’s a tradition.
Will Thalheimer: [00:04:35] And it’s also respectful to ask our learners what they think. And so I said, “Well, can we make them better?” And I wrote a book on that, published in 2016, rewrote it for 2022 in a second edition. But I’ve gotten interested in learning evaluation because it’s part of the way we can get feedback about the work that we do. We want to create the most effective learning possible, and so we should look at the science of learning to design our learning, but we also ought to be measuring how well we’re doing so we can create feedback loops and iterative cycles of improvement. I help organizations do learning evaluation as well. So using the learning sciences is one aspect of my work. The other aspect is using learning evaluation, whether it be learner surveys—helping people develop good learner surveys—but also taking it further. Some people want to do a full learning evaluation study. They want to look at their KPIs. They want to look at other ways to measure besides learner surveys. There’s a lot of complexity there, but I do that work as well. And I enjoy both.
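The 0.09 correlation Will cites is easier to interpret when squared into variance explained. A quick sketch in Python (the correlation values come from the conversation; the helper function is just an illustration, not anything from the book):

```python
def variance_explained(r: float) -> float:
    """Square a correlation coefficient to get the share of variance
    in one measure accounted for by the other."""
    return r ** 2

# The meta-analytic correlation between smile-sheet ratings and
# learning results:
print(f"{variance_explained(0.09):.4f}")  # 0.0081 -> under 1% of variance
# The 0.30 threshold, below which a correlation counts as weak:
print(f"{variance_explained(0.30):.4f}")  # 0.0900 -> 9% of variance
```

In other words, typical smile-sheet scores account for well under 1 percent of the variation in learning results, which is why Will calls it virtually no correlation at all.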
Jeff Cobb: [00:05:43] Well, we’re going to focus mainly on learner evaluation in this conversation, though we may get into some other areas because I’ve got you here, so I want to be able to ask you about a lot of different things. But, as you mentioned, your research led to an entire book on learner evaluation, which we did talk to you about before on the podcast—hard to believe that was more than 300 episodes ago: Performance-Focused Smile Sheets. But you do have the second edition out, relatively recently, that you referenced, so we want to return to that. And, because we have talked about the first edition of the book on the podcast before, I don’t want to rehash too much of that territory. We’ll definitely link to that previous episode and strongly recommend that listeners go have a listen to that. But I think we probably do need to make sure we frame the essential problem or challenge that you address. To tee that up, I’d like to quote directly from the book, from a place in the introduction where you essentially, as I see it, throw down the gauntlet, in a way. So here’s what you say: “Learner surveys, as typically designed, do not just tell us nothing. They tell us worse than nothing. They focus our worries toward the wrong things. They make us think our learning interventions are more effective than they are. More than any other practice in our field, they have done the most damage.” Now, those are pretty damning words.
Will Thalheimer: [00:07:12] I love that. That’s poetry to me.
Jeff Cobb: [00:07:15] You’ve gotten into it a little bit, but what, from your viewpoint, is so broken and even damaging about that traditional approach to learner surveys that we’ve taken?
Will Thalheimer: [00:07:28] Well, I’ll just reiterate one thing. We’re trying to do our work, we’re trying to be effective, and we have lots of ways to do that. Our work’s really hard, first of all. We do noble work. We help people in their jobs and their careers, etcetera, and that’s important stuff. But how do we do a good job? Well, there’s all these tools we have to know. We have to have all these skills, from emotional intelligence to technical skills. But one thing we should probably be doing is doing some evaluation. It’s baked into some of our legacy, legendary models, like the ADDIE model, the big E at the end of ADDIE. We know we ought to do it, but some of the feedback we’re getting from our smile sheets tells us we’re doing great. In fact, if you spend any time looking at the typical smile sheet data, all the data goes from 3.8 to 4.5, and there’s no differentiation. Well, all that looks pretty good. So then what do we do? We get paralyzed. We don’t do anything. I speak at a lot of conferences, and, almost invariably, somebody raises their hand at the conference and says, “Will, we gather all this data. We don’t even look at it.” Or “We gather all this data, and we’re paralyzed by it. We don’t really know what to do.” So that’s an issue. A real profession—you put your stuff out there, you get feedback on it, and you make it better. There are iterative loops.
Will Thalheimer: [00:08:55] I watched some football yesterday. The football teams go back and look at the videotape. I guess it’s not tape anymore, but they look at the video, and they see what did we do right? What did we do wrong? They’re always trying to learn. So we should do the same things. The danger is that we think we’re doing better than we are, and we don’t investigate at all, or we don’t focus on the right things. So we focus on the learner satisfaction or their sense of the reputation of the course, but we don’t look at some of the things that we know make learning work. In the book, I talk about the four pillars of training effectiveness. Do the learners understand? Are they motivated to apply what they’ve learned? Do they remember? Are there after-training supports that support learning transfer? These are some of the essentials. It’s an oversimplification. We need more than that. But, if you have those four elements, that’s really strong. A lot of times we don’t ask about that. So we’re not asking about learning effectiveness. We’re asking about did you like the coffee? Did you like the course? Did you like the instructor? Things like that. The better data, the more precise, the more it’s related to learning effectiveness, the more successful we’re going to be.
Partner with Tagoras
Celisa Steele: [00:10:10] At Tagoras, we’re experts in the global business of lifelong learning, and we use our expertise to help clients better understand their markets, connect with new customers, make the right investment decisions, and grow their learning businesses. We achieve these goals through expert market assessment, strategy formulation, and platform selection services. If you’re looking for a partner to help your learning business achieve greater reach, revenue, and impact, learn more at tagoras.com/services.
Performance-Focused Learner Surveys, Distinctive Questioning, and Increasing Response Rates
Jeff Cobb: [00:10:44] Prior to our conversation, I did some robust investigative journalism because you have this new edition out, and I discovered there are a number of, what seem to be, important changes in this new edition. I’ll start with changes to the title. I know you’re a thoughtful guy and probably don’t take title changes lightly, so I thought it might be worth talking about these a bit. That first edition was titled Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form. This new one is Performance-Focused Learner Surveys: Using Distinctive Questioning to Get Actionable Data and Guide Learning Effectiveness. So it seems like only “performance-focused” actually survived the two titles, which does seem important, obviously. But can you tell us about the other changes and why you felt it was important to make those? I’m assuming that wasn’t just a marketing move.
Will Thalheimer: [00:11:38] In the first edition, I looked around me, here, in North America, and I saw that “smile sheets” was the most common term—and there are a number of other terms (reaction forms, level ones, happy sheets, etcetera, and a bunch of different words around the world as well)—but “smile sheets” has a little bit of a derogatory connotation, so I was trying to rehabilitate the term. I thought, if we could create better ones, that we could agree that they were okay. But I realized, from the first book to the second book, that that was not working. And so the more generic term, which is not laden with any negative connotation, is “learner surveys.” We are surveying our learners, so I went with that: “learner surveys.” In terms of the subtitle, I really wanted to focus on the value-add. The first one was…[laughing] what was the first title? The Dangerous Art Form or something like….?
Jeff Cobb: [00:12:37] A Radical Rethinking of a Dangerous Art Form. Yes.
Will Thalheimer: [00:12:39] So the first edition, I wanted to get people shaken up about this: “Hey, pay attention.” The second edition, two things happened. One, I wanted to focus on data and what you could do with it. But also people told me—between the first edition and the second edition—“Will, you need a name for the type of questions you’re recommending.” And so I had a hard time thinking about this, but I finally came up with the term “distinctive questions” or “distinctive questioning.” And that’s actually critical because the big difference between a traditional smile sheet and a performance-focused learner survey is that we’re no longer using Likert scales; we’re no longer using numeric scales. Sometimes, maybe, but we’re trying to get away from them because they’re too fuzzy. They have three problems. One, they’re not motivating—people circle the same number all the way down our smile sheets. We’re not supporting the learners in thinking about the learning because they’re choosing between strongly agree and agree. The data we get is better on the performance-focused learner surveys because, instead of getting an average, you’re getting a sense of how many, what percentage of people chose this answer, this very concrete answer choice. So it helps the learners make decisions, and it helps us, as those who are consuming, looking at the data, make decisions about, “Well, is this good or bad? What should we do about it?”
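The averages-versus-percentages point can be made concrete with a short sketch. The answer choices below are hypothetical, invented for illustration rather than taken from the book; the mechanics are just a tally of which concrete option each learner picked:

```python
from collections import Counter

# Hypothetical responses to a distinctive question: each learner picks
# one concrete answer choice rather than a 1-to-5 agreement rating.
responses = [
    "I can apply what I learned without further support",
    "I need more practice before I can apply it",
    "I can apply what I learned without further support",
    "I understood the concepts but cannot yet apply them",
    "I need more practice before I can apply it",
    "I can apply what I learned without further support",
]

# Report the percentage choosing each option instead of a fuzzy average.
counts = Counter(responses)
for choice, n in counts.most_common():
    print(f"{n / len(responses):.0%}  {choice}")
```

Where a Likert mean of 4.1 leaves you guessing, a readout like “33% need more practice before they can apply it” points directly at what to fix.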
Jeff Cobb: [00:14:05] And there’s something in there too—and I think you even referenced this in your earlier conversation with Celisa—about learners know when they’re being asked worthwhile questions. Basically, it improves motivation. They’re more likely to actually engage with you and say, “Okay, this person is taking me seriously.”
Will Thalheimer: [00:14:24] Absolutely. One of the big questions everybody asks me is, “Will, how do we get a better response rate for our learner surveys? We only got 10 or 15 percent filling them out.” I actually added a chapter in this edition, the second edition, with 40 or 50 ways to do that. But the two most important ways are these. First, use better questions because they’re more motivating. The learners look at those Likert scales, and they go, “Oh!” Their eyes glaze over. The second way to get better response rates is to give the learners time to answer them during the learning. Don’t wait. Don’t even send it out five seconds after the learning because then it’s a separate thing. You’re really trying to create a partnership between your learners and you, as the course creators or trainers: “Hey, we really need your feedback on this.” So those two things are the major things. But, if you want more, there are 40 additional things in the book.
Comments and Open-Ended Questions
Jeff Cobb: [00:15:24] And you just alluded to another area that my investigative journalism revealed, which is that there are, I think, five new chapters, roughly 100 new pages in this book. One of those is on getting responses, which you talked about in the first book, but now there is a chapter devoted to it, which is something I definitely wanted to touch on. So I’m glad you brought that up. We can’t go into each of those new chapters in depth, but another one that you added this time was around comment questions, or open-ended questions, which you didn’t address the first time around. And I’m glad you did this time because I find that organizations tend to shy away from these. I think partly because they feel like the analysis is going to be labor-intensive if they’re surveying a thousand learners and trying to digest all those. Comment questions can be somewhat labor-intensive, but, on the one hand, technology is changing that. AI is coming along and making it much, much easier to analyze those questions. And, technology aside, they’re just important. Can you talk about the use of comment questions, or open-ended questions, as part of learner surveys and evaluations?
Will Thalheimer: [00:16:37] Yes, actually, I’m embarrassed by not talking about them in the first edition of the book because they were important. I just had a blind spot about it. I used to be a highly mediocre leadership trainer, and one of the ways I got better was actually reading the comments on my learner surveys. So, if you ask a choice question or a forced choice question, then you get a sense of the whole group; you can look at it, see where everybody stands. But, if you ask a comment question, you’re getting individualized responses. So it tends to be richer; it tends to be more detailed. Really critical: unless it’s a short learning experience, I always try to add three questions at the end. What’s going well? What should we definitely keep as part of the training? What could have been better?
Will Thalheimer: [00:17:27] And I usually admonish people, saying, “Hey, we really need your critical feedback on this.” And then, the third question, the third open-ended question is, “Hey, is there anything else I should have asked? Anything else you want to tell us?” Now, not a lot of people will answer that last question, but sometimes the people that do will give you the most important information. They’re the ones that are willing to go the extra mile, and they have insights that you might not otherwise think of. But there’s one other thing you can do with comment questions, and that is sometimes you can pair them with choice questions. So you ask a choice question, and then you ask a follow-up question like, “Hey, now, in your own words, dah-dah-dah-dah-dah.” You don’t want to do that all the time because you’ll over-survey your folks; you’ll give them survey fatigue. But sometimes that’s really valuable.
Jeff Cobb: [00:18:12] Do you have any tips from your own practice, from your work with clients around how to go about analyzing, getting the most out of the information you get from those comment questions? Because you may get a wide range of diverse views, just depending on the question and your audience, and somehow you’ve got to synthesize that into some action that you’re going to take. How do you approach that?
Will Thalheimer: [00:18:39] As you said, there are going to be new opportunities with AI analyzing that. Some people do what’s called sentiment analysis. I wouldn’t recommend it because it’s pretty sloppy, but that’s basically the idea. You put it through a program that gives you a sense of whether most things are positive or negative, but that can be misinterpreted. You can do category sorting if you really want to get into it. You take the comments and put them into different groups, then look at them that way. But the most common method, and the most powerful, I think, is to do a fair analysis and then take representative samples of the comments. Those quotations can be the most powerful things that you report out. People can see a bunch of data and go, “Huh?” And then you show them a quote, and they go, “Oh, wow.” So that’s a really powerful way: make sure you’re getting a representative sampling, and then pick one comment that captures the sense of what people were trying to get across.
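The category sorting Will describes can be approximated with a short script. The comments and keyword lists below are invented for illustration, and a real pass would still need human judgment to choose truly representative quotes; the sketch just shows the mechanics of bucketing comments and surfacing a candidate quote per bucket:

```python
from collections import defaultdict

# Hypothetical learner comments from an open-ended survey question.
comments = [
    "The role-play practice was the most useful part of the day.",
    "Too much lecture; I wanted more hands-on practice.",
    "Pacing felt rushed in the afternoon session.",
    "More practice scenarios, please, since the lecture ran long.",
]

# Hand-chosen category keywords: the manual sorting step, done in code.
keywords = {
    "practice": ["practice", "hands-on"],
    "pacing": ["rushed", "ran long", "pacing"],
}

# Place each comment into every category whose keywords it mentions.
buckets = defaultdict(list)
for comment in comments:
    lowered = comment.lower()
    for category, terms in keywords.items():
        if any(term in lowered for term in terms):
            buckets[category].append(comment)

# Report each category's size plus one candidate representative quote.
for category, group in sorted(buckets.items(), key=lambda kv: -len(kv[1])):
    print(f"{category}: {len(group)} comments")
    print(f'  e.g., "{group[0]}"')
```

Reporting a count per category plus one well-chosen quote mirrors Will’s advice: the numbers show how widespread a theme is, and the quote makes stakeholders feel it.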
Learning Transfer Evaluation Model (LTEM)
Jeff Cobb: [00:19:55] Now, another new chapter that you included in the book was focused on the Learning Transfer Evaluation Model, or LTEM, and that model as an alternative and, I guess, hopefully a replacement for the traditional Kirkpatrick-Katzell Model. Could you tell us more about LTEM and the advantages that it offers over Kirkpatrick? And I know this is something that I think has probably evolved since the first edition. I know you’ve written extensively about it elsewhere, and you refer people to that in the book, but it’s an important part of where you’re going with all of this investigation and recommendations around evaluation.
Will Thalheimer: [00:20:35] Yes. I published LTEM in 2018, so it’s just recently had its fifth birthday. I was working with TiER1 Performance, and every year we would put out a survey of the industry, and we asked a group of questions about evaluation. And, in 2023, we asked, “What models are you using for evaluation in your work?” And 38 percent said the Kirkpatrick Model—their organization was using Kirkpatrick. And 27 percent were using LTEM. And so I was blown away by that, that it’s got a lot of traction. I think part of that could be some sampling because I was associated with that. I think a lot of people might have taken the survey that knew me. But, still, it’s out there gaining traction all the time. Well, Kirkpatrick has been around since the 1950s—the four-level model, for your listeners who don’t know it. Level one is the reactions, the learner reactions; level two is learning; level three is behavior; and level four is results. There have been complaints about the model for many decades, but people still use it. It’s still the most popular one, and it’s still embedded in a lot of organizations. But what struck me about it was that it lacked a little bit of learning wisdom. And what I mean by that is it puts learning all in one bucket, at level two, and so, when you measure learning, you could measure on a wide continuum.
Will Thalheimer: [00:22:13] You could measure the regurgitation of trivia or the recall or recognition of meaningless information or meaningful information, focusing on knowledge. You could measure people’s decision-making competence, their ability to do a task, or their ability to use a skill through a number of tasks. That’s a pretty wide continuum, from trivia to skills. So it was lacking in that. The way I think about it is every model has good things about it and some bad things about it. There’s no perfect model. And so the four-level model—by the way, you called it the Kirkpatrick-Katzell Model. I’ve been encouraging that because it turns out that Raymond Katzell actually created the four-level structure, and then Donald Kirkpatrick added the names and popularized it, so I feel like both gentlemen deserve credit for that. But the four-level model has had some criticism, not just from practitioners but also in the research literature. I got thinking in probably 2016, 2017, “Well, okay, it’s doing some good stuff, but can we make it better? Can we create a different model and make it better?” I started from scratch, and I went through 11 iterations. One of my iterations was the 15-Level Model of Learning Evaluation. So, fortunately, we did not end up with that.
Jeff Cobb: [00:23:36] There are some adoption challenges with that.
Will Thalheimer: [00:23:39] Exactly. I got a lot of feedback from really smart people—like learning experts (Julie Dirksen, Clark Quinn), like evaluation experts (Rob Brinkerhoff, Ingrid Guerra-López), and a whole bunch of smart practitioners—and I published that, in version 11, in 2018. And so LTEM has eight tiers. The first two tiers are measuring attendance and measuring learner activity. And, for that, I say you should do these, maybe, but don’t validate your learning based on that because people can engage in activities and learning, they can attend, but they might learn the wrong thing. They might not learn at all. So that’s not a good way to validate our learning. And then tier 3 is learner perceptions. The way we typically measure that, not the only way we can measure that, but the typical way we measure that is through surveys.
Will Thalheimer: [00:24:33] Tier 3 has two levels to it: tier 3A, which is when you ask about learner perceptions of things that are related to learning effectiveness, and tier 3B, which is when you ask about learner satisfaction, reputation of the course, things that are not directly related to effectiveness, and then you move up. But I should say, since I’m the guy that wrote the book on performance-focused learner surveys, tier 3A, the fine print on that says, “Hey, this is good to do, but it’s not enough.” Then we go to tier 4, which is measuring knowledge. Tier 5: decision-making competence. Tier 6: task competence. So those—tier 6 and down—that’s all we measure in learning. And then we go to tier 7; we’re measuring transfer, learning transfer, behavior change. And then tier 8 is results. I call it the effects of transfer.
Will Thalheimer: [00:25:23] So you can see it’s a more complicated model. In fact, when I first published it, people said, “Oh, Will, people in the learning field, they’re not smart enough to go beyond four levels.” I was insulted. I was insulted for the field too. I was upset about that. But, anyway, it’s out there. It encourages us to think not only about what to do but what not to do as well. And people are using it to benchmark, to see where they are now, like a gap analysis. Where are we now? What are we doing this year? We’re doing this and that. And then what could we do? And then you look at LTEM, and you see these are all our options. People are using it to also motivate their teams to think about learning design. I didn’t design it this way, but it turns out people are using it not only to improve their learning evaluations but also to improve their learning designs. For example, you first say, “Hey, what do we want to measure?” “Oh, we should measure decision-making.” “Okay, great.” “We should measure behavior change.” “Oh, great.” And so people think about those things, and then they say, “Oh, if we’re going to be evaluated on decision-making competence, we should put more decision-making practice in our learning.” And so teams get inspired to do better learning designs. So it’s had that advantage as well.
Jeff Cobb: [00:26:53] I definitely do encourage folks to familiarize themselves with LTEM if they haven’t yet, and we’ll be sure to link to resources on that in the show notes. You mentioned Kirkpatrick really coming out in the ’50s. It feels sometimes like it was brought down from the mountain with Moses, the level of deference that it’s given. So it’s good to have a different model to look at.
Learning Science and Learning Myths
Jeff Cobb: [00:27:24] A couple more areas I’d love to ask you about before we wrap up today. One is you mentioned learning science at the beginning of our conversation. Learning science is obviously a very important part of your research and your writing. And I know one of the things that you will often do is take different myths to task that are out there around learning. I’m wondering, right now, what do you feel are some of the myths that are persisting that the learning science is disproving, basically, but that still hang on and are still causing trouble out there?
Will Thalheimer: [00:28:07] Sure. Let me first say that the learning science isn’t the only place that we should get our wisdom from. There are a lot of other fields and backgrounds that we need to have to be good at our work. But the reason I got focused on debunking some of the myths that were out there is because they hold us back. In the medical profession, there’s a statement, “First, do no harm.” And so that’s why I focused on some of the myths that are out there. So the big ones that I’ve seen over time that are still pretty…they’re getting less and less. Actually, in the last 10 years, there’s been this amazing shift in the learning space, in the social media, in the conversations we’re having around learning. I’ll just go through two of them. One was this myth that people remember 10 percent of what they see, 20 percent of what they hear, 30 percent of what they read, something like that—because they always change a little bit. That one was usually put over a pyramid—the pyramid myth—and it’s completely disproven. There’s never been any research behind that. It sends some bad signals. The other myth is the learning styles myth. It seemed to have good face validity that, hey, people are different.
Will Thalheimer: [00:29:23] We all like to believe we’re different, and so we should treat all our learners differently. But it turns out that the way we’ve separated people into learning styles is not that effective. There’ve been all these research reviews on that over the years that show that, if you focus on that, you’re really not getting the bang for the buck; it’s not supporting your learners. You’d be better off, instead of wasting time redesigning your learning for learning styles, redesigning it for something else, like more retrieval practice, better context alignment, using the spacing effect, giving better feedback, guiding attention better. Those are the kinds of things that we know work, so spend your time doing those. I’m very optimistic. I’m always optimistic. You go have a conversation now, or somebody puts something up on LinkedIn, and, if they put the learning styles myth or the pyramid myth on there, it’s like, whoa, somebody’s going to speak up. And this was not true 10 years ago. People were not doing it. And so that myth got circulated around and around, and now people put the brakes on. So that’s good. That’s good for an industry. We are maturing in that.
Will Thalheimer: [00:30:35] Now, those aren’t the only two myths. There are all kinds of myths out there. We go through fads. Neuroscience was big, and there was some truth, some mythology. What are you worried about now, Will? Well, maybe this AI thing. It’s going to make a difference, but are we going to develop some mythologies around how to use it? Yes, we are.
Jeff Cobb: [00:31:00] Yes.
Will Thalheimer: [00:31:00] Anytime there’s a new technology…. I’ve been around a while. I know you can’t see me here, but I turned 65 years old this year. I’ve been in the field over 35 years. I’ve seen different technologies come and go. In the beginning, when there’s a new technology, we all get excited, and we all mess things up first until we make it better. I think we’re going to do the same thing with AI. I definitely think it’s coming. I’ve been reading some books on AI, and it’s convinced me that it’s going to make a difference in our work and in our lives. But, in the beginning, particularly in the learning field, where we get sold too easily on things that are not working that well, we should be open to opportunities but keep some skepticism as well.
Lessons Learned Inside and Outside of Solo Consulting
Jeff Cobb: [00:31:47] I think that’s very good advice. I too am encouraged that people do seem to be more aware of the science and will speak up when things are being said that are not evidence-based. I think that’s a reflection of your work and the work of others, many of whom have been on this podcast. I’m grateful that work has been done. A couple areas to wrap up with, really focusing more on your personal learning journey. I know that one thing that happened—I believe this was in the period since the first edition came out and now—is that you left solo consulting for a while or at least mostly left, and you’ve recently come back, and now you are solo consulting. So can you tell us a little bit about that, and, probably most importantly, what you learned along the way?
Will Thalheimer: [00:32:33] Yes, I had been running my consulting practice, which I called Work-Learning Research, but that sounds like it was a 10,000-person consultancy. Basically it was me. I did that for 22 years, and, due to some personal things, it made sense for me to go look for a job. And so I found a company called TiER1 Performance, worked there two and a half years, and I was a principal there with 60 or 70 other principal consultants. And it was great to collaborate with people. It was great to get some ideas that I probably wouldn’t have had access to or heard about. It was great to have some of the work that you do have force multipliers behind it. I’m back now at Work-Learning Research, having restarted it in the summer, and I’m thinking about what’s next. I’ve turned 65, so this is one of those years where you begin to reflect: “What do I want to do with my time? What is the rest of my life going to be about?” So I’m focusing now on helping, empowering learning and development professionals to get the most out of learning while also helping organizations do well in terms of learning and learning evaluation strategy. I’m finishing up a book called The CEO’s Guide.
Will Thalheimer: [00:33:55] I may change the title, but I finished the first draft. I’m now getting feedback from people. But it’s tentatively called The CEO’s Guide to Training, eLearning & Work: Reshaping Learning into a Competitive Advantage. And my goal in that book is really to help us create a better conversation, a more useful conversation between senior folks in organizations that we work with or work in and us in the learning and development field. There have been a lot of breakdowns over the years in that conversation. They don’t always know our work that well. We don’t always know how to speak their language. To me, one of the things that I think has been harmful to us is we don’t really believe in ourselves that much. Sometimes we grovel too much, and we gripe too much. I want us to begin thinking from a position of strength and develop some practices that are stronger and more aligned with what the CEO wants. But I also want to help the CEO manage us better. And so it’s a relatively short book with a lot of chapter notes if people want to go into details about research and stuff. But it’s really a conversation I’m having with a CEO. It should be fun. I’m getting good feedback on it right now. So I think that’s aligned with what I’m trying to do in my work now, empowering us as learning professionals.
Evaluating Our Own Learning
Jeff Cobb: [00:35:19] We’ll look forward to that book and also to your continuing work because it is very empowering to have what you provide and be able to use it in our work. We like to ask guests about their approach to lifelong learning, and I believe we’ve asked you that before. Given the nature of our conversation today, I’d like to ask something a little bit different, which is how do you evaluate your own learning?
Will Thalheimer: [00:35:48] That’s a great question. I’m sure, Jeff, you sent me this question in advance, but I didn’t think about it, so this will be off-the-cuff. I don’t think I do it formally, but, in some sense, I can’t help myself. I’m just curious. And, if I see a new research article, and I think, “Oh, I should know about that,” I go and read it. One of the things I do now is listen to audiobooks when I’m out for a walk or out running or whatever, and I get ideas. Sometimes I stop and write them down. I’ve got a bunch of other books that I want to write, and so I’m thinking in advance, “Oh, this is related to presentation science. Oh, this is related to leadership.” Things like that. How well am I doing in this? I don’t know. I will say that I try not to beat myself up. I’m sure I could be doing a better job. And I don’t do this consciously, but I think it’s just good to have fun—fun in learning—because that keeps you at it. If I started beating myself up—“Oh, Will, you should write down more stuff, or you should be more organized”—I wouldn’t get there because I know sometimes you think of something, and it just disappears. You’re in the car; you don’t have a chance to write it down. But I’ve heard that about creative people. I’ve studied creativity a little bit. And creative people do try to write things down, but they also recognize they’ll have some ideas that just get lost in the ether. And that’s okay. That’s okay. But I’m going to now contemplate your question. Maybe the next time you have me back, in 300 more episodes, then I’ll have a better answer for you.
Jeff Cobb: [00:37:28] Yes, that was a great answer, and I did not send that in advance because it occurred to me as I was reviewing for the show today. And it’s a question I realize I should be asking of myself more because I don’t think I’m very conscious about it. But, like you said, it’s not something to beat yourself up over. It’s just to have some consciousness and some intentionality about it, but to keep having fun, as you were saying.
Celisa Steele: [00:37:56] Will Thalheimer is a consultant, speaker, and researcher at Work-Learning Research, and he’s author of Performance-Focused Learner Surveys: Using Distinctive Questioning to Get Actionable Data and Guide Learning Effectiveness.
To make sure you don’t miss new episodes, we encourage you to subscribe via RSS, Apple Podcasts, Spotify, Stitcher Radio, iHeartRadio, PodBean, or any podcatcher service you may use (e.g., Overcast). Subscribing also gives us some data on the impact of the podcast.
We’d also be grateful if you would take a minute to rate us on Apple Podcasts or wherever you listen. We personally appreciate reviews and ratings, and they help us show up when people search for content on leading a learning business.