Dr. Bror Saxberg is passionate about learning engineering and focuses on applying what’s known about learning science and learning measurement at scale to the practical business of making effective, efficient, usable, and graceful learning environments that get learners the outcomes they need to be successful.
Bror is richly and wonderfully experienced in the world of learning, having served as the chief learning officer and co-founder of K12 Inc., the chief learning officer of Kaplan, and the head of learning sciences for the Chan Zuckerberg Initiative. Most recently, he’s the founder of LearningForge, a learning engineering consultancy that helps organizations think creatively about applying learning, development, and motivation science to their products, services, and strategies.
In this episode of the Leading Learning Podcast, co-host Celisa Steele talks with Bror about the components of expertise, why and when to perform a cognitive task analysis, and motivation’s impact on learning. They also discuss learning engineering, effective iteration, and the importance of evidence-centered design.
To tune in, listen below. To make sure you catch all future episodes, be sure to subscribe via RSS, Apple Podcasts, Spotify, Stitcher Radio, iHeartRadio, PodBean, or any podcatcher service you may use (e.g., Overcast). And, if you like the podcast, be sure to give it a tweet.
Listen to the Show
Access the Transcript
Read the Show Notes
[00:00] – Intro
An MD PhD
[01:50] – I was struck by the fact that you’re a doctor doctor—an MD and a PhD.
Bror did human and machine vision research at MIT’s Artificial Intelligence Laboratory in the 1980s. He’s an engineer who, over time, got interested in how to use what we know about human minds and human learning to act at scale.
Medicine is an engineering branch of the human biological sciences, and it looks to create practical solutions at scale. There are decades-long series of research articles in journals reporting on bedside research, not laboratory petri-dish research. Medicine wants to see if insights from the sciences lead to something valuable at the bedside.
We don’t have enough of that kind of practical, real-world research going on in learning. We aren’t doing enough evidence-gathering to see what really works in learning. How do you help adults learn? How do you give them the skills and attitudes to become top performers—not simply average performers—in their fields?
[04:11] – How do you define expertise, and what are the components of expertise?
Cognitive science research over the last 50 years or so has given us more insight into how expert minds are organized—it’s not the way most of us think, and it’s a real problem for learners and developers.
Experts don’t have a very good verbal understanding of how they make decisions. They have a huge number of patterns, processes, etc., hardwired into their long-term memory, but those disappear from verbal awareness. This is because human verbal capacities tend to be in working memory, which is quite slow.
Long-term memory, where expertise gets stashed as you do repeated practice and feedback, is very fast and can do multiple things in parallel, but it’s not very verbally accessible.
This means that we have to do real work to unpack how top performers decide and do what they do. Many teaching and training materials and environments trust that all we need are experts. Cognitive scientists figured out we need to do some careful interviewing and observation of experts making decisions and thinking aloud to unpack more of what a top-performing mind does.
[07:29] – Cognitive task analysis can help us uncover about 75 percent of what a top performer’s mind is doing.
If you’re trying to help your learners accelerate in their careers after your learning programs, don’t restrict yourself to textbooks and standardized curricula because those weren’t built by deeply understanding what top performers decide and do.
Top performers often have a range of time management and communication skills that are not well articulated in domain-centric training manuals and materials. Yet that ability to manage time and to communicate with different professionals in the workforce can be just as important to professional success as anything those top performers have in long-term memory about their domain expertise.
This points to the value in doing a cognitive task analysis, or, at least, getting well-trained instructional designers to engage with top performers to understand more, as opposed to trusting the experts to share what’s important about what they do.
If we don’t identify the outcomes we want for any learning experience, we’ll miss the target, even if we do good design for the rest of it.
Expertise is changing faster and faster.
These days you go a couple of years, and there are new tools, new technologies, and new pieces of software that top performers actually need and are using to do even better work, which means you can’t just train in your 20s to be good at something and then rely on that for 40 years and retire. You’ve got to be in a continuous improvement model.
– Bror Saxberg
We need to figure out what top performers are doing and then build that into continuous training environments.
When to Do Cognitive Task Analysis
[11:18] – When does it make sense to do cognitive task analysis, and when might a lighter, more agile approach to understanding expertise in a particular domain make sense?
Learning engineering is a design exercise. There is a certain amount of art and intuition that we bring to bear when trying to solve real-world problems and decide which are the right techniques and how many resources to apply.
Find out what’s not going well in a particular situation. In the case of workplace training, you might talk to employees to find out what was hardest in their first year on the job. It’s a way of unpacking, at least at a high level, where the issues are.
Identify the hardest, most important things that seem to be missing in training materials. Those are areas to go deeper on to see how top performers handle them. Use learning science and the best methods you can come up with to create practice, feedback, worked examples, etc.
For areas that are easier to learn, you can use lighter-weight approaches.
Lastly, iterate. Instrument your learning environment so that you can identify where it’s not going well and why. Get information about what the learner experience was like in those areas that didn’t go well. Then you can bring in more learning science and invest more in doing this as well as you possibly can once you discover what’s broken.
Check out our related series “Learning Science for Learning Businesses.”
Sponsor: WBT Systems
[15:33] – We’re grateful to WBT Systems for sponsoring the Leading Learning Podcast.
TopClass LMS provides the tools for you to become the preferred provider in your market, delivering value to learners at every stage of their working life. WBT Systems’ award-winning learning system enables delivery of impactful continuing education, professional development, and certification programs.
The TopClass LMS team supports learning businesses in using integrated learning technology to gain greater understanding of learners’ needs and behaviors, to enhance engagement, to aid recruitment and retention, and to create and grow non-dues revenue streams.
WBT Systems will work with you to truly understand your preferences, needs, and challenges to ensure that your experience with TopClass LMS is as easy and problem-free as possible. Visit leadinglearning.com/topclass to learn how to generate value and growth for your learning business and to request a demo.
Motivation’s Impact on Learning
[16:33] – What do we know about how motivation impacts learning?
Richard Clark, a cognitive scientist, surveyed a wide range of research literature to uncover what causes people to not be motivated.
The way motivation was defined was do people start, persist, and put in mental effort? And it’s important to highlight the word “like” does not appear in that set of things, right? Often, people think motivating somebody is about getting them to like what they’re doing. In fact, no, that’s actually not connected to learning.
– Bror Saxberg
If you’re using a well-designed program and you start, persist, and put in mental effort, you don’t have to like every single part of it. Liking is one reason to learn, but it’s not the only reason.
Clark also looked at behavioral, motivational, and cognitive psychology; behavioral economics; and other areas. He landed on a four-part model of what makes a difference in people’s willingness to start, persist, and put in mental effort.
There are four traps that can cause people to lose motivation:
- Values mismatch (“I don’t care enough to do this.”)
- Lack of self-efficacy (“I don’t think I’m able to do this.”)
- Disruptive emotions (“I’m too upset to do this.”)
- Attribution errors (“I don’t know what went wrong with this.”)
To learn more about these traps, see the video below. We also recommend the Harvard Business Review article “4 Reasons Good Employees Lose Their Motivation” by Richard E. Clark and Bror Saxberg.
[21:02] – Learning designers and providers should diagnose which of those four traps are happening and then intervene in the learning environment, including teachers, trainers, software, etc.
When designing learning environments, we need to pay equal attention to two kinds of design:
- Cognitive design uses learning sciences and research. (We highly recommend the book e-Learning and the Science of Instruction by Ruth Colvin Clark and Richard E. Mayer for a good synthesis of research that can guide effective cognitive design for learning. We also recommend our podcast interview with Ruth Colvin Clark.)
- Motivation design involves how to detect that someone is having trouble not because of a cognitive issue but because of a motivation issue. Motivation design is just as important as cognitive design, but it’s often underappreciated.
For the most part, if a learner can start, persist, and put mental effort into a well-designed learning program that starts from where they are and heads to important outcomes, then significant learning can happen.
[25:01] – Cognition and motivation intersect. For example, if you’re getting a learner to start something that’s new and complicated, such as learning to write persuasively, it’s important to draw on contexts that are highly familiar to and highly valued by the learner.
On the motivation side, the learner would write persuasively because they care a lot about the subject. On the cognition side, if it’s an area the learner knows a lot about, then they have a lot of information about the situation already in long-term memory, and using things that are in long-term memory doesn’t feel like effort. What’s effortful is when working memory is trying to create or learn something new.
So match the context to the learners as much as possible. Technology may play a role in helping to understand different learners’ contexts and interests and then using them as springboards for mastering more complex cognitive skills.
Once a learner starts to master complex cognitive skills, they can use those to probe new contexts. Learners go back and forth between these two different points—familiar contexts and unfamiliar skills—and use their familiar skills to explore unfamiliar contexts.
[27:23] – You wrote the preface to the Learning Engineering Toolkit, and there you assert that learning engineering should be “guided by (but not limited to) research and practice.” What, beyond research and practice, should be guiding our use of learning engineering and how we’re designing experiences?
The best engineering always stands on the shoulders of prior practice and research. But, to get the best kind of solution, you have to take into account the exact context you’re dealing with and what’s going on right now in this particular environment. Sometimes you have to make a bit of a leap.
This is true in all kinds of design and engineering work. To design inclusively, work with the people who are going to be the most affected by the learning solutions you’re putting together. Hear from them about what does and doesn’t work, what excites them, and what they hate about learning experiences. In that conversation, you should share learning science and motivation science so all the pieces are on the table.
Look at what practical solutions you could implement. Pilot and iterate in agile ways so you’re not copying and pasting from a previous exercise.
Too much work around learning and learning engineering has drawn on a software engineering model where the goal is to make a great training program, a great textbook, etc., and then copy it and get everyone to use it. That may work for software tools, but, for learning, we also need to consider the civil engineering tradition, which incorporates context.
Check out our related episode “Learning Engineering with Jim Goodell.”
[32:51] – Iteration is important in the learning engineering process. What are the effective parts of iterating well?
First, focus your desired outcomes on what students should be able to decide and do. This naturally leads you to evidence-gathering. You need to instrument for failure so you can learn from what doesn’t work and what goes wrong along the way.
You need multiple sources of evidence connected to your outcomes and motivation to see for whom this learning experience is going well. In the learning world, we tend to group everyone together and ask if it worked. Instead, we need to look at whom it worked for and when. That is, we need to pay attention to context.
You don’t want to lose sight of the fact that an intervention did work for a subset of users. But you want to look at the ones who did not succeed and try to figure out why—is it their experience, background, or their motivation?
This is where you can start to do interventions and iterate on a subset because you’ve already identified the people who are doing fine, and you don’t need to mess with that success. Look at the context in which learning is happening, gather good enough evidence, and then make evidence-based changes to see if that helps on the cognitive and/or motivation side.
It’s too easy to focus on the learner as if the learner is the only important thing in this whole situation.
In fact, context can be way more than the learner. It’s like, where is this learner learning? Who else is there? What are the trainers like? What are their peers like? What are those interactions like? There could be things that are outside the learner that are just as powerful to help the learner if you make changes to them as things that are, somehow, targeting directly to the learner. So you actually want to think about this context as more than just variables from the learner.
– Bror Saxberg
The Current State of Learning Ecosystems
[36:51] – How would you describe the current state of learning ecosystems? What is and isn’t working well? What is still misunderstood at this point?
A Need for Evidence-Based Practice
All the learning and training ecosystems worldwide, from pre-K up, have the same problem: Those designing and delivering learning don’t have evidence-based groundings in how learning and motivation work.
Bror compares and contrasts this with the situation in medicine, particularly related to the COVID pandemic. Everyone got kicked off the cliff. But, on the healthcare side, almost all of the solution-making efforts were inside the guardrails of evidence-based practice.
On the education and training side, everyone got forced into virtual learning, but there were no evidence-based guardrails. Motivation was a huge failure point for education during the pandemic. One of the biggest issues with our ecosystem right now is that we don’t have a shared story of learning that is optimistic and suggests that, if you want to get there, you probably can.
A Need for Multiple Lines of Evidence
[42:02] – Something else missing in ecosystems as a whole is multiple lines of evidence. People are beginning to understand this, but there has been too much emphasis on trying to build the one perfect solution (assessment, training course, e-learning offering). But that’s not how this works.
Bob Mislevy from Educational Testing Service (ETS) and some colleagues created a way to think about measuring learning called evidence-centered design. If you have an invisible, complex cognitive structure that you’re trying to probe, you need to use multiple lines of evidence to see if the brain is actually different in the way you hoped it would be.
A single line of evidence is very likely to lead you astray because people will manipulate that one line of evidence. When you put together multiple lines of evidence, you can begin to get information relevant to that construct, and you’re less likely to make the mistake of pushing on just one line of evidence and not realizing you missed some of the most important things.
A Need to Better Understand Motivation
[43:45] – We have underinvested in understanding motivation. Motivation ties into issues of identity, long-term memory, context, and other things that connect to cognition.
We need to get all our citizens around the globe, at all ages, to get excited about starting, persisting, and putting in mental effort into new things that they want to do, as opposed to feeling like, “I can’t do that.” “I’m too old.” “I’m not smart enough.” Or “I wish I could, but I can’t.” We need that fuel as well as better learning environments to actually power us forward as humanity, especially as things like artificial intelligence and various technologies begin to snap up what used to be simple, still complex, but relatively simple cognitive tasks, which we should no longer have people doing.
– Bror Saxberg
Research suggests it takes about 10 years of intense, deliberate practice to become world-class. If we start at age 20, that means every person could build six or seven world-class competencies. You could spend the time all on one, but what’s interesting to Bror is the combination. This gives every human a unique fingerprint of expertise that’s not replaceable by AI.
We need to get better at being able to change our competencies. We need to have the motivation to do that throughout life. Then we can each start to create interesting career paths and progressions over the long term.
[47:39] – Wrap-up
Bror Saxberg is the founder of LearningForge, a learning engineering consultancy.
To make sure you don’t miss new episodes, we encourage you to subscribe via RSS, Apple Podcasts, Spotify, Stitcher Radio, iHeartRadio, PodBean, or any podcatcher service you may use (e.g., Overcast). Subscribing also gives us some data on the impact of this particular part of our content strategy.
We’d also be grateful if you would take a minute to rate us on Apple Podcasts at leadinglearning.com/apple or wherever you listen. We personally appreciate reviews and ratings, and they help us show up when people search for content on leading a learning business.
Episodes on Related Topics: