In Part II of our very first two-part interview, we continue the conversation with Jeff De Cagna, respected contrarian thinker on the future of associating and associations, about a topic all leaders in the business of lifelong learning need to be thinking about: artificial intelligence.
In Part I, Celisa and Jeff talked about an AI first world and some common misconceptions and ethical issues surrounding AI. This time around they delve into the implications of AI for work and learning, the idea of integrated intelligences, and helpful advice to ensure the responsible and ethical future of AI.
Listen to the Show
Read the Show Notes
[00:18] – A preview of what will be covered in this episode where Celisa continues her discussion with return guest, Jeff De Cagna, speaker, author, and executive advisor for Foresight First LLC in Part II of a two-part interview about AI. See Putting Foresight – and Artificial Intelligence – First with Jeff De Cagna – Part I.
[01:41] – What are the main implications you see for work and learning as AI evolves? What challenges and opportunities are we going to see? Jeff says that as we move towards an AI first world, it’s crucial to look at work and learning as more intimately connected than we ever have before. He references the Harvard Business Review article, “Making Learning a Part of Everyday Work” by Josh Bersin and Marc Zao-Sanders, who argue for “learning in the flow of work”. From their perspective and research, they see that most knowledge workers are so busy they can only find an average of five minutes each day for any formal learning. So this idea of learning in the flow of work is about integrating learning directly into work to the point at which it essentially becomes invisible. In Jeff’s view, there’s no question that time considerations are a factor when it comes to learning – or not learning. But an even more significant consideration, he says, is human attention. He makes the distinction that our time allotment is the same for everyone (and we can quantify and manage it in different ways), but how we use our attention is more challenging to address because it’s not easily quantified—it’s finite, hugely fragmented, and fragile. We have to look at attention in two different modes. The first is attention as a resource that we apply when we’re trying to focus ourselves on the most important issue at hand. Jeff points out that this mode is in desperately short supply because we’re allocating it across so many different stimuli. The other way to look at attention is as the way we interact with and experience the world around us. This is about allowing and enabling ourselves to immerse in what is happening. One mode isn’t better than the other, but we have to figure out how to use them both well.
[06:18] – Jeff talks about how if we’re going to make human learning more generative in an AI first world, then we really need to consider how we’re using our attention more effectively within a relatively simple cycle of learning that he says knowledge workers go through every day. He illustrates how this cycle of sense-making – meaning-making – decision-making works using AI as an example. Jeff explains that this process is repeated reflexively many times a day by knowledge workers on a variety of issues. So from a learning perspective, what we need to do is help our learners become more aware of their learning processes (i.e., metacognition), something the authors of the above-referenced HBR article strongly advocate for. This will help workers develop the more intimate connection between work and learning that Jeff described.
[11:34] – Jeff adds that as we look at the future of work, we also need to help workers and learners look, at a more granular level, at the work they are doing so they can adapt in the face of automation. As companies remake the workforce, they are looking at the specific tasks and activities that workers perform and trying to identify repetitive and routine forms of work that can be accomplished with minimal interaction or collaboration as targets for automation. So Jeff says the challenge for learning leaders is to help workers develop the human skills they need to avoid becoming these targets of automation. These skills include collaboration, creativity, empathy, imagination, etc. He also thinks it’s critical, if you’re providing a credential, to examine the underlying curriculum to ensure that it isn’t forcing learners down learning pathways that only build or reinforce skills that are already, or will soon become, targets for automation. What we need is to help learners develop a combination of technical skills, digital skills, and human skills while getting away from the unhelpful dichotomy between hard and soft skills, which Jeff argues isn’t a good way of talking about it. Instead, we need to get much more specific and granular because that’s what will help our learners evaluate what skills they need in an AI first world.
Sponsor: Authentic Learning Labs
[15:24] – If you’re looking to supplement your human skills with technical and digital skills to improve your learning business, we encourage you to check out our sponsor for this quarter.
Authentic Learning Labs is an education company that brings complementary technology and services to publishers and L&D organizations to help elevate their programs. The company leverages technology like AI, data analytics, and advanced embeddable, API-based services to complement existing initiatives, offering capabilities that are typically out of reach for resource-stretched groups or growing programs needing to scale.
[16:11] – You’ve also pointed to the need for integrated intelligences—the blend of human and machine intelligences—as key to being future ready. Will you talk a bit about that integration of intelligences? What does it look like, and what are the benefits? Jeff discusses how it’s important for all organizations to acknowledge that we’re not going to be able to build the kind of future readiness we need for organizations that have been around for a long time based on human intelligence alone. What we want to try to do is look at ways to integrate human and machine intelligence to derive, not only their separate benefits, but also the amplified benefits that will come when they reinforce and strengthen each other. Returning to the learning cycle of sense-making – meaning-making – decision-making, Jeff discusses how there are some aspects of it where machine learning will provide value that is superior to what humans can accomplish on their own. But there will be places where human intelligence will be able to provide value that machines can’t offer. Another important aspect of this is being able to bring in the kind of tacit knowledge that human beings possess from their own experiences. So if we learn to integrate all of these things, we should be able to work more quickly, understand more deeply, and anticipate what’s coming next with greater consistency, to help all the stakeholders of these organizations thrive. This integration of humanity with technology needs to happen, and it has to be done in a responsible and ethical manner. Rather than framing it as machine versus human intelligence, Jeff says it’s the integration of machine and human intelligence that will really drive us forward.
[21:36] – What advice would you give to listeners in regards to AI? What should they be doing or asking or thinking about? Jeff shares four areas that he hopes will benefit people:
- It’s critical to focus our attention on what’s happening in the AI space and to look beyond product announcements, market forecasts, the opinions of tech billionaires (who are typically white men), and mainstream media. Instead of listening to the most widely recognized part of the field, go deeper to understand what people on the ground are saying about what’s happening in the space, including understanding the diversity crisis and issues around bias. He suggests checking out the AI Now Institute at New York University.
- It’s important for anyone in an organization who intends to use AI to clarify their intentions around it, establish clear principles around the application of it in learning (there are a number of existing ones, such as at the AI Now Institute at New York University), and act on those principles. One specific area Jeff warns should be among these principles is making sure steps are being taken to require learning content creators to disclose any role that AI might play in the creation of their content – and to be able to vouch for the veracity of that content. This is because there’s a growing and very serious set of concerns around deepfakes: content that isn’t real but appears to be.
- Make all decisions related to AI using diverse teams that include people with expertise outside of technology. We have to find ways to compensate for the shortcomings that exist in how AI is often developed.
- Ask every technology provider that you’re working with – and be specific – about what they’re doing to ensure that AI is being integrated into their applications in a responsible and ethical manner. It should be a red flag if somebody isn’t willing to give you an answer on this. And if they aren’t yet using AI, ask where AI integration is on their roadmap and what steps they’re taking to ensure that the implementation of AI will be done in a responsible and ethical fashion.
Sponsor: Blue Sky eLearn
[31:33] – We’re about to turn to our signature question for Jeff about one of his most powerful learning experiences, but, if you’re looking to create powerful learning experiences, we suggest you check out our sponsor for this quarter.
Blue Sky eLearn is the creator of the Path Learning Management System, an award-winning cloud-based learning solution that allows organizations to easily deliver, track, and monetize valuable education and event content online. Blue Sky also provides webinar and webcast services, helping you maximize your content and create deeper engagement with your audience across the world.
[32:23] – What is one of the most powerful learning experiences you’ve been involved in, as an adult, since finishing your formal education? Jeff talks about the role learning has played in his life since he was a kid and how grateful he is to have attended top universities in the world (for his degrees and now, for professional development opportunities). He also shares about an experience he had attending foresight practitioner training at the Institute for the Future and how it greatly impacted him and is a major reason he’s pursuing the work he does today. He also discusses how hopeful he is that, in the learning experiences he’s led over the years, he was able to provide significant value for those who participated because of his deep personal commitment to learning and helping others.
[37:00] – How to connect with Jeff and/or learn more:
- Website: https://foresightfirst.io
- Email: firstname.lastname@example.org
- Twitter: @dutyofforesight
- LinkedIn: https://www.linkedin.com/in/foresightfirst
[38:20] – Wrap-Up
We’d also appreciate it if you’d give us a rating on iTunes by going to https://www.leadinglearning.com/itunes. We personally appreciate your rating and review, but more importantly, reviews and ratings play an important role in helping the podcast show up when people search for content on learning and leading.
Finally, consider telling others about the podcast. You can send a tweet by going to leadinglearning.com/share. You can also like us on Facebook at facebook.com/leadinglifelonglearning and share us with others there. However you do it, please do help spread the good word about the podcast.
[40:31] – Sign off