With artificial intelligence (AI) being one of the top technologies already positioned to transform and disrupt just about every aspect of our lives, including the future of work and learning, it’s a topic that we could all use some foresight on. And return guest, Jeff De Cagna, executive advisor for Foresight First LLC, is a future-focused, contrarian thinker who has recently been applying his foresight lens to the vast complexities surrounding the rise of AI.
In Part I of our very first two-part interview on the Leading Learning podcast, Celisa talks with Jeff about what it means to live in an AI-first world, some common misconceptions about AI, and the numerous ethical issues surrounding AI, along with what we might do to address them.
To tune in, just click below. To make sure you catch all of the future episodes, be sure to subscribe by RSS or on iTunes. And, if you like the podcast, be sure to give it a tweet!
Listen to the Show
Read the Show Notes
[00:18] – A preview of what will be covered in this episode, in which Celisa interviews return guest Jeff De Cagna (see his first interview, Putting Foresight First with Jeff De Cagna), speaker, author, and executive advisor for Foresight First LLC, in Part I of a two-part conversation.
[02:09] – Introduction to Jeff and some additional background information about his work, including how he’s been working to build his understanding of artificial intelligence (AI).
[05:01] – Foresight is obviously a big focus of yours, and artificial intelligence is, of course, an area where foresight is needed. What does an AI-first world look like, and how close are we to one? Jeff shares that the first time he encountered the term used publicly was three years ago, when Sundar Pichai, CEO of Google, announced that the company was shifting from being mobile-first to AI-first (and other companies have followed that lead). It’s essentially a strategic choice to prioritize AI over other technologies. Jeff says it’s pretty clear to him that, broadly speaking, we are moving toward an AI-first world, but we’re not quite there yet. The real question we should all be asking is what kind of AI-first world we are creating (or is being created on our behalf by some of these large tech companies). Is it one in which corporations are the sole beneficiaries of increased automation, with the gains going only to the company and its shareholders? Or are we building an AI-first world that eliminates drudgery from our lives, augments human performance, and improves the overall well-being of our society?
[07:36] – Jeff references the New York Times article, “The Hidden Automation Agenda of the Davos Elite”, which reveals the dichotomy between public and private conversations about the potential implications of AI for workers. He also cites a report from the Pew Research Center, “Artificial Intelligence and the Future of Humans”, which set out to find whether AI would leave people better off over the next decade. The report found that, “Overall, and despite the downsides they fear, 63% of respondents in this canvassing said they are hopeful that most individuals will be mostly better off in 2030, and 37% said people will not be better off.” So, Jeff says, we can choose to be hopeful about what an AI-first world will look like, as most of the experts in the Pew report are. At the same time, we all have to exercise care: on one hand, not getting caught up in the excessive optimism of utopian views in which AI solves the world’s most intractable problems, and on the other hand, not getting caught up in the unreasoned fear of dystopian views featuring massive job losses and the like. Jeff suggests it’s better to look at these polls and recognize that the ultimate outcome is likely to land somewhere between the extremes. AI is beneficial, but there are clearly issues that need to be addressed, and he has a number of very significant concerns about AI-driven automation.
Sponsor: Blue Sky eLearn
[12:42] – Whatever the future brings, you’ll always need good partners for your learning business, and so we suggest you check out our sponsor for this quarter.
Blue Sky eLearn is the creator of the Path Learning Management System, an award-winning cloud-based learning solution that allows organizations to easily deliver, track, and monetize valuable education and event content online. Blue Sky also provides webinar and webcast services, helping you maximize your content and create deeper engagement with your audience across the world.
[13:27] – You like to be contrarian and unorthodox, so apply that lens to AI for us—what do mainstream, orthodox views of AI get wrong or miss? Jeff first explains why he has adopted contrarian, unorthodox views. He says AI is just one of many forces driving the underlying and quite comprehensive transformation of our society, a transformation that raises many complicated questions for all of us, and because they are complicated, they defy easy answers. Jeff emphasizes that it’s absolutely essential for all of us to question the orthodox beliefs (the deep-seated assumptions we make about how the world works) that have existed within our organizations for a very long time. These orthodox beliefs interfere with our ability to learn, and learning is a crucial factor in determining what happens over the next decade and beyond. Jeff has found real value in adopting the contrarian view because it ensures that alternative ways of thinking are represented in every conversation. If we limit our individual and collective thinking to what we’ve always thought, the tables where decisions get made will be missing important perspectives. So the contrarian view is essential to having conversations that lead to better decisions.
[17:06] – Jeff hits on what mainstream/orthodox views of AI get wrong or miss by sharing some common misconceptions that exist:
- AI is not new – Fascination with artificial intelligence has been a feature of human history for a long time, and as a scientific discipline it has existed since the 1950s (with lots of ups and downs). Within the last decade, though, we’ve seen a resurgence of interest and new momentum, which has led to renewed appreciation of the power of AI. Jeff notes that some believe we may experience another AI “winter” (a period when interest is again lost), but he doesn’t think that’s likely.
[20:42] –
- AI is not magic – At a foundational level, AI, and particularly machine learning, is math: most of AI is powerful algorithms being trained on massive amounts of data to make predictions (a minimal sketch of this idea appears after this list). But mainstream media tends to report on AI in a way that makes it sound like magic—and it’s not. This is a big part of why Jeff says we need to be vigilant about what we’re creating.
[23:22] –
- AGI (Artificial general intelligence) is not at hand – The major AI developments that we’re talking about today are all forms of “narrow” or “weak” AI, which means they’re basically developed with specific applications in mind. Jeff says there is ongoing research into developing AGI (sometimes referred to as “strong” AI), but we’re not there yet. Some experts believe it will take about a decade to get there and others think we’ll never get there.
Jeff notes that with all of these misconceptions (particularly with AGI), it’s important to be vigilant and mindful: there’s a lot going on, and what gets reported in the media often presents a singular view of it. Other developments on the horizon may be even more significant but receive less coverage because they lack that same magical quality.
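To ground the “math, not magic” point, here is a minimal, hypothetical sketch (not from the episode) of what “training on data to make predictions” looks like in practice: a tiny model fit by gradient descent, using fabricated numbers and nothing beyond basic arithmetic and calculus.

```python
# A minimal, hypothetical sketch of "AI is math, not magic": a tiny model
# trained on data to make predictions. All numbers are fabricated for
# illustration; the only dependency is NumPy.
import numpy as np

# Fabricated training data: hours studied (x) vs. exam score (y).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([52.0, 57.0, 61.0, 68.0, 71.0])

# Model: predicted score = w * x + b. "Training" is just minimizing the
# mean squared error by gradient descent, i.e., calculus and arithmetic.
w, b = 0.0, 0.0
learning_rate = 0.01
for _ in range(5000):
    error = (w * x + b) - y
    w -= learning_rate * (2 * error * x).mean()  # gradient of the loss w.r.t. w
    b -= learning_rate * (2 * error).mean()      # gradient of the loss w.r.t. b

print(f"learned model: score = {w:.2f} * hours + {b:.2f}")
print(f"prediction for 6 hours of study: {w * 6 + b:.1f}")
```

Nothing in the loop is mysterious: “learning” here is repeated arithmetic that nudges two numbers toward a better fit, and large-scale machine learning is the same idea with far more parameters and data.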
Sponsor: Authentic Learning Labs
[26:24] – While not magic, data analytics can be powerful mojo for a learning business, and so we encourage you to check out our sponsor for this quarter.
Authentic Learning Labs is an education company seeking to bring complementary tech and services to empower publishers and L&D organizations to help elevate their programs. The company leverages technology like AI, Data Analytics, and advanced embeddable, API-based services to complement existing initiatives, offering capabilities that are typically out of reach for resource-stretched groups or growing programs needing to scale.
[27:11] – I know you’re also thinking about the ethics of AI. What are the ethical issues of AI, and what questions do we need to be asking to deal with those issues? Jeff explains there are many ethical considerations in AI, some of which are beyond the scope of this conversation. In the Pew report, these ethical considerations are referred to as “chaos”—things like autonomous weapons, cybercrime, and the “weaponization of information” (which is already happening). Jeff is also concerned about the battle for AI supremacy between China and the US and what it means for the ethical standards that might be applied. He sees a further ethical consideration in the “have” and “have not” divide: powerful governments are in some ways driving what happens with AI, leaving a significant portion of the world’s population disadvantaged. These people are not beneficiaries of AI in any way; they are simply at the mercy of how other countries, and the companies operating in them, choose to apply it.
[30:23] – Jeff highlights three areas he says everyone in every type of organization really should be thinking about with respect to AI:
- Responsible AI –
  - Are we developing AI in a responsible manner? Are we clear about why AI, or a particular algorithmic application, is being created in the first place and about its beneficial effects for people?
  - There’s a need for greater diversity among those working on the development of algorithms. If the primary developers of AI are essentially of one background (most tend to be white men), then the problems that become visible only through different life experiences will go unseen.
  - There are issues around recognizing and removing human biases from the data sets used to train AI algorithms.
[33:27] –
- Ethical AI –
  - Are we implementing AI in an ethical manner? This means actually carrying the beneficial purposes for which it was created into the implementation, not shifting to use it for something else.
  - Protecting data privacy.
  - Preserving human agency by augmenting human performance—keeping people in a position where they are making choices rather than depending on algorithms. The goal is making people better, in general, while also protecting human dignity by not simply using AI to eliminate human workers because it serves the organization’s bottom line.
[35:02] –
- Explainable or interpretable AI –
  - Explainability – There isn’t always clarity or transparency about how AI arrives at its predictions, which can undermine trust in those outcomes. We create transparency and trust by ensuring there are ways for humans to explain how AI reaches its predictions. This is particularly important for issues in the healthcare space (a brief illustrative sketch follows this list).
  - Interpretability – The idea that, at a minimum, a human being can interpret the connection between cause and effect.
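As one concrete illustration of the explainability idea (an illustrative sketch, not a technique discussed in the episode), permutation importance estimates how much a model relies on each input by scrambling one feature at a time and measuring how much accuracy drops. The data and feature names below are fabricated.

```python
# An illustrative sketch of one common explainability technique:
# permutation importance, which estimates how much a model relies on each
# input feature. Data and feature names are fabricated; requires
# scikit-learn and NumPy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Fabricated data: 200 patients, 3 features; in this synthetic setup only
# "blood_pressure" actually determines the outcome.
feature_names = ["age", "blood_pressure", "heart_rate"]
X = rng.normal(size=(200, 3))
y = (X[:, 1] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops; a
# large drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

Techniques like this don’t fully open the black box, but they give humans a concrete, checkable account of what a model is paying attention to—the kind of transparency needed to sustain trust in AI’s predictions.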
[37:33] – Jeff adds one broader concern, though he is hopeful that, as AI unfolds in the coming years, we will learn the lessons of social and mobile as a society. Both of those platforms have created real benefits, yet we flawed human beings have also used them in ways detrimental to society. With AI, Jeff warns, the stakes could not be higher, because real-world lives and jobs depend on how we use it. We have to be more intentional than we were with either social or mobile about working collaboratively to maximize the positive outcomes of AI, while doing everything in our power to minimize the negative consequences. As individuals working in any field, we all need to take to heart our responsibility, to ourselves and others, to drive the application of AI in a direction that benefits society, while heeding the lessons we have experienced firsthand with social and mobile.
[41:43] – Wrap-Up
Note that we continue this discussion with Jeff De Cagna in Part II in the next episode. In the meantime, here’s how to connect with Jeff and/or learn more:
- Web site: https://foresightfirst.io
- Twitter: https://twitter.com/dutyofforesight
- LinkedIn: https://www.linkedin.com/in/foresightfirst
If you are getting value from the Leading Learning podcast, be sure to subscribe by RSS or on iTunes. We would be truly grateful, as subscriptions help us gauge the impact of what we’re doing.
We’d also appreciate it if you gave us a rating on iTunes by going to https://www.leadinglearning.com/itunes. We personally appreciate your rating and review, but more importantly, reviews and ratings play an important role in helping the podcast show up when people search for content on learning and leading.
And please do be sure to visit our sponsors for this quarter. Find out more about Authentic Learning Labs and Blue Sky eLearn.
Finally, consider telling others about the podcast. You can send a tweet by going to leadinglearning.com/share. You can also Like us on Facebook at facebook.com/leadinglifelonglearning and share us with others there. However you do it, please do help to share the good word about the podcast.
[43:59] – Sign off