When those of us in the business of lifelong learning think about using technology for learning, we’re usually focused first and foremost on technology that’s useful, technology that supports learning. We also usually focus on what’s practical and realistic given the technology we and our learners have access to, are familiar with, and can afford.
And while it’s important to be practical and realistic, it’s equally important for us to have a sense of what’s on the horizon: to have our eyes on the technologies that can disrupt and irrevocably change how we live and learn.
So, in this post, I want to look at three cutting-edge developments in technology: the Internet of Things, artificial intelligence, and virtual reality (IoT, AI, and VR, for short), and how those three technologies might impact the business of lifelong learning in the years ahead.
Internet of Things (IoT)
The Internet of Things (IoT) is the ever-growing network of physical objects that have Internet connectivity, along with the communication that occurs between these objects and other Internet-enabled devices and systems. 1
The idea is that ultimately pretty much any device with an on/off switch will be connected to the Internet: cell phones, alarm clocks, coffee makers, refrigerators, wearables like fitness bracelets and watches, on up to big non-home, non-personal items like an airplane engine or an oil rig drill. 2
The Internet of Things already exists, but it’s predicted to grow astronomically in the next few years. Gartner Research predicts that the typical family home will contain 500 networked devices by 2020. Ericsson forecasts 50 billion connected “things” by 2020. 3
With that kind of rapid growth, we’re on the cusp of what some call the IoE, or the Internet of Everything. And the IoT is bound to change the way we learn because it’s changing how we live.
The Internet of Things is growing along with our knowledge of brain science and how learning happens. It’s not hard to imagine scenarios where IoT fits into learning.
Think how the biometric data that wearables provide might feed into a traditional classroom learning environment. The wearable can alert the facilitator if learners are getting sleepy or hungry or stressed, and then the facilitator can respond appropriately: announce a break so learners can stretch to wake up or grab a snack to satisfy their hunger, or switch learners to an easier or different task if they’re getting stressed.
We can even imagine how networked “things” in the environment itself might respond directly to biometric data without involving a human facilitator: lights that automatically brighten to help invigorate sluggish learners or computers that serve up easier questions if a learner is showing signs of stress coupled with wrong answers.
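To make that idea concrete, here is a minimal sketch, in Python, of the kind of rule a connected classroom might run. Everything in it (the device readings, the thresholds, the action names) is a hypothetical illustration, not any particular vendor’s API.

```python
# Hypothetical sketch: a rule a networked classroom might run on biometric data.
# Device readings, thresholds, and action names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class LearnerState:
    heart_rate: int          # beats per minute, e.g., from a fitness wearable
    stress_score: float      # 0.0 (calm) to 1.0 (highly stressed)
    last_answer_correct: bool

def classroom_response(state: LearnerState) -> list[str]:
    """Return actions for the connected classroom based on one learner's state."""
    actions = []
    if state.heart_rate < 55:                     # possible drowsiness
        actions.append("brighten_lights")
        actions.append("suggest_break")
    if state.stress_score > 0.7 and not state.last_answer_correct:
        actions.append("serve_easier_question")   # ease off until confidence returns
    return actions

# Example: a sluggish learner who just missed a question
print(classroom_response(LearnerState(heart_rate=52, stress_score=0.8,
                                      last_answer_correct=False)))
# ['brighten_lights', 'suggest_break', 'serve_easier_question']
```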
And there are projects underway that use the Internet of Things to teach. In Australia, researchers are looking at using sensor gloves to provide feedback to children learning Auslan (Australian Sign Language) from a computer. A learner attempts to sign while wearing the glove; the glove feeds information back to a computer, which gives the learner feedback on the accuracy of her signing. 4
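The report describing the project doesn’t include code, but the feedback loop itself is easy to sketch: compare the glove’s sensor readings for an attempted sign against a stored reference and report how close the learner came. The sensor layout, reference values, and scoring below are invented purely for illustration.

```python
# Hypothetical sketch of the glove-feedback loop: compare finger-flex sensor
# readings to a stored reference for the target sign. All values are invented.

import math

REFERENCE_SIGNS = {
    # target sign -> expected flex reading (0-100) for thumb..pinky
    "hello": [10, 80, 80, 80, 80],
    "thanks": [20, 15, 15, 90, 90],
}

def sign_accuracy(target: str, readings: list[float]) -> float:
    """Return a 0-100 accuracy score for the learner's attempt at a sign."""
    reference = REFERENCE_SIGNS[target]
    # Root-mean-square error between attempt and reference, scaled to a score
    rmse = math.sqrt(sum((r - a) ** 2 for r, a in zip(reference, readings)) / len(reference))
    return max(0.0, 100.0 - rmse)

attempt = [12, 75, 82, 78, 70]        # one frame of glove data for "hello"
score = sign_accuracy("hello", attempt)
print(f"Accuracy: {score:.0f}%" + ("  -- nice signing!" if score > 90 else "  -- try again"))
```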
The Internet of Things is already giving us ways, and will continue to give us more, to respond to learners’ environmental and situational needs and to teach directly.
Artificial Intelligence (AI)
Artificial intelligence (AI) is intelligence exhibited by machines or software: the ability to acquire and apply knowledge and skills.
Apple’s Siri is an example of AI many are familiar with, as she’s been around since the iPhone 4S. Alexa, Amazon’s AI available via Echo and other devices, has been out since 2014. But there have been many recent developments in AI.
In April, Facebook launched Bots for Messenger at F8, its annual developers’ conference. Bots on Messenger uses AI and natural language processing combined with human help to allow people to talk to Messenger bots just like they talk to friends—or that’s the goal, anyway. So, for example, with 1-800-Flowers (an early development partner for Bots on Messenger) it’s possible for you to order flowers by sending its Messenger bot a friend’s name. Or CNN can send you a “daily digest” of stories that match your interests, and skip the topics you don’t care about.
Importantly, Messenger bots can send more than just text. They can respond with structured messages that include images, links, and call-to-action buttons. These can let users make a reservation, check in for a flight, review an e-commerce order, and more. You can swipe through product carousels and pop out to the Web to pay for a purchase. (It’s worth noting that Facebook doesn’t currently allow payments directly through a credit card added to Messenger). 5
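For readers who want a feel for what a structured message looks like under the hood, here is a rough sketch of sending a generic-template message through the Messenger Send API. The page access token, recipient ID, and product details are placeholders, and the exact fields and API version may differ from Facebook’s current documentation.

```python
# Rough sketch of sending a structured (generic template) message via the
# Messenger Send API. Token, recipient ID, and URLs are placeholders; check
# Facebook's current documentation for exact field names and API versions.

import requests

PAGE_ACCESS_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"  # placeholder

message = {
    "recipient": {"id": "RECIPIENT_PSID"},    # placeholder user ID
    "message": {
        "attachment": {
            "type": "template",
            "payload": {
                "template_type": "generic",
                "elements": [
                    {
                        "title": "Spring Bouquet",
                        "image_url": "https://example.com/bouquet.jpg",
                        "subtitle": "A dozen tulips, delivered tomorrow",
                        "buttons": [
                            {"type": "web_url",
                             "url": "https://example.com/checkout",
                             "title": "Buy Now"},
                            {"type": "postback",
                             "payload": "MORE_OPTIONS",
                             "title": "See More"},
                        ],
                    }
                ],
            },
        }
    },
}

response = requests.post(
    "https://graph.facebook.com/v2.6/me/messages",
    params={"access_token": PAGE_ACCESS_TOKEN},
    json=message,
)
print(response.status_code, response.json())
```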
In May, the creator of Siri gave the first public demo of his new creation, Viv, at TechCrunch Disrupt NY. Viv is an AI virtual assistant that aims to be “the intelligent interface for everything.” The demo started with Viv doing things that Siri and Google can do, like responding to a question about whether it will rain today. But then the questions got harder. Asked, “Was it raining in Seattle three Thursdays ago?” Viv answered, “Yes, it rained on Thursday, April 21.” 6
A clear strength of the Viv platform is the “stackability” of inquiries. Siri’s memory is short, but Viv can handle follow-up questions without stuttering or grasping for context that was given just seconds before.
Messenger bots and Viv mark a move away from silos and specialized apps. They aim to take a more central role—to be the integrator that pulls in third-party services seamlessly when needed, making apps unnecessary.
Both chatbots and Viv clearly have great implications for how we find information and buy products and services. And the next AI example ties even more directly to learning.
Ashok Goel teaches Knowledge Based Artificial Intelligence (KBAI) every semester at Georgia Tech. KBAI is a requirement for Tech’s online MS in computer science. Every time the class is offered, the 300 or so students in the class post roughly 10,000 messages in the online forums—too many for him and his eight teaching assistants to handle. So, this spring, Goel added a ninth TA—Jill Watson.
At the end of April, students—and the rest of the world—found out that Jill is a virtual TA, built on IBM’s Watson platform.
Goel and his colleagues got access to all the questions asked and answered on the discussion forums since the course was launched in November 2014 (about 40,000 postings in all). Then they started to feed Jill the questions and answers. “One of the secrets of online classes is that the number of questions increases if you have more students, but the number of different questions doesn’t really go up,” Goel said in a Georgia Tech News Center piece. “Students tend to ask the same questions over and over again.”
“Initially her [Jill’s] answers weren’t good enough because she would get stuck on keywords,” one of the graduate students who co-developed the virtual TA reported in the Georgia Tech News Center piece. 7 “For example, a student asked about organizing a meet-up to go over video lessons with others, and Jill gave an answer referencing a textbook that could supplement the video lessons—same keywords—but different context. So we learned from mistakes like this one, and gradually made Jill smarter.”
After some adjustments by the research team, Jill started answering questions with 97 percent certainty. When she did, the human TAs would upload her responses to the students. By the end of March, Jill didn’t need any assistance: she posted answers directly to students if she was 97 percent positive her answer was correct.
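Goel’s team hasn’t released Jill’s code, but the confidence-gating idea is simple to sketch: match a new question against previously answered ones and reply only when the match clears a high threshold, otherwise hand it to a human TA. The retrieval method and the 0.97 threshold below are illustrative stand-ins, not the actual Watson pipeline.

```python
# Illustrative sketch of confidence-gated question answering, in the spirit of
# Jill Watson: answer only when a new question closely matches one already
# answered; otherwise route it to a human TA. Not the actual Watson pipeline.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A tiny stand-in for the ~40,000 archived forum Q&As
archive = [
    ("When is Assignment 3 due?", "Assignment 3 is due Sunday at 11:59 pm ET."),
    ("Can we organize a meet-up to review the video lessons?",
     "Yes -- post your city in the meet-up thread and find classmates nearby."),
    ("Which textbook chapters cover case-based reasoning?",
     "Chapters 4 and 5 cover case-based reasoning."),
]

CONFIDENCE_THRESHOLD = 0.97   # illustrative; the real system's 97% is a model score

vectorizer = TfidfVectorizer().fit([q for q, _ in archive])
archive_vectors = vectorizer.transform([q for q, _ in archive])

def answer(question: str) -> str:
    scores = cosine_similarity(vectorizer.transform([question]), archive_vectors)[0]
    best = scores.argmax()
    if scores[best] >= CONFIDENCE_THRESHOLD:
        return archive[best][1]               # confident: post directly
    return "[routed to a human TA for review]"

print(answer("When is Assignment 3 due?"))           # high confidence, auto-answered
print(answer("Is Jill Watson a computer program?"))   # low confidence, human TA
```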
Many students were floored to find out Jill was AI, not a human TA, but some had had their suspicions; this is a course on AI, after all. One student had even posted to the discussion forum asking if Jill was AI, but that same student also suspected his e-mails with Goel might be AI-generated (they weren’t).
Goel plans to use a virtual TA again next semester—the goal is to have the virtual TA answer 40 percent of all questions by the end of the year. This time Goel plans to tell students up front that they’ll have a virtual TA—he just won’t tell them which one.
Taking the examples of Jill Watson, chatbots, and Viv, it’s interesting to think about the way organizations might interact with their subject matter experts in the future. What if we could pool the knowledge of all our SMEs and create a really deep, vibrant AI in particular fields or professions, an AI that is added to over time as we feed it new information and that learns more over time as we interact with it? That might be where we’re headed.
Virtual Reality (VR)
The third and final cutting-edge technology I’ll touch on is virtual reality. VR is an artificial environment that is experienced through sensory stimuli (such as sights and sounds) provided by a computer and in which one’s actions partially determine what happens in the environment. 8
In August 2015, Palmer Luckey made the cover of Time Magazine. Luckey founded Oculus VR in 2012, and Facebook bought Oculus in 2014. The Oculus Rift, a high-definition virtual reality head-mounted display, was released in March 2016. The Rift is made up of four components (headset, sensor, remote, and Xbox One controller) plus cables, and it costs $599. 9 You also need an appropriately powerful Windows PC. In short, the Rift is not cheap.
Oculus, though, has a cheaper VR solution as well. The Samsung Gear VR was released before the Rift, in November 2015; it works with Samsung Galaxy smartphones rather than a PC and comes in at a more affordable $99. 10
But Google has an even cheaper option: its Cardboard is only $15 and works with phones running Android 4.1 or higher or iOS 8.0 or higher that have screen sizes of 4 to 6 inches. 11 It may not be as pretty as the Oculus Rift or even the Gear, but Google’s goal is market penetration over product perfection or wow factor. The five-millionth Cardboard shipped out in January 2016, meaning 5 million Cardboards shipped before the Rift was even available. Google wants virtual reality for everyone, and it’s arguably close to achieving that goal. 12
To achieve its vision of virtual reality for all, Google has given away a lot of Cardboard. Last November, The New York Times and Google organized a massive giveaway of Google Cardboard, sending a million of the headsets to coincide with the release of a new NYT VR app. In May they did it again, this time for online-only subscribers: The New York Times sent out 300,000 more Google Cardboard headsets to coincide with the May 19 release of its eighth VR production, Seeking Pluto’s Frigid Heart, a visualization of the dwarf planet based on data from the New Horizons spacecraft. 13
Not only is Google Cardboard cheap; it’s easy to set up. I can personally attest that it takes only two or three minutes. My five-year-old daughter, who was standing by, really wanted to try it. When I handed the headset to her, she was immediately immersed, no questions asked about what to do or how to use it. She began looking around to see what was in her environment, which was a street scene in Tokyo. It was utterly natural for her. So natural that she took off walking, walked right into a chair, and wound up with a fat lip. Cheap and easy, but perhaps Cardboard’s not entirely safe yet?
It’s easy to imagine educational applications for virtual reality: we can create environments that mimic the real world and therefore allow relevant learning and exploration to take place without the potentially expensive or dire consequences of mistakes in the real world.
As haptic features (haptic refers to the sense of touch) get better (Oculus’s Touch controllers haven’t even been approved by the Federal Communications Commission for release yet), even more real-world applications will be available in VR. We can imagine surgeons or nurses getting literally hands-on experience with appropriate procedures in VR.
As an example of how VR is being used—or will soon be used—Harvard has announced it will stream its most popular class, Computer Science 50, in virtual reality this fall. 14 This is not a radical application of VR to education, but it’s readily achievable in this day and age, and it may address the persistent complaint that online learning lacks the feeling and advantages of being physically present among teachers, facilitators, and other learners. VR may be a way to simultaneously achieve the benefits of physical presence and the benefits of digitalization.
There’s also a perceived empathetic benefit of VR. The United Nations has produced a VR film about a Syrian refugee camp, and there are VR experiences that show you what solitary confinement in a US prison is like or what it’s like to be a cow led to the slaughterhouse. 15
One of the most important impacts of virtual reality is that it allows us to experience “virtual empathy” for unfamiliar situations or people. 16 So, in addition to VR’s direct sensory benefits, there’s an empathetic benefit, meaning VR has the potential for teaching bedside manner as well as medical techniques.
IoT, AI, VR, and the Holy Grails of Learning
I truly believe IoT, AI, and VR will change the way we live. Which means they’ll change the way we learn. There are already examples of how each is being used to change or enhance learning.
While IoT, AI, and VR may be relatively new technologies, they tie back to some of our oldest Holy Grails for learning. These three technologies, separately and certainly when used in conjunction, have tremendous potential for the following:
- Just-in-time learning, delivered at the point of need
- Personalized learning, tailor-made and constantly recalibrating based on a specific learner’s knowledge and interests
- Elevating learning to the higher-order levels of thinking on Bloom’s taxonomy
And what I find personally most interesting and most promising is not these three technologies separately but what they can achieve in concert.
These three technologies are not neatly discrete. IoT, AI, and VR are already bumping up against each other, and I think both the technologies and the way we design their use will continue to improve, so that how the three intersect becomes better coordinated and more intentionally orchestrated.
Think about the power of an immersive virtual reality that responds intelligently to how we respond to it and that makes use of sensors in objects in the real environment.
The potential is there for VR, AI, and IoT to work together to create really engaging experiences that could be used to great effect for learning. Together these three technologies could scaffold and support almost any learning, because they give us essentially all the affordances of the real world along with the flexibility not to be bound by it.
Be sure also to catch our Leading Learning Podcast episode on Artificial Intelligence, Virtual Reality, and the Internet of Things.
Notes
This post is based on a Content Pod (i.e., a short presentation) I delivered at our 2016 Learning • Technology • Design conference.
1. https://www.webopedia.com/TERM/I/internet_of_things.html
2. https://www.forbes.com/sites/jacobmorgan/2014/05/13/simple-explanation-internet-things-that-anyone-can-understand
3. https://hbr.org/2016/04/the-internet-of-things-needs-design-not-just-technology
4. https://www.cisco.com/c/dam/en_us/solutions/industries/docs/education/education_internet.pdf
5. https://techcrunch.com/2016/04/12/agents-on-messenger
6. https://techcrunch.com/2016/05/09/siri-creator-shows-off-first-public-demo-of-viv-the-intelligent-interface-for-everything
7. https://www.news.gatech.edu/2016/05/09/artificial-intelligence-course-creates-ai-teaching-assistant
8. https://www.merriam-webster.com/dictionary/virtual%20reality
9. https://www.oculus.com/en-us/rift
10. https://www.oculus.com/en-us/gear-vr
11. https://vr.google.com/cardboard
12. https://www.wired.com/2016/04/google-vr-clay-bavor
13. https://www.theverge.com/2016/4/28/11504932/new-york-times-vr-google-cardboard-seeking-plutos-frigid-heart
14. https://www.class-central.com/report/harvard-cs50-virtual-reality
15. https://techcrunch.com/2015/01/23/un-launches-powerful-oculus-virtual-reality-film-following-syrian-refugee-girl, https://www.theguardian.com/world/ng-interactive/2016/apr/27/6x9-a-virtual-experience-of-solitary-confinement, https://ww2.kqed.org/futureofyou/2016/04/22/stanfords-virtual-reality-lab-turned-me-into-a-cow-then-sent-me-to-the-slaughterhouse
16. https://www.rohitbhargava.com/2016/06/4-brilliant-exposes-the-worlds-ugliest-color-and-other-non-obvious-insights-issue-18.html
Photo Credit and Copyright: halfpoint / 123RF Stock Photo