
Unsure about AI guardrails, AI data risks, or which artificial intelligence project to pick first? You’re not alone.
Tori Miller Liu, president and CEO of the Association for Intelligent Information Management (AIIM), talks with Leading Learning Podcast co-host Celisa Steele about treating artificial intelligence initiatives as data initiatives, picking one to three practical use cases that augment people (without replacing them), writing a lightweight AI usage policy that’s easily understandable and updatable, and balancing mission and margin with a simple product profitability lens. They also touch on how AIIM’s portfolio supports information leaders who are turning messy, unstructured content into fuel for generative AI—and improved performance.
Read the Show Notes
Celisa Steele: [00:00:03] If you want to grow the reach, revenue, and impact of your learning business, you’re in the right place. I’m Celisa Steele.
Jeff Cobb: [00:00:10] I’m Jeff Cobb, and this is the Leading Learning Podcast.
Jeff Cobb: [00:00:17] If you’re grappling with how to start or how to continue responsible AI use in your learning business, this episode can help. If you’re unsure about guardrails, worried about data risk, or struggling to pick the right first project, you’re not alone.
Celisa Steele: [00:00:31] This episode, number 465, features my conversation with Tori Miller Liu, president and CEO of the Association for Intelligent Information Management (AIIM). She's passionate about collaboration—with people, with technology, and, above all, about how people can collaborate effectively with technology. AIIM helps organizations turn messy, unstructured information into fuel for performance, and Tori brings a pragmatic, personal lens to AI adoption.
Jeff Cobb: [00:01:00] Tori talks about why data access and data quality are the real gating factors, how to pick one to three concrete AI use cases that augment people rather than replace them, and why a lightweight AI usage policy you can revisit beats a 12-month strategy document.
Celisa Steele: [00:01:19] Tori and I also talk about mission versus margin trade-offs, product profitability, and how AIIM’s portfolio supports its learners.
Jeff Cobb: [00:01:28] In short, there’s lots in this episode to help you take action, so let’s roll the interview with Tori Miller Liu.
About the Association for Intelligent Information Management
Celisa Steele: [00:01:41] For listeners who may not be familiar with the Association for Intelligent Information Management, give us a short version of what AIIM does and the problem it exists to solve or the resource it exists to be.
Tori Miller Liu: [00:01:56] We were founded in 1944—recently celebrated our 80th anniversary—and we are a nonprofit organization that serves information leaders in over 67 countries around the world. Our vision is to create a world where every organization can benefit from information management. For those who are not as familiar with that terminology, information management is the management typically of unstructured and semi-structured data. That’s what people might call “dirty data” that’s typically outside of your tidy databases. E-mails, invoices, contracts, engineering schematics, text messages, social media posts, even videos and audio—that’s all unstructured data. These days, it is absolutely the fuel for generative AI. It’s gotten quite a bit more focus in the last three years than it had before that. But folks have been doing unstructured data management since before even the 1940s, since the establishment of our organization, and it started with records management and archivists, but it’s expanded well beyond that.
Celisa Steele: [00:03:01] Tell us a little bit about how you came to lead AIIM and the path that got you there.
Tori Miller Liu: [00:03:08] I started with AIIM in November 2022. And I should mention that AIIM looks at the practice of information management through the lens of technology. We’re very, very focused on artificial intelligence and automation. AIIM, when they were looking for a new CEO, specifically wanted someone with a technology background, and my background happens to be in IT. I was formerly a chief information officer for an association, so that was a natural fit. And I’ve been really pleased and happy to help AIIM grow. We help people understand this landscape of information management through certification, resources, research, and training. It’s all that kind of stuff that I loved doing as a CIO that I now get to help our members with.
AIIM’s Learning Portfolio
Celisa Steele: [00:03:52] You mentioned certification, training. Go ahead and give us a fuller picture of what AIIM offers in terms of education and learning resources.
Tori Miller Liu: [00:04:05] We have a Certified Information Professional certification, and that’s a certification that’s for experienced technology-focused practitioners in the unstructured data management space. For our events and educational programs, we have our hallmark program, which is our annual AI+IM Global Summit. That’s a recently redesigned and rebranded event that serves a 400-person group of individuals who want to explore the intersection between AI and information management, and we do that through interactive sessions, hands-on workshops, and also cohort-based learning. We have about 40 to 45 cohorts that are meeting throughout that annual event, and they also meet beforehand and afterwards. It’s a very interesting educational program.
Tori Miller Liu: [00:04:55] Beyond that, we have regional events. We have forums that are day-long, problem-solving events that feature roundtables and solution mapping. And then we have regional exchanges that are two-hour roundtable events that are focused on critical trends. We also have a wealth of virtual programs. We do live virtual training workshops around M365 (Microsoft 365) and also preparing for our certification and the exam. We also do webinars and insight series that explore critical trends in the space. And we have an online training library with several different certificates, including an AI certificate, and a growing catalog of 45-plus on-demand courses.
Celisa Steele: [00:05:37] It sounds like a pretty robust set of offerings. Tell me about the size of the team that is behind making all of this possible.
Tori Miller Liu: [00:05:46] Small. We're a staff of five. As part of my role as CEO, I did have to lead the organization through some transformation. We've reorganized and reduced the workforce over the last three years. Now we have a very efficient and stellar group of staff that are doing a ton of work in keeping a big ship moving with that skeleton crew.
An Intentional Shift to Peer-Based Learning
Celisa Steele: [00:06:12] That’s impressive to hear all that you were able to outline in terms of what AIIM offers and then to hear that that’s being done with five staff behind it. Impressive. The other thing that I heard when you were talking through those offerings is that emphasis on collaboration. It sounds like you have a lot of these cohorts; you have these round tables. Prior to you joining AIIM, has that been something in the DNA of these folks who work in information management, or has that been more of an intentional shift towards more peer-based learning in more recent years?
Tori Miller Liu: [00:06:47] I would say it’s really an intentional shift and probably a reaction. It’s ironic. While we talk a lot about AI and automation—you could say we’re pro-AI; I’m pro-automation, of course—we also understand that the way that you learn about emerging technologies and feel comfortable with them is often through human-to-human, peer-to-peer learning and understanding and being able to talk through case studies and talk to peers and crowdsource your problems. It’s been a very, very deliberate shift over the last three years to make sure that that’s at the heart of all our educational programming. We also do community conversations on an informal basis, but, on a formal basis, we found a lot of success with virtual live training workshops and all the roundtables and cohort-based learning.
The Mission/Margin Tension
Celisa Steele: [00:07:38] For many learning businesses that are part of an association, there can be a bit of a tension between mission and margin, especially when it comes to education offerings. Where does AIIM sit in terms of how you think about the education and learning that you provide, in terms of supporting mission versus generating revenue? Which one is more important?
Tori Miller Liu: [00:08:02] Our organization has been in a period of financial transformation, and we’re finally beyond that hurdle and in a position where we can stop worrying about keeping the lights on and focus on thriving. That said, we didn’t have the luxury of not thinking about product profitability, and a lot of small associations, even the large ones, have been in a similar situation. I think if you’re so focused on mission that you’re not focused on whether or not your business is healthy and functioning, you’re delusional. The business model of constantly eating into reserves is not sustainable, and part of our responsibility as association executives is not to just fulfill the mission but also to protect the legacy of this organization and its sustainability. So I’m a little torn. We’re a mission-first organization. We always ask, when we’re introducing new programs, “Is this advancing our mission, fulfilling our mission?” But if there’s no profitability, we are not doing it. We’re not moving forward.
Celisa Steele: [00:09:04] That sounds like it’s part of the initial discussion around standing up a product—is it supporting mission, and is there a business model that can support it? Talk a little bit about, once a product’s out there in the market, if it turns out to not, perhaps, be bringing in the revenue, is AIIM intentionally looking at things like sunsetting or periodically reviewing the portfolio to say maybe we need to recall some of these things?
Tori Miller Liu: [00:09:27] We do, yes. Our board and finance committee have been instrumental in helping me do that. We do have a product profitability matrix, where we’re looking at both the direct and indirect costs of the products that we produce, and it’s a pain to constantly update, but it is helpful in at least starting that dialogue. We do have one or two products in our portfolio that aren’t necessarily meeting our profit margin that we would want, but the board has at least intentionally had a discussion of, “No, this is core to our mission. We do need to see if we can improve the profitability, but we are unwilling to sunset this.” And, in other cases, we have stopped programs because they were not worth the squeeze. “The juice is not worth the squeeze,” as one of my old bosses used to say.
AI Aspiration Versus AI Reality
Celisa Steele: [00:10:15] You’ve mentioned AI and how that has changed a bit of the focus in your area of information management, how that’s become so much more important in the AI era. I know you’ve written and spoken a lot about AI and about how there’s AI aspiration, but then there’s this gap between that aspiration and reality. What do you see as the biggest barriers to being able to use AI in the way that you think organizations should be? Is it governance, skills, fear, or something else?
Tori Miller Liu: [00:10:53] I appreciate all the thought leadership that's occurred around shifting culture, changing mindsets, and change management around AI. I agree with it—people are fundamentally critical to successful technology implementations. But, as AIIM's president and CEO, I'm really focused on a paramount obstacle, which is data quality and data access. When we lack access to content for AI, that's huge. If you can't put data in a large language model to train it, then you're already shooting yourself in the foot. The other side of that is, when you're permitting access to content that should not be permitted, you are jeopardizing your association.
Tori Miller Liu: [00:11:40] Organizations have been rolling out tools like Microsoft Copilot—which is a great tool—and that rollout is shining a spotlight on security issues. That's one of the biggest hurdles you have to overcome. And then the third thing: is your data even good quality? Are you missing metadata? Is it an old version? The funny thing about all this is none of this is a new problem. We've been doing data analysis for over a decade now, and, as organizations were rolling out tools like Tableau and Power BI, that was shining a spotlight on content access and content quality. Really, what AI has done is point an even bigger spotlight on those preexisting issues, and so there does need to be quite a bit of attention paid to data.
Data Is Essential for AI
Celisa Steele: [00:12:31] There needs to be attention paid to data. How does an organization go about doing that? They want to be able to embrace what AI can offer. They come to the realization that their data isn’t in the best shape. What are some of the practical steps or ways that an organization can start addressing that?
Tori Miller Liu: [00:12:49] First of all, don't allow yourself to get paralyzed by the magnitude of the task. It's unreasonable for a leader to come in and say, "Oh, well, data is the problem, so fix all of the data." That is not attainable unless you have a ton of resources to back up that challenge. It's really about managing scope. With an AI project, you want to identify a use case first, look at the data that you need to be successful with that use case, and manage the scope. Then it becomes a much smaller dilemma to deal with. You can learn from that experience. For a particular AI project—maybe it's, say, a matchmaking AI system—I'm going to give it all of my membership data, all of the sessions for our conference, and all of the interest areas for our members. Now I have three sets of data that I want to look at, and I want to help match people who might have similar interests.
Tori Miller Liu: [00:13:51] That's a much more manageable project than saying, "Oh, I need to look at all my finance data, all my publications data, all my events data," and so on. Keep it small, keep it focused, and then focus on how good is good enough. You don't need the data to be perfect for AI, but it does need some fundamental things, especially around metadata. It needs to know, is this data the most current version? There might be other metadata fields that you need populated, depending on the circumstance and the use case. But there are consultants that can help you with that, if all of that still seems overwhelming. And there are certainly quite a number of data certifications that you could look at to help build those skill sets and understanding, not just from AIIM, but from groups like DAMA (the Data Administration Management Association). There are lots of groups that offer really good training that can help you feel a little more confident.
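For technically inclined readers, here's a minimal sketch of the kind of narrowly scoped matchmaking use case Tori describes—matching members to conference sessions by overlapping interests, with one simple "good enough" data check. The data sets, field names, and currency flag are hypothetical and purely illustrative; a real implementation would pull from your AMS and event systems and likely use a proper recommender.

```python
# Hypothetical sketch of a scoped matchmaking use case: three small data sets
# (member interests, session topics, a currency flag) instead of "all the data."
# All names and fields are made up for illustration.

members = [
    {"name": "Ana",   "interests": {"records management", "ai governance"}},
    {"name": "Bode",  "interests": {"metadata", "ai governance"}},
    {"name": "Chris", "interests": {"microsoft 365", "metadata"}},
]

sessions = [
    {"title": "Responsible AI Guardrails",   "topics": {"ai governance"},             "is_current": True},
    {"title": "Metadata That Feeds Your AI", "topics": {"metadata", "ai governance"}, "is_current": True},
    {"title": "Records Cleanup (2019 deck)", "topics": {"records management"},        "is_current": False},
]

def recommend(member, sessions, min_overlap=1):
    """Rank sessions for one member by overlap between interests and topics.

    Sessions failing a basic "good enough" data check (the hypothetical
    is_current flag) are skipped rather than cleaned up front.
    """
    scored = []
    for session in sessions:
        if not session["is_current"]:
            continue  # stale or unversioned content: exclude it from this use case
        overlap = len(member["interests"] & session["topics"])
        if overlap >= min_overlap:
            scored.append((overlap, session["title"]))
    # Highest overlap first; ties fall back to title order.
    return [title for _, title in sorted(scored, key=lambda pair: (-pair[0], pair[1]))]

for member in members:
    print(member["name"], "->", recommend(member, sessions))
```

The point of the sketch isn't the algorithm—it's the scope: three small, known data sets and one quality check, rather than trying to fix all of an organization's data before starting.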
Enthusiasm and Skepticism Around AI
Celisa Steele: [00:14:42] You also mentioned security and risk when you were talking about some of those barriers to AI implementation and getting to where organizations might aspire to be. Recognizing that there are those risks in potentially sharing data that shouldn’t be shared with the AI, all of those things, but then also seeing the potential for what AI can do, where do you put yourself on that spectrum between AI enthusiast and AI skeptic—you can name the endpoints on that spectrum with other terms if you prefer—but how do you describe how you like to think about AI and your opinion of it?
Tori Miller Liu: [00:15:18] I don’t know if this is an option on this spectrum, but I’m probably an enthusiastic skeptic. I’m very pro-AI. Fundamentally, it can help us make better decisions and help us process and make use of really large sets of data that were not humanly possible for us to make any use of before. When it comes to diagnostic care in healthcare, or when it comes to helping us make decisions about an association—of all these proposals, which proposal helps us most meet this critical learning objective—those are all cases where it’s a ton of data, and it’s great for us to have AI as a tool. I’m very skeptical when it’s AI replacing human creativity. In a responsible AI world—and I hope we’re investing in implementing responsible AI—I think humans are responsible for providing oversight and context.
Tori Miller Liu: [00:16:21] I also think that my biases are probably against AI replacing human creativity. I have a background in the arts, so I don't love when AI is replacing writing, film, audio, or images. Humans are still best at providing that level of creativity and ingenuity. But I'm also skeptical when it comes to implementations, and that's the role of information managers and data managers. I love that the Project Management Institute (PMI) talks about how every AI project is also a data project. Every AI project needs someone who is curating and stewarding the data, and they are best as skeptics. You want them to be skeptical about the quality of the data; otherwise, they're not really fulfilling their role on that project. Having that dose of skepticism around content quality, content accessibility, and system interoperability is healthy when you're trying to implement successful AI.
The Costs of Not Consciously Thinking About AI Adoption
Celisa Steele: [00:17:19] There’s a lot of potential for AI to help organizations with what they’re trying to achieve. You’ve been throwing out some examples, which are great to think about—the potential ways that an organization might make use of AI. If you think about an organization that might still be on the sidelines, maybe dabbling a little bit in AI but not in any sort of systematic or full-fledged way in trying to explore what it might do, what do you see as the cost that comes with sitting on the sidelines and not trying to think about AI adoption at this point?
Tori Miller Liu: [00:17:58] There's a part of me that feels it's naive for people to think it's even a choice of if. It's a choice of when and how. You're not delaying the decision or deciding we're just not going to use AI. You're just running the risk of inviting shadow AI into your organization. Your workforce is going to adopt the use of AI, whether you like it or not. Your members are going to adopt the use of AI. I even had a member a couple weeks ago who—and this wasn't a bad thing; I'm not judging them for doing this—took one of our research papers and created an AI application to help them better understand the research. It was a really interesting dilemma. But that's an example of shadow AI. And I don't want to take us off course talking about how I managed that, but it's going to happen whether or not you like it. Couple that with the fact that your vendors are already rolling out AI features, and, in some cases, they're not giving IT leaders a choice in whether or not those AI features are exposed to users.
Tori Miller Liu: [00:18:53] The big point is that saying, "Well, we're not ready for AI," is just delaying the inevitable, and then the problem is, when you're lacking direction and intentionality, there's a cost for that. The cost is going to come first and foremost through an impact on employee satisfaction and member satisfaction, which is eventually going to trickle down to market share. It's essentially the equivalent of—let's time-travel back to the mid-90s—your association saying, "Well, we're not going to create a Web site." Eventually, we all did, and you lost some market share while you were dragging your feet on creating a Web site. I think that's going to repeat itself. We have been here before with Web sites, with social media. I don't think, fundamentally, it's all that different from historical scenarios.
AIIM’s Future Challenges and Opportunities
Celisa Steele: [00:19:48] When you think about the next 12 to 24 months and you’re thinking about AIIM and AIIM’s future, what are the opportunities and the challenges—which may or may not be tied to AI and all that we’ve been talking about—what are those challenges and opportunities that you’re thinking about? And then also adding that lens of education and learning, how might that impact what you’re offering to the market in terms of your learning portfolio?
Tori Miller Liu: [00:20:17] Our biggest challenge right now is our research is showing that the practitioners themselves are changing, so we’re moving away from a lot of the traditional titles of records manager and document manager. The titles are all over the place. I have members who are engineers and developers now. And the organizations are also all over the place, so a lot of our members are no longer in legal or inside an insular, little records management team. They’re in the IT, marketing, or HR teams. That’s all to say the state of the practice is really changing. They’re being pulled into AI projects. We’re looking at what’s happening to the people that we serve and recognizing that they’re transforming from information managers to information leaders in the AI era.
Tori Miller Liu: [00:21:10] It's a much different educational challenge to go from training someone on "Here are best practices in information governance and records management" to "Here is how we prepare you to sit at the table of an AI project and be that data logistician, that data steward." A lot of our certification and education improvements and enhancements are wrapped around that. It started with us rolling out an AI certificate in the past 12 months, but it's going to continue with how we update our certification exam and what new courses we introduce. It's even impacted the education strategy that my staff has developed for 2026—to make sure that the education we do is focused on how you prepare that person for that seat at the table of an AI project.
Celisa Steele: [00:21:59] That's great. It seems like you have a lot of clarity, despite the fact that there's so much shifting and changing in the roles. This idea of imagining that person at the table, helping make decisions about AI projects, lets you step back and understand what education, what learning, what resources they need—and then you're making those available. It sounds really smart and on target.
Tori Miller Liu: [00:22:21] I want that sound bite: “It sounds like you have a lot of clarity.” I can play it on repeat in the morning as morning affirmations. Because when you’re doing it, it feels like you’re swimming in chaos. So I need Celisa saying, “Sounds like you have a lot of clarity,” and I’ll just play that 30 times to make myself feel better.
Celisa Steele: [00:22:39] Great. Glad I could give you that.
Tori Miller Liu: [00:22:40] Thank you.
How to Become More AI-Ready
Celisa Steele: [00:22:42] Wherever organizations might be in their AI journey at this point, if there's something they could do in the short term—next week, in the next seven days or so—what would you recommend an organization do to help them get a little bit more AI-ready in that near term?
Tori Miller Liu: [00:23:04] I’ll keep it simple for folks because it’s so easy to get overwhelmed with all this nonsense. Challenge yourselves, if you haven’t already, to come up with one to three good use cases, focusing on complementing human potential and not replacing it. That would be step one. You could do that with your leadership team, with your board, whomever. The other big thing is—if you don’t already have an AI usage policy—do some research into that. Again, you can keep this really simple. I am firmly against comprehensive strategies and comprehensive policies around AI because it’s going to change, and it’s going to adapt. This technology is moving so quickly. If you’re investing 12 to 18 months in developing a strategy or developing a usage policy, it’s wasted effort. So keep it really simple. Challenge yourselves to put down the very basics of what you need to feel like you’re responsibly implementing AI and have that conversation with folks.
Tori Miller Liu: [00:24:07] I did, over the summer, work on a facilitation guide with Alex Mouw of Amazon and Thad Lurie of the American Geophysical Union (AGU), and it's literally a conversation guide—this is how you have conversations with stakeholders, your board, and your staff about AI. You can download that. I'm happy to share it with folks. But there are also other resources out there. It's really about having the conversations. And then, beyond that, once you have use cases identified, once you have the guardrails around how you want to approach AI, you can start thinking about AI maturity and readiness. And that comes with looking at the available data, looking at its quality, looking at its accessibility, and focusing on your staff. Do they have the data literacy and information literacy to advance your use cases, to make use of AI or automation to accomplish the goals? I always worry that still feels really complicated, but it's not. You can talk to folks like me. It's not as bad as you think it is.
Celisa Steele: [00:25:11] That conversation guide is an excellent resource, and we can make sure to make that available to listeners. I might be misremembering the number of points, but you also shared AIIM’s AI usage policy at some point, and it’s five points or something. It’s a very short thing.
Tori Miller Liu: [00:25:27] Yes, it’s so simple. We had to update it again. I just updated it with my staff four months ago because, even though it was only a year old, it was already out of date. This stuff is changing so rapidly.
Celisa Steele: [00:25:40] Just out of curiosity, do you remember what you tweaked, or what changed in the policy?
Tori Miller Liu: [00:25:44] Yes, before, it was binary around…. When we first came up with it, it was really early days of ChatGPT—this was early 2023—and we originally said, "We're going to disclose any time we use AI." That is not a sustainable task or requirement for my staff because now AI bleeds into the work. We don't copy and paste AI output. It's really hard to distinguish how much of something was AI editing versus human authorship. It's impossible. So we removed that. Unless we can confirm something is 85-percent AI—then we'll probably disclose it.
Tori’s Approach to Her Own Learning and Development
Celisa Steele: [00:26:29] This is the Leading Learning Podcast, and so, when we have a guest on, it’s always fun, at least for me, to get to ask them how they tend to approach their own lifelong learning. Tori, do you have practices, habits, or resources that you like to use to help you continue to learn and grow?
Tori Miller Liu: [00:26:50] I like treating association management almost like a scientific or academic discipline. For me, that means that—even if we don't have scientific journals for association management—I'm seeking out new research and thought leadership almost as a daily practice. That comes from ASAE Collaborate, Associations Now, CEO Update, Association Charrette, LinkedIn, anywhere I can get it. And I find it interesting. For me, I'm too chaotic to say, "Oh, I'm going to spend an hour every day doing this." It's almost like every time I pick my phone up, I'm like, "Oh, let's see what people are talking about now." But I do it on a daily basis. And then, beyond that, I have challenged myself for the last three years to set one big educational goal, and sometimes that ends up in me getting a certificate in something. This year, I was focused on, specifically, executive leadership in uncertain times, which directed where I was spending my own professional development dollars. But, in prior years, I've gotten a certificate in objectives and key results. Last year, I got a certificate in facilitation. Those are the big goals that I like to set annually.
Celisa Steele: [00:28:06] We’ve covered a fair amount of ground in our time. If you were to point listeners to one specific thing—or two or three things—that we’ve touched on, what is it that you would hope they would walk away with from listening to this conversation?
Tori Miller Liu: [00:28:21] Three things. One, and probably the most important, is, for associations to continue to thrive, we have to focus on what makes us unique in the AI era, and that means human connections, human creativity, and human-generated content. First and foremost, focus on that, and then that can feed into what’s the appropriate use case for AI and all that good stuff. The second one is keep it simple with AI. Like I said, if you’re doing the usual governance stance where you’re spending 12 months developing strategies and plans, that’s not going to work. Keep it agile, but keep it responsible. And then remember that responsible AI starts with data and information literacy. You need to understand data to be able to be successful with AI but also to be responsible with AI.
Recap and Wrap-Up
Jeff Cobb: [00:29:15] We’re not done quite yet—stick around for a recap of this conversation with Tori Miller Liu.
Celisa Steele: [00:29:20] Check out the AIIM Web site, the AI conversation guide that Tori mentioned, and her profile on LinkedIn.
Jeff Cobb: [00:29:33] If you got value from this episode, please share it with a colleague or leave a rating and a review. Those help others find the show and support the work we’re doing with the Leading Learning Podcast.
Celisa Steele: [00:29:44] Tori did a very nice job of summarizing three key takeaways, so I won't reiterate those. Instead I'll say that I appreciated her reminder that AI efforts are really data efforts and that access, governance, and a "good enough" data-quality mindset for specific AI use cases are what will enable real results.
Jeff Cobb: [00:30:06] I’ll echo Tori’s skepticism around AI replacing human creativity. Like Tori, you and I have backgrounds in the arts, and I’d prefer for songs and poems to remain human. So I appreciated her suggestion of picking one to three practical use cases where AI augments and doesn’t replace people, doing something with them, and learning from them.
Celisa Steele: [00:30:31] Thanks again for listening—and see you next time on the Leading Learning Podcast.
To make sure you catch all future episodes, please subscribe on Apple Podcasts, Spotify, or wherever you listen to podcasts. Subscribing also gives us some data on the impact of the podcast.
We’d be grateful if you’d rate us on Apple Podcasts or wherever you listen. Reviews and ratings help the podcast show up when people search for content on leading a learning business.
Finally, follow us and spread the word about Leading Learning. You can find us on LinkedIn.
