It’s critical that learning business professionals pay careful attention to the research they create and the research they rely on for making decisions. This means asking questions, knowing the research methods used, and understanding the limitations of the data. But, because most of us aren’t trained in research, this can be a challenge.
That’s why Elizabeth Engel, chief strategist at Spark Consulting, and Polly Karpowicz, a seasoned association consultant, co-authored “Caveat Emptor: Becoming a Responsible Consumer of Research.” Their goal in writing the white paper was to help associations and others be better informed about the research they use and remain trusted, unbiased sources of information for the audiences they serve.
In this episode of the Leading Learning Podcast, co-host Jeff Cobb talks with Elizabeth and Polly about primary versus secondary research, qualitative and quantitative research, mixed research methods, the ethics of using people in research, and bias. They also discuss data validity, reliability, and statistical significance, and they share valuable tips for the responsible consumption and production of research.
To tune in, listen below. To make sure you catch all future episodes, be sure to subscribe via RSS, Apple Podcasts, Spotify, Stitcher Radio, iHeartRadio, PodBean, or any podcatcher service you may use (e.g., Overcast). And, if you like the podcast, be sure to give it a tweet.
Listen to the Show
Access the Transcript
Download a PDF transcript of this episode’s audio.
Read the Show Notes
[00:00] – Intro
[01:54] – How do you engage with organizations, and what does working with you entail?
Polly and Elizabeth both worked for the American Political Science Association in the past.
With over 25 years of experience, Polly is skilled in understanding how all the pieces fit together because her roles spanned many different departments. Polly helps associations that need insight related to research.
Elizabeth spent the first 15 years of her career as an association executive working for professional and trade associations of different sizes. During that time, she wore a wide variety of hats, often at the same time. But the thread through all her positions was always membership.
Elizabeth launched Spark Consulting to provide soup-to-nuts membership work for associations, from big-picture strategy to member-value proposition work, structure work, recruitment and retention campaigns, and membership audits.
“Caveat Emptor: Becoming a Responsible Consumer of Research”
[04:26] – You co-authored the white paper “Caveat Emptor: Becoming a Responsible Consumer of Research.” What compelled you to write this, and why does it matter?
At Spark Consulting, Elizabeth has long been writing white papers, so this one is part of a series. “Caveat Emptor” is number 14, and all the white papers have been collaborative efforts.
We interviewed Elizabeth along with Shelly Alcorn in episode 44 and discussed their white paper “The Association Role in the New Education Paradigm.”
Elizabeth decided to provide a white paper on research because executives, broadly speaking, are tasked with using existing research and generating original research every day for their organizations. The issue is that most lack formal training in research methods, and so there’s a lot of not-great research out there.
To quote the white paper, “Good research does not guarantee good decisions,” but it certainly helps, “and bad research, barring getting lucky and guessing right, almost inevitably leads to bad decisions.”
When we think about associations, we are viewed by our audiences as trusted, unbiased sources of information for those audiences that we serve. And providing quality research products is critical to remaining worthy of that trust.
Elizabeth Engel
Elizabeth and Polly took on this white paper because they want to help executives and professionals be more informed about what constitutes sound research. This can then help those individuals better evaluate the research and make better decisions in service to their members, customers, stakeholders, and constituents. They can also make better decisions in service to their organization’s mission and remain worthy of those constituents’ trust.
An Information Literacy Problem
[07:15] – What are your theories on why we have an information literacy problem? Is this an intractable problem, or are we making progress?
Elizabeth agrees that there is an information literacy problem worldwide. She’s written a white paper on content curation with Hilary Marsh of Content Company, “Cut Through the Clutter: Content Curation, Associations’ Secret Weapon Against Information Overload.”
In that white paper, they cite research undertaken by a team at Stanford University on information literacy. The Stanford team looked at everyone from middle school students to college students and trained researchers, and almost no one did well at assessing the validity of information sources. The only group that did well in information literacy was professional fact-checkers.
There’s declining trust in institutions and in gatekeepers of information.
We’re also presented with a huge volume of information, which makes it difficult to take the time to assess quality.
We also have a fragmented media landscape with some intentionally bad actors blurring the lines between journalism and propaganda.
Social media has been a pernicious influence.
There’s a lot going on that people have to deal with to ensure they get good information from good sources.
[10:21] – Polly shares that they want to help readers understand that applying information literacy skills is an essential part of becoming more responsible consumers and producers of research.
As organizations producing research, learning businesses need to ensure they conduct research appropriately and talk about it appropriately so that they can help those who consume their research understand what they’re looking at.
The ability to discern reliable research is essential for all workers today. Being able to talk about that research is also essential.
It really touches on both sides of understanding what you’re reading, but also being able to tell a story about the research that you’ve found somewhere else or that your organization is creating. It really is something we all need to know. It’s no longer the case we can say, “It’s not my job. It’s someone else’s.” It’s just now an essential part of being a professional today.
Polly Karpowicz
Sponsor: WBT Systems
[12:26] – We’re grateful to WBT Systems for sponsoring the Leading Learning Podcast.
TopClass LMS provides the tools for you to become the preferred provider in your market, delivering value to learners at every stage of their working life. WBT Systems’ award-winning learning system enables delivery of impactful continuing education, professional development, and certification programs.
The TopClass LMS team supports learning businesses in using integrated learning technology to gain greater understanding of learners’ needs and behaviors, to enhance engagement, to aid recruitment and retention, and to create and grow non-dues revenue streams.
WBT Systems will work with you to truly understand your preferences, needs, and challenges to ensure that your experience with TopClass LMS is as easy and problem-free as possible. Visit leadinglearning.com/topclass to learn how to generate value and growth for your learning business and to request a demo.
Key Methods of Research
[13:26] – What are some of the key research methods that responsible consumers need to be aware of? How are they different, particularly in terms of the types of results we can reasonably expect to get from them?
The white paper goes into this in more detail, but two of the major category distinctions are the following:
- Quantitative and qualitative research: Surveys yield quantitative data, while interviews and focus groups can provide qualitative data.
- Primary or secondary research: Primary research involves an original, formal study that you create yourself. Secondary research makes use of studies and information that someone else conducted or collected.
All methods have pros and cons.
Pros of quantitative research include the fact that you can have a high degree of confidence in the results if you’ve constructed and administered your survey well. Surveys are relatively cost-effective and quick, so they can get you definite answers in a resource-friendly way.
Cons of quantitative research (e.g., surveys) include their inflexibility, the risk of a number of types of bias, and a lack of explanation about why things are the way they are or why people chose the answers they chose.
Pros of qualitative research include the ability to get at people’s “why.”
Cons of qualitative research include the lack of numerical certainty and its resource-intensiveness.
Conversations are subject to two particular types of bias:
- Hawthorne effect (people change their behavior when they know they’re being observed)
- Social desirability bias (people answer in ways they think will be viewed favorably)
[16:56] – When you consider whether to pursue primary or secondary research, there are also pros and cons.
The main pro of primary research is that you design the study. You get to ask the specific, exact questions you want answered.
Cons of primary research include its resource-intensive nature. There are also some ethical considerations involved in doing research that involves people (detailed in the white paper).
Pros of secondary research include being able to leverage existing effort and resources. Secondary research is often a good fit for background research and familiarizing yourself, in broad strokes, with what’s going on in a particular field of study or area.
Cons of secondary research include the fact that it may not answer your specific questions and the fact that it may not be trustworthy, which brings us back to the problem of information literacy. You have to vet the quality of sources if you’re going to use secondary research.
The secret sauce is to mix your methods. All methods have pros and cons, but they can often offset one another.
Mixed Data for Meta-Analyses
[18:47] – Are you seeing greater use of meta-analyses, where people take many different secondary research studies and roll them up?
Elizabeth says yes, and one of the drivers is the increasing number of publicly available data sets. There are all kinds of governmental and non-governmental data you can access.
We also have more computing power to help us interpret large data sets.
Even if you haven’t been trained as a data analyst, there are user-friendly tools that allow you to bring in data from different sources and combine those sources into one whole. But then the trust question multiplies: instead of vetting one source, you have to vet every source you combine.
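To make that concrete, here’s a minimal sketch of rolling two data sets into one in Python with pandas. Everything in it is hypothetical: the data, the column names, and the derived metric are invented for illustration, and each real source would still need to be vetted on its own.

```python
import pandas as pd

# Hypothetical association data set: members by state (invented numbers).
membership = pd.DataFrame({
    "state": ["CA", "NY", "TX"],
    "members": [1200, 950, 800],
})

# Hypothetical public data set: workforce size by state (invented numbers).
workforce = pd.DataFrame({
    "state": ["CA", "NY", "TX"],
    "workforce": [18_000_000, 9_500_000, 13_000_000],
})

# Combine the two sources on their shared key...
combined = membership.merge(workforce, on="state", how="inner")

# ...and derive a metric neither source contains on its own.
combined["members_per_million_workers"] = (
    combined["members"] / (combined["workforce"] / 1_000_000)
)
print(combined)
```

The ease of that merge is exactly Elizabeth’s point: the tooling is simple, but you now have two sources to vet instead of one.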
Potential Roadblocks to Reliable Research
[20:26] – What tends to go wrong when conducting research and when interpreting research? For example, it’s easy to access publicly available data sets, but using them reliably is another matter entirely.
It’s easy to become excited about finding a data point you’re looking for and to want to jump ahead in applying it. Or you’re under pressure to rush ahead and start planning or designing research right away.
Polly recently left a position where she had the privilege of being the person in between the researchers and clients on research projects.
Resist the urge to jump ahead. It’s better to first understand the research you’re looking at. Look at the methods if you’re looking at something that’s already produced. If you’re doing original (primary) research, then carefully lay the groundwork before you start doing any research.
Common problem areas include bias, and you can also run into trouble with validity and significance.
Validity
Validity helps us understand whether we can stand behind our research or not. Validity signals whether the data that we’re looking at is real and accurate.
Problems with validity can arise essentially anywhere in the research, including the design of research and the data collection. If there is a validity issue (whether we are asking the wrong people or certain questions are flawed), it can cause issues with the results, making the data potentially untrustworthy, outdated, unclear, or prematurely collected.
The most egregious kind of validity issues are related to fabricated or falsified data.
Reliability
[24:09] – When we choose a research method or approach that can provide consistent results, it’s replicable and therefore reliable. If we take that same instrument and ask the same kinds of people we originally asked, we should get the same general results. That’s reliability.
When you put validity and reliability together, you can understand the degree to which you can safely use and rely on those results. It’s a range, not 100-percent cut-and-dried.
If some things are a little off on the validity or reliability scale, you may want to do further exploration or analysis of the research to clear up what might be unclear. If it’s inconclusive, you can’t draw valid takeaways from the research, and you should start again to determine what went wrong.
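As a rough illustration of the test-retest idea above, the sketch below computes a Pearson correlation between two rounds of the same ratings. The ratings are invented, and a real reliability analysis would use a proper sample and a coefficient suited to the instrument; this only shows the “same instrument, same people, same general results” intuition.

```python
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation, a common gauge of test-retest reliability."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(
        sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)
    )

# Hypothetical: five people rate the same program twice, two weeks apart.
first_round = [4, 5, 3, 4, 2]
second_round = [4, 5, 3, 5, 2]

print(f"test-retest r = {pearson_r(first_round, second_round):.2f}")  # ~0.94
```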
Statistical significance boils down to the level of confidence we have that the findings can be generalized. This is where researchers use measures like p-values and margins of error.
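For readers who want to see those measures in action, here is a small sketch, assuming a simple random sample and made-up survey numbers, that computes a 95-percent margin of error for one proportion and a two-sided p-value for the difference between two.

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion (simple random sample)."""
    return z * sqrt(p * (1 - p) / n)

def two_proportion_p_value(p1, n1, p2, n2):
    """Two-sided p-value for the difference between two sample proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 2 * (1 - normal_cdf(abs(z)))

# Hypothetical survey: 62% of 400 respondents favor a program this year,
# versus 55% of 380 respondents last year.
print(f"margin of error: +/-{margin_of_error(0.62, 400):.1%}")         # ~4.8%
print(f"p-value: {two_proportion_p_value(0.62, 400, 0.55, 380):.3f}")  # ~0.047
```

A p-value just under 0.05 is conventionally read as statistically significant, but as the rest of this discussion suggests, significance alone says nothing about validity or bias.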
Bias
[26:10] – Bias is an error. It’s a gap between the truth and the data that we have. Bias threatens the validity of the data and hinders good decisions based on that data.
The good news is we know where bias can show up. The bad news is we can’t get rid of bias completely. But we can look for it and try to mitigate it.
You can see bias in who responds to research and how, and it can be categorized into three buckets:
- Response bias or recall bias means that a respondent can’t respond to a question for one of a variety of reasons.
- Instrument bias or measurement bias is an error in how the research is designed and executed.
- Reporting or analysis bias is an error in how the results are analyzed and reported.
We should all be mindful of reporting bias. We are honor-bound as researchers to make sure we’re open and honest about how we report results.
Tips for Reliable, Valid Data
Look for and talk about any concerns you might have about the data.
As you’re designing research, follow standard practices with sampling (see the sketch after this list), and have someone from that sample group review your questionnaire.
Be aware of when it might be best to have a third party do the research.
Use validated instruments and perhaps mixed methods.
Plan well in advance how you’re going to handle analysis and reporting and what you will report on.
Make sure that you take the time to develop a strong instrument, whether it’s a focus group protocol or an interview script.
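On the sampling tip above, here is a minimal sketch of drawing a simple random sample, using the textbook sample-size formula for a proportion (95-percent confidence, ±5-percent margin of error) with a finite-population correction. The member list is hypothetical, and many real studies will call for stratification or other designs.

```python
import math
import random

def sample_size(population, moe=0.05, z=1.96, p=0.5):
    """Sample size for estimating a proportion, with finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / (moe ** 2)  # infinite-population size (~384)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

random.seed(1)  # fixed seed so the draw is reproducible

frame = [f"member_{i}" for i in range(1, 5001)]  # hypothetical sampling frame
n = sample_size(len(frame))                      # 357 for N = 5,000
sample = random.sample(frame, k=n)               # simple random sample
print(n, sample[:3])
```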
And for consumers of research…make sure you check your predetermined perspective at the door when you’re looking at results. You may want to find something that supports your argument. You may want to make the decision that you want to make. Try to notice where you may be biased in your own interpretation of what you’re looking at.
Polly Karpowicz
Case Studies
[33:11] – Would you share a little about one of the case studies in the white paper that might help to bring to life some of what we’re focusing on with responsible research?
Elizabeth briefly highlights a number of case studies. The Association of American Medical Colleges is a great example of engaging volunteers in designing your study.
The American Association of Colleges of Pharmacy is an example of responding to an unusual circumstance.
The Casualty Actuarial Society is an example of taking on a really big problem (the under-representation of historically marginalized groups in the actuary profession) and involving multiple organizations.
IEEE is a great example of being creative in how you collect your data, with that creativity leading to insights that otherwise would have been unavailable to the research team.
Research Consumption Habits
[35:32] – What habits of your own can you share to help listeners consume research more responsibly?
Polly suggests listeners read the white paper, which includes a list of free online and in-person education sources to learn more. Also, notice research methods, and note particular concerns related to research in your organization. In short, keep learning, keep asking questions, be persistent, and find help where you can.
[37:33] – Elizabeth recommends that, when looking at existing research, you read the methods section. Good-quality research will tell you what the researchers did and how they reached their conclusions.
When conducting your own original research, Elizabeth says be aware of all the types of bias that can creep in and take affirmative steps to mitigate those biases. Most of all, be skeptical, and ask questions.
Ask questions like “Who’s funding this research?” “Who’s sponsoring this research?” “What are their perspectives, if they have any?” “Who was or wasn’t included?”
Elizabeth Engel
[39:15] – Wrap-up
Elizabeth Engel is chief strategist at Spark Consulting, and Polly Karpowicz is an association consultant. Check out Spark Consulting’s series of free white papers.
To make sure you don’t miss new episodes, we encourage you to subscribe via RSS, Apple Podcasts, Spotify, Stitcher Radio, iHeartRadio, PodBean, or any podcatcher service you may use (e.g., Overcast). Subscribing also gives us some data on the impact of this particular part of our content strategy.
We’d also be grateful if you would take a minute to rate us on Apple Podcasts at leadinglearning.com/apple or wherever you listen. We personally appreciate reviews and ratings, and they help us show up when people search for content on leading a learning business.
Finally, consider following us and sharing the good word about Leading Learning. You can find us on Twitter, Facebook, and LinkedIn.