Welcome to episode eight of Tails of Testing! Amanda Rutter, our Marketing Manager, chats with Jessica Anderson, Director of Test Development for Licensure and Certification at Data Recognition Corporation (DRC). DRC is a full-service information management company and Jessica coordinates and facilitates content development activities for licensure and certification program exams.
Curious about when to use an alternate item type? This video provides an overview for anyone ready to explore beyond a standard multiple-choice question.
[Onscreen: A split-screen of Amanda (left) and Jessica (right) chatting virtually.]
Hi, everyone. Welcome to another episode of Tails of Testing. I’m Amanda Rutter, Marketing Manager at Internet Testing Systems. Today, we have Jessica Anderson with us from Data Recognition Corporation, also known as DRC. Thanks for joining us, Jessica.
Thank you. I’m happy to be here.
Today, we’re talking about alternate item types. Can you explain to me what that is?
Well, really, alternate item types are anything outside of what we typically think of as multiple-choice questions: multiple choice, multiple response, anything outside of that box. Oftentimes these might be essay questions, case studies, or portfolios that we're asking candidates to compile. But a lot of times these items are administered alongside multiple-choice questions, and those item types are typically what we're talking about when we speak about alternate item types. They can be quantitative fill-in-the-blank questions or matching questions. We might ask a candidate to click on an area of an image; in medical exams, for example, we might ask them to click on which bone is broken in an X-ray scan. A lot of times they ask the candidate to do something slightly outside the typical "select the most appropriate response" question.
OK. That makes sense. When would you suggest that someone use those?
Really, you can use them anytime they fit the content. If it makes sense to measure that construct in that particular way, you can use an alternate item type. So, if it makes logical sense that it fits the content, then I'd say go ahead and try it out and see whether it measures in an effective way. Obviously, there are logistics you need to thoroughly consider: the item banking system, the delivery program you're using, and the overall exam format. There's a lot to consider beyond that, but if the item type fits the content or the construct, then oftentimes you can go ahead and explore that space.
OK, I heard you say measure a few times. So, do they measure in a fair and reliable way?
They can, when they're used properly and in an effective way. With multiple-choice questions, we have the advantage of working with known aspects that make them more likely to produce valid and reliable results. With these item types, there's a bit more unknown, and they're a little less restricted in nature, so we need to take more aspects of those items into consideration. For example, how are we going to define an incorrect response? Is there an area of this image that could be partially correct? When we're building the items, we need to take that into consideration as well, to ensure they are just as valid and reliable as a multiple-choice question.
That’s great advice. What do you need to consider when exploring alternate item types?
I encourage groups to really look at content development across the entire exam life cycle. That means, during the job task analysis, asking: what constructs are we trying to measure, and does the item type fit the construct we're looking into? And then, at every single phase of the exam development cycle, asking: are we building questions that will give us the most valid and reliable results and measure what we're looking to measure?
OK, I think that’s great advice! Actually, speaking of advice, we like to close with some recommendations or advice for our audience. Do you have any advice for them today?
I would say, ask everyone involved in your content development, and that's everyone: your psychometricians, the folks working in your item banking system, your delivery people, your subject matter experts, anyone who touches or uses that content in some way. Start asking the question: does this make sense for what we are testing?
That’s great advice. Well, thank you so much for joining us today, Jessica. I’m sure we’ll talk to you soon.
Thank you very much.
About Our Guests
Amanda Rutter is the Marketing Manager at ITS and has eight years of experience in the assessment industry. Amanda reinforces the concept that marketing is the sales quarterback, focusing on strategic approaches to drive market growth and bottom-line profitability. Outside of work, you can find her snuggling her dogs or soaking up the sun at a local brewery.
Jessica M. Anderson, Director, Test Development – Licensure and Certification, has been helping organizations and individuals in the licensure and certification industry to better understand the people side of their businesses, industries, and workplaces for more than 14 years. In her current role at Data Recognition Corporation, Ms. Anderson is responsible for coordinating and facilitating content development activities for licensure and certification program exams across all phases of the test development lifecycle. She blends her experience with high-stakes credentialing programs and knowledge of industry standards and research to develop customized and applicable solutions for credentialing programs. Ms. Anderson completed PhD-level coursework in Industrial and Organizational Psychology at the City University of New York Graduate Center and holds a bachelor’s degree in Psychology from the University of Wisconsin–River Falls.