I’m excited for 2026 and what AI can do for our industry. What I’m seeing at ITS and with our testing partners is AI enabling us to experiment and innovate in ways that used to be impractical. It’s making it easier to try new things, run pilots, learn from them, and iterate fast.
One of these innovations is AI Avatar Items. What started as a weekend prototype is now moving to production and will soon be available to our customers for use in their programs.
AI Avatar Items are test questions that use a digital avatar as the primary form of interaction. The digital avatar provides a more realistic experience for the test taker while enabling the test question to better elicit evidence through interaction, such as how a candidate gathers information, explains decisions, and responds to follow-up.
Here’s an early preview:

AI is expanding what is possible
I’m seeing two things happening at the same time in our industry, both driven by AI.
First, the effort and cost of developing multiple-choice questions are dropping. AI can help with early drafts, item variants, quality checks, and post-delivery analytics. At the same time, AI gives test takers new ways to cheat, which makes it both an opportunity and a threat to test integrity.
Second, AI is lowering the cost of developing performance-based items that have traditionally been reserved for programs with larger budgets and longer development timelines. Simulations, case-based tasks, and structured interactions are valued because they capture evidence that goes beyond knowledge-based recall.
AI avatars are one example of why this is becoming practical now. In the past, a realistic interaction often meant scripted branching, custom development, or live role players. With the AI Avatar Item, you can deliver a guided conversation inside a defined scenario without hard-coding every path.
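To make that concrete, here’s a minimal sketch of what a scenario definition might look like. Everything in it, from the class name to the example content, is a hypothetical illustration of the idea rather than our production schema: the avatar improvises within the boundaries instead of following a hard-coded script.

```python
# Hypothetical sketch: a scenario definition that bounds an AI avatar's
# behavior without scripting every conversational path.
from dataclasses import dataclass, field

@dataclass
class AvatarScenario:
    persona: str                 # who the avatar plays
    situation: str               # the scenario the candidate steps into
    boundaries: list[str]        # what the avatar may and may not do or reveal
    evidence_targets: list[str] = field(default_factory=list)  # behaviors to elicit

item = AvatarScenario(
    persona="A client reporting a suspected data breach",
    situation="The candidate must gather facts and recommend next steps.",
    boundaries=[
        "Stay in character and within the scenario.",
        "Reveal the incident timeline only if the candidate asks for it.",
    ],
    evidence_targets=[
        "Asks clarifying questions before recommending action",
        "Explains the reasoning behind each recommendation",
    ],
)
```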
I’m working with our team on several more interactive item types like this one and will share them over the next few months.
An overview of AI Avatar Item types
In our exploration of next-generation assessment, we’ve built two related but distinct conversational item types:
- AI Conversational Task (ACT)
- AI Conversational Exhibit (ACE)
Both use an AI avatar to engage candidates in realistic, scenario-based interactions. The avatar provides a guided interaction, while the item design determines what is captured as evidence and scored.
An easy way to think about the two:
- ACT: the conversation with the AI avatar is the item response
- ACE: the conversation with the AI avatar provides the context
This offers a choice: do you score the conversation itself, or do you use it to provide context for a set of related, scored test questions?
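If you think of it in data-structure terms, the distinction is simply where the scored evidence attaches. A rough sketch, with hypothetical names:

```python
# Hypothetical sketch of the structural difference between the two types.
from dataclasses import dataclass

@dataclass
class ConversationalTask:        # ACT: the conversation IS the response
    scenario_id: str
    rubric_id: str               # applied to the full transcript after the session

@dataclass
class ConversationalExhibit:     # ACE: the conversation provides the context
    scenario_id: str
    linked_item_ids: list[str]   # the scored questions that draw on the exhibit
```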
Conversation as item response
An AI Conversational Task (ACT) is an item type where the candidate interacts directly with an AI avatar to complete a task. The conversation is the response: what the candidate asks and how they respond is what gets scored.
This type of item is not scored turn-by-turn in real time. Instead, the full conversation is reviewed after the session, like how programs handle constructed responses or essays.
This format lets programs measure applied decision-making, reasoning, and judgment in a way that is hard to capture with selected responses alone, while keeping scoring auditable.
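Here’s a rough sketch of that post-session review, assuming transcripts are stored and rated against a rubric after delivery. The rater is whatever a program plugs in, whether trained human raters, an AI-assisted first pass with human review, or both; none of the names below come from an actual ITS API.

```python
# Hypothetical sketch: post-session scoring of an ACT transcript.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Turn:
    speaker: str   # "candidate" or "avatar"
    text: str

@dataclass
class Criterion:
    name: str
    max_points: int

# A rater sees the whole transcript; nothing is scored turn-by-turn live.
Rater = Callable[[list[Turn], Criterion], int]

def score_transcript(transcript: list[Turn],
                     rubric: list[Criterion],
                     rate: Rater) -> dict[str, int]:
    # Reviewed after the session, like a constructed response or essay.
    return {c.name: min(rate(transcript, c), c.max_points) for c in rubric}
```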
Conversation as item context
An AI Conversational Exhibit (ACE) uses the AI avatar as an interactive exhibit, like a reading passage or case stimulus. The candidate converses with the avatar to gather information, ask clarifying questions, or explore a scenario, but the conversation itself is not what gets scored.
With this item type, the conversation is context used to answer related items of any type, including multiple-choice or short-answer questions. It’s the linked questions that are scored, not the conversation.
This format gives programs that want more realism a practical starting point without changing their scoring model: the assessment stays anchored in familiar item formats while gaining interactivity.
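Sketched the same way, an ACE bundles the unscored exhibit with the linked questions that carry the score; again, the structures are illustrative, not our data model.

```python
# Hypothetical sketch: in an ACE, only the linked questions contribute
# to the score; the avatar conversation is stimulus, not response.
from dataclasses import dataclass

@dataclass
class MultipleChoice:
    stem: str
    options: list[str]
    key: int                     # index of the correct option

@dataclass
class ExhibitItem:
    scenario_id: str             # the avatar scenario the candidate explores
    questions: list[MultipleChoice]

def score(item: ExhibitItem, responses: list[int]) -> int:
    return sum(1 for q, r in zip(item.questions, responses) if r == q.key)
```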
Where the industry is headed
Going forward, selected-response items will remain a core part of exams, but programs will need faster refresh cycles to keep test integrity high. At the same time, programs will add a broader range of performance-based items as the economics change and tooling improves.
A key focus at ITS in 2026 is helping programs adopt new measurement approaches in a practical way. Our team can help your program work with new item types, learn from real use, and iterate quickly based on data and feedback.
If your organization is considering new ways to use AI for assessment, reach out and I’ll connect you with the right people at ITS.
Frequently Asked Questions about AI Conversational Item Types
What are AI avatar item types in assessment?
AI avatar item types are assessment formats where a candidate interacts with an AI avatar guided by task instructions. Instead of responding only to a fixed prompt, the candidate can ask questions, explain their reasoning, or work through a scenario. The interaction becomes part of the item experience and is what is scored or captured as evidence.
At ITS, we support two AI avatar item types: AI Conversational Tasks (ACT) and AI Conversational Exhibits (ACE). They are designed for different use cases and differ in their scoring models.
What’s the difference between an AI Conversational Task (ACT) and an AI Conversational Exhibit (ACE)?
The simplest distinction is what gets scored.
In an AI Conversational Task (ACT), the conversation with the AI avatar is the response. The full dialogue is treated like a constructed‑response or essay and scored after the exam, with human review remaining part of the process.
In an AI Conversational Exhibit (ACE), the AI avatar provides context, similar to a reading passage or scenario. Candidates are assessed through linked traditional item types, such as multiple‑choice or short‑answer questions.
Why is the assessment industry interested in AI avatar item types?
As professional practice becomes more complex, many organizations are looking for assessment formats that better capture reasoning, judgment, and applied decision‑making skills.
AI avatar item types are one way to capture that kind of evidence in a structured scenario, without requiring live role players. They offer new ways to bridge the gap between how competence is demonstrated in real life and how it’s measured on an exam.

About the Author
Ron Lancaster is the Chief Technology Officer at ITS, with nearly 30 years of experience in product and technology leadership, including 20 years in the assessment industry. With a passion for artificial intelligence and engineering excellence, he is focused on advancing ITS’s AI, cloud, and new product strategies.