As we look ahead to 2026, many testing and certification programs are asking the same questions: What does growth really look like now—and what needs to change? How do we prepare for what’s next without losing trust, quality, or credibility along the way?
To help explore those questions, we’re sharing a series of short, expert-led videos from the ITS team. Each video offers a practical, real-world perspective on program growth—from protecting exam security and improving candidate experience to rethinking assessment design, embracing AI thoughtfully, and building systems that can adapt as programs evolve.
Whether you’re actively planning for the year ahead or just beginning to think differently about growth, these insights are designed to help you move forward with clarity and confidence.
Growing Testing Programs the Right Way
As programs begin to plan for 2026, many are realizing that growth takes more than new tools—it requires a shift in how change happens. In this conversation, Brodie Wise, EVP of Business Development & Marketing at ITS, shares his perspective on what it takes to grow programs the right way as candidate expectations evolve, technology advances, and uncertainty becomes the norm.
Read the full transcript
Are you ready to grow your program? Let’s do it the right way.
Hi everyone. This is Brodie Wise, Executive Vice President of Business Development and Marketing here at ITS.
As I look ahead into 2026, I keep coming back to one central idea: we need to grow our programs, and our candidates are asking more from us than ever before.
Over the past year—especially in 2025—I’ve heard a lot of uncertainty from our partners.
What are we going to do next?
Where should we change?
How are we going to get our employees ready, and how do we prepare for that growth?
Those are all fair questions, and they often come from working with an old mindset—old ways of planning, old ways of doing things, old ways of building tests, and old ways of expecting candidates to engage with our programs.
At the same time, everything around us is changing. AI is evolving, new technologies are emerging, and we have new people coming in and out of the industry influencing how things are done.
That creates opportunity—and it creates partnerships where we can help you move forward.
So as we think about growing programs the right way, I want to focus on three priorities that can help.
First, security matters. We need to protect test security and validity. Trust is the foundation of everything we do, and growth only works if we protect the integrity of what we deliver.
Second, we need to make our programs more relatable to candidates. They need to know we understand what they need. Candidates expect more clarity, more flexibility, more options, and better experiences. If we want our programs to grow, we need to meet candidates where they are today.
Third, we need to take advantage of technology to create new options for delivery—because it’s already happening. That means new delivery models, new item types, and new ways of using technology to support growth, including AI—not just maintaining the status quo.
At ITS, we see ourselves as more than just a technology solution. We’re a true partner focused on helping you navigate this change. We want to work alongside you on strategy, planning, and execution.
That’s why you’ll see us reaching out more—wanting to have conversations, brainstorm together, explore new ideas, look at technology side by side, and test what works. Our goal is to build solutions that support sustainable program growth.
We have a history of doing that.
So as we move into this next year, let’s plan together, let’s experiment together, and let’s grow your programs together. I know we can do it, and we’d love to collaborate, problem-solve together, and support you the way a good partner should.
Thank you. I look forward to hearing from you.
Responsible AI for Scalable Testing Programs
As AI becomes more embedded in assessment workflows, many testing and certification programs are asking new questions about how to scale responsibly. In this video, Ron Lancaster, CTO at ITS, shares a thoughtful perspective on what responsible AI adoption looks like in high-stakes testing environments—exploring how programs can leverage AI to support content generation and evaluation while maintaining accountability, human oversight, and strong security controls. Ron offers practical insight into how programs can embrace AI thoughtfully without compromising trust, validity, or long-term credibility.
Read the full transcript
Hi, my name is Ron Lancaster. I’m the CTO at ITS. For the high-stakes industry, the question isn’t whether AI will affect operations—it’s how fast the capability is compounding, and whether our programs remain valid, defensible, and controllable.
One of the clearest external signals comes from METR (Model Evaluation and Threat Research).
METR measures how long a model can reliably complete a real-world task end to end, and how that duration—what they call the time horizon—is increasing over time. METR reports that this time horizon, measured at a greater than 50 percent success rate, has been doubling roughly every seven months.
This matters because a multi-hour horizon pushes models from assistance into operational agents—systems that can plan, execute, and self-check across an entire workday.
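The compounding Ron describes is simple exponential growth, and it's worth seeing the arithmetic. The sketch below projects a time horizon forward under a steady seven-month doubling; the one-hour starting horizon is an illustrative assumption, not a figure from METR.

```python
def projected_horizon(start_minutes: float, months: float,
                      doubling_months: float = 7.0) -> float:
    """Project a task time horizon forward under steady exponential doubling.

    With a 7-month doubling period, the horizon after `months` months is
    the starting horizon multiplied by 2^(months / 7).
    """
    return start_minutes * 2 ** (months / doubling_months)


# Assuming an illustrative 1-hour horizon today:
# 7 months out  -> 120 minutes
# 14 months out -> 240 minutes (a half-day of reliable, unattended work)
```

Under these assumptions, a model that can handle one-hour tasks today would be handling multi-hour tasks within roughly a year, which is exactly the shift from assistant to operational agent described above.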
That brings us to what we saw last year.
In 2025, AI moved inside the high-stakes workflow not as the decision-maker, but as an accelerator: drafting content, stress-testing it, summarizing evidence, and routing work. Adoption moved faster for AI-assisted evaluation than for AI-generated content, largely because evaluation is easier to govern and easier to defend.
What changes in 2026 is the rise of end-to-end agentic pipelines—models that can retrieve policies, run checks, generate artifacts, and produce a complete audit trail. If we design these systems correctly, there are three implications for high-stakes testing.
First, content generation will scale—but provenance becomes mandatory. AI can draft items and variants at industrial speed, but in a defensible program, every item still needs an accountable owner, documented alignment, and an evidence trail.
Second, evaluation becomes continuous and instrumented. Evaluation is AI-assisted, but human-decided. Models can pre-review for ambiguity, bias risk, blueprint misalignment, and key validity threats. They can also simulate synthetic candidates to surface issues before items ever reach production. Final decisions affecting candidates remain human-owned and reviewable.
Third, agent control becomes a security problem—no longer just a convenience feature. As agents gain tool access, the critical design surface becomes permissions and traceability. Any step that changes eligibility, scoring, accommodations, or reporting must require human approval and be replayable after the fact.
At ITS, we’re using METR’s data to inform how often we revisit assumptions, how deeply we evaluate systems, and where explicit human controls are required—even as AI capability continues to advance.
Thank you for listening.
Scaling Item Development for Program Growth
As programs look to grow in 2026, item development is often one of the first areas to feel the strain. In this video, Kyle Miller, Manager of Item Workshop at ITS, shares practical guidance on how programs can scale item development without adding unnecessary complexity. He explores how modern item banking workflows and thoughtful use of AI can help teams work more efficiently while maintaining item quality and long-term sustainability.
Read the full transcript
Hi there! I’m Kyle with ITS.
Here at ITS, we’re passionate about assessment across industries and across assessment vendors. In 2026, we’d love to help you grow your assessment programs, even if it’s not with us.
To that end, I’d like to share three tips for doing more content development than you did in 2025—and with fewer resources.
Tip #1: Use AI to check the quality of your test questions right after they’re written
Flagging issues early can save time and money. While AI is probably not good enough to be the sole reviewer of your test questions, it can certainly flag issues for human review, such as inconsistent option length and disallowed words or phrases. By now, your item banking vendor likely has this feature as part of their core offering.
Tip #2: Reach out to your item banking vendor about your current processes
Chances are your processes have been tweaked since you last sat down with your vendor. Likewise, your item bank may have new features that weren’t available when you settled into your current workflow. Reach out and make sure your item bank is properly configured to support your program.
Tip #3: Use AI to bulk review examinee comments
If your program has a large number of examinees, or if you test throughout the year, examinee comments can quickly become overwhelming. The good news is that text summarization is one of the things AI does best. Make sure comment summarization is part of your workflow in 2026 so you can confidently review examinee feedback in just a few minutes per item.
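The summarization step itself would typically call a language model, so the sketch below shows only the surrounding workflow: grouping incoming comments by item and ranking the items that need review first, so the AI summary (and the human reviewer) start where the volume is. The data shapes are assumptions for illustration.

```python
from collections import defaultdict


def triage(comments: list[tuple[str, str]], top_n: int = 3) -> list[tuple[str, int]]:
    """Rank items by examinee-comment volume.

    `comments` is a list of (item_id, comment_text) pairs; the return
    value is the `top_n` item ids with their comment counts, highest
    first, ready to hand to a summarization step.
    """
    counts: dict[str, int] = defaultdict(int)
    for item_id, _text in comments:
        counts[item_id] += 1
    return sorted(counts.items(), key=lambda kv: -kv[1])[:top_n]
```

Feeding each high-volume item's comments into a summarizer, rather than summarizing the whole pile at once, is what makes the "few minutes per item" review cadence realistic.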
That’s going to be it for me. Thank you so much for watching.
If you’re going to be at ATP 2026, please come find me—I’d love to chat. And if you find me outside of the ITS booth, I’m not obligated to try to sell you something.
See you there.
Continuous Certification for Modern Testing Programs
Traditional recertification models often rely on high-stakes checkpoints spaced years apart. In this video, Ryan Howard, Director of Learning & Assessments at ITS, explores how continuous certification offers a different path—one grounded in spaced repetition, frequent engagement, and real-time feedback. Rather than treating competence as something to prove periodically, Ryan discusses how longitudinal assessment models can help professionals strengthen knowledge over time while giving programs a more meaningful, ongoing view of performance.
Read the full transcript
Hi everybody.
I’m Ryan Howard, Director of Learning and Assessments at ITS.
When we talk about continuous certification, it’s easy to focus on the technology. But at the heart of all of this is the candidate—their time, their growth, and their ability to stay confident in a world that isn’t slowing down.
More and more organizations are moving away from traditional point-in-time recertification, and the reason is pretty simple: relevance and competence.
Today’s professionals don’t want to just prove competence every five or ten years. They want to stay confident every day. Continuous certification makes that possible. It’s an evidence-based approach grounded in spaced repetition and frequent, low-stakes engagement. Instead of preparing for a single high-stress event, candidates build knowledge steadily through small, ongoing assessments that reinforce learning over time and integrate new content as their field evolves.
At ITS, we’ve built our continuous certification and longitudinal assessment model around what years of research already tell us:
- Spaced repetition strengthens retention.
- Frequent engagement deepens understanding.
- Real-time feedback leads to meaningful behavioral changes far more effectively than a one-and-done exam ever could.
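The spaced-repetition principle in the first bullet can be sketched with a simplified Leitner-style scheduler: intervals grow after correct answers and reset after misses. This is a generic textbook illustration, not ITS's actual longitudinal assessment model, and the growth factor and cap are arbitrary.

```python
def next_interval_days(current_interval: int, correct: bool,
                       base: int = 1, growth: int = 2, cap: int = 90) -> int:
    """Return the number of days until an item should be shown again.

    A miss resets the item to the base interval so it's reviewed soon;
    a correct answer spaces it further out, up to a maximum interval.
    """
    if not correct:
        return base
    return min(current_interval * growth, cap)
```

The effect is that each candidate's review schedule concentrates effort on exactly the knowledge that's slipping, which is what makes frequent, low-stakes engagement efficient rather than burdensome.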
For many of our partners, this shift has transformed recertification from a source of anxiety into an empowering, natural part of professional growth.
By merging learning with assessment, we’re turning every question into a teachable moment—complete with rationales, linked resources, and adaptive follow-up items that meet candidates exactly where they are. We’ve designed the experience to be genuinely flexible. Candidates can engage on any device, at the moment that works for them, and at the pace that fits their program’s goals.
This isn’t just convenient—it’s powerful for certification bodies. Continuous certification provides a richer, more accurate picture of competence. It reduces the operational strain of high-stakes exam cycles and supports a genuine, ongoing relationship with candidates across their entire careers.
But here’s the part we don’t talk about enough: Continuous certification helps people feel more capable in the work they do every single day. Think about your physician staying sharp on emerging treatments. Your social worker keeping up with best practices. The construction professional who built your neighborhood daycare center staying current on safety standards. Don’t we all want the professionals we rely on to feel confident, current, and fully equipped to make the best decisions for the people and communities they serve? That’s the heart of continuous certification. It’s not just about maintaining a credential. It’s about continuous relevance, continuous confidence, and continuous excellence.
We’re helping programs evolve from one-and-done testing events to a model that truly supports professionals throughout their careers. And we’re ready to support you on that journey. If your program is exploring ways to modernize, elevate the candidate experience, and strengthen long-term relevance, we’d love to partner with you. Reach out to us. We’re excited to explore how we can move into the future of certification together.
Secure Growth for Modern Testing Programs
As certification programs expand, the stakes increase. In this video, Chris Glacken, Director of Innovative Technologies at ITS, shares how programs can scale without compromising exam integrity or the candidate experience. By designing security and experience together through layered controls, integrated systems, and real-world threat modeling, programs can grow with confidence while maintaining trust.
Read the full transcript
Hi, my name is Chris Glacken. I’m Director of Innovative Technologies at Internet Testing Systems. Today, I’m going to talk about scaling without losing control of security and the candidate experience. This is a common question that programs have, and it’s an important one to ask.
When programs grow, everything speeds up.
More candidates, more sessions, more pressure. With that growth comes real risk — risk to exam integrity, risk to your program’s reputation, and honestly, risk to the candidate experience.
Security problems don’t usually show up on day one. If there’s value in a credential, there’s going to be a market for bad actors. Those challenges tend to surface when volume increases, when edge cases start to multiply, and when systems that work well at a smaller scale begin to strain.
The mistake you want to avoid is thinking that security and experience are tradeoffs. They’re not. Security doesn’t have to mean friction. In fact, the most successful programs design integrity and experience together from day one.
A well-rounded approach goes beyond a single control. It includes a strong, secure browser, thoughtful test design, and intelligent use of AI.
When those components are part of the same integrated solution, they’re simply more effective. There’s more you can do with an integrated secure browser and test delivery system working together than either could do on its own. When candidates trust the process and administrators trust the data, you gain confidence — and confidence allows programs to grow.
When we work with our programs, our focus isn’t just on stopping bad behavior. It’s on helping programs scale securely without slowing things down or frustrating candidates who are doing the right thing.
Our partners benefit from our ongoing investment in security, from our constant secure browser enhancements to the responsible incorporation of AI, and from our multiple decades of experience in test design and delivery.
That combination allows us to apply layered security, real-world threat modeling, and practical insight — so growth never comes at the cost of trust.
If secure growth is a priority for your program, let’s talk. Let’s explore what’s possible when integrity and experience work together.
Thank you.
Using APIs to Scale Modern Testing Programs
As programs expand, complexity can become a barrier to growth. In this video, Trish Thomas, EVP of Technology at ITS, shares how APIs help testing programs scale without adding operational friction. By automating processes, integrating into partner ecosystems, and unlocking real-time insight, APIs become more than background technology—they become a strategic advantage for sustainable growth.
Read the full transcript
Hi, I’m Trish Thomas. I’m the EVP of Technology at ITS, and I’m here to talk to you about how you can use APIs to grow your program.
When many people hear the term API, they often think of something technical—something only developers and engineers would get excited about. But I’m here to tell you that APIs are one of the most effective tools you can use to scale your program, reduce friction, and unlock real-time insights without sacrificing quality or control.
I’m going to share three practical ways you can think of APIs not as a technical strategy, but as a growth strategy.
As testing programs grow, they tend to run into the same challenges. Manual processes increase, data gets delayed or siloed, and partners want integrations that don't exist yet. Over time, growth starts to feel harder than it should. This is where APIs can make a big difference.
First, APIs help you scale without adding overhead. One of the biggest advantages APIs offer testing programs is the ability to grow without increasing operational complexity.
Here at ITS, we have a wide range of APIs that cover every aspect of the testing process. We have APIs that allow you to import candidates, eligibilities, orders, item content, and test configurations. We also have APIs that allow you to retrieve results, reports, and remote proctoring data. We support real-time notifications of every major event that happens in the ITS system.
With access to all these APIs and services, you can automate your entire program process. This means eliminating manual uploads, manual handoffs, and the manual management of data in spreadsheets. Instead of spending time managing data movement, your team can focus on higher-value work, such as improving the candidate experience, expanding partnerships, or strengthening program oversight.
For growing programs, this matters because manual steps don’t just slow you down, they limit you. They limit how big you can get. APIs can remove that ceiling.
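The automation pattern Trish describes—pushing records through an API instead of uploading spreadsheets—looks roughly like the sketch below. The endpoint path, payload shape, and status handling are hypothetical placeholders for illustration, not ITS's actual API; the transport is injected so the workflow can be exercised without a network.

```python
import json
from typing import Callable


def sync_candidates(candidates: list[dict],
                    post: Callable[[str, str], int]) -> int:
    """Push candidate records to a (hypothetical) import endpoint.

    `post` is any transport taking (path, json_body) and returning an
    HTTP status code. In production it would wrap an authenticated HTTP
    client; in tests it can be a stub.
    """
    imported = 0
    for cand in candidates:
        status = post("/api/candidates", json.dumps(cand))
        if status == 201:  # created successfully
            imported += 1
    return imported
```

Once a job like this runs on a schedule, the manual upload step disappears entirely, and the same pattern repeats for orders, eligibilities, and results retrieval.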
The second way APIs help you scale your program is by expanding how and where your test shows up.
Growth doesn’t happen in isolation. Testing programs grow when they integrate into the ecosystems their candidates and partners already use, such as learning platforms, training platforms, and credentialing platforms. ITS APIs allow your program to plug directly into these environments.
This can mean candidates taking your test within their learning management system, scores flowing directly into a credentialing platform, and partners adopting your test without changing their existing workflows. From the candidate’s perspective, testing feels seamless. From your perspective, your program becomes easier to adopt, easier to scale, and easier to partner with.
This is a powerful shift. Instead of asking your customers to adapt to your process, your testing process adapts to them. That’s how programs expand faster and more sustainably.
The third and final way APIs can scale your program is by turning testing data into real-time insight.
You already know data is valuable. What APIs change is how quickly and effectively you can use that data. With real-time or near real-time data flowing through APIs, you can monitor your program performance as it’s happening, not weeks later.
This allows you to determine what’s working and what’s not, now. It allows you to identify trends you need to be aware of, now. It allows you to make informed decisions, now.
Programs that grow successfully aren’t guessing. They’re responding to real signals—and APIs make that possible.
In conclusion, at ITS, we see APIs not as background technology, but as enablers of smart, sustainable growth.
When you think of APIs as tools that reduce friction, strengthen partnerships, and unlock insight, they stop being just a technical option and become a strategic advantage.
The best APIs are invisible, but behind the scenes, they’re doing the work that helps you scale with confidence.
Growing with Intention in 2026
Program growth in 2026 is not just about scale. It is about trust, integration, insight, and intentional design.
Across this series, our team has shared perspectives on security, assessment design, continuous certification, responsible AI, scalable infrastructure, and ecosystem integration. Each reflects the same underlying belief: growth should strengthen your program, not strain it.
If these conversations resonate with the challenges your program is navigating, we would welcome the opportunity to continue the discussion.