The pandemic saw an explosion of remotely proctored exams across the industry, the impacts of which are only now starting to be realized. The rush to remote proctoring served as a wake-up call for testing sponsors and vendors alike to the severity of the security threats posed in a remotely administered testing environment. Building on lessons learned over the past few years, ITS has continually reinvested in our secure browser technology and launched our remote proctoring solution, ProctorNow™, to address many of the security issues we see in our industry. One example of this continual improvement: our secure browser recently added advanced detection and blocking capabilities for HDMI splitters, video capture cards, and other hardware devices designed to intercept an assessment machine’s video feed.
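To make the idea concrete (and to be clear, this is an illustrative sketch only, not a description of how our secure browser actually works), the snippet below shows one simplistic way a Windows delivery client could enumerate present Plug-and-Play devices and flag names commonly associated with capture hardware. Passive HDMI splitters typically don’t enumerate as devices at all, so real detection relies on deeper display- and driver-level checks; the names and patterns here are hypothetical.

```typescript
// Illustrative only: list present PnP devices via PowerShell's Get-PnpDevice
// and flag friendly names that look like video-capture hardware.
import { execSync } from "node:child_process";

interface PnpDevice {
  Class: string;
  FriendlyName: string | null;
}

// Hypothetical watch list; a real product would use a maintained device database.
const SUSPECT_NAME_PATTERNS = [/capture/i, /hdmi/i, /cam link/i, /video grabber/i];

function listPresentDevices(): PnpDevice[] {
  const json = execSync(
    'powershell -NoProfile -Command "Get-PnpDevice -PresentOnly | Select-Object Class,FriendlyName | ConvertTo-Json"',
    { encoding: "utf8" }
  );
  const parsed = JSON.parse(json);
  return Array.isArray(parsed) ? parsed : [parsed];
}

function flagSuspectDevices(devices: PnpDevice[]): PnpDevice[] {
  return devices.filter(
    (d) => d.FriendlyName !== null && SUSPECT_NAME_PATTERNS.some((p) => p.test(d.FriendlyName!))
  );
}

const suspects = flagSuspectDevices(listPresentDevices());
if (suspects.length > 0) {
  console.warn("Possible capture hardware present:", suspects.map((d) => d.FriendlyName));
}
```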
In 2023, Chris Glacken, our Director of Innovative Technologies, shared an article with the I.C.E. community about the potential emerging threats of generative AI and software manipulation. At ITS, we also felt it was necessary to warn on our blog about potential avenues for AI-enabled cheating.
In recent weeks, we’ve been having more conversations suggesting that this threat is no longer theoretical; it’s here. AI-powered cheating tools are being developed, marketed, and used. Testing programs are asking questions like, “Does your secure browser block this?” and sharing real-world examples that show we’ve moved from speculation to active use.
While the industry has long been aware of threats such as brain dumps and proxy testing, the next generation of cheating is faster, smarter, and more difficult to detect. It’s time to talk about what that means for exam security and how programs can respond.
What’s Happening
We’re seeing a wave of new, productized AI-powered applications designed to actively defeat secure browsers. These tools use AI to read test content and provide answers in real time, directly on the test-taker’s screen. Many are being heavily marketed in the higher education space, but credentialing and certification programs are starting to feel the impact.
Unlike traditional cheating tactics, these tools:
- Are easy to use, requiring minimal technical knowledge
- Are widely accessible via trusted online payment platforms
- Don’t require prior knowledge of the test content, and aren’t deterred by large item pools
- Are built to evade detection by secure browsers and bypass traditional result-based data forensics methods
Worryingly, some apps now advertise “modes” tailored for different subjects, making it easier than ever to target specific exams. Unfortunately, this problem is likely to worsen. AI capabilities continue to advance, and the barrier to developing these tools is low.
Why It Matters
Recent history has shown that our industry has a large addressable market for test answer cheating services, as evidenced by the wide variety of companies selling these services prior to the rise of generative AI.
Historically, the sale of question banks and other threats to test integrity were limited in potential impact and could be mitigated by layers of security:
- Traditional automated question answering software was blocked by secure browsers and relied on data leaks of question content.
- Brain dumps could be thwarted by large item pools, content refreshes, and randomized delivery methods such as linear-on-the-fly (LOFT) and adaptive testing.
- Proxy testers were expensive, required a willingness to engage with bad actors, and left forensic data that could be used to identify and act on test fraud.
However, secure browser bypasses that leverage AI change the equation. AI cheating tools don’t require prior knowledge of the items or a willingness to engage with bad actors; they simply need access to the content displayed on the screen. That means secure content can be compromised whenever it’s visible on screen. These tools may even send secure test content to large language models whose content-sharing policies allow it to be used for training, further exposing test sponsor data to AI providers. Many common security mechanisms (e.g., timed items, large item pools) may no longer be sufficient when AI is answering on behalf of the candidate. That’s why testing programs should proactively adapt their strategies now, before these tools become even more sophisticated.
What Programs Should Consider
While there’s no single fix, programs can take meaningful steps to reduce risk without sacrificing exam integrity or accessibility.
- Add layered security, matched to exam stakes.
- Low-stakes learning checks may benefit from kiosk-mode secure browsers, which offer a lighter security touch than a traditional secure browser while still preventing AI tools from reading the screen. Pair this with time-based item display approaches, which aim to limit off-screen use of AI tools (illustrated in the sketch after this list).
- For short-form or micro-credential exams, consider adding a secure browser and a low-touch remote proctoring system, such as ProctorNow.
- High-stakes certification or licensure exams should still require test center delivery.
- Rethink item design.
- Incorporate item types appropriate to your testing audience that AI tools struggle to answer, such as labs, simulations, or multi-tab case studies.
- Use exhibits or references that aren’t visible at the same time as the question, so an AI tool never captures the complete item in a single screen read (see the sketch after this list).
- Coordinate closely with your program’s ecosystem of vendors, partners, and stakeholders on new and emerging threats to test integrity so that issues are addressed swiftly.
- Understand that AI test security threats will evolve.
What works today may not work six months from now. That’s why test design, delivery, and security need to be continuously evaluated together, not in silos.
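To show how the time-based display and split exhibit/question ideas above fit together, here is a minimal, hypothetical sketch; the class, method names, and timing are ours for illustration and are not a feature of any particular delivery platform.

```typescript
// Illustrative only: two presentation-layer mitigations in one small state machine.
// (1) The item can be rendered only inside a bounded time window.
// (2) The exhibit and the question stem are never visible at the same time,
//     so a screen-reading tool never captures the complete item in one pass.
type Pane = "question" | "exhibit";

class ItemPresenter {
  private visiblePane: Pane = "question";
  private readonly deadlineMs: number;

  constructor(maxSeconds: number, nowMs: number = Date.now()) {
    this.deadlineMs = nowMs + maxSeconds * 1000;
  }

  // Switch which pane is renderable; only one is ever shown.
  show(pane: Pane): void {
    this.visiblePane = pane;
  }

  get currentPane(): Pane {
    return this.visiblePane;
  }

  // Called by the delivery engine before each render; once the window closes,
  // the item is blanked and the response is finalized.
  isExpired(nowMs: number = Date.now()): boolean {
    return nowMs >= this.deadlineMs;
  }
}

// Usage: a 90-second viewing window for a single item.
const presenter = new ItemPresenter(90);
presenter.show("exhibit");   // candidate reviews the exhibit first...
presenter.show("question");  // ...then flips back to the stem; never both at once
if (presenter.isExpired()) {
  console.log("Viewing window closed; lock responses for this item.");
}
```

The point is not the specific timings; it is that item display logic, not just the secure browser, can be part of the defense.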
Final Thoughts
There’s an existential question to be asked about why we continue to test candidates on questions that AI could answer automatically. As long as human knowledge, skills, and abilities continue to be valued by society, the ability to measure them in a scientifically sound way matters. Security is essential to protecting the validity, and therefore the value, of each sponsoring organization’s credentials. AI-powered cheating tools are here, and they’re only getting better. Now is the time to rethink, reevaluate, and protect the value of your credentials.
About the Author
Pat Hughes, VP of IT Assessments, has 11 years of experience in the assessment industry. He holds a bachelor’s degree in history from Loyola University and is a PMP-certified professional. Pat collaborates with IT certification exam sponsors, providing expert advice on a range of topics such as item banking, accessibility, security, and performance-based testing. He has served on the Board of Directors for the IT Certification Council since 2024. He was honored with the Association of Test Publishers (ATP) 2024 Rising Star Leader Award, a testament to his significant impact in advancing the assessment industry.
