The future belongs to accountable systems, not AI for AI's sake
There's a version of the AI conversation in hiring that treats automation as the goal. Faster screening, fewer manual steps, more candidates processed per hour. Efficiency becomes the metric, and the assumption is that more AI means better outcomes.
The evidence suggests something more complicated. A 2024 Gallup survey found that 93% of Fortune 500 CHROs were integrating AI into their business practices. But only about a third of employees knew their employer was using AI tools in hiring or management. The tools are being adopted at pace. The transparency, testing, and governance around them often aren't.
ASIC's October 2024 report on AI governance across financial services licensees found nearly half lacked policies addressing consumer fairness or bias. Even fewer had policies governing the disclosure of AI use. In one case, an AI model used to generate credit risk scores was flagged as a "black box" with no way to explain the variables influencing an applicant's score or how they affected the outcome. The finding was specific to financial services in Australia, but the pattern is universal: systems get faster while the ability to explain what they're doing doesn't keep pace.
What happens when you can't explain the system
The accountability gap becomes visible when decisions are challenged. The Mobley v. Workday litigation, one of the first major class-action lawsuits alleging discrimination through algorithmic bias in hiring, illustrates what's at stake. A federal judge ruled in 2024 that the AI screening tools in question were participating in the decision-making process, not just implementing employer-set criteria. By 2025, the court ruled those tools could be considered an "agent" of the employer, and the EEOC filed a brief supporting the position that algorithmic hiring tools can violate anti-discrimination laws even without explicit intent.
The case is US-based, but the principle it establishes travels. If an AI tool produces a discriminatory outcome, the employer bears accountability regardless of whether a third-party vendor built the tool. That principle is already embedded in the EU AI Act's high-risk framework, reflected in Australia's parliamentary recommendations on AI and employment, and consistent with Singapore's emphasis on board-level accountability for AI risk. Wherever you operate, the direction is the same: using a vendor's platform doesn't transfer liability.
Separately, Stanford researchers found in October 2025 that AI resume screening tools gave older male candidates higher ratings than female and younger candidates, even when all resumes were generated from identical data. The bias wasn't in the job description or the employer's intent. It was in the model itself, inherited from patterns in its training data. If no one is testing for that, no one will catch it until the damage is done.
The audit gap
Bias audits are supposed to be the mechanism that makes AI hiring accountable. But there's growing evidence that the audits themselves aren't working as intended.
A 2025 ACM study examined 116 publicly available bias audits produced under New York City's Local Law 144, one of the first laws requiring annual audits of automated hiring tools. The findings were concerning. Many audits relied on incomplete demographic data. Others used opaque aggregation methods or test data that didn't reflect real deployment conditions. The metrics used often failed to represent how the tools actually operated in practice.
The researchers found audits for approximately 2% of Fortune 500 companies, despite industry data showing that over 98% use applicant tracking software with automated screening features.
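The core number a Local Law 144 audit reports is the impact ratio: each demographic category's selection rate divided by the selection rate of the most-selected category. The sketch below, in Python, shows that calculation at its simplest, assuming you only have selected/total counts per category; a real audit must also handle intersectional categories, small-sample exclusions, and the gap between test data and deployment data that the ACM study flagged.

```python
def impact_ratios(selected: dict, total: dict) -> dict:
    """Selection-rate impact ratios per demographic category.

    selected: candidates advanced by the tool, keyed by category
    total:    candidates assessed by the tool, keyed by category
    Returns each category's selection rate divided by the highest
    category selection rate (1.0 = the most-selected group).
    """
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}


# Illustrative counts only, not real audit data:
# group A advanced 50 of 100, group B advanced 30 of 100.
ratios = impact_ratios({"A": 50, "B": 30}, {"A": 100, "B": 100})
```

Here group B's ratio is 0.6, which would fall below the 0.8 threshold of the traditional four-fifths rule of thumb. The point of the ACM study is that a number like this is only as honest as its inputs: if the demographic data is incomplete or the test population doesn't match deployment, the ratio is precise but meaningless.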
This matters for every organisation using AI in hiring, regardless of jurisdiction. If your audit process is based on incomplete data, tests that don't match real conditions, or metrics that don't capture how the system actually behaves, compliance becomes a paper exercise. The accountability is hollow.
What accountable looks like
Accountability means being able to show, clearly and from your records, what the system does and how it was configured. It means evidence of ongoing testing against bias and accuracy, conducted against real deployment conditions rather than synthetic data. It means human oversight where the reviewer understands the system well enough to challenge its output, not just approve it. And it means defined ownership: when a decision is wrong, accountability doesn't dissolve into the gap between HR, IT, and your vendor.
The regulatory direction globally, from the EU to APAC to North America, converges on the same set of expectations: documentation, testing, transparency, human oversight, and vendor accountability. The specifics and timelines vary by jurisdiction, but the underlying question is consistent: can you prove how a decision was made, and can you show that someone was responsible for it?
The organisations that will earn trust in this environment aren't those with the most automation. They're the ones that can answer that question from their records, clearly and under pressure, whenever it's asked.
The bigger picture
This article is part of a four-part series on how AI is reshaping trust in hiring and workforce management. For the full picture, including how authenticity, fairness, continuity, and accountability connect as a single trust problem, read the pillar piece.