Policy and insights report
Well-designed AI assessments can help evaluate whether the technology meets the expectations of business and society
For business leaders, policymakers and the public, AI represents a generational opportunity to increase productivity and spur innovation.
However, AI can also seem like a black box, offering minimal transparency and little assurance of its effectiveness, governance and trustworthiness. And while AI assessment frameworks are emerging to address these concerns, the sheer number and variety of approaches can be hard to navigate.
This policy paper explores the nascent field of AI assessments, identifies the characteristics of effective AI assessments, and highlights key considerations for business leaders and policymakers.
Our review finds a rapidly emerging assessment ecosystem that can help businesses build and deploy AI systems that are more likely to be effective, safe and trusted.
AI assessments — whether voluntary or mandatory — can increase confidence in AI systems. When well-designed, they can enable business leaders to evaluate whether the systems are performing as intended, inform effective governance and risk mitigation, and support compliance with any applicable laws, regulations or standards.
The concerns about AI, like the excitement, are broad-based.
Business leaders are asking how to assess whether an AI system is safe and effective, how to identify and manage its risks, and how to measure it against governance and performance criteria.
Understanding the AI assessment landscape
As of January 2025, policymakers from nearly 70 countries have introduced over a thousand AI public policy initiatives, including legislation, regulation, voluntary initiatives and agreements, according to the Organisation for Economic Co-operation and Development (OECD). Many of these initiatives include various types of AI assessments.
AI assessments can generally be grouped into three categories and may be performed separately or in combination:
- Governance assessments, which determine whether appropriate internal corporate governance policies, processes and personnel are in place to manage an AI system, including its risks, suitability and reliability.
- Conformity assessments, which determine whether an organisation’s AI system complies with relevant laws, regulations, standards or other policy requirements.
- Performance assessments, which measure the quality of an AI system's core functions, such as accuracy, non-discrimination and reliability. They often use quantitative metrics to assess specific aspects of the AI system; a minimal sketch of such metrics follows this list.
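To make the idea of quantitative performance metrics concrete, the sketch below computes two illustrative measures on a hypothetical evaluation set: simple accuracy and a demographic parity gap (the largest difference in positive-prediction rates between groups). The metric choices, function names and data are assumptions made for illustration, not prescriptions of any particular assessment framework.

    # A minimal sketch of quantitative performance-assessment metrics.
    # The metrics and data below are illustrative assumptions, not
    # requirements of any specific AI assessment framework.

    def accuracy(y_true, y_pred):
        """Fraction of predictions that match the ground-truth labels."""
        return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

    def demographic_parity_gap(y_pred, groups):
        """Largest difference in positive-prediction rates between groups."""
        counts = {}  # group -> (positive predictions, total predictions)
        for pred, group in zip(y_pred, groups):
            pos, total = counts.get(group, (0, 0))
            counts[group] = (pos + (pred == 1), total + 1)
        rates = [pos / total for pos, total in counts.values()]
        return max(rates) - min(rates)

    # Hypothetical evaluation data: true labels, model predictions,
    # and a protected attribute for each record.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

    print(f"accuracy: {accuracy(y_true, y_pred):.3f}")                   # 0.625
    print(f"parity gap: {demographic_parity_gap(y_pred, groups):.3f}")   # 0.250

In practice, an assessment framework would pair such metrics with documented thresholds and test-data provenance, so that results can be compared across assessments.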
Assessment quality can vary significantly. To address common shortcomings, we recommend the following:
- Specificity about what is to be assessed and why. An effective AI assessment framework has a clearly articulated business or policy objective, scope and subject matter.
- Clear methodology. Methodologies and suitable criteria determine how a subject matter is assessed, and it is essential that similar AI assessments use clearly defined and consistent approaches. Some assessments, for instance, may include explicit opinions or conclusions, while others may only provide a summary of procedures performed. Consistency, combined with clear terminology, allows users to compare assessment outcomes and understand how they were reached.
- Suitable qualifications for those providing the assessment. The choice of assessment provider is crucial and directly influences the credibility, reliability and overall integrity of the process. Key considerations for selecting assessment providers include competency and qualifications, objectivity and professional accountability.
For policymakers, we suggest that they:
- consider what role voluntary (or mandated) AI assessments can play in building confidence in AI systems
- clearly define the purpose and components of the assessment framework
- address expectation gaps about what AI assessments entail and what their limitations are
- identify measures to build the capacity of the AI assessment market
- endorse assessment standards that are – to the extent practical – consistent and compatible with standards in other jurisdictions.
We suggest that business leaders consider the following:
- The role AI assessments can play in enhancing corporate governance and risk management.
- Whether – even in the absence of regulatory requirements – voluntary assessments can build confidence in AI systems among employees and customers.
- Where voluntary assessments are used, what the most appropriate type of assessment would be and whether it should be conducted internally or by a third party.
"AI is scaling rapidly, and being able to trust what it says has never been more crucial. AI assessments are an important step towards creating an ecosystem of trustworthy AI. Building on this policy paper co-produced with EY, we look forward to continued collaboration with policymakers, practitioners and others to improve awareness and understanding of effective AI assessments that enhance confidence in AI in the public interest."
Narayanan Vaidyanathan, ACCA
"With the use of AI systems in organisations transitioning from proof-of-concept to application in mission-critical applications, it is becoming increasingly vital that quality, governance and compliance assessments of AI systems are reliable and fit for purpose. In this co-produced report by EY and ACCA we lay out some of the main challenges we have observed in the current landscape of policies on AI assessments (also referred to as AI assurance or AI audits) and offer a set of considerations to address these challenges."
Dr Ansgar Koene, EY