Making AI Governance Verifiable: Singapore’s AI Verify Toolkit
Author: Josh Lee Kok Thong | Translation: Li Yang | Proofreading: Xiang Xinyi
Source: The Paper
Global interest in AI governance and regulation has exploded in recent months. Many believe that new governance and regulatory structures are needed to deal with generative AI systems, whose capabilities are astonishing, such as OpenAI's ChatGPT and DALL-E, Google's Bard, and Stable Diffusion. The EU Artificial Intelligence Act has received widespread attention, and many other important initiatives, including various AI governance models and frameworks, are emerging around the world.
This article is about "AI Verify", the AI governance testing framework and toolkit that Singapore released in May 2022. It makes three main points: ① it summarizes Singapore's overall strategy on AI governance and the key initiatives the government issued before launching AI Verify; ② it explains how "AI Verify" works; ③ with "AI Verify" now a year old, it discusses the future of AI Verify and of Singapore's approach to AI governance and regulation. In short, the main points are as follows:
Singapore has taken a moderate-intervention approach to AI governance and regulation, with its AI Governance Framework Model setting out guidelines for AI governance in the private sector.
"AI Verify" is an AI governance testing framework and toolkit, launching in May 2022. Although in a trial phase, it represents Singapore's efforts to further develop the global discourse on AI governance and regulation, attempting to meet the growing demand for trustworthy AI systems, and promoting the interconnectivity of the global AI regulatory framework.
"AI Verification" is a test framework based on internationally recognized AI governance principles that companies can use when testing their own AI systems. "AI Verification" is not intended to define ethical standards, but to provide verifiability by allowing AI system developers and their owners to issue statements attesting to the performance of their AI systems.
– To be successful, “AI-verified” may need more recognition and adoption. This depends on factors such as cost, persuading stakeholders of its value, and its relevance and synergies with the international regulatory framework.
Overall Approach to AI Governance in Singapore
In its National Artificial Intelligence Strategy, Singapore announced that it aims to be "at the forefront of the development and deployment of scalable, impactful AI solutions" and hopes to consolidate its role as "a global hub for developing, testing, deploying and scaling artificial intelligence solutions". One of the five "ecosystem enablers" identified in the strategy for increasing AI adoption is fostering a "progressive and trustworthy environment" for AI development: an environment that balances innovation against minimizing societal risk.
To create this "progressive and trustworthy environment", Singapore has so far taken a light-touch, voluntary approach to AI regulation. That is because the country recognizes two realities of its AI ambitions.
First, the Singapore government **sees AI as a key strategic enabler** for growing the economy and improving the quality of life of its citizens. Singapore has therefore not taken drastic steps to regulate artificial intelligence, so as not to stifle innovation and investment. Second, given its size, Singapore recognizes that it may be a price taker rather than a price setter as AI governance discourse, frameworks and regulations develop globally. Its current strategy is therefore not to reinvent AI governance principles, but to "take the world where it is, rather than where it hopes the world to be".
Before the launch of AI Verify in 2022, Singapore's regulatory approach to AI, overseen by the Personal Data Protection Commission (PDPC), rested on three pillars:
1. The AI Governance Framework Model ("Framework Model").
2. The Advisory Committee on the Ethical Use of Artificial Intelligence and Data ("Advisory Committee").
3. A research program on AI governance and data use ("Research Program").
The following focuses on the "Framework Model".
The Framework Model
The Framework Model, first launched at the World Economic Forum Annual Meeting in 2019, is a voluntary, non-binding framework that guides organizations in deploying artificial intelligence solutions responsibly at scale, independent of the technology's stage of development. As a guide, the Framework Model makes practical recommendations only for the deployment of AI by private-sector entities; public-sector use of AI is governed by internal guidelines and AI and data governance toolkits. **The Framework Model is described as a "living document": future versions will evolve as technology and society evolve, and it is deliberately agnostic to technology, industry, scale and business model.**
Essentially, the Framework Model is guided by two fundamental principles that promote trust in and understanding of AI. **First, organizations using AI in decision-making should ensure that their decision-making processes are explainable, transparent and fair. Second, AI systems should be human-centric: protecting human well-being and safety should be a primary consideration in the design, development and use of AI.**
The Framework Model translates these guiding principles into actionable measures in four key areas of organizational decision-making and technology development:
(a) internal governance structures and practices;
(b) the level of human involvement in AI-augmented decision-making;
(c) operations management;
(d) stakeholder interaction and communication.
The Framework Model summarizes suggested considerations, approaches and measures in each of these key areas.
When Singapore launched the second edition of the Framework Model at the World Economic Forum in 2020, it was accompanied by two other documents: the Implementation and Self-Assessment Guide for Organizations (ISAGO) and the Compendium of Use Cases (Volumes 1 and 2). ISAGO is a checklist to help organizations assess the alignment of their AI governance processes with the Framework Model. The Compendium provides real-world examples of adopting the Framework Model's recommendations across sectors, use cases and jurisdictions.
Together, the Framework Model and its supporting documents anchor and outline the substance of Singapore's thinking on AI regulation. These initiatives won Singapore the United Nations World Summit on the Information Society Award in 2019, in recognition of its leadership in AI governance.
January 2020 marked a turning point in the global discussion of AI regulation. On January 17, 2020, a draft white paper from the European Commission drew the international community's attention to the possibility of government regulation of artificial intelligence. In February 2020, the European Commission officially released its White Paper on Artificial Intelligence, setting out plans to create a regulatory framework for AI. A few months later, the Commission presented a draft of its forthcoming Artificial Intelligence Act, the first serious attempt by a government body to introduce substantive rules regulating the development and use of AI systems horizontally. The AI Act can also be expected to have extraterritorial effect: companies developing AI systems outside Europe may be subject to the new law.
These developments have influenced thinking about the future of Singapore's AI regulatory and governance landscape. While the PDPC maintains its voluntary, light-touch approach to AI regulation, it acknowledges that AI will face tougher oversight in the future. The PDPC also appears mindful of **consumers' growing demand for credible AI systems and developers, of the need for international AI standards for benchmarking and evaluating AI against regulatory requirements, and of the growing requirement for interoperability among AI regulatory frameworks.** With all this in view, Singapore began development work whose results were eventually consolidated into the "AI Verify" framework.
What is "AI Verify"
"AI Verify" is jointly issued by Infocomm Media Development Authority (IMDA), a statutory committee under the Ministry of Communications and Information of Singapore, and the Personal Data Protection Committee (PDPC). It is an artificial intelligence governance testing framework and toolkit. **Using AI Verify, organizations can conduct a voluntary assessment of their AI systems using a combination of technical testing and process-based inspections. In turn, the system helps companies provide objective and verifiable proof to stakeholders that their AI systems are being implemented in a responsible and trustworthy manner. **
Given that AI testing methods, standards, metrics and tools are still developing, "AI Verify" is currently at the "minimum viable product" (MVP) stage. This has two implications. First, the MVP version has technical limitations on the types and sizes of AI models and datasets it can test or analyze. Second, "AI Verify" is expected to evolve as AI testing capabilities mature.
IMDA had four goals in developing the "AI Verify" MVP:
(a) First, IMDA hopes that organizations can use "AI Verify" to determine performance benchmarks for their AI systems and demonstrate these verified benchmarks to stakeholders such as consumers and employees, thereby helping organizations build trust.
(b) Second, because it was developed with various AI regulatory and governance frameworks and common trustworthy-AI principles in mind, "AI Verify" aims to help organizations find common ground among global AI governance frameworks and regulations. IMDA will continue to work with regulators and standards organizations to map the "AI Verify" testing framework onto established frameworks. These efforts aim to enable companies to operate or offer AI products and services in multiple markets, while making Singapore a hub for AI governance and regulatory testing.
(c) Third, **as more organizations experiment with "AI Verify" and use its testing framework, IMDA will be able to collate industry practices, benchmarks and metrics.** Given that Singapore participates in global AI governance platforms such as the Global Partnership on AI and ISO/IEC JTC 1/SC 42, this input can provide valuable perspectives for international standard-setting on AI governance.
(d) Fourth, IMDA wants "AI Verify" to help create a local AI testing community in Singapore consisting of AI developers and system owners (seeking to test AI systems), technology providers (developing AI governance implementation and testing solutions), consulting service providers (specializing in testing and certification support), and researchers (developing testing techniques, benchmarks and practices).
It is also important to clarify several potential misconceptions about "AI Verify". First, **"AI Verify" does not attempt to define ethical standards.** It does not attempt to classify AI systems; rather, it provides verifiability, allowing AI system developers and owners to substantiate their claims about their AI systems' performance. Second, the government cannot guarantee that AI systems tested with "AI Verify" are free from risk or bias, or that they are completely "safe" and "ethical". **Third, "AI Verify" aims to prevent organizations from inadvertently revealing sensitive information about their AI systems (such as their underlying code or training data). To this end, it adopts a key safeguard: "AI Verify" is self-administered by the developers and owners of AI systems, which allows an organization's data and models to remain within its own operating environment.**
How "AI Verification" works
"AI Validation" consists of two parts. The first is the Test Framework, which cites 11 internationally recognized AI ethics and governance principles organized into five pillars. The second is the toolkit that organizations use to perform technical testing and document process checks in the testing framework.
The "AI Verify" Testing Framework
For each of the testing framework's 11 principles, "AI Verify" sets out the following components:
**(a) Definitions:** The testing framework provides an easy-to-understand definition for each AI principle. For example, explainability is defined as the "ability to assess the factors that lead to an AI system's decision, its overall behavior, outcome, and impact".
**(b) Testable criteria:** For each principle, the framework provides a set of testable criteria. These criteria take into account technical and/or non-technical factors (such as processes, procedures or organizational structures) that contribute to achieving the intended outcomes of the governance principle.
Taking explainability as an example, two testable criteria are given: developers can run explainability methods to help users understand what drives the AI model, and developers can demonstrate a preference for developing AI models that can explain their decisions, or that do so by default.
**(c) Testing processes:** For each testable criterion, "AI Verify" provides a process or set of actionable steps to carry out, which may be quantitative (such as statistical or technical tests) or qualitative (such as documentary evidence produced during process checks).
For explainability, a technical test might involve empirical analysis to determine the contribution of individual features to the model's output, while a process-based test would document the rationale, risk assessments and trade-offs behind the AI model.
**(d) Metrics:** These are the quantitative or qualitative parameters used to measure, or to provide evidence for, each testable criterion.
Continuing the explainability example, the metric for determining feature contributions examines the contributing features behind the model's output, obtained from technical tools such as SHAP and LIME (a minimal sketch of such a test follows this list). Process-based metrics can document assessments made when selecting the final model, such as risk assessments and trade-off exercises.
**(e) Thresholds (where applicable):** Where available, the testing framework provides accepted values or benchmarks for selected metrics. Such values or benchmarks may be defined by regulatory bodies, industry associations or other recognized standards-setting organizations. No thresholds are provided in the "AI Verify" MVP, given the rapid evolution of AI technologies, their use cases and the methods for testing AI systems. However, as the AI governance space matures and the use of "AI Verify" grows, IMDA intends to collate and develop context-specific metrics and thresholds to add to the testing framework.
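As a concrete illustration of the feature-contribution metric mentioned under (d), here is a minimal sketch using SHAP. The dataset, model and ranking logic are illustrative assumptions made for this article, not the "AI Verify" toolkit's actual implementation.

```python
# A minimal feature-contribution test with SHAP (illustrative only).
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a stand-in model on a public dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Depending on the shap version, the result is a list (one array per class)
# or a single array with a trailing class axis; take the positive class.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values[:, :, 1]

# Rank features by mean absolute contribution to the model's output.
mean_abs = np.abs(vals).mean(axis=0)
for name, score in sorted(zip(X.columns, mean_abs), key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.4f}")
```

A ranking of this kind is the sort of documentary evidence a developer could attach to an explainability claim.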
"Artificial Intelligence Verification" AI Verify Toolkit
Although the "AI Verify" toolkit is currently available only to organizations that successfully enroll in the AI Verify MVP program, IMDA describes it as a "one-stop" tool for organizations to conduct technical testing. Specifically, the toolkit makes extensive use of open-source testing libraries, including SHAP (SHapley Additive exPlanations) for explainability, the Adversarial Robustness Toolbox for robustness, and AIF360 and Fairlearn for fairness.
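To give a flavor of the robustness testing such libraries support, here is a hedged sketch using the Adversarial Robustness Toolbox (ART). The model, the FGSM attack and the perturbation size are assumptions made for illustration, not the toolkit's actual configuration.

```python
# Illustrative adversarial-robustness check with ART (not AI Verify's own code).
from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the fitted model so ART can query its predictions and gradients.
classifier = SklearnClassifier(model=model)

# Generate adversarially perturbed inputs with FGSM (perturbation size eps).
attack = FastGradientMethod(classifier, eps=0.2)
X_adv = attack.generate(x=X)

# Compare accuracy on clean versus perturbed inputs.
print(f"clean accuracy:       {model.score(X, y):.3f}")
print(f"adversarial accuracy: {model.score(X_adv, y):.3f}")
```

The gap between the two accuracies is one simple, reportable signal of how robust the model is to small input perturbations.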
Users of "AI Verification" can install the toolkit in their internal environment. The user will carry out the testing process under the guidance of the user interface. For example, the tool includes a "guided fairness tree" for users to identify fairness metrics relevant to their use case. Finally, AI Verify will generate a summary report to help system developers and owners interpret test results. For process inspections, the report provides a checklist of the presence or absence of documentary evidence as specified in the test framework. Test results are then packaged into Docker® containers for deployment.
Conclusion
When IMDA released "AI Verify", the wave of interest in generative AI had not yet materialized. With that wave, interest in the governance, testability and trustworthiness of AI systems has grown significantly, and the various features of "AI Verify" described in this article position it to respond to this moment.
Singapore has previously demonstrated its ability to contribute to global discourse and thought leadership on AI governance and regulation; the Framework Model is proof of that. The stakes for "AI Verify" are certainly high, but so is the global demand for such an initiative. To succeed, it may need greater recognition and wider adoption, which depends on several factors. First, accessibility is critical: organizations looking to use "AI Verify" must be able to do so at low or no cost. **Second, convincing organizations of its value is critical.** This requires IMDA to show that "AI Verify" is technically and procedurally sound, that it can be used effectively on more and newer types and scales of AI models and datasets, and that it does not compromise the commercial sensitivity of proprietary AI models or datasets. **Third, and perhaps most important, it must maintain interoperability with international regulatory frameworks.** IMDA needs to ensure that "AI Verify" continues to help organizations address, and interoperate within, key emerging global AI regulatory frameworks such as the EU AI Act, Canada's Artificial Intelligence and Data Act, the US NIST AI Risk Management Framework, and Singapore's own Framework Model.