Although it is not always obvious, artificial intelligence (AI) is quickly becoming a part of everyday life. For organisations that implement AI, it is important to have the corresponding AI risk management, AI governance and AI assurance arrangements in place before AI is used, not after.
The use of AI in business ranges from back-end applications such as spam filters, smart email categorisation, and fraud detection and prevention for online transactions, to customer-facing e-commerce applications such as product recommendations, purchase predictions, dynamic price optimisation, online customer support automation and automated insights, especially in data-driven industries such as financial services and e-commerce.
The use of AI exposes organisations to several risks, including:
- social manipulation
- social surveillance
Cyber Risk Analyst Flynn O’Keeffe and Director Tony Harb take a closer look at AI governance and the NSW AI Assurance Framework, and what they mean for risk, audit and governance professionals.
The Rise of AI Governance
AI governance is a new discipline that has emerged with the recent uptake of AI. It is the overarching framework that manages and controls how organisations use AI, with a predefined set of processes, methodologies and tools that, like all other governance arrangements, should be reviewed by the governing body and audit and risk committees.
Globally, governments and organisations have begun issuing AI governance principles and frameworks. For example:
- Singapore first released the Model AI Governance Framework at the 2019 World Economic Forum Annual Meeting in Davos, Switzerland. The framework focuses primarily on four broad areas (1) internal governance structures and measures, (2) human involvement in AI-augmented decision-making, (3) operations management and (4) stakeholder interaction and communication. The Model Framework is based on two high-level guiding principles that promote trust in AI and understanding of the use of AI technologies.
- In April 2021, the European Commission proposed what would be the first legal framework for AI – the Artificial Intelligence Act. The Act assigns applications of AI to three risk categories (1) applications and systems that create an unacceptable risk, e.g. government-run social scoring of the type used in China, (2) high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements and (3) applications not explicitly banned or listed as high-risk will be largely left unregulated.
- After 18 months of consultation, in January 2023, NIST released the final revision of the AI Risk Management Framework (AI RMF 1.0). The voluntary framework is designed to help governments and organisations develop and deploy AI technologies in ways that enhance AI trustworthiness while managing risks, consistent with democratic and social values.
In Australia, the Australian government released the AI Ethics Framework in November 2019 to help guide organisations and governments when designing, developing and implementing AI. The 8 Artificial Intelligence Ethics Principles are voluntary and designed to help ensure AI is safe, secure and reliable.
In NSW, the state government released the NSW AI Assurance Framework which came into effect in March 2022 to help NSW government agencies design, build and use AI-enabled products and solutions.
Clearly these new global and local AI governance laws and frameworks highlight the growing recognition of the need to address the ethical, legal, and societal implications of artificial intelligence and the critical need for guidelines to ensure responsible and accountable AI development and deployment.
NSW AI Assurance Framework
The NSW AI Assurance Framework must be used for all projects that contain an AI component or utilise AI-driven tools, including large language models and generative AI. The framework is the result of collaboration between the community, academic leaders, ethics experts, industry partners, and members of the public. These partnerships foster knowledge sharing, promote best practice and encourage innovation in AI technologies, and allow the challenges and opportunities presented by AI to be addressed collectively through this framework and other strategies.
Industry experts say that the NSW government will be taking a “deliberate but cautious” approach when implementing new artificial intelligence technology. As the framework is relatively new, many agencies and Audit and Risk Committee members remain unfamiliar with the new AI Assurance Framework.
The AI Assurance Framework is just 1 of 3 key governance components of the NSW Government’s overarching approach to Artificial Intelligence which aims to provide clear guidance on the safe use of AI, finding the right balance between opportunity and risk, while putting in place important protections that would apply for any service delivery solution. The 3 components include:
- The Artificial Intelligence Strategy which sets out a way forward for AI adoption by NSW Government to help deliver services.
- The Artificial Intelligence Ethics Policy which provides a set of key principles for NSW government agencies to follow. It requires all agencies to implement AI in a way that is consistent with key ethical principles and the AI User Guide. The pillars of the ethics policy include – community benefit, fairness, privacy and security, transparency and accountability.
- The AI Assurance Framework which assists agencies to design, build and use AI-enabled products and solutions.
Building Trust through AI Governance
The NSW Government’s AI Assurance Framework aims to build public trust by establishing a robust governance structure around AI implementation. This framework acknowledges the importance of public input, transparency, and accountability in shaping AI policies and practices. It provides a roadmap for government agencies, businesses, and other stakeholders to ensure that AI technologies are developed, deployed, and used in a manner that aligns with community values and expectations.
Ethics and Human-Centric AI Governance Approach
At the heart of the AI Assurance Framework is a commitment to an ethical and human-centric approach. It emphasises the need for AI systems to be fair, transparent, and accountable. This means that AI technologies should not discriminate, should be explainable and understandable to humans, and should have mechanisms in place for addressing any unintended consequences or biases.
The NSW government AI Ethics Policy has 5 mandatory principles when utilising AI:
- Community Benefit – AI should deliver the best outcome for the citizen, and key insights into decision making
- Fairness – Use of AI will include safeguards to manage data bias or data quality risks, following best practice and Australian Standards
- Privacy and security – AI will include the highest levels of assurance, ensuring projects adhere to the Privacy and Personal Information Protection (PPIP) Act 1998
- Transparency – Review mechanisms will ensure citizens can question and challenge AI-based outcomes, ensuring projects adhere to the Government Information (Public Access) Act 2009
- Accountability – Decision-making remains the responsibility of organisations and Responsible Officers
Accountability is a crucial principle for the proper implementation of AI into governance and society. Ultimately, verifying that AI is working as intended and ethically is a priority for any organisation looking to incorporate AI into business activity. As it currently stands, AI is excellent for the automation and simplification of day-to-day tasks. However, AI, such as the widely popular ChatGPT, can often make mistakes that look convincingly correct. To counteract this, organisations must ensure that outputs generated by AI are verified and that decision making is conducted by Responsible Officers.
Risk Management and Assessment
To mitigate potential risks associated with AI, the framework emphasises a comprehensive risk management approach. Government agencies and organisations are encouraged to conduct thorough assessments of the potential risks and impacts of AI systems before their deployment. This includes considering factors such as privacy, security, bias, and the potential for unintended consequences. By taking a proactive approach to risk management, the NSW Government aims to prevent or minimise any negative consequences arising from the use of AI.
The assurance framework gives useful guidance on rating AI risk factors on a spectrum, ranging from very low to very high. AI risks rate higher on the scale when AI makes and implements operational decisions, without human input, that could negatively affect human wellbeing. Agencies and organisations should review their implementation of AI, evaluate the risk taken against their internal risk appetite, and then make informed decisions.
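The idea of rating an AI use case on a very-low-to-very-high spectrum and comparing it against an agency's risk appetite can be sketched as follows. The factors, weights and thresholds below are illustrative assumptions for the sketch, not values taken from the framework itself:

```python
# Illustrative sketch only: the factors and weights are hypothetical,
# not taken from the NSW AI Assurance Framework.

RATINGS = ["very low", "low", "mid-range", "high", "very high"]

def rate_ai_use_case(autonomous_decisions: bool,
                     affects_human_wellbeing: bool,
                     uses_personal_data: bool) -> str:
    """Map simple yes/no risk factors onto a five-point rating scale."""
    score = 0
    if autonomous_decisions:
        score += 2  # decisions made and implemented without human input weigh heavily
    if affects_human_wellbeing:
        score += 2
    if uses_personal_data:
        score += 1
    return RATINGS[min(score, len(RATINGS) - 1)]

def within_appetite(rating: str, appetite: str) -> bool:
    """True if the rated risk does not exceed the agency's stated risk appetite."""
    return RATINGS.index(rating) <= RATINGS.index(appetite)
```

For example, a system that decides autonomously and can affect wellbeing would rate "very high" here, which an agency with a "mid-range" appetite would treat as outside appetite.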
Data Governance and Privacy
The AI Assurance Framework recognises the importance of data governance and privacy in AI applications. It highlights the need for clear guidelines on data collection, storage, sharing, and usage. Privacy protection and compliance with relevant laws and regulations are essential components of the framework. The NSW Government is committed to ensuring that personal information is handled appropriately and securely in all AI initiatives.
The AI Assurance Framework assists organisations and agencies by providing a flowchart for identifying the level of control required for the data, together with a heat map for determining the data governance environment required based on the flowchart's outcome.
Organisations must stay vigilant when utilising AI to ensure privacy. Acceptable use policies should address the use of AI technologies. According to a report from Cyberhaven, 11% of the data employees paste into ChatGPT is confidential. This underscores the importance of educating employees through relevant training and of making AI safety an integral part of a strong information security framework.
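One way an acceptable use policy can be backed by tooling is a simple pre-submission check that flags obviously sensitive text before it reaches an external AI tool. The patterns below are hypothetical examples for the sketch; a real deployment would use a proper data loss prevention product and patterns matched to the organisation's data classifications:

```python
import re

# Illustrative sketch: patterns are hypothetical examples, not a complete
# or production-grade sensitive-data detector.
BLOCKED_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),  # document classification markings
    re.compile(r"\b\d{16}\b"),                       # bare 16-digit card-style numbers
]

def safe_to_paste(text: str) -> bool:
    """Return False if the text matches any pattern flagged as sensitive."""
    return not any(p.search(text) for p in BLOCKED_PATTERNS)
```

A check like this complements, rather than replaces, employee training: it catches the obvious cases while awareness covers the rest.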
Transparency and Explainability
To foster trust and accountability, the framework requires transparency and explainability in AI systems. Organisations utilising AI technologies are encouraged to provide clear and accessible information about how AI systems work, the data they rely on, and the decision-making processes involved. This transparency enables individuals and communities to understand and question the outcomes produced by AI algorithms.
The framework further provides warnings around the use of a “black box” AI system, that is, a model whose internal workings and source code are inaccessible. If a black box system is being used, the framework encourages evaluating the outputs of the system and appropriately documenting all considerations.
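Evaluating a black box by its outputs alone can be sketched as scoring the opaque model against a set of human-reviewed cases and keeping a dated record of the result. The `model` callable and the record's field names are our own assumptions for illustration:

```python
import datetime

def evaluate_black_box(model, labelled_cases):
    """Score an opaque model against human-reviewed (input, expected) cases
    and return a dated record suitable for documenting the assessment."""
    results = [model(case) == expected for case, expected in labelled_cases]
    return {
        "evaluated_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "cases": len(results),
        "agreement_rate": sum(results) / len(results),
    }
```

The record can then be filed as part of the documentation the framework asks for, and re-run periodically to detect drift in the system's behaviour.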
Ongoing Monitoring and Evaluation
The AI Assurance Framework recognises that the responsible use of AI requires ongoing monitoring and evaluation. Monitoring is an integral part of all sections of the framework, encouraging government agencies and organisations to regularly assess the performance and impact of AI systems and incorporate feedback from affected stakeholders and the public. The framework suggests having clear performance monitoring and calibration schedules.
Weekly performance monitoring is recommended for any systems with moderate residual risks. A monthly evaluation is recommended for low risk systems.
Finally, for systems that are of a high or very high risk, the organisation should develop an agreed upon schedule for performance monitoring, evaluation and calibration. Continuous evaluation ensures that AI systems remain aligned with their intended objectives and that any emerging risks or biases can be addressed promptly.
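The cadences above amount to a simple lookup from residual risk level to monitoring schedule. A minimal sketch, where "agreed schedule" stands for the organisation-defined schedule the framework requires at higher risk levels:

```python
# Sketch of the monitoring cadences described above; the strings are labels
# only, and "agreed schedule" means the organisation must define its own.
MONITORING_CADENCE = {
    "low": "monthly evaluation",
    "moderate": "weekly performance monitoring",
    "high": "agreed schedule for monitoring, evaluation and calibration",
    "very high": "agreed schedule for monitoring, evaluation and calibration",
}

def cadence_for(residual_risk: str) -> str:
    """Look up the recommended monitoring cadence for a residual risk level."""
    return MONITORING_CADENCE[residual_risk.lower()]
```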
The NSW Government’s Artificial Intelligence Assurance Framework sets a strong foundation for the ethical and responsible use of AI in New South Wales. By prioritising trust, transparency and accountability, and by incorporating comprehensive risk management, data governance and ongoing evaluation, the framework helps ensure that AI technologies are developed and deployed in a manner that benefits society while mitigating potential risks.
The framework serves as a commendable model for other governments and organisations seeking to navigate the complexities of AI and create a future that harnesses its potential while safeguarding human values and well-being.
Can we help improve AI Governance & AI Assurance?
We are here to help strengthen cyber risk management, risk governance and resilience. Our cyber risk management capabilities include designing and developing cyber risk management frameworks, AI governance and a wide range of cyber response plans and playbooks to strengthen cyber resilience capabilities. Our cyber risk management services include:
- Cyber Security Gap Analysis
- Cyber Risk Governance Framework Review
- Cyber Risk Governance Framework Development
- AI Governance Advisory
- Third-Party Vendor Review and Cyber Risk Analysis
- Cyber Risk Awareness Training and Internal Campaigns
- Email Phishing Campaigns
- Cyber Incident Response
- Post-Cyber Incident Review
- Crisis Team Familiarisation Training
Explore, innovate and take risks. Contact us to discuss how we can help strengthen your cyber risk governance and your resilience.