<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>ASX | InConsult</title>
	<atom:link href="https://inconsult.com.au/publication-category/asx/feed/" rel="self" type="application/rss+xml" />
	<link>https://inconsult.com.au</link>
	<description>Helping you confidently take risks</description>
	<lastBuildDate>Wed, 22 Oct 2025 03:38:19 +0000</lastBuildDate>
	<language>en-AU</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://inconsult.com.au/wp-content/uploads/2021/06/cropped-favicon-3-32x32.jpg</url>
	<title>ASX | InConsult</title>
	<link>https://inconsult.com.au</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>ASX Takes Control of Corporate Governance</title>
		<link>https://inconsult.com.au/publication/asx-takes-control-of-corporate-governance/</link>
		
		<dc:creator><![CDATA[Tony Harb]]></dc:creator>
		<pubDate>Tue, 21 Oct 2025 04:15:29 +0000</pubDate>
				<guid isPermaLink="false">https://inconsult.com.au/?post_type=publication&#038;p=13185</guid>

					<description><![CDATA[<p>ASX Takes Control of Corporate Governance Principles The structure of corporate governance in Australia is undergoing a fundamental change. The system for establishing the Corporate Governance Principles and Recommendations (CGPR), which guide all ASX Listed Entities, is being overhauled after 23 years. The catalyst for this shift was the collapse of negotiations for the proposed [&#8230;]</p>
The post <a href="https://inconsult.com.au/publication/asx-takes-control-of-corporate-governance/">ASX Takes Control of Corporate Governance</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></description>
										<content:encoded><![CDATA[<h1>ASX Takes Control of Corporate Governance Principles</h1>
<p>The structure of corporate governance in Australia is undergoing a fundamental change. The system for establishing the Corporate Governance Principles and Recommendations (CGPR), which guide all ASX Listed Entities, is being overhauled after 23 years.</p>
<p>The catalyst for this shift was the collapse of negotiations for the proposed 5th Edition of the CGPR earlier this year. The 19-member ASX Corporate Governance Council (CGC), a body designed to achieve consensus among diverse industry groups, found itself in gridlock.</p>
<p>This friction prompted the ASX to commission an independent &#8220;Review Panel&#8221; in June 2025 to recommend a more effective path forward.</p>
<h2>The Six Key Recommendations Shaping the Future of CGPR</h2>
<p>In September 2025, the Review Panel released a <a href="https://www.asx.com.au/content/dam/asx/about/media-releases/2025/16-oct-asx-to-implement-recommendations-from-independent-panel-review-on-the-process-for-develop.pdf" target="_blank" rel="noopener">report</a> with sweeping recommendations, which the ASX formally committed to adopting in October 2025. This marks a definitive transition from a broadly representative model to a more centralised and expert-driven approach.</p>
<p>The core recommendations are:</p>
<ol>
<li><strong>Shift Ultimate Responsibility</strong>: The sole authority for developing, approving, and issuing the CGPR will move from the CGC to the ASX.</li>
<li><strong>Establish a New Expert Advisory Group</strong>: The ASX will be supported by a small, new Advisory Group of 6-10 members (half the size of the former council), led by an independent Chair.</li>
<li><strong>The Philosophical Shift</strong>: Members of the new Advisory Group will be appointed in their individual expert capacity. They will be expected to act in the interests of the entire market, moving away from representing specific industry groups or constituencies.</li>
<li><strong>Retain the &#8220;If Not, Why Not&#8221; Rule</strong>: The existing flexible disclosure approach will remain central to the Principles. This cornerstone of Australian governance gives boards flexibility while maintaining transparency.</li>
<li><strong>Simplify and Streamline</strong>: Future Principles will be simplified to avoid overlap with matters already covered in the Corporations Act and other legal/regulatory requirements.</li>
<li><strong>Set a Fixed Renewal Cycle</strong>: A 4-year renewal cycle will be implemented to ensure the CGPR remains current and can quickly address evolving risks like ESG, cyber security, and corporate culture.</li>
</ol>
<h2>What This Means for ASX Listed Entities</h2>
<p>The ASX&#8217;s commitment to these changes signals a move toward a more agile and efficient system for maintaining world-class corporate governance standards in Australia. While the 4th Edition of the Principles remains in effect for now, boards should anticipate quicker responses to global governance trends and a renewed focus on core principles over overly prescriptive rules in the next edition.</p>
<h2>Can We Help?</h2>
<p>We understand the importance of good governance for Australian listed entities. Since 2001, we have helped listed entities strengthen their risk management frameworks to align with better practice and the Corporate Governance Principles and Recommendations (CGPR).</p>
<p>Our services are designed to help boards quickly translate the Principles into actionable compliance and disclosure practices, enabling listed entities to meet the higher standards expected.</p>
<p>For small to medium listed entities, we offer bespoke services that include:</p>
<ul>
<li><strong>Virtual Risk Officer</strong>: We offer expert guidance in establishing formal risk frameworks and conduct independent reviews to assess framework maturity. Additionally, we conduct risk workshops, risk culture assessments, and provide specialised services in areas like business continuity, crisis management, fraud control, modern slavery, and climate change risk.</li>
<li><strong>Virtual Chief Information Security Officer (vCISO)</strong>: We provide fractional or on-demand cyber security leadership to help you meet the evolving cyber risk landscape, without the cost of a full-time executive.</li>
<li><strong>Internal Audit Services</strong>: We provide outsourced or co-sourced internal audit functions, ensuring independent assurance on your controls and operational efficiency, which is vital for maintaining board and stakeholder confidence.</li>
</ul>
<p>Take risk management to the next level and <a href="https://inconsult.com.au/contact-us/" target="_blank" rel="noopener noreferrer">contact us</a> to discuss your risk and resilience needs.</p>
<div class='printomatic pom-default ' id='id2888'  data-print_target='body'></div>The post <a href="https://inconsult.com.au/publication/asx-takes-control-of-corporate-governance/">ASX Takes Control of Corporate Governance</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Evolving Duties of the Board</title>
		<link>https://inconsult.com.au/publication/the-evolving-duties-of-the-board/</link>
		
		<dc:creator><![CDATA[Tony Harb]]></dc:creator>
		<pubDate>Tue, 21 Oct 2025 04:14:23 +0000</pubDate>
				<guid isPermaLink="false">https://inconsult.com.au/?post_type=publication&#038;p=13224</guid>

					<description><![CDATA[<p>The Evolving Duties of the Board: From Compliance to Culture and the &#8216;Social License to Operate&#8217; In an era of relentless scrutiny and rapid stakeholder expectation shifts, the responsibilities of the Board of Directors have expanded far beyond mere financial oversight and regulatory tick-boxing. While quarterly earnings and strategic growth remain paramount, a new, equally [&#8230;]</p>
The post <a href="https://inconsult.com.au/publication/the-evolving-duties-of-the-board/">The Evolving Duties of the Board</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></description>
										<content:encoded><![CDATA[<h1>The Evolving Duties of the Board: From Compliance to Culture and the &#8216;Social License to Operate&#8217;</h1>
<p>In an era of relentless scrutiny and rapid stakeholder expectation shifts, the responsibilities of the Board of Directors have expanded far beyond mere financial oversight and regulatory tick-boxing.</p>
<p>While quarterly earnings and strategic growth remain paramount, a new, equally critical mandate has emerged over the last 10 years &#8211; the active cultivation and protection of the company&#8217;s culture and its overarching social license to operate. This evolution is starkly articulated in the ASX Corporate Governance Principles and Recommendations (4th Edition), released on 27 February 2019, particularly within <em>Principle 3: Instil a culture of acting lawfully, ethically and responsibly.</em></p>
<p>For ASX listed companies, the cost of cultural missteps can be catastrophic, leading to reputational damage, regulatory fines, leadership churn, and a precipitous erosion of shareholder value.</p>
<p>The global financial crisis, numerous royal commissions, and ongoing corporate scandals have indelibly demonstrated that robust governance is fundamentally underpinned by a sound culture, not just a comprehensive compliance manual.</p>
<h2>Navigating the Rise of Non-Financial Risk</h2>
<p>The emphasis on Principle 3 signals a profound shift.</p>
<p>Boards are now unequivocally responsible for overseeing non-financial risks. These are the intangible, yet immensely impactful, dangers that emerge from employee conduct, ethical lapses, compliance breaches, and a failure to embed core values throughout the organisation. Historically, risk committees might have focused predominantly on financial, operational, and market risks. Today, their remit must explicitly extend to:</p>
<ul>
<li><strong>Conduct Risk</strong>: The potential for employee or executive behaviour to cause harm to customers, markets, or the organisation itself.</li>
<li><strong>Reputational Risk</strong>: The likelihood of negative public perception impacting brand value, customer loyalty, and investor confidence.</li>
<li><strong>Environmental &amp; Social Risk</strong>: Non-compliance or perceived non-compliance with societal expectations around sustainability, human rights, and community engagement.</li>
<li><strong>Cyber &amp; Data Ethics Risk</strong>: Beyond technical security, the ethical implications of data usage and the cultural approach to privacy.</li>
</ul>
<p>The corporate landscape for ASX-listed entities has been repeatedly tested by recent high-profile failures that highlight a critical vulnerability in modern corporate governance: the inadequate oversight of non-financial risks and corporate culture.</p>
<h3>1. The Qantas Board Governance Issues (2023–2024)</h3>
<p>The Qantas board faced intense scrutiny for prioritising shareholder interests over customer and social license interests, leading to a significant breakdown of trust. <a href="https://www.aicd.com.au/leadership/types/management/insights-from-the-qantas-governance-review.html" target="_blank" rel="noopener">Failures</a> included inadequate oversight of operational decline post-COVID, a contentious retention of substantial customer flight credits, and approving executive actions amid public outrage. This demonstrated a critical governance gap where the board was perceived as insufficiently independent and failed to maintain an effective balance between stakeholders.</p>
<h3>2. Board Governance Failures (Crown and Star)</h3>
<p>Royal Commissions and regulatory inquiries into both Crown Resorts and The Star Entertainment Group exposed massive systemic failures in risk management and ethical conduct oversight. Directors were found to have presided over a culture that allowed extensive breaches of anti-money laundering and financial crime laws to occur. These cases highlighted a catastrophic failure to establish a robust non-financial risk framework, leading to regulatory action, massive fines, and widespread board and executive turnover.</p>
<h3>3. Rex Continuous Disclosure Case (2024)</h3>
<p>ASIC initiated legal action against Rex and four of its directors alleging breaches of continuous disclosure obligations and directors’ duties. The core of the complaint is that the company issued an &#8220;optimistic&#8221; profit forecast without a reasonable basis, and then failed to correct the market promptly when financial information clearly indicated a material downgrade was necessary. This serves as a pointed reminder of the board’s duty to ensure the integrity of market disclosures and avoid misleading investors.</p>
<p>Effective oversight of these non-financial risks requires Boards to move beyond traditional reporting structures. It demands probing questions, access to diverse internal metrics (not just financial KPIs), and a willingness to challenge management on the qualitative aspects of their risk culture.</p>
<p>This proactive approach helps to pre-empt crises rather than merely react to them.</p>
<h2>Whistleblowers &amp; Anti-Bribery: Barometers of Culture</h2>
<p>Two specific recommendations within Principle 3 serve as vital indicators of an organisation&#8217;s cultural health: Recommendation 3.3 (whistleblower policy) and Recommendation 3.4 (anti-bribery and corruption policy). These are not merely administrative requirements; they are fundamental tools for fostering transparency, accountability, and ethical conduct.</p>
<p>A truly effective whistleblower policy does more than just provide a channel for reporting; it creates a safe environment where individuals feel empowered to speak up without fear of reprisal. For ASX boards, this means:</p>
<ul>
<li>Ensuring the policy is widely communicated and understood.</li>
<li>Verifying the independence and impartiality of the reporting mechanism.</li>
<li>Actively monitoring the number and nature of reports, and critically, the outcomes and remedial actions taken. A low volume of whistleblower reports in a large organisation might signal a culture of fear, not a lack of issues.</li>
</ul>
<p>Similarly, a robust anti-bribery and corruption policy, supported by consistent enforcement, signals zero tolerance for unethical practices. The Board&#8217;s oversight here ensures that the company&#8217;s values are not merely aspirational but are deeply ingrained in everyday operations, especially in complex international dealings. These policies act as a crucial feedback loop, giving the board direct insight into the integrity and efficacy of their ethical framework.</p>
<h2>Aligning Board Remuneration with Risk and Values</h2>
<p>The link between executive remuneration and corporate culture is undeniable, as highlighted in ASX Corporate Governance Principle 8 (Remunerate fairly and responsibly). Where incentive structures overly reward short-term financial gains without sufficient consideration for long-term value creation, ethical conduct, and risk management, they can inadvertently foster a culture of excessive risk-taking and compromise.</p>
<p>Boards must ensure that remuneration frameworks, particularly those for senior executives, explicitly integrate non-financial performance metrics. This includes:</p>
<ul>
<li>Safety metrics: Especially critical in high-risk industries.</li>
<li>Customer satisfaction scores: Reflecting a commitment to external stakeholders.</li>
<li>ESG (Environmental, Social, Governance) targets: Aligning with sustainability goals and community expectations.</li>
<li>Adherence to risk management frameworks and ethical conduct: Penalising breaches and rewarding responsible behaviour.</li>
</ul>
<p>By aligning incentives with the entity&#8217;s core values and its risk appetite, Boards can reinforce the desired culture. This sends a clear message from the top: sustainable success is built on ethical foundations, and individual rewards are intrinsically linked to collective, responsible performance.</p>
<p>This strategic alignment is paramount for securing and maintaining the &#8220;social license to operate,&#8221; ensuring long-term viability and stakeholder trust for ASX listed companies in an increasingly scrutinised global market.</p>
<h2>How We Can Help Your Board</h2>
<p>We understand the importance of good governance for Australian listed entities. Since 2001, we have helped listed entities strengthen their risk management frameworks to align with better practice and the Corporate Governance Principles and Recommendations (CGPR).</p>
<p>Our services are designed to help boards quickly translate the Principles into actionable compliance and disclosure practices, enabling listed entities to meet the higher standards expected.</p>
<p>For small to medium listed entities, we offer bespoke services that include:</p>
<ul>
<li><strong>Virtual Risk Officer</strong>: We offer expert guidance in establishing formal risk frameworks and conduct independent reviews to assess framework maturity. Additionally, we conduct risk workshops, risk culture assessments, and provide specialised services in areas like business continuity, crisis management, fraud control, modern slavery, and climate change risk.</li>
<li><strong>Virtual Chief Information Security Officer (vCISO)</strong>: We provide fractional or on-demand cyber security leadership to help you meet the evolving cyber risk landscape, without the cost of a full-time executive.</li>
<li><strong>Internal Audit Services</strong>: We provide outsourced or co-sourced internal audit functions, ensuring independent assurance on your controls and operational efficiency, which is vital for maintaining board and stakeholder confidence.</li>
</ul>
<p>Take Board Governance to the next level and <a href="https://inconsult.com.au/contact-us/" target="_blank" rel="noopener noreferrer">contact us</a> to discuss your needs.</p>
<div class='printomatic pom-default ' id='id17'  data-print_target='body'></div>The post <a href="https://inconsult.com.au/publication/the-evolving-duties-of-the-board/">The Evolving Duties of the Board</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The AI Governance Maze: Navigating AI Risks and Chaos</title>
		<link>https://inconsult.com.au/publication/the-ai-governance-maze-navigating-ai-risks-and-chaos/</link>
		
		<dc:creator><![CDATA[Tony Harb]]></dc:creator>
		<pubDate>Wed, 11 Jun 2025 04:37:24 +0000</pubDate>
				<guid isPermaLink="false">https://inconsult.com.au/?post_type=publication&#038;p=12576</guid>

					<description><![CDATA[<p>Five years ago (mid-2020), the AI landscape was primarily dominated by &#8220;Narrow AI&#8221; models performing specific tasks like image classification, recommendation systems, and basic natural language processing. While foundational large language models like GPT-3 were being introduced (GPT-3 was released in May 2020), their widespread public impact was not yet felt.  AI governance and enterprise [&#8230;]</p>
The post <a href="https://inconsult.com.au/publication/the-ai-governance-maze-navigating-ai-risks-and-chaos/">The AI Governance Maze: Navigating AI Risks and Chaos</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></description>
										<content:encoded><![CDATA[<p>Five years ago (mid-2020), the AI landscape was primarily dominated by &#8220;Narrow AI&#8221; models performing specific tasks like image classification, recommendation systems, and basic natural language processing. While foundational large language models like GPT-3 were being introduced (GPT-3 was released in May 2020), their widespread public impact was not yet felt. AI governance and enterprise adoption were still in their early stages, with around 47% of companies reporting some AI usage, often in pilot projects or specific functions like IT automation, quality control, and cybersecurity.</p>
<p>Today, there are tens of thousands of AI systems actively deployed worldwide, ranging from specialised Narrow AI models to large language models (LLMs) powering various applications. Model capabilities have advanced rapidly, becoming more powerful, efficient, and accessible, leading to a proliferation of specialised AI models tailored for diverse applications across almost every industry. New models are being developed, refined, and deployed constantly by researchers, companies, and open-source communities. Surveys indicate a higher rate of enterprise adoption (over 75% in some reports), as many organisations have integrated at least one AI-powered solution, leading to thousands of AI systems across various sectors.</p>
<p>But when you consider individual instances of AI (like virtual assistants on billions of smartphones or recommendation engines on millions of searches), the number of AI instances actively running is in the billions &#8211; a vast and intricate web where each instance represents a potential point of failure or vulnerability from a risk perspective.</p>
<p>Artificial intelligence (AI) offers immense opportunities for individuals and organisations. But with rapid AI advancements come AI risks and the need for effective AI risk management and robust AI governance.</p>
<h3>Top 5 AI Risks</h3>
<p>The rapid growth and adoption of AI present both immense opportunities and significant risks for organisations. If you had to only focus on the top 5 current and emerging risks, what would they be?</p>
<h4>1. Cybersecurity Vulnerabilities and Sophisticated Attacks</h4>
<p>While AI can enhance cybersecurity defences, it also introduces new attack vectors and enables more sophisticated threats. AI-powered phishing, deepfakes for social engineering, adversarial attacks that manipulate AI models, and the creation of advanced malware are becoming more prevalent.</p>
<p>A significant percentage of phishing emails show some AI involvement. One <a href="https://securitytoday.com/articles/2025/04/15/report-82-percent-of-phishing-emails-used-ai.aspx#:~:text=Report%3A%2082%20Percent%20of%20Phishing,Artificial%20Intelligence" target="_blank" rel="noopener">report</a> from April 2025 states that 82.6% of all phishing emails analysed exhibited some use of AI technology. Another report from February 2025 indicated that 67.4% of all phishing attacks in 2024 utilised some form of AI.</p>
<p>Consequently, organisations face increased risks of data breaches, system compromise, and reputational damage as cybercriminals leverage AI to bypass traditional security measures, highlighting an urgent need for proactive, AI-informed cybersecurity strategies rather than relying solely on reactive defences.</p>
<h4>2. Ethical, Bias, and Reputation Risks</h4>
<p>AI systems are trained on vast amounts of data, and if that data is biased, incomplete, or reflects societal prejudices, the AI will perpetuate and even amplify those biases. This can lead to discriminatory outcomes in areas like hiring, lending, or customer service, resulting in significant ethical concerns, legal challenges, and severe damage to an organisation&#8217;s brand and public trust. The &#8220;black box&#8221; nature of some AI models also makes it difficult to explain decisions, raising transparency and accountability issues.</p>
<p>Untamed AI bias can erode customer loyalty, trigger costly class-action lawsuits, and lead to regulatory sanctions, especially as governments globally focus more on algorithmic fairness. Furthermore, the inability to explain AI decisions can cripple internal investigations, external audits, and consumer trust, creating a profound crisis of accountability.</p>
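<p>To make the bias risk above concrete, a demographic parity check compares a model&#8217;s approval rates across groups. The sketch below is purely illustrative &#8211; the groups, decisions, and helper names are assumptions, not taken from any real system:</p>

```python
from collections import defaultdict

def selection_rates(decisions):
    """Approval rate per group from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Difference between the highest and lowest group approval rates."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan decisions: group labels and outcomes are made up.
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(decisions)  # A approved at 0.75, B at 0.25
gap = parity_gap(decisions)         # 0.5 -- a gap this large warrants review
```

<p>A large gap does not prove unlawful discrimination on its own, but it is exactly the kind of metric a board-level dashboard can track over time.</p>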
<h4>3. Regulatory and Compliance Complexity</h4>
<p>The global regulatory landscape for AI is still evolving, with different regions developing varying approaches.  At present (as at June 2025):</p>
<ul>
<li>The European Union has introduced the EU AI Act, a risk-based regulation effective August 2024, but is now discussing exemptions and voluntary measures amidst industry criticism and signs of regulatory fatigue.</li>
<li>Australia currently has no dedicated AI legislation, but the government has proposed mandatory guardrails for high-risk AI and published guidelines, though the path to comprehensive legislation remains uncertain given global shifts and a focus on deregulation.</li>
<li>The United States currently lacks dedicated federal AI legislation, relying on existing frameworks, and its federal government has adopted a deregulatory stance, even seeking a 10-year moratorium on state-level AI laws.</li>
<li>The United Kingdom favours a light-touch, &#8216;pro-innovation&#8217; approach without dedicated AI legislation, focusing instead on targeted regulation for high-risk AI models to avoid stifling investment.</li>
<li>The People&#8217;s Republic of China does not have unified AI regulation but governs specific use cases like deep synthesis and generative AI, aiming to be a world leader in AI by 2030.</li>
</ul>
<p>Organisations face increasing pressure to comply with emerging data privacy laws, algorithmic transparency requirements, and industry-specific mandates. Non-compliance can lead to hefty fines, lawsuits, and operational disruptions, requiring constant monitoring and adaptation of AI strategies and governance frameworks.</p>
<h4>4. Workforce Displacement and Skills Gaps</h4>
<p>AI-driven automation has the potential to displace human jobs, particularly in repetitive or data-intensive roles, and even some high-skill professions. While AI is expected to create new jobs, there&#8217;s a significant risk of a mismatch between the skills required for these new roles and the existing workforce capabilities. This can lead to unemployment, social unrest, and a struggle for organisations to find the necessary talent to develop, implement, and manage AI effectively.</p>
<h4>5. Operational Risks and Over-Reliance</h4>
<p>Over-reliance on AI without sufficient human oversight can lead to significant operational risks. Errors in AI decision-making (sometimes called &#8220;hallucinations&#8221; in generative AI), false positives or negatives, and unforeseen consequences from complex AI systems can disrupt operations, alienate customers, and lead to financial losses. There&#8217;s a risk of losing critical human intuition and decision-making capabilities if AI is allowed to operate without proper human-in-the-loop processes and continuous monitoring.</p>
<h3>Strategies to Enhance AI Risk Management</h3>
<p>To effectively manage the risks associated with the growth of AI, organisations need a proactive, holistic, and adaptive approach. Here&#8217;s a breakdown of key actions organisations should take or at least consider.</p>
<h4>1. Establish Robust AI Governance and Ethical Frameworks</h4>
<p>Set the foundations of good AI governance. Develop clear organisational values and principles for responsible AI use, encompassing fairness, transparency, accountability, human oversight, privacy, and security. Translate these into actionable policies and procedures that guide AI development, deployment, and operation.</p>
<p>Assign clear roles and responsibilities for AI governance, potentially including an AI ethics committee, Chief AI Officer, or cross-functional working groups. Ensure board-level oversight of AI initiatives and their associated risks.</p>
<p>Conduct thorough risk assessments for every AI application throughout its lifecycle (design, development, deployment, monitoring). Identify potential biases, security vulnerabilities, and ethical concerns, and implement robust mitigation strategies.</p>
<p>Implement continuous monitoring of AI system performance in real-world settings to detect bias drift, performance degradation, and unexpected outputs. Conduct regular internal and third-party audits to validate governance and security controls.</p>
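<p>The continuous monitoring described above can be partly automated. One widely used drift measure is the Population Stability Index (PSI); the bins, proportions, and 0.2 alert threshold in this sketch are illustrative assumptions rather than prescribed values:</p>

```python
import math

def psi(expected, actual):
    """Population Stability Index between two binned distributions.
    Inputs are bin proportions summing to 1; values above ~0.2 are a
    commonly used (but not universal) signal of material drift."""
    eps = 1e-6  # guard against empty bins
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

baseline = [0.25, 0.25, 0.25, 0.25]  # model score distribution at deployment
current = [0.10, 0.20, 0.30, 0.40]   # distribution observed later in production
drift = psi(baseline, current)       # ~0.23, above the 0.2 alert level
```

<p>Wiring a check like this into regular reporting turns &#8220;monitor for drift&#8221; from a policy statement into an auditable control.</p>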
<h4>2. Enhance Data Management, Privacy, and Cybersecurity</h4>
<p>Establish stringent data governance practices, ensuring the quality, integrity, and representativeness of training data to mitigate bias. Implement strong data anonymisation and de-identification techniques when handling sensitive information.</p>
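<p>As a minimal sketch of the de-identification techniques mentioned above, keyed hashing can replace a direct identifier with a stable pseudonym. The key and email address below are placeholders; a real deployment would hold the key in a managed secrets store:</p>

```python
import hashlib
import hmac

# Assumption: in practice the key comes from a secrets vault, not source code.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a stable keyed hash (HMAC-SHA256),
    so records stay linkable without storing the raw value."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

token = pseudonymise("jane.doe@example.com")  # hypothetical identifier
# The same input always yields the same 64-character token,
# so datasets can be joined without the raw email ever being kept.
```

<p>Keyed hashing is pseudonymisation, not full anonymisation: whoever holds the key can re-link records, so the key itself must be governed as sensitive data.</p>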
<p>Embed privacy-by-design principles into the AI system development process from the outset, ensuring personal data is handled ethically and in compliance with relevant privacy regulations (e.g., Australian Privacy Principles, GDPR).</p>
<p>Maintain robust cybersecurity measures. Treat AI systems as critical information assets requiring the highest level of cybersecurity. Implement end-to-end encryption, strict access controls, regular vulnerability testing, and threat intelligence specific to AI (e.g., adversarial attack detection). Involve cybersecurity teams from the earliest stages of AI development.</p>
<h4>3. Prioritise Workforce Adaptation and Skill Development</h4>
<p>Invest appropriately in comprehensive AI literacy training for all relevant staff, from technical teams to business leaders. Educate employees on ethical AI use, how to recognise potential biases, and how to effectively collaborate with AI systems.</p>
<p>Develop clear career pathways for roles likely to be transformed by AI. Implement reskilling and upskilling programs to equip employees with the new skills needed to work alongside or manage AI technologies, focusing on uniquely human capabilities like critical thinking, creativity, and empathy.</p>
<p>Foster an AI-friendly culture that encourages experimentation, adaptability, and open dialogue about AI&#8217;s role in the organisation, alleviating fears of job displacement.</p>
<h4>4. Ensure Regulatory Compliance and Transparency</h4>
<p>Monitor and stay updated on the rapidly evolving global and local AI regulatory landscape (e.g., potential Australian AI framework, EU AI Act). Proactively adapt internal policies and practices to align with new laws and industry standards.</p>
<p>Whenever possible, strive for explainable AI (XAI) models, documenting development processes, data sources, and decision-making logic. This transparency is crucial for accountability, trust, and demonstrating compliance to regulators and stakeholders.</p>
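<p>Explainability need not require complex tooling. As an illustrative sketch (the toy model, feature names, and records below are hypothetical), permutation importance measures how much a model&#8217;s predictions move when one input feature is shuffled:</p>

```python
import random

def toy_score(row):
    # Hypothetical scoring model: postcode is deliberately ignored.
    return 0.8 * row["income"] + 0.2 * row["age"]

def permutation_importance(model, rows, feature, trials=50, seed=0):
    """Mean absolute change in predictions when one feature is shuffled
    across rows. A larger value means the feature matters more."""
    rng = random.Random(seed)
    base = [model(r) for r in rows]
    total = 0.0
    for _ in range(trials):
        values = [r[feature] for r in rows]
        rng.shuffle(values)
        shuffled = [{**r, feature: v} for r, v in zip(rows, values)]
        total += sum(abs(model(s) - b) for s, b in zip(shuffled, base)) / len(rows)
    return total / trials

# Illustrative records only.
rows = [{"income": 1.0, "age": 30, "postcode": 2000},
        {"income": 2.0, "age": 40, "postcode": 3000},
        {"income": 3.0, "age": 50, "postcode": 4000}]
imp_income = permutation_importance(toy_score, rows, "income")      # > 0
imp_postcode = permutation_importance(toy_score, rows, "postcode")  # 0.0
```

<p>Because the toy model never reads postcode, shuffling it changes nothing &#8211; a simple, model-agnostic way to document which inputs actually drive a decision.</p>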
<p>Engage legal and ethical experts to review AI applications, contracts with third-party AI providers, and internal policies to ensure compliance and mitigate legal and reputational risks.</p>
<h4>5. Foster a Culture of Responsible Innovation</h4>
<p>Design AI systems with appropriate human oversight and intervention capabilities (Human-in-the-Loop), especially for critical decision-making processes. AI should augment, not replace, human judgment.</p>
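<p>A minimal sketch of that Human-in-the-Loop gate (the threshold value here is illustrative; a risk owner would calibrate it per use case): auto-apply only high-confidence model outputs and queue everything else for human review.</p>

```python
def route_decision(prediction: str, confidence: float,
                   threshold: float = 0.85) -> tuple:
    """Human-in-the-loop gate: the model's output is applied
    automatically only when its confidence clears the threshold;
    otherwise it is routed to a human reviewer."""
    if confidence >= threshold:
        return ("auto", prediction)
    return ("human_review", prediction)
```

<p>Raising the threshold shifts more decisions to humans; lowering it increases automation. That trade-off, not the code, is the governance decision.</p>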
<p>Engage diverse stakeholders, including employees, customers, partners, and even external experts, in the design and oversight of AI systems. This fosters inclusivity, garners different perspectives, and builds trust.</p>
<p>Start with low-risk pilot projects to test AI applications, gather feedback, and iterate quickly. Scale AI adoption responsibly, learning from each implementation.</p>
</div>
</div>
</div>
</div>
</div>
</div>
</article>
</div>
</div>
</div>
<h2>Take Outs</h2>
<p>The widespread adoption of AI, with tens of thousands of deployed systems and billions of individual instances, presents significant opportunities but also critical risks for organisations.</p>
<p>The top five AI risks are: increased cybersecurity vulnerability to sophisticated AI-powered attacks; ethical and reputational damage from inherent biases in AI models; complex regulatory and compliance challenges under rapidly evolving global AI legislation (which varies significantly by country, from the EU&#8217;s risk-based approach to the more deregulatory stance of the US and UK); potential workforce displacement and widening skills gaps; and operational risks stemming from over-reliance on AI without sufficient human oversight.</p>
<p>To mitigate these, organisations must establish robust AI governance and ethical frameworks, enhance data management and cybersecurity, prioritise workforce adaptation through training and reskilling, ensure continuous regulatory compliance and transparency (e.g., via Explainable AI), and foster a culture of responsible innovation with human oversight and stakeholder engagement.</p>
<h2>Can We Help You Improve AI Governance?</h2>
<p>Our <a href="https://inconsult.com.au/services/artificial-intelligence-risk-governance/" target="_blank" rel="noopener">AI Risk Governance</a> services are designed to help organisations navigate the complexities of AI: ensuring ethical practices, maintaining regulatory compliance, and managing AI risks at every stage of AI development and deployment.</p>
<h4>Future-Proof Your AI with Strong Governance</h4>
<p>We can help you design and build a clear, custom AI governance framework. This means engaging with stakeholders and setting up clear rules, responsibilities, and ethical guidelines for all your AI projects. By doing this, you&#8217;ll reduce unforeseen risks, ensure your AI aligns with your company values, and build a foundation that adapts to future regulations, keeping you ahead of the curve.</p>
<h4>Pinpoint and Fix Your AI Risks</h4>
<p>Our risk, cybersecurity and AI experts can conduct thorough risk assessments of your AI systems to uncover potential issues like hidden biases, cybersecurity weaknesses, privacy gaps, and operational flaws. We draft concrete strategies to fix these problems, helping you avoid costly mistakes, legal challenges, and damage to your reputation. You&#8217;ll gain peace of mind knowing your AI is robust and reliable.</p>
<h4>Secure Your AI Against Emerging Cyber Threats</h4>
<p>AI introduces new cybersecurity challenges. We can assess the specific security risks of your AI systems and the data they use, implementing robust defences against new threats like AI-powered attacks and data breaches. Our goal is to safeguard your critical assets and sensitive information, ensuring your AI operates in a secure environment and protecting your business from costly cyber incidents.</p>
</div>
</div>
The post <a href="https://inconsult.com.au/publication/the-ai-governance-maze-navigating-ai-risks-and-chaos/">The AI Governance Maze: Navigating AI Risks and Chaos</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
