<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>InConsult | InConsult</title>
	<atom:link href="https://inconsult.com.au/publication-category/inconsult/feed/" rel="self" type="application/rss+xml" />
	<link>https://inconsult.com.au</link>
	<description>Helping you confidently take risks</description>
	<lastBuildDate>Mon, 13 Apr 2026 02:08:24 +0000</lastBuildDate>
	<language>en-AU</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	

<image>
	<url>https://inconsult.com.au/wp-content/uploads/2021/06/cropped-favicon-3-32x32.jpg</url>
	<title>InConsult | InConsult</title>
	<link>https://inconsult.com.au</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>The AI Governance Maze: Navigating AI Risks and Chaos</title>
		<link>https://inconsult.com.au/publication/the-ai-governance-maze-navigating-ai-risks-and-chaos/</link>
		
		<dc:creator><![CDATA[Tony Harb]]></dc:creator>
		<pubDate>Wed, 11 Jun 2025 04:37:24 +0000</pubDate>
				<guid isPermaLink="false">https://inconsult.com.au/?post_type=publication&#038;p=12576</guid>

					<description><![CDATA[<p>Five years ago (mid-2020), the AI landscape was primarily dominated by &#8220;Narrow AI&#8221; models performing specific tasks like image classification, recommendation systems, and basic natural language processing. While foundational large language models like GPT-3 were being introduced (GPT-3 was released in May 2020), their widespread public impact was not yet felt.  AI governance and enterprise [&#8230;]</p>
The post <a href="https://inconsult.com.au/publication/the-ai-governance-maze-navigating-ai-risks-and-chaos/">The AI Governance Maze: Navigating AI Risks and Chaos</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></description>
					<content:encoded><![CDATA[
<p>Five years ago (mid-2020), the AI landscape was dominated by &#8220;Narrow AI&#8221; models performing specific tasks like image classification, recommendation systems, and basic natural language processing. While foundational large language models like GPT-3 were being introduced (GPT-3 was released in May 2020), their widespread public impact was not yet felt. AI governance and enterprise adoption were still in their early stages, with around 47% of companies reporting some AI usage, often in pilot projects or specific functions like IT automation, quality control, and cybersecurity.</p>
<p>Today, there are tens of thousands of AI systems actively deployed worldwide, ranging from specialised Narrow AI models to large language models (LLMs) powering various applications. Model capabilities have advanced rapidly, becoming more powerful, efficient, and accessible, leading to a proliferation of specialised AI models tailored for diverse applications across almost every industry. New models are constantly developed, refined, and deployed by researchers, companies, and open-source communities. Surveys indicate much higher enterprise adoption (over 75% in some reports), with many organisations having integrated at least one AI-powered solution.</p>
<p>But when you consider individual instances of AI (like virtual assistants on billions of smartphones or recommendation engines on millions of searches), the number of AI instances actively running is in the billions &#8211; a vast and intricate web where each instance represents a potential point of failure or vulnerability from a risk perspective.</p>
<p>Artificial intelligence (AI) offers immense opportunities for individuals and organisations. But with rapid AI advancements come AI risks and the need for effective AI risk management and robust AI governance.</p>
<h3>Top 5 AI Risks</h3>
<p>The rapid growth and adoption of AI present both immense opportunities and significant risks for organisations. If you had to focus on only the top five current and emerging risks, what would they be?</p>
<h4>1. Cybersecurity Vulnerabilities and Sophisticated Attacks</h4>
<p>While AI can enhance cybersecurity defences, it also introduces new attack vectors and enables more sophisticated threats. AI-powered phishing, deepfakes for social engineering, adversarial attacks that manipulate AI models, and the creation of advanced malware are becoming more prevalent.</p>
<p>A significant percentage of phishing emails now show some AI involvement. One <a href="https://securitytoday.com/articles/2025/04/15/report-82-percent-of-phishing-emails-used-ai.aspx#:~:text=Report%3A%2082%20Percent%20of%20Phishing,Artificial%20Intelligence" target="_blank" rel="noopener">report</a> from April 2025 states that 82.6% of all phishing emails analysed exhibited some use of AI technology; another, from February 2025, indicated that 67.4% of all phishing attacks in 2024 used some form of AI.</p>
<p>Consequently, organisations face increased risks of data breaches, system compromise, and reputational damage as cybercriminals leverage AI to bypass traditional security measures, highlighting an urgent need for proactive, AI-informed cybersecurity strategies rather than relying solely on reactive defences.</p>
<h4>2. Ethical, Bias, and Reputation Risks</h4>
<p>AI systems are trained on vast amounts of data, and if that data is biased, incomplete, or reflects societal prejudices, the AI will perpetuate and even amplify those biases. This can lead to discriminatory outcomes in areas like hiring, lending, or customer service, resulting in significant ethical concerns, legal challenges, and severe damage to an organisation&#8217;s brand and public trust. The &#8220;black box&#8221; nature of some AI models also makes it difficult to explain decisions, raising transparency and accountability issues.</p>
<p>Untamed AI bias can erode customer loyalty, trigger costly class-action lawsuits, and lead to regulatory sanctions, especially as governments globally focus more on algorithmic fairness. Furthermore, the inability to explain AI decisions can cripple internal investigations, external audits, and consumer trust, creating a profound crisis of accountability.</p>
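<p>One concrete way to surface the kind of biased outcome described above is to compare a model&#8217;s approval rates across groups. The sketch below is a minimal illustration in Python; the decision data is invented, and the 0.8 cut-off is a commonly cited screening heuristic, not a legal test:</p>

```python
# Hypothetical illustration of a group-fairness check on model decisions.
# Data and the 0.8 threshold are invented for the example.

def selection_rate(decisions):
    """Fraction of applicants the model approved (1 = approved, 0 = rejected)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group selection rate to the higher one.
    Values well below 1.0 suggest one group is approved far less often."""
    lo, hi = sorted([selection_rate(group_a), selection_rate(group_b)])
    return lo / hi if hi else 1.0

# Invented model decisions for two applicant groups
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approved
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # 37.5% approved

ratio = disparate_impact_ratio(group_a, group_b)
flagged = ratio < 0.8   # heuristic screening threshold, not a legal standard
```

<p>In practice this check would run over real decision logs as part of a bias and fairness assessment, alongside richer fairness metrics.</p>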
<h4>3. Regulatory and Compliance Complexity</h4>
<p>The global regulatory landscape for AI is still evolving, with different regions developing varying approaches.  At present (as at June 2025):</p>
<ul>
<li>The European Union has introduced the EU AI Act, a risk-based regulation effective August 2024, but is now discussing exemptions and voluntary measures amidst industry criticism and signs of regulatory fatigue.</li>
<li>Australia currently has no dedicated AI legislation, but the government has proposed mandatory guardrails for high-risk AI and published guidelines, though the path to comprehensive legislation remains uncertain given global shifts and a focus on deregulation.</li>
<li>The United States currently lacks dedicated federal AI legislation, relying on existing frameworks, and its federal government has adopted a deregulatory stance, even seeking a 10-year moratorium on state-level AI laws.</li>
<li>The United Kingdom favours a light-touch, &#8216;pro-innovation&#8217; approach without dedicated AI legislation, focusing instead on targeted regulation for high-risk AI models to avoid stifling investment.</li>
<li>The People&#8217;s Republic of China does not have unified AI regulation but governs specific use cases like deep synthesis and generative AI, aiming to be a world leader in AI by 2030.</li>
</ul>
<p>Organisations face increasing pressure to comply with emerging data privacy laws, algorithmic transparency requirements, and industry-specific mandates. Non-compliance can lead to hefty fines, lawsuits, and operational disruptions, requiring constant monitoring and adaptation of AI strategies and governance frameworks.</p>
<h4>4. Workforce Displacement and Skills Gaps</h4>
<p>AI-driven automation has the potential to displace human jobs, particularly in repetitive or data-intensive roles, and even some high-skill professions. While AI is expected to create new jobs, there&#8217;s a significant risk of a mismatch between the skills required for these new roles and the existing workforce capabilities. This can lead to unemployment, social unrest, and a struggle for organisations to find the necessary talent to develop, implement, and manage AI effectively.</p>
<h4>5. Operational Risks and Over-Reliance</h4>
<p>Over-reliance on AI without sufficient human oversight can lead to significant operational risks. Errors in AI decision-making (sometimes called &#8220;hallucinations&#8221; in generative AI), false positives or negatives, and unforeseen consequences from complex AI systems can disrupt operations, alienate customers, and lead to financial losses. There&#8217;s a risk of losing critical human intuition and decision-making capabilities if AI is allowed to operate without proper human in the loop processes and continuous monitoring.</p>
<h3>Strategies to Enhance AI Risk Management</h3>
<p>To effectively manage the risks associated with the growth of AI, organisations need a proactive, holistic, and adaptive approach. Here&#8217;s a breakdown of key actions organisations should take or at least consider.</p>
<h4>1. Establish Robust AI Governance and Ethical Frameworks</h4>
<p>Set the foundations of good AI governance. Develop clear organisational principles for responsible AI use, encompassing fairness, transparency, accountability, human oversight, privacy, and security. Translate these into actionable policies and procedures that guide AI development, deployment, and operation.</p>
<p>Assign clear roles and responsibilities for AI governance, potentially including an AI ethics committee, Chief AI Officer, or cross-functional working groups. Ensure board-level oversight of AI initiatives and their associated risks.</p>
<p>Conduct thorough risk assessments for every AI application throughout its lifecycle (design, development, deployment, monitoring). Identify potential biases, security vulnerabilities, and ethical concerns, and implement robust mitigation strategies.</p>
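<p>A lifecycle risk assessment of this kind is commonly recorded in a risk register scored on a likelihood-by-consequence matrix. The sketch below assumes an illustrative 5&#215;5 scale and risk appetite threshold; the risk names and scores are hypothetical examples, not recommendations:</p>

```python
# Sketch of a lifecycle risk assessment record for an AI application.
# The 5x5 scales, risk names, scores, and appetite threshold are illustrative.

def assess(risk, likelihood, consequence, appetite=8):
    """Score a risk on a 5x5 likelihood x consequence matrix (1-5 each) and
    flag anything above the stated risk appetite for treatment."""
    score = likelihood * consequence
    return {"risk": risk, "score": score, "treat": score > appetite}

register = [
    assess("Training-data bias", likelihood=4, consequence=4),
    assess("Adversarial prompt injection", likelihood=3, consequence=4),
    assess("Model drift in production", likelihood=3, consequence=2),
]

needs_treatment = [r["risk"] for r in register if r["treat"]]
```

<p>Keeping the register as structured data like this makes it straightforward to re-score risks as the system moves through design, development, deployment, and monitoring.</p>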
<p>Implement continuous monitoring of AI system performance in real-world settings to detect bias drift, performance degradation, and unexpected outputs. Conduct regular internal and third-party audits to validate governance and security controls.</p>
<h4>2. Enhance Data Management, Privacy, and Cybersecurity</h4>
<p>Establish stringent data governance practices, ensuring the quality, integrity, and representativeness of training data to mitigate bias. Implement strong data anonymization and de-identification techniques when handling sensitive information.</p>
<p>Embed privacy-by-design principles into the AI system development process from the outset, ensuring personal data is handled ethically and in compliance with relevant privacy regulations (e.g., Australian Privacy Principles, GDPR).</p>
<p>Maintain robust cybersecurity measures. Treat AI systems as critical information assets requiring the highest level of cybersecurity. Implement end-to-end encryption, strict access controls, regular vulnerability testing, and threat intelligence specific to AI (e.g., adversarial attack detection). Involve cybersecurity teams from the earliest stages of AI development.</p>
<h4>3. Prioritise Workforce Adaptation and Skill Development</h4>
<p>Invest appropriately in comprehensive AI literacy training for all relevant staff, from technical teams to business leaders. Educate employees on ethical AI use, how to recognise potential biases, and how to effectively collaborate with AI systems.</p>
<p>Develop clear career pathways for roles likely to be transformed by AI. Implement reskilling and upskilling programs to equip employees with the new skills needed to work alongside or manage AI technologies, focusing on uniquely human capabilities like critical thinking, creativity, and empathy.</p>
<p>Foster an AI-friendly culture that encourages experimentation, adaptability, and open dialogue about AI&#8217;s role in the organisation, alleviating fears of job displacement.</p>
<h4>4. Ensure Regulatory Compliance and Transparency</h4>
<p>Monitor and stay updated on the rapidly evolving global and local AI regulatory landscape (e.g., potential Australian AI framework, EU AI Act). Proactively adapt internal policies and practices to align with new laws and industry standards.</p>
<p>Whenever possible, strive for explainable AI (XAI) models, documenting development processes, data sources, and decision-making logic. This transparency is crucial for accountability, trust, and demonstrating compliance to regulators and stakeholders.</p>
<p>Engage legal and ethical experts to review AI applications, contracts with third-party AI providers, and internal policies to ensure compliance and mitigate legal and reputational risks.</p>
<h4>5. Foster a Culture of Responsible Innovation</h4>
<p>Design AI systems with appropriate human oversight and intervention capabilities (Human-in-the-Loop), especially for critical decision-making processes. AI should augment, not replace, human judgment.</p>
<p>Engage diverse stakeholders, including employees, customers, partners, and even external experts, in the design and oversight of AI systems. This fosters inclusivity, garners different perspectives, and builds trust.</p>
<p>Start with low-risk pilot projects to test AI applications, gather feedback, and iterate quickly. Scale AI adoption responsibly, learning from each implementation.</p>
<h2>Take Outs</h2>
<p>The widespread adoption of AI, with tens of thousands of deployed systems and billions of individual instances, presents significant opportunities but also critical risks for organisations.</p>
<p>The top five AI risks include increased cybersecurity vulnerabilities from sophisticated AI-powered attacks, ethical and reputational damage due to inherent biases in AI models, complex regulatory and compliance challenges given evolving global AI legislation (which varies significantly by country, from the EU&#8217;s risk-based approach to the US and UK&#8217;s deregulatory stance), potential workforce displacement and widening skills gaps, and operational risks stemming from over-reliance on AI without sufficient human oversight.</p>
<p>To mitigate these, organisations must establish robust AI governance and ethical frameworks, enhance data management and cybersecurity, prioritise workforce adaptation through training and reskilling, ensure continuous regulatory compliance and transparency (e.g., via explainable AI), and foster a culture of responsible innovation with human oversight and stakeholder engagement.</p>
<h2>Can We Help You Improve AI Governance?</h2>
<p>Our <a href="https://inconsult.com.au/services/artificial-intelligence-risk-governance/" target="_blank" rel="noopener">AI Risk Governance</a> services are designed to help organisations navigate the complexities of AI, ensuring ethical practices, maintaining regulatory compliance, and managing AI risks at every stage of AI development and deployment.</p>
<h4>Future-Proof Your AI with Strong Governance</h4>
<p>We can help you design and build a clear, custom AI governance framework. This means engaging with stakeholders, setting up clear rules, responsibilities, and ethical guidelines for all your AI projects. By doing this, you&#8217;ll reduce unforeseen risks, ensure your AI aligns with your company values, and build a foundation that adapts to future regulations, keeping you ahead of the curve.</p>
<h4>Pinpoint and Fix Your AI Risks</h4>
<p>Our risk, cybersecurity and AI experts can conduct thorough risk assessments of your AI systems to uncover potential issues like hidden biases, cybersecurity weaknesses, privacy gaps, and operational flaws. We draft concrete strategies to fix these problems, helping you avoid costly mistakes, legal challenges, and damage to your reputation. You&#8217;ll gain peace of mind knowing your AI is robust and reliable.</p>
<h4>Secure Your AI Against Emerging Cyber Threats</h4>
<p>AI introduces new cybersecurity challenges. We can assess the specific security risks of your AI systems and the data they use, implementing robust defences against new threats like AI-powered attacks and data breaches. Our goal is to safeguard your critical assets and sensitive information, ensuring your AI operates in a secure environment and protecting your business from costly cyber incidents.</p>
</div>
The post <a href="https://inconsult.com.au/publication/the-ai-governance-maze-navigating-ai-risks-and-chaos/">The AI Governance Maze: Navigating AI Risks and Chaos</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Essential AI Governance Documents to Build Trust in AI</title>
		<link>https://inconsult.com.au/publication/essential-ai-governance-documents-to-build-trust-in-ai/</link>
		
		<dc:creator><![CDATA[Tony Harb]]></dc:creator>
		<pubDate>Tue, 29 Oct 2024 06:48:11 +0000</pubDate>
				<guid isPermaLink="false">https://inconsult.com.au/?post_type=publication&#038;p=12241</guid>

					<description><![CDATA[<p>As artificial intelligence (AI) becomes embedded in the operations of many organisations, effectively managing the associated risks is essential. Successful AI governance relies on a robust foundation of policies, plans, and documentation that address technical, operational, legal, and ethical dimensions. Our team of cybersecurity, risk and AI specialists explore the foundational policies, procedures and reports [&#8230;]</p>
The post <a href="https://inconsult.com.au/publication/essential-ai-governance-documents-to-build-trust-in-ai/">Essential AI Governance Documents to Build Trust in AI</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></description>
										<content:encoded><![CDATA[
<p>As artificial intelligence (AI) becomes embedded in the operations of many organisations, effectively managing the associated risks is essential. Successful AI governance relies on a robust foundation of policies, plans, and documentation that address technical, operational, legal, and ethical dimensions.</p>
<p>Our team of cybersecurity, risk and AI specialists explore the foundational policies, procedures and reports required for good AI governance. These documents will also form the basis of controls that aim to <a href="https://inconsult.com.au/publication/mastering-the-machine-navigating-the-top-10-ai-risks/" target="_blank" rel="noopener">mitigate a wide range of AI risks</a>.</p>
<h3>What is AI Governance?</h3>
<p>AI governance refers to the framework of policies and processes that guide the responsible development, deployment, and management of artificial intelligence systems. AI governance ensures that AI technologies operate ethically, transparently, and in alignment with legal, regulatory, and organisational objectives.</p>
<h3>Foundations of an AI Governance Framework</h3>
<p>Collectively, the components of an AI governance framework should work coherently together to ensure the responsible, ethical, and effective use of artificial intelligence.</p>
<p><img fetchpriority="high" decoding="async" class=" wp-image-12313 aligncenter" src="https://inconsult.com.au/wp-content/uploads/2024/10/AI-Governance-Foundations-300x142.jpg" alt="AI Governance Foundations" width="615" height="291" srcset="https://inconsult.com.au/wp-content/uploads/2024/10/AI-Governance-Foundations-300x142.jpg 300w, https://inconsult.com.au/wp-content/uploads/2024/10/AI-Governance-Foundations-1224x581.jpg 1224w, https://inconsult.com.au/wp-content/uploads/2024/10/AI-Governance-Foundations-768x364.jpg 768w, https://inconsult.com.au/wp-content/uploads/2024/10/AI-Governance-Foundations-1536x729.jpg 1536w, https://inconsult.com.au/wp-content/uploads/2024/10/AI-Governance-Foundations-2048x972.jpg 2048w" sizes="(max-width: 615px) 100vw, 615px" /></p>
<p>Here is our list of the top 10 components required for effective AI governance.</p>
<h4><strong>1. Risk Management</strong></h4>
<p>Risk management documents are foundational to good governance. They are crucial for managing AI risks as they establish structured processes to identify, assess, mitigate, and monitor potential threats throughout the AI system&#8217;s lifecycle.</p>
<p>Risk framework documents support AI governance by formalising the processes needed to mitigate risks, align with organisational goals, and ensure that AI systems operate safely and ethically. Alignment to standards such as ISO 31000 is good practice. Conformance to the NIST AI Risk Management Framework, including its Generative AI Profile (NIST AI 600-1), is also essential, as these guidelines specifically address AI-related risks like bias, security vulnerabilities, and transparency.</p>
<p>The risk management framework typically outlines the processes for evaluating risks, ensuring that AI systems are developed, deployed, and monitored within established boundaries i.e. risk appetite and tolerance. This creates a solid foundation for AI governance by ensuring transparency and accountability.</p>
<p>By clarifying how much risk the organisation is willing to accept, it ensures that AI capabilities align with strategic goals without exceeding acceptable limits.</p>
<p>Importantly, the risk management framework includes provisions for continuous monitoring of risks, including AI performance, detecting potential risks such as model drift, security vulnerabilities, or changes in data quality. This proactive approach helps reduce AI risks over time.</p>
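<p>Model drift monitoring of the kind described above is frequently implemented with a statistic such as the Population Stability Index (PSI), which compares the live distribution of a model input or score against its training-time baseline. A minimal sketch, with invented bucket shares and the conventional (but not mandatory) 0.1/0.2 reading of PSI values:</p>

```python
import math

# Illustrative Population Stability Index (PSI) check for model-input drift.
# Bucket shares are invented; the 0.1 / 0.2 cut-offs are common conventions.

def psi(expected, actual, eps=1e-6):
    """PSI = sum over buckets of (actual - expected) * ln(actual / expected)."""
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)   # guard against empty buckets
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]   # training-time bucket shares
stable   = [0.24, 0.26, 0.25, 0.25]   # live traffic, little change
shifted  = [0.10, 0.15, 0.30, 0.45]   # live traffic, clear drift

drift_alert = psi(baseline, shifted) > 0.2   # often read as "investigate"
```

<p>A scheduled job computing this over each key feature, with alerts routed to the risk team, is one simple way to operationalise continuous monitoring.</p>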
<h4><strong>2. AI Ethical Guidelines</strong></h4>
<p>Ethics guidelines, which may vary by country, industry, or organisation, outline standards for fair, transparent, and accountable AI usage. These guidelines address concerns such as bias, privacy, discrimination, and decision-making transparency. These guidelines provide a moral and ethical framework for AI development, helping organisations avoid unethical practices such as biased algorithms or opaque decision-making. By adhering to these guidelines, organisations foster trust among stakeholders, including users, regulators, and the public.</p>
<p>AI ethics guidelines are often spread across multiple documents within an organisation, as different aspects of ethics intersect with legal, operational, technical, and governance requirements.  Examples include:</p>
<ul>
<li>AI Ethics Charter or Social Impact Guidelines</li>
<li>Data Governance Policy</li>
<li>Bias and Fairness Assessment Framework</li>
<li>Cybersecurity Policies</li>
<li>Risk Management Policies</li>
<li>AI Incident Response Plan/ Playbook</li>
</ul>
<h4><strong>3. Information Technology Governance </strong></h4>
<p>Good AI governance sits within the organisation&#8217;s current IT governance framework.</p>
<p>Compliance with <a href="https://www.iso.org/standard/81684.html" target="_blank" rel="noopener">ISO/IEC 38500 – Governance of IT</a> ensures robust oversight of AI deployment within the organisation. It emphasises that AI systems are deployed effectively, transparently, and with clear accountability, minimising associated risks. Strong governance under this standard also boosts stakeholder confidence by demonstrating the organisation’s commitment to responsible management and control of AI systems.</p>
<p>Algorithmic Impact Assessments (AIA) and Data Protection Impact Assessments (DPIA) evaluate the potential social, legal, and ethical impacts of AI systems. AIAs focus on the broader societal impacts, while DPIAs are specifically concerned with privacy and data protection.</p>
<p>Responsibility matrices assign clear roles and responsibilities to individuals or teams involved in the AI lifecycle (development, deployment, monitoring). These documents clarify who is accountable for what, ensuring that all aspects of the AI system are properly managed. For AI governance, a RACI matrix (Responsible, Accountable, Consulted, Informed) is a popular framework to ensure that everyone involved in the AI lifecycle understands their role.</p>
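<p>A RACI matrix can be captured as simple structured data, which also makes it easy to check automatically that every lifecycle activity has exactly one accountable owner. The roles and activities below are hypothetical examples:</p>

```python
# Sketch of a RACI matrix for the AI lifecycle as plain structured data,
# plus a check that each activity has exactly one Accountable owner.
# Roles and activities are hypothetical examples.

raci = {
    "Model development":  {"Data Science": "R", "Chief AI Officer": "A",
                           "Legal": "C", "Board": "I"},
    "Deployment":         {"IT Ops": "R", "Chief AI Officer": "A",
                           "Cybersecurity": "C", "Business Units": "I"},
    "Ongoing monitoring": {"Risk Team": "R", "Chief Risk Officer": "A",
                           "Data Science": "C", "Board": "I"},
}

def accountability_gaps(matrix):
    """Return activities that do not have exactly one 'A' assigned."""
    return [activity for activity, roles in matrix.items()
            if list(roles.values()).count("A") != 1]

gaps = accountability_gaps(raci)   # empty list: every activity has one owner
```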
<h4><strong>4. Stakeholder Engagement</strong></h4>
<p>Stakeholder analysis documentation identifies key internal and external stakeholders who interact with or are affected by the AI system. This includes employees, customers, regulators, and third parties.</p>
<p>Stakeholder communication plans detail how information about the AI system’s development, risks, and performance will be shared with stakeholders, including users, customers, employees, and regulators.</p>
<h4><strong>5. AI System Design Specifications</strong></h4>
<p>Collectively, AI system design documents play a crucial role in fostering transparency, accountability, and continuous improvement in AI projects. They ensure that the technical foundations of AI systems are well understood, validated, and monitored, which is essential for ethical and effective AI deployment.</p>
<p>Technical system design specifications provide a comprehensive overview of the AI model’s architecture, including details on the algorithms employed, the data processing pipeline, and the overall system design. A well-documented technical design serves as a blueprint for developing and implementing AI systems. It ensures that all stakeholders, including developers, data scientists, and project managers, have a clear understanding of how the system is constructed.</p>
<p>Model training and validation reports are critical for ensuring the integrity and reliability of AI systems. These reports detail the training process of the AI model, including the datasets used, pre-processing steps, training algorithms, validation methods, and the results of various testing phases. They document how the model was developed, the rationale behind the choices made during training, and any iterations or adjustments performed to improve model performance.</p>
<h4><strong>6. AI Operational Policies</strong></h4>
<p>Operational policies ensure that AI systems remain functional, understandable, and manageable throughout their lifecycle. They strengthen governance by providing the necessary controls, training, and documentation to maintain the AI&#8217;s reliability, transparency, and user competence.</p>
<p>AI system maintenance and update policies and guidelines outline the procedures and guidelines for continuously monitoring, maintaining, and updating AI systems to ensure their performance remains optimal over time. This includes routine system checks, retraining of AI models with new or updated data, patching for security vulnerabilities, and addressing algorithmic drift.</p>
<p>Model explainability and transparency documentation provide detailed explanations of how the AI model operates, including how decisions are made, what data influences the outputs, and any factors that could affect the system&#8217;s decisions. They often include the use of explainability tools like LIME (Local Interpretable Model-agnostic Explanations) or SHAP (SHapley Additive exPlanations), which help interpret model predictions.</p>
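<p>The core intuition behind tools like LIME and SHAP &#8211; perturb an input and observe how the prediction moves &#8211; can be illustrated in a few lines without either library. The toy model, weights, and data below are invented, and this is a teaching sketch, not SHAP itself:</p>

```python
# Self-contained illustration of the intuition behind explainability tools:
# replace one feature at a time with its column mean and measure how much the
# prediction moves. The toy model and data are invented; this is not SHAP.

def model(features):
    # Toy linear scorer: weights feature 0 heavily, ignores feature 2.
    weights = [0.8, 0.2, 0.0]
    return sum(w * x for w, x in zip(weights, features))

def ablation_importance(predict, rows, n_features):
    """Average absolute change in prediction when one feature is replaced
    by its column mean (a deterministic stand-in for permutation tests)."""
    importances = []
    for j in range(n_features):
        mean_j = sum(row[j] for row in rows) / len(rows)
        deltas = [abs(predict(row) - predict(row[:j] + [mean_j] + row[j + 1:]))
                  for row in rows]
        importances.append(sum(deltas) / len(deltas))
    return importances

rows = [[1.0, 5.0, 3.0], [4.0, 1.0, 2.0], [2.0, 2.0, 9.0], [5.0, 4.0, 1.0]]
importance = ablation_importance(model, rows, 3)
# Feature 0 dominates; feature 2 (zero weight) contributes nothing.
```

<p>Real explainability documentation would record outputs from the production tools themselves, but the same principle applies: show which inputs actually drive the decision.</p>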
<p>User training documents provide guidelines and training materials to internal users, such as employees, operators, or administrators, on how to interact with and operate AI systems. This training might include understanding how to input data, how to interpret AI-generated outputs, how to flag issues or anomalies, and how to ensure that human oversight is maintained.</p>
<h4><strong>7. Data Management </strong></h4>
<p>Data management documents play a crucial role in establishing a solid foundation for responsible AI development. By outlining procedures for sourcing, quality assurance, and privacy compliance, these documents empower organisations to harness high-quality data effectively.</p>
<p>Data sourcing policies outline the origins of the data used for training and operating AI systems. These policies specify whether the data is sourced internally from company databases, externally from public datasets, or from third-party providers. They may include criteria for selecting data sources, such as relevance, reliability, and legal compliance.</p>
<p>Data quality and governance guidelines establish standards for ensuring that data used in AI systems is accurate, complete, consistent, and representative of the intended population. This includes defining processes for data validation, cleaning, and monitoring to identify and rectify issues such as duplicates, inaccuracies, or biases.</p>
<p>Data privacy policies detail how organisations handle and protect personally identifiable information (PII) in compliance with regulations such as the General Data Protection Regulation (GDPR) and/or other relevant data protection laws. These policies outline the procedures for collecting, storing, processing, and sharing PII, as well as the rights of individuals regarding their data.</p>
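<p>One safeguard such policies commonly mandate, pseudonymisation, can be sketched as a keyed hash over a direct identifier, so records stay linkable for analytics without exposing the raw PII. The key, field names, and values below are illustrative only; in practice the key would sit in a secrets manager and be rotated under the policy&#8217;s key-management rules.</p>

```python
import hmac
import hashlib

# Illustrative only: in production this key comes from a secrets manager.
SECRET_KEY = b"example-key-kept-in-a-vault"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Unlike a plain hash, an attacker without the key cannot rebuild the
    mapping by hashing common names or email addresses themselves.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "purchase": 42.50}
safe_record = {**record, "email": pseudonymise(record["email"])}
```

<p>The same input always maps to the same token, so joins across datasets still work, while reversing the mapping requires the key.</p>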
<h4><strong>8. Cybersecurity </strong></h4>
<p>AI systems are exposed to adversarial attacks, data breaches, and other vulnerabilities. Cybersecurity documents are therefore essential components of a comprehensive framework designed to safeguard AI systems from these threats.</p>
<p>Cybersecurity policies outline the organisation&#8217;s approach to securing its AI systems against attacks, adversarial inputs, and data breaches. They establish guidelines for identifying and mitigating potential risks, including the implementation of security measures such as firewalls, encryption, and access controls. Additionally, these policies emphasise the importance of regular security audits and the need for continuous monitoring to detect and respond to potential threats proactively. By setting clear expectations for cybersecurity practices, organisations can create a culture of security awareness among employees, reducing the likelihood of human errors that could lead to security breaches.</p>
<p>Vulnerability assessment and penetration testing are crucial for identifying potential security gaps within the AI system.</p>
<p>Vulnerability assessments systematically analyse the AI system for weaknesses that could be exploited by attackers, while penetration testing simulates real-world attack scenarios to evaluate the system&#8217;s defences. The findings from these reports help organisations prioritise security improvements and allocate resources effectively. By addressing identified vulnerabilities, organisations can strengthen their AI systems against adversarial attacks.</p>
<p>Preparedness is key to managing incidents involving AI systems. Incident response plans and playbooks outline the steps to take in the event of a security breach, system failure, or regulatory violation. These plans include defined roles and responsibilities for team members, procedures for containing and mitigating the incident, and guidelines for communicating with stakeholders, including affected users and regulatory bodies. By having a structured response protocol in place, organisations can minimise the impact of an incident, ensure a swift recovery, and maintain trust with stakeholders.</p>
<p>Regularly testing, reviewing and updating these plans ensures they remain effective and relevant as the threat landscape evolves.</p>
<p>Incident and breach reports document any incidents related to AI systems, including data breaches or algorithmic failures.  They provide a detailed analysis of the causes, impacts, and corrective actions taken in the event of an incident.</p>
</div>
<h4><strong>9. Performance Monitoring </strong></h4>
<p>Monitoring plans outline the procedures for continuously tracking the performance of AI systems. These plans specify key performance indicators (KPIs) and thresholds for detecting deviations from expected outcomes.</p>
<p>Performance monitoring reports help track the effectiveness and efficiency of AI systems over time.  They include metrics such as accuracy, reliability, and user satisfaction. System performance metrics quantify how effectively the AI model operates, providing insights into its accuracy, precision, recall, F1 score, and other relevant performance indicators. These metrics are typically generated during testing phases and may include assessments of the model’s performance in real-world scenarios, such as its speed, responsiveness, and reliability.</p>
<p>By implementing ongoing monitoring, organisations can promptly flag potential issues such as bias, model drift, or security vulnerabilities. For instance, if an AI model used for hiring begins to favour a particular demographic group, a monitoring plan can trigger an immediate investigation. Regular assessments not only help in maintaining the accuracy and reliability of AI systems but also ensure that any emerging risks are addressed in a timely manner.</p>
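<p>The headline metrics such reports track can be computed directly from prediction counts. A minimal sketch for a binary classifier follows; the labels and KPI thresholds are made up for illustration:</p>

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall) if precision + recall else 0.0,
    }

def breached_kpis(metrics, thresholds):
    """Names of any KPIs that have fallen below their agreed floor."""
    return [name for name, floor in thresholds.items() if metrics[name] < floor]

# Illustrative labels and thresholds.
metrics = classification_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
alerts = breached_kpis(metrics, {"accuracy": 0.90, "recall": 0.60})
```

<p>Wiring checks like <code>breached_kpis</code> into a scheduled monitoring job is what turns the static metrics in a report into the early-warning triggers the monitoring plan describes.</p>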
<h4><strong>10.  Audit Reports</strong></h4>
</div>
<p>Conducting regular audits is essential for upholding ethical standards and meeting regulatory requirements, especially in industries where decisions can greatly affect individuals&#8217; lives.  Audits help identify weaknesses and areas for improvement.</p>
<p>Audit and monitoring reports for AI risk management vary based on the audit&#8217;s scope and the specific risks being assessed. A range of audits is necessary to provide organisations with adequate assurance. Audits, conducted by internal teams or third-party auditors, evaluate the effectiveness of the AI risk management framework, internal controls, and governance practices. Examples of AI audits include:</p>
<ul>
<li><strong>AI Governance Audits:</strong> To assess the effectiveness of governance frameworks and policies related to AI management and oversight.</li>
<li><strong>Compliance Audits</strong>: To assess adherence to legal, regulatory, and ethical standards related to AI usage.</li>
<li><strong>Data Quality Audits</strong>: To evaluate the quality, integrity, and relevance of data used in AI models, identifying issues like bias or outdated information.</li>
<li><strong>Model Validation and Evaluation Audits</strong>: To assess the performance of AI models against predefined criteria.</li>
<li><strong>Performance Monitoring Audits</strong>: To review and report on the ongoing effectiveness and efficiency of AI systems over time.</li>
<li><strong>Ethical Impact Assessment Audits</strong>: To analyse the ethical implications and societal impacts of AI systems.</li>
</ul>
<p>Each type of audit plays a crucial role in maintaining effective AI risk management practices, ensuring that organisations can identify, assess, and mitigate risks associated with AI technologies.</p>
<h2 class="flex-col gap-1 md:gap-3">Take outs</h2>
<div class="flex-col gap-1 md:gap-3">In summary, robust AI governance requires a comprehensive framework of policies, processes, reports and documentation across risk management, ethics, operations, data management, cybersecurity, and audits. These components not only help mitigate risks but also foster transparency, accountability, and compliance. By proactively implementing these controls, organisations can build trust with stakeholders, ensure the ethical use of AI, and maintain resilient, high-performing systems in an ever-evolving technological landscape.</div>
<h2 class="flex-col gap-1 md:gap-3">Can we help improve AI Governance?</h2>
<p><span style="font-size: 16px;">Our </span><a style="font-size: 16px;" href="https://inconsult.com.au/services/artificial-intelligence-risk-governance/">AI Risk Governance</a><span style="font-size: 16px;"> services are designed to help organisations navigate the complexities of AI, ensure ethical practices, maintain regulatory compliance, and manage AI risks at every stage of AI development and deployment.</span></p>
</div>
</div>
</article>
</div>
</div>
</div>
</div>
</div>
<ul>
<li><strong>AI Governance Framework Design</strong>: By designing a comprehensive framework, we help align AI practices with regulatory standards and ethical principles, empowering organisations to innovate responsibly and sustainably.</li>
<li><strong>AI Governance Framework Uplift</strong>: We help ensure that policies, procedures, and safeguards are in place and socialised with the board, management and key staff to promote transparency, fairness, and accountability.</li>
<li><strong>AI Risk Assessment</strong>: We will deliver a comprehensive AI risk assessment that is aligned to your risk management framework, considers your risk appetite and identifies risks that require enhanced risk treatments.</li>
<li><strong>AI Governance Audits</strong>: A structured review to evaluate how well an organisation’s AI systems and practices align with its AI governance framework, regulatory requirements, and better practice ethical guidelines such as the NIST-AI-600-1 Artificial Intelligence Risk Management Framework.</li>
</ul>
<p>Explore, innovate and take risks. <a href="https://inconsult.com.au/contact-us/">Contact us</a> to discuss how we can help strengthen your AI risk governance.</p>
The post <a href="https://inconsult.com.au/publication/essential-ai-governance-documents-to-build-trust-in-ai/">Essential AI Governance Documents to Build Trust in AI</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Mastering the Machine: Navigating the Top 10 AI Risks</title>
		<link>https://inconsult.com.au/publication/mastering-the-machine-navigating-the-top-10-ai-risks/</link>
		
		<dc:creator><![CDATA[Tony Harb]]></dc:creator>
		<pubDate>Mon, 28 Oct 2024 05:37:58 +0000</pubDate>
				<guid isPermaLink="false">https://inconsult.com.au/?post_type=publication&#038;p=12272</guid>

					<description><![CDATA[<p>Artificial intelligence (AI) continues to reshape how organisations do business, offering unprecedented opportunities to improve efficiency.  AI can enhance decision-making and revolutionise entire sectors. However, the use and integration of AI also brings a new set of risks and challenges that organisations must address. As our reliance on AI grows, so do the AI risks.  [&#8230;]</p>
The post <a href="https://inconsult.com.au/publication/mastering-the-machine-navigating-the-top-10-ai-risks/">Mastering the Machine: Navigating the Top 10 AI Risks</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></description>
										<content:encoded><![CDATA[<div class="flex-1 overflow-hidden">
<div class="h-full">
<div class="react-scroll-to-bottom--css-afbpl-79elbk h-full">
<div class="react-scroll-to-bottom--css-afbpl-1n7m0yu">
<div class="flex flex-col text-sm md:pb-9">
<article class="w-full text-token-text-primary focus-visible:outline-2 focus-visible:outline-offset-[-4px]" dir="auto" data-testid="conversation-turn-3" data-scroll-anchor="true">
<div class="text-base py-[18px] px-3 md:px-4 m-auto w-full md:px-5 lg:px-4 xl:px-5">
<div class="mx-auto flex flex-1 gap-4 text-base md:gap-5 lg:gap-6 md:max-w-3xl lg:max-w-[40rem] xl:max-w-[48rem]">
<div class="group/conversation-turn relative flex w-full min-w-0 flex-col agent-turn">
<div class="flex-col gap-1 md:gap-3">
<div class="flex max-w-full flex-col flex-grow">
<div class="text-message flex w-full flex-col items-end gap-2 whitespace-normal break-words [.text-message+&amp;]:mt-5" dir="auto" data-message-author-role="assistant" data-message-id="22138f38-573c-4da5-81c0-47065e044a8e">
<div class="flex w-full flex-col gap-1 empty:hidden first:pt-[3px]">
<div class="markdown prose w-full break-words dark:prose-invert light">
<p>Artificial intelligence (AI) continues to reshape how organisations do business, offering unprecedented opportunities to improve efficiency.  AI can enhance decision-making and revolutionise entire sectors. However, the use and integration of AI also brings a new set of risks and challenges that organisations must address. As our reliance on AI grows, so do the AI risks.  <a href="https://inconsult.com.au/publication/essential-ai-governance-documents-to-build-trust-in-ai/" target="_blank" rel="noopener">Building trust in AI technologies</a> is a key priority.</p>
<p>In this publication, we review recent incidents that have exposed some AI vulnerabilities, explore the top 10 AI risks organisations face today, and provide insights into how each risk can arise along with strategies to manage it.</p>
<h2>Recent AI Incidents and Issues</h2>
<p>The growing use of AI has exposed vulnerabilities through high-profile incidents, sparking concerns over the technology’s safety, fairness, and reliability. These events highlight the need for stronger risk management frameworks to ensure responsible AI deployment. Here are some examples.</p>
<h4><strong>The 2020 Twitter Hack and AI’s Role</strong></h4>
<p>In July 2020, Twitter (now X) experienced a major security breach in which hackers gained access to high-profile accounts, including those of Barack Obama, Elon Musk, and others. Whilst it was primarily a social engineering attack, AI may have played an indirect role in crafting more convincing spear-phishing emails and messages. The incident raised alarms about AI&#8217;s potential to enhance the sophistication of cyberattacks and to manipulate human behaviour on social platforms.</p>
<h4><strong>Bias in AI-Powered Hiring Tools</strong></h4>
<p>Several companies have adopted AI to streamline their hiring processes. However, in 2018, <a href="https://www.irishtimes.com/business/technology/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-1.3658651" target="_blank" rel="noopener">Amazon</a> scrapped its AI recruiting tool when it was revealed it was biased against women. The system, trained on past hiring data, penalised resumes that included the word &#8220;women&#8217;s&#8221; and disproportionately favoured male candidates. This incident highlighted how AI systems, when trained on biased datasets, can perpetuate and even exacerbate existing inequalities.</p>
<h4><strong>AI and Autonomous Vehicles</strong></h4>
<p>The development of autonomous vehicles is another area where AI failures have serious consequences. In 2018, an Uber self-driving car killed a pedestrian in Arizona. The investigation revealed that the car&#8217;s AI failed to properly classify the pedestrian as a hazard and react appropriately. This incident underscores the potential for AI to make critical errors in real-world applications, particularly in safety-critical industries like transportation.</p>
<h4><strong>GPT-3 and Content Misinformation</strong></h4>
<p>OpenAI&#8217;s GPT-3, once one of the most advanced language models, has shown remarkable abilities in generating human-like text. However, it has also demonstrated the potential for generating false information, hate speech, and harmful content when used without adequate oversight. GPT-3&#8217;s ability to generate misleading or harmful content at scale presents significant risks, especially in areas like media, customer support, and content creation.</p>
<p>This list is not comprehensive; the examples above reflect only a fraction of the challenges associated with AI. That is why understanding the broader landscape of AI risks is critical for managing and mitigating potential hazards.</p>
<h2>Top 10 AI Risks</h2>
<p>AI technologies are rapidly evolving, which makes it difficult to keep up with all the potential AI risks. New algorithms, models, and applications can introduce unforeseen risks, making risk management a constantly moving target. Organisations must understand the key risks associated with AI to manage them effectively. Here is our list of the top 10 AI risks.</p>
<h4><strong>1. Risk of Bias and Discrimination</strong></h4>
<p>AI systems rely heavily on large datasets and learn from the data they are trained on. If this data is biased—reflecting historical inequalities, discrimination, or stereotypes—the AI will likely perpetuate or even amplify those biases. Poor-quality or incomplete data can also lead to incorrect predictions and decisions.</p>
<p>As we saw with the Amazon incident above, biases can result in unfair treatment in various contexts, such as hiring, lending, law enforcement, and healthcare. Specifically, predictive policing algorithms have been criticised for disproportionately targeting minority communities due to biases in historical data.</p>
<p>Ensuring that the data is accurate and AI systems are trained on diverse and representative data is essential for reducing bias.</p>
<p>Strategies to manage the risk of bias and discrimination include:</p>
<ul>
<li>Ensuring diverse and representative training data sets that accurately reflect the diversity of real-world populations and scenarios.</li>
<li>Implementing fairness testing throughout the AI development and deployment lifecycle.</li>
<li>Regular auditing to detect bias that may develop over time or as the system encounters new data.</li>
</ul>
<h4><strong>2. AI Security Vulnerabilities</strong></h4>
<p>AI can introduce new types of cybersecurity vulnerabilities. AI-driven systems, particularly those involving machine learning, may be susceptible to new forms of cyberattack such as adversarial attacks, in which small, deliberate alterations to input data deceive the AI system into making incorrect decisions.</p>
<p>For instance, a carefully altered image might cause an AI system to misclassify it, leading to potentially dangerous consequences in applications like facial recognition or autonomous vehicles.</p>
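<p>The mechanics can be illustrated on a toy, hand-built logistic classifier (the weights, input, and attack budget below are all invented for the sketch): nudging each feature slightly against the model&#8217;s weights flips a confident decision while the input barely changes.</p>

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained linear classifier; weights invented for the sketch.
w = np.array([1.5, -2.0, 0.7])
b = 0.1

def predict_proba(x):
    """Probability the classifier assigns to the positive class."""
    return sigmoid(w @ x + b)

x = np.array([0.5, -0.2, 0.3])    # legitimate input, confidently positive
eps = 0.4                          # attack budget: max change per feature

# FGSM-style step: for a linear score the gradient w.r.t. the input is w,
# so pushing each feature against sign(w) lowers the score fastest.
x_adv = x - eps * np.sign(w)

clean_p = predict_proba(x)         # roughly 0.81: positive
adv_p = predict_proba(x_adv)       # roughly 0.45: the decision flips
```

<p>Real attacks on deep models use the model&#8217;s gradients in the same way; defences such as adversarial training deliberately expose the model to these perturbed inputs during training.</p>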
<p>Identifying and defending against cybersecurity vulnerabilities in AI systems requires a multi-faceted approach.</p>
<p>Strategies to better manage AI security vulnerabilities risks include:</p>
<ul>
<li>Implementing robust cybersecurity measures to secure AI systems from unauthorised access, manipulation, and cyber threats that could compromise their integrity or functionality.</li>
<li>Conducting regular penetration testing to identify and fix security vulnerabilities in AI systems by simulating cyberattacks and malicious behaviour.</li>
<li>Using encryption and access controls to ensure data confidentiality and integrity.</li>
<li>Continuous monitoring for anomalies and using adversarial training techniques can help harden AI systems.</li>
</ul>
<h4><strong>3. Lack of Transparency and Explainability</strong></h4>
<p>Users and stakeholders may have difficulty trusting AI systems that lack clear explainability. Why? Many AI systems, particularly deep learning models, operate as &#8220;black boxes&#8221;, where it’s difficult to understand how they arrive at specific decisions. This lack of transparency can be problematic in industries like healthcare, finance, or criminal justice, where understanding the rationale behind a decision is crucial.</p>
<p>For example, if an AI system denies someone a loan or insurance claim, that individual may have little recourse to challenge the decision because the reasoning behind it is opaque. Explainability and transparency are essential for ensuring trust and accountability in AI systems.</p>
<p>Ensuring that AI outputs can be understood and justified is essential for building trust, especially in high-stakes domains like healthcare and finance.</p>
<p>Strategies to manage the risks around lack of transparency and explainability include:</p>
<ul>
<li>Using explainability tools like LIME (Local Interpretable Model-Agnostic Explanations) or SHAP (SHapley Additive exPlanations) to provide insight into how models arrive at their predictions, so that the rationale behind their decisions can be understood by both technical and non-technical stakeholders.</li>
<li>Maintaining clear, transparent, up-to-date and comprehensive documentation for AI models, outlining their development, training data, features, and decision-making processes to promote accountability and understanding.</li>
<li>Keeping stakeholders informed about the AI system’s decision-making processes and outcomes.</li>
</ul>
<h4><strong>4. Autonomous Decision-Making and Accountability</strong></h4>
<p>AI systems may make decisions without human oversight, leading to errors or unintended outcomes. As AI systems increasingly make autonomous decisions, it becomes challenging to assign responsibility when things go wrong.  This raises questions of accountability. Who is responsible when something goes wrong? The developer, the operator, or the manufacturer?</p>
<p>In the Uber self-driving car case, legal and ethical questions arose regarding liability for the pedestrian&#8217;s death.</p>
<p>Clear accountability frameworks are essential for assigning responsibility when autonomous decisions cause harm.</p>
<p>Strategies to manage autonomous decision-making and accountability risks include:</p>
<ul>
<li>Introducing human-in-the-loop systems (reviews) for critical decisions to ensure that human judgment and oversight are incorporated into the decision-making processes of AI systems, particularly for high-stakes AI applications.</li>
<li>Regular testing and validation of AI decisions to continuously assess the performance and reliability of AI systems to ensure their outputs remain accurate and fair over time.</li>
<li>Establishing accountability frameworks to create clear structures for accountability in AI decision-making processes to ensure responsibility for outcomes and adherence to ethical standards.</li>
</ul>
</div>
<h4><strong>5. Regulatory and Compliance Issues</strong></h4>
<p>As AI adoption grows, regulatory frameworks often lag behind, making it challenging to ensure AI systems comply with current laws. Different industries and regions have varying regulations, further complicating risk management. AI systems may also improperly handle personal data, violating regulations such as the GDPR.</p>
<p>Strategies to minimise regulatory and compliance risks include:</p>
<ul>
<li>Ensuring compliance with relevant data protection regulations by conducting a thorough analysis of applicable data protection regulations.</li>
<li>Using data anonymisation and pseudonymisation techniques to protect personal data by transforming it in a way that individuals cannot be readily identified, thereby reducing risks associated with data breaches.</li>
<li>Implementing strict access controls to limit access to personal data to authorised personnel only, thereby minimising the risk of data breaches or unauthorised use.</li>
</ul>
</div>
</div>
<h4><strong>6. Breach of Intellectual Property and Copyright</strong></h4>
<p>AI systems pose a significant risk of infringing on intellectual property (IP) or generating unprotected content due to their reliance on vast amounts of data and their ability to create new outputs. When AI systems are trained on copyrighted material, such as books, music, images, or proprietary datasets, there is a potential for these systems to replicate, modify, or combine elements of existing works in ways that could violate copyright, trademark, or patent laws. This becomes particularly problematic if AI-generated content closely resembles or copies existing protected works without proper licensing or attribution.</p>
</div>
<p>Strategies to minimise risk of intellectual property and copyright breaches include:</p>
<ul>
<li>Prompting AI to provide sources and references for the information it provides.</li>
<li>Implementing legal reviews, fact checks and compliance checks on data sources.</li>
<li>Ensuring that any required attribution is disclosed and complies with the terms of the licenses.</li>
</ul>
<h4><strong>7. Misuse of AI </strong></h4>
</div>
<div class="flex max-w-full flex-col flex-grow">
<p>AI systems can be deployed in ways that raise significant misuse concerns, particularly in areas like surveillance, manipulation, and social control. For example, AI-driven surveillance technologies can be used to monitor individuals&#8217; activities without their consent, potentially infringing on privacy rights and leading to invasive or disproportionate oversight by governments or corporations.</p>
</div>
<p>Strategies to minimise risk of misuse of AI include:</p>
<ul>
<li>Establishing robust ethical guidelines that define the boundaries for acceptable AI use.</li>
<li>Implementing strict access controls and usage policies to ensure that only authorised individuals can access and use AI systems.</li>
<li>Monitoring AI prompts to ensure appropriate use.</li>
<li>Forming oversight committees that include ethicists, legal experts, technologists, and other relevant stakeholders.</li>
</ul>
</div>
</div>
</div>
<div class="flex max-w-full flex-col flex-grow">
<h4><strong>8. Model Drift</strong></h4>
<p>AI models can degrade over time as the data or conditions they were originally trained on evolve, leading to reduced accuracy and reliability in their predictions or decisions. This occurs because real-world environments and data trends change over time, while the AI model remains static.</p>
</div>
<p>Strategies to reduce the risk of model drift include:</p>
<ul>
<li>Continuously monitoring the AI model to identify signs of degradation or anomalies where output is no longer accurate or relevant.</li>
<li>Regularly retraining AI models with up-to-date data to maintain their accuracy and relevance.</li>
<li>Establishing regular processes for updating models, either by refining their algorithms or incorporating new techniques as they emerge.</li>
<li>Performing regular performance audits.</li>
</ul>
<h4><strong>9. Malicious Use of AI</strong></h4>
<div class="text-base py-[18px] px-3 md:px-4 m-auto w-full md:px-5 lg:px-4 xl:px-5">
<div class="mx-auto flex flex-1 gap-4 text-base md:gap-5 lg:gap-6 md:max-w-3xl lg:max-w-[40rem] xl:max-w-[48rem]">
<div class="group/conversation-turn relative flex w-full min-w-0 flex-col agent-turn">
<div class="flex max-w-full flex-col flex-grow">
<p>AI systems have the potential to be exploited for harmful and unethical purposes.  For example:</p>
<ul>
<li>AI-supported hacking by using AI-powered phishing tools or chatbots to manipulate users into revealing sensitive information (social engineering).</li>
<li>Deploying deepfakes to spread misinformation or discredit individuals.</li>
<li>AI-supported password guessing and brute-force attacks, which allow cybercriminals to analyse large password datasets and generate password variations, leading to more accurate and targeted guesses.</li>
<li>AI-assisted CAPTCHA cracking that analyses images and responds according to learned human behaviour patterns.</li>
<li>Training autonomous drones to launch cyberattacks.</li>
</ul>
<p>Strategies to reduce the risk of malicious use of AI include:</p>
</div>
<ul>
<li>Establishing robust ethical guidelines that define the boundaries for acceptable AI use.</li>
<li>Monitoring AI prompts and outputs for deliberate malicious use, which is crucial to detecting and preventing harmful activities early. This can involve setting up automated systems that flag suspicious or potentially harmful outputs for review, especially in high-risk settings.</li>
</ul>
<div class="flex-col gap-1 md:gap-3">
<div class="flex max-w-full flex-col flex-grow">
<div class="text-message flex w-full flex-col items-end gap-2 whitespace-normal break-words [.text-message+&amp;]:mt-5" dir="auto" data-message-author-role="assistant" data-message-id="22138f38-573c-4da5-81c0-47065e044a8e">
<div class="flex w-full flex-col gap-1 empty:hidden first:pt-[3px]">
<div class="markdown prose w-full break-words dark:prose-invert light">
<h4><strong>10. Societal Implications and Economic Disruption</strong></h4>
<p>AI and automation threaten to disrupt labour markets by displacing jobs, particularly in sectors that rely on routine, manual tasks. This can have societal implications and longer-term economic impacts. While AI can create new jobs, the transition may be painful for workers whose skills become obsolete.</p>
<p>Ensuring that the workforce is adequately prepared for this transition, through education and retraining programs, is essential for mitigating the economic risks posed by AI.</p>
</div>
<p>Strategies to reduce the risk of societal implications include:</p>
<ul>
<li>Investing in employee retraining and upskilling programs to equip the existing workforce with new skills to adapt to the changing job landscape.</li>
<li>Developing transition strategies for affected workers to provide support and resources to employees whose roles may be displaced or significantly changed due to AI adoption, helping them transition to new opportunities within or outside the organisation.</li>
</ul>
</div>
<div class="flex w-full flex-col gap-1 empty:hidden first:pt-[3px]">
<div class="markdown prose w-full break-words dark:prose-invert light">
<h2>Take outs</h2>
<p>This publication highlights the growing influence of Artificial Intelligence on business operations, offering opportunities to enhance efficiency, decision-making, and sector-wide transformation. As AI’s adoption increases, so do associated risks and challenges. Recent incidents, such as the bias in AI-powered hiring tools and examples of exploitation by threat actors, underscore vulnerabilities in AI systems, especially regarding security, fairness, and accountability.</p>
<p>AI risks are not static.  The top 10 AI risks presented are simply a starting point for organisations.</p>
<p>To manage these top 10 AI risks, we have identified a number of risk treatment strategies such as data auditing, cybersecurity measures, ethical guidelines, regular retraining, and accountability frameworks. By proactively addressing these challenges, organisations can navigate these AI risks more effectively.</p>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</article>
</div>
</div>
</div>
</div>
</div>
<h2>Can we help manage your AI Risks?</h2>
<p>Our <a href="https://inconsult.com.au/services/artificial-intelligence-risk-governance/">AI Risk Governance</a> services are designed to help organisations navigate the complexities of AI, ensure ethical practices, maintain regulatory compliance, and manage AI risks at every stage of AI development and deployment.</p>
<ul>
<li><strong>AI Governance Framework Design</strong>: By designing a comprehensive framework, we help align AI practices with regulatory standards and ethical principles, empowering organisations to innovate responsibly and sustainably.</li>
<li><strong>AI Governance Framework Uplift</strong>: We help ensure that policies, procedures, and safeguards are in place and socialised with the board, management and key staff to promote transparency, fairness, and accountability.</li>
<li><strong>AI Risk Assessment</strong>: We will deliver a comprehensive AI risk assessment that is aligned to your risk management framework, considers your risk appetite and identifies risks that require enhanced risk treatments.</li>
<li><strong>AI Governance Audits</strong>: A structured review to evaluate how well an organisation’s AI systems and practices align with its AI governance framework, regulatory requirements, and better practice ethical guidelines such as the NIST-AI-600-1 Artificial Intelligence Risk Management Framework.</li>
</ul>
<p>Explore, innovate and take risks. <a href="https://inconsult.com.au/contact-us/">Contact us</a> to discuss how we can help strengthen your AI risk governance.</p>
<div class='printomatic pom-default ' id='id2369'  data-print_target='body'></div>The post <a href="https://inconsult.com.au/publication/mastering-the-machine-navigating-the-top-10-ai-risks/">Mastering the Machine: Navigating the Top 10 AI Risks</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Cybersecurity Upgrade: Australia&#8217;s Stricter Regulations</title>
		<link>https://inconsult.com.au/publication/cybersecurity-upgrade-australias-stricter-regulations/</link>
		
		<dc:creator><![CDATA[Tony Harb]]></dc:creator>
		<pubDate>Wed, 09 Oct 2024 22:14:03 +0000</pubDate>
				<guid isPermaLink="false">https://inconsult.com.au/?post_type=publication&#038;p=12245</guid>

					<description><![CDATA[<p>Australia&#8217;s cybersecurity posture is undergoing an upgrade: a comprehensive overhaul of its cybersecurity framework in response to the rising tide of cyberattacks targeting businesses, government agencies, and critical infrastructure. The government is introducing &#8220;unprecedented&#8221; cybersecurity legislation to parliament to help protect Australia&#8217;s critical infrastructure. These reforms come at a time when the nation is [&#8230;]</p>
The post <a href="https://inconsult.com.au/publication/cybersecurity-upgrade-australias-stricter-regulations/">Cybersecurity Upgrade: Australia’s Stricter Regulations</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></description>
										<content:encoded><![CDATA[<p>Australia&#8217;s cybersecurity posture is undergoing an upgrade: a comprehensive overhaul of its cybersecurity framework in response to the rising tide of cyberattacks targeting businesses, government agencies, and critical infrastructure. The government is introducing &#8220;unprecedented&#8221; cybersecurity legislation to parliament to help protect Australia&#8217;s critical infrastructure. These reforms come at a time when the nation is grappling with increasingly sophisticated cyber threats, including ransomware attacks that have compromised sensitive data and disrupted essential services.</p>
<p>These reforms, which are centred on the introduction of the <strong>Cyber Security Bill</strong> and the development of the <strong>2023-2030 Cyber Security Strategy</strong>, are expected to transform the way businesses and government agencies approach cybersecurity. In particular, the new laws will enforce mandatory reporting of ransomware payments and impose stricter regulations on critical infrastructure sectors.</p>
<p>This article explores the key aspects of these reforms and the new legislation introduced by the government to ensure a more secure digital future for the country.</p>
<h3>Why Cybersecurity Reforms Are Critical</h3>
<p>In recent years, Australia has witnessed a surge in cyberattacks, including high-profile breaches affecting organisations such as Optus, Medibank, and government agencies. These attacks have exposed vulnerabilities in the nation&#8217;s digital infrastructure and underscored the need for stronger cybersecurity measures.</p>
<p>A recent report revealed that Australia ranks second in the world for ransomware attacks, highlighting the urgency of these reforms.  The Australian Cyber Security Centre (ACSC) has identified ransomware as one of the most significant cybersecurity threats facing the country, with incidents increasing in both frequency and severity over the past few years.</p>
<p>In response, the Australian government has launched the <a href="https://www.homeaffairs.gov.au/about-us/our-portfolios/cyber-security/strategy/2023-2030-australian-cyber-security-strategy" target="_blank" rel="noopener">2023-2030 Cyber Security Strategy</a>, a national plan to enhance cybersecurity resilience across all sectors. The plan acknowledges that Australia faces persistent threats from both cybercriminals and nation-state actors. The strategy emphasises the need for a comprehensive approach, with a focus on improving the security of critical infrastructure, implementing stricter regulations, enhancing cybersecurity awareness among businesses and citizens, and increasing accountability for organisations that fail to protect sensitive data.</p>
<h3>The Australian Cybersecurity Strategy 2023-2030</h3>
<p>The Cyber Security Strategy forms the backbone of Australia’s efforts to combat cyber threats. It emphasises six key pillars, referred to as “shields,” which collectively address various aspects of cybersecurity, such as secure digital infrastructure, public-private collaboration, and increased accountability for cyber incidents. The strategy also introduces a significant shift in how cybersecurity is approached, encouraging both individuals and organisations to adopt a “secure by design” mindset.  Check out our more detailed article on <a href="https://inconsult.com.au/publication/australias-new-cyber-security-strategy/" target="_blank" rel="noopener">Australia’s New Cyber Security Strategy</a> including the six shields.</p>
<p><img decoding="async" class="wp-image-11515 aligncenter" src="https://inconsult.com.au/wp-content/uploads/2023/12/6-cyber-shields-300x158.png" alt="cybersecurity Australia's six cyber shields" width="505" height="266" srcset="https://inconsult.com.au/wp-content/uploads/2023/12/6-cyber-shields-300x158.png 300w, https://inconsult.com.au/wp-content/uploads/2023/12/6-cyber-shields-768x406.png 768w, https://inconsult.com.au/wp-content/uploads/2023/12/6-cyber-shields.png 888w" sizes="(max-width: 505px) 100vw, 505px" /></p>
<p style="text-align: center;"><em>The six cyber shields </em></p>
<p>One of the strategy’s critical components is bolstering defences around critical infrastructure, including healthcare, telecommunications, and transport networks. These sectors are seen as high-value targets for cybercriminals and foreign actors, and securing them is essential to maintaining national security.</p>
<p>The strategy also aims to fill the skills gap in the cybersecurity industry by investing in education and training programs. The government is working with universities and vocational institutions to create a pipeline of cybersecurity professionals who can meet the growing demand for expertise in this field.</p>
<h3>New Legislation: Cybersecurity Bill and Ransomware Reporting</h3>
<p>One of the most significant elements of the <a href="https://www.homeaffairs.gov.au/news-media/archive/article?itemId=1247" target="_blank" rel="noopener">new reforms</a> is the introduction of the <strong>Cyber Security Bill</strong>, which aims to enhance regulatory oversight of cybersecurity practices across various sectors. The bill mandates stricter security protocols for businesses and government entities and introduces new requirements for reporting cyber incidents. Key features of the reforms include:</p>
<h4><strong>1. Mandatory Ransomware Reporting</strong></h4>
<p>Under the new legislation, businesses and organisations are required to report ransomware attacks and any payments made to cybercriminals. The new reporting requirements are designed to discourage businesses from paying ransoms, as such payments not only fuel further criminal activity but also often fail to result in the secure return of stolen data.</p>
<p>Other updates will address gaps in current legislation to:</p>
<ul>
<li>mandate minimum cyber security standards for smart devices,</li>
<li>introduce a ‘limited use’ obligation for the National Cyber Security Coordinator and the Australian Signals Directorate (ASD), and</li>
<li>establish a Cyber Incident Review Board.</li>
</ul>
<p>Moreover, mandatory reporting will allow the government to gain a clearer understanding of the scale of ransomware activity in Australia and help develop more effective strategies for combatting these attacks.</p>
<p>Companies that fail to comply with the reporting requirements could face significant fines and penalties.</p>
<h4><strong>2. Improved Cyber Resilience for Critical Infrastructure</strong></h4>
<p>The reforms place a strong emphasis on protecting critical infrastructure, such as healthcare, telecommunications, and energy sectors, from cyber threats. These industries are particularly vulnerable to cyberattacks due to the high value of the data they manage and their essential role in maintaining national security. These measures include the requirement for companies to:</p>
<ul>
<li>conduct regular risk assessments,</li>
<li>implement incident response plans, and</li>
<li>ensure the encryption of sensitive data.</li>
</ul>
<p>The package will also advance reforms under the Security of Critical Infrastructure Act 2018 (SOCI Act), which will:</p>
<ul>
<li>clarify obligations for systems handling business-critical data,</li>
<li>enhance government assistance to manage the impacts of all hazards on critical infrastructure,</li>
<li>simplify information sharing between industry and government,</li>
<li>introduce powers allowing the government to direct entities to fix serious deficiencies in their risk management programs, and</li>
<li>align telecommunications security regulation with the SOCI Act.</li>
</ul>
<h4><strong>3. Increased Collaboration Between Government and Private Sector</strong></h4>
<p>The reforms encourage greater collaboration between the government and private businesses to share threat intelligence, best practices, and resources for combating cyber threats. This public-private partnership is viewed as essential for creating a unified national defence against cyberattacks.</p>
<h4><strong>4. Stricter Penalties for Non-Compliance</strong></h4>
<p>Organisations that fail to implement adequate cybersecurity measures or fail to report cyber incidents in a timely manner will face significant penalties under the new legislation. This includes fines and other legal consequences for businesses that do not comply with the new regulations. The aim is to hold businesses accountable for the protection of their customers&#8217; data and to ensure that they take proactive steps to defend against cyber threats.</p>
<p>By enacting stricter laws, the government hopes to create a culture of cybersecurity accountability, where businesses understand the importance of securing their systems and data.</p>
<h3>Learning from Past Cybersecurity Breaches</h3>
<p>Australia’s new cybersecurity reforms have been largely shaped by lessons learned from high-profile data breaches. Two of the most notable incidents include the attacks on telecommunications giant Optus and health insurance provider Medibank, both of which resulted in the exposure of sensitive customer information.</p>
<p>The Optus breach exposed the personal information of over 10 million Australians, including passport numbers, driving licences, and other sensitive data. The breach was one of the largest in the country’s history and raised serious concerns about the adequacy of corporate cybersecurity practices, leading to widespread criticism of the company’s cybersecurity protocols.</p>
<p>Similarly, the Medibank breach involved the theft of highly sensitive health records, resulting in further scrutiny of how businesses in the healthcare sector manage patient data.</p>
<p>In both cases, the lack of sufficient cybersecurity safeguards and poor incident response strategies were seen as major contributing factors to the scale and impact of the attacks.</p>
<p>These incidents have driven home the importance of proactive measures and incident preparedness.</p>
<p>The government has stressed that organisations must do more to secure their systems against increasingly sophisticated cyberattacks by requiring businesses to:</p>
<ul>
<li>implement stronger cybersecurity measures,</li>
<li>regularly assess their risks and vulnerabilities,</li>
<li>strengthen third-party vendor management,</li>
<li>establish dedicated cybersecurity teams, and</li>
<li>improve their preparedness and response for cyber incidents.</li>
</ul>
<h3>Public-Private Collaboration and Education</h3>
<p>The Cyber Security Strategy highlights the importance of sharing threat intelligence and best practices across industries to improve national cybersecurity resilience. A key element of the new reforms is the promotion of collaboration between the government and the private sector. As part of the reforms, the government is encouraging businesses to work closely with agencies such as the ACSC to share information about emerging threats and best practices for mitigating cyber risks.</p>
<p>The government has also launched public education campaigns to raise awareness of cybersecurity threats and encourage best practices among individuals and small businesses. These campaigns focus on basic cybersecurity hygiene, such as using strong passwords, enabling multi-factor authentication, and recognising phishing attempts.</p>
<h3>The Role of Emerging Technologies</h3>
<p>The reforms also take into account the growing role of artificial intelligence (AI) and automation in cybersecurity. With the rapid evolution of cyber threats, AI is seen as a crucial tool in detecting and responding to attacks more quickly and efficiently. Automation can help organisations manage the sheer volume of threats they face, enabling faster identification of vulnerabilities and reducing the time it takes to remediate breaches.</p>
<p>However, the government has cautioned that the implementation of AI-driven solutions must be carefully managed to avoid new risks. For instance, while AI can significantly enhance cybersecurity, it can also be used maliciously by cybercriminals, making it a double-edged sword. Therefore, any deployment of AI technologies must be accompanied by rigorous oversight and testing to ensure they are effective without introducing additional vulnerabilities.</p>
<h3>The Future of Cybersecurity in Australia</h3>
<p>Australia&#8217;s cybersecurity reforms represent a significant step forward in the nation’s efforts to protect its digital infrastructure and safeguard sensitive data. By implementing stricter regulations, fostering collaboration between the public and private sectors, and increasing accountability for cyber incidents, the government is positioning Australia as a leader in global cybersecurity.</p>
<p>However, the success of these reforms will depend on the willingness of businesses and government agencies to adopt a proactive approach to cybersecurity. As cyber threats continue to evolve, it is essential that organisations remain vigilant and continuously improve their security practices.</p>
<p>Australia’s new cybersecurity reforms provide a comprehensive framework for addressing the growing cyber threat landscape. With the introduction of mandatory ransomware reporting, stricter penalties for non-compliance, and a strong focus on collaboration and education, the nation is well on its way to building a more secure digital future.</p>
<h3><strong>How can we help enhance cyber security?</strong></h3>
<p>We are here to help strengthen cyber resilience. Our cyber risk management capabilities include designing and developing a cyber risk management framework and a wide range of response plans to enhance your cyber resilience capabilities. Our cyber risk management services include:</p>
<ul>
<li>Vulnerability scanning</li>
<li>Cyber Security Gap Analysis against the Essential Eight, ISO 27001 or APRA&#8217;s CPS 234</li>
<li>Regulation compliance advice</li>
<li>Cyber Risk Governance Framework Reviews</li>
<li>Cyber Risk Governance Framework Development</li>
<li>Third-Party Vendor Review and Cyber Risk Analysis</li>
<li>Cyber Risk Awareness Training and Internal Campaigns</li>
<li>Post-Cyber Incident Review</li>
<li>Email Phishing Campaigns</li>
<li>Cyber Incident Response</li>
<li>Crisis Team Familiarisation Training</li>
<li>AI Risk Governance</li>
</ul>
<p>Be more resilient to a wide range of cyber risks and get relevant insight into how to protect your systems by <a href="https://inconsult.com.au/contact-us/">contacting us</a> to discuss how we can help strengthen your cyber resilience framework.</p>The post <a href="https://inconsult.com.au/publication/cybersecurity-upgrade-australias-stricter-regulations/">Cybersecurity Upgrade: Australia’s Stricter Regulations</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Bank Governance and Risk Culture</title>
		<link>https://inconsult.com.au/publication/bank-governance-and-risk-culture/</link>
		
		<dc:creator><![CDATA[Tony Harb]]></dc:creator>
		<pubDate>Mon, 23 Sep 2024 02:31:17 +0000</pubDate>
				<guid isPermaLink="false">https://inconsult.com.au/?post_type=publication&#038;p=12070</guid>

					<description><![CDATA[<p>The new 2024 European Central Bank (ECB) Single Supervisory Mechanism (SSM) draft guide on governance and risk culture significantly updates and builds upon the 2016 SSM supervisory statement on governance and risk appetite. Key themes in both documents emphasise the importance of robust internal governance, risk culture, and risk appetite frameworks (RAF) within financial institutions. [&#8230;]</p>
The post <a href="https://inconsult.com.au/publication/bank-governance-and-risk-culture/">Bank Governance and Risk Culture</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></description>
										<content:encoded><![CDATA[<p>The new 2024 European Central Bank (ECB) Single Supervisory Mechanism (SSM) <a href="https://www.bankingsupervision.europa.eu/legalframework/publiccons/pdf/ssm.pubcon202407_draftguide.en.pdf?4532f41855e11e6a317fcb07e5532b56" target="_blank" rel="noopener">draft guide on governance and risk culture</a> significantly updates and builds upon the 2016 SSM supervisory statement on governance and risk appetite. Key themes in both documents emphasise the importance of robust internal governance, risk culture, and risk appetite frameworks (RAF) within financial institutions. However, the 2024 guide is three times the size of its predecessor and introduces several critical changes aimed at enhancing the effectiveness of supervisory practices based on lessons learned from the past decade.</p>
<p>The draft guide is important for Australian financial institutions for several reasons. Firstly, the ECB guidelines set global best practice standards that influence regulatory expectations internationally, including those of the Australian Prudential Regulation Authority (APRA). Secondly, many Australian banks have international exposure, and aligning with ECB standards can help them meet global governance and risk culture benchmarks, reducing operational risks and regulatory friction.</p>
<p>Our risk and resilience team takes a closer look at how governance and risk management expectations have evolved between the old and new supervisory statements. We explore the growing emphasis on robust internal governance, enhanced risk culture, and updated risk appetite frameworks.</p>
<h2>1. Broadening the Scope of Risk Culture</h2>
<p>As expected, the focus on risk culture is significantly expanded in the new 2024 guide, marking a shift towards a more comprehensive and practical approach. Unlike the 2016 statement, which recognised the importance of risk culture in broad terms, the new guidance emphasises embedding it deeply across all levels of the institution. This involves influencing not just formal policies but also everyday behaviours, decision-making, and governance structures.</p>
<p>The guide highlights that a strong risk culture must be an integral part of how an institution operates. Risk culture should shape leadership attitudes, staff conduct, and internal decision processes, ensuring that risks are acknowledged, evaluated, and addressed consistently across the institution. Supervisors are provided with more tangible tools to assess the effectiveness of these cultural practices, moving beyond superficial assessments.</p>
<p>A key addition to the 2024 guide is the introduction of measurable benchmarks and evaluation criteria. These metrics allow supervisors to track how effectively institutions are internalising and operationalising their risk culture. This approach emphasises that a sound risk culture is not just about having policies on paper but about translating those policies into everyday actions and decision-making.</p>
<p>Another important element is the expectation for senior management and the board to actively promote and model the desired risk culture. The guide reinforces that leadership plays a critical role in setting the tone from the top, fostering an environment where risk management is seen as a shared responsibility across the organisation. In doing so, the guide makes clear that risk culture should evolve from a compliance-driven concept to a core part of how institutions govern themselves.</p>
<h2>2. Internal Controls and Risk Culture</h2>
<p>The 2024 SSM guide strengthens the connection between internal controls and risk culture, acknowledging advancements in compliance technology and increased regulatory pressure for proactive risk management. While the 2016 framework treated internal controls primarily as safeguards, the new guide emphasises that these controls must be fully aligned with the bank&#8217;s risk culture.</p>
<p>A key focus is on leveraging new technologies like AI-driven compliance systems to continuously monitor and identify behaviours that stray from the institution’s defined risk appetite. These systems enable real-time tracking and offer a more dynamic approach to compliance. Additionally, supervisors will now be looking for evidence that internal controls go beyond formal procedures.</p>
<p>Internal controls should actively support a risk-aware culture, with regular testing, auditing, and real-time feedback mechanisms to ensure that they are not just box-ticking exercises, but integral to the institution&#8217;s day-to-day operations. This shift highlights a more integrated approach, ensuring that risk management practices are deeply embedded within the institution’s culture, rather than being treated as separate, isolated processes.</p>
<h2>3. Greater Focus on Governance Bodies</h2>
<p>The 2024 SSM guide delves deeper into the composition, diversity, and effectiveness of governance bodies like management boards and committees, as a result of a decade of supervisory experience. It highlights the critical role of diversity in decision-making, emphasising that diverse perspectives help to counteract groupthink and foster well-rounded, robust governance practices. This focus marks a significant shift from the 2016 framework, where such diversity concerns were less prominent.</p>
<p>The guide also expands on the roles and responsibilities of internal control functions, providing more detail than the 2016 version. The aim is to ensure that these functions play a more active role in oversight and accountability.</p>
<p>Governance structures are now expected to have a much stronger influence on the institution&#8217;s risk appetite and overall governance decisions, ensuring that supervisory boards are not merely symbolic but have a real, strategic impact. This shift underscores the ECB&#8217;s drive for more effective and engaged governance frameworks, ensuring that banks are well-equipped to handle evolving risk landscapes and regulatory demands.</p>
<h2>4. Risk Appetite Frameworks (RAF)</h2>
<p>The 2024 SSM guide marks a shift toward a more detailed and prescriptive approach in setting risk appetite metrics compared to its 2016 version. Reflecting on the growing complexity of risks—particularly those related to climate change, digital transformation, and global market interconnections—institutions are now required to do more than just define broad risk appetite limits. They must perform granular, scenario-based analyses of their exposures. This includes specific thresholds for climate-related and cyber risks, which are integrated directly into their overall Risk Appetite Framework (RAF). This heightened focus ensures that institutions proactively measure and manage their vulnerabilities to these emerging threats.</p>
<p>In addition, the 2024 guide emphasises the need for dynamic RAFs. Unlike under the relatively static 2016 approach, banks must now continuously update their risk frameworks to adapt to the changing risk landscape. The guide also incorporates the latest European Banking Authority (EBA) standards, providing more detailed expectations for how risk appetite frameworks should be applied across all business units. Supervisors will now assess not only the existence of an RAF but also its real-time application across decision-making processes, ensuring that institutions remain agile and aligned with evolving risks.</p>
<p>This forward-looking guidance signals the ECB’s push for more resilient and proactive risk management, pushing institutions to adjust quickly to new challenges while maintaining strong governance.</p>
<h2>5. New Emerging Risks &#8211; Climate, ESG, and Digitalisation</h2>
<p>The 2024 SSM guide highlights risks that have grown markedly since 2016, especially climate risks and environmental, social, and governance (ESG) factors. Institutions now need to actively assess their exposure to climate-related threats, such as extreme weather or regulatory shifts toward greener policies. These risks are not just side concerns; they must be built directly into the institution&#8217;s Risk Appetite Framework, with clear metrics to measure how well the institution can withstand climate-related financial disruptions.</p>
<p>In addition to climate risks, the guide stresses the growing importance of digital risks. With increased use of cloud computing and AI, institutions face new cybersecurity challenges.</p>
<p>The 2024 guide requires institutions to develop specific strategies to manage these risks, ensuring they are prepared to handle potential threats from cyberattacks and technological failures. By embedding these emerging risks into governance and risk management processes, the guide pushes institutions to be more resilient and forward-thinking in today’s rapidly changing environment.</p>
<h2>6. Supervisory Tools and Approaches</h2>
<p>The 2016 statement established a baseline for evaluating institutions&#8217; internal governance and risk appetite frameworks, but the 2024 guide introduces a more sophisticated and dynamic supervisory toolkit. The ECB now integrates thematic reviews, benchmarking, and ad-hoc evaluations, enabling a more holistic and real-time approach to supervision. The new guide also offers detailed examples of good practices gathered over the past decade, providing clearer pathways for institutions to enhance their governance structures.</p>
<p>Additionally, the 2024 guide imposes stricter expectations on documentation and transparency in risk management processes. Institutions are now required to submit more frequent reports, including real-time risk assessments, as opposed to the less frequent, periodic reviews previously expected.</p>
<p>Enforcement actions will also be more swift and decisive when deficiencies in governance or risk management are identified. With access to a broader array of supervisory tools, supervisors can ensure compliance with the more rigorous standards set forth in the updated guidance. This shift demonstrates the ECB’s focus on pre-emptive action to address risks before they materialise.</p>
<h2>7. Proportionality and Tailored Supervision</h2>
<p>The 2024 guide introduces a refined proportionality principle, enhancing the flexibility of supervisory expectations by considering the size, complexity, and systemic importance of each institution. Unlike in the 2016 framework, where this principle was implicit, the updated guide makes proportionality central to its approach. This allows for a differentiated supervisory intensity, ensuring that smaller, less complex institutions are not overburdened with the same stringent requirements as larger, systemically important institutions.</p>
<p>This approach not only acknowledges the diversity of financial entities under the European Central Bank’s purview but also ensures that the regulatory framework is scalable. Smaller institutions with simpler business models benefit from reduced regulatory burden, allowing them to focus resources more effectively, while maintaining high oversight standards for larger institutions that pose greater systemic risks. By tailoring governance and risk management expectations to the specific profile of each institution, the ECB aims to create a fairer, more balanced supervision regime across the banking sector.</p>
<h2>Next Steps</h2>
<p>The 2024 SSM guide marks a critical evolution in the ECB’s approach to governance and risk management. While the 2016 statement laid the groundwork, the new guide takes a more detailed and proactive stance, reflecting years of supervisory insights and the increasing complexity of the financial ecosystem.</p>
<p>The expectations flowing from the new guide represent a significant step forward and pose challenges for institutions.</p>
<p>Financial institutions will need to adapt to these enhanced expectations, particularly around governance diversity, risk culture, and real-time application of risk appetite frameworks, ensuring they can meet the challenges of modern banking supervision.</p>
<p>Institutions must not only adjust their governance frameworks but also ensure that cultural changes are genuinely internalised by staff. This requires continuous training, monitoring, and a shift in leadership approaches. As noted by ECB representatives, poor governance and risk culture have contributed to past banking failures, and the aim of these updates is to prevent such occurrences in the future.</p>
<p>Institutions must therefore engage in meaningful transformation efforts, particularly concerning culture, or face potential supervisory actions.</p>
<h2>Can We Help?</h2>
<p>We understand financial services. Since 2001, we have assisted APRA-regulated entities in strengthening their risk management frameworks, capital management, reinsurance, recovery plans, cybersecurity and operational risk management.</p>
<p>We offer comprehensive support to enhance your risk management systems and processes. Services include interim Chief Risk Officer placements, on-demand Virtual Risk Officers, guidance in establishing formal risk frameworks, and independent reviews to assess framework maturity. Additionally, we conduct risk workshops, risk culture assessments, and provide specialised services in areas like business continuity, crisis management, cyber risk, and climate change risk.</p>
<p>Take risk management to the next level and <a href="https://inconsult.com.au/contact-us/" target="_blank" rel="noopener noreferrer">contact us</a> to discuss your risk and resilience needs.</p>
<div class='printomatic pom-default ' id='id8968'  data-print_target='body'></div>The post <a href="https://inconsult.com.au/publication/bank-governance-and-risk-culture/">Bank Governance and Risk Culture</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Meeting Emerging Challenges: Climate-related Disclosures</title>
		<link>https://inconsult.com.au/publication/meeting-emerging-challenges-climate-related-disclosures/</link>
		
		<dc:creator><![CDATA[Tony Harb]]></dc:creator>
		<pubDate>Mon, 15 Jul 2024 22:30:08 +0000</pubDate>
				<guid isPermaLink="false">https://inconsult.com.au/?post_type=publication&#038;p=11808</guid>

					<description><![CDATA[<p>In parts 1 and 2 of our special 3-part series, we explored the evolving landscape of climate disclosure and sustainability reporting. We covered Australia’s climate change journey to date, the new mandatory sustainability reporting laws and standards being introduced, and the implications for directors. In part 3, we outline some of the significant challenges companies [&#8230;]</p>
The post <a href="https://inconsult.com.au/publication/meeting-emerging-challenges-climate-related-disclosures/">Meeting Emerging Challenges: Climate-related Disclosures</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></description>
										<content:encoded><![CDATA[<p>In parts 1 and 2 of our special 3-part series, we explored the evolving landscape of climate disclosure and sustainability reporting. We covered Australia’s climate change journey to date, the new mandatory sustainability reporting laws and standards being introduced, and the implications for directors.</p>
<p>In part 3, we outline some of the significant challenges companies face in meeting sustainability reporting obligations, particularly with governance, strategy, metrics, and risk management.</p>
<h1>The challenges</h1>
<p>Beyond the compliance challenges of meeting reporting obligations for governance, strategy, metrics and targets, and risk management, entities may need to consider and overcome some less obvious challenges, including those listed below.</p>
<h2>Greenwashing</h2>
<p>A big risk with reporting will be &#8220;greenwashing&#8221;, or engaging in misleading and deceptive conduct, which is actionable under the Australian Consumer Law, the Australian Securities and Investments Commission Act and the Corporations Act. Companies should be aware that <a href="https://asic.gov.au/regulatory-resources/financial-services/how-to-avoid-greenwashing-when-offering-or-promoting-sustainability-related-products/" target="_blank" rel="noopener">ASIC</a> has stated that it is prioritising &#8220;misleading conduct in relation to sustainable finance including greenwashing&#8221; in 2023 and 2024.</p>
<p><img decoding="async" class="aligncenter wp-image-10275" src="https://inconsult.com.au/wp-content/uploads/2022/08/Greenwashing-redflags-300x166.png" alt="climate greenwashing" width="660" height="365" srcset="https://inconsult.com.au/wp-content/uploads/2022/08/Greenwashing-redflags-300x166.png 300w, https://inconsult.com.au/wp-content/uploads/2022/08/Greenwashing-redflags-1224x679.png 1224w, https://inconsult.com.au/wp-content/uploads/2022/08/Greenwashing-redflags-768x426.png 768w, https://inconsult.com.au/wp-content/uploads/2022/08/Greenwashing-redflags-1536x852.png 1536w, https://inconsult.com.au/wp-content/uploads/2022/08/Greenwashing-redflags-2048x1136.png 2048w" sizes="(max-width: 660px) 100vw, 660px" /></p>
<p>The legislation will provide reporting entities and their officers with limited protection for up to 3 years against actions for misleading and deceptive conduct brought by parties other than ASIC in relation to certain statements. This will include forward-looking statements made in the sustainability reports and auditors’ reports.</p>
<h2>Data quality</h2>
<p>Some of the most challenging parts of the disclosures will include:</p>
<ul>
<li>estimating scope 3 emissions, particularly in large and complex supply chains;</li>
<li>consistency and accessibility of climate scenarios and decarbonisation pathways across economies, sectors and industries, and how these affect individual sectors or firms; and</li>
<li>access to reliable and comparable sustainability data beyond emissions.</li>
</ul>
<h2>Record retention</h2>
<p>Organisations must retain good, accurate records to support the decisions and determinations they make.</p>
<h2>Silos</h2>
<p>Entities should take a holistic approach to sustainability and decarbonisation, including setting goals and integrating climate data and records from across the organisation into a centralised repository. The approach an entity takes to sustainability should be consistent across the organisation, and the board must oversee and drive sustainability as it does its other business development strategies.</p>
<h1>Future sustainability disclosures</h1>
<p>While developing procedures to comply with climate disclosures, entities should keep in mind that although the current proposed measures, including the financial risk disclosures, are directed at climate risks, climate disclosures are only the first priority for sustainable finance reform; subsequent reforms and platforms are intended to cover other sustainability-related issues. To this end, the laws will be drafted to allow financial disclosures to expand to nature-related issues.</p>
<h1>Four areas to begin with</h1>
<p>Reporting entities should start preparing now instead of waiting for the Bill to be enacted and the Sustainability Standards to be finalised. How burdensome the reporting process will be will largely depend on how far along the sustainability path the company is, and the resources and knowledge available to it.</p>
<p>Tackling the following questions will help you get ahead of reporting requirements:</p>
<h2>1. Understand the law’s scope and applicability</h2>
<p>Understand your company&#8217;s climate-related reporting obligations. What are the differences between the Sustainability Standards and any current disclosures your company makes?</p>
<p>Stay informed about upcoming regulations and plan for incremental implementation of new measures.</p>
<p>To reduce the risk of greenwashing, ensure accurate and truthful reporting and implement robust internal review processes.</p>
<h2>2. Conduct a readiness assessment</h2>
<p>Review your company’s current governance structures in respect to responsibilities and accountabilities around climate change as well as the skills, capabilities and resources available, including those needed to conduct climate-related scenario analyses. Where are the gaps?</p>
<h2>3. Conduct a risk assessment</h2>
<p>Conduct a risk assessment to understand your company&#8217;s exposure and vulnerability to climate-related risks and opportunities that could reasonably be expected to affect the company&#8217;s prospects, and how it will respond to mitigate, adapt to and manage climate risks.</p>
<p>Determine your company’s strategy and the resilience of its business model to the direct and indirect effects of climate change.</p>
<p>Adopt widely accepted climate scenarios and regularly update scenarios based on latest research and data. Align with industry standards where possible.</p>
<p>What are the likely impacts of these climate influences on the company&#8217;s:</p>
<ul>
<li>financial position and performance for the current reporting period, and over the short, medium and long term?</li>
<li>strategy, financial planning and decision-making?</li>
</ul>
<h2>4. Data management and targets</h2>
<p>Consider appropriate climate-related targets. How will your company measure and monitor its progress in achieving its climate-related strategic goals and any externally driven targets it is legally required to meet?</p>
<p>Can you collaborate with suppliers to obtain accurate data for estimating scope 3 emissions?</p>
<p>Is the use of standardised methods of estimation appropriate for your organisation?</p>
<p>Can you leverage technology for data collection and analysis?</p>
<p>Does the company retain and manage all the necessary data for disclosures in a single centralised repository?</p>
<p>To address the risk of silos, foster cross-departmental collaboration and integrate climate data into all business processes.</p>
<p>Establish clear and measurable climate-related targets and monitor and report progress regularly.</p>
<h2>The bottom line</h2>
<p>The impending mandatory sustainability reporting requirements mark a pivotal shift in how Australian corporations must address their environmental responsibilities. As the global community intensifies its focus on combating climate change, pollution, and biodiversity loss, Australian companies must prepare for the stringent climate disclosure mandates being legislated.</p>
<p>The path to compliance may be challenging, but the rewards of safeguarding both the planet and long-term business viability are immense.</p>
<h2>We are here to help you manage climate risk</h2>
<p>By acting now, you can turn these challenges into opportunities, positioning your company as a leader in sustainability and corporate responsibility. Don’t wait for the legislation to take effect—begin your journey towards a more sustainable and resilient future today.</p>
<p>Together, let’s build a more sustainable tomorrow.</p>
<p><a href="https://inconsult.com.au/contact-us/">Contact us</a> to embark on a journey towards authentic environmental stewardship and responsible business practices.</p>
<div class='printomatic pom-default ' id='id3306'  data-print_target='body'></div>The post <a href="https://inconsult.com.au/publication/meeting-emerging-challenges-climate-related-disclosures/">Meeting Emerging Challenges: Climate-related Disclosures</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>New Laws New Risks: Climate-related Disclosures</title>
		<link>https://inconsult.com.au/publication/new-laws-new-risks-climate-related-disclosures/</link>
		
		<dc:creator><![CDATA[Tony Harb]]></dc:creator>
		<pubDate>Fri, 12 Jul 2024 00:31:23 +0000</pubDate>
				<guid isPermaLink="false">https://inconsult.com.au/?post_type=publication&#038;p=11797</guid>

					<description><![CDATA[<p>In part 1 of our special 3-part series, we explored the evolving landscape of climate disclosure and sustainability reporting including Australia’s climate change journey, the imminent changes to climate change disclosure and how more than ever, directors will be in the spotlight. In part 2, we now take a detailed look into the new mandatory [&#8230;]</p>
The post <a href="https://inconsult.com.au/publication/new-laws-new-risks-climate-related-disclosures/">New Laws New Risks: Climate-related Disclosures</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></description>
										<content:encoded><![CDATA[<p>In part 1 of our special 3-part series, we explored the evolving landscape of climate disclosure and sustainability reporting including Australia’s climate change journey, the imminent changes to climate change disclosure and how more than ever, directors will be in the spotlight.</p>
<p>In part 2, we now take a detailed look into the new mandatory sustainability reporting laws as these new laws mean new risks and challenges.</p>
<h2>The law</h2>
<p>Following a period of consultation, which resulted in some amendments to the proposed climate <a href="https://treasury.gov.au/consultation/c2024-466491">reporting requirements</a>, on 27 March 2024 the Treasurer introduced the <em>Treasury Laws Amendment (Financial Market Infrastructure and Other Measures) Bill 2024</em> (“Bill”) into Parliament.</p>
<p>Schedule 4 of the Bill proposes a new mandatory climate risk reporting framework for large Australian companies. The Schedule will amend the <em>Corporations Act</em> to include an obligation for certain companies to prepare annual sustainability reports.</p>
<p>The aim of the new legislation is to provide better comparability in the data reported by large Australian companies, enabling investors to make informed investment decisions, benefit from the net zero transformation and manage climate change challenges.</p>
<h2>The Standards</h2>
<p>To accompany the new legislation, accounting standards are being developed by the Australian Accounting Standards Board (&#8220;AASB&#8221;) which will describe in more detail the required content of the disclosures. These are guided by the International Sustainability Standards Board&#8217;s IFRS S1 and IFRS S2 Sustainability Standards to ensure global consistency in reporting, but with some differences to better suit the Australian corporate and reporting environment. Exposure Draft ED SR1 Australian Sustainability Reporting Standards &#8211; Disclosure of Climate-related Financial Information contains three draft Australian Sustainability Reporting Standards (&#8220;ASRS&#8221;) (&#8220;the Sustainability Standards&#8221;) and is currently available for review.</p>
<p>One big difference between the proposed ASRS 1 and IFRS S1 is that the Australian Sustainability Standard only refers to climate change disclosures, whereas IFRS S1 encompasses sustainability disclosures more broadly.</p>
<p>To align with the Australian Government’s direction to address climate-related financial disclosures first, the AASB is developing climate-related financial disclosure requirements that can, at least initially, be applied independently of any broader sustainability reporting framework. This approach would permit additional time to consider the development of reporting requirements for other sustainability-related matters in Australia over time.</p>
<h2>What will need to be reported?</h2>
<p>The Bill, along with the draft Sustainability Standards, gives us guidance on what the basic contents of the annual sustainability report will likely need to include. These are:</p>
<ul>
<li>a climate statement in relation to the reporting company for the financial year, including any notes specified by the Sustainability Standards. Required notes will likely include notes on the preparation of climate statements, the statement contents, or other matters concerning environmental sustainability;</li>
<li>any statements or other specific inclusions prescribed by regulations relating to the statements including those relating to financial matters concerning environmental sustainability; and</li>
<li>a directors’ declaration that the substantive provisions in the report comply with the Corporations Act and the Sustainability Standards. The Bill proposes a 3-year transition period, where directors need only declare that the company has taken reasonable steps to ensure that the substantive provisions of the report comply.</li>
</ul>
<p>If a reporting company determines that it has no material climate related risks or opportunities, it will need to disclose that fact in its general-purpose financial reports and explain how it reached this determination.</p>
<h2>Climate-related Disclosures</h2>
<p>The disclosures must fairly present the entity’s risks and opportunities. The climate statements and notes, when read together, must disclose:</p>
<ul>
<li>any material climate-related financial risks and opportunities that could reasonably be expected to affect an entity’s prospects;</li>
<li>the climate metrics and targets prescribed by the Standards for disclosure, including scope 1, scope 2 and scope 3 emissions;</li>
<li>information about governance, strategy and risk management in relation to the risks, opportunities, metrics and targets.</li>
</ul>
<p>The core content of climate-related financial disclosures will need to contain information from the 4 pillars identified in the TCFD Framework, being governance, strategy, risk management, and metrics and targets.</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-11802" src="https://inconsult.com.au/wp-content/uploads/2024/07/4-pillars-300x138.png" alt="climate" width="747" height="344" srcset="https://inconsult.com.au/wp-content/uploads/2024/07/4-pillars-300x138.png 300w, https://inconsult.com.au/wp-content/uploads/2024/07/4-pillars-1224x565.png 1224w, https://inconsult.com.au/wp-content/uploads/2024/07/4-pillars-768x354.png 768w, https://inconsult.com.au/wp-content/uploads/2024/07/4-pillars-1536x709.png 1536w, https://inconsult.com.au/wp-content/uploads/2024/07/4-pillars-2048x945.png 2048w" sizes="(max-width: 747px) 100vw, 747px" /></p>
<h2>Audit requirements</h2>
<p>The annual sustainability reports will need to be audited in accordance with new assurance standards being developed by the Australian Auditing and Assurance Standards Board (&#8220;AUASB&#8221;), to be phased in from 1 January 2025.</p>
<h2>Who will need to report?</h2>
<p>Three classes of entities will be required under the new legislation to prepare sustainability reports:</p>
<ol>
<li>Large companies that are required to prepare and lodge annual reports under Chapter 2M of the Corporations Act</li>
<li>Asset owners (e.g. superannuation companies and registered schemes) with more than $5 billion funds under management</li>
<li>Entities of any size subject to both the annual reporting requirements under the Corporations Act and emissions reporting obligations under the National Greenhouse and Energy Reporting Act 2007.</li>
</ol>
<p>Although small and medium businesses below the relevant size thresholds, charities and not-for-profits will be exempt from reporting under the Bill, some of them may be asked by reporting entities in their value chain to disclose similar information to help the reporting entity prepare meaningful and accurate disclosures, particularly in respect of their scope 3 emissions reporting.</p>
<h2>When do the reporting obligations begin?</h2>
<p>Reporting entities will be classified into one of three groups with phased starting periods, as shown in the table below:</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-11800" src="https://inconsult.com.au/wp-content/uploads/2024/07/new-laws-new-risks-300x123.png" alt="climate" width="756" height="310" srcset="https://inconsult.com.au/wp-content/uploads/2024/07/new-laws-new-risks-300x123.png 300w, https://inconsult.com.au/wp-content/uploads/2024/07/new-laws-new-risks-1224x501.png 1224w, https://inconsult.com.au/wp-content/uploads/2024/07/new-laws-new-risks-768x314.png 768w, https://inconsult.com.au/wp-content/uploads/2024/07/new-laws-new-risks-1536x629.png 1536w, https://inconsult.com.au/wp-content/uploads/2024/07/new-laws-new-risks-2048x838.png 2048w" sizes="(max-width: 756px) 100vw, 756px" /></p>
<h2>Overcoming the challenges</h2>
<p>Beyond the compliance challenges of meeting reporting obligations for governance, strategy, metrics and targets, and risk management, entities will need to consider and overcome some other, less obvious challenges.</p>
<p>In part 3, we will delve deeper into some of the challenges and look at strategies to overcome them.</p>
<h2>The bottom line</h2>
<p>The impending mandatory sustainability reporting requirements mark a pivotal shift in how Australian corporations must address their environmental responsibilities. As the global community intensifies its focus on combating climate change, pollution, and biodiversity loss, Australian companies must prepare for the stringent climate disclosure mandates being legislated.</p>
<p>The path to compliance may be challenging, but the rewards of safeguarding both the planet and long-term business viability are immense.</p>
<h2>We are here to help you manage climate risk</h2>
<p>By acting now, you can turn these challenges into opportunities, positioning your company as a leader in sustainability and corporate responsibility. Don’t wait for the legislation to take effect—begin your journey towards a more sustainable and resilient future today.</p>
<p>Together, let’s build a more sustainable tomorrow.</p>
<p><a href="https://inconsult.com.au/contact-us/">Contact us</a> to embark on a journey towards authentic environmental stewardship and responsible business practices.</p>
<div class='printomatic pom-default ' id='id8827'  data-print_target='body'></div>The post <a href="https://inconsult.com.au/publication/new-laws-new-risks-climate-related-disclosures/">New Laws New Risks: Climate-related Disclosures</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Facing the Future: Climate and Sustainability Reporting</title>
		<link>https://inconsult.com.au/publication/facing-the-future-climate-and-sustainability-reporting/</link>
		
		<dc:creator><![CDATA[Tony Harb]]></dc:creator>
		<pubDate>Tue, 25 Jun 2024 21:23:19 +0000</pubDate>
				<guid isPermaLink="false">https://inconsult.com.au/?post_type=publication&#038;p=11756</guid>

					<description><![CDATA[<p>As the world grapples with the adverse and uncertain consequences of climate change, pollution, and biodiversity loss, driven by human activities like burning fossil fuels and deforestation, the stakes for sustainability and corporate responsibility have never been higher. The financial ramifications of failing to act sustainably—ranging from biodiversity loss to carbon emissions—are potentially immense. Governments [&#8230;]</p>
The post <a href="https://inconsult.com.au/publication/facing-the-future-climate-and-sustainability-reporting/">Facing the Future: Climate and Sustainability Reporting</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></description>
										<content:encoded><![CDATA[<p>As the world grapples with the adverse and uncertain consequences of climate change, pollution, and biodiversity loss, driven by human activities like burning fossil fuels and deforestation, the stakes for sustainability and corporate responsibility have never been higher. The financial ramifications of failing to act sustainably—ranging from biodiversity loss to carbon emissions—are potentially immense.</p>
<p>Governments worldwide, including Australia, are beginning to respond by setting decarbonisation targets and with legislative measures demanding mandatory disclosures of sustainability and/or climate change risks. New legislation currently being debated in Parliament will see some Australian companies soon facing significant regulatory changes.</p>
<p>Corporations and their directors must recognise and proactively mitigate climate change and other sustainability risks through adopting sustainable practices and strategies including decarbonising their businesses, understanding and managing their supply chains, and robust stakeholder engagement.</p>
<p>As the expectations on corporate governance intensify, businesses must prepare now not only to protect their financial interests but also contribute to a more resilient and sustainable future for their companies and our planet.</p>
<p>In a special series of articles, InConsult will explore the changing climate change and sustainability reporting landscape in Australia.</p>
<h2>Australia’s climate change journey</h2>
<p>Australia&#8217;s journey in climate change action has been marked by significant milestones and policy shifts. The current urgency to take meaningful action to decarbonise our economy, mitigate the risks and adapt to the hazards comes despite the initial climate change steps dating back to the late 1980s and early 1990s. The early actions followed the first meeting of an international working group of climate experts, the Intergovernmental Panel on Climate Change (the IPCC), in 1988. Since then, there have been 6 IPCC assessment reports and 28 international climate meetings (COPs) to determine global climate responses.</p>
<p>We find ourselves still floundering over how to tackle this immense issue because of a lack of consistent, science-based responses by successive governments. Over the years, Australian governments have, among other things, commissioned reports, established greenhouse gas plans, climate strategies, advisory panels, environmental statutory bodies and agencies, and emission reduction schemes, and issued numerous discussion papers on emissions trading schemes. Successive governments have disbanded many of their predecessors&#8217; actions to adopt a different strategy, leaving businesses with uncertainty and all of us in a position where action is becoming more urgent and will need to be more extreme if we are to avoid irreversible climate breakdown.</p>
<h2>Legislated climate change disclosure changes are imminent</h2>
<p>Australia’s sustainability and climate change expectations have risen dramatically in recent years, reflecting a global shift towards greater environmental accountability and sustainability. This change is driven by growing awareness of the severe impacts of climate change, including more frequent and intense bushfires, droughts, and flooding. Public opinion has increasingly demanded action, compelling both government and private sectors to respond more robustly.</p>
<p>Governments, including the Australian Government, have made commitments to reaching net zero emissions and are developing strategies to achieve these goals. In many jurisdictions these strategies include legislating for mandatory meaningful and consistent disclosures of companies’ material sustainability and climate change risks.</p>
<p>As part of the Commonwealth Government&#8217;s strategy to tackle climate change, it is amending treasury laws through the new Treasury Laws Amendment (Financial Market Infrastructure and Other Measures) Bill 2024 (the Bill) and evolving Sustainability Standards. The objective is to ensure better disclosure and reporting of sustainability and climate change measures by particular classes of entities.</p>
<p>This represents a significant legislative shift which, if passed in its current form, will mandate annual sustainability disclosure reports from large companies. Accompanying the legislation will be a set of new Sustainability Reporting Standards. These Standards will align with international standards and mark a new era of comprehensive climate risk reporting in Australia, reinforcing the country&#8217;s commitment to combating climate change and promoting sustainability.</p>
<p>Although the initial disclosure requirements are for climate change risks and opportunities, the intention is that in the future companies will be required to report on other sustainability risks and opportunities.<br />
Australia’s multifaceted strategy not only aims to mitigate climate risks but also positions Australia as a more proactive participant in the global effort to combat climate change, ensuring long-term economic and environmental resilience.</p>
<p>Although the Australian legislation is not yet in effect, in a later paper we will identify some of the challenges that disclosing companies may need to anticipate and the preparations they should begin working on now.</p>
<h2>Growing climate change expectations</h2>
<p>Corporate responsibility is under the spotlight, with stakeholders—including consumers, investors, and regulators—placing increasing emphasis on sustainable practices.</p>
<p>Alongside the unprecedented environmental challenges that the world is beginning to experience, comes increasing expectations for companies to act as responsible corporate citizens and earn their social licence to operate. Companies and their directors are facing growing scrutiny over, and liability for, their practices particularly concerning their contribution to climate change.</p>
<p>Consumers, investors, lenders, governments and regulators are placing a growing emphasis on the need for companies to accept responsibility for their contribution to the sustainability and environmental problems in the world.</p>
<h2>Directors in the spotlight</h2>
<p>In Australia, it is well recognised that the directors’ duties to act in the best interests of the company require them to integrate sustainability and climate change considerations into corporate decision-making and governance when these issues are material to the financial interests of investors and of the financial position and performance of the company.</p>
<p>With the passing of the Bill, directors will be mandated to ensure their companies comply with rigorous sustainability reporting standards. This legislation will integrate climate risk into the core of corporate governance, aligning climate disclosures with global frameworks like the Taskforce on Climate-related Financial Disclosures (TCFD) and the International Sustainability Standards Board (ISSB).</p>
<p>Directors must oversee the preparation of annual sustainability reports, with detailed disclosures on climate-related financial risks. These reports will need to cover scope 1, scope 2, and scope 3 emissions, alongside comprehensive information about governance structures, strategies, risk management processes, and relevant metrics and targets. Directors will be responsible for ensuring that these disclosures are accurate, verifiable, and provide a fair representation of the company’s climate-related risks and opportunities.</p>
<p>Directors must consider the long-term impacts of climate change on the company’s financial performance and strategic direction. This will involve conducting thorough risk assessments, setting and monitoring climate-related targets, and ensuring robust data management practices. Directors will also be tasked with driving a consistent approach to sustainability across the organisation, fostering a culture of environmental responsibility.</p>
<h2>The challenges</h2>
<p>A significant challenge for directors is to prevent greenwashing, where misleading claims about sustainability can lead to regulatory and reputational damage. The Australian Securities and Investments Commission (ASIC) has underscored the importance of genuine and transparent reporting and has prioritised tackling greenwashing. Directors must ensure that the company&#8217;s sustainability claims are backed by solid evidence, are verifiable and align with regulatory expectations.</p>
<p>In essence, directors play a pivotal role in steering their companies towards sustainable practices. By embracing their responsibilities for climate and sustainability disclosures, they not only ensure compliance with emerging regulations but also contribute to a more resilient and sustainable future for their organisations and society at large.</p>
<h2>Reporting climate-related financial risks</h2>
<p>Even without the new legislation, reporting on climate-related financial risks in Australia is becoming an essential aspect of corporate governance and sustainability strategies. The increasing awareness of climate change impacts, coupled with regulatory pressures, has led to significant developments in how companies disclose these risks.</p>
<p>The Taskforce on Climate-related Financial Disclosures (&#8220;TCFD&#8221;) first published its reporting recommendations, establishing a global framework for disclosing climate risks, in 2017. Since then, many jurisdictions have made TCFD-aligned disclosures mandatory, and hundreds of sustainability standards have been written.</p>
<p>The International Sustainability Standards Board (&#8220;ISSB&#8221;) is developing a series of sustainability standards to assist entities in making globally consistent climate and sustainability disclosures. To date, over 20 jurisdictions, representing nearly 55% of global GDP (excluding the USA), have taken steps to introduce ISSB standards into their legal or regulatory frameworks, or to use them as a base for developing their own.</p>
<p>For example, the EU&#8217;s Corporate Sustainability Reporting Directive (&#8220;CSRD&#8221;) applies to companies with significant business in Europe, regardless of where they are based. The CSRD is broad in its application as well as in the depth of its disclosure requirements across a range of ESG issues. In the United States, the Securities and Exchange Commission (&#8220;SEC&#8221;) passed its climate-risk disclosure rule in March 2024. This is not as extensive as in other jurisdictions, as there is no requirement to measure and report on scope 3 emissions.</p>
<p>Until now, Australia has had limited mandatory climate financial disclosure requirements. Large energy generators and users have been required to report their greenhouse gas emissions under the <a href="https://cer.gov.au/schemes/national-greenhouse-and-energy-reporting-scheme">National Greenhouse and Energy Reporting Act</a> (&#8220;NGER Act&#8221;). The NGER Act requires large emitters to report their greenhouse gas emissions annually, providing a framework for measuring, reporting and verifying emissions to ensure accuracy and consistency. Liable companies must disclose scope 1 (direct), scope 2 (indirect from energy consumption) and, increasingly, scope 3 (other indirect) emissions.</p>
<p>Financial and corporate regulators including APRA, <a href="https://asic.gov.au/about-asic/news-centre/articles/asic-s-current-focus-what-are-the-regulator-s-expectations-on-sustainability-related-disclosures/">ASIC</a> and the ASX have published guidelines for the recommended disclosure of material climate-related financial risks under existing obligations to disclose material risks.</p>
<p>Although many entities in Australia consider climate-related financial risks in their businesses, voluntarily publish sustainability or environmental reports, and/or report under frameworks such as the NGER scheme, most do not report in accordance with the full TCFD recommendations.</p>
<p>To make reporting more consistent, transparent and meaningful for users, the Bill will make climate-related financial disclosures mandatory for certain entities and will set out the reporting requirements.</p>
<p>The introduction of standardised, internationally-aligned requirements for mandatory disclosure of climate-related financial risks and opportunities in Australia for large businesses is part of the government’s “Powering Australia” policy.</p>
<h2>New laws, new risks</h2>
<p>The implementation of these reporting obligations marks a transformative step in Australia’s approach to climate risk management. By adopting robust reporting practices, Australian companies can not only comply with regulatory requirements but also drive sustainability, attract responsible investment, and contribute to global efforts to combat climate change.</p>
<p>New laws mean new risks and challenges. In part 2, we take a detailed look into the new mandatory sustainability reporting laws.</p>
<h2>The bottom line</h2>
<p>The impending mandatory sustainability reporting requirements mark a pivotal shift in how Australian corporations must address their environmental responsibilities. As the global community intensifies its focus on combating climate change, pollution, and biodiversity loss, Australian companies must prepare for the stringent climate disclosure mandates being legislated.</p>
<p>The path to compliance may be challenging, but the rewards of safeguarding both the planet and long-term business viability are immense.</p>
<h2>We are here to help you manage climate risk</h2>
<p>By acting now, you can turn these challenges into opportunities, positioning your company as a leader in sustainability and corporate responsibility. Don’t wait for the legislation to take effect—begin your journey towards a more sustainable and resilient future today.</p>
<p>Together, let’s build a more sustainable tomorrow.</p>
<p><a href="https://inconsult.com.au/contact-us/">Contact us</a> to embark on a journey towards authentic environmental stewardship and responsible business practices.</p>
The post <a href="https://inconsult.com.au/publication/facing-the-future-climate-and-sustainability-reporting/">Facing the Future: Climate and Sustainability Reporting</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>New 2024 Internal Audit Standards: Insights for CAEs</title>
		<link>https://inconsult.com.au/publication/new-2024-internal-audit-standards-insights-for-caes/</link>
		
		<dc:creator><![CDATA[Tony Harb]]></dc:creator>
		<pubDate>Fri, 26 Apr 2024 03:05:55 +0000</pubDate>
				<guid isPermaLink="false">https://inconsult.com.au/?post_type=publication&#038;p=11723</guid>

					<description><![CDATA[<p>The new 2024 Global Internal Audit Standards by The Institute of Internal Auditors (IIA) introduce several significant updates designed to enhance the practice and relevance of internal auditing in today&#8217;s turbulent and complex business environment.  The key changes reflect the profession’s evolution, accommodating newer challenges and ensuring the standards meet current needs effectively. The Standards [&#8230;]</p>
The post <a href="https://inconsult.com.au/publication/new-2024-internal-audit-standards-insights-for-caes/">New 2024 Internal Audit Standards: Insights for CAEs</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></description>
										<content:encoded><![CDATA[<p>The new 2024 Global Internal Audit Standards by The Institute of Internal Auditors (IIA) introduce several significant updates designed to enhance the practice and relevance of internal auditing in today&#8217;s turbulent and complex business environment.  The key changes reflect the profession’s evolution, accommodating newer challenges and ensuring the standards meet current needs effectively. <a href="https://www.theiia.org/en/standards/2024-standards/global-internal-audit-standards/">The Standards</a> will be effective from 9 January 2025 which provides an opportunity for Internal Audit (IA) functions to reflect on their current practices.</p>
<p>The Chief Audit Executive (CAE) now has a significant opportunity to incorporate the latest developments in good practice and drive transformation to increase the value that IA can provide to stakeholders.</p>
<h2>The Key Changes</h2>
<p>The 2024 Global Internal Audit Standards mark a significant step forward in aligning internal audit practices with modern business challenges and governance expectations. By structuring the standards into five domains and emphasising areas like cybersecurity, IT governance, and ethical conduct, the IIA aims to enhance the professionalism, efficiency, and impact of internal audit functions globally. Organisations are encouraged to transition to these updated standards ahead of their January 2025 effective date to maximise their internal audit function&#8217;s alignment with contemporary governance and risk management practices.</p>
<h3>A Restructure &#8211; One Document</h3>
<p>The 2024 Standards have been restructured for better clarity and practical application. The Standards are now combined into one document comprising the five mandatory components &#8211; Purpose of Internal Auditing, Ethics and Professionalism, Governing the Internal Audit Function, Managing the Internal Audit Function, and Performing Internal Audit Services &#8211; as well as one recommended non-mandatory element, the Implementation Guidance. The Standards use the word “must” in the Requirements sections and the words “should” and “may” to specify common and preferred practices in the Considerations for Implementation sections.</p>
<p>This new structure aims to streamline the standards for easier navigation and application in diverse auditing environments.</p>
<p>Both assurance and advising (formerly consulting) initiatives are included in the main body of the Standards and are not distinguished from one another by the Standards. With very few exceptions, the requirements for advisory and ad hoc initiatives now resemble those of risk-based assurance audits.</p>
<p>The only non-mandatory section of the International Professional Practices Framework (IPPF) is the IIA’s ‘Global Guidance’, which provides information, advice and leading practices for performing engagements.</p>
<p style="text-align: center;"><img loading="lazy" decoding="async" class="alignnone wp-image-11732" src="https://inconsult.com.au/wp-content/uploads/2024/04/Screenshot-2024-04-26-122204-300x169.png" alt="Internal Audit International Professional Practices Framework" width="762" height="429" srcset="https://inconsult.com.au/wp-content/uploads/2024/04/Screenshot-2024-04-26-122204-300x169.png 300w, https://inconsult.com.au/wp-content/uploads/2024/04/Screenshot-2024-04-26-122204-1224x689.png 1224w, https://inconsult.com.au/wp-content/uploads/2024/04/Screenshot-2024-04-26-122204-768x433.png 768w, https://inconsult.com.au/wp-content/uploads/2024/04/Screenshot-2024-04-26-122204.png 1408w" sizes="(max-width: 762px) 100vw, 762px" /></p>
<p style="text-align: center;"><em>The 5 Domains and 15 Principles of the new International Professional Practices Framework (IPPF)</em></p>
<h3>Refined Purpose of Internal Auditing</h3>
<p>The previous Standards focused broadly on the purpose and necessity of standards for internal auditing effectiveness. The 2024 Standards clarify that internal auditing serves to enhance and protect organisational value, guiding adherence to a systematic, disciplined approach.</p>
<h3>Stronger Emphasis on Ethics and Professionalism</h3>
<p>The 2024 revision introduces a stronger emphasis on ethics and professionalism, consolidating related standards to ensure internal auditors uphold integrity, objectivity, and confidentiality in their conduct.</p>
<h3>New Governance Framework</h3>
<p>The Governing the Internal Audit Function domain is new in 2024 and underscores the importance of proper governance structures for internal auditing, highlighting the roles and responsibilities of the board and executive management in supporting the audit function.</p>
<p>According to the IIA, the new standards aim to strengthen governance frameworks to help organisations be more responsive to rapidly changing conditions.</p>
<h3>Unified Approach and Leadership Involvement</h3>
<p>The standards emphasize the need for a unified approach to internal auditing that involves board or equivalent oversight. This alignment is intended to strengthen the organisation&#8217;s overall approach to risk management and optimize assurance and monitoring activities.</p>
<p>Domain III, ‘Governing the Internal Audit Function’, specifies what the CAE must do in order to support the Board and Senior Management to perform necessary oversight responsibilities for an effective IA function.</p>
<p>Each of the Standards in Domain III now defines the ‘Essential Conditions’ that the Board and Senior Management must have in place for the IA function to meet its mandate and fulfil the Purpose of Internal Auditing.</p>
<h3>Aligning Internal Audit Planning and Performance Evaluation</h3>
<p>There is additional focus on the internal audit’s mandate, vision, strategic planning, and performance measurement. This is aimed at ensuring that internal audits are strategically aligned with the organisation&#8217;s goals and are effectively tracking and evaluating their findings and impact.</p>
<p>To support the organisation&#8217;s success and strategic objectives, the CAE must now create and implement an IA strategy that meets the expectations of the Board, Senior Management, and other key stakeholders. This includes creating a vision, strategic goals, and supporting initiatives for the IA function.</p>
<h3>Building Trust and Relationships</h3>
<p>The CAE must create a strategy for the IA function to cultivate strong relationships, connections and confidence with key stakeholders. The guidance recommends surveys, interviews, workshops, and ongoing informal contact with the organisation&#8217;s staff.</p>
<p>There&#8217;s a greater emphasis on how internal audit functions serve the public interest, alongside new requirements for quality assurance and improvement programs. This reflects a broader scope in the governance role of internal audits.</p>
<h3>Execution &#8211; Planning, Performing and Reporting</h3>
<p>The latest standards enhance the focus on the execution of internal audit engagements, detailing methodologies for risk assessment, engagement planning, and reporting. The standards also incorporate current trends such as cybersecurity and information technology governance.</p>
<p>It is now a requirement to have &#8220;an engagement conclusion that summarises the engagement results relative to the engagement objectives and management&#8217;s objectives.&#8221; Engagement findings must be prioritised according to their level of significance. In the &#8220;Considerations for Implementation&#8221; sections, ratings and rankings are suggested as a better practice but are not required.</p>
<h3>Internal Audit Technology</h3>
<p>While the 2017 Standards focused on individual and organisational attributes for effective auditing, the 2024 Standards provide a comprehensive framework on managing audit resources, skills, and technological tools to maintain functionality and adapt to organisational changes.</p>
<p>The chief audit executive must now regularly evaluate the technology used by the internal audit function, pursue opportunities to improve effectiveness and efficiency, and engage with the organisation&#8217;s IT and cyber security functions.</p>
<h3>Internal Audit Performance</h3>
<p>In order to assess the effectiveness of the IA function, the CAE must set objectives and evaluate IA performance. Example Key Performance Indicators (KPIs) to be taken into account when implementing the Standard include:</p>
<ul>
<li>Percentage of the organisation&#8217;s key risks and controls reviewed</li>
<li>Percentage of the internal audit plan (as adjusted and approved) completed on time</li>
<li>Percentage of recommendations or action plans completed by management</li>
</ul>
<p>The objectives and KPIs should form part of the CAE&#8217;s performance measurement approach, which should also include an action plan to address issues and areas for improvement.</p>
<h3>More Flexibility and Relevance</h3>
<p>The standards have been updated to be more flexible, allowing them to be more relevant across various industries and geographic regions. This includes specific guidance for public sector audits and smaller audit functions, ensuring adaptability to different global contexts.</p>
<p>The draft 2023 Standards were widely considered too prescriptive and difficult to implement, especially for smaller IA functions. Chief Audit Executives (CAEs) now have more leeway in how they apply the 2024 Standards, as many of the &#8220;must&#8221; requirements from the draft have been moved to the &#8220;Considerations for Implementation&#8221; sections of the Standards.</p>
<h3>New Topical Requirements</h3>
<p>New guidance addresses contemporary risk areas like Cybersecurity, Information Technology Governance, Privacy Risk Management, Sustainability, ESG (Environmental, Social &amp; Governance), and Third-party Management. These additions aim to help internal audit functions focus on strategic risks and enhance their value to stakeholders.</p>
<h3>Emphasis on Quality Assurance and Improvement</h3>
<p>There is a renewed focus on continuous improvement and quality assurance in internal auditing, urging functions to implement regular and systematic reviews of their activities and outcomes.</p>
<h2>Implications for Key Stakeholders</h2>
<p>So what does this mean for key stakeholders like the Board, Audit and Risk Committee and the C-suite?</p>
<ol>
<li>The 2024 IPPF emphasises a more strategic role for internal auditing within governance frameworks. This includes a greater emphasis on risk management and ensuring that internal audit activities are aligned with the broader strategic objectives of the organisation. This alignment is crucial for ensuring that internal audit provides value in identifying and mitigating potential risks before they impact the organisation.</li>
<li>There is a renewed focus on ethics and professionalism within the internal audit sector. The 2024 IPPF consolidates standards related to ethical behaviour, integrity, objectivity, and confidentiality. This ensures that internal auditors are held to a high standard of conduct, which is critical for maintaining stakeholder trust and the credibility of the audit function.</li>
<li>The new framework incorporates contemporary risk areas such as cybersecurity and information technology governance. This update acknowledges the increasing significance of technology in business processes and the associated risks. Ensuring that internal audits cover these areas can help protect organisations against emerging threats and enhance their resilience.</li>
</ol>
<h2>Step-by-Step Guide to Adapting to the 2024 Changes</h2>
<p>As a Chief Audit Executive, you play a critical role in transitioning your organisation to align with the new 2024 Global Internal Audit Standards. Here’s our strategic roadmap to guide your next steps:</p>
<ol>
<li>Begin by thoroughly understanding the key changes in the 2024 standards that are likely to impact your IA function and organisation. Focus on the restructured domains, new focus areas like cybersecurity, and the enhanced requirements for governance and risk management.</li>
<li>Conduct a comprehensive review of your current internal audit practices against the 2024 standards. Identify areas of compliance and gaps where enhancements are needed, particularly in the areas of IT governance, ethics, and professionalism.</li>
<li>Revise your internal audit charter and other key documents to reflect the changes in the standards. This includes updating the audit plan, risk assessment methodologies, and reporting formats. Ensure you enhance your quality assurance and improvement program to ensure continuous compliance with the new standards. Set up regular reviews and audits to monitor adherence and effectiveness.</li>
<li>Review IA resources and potential capability and training needs.</li>
<li>Engage with key stakeholders, including the board of directors, senior management, and audit committees, to discuss the implications of the new standards and the expected changes in the internal audit function.</li>
<li>Clearly communicate the changes and enhancements in your internal audit function to relevant stakeholders. Ensure transparency in how these changes improve governance, risk management, and overall organisational resilience.</li>
<li>Begin implementing the necessary changes to align with the new standards. This may involve enhancing IT systems, revising governance structures, and introducing new audit tools and technologies.</li>
<li>Continuously monitor the effectiveness of the new practices and make adjustments as necessary. Stay informed about any further updates from the IIA regarding the standards.</li>
</ol>
<h2>Ready to Transform Internal Audit?</h2>
<p>Are you ready to elevate your internal audit function, protect organisational value, and lead with confidence? Your journey towards internal audit excellence starts here.  Here is how we can help:</p>
<p><strong>Establishing a new internal audit function</strong>: We specialise in setting up comprehensive internal audit systems tailored to your specific business needs and budget. Our expert team provides end-to-end solutions—from assessing your current risk and controls and developing a strategic audit plan to implementing auditing processes that are in line with the new standards.</p>
<p><strong>Co-sourcing</strong>: We work alongside your internal audit team on specific projects, providing additional expertise or manpower where needed.</p>
<p><strong>Specialised expertise</strong>: We bring specialised knowledge that your internal team might not possess, such as IT audits, cybersecurity, regulatory compliance, insurance, reinsurance, ESG, sustainability and environmental audits.</p>
<p><strong>Technology support:</strong> With the increasing integration of technology in auditing processes, external auditors can assist in implementing new audit software, analytics tools, or other technologies that enhance the internal team&#8217;s capabilities.</p>
<p><a href="https://inconsult.com.au/contact-us/" target="_blank" rel="noopener noreferrer">Contact us</a> today to schedule a consultation and discover how our services can help your audit function rise to the challenges of the 2024 standards.</p>
The post <a href="https://inconsult.com.au/publication/new-2024-internal-audit-standards-insights-for-caes/">New 2024 Internal Audit Standards: Insights for CAEs</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Australia&#8217;s New Cyber Security Strategy</title>
		<link>https://inconsult.com.au/publication/australias-new-cyber-security-strategy/</link>
		
		<dc:creator><![CDATA[Tony Harb]]></dc:creator>
		<pubDate>Wed, 20 Dec 2023 21:00:33 +0000</pubDate>
				<guid isPermaLink="false">https://inconsult.com.au/?post_type=publication&#038;p=11509</guid>

					<description><![CDATA[<p>In November 2023, the Commonwealth Government unveiled its 2023-2030 Cyber Security Strategy, with the objective of positioning Australia as a &#8220;world leader in cyber security by 2030.&#8221; The strategy emphasizes six &#8220;cyber shields&#8221; aimed at fortifying the nation against cyber threats. This announcement signals the government&#8217;s intention to revise existing laws to enhance cyber security, [&#8230;]</p>
The post <a href="https://inconsult.com.au/publication/australias-new-cyber-security-strategy/">Australia’s New Cyber Security Strategy</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></description>
										<content:encoded><![CDATA[<p>In November 2023, the Commonwealth Government unveiled its 2023-2030 Cyber Security Strategy, with the objective of positioning Australia as a &#8220;world leader in cyber security by 2030.&#8221;</p>
<p>The <a href="https://www.homeaffairs.gov.au/cyber-security-subsite/files/2023-cyber-security-strategy.pdf" target="_blank" rel="noopener">strategy</a> emphasizes six &#8220;cyber shields&#8221; aimed at fortifying the nation against cyber threats. This announcement signals the government&#8217;s intention to revise existing laws to enhance cyber security, prompting a detailed examination of potential changes and recommended preparations for businesses.</p>
<p><a href="https://www.homeaffairs.gov.au/cyber-security-subsite/files/2023-cyber-security-strategy.pdf" target="_blank" rel="noopener"><img loading="lazy" decoding="async" class=" wp-image-11515 aligncenter" src="https://inconsult.com.au/wp-content/uploads/2023/12/6-cyber-shields-300x158.png" alt="6 cyber shields" width="619" height="326" srcset="https://inconsult.com.au/wp-content/uploads/2023/12/6-cyber-shields-300x158.png 300w, https://inconsult.com.au/wp-content/uploads/2023/12/6-cyber-shields-768x406.png 768w, https://inconsult.com.au/wp-content/uploads/2023/12/6-cyber-shields.png 888w" sizes="(max-width: 619px) 100vw, 619px" /></a></p>
<p style="text-align: center;"><em>The six cyber shields </em></p>
<h2>Overview of the Government’s Cyber Security Strategy</h2>
<p>Released on November 22, 2023, the Cyber Security Strategy outlines the government&#8217;s vision to bolster Australia&#8217;s cyber defences, making it a &#8220;hard target for cyber attacks.&#8221;</p>
<p>The strategy introduces six key &#8220;cyber shields&#8221; designed to safeguard Australians, covering areas such as strong businesses and citizens, safe technology, world-class threat sharing, protected critical infrastructure, sovereign capabilities, and a resilient region with global leadership.</p>
<p>The strategy is structured across three implementation horizons. The initial horizon (2023-2025) focuses on strengthening the foundations of cyber resilience, followed by a scaling of cyber maturity across the economy in the second horizon (2026-2028). The third horizon (2028-2030) aspires for Australia to attain global leadership in cyber security.</p>
<h2>The Action Plan</h2>
<p>The most relevant shield for people and businesses is Shield 1 &#8211; Strong businesses and citizens. In brief, this includes:</p>
<ol>
<li>Strengthening cyber security measures for small and medium businesses.</li>
<li>Empowering Australians by assisting individuals in defending themselves against cyber threats.</li>
<li>Taking actions to disrupt and deter cyber threat actors from targeting Australia.</li>
<li>Combating ransomware by collaborating with industry to dismantle the ransomware business model.</li>
<li>Providing clear and comprehensive cyber guidance for businesses.</li>
<li>Enhancing post-incident support by simplifying access to advice and support for businesses following a cyber incident.</li>
<li>Enhancing identity security and offering improved assistance to victims of identity theft.</li>
</ol>
<p>The government plans to support the Cyber Security Strategy through an accompanying Action Plan, providing details on how strategic aims will be achieved and specifying government agencies responsible for implementation.</p>
<h2>Key Law Reforms</h2>
<p>The Cyber Security Strategy identifies areas for law reform to align with its goals:</p>
<h4>No-Fault, No-Liability Reporting for Ransomware Attacks</h4>
<p>The government proposes legislation to establish a no-fault, no-liability reporting obligation for ransomware attacks. The objective is to enhance visibility and encourage timely disclosure by businesses, addressing current reluctance.</p>
<p>A &#8220;ransomware playbook&#8221; will be created to assist businesses in preparing for, dealing with, and recovering from ransomware or cyber-extortion attacks.</p>
<h4>No Specific Ban on Ransomware Payments</h4>
<p>Although not explicitly stated in the strategy, the government has refrained from imposing an immediate ban on ransomware payments. The possibility of a future ban will be reviewed in two years, with input from businesses and the community.</p>
<h4>Mandatory Cyber Security Standard for IoT Devices</h4>
<p>The government prioritizes legislation for a mandatory cyber security standard for Internet of Things (IoT) devices. A voluntary labelling scheme for consumer-grade smart devices will also be implemented.</p>
<h4>Improving Data Governance Standards and Obligations</h4>
<p>The government is considering Privacy Act reforms and intends to review legislative data retention requirements, especially regarding &#8220;non-personal data.&#8221; The &#8220;data brokerage ecosystem&#8221; will be assessed for potential risks associated with data transfer to malicious actors.</p>
<h4>Extending Critical Infrastructure Regulation</h4>
<p>Shield 4 focuses on upgrading and promoting the cyber resilience of critical infrastructure. The Security of Critical Infrastructure Act 2018 (SOCI Act) will be further revised, imposing stringent obligations on telecommunications companies regarding cyber incident reporting.</p>
<p>An overview of &#8220;corporate obligations&#8221; for critical infrastructure owners and operators will be published, and the Act will clarify the obligations of managed service providers.</p>
<p>A &#8220;consequence management power&#8221; will be introduced under the SOCI Act, allowing the government to direct entities in managing the aftermath of a &#8220;nationally significant incident.&#8221;</p>
<h2>Conclusion</h2>
<p>The Cyber Security Strategy represents a substantial step in enhancing Australia&#8217;s cyber resilience.</p>
<p>The proposed law reforms, particularly the introduction of a ransomware reporting obligation, underscore the government&#8217;s commitment to addressing evolving cyber threats.</p>
<p>Businesses in various sectors, including IoT device manufacturers, critical infrastructure operators, and managed service providers, should closely monitor these developments and prepare for potential legislative changes.</p>
<h2><strong>How can we help enhance cyber security?</strong></h2>
<p>We are here to help strengthen cyber resilience. Our cyber risk management capabilities include designing and developing a cyber risk management framework and a wide range of response plans to enhance your cyber resilience capabilities. Our cyber risk management services include:</p>
<ul>
<li>Vulnerability scanning</li>
<li>Cyber Security Gap Analysis</li>
<li>Regulation compliance advice</li>
<li>Cyber Risk Governance Framework Reviews</li>
<li>Cyber Risk Governance Framework Development</li>
<li>Third-Party Vendor Review and Cyber Risk Analysis</li>
<li>Cyber Risk Awareness Training and Internal Campaigns</li>
<li>Post-Cyber Incident Review</li>
<li>Email Phishing Campaigns</li>
<li>Cyber Incident Response</li>
<li>Crisis Team Familiarisation Training</li>
</ul>
<p>Be more resilient to a wide range of cyber risks and get relevant insight into how to protect your systems by <a href="https://inconsult.com.au/contact-us/">contacting us</a> to discuss how we can help strengthen your cyber resilience framework.</p>The post <a href="https://inconsult.com.au/publication/australias-new-cyber-security-strategy/">Australia’s New Cyber Security Strategy</a> first appeared on <a href="https://inconsult.com.au">InConsult</a>.]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
