Artificial intelligence (AI) is advancing at an exponential rate. Generative artificial intelligence, such as ChatGPT, has made headlines due to its technological disruption and accessibility.
Countries around the world have started assessing and implementing regulatory frameworks to manage the risks posed by AI. This update provides a snapshot of the current AI regulatory landscape in Australia, the European Union and the United Kingdom.
To date, Australia has taken a soft-law, principles-based approach to regulating AI. The Australian Government’s AI Ethics Framework, published in November 2019, sets out eight principles for ensuring AI is used in a way that is safe, secure and reliable. These voluntary principles may be applied by businesses or government when designing, developing, integrating and using AI.
While there are currently no laws or regulations specific to AI in Australia (although existing laws can and do apply), the Australian Government is continuing to consider specific AI regulation as part of its strategy to position Australia as a leader in digital economy regulation.
In June 2023, the Australian Government released its discussion paper Safe and Responsible AI in Australia for public consultation. This built on the recent Rapid Research Report on Generative AI delivered by the National Science and Technology Council.
The discussion paper seeks feedback on the steps Australia can take to mitigate any potential risks of AI and support safe and responsible AI practices. It sets out 20 questions for stakeholders to consider, which are designed to assist the Australian Government in ensuring that Australia continues to support responsible AI practices and to increase public trust and confidence in the development and use of AI in Australia.
The discussion paper’s consultation period for submissions closes 26 July 2023.
Globally, jurisdictions differ in how they approach the regulation and governance of AI. A complex question is whether prescriptive legislative intervention or principles-based regulation is the more appropriate approach, or whether it is more effective to adapt existing regulatory systems to accommodate AI.
In 2021, the European Union (EU) Commission released a proposed regulation for AI, the Artificial Intelligence Act (AI Act). The proposed AI Act would introduce an EU-wide framework to regulate the development, deployment and use of AI systems, to ensure the safe and responsible use of AI in the future.
While this approach has the advantage of consolidating specific rules relating to AI under one law, it remains to be seen whether this is the most appropriate response. Attempting to regulate technology with capabilities that are not fully understood poses a considerable challenge. There may also be different implications for different use cases in different sectors, including competing needs across sectors, that one piece of legislation cannot accommodate easily.
The AI Act is likely to become the international standard for AI regulation, much as the General Data Protection Regulation (GDPR) has for data protection.
The United Kingdom (UK) has expressed its intention to regulate AI by adopting a flexible, principles-based and pro-innovation approach. In March 2023, the UK Government published a policy paper outlining its regulatory framework approach.
The UK seeks to establish a framework underpinned by five principles:
- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress.
Existing regulators will be tasked with interpreting and applying the framework to AI within their regulatory remits. Adopting a regulator-led approach is a key difference from the EU approach and may create more space and flexibility to resolve competing needs and use cases.
Pending the anticipated introduction of AI-specific regulation in Australia and internationally, existing laws must not be ignored when designing, developing and using AI in Australia. These include:
- the Privacy Act 1988 (Cth), which regulates the collection and handling of “personal” information
- the Australian Consumer Law (ACL) and particularly the ACL’s misleading and deceptive conduct regime, which can apply where organisations make representations about the collection and use of personal information that do not accurately reflect its use in AI systems
- Commonwealth, state and territory anti-discrimination laws, which in principle apply to the use of AI
- Commonwealth, state and territory laws governing the use of surveillance devices
- state and territory defamation and criminal laws.
Businesses should also consider reviewing their internal processes and governance in anticipation of the introduction of AI regulatory frameworks domestically and internationally. While this area of law is evolving, businesses should position themselves to be adaptable and responsive both to existing laws and to new AI laws in future.