On 21 July, ArtsHub published the list of 16 submissions made by organisations in the creative sector to the Senate Select Committee on Adopting Artificial Intelligence (AI), which has conducted an inquiry and a series of hearings on the matter since March this year.
The Committee released its final report this week, with a list of 13 recommendations that seek to introduce AI-dedicated legislation, force transparency from AI companies owned by multinationals and ensure creators of copyrighted works used for training data are fairly remunerated.
Summary of recommendations
The final report calls for a “new, whole-of-economy, dedicated legislation to regulate high-risk uses of AI” that “adopts a principles-based approach to defining high-risk AI uses”. This definition is to be supplemented by “a non-exhaustive list of explicitly defined high-risk AI uses”.
This doubling down on the definition of “high-risk AI use” suggests a desire to encapsulate as many possible scenarios as fully as possible, but also highlights the uncertainty around issues that may arise. The third recommendation specifically calls for general-purpose AI models (for example, large language models) to be included in the non-exhaustive list of high-risk AI uses, targeting programs such as ChatGPT.
Recommendations 8 and 10 explicitly address the creative industries, which the Australian Government should consult extensively “on appropriate solutions to the unprecedented theft of their work by multinational tech companies operating within Australia”, such as Meta, Google and Amazon. The Committee urges the Australian Government to work with the creative industry to consider a mechanism for fair remuneration to be paid to creators for “commercial AI-generated outputs based on copyrighted material used to train AI systems”. This clearly acknowledges creatives as professionals whose labour and intellectual property have been taken from them.
The need to build AI capability on home turf is highlighted in the fourth recommendation, which calls for an increase in both financial and non-financial support for “sovereign AI capability in Australia”, adding that a focus on First Nations’ perspectives is vital.
List of 13 recommendations addressing AI use in Australia
- That the Australian Government introduce new, whole-of-economy, dedicated legislation to regulate high-risk uses of AI, in line with Option 3 presented in the Government’s ‘Introducing mandatory guardrails for AI in high-risk settings: proposals paper’.
- That, as part of the dedicated AI legislation, the Australian Government adopts a principles-based approach to defining high-risk AI uses, supplemented by a non-exhaustive list of explicitly defined high-risk AI uses.
- That the Australian Government ensure the non-exhaustive list of high-risk AI uses explicitly includes general-purpose AI models, such as large language models (LLMs).
- That the Australian Government continues to increase the financial and non-financial support it provides in support of sovereign AI capability in Australia, focusing on Australia’s existing areas of comparative advantage and unique First Nations’ perspectives.
- That the Australian Government ensures that the final definition of high-risk AI clearly includes the use of AI that impacts on the rights of people at work, regardless of whether a principles-based or list-based approach to the definition is adopted.
- That the Australian Government extends and applies the existing work, health and safety legislative framework to the workplace risks posed by the adoption of AI.
- That the Australian Government ensures that workers, worker organisations, employers and employer organisations are thoroughly consulted on the need for, and best approach to, further regulatory responses to address the impact of AI on work and workplaces.
- That the Australian Government continues to consult with creative workers, rights-holders and their representative organisations through the Copyright and Artificial Intelligence Reference Group (CAIRG) on appropriate solutions to the unprecedented theft of their work by multinational tech companies operating within Australia.
- That the Australian Government requires the developers of AI products to be transparent about the use of copyrighted works in their training datasets, and that the use of such works is appropriately licensed and paid for.
- That the Australian Government urgently undertakes further consultation with the creative industry to consider an appropriate mechanism to ensure fair remuneration is paid to creators for commercial AI-generated outputs based on copyrighted material used to train AI systems.
- That the Australian Government implements the recommendations pertaining to automated decision-making (ADM) in the review of the Privacy Act, including Proposal 19.3 to introduce a right for individuals to request meaningful information about how substantially automated decisions with legal or similarly significant effect are made.
- That the Australian Government implements recommendations 17.1 and 17.2 of the Robodebt Royal Commission pertaining to the establishment of a consistent legal framework covering ADM in government services and a body to monitor such decisions. This process should be informed by the consultation process currently being led by the Attorney-General’s Department and be harmonious with the guardrails for high-risk uses of AI being developed by the Department of Industry, Science and Resources.
- That the Australian Government takes a coordinated, holistic approach to managing the growth of AI infrastructure in Australia to ensure that growth is sustainable, delivers value for Australians and is in the national interest.
Recap of creative sector submissions
To compare the list of recommendations in the final report to the creative sector submissions, here is a recap of the three key calls for action that ArtsHub previously identified:
- Federal and state laws are urgently needed to define the compliance framework for AI data sets in relation to the copyright of the artists whose works have been copied or scraped.
- Permission for the use of a creative’s works for AI must be sought and obtained before use, including for works used to ‘train’ AI systems by data-mining companies and AI generators.
- Government is to mandate transparency when AI is used.
The sector responds
Media, Entertainment & Arts Alliance (MEAA)
The Media, Entertainment & Arts Alliance (MEAA) welcomes the report for “the clear and unambiguous call … for measures to protect creative content from being stolen by Google, Amazon, OpenAI, Meta and other AI developers”.
Previously, the MEAA’s top concern was creators’ compensation and the organisation is “encouraged” by the attention paid to the matter in the final report. MEAA Chief Executive Erin Madeley says, “The Committee has called out the farcical arguments of the big AI developers who claim to be acting in the public interest when in reality they are acting in their own self-interest at the expense of our vital creative and media sectors.
“In the months since the Committee began its inquiries, we have seen more and more evidence come to light that suggests that the theft and exploitation of creative work has occurred at a much larger scale than previously thought, putting into question the ongoing viability of rights and protections regimes that have governed the use of creative work for more than a century.
“The impunity with which big AI developers have systematically scraped and stolen creative work shows us that copyright laws on their own are no longer fit for purpose to protect the rights and payments of creative and media workers,” she continues.
In addition to welcoming the Committee’s recommendations, the MEAA also calls for a system of moral rights to be implemented, which would protect “the voice, image and likeness of creative workers”.
APRA AMCOS
APRA AMCOS’ AI and Music report, published in August this year, found that 24% of the Australian and New Zealand music industry’s revenue could be lost within the next four years if generative AI platforms continue to operate without proper licensing or consent, and that 82% of music creators are concerned about losing the ability to make a living from their work due to AI.
APRA AMCOS embraces the Committee’s final report for its focus “on the need for proactive regulation”.
APRA AMCOS CEO Dean Ormston says, “The Senate report makes an important contribution to ensuring the future of Australia’s creative industries… The testimony of some tech platforms during this inquiry demonstrates their unwillingness to take accountability for the harm their technologies are causing.”
In the final report ‘Chapter 2 – Regulating the AI Industry in Australia’, the Committee acknowledges the “glaring absence of transparency” from developers including Meta, Google and Amazon, which dodged or refused to answer questions, or offered opaque responses to enquiries around data input, copyrighted data and personal information.
Ormston continues, “The Committee’s recommendations provide a clear framework that supports innovation, but also holds these companies to account, so an environment is fostered where creators’ rights are respected and upheld.”
He adds that the report “reflects unanimous agreement across government, opposition and cross-bench senators” and elevates concerns around protecting cultural creators as “a national priority”.
Copyright Agency
The Copyright Agency applauds the list of recommendations with special focus on Recommendations 8, 9 and 10, which are “essential for the responsible and ethical adoption of AI”.
It also finds unacceptable the justifications put forward by multinational tech companies for their lack of transparency and misuse of creative output from Australian practitioners.
Australian Society of Authors (ASA)
Australian Society of Authors (ASA) CEO Lucy Hayward says, “The Senate report provides a welcome recognition of the enormous harm generative AI poses to creators through the unauthorised and unremunerated use of their work, as well as a clear path forward for Government regulation.
“What is at stake is not only the sustainability of author and illustrator careers in Australia, but the richness and diversity of Australian literature. We applaud the Committee’s support of Australian authors and illustrators and their vital work.”
National Association for the Visual Arts (NAVA)
NAVA Executive Director Penelope Benton welcomes the report, saying it lays the groundwork for “safeguarding artists’ livelihoods and creative autonomy in the face of AI’s rapid development”.
While NAVA recognises AI’s potential for generating creative ideas and boosting productivity, and its use by many artists, it warns that the “risks of copyright infringement”, concerns around “Indigenous data sovereignty” and “threats to cultural autonomy” are significant.
“NAVA’s research shows that while many artists are exploring AI’s potential, they remain deeply concerned about the lack of safeguards against unauthorised use of their work,” says Benton. “This report highlights the pressing need for regulatory frameworks that prioritise equity, transparency and accountability.”
NAVA urges the Government to act swiftly in developing and implementing solutions. “The arts sector is navigating significant change,” adds Benton. “By acting now, we can harness the potential of AI to support and enhance artists’ work while protecting their rights and livelihoods for the long term.”
Find the full report from the Select Committee on Adopting Artificial Intelligence here.