Australia’s new AI Plan makes worker-centric commitments

Photo: @lukejonesdesign via Unsplash

With AI chatbots becoming ubiquitous and starting to have a real economic impact, governments are increasingly stepping in to oversee safety and accountability. We've seen two models for AI regulation in the past couple of years: hard regulation with the force of law, as the EU has done with its AI Act, or light-touch regulation that essentially provides a framework for governments to negotiate with the major AI labs. Many jurisdictions, including the UK and the US (at the federal level), are resisting legal measures of any kind.

Australia’s new National AI Plan puts the country on the "low regulation" side of the fence, relying on existing laws rather than introducing a standalone AI Act. Where Australia's plan does stand out, however, is in the protections it claims to provide for workers, with provisions for training and skills development and a commitment to ensuring that the economic benefits from productivity gains are shared. It does not introduce new legal protections or specialist whistleblowing channels for AI disclosures, as the EU did earlier this month.

The plan's explicit aim is to introduce "as much regulation as necessary, as little as possible." For the time being, this means no firm standards, or "guardrails", for AI developers to abide by. Regulators are not being given any new powers, which means that novel issues AI raises in copyright, workplace surveillance and discrimination may fall outside existing institutions' remits. As in the UK, the US and many other jurisdictions, the plan does set up an AI Safety Institute, with A$30 million in funding, to monitor developments and advise on gaps in the law.

Earlier drafts of the plan were significantly stronger. Former industry minister Ed Husic included proposals for mandatory guardrails and a standalone AI Act. These would have categorised AI systems by risk, much as the EU's AI Act does, imposing stricter rules on high-risk applications. However, business groups, including DIGI (which represents Apple, Google, Meta and Microsoft), lobbied for a pause, warning that burdensome laws could stifle innovation and investment. Australia's government has essentially aligned itself with this view, deciding to prioritise potential economic opportunity over intervention.

Nevertheless, advocates for workers' rights have had some impact on the plan as it stands. A commitment to ensuring technology “works for people, not the other way around” is part of the basic messaging around the plan. Whether this aspiration is backed by concrete commitments can, of course, be questioned.

The plan includes funding for programmes to expand IT literacy and technical training, with the intention that workers can work alongside AI systems rather than being replaced by them. The plan highlights the need to upskill Australian workers so they can adapt to AI-driven changes in the workplace. At the same time, the plan's commitment to "public sector improvements" makes clear that the government intends to bring many more AI systems into the public sector itself.

The remaining commitments in the plan are to "spread the benefits" of AI by "sharing productivity gains" across the private and public sectors, as well as "all regions, industries and communities." It is not clear what this means in practice.

https://www.abc.net.au/news/2025-12-02/national-artificial-intelligence-plan-growth-existing-laws/106086474
