Cheers, Nick. So good to be here. Aunty Glenda, in your absence, thank you for the generous acknowledgement of country. This amazing city, beautiful place in the country, beautiful place in the world, which, of course, is on the land of the Gadigal people of the Eora Nation. So let’s pay our respects to them, elders past, present and emerging, and extend that to any First Nations people who have joined us at this great gathering today. And let me just make the observation that as we turn our minds to how we deal with, in a societal, regulatory and business sense, technology which enables the most rapid learning that has ever occurred in human history, we do that in a spirit which embraces the long learning of First Nations people, who have built up a deep knowledge of this country over tens of thousands of years. We have much to learn from that as well.
Professor Andrew Parker, thanks for hosting this, and to Nick and Sally, thank you for the great work that you do. One of the reasons I was excited about coming along to this particular conference today is that it’s not just a conference of lawyers, not just a conference of regulators, not just a conference of social policy wonks; it brings everyone together and enables them to join the conversation. And we’re at that juncture where we need that joined‑up conversation. So thank you, Nick and Sally, to you and your team, for that great work.
Great to be here with you, Mr Chairman, Joseph Longo, the Chairman of ASIC, and I know you’re going to provide some wonderful insights. It’s also wonderful, in that spirit of joined‑up conversations, that we’ve got Human Rights Commissioner Lorraine Finlay and the eSafety Commissioner Julie Inman‑Grant, who’s got some great insights in this area, as well as the CEO of the National AI Centre, Ms Solar. The government – my colleague, in fact the whole government – has put in a lot of work, and I’ll have some observations about this, in providing a particular and additional grant of money to the National AI Centre to advance our thinking as a government and as a nation in this area.
The Twitter version of what I’m going to say is this: we’re a government that embraces the future and wants to shape it, not cower from it, which means we look to innovation, we look to science, we look to technology for something that is going to ensure we have a better future than the past. It doesn’t mean it’s without its problems or its trepidations. You can’t foresee all of those, but we should take it with excitement. And in the area of regulation, we’ve got to ensure that we have the right principles in place. It will be [indistinct]. I’ll talk briefly about the interim response to the consultation and the approach that we’re taking. It’s not my job but Ed Husic’s job to [indistinct] the government’s approach in that area, but you can be confident that it will be an iterative approach in the area of regulation, and that will inform anything that we need to do in a bespoke area – sorry, bespoke regulation in this area.
And whilst we’re dealing with bespoke regulation and guidance, let’s be mindful that AI is operating under the aegis of the general law. It’s not operating in a law‑free environment at the moment. I’m sure the Human Rights Commissioner, the eSafety Commissioner and others, and, I should say, the Chair of ASIC, will have interesting insights to offer about that. This is not a lawless environment.
And finally, for those who are looking to government in the area of regulation: yes, that is an important role to play. But there’s also an important role for ethics in governance in this area. Just because you can doesn’t mean you should. And if you’re looking for an example – god bless Channel 9, giving us a very present example of just because you can doesn’t mean you should. Yes, it’s of concern to us as a government when human rights are interfered with. It’s a concern to me, as somebody who has spent a large part of my life in the union movement, when workers’ rights are interfered with. In my role as minister in this government, I’m concerned when investor rights or consumer rights are interfered with. And if you’re involved in the corporate governance area, you should be concerned when brand damage leads to an erosion of shareholder value. And believe me, there’s plenty of capacity for that.
So they’re the three messages. I’ve said them quickly. Now I’ll say them slowly. How about that? Look, they are really important conversations, and you could be in no better hands than my friend and colleague Ed Husic to lead that conversation from a government point of view. He’s engaged over many, many years in that area.
Whenever new technology emerges, we always lean on old cliches, and they always have some work to do. Inevitably we try to grapple with the new tech and its consequences. We find ourselves saying things like, in this instance, that there are opportunities and risks – in fact, I’m pretty certain that if I asked one of the AI bots to generate a speech for me today, that would be the second sentence. They’ll say that because it’s true, or true‑ish.
You can guarantee that throughout the entire history of the last two centuries, as we moved from the early stages of the Industrial Revolution, from mechanised weaving looms through to this last century with personal computers and web‑enabled technologies, on the verge of each of those there were people talking about the end of days. I was reflecting to Andrew and to Nick and others earlier that I’m very much informed by this. I grew up in Wollongong in the 1980s, a steelworking town. I became a lawyer, not a steelworker, because the year before I left school very necessary economic reforms saw 13,000 people leave the 20,000‑plus workforce at the steelworks, and they stopped hiring apprentices. Some might say I dodged a bullet, but history takes you – or life takes you – in lots of different directions.
That very much informed my view of the way we ought to embrace these technologies. So often the benefits of new technologies are dispersed while the costs of them are very concentrated. I suspect it will be no different in this case, and we need to be very mindful of that.
The government absolutely embraces the benefits of innovation and technology. We see AI as having the same capacity to boost the economy as the personal computer had for productivity in the services sector throughout the late ’80s and early ’90s. We think it will be a productivity booster that our economy so desperately needs. But there is a downside that we also need to manage in that process.
The ethical questions become more acute and harder to parse. Australians expect the government to manage the social dislocations and the social impact of these transformative technologies, just as our predecessors did in the ’80s and ’90s. Removing tariff barriers, opening ourselves up to the world, deregulating our financial markets – there were significant social dislocations as a result of all that, and the government had to manage them. If you don’t, you get a social backlash, and we want to ensure that we aren’t there.
We already know that there are stacks of innovative products out there which are using early‑stage AI technologies to make customer interfaces easier, better and more efficient, whether it’s our entry through a customs barrier or moving through an airport, whether it’s having an insurance claim pre‑assessed or assessed more efficiently, or a home loan approved legally, efficiently and more effectively – all of these things are happening in the background.
I was talking to the commissioner in one of my earlier briefings; I know that ASIC uses AI technology in triaging the matters that come within its gaze as well. So there’s a hell of a lot of it happening in the background. We already know, as Nick said in his introduction, that the conversation exploded last year when products came onto the market which put it within the grasp of ordinary everyday Australians, and that accelerated the conversation that was already happening.
It’s clear that lots can be gained, and we’ve got to embrace it. In another area of my work, I’m very, very focused on – as I know the commissioner is, sorry, the Chair is – the impact of scams and fraud. It’s a $3 billion a year cost on Australian households. I’ve already personally experienced AI technology being used to create videos of me promoting financial products. I’m not alone in that; if you’re a high‑profile financial personality, you will have experienced exactly the same thing. And they were pretty crap – I had a very ordinary American accent. But that will change. They’ll get better at it, there is no doubt about it. And with the phone calls that are going into Australians’ houses, generally from call centres located overseas, again, it doesn’t take much imagination to see how the power of artificial intelligence, combined with data that is illegally acquired and legally available, can be used by people with nefarious intent.
That’s a challenge, and we’ve got to face it. But AI can be both a sword and a shield in our battle against the criminals who are after Australians’ money, and the government intends to ensure that we use it as a shield to continue the excellent work that is being done right across the economy to take the fight up to the fraudsters and scammers who are robbing Australians of billions of dollars.
We can’t foresee everything, and this will inform the approach that the government will take to any bespoke regulation. We cannot foresee everything, and in a sense, you don’t want to. You want to create the environment where the market and private individuals can innovate and explore capacities and opportunities. But we want to ensure that that occurs within a safe ecosystem. My view, drawing an analogy with our approach to other areas of regulation, is that you should regulate activity, not technology. If the technology is being used in a way that is harmful, focus on the harms that are being done, not on the technology itself. That is the way we approach this issue in other areas of the economy. I see no reason to depart from that here – there will be some need for some bespoke intervention, but I think, as a general principle, looking at the nefarious activity, the risky activity or the activity of concern is the preferred approach.
As I’ve said a number of times, Ed Husic is leading the government’s work from the technology and industry point of view. In June he put out a discussion paper canvassing the way forward for how we would regulate this, which was obviously tied in with what’s happening in other jurisdictions around the world. We know that if we want to harness this, if we want to be a player in the innovation space, then we’ve got to ensure that what we are doing encourages innovation, and that we are not the hardest jurisdiction in which to develop these technologies and their beneficial uses. But at the same time, we must ensure that our regulation is aligned with other jurisdictions, or at least not hostile to them, for so many reasons.
We received a pile of submissions, which showed that many Australians are excited about the possibilities – everyone who’s been saying, “Hey, Alexa,” or, “Hey, Siri,” for quite a bit has been part of this. As an aside, I’m pretty sure I am responsible for the very first appearance of Siri in Hansard. In the middle of a very heated parliamentary exchange I said at the dispatch box words to the effect of, “You cannot be serious about that proposition.” My phone interpreted that as a request to Siri, which said something to the effect of, “I do not agree.” So not only was I the first parliamentarian to get Siri on the record, but Siri was interjecting and heckling me, which I think is some achievement. If I do nothing else in my parliamentary career, I’ll be pretty proud of that.
But Australians have had an interface with it, even if they haven’t known it. And I think for that and many other reasons, we are an early adopter nation. We have enthusiasm for the opportunities, but in at least equal measure we want the risks to be dealt with as well.
So two weeks ago the government released its interim response after a cabinet discussion. We committed to considering mandatory guardrails for AI – that’s the bespoke regulation, the bespoke response I’m talking about – and the development and deployment of those guardrails in high‑risk settings. Plugging a risk [indistinct], which I think is sensible. It’s pretty hefty work. Even the task of defining, you know, what’s high risk and what is not will be a matter of conjecture, but that’s the stuff of government. It’s what we do every day of the week.
We want it to be safe and responsible. We are pursuing that risk‑based approach, and I think getting the balance right is critical but certainly not [indistinct]. We’ll encourage the innovation and the competition that I’ve mentioned so many times. We want to continue the approach we took at the start, and that is deep collaboration with business, with the academic community, with the legal fraternity and with the innovation sector – deep collaboration to ensure that we get it right. We need to modernise our laws, and we’re doing a lot of work in this area. As Joe knows, in the financial services area we’re working to modernise and update our regulatory settings, [indistinct] problems in so many areas. They were designed to solve last‑century problems, and they did that very well, but it’s 2024. So that’s the work of the government. A lot going on in that space.
You can be confident that as a government that will be our approach. So that’s the long way of saying everything I said in the Twitter version upfront. We’re working on a bespoke approach. The general law still applies. Business can’t outsource its legal obligations to another. And if you’re looking for examples – whether you’re a government agency, a government minister or a chairman of the board – there have been examples where we thought it would be okay. Robodebt comes to mind, and Channel 9. There are countless examples where not only did the board or the government of the day get it wrong legally, but the brand damage that was a consequence of those legal errors was far more consequential than any fine that a regulator could impose. And all of this, I think, is food for the discussion around the ethical approach.
Government has a big role to play in regulation, but we can’t regulate for everything. There is a huge role for ethical conversations around boardroom tables, around ministerial meeting rooms to ensure that we fill in the space between the general law and the bespoke law and what citizens, customers, consumers and workers need as we embrace the future.
Thanks so much.