‘We’re not ready for it’: chief scientist warns on ChatGPT

The Davos discussions mostly focused on ChatGPT’s potential to decimate the white-collar workforce, and the transformative and disruptive effect it could have on internet staples such as Google search and Wikipedia.

‘Powerful’ quick response

Besides its potential impact on the labour market, ChatGPT and its ilk also have other features with potentially complex policy implications.

They open up the prospect of a new kind of plagiarism problem at schools and universities. There are also copyright and compensation issues over ChatGPT’s trawling of online material.

There is also the risk of impersonation and fraud, and the simple fact that ChatGPT often confidently and quite convincingly generates inaccurate information.

Dr Foley said using a rapid research information (RRI) report could be a “very powerful” quick response to the rapidly emerging challenges.


“Where the government asks me a question, I go up to the research community, get the best and brightest to help me answer that question very briefly – 1500 words flat,” she said.

“This is the information. There you are, do what you want with it. And that has been very powerful with government, being able to get flat, independent advice, which is evidence-based, to help them make decisions.”

Actually formulating responses would take the private and public sectors some time, she suggested.

“How we actually do this will be a mixture of social science and human behaviour through to blockchain, watermarking, being able to have the smarts to be able to run programs that say this has been computer-generated, this hasn’t,” she said.

“There’ll be a whole range of approaches, it will evolve over time, we will learn to live with it. That would be my guess.”

Dr Foley said the government was well placed to manage the policy and regulatory challenges arising from AI, with an eSafety Commissioner in place and the Human Rights Commissioner having already produced a report on AI ethics.


Tech companies can sometimes view government intervention as a barrier to progress and innovation. But Dr Foley said they needed to acknowledge and address the policy issues, because “eventually, it comes and bites them”.

“Do it early, and design it from the beginning, rather than trying to patchwork or Band-Aid it later on,” she said.

Dr Foley said that when nanotechnologies emerged at the turn of the century, CSIRO had moved early and conducted extensive research on the potential dangers and pitfalls.

“At the end of that research program we wanted nanotechnology to be a nothing, you know, people didn’t even think about it. And guess what? Now no one thinks about nanotechnology because it’s just embedded in everything,” she said.

“That’s what we should be doing: responsible research should always have a parallel path, which isn’t done by the people who are doing the research because they’re so excited and want to push things through.

“People almost like a red team, saying: ‘How do we make sure that this is safe? Where do we put the boundaries of what we want, to have safeguards in place?’.”
