UK Prime Minister Rishi Sunak in Conversation With Elon Musk (Photo by Tolga Akmen / EPA / Bloomberg via Getty Images)

Are Governments Prepared to Keep AI Safe?

Note from editors: In response to growing concerns about artificial intelligence development, on November 1–2, 2023, the British Government held the first-ever summit on AI Safety, attended by representatives of 28 countries as well as business leaders working in the field of AI. The summit aptly took place at Bletchley Park, the very location where Alan Turing and his fellow codebreakers cracked the German Enigma code, a feat that played a significant part in the Allied victory in WWII.

The result of the summit was the signing of The Bletchley Declaration, which recognizes the urgent need to understand and collectively manage potential risks of AI through a joint global effort to ensure AI is developed and deployed in a safe, responsible way for the benefit of the global community. The signatories of the declaration include Canada, China, the European Union, Japan, the United Kingdom, and the United States.

The world leaders in attendance officially recognized the need to collaborate on testing the next generation of AI models against a range of critical national security, safety, and societal risks.

At the conclusion of the event, the British Prime Minister Rishi Sunak and tech entrepreneur Elon Musk sat down at the prime minister’s residence for a private conversation, and then held a public discussion. Their public dialogue is transcribed below, with only minor edits for clarity.

Rishi Sunak has served as the Prime Minister of the United Kingdom since 2022 and has been a Member of Parliament since 2015. He studied philosophy, politics, and economics at Oxford and earned his MBA from Stanford as a Fulbright Scholar. Prior to his political career, he was a hedge fund manager.

Elon Musk was a founding board member of OpenAI, the research organization behind ChatGPT. He is the CEO of Tesla, a pioneer in autonomous electric vehicles, and a co-founder of Neuralink, a company working on developing implantable brain-computer interfaces. He is also the founder and CEO of the rocket company SpaceX and the owner of the social media platform X.com (formerly Twitter).

Rishi Sunak: Bill Gates said there is no one in our time who has done more to push the bounds of science and innovation than you. That’s a nice thing to have anyone say about you. But oddly enough, when it comes to AI, you’ve been doing almost the opposite. For around a decade, you’ve been saying, “Hang on, we need to think about what we’re doing and what we’re pushing here. And what do we do to make this safe?” What was it that caused you to think about it that way? Why do we need to be worried?

Elon Musk: I’ve been somewhat concerned for quite a while. I would tell people, “We should really be concerned about AI.” They’re like, “What are you talking about?” They’ve never really had any experience with AI. But since I have been immersed in technology for a long time, I could see it coming.

I think this year there have been a number of breakthroughs. We’re at the point where someone can see a dynamically created video of themselves, a video of you saying anything, in real time. These sorts of deepfake videos are incredibly good, sometimes more convincing than real ones. And then obviously things like ChatGPT were quite remarkable. I saw GPT-1, GPT-2, GPT-3, GPT-4—the whole lead-up to that. It was easy for me to see where it’s going. If you just extrapolate the points on a curve and assume that trend will continue, then we will have profound artificial intelligence, and obviously at a level that far exceeds human intelligence.

But I’m glad to see that, at this point, people are taking safety seriously, and I’d like to say thank you for holding this AI Safety conference. I think it will go down in history as being very important. It’s really quite profound.

I do think, overall, that the potential is there for artificial intelligence to most likely have a positive effect and to create a future of abundance, where there is no scarcity of goods and services. But it is somewhat of a Magic Genie problem: if you have a magic genie that can grant all of your wishes…usually those stories don’t end well. Be careful what you wish for, including wishes.

RS: So, you talked a little bit about the summit and thank you for being engaged in it, which has been great. One of the things that we achieved today in the meetings between the companies and the leaders was an agreement that, ideally, governments should be doing safety testing of models before they’re released.

In government, my job is to say, “Hang on, there is a potential risk here.” Not a definite risk, but a potential risk of something that could be bad. My job is to protect the country, and we can only do that if we develop the capability we need in our safety institute, and then make sure we can test the models before they are released. You’ve talked about the potential risk. What are the types of things governments like ours should be doing to manage and mitigate those risks?

EM: Well, I generally think that it is good for government to play a role when public safety is at risk. For the vast majority of software, public safety is not at risk. If the app crashes on your phone or your laptop, it’s not a massive catastrophe. But when we’re talking about digital superintelligence, which does pose a risk to the public, then there is a role for government to play, to safeguard the interests of the public.

This is true in many fields. I deal with regulators throughout the world because of Starlink (communications), SpaceX (aerospace), and Tesla (cars). So I’m very familiar with dealing with regulators and I actually agree with the vast majority of regulations. There are a few that I disagree with from time to time, probably less than one percent.

There is some concern from people in Silicon Valley who have never dealt with regulators before, and they think that this is going to crush innovation, slow them down, and be annoying. And it will be annoying—it’s true, they’re not wrong about that. But I think we’ve learned over the years that having a referee is a good thing. If you look at any sports game, there’s always a referee, and nobody suggests having a sports game without one. I think that’s the right way to think about this: government as a referee, making sure that public safety is addressed.

I think there might be, at times, too much optimism about technology. I say that as a technologist, so I ought to know. But like I said, on balance, I think that the AI will be a force for good. But the probability of it going bad is not zero percent. We just need to mitigate the downside potential.

UK Prime Minister Rishi Sunak speaks at a plenary session on day two of the AI Summit at Bletchley Park on November 2, 2023. (Photo by Kirsty O’Connor / No 10 Downing Street [CC BY-NC-ND 2.0 DEED])

RS: Do you think governments can develop the expertise? Governments need to tool up quickly in terms of capability and personnel, which is what we’re doing. Is it possible for governments to do that fast enough, given how quickly the technology is developing?

EM: It’s a great point you’re making. The pace of AI is faster than that of any technology I’ve seen in history, by far. It seems to be growing in capability by at least fivefold, perhaps tenfold, per year, and it will certainly grow by an order of magnitude in 2024. Government isn’t used to moving at that speed. But I think even if there are no firm regulations and even if there isn’t an enforcement capability, simply having insight and being able to highlight concerns to the public will be very powerful.

RS: Well, hopefully we can do better than that. What was interesting over the last couple of days, talking to everyone who’s doing the development of this—and I think you would agree with this—is that the pace of advancement here is unlike anything all of you have seen in your careers in technology, because you’ve got these kinds of compounding effects from the hardware, and the data, and the personnel.

EM: Currently, the two leading centers for AI development are the San Francisco Bay Area and the London area. There are many other places where it’s being done, but those are the two leading areas. So I think if the U.S., the UK, and China are aligned on safety, that’s all going to be a good thing, because that’s really where the leadership is generally.

RS: Good. Thanks. You mentioned China. I took the decision to invite China to the summit, and it was not an easy one. A lot of people criticized me for it. My view is, if you’re going to try to have a serious conversation about this, you need them at the table. What are your thoughts?

EM: It’s essential.

RS: Should we be engaging with China? Can we trust them?

EM: If we don’t, if China is not on board with AI safety, it’s somewhat of a moot situation. The single biggest objection that I get to any kind of AI regulation or sort of safety controls is, “Well, China is not going to do it and therefore they will just jump into the lead and exceed us all.” But actually, China is willing to participate in AI safety. And thank you for inviting them. And I think we should thank China for attending. When I was in China earlier this year, my main subject of discussion with the leadership in China was AI safety. They took it seriously, which is great, and having them here I think was essential. Really, if they are not participants, it’s pointless.

RS: We were pleased they were engaged in the discussions yesterday and actually ended up signing the same communiqué that everyone else did. Which is a good start. And as I said, we need everyone to approach this in a similar way if we’re going to have a realistic chance of resolving it.

We had a good debate today about open source. You’ve been a proponent of algorithmic transparency, making some of the X.com algorithms public. Some are very concerned about open source models being used by bad actors, and then you’ve got people who say open source models are critical to innovation. What are your thoughts on how we should approach this?

EM: Well, the open source algorithms and data tend to lag the closed source by 6 to 12 months. Given the rate of improvement, that is quite a big difference: if capability is improving by a factor of, let’s say, five or more per year, then being a year behind means you are five times worse. It’s a pretty big difference. And that might be an OK situation.
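
The arithmetic here compounds. Below is a back-of-envelope sketch in Python, under the assumption (illustrative only, not something Musk specified) that capability multiplies by a constant factor each year and compounds smoothly between years:

    # Musk's lag argument: if the frontier improves by a constant factor per
    # year, a fixed time lag translates into a fixed capability ratio.
    for rate in (5, 10):                       # five-fold or ten-fold per year
        for lag_months in (6, 12):
            ratio = rate ** (lag_months / 12)  # assumes smooth compounding
            print(f"{rate}x/year, {lag_months}-month lag -> {ratio:.1f}x behind")

    # At 5x/year, a 12-month lag means 5.0x behind; even 6 months is ~2.2x.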

But certainly it will get to the point where you’ve got open source AI that will start to approach human-level intelligence, perhaps exceed it. I don’t quite know what to do about it. I think it’s somewhat inevitable. There will be some amount of open source, and I guess I would have a slight bias towards open source, because at least you can see what’s going on, whereas with closed source, you don’t know what’s happening. Now, it should be said that even if AI is open source, do you actually know what’s going on? If you’ve got a gigantic data file with billions of data points, weights, and parameters…you can’t just read it and see what it’s going to do. It’s a gigantic file of inscrutable numbers. You can test it when you run it, but it’s probabilistic as opposed to deterministic. It’s not like traditional programming, where you’ve got very discrete logic, the outcome is very predictable, and you can read each line and see what each line is going to do. A neural net is just a whole bunch of probabilities.
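
To make that contrast concrete, here is a minimal sketch in Python; the toy two-output model and its random weights are purely illustrative, not anything discussed at the summit. The first function is discrete logic you can read line by line; the second is a tiny neural net whose behavior lives entirely in arrays of inscrutable numbers and can only be probed by running it.

    import numpy as np

    def traditional_rule(age: int) -> bool:
        # Discrete logic: read the line and you know exactly what it does.
        return age >= 18

    # A toy "model" is nothing but arrays of numbers (its weights).
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=3)
    W2, b2 = rng.normal(size=(3, 2)), rng.normal(size=2)

    def neural_net(x: np.ndarray) -> np.ndarray:
        # No readable logic here: the behavior is encoded in the weights, and
        # the output is a distribution over answers, not a fixed verdict.
        h = np.tanh(x @ W1 + b1)
        logits = h @ W2 + b2
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()                 # softmax: probabilities

    print(traditional_rule(21))                # True, always, by inspection
    print(neural_net(np.ones(4)))              # knowable only by running it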

RS: The point you’ve just made is one that we have been talking about a lot. AI is not like normal software, where improving the inputs leads predictably to a particular improvement in the output. As the models iterate and improve, we don’t quite know what’s going to come out the other end. Which is why there is this bias for getting in there while the training runs are being done, before the models are released, to understand what the new iteration has brought about in terms of capability.

When I talk to people about AI, the thing that comes up the most is probably not so much the stuff we’ve been talking about, but jobs. It’s, “What does AI mean for my job? Is it going to mean that I don’t have a job, or my kids are not going to have a job?”

My answer as a policymaker and as a leader is that AI is already creating jobs, and you can see that in the companies that are starting up, and also in the way it’s being used more as a co-pilot than as a replacement for the person. There’s still human agency, but AI is helping you do your job better, which is a good thing. And as we’ve seen with technological revolutions in the past, clearly there’s change in the labor market. I was quoting an MIT study today from a couple of years ago: something like 60 percent of the jobs that existed at that moment didn’t exist 40 years ago. So it’s hard to predict.

And my job is to create an incredible education system, whether it’s at school or retraining people at any point in their careers. Ultimately, if we’ve got a skilled population, then we ought to be able to keep up with the pace of change and have a good life. But it’s still a concern. What are your observations on AI and its impact on labor markets and people’s jobs, and how should people feel as they think about this?

EM: Well, I think we are seeing the most disruptive force in history here. For the first time, we will have something that is smarter than the smartest human. It’s hard to say exactly when that moment is, but there will come a point where no job is needed. You can have a job if you want one, for personal satisfaction, but the AI will be able to do everything. I don’t know if that makes people comfortable or uncomfortable. That’s why I say, if you wish for a magic genie that gives you any wishes you want and there’s no limit—you don’t have this three-wish limit—you just have as many wishes as you want… It’s both good and bad.

One of the challenges in the future will be, how do we find meaning in life if you have a magic genie that can do everything you want? New technology usually tends to follow an S-curve. In this case, we’re going to be on the exponential portion of the S-curve for a long time. You’ll be able to ask for anything. We won’t have universal basic income; we’ll have universal high income. In some sense, it’ll be somewhat of a leveler or an equalizer. Really, I think everyone will have access to this magic genie. You’ll be able to ask any question. It’ll certainly be good for education. It’ll be the best, most patient tutor. There will be no shortage of goods and services. It will be an age of abundance.

I’d recommend people read Iain Banks. The Banks Culture books are definitely, by far, the best envisioning of an AI future. There’s nothing even close that will give you a sense of what a fairly utopian, or protopian, future with AI looks like.

RS: Universal high income is a nice phrase. I think part of our job is to make sure that we can navigate to that largely positive place that you’re describing and help people through it between now and then.

EM: It is largely positive, yes. You know, a lot of jobs are uncomfortable or dangerous or sort of tedious, and the computer will have no problem doing that. It will be happy to do it all. And we still have sports where humans compete, like the Olympics. Obviously, a machine can go faster than any human, but humans still race against each other. Even though the machines are better, people do find fulfillment in that.

RS: Yes, we still find a way. It’s a good analogy. We’ve been talking a lot about managing the risks… Let’s talk a little bit about the opportunities.

Having that personalized tutor is incredible compared to classroom learning. If every child can have a personal tutor, specifically for them, that evolves with them over time, that could be extraordinary. And so, you know, for me, I look at that and I think, gosh, that is within reach at this point! That’s one of the benefits I’m most excited about.

I was just going over a couple of things with the team, like how we are using AI right now in ways that make a difference to people’s lives. We have this thing called gov.uk, all the government information brought together on one website. If you need to get a driving license or a passport, or pay your taxes, any interaction with government, it is centralized in a very easy-to-use way. So, a large chunk of the population is interacting with gov.uk every single day to do all these day-to-day tasks, right?

We are about to deploy AI across the platform to make that whole process even easier. Like, “Look, I’m currently here and I’ve lost my passport and my flight is in five hours.” At the moment, that would require who knows how many steps to figure out what to do. When we deploy the AI, you should be able to literally just say that, and boom, we’ll walk you through it. And that’s going to benefit millions and millions of people every single day.

That’s a very practical way that, in my seat, I can start using this technology to help people in their day-to-day lives—not just healthcare discoveries and everything else that we’re also doing. That’s quite a powerful demonstration.

When you look at the landscape of things that you see as possible, what are you particularly excited about?

EM: I think certainly an AI tutor is going to be amazing. I think there’s also, perhaps, companionship, which may seem odd. How can a computer really be your friend? But if you have an AI that has memory and remembers all of your interactions, and, say, you gave it permission to read everything you’ve ever done…and you can talk to it every day, and those conversations build upon each other… It will really know you better than anyone, perhaps even yourself. You will actually have a great friend. I think that will be a real thing. One of my sons has some learning disabilities and has trouble making friends. An AI friend would be great for him.

RS: OK… You know, that was a surprising answer that’s worth reflecting on. That’s really interesting.

© Crown Copyright 2023. Reproduced under the Open Government Licence v3.0. Transcribed by Skeptic.

This article was published on June 28, 2024.

 