Anders Sandberg, neuroscientist: ‘We’re at the beginning of history. We have a responsibility not to mess things up’
Today, many young people are worried about the future. They’re wondering if they’ll ever be able to afford a house, or they’re concerned about the society that their children will grow up in. But are these really long-term concerns?
For Dr. Anders Sandberg – a Swedish researcher at Oxford University’s Future of Humanity Institute – thinking about the “long term” means thinking about what will happen thousands of years down the road. From his perspective, the key goal for the future is, very simply, to ensure that over the coming centuries and millennia, humans have the chance to be born and survive.
Sandberg, 50, is a computational neuroscientist. He’s part of a philosophical stream known as longtermism, which studies the far future. To get there, humanity will have to survive a series of threats.
“There are natural ones – such as asteroid impacts and supervolcanoes – but the probability of those wiping out humanity over the course of a century is low when compared to the risks of climate change and nuclear war,” he suggests.
Among the low-probability, high-impact risks to humanity, Sandberg mentions artificial intelligence (AI), which, if misused, could lead to chaos. He argues that the biggest threats to humans are created by humans themselves. Still, the longtermist remains optimistic.
“Smart decisions can be made now, while there’s still time,” the researcher told EL PAÍS at the National Center for Oncological Research in Madrid, after delivering a lecture about the future of humanity.
Question. How does climate change compare to the risks posed by AI?
Answer. The risks posed by AI are currently close to zero… but many researchers believe that this threat will grow in the near future. The interesting thing is that we can avoid it; we can work on creating security mechanisms for AI and biotechnologies, with the support of governmental legislation. The risk decreases if we do our job well.
Climate change is more complicated, because it’s systemic: it affects the economy, politics, the supply chain, the ecosystem. And that means it requires many different solutions.
Q. Which technologies could get out of control?
A. People often point to misbehaving bots, although I think advisory systems might be the most dangerous. Imagine a program created to advise a company and make it more profitable. It makes sense to use it, because it gives good advice. Executives who don’t follow the AI-generated recommendations could be fired. But while that maximizes the company’s income, it doesn’t respect ethics. In the end, it turns the company into something more ruthless. If other companies replicate this, the situation becomes even more problematic. If we build powerful systems capable of creating very intelligent technologies, the consequences for society can be dangerous.
Another problem is that we want computers to do things, but they don’t understand why we want those things done. The real risk is that we get powerful technologies that alter the way our society is run and cause humans to lose control. To some extent, we’re already suffering from this – many states, corporations and large institutions already rely on AI. Automating entire processes can push aside the values of people and the rules they live by. The systems themselves don’t care about values.
Q. Should we be afraid of the future?
A. Being afraid of something means wanting to run away from it. The future is exciting and terrifying and full of possibilities. It’s just like a video game or a jungle gym – there are dangerous things you need to be careful about. There are also very interesting things. And there are things that we must fiddle with, to grow and be better. We shouldn’t fear the future. Instead, we should be hopeful and make sure it’s worth experiencing!
Q. Will future generations think that we treated the planet in a criminal manner?
A. To some degree, we blame our ancestors – who lived thousands of years ago – for driving mammoths to extinction… but they didn’t know about ecology, they didn’t know that they could make a species disappear by hunting it for meat. Future generations will also have negative opinions about what we’re doing. We always assume that the future is going to be more sensible, with more knowledge and resources. But in the present, we need to remember that we’re at the beginning of history. We have a responsibility not to mess things up too much.
Q. Do you think people like Elon Musk use the possibility of a future on Mars as a way of not dealing with pressing issues we are facing on Earth?
A. It’s always a struggle to prioritize problems. Should we save people suffering from malaria, or focus on reducing poverty? Do we raise living standards, or take extreme health measures to protect populations from pandemics? We can argue about priorities, although, in the end, you have to choose to address some over others.
Some people do use thinking about the distant future as an excuse, a way of escaping the great problems that humanity is facing today. In practice, we should have as many people as possible trying to solve as many problems as possible. And, sometimes, the solution to one problem is also useful for another. For instance, thanks to the desperate effort to address Covid, we ended up with improved RNA vaccines. This could help cure many other diseases, including non-pandemic ones. Exploring space could also give us many tools and solutions that we could recycle here on Earth, such as new packaged foods or emergency equipment.
Q. Is it possible to have infinite development on a finite planet?
A. Maybe. Many people say that economic growth cannot go on forever… but this thinking assumes that economic value is embodied in things. The Mona Lisa didn’t require much material, yet its value is enormous. Value – which is really what economic growth is all about – isn’t always tied to quantity of matter.
Similarly, technological development often means doing more with fewer resources. Modern planes are lighter than older ones, because they’re better designed and built with better materials. They also burn less fuel.
In the very long term, infinite development doesn’t work on a finite planet… it’s much safer to spread out beyond it. But to say that it’s better to have less technology is nonsense, because it would imply less efficiency. Going low-tech is a luxury today.
Q. What discoveries would you like to see over the next decade?
A. I’d love to see a good way to bring human values into AI, so that machines are able to say, “I’m not going to do this task, because I don’t fully understand the intention.” Currently, AI does exactly what humans tell it to do, which is dangerous.
I think we also need to work hard to find better sources of energy, such as solar power. Nanotechnology will help us advance a lot… and we need to create better methods of recycling materials.
Q. A few years ago, you said that many scientists were afraid of ruining their careers by conducting studies on human cryonics (preserving humans at extremely low temperatures).
A. Yes! Many people in astronomy, for example, think it’s entirely reasonable to spend a lot of effort trying to understand the universe’s past… but they get angry when I ask them to use the same equations to predict a few billion years into the future. They argue that science has to be checked against reality and cannot be tested over the long term. But on the other hand, there are climate forecasts that are projected well into the future and are very important for crafting public policy.
I think many people only focus on one way to use their knowledge and are not aware that it can be applied to other domains. I think it’s worth trying to learn as much as we can about the future, not because we want to have a perfect prediction, but rather, to obtain information that can help guide our actions.