Podcast transcript
This transcript has been generated using speech recognition software and may contain errors. Please check its accuracy against the audio.
Dario Amodei, CEO and Co-Founder, Anthropic: It will help us cure cancer. It may help us to eradicate tropical diseases. It will help us understand the universe. But there are these immense and grave risks.
Robin Pomeroy, host, Radio Davos: Welcome to Radio Davos, the podcast from the World Economic Forum that looks at the biggest challenges and how we might solve them. This week: what happens when AI becomes AGI - artificial general intelligence - smarter than humans at just about everything?
Demis Hassabis, Co-Founder and Chief Executive Officer, Google DeepMind: What happens after AGI arrives, really we would be in uncharted territory at that point.
Robin Pomeroy: Two of the smartest human minds - Demis Hassabis, CEO of Google DeepMind, and Dario Amodei of Anthropic - came together in Davos for a conversation called The Day After AGI.
Zanny Minton Beddoes, Editor-in-Chief, The Economist: This was for me like chairing a conversation between the Beatles and the Rolling Stones.
Robin Pomeroy: The Editor-in-Chief of The Economist asks these rockstar AI founders what the rest of us should know about AGI which could now be just a few years away.
Zanny Minton Beddoes: Are we going to get through the technological adolescence of this technology without destroying ourselves?
Dario Amodei: I think this is a risk. This is a risk that if we all work together we can address. We can learn through science to properly control and direct these creations that we're building. But if we build them poorly, if we're all racing and we go so fast that there's no guardrails, then I think there is risk of something going wrong.
Robin Pomeroy: So what exactly is AGI, when will it happen and what will it mean? It’s on this episode of Radio Davos - available wherever you get podcasts and at wef.ch/podcasts.
I’m Robin Pomeroy at the World Economic Forum, and with this look at the future of AI and of humanity…
Demis Hassabis: It's for us to write, as humanity, what's going to happen next.
Robin Pomeroy: This is Radio Davos.
Welcome to Radio Davos. This week, we're talking about artificial intelligence and artificial general intelligence. To talk about that, I'm joined by a co-host, Benjamin Larsen, all the way from San Francisco. Hi, Benjamin, how are you?
Benjamin Larsen: Hey Robin, it's great to be here.
Robin Pomeroy: It's great to have you. Tell us what you do at the World Economic Forum.
Benjamin Larsen: I'm an AI safety lead. I work at the Forum's Centre for AI Excellence. I'm based here in San Francisco. We have what's called the AI Global Alliance where we have a number of different work streams and I'm leading one of these work streams called Safe Systems and Technologies where we are looking especially at the safety and security and so on surrounding frontier models and systems and AI agents.
Robin Pomeroy: You've joined me here to kind of set up this session that happened at the World Economic Forum's Annual Meeting in Davos 2026, just a few weeks ago. It's been viewed many, many times online. It is two AI founders and CEOs. Could you tell us something about them? It's Dario Amodei and Demis Hassabis. Who are they and why are they important to listen to?
Benjamin Larsen: Yeah, definitely. So Dario Amodei and Demis Hassabis, they're essentially two of the most influential leaders that are shaping the direction of advanced AI today.
And Dario, he's the CEO and co-founder of Anthropic, and that's one of the leading AI research companies. He was previously at OpenAI actually, and is probably best known for his focus on AI safety and alignment and building systems that are powerful, but also safe and controllable. That's something that's very much front and centre for Anthropic.
And then there's Demis. Demis, he is the CEO and co-founder of Google DeepMind. He was originally trained as a neuroscientist actually, and he's known for breakthroughs such as AlphaGo, which is a system that beat the world's leading Go player back in 2016, but he's also very well known for scientific AI, so advanced breakthroughs such as AlphaFold, which is a protein structure prediction algorithm. Demis, he's long framed AI as a scientific project that's aimed at understanding intelligence itself. And this is where AGI comes into the picture as well.
Robin Pomeroy: So AGI then, Artificial General Intelligence. Is that a concept that's very clear? I have a feeling there's probably several definitions of it or maybe a scale of definitions. What do you think? If I'd never heard of that expression before, how would you define it for me?
Benjamin Larsen: Yeah, so I would say perhaps one way of understanding it is that you can have narrow or specific intelligence, or you can have very general intelligence. So AGI, it stands for artificial general intelligence, and it really refers to a system that can understand and learn and apply knowledge across a whole range of tasks, at a level comparable to that of a human being.
So if you think about our intelligence, Robin, we don't have intelligence in just one narrow and very specific domain, but we're able to interact and operate in the physical world and that demands kind of a very broad and generalised sense of intelligence.
So you can think of this as kind of AI systems that are moving from being very simple and scoped towards broadly understanding and being able to interact and interpret the world in ways that are actually very similar and very general, so similar to what we do as humans.
Robin Pomeroy: For people who've been kind of bamboozled by generative AI over the last three years or so, maybe they'd say, well, it looks like some of these models can do that already, because you can ask one of these large language models any question and well, it sounds very authoritative. You can ask it to do tasks for you as well to work things out or to create something for you. Is that not already general intelligence to some degree?
Benjamin Larsen: You could say to some degree it is, and one definition of it - so I'm leading what's called a Global Future Council on AGI that's co-chaired by Yoshua Bengio and Akiko Murakami as well, who's the executive director of Japan's AI Safety Institute. And the working definition of AGI that we use is that it would be a system that's able to perform tasks that demand cognitive functions similar to those of a human in a range of digital settings.
And you could say that to some extent, some systems, they are definitely already able to do that. But again, it may not have the 'general' in there. So it could be that a system is able to perform one task well or perform well in one domain, but for it to perform across other domains similar to a human and how we are able to switch tasks, that would demand significant engineering skills.
So you're not just able to take any system and plug it into kind of any digital environment and say perform these and those actions according to these kind of specifications. And that's something that is still missing but we're definitely seeing trends and tendencies towards this direction so we're definitely seeing that as we speak.
Robin Pomeroy: And why is it such a big deal? Some people also talk about there being a singularity, which I'm not entirely sure, you can help us with what that means. Is it we wake up one day and suddenly we're surrounded by incredibly intelligent machines, AIs that have this general intelligence. Why are people so fixated on AGI? Because also you hear a lot of the heads of a lot of these companies saying this is what we're working on, maybe there's a race to achieve AGI. Why is it such a big deal?
Benjamin Larsen: Yeah, so the singularity, just to unpack a little bit what that is, so that's a hypothetical point in time at which AI systems, they will become capable of improving themselves faster than humans can understand or control.
So in this kind of scenario, you would have accelerating progress and that could lead to rapid or unpredictable societal change essentially.
But I think it's worth stressing here that not everyone believes that the singularity will happen, or at least that it will be happening anytime soon. But some also view it as a useful thought experiment for stress testing assumptions, for example about exponential growth. While others would maybe treat it as a real risk that demands preparation and significant safeguards already today.
So how we view AGI in the work that I mentioned on the Council before is much more kind of a progressive approach. It's not something that we're going to wake up to tomorrow and then we'll have AGI, similar to how we understand the singularity. But you could view AGI as a jagged form of intelligence that's able to excel and be superhuman in some areas. For example, in terms of the underlying information that's wrapped into the existing language models and how they can respond almost like an oracle would. And that's very different from how we would respond as humans in terms of limited memory and understanding of some subjects.
But then in other areas where we would outperform these systems, we would say, why can't you even remember what we talked about five minutes ago, or something like that.
So there are real differences and that's why you can understand it as a jagged form of intelligence right now that's accelerating in some fields, but is still very limited in other fields.
Robin Pomeroy: One thing that's mentioned, there's a concept that's mentioned many times in what we're about to listen to. They talk about AI models closing the loop. Could you tell us what that means?
Benjamin Larsen: Closing the loop, it means moving from AI systems that are only able to generate outputs. So if you prompt a large language model today, you would say, OK, I have an input and I expect an output. But closing the loop, it refers to systems that can also take actions, that can observe the results and they can then adapt their behaviour accordingly and over time.
So that means it's much more about a loop, similar to how, again, we as humans think and learn and act. I just don't come up with an answer, but I think through what are the implications of that, where do I need to search for additional knowledge, how do I solve a part of this task, and based on how I solve all of these different elements, I then go back and I revisit what was the original task. So, that would be closing the loop, essentially.
So you could say that this involves kind of iteration and it's really important because it's a step towards autonomy. You can imagine that if you're able to close this loop, then you actually have greater autonomy and then you can take kind of more infinite steps at solving specific tasks.
But of course, it's very important too to also stress that when you do aim to close this loop, or you do aim for greater autonomy, for example in agentic settings, then you also have new risks that demand new governance approaches. And that's something that we've been focusing a lot on in our work at the Forum.
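To make "closing the loop" a little more concrete, here is a minimal sketch in Python of the cycle Benjamin describes - generate, act, observe, adapt. The function names (propose_action, run_action, evaluate) are hypothetical placeholders standing in for a real model, environment and evaluator; this is an illustration of the idea, not any particular product's implementation.

```python
# A minimal sketch of "closing the loop": an agent proposes an action,
# acts, observes the result, and adapts before trying again.
# All function names below are hypothetical placeholders, not a real API.

def propose_action(task: str, history: list[str]) -> str:
    """Stand-in for a model call that suggests the next step for a task."""
    step = len(history) + 1
    return f"step {step} towards: {task}"

def run_action(action: str) -> str:
    """Stand-in for executing the action in some environment (e.g. running code)."""
    return f"result of ({action})"

def evaluate(result: str, task: str) -> bool:
    """Stand-in for checking whether the task is done; here, stop after 3 steps."""
    return "step 3" in result

def closed_loop(task: str, max_steps: int = 5) -> list[str]:
    """Generate -> act -> observe -> adapt, until done or out of steps."""
    history: list[str] = []
    for _ in range(max_steps):
        action = propose_action(task, history)   # generate
        result = run_action(action)              # act
        history.append(result)                   # observe / remember
        if evaluate(result, task):               # adapt: decide to stop or revise
            break
    return history

if __name__ == "__main__":
    for line in closed_loop("summarise quarterly report"):
        print(line)
```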
Robin Pomeroy: So in a closed loop environment, the AI doesn't need any human input anymore. It can go off on its own, learn new things, decide what it's doing by itself. Or am I oversimplifying?
Benjamin Larsen: I think that would be taking it a step too far in terms of where we currently are, but it's, in the bigger picture, we do see, for example, computer use agents that you can simply give an agent a laptop and you can ask it to do a number of things and then maybe you can have a degree of freedom where it can actually go out and do its own kind of thing. And it would go on and take unpredictable actions.
So you don't want that in any kind of work setting at all, of course, because there's a number of risks associated with that.
But you could also say that these systems are a bit more rudimentary than that, in terms of how much is actually actions that are directed at solving or optimising a specific problem. So that goes back to this notion of singularity that we just talked about, and where does kind of direction come from? Does it come from an algorithm that's seeking to kind of maximise its own goals or objectives, or from goals and objectives that are stated by a human in the loop who's actually running the system?
So I think that's a core degree of difference in terms of control versus where we realistically are today.
But of course there are scenarios where you have to think much more carefully about what are the systems that we are creating, and once we have artificial general intelligence that is able to close the loop, as we talked about, then what is the outcome of that and how are you able to control and guarantee that, essentially.
Robin Pomeroy: There's a recent kind of hype cycle. It's funny, I'm sitting here in Geneva, you're in San Francisco, you're close to where there's interesting things going on all the time. Occasionally stories break out of the bubble maybe of Silicon Valley, and one of them that a lot of people have been paying attention to is this, well, it's changed its name a few times, OpenClaw, I think, is its latest iteration.
The reason I mention it is because you mentioned you could give a laptop to an AI and set it off doing its thing. And as I understand it, that's kind of what's been happening here. And that's what some people have been getting excited about. A lot of people are quite sceptical about what it is as well. But could you explain to anyone who's not heard of that, what it is and why some people were very excited about it?
Benjamin Larsen: OpenClaw, the latest name iteration, of course. So that would be the idea of a personal AI assistant, essentially. So let's say that I engage with OpenClaw - OpenClaw is entirely open source. So I can essentially have a personal assistant. I can grant it access to my email, my calendar and my schedules. And for the first time you kind of have this idea of a personal assistant that's actually useful and helpful.
Of course it's still brittle and it can break down a lot, but the difference Robin is that this time it's an open source system. Everybody is able to tweak the agent and interact and give it access to personal resources and so on versus a lot of the agents that we've been seeing so far, they tend to be proprietary, it's of course, a closed source. It's companies that are developing these systems and they are much more targeted at specific use cases and so on. And that's a big difference.
But what we've also seen with OpenClaw, and this is a cautionary tale, once you begin to grant access to kind of early stage agents and allow them access to all of your systems and so on, it also increases the attack surface for hackers and malicious activity that are able to go in and exploit cyber security vulnerabilities and so on.
And I certainly wouldn't feel comfortable kind of in that kind of situation or scenario having the ability of having some of my personal data exposed, essentially.
So it's kind of indicative of the direction of travel that we're going to see much more innovation in terms of agents and AI assistants in the time to come, also in the open source domain.
And that, paired with the underlying protocols such as the model context protocol and agent-to-agent protocols, means we can expect much more in the times to come, in terms of not only AI and personal assistants but also agents broadly speaking, and how that is changing the notion of the underlying infrastructure - essentially having a much more agentic web and many more agentic surfaces.
And that speaks to what we talked about earlier in terms of closing the loop, that we're going to see more autonomy in AI and in software systems in general in the time to come.
Robin Pomeroy: Right, well, let's listen to this session then from Davos. As the moderator of this session, Zanny Minton Beddoes, the Editor-in-Chief of The Economist, says - she's done this a couple of times before with Dario Amodei and Demis Hassabis - it's like interviewing the Beatles and the Rolling Stones at the same time, which is a pretty good way of queuing it up.
Before I let you go though, Ben, you mentioned the Global Future Council. These are groups with a dozen or so kind of experts from around the world that the World Economic Forum brings together. In your case, it's the Global Future Council on... what did you say?
Benjamin Larsen: It's the Global Future Council on AGI.
Robin Pomeroy: Is there already stuff people can find about that yet or is that something to look out for in the future?
Benjamin Larsen: Yes, so we have a web page where we are publishing a number of shorter briefing papers, position papers, essentially.
So we're looking into areas such as agency and control that we've been discussing here, Robin. We're looking at international collaboration, what that looks like. We're looking at AGI and policy triggers. So, for example, implications for labour markets, economic and societal transformation and so on. And we, of course, similar to the conversation that we're about to tune into, take a stance on a number of these areas and are here essentially to help inform stakeholders to make better decisions when it comes to increasingly capable systems - this notion of artificial general intelligence - what that looks like in the time to come, and what we need to do now in order to be better prepared.
Robin Pomeroy: I'll put a link to that in the show notes to this.
You mentioned one of the chairs of that Global Future Council is Yoshua Bengio. I met him in Davos a few weeks ago, recorded an interview, in which he talks a lot about the dangers, as do the two people we're about to listen to now, of artificial intelligence. And these are people who really know what they're talking about, all three of them. So please look out for that episode coming up in the next few weeks on Radio Davos.
For now, Benjamin Larsen. Thanks very much for joining us on Radio Davos.
Benjamin Larsen: Robin, it was a pleasure to be here.
Robin Pomeroy: Great, well, let's just go ahead and listen to that. You can watch this session. I'll also put a link to that, it's on the World Economic Forum's YouTube page. It's called The Day After AGI, and the first voice you'll hear will be the chief editor of The Economist, Zanny Minton Beddoes.
Zanny Minton Beddoes, Editor-in-Chief, The Economist: Welcome everybody and welcome to those of you joining us on live stream to this conversation that I have to say I have been looking forward to for months.
I was lucky enough to moderate a conversation between Dario Amodei and Demis Hassabis last year in Paris, which I'm afraid got most attention for the fact that you two were squashed on a very small love seat while I sat on an enormous sofa, which was probably my screw up.
But I said at that point that this was for me like, you know, chairing a conversation between the Beatles and the Rolling Stones, and you have not had a conversation on stage since. So this is, you know, the sequel - the bands get together again - I'm delighted. You need no introduction. The title of our conversation is The Day After AGI, which I think is perhaps slightly getting ahead of ourselves because we should probably talk about how quickly and easily we will get there. And I want to do a bit of a sort of update on that and then talk about the consequences.
So firstly, on the timeline, Dario, you last year in Paris said, we'll have a model that can do everything a human could do at the level of a Nobel laureate across many fields by '26, '27. We're in '26. Do you still stand by that timeline?
Dario Amodei, CEO and Co-Founder, Anthropic: So, you know, it's always hard to know exactly when something will happen, but I don't think that's going to turn out to be that far off.
So, the mechanism whereby I imagined it would happen is that we would make models that were good at coding and good at AI research, and we would use that to produce the next generation of model and speed it up, to create a loop that would increase the speed of model development.
We are now, in terms of the models that write code, I have engineers within Anthropic who say, I don't write any code anymore. I just let the model write the code. I edit it. I do the things around it.
I think, I don't know, we might be six to 12 months away from when the model is doing most, maybe all of what SWEs [Software Engineers] do end to end.
And then it's a question of how fast does that loop close? Not every part of that loop is something that can be sped up by AI, right? There's like chips, there's manufacturing of chips. There's training time for the model.
So it's, you know, I think there's a lot of uncertainty. It's easy to see how this could take a few years. I don't, it's very hard for me to see how it could take longer than that.
But if I had to guess, I would guess that this goes faster than people imagine and that that key element of code and increasingly research going faster than we imagine, that's going to be the key driver.
It's really hard to predict, again, how much that exponential is going to speed us up. But something fast is going to happen.
Zanny Minton Beddoes: You, Demis, were a little more cautious last year. You said a 50% chance of a system that can exhibit all the cognitive capabilities humans can by the end of the decade. Clearly, in coding, as Dario says, it's been remarkable. What is your sense of, do you stand by your prediction, and what's changed in the past year?
Demis Hassabis: Yeah, look, I think I'm still on the same kind of timeline. I think there has been remarkable progress, but I think some areas of kind of engineering work - coding, or, you could say, mathematics - are a little bit easier to see how they would be automated, partly because the output is verifiable.
Some areas of natural science are much harder to do than that. You won't necessarily know if the chemical compound you've built or this prediction about physics is correct. You may have to test it experimentally. And that will all take longer.
So I also think there are some missing capabilities at the moment in terms of like not just solving existing conjectures or existing problems but actually coming up with the question in the first place or coming up with the theory or the hypothesis. I think that's much much harder and I think, that's the highest level of scientific creativity.
And it's not clear. I think we will have those systems. I don't think it's impossible but I think there may be one or two missing ingredients.
It remains to be seen how, first of all, can this self-improvement loop that we're all working on actually close without a human in the loop? I think there are also risks to that kind of system, by the way, which we should discuss, and I'm sure we will. But that could speed things up if that kind of system does work.
Zanny Minton Beddoes: We'll get to the risks in a minute, but one other change, I think, of the past year has been a kind of change in the pecking order of the race, if you will. This time a year ago, we just had the DeepSeek moment, and everyone was incredibly excited about what happened there. And there was still a sense, you know, that Google DeepMind was kind of lagging OpenAI.
I would say that now, it's looking quite different. I mean, they've declared code red, right?
It's been quite a year. So talk me through what specifically you've been surprised by and how well you've done this year and whether you think, and then I'm going to ask you about the lineup.
Demis Hassabis: Well, look, I was always very confident we would get back to sort of the top of the leaderboards and the SOTA [State-of-the-Art] type of models across the board, because I think we've always had like the deepest and broadest research bench, and it was about kind of marshalling that all together and getting the intensity and focus and the kind of startup mentality back to the whole organisation.
And it's been a lot of work and there's still a lot of work to do. But I think you can start seeing the progress that's been made in both the models with Gemini 3, but also on the product side with the Gemini app, getting increasing market share.
So I feel like we're making great progress, but there's a tonne more work to do. And we're bringing to bear Google DeepMind as kind of like the engine room of Google, where we're getting used to shipping our models more and more quickly into the product surfaces.
Zanny Minton Beddoes: One question for you, Dario, on this aspect of it, because you've just done, or you're in the process of, you know, a new round at an extraordinary valuation too. But you are, unlike Demis, a, let's call it, independent model maker, and there is, I think, an increasing concern that the independent model makers will not be able to continue for long enough until you get to where the revenues come in.
That point is made very openly about OpenAI, but talk me through how you think about that and then we'll get to the AGI itself.
Dario Amodei: Yeah, I mean, you know, I think how we think about that is, you know, as we've built better and better models, there's been a kind of exponential relationship, not only between how much compute you put into the model and how cognitively capable it is, but between how cognitively capable it is and how much revenue it's able to generate.
So our revenue has grown 10x a year for the last three years: from zero to 100 million in 2023, 100 million to a billion in 2024, and 1 billion to 10 billion in 2025. And so those revenue numbers - I don't know if that curve will literally continue, it would be crazy if it did - but those numbers are starting to get not too far from the scale of the largest companies in the world.
So there's always uncertainty. We're trying to bootstrap this from nothing. It's a crazy thing, but I have confidence that if we're able to produce the best models in the things that we focus on then I think things will go well.
And you know, I will generally say, you know, I think it's been a good year for both Google and Anthropic. And I think the thing we actually have in common is that they're both kind of companies, or the research part of the company, that are kind of led by researchers who focus on the models, who focus on solving important problems in the world, right? Who have these kind of hard scientific problems as a North Star.
And I think those are the kind of companies that are going to succeed going forward. And I think we share that between us.
Zanny Minton Beddoes: I'm going to resist the temptation to ask you what will happen to the companies that are not led by researchers because I know you won't answer it.
But let's then go on to the predictions area now. And this, we are supposed to be talking about the day after AGI, but let's talk about closing the loop.
The odds that you will get models that will close the loop and be able to power themselves, if you will, because that's really the crux for the winner-takes-all threshold approach.
Do you still believe that we are likely to see that? Or is this going to be much more of a normal technology where followers and catch-up can compete?
Demis Hassabis: Well, look, I definitely don't think it's going to be a normal technology. So, I mean, there are aspects already that, as Dario mentioned, that it's already helping with our coding and some aspects of research.
The full closing of the loop, though, I think is an unknown. I mean I think it is possible to do. You may need AGI itself to be able to do that in some domains.
Again, where these domains, you know, where there's more messiness around them, it's not so easy to verify your answer very quickly. There's kind of NP-hard domains. So as soon as you start getting more - and I also include, by the way, for AGI, physical AI, robotics working, all of these kind of things - then you've got hardware in the loop, and that may limit how fast the self-improvement systems can work. But I think in coding and mathematics and these kind of areas, I can definitely see that working. And then the question is a more theoretical one, which is what is the limit of engineering and maths to solve the natural sciences.
Zanny Minton Beddoes: Dario, you, last year, I think it was last year that you published Machines of Loving Grace, which was a very, I would say, upbeat essay about the potential that you were going to see unfold. I'm told that you are working on an update to this, a new essay, so, you know, wait for it, guys. It's not out yet, but it is coming out. But perhaps you can give us a sort of a sneak preview of what, a year later, your big take is going to be.
Dario Amodei: Yes, so, you know, my take has not changed. It has always been my view that, you know, AI is going to be incredibly powerful. I think Demis and I kind of agree on that. It's just a question of exactly when.
And because it's incredibly powerful, it will do all these wonderful things, like the ones I talked about in Machines of Loving Grace. It will help us cure cancer. It may help us to eradicate tropical diseases. It will help us understand the universe. But there are these immense and grave risks that - not that we can't address them, I'm not a doomer - but we need to think about them and we need to address them.
And I wrote Machines of Loving Grace first. I'd love to give some sophisticated reason why I wrote that first, but it was just that the positive essay was easier and more fun to write than the negative essay. So I finally spent some time on vacation and I was able to write an essay about the risks. And even when I'm writing about the risks, I tried - you know, I'm like an optimistic person, right? So even as I'm writing about these risks, I wrote about it in a way that was like, how do we overcome these risks? How do we have a battle plan to fight them?
And the way I framed it was, you know, there's this scene from Carl Sagan's Contact, the movie version of it, where they kind of discover alien life and there's this international panel that's interviewing people to be humanity's representative to meet the alien. And one of the questions they ask one of the candidates is, if you could ask the aliens any one question, what would it be? And one of the characters says, I would ask, how did you do it? How did you manage to get through this technological adolescence without destroying yourselves? How did you make it through?
And ever since I saw it - it was like 20 years ago, I think, that I saw that movie - it's kind of stuck with me. And that's the frame that I used, which is that we are knocking on the door of these incredible capabilities, right? The ability to build basically machines out of sand, right? I think it was inevitable the instant we started working with fire. But how we handle it is not inevitable. And so I think the next few years, we're going to be dealing with, you know, how do we keep these systems under control that are highly autonomous and smarter than any human? How do we make sure that individuals don't misuse them? I have worries about things like bioterrorism. How do we make sure that nation states don't misuse them? That's why I've been so concerned about, you know, the CCP, other authoritarian governments. What are the economic impacts, right? I've talked about labour displacement a lot.
And what haven't we thought of, which in many cases may be the hardest thing to deal with of all.
So I'm thinking through how to address those risks. And for each of these, it's a mixture of things that we individually need to do as leaders of the companies and that we can do working together.
And then there's going to need to be some role for wider societal institutions like the government in addressing all of these.
But I just feel this urgency that every day, there's all kinds of crazy stuff going on in the outside world, outside AI, right? But my view is this is happening so fast and it's such a crisis, we should be devoting almost all of our effort to thinking about how to get through this.
Zanny Minton Beddoes: So I can't decide whether I'm more surprised that you, A, take a vacation, B, when you take a vacation, you think about the risks of AI, and C, that your essay is framed in terms of, are we going to get through the technological adolescence of this technology without destroying ourselves?
So my head is slightly spinning, but you then, and I can't wait to read it, but you mentioned several areas that can guide the rest of our conversation.
Let's start with jobs. Because you actually have been very outspoken about that, and I think you said that half of entry-level white-collar jobs could be gone within the next one to five years.
But I'm going to turn to you, Demis, because so far we haven't actually seen any discernible impact on the labour market. Yes, unemployment has ticked up in the US, but all of the kind of economic studies I've looked at and that we've written about suggest that this is over-hiring post-pandemic, that it's really not AI driven. If anything, people are hiring to build out AI capability.
Do you think that this will be, as economists have always argued, that it's a lump of labour fallacy and that actually there will be new jobs created? Because so far the evidence seems to suggest that.
Demis Hassabis: I mean, I think in the near term, that is what will happen, the kind of normal evolution when a breakthrough technology arrives. So some jobs will get disrupted, but I think new, even more valuable, perhaps more meaningful jobs will be created.
I think we're going to see this year the beginnings of maybe impacting the junior level, entry level kind of jobs, internships, this type of thing. I think there is some evidence - we can feel that ourselves - maybe like a slowdown in hiring and that.
But I think that can be more than compensated for by the fact there are these amazing creative tools out there, pretty much available for everyone, almost for free, such that if I was to talk to a class of undergrads right now, I would be telling them to get really unbelievably proficient with these tools. I think to the extent that even those of us building it were so busy building it, it's hard to have also time to really explore the capability overhang that even today's models and products have, let alone tomorrow's. And I think that can be maybe better than a traditional internship would have been, in terms of you sort of leapfrogging yourself into being in a useful profession.
So I think there's, that's what I see happening probably in the next five years. Maybe we again slightly differ on time scales on that. But I think what happens after AGI arrives, that's a different question. So I really think we would be in uncharted territory at that point.
Zanny Minton Beddoes: Do you think it's going to take longer than you thought last year when you said half of all entry-level white-collar jobs?
Dario Amodei: I have about the same view. I actually agree with you and with Demis that at the time I made the comment there was no impact on the labour market. I wasn't saying there was an impact on the labour market at that moment.
Now I think maybe we're starting to see just a little beginnings of it in software and coding. I even see it within Anthropic, where I can look forward to a time where on the more junior end and then on the intermediate end, we actually need less and not more people. And we're thinking about how to deal with that within Anthropic in a sensible way.
One to five years as of six months ago, I would stick with that. You know, if you kind of connect this to what I said before, which is, you know, we might have AI that's better than humans at everything in you know maybe one to two years, maybe a little longer than that. Those don't seem to line up. The reason is that there's this lag and there's this replacement thing, right? I know that the labour market is adaptable, right, just like you know 80 percent of people used to do farming. You know, farming got automated, and then they became factory workers, and then knowledge workers. So, you know, there is some level of adaptability here as well, right? We should be economically sophisticated about how the labour market works.
But my worry is, as this exponential keeps compounding, and I don't think it's going to take that long, again, somewhere between a year and five years, it will overwhelm our ability to adapt.
I think I may be saying the same thing Demis is, just factored out of that difference we have about timelines, which I think ultimately comes down to how fast you close the loop on coding.
Zanny Minton Beddoes: How much confidence do you have that governments get the scale of this and have, are beginning to think about what policy responses they need to have?
Demis Hassabis: I don't think there's anywhere near enough work going on about this. I'm constantly surprised, even when I meet economists at places like this, that there are not more professional economists, professors, thinking about what happens.
And not just sort of on the way to AGI, but even if we get all the technical things right that Dario is talking about and the job displacements, one question, we're all worried about the economics of that, but maybe there are ways to distribute this new productivity, this new wealth more fairly. I don't know if we have the right institutions to do that, but that's what should happen at that point. There should be, you know, we may be in a post-scarcity world.
But then, the things that keep me up at night - there are bigger questions than that at that point, to do with meaning and purpose and a lot of the things we get from our jobs, not just economically. That's one question, but I think that may be easier to solve, strangely, than what happens to the human condition and humanity as a whole.
And I think I'm also optimistic we'll come up with new answers there. We do a lot of things today, from extreme sports to art, that aren't necessarily directly to do with economic gain. So I think we will find meaning, and maybe there'll be even more sort of sophisticated versions of those activities. Plus I think we'll be exploring the stars, so there'll be all of that to factor in as well in terms of purpose.
But I think it's really worth thinking now even on my timelines of like five to 10 years away, that isn't a lot of time before this comes.
Zanny Minton Beddoes: How big do you think is the risk of a popular backlash against AI that will somehow kind of cause governments to do what from your perspective might be stupid things? Because I'm just thinking back to the era of globalisation in the 1990s when there was indeed some displacement of jobs, governments didn't do enough, the public backlash was such that we've ended up sort of where we are now. Do you think that there is a risk that there will be a growing antipathy towards what you are doing and your companies in the kind of body politic?
Demis Hassabis: I think there's definitely a risk, I think that's kind of reasonable, there's fear and there's worries about these things like jobs and livelihoods. I think it's going to be very complicated the next few years, I think, geopolitically, but also the various factors here, like we want to and we're trying to do this with AlphaFold and our science work and Isomorphic, our spin-out company, solve all disease, cure diseases, come up with new energy sources. I think as a society, it's clear we'd want that.
I think maybe the balance of what the industry is doing is not balanced enough towards those types of activities. I think we should have a lot more examples - and I know Dario agrees with me - of, like, AlphaFold-like things that are sort of unequivocal goods in the world.
And I think actually it's incumbent on the industry and all of us leading players to show that more, demonstrate that, not just talk about it, but demonstrate that.
But then it's going to come with these other attendant disruptions. But I think the other issue is the geopolitical competition. There's obviously competition between the companies, but also the US and China primarily. So unless there's international cooperation or understanding around this - which I think would be good, actually, in terms of things like minimum safety standards for deployment, and I think Dario would agree on that as well - I think it's vitally needed. This technology is going to be cross-border. It's going to affect everyone. It's going to affect all of humanity. Actually, Contact is one of my favourite films as well. Funnily enough, I didn't realise it was yours too, Dario. But I think those kind of things need to be worked through.
And if we can, maybe it would be good to have a bit of a slightly slower pace than we're currently predicting, even my timelines, so that we can get this right societally. But that would require some coordination.
Dario Amodei: I prefer your timelines. I'll concede.
Zanny Minton Beddoes: But Dario, let's turn to this now, because one thing, since we last spoke in Paris, the geopolitical environment has, if anything, I don't know, complicated, mad, crazy, whatever phrase you want to use.
Secondly, the U.S. has a very different approach now towards China. It's a much more, it's a kind of no holds barred, go as fast as we can, but then sell chips to China.
And that is it. So you've got a different attitude towards the United States. You've got a very strange relationship between the United States and Europe right now, geopolitically.
Against that, I mean, I hear you talk about it would be nice to have a CERN-like organisation. I mean it's a million years from where we are, from the real world.
So, in the real-world, have the geopolitical risks increased? And what, if anything, do you think should be done about that? And the administration seems to be doing the opposite of what you were suggesting.
Dario Amodei: Yeah, I mean, look, you know, we're just trying to do the best we can to, you know, we're just one company and we're trying to operate in, you know, the environment that exists, no matter how crazy it is.
But, you know, I think at least my policy recommendations haven't changed. That, you know, not selling chips is one of the biggest things we can do to, you know, make sure that we have the time to handle this.
You know, I said before, I prefer Demis' timeline. I wish we had five to 10 years, you know, so it's possible he's just right and I'm just wrong, but assume I'm right and it can be done in one to two years. Why can't we slow down to Demis's timeline?
Zanny Minton Beddoes: Well, you could just slow down.
Dario Amodei: Well, no, but the reason, the reason we can't do that is because we have geopolitical adversaries building the same technology at a similar pace. It's very hard to have an enforceable agreement where they slow down and we slow down.
And so if we can just not sell the chips, then this isn't a question of competition between the U.S. and China, this is a question of competition between me and Demis, which I'm very confident that we can work out.
Zanny Minton Beddoes: And what do you make of the logic of the administration, which, as I understand it, is we need to sell them chips because we need to bind them into U.S. supply chains?
Dario Amodei: I think it's a question not just of time scale, but of the significance of the technology, right?
If this was telecom or something, then all this stuff about proliferating the US stack and wanting to build our chips around the world, to make sure that these random countries in different parts of the world build data centres that have Nvidia chips instead of Huawei chips - I think of this more as like it's a decision: are we going to sell nuclear weapons to North Korea because that produces some profit for Boeing, where we can say, OK, yeah, these cases were made by Boeing, the US is winning, this is great. That analogy should just make clear how I see this trade-off, that I just don't think it makes sense, and we've done a lot of more aggressive stuff towards China and other players that I think is much less effective than this one measure.
Zanny Minton Beddoes: One more area for me, and then I hope we'll have time for a question or two.
The other area of potential risk that doomers worry about is a kind of all-powerful malign AI. And I think you've both been somewhat sceptical of the doomer approach, but in the last year, we have seen these models showing themselves to be capable of deception, duplicity.
Do you think differently about that risk now than you did a year ago, and is there something about the way the models are evolving that we should put a little bit more concern on that?
Dario Amodei: Yeah, I mean, you know, since the beginning of Anthropic, we've kind of thought about this risk.
Our research at the beginning of it was very theoretical, right? You know, we pioneered this idea of mechanistic interpretability, which is looking inside the model and trying to understand - looking inside its brain, trying to understand why it does what it does - just as human neuroscientists, which we actually both have a background in, try to understand the brain.
And I think as time has gone on, we've increasingly documented the bad behaviours of the models when they emerge and are now working on trying to address them with mechanistic interpretability.
So I think I've always been concerned about these risks. I've talked to Demis many times. I think he has also been concerned about these risks.
I think I have definitely been - and I would guess Demis as well, although I'll let him speak for himself - sceptical of doomerism, which is, you know, we're doomed, there's nothing we can do, or this is the most likely outcome.
I think this is a risk. This is a risk that if we all work together we can address. We can learn through science to properly, you know, control and direct these creations that we're building. But if we build them poorly, if we're all racing and we go so fast that there's no guardrails, then I think there is risk of something going wrong.
Zanny Minton Beddoes: So I'm going to give you a chance to answer that in the context of a slightly broader question, which is, over the past year, have you grown more confident of the upside potential of the technology, science, all of the areas that you have talked about a lot, or are you more worried about the risks that we've been discussing?
Demis Hassabis: Look, Zanny, I've been working on this for 20 plus years, so we already knew - the reason I've spent my whole career on AI is the upsides: it's basically the ultimate tool for science and understanding the universe around us. I've sort of been obsessed with that since I was a kid. And building AI should be the ultimate tool for that if we do it in the right way.
The risks also we've been thinking about since at least the start of DeepMind 15 years ago. And we kind of sort of foresaw that if you got the upsides, it's a dual-purpose technology, so it could be repurposed by, say, bad actors for harmful ends. So we've needed to think about that all the way through.
But I'm a big believer in human ingenuity. But the question is having the time and the focus and all the best minds collaborating on it to solve these problems. I'm sure if we had that, we would solve the technical risk problem. It may be we don't have that, and then that will introduce risk because we'll be sort of, it'll be fragmented, there'll be different projects and people will be racing each other. Then it's much harder to make sure these systems that we produce will be technically safe. But I feel like that's a very tractable problem if we have the time and space.
Zanny Minton Beddoes: I want to make sure there's time for one question. Gentlemen, keep it very short because we've got literally two minutes.
Audience member: Thanks very much. I'm Philip, co-founder of Starcloud building data centres in space. I wanted to ask a slightly philosophical question. The sort of strongest argument for doomerism to me is the Fermi paradox, the idea that we don't see intelligent life in our galaxy. I was wondering if you guys have any thoughts?
Demis Hassabis: I've thought a lot about that. That can't be the reason, because we should see all the AIs that have... So just for everyone's understanding, the idea is - well, it's sort of unclear why that would happen, right? So if the reason there's a Fermi paradox - there are no aliens because they get taken out by their own technology - then we should be seeing paper clips coming towards us from some part of the galaxy. And apparently, we don't.
We don't see any structures, Dyson spheres, nothing, whether they're AI or natural or sort of biological. So to me, there has to be a different answer to the Fermi Paradox. I have my own theories about that. But it's out of scope for the next minute.
But you know, I just feel like, my feeling is that we're past the Great Filter. It was probably multicellular life, if I had to guess. It was incredibly hard for biology to evolve that. So there isn't a comfort of what's going to happen next. I think it's for us to write, as humanity, what's going to happen next.
Zanny Minton Beddoes: This could be a great discussion, but it is out of scope for the next 36 seconds. But what isn't - 15 seconds each - what, when we meet again, I hope next year, the three of us, which I would love, what will have changed by then?
Dario Amodei: I think the biggest thing to watch is this issue of AI systems building AI systems. How that goes, whether that goes one way or another, that will determine, you know, whether it's a few more years until we get there, or if we have wonders and a great emergency in front of us that we have to face.
Zanny Minton Beddoes: AI systems building AI systems.
Demis Hassabis: I agree on that, so we're keeping in close touch about that.
But also, I think outside of that, there are other interesting ideas being researched, like world models, continual learning. These are the things I think will need to be cracked if self-improvement doesn't sort of deliver the goods on its own - then we'll need these other things to work. And then I think things like robotics may have their sort of breakout moment.
Zanny Minton Beddoes: But maybe on the basis of what you've just said we should all be hoping that it does take you a little bit longer and indeed everybody else to give us more time.
Demis Hassabis: I would prefer that. I think that would be better for the world.
Zanny Minton Beddoes: Well, you guys could do something about that. Thank you both very much.
Robin Pomeroy: Zanny Minton Beddoes was speaking to Demis Hassabis and Dario Amodei at the World Economic Forum Annual Meeting 2026 in Davos. You can watch that session on our website or YouTube channel - links in the show notes.
There are loads of great conversations, like that one, that were live streamed from Davos, and you can watch them all on catch up. And we are also publishing many of them on our sister podcast feed Agenda Dialogues - available wherever you get podcasts.
To find out more about the work of the Forum's Centre for AI Excellence, the AI Global Alliance and the Global Future Council on Artificial General Intelligence, please also click the links in the notes.
And don't miss my interview with Yoshua Bengio - a godfather of AI - on how he is working to counter the risks posed by the technology - coming soon to Radio Davos. Follow us wherever you get podcasts or visit wef.ch/podcasts, where you will also find Meet the Leader, which also has some amazing interviews from Davos dropping in the coming weeks.
This episode of Radio Davos was presented, produced and edited by me, Robin Pomeroy and by my colleague Benjamin Larsen. Studio production was by Taz Kelleher.
Radio Davos will be back next week, but for now thanks to you for listening and goodbye.
Artificial general intelligence (AGI) is that point in the future when the machines can do pretty much everything better than humans. When will it happen, what will it look like, and what will be the impact on humanity?
Two of the brightest minds working in AI today, Demis Hassabis, Co-Founder and CEO of Google DeepMind, and Dario Amodei, Co-Founder and CEO of Anthropic, speak to Zanny Minton Beddoes, Editor-in-Chief of The Economist.
Benjamin Larsen, an expert in AI at the World Economic Forum, introduces the conversation and gives us a primer on AGI.
You can watch the conversation from the Annual Meeting 2026 in Davos here: