From diplomacy to defence, AI is markedly changing geopolitics. Shifts in data ownership and infrastructure will transform some stakeholders while elevating others, reshaping sovereignty and influence.
How is the landscape evolving and what does it mean for the existing international architecture?
This is the full audio from a session at the World Economic Forum Annual Meeting 2024. Watch it here: https://www.weforum.org/events/world-economic-forum-annual-meeting-2024/sessions/the-geopolitical-power-of-ai/
Speakers:
Nick Clegg, President, Global Affairs, Meta Platforms Inc.
Mustafa Suleyman, Co-Founder and Chief Executive Officer, Inflection AI, Inc.
Leo Varadkar, Taoiseach, Government of Ireland
Karoline Edtstadler, Federal Minister for the European Union and Constitution, Federal Chancellery of Austria
Jeremy Jurgens, Managing Director, World Economic Forum
Dmytro Kuleba, Minister of Foreign Affairs, Ministry of Foreign Affairs of Ukraine
Andrew R. Sorkin, Editor-at-Large; Columnist, The New York Times Company
Check out all our podcasts on wef.ch/podcasts.
Podcast transcript
This transcript has been generated using speech recognition software and may contain errors. Please check its accuracy against the audio.
Andrew Sorkin: Good afternoon and evening everybody. I'm Andrew Ross Sorkin. Thank you so much for joining us.
We are going to have a conversation – I think one of the most important happening here in Davos – not just about AI, which you've been hearing a lot about over the past several days, but really about the geopolitics and the implications: for nation states, for regulation, for what this all looks like when it comes to defence and so much more. And we have such a great group, and we want to make this as interactive a conversation as we can as well.
Sitting next to me, the Prime Minister Leo Varadkar of Ireland is with us. Next to him, Karoline Edtstadler, Austrian Minister for the EU and Constitutional Affairs, and Dmytro Kuleba, Minister of Foreign Affairs of Ukraine. And we're going to get into AI and its role in the war there. I also want to point out that next to him is Nick Clegg from Meta, a former politician. I want to talk to him about sitting on both sides of this discussion. And then finally, Mustafa Suleyman is here. He's a co-founder and chief executive of Inflection AI. He's also the co-founder of DeepMind, which was acquired by Google in 2014 and really one of the earliest innovators in the AI space. So thank you so much for joining us.
I'm going to go to Nick first, if I could, because, as I said at the beginning, you've sat on both sides of this discussion: being a politician and thinking about technology and its impact on society, its impact on the state, if you will, and now sitting in the role of corporation and business – and I think a lot of people oftentimes now look at businesses as nation states unto themselves. So I'm curious, sitting where you sit today, and also looking at the conversation about social media, frankly, which is where you work now and your speciality. For many years, people have asked: can this be regulated? What's its impact going to be on the nation state, elections, defence – all of these issues? Can government ever keep up with business?
Nick Clegg: Well, in one sense, no. Of course, the velocity, particularly of technological change, is quite different to the pace of political and regulatory legislative debate. But there are degrees of sort of misalignment.
And I think it is a good thing – actually a very good thing – that the political, societal and ethical debate around generative AI is happening in parallel as the technology evolves, sometimes in an imprecise way, sometimes in a rather hyperbolic way, but happening nonetheless; the fact that we're all speaking about AI this week in Davos is another manifestation of it. That is a lot healthier than what we've seen over the last 15 to 18 years, where you had an explosion of social media and many governments are only now getting around to deciding what kind of guardrails and legislation they should put in place, 15 years later, after a great pendulum swing from a sort of tech euphoria and utopianism to tech pessimism. So I think it's much better if those things work in parallel.
The only thing I would say is we also need to ask ourselves, who has access to these technologies? I personally think it is just unsustainable, impractical, infeasible to cleave to the view that only a handful of West Coast companies with enough GPU capacity, enough deep pockets and enough access to data can run this foundational technology. That's why I'm such an advocate of open source to democratize this.
And then the final thing I'd say is, if you want to regulate this space: you can't respond to something, you can't react to something, you can't regulate something if you can't first detect it. So if I were still in politics, the thing I would put right at the front of the queue is getting all of the industry – the big platforms who are already working on this but, crucially, the smaller players as well – to really force the pace on common standards for identification: what's called invisible watermarking of the images and videos that generative AI tools produce, which does not exist at the moment. Each company is doing its own thing. There are some interesting discussions happening in something called the Partnership on AI but, in my view, that's the most urgent task facing us today.
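To make the idea Clegg describes concrete: invisible watermarking means embedding a machine-readable mark in the media itself rather than in visible labels. The sketch below is only a toy illustration under simple assumptions – it hides a payload in the least-significant bits of an image array, whereas real proposals (such as C2PA provenance metadata or robust frequency-domain marks) are designed to survive compression and editing; all names here are invented for the example.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide `payload` in the least-significant bits of a uint8 image."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.flatten()  # flatten() returns a copy, original is untouched
    if bits.size > flat.size:
        raise ValueError("image too small for payload")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, n_bytes: int) -> bytes:
    """Read `n_bytes` of payload back out of the least-significant bits."""
    bits = pixels.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

# Quick round-trip check on a random image:
img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
marked = embed_watermark(img, b"ai-generated")
assert extract_watermark(marked, 12) == b"ai-generated"
```

The common-standards point is that detection only works if every generator embeds, and every platform reads, the same kind of mark.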
Andrew Sorkin: I'm going to go out of order on the protocol here, Prime Minister; I ask you to indulge me for one second because I want to ask Mustafa a question. One of the things that's been so fascinating to watch, I think the public's all been watching, is the industry has been quite outspoken about saying there's going to be a lot of problems with AI. Please come regulate us. Please, we would love you to regulate us. Is that genuine? Is that sincere? And what is that about? And why is that happening now, this time when it hasn't happened before? And is there a real view inside the industry that it actually can happen?
Mustafa Suleyman: Look, I think that those calls are sincere but I think we are all a bit confused. This is going to be the most transformational moment, not just in technology but in culture and politics of all of our lifetimes. We're going to witness the plummeting cost of power. AI is really the ability to absorb vast amounts of information, generate new kinds of information and take actions on that information. Just like any organization, whether it's a government or a company or any individual, that's how we all interact in the world.
And we're commoditizing that – reducing the cost of producing and distributing it – to the point where, ultimately, it will be widely available to everybody, potentially in open source and in other forms, and that is going to be massively destabilizing. So whichever way you look at it, there are incredible upsides, and there's also the potential to empower everybody to act on conflict in ways they otherwise couldn't, because we have different views and opinions in the world.
Andrew Sorkin: Prime Minister, how concerned are you that AI can ultimately allow individuals to become almost nation states and to influence things in ways they never could before? We were talking earlier: even right now, there are deepfakes of you all over the internet saying all sorts of things that you never would have said, some of them quite believable.
Leo Varadkar: Yeah, they're mostly selling cryptocurrency and financial products, so I hope most people realize that's not what I actually do. But it is a concern because it's got so good, it's only going to get better. And I hear audio of politicians that is clearly fake but people believe it.
Andrew Sorkin: But it's clearly fake to you. What are you going to do about making sure that the population – is it that the population needs to be educated enough to be able to identify and spot this on their own? Is the government supposed to do that? Is the technology company supposed to?
Leo Varadkar: I think the point that Nick made around detection is very valid; it's going to be really important so we can find out where content comes from. The platforms have a huge responsibility to take down content and to take it down quickly. Some are better at that than others. But people and societies are also going to have to adapt to this new technology.
That will happen anyway. Anytime there is a new technology, people learn how to live with it, but we're going to need to try and help our societies to do that, and that is the whole space of AI awareness and AI education. But as a technology, I think it is going to be transformative. I think it's going to change our world as much as the internet has, maybe even as much as the printing press did. We need to see it in that context, and the positive applications are extraordinary too.
I'm a doctor by profession and I'm learning all the time about what AI can do in health care. And just think of all the unmet need out there in the world for health care, because people can't see a doctor, or can't get the test that they need, or they're waiting for the test results to come back. So much can be done to make our world so much better – everyone will effectively have a personal assistant through AI. So what can be done, to my mind, is extraordinary and extraordinarily positive. But like any major new technology, there are real dangers too.
Andrew Sorkin: How much do you worry about jobs for your people five, 10 years out? You've read the reports, everybody's read the reports that have come out all week about what's going to ultimately happen to jobs if you believe those reports.
Leo Varadkar: I don't because history tells us that anytime there's a technological advancement, people believe it will eliminate jobs. What usually happens is jobs change and some jobs become obsolete and new forms of employment are created. Now, maybe this will be the first time in history that doesn't happen. In which case, it's an even bigger transformation than we may expect. But I think two things are important.
One is making sure that we're real and meaningful about lifelong education, second-chance education and the opportunity to retrain, and that it becomes financially viable for people, because it's probably already the case that very few people have the same job for life. Most people have multiple careers. So we need to make sure that that's normalized in our education systems, much more so than it is now.
And one thing it might do, if we use AI to our advantage as societies, is enable us to work less. Maybe it'll be possible – if it's distributed fairly, of course – to allow people to have shorter working days and shorter working weeks with the help of AI. But that won't just happen organically. We'll have to make that work.
Andrew Sorkin: Let me ask you an EU question. This is the internal market commissioner Thierry Breton writing, after the EU struck a deal on AI called the AI Act: "The EU becomes the very first continent to set clear rules for the use of AI." And he went on to say, "the AI Act is much more than a rulebook, it's a launch pad" – a launch pad – "for EU startups and researchers to lead the global AI race."
So we hear about regulation on one side and we've heard of probably the most aggressive regulation on tech that's come out of Europe. But the least amount of true innovation on this topic seems to have come out of Europe too. Do you see a correlation between the two?
Karoline Edtstadler: Well, first of all, thanks for the invitation – and for accepting me as the only woman on a panel where we are also discussing very technical details. This should be natural, but I'm happy to be here. I can certainly say that I agree with most of the points mentioned here by the commissioner, because the European Union is definitely the first institution, if you want to call it that, that tries to categorize the risks of AI. And I think we can agree on one point.
AI is a very powerful technology, and we see a lot of downsides also emerging from AI. Maybe we don't know everything yet – I'm sure we don't know everything yet. But the attempt of the European Union is to categorize the risks, and from those risk categories follow certain things that have to be done. For example, we all know spam filters and video games – that's the minimal risk, the limited risk. But we also see risks we don't want to accept, for example social scoring, and for those I think it's really important to act.
So if you ask me, or tell me, that not a lot of startups are coming out of Europe: I think it's definitely the responsibility of an institution like the European Union to care for the future, and if you ask me whether we should stop here, I say no. We need global rules. And I think in an ideal world we can agree on rules, and some restrictions, across the whole world, together with the industry and together with the technology side, to discuss what can happen.
I don't share the dystopia. I don't have the anxiety that "they could take over and overpower us" – not at all. But I'm a realist. I was a criminal judge in my former life, and I think it's really time now to set some rules, to set some standards.
Andrew Sorkin: Would you say that the United States, for example – since a lot of the innovation seems to be coming from there – is failing to properly regulate?
Karoline Edtstadler: Well, first of all, in the United States there is an attempt to somehow regulate AI – we had a discussion yesterday, I think you were also there. So we are definitely the first, and there is already a political agreement in the European Union, but we have to finalize it now. Beyond the European Union, I'm also on the leadership panel of the Internet Governance Forum of the United Nations, and there we try to describe the problem and to find solutions for the risks which are unacceptable.
We don't want to hinder innovation – I would like to be very clear on that. We are not trying to hinder innovation; on the contrary. But we have to make sure that we keep human oversight, that we keep it explainable and – you mentioned it already – that we also educate people, so that they are able to deal with these risks and know which risks can emerge from this technology.
Andrew Sorkin: Let's talk about Ukraine. Let's talk about what's happening in Ukraine and the war in Ukraine but also how Ukraine is using AI in this war because I think one of the things a lot of people both see as upsides and downsides is how AI ultimately can be used on the battlefield, can be used in the context of war, not just on the battlefield itself but on the battlefield of information and misinformation.
Dmytro Kuleba: Well, just a quick example: you usually need up to 10 rounds to hit one target, because of the corrections that you have to make with every new shot. If you have a drone connected to an AI-powered platform, you will need one shot – and that has huge consequences for production, for purchasing, for management.
One of the biggest difficulties in the counter-offensive that we were undertaking last summer was actually that both sides, Ukraine and Russia, were using surveillance drones connected to striking drones to such an extent that soldiers physically could not move: the moment you walk out of the forest or the trench, you get immediately detected by the surveillance drones, which send a message to the striking drones, and you're dead. So it already has a huge effect on warfare, and 2023 was pivotal in the transformation of warfare with the use of AI.
And in 2024, we will be observing some publicly undebated but enormous efforts to test and apply AI on the battlefield. But the power of AI is much broader than that. You know, when nuclear weapons emerged, they completely changed the way humanity understands security and security architecture and, to a large extent, brought – in addition to new diplomacy – a completely different reset of the rules. Now, AI will have even bigger consequences for the way we think of global security.
You do not need to hold a fleet thousands of kilometres away from your country if you have a fleet of drones smart enough to operate in the region, to say the least. And when quantum computing arrives and is matched with AI, things will get even harder for global security and the way we manage the world. So when we in Ukraine – because somehow God decided to put us at the edge of history – are thinking of the next levels of threats we will be facing, and Russia will not be on the side of civilized, regulated thinking about AI, we will be opposing a completely different enemy.
And on a broader scale, I'm sure that there will be two camps – two poles – in the world in terms of the approach to AI. And when people speak about a polarized world, it will be even more polarized because of the way AI is treated. So all of this will change enormously: first, how humanity manages security; how diplomats try to keep things sane and manageable; and most importantly, how we do all our work. Diplomacy as a job will become either extremely boring or as exciting as ever after the introduction of AI.
Andrew Sorkin: Mustafa, as somebody who's personally inventing this technology: what do you think when you hear that? And also take us to the point where "at some point we will get to AGI" – it's going to happen, artificial general intelligence. And when that happens, whoever has that technology – is that like having a nuclear bomb? Is that like being a nation state? If your company, or OpenAI, has that first, are we supposed to think about it differently?
Mustafa Suleyman: I think that's far enough away that it's quite hard to properly speculate on the consequences. But, you know, as I was listening to Dmytro speak, I was reminded that I think one of the most remarkable things of 2023 is how much of the software platform that is enabling the resistance in Ukraine is, in fact, open-source – the targeting mechanisms, the surveillance mechanisms, the image classification models.
So one of the obvious characteristics of this new wave is that these tools are omni-use – "dual use" doesn't really cut it anymore. They're inherently useful in so many different settings. And actually, when we look back through history at all of the major general-purpose technologies that have transformed our world, there's a very consistent characteristic: to the extent that things get useful, they get cheaper, they get easier to use and they spread far and wide.
So we have to assume that's going to be the continued destiny over the next couple of decades, and manage the consequences of power – the ability to take actions – becoming cheaper and widely available to everybody, good and "bad."
Andrew Sorkin: Here's a technical question; maybe Nick and Mustafa can speak to it. There's a separate debate going on about open source versus closed source. And Meta has taken an open source approach. I think you've taken a closed source approach for now.
Mustafa Suleyman: That's more a commercial decision than anything else. We're building our own models because we think they're better – in fact, they're objectively better than Llama 2, according to the latest benchmarks.
Andrew Sorkin: But for now that's true. Well, explain that approach but then also contextualize it if you could, as it relates to the public's ability to fully understand what is going on and their access to it.
Nick Clegg: Can I just amplify something Mustafa raised that is terrifically important? You mentioned AGI, and I think one of the things that somewhat paralyzed and distorted the debate last year – particularly through the great hype cycle, when generative AI became a concept familiar to people for the first time – was that everyone immediately started making predictions about where it was going to end up: that we were going to have some "all-knowing," "all-powerful"... I mean.
By the way, ask data scientists for a definition of AGI and you get a different definition from each one; there isn't even consensus on what AGI precisely means. And I think what we then ended up doing was saying, "Oh, we can't open source because it could be really dangerous in some distant future which we can't even guess at."
Right now, these models are not... – I don't want to be misinterpreted – they're more stupid than many people assume.
Andrew Sorkin: Sam Altman has called them incompetent helpers.
Nick Clegg: They don't understand the world. They can't reason, they can't plan. They don't know the meaning of the words that they produce – those are actually just tokens to them. They're highly, highly sophisticated, versatile autocomplete systems. But we should be careful not to anthropomorphize artificial intelligence: we confer almost our own intelligence on something which does not have human-level intelligence.
Now, there's a debate. Some people think human-level intelligence is a proximate thing. Others think it's a much more distant prospect. But when it comes to access and who has control of this technology – for the technology that we have right now and that we're likely to have in the near future – those are absolutely the reasons why it should not be kept under lock and key by a small handful of very rich corporations.
It is obvious that it is better for the world, particularly for the developing world, particularly for the Global South, for people to be able to use these systems without having to spend tens of billions of dollars on the GPU and compute capacity that only companies like Mustafa's and the one I work for can afford. In the future, if these systems do develop an autonomy and agency of their own, sure, we're in a different paradigm. But we're nowhere near that yet.
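Clegg's "sophisticated autocomplete" description refers to the core loop of a language model: map a context to a probability distribution over a vocabulary of tokens and sample the next one. A minimal caricature of that loop, with made-up logits standing in for a trained network:

```python
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat", "."]

def next_token(logits: np.ndarray, temperature: float = 1.0) -> str:
    """Turn raw scores into probabilities (softmax) and sample one token."""
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return str(np.random.choice(vocab, p=probs))

# A real model computes the logits from the whole context with a trained
# neural network; random numbers here just show the control flow.
context = ["the", "cat"]
for _ in range(3):
    context.append(next_token(np.random.randn(len(vocab))))
print(" ".join(context))
```

Nothing in that loop represents meaning or a world model, which is the substance of Clegg's point; the apparent fluency of real systems comes from the learned distribution, not from the loop itself.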
Mustafa Suleyman: Well, the definition of intelligence is in itself a distraction, right? It's a pretty unclear, hazy concept; we shouldn't really be using it. We should be talking about capabilities. We can measure what a system can do, and we can often do so with respect to what a human can do. Right? Can this agent talk to us knowledgeably about many of the topics that we all talk to LLMs about? In time, can it schedule? Can it plan? Can it organize? Can it buy? Can it book?
Those observable actions carry with them risks and benefits, and we can do a very sensible evaluation of those things. So I think we have to step back from the kind of engineering, research-led, exciting definition that we've used for 20 years to excite the field – basically, to get academic research funded – and actually focus now on what these things can do. That's where I think we have to complement the EU AI Act and the work that has been done there: focus on a risk-based approach to specific sectors in a very measurable way. And I think that's a sensible first step. You would agree with that.
Nick Clegg: I think the idea of identifying risk is right – it's always better to try and regulate the risks, not the technology itself.
Mustafa Suleyman: You might need to regulate autonomy, though; you just said that, right? So we will need to regulate capabilities as well as applications, because autonomy is clearly much more dangerous than having a human in the loop. Likewise, generality is more dangerous than narrowness: if it's a narrow and specific application, it's less dangerous.
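One way to read Suleyman's point in practice: rather than arguing over "intelligence," score an agent against a checklist of observable tasks. A purely hypothetical sketch – the tasks, the `agent` callable and the crude graders are all invented for illustration, and real evaluations would use held-out test sets and human review:

```python
from typing import Callable

# Each capability is paired with a toy automated grader.
TASKS: dict[str, Callable[[str], bool]] = {
    "schedule a meeting": lambda out: "calendar" in out.lower(),
    "summarize a contract": lambda out: 0 < len(out.split()) < 100,
    "book a (mock) flight": lambda out: "confirmation" in out.lower(),
}

def evaluate(agent: Callable[[str], str]) -> dict[str, bool]:
    """Run the agent on every task and record pass/fail per capability."""
    return {task: grader(agent(task)) for task, grader in TASKS.items()}

# Stub agent, standing in for a real LLM-backed system:
def stub(task: str) -> str:
    return f"Done: added to calendar, confirmation #42 ({task})"

print(evaluate(stub))
```

A capability profile like this, rather than a single "intelligence" label, is what a risk-based rule could attach obligations to – with autonomy and generality as the aggravating factors the exchange above describes.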
Andrew Sorkin: Let me ask the politicians on the stage. Let's say somebody tries to influence the outcome of an election.
Mustafa Suleyman: Never.
Dmytro Kuleba: I can tell you dozens of stories about it.
Andrew Sorkin: Is the responsibility – and you can tell your story in a sec – with the human who is taking the technology and leveraging and using it, or with the folks who are building the technology to begin with, who have allowed it to even be used this way? See the distinction?
Leo Varadkar: I think principally it's the person who is trying to misuse the technology for nefarious ends.
Andrew Sorkin: Right. But do the folks who built the technology have to build in safeguards so that it's never "allowed"? Right? This is a chicken-and-egg issue.
Leo Varadkar: I get the question, but is it possible to do that? If you apply that to any other technology – how do you write that in?
Andrew Sorkin: Well, I think there are efforts going on right now with AI to effectively try to put some of those safeguards around it, no?
Mustafa Suleyman: It's pretty hard to stop someone doing bad things with a cell phone or a laptop, and I think these technologies are not that dissimilar. Having said that, there are specific capabilities – for example, coaching someone through manufacturing a bioweapon or a bomb in general. Clearly, our models shouldn't make it easier for an average non-technical person to go and manufacture anthrax. That would be both illegal and terrible for the world. And so, whether they're open source or closed source, we can actually retard those capabilities at source rather than relying on some bad actor not to do bad things.
Karoline Edtstadler: Coming back to the point you asked – whose fault is it? To break it down: I think there is no technology that cannot be misused. The question is, can you, as a user, see that you are being misled? You should be educated enough to see that this is a deepfake video of the Prime Minister of Ireland and not a real video. And this is what will happen, I guess, in this super election year 2024. So we have to be very diligent in trying to educate people, trying also to push innovation in filtering these fake pictures and videos, and then getting social media, or whoever is distributing them, to watermark them. I think this is our common task and our common responsibility.
Leo Varadkar: I think, though, that ironically the use and misuse of AI in politics might have two unintended consequences. One, it might make people value trusted sources of information more. You might see a second age of traditional news: people wanting to go back to getting their news from a public service broadcaster or a newspaper with a 200-year record of generally getting the facts right. That might be one unintended outcome. Another, in politics, might be that politics starts becoming more organic again: people actually want to see the candidate physically, with their own eyes; they want them to knock on their door again, to be outside their supermarket. That might yet become an unintended consequence of people becoming so sceptical of what they're seeing in electronic format.
Andrew Sorkin: Which is suggesting that this is going to undermine trust even more?
Leo Varadkar: I think it will, unfortunately. On balance, I think it will. But I think that can be dealt with in different ways: going back to trusted sources, real human engagement having additional value again, and then putting the tools in place so that we can deal with the misuse of AI when it happens.
Nick Clegg: I actually agree with all of that, and I can easily imagine it in the next few years. We've just talked about watermarking of synthetic content. To the Prime Minister's point, I can easily imagine a time when we will all be looking out for watermarking of authentic and verified content – coming at it from the other end as well – so that you have a sort of reassurance. Because the internet can be full of not just synthetic content but hybrid content, and most of it's going to be innocuous and totally innocent. But oddly enough, I agree: I think there'll be a real kind of longing to be absolutely sure that what you're seeing has a certain authenticity.
The only other thing I'll say about AI: yes, of course, generative AI accelerates the production of synthetic material and so on. But it's also one of the best weapons that distribution platforms like Meta have to actually identify bad content in the first place. If you look at, for instance, the prevalence of hate speech on Facebook today, it's now 0.01% – and that's, by the way, independently audited. That means for every 10,000 pieces of content you see, you might see one piece of hate speech. That is a very significant reduction over the last two years, for one reason alone. One reason alone: AI. AI has become an incredibly effective tool through the improvement in the classifiers going after bad stuff. So it's a sword and a shield, I think, that we need to bear in mind.
Mustafa Suleyman: And investing in human moderators.
Andrew Sorkin: Thank you for that, by the way. Human moderators.
Karoline Edtstadler: But it was hard work to get there regarding hatred on the internet. I remember quite well: in 2020, I started a process in Austria for the Communication Platforms Act, to raise awareness that there is hatred on the internet, and I had a lot of conversations with the social media platforms. And everyone told me, "Yes, we are doing a lot and we will delete the hatred on the internet quickly and so on and so forth. But we don't need any legislation, because we are doing it on our own." Now that has changed completely. Of course, there is the DSA in the European Union, and social media platforms are obliged to delete. The moderators are there, and it's important to have them, but it's still hard to draw the line.
We had a lot of discussions, and there are several details to this debate. But this was awareness-raising, and I think the same has to be done regarding AI. And I would like to make one more point. There has always been misuse; there have always been attempts to influence people – well, every politician tries to influence the voters to vote for them, no? But it is now much easier and much cheaper to do so with these tools. There is a comparison with Brexit: to influence the people of Great Britain to vote for Brexit, about €200 million was needed. If you want to do it nowadays with the technology available, you need €1,000 – and everyone can do it. This is something which has changed our world, and we have to raise awareness of it and research it.
Andrew Sorkin: Speak to that because I think that there's also the question of propaganda and how propaganda is being impacted even in this war as it relates to AI. I think you were about to go there earlier and we interrupted you.
Dmytro Kuleba: I was listening to my colleagues with great interest. I think if Davos had existed 500 years ago, we would have had the same discussion at a panel on Mr. Gutenberg and his invention, the printing press. Every time humanity faces the arrival of a technology to create and spread information, it faces the same questions. And the answer of the Enlightenment was that the more human beings are exposed to information, and the more opinions become available to a person, the more educated that person will become and the more reasonable the choices that person will make.
Then came radio and television, then came the internet and then came social platforms, which proved to the whole world that the fundamental assumption of the Enlightenment was wrong: people have endless access to information of any kind and they still make stupid choices. My concern – and perhaps I'm wrong, because the people sitting on this panel are smarter and deeper into the technology – is that up until now, the human being making his political choice got disoriented, his attention distorted by bots and by paid influencers, but at least that person had access to opinions.
If you use a search engine – I'm deliberately avoiding mentioning specific brands – the first page is filtered by an algorithm, but it still gives you different links to different opinions; if I open social media, I still get opinions. But if I build a relationship with an AI-driven assistant or chat, I will have only one opinion. So the transition that I see as extremely politically sensitive – and culturally as well – is the transition of a human being from looking for opinions to trusting the opinion of the universal intelligence, as AI will be considered. And that will become a problem in terms of politics. We've been playing with one of the most famous AI-driven assistants – which one? The one – and the answers we get to the questions we ask about the war between Russia and Ukraine can be pretty peculiar.
And when I asked some of the people who work on this, who stand behind this technology: guys, does it mean that if Russia had been investing more, for decades, in filling the internet with its propaganda – and it did have much more resources to do that, while we did less to prove that Crimea is Ukraine and that Russia has no right to attack Ukraine – the algorithm will actually lean towards the opinion of the majority? So if I'm a rogue state and I want to prove that I'm the only one who has the right to exist and you all must speak my language, does it mean that if I spend billions and involve automated opinion producers, bots and everything, the chat will come up with the opinion that actually that makes sense? So this is the risk with opinions.
Andrew Sorkin: I'd love to hear your reaction to that.
Leo Varadkar: Potentially it is the biggest shift in political terms because, generally speaking, even now with social media, political communication is one-to-many. It's the prime minister making a speech; it's the priest giving a sermon; it's the newspaper talking to its many readers; it's the thing you see on social media that's seen by lots of different people. It's one-to-many and it's transparent. With AI, it's going to be one-to-one – it's going to be you and your assistant – and it won't be seen by anyone else; it'll all be done in private.
Andrew Sorkin: So can I ask a different question? Maybe I'll put this to Mustafa. If you talk to people in Silicon Valley, or to people outside of Davos, they would say that there is an elite view of the world, and that one of the reasons the elites have lost credibility and trust is because they tried to force-feed a particular worldview.
And actually, Elon Musk will tell you that X and others like it provide multiple views – you get to hear all the different views – and that to think the public is so stupid that they don't understand is a terrible way to think. And yet at the same time, even though there are multiple views out there, people seem to gravitate towards the views that they already have. It creates this remarkably challenging, complex situation.
Philosophically, I'm so curious Mustafa, how you think about what you just heard.
Mustafa Suleyman: I think it's important to appreciate that we are at the very, very beginning of this new era of technology. So it's true to say that in 2023 there were one or two or three chatbots or conversational AIs, but that's like saying there was the printing press and then there was one book. There are going to be millions and millions of AIs; they will reflect the values of all the organizations – political and cultural and commercial – and of all the people that want to create and propagate them.
Andrew Sorkin: Won't that undermine us?
Mustafa Suleyman: It will do both simultaneously.
So it's also true that it reduces the barrier to entry to accessing factually accurate, highly personalized, very real-time, extremely useful content. And you have to, fundamentally, in my opinion, ask yourself the question: what is the core business model of the agent or conversational AI that you're talking to? If the business model of the organization providing that AI is to sell ads, then the primary customer of that piece of software is, in fact, the advertiser and not the user. But if the business model is entirely aligned with your interests as an individual – you're paying for it, and there's transparency and a fiduciary connection between you and the personal assistant that knows so much about you – then I think you have a higher chance of seeing a much broader and more balanced picture.
Andrew Sorkin: So I'm not going to speak for Nick Clegg but Nick Clegg, I think, would say that if you can democratize...
Nick Clegg: Well, you tell me.
Andrew Sorkin: You tell me. Advertising allows the democratization of some of this technology because it gives people access to technology they may not be able to afford if they were charged for it on a personal basis. This goes to the underlying business model question.
Nick Clegg: The very powerful argument in favour of an advertising-financed business model, which is obviously used by companies like Meta and many others besides, is that it means you're not asking people to pay to use your product. So anyone can use it, whether they're rich or poor; a fancy banker on Wall Street or a farmer in Bangladesh can use Instagram and Facebook, as an example, because it's paid for by advertisers. By the way, one thing advertising does reinforce is that advertisers don't like their ads next to ghastly, vile, hateful content. So actually, despite the repeated assertion that there is a commercial business-model incentive to promote extreme content, we have every incentive to do the reverse.
But one point which I think is essential to remember: does anyone seriously think that if you watch a particular cable news outlet today with a very fixed ideological point of view, or read a British tabloid newspaper with a very fixed ideological point of view, you're getting a richer menu of ideological and political input than you get in the online world today? I just think we sometimes over-romanticize the non-online world, as if it's one replete with lots of diverse opinions. It's simply not; in fact, quite a lot of academic researchers have shown that the flywheel of polarization is often driven by quite old-fashioned media – both highly partisan newspapers and partisan cable news outlets in the US.
Andrew Sorkin: Do any of the politicians on the stage want to respond to that? And then I think we should open it up for questions from the audience.
Karoline Edtstadler: Well, I just wanted to add that, of course, you can find every opinion on the internet if you're searching for it, but you should not underestimate the algorithm. Very often we find people on the internet in so-called echo chambers: they get their own opinion reflected back again and again and again. If you read the paper, you have different journalists writing. So I'm not romanticizing – and I use the internet very intensively, let's put it like that – but you have to search for these different opinions. Otherwise, you may end up with an algorithm and in an echo chamber.
Andrew Sorkin: Questions. I see a hand right there. Please stand up if you could. And we'll get your microphone. And please identify yourself if you could.
Audience member: Thank you. My name is Razia, I work for Agence France-Presse in Brussels; I normally cover EU tech. I have a question that I'm going to direct first to the Prime Minister, because you mentioned something about how AI's risks must be controlled. I wanted to ask you concretely: how do you make that happen? Is that the EU AI Act, the US executive order? And linked to that point – and this is perhaps for everyone who would like to answer – we know that the UN has this panel, and we've been talking about how this conversation needs to include the Global North and South. How do we get them involved in the conversation? Is the UN panel part of that? And a question particularly for Ms. Karoline Edtstadler – I hope I've said that right: what can you do to have more European champions? I'm directing this question to you as the voice of the EU on this. Do you think the EU AI Act will make sure that there will be European champions? Thank you very much.
Andrew Sorkin: Go for it.
Leo Varadkar: I think we do it in the form of an international treaty and, as we were discussing before, an international agency. I don't like to create too many parallels between AI and nuclear technology, but we do have international treaties and we have the International Atomic Energy Agency, which is well respected, making sure that the rules and regulations are followed. But I know how hard it is to make an international treaty happen. And that's why it does fall to the US to do what the White House has done with the executive order, and to us to do what we're doing with the EU AI Act – and very often the Brussels effect comes into play: the European Union is the first to regulate, and others then follow on from that and build on it.
Andrew Sorkin: National champions in Europe.
Karoline Edtstadler: Well, first let me reflect on the Global South because, of course, in the Internet Governance Forum the Global South is included. The IGF in 2022 was in Ethiopia, and it was very important to have it in Ethiopia to discuss these issues, because I'm convinced that AI can help us to become better and quicker in so many fields. But we have to bear in mind that so many people are not connected to the internet at all. So if we are talking about this problem of how to regulate AI, it's a luxury problem for those who are connected. That's certainly right, and I think the United Nations is trying to bring them in via the IGF and the leadership panel. And we have this mandate for two more years now, until the Summit of the Future, to present some solutions and some recommendations on how we can move on and get them included.
Regarding the European champions: of course, the AI Act itself won't create European champions. On the contrary, we have to do a lot more – for example, to get rid of the obstacles in the single market, to complete the single market in the end, to make the European market attractive for startups and to keep them here. And I think there are a lot of good examples in the world, from the US to Israel, where this is also in the mindset of the people. And I think we have to start with the mindset of the people: trying and failing is something which has to be allowed on the way to becoming a champion, and this should be allowed in Europe too.
Andrew Sorkin: I think we have time only for one more question. Professor, I'll get you the microphone.
Audience member: Mustafa, you mentioned that capabilities should really be what we are looking at, and in writing and just now on the stage you've talked about artificial capable intelligence as a metric. You made it very specific at one point: could an agent make a million dollars with a $100,000 investment in just a few months? A lot of people are working on these sorts of agents, with LLMs connecting to the real world and carrying out instructions, buying things and so forth. You said AGI may be far away; how far away do you think it is until an agent passes that kind of test, your artificial capable intelligence test? And what does that mean for regulation? And to the other panellists: how would that change the way you think about AI if we had that kind of technology?
Andrew Sorkin: Mustafa, I'm going to give you the microphone as the final answer because we're going to run out of time in just a moment, I apologize.
Mustafa Suleyman: We've had the Turing test for over 70 years, and the goal was to imitate human conversation and persuade a human that you're, in fact, human and not an AI. And it turns out we're pretty close to passing that – maybe we have in some settings; it's unclear, but it's definitely no longer a useful test.
So I think the right test – the modern Turing test – would be to evaluate when an AI is capable of acting like an entrepreneur, a mini project manager, an inventor of a new product: to go and market it, manufacture it, sell it and so on, to make a profit. In the next five years, certainly before the end of the decade, we are going to have not just those capabilities but those capabilities widely available very cheaply, potentially even in open source. I think that completely changes the economy.
Andrew Sorkin: We are over time but I do want to find our host from the World Economic Forum who's going to make some final comments.
Jeremy Jurgens: Thank you for the great panel. I'm Jeremy Jurgens, managing director of the World Economic Forum. As we've heard from this panel today, AI doesn't stop at national boundaries; it has a global impact, and this cuts across a number of areas. I think we've also heard from the minister that when we think about governance, we need to look beyond regulation: to management of the risks but also to unlocking the opportunities. The World Economic Forum is actively working on all these issues and we'd invite you to participate with us.
We're working through over 20 different national centres, the majority of which are located in the Global South today. We're working to ensure equitable and inclusive access to data, to compute, to the various models and, ultimately, to the capabilities that Mustafa spoke of that can actually improve the lives of citizens around the world. I invite all of you to work with us. And again, I'd like to thank the panel here.
Andrew Sorkin: Thank you. I want to thank this fabulous crowd, and thank you for the questions. Thank you, everybody. Thank you.