Leaders around the globe have called for international collaboration to steer AI's development toward human and planetary benefit rather than exploitation. Harmonizing diverse views will be key to tackling challenges at the nexus of technology, privacy and rights.
With AI's swift advance and varied national oversight frameworks emerging, how can global players collaboratively craft adaptive, forward-looking governance?
This is the full audio from a session at the World Economic Forum’s Annual Meeting 2024 on January 17, 2024. Watch it here: https://www.weforum.org/events/world-economic-forum-annual-meeting-2024/sessions/360-on-ai-regulations/
Vera Jourová, Vice-President for Values and Transparency, European Commission
Josephine Teo, Minister for Communications and Information, Ministry of Communications and Information (MCI) of Singapore
Ian Bremmer, President, Eurasia Group
Brad Smith, Vice-Chair and President, Microsoft Corp
Arati Prabhakar, Director, White House Office of Science and Technology Policy
Catch up on all the action from Davos at wef.ch/wef24 and across social media using the hashtag #WEF24.
Check out all our podcasts on wef.ch/podcasts.
Podcast transcript
This transcript has been generated using speech recognition software and may contain errors. Please check its accuracy against the audio.
Ian Bremmer: Good afternoon to everyone. It's just been such a polite group. We were back there in the speaker's room; it seemed so quiet we didn't realise that the room was full. Also, I am not used to being in the round; there's a whole bunch of people behind us, which means you may be on camera, so just be aware of that.
I'm Ian Bremmer. I'm your host for a short but, I hope, very concise and effective panel: 360 on AI Regulation. I have wonderful people here, of course, covering the gamut globally. Vera Jourová, the vice-president for values and transparency for the European Commission. We have Josephine Teo, who is the minister of communications and information of Singapore. We have Arati Prabhakar, who is the director of the White House Office of Science and Technology Policy. And then we have Brad Smith, who is the vice-chair and president of Microsoft.
So, if we're talking about 360 on AI regulation: we have heard probably more than we have wanted to hear this week so far about AI. It is everywhere. We've certainly seen more than we've wanted to see on the promenade, and that probably means that we've heard a lot of things that aren't so. So I want to focus on the things that this group thinks are so. And instead of doing a 360, since we've got four people, I'd like each of them to do a 270. What that means is I want them to talk about their views on AI regulation as it stands today; everyone's been putting a lot of effort in, but don't talk about your own institution.
Talk about how the other institutions are doing. What do you agree with? Where are there challenges? Give us the lay of the land without talking about say, Europe, Vera.
Vera Jourová: I am far from trying to speak about the European Commission. I understand you want me to speak about AI and the regulation that is in place; we have it already now in the EU, but of course it cannot stand alone. We also combine the AI Act in the EU with a lot of big plans in investments, public-private partnerships, sandboxes for the companies. Standardization, which will involve industry, because the industry and the technologists have to work together on the standards. So there are a lot of things which have to be done by many. The member states will also have a role in enforcement, next to the Commission. You didn't want me to speak about the Commission.
Ian Bremmer: I did not. She's gonna get a 360 in no matter what but I want the 270. I will let you talk about Europe, I promise.
Vera Jourová: But what I wanted to say is that we need, in Europe, a lot of creativity and, I would even say, optimism. And we need to look at AI in all sectors, in all fields, be it private, be it public, because AI promises a lot of fantastic benefits for people. And so the regulation is the precondition to cover the risks, but the rest remains free for creativity and positive thinking. And I think that in Europe we are well placed.
Ian Bremmer: So because it's the first question, I'm gonna give you a second chance with that, which is: let's talk about how AI regulation is doing outside of Europe. What do you think when you look at the Americans, the Chinese, the private sector, the other actors, because everyone's focusing on it now? How do you think they're doing?
Vera Jourová: It's a similar situation to the one we had with GDPR, which I was also defending back in 2015. We felt that this might serve as a global standard, but we were not just passively sitting in Brussels waiting for the others to copy-paste. No, we were in very frequent dialogue with the United States, and many others, explaining what we have in GDPR, what might be followed, and trying to create some kind of global standard without mentoring others.
A similar thing might happen now with the AI Act. But I think that there is promising space for international cooperation. Under the G7 roof, we have the code of conduct for the technologies and ethical principles for AI. We work with UNESCO, we work with the United Nations. We believe that the AI Act could serve as an inspiration and we are, of course, ready to support this process.
Ian Bremmer: I'm in danger of failing. Arati, help me out. Talk to me about AI regulation around the world and where you think we're getting it right now.
Arati Prabhakar: It's an enormous priority in the United States. We know that AI doesn't stop at the borders, so we absolutely care about what's happening in the rest of the world. We've talked a lot about Europe, and I think we've had some excellent dialogue as we worked towards the President's executive order, which he signed at the end of October, and then the EU AI Act (too many vowels in this business) and that's been terrific.
I think GDPR is an example of a place where you made some progress for the world on privacy, but I think it creates enormous problems for industry not to have full harmonization there. This is an area where President Biden continues to call on our Congress to act on privacy legislation.
We usually talk about the US, the EU and China. I'm also really interested in what happens in the rest of the world with AI because the opportunities are so substantial and I see the eagerness and the interest. Again, I think we're just at the beginning of this story but that's an area that I'm watching with great interest as well.
Ian Bremmer: On balance, do you see more trends towards alignment or fragmentation as you look at the various approaches and the urgency around the world today?
Arati Prabhakar: Yeah, I think everyone shares a sense of urgency because AI has been around. This is not our first encounter with AI but what has happened in the last year has focused everyone's attention on how pervasive it's going to be in everyone's lives in so many different ways. There will be places where harmonization can occur and we're working towards it and we can approach, I think, really good harmonization that forms the foundation that everyone can build on.
I think we seem to be clear that we will all compete economically, and there will be geopolitical and strategic competition that's based on AI. So both of those happen at the same time, and I think in many ways that's like many other strategic industries that have grown up around strategic technologies.
Ian Bremmer: Josephine, do you see...? Singapore is a country, of course, that's been so successful in working well with pretty much everyone, and it's also one of the most geopolitically savvy per-capita countries out there, because you have to be, right? So, in that environment, do you see AI regulation emerging in a way that's going to be relatively easy for Singapore to navigate globally, or not?
Josephine Teo: Well, if we take a step back, I think in the first place, we should recognize that our interest is in AI governance not necessarily only in regulations. And in AI governance there are also other very important things to do. For example, you must have the infrastructure to be able to support the deployment and the development. You need to be able to build capabilities within the enterprise sector as well as individuals. And then you need to talk about international cooperation too.
Regulations, laws: they are going to be necessary, but it doesn't mean that we know on day one exactly what to do. So I found it very refreshing to hear from Vera that Europe is also interested not just in regulating but also in expanding the opportunities. That's the kind of balance I think we will need. If you will allow me, Ian, I'd also like to respond briefly to the point that you were trying to get at: do we see more alignment or fragmentation, specifically in regulations?
I think it is not surprising at this phase that there will be many attempts at trying to define what the risks are going to be and what are the right ways to deal with them. So, this is, to my mind, a divergent phase. It means that some of the frameworks or some of the attempts that come out, they don't always sync so closely to one another. But I think over time, we have to embrace this dynamic and I'm hopeful that it'll take a while but we will all become clearer about where the use cases are going to present themselves and where the producers are going to be and what are the risks that we should be spending more time guarding against.
So, the convergence phase hasn't hit us right now, but I'm hopeful that it will come, and that's what we'll need. For a small country, we can't have rules made for AI developers and deployers in Singapore only, because they do cross borders. It makes no sense for us to say that this set of rules applies here and, if you're coming, you must comply only with our rules. These have to be international rules.
Ian Bremmer: Different focus, different governments. Bletchley, I mean; they just wanted to stick a flag in, they had some issues. They wanted to deal with the Europeans, the Chinese, different perspectives. But you also see a lot of overlap despite the divergence. Talk a bit about that, Brad.
Brad Smith: I'd actually first start where Vice-President Jourová began. It's worth just recalling that there is a wide variety of laws in place around the world that were not necessarily written for AI but absolutely apply to AI: privacy laws, cybersecurity rules, digital safety, child protection, consumer protection, competition law. So you have existing regulators, courts and the like all working with that, and companies are working to navigate it. Now you have a new set of AI-specific rules and laws. And I do think there's more similarity than most people assume.
People are often prone to compare the AI Act with the executive order or the voluntary commitments in the United States. The fundamental goals are complementary in my view. The AI Act started by looking at the fundamental rights of European citizens, the values of Europe, privacy, the protection of consumers and democratic rights, all things that are held deeply as important in the US and other places. It started at the applications layer. That's really how it was originally drafted.
The drafters then realized last year that they needed to address foundation models. At the White House, they jumped in right away to address foundation models, focusing first and foremost on what I would call safety and security. But when they adopted their executive order, they built out a comprehensive list of all of the issues that mattered. There will of course be differences; the details matter. The executive order calls for all kinds of things to be prepared. Even with the AI Act, the fine-tuning is still taking place; the final text, we haven't yet seen it. But the pattern actually begins to fit together.
And then you have the G7 Hiroshima process and even the UN advisory board and you see these things laddering up in a way that makes a fair amount of sense. So it doesn't mean that we'll have a world without divergence. But we first have to recognize people actually care about a lot of the same things and even have some similar approaches to addressing them.
Ian Bremmer: The European Commission was first in recognizing the need for regulation and governance in this space and then moved pretty decisively. What is it that you think created that urgency? What did the Europeans see that the Americans and others were later to the table on?
Vera Jourová: I think that the European Union is a special place where we have a special kind of instinct for the risks which might come from the world of technologies to the individual rights of people. And this is already reflected, I have to mention it again, in GDPR. With GDPR, we just wanted to empower and enable people to be the masters of their own identity in the digital world.
And a similar thing happened with AI development, where we were looking at the technologists, at what they are doing and what they are planning. We had a discussion with Brad about that in 2018. And I appreciated the cooperation, because we created the first ethics standards together. And I was clear that we didn't have to rush with the regulation then, in 2019, because we had GDPR and we had the main thing done in the EU: the protection of privacy and the cybersecurity legislation.
But then, in 2021, it was inevitable that we proposed the AI Act. And then we got the lesson, a very, very exciting moment, when we saw that it's true that legislation is much slower than the world of technologies. We suddenly saw generative AI, the foundation models and ChatGPT, and it moved us to draft, together with the co-legislators, the new chapter in the AI Act. So we tried to react to the new reality. The result is there.
Yes, the fine-tuning is still ongoing, but I believe that the AI Act will come into force. So I think it came out of the basic instinct we have in the EU that we have to guarantee that the principal values remain untouched for human beings. I heard this morning somebody from industry who said AI will change our priorities. I have to say, on behalf of the public sphere, or the regulator: it must not change our priorities, such as fundamental rights, freedom of expression, copyright, safety.
I think that we have to be very steady and stable. And so, having the regulation also means that we will start very intense cooperation in the triangle of the public sphere, the world of technologies and research. This is a new thing. You started in the US already, and the United Kingdom also announced that there is such a platform, and we need to work together to achieve a sufficient level of predictability about where the technology will go next, because this is what's missing.
Ian Bremmer: So I accept that there's a lot of overlap in the sorts of issues that are being discussed. But if I closed my eyes, I would still know that that was what the Europeans would say. The Americans talk about these things a little differently. Talk about how the priorities differ; not that you don't care about citizens, but rather that national security plays a pretty big role, innovation plays a big role. How are the Americans thinking about prioritizing regulation and governance in this space?
Arati Prabhakar: First of all, there are so many shared values between us and the European Union. I think that is the reason that we do see a lot of alignment and harmonization happening. And you mentioned, in addition to rights, national security; that's absolutely the case. I want to step back one more step and talk about why we care about regulation or, thank you Josephine, governance, because that's much more comprehensive and appropriate.
This is the most powerful technology of our times. And every time President Biden talks about it, he talks about promise and peril, and I greatly appreciate that he is keeping both of those in frame. We see the power of AI as something that must be managed; the risks have to be managed, for all the reasons we're talking about here. The reason is this: if we can do that, if we can build a strong foundation, if we can make sure that the quality of the AI technology is predictable and effective enough and safe enough and trustworthy enough, then once you build that solid foundation, you want to use it to reach for the stars.
The point is to use this technology to go after our great aspirations as a country and as a world. Think about the work that's ahead of us to deal with the climate crisis, to lift people's health and welfare, to make sure our kids get educated and that people in their working lives can train for new skills. These are things it's hard to see how we're going to do without the power of AI.
I think in the American approach, we've always thought about doing this work of regulation as a means to that end, not just to protect rights, which is completely necessary, not only to protect national security but also to achieve these great aspirations.
Ian Bremmer: A little political question. In the Biden administration, there's a sensibility that you don't want to be too close to big industry; I mean, the Democrats have Elizabeth Warren, they're talking about breaking up monopolies, the oil companies aren't getting any access despite the fact that there's a lot of production. Yet when we talk about governance of AI for the United States, it feels like the White House is actually working really closely with the industry leaders. How intentional is that? How much of it is necessity? How much is it different from the approach to perhaps other bits of the private sector?
Arati Prabhakar: That may be what you see but let me make sure you see the whole picture. We absolutely have worked with Microsoft, the other major tech companies. That is where a lot of the leading edge of this technology is currently being driven for all kinds of practical and business reasons.
But when you look at what went into our process, it was absolutely engaging with AI technology leaders, including especially the big companies, it was small companies and venture capitalists, it was civil society and hearing from people who care about consumer rights and hearing from workers and labour unions.
That is an essential component of this work. It was working with academia to get a deeper and longer-term perspective on what the fundamental bounds on this technology are. And I actually think an important part of our philosophy of regulation and governance is not to just do it top-down, sitting in our offices and making up answers.
The way effective governance happens is with all those parties at the table. And to your point about the role of big tech, one thing that we have been completely clear about is that competition is part of how this technology is going to thrive. It's how we're going to solve the problems that we have ahead.
And so recognizing how much concentration happens when it takes a billion dollars to train a leading-edge model, but also recognizing the explosion in entrepreneurial activity and venture investment, watching all of that and making sure that all of those factors are considered is absolutely intentional in the work that we're doing.
Brad Smith: I actually think what the White House did was pretty ingenious, because the goal was to move fast. The EU had made so much progress in thinking about applications that used AI, and suddenly you had these new generative AI foundation models. Just remember, the world really didn't get to start using them until 30 November. The first meeting that the White House had was the first week of May, basically five months later. It brought in four companies and said: you have to get going, these are the problems, this has to be safe, this has to be secure, this has to be transparent.
The four companies that came in, Microsoft being one of them, were given homework assignments: we want you to give us a first draft by the end of May of what you are prepared to do to show the American people that you will address these needs. And I remember, because we got to work right away and we were sort of proud inside Microsoft that we got it done fast. About eight days later, we submitted a first draft so we could get some feedback. We sent it in on a Sunday. And on Monday morning...
Ian Bremmer: More questions.
Brad Smith: I had a call with Arati and Secretary Raimondo, and they said, "Congratulations, you got it in first; you know what your grade is? Incomplete. Now that we know what you can do, we're going to tell you to do more and build on what you've done." And it broke the cycle that often happens when policymakers are saying do this and industry is saying that's not practical. And especially for new technology that was evolving so quickly.
It actually made it possible to speed up the pace, and that complemented what was going on in Brussels; there was a lot of interaction between Brussels and Washington, with London and others. And I don't think all of these governments would have gotten as far as they did by December if they hadn't engaged some of the companies in that way. And it's not like we got to write the blueprint. We just got to provide input, and then civil society, as they should, said no, there needs to be more, it needs to be broader, it needs to go farther, and it has since then.
Ian Bremmer: So, if we look at the various models here, from top-down and government-driven, to a multi-stakeholder hybrid where everybody gets a piece, to the private sector moving really fast and breaking some things but with great competition: do you think we're going to iterate towards an ideal place? You say we're in the divergent phase, but as we converge, do you think we are likely to move to one place on that spectrum? Is there one place on the spectrum, or will there necessarily be very different answers?
Josephine Teo: Actually, I'm really happy that you brought up the idea of a spectrum. I really do believe that, in some areas, we will find it necessary and possible to regulate through laws. For example, with deep fakes: I think there is a real sense that this is an issue that all societies, regardless of your political model, will have to deal with, and the question is what the right way of dealing with deep fakes is.
I can't see an outcome where there isn't some law in place; exactly what shape and form it will take, I think, remains to be seen. But the whole regulatory space will have to be a spectrum for a number of years. I do believe that there will be instances where the answers are not so clear, and there will still be room for voluntary frameworks, and you will have to look at the responses of the market. You will have to assess whether the recommendations being put forth in these voluntary frameworks are actually useful.
And then, further down the spectrum, you will have a lighter-touch approach where there are just some advisory guidelines, and people will have to look at the specific use cases of the models they are bringing to the market and whether they really need to be regulated in the same way as some other use cases. So there's a more risk-based approach and also a whole spectrum of tools. It will be part of our reality. That's what I believe.
Ian Bremmer: If you don't mind, give me two examples. Give me an example of a hard challenge that you think is going to need strong government oversight and regulation, and give me one that you think is big but really best served by a very, very light touch.
Josephine Teo: Well, at the moment, what seems quite clear to me is that our societies need an answer to how we deal with deep fakes. It's stealing a person's identity. It's worse than your anonymized data being made available. It's being represented in a way that you do not intend to be represented. And there's something fundamentally very wrong about it. It's an assault on the infrastructure of fact.
How can societies function when deep fakes are confronting us all the time and we can't separate real from fake, reality from what is made up? So that is one specific example that I do think we, as nations, have to come up with an answer to in the not-too-distant future. But in another area, and I'm so glad you talked about it, there will have to be different ways of demonstrating whether an AI is being implemented in a responsible way. And there's the question of how you implement tests, how you benchmark them.
These kinds of things are still very nascent. No one has answers just yet that are very clear, very demonstrable. Those kinds of things seem to me, for a period of time, to be better served with advisory guidelines, sandboxes and pilots, and it may well take many more years of this kind of experimentation before we come to a very clear sense of what you really want to mandate and in what situations.
Ian Bremmer: We've talked a lot about different approaches to regulation and governance. We haven't yet addressed power dynamics. And I want to get at this with this group because we talked a little bit about the Brussels effect before, and the Brussels effect works not only because you've got strong, technocratic leaders in Brussels who are thinking a lot about regulation, but also because the EU is a very large market that drives a lot of influence around the world. That wouldn't work so well if it were Bhutan, no matter how smart they are.
So I wonder, in an environment where the power and the technological drive in AI are overwhelmingly not in Europe, at least not yet, how much does that undermine the ability of the Europeans to set meaningful standards?
Vera Jourová: I think that we showed that we can set meaningful standards; that's the first thing. But at the same time, we combine it with a lot of other actions and a lot of funding. We know that there is a gap; there is the need to push Europe forward in the world of technological development and the funding.
We have made a calculation that every year we should invest around €15 billion, private and public funding, be it from Brussels or from the member states, in order to push technological development forward and to unblock the ability of industry, and also of small and medium enterprises, to develop in that direction.
So we are doing a lot of things to decrease this gap but at the same time, I have to say, that it doesn't decrease our ability to set the standards which might be inspiring for the rest of the world.
Ian Bremmer: Do you share that view in terms of the US versus Europe and the rest of the world as a dynamic? I guess so much of the tech is coming from the United States, it's moving fast. Technology companies are able to drive a lot of outcomes.
Arati Prabhakar: They are and I think the fact that so much of this wave of AI has been driven by American companies is terrific for the United States. I think it also means that we have a particular responsibility because this is not going to get solved by top-down government action.
This is going to be something that happens because governments, companies, industry across the board, plus civil society, plus workers, all come together. And the fact that we have such an active industry and such a significant market in the US, I think, really means that we have the privilege but also the responsibility to be serious about that. That's what I think we've stepped up to this year.
Ian Bremmer: Brad, you and I have talked a little bit about this. Is it fair to say that governance models are not just going to be shaped by the countries but also by the business models of the technology companies that happen to be leading?
Brad Smith: I think it's definitely the case. I would just offer a few thoughts.
I mean, first of all, it's easy for people to go back and say, well, this is like GDPR, with Europe setting the rules for the world, but this time the United States is moving too, and the US still hasn't adopted a privacy law. So you have a number of countries, and I just think people are talking with each other and learning from each other, and that's good for the world. I think it'll be a more collaborative international process because of that.
Second, I think one should not lose sight of the fact that it's not just about who invented something or where it was invented; ultimately, it's about who uses it and what business models they apply when they do. It's worth recalling that it was a German who invented the printing press, but it was the Dutch and the English who then built the most printing presses with the German technology and printed the most books. And if you look at, say, GDP growth in the 50 years after the Germans invented the printing press, the Dutch and the English outperformed Germany.
If you look at Europe today, at the future of the auto industry, the pharmaceutical industry, the chemical industry, every industry where Europe is so important, their competitiveness will fundamentally be shaped by how they use AI, among other things. And the truth is, therefore, people can ask who built this model and maybe envy the person or the country that provided it, but I'll argue it's going to be the adopters who will be the biggest winners over the next five decades.
Ian Bremmer: The adopters and those that bring it to market.
Brad Smith: Absolutely. But I also often have this conversation in Europe, because for 10 years we'd go to Europe and they'd say, but we don't have Facebook; we have to use Facebook from the United States. And I would say, we used to get up at Microsoft and every day we'd say, we don't have a phone. And then, one day, we realized we are not good at building a phone and we can succeed without one, and we did. And I don't hear anybody in Europe today bemoaning the fact that they don't have their own Facebook, to be honest.
You go in, you build what you need. It's easy to turn the world into these rivalries, but when you do, you sometimes miss what actually is most impactful, and that is the world's democracies building on each other's shared values, and the world's economies. First and foremost, ask what makes you great today. And then ask how you can use this new technology to make you greater, rather than spending all your time looking at what you don't have so that you can think about building it. I'm not saying you shouldn't, but if you don't focus on what makes you great today, you're probably going to miss what's most important.
Ian Bremmer: So, not wanting to focus on what is contentious, but there are of course a couple of big things we haven't talked about here so far, geographically, and one of course is China.
Outside of the United States, it's a massive digital market, with a massive desire to be in this space, but with some significant competition and constraints with the Americans and others. So I'll ask both of you, but I think I'll start with you, Josephine. Tell me a bit about how the conversation would change. We don't have the Chinese here today; we wouldn't have time, I don't know how we'd do it in 35 minutes. But if we did, what would be different if we had the Chinese opining openly about the way they think about governance of AI?
Josephine Teo: They've actually been quite open. They've published very specific guidelines. They've articulated their expectations for businesses, particularly those that have an interaction with consumers. So if you go to China and you talk to the AI developers, there is no misunderstanding on their part about the expectations their government has of them. If your AI models are primarily going to be used within the enterprise sector, it's fairly light touch.
If, however, your AI models are going to reach consumers, individuals in society, then there is a whole host of requirements that will be made of you. So in that sense, actually, they do have an interesting way of thinking about the issue. I will also say that there are some very thoughtful scholars, not least in the United States, who are studying the Chinese way of thinking about AI governance and regulation. And they have published very useful articles, as well as studies into what we can take away from them.
The Carnegie Endowment, for example, has done very good work in this regard. I certainly think that Bletchley was very encouraging, in the sense that you had all the major players in AI there. Our counterparts from China were also there; the minister was there. And I think it's the start of a very meaningful conversation, and the more we are able to exchange notes on what really makes sense with AI governance, the better progress we will be able to make. That's the way we look at it.
Ian Bremmer: There's been an announcement at the APEC [Asia-Pacific Economic Cooperation] summit, between Biden and Xi, that a track 1.5 on AI is going to be kicked off. That's certainly better than the absence of one. We also have a lot of people talking about some level of technology cold war, given the export controls on semiconductors. Now, the Chinese see this as, well, maybe this is a way we can engage and not be cut out of AI by the Americans. How optimistic are you that the Americans and others have the capacity to engage with the Chinese in a way that doesn't lead to a greater decoupling on the tech side, particularly on AI?
Arati Prabhakar: This is a very difficult issue. I'm very encouraged both by China's participation at Bletchley and, of course, by President Biden and President Xi's announcement. And I think what we are talking about is multiple layers. There are areas where every participant around the world has a shared interest in getting AI right: many of the issues of the core technology being predictable, being effective and safe and trustworthy. That's something everyone can agree on.
But what happens above that foundation, whether it's economic opportunity or using it for national security and military purposes: really, every part of the world is using this powerful technology in ways that reflect their values, very much in line with the description that you provided, Josephine. That's exactly what you would expect. It does mean that we will be competing, and sometimes at odds with each other. There are certainly national security interests that have to be protected. And all of these things are going to happen simultaneously.
I think that's the reality of the world that we're living in, where we can find common cause and shared values with allies and partners around the world. We view that as essential to shaping the way that this moves forward. I think that's going to be to all of our advantage.
Ian Bremmer: What do we do in an environment where so much of US tech policy towards the Chinese has been a concern about defining things that are dual use, as in 5G and in semiconductors, but where much of what you're discussing with artificial intelligence is something you can use to make a car or to make a rocket; I mean, a guidance system and an autonomous driving system? How do you thread the needle in an environment where everything is potentially dual use?
Arati Prabhakar: Yeah, and that's the nature of what military capabilities look like today; that's absolutely the case. All of our work on export control, for example, has been narrowly focused on the leading edge of semiconductors that are key to building the most advanced AI models. This is not a blanket change in our trade policies or in the way that we think about technology development and sharing around the world. It's very specific and targeted, but very serious about the things that we do target. Again, you have to hold many ideas in your head at the same time in this complex world. We want to make sure we protect our national security interests and not allow a potential adversary to use our most advanced technology for military purposes. And at the same time, we know that we will remain important trading partners and that, for so many of these other applications, whether they're dual use or not, we're going to have reasons to want to continue to stay engaged.
Ian Bremmer: Are the Europeans 100%, 95% aligned with that approach towards China and AI technology?
Vera Jourová: I don't want to repeat what we have just heard, because we have a very similar approach. We have a strategy towards China. There are things where we need to be partners, because global things are at stake, and AI security might be one of them. That's why I said publicly before Bletchley: yes, this is a good thing, that the Brits invited China, because we need to have them at the table, and it is also a chance to ask questions.
Where are you going, and are you willing to join some global platform where we could work on the standards? Then there's the second category, where we are competitors: of course, chips and some critical raw materials, and now we have the strategy on how to be more resilient when it comes to economic security. So there is obviously China the competitor, and then the category where China is a rival. And it shows in how we approach AI, because when I was in China, I read the guidelines too, and I saw a lot of similarities.
There's our code of conduct for AI under the G7, and there's the AI Act. But there is a big 'but': in China, of course, they want to use AI to keep society under control, whereas in the AI Act, in the horribly long, difficult trilogue, the main issue was how far to let the states go in using AI, especially in law enforcement. Because we want to keep this philosophy of protecting individual people and balance it with national security measures. So here we cannot have a common language with China. And we never will.
Ian Bremmer: One could say the Chinese, in a sense, have the most interest in having strong regulations on AI, not the Europeans, for precisely those reasons. So Brad, in this environment, where historically there's been a lot of joint research, there've been labs, there's been work published in open source, it's increasingly challenging to do that in a lot of these areas. How much are we losing as a consequence, and can you give a little guidance around where lines can be drawn?
Brad Smith: I think there's a few things that this conversation helpfully illustrates. First, there are some areas where there are universally shared values even in a world that's so divided. No government wants machines to start the next war and every country wants humanity to remain in control of this technology. I think that is universal and I think that provides a common foundation in some areas.
The second thing that is very interesting, as the world learns to talk about even just this concept of regulation, is that we're all talking about the same questions. And that is what is revealed when you actually put the AI Act and the Chinese measures next to each other.
Then the next thing you see is where people answer the same question in a different way, and why. You can look at the AI Act and you can look at the Chinese measures, and you can see in one the voice of Aristotle and in the other the voice of Confucius: long, different philosophical traditions that manifest themselves in how governments manage societies. But it helps everyone, I think, just to understand how other people think.
And then there is a level of what I would call basic research in fundamental scientific fields, fields that will define the future of, say, climate technology, or just our understanding of molecular biology or physics. The world has very much benefited from a tradition of which Arati, and the Office of Science and Technology Policy, is an extraordinary representative, I think. I think you want a world that invests in basic research. You want a world where researchers publish their work; that's how people learn from each other.
You do want a world, I think, where scientists in many fields have the opportunity to learn from each other. And so we have to manage that as well and not just close off all aspects of engagement. I think you put it very well when you said these are difficult issues. They are very complicated. But I think there are certain strands here that we do well to remember and think about.
Ian Bremmer: So this has been an extremely enlightening conversation. I thank you for cramming so much into a short time. Because outside of this room, there has been so much discussion. I'd like to close it with each of you shattering a myth. What's something that you have heard either here this week or outside recently, that you wish people could unhear about the state of AI? Please.
Arati Prabhakar: I will start with every sentence that starts with "AI will do X", because I think every time we focus on the technology and imagine that it has agency and is taking action, we ignore what is really important, which is that people build AI systems. They choose what data to train them on; a lot of it is trained on human-generated data.
People decide what kind of autonomy and agency to give these systems; people decide what applications to use them for. These are the most human technologies you can possibly imagine. If we're going to get governance right, we have to keep our eyes on the people, not just the technology.
Ian Bremmer: That's a very good start, Josephine.
Josephine Teo: I'm going to take a stab at this. I think it's helpful on occasion not to think of it as artificial intelligence but perhaps as augmented intelligence and to try and see how it can best serve the interests of human societies. And if we took that orientation, maybe we could have a more balanced approach in thinking about the opportunities and how we can deal with risks. So I offer that.
Ian Bremmer: That should become more obvious as it starts training on your individual data; people are going to see it as augmented.
Vera Jourová: I share this view, but I will add one more thing. Wherever I went here in Davos, yesterday and today, I had questions about the protection of elections and democracy. We didn't mention it here. For me, it's a nightmare to see voters being manipulated in a hidden way by means of AI in combination with well-targeted disinformation. It would be the end of democracy and democratic elections.
That's why I am coming back to Brussels reassured of the necessity to do more. We are now using these light-touch agreements with the technologists on disinformation and on labelling AI production, so that people see that this is the production of AI and can still make their free, autonomous choice.
Ian Bremmer: I'd love to unhear AI impacting elections, I think we can all agree on that. Brad, yours.
Brad Smith: I think we should shatter the myth, sometimes stated in the tech sector, that people in government can't understand the technology, because people in government increasingly do understand the technology around the world, and they're adding more experts. You don't have to understand everything at the same moment as someone in industry, but government has mastered technology in most other, maybe every other, industry and is doing so here as well.
Ian Bremmer: We can put an asterisk on Congress, though, right, on that one?
Brad Smith: There are some people in Congress that understand it as well.
Ian Bremmer: They're getting there. I agree they're getting there. And with that, thank you so much for joining us today, really appreciate it.