The discovery of the world's first functional graphene semiconductor, along with significant advancements in neuromorphic computing and the release of AI superchips, is poised to accelerate computing power.
How are these breakthroughs reshaping the future of industry and what are the latest AI applications?
This is the full audio from a session at the Annual Meeting of the New Champions (AMNC24) on June 26, 2024. Watch it here: https://www.weforum.org/events/annual-meeting-of-the-new-champions-2024/sessions/ai-breaking-new-ground-whats-next-for-industry/
Jeremy Jurgens, Managing Director, World Economic Forum
Jay Lee, Clark Distinguished Professor; Director, Industrial AI Center, University of Maryland
Samuele Ramadori, Chief Executive Officer, BrainBox AI
Warren Jude Fernandez, Chief Executive Officer, Asia-Pacific, Edelman
Vincent Henry Iswaratioso, Chief Executive Officer, DANA
Van Dinh Hong Vu, Chief Executive Officer, ELSA
Annual Meeting of the New Champions - Next Frontiers for Growth, 25–27 June, 2024, Dalian, China: wef.ch/amnc24
Centre for the Fourth Industrial Revolution (C4IR): https://centres.weforum.org/centre-for-the-fourth-industrial-revolution/home
Check out all our podcasts on wef.ch/podcasts.
Podcast transcript
This transcript has been generated using speech recognition software and may contain errors. Please check its accuracy against the audio.
Jeremy Jurgens: I'm Jeremy Jurgens, Managing Director at the World Economic Forum and also responsible for the Centre for the Fourth Industrial Revolution.
We're gathered here in Dalian for the 15th Annual Meeting of the New Champions, and throughout our discussions at this meeting, AI has featured in pretty much every conversation we've had. It's been just as dominant in the popular press, in the stock markets and in pretty much every casual conversation; there is huge interest here.
AI, as an emerging general-purpose technology, has broad strategic implications for business, for society and for the way we interact with one another, and this carries across all industries and all countries. In this discussion, we want to look at how AI is set to redefine industries, their business and operating models, and what implications that has across media, pharmaceuticals, engineering, construction and other domains.
Now, at the same time, a lot of risks have also been highlighted around AI. Some are in the existential domain: what if we have a rogue AI that runs away? Others we already see taking place today: malicious use, deepfakes, disinformation, and then legitimate concerns around job losses, the need to reskill and how people adapt to these changes.
Now, in the face of these challenges, it's really important to bring different leaders together, as we've done here in Dalian and as we do throughout the year, from business, government, civil society and academia, so we can have honest conversations about the implications, what they mean and how we can navigate those challenges together.
To this end, the Forum has established the AI Governance Alliance, which today has over 300 companies actively addressing these questions across three specific tracks: one on technology and safety, a second on the transformation opportunity and a third, importantly, around governance. This discussion is part of those ongoing dialogues and feeds into them, and we'll be able to share a little of that here today.
Now, we have a specific working group with some of our members here on AI's role in the transformation of industries, where we take a comprehensive look at what AI means both for what's already in place and for what might newly emerge. I think it's important to look as much at how AI will change the infrastructure and industrials that already exist as at the new industries that may emerge in this domain.
And with this, we have a distinguished panel of guests with us. I have Warren Fernandez, CEO of Asia-Pacific for Edelman, the American public relations and marketing firm. Vincent Henry Iswaratioso, CEO of DANA, an Indonesia-based innovator in digital financial services. I have Samuele Ramadori, CEO of BrainBox AI, another innovator in the global energy transition space. I have Vu Van, CEO of ELSA, a US-based language education technology company. And last, I have Jay Lee, a distinguished professor and director of the Industrial AI Center at the University of Maryland.
So, having walked down to the end of the room, Jay Lee, I'd like to start with you. We've been having discussions this week with a number of leaders: what are the economic sectors or industries that you feel stand to gain the most from using AI, and how do you see this impacting productivity and growth in the future?
Jay Lee: In AI, I will say, we must first define the basics, right: three elements. A is algorithm, B is big data and C is computing. Then you have the verticals: manufacturing, semiconductors, energy, aerospace and transportation, medical and so on.
So to the question of which industry benefits most, I will say manufacturing, of all kinds. That's exactly what the Lighthouse programme is about: you take different vertical industries, because each industry has a baseline. You have cost reduction, you have quality improvements, efficiency. Those are the baselines.
Over the last two or three years, you apply this data and these alerts to reduce downtime: predictive maintenance. So I will say in the manufacturing industry, about 30% of the benefits are in predictive maintenance, probably 15-20% in quality improvements and another 20% in operations optimization: scheduling, improved efficiency and so on.
Those are the big, very critical changes of the last number of years, and I will say that's regardless of the status of generative AI, because the Lighthouse programme started in 2018. It has been very good: 153 lighthouses now, I would say, right.
The number two area, I will say, is energy, because we're talking about a global energy issue. Look at wind energy: it's about 10% of energy in the US. Ten years ago, wind turbine downtime was between 5% and 7%, sometimes even 10%, depending on the size of the turbines offshore. Today it's less than 2%. That's a great improvement. For an offshore wind turbine, bringing downtime from 5% to 2% means a lot more energy you can generate. So again, that's another big area.
Of course, it also goes to advanced semiconductors. If you're talking about advanced semiconductors, then you're talking about yield improvement and how fast you can achieve it: do you rely on trial and error, or do you work in a systematic way? That's very important. So those are just a few. Yeah.
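To make the predictive-maintenance idea above concrete, here is a minimal, illustrative sketch: an anomaly detector is trained on healthy sensor readings and flags drifting readings before they turn into downtime. The feature set, values and the choice of scikit-learn's IsolationForest are assumptions for illustration, not a description of any system discussed in the session.

```python
# Minimal predictive-maintenance sketch: flag abnormal machine behaviour
# from sensor data before it becomes unplanned downtime.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated healthy baseline: vibration RMS, bearing temperature (C), motor current (A).
healthy = rng.normal(loc=[0.5, 60.0, 12.0], scale=[0.05, 2.0, 0.5], size=(1000, 3))

# Fit the anomaly detector on healthy operation only.
detector = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

# New readings: the second one drifts (rising vibration and temperature),
# the kind of pattern that often precedes a bearing failure.
new_readings = np.array([
    [0.51, 61.0, 12.1],   # normal
    [0.95, 78.0, 13.4],   # degrading
])
for reading, flag in zip(new_readings, detector.predict(new_readings)):
    status = "schedule maintenance" if flag == -1 else "ok"   # -1 marks an anomaly
    print(reading, "->", status)
```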
Jeremy Jurgens: Great. Thank you, Jay Lee. You spoke to predictive maintenance and also this role in energy. And, you know, McKinsey has estimated that the value could be up to $25 trillion to the global economy. And I know, Samuele, you actually work in these domains, on the predictive elements, energy and so on.
Could you give us a little bit of a picture of how you see this playing out? Where do these giant figures that we see come from? What's your perspective on this?
Samuele Ramadori: Sure. I mean, there are some really attractive opportunities in applying AI and the new tools at our fingertips: anything with very interconnected systems that today carries a lot of waste. In our case, we deal with commercial buildings around the world, and that is a prime example. Another one would be international shipping and logistics. The way things are done today is, frankly, quite old school, and there is just a lot of room for improvement.
So I define it as anywhere that has a big impact, whether in dollars, energy consumed or materials consumed, and that sits in a very complex ecosystem. The opportunity is there in a big way. Now, the challenge of getting there is access to the data, and then also the ability to let AI make the improvements, whether human-delivered or autonomous; that also has to be the outcome.
Unfortunately, that's the case in a lot of sectors, in our sector as well: you almost have to work just as hard making the AI, the novel technology, work as you do making sure you can find a way to connect to the data. And that's the reality of industry, right? This is not social media, where the data is all there and anybody can access it. We're talking about plants, assets that move around, degradation and so on.
So there has to be that combination: access to the data, and then making novel AI outcomes, quite powerful outcomes, actually work, through trial and error, as you were saying, Jay. Those two have to come together, and in many industries, I'd say, there's a lot of work to be done before you're AI-ready. Like, a lot.
Jeremy Jurgens: Great. So let's hear from a couple of industries that could give us a picture of what this opportunity might look like. Maybe I'd start with you, Vu Van, on education and the concern around people. Let's keep people first in this discussion: the role of education, skills and so on.
Could you paint a picture for us of what AI could potentially bring to the education sector?
Van Dinh Hong Vu: Oh, definitely. I think education is one of the earlier use cases where AI can be applied quite broadly. The biggest one, as we've already seen in the education space, is what we call hyper-personalized learning. In education, that's basically the holy grail, because every single individual, depending on who you are, has very different needs, but traditionally it has been extremely hard to offer a hyper-personalized learning path for all learners.
That goes from K-12 and higher education to training in a corporate setting. Take the corporate setting, for example: it has been very hard for companies even to identify the skill gaps of employees and then offer training solutions; offering a generic, blanket training solution for everyone would be very costly yet might not really address the skill gap. The same goes for the school environment.
With AI right now, especially generative AI, we can automatically generate content on the fly, and the efficiency of this makes hyper-personalized learning extremely powerful. It allows for better student engagement and better learning outcomes, with learning that adapts on the fly to each learner's ability.
Each of us has a different way of progressing: some of us take a lot longer to digest and learn the content; some of us move a lot faster in certain domains. So I think that's the biggest application we commonly see right now in education. We've seen a lot of applications, services and offerings that cater to that across multiple domains, from skills to maths to language learning to science and STEM learning as well.
The second one we've seen in education that's quite powerful, in addition to that, is what we call conversational AI for learning, allowing for 24/7 support and instant feedback beyond the classroom. Traditionally, you rely on teachers for questions and interaction; that back and forth was only possible when a teacher was available. Generative AI, trained on a massive amount of content, has become such an enabler here.
AI used to be quite static, meaning it would give you a certain response but you couldn't ask back or push back on it. Now, the ability to have a conversation with an AI you can truly interact with, one with almost no limit to its knowledge, is extremely powerful, because you can ask about any topic in your domain. You don't need to wait for a teacher to be available to train you on it.
And I think the combination, a virtual tutor, an AI that understands where you are, what your gaps and needs are, your weaknesses, your learning style, together with the ability to interact on a 24/7 basis, is really what will change the education landscape in the coming years.
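As a concrete aside, the adaptive loop described above can be sketched in a few lines. This is a deliberately simplified, hypothetical model, an Elo-style ability estimate with a logistic response curve, not any particular product's method; real tutoring systems typically use richer item-response-theory or bandit models.

```python
# Minimal adaptive-learning sketch: track learner ability, pick the next item near it.
import math

def expected(ability: float, difficulty: float) -> float:
    """Probability of a correct answer under a simple logistic model."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def update(ability: float, difficulty: float, correct: bool, k: float = 0.4) -> float:
    """Move the ability estimate toward the observed outcome (Elo-style)."""
    return ability + k * ((1.0 if correct else 0.0) - expected(ability, difficulty))

def next_item(ability: float, pool: list[float]) -> float:
    """Serve the item whose difficulty best matches the current estimate."""
    return min(pool, key=lambda d: abs(d - ability))

pool = [-2.0, -1.0, 0.0, 1.0, 2.0]           # item difficulties in the pool
ability = 0.0                                 # prior ability estimate
for outcome in [True, True, False, True]:     # simulated answers
    item = next_item(ability, pool)
    ability = update(ability, item, outcome)
    print(f"served difficulty {item:+.1f}, new ability estimate {ability:+.2f}")
```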
Jeremy Jurgens: This idea of a personalized tutor that might be able to adapt to our needs and opportunities: what's your estimate, if you'd be willing to make one? How far away do you think we are from actually seeing one of those tutors in place?
Van Dinh Hong Vu: Well, we have already seen some tutors in place, and they're actually more powerful than a human teacher when it comes to the knowledge they can draw on. Now, they don't have the social-emotional side that teachers bring to complement an AI tutor, but when it comes to the knowledge base and asking for information, they're quite powerful.
The challenge we're seeing with the personalized tutor, again beyond the lack of that social-emotional human intelligence, is that it doesn't yet pick up on the learner's learning style. The data science may say: this is the path you should be learning. Now, whether the student will actually take it is a different story, and that's where teachers can really play a role. So can we truly replace the human tutor with a personalized tutor that actually works?
I think it will be a combination; I don't think we will ever fully do that in education. There are certain elements a human teacher brings that a personalized tutor will never be able to, at least right now. The second really challenging part for a personalized tutor is that an education curriculum can only be as good as the data that goes into it.
Data availability is still a challenge, because there's a lot of bias in the data. What's good for certain demographics, especially in education, might not be relevant to others. So we have to be extremely careful with the data bias we commonly see, to make the personalized tutor truly work for everyone.
Jeremy Jurgens: Great. So we have the tutors available today, though not necessarily with the emotional level that we need. Good news for Professor Lee: there's still a role for educators in this process.
So, Vincent, maybe coming over to you. You're in the financial services sector in Indonesia, and you're running a relatively lean company, in part because you're using AI. Can you share with us some of the innovations you already see shaping the sector, and then maybe a little bit of an outlook on what you see emerging in the next couple of years?
Vincent Henry Iswaratioso: Sure, I can talk a bit about it internally and also externally. Internally, we have been adopting AI for quite a while, since before AI became such a catchphrase and everybody started saying they were going to be an AI-first company, basically everyone.
We started about three years ago, in much simpler terms at the time, with just automation, right? We started with RPA, robotic process automation, and we decided it was going to be very important for companies going forward, because a lot of our processes are repeated almost every single time. There are some variations here and there, but in financial services a lot of the stuff we do is just, you know, administration.
Then LLMs came in, and suddenly it became so much easier for everyone, not only to develop that internally for our own people but also to serve our customers. So we started deploying that for our customer service, and we've seen a major improvement, because if you offer financial products and services, there are definitely going to be questions and the odd error here and there; it's such a complex infrastructure and we have to integrate with multiple parties.
So when things happen, customers are very demanding, because it relates, of course, to their finances. Serving them with humans alone, I would say, would not be fast enough or scalable enough.
So we started with customer service. When we began about two years ago, we had a bit over 1,000 people serving about 50,000 contacts daily, so roughly 50 per agent. Now we're serving more than 200,000 to 250,000 contacts daily with 350 human agents; 90% of our contact volume is managed entirely by automation and LLMs.
I think that's just one indication of how much automation plus LLMs and AI can help in driving more efficiency through your organization. And we decided as well that we don't want to dehumanize the company itself: we decided to stop reducing headcount and instead focus on upskilling more and more of our people, so they can use and capitalize on the AI to become more productive.
That has now spread across every division we have in the company, not just customer service. In development, programming and engineering, for example, close to 20% of our code is now generated by AI. In product development, user interface and user experience, about 40% is already audited by AI.
Then our legal processes, NDAs and so on. For NDAs, we used to have a 24-hour SLA; now we're asking for less than 10 minutes, because we have learned so much and collected so much data that the AI can help improve or rewrite any variation we could possibly need; an NDA is such a simple legal document. Everything else is still manual. So...
Jeremy Jurgens: I'd just like to come back on this aspect of legal and so on. Financial services is understandably one of the more heavily regulated sectors. Has applying machine learning and AI made it easier to navigate that, or, because of the unknowns that accompany it, has it actually brought more challenges?
How do you see navigating and understanding strict regulatory environments with these AI tools?
Vincent Henry Iswaratioso: So I think one of the key challenges is always that there are just so many new regulations being put in place, especially these days, because the government and the regulators are trying so hard to keep up with the innovations.
They do come out, and this is a figure quoted by one of our regulators, every week: there's a new regulation every single week. For any legal team and compliance team to keep up with that manually is pretty much impossible. I would say that if you relied on doing it manually, almost every organization in the country would be in non-compliance.
So you do need the AI; the LLM helps you make sure that for every new regulation, every new government initiative or roadmap, you are, one, alerted to it, and then your legal and compliance teams have immediate access to what it's all about.
They can then reflect that whenever they have a new product in development, a new compliance submission or a legal question; they can benchmark and respond pretty quickly, whereas before it actually took days or even weeks to comply.
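For illustration, the regulation-monitoring loop described here might look like the following minimal sketch. The routing map and the summarize() stub are hypothetical stand-ins; a production system would likely use an LLM or embedding search rather than literal keyword matching, and nothing below describes DANA's actual implementation.

```python
# Minimal sketch: route each newly published regulation to the team it affects,
# with a short summary attached to the alert.
from dataclasses import dataclass

@dataclass
class Regulation:
    title: str
    text: str

# Hypothetical mapping from compliance topic to the team that owns it.
ROUTING = {
    "e-money": "payments-compliance",
    "kyc": "onboarding-compliance",
    "lending": "credit-compliance",
}

def summarize(text: str, limit: int = 120) -> str:
    # Stand-in for an LLM summarization call.
    return text[:limit] + ("..." if len(text) > limit else "")

def route(reg: Regulation) -> list[tuple[str, str]]:
    """Return (team, summary) alerts for every topic the regulation touches."""
    return [(team, summarize(reg.text))
            for keyword, team in ROUTING.items()
            if keyword in reg.text.lower()]

reg = Regulation(
    title="Circular 12/2024",
    text="New KYC requirements apply to all e-money issuers from Q3 onwards...",
)
for team, summary in route(reg):
    print(f"alert {team}: {reg.title} - {summary}")
```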
Jeremy Jurgens: Thank you for that. I'd like to shift and come to you, Warren. The media sector was maybe one of the first and most heavily disrupted by the digital transformation. You could argue that the media sector still hasn't recovered from that initial digital disruption, and now we're seeing an AI disruption taking place on top of the digital one.
What are you seeing, and I don't think we need to look three or five years out into the future, what are you seeing today in media companies, and how are they navigating both the AI challenge and the AI opportunity?
Warren Jude Fernandez: Thanks, Jeremy, and thanks for having me on this panel discussion as well. I think the media and communications sector is definitely grappling with these issues. They've been dealing with digital, and coming out of that, now they've got this new technology. But if I take a few steps back, I think all of us in this room sense that this is quite a critical moment in the technology's development.
It's not just some new tool or some new software; it's really going to change the way we work, operate, think and live, and you can sense that from the response: I think there have been about 10 AI sessions over the last few days, all very well attended. It reminds me of a moment many of us lived through in the 1990s, when we were first dialling up to join the internet; you remember the crackling sound, the squeaky noises.
If you think back to then, we could not have imagined what was going to happen over the next decade, to get to where we are today. I think AI is at that point in its development as well: we cannot imagine where this is going, and it's going to change the way we operate.
For the communications and media sector, just as an experiment, I thought, why not consult the latest whiz kid? So I put in a prompt and asked Claude 3.5: what are the upsides and the downsides of the impact on media and communications? It came back, almost instantly, with two pages of response. Let me just summarize it for you.
On the upside: improved efficiency, enhanced creativity, better language precision, data analysis and insights, personalization at scale, crisis management support, cost reduction. Downsides: job displacement, authenticity and ethics challenges, homogenization of content, privacy and data issues, and so on and so forth.
All pretty sound and sensible. But clever Claude came up with this caveat, and I'll just quote it for you. It says, "This outline provides a starting point for considering the potential impacts of AI on the comms and media industry. The actual effects will likely evolve as the technology advances and the industry adapts to the new tools." Now, translated into human language, that means you still need human input, human judgment and human interpretation to watch how this develops.
And that's the approach I think many communications organizations, including my own, are taking towards this: to embrace it, to figure out when and how to use it, when not to use it, and to make sure you have human input and human oversight, so we never lose that creativity and originality.
And I think one of the big challenges that every media and communications organization is going to have to grapple with is this amazing paradox: with AI, you have the ability to generate an infinite amount of content very quickly at very low cost. But as AI goes up, trust goes down. And the key question is: we may have infinite content, but we do not have infinite time to engage with it. So we will be looking for a sense of who we trust, who we can rely on to give us good, credible, reliable information.
I think that's the next big challenge. Some of you may be familiar with the Trust Barometer report that we put out every year. This year, we launched a report focused on innovation, and part of that was AI. The interesting thing we noticed was that many people are questioning and sceptical about where AI is going to take us.
Less so in Asia, because generally we've seen the benefits of innovation and technology; you can see all around China what impact it's had. But even in Asia, there's a sense, twice as many people feel this, that AI is not being well managed, and the worry is whether governments are able to keep up with this rapid change in technology.
And if governments cannot do it, then is AI being pursued for some narrow profit motive or some political agenda, and is it being rolled out in the interests of someone like me? When there is doubt or concern about that, it leads to questions about whether I should embrace this technology, and to impediments to its adoption.
Even further, it leads to questions about whether the technology is being used in the interests of society broadly, and about whether the system is acting in my interest. So there are political implications there.
So the technology is moving so fast, and the challenge for all of us in this room is: how do we communicate the value of this emerging technology so that we get the buy-in, we get the adoption and we get the full benefit of it?
Jeremy Jurgens: Great. Thank you for that. Now, this isn't the first time we've seen a panellist share what they pulled from Bard or ChatGPT or Claude, and I could imagine a French panellist using Mistral. Walk down the panel and we'd be putting the panellists out of work pretty quickly with these tools.
Now, I think there's a legitimate question a lot of people are asking, and Jay Lee, I'll come back to you. Companies are rapidly embracing AI, and countries are taking different approaches. What does reskilling look like in the age of AI? And I'd be happy to hear from the other panellists as well.
You know, we were talking earlier about manufacturing workforces adapting, and those that are able to adapt can get massive benefits from it. But what does that mean for the average worker or employee?
Jay Lee: Well, typically we talk about AI in terms of technology, tools and talent, right? You treat AI as a technology, you need tools to do the day-to-day job, and you need talent skilful enough to use the tools. So this is a cycle, right, and eventually it comes back to the human side.
It depends on how we look at it, right? If the worry is that AI will replace humans, well, wait a minute: it doesn't replace a person, it replaces a task. If you're not capable at that task, you will be challenged. So we need to understand the gaps in how to perform a task with high quality and high consistency.
Eventually, I will say, for people to know how to do it, they have to know what their weaknesses are: for example, I take too long to do things, or my work is inconsistent. So the AI first helps to augment intelligence. Augment: I help you do things faster and better. Previously, an hourly worker would come in and take one week to train. Now we have augmented, guided training, so they can self-train, self-learn, and in one day, two days, they can get on the job. Right, so that's good: they gain confidence.
But of course, you need to keep going. For work that wasn't done well, you want to move to automated intelligence, helping them check and correct it. Eventually you reach autonomous intelligence, and then people can move on to different tasks. So I think training people is very important.
Normally we take four key steps. First is principle-based training: giving them a background in AI, so they can self-learn. Second, practice-based learning: I give you a baseline data set and test you on it, and you can make virtual mistakes, fast, virtual mistakes, right. Third, problem-based training: you go to your job and physically use it to solve a problem, to see how you did compared to other people.
The last one is professional training: you have to teach other people to do steps one, two, three. Like a master black belt: once you become a black belt, you have the confidence to train other people. Training just one person is not good enough; you have to train at speed and at scale, to many people. Right?
That's systematic, not trial and error: systematic. The reason we do it that way is because we want to make sure this is implementable. So to me, AI also stands for 'actual implementation'. If it's not implementable, AI is just another buzzword.
Jeremy Jurgens: Excellent. And we heard from Vu that we'll actually be able to use AI tutors to help support that training.
We do have the opportunity to take questions from the audience, so if you have a question, do raise your hand and we'll bring a mic over. Any right now? If you need a moment to prepare your question, we'll come back to you. I don't see any hands up; we're on a very bright stage in a dark room.
Ah, I do see a hand up here in the front. If we could bring the mic up to the centre of the second row. If you could identify yourself and ask your question there.
Audience member 1: Hello, I'm [unidentified name] from Amazon Web Services; I'm the head of healthcare technology. Thanks for all your insights.
Clearly, AI is also bringing a lot of bias, hallucinations, toxicity and other issues with regard to misinformation and disinformation. I was wondering, based on your different perspectives: what do you think is the acceptable threshold, if there is one, for bias in the results and outcomes it produces? And who should be deciding on that?
Jeremy Jurgens: OK, who'd like to take this one up? Sam?
Samuele Ramadori: Well, in our field, I was going to say, I think there's less risk of bias. One thing we've experienced: as you go to something like ChatGPT, where you can ask anything under the sun and get an answer, you start risking elements of bias; you start getting into HR and everything, so it starts getting heavy.
I think there are a lot of applications, and we've seen the technology come around now, where if it's pointed in a certain direction with a limited data set, you can get some pretty powerful outcomes without that risk. In our field it's a bit less of an issue, but when you start hitting those delicate areas, it's probably front and centre.
Jeremy Jurgens: That's right. We have a number of elections coming up, in the UK and France, with a lot of people also looking to the US, as well as a number of elections taking place around the world. And this question of bias in the media is already present.
And then how does it get exacerbated with these new tools and capabilities?
Warren Jude Fernandez: I think bias and neutrality have always been a challenge in any form of communication, and that's why you still need human input and human judgment. I think over time, as the machine learning gets better and better, that gap will narrow.
But the question goes back to the issue of trust and the various forms of bad information: bias, misinformation, false news, fake news. The Trust Project has several different categories of what bad information might be, just to separate them and help you deal with them. And you're right: we've seen disinformation playing out in the Indian elections and the Indonesian election, and we'll see what happens in France and the US.
But I think it comes back to the ability to separate fact from fiction. As the AI gets better and better at this, at lower and lower cost and faster and faster speed, we will get to a point where we might not be able to trust what we see with our very own eyes. And then we have a challenge, right? If we can't agree on whether this is black and that is white, how do we have meaningful discussions about policy, which is shades of grey? I think that will be the challenge for all of us.
And it's not just the media that has to grapple with it; it's society. If you go back in history, we are almost at the point we were at in the late 19th century, when information was put out with very little ability for newsrooms and professional journalists to interpret, to cross-check, to do investigative journalism, because there were no professional newsrooms at that point.
The professional newsrooms of the kind we know today were built up over some 150 years, and we're not going back to that. So how do we re-establish the ability to distinguish fact from fiction from plain bias? I do think this is something societies will have to grapple with.
I used to work with the World Editors Forum, and this was an issue we grappled with. The fundamental business model that funded good-quality newsrooms is broken: advertising revenue paying for the journalism no longer applies. As societies, I think we are going to have to figure out ways of providing that good-quality information, separating out bias and false news. Just as universities are publicly funded, media might have to be publicly funded as well. That's one option.
Jeremy Jurgens: Great, I think this question of bias is really important, so I'd like to come to you, Vu, one more time on this, because curriculum is another one of these areas that can be extremely contentious and debated.
When you think about developing curricula, and you look at how your peers approach this as well, how do you approach this question of bias, especially once you have personalized education models? You're personalizing, but which curriculum, and on what basis, what framework?
Van Dinh Hong Vu: It's definitely a big topic; hallucination and bias are big topics when we use AI in education especially. As I mentioned briefly at the beginning, there's the bias in the data that goes into the training model, and as we see over-reliance on AI in education, and the harm that can cause, the common theme very soon will be the responsibility of every provider of AI solutions in the educational space to really ensure the quality of their training models.
So, to answer your question of who's responsible: there's a lot of regulation and conversation about responsible AI and AI ethics, but ultimately, for any company that puts out an AI model, there's what we in our community call explainable AI. Can you really explain what's happening? Because historically, up until today, we've said it's a black box, right?
Whatever the model spits out is whatever we take, and for a curriculum that cannot be acceptable: telling students, just take whatever we put out for you, even though we cannot explain why you're learning it. So explainable AI is extremely important for us. The other thing is bias. A very particular example in our space: we build voice recognition technology to help language learners improve their communication.
One of the biases in historical voice recognition models is that they're trained on data from native speakers. That means the technology understands you when you speak really good English with a native accent. But around the world, people speak English in different ways, and some people speak English less well.
So when we started the company, we spent a lot of time and resources building a technology that's more inclusive, taking in different accents of English, and now, over the last almost 10 years, we probably have the biggest data set of English accents, so that we can understand people from around the world.
So I think as we open up the solution, it's important for us to be able to understand everyone, instead of being biased towards certain training data. That's just one example. And ultimately it comes back to that reskilling, right: teachers, as content providers, are no longer content creators, because GPT can write content so fast.
Our job is to become content validators, to make sure we catch all the hallucination and bias and retrain the models, so that they put out the right thing for the learners' future.
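As a concrete illustration of how the accent bias described above can be measured, one common approach is to compare word error rate (WER) across accent groups. The sample data below is invented, and jiwer is just one widely used WER library; this is not a description of ELSA's evaluation pipeline.

```python
# Minimal bias-audit sketch: compute ASR word error rate per accent group.
from collections import defaultdict
from jiwer import wer

# (accent group, reference transcript, ASR output) - illustrative samples only.
samples = [
    ("native",     "please confirm my booking",  "please confirm my booking"),
    ("native",     "the meeting starts at nine", "the meeting starts at nine"),
    ("non-native", "please confirm my booking",  "please conform my working"),
    ("non-native", "the meeting starts at nine", "the meeting star at nine"),
]

refs, hyps = defaultdict(list), defaultdict(list)
for accent, ref, hyp in samples:
    refs[accent].append(ref)
    hyps[accent].append(hyp)

# A large WER gap between groups indicates the model under-serves one accent.
for accent in refs:
    print(f"{accent}: WER = {wer(refs[accent], hyps[accent]):.2f}")
```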
Jeremy Jurgens: Great. Thank you for that. I see another question here in the front. Just bring the mic up there.
Audience member 2: Thank you so much for the insights. My question was more around how we think, in a world of AI, about what should be a public good versus a private good.
Obviously, if you imagine this conversation happening 100 years ago, it would have been about roads, and roads being an important public good provided by the government.
So I'm wondering if there are conversations happening around that; I think Singapore is talking about building a foundational model for the country. As we think about the stack, which aspects should be government-owned versus privately owned?
Samuele Ramadori: I'd like to take that. The timing of that question is interesting in our country: Canada just had an allocation of dollars from the government to invest in compute, right?
The issue was that our sovereign compute was so small, and I think a lot of countries are facing that issue, with large hyperscalers providing service around the world; Europe obviously has the same dynamic. And there's a very large, very logical question as to why you would do that: why put public dollars into something that the private sector would normally supply?
But at the same time, even if you go down that route, often the resources are what they are, and you may not achieve what you're looking for in the end anyway. And by the time you do put that compute in the ground, it'll be out of date within two or three years. So it's a bit of an issue there.
There's also the same question around the actual foundation models; you can double down on the question there as well, especially when you start applying those to public data, public goods and public resources. That debate is happening live as we speak, and I don't think we really know the answers. The problem is that the technology is moving so fast we're not even sure what we're trying to regulate or figure out.
But that's another example of an area where you just pray that the speed of government is able to keep up with the changes that are happening. You pray.
Jeremy Jurgens: Great, thanks. Vincent, I'd like to put the same question to you. In India, we've seen the Aadhaar programme for national digital identity provide the foundation for "know your customer", and then the UPI payments tool built on top of that, provided as a kind of public infrastructure that's absolutely critical for financial services.
How do you see this tension between public goods and private goods, while still having sustainable business models to pay for the compute and the energy that need to go into it?
Vincent Henry Iswaratioso: I think this is a very important question, right? When we talk about the foundational technology, I don't think all of us, all countries in the world, should invest in it, because at the end of the day there's going to be certain foundational technology that we can all use and capitalize on. I think the most important thing will be the knowledge base.
In the future, the knowledge base will be the key to the sustainability of any country, organization or institution. That's where we have to invest. And talking about that, there's a danger as well in trying to democratize everything from the government side.
For example, in India, if you talk about Aadhaar and UPI, it's great for the public, but it's very difficult for the companies, because the companies no longer actually own the data. For any transaction happening now in the realm of UPI in India, you have the concept of an issuer and an acquirer with a principal in the middle.
The company that issues the payment will only have the issuing data; it will not get the acquiring data. And the company that does the acquiring for the merchants will only get data from the merchant side, without the data from the user side. So each party always has one-sided data, while the full data set sits with Aadhaar or UPI. That does create, I would say, a weakness in terms of the knowledge base.
As a company, if you don't have the full set of data, being able to build any sort of product or do any profiling will be very difficult, right? For example, you want to create credit scoring but you have only half of the data; you want to create a new insurance or investment product and you only have partial data.
So as a company, I believe having that knowledge base will be the key, and the second thing is how to make sure that knowledge base can be used to reach and target the right users, so that they can benefit from it. I think that's a discussion we need to have so everybody understands, mostly with governments and regulators: it's not just about purely, entirely democratizing, without thinking about long-term sustainability.
Jeremy Jurgens: Maybe I'll challenge that just a little, because we've also heard concerns over the aggregation of data, right? There's a concern that a few companies can end up holding all of this data, and by providing it as public infrastructure, you may actually force some companies to revisit their business models and rethink them in a way that's more customer-centric.
Now, I recognize there are definitely advantages to that data aggregation, but there are also potential models around anonymized data and so on. So how do you see balancing that tension with corporate needs?
There are clearly benefits to data aggregation, but we also have to recognize citizens' rights, and even a recent tendency in many countries for people to want more control over their own data and to keep it anonymized from various corporate actors.
Vincent Henry Iswaratioso: Sure. I think we should also discuss privacy computing, what we call privacy-enhancing technology. I think that will be a key technology to help with this.
When you talk about certain institutions hoarding all of the data, that's a situation where even the consumers, the users, the customers themselves have difficulty accessing their own data, right, because it's held by certain institutions. But if we talk about the concept of standardization and open platforms, especially with privacy-enhancing technology, then every individual and institution would actually have access to that data, even though they don't own it. So we can deploy that kind of framework.
Essentially, it's no longer that the rules apply only within certain institutions; it becomes about the ability of any participant in the ecosystem to process the data, and how well they process it to create solutions for users. There are definitely some encouraging directions there. I don't think it has yet been deployed massively across the industry, especially because if you talk to the financial institutions, the banks, the central banks, they're scared that the data is going to be taken by someone else.
So they tend to have this hoarding mentality. But once you talk through all of this, I think everybody will understand that it's kind of useless to hoard the data; it's about how you process it, how your knowledge base, your technology and your ability to process this data can be better or more targeted compared to the other parties.
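One example of the privacy-enhancing technology mentioned here is differential privacy: a platform can answer aggregate queries without exposing any individual record, so participants gain insight from data they don't own. The sketch below is a minimal, illustrative Laplace-mechanism example; the data, bounds and epsilon are assumptions, not a description of any deployed system.

```python
# Minimal differential-privacy sketch: release a noisy mean so that no single
# record can be inferred from the published statistic.
import numpy as np

rng = np.random.default_rng(42)

# Raw transaction amounts held by the platform - never shared directly.
amounts = rng.lognormal(mean=3.0, sigma=1.0, size=10_000)

def dp_mean(values: np.ndarray, lower: float, upper: float, epsilon: float) -> float:
    """Differentially private mean via the Laplace mechanism."""
    clipped = np.clip(values, lower, upper)       # bound each record's influence
    sensitivity = (upper - lower) / len(clipped)  # max effect of one record on the mean
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(clipped.mean() + noise)

print("true mean:   ", round(float(amounts.mean()), 2))
print("private mean:", round(dp_mean(amounts, lower=0.0, upper=500.0, epsilon=1.0), 2))
```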
Jeremy Jurgens: Thank you. Warren, I think you also wanted to jump in.
Warren Jude Fernandez: I wanted to come back to the question the gentleman asked about the public and private sectors. I think there is an increasing sense that we need to see collaboration between the two, because this technology is moving so far and so fast that the worry is governments might not be able to keep up.
Even in a place like Singapore, where the government is relatively organized, only 50% believe the government is on top of regulating AI, let alone in other places. And the sense is that businesses have a responsibility to step up and engage, and there's a big desire for more collaboration between the two.
I think it goes back to the issue of trust. A fellow speaker was talking about explainable AI. What our study showed, I think, is that people want to feel they are being engaged, not talked down to by folks like us telling them why this is good for them, but given a sense that this is technology that's actually going to benefit them, benefit someone like them and improve their lives.
The more they feel they have a stake in it and a sense of it, the greater the adoption. So there is a business interest here: the technology may be wonderful, but if you run up against the human instinct to reject it, it's not going to take off. As important as the innovation is the implementation. That's where we've got to find ways to get business and government to work together and collaborate, so that we can amplify this.
Jeremy Jurgens: Great. Thank you.
Samuele Ramadori: Just a comment here on access to data and engagement. We made a choice a few years back: we're collecting data that today isn't otherwise collected, so we're accumulating a fresh data set, and the "smart" business decision might have been to use our own nomenclature and keep the walled garden up, and so on and so forth. But we took a different approach.
And I think a number of the startups and scale-ups we're seeing are doing that: not just using some form of standard that's already out there, but even participating in the creation of that standard and improving it. If we do that, you start hitting the trust element; we start enabling multiple people to collaborate, using the data for benefits that make sense to people. So I think that's a critical one.
I wish I had some kind of barometer for what percentage of companies are thinking that way, versus keeping the walled garden going. But those walled gardens, I think, are a real issue; the risk of crashing trust is very high.
Jeremy Jurgens: So something for our friends at Edelman and the Trust Barometer to pursue further. I'd like to come back to this aspect of productivity, but I do see a question in the audience, so why don't we take the question here in the front first.
Audience member 3: Thank you. I would like to ask a technical question.
Jeremy Jurgens: And could you identify yourself?
Audience member 3: Yes, I'm Chinese, from a petrochemical company.
Jeremy Jurgens: OK.
Audience member 3: We understand the United States is taking the leading role in commercial AI. As Chinese, naturally, we try to understand: what is the gap between China and the United States in terms of AI? Thank you.
Jeremy Jurgens: Or is there a gap?
Samuele Ramadori: Hard to say. I'm not sure.
Jeremy Jurgens: Yeah, that's a great one. Anybody like to take that up? Otherwise, I'll call on people directly.
Samuele Ramadori: I won't deny that I've had that question asked probably four or five times in the last two days, and I came here trying to discover the answer. I mean, here's the trust theme again. To me, coming to China, it feels like there's a wall around China: we know there's a lot of AI work being done, but we don't really know what it is.
I guess in the US, and maybe this is not going to be a fair comment, I think it's more public. We kind of know that every two weeks it's going to be Google and then Apple; they all take their turn launching the next LLM, so it's very marketed and promoted. But again, I don't want to pretend that we know everything that's being worked on.
From the Chinese side, we don't know, from our side sitting in Canada, but we do get the sense that the skill set and the talent pool are very strong now. OK, this latest wave of generative AI might be a step forward for the US and a few European countries, but the perception from our side is that there's still quite impressive innovation going on here. I don't know; you're probably closer to it.
Jeremy Jurgens: And before I come to Jay Lee, I'd also offer an observation; we talked about bias earlier. From what I've seen, there's clearly a bias in that English is relatively widespread, and so people start to assume that only what they read in English media is taking place. Yet when you're actually discussing with people here, and with the Chinese media, they have a good view of what's taking place on both sides.
There was even a case recently, I believe, where it turned out that some young US researchers' model wasn't so novel: they had borrowed it from Chinese researchers who had an open-source project. It was sufficiently open that they could take the work and try to pass it off as their own. But, again, thanks to the internet, that was revealed relatively quickly.
Jay Lee: To echo the question: I don't think we should confuse who is "good". Go back to the ABC, right: algorithm, big data and computing. If you compare on all of them, it depends how you look at it; I don't think anyone is simply better.
OK, the big gaps are in the verticals: your transportation, your energy, your manufacturing, your society, your students. You have to be good in those areas. Being good at ABC alone, for me, has no meaning; you have got to have a purpose for using AI.
Go back to the source of the problem. Then you find a process to solve the problems and reach the purpose; now you're good. Otherwise, to me, AI has no point, right? That's a very important point. Eventually, we need to rethink and be humble.
AI is not just high tech. Right now, to me, it is hard work: you have to become hands-on, and then you can implement the things you want and be a winner. Otherwise, you're just a talker.
Jeremy Jurgens: Thank you for that. So, the ABCs and actual implementation; and I guess actual implementation leads to challenges in China just as it does in other countries.
OK, two more questions, and then we'll have to begin wrapping up. In the front row.
Audience member 4: Oh, hi, thanks for taking my question. I'm Palin from Tyson Media. Back around 10 years ago, in China, in the internet age, we used the same infrastructure, the same databases as the US, and we just developed applications on top of that.
But in the age of AI, especially generative AI, from day one we are decoupling: we are using a different set of foundation models and a different set of computing power. So for those of you working at the applications layer, is that a problem for you?
I mean, if you develop applications across countries, you need to use different infrastructures. Do you worry about that? Thank you.
Jeremy Jurgens: Maybe I could direct that to you, Vincent.
Vincent Henry Iswaratioso: Yeah, I can answer that, definitely. We do try out the various LLMs; I think we've experimented with a lot of them, including the ones in China as well. At the end of the day, I think what the professor said is really relevant: it's about what can give you the best result for your specific needs.
I'll give a pretty practical example comparing the US models, because most people must have tried a US LLM at least once. There's a difference, for example, in the token context numbers between ChatGPT from OpenAI and Gemini from Google, right?
Gemini from Google can take from 1 million to 10 million tokens; ChatGPT is less than that. And of course there's another one, which is what Meta has, Llama, where you need your own system to run it.
So once again, every LLM or AI will have its own advantages, its own capabilities that differ from the others. For example, if you really want Chinese-language contextual problems or questions and answers, the ones in China will definitely be better than anyone else's, because they've been trained in that sense. And here's an exact example that happened to us, right?
We tried processing tokens in the Indonesian language, Bahasa, compared to English. At the time, the Indonesian language turned out to be two to three times more expensive in terms of token pricing; it's better now.
So that's why it's really up to every company to explore and try for themselves which one fits them best. It doesn't really matter whether it's the Chinese technology or the others you asked about.
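The token-cost effect described here is easy to check: the same sentence often needs more tokens in Indonesian than in English, because common tokenizers are trained mostly on English text, and per-token pricing then makes the Indonesian workload proportionally more expensive. The sketch below uses OpenAI's tiktoken tokenizer with invented example sentences; actual ratios vary by model and text.

```python
# Minimal sketch: compare token counts for parallel sentences in two languages.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by several OpenAI models

pairs = {
    "English":    "How can I check my account balance?",
    "Indonesian": "Bagaimana cara saya memeriksa saldo rekening saya?",
}

for lang, sentence in pairs.items():
    print(f"{lang}: {len(enc.encode(sentence))} tokens")
# A consistently higher token count per sentence means the same workload
# costs proportionally more at per-token pricing.
```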
Jeremy Jurgens: Great, thank you. I think we're going to have to wrap up there. I'd like to invite the panellists to share maybe a 30-second takeaway with the audience, something to keep top of mind when thinking about the deployment and utilization, the actual implementation, of AI.
Warren?
Warren Jude Fernandez: I would sum it up with the word "trust". Because this is moving so fast and so far, we have to put a lot of effort into explaining it to people, engaging with them in language they can understand, so that we get the most benefit out of this technology. It's not intuitive, and it's not a given that people will accept it.
People are more accepting of green technology than, say, gene-based medicine because of that trust element. So I think we have an opportunity to win people's trust if we can convey the benefits of it.
Jeremy Jurgens: Right. Thank you. Vincent.
Vincent Henry Iswaratioso: The key here is accessibility. We believe that capitalizing on this in the right way will give broad accessibility, especially to financial services, for the people. Coming from a country like Indonesia: our accessibility is high in terms of financial service access, but our literacy is low.
So the point is that providing people with access to AI will actually accelerate financial literacy itself. The opportunity is there. AI is just a tool, so as long as you know what to use it for and have a vision of what to do, it will help you; otherwise, it's just another tool.
Jeremy Jurgens: Thank you. Sam?
Samuele Ramadori: I would reiterate that most companies are still early in the journey of being ready to use AI. And I'm seeing regular, traditional ways of looking at ROI when it comes to investment; I think we're going to have to come up with a little bit of half-faith ROI.
Companies should be investing in the base platform a little bit regardless of what you're able to prove, because there will be outcomes you cannot foresee today. So there's a bit of a leap of faith that most companies should take in terms of preparing the base.
Van Dinh Hong Vu: For us, I think it's the cultural shift in how you leverage AI to best serve your needs. In education, that means you have to really rethink how AI can be applied to your training and your learning style; the same goes for organizations.
AI is available everywhere right now, but how much you integrate that AI into your training, into the assessment of your employees and into identifying skills, so that you can reskill, newskill and upskill, is really important.
Jeremy Jurgens: Thanks. Jay Lee, one line.
Jay Lee: Before chasing the potential, go back to the legacy problems: government services, our energy, our transportation, our healthcare. Identify the low-hanging fruit and solve those problems before you keep talking about AI.
Jeremy Jurgens: Thank you. OK, trust, accessibility, looking at the fundamentals.
Thank you for joining us here today in Dalian, China and wishing you a good continuation of the meeting. Thank you.