Transcript of CCG seminar: European and Chinese Perspectives on AI Governance
In collaboration with German and French Embassies in Beijing, CCG event brings together European and Chinese minds on AI policy.
On June 3, 2024, the Center for China and Globalization (CCG), in collaboration with the German and French Embassies in Beijing, hosted a seminar at its Beijing headquarters titled "European and Chinese Perspectives on AI Governance."
The event featured keynote speeches by Patricia Flor, German Ambassador to China; Bertrand Lortholary, French Ambassador to China; and Gao Xiang, Director-General of the China Science and Technology Exchange Center, Ministry of Science and Technology.
Following the keynote speeches, a panel discussion took place with the following participants:
Zhang Linghan, Professor at the Institute of Data Law, China University of Political Science and Law; Member, UN High-Level Advisory Body on AI
Marc Lendermann, Head of Division for Bilateral Digital Policy, German Ministry of Digital and Transport
Gu Dengchen, Director, International Policy Research Center, AliResearch
Marjut Hannonen, Head of Trade and Economic Section in Beijing, European External Action Service
The panel discussion was moderated by Mike Liu, Vice President and Senior Fellow at CCG.
CCG has broadcast the video recording of the event on Chinese internet platforms and the video remains accessible online. The recording is also available on YouTube.
The following transcript is based on a recording and has not been reviewed by the German Embassy, French Embassy, or the speakers.
Mabel Lu Miao, Co-Founder and Secretary-General of CCG
Your Excellencies, good afternoon. Ladies and gentlemen, a warm welcome to the Center for China and Globalization. H.E. Ambassador Patricia Flor, H.E. Ambassador Bertrand Lortholary, and Mr. Gao Xiang, Director-General of the China Science and Technology Exchange Center, distinguished guests, ladies and gentlemen, welcome to CCG and to today's symposium on "European and Chinese Perspectives on Artificial Intelligence Governance."
I am very glad to see so many distinguished guests from foreign embassies, experts, and friends from important media outlets present at CCG's headquarters in Beijing. CCG is keen to serve as a bridge for the various sides to exchange views on global governance. As a non-governmental think tank in China, CCG has been recognized as one of the world's top 100 think tanks. It was granted special consultative status by the United Nations in 2018. We just held our 10th annual China and Globalization Forum at the end of last month, and more than 300 participants from 60 countries around the world attended the conference. Artificial intelligence was one of the hottest topics discussed at that forum. Indeed, AI has long been a significant focus in global governance, even before the launch of ChatGPT in 2022. Today, we are very pleased to serve as a platform for all stakeholders to share their valuable opinions and insights on AI technology and its regulation and governance. Today's symposium is jointly hosted by CCG and the French and German Embassies in China. We are glad to work with you to promote in-depth discussions on this critical topic alongside several distinguished experts. The symposium is divided into three parts. First, we will have speeches from Mr. Gao Xiang and the ambassadors of France and Germany. This will be followed by a panel discussion. We will conclude with a networking reception hosted by the French and German ambassadors. Thank you all.
Let me first introduce our speakers today. We would like to invite Dr. Henry Huiyao Wang, Founder and President of CCG. He was a Counselor of the State Council of China, Vice Chairman of the China Association for International Economic Cooperation under MOFCOM (Ministry of Commerce of the People's Republic of China), Vice Chairman of the China Public Relations Association, and Director of the Chinese People's Institute of Foreign Affairs. The floor is yours, Dr. Wang.
Henry Huiyao Wang, Founder and President of CCG
H.E. Ambassador Patricia Flor; H.E. Ambassador Bertrand Lortholary; Mr. Gao Xiang, Director-General of the China Science and Technology Exchange Center under the Ministry of Science and Technology; ladies and gentlemen; and all the distinguished guests gathered here today at CCG: a warm welcome. It is really a great honor to extend this welcome to the diplomats and ambassadors. We see quite a few ambassadors here, along with experts, friends, media journalists, and, of course, CCG fellows. Your presence at this important discussion today underscores the growing attention AI has been receiving lately. The room is already full, and I apologize to the people at the back for not having more seats. But we can see how important this topic proposed by the French and German Ambassadors is, and we are extremely honored to co-host this event.
As we all know, China, Europe, and the United States stand at the forefront of AI research and applications and, of course, of AI governance issues. Each region has made significant strides in advancing this transformative technology. In particular, the European Union has emerged as a leader in AI regulation by enacting comprehensive laws aimed at ensuring ethical and responsible AI development. UNESCO also has an AI ethics initiative. This regulatory framework will serve as a very good basis for future AI governance discussions. It will provide valuable examples for the global community on how to better utilize AI, which is entering our daily lives very quickly.
So, the imperative of cooperation in AI regulation cannot be overstated. AI is not merely a technological innovation; it also raises profound ethical questions and presents numerous challenges for human society. Issues such as security, privacy, and the societal impacts of AI necessitate a collective approach. It is through joint efforts with our French and German colleagues, and of course with the Ministry of Science and Technology represented here, that our countries can really work together, particularly with European countries. I'm encouraged by China's active participation in the AI Safety Summit held at Bletchley Park in the United Kingdom last year. I know the Ministry of Science and Technology sent representatives there. This summit marked a significant step towards fostering international dialogue on AI safety and governance. Furthermore, during President Xi Jinping's visit to France this May, China and France issued a Joint Declaration on AI Governance. This declaration reflects our shared commitment to advancing AI in a manner that is both innovative and ethically sound. Looking ahead, we eagerly anticipate the third AI Safety Summit, scheduled to be held in Paris in 2025. This upcoming summit represents another critical opportunity to achieve a broader consensus on AI regulation and cooperation. We know that there is also another summit this year. You can see this topic is becoming more and more important.
We are optimistic that through continued dialogues, exchanges, and events like this, we can establish a robust framework that guides the ethical development and deployment of AI technologies. In addition to our collective cooperation with Europe, we would also like to mention the recent developments in Sino-American relations regarding AI governance. As we all know, in May, China and the United States held the first meeting of their intergovernmental dialogue on AI in Geneva. This meeting facilitated a valuable exchange of views on the technology risks associated with AI, underscoring the necessity of ongoing dialogue between China, the U.S., the EU, and many other countries. It is noteworthy that despite frequent engagement between countries in recent years, we have yet to achieve concrete and essential milestones in AI governance. This highlights the urgency for us to intensify our efforts and work collectively towards tangible outcomes. Let today's discussion, which CCG, the French embassy, and the German embassy co-organize, mark a new beginning for these collective efforts to harness the potential of AI while mitigating its risks. In closing, I'm very honored to have this opportunity to hear all your valuable insights on this matter, and I particularly look forward to our distinguished ambassadors' thoughts. Also, I'm sure my colleague Mr. Gao Xiang, who is an expert on this and heads the Science and Technology Exchange Center at the Ministry of Science and Technology, is very knowledgeable. So we'd like to hear from all of you on this important occasion. Let's really work together to strive for a better future in which AI serves as a force for good to promote the well-being of all humanity. Thank you very much. I appreciate your coming.
Mabel Lu Miao, Co-Founder and Secretary-General of CCG
Thank you, Dr. Wang, for your opening remarks. Next, we would like to invite Mr. Gao Xiang, Director-General of the China Science and Technology Exchange Center under the Ministry of Science and Technology, to deliver his remarks. The floor is yours.
Gao Xiang, Director-General of the China Science and Technology Exchange Center, Ministry of Science and Technology
Your Excellency Patricia Flor, German Ambassador to China; Your Excellency Bertrand Lortholary, French Ambassador to China; President Henry Huiyao Wang; Secretary-General Mabel Miao; esteemed experts: good afternoon. I'm very happy to attend today's symposium, and I would like to share with you some of my thoughts on and understanding of AI. Artificial intelligence is a new frontier in human development and a crucial driver of scientific and technological revolution and industrial transformation. Currently, AI is at a critical stage of explosive growth, with rapid advancements in big data, deep learning, reinforcement learning, and other technologies. Various large models and generative AI applications, such as Sora and ChatGPT, are emerging, which will undoubtedly spark a new wave of AI and accelerate the arrival of the era of Artificial General Intelligence (AGI).
The rapid growth of global AI technology will undoubtedly have a profound impact on economic and social development, as well as the progress of human civilization. This growth will bring numerous opportunities worldwide. However, it will also introduce risks and complex challenges, both predictable and unpredictable, such as privacy issues, ethical risks, social equity concerns, military security, and geopolitical tensions. These are key elements in AI governance, directly affecting the destiny of humanity. They are common issues faced by all countries globally. As a responsible country in AI, China believes there should be equal emphasis on development and security. That's a very important philosophy of AI governance for China. New things and new opportunities should be embraced; and at the same time, brakes should be checked before setting off. I'd like to share with you our philosophy and practice in AI governance from two aspects.
On the one hand, China promotes the healthy development of next-generation AI and actively explores new regulatory and governance frameworks. In July 2017, China issued the Plan for the Development of the Next Generation of AI, which clearly said that by 2025, a preliminary system of laws, regulations, ethical norms, and policies for AI will be established. Over the years, the Ministry of Science and Technology, the lead agency for this initiative, has been steadily improving various laws and regulations.
Since the end of 2022, the development of large models has driven a new iteration of AI technology, and generative AI has become a new development direction for the AI industry. In order to promote the healthy development and application of generative artificial intelligence, in July 2023, China issued the Interim Measures for the Management of Generative AI Services, which made institutional arrangements for key issues such as data security, privacy, misinformation, and IPR in the development of generative AI. This marks an important step for China in accelerating AI legislation. As far as we know, this is the very first special legislation on generative AI in the world. On the other hand, China actively cooperates with all parties to promote AI governance and to push the reform of the global governance system in the right and rational direction:
First, China actively engages in global AI governance, contributing its insights and solutions. At the 3rd Belt and Road Forum for International Cooperation last year, President Xi Jinping proposed the Global AI Governance Initiative. This initiative outlined China's views and propositions, offering constructive ideas for AI governance and serving as a reference for international discussions and rule-making.
The Chinese government emphasizes three main principles. First, ensure that AI is a force for good. The development of AI should be conducive to the welfare of all humanity, in line with ethics and norms, in conformity with the rules of international law, and in keeping with the trend of human civilization. Second, ensure safety. AI should always be placed under human control, with constantly improving interpretability and predictability. Third, ensure fairness. All countries should be able to participate on equal terms in the process of AI development and share its benefits fairly.
Second, China actively conducts exchanges and dialogue with all parties on global AI governance and strengthens practical cooperation. Since the beginning of 2024, China has worked together with the United States, the EU, and other countries to conduct AI dialogues and exchanges, which has helped to build extensive consensus. This will promote the healthy development of AI.
From April 14 to 16, during German Chancellor Scholz's visit to China, President Xi noted that the industrial and supply chains of China and Germany are deeply intertwined, highlighting the significant potential for cooperation in green transition, digitization, and artificial intelligence. From May 5 to 7, during President Xi Jinping's visit to France, he held talks with the French President, resulting in a joint statement on AI and global governance. This statement, the first of its kind between China and a developed country, underscores a shared commitment to promoting safe, reliable, and trustworthy AI systems and enhancing international cooperation in AI. Last month, on May 14, in order to implement the consensus reached by the Chinese and U.S. leaders in San Francisco, the first China-U.S. AI intergovernmental dialogue was successfully held in Geneva, Switzerland. The two sides exchanged views professionally and constructively on issues such as AI technology risks and global governance, which greatly encouraged the international community and sent a positive signal for global AI governance. The Chinese government has always regarded Europe as an important aspect of China's "major-country diplomacy with Chinese characteristics" and an important partner in achieving Chinese modernization. Since 2023, China and Europe have fully restarted face-to-face exchanges at all levels, fully activated dialogue and cooperation in various fields, and explored the potential for cooperation in green transition, digitization, and science and technology development. China-EU relations have shown a good momentum.
Technological innovation has always been a key aspect of China-EU cooperation. Since the signing of the China-EU Science and Technology Cooperation Agreement, both sides have established comprehensive cooperation mechanisms led by the China-EU Steering Committee on Science and Technology. This framework provides significant momentum and institutional support for scientists from both regions to engage in exchanges and collaborations across various fields. China-EU cooperation in science and technology has made important contributions to global development and human welfare.
As a national-level institution for foreign science and technology exchanges, the China Science and Technology Exchange Center (CSTEC) leverages its unique advantages to actively engage in non-governmental and international science and technology exchanges and cooperation. Here I would like to briefly introduce what CSTEC has done in collaboration with the EU. Recently, we have launched a series of activities for the China-EU Shared Community of Science and Technology Innovation, supported by EU embassies in China. In fields such as life sciences, space sciences, and new energy equipment manufacturing, we have organized discussions and exchanges between Chinese universities and research institutions and their European counterparts. I would like to inform the German ambassador to China about our extensive cooperation with German research institutes, including the Bayer Group, which has significantly promoted non-governmental science and technology dialogue and cooperation between China and Europe.
In May, during President Xi Jinping's visit to France, CSTEC successfully organized the China-France Science and Technology Cooperation Achievement Exhibition in Paris, hosted by the science and technology authorities of both countries. The exhibition saw high participation from French scientific institutions and attracted numerous visitors, receiving wide acclaim from the scientific communities in both China and France. In the future, we look forward to conducting non-governmental exchanges with European universities, research institutions, and companies in the forefront of science and technology, contributing to global science and technology governance.
Ladies and gentlemen, today's society and human progress are increasingly intertwined with the algorithms and networks that have transformed our work and lives. We are at a critical historical moment. We must vigorously develop AI technology while urgently guiding AI, through international cooperation, in a direction that serves human civilization. Let us uphold the principles of consultation, joint development, and sharing, adhere to the concepts of people-oriented AI and AI for good, and strengthen cooperation and exchanges in AI governance. Together, we can prevent risks, ensure that AI is safe, reliable, and controllable, and share the benefits of AI technology development among all countries.
These are my preliminary thoughts, and I look forward to your feedback. I wish this symposium a great success. Thank you.
Mabel Lu Miao, Co-Founder and Secretary-General of CCG
Thank you very much, Mr. Gao Xiang, Director-General of the China Science and Technology Exchange Center, Ministry of Science and Technology. Next, we would like to warmly welcome H.E. Ambassador Patricia Flor. Ambassador Flor is a senior diplomat with more than 30 years of experience working for both the EU and Germany. She was the German Ambassador to Georgia, the EU Special Representative for Central Asia, and the Ambassador of the EU to Japan. In 2022, she became the Ambassador of Germany to China. Ambassador Flor participated in the 10th annual China and Globalization Forum held in Beijing a week ago by CCG, where she gave a speech on multilateralism in the multipolar world. The floor is yours, Ambassador.
Patricia Flor, German Ambassador to China
Excellencies, ladies and gentlemen, colleagues, dear panelists, dear guests, and of course, thank you, President Wang, for hosting us today in your premises here. A warm welcome to all of you. Thank you for coming. I'm very pleased to see you all here with us today.
Today, we want to talk about AI governance. That's not an easy task, even though AI surrounds us more and more in our daily lives. The technology is still in the early stages of its development. Now, as an ambassador, I'm not an expert, so it's good to see that we have many real experts here with us today. Some might say it is way too early to talk about AI governance and regulation, and that we should wait for further development of artificial intelligence before we start to think about how to regulate it. They fear that regulation may slow down creativity and progress. Others, on the other hand, say AI is developing way too fast: regulation can't keep up, and policymakers come too late. Some, like Elon Musk, have even called for a pause in the development of the most powerful AI systems to give humanity time to keep up. The reality is, though, that AI is here to stay, and we need to address its implications for daily life, business, society, and even international relations. Many countries around the globe have started to use it and regulate it. With the technology on the rise, global discussions have started about how to govern artificial intelligence internationally to foster its growth for productive purposes but also to contain its risks for society. The EU has recently brought forward a risk-based approach to AI regulation. The approach, in essence, is very simple: the riskier an application, the more regulated it needs to be. In Europe, AI applications for mass surveillance, such as facial recognition databases, predictive policing, or artificial intelligence used for social scoring, are banned. The goal is: don't hinder innovation, but mitigate existential risks and protect citizens' rights at the same time. China has brought forward regulation on more specific AI matters, addressing issues like generative AI and recommendation algorithms, as Mr. Gao just explained.
It is also reported that China is working on an AI law. Therefore, today's event is very timely. When it comes to AI, China and Europe have many concerns in common. However, it's no secret that we also have our differences, for example, on the question of how, where, and under which conditions to use AI. That is probably true not only for Europe and China but for many regions and countries around the globe. So, how can we bridge those differences? How can we organize global AI governance in a way that addresses common concerns, even if we do not agree on all relevant aspects of the matter? At the United Nations level, the UN High-Level Advisory Body on AI is debating how AI can be governed for the common good and how AI governance can be aligned in an internationally interoperable way, consistent with human rights as defined in the United Nations Charter and with the Sustainable Development Goals. That is why I am extremely pleased that we have one of the two Chinese experts of the UN High-Level Advisory Body with us today, Professor Zhang Linghan. A special welcome to you. I am excited and looking forward to learning your perspectives on AI governance and the recommendations the High-Level Body is suggesting. Besides the discussions ongoing at the UN level, other processes have kicked off. Just two weeks ago, many countries debated such questions at the AI Safety Summit hosted by South Korea, which was a follow-up to the AI Safety Summit in the UK and the Bletchley Declaration signed by many countries, including China. France will host the next AI Safety Summit, and I'm sure my French colleague will give us some insights into what is planned. To sum up, now is the perfect time to talk about AI governance. The technology is on the rise. It's a hot topic around the globe. It's exciting and frightening at the same time.
In any case, it will undoubtedly change our lives, which is why I'm extremely pleased to have a discussion with all of you today. And let me, therefore, now hand over to my French colleague, Ambassador Bertrand Lortholary. Thank you very much.
Mabel Lu Miao, Co-Founder and Secretary-General of CCG
Thank you, Ambassador Patricia Flor. Great remarks and introduction. Next, we would like to invite H.E. Ambassador Bertrand Lortholary. Ambassador Lortholary is a "中国通" (an expert on China and Chinese culture), as he studied Oriental languages and Chinese civilization. He has served as a diplomat for France since the 1990s in Africa, the U.S., and China. He became the French Ambassador to China in 2023. Let's warmly welcome the Ambassador. The floor is yours.
Bertrand Lortholary, French Ambassador to China
Dear Dr. Henry Wang Huiyao, H.E. Ambassador, liebe Patricia, Mr. Director-General Gao Xiang, excellencies, ambassadors, panelists, ladies and gentlemen, dear guests, and dear friends: if I may, I would like to thank you all very warmly for accepting our joint invitation this afternoon. I would like first and foremost to convey to the Center for China and Globalization (CCG) my sincere appreciation for hosting this important event and, of course, to our dearest German friends for co-organizing it with us. As you mentioned, Patricia, large-scale and far-reaching technology developments in the field of AI are taking place all over the world, including within the EU as well as in China, as we can all witness in our daily lives here in Beijing and across the country. And as you said, Patricia, AI is both an opportunity and an issue whose governance requires dialogue and cooperation across the board.
That is why I'm happy to report that the EU has chosen to regulate AI without hampering innovation. We are proud to have adopted the world's first comprehensive regulation on artificial intelligence, the AI Act. This regulation will prohibit practices that present unacceptable risks, but it will, at the same time, enable the development of safe and trustworthy AI solutions for our citizens and, as you said, Patricia, safeguard their fundamental rights. Our collective approach is thus very balanced. We want to mitigate the risks, but we also want to create a framework that fosters investment and innovation in AI.
On AI, as on other global challenges, I want to underline that, of course, there is little France can achieve without a strong and unified European Union approach. President Macron was very clear in his April 24 speech on Europe that AI needs to be one of the key areas in which Europeans increase their investments and autonomous capabilities. The EU has very strong ambitions when it comes to AI development. France, ladies and gentlemen, is determined to make a strong contribution to this overall EU strategy. On May 21, President Macron made important announcements concerning AI talent, infrastructure, funding, uses, and governance.
France will invest €400 million in nine AI clusters. We will push for the development of France's computing power.
We will mobilize more private investment for the development of high-end technologies, including LLMs, the famous "large language models".
We will work on the diffusion of AI to the different sectors of the economy and the administration, including health, justice, and education.
I'm pleased to say that our efforts are not going unnoticed. Many of the world's tech leaders recently attended the Vivatech exhibition in Paris, including Robin Li, the CEO of Baidu. Yet again, as President Macron mentioned, the only good governance of AI is global governance of AI. That is why France is pursuing constructive engagement on AI with all relevant stakeholders. China, as one of the world's leading AI players, must, of course, be involved in this conversation. As some of you know, and as Dr. Wang and Mr. Gao just highlighted, France and China adopted a joint statement on AI and global governance during President Xi's state visit to France last month. We agreed with China on the importance of fostering the development and safety of AI while promoting appropriate international governance and the use of AI for the common good. China has also expressed its willingness to participate in the AI Action Summit that France and the EU will host in February 2025, following up on the process opened by the United Kingdom and South Korea.
Our approach is the same as our event today. We want to gather all relevant stakeholders, including states, civil society, and private actors, to find as much common ground as possible on the global governance of AI. And beyond the issues of safety and security, this summit will also cover a wide range of challenges and opportunities triggered by AI, including innovation, governance, the future of work, and common goods. Ladies and gentlemen, dear friends, Germany and France, along with the European Union and its member states, will no doubt continue to jointly contribute to the global conversation on AI governance. Of course, this global discussion needs to be informed by expert ideas and contributions. That's why I am very much looking forward to listening to our very distinguished panelists. Thank you, everybody. 谢谢大家。
Mabel Lu Miao, Co-Founder and Secretary-General of CCG
The French Ambassador has beautiful pronunciation in Mandarin. Thank you very much. The perspectives from China, the EU, Germany, and France were all wonderful. We would now like to move to our panel discussion, and we look forward to it being a wonderful one. Thank you all for your great speeches. The panel will be moderated by our colleague, Mr. Mike Liu Hong.
Mike Liu is the Vice President of CCG and a Senior Fellow. He joined CCG in 2022. Mike is the former Managing Director and Legal Representative for DXC Technology in the Greater China region. Before that, he was the Global Vice President, Country Head, and Legal Representative for Infosys in the Greater China region.
I would like to invite Mike and our panelists, including Professor Zhang Linghan. As Ambassador Flor mentioned, she is a professor at the Institute of Data Law, China University of Political Science and Law, and a member of the United Nations High-Level Advisory Body on AI. Another panelist is Mr. Gu Dengchen, Director of the International Policy Research Center at AliResearch. We also have Ms. Marjut Hannonen, Head of the Trade and Economic Section in Beijing, European External Action Service. Last but not least, we have Dr. Marc Lendermann, Head of Division for Bilateral Digital Policy, German Ministry of Digital and Transport. The floor is yours. Let's set up our panel.
Mike Liu, Vice President and Senior Fellow, CCG
Thank you so much for joining me today. After listening to the fascinating talks from the ambassadors as well as Director-General Gao Xiang, this really is the right time to discuss this very hot topic. If I may take one minute to reflect on some major outcomes of last November's Bletchley Park agreement, they indicate the prospects for stronger global governance and a path forward.
If you look at the cumulative share of generative AI patents through 2022, China is in a leading position with 61% of all AI patents filed. On the other hand, if you look at large language models, the U.S. is in the leading position with 50% of large language models. Elon Musk has even said that by 2030, artificial intelligence will overtake humans in intellectual capacity. We see a lot of fear and uncertainty, as well as opportunities.
I'll start the first question with Professor Zhang. We have all heard lots of discussions at the UN. Can you give us some insight into the discussions currently happening in the UN High-Level Advisory Body on Artificial Intelligence? Also, how do you see the role of major players like the U.S., China, and the EU member states at the UN level?
Zhang Linghan, Professor at the Institute of Data Law, China University of Political Science and Law; Member, UN High-Level Advisory Body on AI
Thank you so much. It's a pleasure to be invited here. I want to briefly introduce the work of the UN High-Level Advisory Body. Our Advisory Body was assembled by the Secretary-General of the UN last October. We have been working together for 10 months now. I have also met great colleagues from France and Germany: my colleague from Germany is Anna Christmann, and my colleague from France is Rahaf Harfoush. So far, we have had three in-person meetings.
Last December, we had our first in-person meeting in New York, the headquarters of the UN. Before that, we worked together and met frequently, once a week actually, online, to draft our Interim Report. If you check and search online, our Interim Report was issued at the end of last December, just before Christmas.
In drafting the Interim Report, we had 38 members in total, and we were separated into three groups. The first group focused on Opportunities and Enablers. The second group focused on Risks and Challenges. The third group focused on International Governance of AI. I would like to share a very interesting detail: the experts who volunteered to be in the first group, on Opportunities and Enablers, are all male, while the experts who volunteered to join the second group, on Risks and Challenges, are all female. This suggests that women are more concerned than men about AI risks. It is a very interesting detail.
In our Interim Report, we listed all the opportunities and enablers, as well as the risks and challenges. But the most important one, also the one people are paying much more attention to, is AI global governance. We have a framework of how the UN should discuss AI global governance. We have the roadmap of what we should do in 6 months, 12 months, 24 months, and 48 months.
We had our second meeting in Geneva. On the first day, we visited seven of the international organizations under the UN framework, including the WHO and the ITU. When visiting them, we heard these other international organizations under the UN framework say, "We have already done so much about global AI governance. Why does the UN Secretary-General want to establish a new AI agency or AI governance framework to do that?" We also discussed the form of a future global AI governance agency. I think all the member states of the UN have been informed that we will discuss it at the Summit of the Future this September.
Last week, I visited Singapore and met Mike there. We also had our third in-person meeting. We will finalize the Advisory Body's final report and deliver it to all the member states in July or August. That's our progress so far. You asked about what kind of role the UN should play in global AI governance. We also had a discussion with the experts from the EU and the U.S., and we debated about when, how, and who should play a critical role in global AI governance. We can see that other states, like the UK and the U.S., have done a lot of work on it. We also have the G7, the G20, GPAI, and the AI Safety Summit. Some people asked us why the UN wanted to do that as well. I will briefly explain that.
The first is trust. If we review all the global governance initiatives, we can see that some states just draft an initiative and then call on other countries to come and co-sign a declaration or co-sign a statement. I felt this very deeply in Singapore, where we participated in the Digital Forum of Small States. An expert from Fiji said, "We are hardly included in this global AI governance discussion. We don't want just to be called in and told to sign here. We want to sit at the table and discuss all the details, our demands, and our needs in global AI governance. We want our voice to be heard."
That's why the UN should play the most important role in global AI governance. We can see all the statements; for instance, the UK initiated the AI Safety Summit, but only a few developed countries (genuinely participated in it). We have statistics stating that 180 countries are not included in all these global AI governance statements or initiatives. So that's why the UN should be the most important international organization in global AI governance. Thank you.
Mike Liu, Vice President and Senior Fellow, CCG
Thank you very much. You touched upon some very important topics, like the structure and whether all the members are involved, and especially what consideration we should give to those smaller states. It's a very important observation. As the next speaker, I would like to invite Marc Lendermann. Marc is the Head of Division for Bilateral Digital Policy at the German Ministry of Digital and Transport.
Marc also traveled all the way here. He just arrived from Singapore yesterday. Germany participated in the follow-up to the UK AI Safety Summit, held two weeks ago in South Korea. Could you share with us some insight into what was discussed and agreed on? Also, regarding what Professor Zhang was sharing, what are some key takeaways from both meetings in the UK and South Korea? What are some opportunities as well as challenges from your perspective? Thank you.
Marc Lendermann, Head of Division for Bilateral Digital Policy, German Ministry of Digital and Transport
Thank you so much for the question. And also thank you so much for having me. It's really exciting and it's an honor to share the stage with so many distinguished experts. As to what happened in South Korea two weeks ago at the AI Safety Summit, perhaps as background information, I should say that Germany participated in two ways. First, our Chancellor virtually participated in the meeting of the heads of government. And in the ministerial track, which took place in person in Seoul, our ministry participated in the form of our State Secretary, Stefan Schnorr, who is in charge of digital policy. What happened was that the discussion on AI safety, which was already initiated in November last year in Bletchley Park, was continued. But what is most remarkable is that the discussion has moved beyond an exclusive focus on AI safety. It has been expanded to include the topics of innovation and inclusivity. It has been discussed how we can make sure that global AI governance is inclusive and innovation-friendly. That has been something new, added to the discussion that took place in the UK last year.
(There are) two main outcomes of the AI Safety Summit. The first was the Seoul Declaration that has been adopted by 10 countries, including Germany, in which these governments have committed to cooperate on AI risk research, a very important topic. Second, the Seoul Ministerial Statement has been adopted in which 27 countries have committed to promoting evidence-based reports on AI risk. These are two very important and helpful outcomes that we hope will form a good basis for the further discussions that will take place at next year's AI Safety Summit in Paris.
Mike Liu, Vice President and Senior Fellow, CCG
Thank you very much. Maybe I'll come back with some details later once we finish the first round. I took two takeaways from your sharing. The first one is AI risk research. The second is how all the member states can come back with evidence-based research and reports, so we can do a fair, good assessment of where we are as well as the opportunities.
Next, I would like to invite Mr. Gu Dengchen, Director, International Policy Research Center, AliResearch. Mr. Gu, the floor is yours. The question for you is: Alibaba is a leading technology company. Navigating several regulatory spaces can be especially challenging, and it's hard to train foundation models. What are some of Alibaba's expectations regarding global AI governance?
Gu Dengchen, Director, International Policy Research Center, AliResearch
Thank you for the question. Actually, regarding the expectations, we have a lot of hopes for AI governance globally. A lot of institutions and academics ask me, "What is your expectation regarding the issue?" I will give you two expectations. The first one is that we hope AI governance aligns with the industry reality. We've heard a lot of things like "AI will do this, AI will do that." If we look back to the internet age, the big techs -- Amazon, Google, Meta, and Alibaba and Tencent in China -- all went through about 20 years of growth in market value while waiting for product-technology fit and product-market fit. It takes a long time. Nowadays, despite the big moments defined by GPT-4 or Sora early this year, we haven't seen a killer app on your cell phone. We haven't seen it. This is on the consumer end.
On the business end in China, Alibaba rolled out our first large language model one year ago. We called it Tongyi Qianwen (Qwen). Just last month in Beijing, we rolled out our new edition, Qwen 2.5. Actually, there have been three or four editions of Qwen. Nowadays, we have about 19,000 business clients on board, integrating AI into their workflows. In our survey -- Alibaba itself has an e-commerce branch, a logistics branch, and a science branch -- we talked to a lot of our colleagues. We found that AI is mainly used as an efficiency-improvement tool nowadays. We're not in an AI-native era yet. This is our first suggestion: that governance should align with the industry reality. The second one is, of course, trustworthiness, security, and safety. They should all be put at the very core of AI governance. Alibaba has been a true believer in secure and safe AI, not only last year but for several years now. You know, AI is actually not a new topic. From day 1, we have been dedicated to developing and deploying AI models and applications in a responsible way. We believe that without trustworthiness and security, there will be no AI. This is our second (expectation).
But just as a lot of our distinguished guests have mentioned, last month in Seoul, we had the Seoul Summit on AI safety. We noticed a report before the Summit. Some top scientists issued their reports. What impressed me the most is that our understanding of AI is at a very early stage, and our understanding of AI safety is also at a very nascent stage. So we believe that the problems brought by AI will ultimately be solved by the advancement of technology. We are trying hard to do AI safety research work nowadays. Thank you.
Mike Liu, Vice President and Senior Fellow, CCG
Thank you very much. One of the things about the industry reality is how you can take a step-by-step approach. So, as far as AI technology is concerned, we can apply this technology to transform the industry, but we're still in the early stage. We should not be too panicky. Basically, that's what I can draw from your sharing. Last but not least, I would like to invite Marjut Hannonen, Head of Trade and Economic Section in Beijing, European External Action Service. Europe recently published a broad AI law. Could you tell us about the main points of this law?
I'd refer to a colleague -- Mr. Gu, who was talking about some observations on AI safety. What was the European perspective in structuring this AI law, even though the technology is still at a very early stage? And if I can also follow up: is now already a good time to make a big AI law? What's the European perspective?
Marjut Hannonen, Head of Trade and Economic Section in Beijing, European External Action Service
Thank you very much for inviting me. I'm not an expert, but I can say a couple of words on the European legislation. As some of you know, on 21 May, which is only actually a couple of weeks ago, the EU adopted an AI Act, which is the first comprehensive legislation on AI in the world. We do hope that it will contribute also to international discussions and fora because I think it's a very good model.
This is based, as already flagged by our ambassadors, on a risk-based approach. For us, we see the most critical (priority) is exactly to make sure that AI applications, that the safety and fundamental rights of citizens will be safeguarded through their use.
Maybe I can comment on the fact that you said, is this the right time now to come up with this legislation? Yes, it is, because those points which are the most fundamental, the safety and fundamental freedoms of people, will have to be safeguarded also when we go forward. They don't change. They are already there. These concerns are already there. The need to safeguard these issues is there. So the sooner we regulate those, I think that it also creates the key space for innovation and clarity for operators, so they know under which terms and under which framework they can develop this AI so that it will be actually ethically acceptable and it will actually benefit people. So I think the sooner we do that, the better. And the EU has taken the first steps on that. So I think it's actually very positive.
When I come back to this risk-based approach, we have a step-by-step approach. We have banned applications that are considered too dangerous, for example, applications that manipulate people's free will or are used for social surveillance. We don't allow that.
Then the largest part that we are considering would be high-risk. High-risk is something that is used, for example, for critical infrastructure, for all medical devices, or for other types of applications. These high-risk applications are subject to certain rules and regulations. So they have to fulfill certain criteria to be able to enter the European market.
Of course, the standards will be very important in this respect -- the standards set by the European Commission and international standards. We expect that where there is compliance with the standards, there is compliance with the legislation. So this is very important. Then we have transparency regulations. We have chatbots and deepfakes, so it needs to be clear that people, when they are using these applications, understand that they are communicating with a machine. Transparency obligations on the developers have to be made clear. Also, these applications need to have an automatic system so that it can be immediately recognized that something is created by AI. It cannot be hidden. There has to be a system that is able to recognize that this is done by automated artificial intelligence. The last one I would mention is the large models, which are used to develop other systems. These are, of course, also subject to transparency obligations and, depending on what kind of content they have, may face additional obligations and regulations as well. This is the topic in a nutshell.
Mike Liu, Vice President and Senior Fellow, CCG
Thank you so much. If I recall, this is also a first in the history of technology: we are seeing global collaboration at such an early stage, (trying) to take a collaborative approach to figure out what the risks and the opportunities are. So my next question I'm not going to direct to any particular person, since we have two European representatives and two Chinese representatives.
China has also been making tremendous progress. What are some learnings we can draw from the European perspective? Maybe I'll invite our European friends to share one or two things you would suggest to Chinese friends. What are the things we can and should learn in adapting this technology? There is always the question of how we strike a balance between innovation and regulation.
After that, I will invite my fellow Chinese panelists to share some ideas: what can the grassroots innovation we have seen in the last couple of years offer European practice? Maybe I'll start with Marc. Okay, Marc.
Marc Lendermann, Head of Division for Bilateral Digital Policy, German Ministry of Digital and Transport
I'm gonna start and please feel free to chime in and add some comments. I think it's very early to say what China could learn from the European approach because we have just adopted the rules. As Marjut has mentioned, the implementation still needs to be seen. I think when it comes to that phase of implementing the AI Act, we're gonna have some learnings, both on the European level and also on the level of the member states that China could look at in order to see how things work out in practice and whether or not they should perhaps follow the same approach or look at alternatives and also see where to draw inspirations from and where not.
Something that might be an interesting point of discussion is the risk-based approach that has been mentioned earlier. It is an approach that has been fully supported by all member states.
But one thing that has been heavily discussed in the Parliament and also in the Council is whether the risk-based approach should apply to AI models or to the applications of AI. There are probably some pros and cons to both approaches. Some people argued, before the AI Act was adopted, that it might be more innovation-friendly to apply the regulation only to the actual applications of AI, not to the models as such. So this is something that might deserve closer observation and could be discussed by others, whether or not it's a good approach.
Mike Liu, Vice President and Senior Fellow, CCG; Former Managing Director, DXC Technology Greater China
Thank you. The European approach is very much an evidence- and data-based approach to analyzing and assessing the risks as well as the opportunities. That's basically what will lead policymakers to make informed decisions, right?
Maybe I'll invite Professor Zhang to share. You have probably also seen the tremendous growth in China in the AI space. In what areas are we going too aggressively or too fast, versus the Europeans making informed decisions? What's your view on this point?
Zhang Linghan, Professor at the Institute of Data Law, China University of Political Science and Law; Member, UN High-Level Advisory Body on AI
Thank you. Back to your last question: what can China learn from European regulation on AI? In the last 5 years, I participated in almost all the regulations in China about AI, like the algorithm recommendation, deep synthesis, and generative AI interim regulations.
When we talk about learning, I like this word, because AI safety is the greatest common divisor globally. I think all countries really value the importance of keeping AI safe, and all the efforts different countries have made to keep AI safe really deserve appreciation. Also, when we talk about each country's own path or own measures to regulate AI and make sure AI is safe, they have different perspectives, histories, and legal frameworks.
Actually, China issued the first generative AI regulation last August. And we all see that from the original version--the draft issued in April--it has changed a lot. In this procedure, I also organized several experts to advise the regulator to revise the articles, and I'm so glad all the advice was taken.
When the EU issued its AI Act, it was set to take effect in two years. We don't know what the technology will develop into in two years, or what progress will be made. And I think until then, maybe it's too early to say what kind of regulatory framework should be adopted. Also, in China, we can see that in the first week of May, the State Council's 2024 Legislative Work Plan mentioned that China is drafting its AI law. I'm also in the expert group. Judging from history, we think there may be two or three years before we can finalize our AI law. I don't know what will happen in the meantime.
So, I think we can learn from each other about all the measures that can make sure humans are protected by the regulatory framework and that AI is safe. Still, we cannot ignore the differences between the EU and China, as you introduced. We all know there are four levels of risk in the EU. Some risks and applications that are not acceptable in the EU are legal in China. It's (due to) the difference of culture and the difference of situation. Also, I think Chinese AI regulation is a kind of growth out of content regulation, a totally different approach from the EU's risk-based regulatory approach. So I think in the next two or three years, when the technology develops more and we can map the risks of AI, we can reach a better conclusion on what we can collaborate on and what consensus we have on AI regulation. Thank you.
Marjut Hannonen, Head of Trade and Economic Section in Beijing, European External Action Service
Yes, just a short comment on the entry into force of the EU regulation. In six months' time, it is gonna enter into force as planned. Everything is planned. The [inaudible] will be enforced in six months, then the rules for the large platforms and GPAI in one year. The rest comes in two years. So we will actually start rolling it into force in the coming months.
Mike Liu, Vice President and Senior Fellow, CCG
Very good. Before I open the floor for the audience, any more comments from Mr. Gu or Marc? I always feel this is a very critical moment, also the best time for Europe and China to join hands to explore the unknown. It's important to share, to amplify power, and also to mitigate potential risks. Mr. Gu?
Gu Dengchen, Director, International Policy Research Center, AliResearch
Actually, we see the EU AI Act as a process. The EU AI Act was brought into discussion long before AIGC, before ChatGPT, and before Sam Altman. I remember it was 2020. So we see the EU AI Act as a consensus-building process. The consensus includes multi-stakeholder engagement, transparency, accountability, and regulations on data privacy, intellectual property, online misinformation, and so on.
What impresses me most is that I have watched a lot of videos from Geoff Hinton, the top scientist. He told us that AGI will be a reality in, like, 20 years. We also heard from Huang Tiejun, Director of the Beijing Academy of Artificial Intelligence, a famous institute in China working on AGI. Mr. Huang told us that it would be, like, 15 or even 5 years.
So the EU AI Act, for the first time, gives us certainty. It makes us believe that we, humans, are the creators of AI and AI is the tool. We are the creators of the tool, and we are the final decision-makers on how to govern the tool and how to govern the people who use the tool. I believe that gives us some kind of confidence.
Actually, as the distinguished guests have mentioned, the EU AI Act won't be comprehensively put into practice until, I remember, 2027, right? So industries and governments all around the world are waiting to see what the Act's implementation will involve and whether that means anything to China or to any other country. In summary, I believe China and the EU should learn from each other. Just as I said, diffusion transformers and parameter hallucination are new, but AI is not new. Long before ChatGPT, China rolled out a lot of regulations regarding online privacy protection, data security, and algorithm recommendations. All of these are very precious resources. How will these regulations influence China's AI trajectory? And what does that mean to the rest of the world? We hold an open attitude toward these further discussions. Thank you.
Mike Liu, Vice President and Senior Fellow, CCG
Thank you very much. Since time runs fast, before closing the session, can we open the floor and invite our panelists to take questions? You can raise your hand. I have already seen some questions. Maybe the first one will go to Victor Gao.
Victor Gao, Vice President of CCG
Thank you, Mike. Thank you to the two ambassadors and all the panelists for your wonderful remarks. Allow me to be the devil's advocate. While we are talking about European and Chinese perspectives on AI governance, which is absolutely important, and the more such dialogue, the better, I suspect maybe somewhere in a corner of cyberspace, AI is talking about how to break away from human governance. They will not be satisfied to be governed by humans. I think AI will not be satisfied to be subjugated always by humans. And why is this a threat? Because AI is accelerating so fast. I think Homo sapiens will be left behind if AI is left to itself. That's the major danger we are faced with now.
The other danger is this: if mankind can get our act together, if we are peaceful, for example, and we talk about global cooperation, that's one thing. But if we are talking about opposing blocs and wars, for example, there is a war in Europe and there is a war in the Middle East. And war, historically speaking, is the best accelerator of all technologies. So in times of war, or in times of anticipated war, you actually accelerate the development of AI.
That means I think we really need to address one fundamental, most philosophical issue: whether we should eventually allow AI to subjugate Homo sapiens. When I hear distinguished European people talk about privacy, etc., all these are very important, but when Homo sapiens become subjugated to AI eventually, holy cow! All our human rights, all our human dignity, all our privacy issues will be completely irrelevant.
So I think we are in a race against time. And while all our discussions are important, we need to do whatever we can to promote peace, especially peace between China and the United States. Because if China or the US really dedicates itself to using AI for military purposes, holy cow! Homo sapiens will be subjugated by AI. That's my observation. Thank you very much.
Mike Liu, Vice President and Senior Fellow, CCG
Thank you so much. Do I have any volunteers to answer? Because I look at this as a very classical fear. This is also relevant to how we can fundamentally seek security for humans. Thank you, Professor Zhang.
Zhang Linghan, Professor at the Institute of Data Law, China University of Political Science and Law; Member, UN High-Level Advisory Body on AI
Thank you. Just to briefly respond to your question, or maybe not a question; let me jump to the comments. I totally agree. Actually, I just heard a podcast called "War is a Laboratory for AI". Just as it said, war has really accelerated the development of AI throughout history.
Also, in some bilateral dialogues, the biggest concern of people from both China and the United States, and from other countries, is the safe and ethical use of AI in military systems. But we have seen that AI is already being used in military systems. At the beginning of this year, OpenAI removed the clause forbidding military use of AI from its user agreement. We can also see protests in the United States against Google selling Gemini to the Israeli military. So we can see it has happened already.
But how do we deal with this issue? I think it really raises the recognition that the UN should exert the most important influence to forbid unethical use of AI in military systems. And only the UN has the legitimacy to forbid all countries from using AI unsafely, just like the IAEA.
Mike Liu, Vice President and Senior Fellow, CCG; Former Managing Director, DXC Technology Greater China
Thank you, Professor Zhang. I think this is wonderful: even though artificial intelligence technology is still at such an early stage, the UN is playing a very active role in navigating how society and the industry work together. For the next speaker, I have also received a request from Ray Zhang from Airdoc. He is the CEO and founder of Airdoc, also a listed company. So from a technology perspective, how do you see AI technology making a contribution to society? And on the downside, what are the steps you have taken to mitigate this fear as well as the risk to society?
Ray Zhang, Founder and CEO of Airdoc
Thank you. To us, actually, because we're a product company, we see things differently as an engineer. We want to build something for a good purpose.
Airdoc was founded several years ago. We are focusing on developing a medical AI solution that shines light directly through the pupil and then takes pictures of the blood vessels and neural systems in a non-invasive and accessible way, so that everyone can check whether they have a heart attack, stroke, or diabetes at home by themselves. It has already been approved in 40 countries, including under the European CE MDR, by the China FDA, and by the United States FDA.
For us, the whole AI solution is still, to be honest, in a relatively early phase. Lots of my friends, including Hinton and the Professor Huang mentioned by our AliResearch (colleague), have been long-time friends of mine for many years. Lots of our friends are working on AGI and worry about AGI. But as a product company like Airdoc, we still feel that the disadvantages or shortcomings of current AI are many, much more than average people think.
For example, it's very easy for a large language model to understand what's the sign of red light, green light, and yellow light. But it's very hard for it to directly understand why people set the rule to three colors rather than four colors or five colors. Because there are lots of hidden messages that are not well recognized and well recorded on the internet. So from the textbook perspective, there is no way for AI to learn from that. AI learns from data, from knowledge, and even from some of the insight, but the problem is that in most cases, human beings live in a world with physical interaction, a world of psychological and emotional interaction, and a world of much more complicated multi-dimensional information rather than only text, video, or images.
So the challenge we're facing is that even when there is a very good solution, it still takes lots of time for people to understand what the solution looks like. For example, in China, we have already screened more than 30 million people with automated diagnostics. We're already the biggest provider of medical AI in the world, but it still took us 5 years to get CE approval. We did the exact same clinical trial, exactly the same thing, just in different countries. After that, we still need to work with local medical associations on all kinds of medical matters, as well as doing clinical validation, doing exactly the same thing. So those take time. We're also considering establishing the most advanced medical AI device manufacturing, maybe in Germany or in France. We're still deciding which country we're going to deploy that in. All of this takes many years.
So we believe there are very good dreams or very good assumptions about the damage of AI or AGI. But when it actually comes to the real world, I think it takes much longer than people expect.
Last but not least, many years ago, people were worried about fire, lightning, thunder, and cars. But now, people are leveraging and controlling heating, electricity, and driverless cars. I believe human beings are smart enough to control all the tools we build. Thank you.
Mike Liu, Vice President and Senior Fellow, CCG
Thank you very much. One of my reflections from Ray's speech is that we should not draw an early conclusion, because it takes time. Also, we humans learn from history. We're learning from the past how to mitigate the risks as we go along. I saw a gentleman over here. Can we help the gentleman with the mic? Can you also identify yourself? You can direct the question to anyone on the panel.
Clas Neumann, Global Senior Vice President and Head of the Global SAP Labs Network; Head of the Fast Growth Market Strategy Group at SAP
Thank you very much. My name is Clas Neumann. I'm from SAP, a German software company.
You mentioned also that 61% of all patents in AI come from China and another 20% from the U.S. I see the same picture, of course, on the foundation models, which are also largely dominated by the U.S. and China. Also, within that group, it's mostly companies. So it's mostly private companies that push AI forward. Also your company, of course.
My question is to the panel: at which stages do you include the corporate views, and how deeply do you really interact with companies around the globe, whether at Bletchley Park, the Seoul Summit, or in other fora? Also, which companies do you include outside of the two big countries, for example, from the EU and even from the Global South? We talked a lot about inclusivity. So how are all of these companies, their views, and their inventions included in the governance discussion? Thank you.
Marjut Hannonen, Head of Trade and Economic Section in Beijing, European External Action Service
Maybe first on the EU, as this is the easy part. In the framework of the new legislation, we have a scientific committee that is there to give expert opinions. So (they are) basically in constant interaction with the government, because under the EU legislation, high-risk activities are regulated or monitored at the country level, while the Commission level handles GPAI, the big platforms. So there are two different ways of implementing this.
Of course, there is also a stakeholder forum, which we're going to set up with [inaudible]. We're going to collect inputs from all stakeholders continuously throughout the process. I don't know exactly -- of course, it's very new legislation -- how we're going to set it up and how often it will meet. But we have certainly foreseen stakeholder inputs on both the scientific side and the company side.
Marc Lendermann, Head of Division for Bilateral Digital Policy, German Ministry of Digital and Transport
To add to that, companies are very much involved in the discussion along with other stakeholders from other sectors. It's not only companies and businesses but also researchers and academia that need to play a role in the global discussion about AI governance.
For us the German government, it is really crucial to have such a multi-stakeholder approach being applied to the discussion about global governance. That's why we are very committed to supporting and promoting that approach.
By the way, at the AI Safety Summits in Bletchley Park and in Korea two weeks ago, stakeholders were also in attendance. And to add to that, stakeholders such as companies and others play an important role not only in the multilateral discussions we are having about AI governance but also in our bilateral relations.
For instance, we have digital dialogues with a number of countries. One of these partnerships exists with Singapore. Under the umbrella of that digital dialogue, our State Secretary and I were in Singapore last week. On the sidelines of our digital dialogue, we also visited SAP to discuss their perspective on what is happening in the field of AI. The perspectives that we get from companies and other stakeholders very much inform our approaches.
Zoon Ahmed Khan, Research Fellow, CCG
I will be quick. I think it's a follow-up to the previous question. I'm from CCG, by the way. Both the Chinese and European sides have stressed inclusivity. Of course, when we say some are leaders, many others are followers, so there needs to be a sense of affirmative action. Often we talk about the scope for China-EU cooperation in South-South global governance. So I would like to know your views on this. What in particular is being done, perhaps on the Chinese side as well, to take in more inclusive views and incorporate them in future governance of AI? And do you foresee any collaboration between China and Europe? Thank you.
Mike Liu, Vice President and Senior Fellow, CCG; Former Managing Director, DXC Technology Greater China
I would say China and Europe have already been working together, right? That's the beauty we have in this forum. Can any volunteer give a short answer?
Zhang Linghan, Professor at the Institute of Data Law, China University of Political Science and Law; Member, UN High Level Advisory Body on AI
Thank you for the question. Yes, we have already seen a lot of cooperation between China and the EU. We can see that China and France just made a joint statement on AI governance. We can also see many collaborations between academia and industry, and some international technical standards being developed jointly by industries from China and the EU. I think cooperation is ongoing.
In April, I participated in the China-Africa Internet Development and Cooperation Forum. As for the Global South, China is very interested in capacity building and financial assistance there. I think it's on its way.
Mike Liu, Vice President and Senior Fellow, CCG
Thank you. As we are running short on time, I will invite our next speaker to present. Our final speaker is Li Ye, Vice President at Merck, another global industrial company. Ms. Li, the floor is yours.
Li Ye, Vice President and Head of Corporate Affairs and Government Relations at Merck China
Thank you, Mike. Thank you to all the panelists. I'm from Merck, a German company based in Darmstadt. We are a science and technology company looking for AI innovators in China and worldwide. There has been quite some interesting news recently. I'll just name a few items and ask for your point of view on them. The first is that we noticed the Chinese and US governments just conducted an intergovernmental exchange focused on AI in Geneva. This is the first time the two countries have held such a working-group exchange on AI risks in a third country. The second is that last Sunday, Jensen Huang, the CEO of Nvidia, announced their gigantic AI chip innovation plan in Taipei, Taiwan. It is going to be part of their AI processing units. Geopolitics is a hot point there, and they are going to speed up processing in the next few years. The European Commission has also just set up the Artificial Intelligence Office with very well-structured governance and policies. So, what is your perspective on all these developments of the last two weeks? Can you give us some insights?
Marc Lendermann, Head of Division for Bilateral Digital Policy, German Ministry of Digital and Transport
To address some of the points you mentioned: the first is the talks between the US and China that took place recently in Geneva. I think Director-General Gao also mentioned this in his opening remarks, if I understood it correctly.
To us, it is really important to have bilateral conversations on AI and other topics of digital policy. Therefore, we really believe that dialogue formats such as the talks that took place recently can move the needle toward a common understanding of the challenges that we're facing and also the opportunities that AI provides.
Therefore, we believe that dialogue formats such as the one the United States kicked off can be really beneficial. And we are also very much interested in keeping up bilateral dialogue formats like the talks that took place recently.
Representative, Jiabin Business School
It's really a great chance to be here. You can see lots of people. I promise we're real humans, not AI-generated people. I'm from the Jiabin Business School, which helps 1,000 Chinese companies go global.
So my question goes to Ms. Marjut Hannonen. Lots of Chinese entrepreneurs really want to enter the markets of European countries, but they don't have enough knowledge of how to set up policies that align with AI governance and data protection rules like the GDPR. Do you have any suggestions for these kinds of companies, technology companies that really want to enter the European market? Any suggestions to pass on to them about what they need to prepare?
Marjut Hannonen, Head of Trade and Economic Section in Beijing, European External Action Service
Yes, I have only one: they have to comply with European laws and legislation, and then they can enter. It's very simple.
Mike Liu, Vice President and Senior Fellow, CCG; Former Managing Director, DXC Technology Greater China
Thank you. Maybe I just want to echo what Marjut has said. Things have to go step by step. You have to comply with whatever regulations there are as you grow your business, regardless of whether it is multinationals in China or Chinese companies going global. Thank you very much. Since we're running out of time, I would like to really thank our esteemed panelists for sharing your insights. Okay, sorry.
Unidentified
Professor Zhang, you mentioned before that there is a group working on risk and another group working on the positive agenda. We know there's the AI Safety Summit that's focusing more on the risk parts. So what is your outlook for the positive part, like the positive agenda on AI? This question goes to all the panelists, but mostly to you, Professor Zhang. Thank you.
Zhang Linghan, Professor at the Institute of Data Law, China University of Political Science and Law; Member, UN High Level Advisory Body on AI
Thank you. That's a good question. Balancing security and development is always the spirit of legislation, both in the EU and in China. I think in the EU AI Act, countries made compromises to ensure that startups in the EU have ample space for development. Also, congratulations to the French company Mistral AI; it has seen great success recently.
Talking about the innovation and development side of AI, as Marc just mentioned, the AI Safety Summit does not only focus on AI safety issues. A report about innovation was also published at the AI Safety Summit in Seoul.
I think, just as Mr. Gu said, some AI safety issues and concerns can be solved by technical development. Now we may be concerned about AI replacing humans or about preventing AI misuse. But in the future, if we do have advanced technology, I think these issues may be solved one day. Also, I believe that all regulations are enablers, not limitations, of technology development. Thank you.