Leading experts hail China's initiative on AI governance, call for global efforts to enhance communication and ensure safety

Editor's Note:

The year 2023 has witnessed important moves taken globally to deal with the rapid development of Artificial Intelligence (AI) and its subsequent impact. On November 15, Chinese President Xi Jinping and US President Joe Biden agreed to establish China-US government talks on AI. Earlier this month, representatives from 28 countries as well as the EU signed the Bletchley Declaration, a "world-first" agreement on AI to tackle the risks of frontier AI models, during the UK's AI Safety Summit. In October, China put forward the Global AI Governance Initiative at the Belt and Road Forum for International Cooperation. Also in October, the United Nations (UN) Secretary-General launched a High-level Advisory Body to explore ways to link various AI governance initiatives.

These moves marked the beginning of the world's united efforts in managing AI development. It will, however, be a long time before the results of such efforts are seen. Why have major countries and international organizations decided on a proactive approach toward AI development at this time? How will the Global AI Governance Initiative lead China to contribute more to the governance and development of AI technology? In which direction will AI likely take human society as a whole, and how should we prepare ourselves for such a reality? The Global Times spoke with three leading Chinese experts in the field of AI, who provided deeper insights into a human-AI integrated world.

Zeng Yi, Professor at the Chinese Academy of Sciences, Director of the Center for Long-term AI and a member of the UN High-level Advisory Body on AI:

In recent years, the application of AI has rapidly entered various sectors of society, especially in 2023, when the development of generative AI technology reached unprecedented heights in terms of user experience and application scope. While promoting social and economic development, it has also brought various risks, such as a direct threat to social trust due to misinformation generated by AI; the challenges that AI may pose to social fairness and employment; and the potential safety risks and negative impact on society following misuse, abuse, and malicious use of the technology.

AI risks and safety issues are global challenges that no country can handle alone. UN Secretary-General António Guterres called for the global governance of AI risks at the UN Security Council meeting on AI in July. In a follow-up step, the UN established a dedicated high-level advisory body, of which I am one of the two Chinese members, to address the global development and governance of AI.

China has proposed the Global AI Governance Initiative. The initiative is not only a positive response to global challenges, but also substantially supports the UN in coordinating the global governance of AI.

The initiative proposes that the development of AI should adhere to the principle of a people-centered approach, with the goal of increasing the wellbeing of humanity. AI draws inspiration from natural intelligence, especially human intelligence, and strives to develop technologies with learning, reasoning, decision-making, and planning capabilities to handle and solve complex problems. It is expected to assist humans and become an empowering technology that promotes social and ecological development. Therefore, the development of AI should aim to enhance common wellbeing while adhering to human values and ethics, and promote the progress of human civilization.

At the same time, the initiative also stresses that we should adhere to the principle of mutual respect, equality, and mutual benefit in AI development. This principle reflects China's commitment to global sustainable development and to building a global community of shared future, sharing development opportunities, platforms, and benefits with the world. Countries with advantages in technological development should share the fruits and experiences of that development from a global perspective while enjoying the opportunities it brings.

AI has tremendous potential value. Currently, generative AI processes and predicts information much faster than humans can. We should harness its benefits, but also pay close attention to the challenges it brings. AI has never been neutral, and without an ethical safety framework, it lacks boundaries. It is crucial to construct a robust risk, safety and security detection framework.

China has always approached the governance of AI with a global perspective and an international outlook, and has been committed to actively contributing its practical experience to global governance in this field. However, successful and effective governance of AI requires joint effort from the international community on a global scale: sharing ideas, opportunities, and experiences, and ensuring safety and security collectively.

Liu Wei, Director of the Human-Machine Interaction and Cognitive Engineering Laboratory at the Beijing University of Posts and Telecommunications:

The main reason the international community has started to pay close attention to the security issues of AI is that AI technology is advancing at an unexpected speed. With the emergence of ChatGPT, people feel that the automation level of AI products is increasing, posing a potential risk of loss of control. People worry that AI may lead to unexpected outcomes, especially when the technology is deployed in critical security sectors. For example, in the military field, if AI is combined with nuclear weapons, there is a high possibility of things getting out of control.

AI technology itself can also generate various complex risks, including economic, social, financial, cultural, and military risks, which can trigger chain reactions across these fields. Many parts of the world are vigorously developing smart cities, smart homes, smart transportation, and smart buildings. However, if AI is weaponized, these smart technologies can become very dangerous.

As one of the pioneers in the field of AI, China has been injecting Eastern wisdom into the global governance system through practical actions, showcasing its vision and responsibility as a major country. The Chinese approach and Eastern wisdom are crucial for the management of future AI development.

The concept of "developing AI for good" may be problematic in the Western context. Technology is an objective existence in the material world, while "goodness" is an inevitable requirement of ethics and morality. Whether an inevitable requirement can be derived from an objective existence is a topic that is still up for debate in the West.

AI is the crystallization of human wisdom. Despite being labeled as "intelligent," it is, at most, an advanced tool created by humans. The development direction and utility of AI technologies are fundamentally determined by human perspectives, horizons, understanding, and means. The essence of "technology for good" is the goodness of "humans." Therefore, in terms of regulation, it is necessary to adhere to the concept of ethics first, establish and improve an ethical accountability mechanism for AI, and clarify the boundaries of responsibilities and rights for AI entities.

In terms of research and development, it is necessary to ensure that advanced technological methods are always under responsible and reliable human control, prevent the generation of data algorithm biases, and make the research and development process controllable and trustworthy. In terms of usage, it is necessary to ensure personal privacy and data security, establish emergency mechanisms and fallback measures in advance, and provide necessary training for users.

The future direction of intelligent development should be "the coordinated development of the human-machine-environment system while operating at high speed." Here, "human" involves managers, designers, manufacturers, marketers, consumers, and maintainers among others; "machine" not only refers to the software and hardware in intelligence equipment, but also involves the mechanisms connecting various links in the industrial chain; "environment" involves the collaborative environment of "government, industry, academia, research, and business" in many fields. This judgment takes into account both the rationality and science of the West and the natural principles and ethics of the East, as well as the complementarity of humans and machines.

Based on the existing mathematical system and the design patterns of software and hardware, it is unlikely that AI will surpass human intelligence. However, it might become possible within a human-machine-environment system in the future. The future of human-machine fusion intelligence lies in symbiosis, combining human wisdom with machine intelligence. The essence of human-machine interaction is coexistence, combining human physiology with machine physics.

Brian Tse, Founder and CEO of Concordia AI, a Beijing-based social enterprise focused on AI safety and governance, and a policy affiliate at the Centre for the Governance of AI, which was founded at the University of Oxford:

The world is entering a golden era of opportunity for international cooperation and the governance of AI.

China is indispensable in global discussions on addressing AI's risks and opportunities. In our recent 150-plus-page report, "State of AI Safety in China," Concordia AI analyzes the landscape of Chinese domestic governance, international governance, technical research, expert views, lab self-governance, and public opinion in addressing frontier AI risks. Based on the report, we believe China can make many invaluable contributions to global AI governance.

On domestic governance, China has moved faster than any other major jurisdiction in regulating recommendation algorithms, deepfakes, and generative AI. As countries seek to develop their own domestic governance frameworks to mitigate AI's worst risks, there is a golden window of opportunity for policymakers to exchange lessons, and it would be immensely beneficial for China to share its regulatory insights with the rest of the world.

On the international stage, China can help empower the voices of countries in the Global South. The proliferation of frontier models poses major dangers, but these cannot be addressed without engaging the Global South. Moreover, global inequality will be exacerbated if the Global South lacks AI solutions to pressing social and environmental challenges. As a first step toward promoting greater equality, Chinese labs have been actively working to incorporate underrepresented languages in large language models. For example, China's National Peng Cheng Laboratory has constructed a diverse corpus dataset and a data quality assessment toolkit, covering Chinese, English, and over 50 languages from countries and regions that are part of the Belt and Road Initiative (BRI).

In technical AI safety, China has cutting-edge research and top talent to offer. China's robustness research has already garnered global recognition. Over the last year, Chinese research groups have also started exploring increasingly sophisticated frontier AI safety issues such as safety evaluations, red-teaming (a practice borrowed from cybersecurity in which testers adversarially probe a system to expose its weaknesses), and scalable oversight.

As China and the US have agreed to establish China-US government talks on AI, we also suggest the two countries explore cooperation in the following areas:

First, China and the US should establish regular channels of communication involving policymakers, leading developers, and experts on AI safety. Currently, frontier AI capabilities are highly concentrated in a few large model research institutions and companies in China and the US. Therefore, China-US dialogue should consider involving the leading developers, gradually establishing mechanisms that serve common interests such as sharing information about emergent risks and safety incidents.

Second, China and the US should jointly strengthen research investment and cooperation in the field of AI safety. Recently, more than 20 top scientists in AI development and governance from countries including China, the US, the UK, Canada, and others in Europe co-authored a position paper and convened in person to propose, among other things, that governments and companies allocate at least one third of AI R&D funding to ensuring the safe and ethical use of AI systems.

Third, China and the US should agree on "bottom lines" for the development of frontier AI. For example, the two countries could jointly commit to banning AI from launching nuclear weapons, requiring human decision-making before any launch.

Fourth, China and the US can learn from each other's AI governance and regulatory programs. Each country and AI lab is feverishly experimenting internally, trying to perfect a cocktail of AI governance policies for its unique situation. However, there are more similarities than many think; for instance, Senators Blumenthal and Hawley's Bipartisan Framework for US AI legislation proposes a national oversight body, licensing requirements, and safety incident reporting requirements to govern AI systems, similar to provisions in an expert draft of China's national AI law.

Fifth, China and the US should jointly explore and promote international frameworks and standardization norms. For example, China's National Information Security Standardization Technical Committee (TC260) has released a standards implementation guide on watermarking generative AI content and has announced plans for drafting further generative AI standards. China's TC260, the US National Institute of Standards and Technology, and international bodies such as the Institute of Electrical and Electronics Engineers can help formulate international standards to prevent the creation and distribution of deceptive AI-generated content.

Yet dialogues and actions between China and the US are only a part of the picture. As we enter an era of rapid progress in AI development, it is imperative that countries around the world transcend their immediate differences to prevent catastrophic risks and harness AI for the betterment of humanity.
