Summary

In the international competition to develop artificial intelligence, it is crucial for states to strengthen their position as setters of AI standards. China sees AI standardisation as a field in which it could become a maker of norms rather than merely a taker of them.

Global AI standardisation is in a phase of reconstruction. The United States and China are negotiating new bilateral standardisation frameworks, while the significance of existing multilateral arrangements is weakening.

China's approach to AI standardisation is closely tied to the enterprise sector, following an enterprise-led and state-guided model. The state cooperates closely with the private sector in practice, acting as a catalyst in the early phase of the AI standardisation process, as a supporter in its middle phase, and as a supervisor in its later phase.

It is important for European policymakers to strengthen their contextual understanding of China's rapidly changing AI standardisation landscape. This understanding is key to safeguarding Europe's competitiveness, preserving European values, and participating in the debate on global AI governance.

Introduction

In the current era of great power competition, technology plays a pivotal role. In particular, Artificial Intelligence (AI) could influence the balance of power between states due to its transformative potential to increase economic capability, strengthen military and national security capabilities, and secure technological supremacy. If data is the new oil, AI is expected to emerge as the new engine driving the next industrial revolution, given that AI products and services are trained on enormous quantities of data. As a result, countries around the globe have entered an international AI race, aiming to gain a competitive advantage by introducing plans and policies that facilitate the development and deployment of AI technologies.

The AI race encompasses not only AI capabilities, such as physical infrastructure (e.g., computing power and access to data troves) and technical talent, but also standard-setting power. The reason for this is threefold. First, as an established set of norms, guidelines, or specifications, a standard enables compatibility and interoperability within an industry, thereby reducing costs, increasing efficiency, and fostering innovation. Second, although a standard is not initially legally binding, it can be incorporated into legislation or regulation requiring mandatory compliance if a government or regulatory body considers it essential. Compliance with standards signals that products and services meet specific quality and/or technical specifications that are usually required to access certain markets.[1] A country that leads in standard-setting can thus gain a competitive edge in global markets. Finally, via AI standardisation, a country can shape the global AI landscape to advance its own values, technological preferences, and strategic interests. As a result, AI standardisation has become a major battleground in the international AI race, as shown in the official documents issued by major players such as the US, the EU, and China.[2]

For European policymakers, understanding these AI standardisation initiatives is essential for making well-informed AI policies. Against this background, the EU needs nuanced and up-to-date knowledge of the American and Chinese approaches to AI standardisation. The Chinese approach, in particular, is little known in the EU. China has a government that engages in long-term strategic planning and provides strong policy support, and a private sector with enormous talent and market size. A common assumption is that the government commands and private enterprises are subservient, with little option but to follow the government’s dictates. However, this is an oversimplification that can be misleading.

This Briefing Paper analyses China’s approach to AI standardisation, and the roles of government bodies and leading enterprises, based on the latest developments in the Chinese AI standardisation landscape. It notes that the Chinese approach rests on a cooperative relationship between the state and enterprises, which make concerted efforts in versatile ways to advance domestic AI standardisation. This has significant implications for AI standardisation internationally, especially for the EU.

The evolving global landscape of AI standardisation

Technical standardisation used to be about selecting a superior technology. However, in the context of competing over AI, approaches to AI standardisation by various countries have transitioned from this ideal and professional approach into a geopolitical one. States have begun to engage in AI standardisation both domestically and internationally, with standardisation more often seen as yet another field of geoeconomic competition between major states.[3]

For instance, China’s strategic motivation to become a norm-maker in AI standardisation was revealed in the AI Standardisation White Paper (the 2018 version), published by the China Electronics Standardisation Institute (CESI).[4] Since AI is an emerging technology, international standardisation work on it is still in its infancy. The White Paper presents this as a window of opportunity for China to become a leader in the industry, provided that the opportunity is seized and rapid action is taken. Indeed, unlike other international norms and institutions, such as those for economic and financial governance, which have long been established and dominated by Western countries, the global norms and institutions for emerging technologies have yet to be created. This leaves strategic room for China to realise its aspiration to transition from a norm-taker into a norm-maker.[5] The EU is likely to be less willing to adopt norms created by China if they are anticipated to be incompatible with European values, which is likely to lead to friction.

While improving efficiency, AI simultaneously poses various dangers to human society, including but not limited to invasion of privacy, discrimination, misinformation and disinformation, filter bubbles and polarisation of public opinion, as well as a potential arms race in military AI applications. As standards can mitigate such risks, AI standardisation is an urgent priority. Due to the borderless nature of AI products and services, these dangers are supranational issues that require global solutions to AI governance. Yet emerging frictions driven by contrasting national interests increasingly fragment the international AI standardisation landscape. When professionalism is eroded by nationalism, the role of international standardisation organisations such as the International Organisation for Standardisation (ISO) and the International Electrotechnical Commission (IEC) may decline.

The silver lining is that different states are still showing some level of cooperation by keeping the dialogue on global AI governance alive. Although international observers are concerned about a possible bloc rivalry between the G7 and BRICS in this context, led by the US and China respectively, there has been a surprising level of engagement between the two countries recently. For instance, at the Asia-Pacific Economic Cooperation (APEC) Summit in San Francisco in November 2023, both the US and China committed to cooperation on global AI governance. Following the summit, the first intergovernmental dialogue on AI took place in Geneva, Switzerland, where the US and China discussed AI technology risks, global AI governance, and economic and social development.

This is not to say that the US and China would instantly introduce any impactful solutions to the issues raised above. Instead, this bilateral dialogue represents a modicum of cooperation on AI standardisation that could have significant implications for other countries. Meanwhile, the AI Safety Summit, hosted by the UK in November 2023, is another good example of international cooperation. It led to the Bletchley Declaration, the first global pact on tackling frontier AI risks, which was signed by 28 countries across the world, serving as a new multilateral framework for global AI governance.

In short, the global AI standardisation landscape is undergoing a reconstruction phase. Bilateral frameworks between the US and China, as well as new multilateral frameworks, are replacing the existing international standardisation organisations. In this context, China’s role in global AI standardisation may well be greater in the future than it is today, and the EU’s role smaller. Hence, it is useful to gain a better understanding of how AI standardisation works domestically in China, both from the perspective of preserving the EU’s competitiveness and from that of global AI governance.

China’s approach to AI standardisation

Standards are generally developed in one of two ways. De facto standards are developed by one or more private enterprises and supported by their market dominance. De jure standards are established via formal processes and recognised by institutions that have a certain level of authority. The former include, for example, standards from fora, consortia, and industry alliances, while the latter mostly come from ISO, IEC, and national standardisation organisations.

In China, the Standardisation Administration of China (SAC), directly under the State Administration for Market Regulation, is responsible for the de jure national standardisation work. Just like ISO, it functions by setting up a series of technical committees. The national committees working on AI standardisation in China are mapped in Figure 1.

Figure 1. Chinese national technical committees and subcommittees participating in AI standardisation. Source: The National Public Service Platform for Standards Information of China.

In particular, the Technical Committee on Information Technology (TC28) and the Technical Committee on Cybersecurity (TC260) are the main drafters of the AI standards currently in force, such as the Code of practice for data labelling of machine learning and the Basic security requirements for generative artificial intelligence service. Notably, the secretariats of both committees and their subcommittees sit under CESI, where committee meetings are most likely to take place. All AI standards in force are published under the name of CESI, meaning that the actual AI standardisation work in China is very much centred on the efforts made by CESI.

Despite being directly under the Ministry of Industry and Information Technology (MIIT), CESI does not represent the top-down approach to standardisation that is commonly assumed in the EU. The institutions that drafted the national AI standards (most of which were drafted after 2021) are all either academic research institutes or private enterprises. This reflects CESI’s dependence on the private sector. Such dependency is arguably a result of the implementation of the 2017 revision of the National Standardisation Law, to which a new Article was added: “the state encourages enterprises, public organisations and educational and scientific research institutes to carry out or participate in standardisation work”.[6] The Chinese approach to de jure AI standardisation is ostensibly moving away from a government-led approach to an enterprise-led one, since the actual standardisation work is practically outsourced to the private sector.

While this industry-led approach resembles the European one, the Chinese state guides enterprises in drafting standards. For example, one of the technical committees at SAC (TC260) drafted a national standard on generative AI that defines the major security risks of training data and generated content as:

1) Violating core socialist values;

2) Discrimination;

3) Violating commercial laws;

4) Infringing upon the legitimate rights and interests of others; and

5) Failing to meet the security requirements of a specific service type such as medical information services or psychological counselling.[7]

These security requirements represent the first regulations on generative AI globally, serving as criteria for content moderation. They were directly adopted from the Interim Measures for the Administration of Generative AI Services, issued by seven state departments, including the Cyberspace Administration of China (CAC) and the MIIT. The first requirement in particular should draw Europe’s attention, given its emphasis on the 12 core socialist values and their potential clashes with European values.

In addition to drafting national standards in cooperation with the SAC’s technical committees, private enterprises are also standardising the industry in a de facto way. This is facilitated by ‘the national AI team’, created by the Chinese state.[8] As shown in Figure 2, the team consists of 23 private enterprises that are leaders in the field of AI applications. They are responsible for the construction of a national AI Open Innovation Platform (AIOIP), which provides open access to data, toolkits, libraries, frameworks, and computing power for start-ups and small and medium-sized enterprises (SMEs).

Figure 2. Interactions between the National AI Team and SMEs.

Accordingly, SMEs are able to access the technical and industrial chains and the financial resources shared by the leading enterprises, and to participate further in the research, development, and diffusion of AI technologies.[9] Ultimately, this process is expected to benefit the real economy. Yet by using the resources of the leaders, SMEs gradually become dependent on them, reinforcing the standard-setting power that the leaders already possess. In other words, the construction of the national AIOIPs serves as a de facto standardisation process for the corresponding application scenario.[10]

While China’s de facto AI standardisation, like de jure standardisation, is largely led by enterprises, the government is not completely out of the loop. In the early stages of building the AIOIPs, the government selects the team members and provides them with access to critical research and development resources, such as space for testing autonomous vehicles (Baidu), public medical data (Tencent), and city infrastructure data for monitoring and upgrading (Alibaba). The state is therefore facilitating the de facto standardisation in the industry.

At a later stage, there is also a risk that the national AI team members become so dominant that they wield power like monopolies in their respective fields. In such cases, the government is expected to intervene with macroeconomic mechanisms such as anti-trust legislation. The Chinese government has already taken precautions, for example, with the 2022 revision of the national Anti-Monopoly Law, which specifically targets anti-competitive behaviour facilitated by technological applications.

To sum up, the state plays different roles at different stages of the standardisation process. At the beginning, the state performs a catalysing role by issuing national policies and plans for both de jure and de facto standardisation. In both cases, the state supports enterprises during the standard-making process. This support includes the establishment of technical committees, access to critical research and development resources, and endorsement of the establishment of AIOIPs. Later, the state also acts as a supervisor, setting limits on the enterprise-led standardisation work and safeguarding the market order through macroeconomic control. In addition to acting as a catalyst, supporter, and supervisor, in the final stage the state is also responsible for adopting and publishing the national standards. The SAC also represents China in ISO, IEC, and other international standardisation organisations, and signs relevant international cooperation agreements on China’s behalf.

This enterprise-led and state-guided approach to AI standardisation is underpinned by the state playing different roles at different stages and by the cooperative relationship between the state and enterprises. It is neither entirely top-down nor bottom-up. Instead, China has established a “community of practice” in which the state, enterprises, as well as other stakeholders from the public sector, such as academics and the media, develop a joint problem-solving mindset to put China at the forefront of AI technologies.[11]

An AI community of practice in the EU?

The EU lags behind the US and China in many aspects of the AI race, including available data resources, presence of major AI enterprises, level of AI adoption, as well as attracting top AI talent.[12] Nonetheless, with a focus on safe, ethical, and responsible AI, the EU is gaining the upper hand in establishing regulatory frameworks for AI, with initiatives such as the General Data Protection Regulation (GDPR) and the European AI Act shaping the global AI governance landscape. Standardisation, as an integral part of the regulatory framework, is becoming a critical arena for the EU to remain competitive.

From the EU’s perspective, China’s approach could put the Union at a competitive disadvantage since Europe does not have its “national team” of AI giants to standardise the European market. Therefore, the EU needs to be more proactive in developing its own “community of practice” to facilitate the process of AI standardisation within the Union. While this is likely to be a tougher goal given the fragmented nature of the EU, the GDPR and the recently enacted AI Act provide horizontal frameworks for general AI standardisation. For standardisation in specific subfields of AI, more vertical approaches are needed. This requires more joint efforts from different European enterprises.

As the AI industry is changing rapidly, Europe needs to improve its understanding of China’s standardisation landscape. This can be achieved, for example, by collecting regularly updated information on China’s AI standardisation (e.g., via the National Public Service Platform for Standards Information) and by funding relevant research. In particular, some of China’s unique socialist values are reflected in its national AI standards. The EU needs to further explore how these are applied to AI services and products in China, and their implications for the European market. Notably, the EU needs to make the same effort to study the US approach, as US national interests are not always in line with European ones.

From a global perspective, the EU should promote the role that international standardisation organisations have traditionally played. Although their influence has started to decline, they still function as the most professional standardisation bodies. At the same time, the EU needs to encourage European enterprises to engage in the Chinese (and American) national standardisation processes. Although this is rare, foreign enterprises do participate in the drafting of China’s national AI standardisation policies. Microsoft and IBM, for instance, both contributed to the 2021 version of the AI Standardisation White Paper. In addition, it is also beneficial to promote knowledge sharing between the technical committees in Europe and their counterparts in China and the US (for example, by establishing expert workshops to which Chinese technical committees are invited to present their work on AI standardisation). This could strengthen the rare and commendable signs of cooperation in the global governance of AI and defuse some potential conflicts in cross-national negotiations.

Conclusions

Many countries are adopting a geopolitical approach to AI standardisation, with an increasing emphasis on national interests. As international AI standardisation is likely to become more centred on a new bilateral governance framework between the US and China, the influence of existing international standardisation organisations is likely to decline. In this context, China is expected to play a bigger role than it does today.

As this Briefing Paper has noted, the AI standardisation processes in China are highly reliant on enterprises, following an enterprise-led and state-guided pattern. This finding challenges the common assumption, particularly in the EU, that standardisation in China is rigidly top-down. In both de jure and de facto standardisation, the state cooperates closely with the private sector.

The US is home to tech giants Google, Amazon, Apple, Meta, and Microsoft, while China has Baidu, Alibaba, and Tencent. The lack of comparable European AI giants risks putting the EU at a competitive disadvantage. As standards can be set as requirements for gaining access to certain markets, the dominant standard-setting power of the US and China means that the EU has more work to do to catch up. Since advancing one’s own interests and values is an essential part of geopolitical competition, falling behind in global AI standardisation could prevent the EU from safeguarding European values.

Despite lagging behind the US and China in AI capabilities, the EU is a leader in developing regulatory frameworks for AI. To maintain its competitiveness, the EU must therefore play to its strengths by being more proactive in the arena of international AI standardisation. This can be achieved by understanding, learning from, and engaging more with US and Chinese counterparts. Further joint efforts are also required to build a European community of practice for AI standardisation between European enterprises and governments.

Endnotes

[1] Gamito, M. C. (2023) The influence of China in AI governance through standardization. Telecommunications Policy, 47(10), 102673. https://doi.org/10.1016/j.telpol.2023.102673.

[2] Standardization Administration of China et al. (2020) National guidelines for the construction of a new generation of artificial intelligence standards system. https://www.gov.cn/zhengce/zhengceku/2020-08/09/5533454/files/bf4f158874434ad096636ba297e3fab3.pdf.

[3] Rühlig, T. (2020) Technical standardization, China and the future international order: a European Perspective. Heinrich Böll Stiftung, Brussels. https://eu.boell.org/en/2020/03/03/technical-standardisation-china-and-future-international-order.

[4] China Electronics Standardization Institute (2018) AI Standardization White Paper (2018 Edition). http://www.cesi.cn/images/editor/20180124/20180124135528742.pdf.

[5] Cheng, J., & Zeng, J. (2023) Shaping AI’s Future? China in Global AI Governance. Journal of Contemporary China, 32(143), 784–810.

[6] National People’s Congress (2017) Standardization Law of the People’s Republic of China. https://www.gov.cn/xinwen/2017-11/05/content_5237328.htm.

[7] TC260 (2024) Basic security requirements for generative artificial intelligence service. https://www.tc260.org.cn/upload/2024-03-01/1709282398070082466.pdf.

[8] Larsen, B. (2019) Drafting China’s National AI Team for Governance. DigiChina. https://digichina.stanford.edu/work/drafting-chinas-national-ai-team-for-governance/.

[9] Ministry of Science and Technology (2019) National Guidelines for the Construction of a New Generation of Artificial Intelligence Open Innovation Platform. https://www.gov.cn/xinwen/2019-08/04/content_5418542.htm.

[10] Zhu, J. & Mattlin, M. (2024) The Chinese AI Innovation Ecosystem: Spurring Innovation or Consolidating Monopolies? ReConnect China Policy Brief 11. https://www.reconnect-china.ugent.be/wp-content/uploads/2024/06/ReConnect-China_Policy-Brief-11_The-Chinese-AI-Innovation-Ecosystem.pdf.

[11] Qiao-Franco, G., & Zhu, R. (2024) China’s Artificial Intelligence Ethics: Policy Development in an Emergent Community of Practice. Journal of Contemporary China, 33(146), 189–205.

[12] Castro, D. et al. (2019) Who Is Winning the AI Race: China, the EU or the United States? Center for Data Innovation. https://www.datainnovation.org/2019/08/who-is-winning-the-ai-race-china-the-eu-or-the-united-states/.

Junhua Zhu