
South Korea implements the first comprehensive artificial intelligence law: what it regulates and who it applies to

The regulation classifies artificial intelligence systems according to their level of risk, imposes transparency obligations, and applies to both local companies and global firms operating in the country.

South Korea has announced that the Basic Act on the Development of Artificial Intelligence (AI), or Basic AI Act, entered into force this Thursday. The law seeks to regulate AI systems in order to guarantee public safety and prevent misuse, making South Korea the first country to implement a comprehensive regulatory framework for this technology.

The Basic AI Act establishes a set of guidelines aimed at AI developers and companies. While it seeks to promote the AI industry and innovation, it also emphasizes the protection of users by preventing disinformation and other potential harms associated with the technology.

It is the first time that comprehensive AI legislation has been fully implemented anywhere, as the law establishes a general framework under which the government will adopt detailed guidelines on AI policy and governance. The announcement was made by South Korea’s Ministry of Science and ICT and reported by the Korean news agency Yonhap.

Specifically, the new law defines a legal framework to strengthen oversight and governance of national AI policies. To this end, it classifies AI models according to their level of risk, designating as “high-risk AI” those systems whose use may affect people’s lives or safety.

This includes models used in areas such as job application screening, loan assessments, and medical advice, as noted by the cited outlet. Companies that deploy such high-risk models in their services must duly inform users of their use.

For example, they will be required to clearly indicate when content has been generated by AI, including through the use of watermarks, in order to prevent confusion that could compromise user safety, particularly in cases involving content such as deepfakes.

In addition to imposing these restrictions, the law also supports the promotion of the AI industry through measures designed to foster research and development, the creation of training datasets, the adoption of AI technologies, the training of specialists, and the construction of data centers.

The regulation also stipulates that global AI companies offering services in South Korea and meeting certain criteria must appoint a local representative to ensure compliance with the Basic AI Act.

These criteria apply to companies with annual global revenues exceeding one trillion won (approximately €582 million), domestic sales surpassing 10 billion won (approximately €5.8 million), or more than one million daily active users in South Korea. As a result, the law currently affects companies such as OpenAI and Google.

Although the Basic AI Act has already entered into force, the South Korean government has granted a one-year grace period to allow companies and institutions to adapt to the new requirements. During this phase, no investigations will be conducted and no financial penalties will be imposed.

Once this period ends, companies found to be in violation of the law may face fines of up to 30 million won (approximately €17,463). In addition, a support task force has been established to advise companies on compliance with the law.

As stated by South Korea’s Second Vice Minister of Science, Ryu Je-myung, in a statement reported by Yonhap, the Basic AI Act “stands at the center of South Korea’s AI industry and the realization of an AI-based society.”

South Korea’s regulatory measures on artificial intelligence run parallel to the European Union’s Artificial Intelligence Act. Although the EU law was the first legislation drafted to regulate AI systems and entered into force in July 2024, it will not become fully binding until 2027 and has therefore not yet been fully implemented.

Source: El Observador