South Korea has enacted the Act on the Development of Artificial Intelligence and Establishment of Trust (AI Basic Act), establishing one of the world’s first comprehensive national regulatory frameworks for artificial intelligence. The legislation, passed in December 2024, is set to take effect in January 2026 and positions South Korea as the second jurisdiction after the European Union to introduce economy-wide AI rules.
The AI Basic Act is designed to strengthen Korea’s national competitiveness in artificial intelligence while embedding safeguards aimed at ethics, safety, and public trust. It provides the legal foundation for a national AI governance structure, including the creation of a Presidential Council on National Artificial Intelligence Strategy and an AI Safety Institute responsible for overseeing safety and trust-related assessments. The law also mandates wide-ranging government support measures, covering research and development, data infrastructure, standardization, talent training, and assistance for startups and small and medium-sized enterprises.
A central feature of the legislation is the introduction of obligations for businesses that develop or deploy “high-impact” AI systems, high-performance (or “frontier”) AI, and generative AI. Companies operating in these categories may be required to carry out AI risk assessments, implement safety measures, ensure transparency, and designate a local representative in South Korea. Transparency obligations apply in particular to generative AI content that could be mistaken for authentic material, such as deepfakes, which must be clearly labeled as AI-generated.
South Korea’s regulatory approach differs in important respects from that of the European Union. While the EU’s AI Act focuses primarily on application-based risk categories, Korea defines high-performance AI using technical thresholds, such as cumulative training computation. As a result, only a very limited number of advanced models are expected to fall within the scope of the most stringent safety requirements. At present, the government has indicated that no existing domestic or foreign AI models meet these thresholds.
Enforcement under the AI Basic Act is intentionally light-touch. The law does not provide for criminal sanctions and instead emphasizes corrective orders and compliance guidance. Administrative fines, capped at 30 million won (approximately USD 20,000), may be imposed only if corrective measures are ignored. The government also plans to introduce a grace period of at least one year following the law’s entry into force, during which it will prioritize education, consultations, and guidance rather than investigations or penalties.
The Ministry of Science and ICT is currently drafting subordinate regulations, expected to be released in the first half of 2025. Officials have repeatedly emphasized that the objective is to establish a minimum regulatory baseline consistent with emerging global norms, rather than to constrain innovation. Public consultations with industry stakeholders are expected as the implementing rules are finalized.
“This is not about boasting that we are the first in the world,” said Kim Kyeong-man, deputy minister of the office of artificial intelligence policy at the ICT ministry, during a study session with reporters in Seoul on Tuesday. “We’re approaching this from the most basic level of global consensus.” (The Korea Herald)
“The goal is not to stop AI development through regulation,” he said. “It’s to ensure that people can use it with a sense of trust.”
For companies—particularly U.S. technology firms operating in or entering the Korean market—the AI Basic Act represents both an opportunity and a compliance challenge. While the framework seeks to foster growth and innovation, it also signals Korea’s intent to integrate competition, safety, and transparency considerations into the governance of advanced AI systems.