It goes without saying that AI has been transformative. It’s changing the way people learn, work, and interact with each other, with a recent KPMG study suggesting 66 percent of people use AI on a regular basis.
But as the use of AI exploded and companies rushed to jump on the AI bandwagon, regulation of these tools was, in many cases, left behind.
What are the consequences?
The suicides of several adults and children in countries including India, the US, and Belgium have been linked to prolonged interactions with seemingly human chatbots. Others have been driven to suicide after criminals blackmailed them with obscene AI-generated images resembling them. So-called deepfakes pose a threat to democracy, thanks to their potential to influence public opinion. And in the workplace, biases in the algorithmic backbone of AI models used for hiring decisions can produce racist and sexist outcomes.
AI’s demonstrable ability to cause real-world harm has led to numerous calls for stricter regulation – often most vocally from those within the industry. To find out which governments have taken notice, we performed an in-depth analysis of 178 countries. We looked at which had pending or existing AI legislation, what this legislation covered, and whether or not there were any exceptions to the rule (e.g. for police departments).
Each country has been scored against 11 different metrics, including whether AI legislation has been enacted or proposed, whether any additional laws provide further safeguards, whether there's a regulatory body, whether copyright disclosure is required, whether risk levels are differentiated, whether non-compliance is enforced with fines or other punishments, and whether legislation protects against deepfakes and bias, addresses environmental impacts, and covers worker protection and use by minors. Each country is scored out of 14, with a higher score indicating more in-depth, all-encompassing AI legislation and a lower score reflecting various omissions (see the Methodology section for a more detailed breakdown).
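The scoring approach above can be sketched as a simple weighted rubric. Note that the criterion names and point weights below are illustrative assumptions chosen so that 11 criteria sum to 14 points; the study's actual rubric is described in its Methodology section.

```python
# Illustrative sketch of the country-scoring approach described above.
# The weights are assumptions for demonstration, not the study's exact rubric.
CRITERIA = {
    "ai_legislation": 2,          # e.g. enacted (2), proposed (1), none (0)
    "additional_laws": 1,
    "regulatory_body": 1,
    "copyright_disclosure": 1,
    "risk_differentiation": 1,
    "enforcement": 2,             # fines/punishments for non-compliance
    "deepfake_protection": 1,
    "bias_protection": 2,
    "environmental_protection": 1,
    "worker_protection": 1,
    "minor_protection": 1,
}

MAX_SCORE = sum(CRITERIA.values())  # 14, matching the article's scale


def score_country(achieved: dict) -> int:
    """Sum the points a country earns, capping each criterion at its weight."""
    return sum(min(achieved.get(name, 0), weight)
               for name, weight in CRITERIA.items())


# A country meeting every criterion in full scores the maximum.
full_marks = score_country(dict(CRITERIA))
```

A cap per criterion (`min(...)`) keeps partial credit from exceeding a metric's weight, which is one plausible way an 11-metric rubric can total 14 points.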
Key findings:
- Denmark, France, and Greece have the strongest protections overall
- 33 out of 178 countries are bound by comprehensive AI legislation (27 of which are in the EU and must therefore comply with the EU’s AI Act)
- 47 out of 178 countries are working towards implementing national AI legislation
- None of the countries addressed the environmental impact of training and running AI systems
- The US scored 4 out of 14 and has no federal AI legislation (the TRUMP AMERICA AI Act has been proposed)
- Only countries within the EU provide direct workplace protections, such as banning the use of emotion‑recognition AI in employee monitoring
- Prohibited AI systems, such as those designed to manipulate or analyze personality and behavior, can still be used by the military and police in 29 out of the 33 countries with national AI legislation
The best countries for AI legislation
13/14: Denmark, France, and Greece
As they’re part of the EU, these three countries are bound by the AI Act. This Act, which we explore further below, aims to ensure AI systems are safe, transparent, and respectful of fundamental rights while fostering innovation.
The Act:
- Provides some protection for minors (1 point)
- Provides some protections against deepfakes (1 point)
- Requires the creation of a regulatory body (1 point)
- Requires disclosure of copyrighted material used in model training (1 point)
- Differentiates between risk levels (1 point)
- Issues fines or other punishments for non-compliance (1 point)
- Provides protection against bias (1 point)
- Offers some protection against AI’s environmental impact (1 point)
- Provides indirect protection for workers impacted by AI systems (1 point)
In addition to this, each of the aforementioned countries has implemented further legislation designed to protect the public, which has garnered them an additional point each. France has criminalized the distribution of deepfakes, while Denmark is changing its copyright law to protect its citizens’ identities. In 2022, Greece introduced legislation to ensure transparency and fairness when AI was used in HR decisions.
12/14: The rest of the EU
The EU says that its AI Act is “the first-ever comprehensive legal framework on AI worldwide.” The first stage of the Act has been in force since February 2025 and prohibits AI applications deemed to pose an “unacceptable risk.” Banned applications include:
- Cognitive behavioral manipulation of people or specific vulnerable groups, e.g. voice-activated toys that encourage dangerous behavior in children
- Social scoring AI that classifies people based on behavior, socio-economic status, or personal characteristics
- Biometric identification and categorization of people
- Real-time and remote biometric identification systems, e.g. facial recognition in public spaces
The second stage of the Act took effect in August 2025 and requires that generative AI available in the EU, such as ChatGPT, complies with transparency requirements and EU copyright law.
Requirements include “publishing summaries of copyrighted data used for training” and “designing the model to prevent it from generating illegal content.” Perhaps most importantly, AI-generated content must be labelled as such to differentiate it from human-created content.
Most of the Act comes into force from August 2026, but specific obligations on certain high-risk systems, e.g. those embedded in products like medical devices, apply later, from August 2027.
11/14: Kazakhstan
Kazakhstan’s Artificial Intelligence legislation came into force in January 2026. While it broadly aligns with the same principles found within the EU AI Act, it is less detailed overall. This leaves greater room for interpretation, which may prove to be more or less favorable to consumers over time.
The law uses a risk-based approach (1 point), with prohibitions for AI that exploits vulnerabilities, determines emotions without consent, and enables social scoring. AI-generated video, audio, or image content (such as deepfakes) must be labeled as such (1 point), and copyright holders have the right to prohibit the use of their works for training (1 point).
Kazakhstan is one of the few countries that requires AI systems to comply with energy efficiency requirements (though there is no mention of what these requirements are). Workers’ rights are addressed indirectly – for example, users have the right to be informed when AI is used to make decisions regarding them (1 point).
10/14: Vietnam and South Korea
Vietnam’s National Assembly passed the Law on Digital Technology Industry in June 2025. The law, which took effect at the start of 2026, was the country’s first law dedicated to governing artificial intelligence.
It differentiates between risk levels (1 point), imposes labeling requirements on deepfakes (1 point), and gives the Ministry of Science and Technology regulatory powers (1 point). It states that AI systems should respect human values and be non-discriminatory (1 point) and indirectly addresses workers’ rights (1 point).
The law doesn’t mention what happens in cases of non-compliance, nor does it explicitly force developers to acknowledge when copyrighted material is used as training data.
That said, Vietnam has already enacted a new AI law, which becomes effective on March 1, 2026. The Law on Artificial Intelligence, which supersedes the provisions of the Law on Digital Technology Industry, gives an additional nod toward minimizing AI systems’ environmental impact (1 point) and outlines punishments for non-compliance (1 point).
South Korea’s National Assembly passed the Act on the Development of AI and Establishment of Trust in December 2024.
The Act establishes regulatory bodies (1 point) and uses a risk-based approach (1 point). Deepfakes must be labelled as being AI-generated (1 point), which helps to provide protection against deepfakes. One of the Act’s fundamental aims is to protect human rights and dignity, which suggests some protection against bias (1 point) and protections for workers (1 point).
The Ministry of Science and ICT (MSIT) can order the suspension of a service if it poses a threat to safety, and can also issue fines for non-compliance (though these are fairly paltry, equivalent to just over $20,000).
On the whole, the Act is more industry-friendly than the EU’s Act. In addition to its more easily absorbable fines for non-compliance (the EU’s can reach more than $40,000,000 or 7% of global annual turnover), the Act doesn’t require developers to acknowledge copyrighted material used in training data. The Act also makes no mention of the environmental impact caused by AI systems.
Where does the United States rank?
4/14: US
The US has legislation specifically targeting the online distribution of sexual imagery, which includes that created by AI (1 point), as well as a law (COPPA – detailed below) designed to protect children’s privacy when using online services, which includes chatbots (1 point).
In general, though, the current administration is focused more on “global AI dominance” than consumer protection. In January 2025, President Trump issued the Executive Order for Removing Barriers to American Leadership in AI, which effectively removed all the guardrails put in place during the Biden administration. This was followed by the Executive Order for Ensuring a National Policy Framework for Artificial Intelligence in December 2025, which aimed to combat “excessive state regulation.”
Examples of state legislation include a Colorado law banning "algorithmic discrimination" and laws in Montana, South Dakota, New Jersey, and Kentucky regulating synthetic content. Utah's Artificial Intelligence Policy Act requires businesses to disclose when their customers are interacting with AI.
Rather than require companies to navigate “a patchwork of 50 different regulatory regimes,” the previously mentioned Executive Order proposes the creation of a “minimally burdensome national standard.”
In January 2026, Senator Marsha Blackburn introduced the TRUMP AMERICA AI Act. If passed, this will place a duty of care on AI developers to prevent and mitigate foreseeable harm to users, and require “companies providing AI chatbot and companion services to protect kids.” AI companies will also face detailed transparency reporting requirements and bias audits for high-risk systems. Some commentators have noted that the proposed Act is far from being a “light-touch framework.”
Are there loopholes for certain sectors when it comes to AI legislation?
While it’s easy to assume that AI legislation applies equally to public and private organizations, this isn’t generally the case. Exceptions are most typically made for AI systems used by the military, the police, or in healthcare.
Countries with AI legislation exemptions for the military
Of the countries with national AI legislation, 29 of the 33 have caveats for military use. However, as we’ll see shortly, this doesn’t mean that the remaining countries are actively controlling the use of AI within the military. Rather, they simply don’t prohibit the types of AI systems that might interest the military.
Examples of laws with caveats include:
- EU: Here, otherwise prohibited systems are allowed for national defense purposes.
- South Korea: The country’s AI Basic Act doesn’t apply to AI that is developed and used “solely for purposes of national defense or national security.”
- Vietnam: Similar exclusions exist as in South Korea.
In other countries with AI laws, such prohibitions may be absent, negating the need for any caveats. For example:
- El Salvador: The government has previously stated to the UN that autonomous weapons systems “whose primary objective is the identification of human targets, whether to cause damage or loss of life” should be prohibited. However, its national law makes no mention of them. Assemblywoman Claudia Ortiz pointed out that only one of the law’s provisions addresses the ethical use of AI.
- China: Its current AI laws tend to focus on regulating synthetically generated content and generative AI. According to Oxford Analytics, President Xi Jinping has called for the “accelerated development of unmanned, intelligent combat capabilities.” The country has already produced a series of unmanned aerial vehicles and an unmanned missile boat.
Countries with AI legislation exemptions for the police
When it comes to police use of AI, it's the same story as with the military: 29 of the 33 countries have exemptions for police use written into their laws. Police in the remaining four don't need exemptions because AI systems such as real-time facial recognition are omitted from their legislation.
- EU: While some systems are prohibited, police are permitted to use real-time remote biometric identification (RBI) in publicly accessible spaces for several reasons, including searching for missing persons, preventing a substantial and imminent threat to life or a foreseeable terrorist attack, and identifying suspects in serious crimes. In urgent cases, the systems can be deployed without authorization, provided that authorization is requested within 24 hours; non-urgent cases require prior authorization. Police forces must also register the system with the EU and complete a fundamental rights impact assessment.
- Vietnam: Its AI law doesn’t apply when AI is used for security. In the country’s capital, Hanoi, hundreds of cameras forming part of a real-time facial recognition system are being rolled out. The Director of the Hanoi Department of Public Security says that the cameras can automatically identify wanted individuals and alert the command centres when suspects are detected.
- South Korea: Similar exclusions permit the controversial roll-out of real-time facial recognition systems in public spaces. Examples include the Ministry of Justice’s attempt to screen suspicious activities of air travellers using real-time facial recognition systems in airports, and the use of AI-enhanced cameras for contact tracing during the COVID-19 pandemic.
- Kazakhstan: It forbids AI applications that evaluate or profile individuals based on sensitive attributes. Nevertheless, there have been reports that the country’s AI-powered facial recognition systems are being used to monitor activists and dissidents. In 2025, the country’s president made it illegal to wear face coverings in public places that would prevent easy facial identification.
- China: Meaningful limits on law enforcement AI use haven’t been introduced. AI has been used in the country’s criminal judicial system since 2006. AI systems help with predictive policing, detecting suspicious behavior, as well as facial recognition and surveillance. Individuals can be flagged as “suspicious” for anything from using a VPN to having an unusual amount of fuel in a vehicle. In 2021, AI camera systems designed to reveal a person’s emotional state were allegedly forcibly tested on Uyghurs living in Xinjiang.
- El Salvador: A list of unacceptable AI practices is omitted from its law. The country’s current government has rolled out surveillance cameras with facial recognition capabilities, though it’s unclear whether the National Artificial Intelligence Agency (ANIA) has issued the “technical-security criteria” that would establish the rules around its use.
Countries with AI legislation exemptions for healthcare purposes
Exceptions to AI legislation exist for some medical uses in 27 out of 33 countries (i.e. those countries covered by the EU AI Act). According to a study published in NPJ Digital Medicine, prohibitions on manipulative and exploitative practices “do not affect lawful practices in the context of medical treatment, such as psychological treatment for mental illness or physical rehabilitation.” Facial and emotion-recognition systems may also be permitted for medical reasons.
Which countries will introduce national AI legislation next?
According to our research, 47 countries have AI legislation in the works, though none are quite as wide-ranging as the AI Act.
The Council of Europe’s Framework Convention on Artificial Intelligence aims to ensure that AI systems are developed and utilized in ways that respect human rights, democracy, and the rule of law. It sets out fundamental principles for AI, including: human dignity, autonomy, non-discrimination, privacy/data protection, transparency, accountability, and reliability.
The convention currently has 17 signatories globally, which include the US, Canada, the UK, Japan, Uruguay, and Israel. However, it only becomes legally binding when five states have ratified it (including at least three member states of the Council of Europe). At the time of writing, it hasn’t been ratified by any states.
Legislation that is potentially nearing completion includes Brazil’s Bill No. 2,338/2023, which was approved by the Senate in December 2024. However, it still needs to undergo a vote by the House of Representatives before it can receive presidential assent and become law.
The bill places obligations on “AI agents” according to whether their AI systems are classified as excessive or high risk. However, unlike the EU law, developers using content protected by copyright must provide compensation to the respective copyright holders. Furthermore, the holder of copyrighted material used in the development of an AI system can prohibit its use.
Why is legislating AI so important?
Existential threats aside, there’s mounting evidence of widespread damage already occurring. A 2025 survey by the Ada Lovelace Institute found that 67% of the UK public had encountered some form of AI-related harm “at least a few times”.
In our research, we looked at which countries were addressing the following key issues: deepfakes, environmental impact, minors, workplace protection, and copyright infringement.
Deepfakes
One of the most concerning capabilities of AI is its ability to produce seemingly authentic but artificially generated media. This might involve replacing a person’s face in a video or photo with someone else’s, often making it look like they said or did something they didn’t. Events can also be faked. There are countless examples, from the relatively benign to the more troubling. Of particular concern is the ease with which AI can create non-consensual deepfake intimate imagery (sometimes referred to as deepnudes).
According to the Los Angeles Times, the release of OpenAI’s deepfake tool, Sora 2, resulted in a slew of “nonconsensual content including harassment of women and fake celebrity videos” within days of its release. In early 2026, the Internet Watch Foundation (IWF) also said it had found “criminal imagery” of 11- to 13-year-old girls which appeared to have been created using Grok (created by Elon Musk’s firm, xAI).
Some countries have moved relatively quickly to enact legislation that attempts to mitigate the effects of deepfake technology.
- France: Amended Article L. 226-8 of the Penal Code in May 2024 to criminalize the distribution of AI-generated content using someone’s image or voice without consent. Unfortunately, this doesn’t apply if that content is labelled as having been created using AI. Offenders can face up to three years in prison.
- US: Introduced the Take it Down Act, which aims to tackle the non-consensual distribution of intimate images, including both real (“revenge porn”) and AI-generated “deepfake” intimate content. The Act makes it illegal to knowingly publish non-consensual intimate visual depictions (NCII) of a person, with penalties of up to three years in prison.
- Australia: Introduced legislation targeting the use of generative AI to create non-consensual deepfake porn. Attorney-General Kyam Maher says that “Deep fakes are somewhere over ninety per cent non-consensual pornography, with the victims being 99 per cent women and girls.”
- Denmark: Announced in June 2025 that it would change copyright law so that citizens had the right to their own faces, bodies, and voices. The law, which is expected to be passed in 2026, will make it illegal to share deepfakes or other digital imitations of personal characteristics. The Danish culture minister, Jakob Engel-Schmidt, has said that tech platforms that do not respond accordingly to the new law could be subject to “severe fines”. Dutch MPs are also considering similar legislation.
- UK: The Data (Use and Access) Bill was amended in 2025 to criminalize the creation of sexually explicit ‘deepfakes’.
- South Korea: Making and viewing sexually explicit deepfakes is also illegal.
- EU: Content that qualifies as a "deepfake" must be labeled as artificially generated or manipulated. Although the AI Act classifies deepfakes as 'limited risk', malicious uses of deepfakes are deemed an 'unacceptable risk' and prohibited.
- China: The Provisions on the Administration of Deep Synthesis Internet Information Services prohibit the creation of "fake news" using synthetically generated content.
In total, legislation that addresses the problems of deepfakes exists in 35 out of the 178 countries we looked at.
Environmental impact
It’s no secret that a vast amount of computing power is needed to run LLMs like ChatGPT. Researchers estimated that GPT-3 consumed 1,287 megawatt hours of electricity and generated 552 tons of carbon dioxide just to get it ready for launch.
Once LLMs are up and running, they continue to consume energy when responding to queries.
According to an article in the IEEE’s Spectrum publication, all generative AI queries will consume 15 TWh of electricity in 2025. This will increase to 347 TWh by 2030, which is the equivalent output of 44 nuclear reactors.
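The reactor comparison above can be sanity-checked with back-of-the-envelope arithmetic. The reactor capacity (~1 GW) and capacity factor (~90%) below are assumptions typical of large nuclear plants, not figures from the Spectrum article.

```python
# Rough check: how many ~1 GW nuclear reactors does it take
# to generate 347 TWh in a year?
HOURS_PER_YEAR = 8760

reactor_capacity_gw = 1.0   # assumed size of a typical large reactor
capacity_factor = 0.9       # assumed: nuclear plants run ~90% of the time

# Annual output of one reactor, in TWh (GW * hours / 1000)
twh_per_reactor = reactor_capacity_gw * HOURS_PER_YEAR * capacity_factor / 1000

reactors_needed = 347 / twh_per_reactor  # comes out to roughly 44
```

Under these assumptions one reactor produces about 7.9 TWh per year, so 347 TWh does indeed correspond to roughly 44 reactors, consistent with the figure quoted above.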
Water use is another issue. A report from the UK government predicts that the water used by AI (primarily to keep computers cool in data centers) will drive global water usage from 1.1 billion cubic meters in 2025 to 6.6 billion cubic meters by 2027. This, says the report, "is equivalent to more than half of the UK's total water usage."
The flipside of all this is AI’s potential to make energy savings by increasing efficiency in areas such as transport, infrastructure management, and agriculture. However, evidence of these benefits is harder to obtain, especially since many of the gains are expected only in the future and remain uncertain.
In the meantime, environmental experts are becoming increasingly concerned. “There is still much we don’t know about the environmental impact of AI but some of the data we do have is concerning,” says Golestan (Sally) Radwan, the Chief Digital Officer of the United Nations Environment Programme (UNEP). She suggests that governments need to make sure that the net effect of AI on the planet is positive before deploying the technology at scale.
The EU’s AI Act, which is the best of the legislation in terms of the environment, is itself woefully short on describing environmental protections. As a study from Cambridge University noted, “the environment is currently not part of the risk-categorisation system and is mentioned only incidentally throughout the AI Act.”
Annex XI states that developers of general-purpose AI models must provide the “known or estimated energy consumption of the model.” No mention is made of an upper consumption limit, nor of the many other forms of AI beyond general-purpose models.
The Act puts the onus on standardization bodies to create standards that include “reporting and documentation processes to improve AI systems’ resource performance.” Critics point out that the environmental impact of AI systems is more than just the consumption of resources, and that the Act lacks binding caps or targets for emissions and waste.
Beyond the EU, only Vietnam and Kazakhstan mention the environment. Vietnam’s AI law makes reference to “protecting the environment” but doesn’t specify how it will limit the ecological impact of AI systems. Kazakhstan’s law states that AI systems must adhere to energy efficiency requirements and limit any negative impacts on the environment.
Minors
AI systems impact children in numerous ways, few of which they have any control over. In educational environments, AI can potentially be used to evaluate learning, behaviour, and even emotion.
A total of 40 out of 178 countries have legislation designed to mitigate harmful effects stemming from children's use of AI.
- EU: The AI Act classifies AI used in educational environments as high-risk, placing it under strict obligations for safety, transparency, and oversight. AI systems that exploit the vulnerabilities of people due to their age, with the intent of persuading them to engage in unwanted behaviours, are categorized as posing an unacceptable risk and banned. Emotion-inferring systems are also prohibited. By contrast, chatbots are mostly designated as limited risk. That's not to say there aren't any obligations: the systems must inform users they are talking to AI, and ensure interaction is safe and transparent.
- UK: The Online Safety Act is directly applicable to generative AI and AI-fuelled chatbots. Section 12 requires service providers to use proportionate systems and processes, e.g. age verification, to prevent children of any age from accessing “pornographic content and content that encourages, promotes, or provides instructions for self harm, eating disorders, or suicide”.
- Australia: Its Online Safety Act details Basic Online Safety Expectations, requiring chatbot providers to take reasonable steps to keep Australians safe.
- Italy: In late 2025, Italy became the first EU country to approve a comprehensive AI law. In addition to aligning with the EU Act (which itself requires member states to prohibit AI systems that exploit vulnerabilities of specific groups, such as children), Italy’s law stipulates that children under the age of 14 need parental consent to access AI.
- Vietnam & Kazakhstan: Here, legislation echoes the EU AI Act in that it expressly forbids the exploitation of children’s vulnerabilities to encourage them to harm themselves or others.
- China: Article 10 of the Interim Measures for the Management of Generative AI Services (2023) says that developers should clearly specify the intended group of users, and adopt effective measures to prevent minors from excessively relying on or becoming addicted to generative AI services.
- US: Some protections for children are provided via COPPA (Children’s Online Privacy Protection Act). Enforced by the Federal Trade Commission (FTC), the law is designed to protect the privacy of children under 13 when they use websites, apps, or online services. In late 2025, the FTC ordered seven companies providing AI-based chatbots to provide information on how they “measure, test, and monitor potentially negative impacts” on children and teens.
Workplace protection
AI is being used extensively in workplace environments, from sifting through candidates’ applications to monitoring employees and even issuing dismissals. There’s evidence that many systems have ingrained biases stemming from prejudiced training data.
The EU’s AI Act directly addresses the use of AI in the workplace, though critics note that there are no protections in the event of unjustified dismissal.
Employers are required to inform workers' representatives and affected workers that they will be subject to an AI system prior to putting such a system in place (Article 26(7)). Article 86 gives individuals subject to a decision based on a high-risk system's output the right to a clear and meaningful explanation of the role of the AI system in the decision-making procedure.
Employers are banned from using biometric categorization that could, for example, infer whether someone was a member of a trade union. They are also banned from using emotion recognition systems, unless they are put in place for medical or safety reasons.
While social scoring systems are generally prohibited, the Center for Democracy and Technology notes that their use in worker evaluations “is not de facto prohibited.”
In Greece, workers already receive protections via Law 4961/2022. This requires employers to inform employees or job applicants before using AI systems that influence HR decisions. Systems are required to uphold the principle of equal treatment and anti-discrimination in employment.
Outside the EU, South Korea provides worker protections via its AI Basic Act. This categorizes AI used in hiring as high impact, and requires employers to give prior notice before using it. The results from the deployed AI system should be explainable. What isn’t clear is whether the use of AI in employee performance evaluation and monitoring will be categorized as high-impact AI.
Other countries only address workers’ rights indirectly. For example, while Vietnam emphasizes that AI should serve people and not replace them, there are no workplace-specific rights, such as a requirement for employer consultation before deploying AI that affects workers. Instead, workers must rely on the protections tied to the various risk categories.
In Bahrain, there are prohibitions on AI decisions made without human oversight, which could arguably help lessen the use of AI in the recruitment process. Prospective employees might also benefit from the measures against the discriminatory misuse of AI.
Copyright infringement
A significant challenge for any AI legislation is the issue of copyright; specifically, the use of copyrighted material to train AI models. After all, an AI model's output is essentially a mish-mash of what it's been fed. Many publishers, artists, and content creators consider generative AI to be plagiarism because their work supplies the AI's training material yet they see no benefit in return. AI companies often use copyrighted materials without the knowledge or consent of copyright owners. A further issue is that these models can produce works that accurately mimic the style of human creators, arguably diminishing the value of the original source material.
Hayao Miyazaki, the founder of the Studio Ghibli animation studio, famously said in a 2016 meeting where an AI animation demo was shown, that he was “utterly disgusted” and “would never wish to incorporate this technology into my work at all.”
Nine years later, ChatGPT was able to transform any uploaded images so that they closely resembled something created by Studio Ghibli. The Content Overseas Distribution Association (CODA), a trade body with Studio Ghibli as a member, has since officially requested that OpenAI discontinue the use of Japanese copyrighted materials when training its AI systems.
In the US alone, there are now approximately 60 ongoing lawsuits involving creators and copyright-holders suing AI companies. Cases have also been heard in the UK and Germany, though the outcomes were different.
Copyright concerns are impacting creative industries across the board, from film and music, through to art and literature. In essence, these industries are calling for more transparency over when copyrighted material is used, more control over its use, and appropriate remuneration when it is used.
Legislation that helps support the creative industries while also encouraging AI development does exist, though its development is typically fraught.
Following heavy lobbying and protest from Brazil’s creative industries, the country’s nascent AI Act (Bill No. 2338/2023) will only allow developers to use copyrighted material to train AI models if the material was obtained for non-profit purposes.
In the UK, the House of Lords and the House of Commons were at loggerheads over government proposals that would allow tech companies to use copyrighted material when training their models. The Lords were pushing for more transparency and better protections for human creators.
The debate over the Data (Use and Access) Bill was settled in June 2025, with an agreement that the Government would, within nine months of the Act being passed, produce a series of reports and assessments on the economic impact of four policy options (one of which is to leave copyright law unchanged), and propose methods that could be used to control and regulate the use of copyrighted material. In effect, the agreement gives both sides a break from a difficult conversation.
The EU’s AI Act contains two copyright-related provisions: Article 53(1)(c) and (d). The first requires general-purpose AI providers to put in place a policy to comply with EU copyright law, including honoring rights-holders who opt out of text and data mining. The second requires those providers to publish a sufficiently detailed summary of the content used for training.
South Korea is also considering AI copyright legislation. The proposals are much like the country’s approach to AI in general – erring on the side of developers. If passed, the legislation would, under specific conditions, allow AI models to train on copyrighted works without explicit permission from copyright holders.
Vietnam has also taken a more permissive approach. Clause 5, Article 7 allows organizations and individuals to use legally published, publicly accessible documents and data to research, train, and develop AI systems.
However, that permission does come with conditions. It must not involve copying, distributing, communicating, publishing, creating derivative works from, or commercially exploiting the original works or data.
In addition to calls for the protection of existing works, there’s a concurrent demand for copyright to be given to AI output that has involved significant human input. For example, Italy amended its copyright law in 2025 to incorporate such works provided that there is “substantial human intellectual contribution.”
What problems arise when trying to regulate AI?
To get an idea of what the world might look like without any global AI regulation, we asked ChatGPT for a forecast. Its “plausible timeline of an unregulated AI future” seems particularly bleak:
0-5 years: A surge in innovation and a concurrent surge in disinformation. Companies race to dominate the market and safety is deprioritized for speed.
5-10 years: A handful of corporations and states dominate AI. Untested AI systems in healthcare, transport, or finance cause accidents or collapses. Elections and public debates are destabilized by synthetic media.
10-20 years: Mass unemployment as superhuman AI systems surpass human experts in many domains. Autonomous military AI is deployed widely without treaties and conflicts escalate quickly. Poorly aligned AI systems start optimizing for unintended goals, possibly overriding human input in critical systems. A late scramble for regulation occurs, but it’s “too little, too late.”
Surprisingly, this dystopian vision pales in comparison to some of the warnings issued by those within the industry. A statement published on the webpage of the Center for AI Safety asserts that: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Its industry signatories include Sam Altman (chief executive of OpenAI) and Demis Hassabis (chief executive of Google DeepMind).
Public sentiment also favors more control. A survey by the Ada Lovelace Institute suggests that the majority of the UK public (72%) would prefer increased regulation of AI systems.
The problem is that AI is big business, and many governments are struggling to strike the right balance between protecting their citizens and encouraging innovation.
In Canada, the Artificial Intelligence and Data Act (AIDA) was introduced in 2022 as a proposed federal law aimed at regulating the responsible design, development, and deployment of AI systems in the private sector. Although it passed its first reading, it was declared effectively dead in 2025. Tabled amendments were never properly debated, and key areas (such as government use of AI) were excluded.
Argentina has also faced setbacks in getting AI legislation through parliament. Bill 2504-D-2023 regarding the Regulation and Use of AI expired in 2025, and Bill 747-S-2023 for the Development and Use of AI Systems in Argentina has stalled.
In the EU, observers suggest that intense lobbying could result in companies being given additional leeway when breaching rules on the highest-risk uses of AI.
Methodology
We looked at the proposed and existing AI legislation for 178 countries. Our primary focus was on dedicated AI and digital-focused legislation as opposed to amendments to existing laws. In some cases, pre-AI legislation may contain provisions that could be interpreted as being applicable to AI systems, but, for clarity, these haven’t been included.
For our scoring (which was out of a total of 14), we gave:
- 2 points for countries where AI legislation had been enacted, and 1 point for proposed AI legislation. For a clearer comparison between countries, we looked at national AI legislation rather than that issued at state or county level.
- 1 point for any supplementary legislation relevant to AI systems.
We scored the composition of any legislation as follows:
- 1 point if it offered protections against deepfakes.
- 1 point if it provided protections for children.
- 1 point if it required the establishment of a regulatory body.
- 1 point if it required the disclosure of any copyrighted materials used to train AI models.
- 1 point if it differentiated between the relative risks of different AI systems.
- 1 point if it stipulated any fines or punishments for non-compliance.
- 1 point if it required AI systems to be non-biased.
- 2 points if it provided comprehensive environmental protections, and 1 point if environmental considerations were mentioned but not fleshed out.
- 2 points if worker protections were directly addressed, and 1 point if they were indirectly addressed.
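As an illustration, the rubric above can be sketched as a simple scoring function. This is our own sketch, not the study's actual code, and all field names are hypothetical:

```python
# Hypothetical sketch of the 14-point scoring rubric described above.
# Field names are illustrative, not taken from the study's dataset.

def score_country(c: dict) -> int:
    score = 0
    # Dedicated AI legislation: 2 points if enacted, 1 if only proposed.
    if c.get("ai_law_enacted"):
        score += 2
    elif c.get("ai_law_proposed"):
        score += 1
    # 1 point for any supplementary legislation relevant to AI systems.
    score += 1 if c.get("supplementary_law") else 0
    # 1 point each for these provisions within the legislation itself.
    for flag in ("deepfake_protection", "child_protection", "regulatory_body",
                 "copyright_disclosure", "risk_differentiation",
                 "noncompliance_penalties", "bias_requirements"):
        score += 1 if c.get(flag) else 0
    # Environmental protections: 2 if comprehensive, 1 if merely mentioned.
    score += {"comprehensive": 2, "mentioned": 1}.get(c.get("environment"), 0)
    # Worker protections: 2 if directly addressed, 1 if indirectly.
    score += {"direct": 2, "indirect": 1}.get(c.get("workers"), 0)
    return score  # maximum possible: 14

# A hypothetical country with enacted AI law, some protections,
# only a mention of the environment, and direct worker protections.
example = {
    "ai_law_enacted": True, "supplementary_law": True,
    "deepfake_protection": True, "regulatory_body": True,
    "risk_differentiation": True, "environment": "mentioned",
    "workers": "direct",
}
print(score_country(example))  # → 9
```

A country ticking every box (enacted legislation, all seven provisions, comprehensive environmental protections, and direct worker protections) would reach the maximum of 14.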