Ethics in AI refers to the principles and guidelines that govern the responsible development and use of artificial intelligence. These principles aim to ensure that AI systems are fair, transparent, and aligned with human values.
Ethical AI is important because it helps prevent biases, discrimination, and unintended consequences in AI systems. By prioritizing ethical considerations, we can build AI that enhances human well-being and maintains public trust.
Regulation in AI involves creating laws and guidelines to ensure that AI technologies are developed and used safely, responsibly, and in ways that protect individual rights and societal interests. Regulations help to standardize practices, promote accountability, and prevent misuse.
AI regulations vary globally, reflecting different priorities and approaches:
European Union:
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive, binding regulatory framework for artificial intelligence. It classifies AI systems by risk level (unacceptable, high, limited, and minimal), bans practices deemed an "unacceptable risk", such as harmful manipulation and social scoring, and sets strict requirements that high-risk systems must meet before they can be placed on the market. The law is being rolled out in phases: prohibitions took effect in February 2025, rules for general-purpose AI models apply from August 2025, and full application becomes mandatory in August 2026.
United States:
The US approach has shifted with political changes. The 2023 Executive Order on Safe, Secure, and Trustworthy AI was revoked in January 2025, and the new administration has prioritized reducing regulatory barriers to foster innovation. The NIST AI Risk Management Framework remains in place as a voluntary standard and the primary reference for AI risk management across the federal and private sectors. Federal discussions on rights protections continue, but no comprehensive national AI law has been enacted to date.
China:
China maintains an intensive regulatory and supervisory regime. The Interim Measures for the Administration of Generative AI Services, in effect since August 2023, require government registration of publicly available AI services. Oversight tightened in 2024: more than 300 AI services and over 1,400 algorithms have been officially registered. The government continues to issue technical standards and AI security governance frameworks, with regulation aligned to strategic and national security objectives as a top priority.
Canada:
Canada's proposed Artificial Intelligence and Data Act (AIDA), part of Bill C-27, was shelved when Parliament was prorogued in January 2025, leaving Canada without a federal AI law that applies to the private sector. The Directive on Automated Decision-Making remains in force but covers only systems used by the federal government. Following the 2025 election, the legislative approach is expected to shift toward innovation and light-touch regulation, with no broad federal law anticipated in the short term.
Japan:
Japan passed its first basic AI law in May 2025: the Act on the Promotion of Research, Development, and Utilization of AI Technologies. This framework law emphasizes innovation, international cooperation, and responsible development, establishing a Strategic Headquarters for AI and guiding principles, but it imposes no direct regulatory obligations or penalties. The government continues to favor self-regulation through updated "soft law" guidelines for businesses and public institutions.
Updated: July 2025