Artificial Intelligence (AI) has emerged as a transformative force in modern society, reshaping how we learn, work, play, and live. As AI innovation accelerates, global governance is struggling to keep pace, balancing the immense opportunities of technology with the urgent need to mitigate significant ethical and societal risks.
From comprehensive European legislation to deregulatory pressures in the United States, here is an overview of the pivotal developments shaping the future of AI governance, deepfakes, copyright, and the labor market.
The Diverging Paths of Global AI Regulation
The world’s two major economic blocs, the European Union (EU) and the United States (US), are adopting fundamentally different philosophies toward AI governance.
The EU AI Act is an extensive regulation that applies across the entire European Union, setting rules for the responsible development and use of AI. The regulation aims to protect the safety, health, and fundamental rights of natural persons. By applying these rules, the EU seeks to ensure that the AI used by businesses and organizations is responsible, and that people can confidently enjoy its benefits.
In contrast, the US approach, exemplified by the America’s AI Action Plan, signals a shift towards a philosophy of AI deregulation. This vision adopts a pro-innovation approach, prioritizing technological growth and development over precautionary oversight. This uncertain legislative environment places increasing pressure on the private sector to self-manage ethical AI risks, responsibility, and governance in the absence of clear legal mandates.
The EU’s Risk-Based Regulatory Framework
The EU AI Act employs a risk-based approach, categorizing AI systems and imposing requirements according to their potential to cause harm.
- Prohibited AI Practices: Certain practices that pose an unacceptable risk to people and society are prohibited, with these rules applying to both providers and deployers since February 2025. Examples of banned systems include those intended to manipulate human behaviour to restrict free choice and systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.
- High-Risk AI Systems: These systems pose risks to health, safety, or fundamental rights, such as the right to privacy or the right not to be discriminated against. High-risk systems must comply with a variety of requirements starting from August 2026. They include AI systems that are safety components of existing products (like a medical device) and AI systems deployed for specific high-risk applications. High-risk application areas range from AI used in education and vocational training (e.g., systems for evaluating learning outcomes) to employment, workers’ management, and law enforcement. Providers of these systems must comply with extensive obligations, including establishing a risk-management system, setting clear data and data governance standards, ensuring human oversight, and managing accuracy, robustness, and cybersecurity.
- General Purpose AI (GPAI) and Generative AI: Models capable of serving multiple purposes are subject to specific information requirements. Providers of these models must prepare technical documentation and detailed information for downstream AI system providers. If a GPAI model presents systemic risks—for example, if it was trained using at least 10²⁵ floating-point operations (FLOPs)—it faces additional obligations, such as implementing model evaluations to map out and mitigate systemic risks.
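To make the systemic-risk threshold concrete, the sketch below checks a model against the 10²⁵ FLOPs figure using the common back-of-the-envelope heuristic that training compute is roughly 6 × parameters × training tokens. The heuristic, the function names, and the example model sizes are illustrative assumptions, not part of the AI Act itself.

```python
# The Act's threshold is stated directly in total training FLOPs.
SYSTEMIC_RISK_THRESHOLD = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough compute estimate for a dense transformer:
    FLOPs ~ 6 * parameters * training tokens (a common heuristic,
    not a legal definition)."""
    return 6 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """Whether the estimate meets the Act's 1e25 FLOPs presumption."""
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD

# Example: a hypothetical 70-billion-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs -> systemic risk presumed: {presumed_systemic_risk(70e9, 15e12)}")
```

Under this rough estimate, such a model would fall just below the threshold, while a model an order of magnitude larger would cross it; the actual determination under the Act rests on the provider's real training compute, not this heuristic.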
Deepfakes and the Copyright Conundrum
The rise of generative AI has created urgent legal and ethical challenges, particularly concerning synthetic media and intellectual property.
Deepfakes—media (images, videos, or audio) generated or edited using AI—leverage machine learning to create increasingly convincing synthetic content. Deepfakes raise concerns about promoting disinformation and hate speech and about interfering with elections. Governments and technology companies are actively seeking methods for detection and mitigation. The EU AI Act imposes specific transparency obligations on generative AI and chatbots: deployers of systems that generate audio, image, or video content must ensure that it is clear the content is artificially generated or manipulated, for example by applying a watermark.
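To illustrate the watermarking idea, here is a deliberately simplified toy: it hides a short "AI" marker in the least significant bit of raw pixel values. This is only a sketch of the concept; production provenance systems use robust, standardized schemes (such as the C2PA content-provenance standard) rather than fragile LSB tricks, and the marker text and function names here are invented for the example.

```python
MARKER = "AI"  # hypothetical marker text for the demo

def embed_marker(pixels: list[int], marker: str = MARKER) -> list[int]:
    """Write the marker's bits (MSB-first per byte) into the lowest
    bit of the first pixels."""
    bits = [(byte >> i) & 1 for byte in marker.encode() for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the marker")
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def read_marker(pixels: list[int], length: int = len(MARKER)) -> str:
    """Recover the marker from the first length*8 pixel LSBs."""
    bits = [p & 1 for p in pixels[: length * 8]]
    data = bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[b : b + 8]))
        for b in range(0, len(bits), 8)
    )
    return data.decode()

pixels = list(range(100, 130))  # stand-in for raw image bytes
tagged = embed_marker(pixels)
print(read_marker(tagged))      # -> AI
```

Note how easily such a mark survives copying but not re-encoding or cropping, which is precisely why real disclosure schemes combine cryptographic metadata with more robust watermarks.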
Regarding ownership, under U.S. law, AI-generated content is generally not protected by copyright, as the U.S. Copyright Office maintains that copyright protection is not granted to works created solely by non-humans. This stance complicates matters, especially when AI models are trained on large volumes of copyrighted human-made work scraped from the internet. Lawsuits, such as those filed by Getty Images against Stability AI and The New York Times against OpenAI and Microsoft, are challenging the limits of fair use in AI training datasets. Globally, the EU AI Act requires GPAI model providers to implement a policy ensuring that the model is trained without infringing the copyrights of natural persons and organizations.
AI’s Impact on Labor and the Need for Transparency
The integration of AI into the economy is having a complex and multifaceted impact on labor markets. AI automates tasks, leading to increased productivity and creating new job opportunities in fields such as data analytics and machine learning. However, AI also raises significant concerns about job displacement (technological unemployment) and skill polarization. Roles associated with lower income and education levels are generally categorized as being at high risk of automation.
To navigate this disruption and ensure the benefits of AI are shared, proactive policy measures, investment in education, and a commitment to ethical implementation are essential.
A crucial component of responsible AI adoption is Explainable AI (XAI), which addresses the issue of AI algorithms acting as “black boxes”—systems whose reasoning cannot be easily explained, even by their designers. XAI is a field of research dedicated to providing humans with intellectual oversight over AI algorithms by uncovering the reasoning behind decisions or predictions. XAI relies on principles like transparency, interpretability, and explainability to achieve this. Making AI decisions transparent is crucial for justifying decisions, tracing potential risks, building trust, and verifying outputs, especially in high-stakes fields such as law, finance, and medicine.
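One concrete XAI technique that uncovers such reasoning is permutation importance: shuffle one feature's values and measure how much the model's error grows; the bigger the growth, the more the model relies on that feature. The toy model, data, and function names below are illustrative assumptions, not a reference implementation.

```python
import random

def model(x):
    # Hypothetical "black box": depends on x[0] far more than x[1].
    return 3.0 * x[0] + 0.1 * x[1]

def mse(xs, ys, predict):
    """Mean squared error of predictions against targets."""
    return sum((predict(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def permutation_importance(xs, ys, predict, feature, trials=20, seed=0):
    """Average increase in MSE after shuffling one feature column."""
    rng = random.Random(seed)
    base = mse(xs, ys, predict)
    increases = []
    for _ in range(trials):
        shuffled = [row[feature] for row in xs]
        rng.shuffle(shuffled)
        perturbed = [row[:feature] + [v] + row[feature + 1:]
                     for row, v in zip(xs, shuffled)]
        increases.append(mse(perturbed, ys, predict) - base)
    return sum(increases) / trials

rng = random.Random(1)
xs = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(200)]
ys = [model(x) for x in xs]
imp0 = permutation_importance(xs, ys, model, feature=0)
imp1 = permutation_importance(xs, ys, model, feature=1)
print(f"feature 0 importance: {imp0:.3f}, feature 1 importance: {imp1:.3f}")
```

Because the technique treats the model purely as an input–output box, it works even when the internals are opaque, which is exactly the "black box" situation XAI targets.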
In essence, governing AI is like charting a course for a massive, powerful ship in uncharted waters: the EU has chosen a high-visibility, prescriptive map detailing every rock and reef (the risk-based AI Act), while the US has opted for a looser, faster approach, trusting the captains (private companies) to self-correct, emphasizing speed and innovation to lead the global fleet. Regardless of the chosen path, the need for robust ethical frameworks, clarity on creation (copyright), and transparency in decision-making (XAI) remains paramount to ensure a safe journey for everyone on board.
This article is part of the Short Advanced Program “Democracy, Citizenship, and Emerging Challenges: European Values at risk?” from RUN EU. This SAP is the result of the collaboration between the UBU, the NHL, Howest, and the IPCA.
