AI Action Summit Paris: global talk with local impact
Global leaders from over 100 countries gathered in Paris for the AI Action Summit, reaffirming our collective responsibility to shape ethical, transparent, and sustainable AI. The accelerating evolution of AI presents unparalleled opportunities but also complex risks—from digital divides to algorithmic discrimination.
Most of the speeches and sessions covered the following themes:
- Bridging the Digital Divide – AI must be accessible to all, not just the tech elite. The newly launched Public Interest AI Initiative aims to support open AI models and digital public goods.
- Sustainability & AI – AI’s energy footprint is now under global scrutiny. The International Energy Agency will establish an observatory on AI’s environmental impact.
- AI & Work – AI is reshaping labor markets, requiring proactive protection of workers’ rights and investment in AI literacy.
- Stronger Governance – While the EU AI Act sets the standard for global AI regulation, not all major economies are aligned. The US and UK notably declined to sign the final declaration.
So, as expected, the summit was largely politically driven and may therefore feel remote from day-to-day business for many companies and professionals. However, the impact of AI regulation is anything but distant.
The AI Act in Belgium
As AI becomes deeply embedded in the workplace, the EU AI Act already impacts Belgian businesses.
AI Literacy for Employees (From February 2025)
Belgian employers must ensure that employees understand AI—not just IT teams, but anyone using AI-driven tools. This includes recognising AI’s risks, ethical concerns, and decision-making processes.
Ban on AI Practices Posing Unacceptable Risks (From February 2025)
AI systems posing unacceptable risks, such as emotion recognition in the workplace and social scoring, are prohibited. Employers must ensure compliance to avoid legal and ethical risks.
Transparency & Compliance for AI Models (From August 2025)
General-purpose AI models (e.g., ChatGPT-like systems) must meet strict documentation and copyright requirements. Employers must ensure transparency in AI-driven decision-making to avoid biased outcomes.
The EU AI Act establishes a tiered system of fines (ranging from €7.5 million to €35 million, or 1% to 7% of global annual turnover) depending on the type of violation. Since this is an EU regulation, the penalties will apply uniformly across all EU member states, including Belgium. However, each member state will be responsible for its own enforcement framework, meaning Belgium will need to set up its own regulatory body to oversee compliance, just as it did for GDPR.
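To give a sense of how these caps scale with company size, here is a minimal, purely illustrative sketch in Python. It assumes the "whichever is higher" rule that the regulation applies to the fine ceilings for non-SME undertakings; the turnover figure used is hypothetical.

```python
def fine_ceiling(global_turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    """Upper bound of a fine for a non-SME undertaking: the higher of the
    fixed cap and the percentage of global annual turnover (assumption:
    the 'whichever is higher' rule applies)."""
    return max(fixed_cap_eur, pct_cap * global_turnover_eur)

# Hypothetical example: a company with €2 billion global turnover facing the
# top tier (€35 million or 7% of turnover) could face a ceiling of €140 million.
print(f"€{fine_ceiling(2_000_000_000, 35_000_000, 0.07):,.0f}")  # €140,000,000
```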
The Big Question: how should businesses look at AI in today’s world?
How much do we really know about the AI systems shaping our daily lives? From ANPR cameras monitoring Belgian roads to AI-driven chatbots, these systems are already woven into everyday life.
There is broad agreement that AI has the potential to radically change the way we do business and even the way we live. However, concerns about transparency and trust remain at the forefront.
So as a business, these are the key topics you need to consider when rolling out AI projects:
- Transparency – Do your customers and employees understand how AI is making decisions?
- Fairness & Bias – Is your AI model unintentionally reinforcing discrimination?
- AI Governance & Compliance – Are your AI tools aligned with upcoming EU regulations?
- AI Literacy – Are your employees trained to use AI responsibly?
- Data Security & Privacy – Are you protecting user data in compliance with GDPR & AI Act requirements?
The EU led the way with privacy rights (GDPR)—can it now set the global standard for AI governance?
We tend to believe so. At MultiMinds, we value data privacy and transparency in how data is used, and we are determined to apply the same principles when using data in AI models.