EU AI Act for Belgian SMEs: what to do in 2026
The EU AI Act becomes concrete in 2026: where do we stand?
Regulation (EU) 2024/1689, better known as the EU AI Act, entered into force in August 2024. Since February 2025, some of its provisions have already applied to Belgian SMEs that use artificial intelligence. The major obligations for most companies — those concerning so-called "high-risk" AI systems — will fully apply from August 2026. Translation: for a Walloon or Brussels-based SME that has rolled out ChatGPT, a chatbot or a CV screening tool, the clock is ticking.
This article takes stock of what you need to know in spring 2026, while avoiding two traps: regulatory panic (most SMEs use minimal-risk AI) and underestimation (the AI literacy obligation already applies to every business). This is not legal advice — for a specific case, consult a specialised lawyer — but a practical orientation to help structure your approach.
The actual timeline: what already applies, what is coming
The AI Act rolls out in stages, and it is important to distinguish what is already in force from what is still on the way.
Since February 2025, two blocks have applied to any organisation using AI in Europe: the ban on unacceptable practices (social scoring, manipulative techniques, exploitation of people's vulnerabilities, real-time biometric identification outside narrow exceptions, etc.) and the AI literacy obligation for providers and deployers of AI systems. The vast majority of SMEs are not affected by the prohibited practices — these are extreme cases — but AI literacy concerns everyone.
Since August 2025, the rules on general-purpose AI models (GPAI: ChatGPT, Claude, Gemini, etc.) apply to their providers, along with the governance and penalty regime. For an SME that uses these tools — rather than building them — the direct impact is limited, but you do gain more transparency about what you buy.
From August 2026, the obligations on high-risk AI systems under Annex III of the regulation (recruitment, credit scoring, critical infrastructure management, certain education domains, etc.) become fully applicable. This is the most important step for SMEs that develop or buy AI tools in these areas.
Finally, in August 2027, the last wave of obligations covers AI systems integrated into products already subject to EU safety rules (machinery, medical devices, toys, etc.).
Are you a provider, a deployer or a distributor?
The AI Act distinguishes several roles, and obligations differ sharply by position. Settling this qualification is the first task.
The provider is the entity that develops or places an AI system on the market under its own name. If your SME builds its own chatbot or recommendation algorithm, you are a provider. The deployer (a role sometimes loosely described as "professional user") is the entity that uses an AI system under its authority in a professional capacity. If you use ChatGPT for emails, Klaviyo for segmentation or a third-party CV screening tool, you are a deployer. The distributor makes an AI system available on the European market without being its provider — rarer for an SME.
For most Belgian SMEs, the relevant status is deployer of limited- or minimal-risk systems. That means a small set of obligations, but they exist: transparency towards end users (Article 50), AI literacy across your team, and — when AI processes personal data — articulation with the GDPR.
AI literacy: the quiet obligation that already applies
Article 4 of the regulation requires providers and deployers to take the measures needed to ensure a sufficient level of AI literacy among their staff and any other person acting on their behalf. This obligation has been in force since 2 February 2025 and covers every SME, regardless of the risk level of the systems in use.
What does it mean concretely? There is no mandatory format. The regulation refers to sufficient "skills, knowledge and understanding" to use and deploy AI "in an informed manner". For a Belgian SME, this can take several forms: a half-day awareness session for the team on the tools in use, an internal AI usage charter, short training on risks (hallucinations, bias, data leakage), documented use cases validated by management.
The mistake I see most often: letting each employee use ChatGPT, Copilot or Claude on their own, with no shared framework. That is neither compliant with Article 4 nor healthy for the quality of the work produced. See also Train your team for AI adoption in SMEs to structure that effort.
Transparency obligations (Article 50)
Article 50 imposes several transparency obligations that affect every SME interacting with customers via AI. Three practical cases.
First case: chatbot or conversational agent. If your customers interact with an AI system (chat on your website, AI phone receptionist, support agent), you must inform them that they are talking to an AI, unless this is obvious from the context. The practice: a clear message at the start of the interaction ("You are speaking with a virtual assistant") and an option to switch to a human.
Second case: AI-generated content (deepfakes, images, audio, synthetic text). If you publish AI-generated content liable to be taken as authentic (AI product visuals, synthetic voices, fully AI-written articles), you must signal the artificial nature of the content. For standard marketing cases (clearly decorative illustrations, AI suggestions reviewed and rewritten by a human), the obligation is softer, but caution is advised.
Third case: biometric categorisation or emotion recognition. If you use these technologies (rare in SMEs, but not non-existent in HR or customer service), reinforced obligations apply.
A 30-60 day compliance plan for a Belgian SME
Here is a pragmatic roadmap to address AI Act compliance in a Belgian SME without disrupting operations.
Weeks 1-2: mapping. List every AI use in your business, including those by individual employees (personal ChatGPT used for work, Excel AI plug-ins, AI features in Gmail/Outlook, SaaS tools embedding AI). For each use, identify: who uses it, for what purpose, what data is sent, which provider, which risk category.
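To make the mapping step concrete, here is a minimal sketch of such a register in Python. The field names, tool names and vendors are hypothetical examples of our own, not prescribed by the regulation; the point is simply to capture, per use, the questions listed above (who, purpose, data, provider, risk) and to flag anything high-risk for specialist review.

```python
from dataclasses import dataclass, asdict
import csv

# Hypothetical register entry; field names mirror the mapping questions
# (who uses it, for what, what data is sent, which provider, which risk).
@dataclass
class AIUseCase:
    tool: str
    users: str        # who uses it
    purpose: str      # what it is used for
    data_sent: str    # what data leaves the company
    provider: str     # vendor behind the tool
    risk_level: str   # "minimal" | "limited" | "high"

# Example entries (illustrative only).
register = [
    AIUseCase("ChatGPT", "marketing team", "email drafts",
              "no personal data", "OpenAI", "minimal"),
    AIUseCase("CV screening SaaS", "HR", "candidate pre-selection",
              "CVs (personal data)", "ExampleVendor", "high"),
]

# Flag uses that need specialist support before August 2026.
high_risk = [u.tool for u in register if u.risk_level == "high"]
print(high_risk)  # → ['CV screening SaaS']

# Export the register as a simple CSV to share internally.
with open("ai_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(register[0]).keys()))
    writer.writeheader()
    writer.writerows(asdict(u) for u in register)
```

A shared spreadsheet does the same job; the structure matters more than the tooling.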
Weeks 3-4: qualification. For each use, determine your role (provider or deployer — almost always deployer for an SME) and risk level. For the large majority of SMEs, the answer will be minimal or limited risk. If you identify a high-risk use (recruitment, credit scoring, etc.), that is where specialist support is needed.
Weeks 5-6: setting the foundations. Three deliverables: an internal AI usage charter (1-2 pages: authorised tools, forbidden data, human validation, incident reporting), an AI literacy training plan (half-day minimum for anyone using AI), and an update to your client-facing privacy policy (transparency on AI usage).
Weeks 7-8: ongoing controls. Set up a quarterly review of AI tools in use (a new SaaS, a new feature), keep a simple register of use cases, and appoint an internal point of contact. For documentation, see also Data security with AI for SMEs, which covers the GDPR side.
Official resources to consult
Two reliable sources to dig deeper. The European Commission's EU AI Act portal (digital-strategy.ec.europa.eu) gathers the official text, guidelines and implementing acts. The independent tracker artificialintelligenceact.eu lets you follow annexes and timelines without navigating the official journal.
In Belgium, the national competent authority for the AI Act is being finalised at the time of writing. Watch for communications from the FPS Economy and the Data Protection Authority (dataprotectionauthority.be), which covers the AI Act / GDPR articulation. To anticipate common implementation pitfalls, see also AI integration mistakes to avoid in SMEs.
Going further
The AI Act is neither a bogeyman nor a formality. For an SME using AI sensibly — productivity tools, automation, marketing — compliance is reachable in a few weeks of structured work. The real question is not "how do I dodge the regulation" but "how do I integrate AI governance into the way I run the company". That is precisely the kind of work AIves Consulting does with Walloon and Brussels SMEs: usage audit, internal charter, AI literacy training, articulation with your existing GDPR programme.
If you want to start, the first useful step is to produce the internal usage map. Without it, every step that follows stays theoretical.
Want to discuss this?
Get in touch