Dutch data watchdog warns voters against using chatbots for election guidance
As the Netherlands prepares for its snap general election on October 29, the country’s Data Protection Authority (AP) has issued a strong warning to citizens: do not rely on AI chatbots for voting advice.
With artificial intelligence tools like ChatGPT, Gemini, Grok, and Le Chat becoming everyday companions for millions — from managing administrative tasks to offering life guidance — the Dutch authority cautions that these systems are neither neutral nor reliable when it comes to democracy.
In a recent study, the AP tested four major chatbots by feeding them 200 different voting profiles representing the country’s diverse political spectrum. The chatbots were then asked to recommend political parties aligned with those profiles.
The findings were striking. In 56% of cases, the AI models pointed users toward one of the two dominant parties forecast to win the election — Geert Wilders’ Party for Freedom (PVV) on the far right and Frans Timmermans’ Green Left–Labour Party (GL/PvdA) on the left.
“We saw a clear oversimplification of the Dutch political landscape,” said Joost van der Burgt, project manager at the AP. “AI systems tended to push users toward one large party on either end of the spectrum, ignoring smaller or centrist groups.”
Smaller and more moderate parties such as the Farmer–Citizen Movement (BBB) and Christian Democratic Appeal (CDA) were rarely suggested, even when user inputs closely matched their political positions. Interestingly, the right-wing JA21 party was recommended disproportionately often, despite its limited media presence.
Explaining the mechanism, van der Burgt noted that generative AI tools are statistical engines that predict likely words and phrases — not reasoned political advice.
“If your political stance leans toward one end of the spectrum, it’s unsurprising that AI suggests a dominant party from that side,” he said. “But these systems often fail to distinguish between smaller parties with similar views, flattening the nuances that define real political choice.”
Under the European Union’s Artificial Intelligence Act (AI Act), which took effect in August 2024, some AI systems can be designated as “high-risk” if they pose threats to citizens’ fundamental rights.
Researchers argue that chatbots providing voting advice could soon fall under this category when new regulations come into force in 2026.
For van der Burgt, this raises urgent ethical and regulatory concerns. “Chatbots already refuse to help users with dangerous or illegal requests,” he said. “We believe the same kind of restriction should apply to voting advice — it’s too sensitive to be left to unregulated AI.”
Instead of AI chatbots, the Dutch authority recommends traditional, data-driven voting platforms such as StemWijzer and Kieskompas, which ask users a series of policy questions before suggesting political alignments.
Similar tools operate in other European countries, including Germany’s government-backed Wahl-O-Mat, which matches voters’ opinions with official party positions based on transparent datasets.
Experts say these systems are less prone to bias and far more transparent in methodology. “The problem with chatbots is that we don’t know how they generate answers,” van der Burgt explained. “With tools like StemWijzer or Kieskompas, everything is clear — from the data sources to how the recommendations are calculated.”
As voters across the Netherlands head to the polls, the AP’s warning highlights a broader issue facing democracies worldwide: the intersection of AI, bias, and electoral integrity.
“Transparency is key,” van der Burgt concluded. “Voters should know how their information is used and why they receive a certain recommendation. Without that, we risk handing over one of the most essential democratic decisions — the right to vote — to machines we can’t fully understand.” (ILKHA)