Sunday, November 24, 2024

Algorithms Regulating Welfare Face Criticism for Bias After Years of Use


In Short:

An algorithm used to assess social benefits in France is harming disabled individuals and single-parent families, particularly single mothers. It assigns higher risk scores to beneficiaries receiving disability aid and to those who have recently divorced. Human rights groups warn that such algorithms, which have already caused harm in other countries, may lead to severe consequences. The EU will ban such “social scoring” starting in February 2025.


Individuals receiving the Allocation Adulte Handicapé (AAH), a social allowance specifically for persons with disabilities, are facing heightened scrutiny due to a variable incorporated into a welfare algorithm. According to Bastien Le Querrec, a legal expert at La Quadrature du Net, the risk score for those on AAH who are employed is particularly elevated.

Discrimination Concerns

The algorithm has also been found to assign higher scores to single-parent families than to two-parent households. This practice is believed to indirectly discriminate against single mothers, who are statistically more likely to be sole caregivers. Le Querrec points out that “the criteria for the 2014 version of the algorithm assign a heightened score for beneficiaries who have been divorced for less than 18 months.”

Response from Advocacy Groups

Advocacy group Changer de Cap has reported that it has been approached by both single mothers and disabled individuals seeking assistance after being subjected to investigation.

Agency’s Silence on Algorithm Changes

The CNAF agency, responsible for distributing various forms of financial aid, including housing, disability, and child benefits, has not yet responded to requests for comments regarding whether the current algorithm has undergone significant modifications since its 2014 iteration.

Broader Implications in Europe

Human rights organizations across Europe are echoing similar concerns, arguing that welfare algorithms subject the most vulnerable populations to excessive scrutiny, often with severe consequences. In the Netherlands, for instance, numerous individuals, particularly from the Ghanaian community, were wrongly accused of defrauding the child benefits system. In addition to being ordered to repay allegedly misappropriated funds, many accrued debt and saw their credit ratings deteriorate.

Algorithm Design vs. Practical Application

Soizic Pénicaud, a lecturer on AI policy at Sciences Po Paris, emphasizes that the issue lies not in the algorithm’s design but rather its application within the welfare system. “Using algorithms in the context of social policy comes with far more risks than benefits,” she asserts, noting that she has not observed any instances in Europe or globally where such systems have yielded positive outcomes.

Implications for EU AI Regulations

The situation in France has implications for the broader European context. The welfare algorithms currently in use will serve as an early test of the enforcement of the EU’s new AI rules, which take effect in February 2025. At that point, the practice of “social scoring”—using AI systems to evaluate and potentially penalize individuals based on their behavior—will be formally banned across the European Union.

Ongoing Debate on Social Scoring

Matthias Spielkamp, co-founder of the nonprofit AlgorithmWatch, posits that many welfare systems engaged in fraud detection may essentially function as social scoring in practice. Public sector representatives, however, may contest this interpretation, leading to potential legal disputes over how these systems should be classified. “I think this is a very hard question,” Spielkamp remarks.

