Algorithms have controlled welfare systems for years. Now they are being criticized for their biases

“People receiving a social allowance reserved for disabled people (the Disabled Adult Allowance, or AAH) are directly targeted by a variable in the algorithm,” explains Bastien Le Querrec, a lawyer at La Quadrature du Net. “The risk score for people who receive AAH and who are working is increased.”

Because it also scores single-parent families higher than two-parent families, the groups argue that it indirectly discriminates against single mothers, who are statistically more likely to be sole caregivers. “In the criteria of the 2014 version of the algorithm, the score of beneficiaries divorced for less than 18 months is higher,” Le Querrec adds.

Changer de Cap says it has been contacted by both single mothers and disabled people seeking help after being investigated.

The CNAF, the agency responsible for distributing financial aid including housing, disability, and family benefits, did not immediately respond to a request for comment or to WIRED’s question about whether the algorithm currently in use has changed significantly since the 2014 version.

Just as in France, human rights groups in other European countries argue that such algorithms subject the lowest-income members of society to intense surveillance, often with profound consequences.

When tens of thousands of people in the Netherlands, including many members of the country’s Ghanaian community, were falsely accused of defrauding the child benefit system, they were not only ordered to repay the money the algorithm said they had stolen. Many of them say they were also left with mounting debt and destroyed credit ratings.

The problem is not how the algorithm was designed, but the fact that it is used in the social protection system at all, explains Soizic Pénicaud, a lecturer in AI policy at Sciences Po Paris who previously worked for the French government on public-sector algorithm transparency. “Using algorithms in the context of social policy carries far more risks than benefits,” she says. “I have not seen any examples in Europe or around the world in which these systems have been used with positive results.”

The affair has ramifications beyond France. Social protection algorithms are expected to be an early test of how the EU’s new AI rules will be enforced once they come into force in February 2025. From then, “social scoring” – the use of AI systems to assess people’s behavior and then subject some of them to harmful treatment – will be banned across the bloc.

“A lot of these social welfare systems that detect fraud can, in my opinion, in practice be a social rating system,” says Matthias Spielkamp, ​​co-founder of the nonprofit Algorithm Watch. Yet public sector representatives are unlikely to agree with this definition – with arguments over how to define these systems likely to end up in court. “I think it’s a very difficult question,” says Spielkamp.