PARIS — French mathematician and Fields Medal recipient Cédric Villani has devoted some of his formidable natural intelligence to the subject of artificial intelligence (AI), helping write a report that was submitted to the French government in March. He also knows a thing or two about politics, having been elected as a national deputy last year. It is his belief, as surprising as it may sound, that AI and politics could very well have a future together.
"Even though not everything is clear yet, AI could become very useful in terms of politics, especially regarding the link between citizen and government," Villani states in the latest edition of Charles magazine.
"No one is supposed to be unaware of the law. But the law is a set of incomprehensible texts," he adds. That's where AI could come in. The mathematician turned lawmaker imagines a chatbot, for example, that could review all the articles in our dense legal codes in order to extract a substantive answer to any particular legal question.
In a case like this, he argues, AI simply provides a service. But couldn't it also help politicians make decisions? After all, they argue with each other all year round. What if we let the machine decide instead? And what if the algorithm simply took their place? The idea may sound crazy, but 18% of French people believe that "AI could make better choices than elected officials, provided the final decision is made by a human being," according to an OpenText online survey of 2,000 people.
In Japan, an artificial intelligence program even ran for mayor of Tama, in the Tokyo region, in April. True, it was officially a human running for the post, Michihito Matsuda. But his campaign posters featured a robot with a female shape, and had he won, Matsuda intended to let AI determine policy using the data at its disposal. The project didn't win over the population, but it still obtained 9.31% of the vote, more than 4,000 ballots.
Michihito Matsuda's campaign posters in Tama — Photo: Samim/Twitter
In France, the Sorbonne University's computer lab developed a piece of software called WorkSim to assess the consequences of employment policies. In 2016, it estimated that the labor reform that attracted such strong opposition would bring unemployment down by 0.5% in the short term but would have no long-term impact.
The model was also trained to find solutions, and on these the two researchers who developed it happened to disagree. For Jean-Daniel Kant, the most effective measure suggested was a reduction in working time, while for his colleague, Gérard Ballot, it was better to reinforce training. This is a good example of how the machine can produce results but shouldn't necessarily make decisions. Humans are still needed to interpret those results.
At the Boston Consulting Group, Managing Director Sylvain Duranton has worked on optimizing a city's transportation network with the help of an AI system. All of the data — schedules, the number of trains, the number of passengers — were crunched to achieve increased efficiency. Optimizing efficiency, however, meant eliminating certain steps. And that, from a social-justice standpoint, is problematic. Here again is an example of how results and the decisions they suggest don't necessarily match up.
In the Boston Consulting Group case, determining the extent to which traffic reductions could be allowed at certain locations was a highly political issue. A numerical value thus had to be integrated into the algorithm in order to correct its conclusions. That the word is "value" is no mere etymological accident.
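The kind of correction described above can be pictured as a weight in an optimization objective: the planner chooses how much neighborhood coverage counts against raw operating cost, and that choice changes which plan the algorithm recommends. The sketch below is a hypothetical illustration with invented numbers, not BCG's actual system.

```python
# Hypothetical sketch: choosing a transit plan by pure efficiency versus with
# a politically chosen "value" that rewards serving outlying neighborhoods.
# All plan names and figures are invented for illustration.

plans = [
    {"name": "lean",     "cost": 100, "neighborhoods_served": 6},
    {"name": "balanced", "cost": 120, "neighborhoods_served": 9},
    {"name": "full",     "cost": 150, "neighborhoods_served": 12},
]

def score(plan, fairness_weight):
    """Lower is better: operating cost minus a bonus for coverage."""
    return plan["cost"] - fairness_weight * plan["neighborhoods_served"]

def best_plan(fairness_weight):
    """Return the plan the algorithm would recommend for a given weight."""
    return min(plans, key=lambda p: score(p, fairness_weight))

print(best_plan(0)["name"])   # with no fairness weight, cheapest plan wins
print(best_plan(12)["name"])  # a strong enough weight flips the recommendation
```

The point of the toy example is that the algorithm itself never decides the weight; a human does, and that single number encodes the trade-off between efficiency and geographical cohesion.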
When an algorithm is corrected, the value that is integrated is an ideal value. "Models, despite their reputation for impartiality, reflect goals and ideologies," writes mathematician Cathy O'Neil in her 2016 book Weapons of Math Destruction. "When I removed the possibility [for my family] of eating Pop-Tarts at every meal, I was imposing my ideology on the meals model," O'Neil explains.
When humans use algorithms to implement public policies, they must have the intellectual integrity to explain what they are aiming for, their goal, their horizon. Seeking economic efficiency isn't the same thing as seeking social justice... at least for an algorithm.
The mistake would be to think that efficiency and justice must be opposed. But isn't justice also a type of efficiency? When we refuse to isolate neighborhoods by guaranteeing them access to public transport, our goal is geographical cohesion, or even national unity. The argument can also be made that this cohesion serves economic efficiency, since it allows people to get to work at a lower cost.
Of course, we can imagine responding to these various imperatives by multiplying the variables, by integrating economic data, social data, environmental data, you name it. The problem is that the variables become countless and often contradictory. They may also lack methodological soundness. In its April 28 issue, The Economist reminded its readers that 80% of microeconomic studies exaggerated the reported results and that 90% of them relied on insufficient sample sizes. Even when studies help construct variables, the difficulty remains in knowing how to articulate them.
Another concern is that some human behaviors cannot be modeled. Otherwise, many parents would have understood long ago why their children, despite being raised the same way, are so different. Still, artificial intelligence — like statistics before it — remains a useful scientific tool for developing and evaluating public policies. In the end, though, it's up to human beings to clearly say in which direction those policies should go.