PARIS — This video of Barack Obama is both real and not real. It's an excerpt of the former U.S. president during an official appearance, and the words he utters are things he really did say, albeit in an entirely different speech. Researchers at the University of Washington, in Seattle, used artificial intelligence to perfectly sync the movement of Obama's lips in one appearance to the words he used on a previous occasion. Moral of the story? It's now possible to create a "fake" video that looks just as real as the original.
Gartner, Inc., an American research and advisory firm, estimates that by 2022, people living in developed countries will have more exposure to false information than to real news. Welcome to a world where "the average citizen is no longer in a position to know if a piece of information is serious or not," says David Glance, director of IT practices at the University of Western Australia.
The spread of so-called fake news is as old as politics. "Already, in Athens, the tyrant Peisistratus (6th century BC), a past master in the art of fake news, seized power and exercised it thanks to propaganda based on the falsification of Greek literature," notes Patrick Chastenet, a professor of political science at the Montesquieu Research Institute in Bordeaux.
Only these days, the echo chambers of Facebook, YouTube, Twitter, Instagram and LinkedIn give fake news unprecedented power, as do the algorithms they use, which suggest certain content to us based on our browsing history. The lure of profit fuels the plague, since false information, spread on a social platform, costs nothing to produce but directs Internet users back to certain ads.
We know too that certain politicians are prepared to do anything to get elected. And of course, we can't forget that foreign powers sometimes attempt to influence outcomes. All of this gave birth to a system "where emotion takes precedence over facts, which threatens democracy," says Lisa-Maria Neudert, a specialist in propaganda and manipulation at Oxford University in England.
So what do we do?
The question, then, is what to do about it. That is precisely what hundreds of researchers all over the world are asking themselves as we speak. Here are some of the solutions currently being studied:
Fake news prospers in a system based on the economy of attention: It directs Internet users towards ads, meaning the ad creators earn money. To financially asphyxiate these fake operations, some experts imagine moving away from an economic model in which all information is free. But how do we convince Internet users to start paying for quality information? Another, no doubt more realistic, tactic is to put pressure on brands that run ads (most often unknowingly) on fake news websites.
Governments could require social platforms to become "serious" media. But this poses moral, economic and legal problems. "Is it really up to a social network to filter information? On what criteria and on whose behalf could Facebook decree that a certain publication is fake, satirical, or even politically biased?" asks Romain Badouard, a social sciences researcher and author of the essay "The Disenchantment of the Internet" (FYP Editions).
Another question is how to make social media platforms act in the same way throughout the whole world when certain countries are clearly more progressive than others. Could this verification happen automatically? Or would it be necessary to resort to armies of censors who would no doubt be poorly paid for doing such tedious work?
"With the criteria that it adopted, almost unanimously, to fight against pornography, Facebook demonstrated that it could take an ethical position on a social subject," says Jean Pouly, a digital economics expert at Télécom Saint-Etienne. But at Facebook, which has recently increased its information verification initiatives, employees don't think it's necessary to go much further. "We are a tool at the service of the media," says Edouard Braud, Facebook's director of media partnerships in France and southern Europe. "We work hand in hand with them to improve the spread of their content."
Social networks must be viewed not as media outlets, but as advertising agencies. We can't ask them to be simultaneously judge (filtering content) and interested party (invested in the number of clicks). The validation of information must therefore be entrusted to third parties. Pierre-Albert Ruquier is a cofounder of the Parisian startup Storyzy, a website that verifies quotes. "Thanks to our quote-extraction technology, we are able to spot quotes shared only by pre-identified fake news sites," he explains. "We can detect sites that were previously unknown to our system but that are using those quotes, which is suspicious."
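The quote-matching approach Ruquier describes can be illustrated with a toy sketch. Everything below is hypothetical: the site names, function names and the naive regex-based quote extraction are invented for illustration, not Storyzy's actual system.

```python
# Hypothetical sketch of quote-based detection (all names invented):
# extract quoted passages from article text, index quotes seen on
# pre-identified fake news sites, then flag unknown sites that
# republish those same quotes.
import re

KNOWN_FAKE_SITES = {"hoax-daily.example", "fakewire.example"}  # pre-identified list (invented)

def extract_quotes(text, min_words=5):
    """Pull quoted passages out of raw article text (naive regex version)."""
    quotes = re.findall(r'"([^"]+)"', text)
    # Keep only substantial quotes to reduce false matches.
    return {q.strip() for q in quotes if len(q.split()) >= min_words}

def build_fake_quote_index(pages):
    """Index every quote that appears on a known fake news site.

    `pages` is an iterable of (site, article_text) pairs.
    """
    index = set()
    for site, text in pages:
        if site in KNOWN_FAKE_SITES:
            index |= extract_quotes(text)
    return index

def is_suspicious(site, text, fake_quote_index):
    """Flag a site if it is already known, or if it shares quotes
    previously seen only on fake news sites."""
    if site in KNOWN_FAKE_SITES:
        return True
    return bool(extract_quotes(text) & fake_quote_index)
```

A real system would need fuzzy matching (quotes get paraphrased and truncated when copied) and per-site frequency statistics, but the core signal is the same: a quote circulating exclusively within a known cluster of fake news sites taints any new site that repeats it.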
The cybersecurity supervisors of Silicon Valley giants exchange information as soon as one of them spots a new hacking threat. Why don't the Big Five (Google, Facebook, Amazon, Apple, Microsoft) also cooperate when they see a new piece of fake news? asked Karen Wickre, Twitter's former editorial director, in a recent Wired article.
Stamping a "questionable information" banner on an article published on a social network often produces the opposite of the desired effect. The author of the flagged publication could call it censorship. Likewise, dismantling all the arguments of a biased piece often only serves to reinforce it. "It's better to simply publish an article that tells the truth, without referring to the fake news in question," Gartner, Inc. advises.
How can we legally protect photos and videos uploaded online? Many authors publish them under the Creative Commons license, which allows others to reuse and modify the images. But, as a result, this prevents the original publisher from pursuing potential forgers. Is a new Creative Commons license necessary to stop images from being used for propaganda?
It's necessary to raise awareness among young people and adults alike about verifying information, but also about the social pressures that exist online. There are mechanisms whose core motivations are still poorly understood. "In a group, each individual can think that the others view positively what he views negatively. The result is that he will act against his own values," says Vincent F. Hendricks, director of the Center for Information and Bubble Studies at the University of Copenhagen.
If it remains out of control, will fake news kill social networks? "I don't think so, because Internet users use these sites for many things other than information," Hendricks adds. Certainly, though, fake news will eventually force the platforms to change.