Posts

Villasenor Warns Against Digital Misinformation

Public Policy Professor John Villasenor joined CNN London to discuss the growing threat of deepfake videos, which use artificial intelligence to alter images, swap faces or edit voice audio, creating highly realistic footage. In one example, a deepfake video showed British Prime Minister Boris Johnson appearing to endorse his political rival, Jeremy Corbyn. Villasenor explained that digital misinformation is a real concern in today’s political environment. “We can expect both here in the United States and in other countries that the technology that can be used for these deepfakes will, in some cases, be used in an attempt to influence elections,” he said. He noted that there are “subtle differences between the audio and the mouth movements, but you have to be looking carefully.” Moving forward, he urges people to “recalibrate their expectations” and unlearn the habit of assuming that what we see on video is always true.


Villasenor on Widespread Use of Deepfakes

John Villasenor, professor of public policy, electrical engineering and management, spoke to CNBC about the proliferation of “deepfakes” on the internet. Deepfakes — videos or other digital representations that appear real but are actually manipulated by artificial intelligence — are becoming increasingly sophisticated and accessible to the public, Villasenor said. They can make candidates appear to say or do things that undermine their reputation, thus influencing the outcome of elections, he warned. Deepfake detection software is being developed but still lags behind the advanced techniques used to create the misleading messages. “Will people be more likely to believe a deepfake or a detection algorithm that flags the video as fabricated?” Villasenor asked.


Villasenor on Strategies to Guard Against ‘Deepfakes’

Public Policy Professor John Villasenor spoke to the Brookings Cafeteria podcast about strategies that voters and other consumers of digital media can adopt to guard against “deepfakes” — videos manipulated with artificial intelligence technology to deceive, parody or, sometimes, educate. “Anybody who has a computer and access to the Internet is in a position to produce deepfakes,” Villasenor said, but he added that the technology to detect the doctored videos is also quickly evolving. He urged consumers of digital media to “unlearn what we’ve learned since we were all small, which is usually seeing is believing. … Deepfakes scramble that understanding.” Even if a video is clearly fake, he said, “the visual imagery is very powerful and so I think it’s a big concern.” Villasenor is a professor of management, law and electrical engineering, in addition to public policy. He is a nonresident senior fellow at the Brookings Institution.


Villasenor on Easy Access to Powerful Technology

Public Policy Professor John Villasenor spoke to Business Insider about “deepfakes,” phony videos and digital images manipulated using artificial intelligence. Easy access to both the technology to alter videos and the platforms to distribute them widely has heightened concern about deepfakes, Villasenor said. “Everyone’s a global broadcaster now. So I think it’s those two things together that create a fundamentally different landscape than we had when Photoshop came out,” he said. Altered videos can also be used in satire and entertainment, creating complications for legal efforts to crack down on malicious users. Time constraints are another issue, Villasenor said, citing deepfakes used in political attacks. “Election cycles are influenced over the course of sometimes days or even hours with social media, so if someone wants to take legal action that could take weeks or even months,” he said. “And in many cases, the damage may have already been done.”


Villasenor on ‘Deepfakes,’ Free Speech and the 2020 Race

Public Policy Professor John Villasenor narrated a short Atlantic video on the proliferation of “deepfakes,” videos and audio manipulated using sophisticated technology to convincingly present fiction as fact. Deepfakes are “engineered to further undermine our ability to decide what is true and what is not true,” he said. “We are crossing over into an era where we have to be skeptical of what we see on video.” Villasenor, who studies the intersection of digital technology with public policy and the law, predicted that deepfakes will be used to deceive voters during the 2020 presidential campaign yet cautioned against aggressive laws to rein them in. While the technology could harm targeted individuals, the First Amendment protects free expression, including many forms of parody, he said. “As concerning as this technology is, I think it’s important not to rush a whole raft of new laws into place because we risk overcorrecting,” Villasenor said.


Villasenor on ‘Deepfakes’ and the Uncertainty of Truth

Public Policy Professor John Villasenor wrote a piece for the Brookings Institution on “deepfakes” and the uncertainty about truth that they create. Villasenor defined deepfakes as intentionally manipulated videos that make a person appear to say or do something they, in fact, did not. He suggested three strategies to address the problem: deepfake detection technology, legal and legislative remedies, and greater public awareness. Artificial intelligence can detect image inconsistencies introduced by video manipulation, he said, adding that legal and legislative actions must strike a balance that protects people from deepfakes without overstepping. He said viewers can combat deepfakes by refusing to assume questionable videos are real. “That knowledge won’t stop deepfakes, but it can certainly help blunt their impact,” he said. Villasenor is currently a nonresident senior fellow in Governance Studies and the Center for Technology Innovation at the Brookings Institution.