Public Policy Professor John Villasenor co-authored an article with UC Berkeley Professor Rebecca Wexler describing the dangers of new data privacy laws and their unintended contribution to wrongful convictions. They explain how the “growing volume of data gathered and stored by mobile network providers, social media companies, and location-based app providers has quite rightly spurred interest in updating privacy laws.” However, these laws often favor prosecutors in legal cases, making it easier for them to deploy state power to search for and seize data, while defense attorneys struggle to access the same data using subpoenas. The article for the Brookings Institution’s TechTank blog describes a “fundamental asymmetry”: “While law enforcement can compel the production of data that can help establish guilt, a defendant will have a much harder time compelling the production of data that establish innocence.” The authors recommend drafting laws that accommodate “the legitimate needs of both law enforcement and defense investigations.”
In an opinion piece for the Chronicle of Higher Education, John Villasenor, professor of public policy, electrical engineering and law, explained why he does not allow his classes to be recorded. Villasenor acknowledged that recording a lecture could be beneficial for a number of legitimate reasons, including helping out students who miss class due to illness. However, he said he is more concerned with protecting his students’ privacy. “A highly interactive classroom should be a space beyond the reach of the digital panopticon,” Villasenor said. Recording can chill classroom discourse, with students perhaps choosing to speak more cautiously. This can rob students of “the opportunity to engage in dialogue with fellow students who hold perspectives that, while legitimate and valuable to consider, might not fit neatly with their own views.” Especially in smaller, highly engaged classrooms, the convenience of a recorded lecture is outweighed by the cost of a diminished learning environment, Villasenor argued.
Public Policy Professor John Villasenor joined CNN London to discuss the growing threat of deepfake videos, which use artificial intelligence to alter images, swap faces or edit voice audio to create highly realistic footage. In one example, a deepfake video was released showing British Prime Minister Boris Johnson appearing to endorse his political rival, Jeremy Corbyn. Villasenor explained that digital misinformation is a real concern in today’s political environment. “We can expect both here in the United States and in other countries that the technology that can be used for these deepfakes will, in some cases, be used in an attempt to influence elections,” he said. He noted that there are “subtle differences between the audio and the mouth movements, but you have to be looking carefully.” Moving forward, he urges people to “recalibrate their expectations” and unlearn the habit of assuming that what we see on video is always true.
John Villasenor, professor of public policy, electrical engineering and management, spoke to the Wall Street Journal about the potential challenges of 5G cybersecurity. While 5G is expected to be 100 times faster than 4G, enabling new technologies and strengthening security, Villasenor remained cautious. He predicted that some cybersecurity risks and vulnerabilities will not be addressed right away. “I’m not very confident that we’re going to be on top of these problems,” he said. “People only get cybersecurity right after they get it wrong. We’re going to learn the hard way, and hopefully the mistakes will not be particularly costly and harmful.”
John Villasenor, professor of public policy, electrical engineering and management, wrote a report for the Brookings Institution about the intersection between artificial intelligence (AI) and product liability law. While AI-based systems can make decisions that are more objective, consistent and reliable than those made by humans, they sometimes make mistakes, Villasenor wrote. Product liability law can help clarify who is responsible for AI-induced harms, he added. “AI systems don’t simply implement human-designed algorithms. Instead, they create their own algorithms — sometimes by revising algorithms originally designed by humans, and sometimes completely from scratch. This raises complex issues in relation to products liability, which is centered on the issue of attributing responsibility for products that cause harms,” he wrote. “Companies need to bear responsibility for the AI products they create, even when those products evolve in ways not specifically desired or foreseeable by their manufacturers,” he argued.
John Villasenor, professor of public policy, electrical engineering and management, wrote an article for the Chronicle of Higher Education about the importance of preparing college students for an AI future. Artificial intelligence will have a profound and transformative impact — one that college students today have the opportunity to shape, Villasenor said. He advocated for a wide range of disciplines to incorporate issues surrounding artificial intelligence into their curricula. “We need philosophers, lawyers and ethicists to help navigate the complex questions that will arise as we give machines more power to make decisions,” he wrote. In addition, political scientists, urban planners, economists, public policy experts, climate scientists and physicians are among those who should harness the power of artificial intelligence to effect positive social change — and ensure that the technology is not hijacked by malicious actors.
John Villasenor, professor of public policy, electrical engineering and management, spoke to CNBC about the proliferation of “deepfakes” on the internet. Deepfakes — videos or other digital representations that appear real but are actually manipulated by artificial intelligence — are becoming increasingly sophisticated and accessible to the public, Villasenor said. They can make candidates appear to say or do things that undermine their reputation, thus influencing the outcome of elections, he warned. Deepfake detection software is being developed but still lags behind the advanced techniques used to create the misleading messages. “Will people be more likely to believe a deepfake or a detection algorithm that flags the video as fabricated?” Villasenor asked.
Public Policy Professor John Villasenor spoke to the Brookings Cafeteria podcast about strategies that voters and other consumers of digital media can adopt to guard against “deepfakes” — videos manipulated with artificial intelligence technology to deceive, parody or, sometimes, educate. “Anybody who has a computer and access to the Internet is in a position to produce deepfakes,” Villasenor said, but he added that the technology to detect the doctored videos is also quickly evolving. He urged consumers of digital media to “unlearn what we’ve learned since we were all small, which is usually seeing is believing. … Deepfakes scramble that understanding.” Even if a video is clearly fake, he said, “the visual imagery is very powerful and so I think it’s a big concern.” Villasenor is a professor of management, law and electrical engineering, in addition to public policy. He is a nonresident senior fellow at the Brookings Institution.
Public Policy Professor John Villasenor spoke to Business Insider about “deepfakes,” phony videos and digital images manipulated using artificial intelligence. Easy access to both the technology to alter videos and the platforms to distribute them widely has heightened concern about deepfakes, Villasenor said. “Everyone’s a global broadcaster now. So I think it’s those two things together that create a fundamentally different landscape than we had when Photoshop came out,” he said. Altered videos can also be used in satire and entertainment, creating complications for legal efforts to crack down on malicious users. Time constraints are another issue, Villasenor said, citing deepfakes used in political attacks. “Election cycles are influenced over the course of sometimes days or even hours with social media, so if someone wants to take legal action that could take weeks or even months,” he said. “And in many cases, the damage may have already been done.”
Public Policy Professor John Villasenor narrated a short Atlantic video on the proliferation of “deepfakes,” videos and audio manipulated using sophisticated technology to convincingly present fiction as fact. Deepfakes are “engineered to further undermine our ability to decide what is true and what is not true,” he said. “We are crossing over into an era where we have to be skeptical of what we see on video.” Villasenor, who studies the intersection of digital technology with public policy and the law, predicted that deepfakes will be used to deceive voters during the 2020 presidential campaign yet cautioned against aggressive laws to rein them in. While the technology could harm targeted individuals, the First Amendment protects free expression, including many forms of parody, he said. “As concerning as this technology is, I think it’s important not to rush a whole raft of new laws into place because we risk overcorrecting,” Villasenor said.