Villasenor on AI Curriculum in Higher Education

John Villasenor, professor of public policy, electrical engineering and management, wrote an article for the Chronicle of Higher Education about the importance of preparing college students for an AI future. Artificial intelligence will have a profound and transformative impact — one that college students today have the opportunity to shape, Villasenor said. He advocated for a wide range of disciplines to incorporate issues surrounding artificial intelligence into their curricula. “We need philosophers, lawyers and ethicists to help navigate the complex questions that will arise as we give machines more power to make decisions,” he wrote. In addition, political scientists, urban planners, economists, public policy experts, climate scientists and physicians are among those who should harness the power of artificial intelligence to effect positive social change — and ensure that the technology is not hijacked by malicious actors.


 

Villasenor on Widespread Use of Deepfakes

John Villasenor, professor of public policy, electrical engineering and management, spoke to CNBC about the proliferation of “deepfakes” on the internet. Deepfakes — videos or other digital representations that appear real but are actually manipulated by artificial intelligence — are becoming increasingly sophisticated and accessible to the public, Villasenor said. They can make candidates appear to say or do things that undermine their reputation, thus influencing the outcome of elections, he warned. Deepfake detection software is being developed but still lags behind the advanced techniques used to create the misleading messages. “Will people be more likely to believe a deepfake or a detection algorithm that flags the video as fabricated?” Villasenor asked.


 

Villasenor on Strategies to Guard Against ‘Deepfakes’

Public Policy Professor John Villasenor spoke to the Brookings Cafeteria podcast about strategies that voters and other consumers of digital media can adopt to guard against “deepfakes” — videos manipulated with artificial intelligence technology to deceive, parody or, sometimes, educate. “Anybody who has a computer and access to the Internet is in a position to produce deepfakes,” Villasenor said, but he added that the technology to detect the doctored videos is also quickly evolving. He urged consumers of digital media to “unlearn what we’ve learned since we were all small, which is usually seeing is believing. … Deepfakes scramble that understanding.” Even if a video is clearly fake, he said, “the visual imagery is very powerful and so I think it’s a big concern.” Villasenor is a professor of management, law and electrical engineering, in addition to public policy. He is a nonresident senior fellow at the Brookings Institution.

 

Villasenor on Easy Access to Powerful Technology

Public Policy Professor John Villasenor spoke to Business Insider about “deepfakes,” phony videos and digital images manipulated using artificial intelligence. Easy access to both the technology to alter videos and the platforms to distribute them widely has heightened concern about deepfakes, Villasenor said. “Everyone’s a global broadcaster now. So I think it’s those two things together that create a fundamentally different landscape than we had when Photoshop came out,” he said. Altered videos can be used in satire and entertainment, creating complications for legal efforts to crack down on malicious users. Time constraints are also an issue, Villasenor said, citing deepfakes used in political attacks. “Election cycles are influenced over the course of sometimes days or even hours with social media, so if someone wants to take legal action that could take weeks or even months,” he said. “And in many cases, the damage may have already been done.”


 

Villasenor on ‘Deepfakes,’ Free Speech and the 2020 Race

Public Policy Professor John Villasenor narrated a short Atlantic video on the proliferation of “deepfakes,” videos and audio manipulated using sophisticated technology to convincingly present fiction as fact. Deepfakes are “engineered to further undermine our ability to decide what is true and what is not true,” he said. “We are crossing over into an era where we have to be skeptical of what we see on video.” Villasenor, who studies the intersection of digital technology with public policy and the law, predicted that deepfakes will be used to deceive voters during the 2020 presidential campaign yet cautioned against aggressive laws to rein them in. While the technology could harm targeted individuals, the First Amendment protects free expression, including many forms of parody, he said. “As concerning as this technology is, I think it’s important not to rush a whole raft of new laws into place because we risk overcorrecting,” Villasenor said.


 

Villasenor Weighs In on Electronic-Tracking Bracelets

Public Policy Professor John Villasenor spoke to the Associated Press about using technology to locate missing seniors with dementia or autism. Los Angeles County began a program called LA Found to help police, sheriffs, fire departments, nursing homes and hospitals coordinate searches for missing people. LA Found uses a system of electronic bracelets worn by vulnerable people; the bracelets can be tracked by the L.A. County Sheriff’s Department if these individuals go missing. While some technology that tracks people can raise red flags about privacy, experts said in this case the upside outweighs any concerns. Villasenor said the voluntary use of the electronic bracelets “seems like a very good potential use of location-tracking technology.” The report was picked up by several media outlets, including NBC Los Angeles, the San Francisco Chronicle and Fox News.


 

Villasenor on Risk Assessment Tools in Legal Proceedings

John Villasenor, professor of public policy, and UCLA student Virginia Foggo wrote a blog post for the Brookings Institution about the ramifications of using data-driven risk assessment tools in criminal sentencing. Risk assessment tools have raised due process concerns, as offenders have challenged the accuracy and relevance of algorithm-based information used at sentencing, the authors wrote. Offenders argue that they have a right to know what their risk assessment score is, how it was computed and how it is being used, the blog post noted. Moving forward, “a foundational assumption in the dialogue will need to be that the right to due process can’t be collateral damage to the adoption of increasingly sophisticated algorithmic risk assessment technologies,” the authors wrote. Villasenor is currently a nonresident senior fellow in governance studies at the Center for Technology Innovation at Brookings.


 

Villasenor Moderates Dialogue on Free Speech on Campus

John Villasenor of UCLA Luskin Public Policy moderated a conversation with University of California President Janet Napolitano and Assistant Attorney General for Civil Rights Eric Dreiband at a conference on the future of free expression on campus held March 21, 2019, in Washington, D.C. Napolitano said that, from a policy perspective, free speech is widely respected and upheld on U.S. college campuses, but implementing First Amendment protections raises challenges. These include safeguarding individuals or groups targeted by harassing speech, paying for security to maintain order when controversial speakers are invited to campus, and educating the next generation about the value of civil discourse by “not just speaking but listening to all voices,” she said. Citing a survey of college students conducted by Villasenor, Dreiband said many believe that shouting down speakers, or even using violence, is appropriate to counter offensive speech. Free speech is also threatened by the groupthink that takes hold on some campuses, leading to self-censorship by students who agree with controversial speakers but fear retaliation by peers or professors, Dreiband said. The trio’s dialogue came at the close of the “Speech Matters” conference organized by the UC’s National Center for Free Speech and Civic Engagement.



 

Villasenor on ‘Deepfakes’ and the Uncertainty of Truth

Public Policy Professor John Villasenor wrote a piece for the Brookings Institution on “deepfakes” and the uncertainty about truth they create. Villasenor defined deepfakes as intentionally manipulated videos that make a person appear to say or do something they, in fact, did not. He suggested three strategies to address the issue: deepfake detection technology, legal and legislative remedies, and greater public awareness. Detection tools use artificial intelligence to spot image inconsistencies introduced by video manipulation, he said, adding that legal and legislative actions must strike a balance that protects people from deepfakes without overstepping. He said viewers can combat deepfakes by refusing to believe questionable videos are real. “That knowledge won’t stop deepfakes, but it can certainly help blunt their impact,” he said. Villasenor is currently a nonresident senior fellow in Governance Studies and the Center for Technology Innovation at the Brookings Institution.


 

Villasenor Explores Potential Consequences of UCLA Memorandum About Publisher

Public Policy Professor John Villasenor published an article in the Chronicle of Higher Education exploring the potential repercussions of university involvement in boycotts. Amid negotiations for a new contract between UCLA and academic publisher Elsevier, UCLA executives published a memorandum titled “Important Notice Regarding Elsevier Journals” in December 2018, urging UCLA faculty to consider “declining to review articles for Elsevier journals,” “looking at other journal-publishing options” and “contacting the publisher … and letting them know that you share the negotiators’ concerns.” By advocating an Elsevier boycott, Villasenor said, the UCLA administration may be forced to “come up with a framework to decide which types of boycotts the institution can endorse.” Villasenor concluded that the “UCLA administration’s call for faculty members to boycott Elsevier has blurred the lines between grass-roots, faculty-led activism — a time-honored mechanism that can be very effective for social change — and institution-led activism, which raises complex legal, policy and ethical issues.”