Public Policy Professor John Villasenor joined CNN London to discuss the growing threat of deepfake videos, which use artificial intelligence to alter images, swap faces or edit voice audio to create highly realistic footage. In one example, a deepfake video was released showing British Prime Minister Boris Johnson appearing to endorse his political rival, Jeremy Corbyn. Villasenor explained that digital misinformation is a real concern in today’s political environment. “We can expect both here in the United States and in other countries that the technology that can be used for these deepfakes will, in some cases, be used in an attempt to influence elections,” he said. Villasenor explained that there are “subtle differences between the audio and the mouth movements, but you have to be looking carefully.” Moving forward, he urged people to “recalibrate their expectations” and unlearn the habit of assuming that what we see on video is always true.
John Villasenor, professor of public policy, electrical engineering and management, spoke to the Wall Street Journal about the potential challenges of 5G cybersecurity. While 5G is expected to be 100 times faster than 4G, enabling new technologies and strengthening security, Villasenor remained cautious. He predicted that some cybersecurity risks and vulnerabilities will not be addressed right away. “I’m not very confident that we’re going to be on top of these problems,” he said. “People only get cybersecurity right after they get it wrong. We’re going to learn the hard way, and hopefully the mistakes will not be particularly costly and harmful.”
Michael Manville, associate professor of urban planning, spoke to LAist about whether present-day Los Angeles has lived up to the predictions of the 1982 sci-fi cult classic “Blade Runner,” which is set in an imagined 2019. The film presents a “vision of a sort of hyper-dense metropolis of the future … that’s really not pleasant at all,” he said. While the film’s characters have been left behind on Earth, Manville points out that present-day Los Angeles is actually planning for a future with more people. Furthermore, he explains that the film presents aerial transit “in a highly stylized way that ignores most of the actual logistics,” whereas a real-life flying car service in a major city would cause huge congestion problems. “Blade Runner,” Manville concluded, “is one of the great urban backdrops, especially dystopian urban backdrops, in film, but its relevance to the Los Angeles we live in is probably pretty limited.”
John Villasenor, professor of public policy, electrical engineering and management, wrote a report for the Brookings Institution about the intersection between artificial intelligence (AI) and product liability law. While AI-based systems can make decisions that are more objective, consistent and reliable than those made by humans, they sometimes make mistakes, Villasenor wrote. Product liability law can help clarify who is responsible for AI-induced harms, he added. “AI systems don’t simply implement human-designed algorithms. Instead, they create their own algorithms — sometimes by revising algorithms originally designed by humans, and sometimes completely from scratch. This raises complex issues in relation to products liability, which is centered on the issue of attributing responsibility for products that cause harms,” he wrote. “Companies need to bear responsibility for the AI products they create, even when those products evolve in ways not specifically desired or foreseeable by their manufacturers,” he argued.
John Villasenor, professor of public policy, electrical engineering and management, wrote an article for the Chronicle of Higher Education about the importance of preparing college students for an AI future. Artificial intelligence will have a profound and transformative impact — one that college students today have the opportunity to shape, Villasenor said. He advocated for a wide range of disciplines to incorporate issues surrounding artificial intelligence into their curricula. “We need philosophers, lawyers and ethicists to help navigate the complex questions that will arise as we give machines more power to make decisions,” he wrote. In addition, political scientists, urban planners, economists, public policy experts, climate scientists and physicians are among those who should harness the power of artificial intelligence to effect positive social change — and ensure that the technology is not hijacked by malicious actors.
John Villasenor, professor of public policy, electrical engineering and management, spoke to CNBC about the proliferation of “deepfakes” on the internet. Deepfakes — videos or other digital representations that appear real but are actually manipulated by artificial intelligence — are becoming increasingly sophisticated and accessible to the public, Villasenor said. They can make candidates appear to say or do things that undermine their reputation, thus influencing the outcome of elections, he warned. Deepfake detection software is being developed but still lags behind the advanced techniques used to create the misleading messages. “Will people be more likely to believe a deepfake or a detection algorithm that flags the video as fabricated?” Villasenor asked.
Ian Holloway, associate professor of social welfare, has received an Avenir Award of more than $2 million from the National Institute on Drug Abuse to advance his research into health interventions for LGBTQ communities. Holloway leads a UCLA team that is developing a social media tool designed to offer highly personalized health information to prevent substance abuse and HIV infection among gay men. Under a previous grant, the researchers built a library of nearly 12,000 data points made up of text phrases and emojis that correlate with offline health behaviors. Holloway’s Avenir Award will be used to create a machine-learning system that will monitor social media interactions with participants’ consent, then send customized health reminders and other alerts via an app. The team’s goal is to develop a wide-reaching and cost-effective tool to promote public health, said Holloway, director of the Hub for Health Intervention, Policy and Practice at UCLA Luskin. The Avenir Awards, named for the French word for “future,” provide grants to early-stage researchers who propose highly innovative studies, particularly in the field of HIV and addiction.
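The matching step described above — correlating text phrases and emojis in social media posts with health behaviors, then sending tailored alerts — can be illustrated with a minimal, purely hypothetical sketch. All phrases, emojis and reminder labels below are invented, and the real UCLA system uses a machine-learning model over thousands of data points rather than a simple keyword lookup:

```python
# Hypothetical stand-in for the phrase/emoji library described in the article.
# None of these entries come from the actual research data.
SIGNAL_LIBRARY = {
    "partying all weekend": "substance-use check-in",
    "\U0001F489": "HIV testing reminder",  # syringe emoji; invented mapping
}


def flag_post(post: str) -> list[str]:
    """Return the customized reminders whose signal phrases or emojis
    appear in a consenting participant's post."""
    text = post.lower()
    return [reminder for signal, reminder in SIGNAL_LIBRARY.items()
            if signal in text]


# A matched post would trigger an alert through the app.
alerts = flag_post("Partying all weekend \U0001F489")
print(alerts)
```

In the actual system, a trained classifier would replace the dictionary lookup, but the input (consented social media text) and output (personalized health alerts) follow the same shape.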
Juan Matute, deputy director of the Institute of Transportation Studies at UCLA Luskin, spoke to Fox Business about reports that Google Maps will soon launch advertising on the app — to the tune of $11 billion in annual revenues within four years, according to some estimates. The app has become so popular that its users are not expected to strongly object to the ads. “Google has developed a high-quality mapping product with a significant user base over the past two decades. That they haven’t fully monetized it sooner is the anomaly,” Matute said. Linking people with information about nearby businesses, services and events is a useful service, he added. Google has also announced plans to integrate bike riding, ridesharing and transit information into its maps. “Google Maps helps transit and commuters,” Matute said. “It provides them with easy-to-understand, actionable information in context, which can help them make informed travel decisions.”
Urban Planning Professor Chris Tilly spoke to Bloomberg Law about new apps that allow workers to tap into their paychecks ahead of the traditional two-week cycle. At least five tech startups have entered the market, which is primarily aimed at workers who live paycheck to paycheck. By accessing their earnings earlier, people will gain more flexibility in paying bills and avoiding high-interest credit card charges, the services say. However, some observers say that speeding up pay cycles could mask a larger problem: stagnant wages. “The smoothing of pay availability over a pay period is advantageous to people who have very little savings,” said Tilly, a labor economist. “What it doesn’t address is why those people have very little savings in the first place. Low pay is low pay, and this is being intensified by increasing housing, health care and other costs in many places.”
Public Policy Professor John Villasenor spoke to Business Insider about “deepfakes,” phony videos and digital images manipulated using artificial intelligence. Easy access to both the technology to alter videos and the platforms to distribute them widely has heightened concern about deepfakes, Villasenor said. “Everyone’s a global broadcaster now. So I think it’s those two things together that create a fundamentally different landscape than we had when Photoshop came out,” he said. Altered videos can be used in satire and entertainment, creating complications for legal efforts to crack down on malicious users. Time constraints are also an issue, Villasenor said, citing deepfakes used in political attacks. “Election cycles are influenced over the course of sometimes days or even hours with social media, so if someone wants to take legal action that could take weeks or even months,” he said. “And in many cases, the damage may have already been done.”