Posts

Political Courage Is Key to Curing Traffic Ills, Manville Says

UCLA Luskin Urban Planning chair Michael Manville spoke to the Los Angeles Times about plans to tap into artificial intelligence to find ways to make California’s roads safer and less congested. Caltrans is asking tech companies to pitch generative AI tools that could analyze immense amounts of data quickly, perhaps helping the state’s traffic engineers make decisions on signal timing and lane usage. Manville said that the problem is not a lack of data-backed solutions but rather a lack of political courage to put existing solutions, such as congestion pricing, into play. “If you want to make cities safer for pedestrians, if you want to lower speeds, if you want to deal with congestion in a meaningful way, technology is not going to rescue you from difficult political decisions,” he said.


 

Europe’s Challenge of Fostering Growth, Sharing the Wealth

Michael Storper, distinguished professor of urban planning, spoke to the podcast Regio Waves about strategies Europe can employ to foster growth without exacerbating social inequality. In parts of Europe, participation in the labor force is flagging and wages are stagnant. One factor contributing to the malaise is the rise of artificial intelligence. “This is the moment for Europe to shape its future with artificial intelligence and to make sure that it is deployed in a way that augments people’s skills,” Storper told the podcast, which is produced by the European Commission. “We have to think about the interconnectedness of all territories,” he said. “That means that even as we focus on making Europe dynamic and innovative, mostly in the big and middle-sized cities, we want to make sure the other territories are maintained in a way that makes their conditions of life good and makes the human potential of every new generation there ready to contribute to the overall European dynamic.”


 

Shoup Reflects on Evolution of Parking Industry

Urban Planning Professor Donald Shoup wrote an article in Parking Today about changes in the parking industry over the last 25 years. For most of the 20th century, the industry was stagnant, with parking meters that “looked identical to the original ones introduced in 1935,” Shoup explained. Since he published “The High Cost of Free Parking” in 2005, new technologies have made it possible to measure occupancy, charge variable prices for curb parking and make paying for parking much easier. Using license-plate-recognition cameras, parking apps and voice commands, many cities have been able to adopt demand-based pricing for curb parking. Shoup predicted that in the future, artificial intelligence may be able to determine optimal parking spots for price and time. “Better parking management can improve cities, the economy and the environment,” Shoup wrote. “The parking industry can help save the world, one space at a time.”


Product Liability Law Can Mitigate AI Harms, Villasenor Says

John Villasenor, professor of public policy, electrical engineering and management, wrote a report for the Brookings Institution about the intersection of artificial intelligence (AI) and product liability law. While AI-based systems can make decisions that are more objective, consistent and reliable than those made by humans, they sometimes make mistakes, Villasenor wrote. Product liability law can help clarify who is responsible for AI-induced harms, he added. “AI systems don’t simply implement human-designed algorithms. Instead, they create their own algorithms — sometimes by revising algorithms originally designed by humans, and sometimes completely from scratch. This raises complex issues in relation to products liability, which is centered on the issue of attributing responsibility for products that cause harms,” he wrote. “Companies need to bear responsibility for the AI products they create, even when those products evolve in ways not specifically desired or foreseeable by their manufacturers,” he argued.


 

Villasenor on AI Curriculum in Higher Education

John Villasenor, professor of public policy, electrical engineering and management, wrote an article for the Chronicle of Higher Education about the importance of preparing college students for an AI future. Artificial intelligence will have a profound and transformative impact — one that college students today have the opportunity to shape, Villasenor said. He advocated for a wide range of disciplines to incorporate issues surrounding artificial intelligence into their curricula. “We need philosophers, lawyers and ethicists to help navigate the complex questions that will arise as we give machines more power to make decisions,” he wrote. In addition, political scientists, urban planners, economists, public policy experts, climate scientists and physicians are among those who should harness the power of artificial intelligence to effect positive social change — and ensure that the technology is not hijacked by malicious actors.


 

Villasenor on Widespread Use of Deepfakes

John Villasenor, professor of public policy, electrical engineering and management, spoke to CNBC about the proliferation of “deepfakes” on the internet. Deepfakes — videos or other digital representations that appear real but are actually manipulated by artificial intelligence — are becoming increasingly sophisticated and accessible to the public, Villasenor said. They can make candidates appear to say or do things that undermine their reputation, thus influencing the outcome of elections, he warned. Deepfake detection software is being developed but still lags behind the advanced techniques used to create the misleading messages. “Will people be more likely to believe a deepfake or a detection algorithm that flags the video as fabricated?” Villasenor asked.


 

Villasenor on ‘Deepfakes,’ Free Speech and the 2020 Race

Public Policy Professor John Villasenor narrated a short Atlantic video on the proliferation of “deepfakes,” videos and audio manipulated using sophisticated technology to convincingly present fiction as fact. Deepfakes are “engineered to further undermine our ability to decide what is true and what is not true,” he said. “We are crossing over into an era where we have to be skeptical of what we see on video.” Villasenor, who studies the intersection of digital technology with public policy and the law, predicted that deepfakes will be used to deceive voters during the 2020 presidential campaign yet cautioned against aggressive laws to rein them in. While the technology could harm targeted individuals, the First Amendment protects free expression, including many forms of parody, he said. “As concerning as this technology is, I think it’s important not to rush a whole raft of new laws into place because we risk overcorrecting,” Villasenor said.


 

Villasenor on Risk Assessment Tools in Legal Proceedings

John Villasenor, professor of public policy, and UCLA student Virginia Foggo wrote a blog post for the Brookings Institution about the ramifications of using data-driven risk assessment tools in criminal sentencing. Risk assessment tools have raised due process concerns, as offenders have challenged the accuracy and relevance of algorithm-based information used at sentencing, the authors wrote. Offenders argue that they have a right to know what their risk assessment score is, how it was computed and how it is being used, the blog post noted. Moving forward, “a foundational assumption in the dialogue will need to be that the right to due process can’t be collateral damage to the adoption of increasingly sophisticated algorithmic risk assessment technologies,” the authors wrote. Villasenor is currently a nonresident senior fellow in Governance Studies at the Center for Technology Innovation at Brookings.


 

Villasenor on ‘Deepfakes’ and the Uncertainty of Truth

Public Policy Professor John Villasenor wrote a piece for the Brookings Institution on “deepfakes” and the uncertainty they cast over what is true. Villasenor defined deepfakes as intentionally manipulated videos that make a person appear to say or do something they, in fact, did not. He suggested three strategies to address the issue: deepfake detection technology, legal and legislative remedies, and an increase in public awareness. Detection tools would use artificial intelligence to spot image inconsistencies introduced by video manipulation, he said, adding that legal and legislative actions must strike a balance between protecting people from deepfakes and not overstepping. He said viewers can combat deepfakes by refusing to believe questionable videos are real. “That knowledge won’t stop deepfakes, but it can certainly help blunt their impact,” he said. Villasenor is currently a nonresident senior fellow in Governance Studies and the Center for Technology Innovation at the Brookings Institution.


 

Villasenor on the Growing Promise of Artificial Intelligence

In a recent article for the Brookings Institution, UCLA Luskin Public Policy Professor John Villasenor commented on the increasing presence of artificial intelligence in fields as diverse as geopolitics, manufacturing, trade, agriculture and transportation. As research and funding increase at a dramatic rate, innovation in AI is becoming ever more synonymous with technological progress and economic growth. Villasenor enumerated the many advantages of artificial intelligence, noting that “AI will make it easier to predict violent storms. It can help with drug development to help reduce the impact of disease. It can improve agricultural yields, and help manage the complexities of the supply chain for food [and] medicine.” But AI is more than a marker of technological progress: Villasenor ended his piece with the overarching conclusion that “as we move towards the middle of the 21st century, a nation’s geopolitical standing and its strength in AI will be increasingly intertwined.”