A new Retail Dive article highlights Urban Planning Professor Chris Tilly’s research on the impacts of technology on retail to better understand pandemic job losses. Tilly and co-author Françoise Carré’s research paper “Change and Uncertainty, Not Apocalypse: Technological Change and Store-Based Retail” delves into the technological and structural shifts occurring within the retail sector. They found that e-commerce, decentralized checkout and other technologies could eliminate cashier jobs and even some management jobs. Tilly also noted the racial implications of such industry changes: Job losses are concentrated in the general merchandise sector, which employs far higher percentages of women and people of color, while the growing e-commerce sector is considerably whiter and more male. COVID-19 has accelerated change within the retail sector as contactless checkout and curbside pickup options emerge and online sales skyrocket, heightening the uncertainties faced by retail workers.
Fox 10 News in Phoenix spoke to Assistant Professor of Urban Planning V. Kelly Turner about research measuring how a person experiences heat. The interview, conducted during a triple-digit heat wave in Arizona, focused on a mobile weather station known as MaRTy, invented by Turner’s research partner, Arizona State University Assistant Professor Ariane Middel. The robot collects climatological data to determine “mean radiant temperature,” which “gives us a sense of how pedestrians experience heat, not just how the ground feels to the touch,” Turner explained. The research adds to the body of knowledge surrounding the urban heat island effect, which makes high temperatures in cities even more unbearable. Turner and Middel have been walking MaRTy around urban areas in Phoenix, Tempe and Los Angeles to help city leaders determine which areas would benefit most from increasing shade and planting trees.
A surge in telehealth services amid the COVID-19 pandemic presents an opportunity to bridge inequities in access to health services, according to a report from the UCLA Latino Policy and Politics Initiative (LPPI) and the David Geffen School of Medicine at UCLA. A previous study from LPPI found that more than 7 million Latinos in California lack adequate access to health care. Telehealth could lessen that shortfall if implemented strategically, according to the new report. “Latinos, who are twice as likely to lack health insurance as other Californians, are increasingly online and have high adoption of cellphone technology,” said Sonja Diaz, LPPI founding director. “Telehealth can serve as an important bridge to ensure that underserved communities, especially rural and linguistically diverse patients, access the medical attention they need.” Telehealth, which has surfaced as a medical screening tool during the pandemic, also has applications in mental health and social services settings. “Telehealth will never replace the importance of face-to-face interactions between a patient and their doctor,” said report co-author Yohualli B. Anaya, a physician at UCLA Medical Center, Santa Monica. “But improving access to high-quality care is an important first step that can start to address systemic inequities in health care and save lives.” The report offers guidelines to help California advance telehealth in underserved communities, including accommodating monolingual Spanish speakers and expanding access to broadband technology.
Public Policy Professor John Villasenor joined CNN London to discuss the growing threat of deepfake videos, which use artificial intelligence to alter images, swap faces or edit voice audio to create highly realistic footage. In one example, a deepfake video was released showing British Prime Minister Boris Johnson appearing to endorse his political rival, Jeremy Corbyn. Villasenor explained that digital misinformation is a real concern in today’s political environment. “We can expect both here in the United States and in other countries that the technology that can be used for these deepfakes will, in some cases, be used in an attempt to influence elections,” he said. He noted that there are “subtle differences between the audio and the mouth movements, but you have to be looking carefully.” Moving forward, he urges people to “recalibrate their expectations” and unlearn the habit of assuming that what we see on video is always true.
John Villasenor, professor of public policy, electrical engineering and management, spoke to the Wall Street Journal about the potential challenges of 5G cybersecurity. While 5G is expected to be 100 times faster than 4G, enabling new technologies and strengthening security, Villasenor remained cautious. He predicted that some cybersecurity risks and vulnerabilities will not be addressed right away. “I’m not very confident that we’re going to be on top of these problems,” he said. “People only get cybersecurity right after they get it wrong. We’re going to learn the hard way, and hopefully the mistakes will not be particularly costly and harmful.”
Michael Manville, associate professor of urban planning, spoke to LAist about how Los Angeles today has lived up to the predictions of the 1982 sci-fi cult classic “Blade Runner,” which takes place in an imagined future 2019. The film presents a “vision of a sort of hyper-dense metropolis of the future … that’s really not pleasant at all,” he said. While the film’s characters have been left behind on Earth, Manville points out that present-day Los Angeles is actually planning for a future with more people. Furthermore, he explains that the film presents aerial transit “in a highly stylized way that ignores most of the actual logistics,” whereas a real-life flying car service in a major city would cause huge congestion problems. “Blade Runner,” Manville concluded, “is one of the great urban backdrops, especially dystopian urban backdrops, in film, but its relevance to the Los Angeles we live in is probably pretty limited.”
John Villasenor, professor of public policy, electrical engineering and management, wrote a report for the Brookings Institution about the intersection between artificial intelligence (AI) and product liability law. While AI-based systems can make decisions that are more objective, consistent and reliable than those made by humans, they sometimes make mistakes, Villasenor wrote. Product liability law can help clarify who is responsible for AI-induced harms, he added. “AI systems don’t simply implement human-designed algorithms. Instead, they create their own algorithms — sometimes by revising algorithms originally designed by humans, and sometimes completely from scratch. This raises complex issues in relation to products liability, which is centered on the issue of attributing responsibility for products that cause harms,” he wrote. “Companies need to bear responsibility for the AI products they create, even when those products evolve in ways not specifically desired or foreseeable by their manufacturers,” he argued.
John Villasenor, professor of public policy, electrical engineering and management, wrote an article for the Chronicle of Higher Education about the importance of preparing college students for an AI future. Artificial intelligence will have a profound and transformative impact — one that college students today have the opportunity to shape, Villasenor said. He advocated for a wide range of disciplines to incorporate issues surrounding artificial intelligence into their curricula. “We need philosophers, lawyers and ethicists to help navigate the complex questions that will arise as we give machines more power to make decisions,” he wrote. In addition, political scientists, urban planners, economists, public policy experts, climate scientists and physicians are among those who should harness the power of artificial intelligence to effect positive social change — and ensure that the technology is not hijacked by malicious actors.
John Villasenor, professor of public policy, electrical engineering and management, spoke to CNBC about the proliferation of “deepfakes” on the internet. Deepfakes — videos or other digital representations that appear real but are actually manipulated by artificial intelligence — are becoming increasingly sophisticated and accessible to the public, Villasenor said. They can make candidates appear to say or do things that undermine their reputation, thus influencing the outcome of elections, he warned. Deepfake detection software is being developed but still lags behind advanced techniques used in creating the misleading messages. “Will people be more likely to believe a deepfake or a detection algorithm that flags the video as fabricated?” Villasenor asked.
Ian Holloway, associate professor of social welfare, has received an Avenir Award of more than $2 million from the National Institute on Drug Abuse to advance his research into health interventions for LGBTQ communities. Holloway leads a UCLA team that is developing a social media tool designed to offer highly personalized health information to prevent substance abuse and HIV infection among gay men. Under a previous grant, the researchers built a library of nearly 12,000 data points made up of text phrases and emojis that correlate with offline health behaviors. Holloway’s Avenir Award will be used to create a machine-learning system that will monitor social media interactions with participants’ consent, then send customized health reminders and other alerts via an app. The team’s goal is to develop a wide-reaching and cost-effective tool to promote public health, said Holloway, director of the Hub for Health Intervention, Policy and Practice at UCLA Luskin. The Avenir Awards, named for the French word for “future,” provide grants to early-stage researchers who propose highly innovative studies, particularly in the field of HIV and addiction.