More stories


    How to run a password update campaign efficiently and with minimal IT costs

    Updating passwords for all users of a company or institution’s internal computer systems is stressful and disruptive to both users and IT professionals. Many studies have looked at user struggles and password best practices. But very little research has been done to determine how a password update campaign can be conducted most efficiently and with minimal IT costs. Until now.
    A team of computer scientists at the University of California San Diego partnered with the campus’ Information Technology Services to analyze the messaging for a campuswide mandatory password change impacting almost 10,000 faculty and staff members. The team found that email notifications to update passwords potentially yielded diminishing returns after three messages. They also found that a prompt to update passwords while users were trying to log in was effective for those who had ignored email reminders. Researchers also found that users whose jobs didn’t require much computer use struggled the most with the update.
    To the team’s knowledge, it’s the first time an empirical analysis of a mandatory password update has been conducted at this large a scale and in the wild, rather than as part of a simulation or controlled experiment.
    The research team hopes that lessons from their analysis will be helpful to IT professionals at other institutions and companies.
    The team presented their work at ACSAC ’23: Annual Computer Security Applications Conference in December 2023.
    During the campaign, almost 10,000 faculty and staff at UC San Diego received four emails at about a weekly interval prompting them to change their single sign-on password. Users who still hadn’t changed their password even after receiving four emails then got a prompt to do so as they logged in.
    The emails were clearly effective, leading between 5 and 15% of users to update their passwords during each wave of emails. However, even after four such email prompts, a quarter of users had not completed the update procedure.

    The finding contradicts a previous study that found 98% of participants changed their passwords after receiving multiple email messages. But that study had a much smaller sample size.
    Remarkably, 80% of the remaining users who hadn’t changed their passwords after the email campaign finally did so when they were prompted at login.
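    The reported percentages can be sanity-checked with a little arithmetic (figures rounded from the article; the exact counts were not published):

```python
# Back-of-the-envelope check of the campaign figures reported above.
total = 10_000                    # approx. faculty and staff in the campaign
after_emails_remaining = 0.25     # a quarter had not updated after four emails
login_prompt_rate = 0.80          # 80% of the remainder updated at the login prompt

remaining = total * after_emails_remaining          # ~2,500 users
updated_at_login = remaining * login_prompt_rate    # ~2,000 users
still_outstanding = remaining - updated_at_login    # ~500 users

print(f"Still outstanding after login prompts: {still_outstanding:.0f} "
      f"({still_outstanding / total:.0%} of all users)")
```

    In other words, the login intercept cut the non-compliant population from roughly a quarter of all users to about one in twenty.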
    “The active single sign-on prompting was a big winner across the board,” said Ariana Mirian, the paper’s first author, who earned her Ph.D. in the UC San Diego Department of Computer Science and Engineering. “You managed to get people who are stubborn, and maybe not paying attention, to take action, and that’s huge.”
    Researchers also noted that, despite concerns from the campus, the campaign did not overwhelm the IT help desk. Overall ticket volume did increase three to four times, but tickets related to the password update represented only 8% of all requests.
    Not surprisingly, the users who struggled the most worked in areas where they are not required to log in to their computers regularly, such as maintenance, recreation and dining services.
    “Targeting such users earlier, or forgoing email reminders and using login intercepts from the start, or even using a different notification mechanism such as text messages, may be more effective,” the researchers write.
    The research was funded in part by the National Science Foundation, the UC San Diego CSE postdoctoral fellows program, the Irwin Mark and Joel Klein Jacobs Chair in Information and Computer Science, and operational support from the UC San Diego Center for Networked Systems.
    An Empirical Analysis of Enterprise-Wide Mandatory Password Updates
    Ariana Mirian, Grant Ho, Stefan Savage and Geoffrey M. Voelker, Department of Computer Science and Engineering, University of California San Diego


    Promising heart drugs ID’d by cutting-edge combo of machine learning, human learning

    University of Virginia scientists have developed a new approach to machine learning — a form of artificial intelligence — to identify drugs that help minimize harmful scarring after a heart attack or other injuries.
    The new machine-learning tool has already found a promising candidate to help prevent harmful heart scarring in a way distinct from previous drugs. The UVA researchers say their cutting-edge computer model has the potential to predict and explain the effects of drugs for other diseases as well.
    “Many common diseases such as heart disease, metabolic disease and cancer are complex and hard to treat,” said researcher Anders R. Nelson, PhD, a computational biologist and former student in the lab of UVA’s Jeffrey J. Saucerman, PhD. “Machine learning helps us reduce this complexity, identify the most important factors that contribute to disease and better understand how drugs can modify diseased cells.”
    “On its own, machine learning helps us to identify cell signatures produced by drugs,” said Saucerman, of UVA’s Department of Biomedical Engineering, a joint program of the School of Medicine and School of Engineering. “Bridging machine learning with human learning helped us not only predict drugs against fibrosis [scarring] but also explain how they work. This knowledge is needed to design clinical trials and identify potential side effects.”
    Combining Machine Learning, Human Learning
    Saucerman and his team combined a computer model based on decades of human knowledge with machine learning to better understand how drugs affect cells called fibroblasts. These cells help repair the heart after injury by producing collagen and contracting the wound. But they can also cause harmful scarring, called fibrosis, as part of the repair process. Saucerman and his team wanted to see if a selection of promising drugs would give doctors more ability to prevent scarring and, ultimately, improve patient outcomes.
    Previous attempts to identify drugs targeting fibroblasts have focused only on selected aspects of fibroblast behavior, and how these drugs work often remains unclear. This knowledge gap has been a major challenge in developing targeted treatments for heart fibrosis. So Saucerman and his colleagues developed a new approach called “logic-based mechanistic machine learning” that not only predicts drugs but also predicts how they affect fibroblast behaviors.

    They began by looking at the effect of 13 promising drugs on human fibroblasts, then used that data to train the machine learning model to predict the drugs’ effects on the cells and how they behave. The model was able to predict a new explanation of how the drug pirfenidone, already approved by the federal Food and Drug Administration for idiopathic pulmonary fibrosis, suppresses contractile fibers inside the fibroblast that stiffen the heart. The model also predicted how another type of contractile fiber could be targeted by the experimental Src inhibitor WH4023, which they experimentally validated with human cardiac fibroblasts.
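    The article does not reproduce the model’s equations. Logic-based network models of this kind are commonly built from normalized-Hill activation functions wired into ordinary differential equations; the sketch below is a minimal one-node illustration under that assumption, with the stimulus and every parameter value purely illustrative:

```python
def hill(x, ec50=0.5, n=1.4):
    """Normalized Hill activation: maps an input in [0, 1] to [0, 1],
    constructed so that f(0) = 0, f(ec50) = 0.5 and f(1) = 1."""
    e = ec50 ** n
    beta = (e - 1) / (2 * e - 1)
    return beta * x ** n / ((beta - 1) + x ** n)

def simulate(stimulus, w=1.0, tau=1.0, dt=0.01, t_end=10.0):
    """Euler-integrate dy/dt = (w * hill(stimulus) - y) / tau.
    Here y could stand for, e.g., the collagen output of a fibroblast node."""
    y = 0.0
    for _ in range(int(t_end / dt)):
        y += dt * (w * hill(stimulus) - y) / tau
    return y

print(simulate(1.0))   # saturating stimulus: node activity relaxes toward 1
print(simulate(0.0))   # no stimulus: node activity stays at 0
```

    In models of this kind, many such nodes are wired together with AND/OR logic, and a candidate drug is represented by scaling the weight or activity of its target node, which is what lets the model explain, not just predict, a drug’s effect.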
    Additional research is needed to verify the drugs work as intended in animal models and human patients, but the UVA researchers say their research suggests mechanistic machine learning represents a powerful tool for scientists seeking to discover biological cause-and-effect. The new findings, they say, speak to the great potential the technology holds to advance the development of new treatments — not just for heart injury but for many diseases.
    “We’re looking forward to testing whether pirfenidone and WH4023 also suppress the fibroblast contraction of scars in preclinical animal models,” Saucerman said. “We hope this provides an example of how machine learning and human learning can work together to not only discover but also understand how new drugs work.”
    The research was supported by the National Institutes of Health, grants HL137755, HL007284, HL160665, HL162925 and 1S10OD021723-01A1.


    Swarming cicadas, stock traders, and the wisdom of the crowd

    Pick almost any location in the eastern United States — say, Columbus, Ohio. Every 13 or 17 years, as the soil warms in springtime, vast swarms of cicadas emerge from their underground burrows singing their deafening song, take flight and mate, producing offspring for the next cycle.
    This noisy phenomenon repeats all over the eastern and southeastern US as 17 distinct broods emerge in staggered years. In spring 2024, billions of cicadas are expected as two different broods — one that appears every 13 years and another that appears every 17 years — emerge simultaneously.
    Previous research has suggested that cicadas emerge once the soil temperature reaches 18°C, but even within a small geographical area, differences in sun exposure, foliage cover or humidity can lead to variations in temperature.
    Now, in a paper published in the journal Physical Review E, researchers from the University of Cambridge have discovered how such synchronous cicada swarms can emerge despite these temperature differences.
    The researchers developed a mathematical model for decision-making in an environment with variations in temperature and found that communication between cicada nymphs allows the group to come to a consensus about the local average temperature that then leads to large-scale swarms. The model is closely related to one that has been used to describe ‘avalanches’ in decision-making like those among stock market traders, leading to crashes.
    Mathematicians have been captivated by the appearance of 17- and 13-year cycles in various species of cicadas, and have previously developed mathematical models that showed how the appearance of such large prime numbers is a consequence of evolutionary pressures to avoid predation. However, the mechanism by which swarms emerge coherently in a given year has not been understood.
    In developing their model, the Cambridge team was inspired by previous research on decision-making that represents each member of a group by a ‘spin’ like that in a magnet, but instead of pointing up or down, the two states represent the decision to ‘remain’ or ‘emerge’.

    The local temperature experienced by the cicadas is then like a magnetic field that tends to align the spins and varies slowly from place to place on the scale of hundreds of metres, from sunny hilltops to shaded valleys in a forest. Communication between nearby nymphs is represented by an interaction between the spins that leads to local agreement of neighbours.
    The researchers showed that in the presence of such interactions the swarms are large and space-filling, involving every member of the population in a range of local temperature environments, unlike the case without communication in which every nymph is on its own, responding to every subtle variation in microclimate.
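    The spin model itself is not reproduced in this article; the toy simulation below illustrates just the consensus mechanism described above, in which averaging over a communication neighbourhood washes out microclimate differences (all numbers are illustrative):

```python
import math

N = 200       # nymphs arranged on a ring of locations
T_c = 18.0    # emergence threshold (degrees C), as reported above
# Local soil temperature: warmer on sunny hilltops, cooler in shaded valleys.
T = [18.2 + math.sin(2 * math.pi * i / N) for i in range(N)]

def emerged_fraction(radius):
    """Each nymph emerges if the average temperature over its communication
    neighbourhood (radius in ring positions; 0 = no communication) exceeds T_c."""
    count = 0
    for i in range(N):
        nbrs = [T[(i + d) % N] for d in range(-radius, radius + 1)]
        if sum(nbrs) / len(nbrs) > T_c:
            count += 1
    return count / N

print(emerged_fraction(0))     # no communication: patchy, partial emergence
print(emerged_fraction(100))   # ring-wide communication: unanimous emergence
```

    With no communication each nymph tracks its own microclimate, so only the warm patches emerge; with wide enough communication every nymph effectively senses the same average temperature (about 18.2 °C here) and the whole population emerges together, mirroring the space-filling swarms the model predicts.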
    The research was carried out by Professor Raymond E Goldstein, the Alan Turing Professor of Complex Physical Systems in the Department of Applied Mathematics and Theoretical Physics (DAMTP), Professor Robert L Jack of DAMTP and the Yusuf Hamied Department of Chemistry, and Dr Adriana I Pesci, a Senior Research Associate in DAMTP.
    “As an applied mathematician, there is nothing more interesting than finding a model capable of explaining the behaviour of living beings, even in the simplest of cases,” said Pesci.
    The researchers say that while their model does not require any particular means of communication between underground nymphs, acoustical signalling is a likely candidate, given the ear-splitting sounds that the swarms make once they emerge from underground.
    The researchers hope that their conjecture regarding the role of communication will stimulate field research to test the hypothesis.
    “If our conjecture that communication between nymphs plays a role in swarm emergence is confirmed, it would provide a striking example of how Darwinian evolution can act for the benefit of the group, not just the individual,” said Goldstein.
    This work was supported in part by the Complex Physical Systems Fund.


    Engineers develop hack to make automotive radar ‘hallucinate’

    A black sedan cruises silently down a quiet suburban road, driver humming Christmas carols quietly while the car’s autopilot handles the driving. Suddenly, red flashing lights and audible warnings blare to life, snapping the driver from their peaceful reverie. They look at the dashboard screen and see the outline of a car speeding toward them for a head-on collision, yet the headlights reveal nothing ahead through the windshield.
    Despite the incongruity, the car’s autopilot grabs control and swerves into a ditch. Exasperated, the driver looks around the vicinity, finding no other vehicles as the incoming danger disappears from the screen. Moments later, the real threat emerges — a group of hijackers jogging toward the immobilized vehicle.
    This scene seems destined to become a common plot point in Hollywood films for decades to come. But due to the complexities of modern automotive detection systems, it remains firmly in the realm of science fiction. At least for the moment.
    Engineers at Duke University, led by Miroslav Pajic, the Dickinson Family Associate Professor of Electrical and Computer Engineering, and Tingjun Chen, assistant professor of electrical and computer engineering, have now demonstrated a system they’ve dubbed “MadRadar” for fooling automotive radar sensors into believing almost anything is possible.
    The technology can hide the approach of an existing car, create a phantom car where none exists or even trick the radar into thinking a real car has quickly deviated from its actual course. And it can achieve this feat in the blink of an eye without having any prior knowledge about the specific settings of the victim’s radar, making it the most troublesome threat to radar security to date.
    The researchers say MadRadar shows that manufacturers should immediately begin taking steps to better safeguard their products.
    The research will be presented at the 2024 Network and Distributed System Security Symposium, taking place February 26 to March 1 in San Diego, California.

    “Without knowing much about the targeted car’s radar system, we can make a fake vehicle appear out of nowhere or make an actual vehicle disappear in real-world experiments,” Pajic said. “We’re not building these systems to hurt anyone, we’re demonstrating the existing problems with current radar systems to show that we need to fundamentally change how we design them.”
    In modern cars that feature assistive and autonomous driving systems, radar is typically used to detect moving vehicles in front of and around the vehicle. It also helps to augment visual and laser-based systems to detect vehicles moving in front of or behind the car.
    Because there are now so many different cars using radar on a typical highway, it is unlikely that any two vehicles will have the exact same operating parameters, even if they share a make and model. For example, they might use slightly different operating frequencies or take measurements at slightly different intervals. Because of this, previous demonstrations of radar-spoofing systems have needed to know the specific parameters being used.
    “Think of it like trying to stop someone from listening to the radio,” explained Pajic. “To block the signal or to hijack it with your own broadcast, you’d need to know what station they were listening to first.”
    In the MadRadar demonstration, the team from Duke showed off the capabilities of a radar-spoofing system they’ve built that can accurately detect a car’s radar parameters in less than a quarter of a second. Once they’ve been discovered, the system can send out its own radar signals to fool the target’s radar.
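    The article doesn’t describe how MadRadar performs this estimation; one simple way to recover a chirp repetition interval from passively observed chirp start times is sketched below, with every value illustrative:

```python
import random
import statistics

# Toy sketch of the parameter-estimation step: recover a victim radar's chirp
# repetition interval from passively observed chirp-start times. MadRadar's
# actual estimator is not detailed in the article; all values are illustrative.
random.seed(0)
true_interval = 50e-6    # assumed 50-microsecond chirp period
jitter = 0.2e-6          # measurement noise on each observed start time

t, observations = 0.0, []
for _ in range(100):     # 100 chirps = 5 ms of listening, well under
    t += true_interval   # the quarter-second the team reports needing
    observations.append(t + random.uniform(-jitter, jitter))

diffs = [b - a for a, b in zip(observations, observations[1:])]
estimated = statistics.median(diffs)   # median is robust to outliers
print(f"estimated chirp interval: {estimated * 1e6:.2f} us")

# With the timing known, a phantom car at range R needs an echo delayed by
# the round-trip time 2R/c after each chirp start.
c = 299_792_458
phantom_delay = 2 * 30 / c   # ~200 ns of delay places the ghost at 30 m
```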
    In one demonstration, MadRadar sends signals to the target car to make it perceive another car where none actually exists. This involves modifying the signal’s characteristics based on time and velocity in such a way that it mimics what a real contact would look like.

    In a second and much more complicated example, it fools the target’s radar into thinking the opposite — that there is no passing car when one actually does exist. It achieves this by delicately adding masking signals around the car’s true location to create a sort of bright spot that confuses the radar system.
    “You have to be judicious about adding signals to the radar system, because if you simply flooded the entire field of vision, it’d immediately know something was wrong,” said David Hunt, a PhD student working in Pajic’s lab.
    In a third kind of attack, the researchers mix the two approaches to make it seem as though an existing car has suddenly changed course. The researchers recommend that carmakers try randomizing a radar system’s operating parameters over time and adding safeguards to the processing algorithms to spot similar attacks.
    “Imagine adaptive cruise control, which uses radar, believing that the car in front of me was speeding up, causing your own car to speed up, when in reality it wasn’t changing speed at all,” said Pajic. “If this were done at night, by the time your car’s cameras figured it out you’d be in trouble.”
    Each of these attack demonstrations, the researchers emphasize, was done on real-world radar systems in actual cars moving at roadway speeds. It’s an impressive feat, given that if the spoofing radar signals are even a microsecond off the mark, the fake datapoint would be misplaced by the length of a football field.
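    That last figure follows directly from how radar converts round-trip time into distance, R = ct/2:

```python
# Quick check of the quoted figure using the radar range relation R = c*t/2
# (the factor of 2 because echoes make a round trip).
c = 299_792_458    # speed of light, m/s
dt = 1e-6          # a one-microsecond timing error
error_m = c * dt / 2
print(round(error_m, 1))   # ~150 m of range error: roughly a football field
```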
    “These lessons go far beyond radar systems in cars as well,” Pajic said. “If you want to build drones that can explore dark environments, like in search and rescue or reconnaissance operations, that don’t cost thousands of dollars, radar is the way to go.”
    This research was supported by the Office of Naval Research (N00014-23-1-2206, N00014-20-1-2745), the Air Force Office of Scientific Research (FA9550-19-1-0169), the National Science Foundation (CNS-1652544, CNS-2211944), and the National AI Institute for Edge Computing Leveraging Next Generation Wireless Networks (Athena) (CNS-2112562).


    Scientists make breakthrough in quantum materials research

    Researchers at the University of California, Irvine and Los Alamos National Laboratory, publishing in the latest issue of Nature Communications, describe the discovery of a new method that transforms everyday materials like glass into materials scientists can use to make quantum computers.
    “The materials we made are substances that exhibit unique electrical or quantum properties because of their specific atomic shapes or structures,” said Luis A. Jauregui, professor of physics & astronomy at UCI and lead author of the new paper. “Imagine if we could transform glass, typically considered an insulating material, and convert it into efficient conductors akin to copper. That’s what we’ve done.”
    Conventional computers use silicon as a semiconductor, but silicon has limits. Quantum computers stand to help bypass these limits, and methods like those described in the new study will help quantum computers become an everyday reality.
    “This experiment is based on the unique capabilities that we have at UCI for growing high-quality quantum materials. How can we transform these materials that are poor conductors into good conductors?” said Jauregui, who’s also a member of UCI’s Eddleman Quantum Institute. “That’s what we’ve done in this paper. We’ve been applying new techniques to these materials, and we’ve transformed them to being good conductors.”
    The key, Jauregui explained, was applying the right kind of strain to materials at the atomic scale. To do this, the team designed a special apparatus called a “bending station” at the machine shop in the UCI School of Physical Sciences that allowed them to apply large strain to change the atomic structure of a material called hafnium pentatelluride from a “trivial” material into a material fit for a quantum computer.
    “To create such materials, we need to ‘poke holes’ in the atomic structure,” said Jauregui. “Strain allows us to do that.”
    “You can also turn the atomic structure change on or off by controlling the strain, which is useful if you want to create an on-off switch for the material in a quantum computer in the future,” said Jinyu Liu, who is the first author of the paper and a postdoctoral scholar working with Jauregui.

    “I am pleased by the way theoretical simulations offer profound insights into experimental observations, thereby accelerating the discovery of methods for controlling the quantum states of novel materials,” said co-author Ruqian Wu, professor of physics and Associate Director of the UCI Center for Complex and Active Materials — a National Science Foundation Materials Research Science and Engineering Center (MRSEC). “This underscores the success of collaborative efforts involving diverse expertise in frontier research.”
    “I’m excited that our team was able to show that these elusive and much-sought-after material states can be made,” said Michael Pettes, study co-author and scientist with the Center for Integrated Nanotechnologies at Los Alamos National Laboratory. “This is promising for the development of quantum devices, and the methodology we demonstrate is compatible for experimentation on other quantum materials as well.”
    Right now, quantum computers only exist in a few places, such as in the offices of companies like IBM, Google and Rigetti. “Google, IBM and many other companies are looking for effective quantum computers that we can use in our daily lives,” said Jauregui. “Our hope is that this new research helps make the promise of quantum computers more of a reality.”
    Funding came from the UCI-MRSEC, an NSF CAREER grant to Jauregui, and Los Alamos National Laboratory Directed Research and Development program funds.


    Paper calls for patient-first regulation of AI in healthcare

    Ever wonder if the latest and greatest artificial intelligence (AI) tool you read about in the morning paper is going to save your life? A new study published in JAMA led by John W. Ayers, Ph.D., of the Qualcomm Institute within the University of California San Diego, finds that question can be difficult to answer, since AI products in healthcare do not universally undergo any externally evaluated approval process assessing how they might benefit patient outcomes before coming to market.
    The research team evaluated the recent White House Executive Order that instructed the Department of Health and Human Services to develop new AI-specific regulatory strategies addressing equity, safety, privacy, and quality for AI in healthcare before April 27, 2024. However, team members were surprised to find the order did not once mention patient outcomes, the standard metric by which healthcare products are judged before being allowed to access the healthcare marketplace.
    “The goal of medicine is to save lives,” said Davey Smith, M.D., head of the Division of Infectious Disease and Global Public Health at UC San Diego School of Medicine, co-director of the university’s Altman Clinical and Translational Research Institute, and study senior author. “AI tools should prove clinically significant improvements in patient outcomes before they are widely adopted.”
    According to the team, AI-powered early warning systems for sepsis, a fatal acute illness among hospitalized patients that affects 1.7 million Americans each year, demonstrate the consequences of inadequate prioritization of patient outcomes in regulations. A third-party evaluation of the most widely adopted AI sepsis prediction model revealed 67% of patients who developed sepsis were not identified by the system. Would hospital administrators have chosen this sepsis prediction system if trials assessing patient outcomes data were mandated, the team wondered, considering the array of available early warning systems for sepsis?
    “We are calling for a revision to the White House Executive Order that prioritizes patient outcomes when regulating AI products,” added John W. Ayers, Ph.D., who is deputy director of informatics in Altman Clinical and Translational Research Institute in addition to his Qualcomm Institute affiliation. “Similar to pharmaceutical products, AI tools that impact patient care should be evaluated by federal agencies for how they improve patients’ feeling, function, and survival.”
    The team points to its 2023 study in JAMA Internal Medicine on using AI-powered chatbots to respond to patient messages as an example of what patient outcome-centric regulations can achieve. “A study comparing standard care versus standard care enhanced by AI conversational agents found differences in downstream care utilization in some patient populations, such as heart failure patients,” said Nimit Desai, B.S., who is a research affiliate at the Qualcomm Institute, UC San Diego School of Medicine student, and study coauthor. “But studies like this don’t just happen unless regulators appropriately incentivize them. With a patient outcomes-centric approach, AI for patient messaging and all other clinical applications can truly enhance people’s lives.”
    The team recognizes that its proposed regulatory strategy can be a significant lift for AI and healthcare industry partners and may not be necessary for every flavor of AI use case in healthcare. However, the researchers say, excluding patient outcomes-centric rules in the White House Executive Order is a serious omission.


    Bringing together real-world sensors and VR to improve building maintenance

    A new system that brings together real-world sensing and virtual reality would make it easier for building maintenance personnel to identify and fix issues in commercial buildings that are in operation. The system was developed by computer scientists at the University of California San Diego and Carnegie Mellon University.
    The system, dubbed BRICK, consists of a handheld device equipped with a suite of sensors to monitor temperature, CO2 and airflow. It is also equipped with a virtual reality environment that has access to the sensor data and metadata in a specific building while being connected to the building’s electronic control system.
    When an issue is reported in a specific location, a building manager can go on-site with the device and quickly scan the space with the Lidar tool on their smartphone, creating a virtual reality version of the space. The scanning can also occur ahead of time. Once they open this mixed reality recreation of the space on a smartphone or laptop, building managers can locate sensors, as well as the data gathered from the handheld device, overlaid onto that mixed reality environment.
    The goal is to allow building managers to quickly identify issues by inspecting hardware and gathering and logging relevant data.
    “Modern buildings are complex arrangements of multiple systems from climate control, lighting and security to occupant management. BRICK enables their efficient operation, much like a modern computer system,” said Rajesh K. Gupta, one of the paper’s senior authors, director of the UC San Diego Halicioglu Data Science Institute and a professor in the UC San Diego Department of Computer Science and Engineering.
    Currently, when building managers receive reports of a problem, they first have to consult the building management database for that specific location. But the system doesn’t tell them where the sensors and hardware are located exactly in that space. So managers have to go to the location, gather more data with cumbersome sensors, then compare that data against the information in the building management system and try to deduce what the issue is. It’s also difficult to log the data gathered at various spatial locations in a precise way.
    By contrast, with BRICK, the building manager can go directly to the location equipped with a handheld device and a laptop or smartphone. They will immediately have access on location to all the building management system data, the location of the sensors and the data from the handheld device, all overlapping in one mixed reality environment. Using this system, operators can also detect faults in the building equipment, from stuck air-control valves to poorly performing air-handling systems.

    In the future, researchers hope to find CO2, temperature and airflow sensors that can directly connect to a smartphone, to enable occupants to take part in managing local environments as well as to simplify building operations.
    A team at Carnegie Mellon built the handheld device. Xiaohan Fu, a computer science Ph.D. student in Gupta’s research group, built the backend and VR components, which build on the team’s earlier work on the BRICK metadata schema, which has been adopted by many commercial vendors.
    Ensuring that the location used in the VR environment was accurate was a major challenge. GPS is only accurate to a radius of about a meter; in this case, the system needs to be accurate to within a few inches. The researchers’ solution was to post a few AprilTags, visual markers similar to QR codes, in every room; the handheld device’s camera reads a tag and recalibrates the system to the correct location.
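    As a concrete illustration of the recalibration step (the real system estimates a full camera pose from the tag; the 2-D version below assumes the device’s orientation is already known, and all coordinates are made up):

```python
# Minimal 2-D sketch of tag-based recalibration. Real AprilTag pose estimation
# is six-degree-of-freedom and uses the camera model; the coordinates here are
# illustrative, and the device's orientation is assumed known and aligned.
tag_world = (12.40, 3.10)            # surveyed tag position in building coords (m)
device_est = (12.10, 2.60)           # device's drifted position estimate (m)
tag_in_device_frame = (0.55, 0.35)   # tag offset measured by the camera (m)

# If the camera sees the tag at a known offset, the device must actually be at:
device_corrected = (tag_world[0] - tag_in_device_frame[0],
                    tag_world[1] - tag_in_device_frame[1])
drift = (device_corrected[0] - device_est[0],
         device_corrected[1] - device_est[1])
print(device_corrected)   # corrected position, accurate to the tag survey
print(drift)              # the accumulated error the tag sighting removes
```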
    “It’s an intricate system,” Fu said. “The mixed reality itself is not easy to build. From a software standpoint, connecting the building management system, where hardware, sensors and actuators are controlled, was a complex task that requires safety and security guarantees in a commercial environment. Our system architecture enables us to do it in an interactive and programmable way.”
    The team presented their work at the BuildSys ’23 conference, held Nov. 15 and 16 in Istanbul, Turkey.
    The work was sponsored by the CONIX Research Center, one of the six centers in JUMP, a Semiconductor Research Corporation program sponsored by DARPA.


    Machine learning guides carbon nanotechnology

    Carbon nanostructures could become easier to design and synthesize thanks to a machine learning method that predicts how they grow on metal surfaces. The new approach, developed by researchers at Japan’s Tohoku University and China’s Shanghai Jiao Tong University, will make it easier to exploit the unique chemical versatility of carbon nanotechnology. The method was published in the journal Nature Communications.
    The growth of carbon nanostructures on a variety of surfaces, including as atomically thin films, has been widely studied, but little is known about the dynamics and atomic-level factors governing the quality of the resulting materials. “Our work addresses a crucial challenge for realizing the potential of carbon nanostructures in electronics or energy processing devices,” says Hao Li of the Tohoku University team.
    The wide range of possible surfaces and the sensitivity of the process to several variables make direct experimental investigation challenging. The researchers therefore turned to machine learning simulations as a more effective way to explore these systems.
    With machine learning, various theoretical models can be combined with data from chemistry experiments to predict the dynamics of carbon crystalline growth and determine how it can be controlled to achieve specific results. The simulation program explores strategies and identifies which ones work and which don’t, without the need for humans to guide every step of the process.
    The researchers tested this approach by investigating simulations of the growth of graphene, a form of carbon, on a copper surface. After establishing the basic framework, they showed how their approach could also be applied to other metallic surfaces, such as titanium, chromium and copper contaminated with oxygen.
    The distribution of electrons around the nuclei of atoms in different forms of graphene crystals can vary. These subtle differences in atomic structure and electron arrangement affect the overall chemical and electrochemical properties of the material. The machine learning approach can test how these differences affect the diffusion of individual atoms and bonded atoms and the formation of carbon chains, arches and ring structures.
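    As a rough illustration of the kind of quantity such simulations work with, the rate of a single diffusion hop follows an Arrhenius law, with an ML-trained potential supplying the energy barrier for each surface. The barriers below are invented placeholders, not values from the paper:

```python
import math

kB = 8.617e-5   # Boltzmann constant, eV/K
T = 1300.0      # a typical growth temperature, K (illustrative)
nu = 1e13       # attempt frequency, 1/s (typical order of magnitude)

def hop_rate(barrier_eV):
    """Arrhenius rate for a single adatom diffusion hop."""
    return nu * math.exp(-barrier_eV / (kB * T))

# Hypothetical barriers for carbon adatom hops on different substrates:
for surface, barrier in [("substrate A", 0.1),
                         ("substrate B", 0.3),
                         ("substrate C", 0.7)]:
    print(f"{surface}: {hop_rate(barrier):.3g} hops/s")
```

    Small differences in barrier height translate into orders-of-magnitude differences in hop rate, which is why growth outcomes are so sensitive to the substrate and why machine-learned barriers are valuable.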
    The team validated the results of the simulations through experiments and found that they closely matched. “Overall, our work provides a practical and efficient method for designing metallic or alloy substrates to achieve desired carbon nanostructures and explore further opportunities,” Li says.
    He adds that future work will build on this to investigate topics such as the interfaces between solids and liquids in advanced catalysts and the chemical properties of materials used for processing and storing energy.