More stories

  • Smart devices: Putting a premium on peace of mind

    Two out of five homes worldwide have at least one smart device that is vulnerable to cyber-attacks. Soon, that new smart TV or robot vacuum you’ve been considering for your home will come with a label that helps you gauge whether the device is secure and protected from bad actors trying to spy on you or sell your data.
    In July, the White House announced plans to roll out voluntary labeling for internet-connected devices like refrigerators, thermostats and baby monitors that meet certain cybersecurity standards, such as requiring data de-identification and automatic security updates.
    For tech companies that choose to participate, the good news is that there is a market for such a guarantee. A new survey of U.S. consumers shows that they are willing to pay a significant premium to know, before they buy, which gadgets respect their privacy and are safe from security attacks.
    But voluntary product labels may not be enough if the program is going to protect consumers in the long run, the authors of the study caution.
    “Device manufacturers that do not care about security and privacy might decide not to disclose at all,” said Duke University assistant professor of computer science Pardis Emami-Naeini, who conducted the survey with colleagues at Carnegie Mellon University. “That’s not what we want.”
    The average household in the U.S. now has more than 20 devices connected to the internet, all collecting and sharing data. Fitness trackers measure your steps and monitor the quality of your sleep. Smart lights track your phone’s location and turn on as soon as you pull in the driveway. Video doorbells let you see who’s at the door — even when you’re not home.

  • Uncovering the Auger-Meitner Effect’s crucial role in electron energy loss

    Defects often limit the performance of devices such as light-emitting diodes (LEDs). The mechanisms by which defects annihilate charge carriers are well understood in materials that emit light at red or green wavelengths, but an explanation has been lacking for such loss in shorter-wavelength (blue or ultraviolet) emitters.
    Researchers in the Department of Materials at UC Santa Barbara, however, recently uncovered the crucial role of the Auger-Meitner effect, a mechanism that allows an electron to lose energy by kicking another electron up to a higher-energy state.
    “It is well known that defects or impurities — collectively referred to as ‘traps’ — reduce the efficiency of LEDs and other electronic devices,” said Materials Professor Chris Van de Walle, whose group performed the research.
    The new methodology revealed that the trap-assisted Auger-Meitner effect can produce loss rates that are orders of magnitude greater than those caused by other previously considered mechanisms, thus resolving the puzzle of how defects affect the efficiency of blue or UV light emitters. The findings are published in the journal Physical Review Letters.
    Observations of this phenomenon date back to the 1950s, when researchers at Bell Labs and General Electric observed its detrimental impact on transistors. Van de Walle explained that electrons can get trapped at defects and become unable to perform their intended role in the device, be it amplifying a charge in a transistor or emitting light by recombining with a hole (an unoccupied lower-energy state) in an LED. The energy lost in this recombination process was assumed to be released in the form of phonons, i.e., lattice vibrations that heat up the device.
    Van de Walle’s group had previously modeled this phonon-mediated process and found that it duly fitted the observed efficiency loss in LEDs that emit light in the red or green regions of the spectrum. However, for blue or ultraviolet LEDs the model failed; the larger amount of energy carried by the electrons at these shorter wavelengths simply cannot be dissipated in the form of phonons.
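    A rough back-of-the-envelope comparison (using assumed, textbook-level numbers rather than figures from the study) illustrates the scale of the problem: a recombination event in a blue or UV emitter releases roughly 3 eV, while a single optical phonon in gallium nitride carries only about 0.09 eV, so dozens of phonons would have to be emitted in one event, and multiphonon emission becomes rapidly less likely as that number grows.

        # Illustrative arithmetic only; 0.09 eV is an assumed typical GaN optical-phonon
        # energy, and the wavelengths are generic, not values from the paper.
        HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

        def photon_energy_ev(wavelength_nm: float) -> float:
            """Photon energy E = hc / wavelength."""
            return HC_EV_NM / wavelength_nm

        PHONON_EV = 0.09  # assumed optical-phonon quantum in GaN

        for label, wl in [("red", 630.0), ("green", 530.0), ("blue", 450.0), ("UV", 365.0)]:
            e = photon_energy_ev(wl)
            print(f"{label:>5}: {e:.2f} eV photon -> ~{e / PHONON_EV:.0f} phonons to dissipate")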
    “This is where the Auger-Meitner process comes in,” explained Fangzhou Zhao, a postdoctoral researcher in Van de Walle’s group and the project’s lead researcher. The researchers found that, instead of releasing energy in the form of phonons, the electron transfers its energy to another electron that gets kicked up to a higher energy state. This process is often referred to as the Auger effect, after Pierre Auger, who reported it in 1923. However, Lise Meitner — whose many accomplishments were never properly recognized during her lifetime — had already described the same phenomenon in 1922.
    Experimental work in the group of UC Santa Barbara materials professor James Speck had suggested previously that trap-assisted Auger-Meitner processes could occur; however, based on measurements alone, it is difficult to rigorously distinguish between different recombination channels. Zhao and his co-researchers developed a first-principles methodology that, combined with cutting-edge computations, conclusively established the crucial role of the Auger-Meitner process. In the case of gallium nitride, the key material used in commercial LEDs, the results showed trap-assisted recombination rates that were more than a billion times greater than if only the phonon-mediated process had been considered. Clearly, not every trap will show such huge enhancements, but with the new methodology in hand, researchers can now accurately assess which defects or impurities are actually detrimental to the efficiency.
    “The computational formalism is completely general and can be applied to any defect or impurity in semiconducting or insulating materials,” said Mark Turiansky, another postdoctoral researcher in Van de Walle’s group who was involved in the project. The researchers hope that these results will increase understanding of recombination mechanisms not only in semiconductor light emitters, but also in any wide-band-gap material in which defects limit efficiency.
    The research was supported by the Department of Energy Office of Basic Energy Sciences and a Department of Defense Vannevar Bush Faculty Fellowship, which was awarded to Van de Walle in 2022. Zhao was the recipient of an Elings Prize Postdoctoral Fellowship. The computations were performed at the National Energy Research Scientific Computing Center (NERSC).

  • Self-supervised AI learns physics to reconstruct microscopic images from holograms

    Researchers from the UCLA Samueli School of Engineering have unveiled an artificial intelligence-based model for computational imaging and microscopy that requires no training on experimental objects or real data.
    In a recent paper published in Nature Machine Intelligence, UCLA’s Volgenau Professor for Engineering Innovation Aydogan Ozcan and his research team introduced a self-supervised AI model nicknamed GedankenNet that learns from physics laws and thought experiments.
    Artificial intelligence has revolutionized the imaging process across various fields — from photography to sensing. The application of AI in microscopy, however, has continued to face persistent challenges. For one, existing AI-powered models rely heavily on human supervision and large-scale, pre-labeled data sets, requiring laborious and costly experiments with numerous samples. Moreover, these methodologies often struggle to process new types of samples or experimental set-ups.
    With GedankenNet, the UCLA team drew inspiration from Albert Einstein’s hallmark Gedanken experiment (German for “thought experiment”) approach, in which visualized, conceptual thought experiments guided the creation of the theory of relativity.
    Informed only by the laws of physics that universally govern the propagation of electromagnetic waves in space, the researchers taught their AI model to reconstruct microscopic images using only random artificial holograms — synthesized solely from “imagination” without relying on any real-world experiments, actual sample resemblances or real data.
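    As a loose illustration of the kind of wave-propagation physics involved (this is not the authors’ model or code, and the wavelength, pixel size and propagation distance below are arbitrary example values), a purely synthetic object can be turned into a simulated in-line hologram with the standard angular-spectrum method:

        import numpy as np

        def angular_spectrum_propagate(field, wavelength, dx, z):
            """Propagate a 2-D complex field a distance z in free space."""
            ny, nx = field.shape
            fx = np.fft.fftfreq(nx, d=dx)
            fy = np.fft.fftfreq(ny, d=dx)
            FX, FY = np.meshgrid(fx, fy)
            arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
            kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
            H = np.exp(1j * kz * z) * (arg > 0)  # free-space transfer function
            return np.fft.ifft2(np.fft.fft2(field) * H)

        rng = np.random.default_rng(0)
        synthetic_object = np.exp(1j * 2 * np.pi * rng.random((256, 256)))  # random phase object
        field_at_sensor = angular_spectrum_propagate(
            synthetic_object, wavelength=0.5e-6, dx=1.0e-6, z=300e-6)
        hologram = np.abs(field_at_sensor) ** 2  # intensity-only recording, as a sensor sees it

    Pairs of (hologram, synthetic_object) generated this way are one plausible form of physics-only training data; the actual training objective and network architecture are described in the paper.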
    Following GedankenNet’s “thought training,” the team tested the AI model using 3D holographic images of human tissue samples captured with a new experimental set-up. In its first attempt, GedankenNet successfully reconstructed the microscopic images of human tissue samples and Pap smears from their holograms.
    Compared with state-of-the-art microscopic image reconstruction methods based on supervised learning using large-scale experimental data, GedankenNet exhibited superior generalization to unseen samples without relying on any experimental data or prior information on samples. In addition to providing better microscopic image reconstruction, GedankenNet also generated output light waves that are consistent with the physics of wave equations, accurately representing the 3D light propagation in space.
    “These findings illustrate the potential of self-supervised AI to learn from thought experiments, just like scientists do,” said Ozcan, who holds faculty appointments in the departments of Electrical and Computer Engineering, and Bioengineering at UCLA Samueli. “It opens up new opportunities for developing physics-compatible, easy-to-train and broadly generalizable neural network models as an alternative to standard, supervised deep learning methods currently employed in various computational imaging tasks.”
    The other authors of the paper are graduate students Luzhe Huang (first author) and Hanlong Chen, as well as postdoctoral scholar Tairan Liu from the UCLA Electrical and Computer Engineering Department. Ozcan also holds a faculty appointment at the David Geffen School of Medicine at UCLA and is an associate director of the California NanoSystems Institute.

  • Using social media to raise awareness of women’s resources

    The Covid-19 pandemic created a global increase in domestic violence against women. Now, an MIT-led experiment designed with that fact in mind shows that some forms of social media can increase awareness among women about where to find resources and support for addressing domestic violence.
    In the randomized experiment, set in Egypt, women recruited via Facebook were sent videos via social media as well as reminders to watch television programming from a well-known Egyptian human rights lawyer focused on gender norms and violence. The study found that receiving the videos or reminders increased consumption of media content about the issue, increased knowledge about the resources available, and increased reported and hypothetical use of some resources in response to violence. The experiment did not appear to change long-term attitudes about gender and marital equity or sexual violence, however.
    “We found that women did exhibit increased knowledge of where they can find resources about what to do, and how to get informed,” says MIT Professor Fotini Christia, who led the study.
    The experiment also showed the texting service WhatsApp to be the best way to ensure participants viewed the messages or videos, at least compared to Facebook in this setting.
    “There was increased consumption of the content through social media,” Christia says. “We found a lot more effectiveness with WhatsApp than Facebook in terms of reaching our audience and getting the message out.”
    The paper, “Can Media Campaigns Empower Women Facing Gender-Based Violence amid COVID-19?,” is being published today in Nature Human Behaviour. The project is backed, in part, by MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL), which supports randomized field experiments offering solutions to poverty and other social issues.
    The authors are Christia, who is the Ford International Professor of the Social Sciences in MIT’s Department of Political Science and the director of the MIT Sociotechnical Systems Research Center (SSRC); Horacio Larreguy, an associate professor of economics and political science at the Instituto Tecnologico Autonomo de Mexico, in Mexico City; Elizabeth Parker-Magyar, a PhD candidate in MIT’s Department of Political Science; and Manuel Quintero, an incoming PhD student in MIT’s doctoral Program in Social and Engineering Systems.

  • Out with the life coach, in with the chatbot

    As we start to edge out of winter, improving our diet and boosting our exercise start to appear on our agenda. But, when it comes to encouraging a healthier lifestyle, it may surprise you that artificial intelligence could be your best friend.
    Now, in the first systematic review and meta-analysis of its kind, researchers at the University of South Australia show that chatbots are an effective tool for significantly improving physical activity, diet and sleep, a timely step towards getting ready for the warmer months ahead.
    Published in npj Digital Medicine, the study found that chatbots — otherwise known as conversational agents or virtual assistants — can quickly and capably help you increase your daily steps, add extra fruits and vegetables to your diet, and even improve sleep duration and quality.
    Specifically, chatbots led to an extra 735 steps per day, one additional serving of fruit and vegetables per day, and an additional 45 minutes of sleep per night.
    Insufficient physical activity, excessive sedentary behaviour, poor diet and poor sleep are major global health issues and are among the leading modifiable causes of depression, anxiety and chronic diseases including type 2 diabetes, cardiovascular disease, obesity, cancers and increased mortality.
    Lead researcher Dr Ben Singh of UniSA says the findings highlight the potential of artificial intelligence to revolutionise healthcare delivery.
    “When we think of chatbots, we often think of simple applications such as daily news notifications or Uber orders. But in recent years, this technology has advanced to the point where it can sometimes be hard to determine whether you are chatting to a machine, or a real person,” Dr Singh says.

  • AI transformation of medicine: Why doctors are not prepared

    As artificial intelligence systems like ChatGPT find their way into everyday use, physicians will start to see these tools incorporated into their clinical practice to help them make important decisions on diagnosis and treatment of common medical conditions. These tools, called clinical decision support (CDS) algorithms, can be enormously helpful in guiding health care providers in determining, for example, which antibiotics to prescribe or whether to recommend a risky heart surgery.
    The success of these new technologies, however, depends largely on how physicians interpret and act upon a tool’s risk predictions — and that requires a unique set of skills that many are currently lacking, according to a new perspective article published today in the New England Journal of Medicine that was written by faculty in the University of Maryland School of Medicine (UMSOM).
    CDS algorithms, which make predictions under conditions of clinical uncertainty, can include everything from regression-derived risk calculators to sophisticated machine learning and artificial intelligence-based systems. They can be used to predict which patients are most likely to go into life-threatening sepsis from an uncontrolled infection or which therapy has the highest probability of preventing sudden death in an individual heart disease patient.
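    As a minimal sketch of what a “regression-derived risk calculator” can look like (the variables and coefficients below are invented for illustration and have no clinical validity), such a tool typically maps a handful of patient features through a logistic function to a predicted probability:

        import math

        def illustrative_sepsis_risk(temp_c: float, heart_rate: float, lactate: float) -> float:
            """Toy logistic risk score; the coefficients are made up for illustration."""
            logit = -8.0 + 0.12 * (temp_c - 37.0) + 0.03 * heart_rate + 0.9 * lactate
            return 1.0 / (1.0 + math.exp(-logit))

        # A febrile, tachycardic patient with elevated lactate gets roughly a 19% predicted risk.
        print(round(illustrative_sepsis_risk(temp_c=38.9, heart_rate=118, lactate=3.1), 2))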
    “These new technologies have the potential to significantly impact patient care, but doctors need to first learn how machines think and work before they can incorporate algorithms into their medical practice,” said Daniel Morgan, MD, MS, Professor of Epidemiology & Public Health at UMSOM and co-author of the perspective.
    While some clinical decision support tools are already incorporated into electronic medical record systems, health care providers often find the current software to be cumbersome and difficult to use. “Doctors don’t need to be math or computer experts, but they do need to have a baseline understanding of what an algorithm does in terms of probability and risk adjustment, but most have never been trained in those skills,” said Katherine Goodman, JD, PhD, Assistant Professor of Epidemiology & Public Health at UMSOM and co-author of the perspective.
    To address this gap, medical education and clinical training need to incorporate explicit coverage of probabilistic reasoning tailored specifically to CDS algorithms. Drs. Morgan, Goodman, and their co-author Adam Rodman, MD, MPH, at Beth Israel Deaconess Medical Center in Boston, proposed the following:
    • Improve Probabilistic Skills: Early in medical school, students should learn the fundamental aspects of probability and uncertainty and use visualization techniques to make thinking in terms of probability more intuitive. This training should include interpreting performance measures like sensitivity and specificity to better understand test and algorithm performance (a brief numeric sketch appears below).
    • Incorporate Algorithmic Output into Decision Making: Physicians should be taught to critically evaluate and use CDS predictions in their clinical decision-making. This training involves understanding the context in which algorithms operate, recognizing limitations, and considering relevant patient factors that algorithms may have missed.
    • Practice Interpreting CDS Predictions in Applied Learning: Medical students and physicians can engage in practice-based learning by applying algorithms to individual patients and examining how different inputs affect predictions. They should also learn to communicate with patients about CDS-guided decision making.
    The University of Maryland, Baltimore (UMB), University of Maryland, College Park (UMCP) and University of Maryland Medical System (UMMS) recently launched plans for a new Institute for Health Computing (IHC). The UM-IHC will leverage recent advances in artificial intelligence, network medicine, and other computing methods to create a premier learning health care system that evaluates both de-identified and secure digitized medical health data to enhance disease diagnosis, prevention, and treatment. Dr. Goodman is beginning a position at the IHC, which will be a site dedicated to educating and training health care providers on the latest technologies. The institute plans to eventually offer a certification in health data science, among other formal educational opportunities in data sciences.
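    To make the sensitivity-and-specificity point above concrete, here is a small hypothetical sketch (the numbers are invented, not drawn from the article) of the probabilistic step the authors want clinicians to be comfortable with: converting a CDS alert into a post-test probability via likelihood ratios.

        def post_test_probability(pre_test, sensitivity, specificity, alert_fired=True):
            """Bayes' rule expressed through likelihood ratios for a binary alert."""
            pre_odds = pre_test / (1.0 - pre_test)
            if alert_fired:
                lr = sensitivity / (1.0 - specificity)   # positive likelihood ratio
            else:
                lr = (1.0 - sensitivity) / specificity   # negative likelihood ratio
            post_odds = pre_odds * lr
            return post_odds / (1.0 + post_odds)

        # A hypothetical alert with 80% sensitivity and 90% specificity, firing on a patient
        # with a 5% baseline risk, raises the estimated probability to roughly 30%.
        print(round(post_test_probability(0.05, 0.80, 0.90), 2))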
    “Probability and risk analysis is foundational to the practice of evidence-based medicine, so improving physicians’ probabilistic skills can provide advantages that extend beyond the use of CDS algorithms,” said UMSOM Dean Mark T. Gladwin, MD, Vice President for Medical Affairs, University of Maryland, Baltimore, and the John Z. and Akiko K. Bowers Distinguished Professor. “We’re entering a transformative era of medicine where new initiatives like our Institute for Health Computing will integrate vast troves of data into machine learning systems to personalize care for the individual patient.”

  • Modified virtual reality tech can measure brain activity

    Researchers have modified a commercial virtual reality headset, giving it the ability to measure brain activity and examine how we react to hints, stressors and other outside forces.
    The research team at The University of Texas at Austin created a noninvasive electroencephalogram (EEG) sensor that they installed in a Meta VR headset that can be worn comfortably for long periods. The EEG measures the brain’s electrical activity during the immersive VR interactions.
    The device could be used in many ways, from helping people with anxiety, to measuring the attention or mental stress of aviators using a flight simulator, to giving a human the chance to see through the eyes of a robot.
    “Virtual reality is so much more immersive than just doing something on a big screen,” said Nanshu Lu, a professor in the Cockrell School of Engineering’s Department of Aerospace Engineering and Engineering Mechanics who led the research. “It gives the user a more realistic experience, and our technology enables us to get better measurements of how the brain is reacting to that environment.”
    The research is published in Soft Science.
    The pairing of VR and EEG sensors has made its way into the commercial sphere already. However, the devices that exist today are costly, and the researchers say their electrodes are more comfortable for the user, extending the potential wearing time and opening up additional applications.
    The best EEG devices today consist of a cap covered in electrodes, but that does not work well with the VR headset. And individual electrodes struggle to get a strong reading because our hair blocks them from connecting with the scalp. The most popular electrodes are rigid and comb-shaped, inserting through the hairs to connect with the skin, an uncomfortable experience for the user.

  • How good is that AI-penned radiology report?

    AI tools that quickly and accurately create detailed narrative reports of a patient’s CT scan or X-ray can greatly ease the workload of busy radiologists.
    Instead of merely identifying the presence or absence of abnormalities on an image, these AI reports convey complex diagnostic information, detailed descriptions, nuanced findings, and appropriate degrees of uncertainty. In short, they mirror how human radiologists describe what they see on a scan.
    Several AI models capable of generating detailed narrative reports have begun to appear on the scene. With them have come automated scoring systems that periodically assess these tools to help inform their development and augment their performance.
    So how well do the current systems gauge an AI model’s radiology performance?
    The answer is good but not great, according to a new study by researchers at Harvard Medical School published Aug. 3 in the journal Patterns.
    Ensuring that scoring systems are reliable is critical for AI tools to continue to improve and for clinicians to trust them, the researchers said, but the metrics tested in the study failed to reliably identify clinical errors in the AI reports, some of them significant. The finding, the researchers said, highlights an urgent need for improvement and the importance of designing high-fidelity scoring systems that faithfully and accurately monitor tool performance.
    The team tested various scoring metrics on AI-generated narrative reports. The researchers also asked six human radiologists to read the AI-generated reports.
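    As a loose illustration of why purely lexical metrics can miss clinical errors (this is a generic token-overlap score and an invented pair of reports, not one of the specific metrics or cases evaluated in the study), two reports can share most of their words while reversing the key finding:

        def token_f1(reference: str, candidate: str) -> float:
            """Simple bag-of-words F1 overlap between two report strings."""
            ref, cand = reference.lower().split(), candidate.lower().split()
            common = sum(min(ref.count(t), cand.count(t)) for t in set(cand))
            if common == 0:
                return 0.0
            precision, recall = common / len(cand), common / len(ref)
            return 2 * precision * recall / (precision + recall)

        reference = "no acute pneumothorax or pleural effusion is seen"
        ai_report = "a small pneumothorax and pleural effusion is seen"
        # Prints 0.62: substantial word overlap even though the clinical finding is reversed.
        print(round(token_f1(reference, ai_report), 2))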