More stories

  • Smartphone attachment could increase racial fairness in neurological screening

    Engineers at the University of California San Diego have developed a smartphone attachment that could enable people to screen for a variety of neurological conditions, such as Alzheimer’s disease and traumatic brain injury, at low cost — and do so accurately regardless of their skin tone.
    The technology, published in Scientific Reports, has the potential to improve the equity and accessibility of neurological screening procedures while making them widely available on all smartphone models.
    The attachment fits over a smartphone’s camera and improves its ability to capture clear video recordings and measurements of the pupil, which is the dark center of the eye. Recent research has shown that tracking pupil size changes during certain tasks can provide valuable insight into an individual’s neurological functions. For example, the pupil tends to dilate during complex cognitive tasks or in response to unexpected stimuli.
    However, tracking pupil size can be difficult in individuals with dark irises, which are common in people with darker skin tones, because conventional color cameras struggle to distinguish the pupil from the surrounding dark iris.
    To enhance the visibility of the pupil, UC San Diego engineers equipped their smartphone attachment with a specialized filter that selectively allows only a certain range of light to reach the camera. That range is called far-red light — the extreme red end of the visible spectrum located just before infrared light. Melanin, the dark pigment in the iris, absorbs most visible wavelengths of light but reflects longer wavelengths, including far-red light. When the eye is imaged with far-red light and other wavelengths are blocked out, the iris appears significantly lighter, making the pupil much easier to distinguish with a regular camera.
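    The measurement step then reduces to segmenting a dark pupil from the now-lighter iris. A minimal sketch of that idea in Python with OpenCV, assuming a far-red-filtered frame saved as farred_frame.png (a hypothetical file; this is not the authors' pipeline):

        # Minimal pupil-segmentation sketch. Assumes the far-red filter has
        # already lightened the iris, leaving the pupil as the darkest blob.
        import cv2

        frame = cv2.imread("farred_frame.png")            # hypothetical input
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        blur = cv2.GaussianBlur(gray, (7, 7), 0)

        # Once the iris reflects far-red light strongly, a simple dark
        # threshold isolates the pupil; 40 is an assumed cutoff.
        _, mask = cv2.threshold(blur, 40, 255, cv2.THRESH_BINARY_INV)

        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        pupil = max(contours, key=cv2.contourArea)        # largest dark blob
        (x, y), radius = cv2.minEnclosingCircle(pupil)
        print(f"pupil center=({x:.0f}, {y:.0f}), diameter={2 * radius:.1f} px")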
    “There has been a large issue with medical device design that depends on optical measurements ultimately working only for those with light skin and eye colors, while failing to perform well for those with dark skin and eyes,” said study senior author Edward Wang, an electrical and computer engineering professor in The Design Lab at UC San Diego, where he is the director of the Digital Health Technologies Lab. “By focusing on how we can make this work for all people while keeping the solution simple and low cost, we aim to pave the way to a future of fair access to remote, affordable healthcare.”
    Another feature of this technology that makes it more accessible is that it is designed to work on all smartphones. Traditionally, pupil measurements have been performed using infrared cameras, which are only available in high-end smartphone models. Since regular cameras cannot detect infrared light, this traditional approach limits accessibility to those who can afford more expensive smartphones. By using far-red light, which is still part of the visible spectrum and can be captured by regular smartphone cameras, this technology levels the playing field.

  • Plant-based materials give ‘life’ to tiny soft robots

    A team of University of Waterloo researchers has created smart, advanced materials that will be the building blocks for a future generation of soft medical microrobots.
    These tiny robots have the potential to conduct medical procedures, such as biopsies and the transport of cells and tissue, in a minimally invasive fashion. They can move through confined, fluid-filled environments like the human body and deliver delicate, lightweight cargo, such as cells or tissues, to a target position.
    The tiny soft robots are a maximum of one centimetre long and are bio-compatible and non-toxic. The robots are made of advanced hydrogel composites that include sustainable cellulose nanoparticles derived from plants.
    This research, led by Hamed Shahsavan, a professor in the Department of Chemical Engineering, demonstrates a holistic approach to the design, synthesis, fabrication, and manipulation of microrobots. The hydrogel used in this work changes its shape when exposed to external chemical stimulation. The ability to orient cellulose nanoparticles at will enables researchers to program this shape change, which is crucial for fabricating functional soft robots.
    “In my research group, we are bridging the old and new,” said Shahsavan, director of the Smart Materials for Advanced Robotic Technologies (SMART-Lab). “We introduce emerging microrobots by leveraging traditional soft matter like hydrogels, liquid crystals, and colloids.”
    Another unique feature of this advanced smart material is that it is self-healing, which allows the robots to be programmed into a wide range of shapes. Researchers can cut the material and paste it back together without glue or other adhesives, forming different shapes for different procedures.
    The material can also be made magnetic, which facilitates moving the soft robots through the human body. As a proof of concept of how the robot would maneuver through the body, the researchers steered the tiny robot through a maze by controlling its movement with a magnetic field.
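    As a rough illustration of that control idea (a purely hypothetical simulation, not the study's setup), a magnetized robot can be steered by pointing the field at successive waypoints:

        # Purely illustrative 2D steering sketch: the robot's velocity simply
        # follows the applied magnetic field, here aimed at each waypoint.
        import math

        position = [0.0, 0.0]
        waypoints = [(2.0, 0.0), (2.0, 2.0), (4.0, 2.0)]  # toy maze corners
        speed, dt = 0.5, 0.1                              # assumed units

        for target in waypoints:
            while math.dist(position, target) > 0.05:
                # The external field is oriented toward the next waypoint;
                # the magnetized robot aligns with and follows it.
                angle = math.atan2(target[1] - position[1],
                                   target[0] - position[0])
                position[0] += speed * math.cos(angle) * dt
                position[1] += speed * math.sin(angle) * dt
            print(f"reached waypoint {target}")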
    “Chemical engineers play a critical role in pushing the frontiers of medical microrobotics research,” Shahsavan said. “Interestingly, tackling the many grand challenges in microrobotics requires the skillset and knowledge chemical engineers possess, including heat and mass transfer, fluid mechanics, reaction engineering, polymers, soft matter science, and biochemical systems. So, we are uniquely positioned to introduce innovative avenues in this emerging field.”
    The next step in this research is to scale the robot down to the submillimetre range.
    Shahsavan’s research group collaborated with Waterloo’s Tizazu Mekonnen, a professor from the Department of Chemical Engineering, Professor Shirley Tang, Associate Dean of Science (Research), and Amirreza Aghakhani, a professor from the University of Stuttgart in Germany.

  • Simulating cold sensation without actual cooling

    The perceived intensity of a persistent thermal sensation, such as a change in temperature, gradually diminishes as our bodies become accustomed to it. This adaptation shifts how we perceive temperature when transitioning between different scenes in a virtual environment. Researchers at the University of Tsukuba have developed a technology that generates a virtual cold sensation via a non-contact method, without physically altering skin temperature.
    Our skin plays a key role in perceiving temperature and our surroundings. For instance, we perceive the chill of the outdoors when our cheeks flush with cold, and we sense the onset of spring as our skin gradually warms. However, repeated exposure to the same stimulus makes us accustomed to it, which makes new sensations harder to register. This process, known as “temperature acclimatization,” can interfere with our ability to gauge temperature changes in a virtual reality (VR) environment when switching between scenes.
    In this study, the researchers developed a non-contact technology that simulates a cold sensation, continually generating thermal experiences while keeping skin temperature nearly constant. The approach leverages the human body’s natural sensitivity to rapid temperature changes. The technology combines a flow of cold air with a light source to switch instantly between a quick cold stimulus and a gentle warm one, inducing a cold sensation while keeping skin temperature fluctuations close to zero. Evaluations demonstrated that the system can provide a virtual cold sensation without any actual change in skin temperature. Moreover, the researchers succeeded in replicating a cold sensation of the same intensity as one produced by continuous skin temperature changes.
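    The stimulus schedule can be pictured as a simple control loop. This is our reconstruction from the description above; the durations and device hooks are assumptions, not the authors' implementation:

        # Assumed control loop: a quick cold-air burst makes the rapid
        # temperature drop salient, then gentle radiant warmth restores the
        # skin to baseline, keeping mean skin temperature nearly flat.
        import time

        COLD_BURST_S = 0.5   # assumed duration of the quick cold stimulus
        WARM_PHASE_S = 2.0   # assumed duration of the gentle warm stimulus

        def set_cold_airflow(on):
            print("cold air", "on" if on else "off")    # stand-in for a valve

        def set_warm_light(on):
            print("warm light", "on" if on else "off")  # stand-in for a lamp

        for _ in range(3):                   # a few demo cycles
            set_warm_light(False)
            set_cold_airflow(True)
            time.sleep(COLD_BURST_S)         # rapid drop: felt as sudden cold
            set_cold_airflow(False)
            set_warm_light(True)
            time.sleep(WARM_PHASE_S)         # slow rewarm, below salience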
    This breakthrough technology offers a novel perspective on simulating skin sensations without altering the body’s physical state. It has the potential to enable immersive experiences in the world of VR, including the Metaverse, by offering not only instantaneous thermal sensations like a sudden cold breeze but also persistent thermal experiences over extended periods, akin to those encountered during international travel.
    This work was supported in part by grants from JSPS KAKENHI (JP21H03474, JP21K19778) and in part by JST SPRING (JPMJSP2124).

  • Adaptive optical neural network connects thousands of artificial neurons

    Scientists headed by physicists Prof. Wolfram Pernice and Prof. Martin Salinga, and computer specialist Prof. Benjamin Risse, all from the University of Münster (Germany), have developed a so-called event-based architecture using photonic processors. In a similar way to the brain, this makes possible the continuous adaptation of the connections within the neural network.
    Modern computer models — for example for complex, potent AI applications — push traditional digital computer processes to their limits. New types of computing architecture, which emulate the working principles of biological neural networks, hold the promise of faster, more energy-efficient data processing. In the new event-based architecture, data are transported and processed by means of light, and the connections within the neural network can be continuously adapted. These changeable connections are the basis for learning processes. For the purposes of the study, the team, working at Collaborative Research Centre 1459 (“Intelligent Matter”), joined forces with researchers from the Universities of Exeter and Oxford in the UK. The study has been published in the journal “Science Advances.”
    A neural network in machine learning needs artificial neurons that are activated by external excitatory signals and that have connections to other neurons. The connections between these artificial neurons are called synapses, just like their biological counterparts. For their study, the team of researchers in Münster used a network consisting of almost 8,400 optical neurons made of waveguide-coupled phase-change material, and showed that the connection between any two of these neurons can indeed become stronger or weaker (synaptic plasticity), and that new connections can be formed or existing ones eliminated (structural plasticity). In contrast to similar studies, the synapses were not hardware elements but were encoded in the properties of the optical pulses — in other words, in the wavelength and intensity of each pulse. This made it possible to integrate several thousand neurons on a single chip and connect them optically.
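    Both forms of plasticity can be pictured with a toy software analogy (ours, not the photonic hardware): synapses stored in a dictionary keyed by neuron pair, with a scalar weight standing in for the wavelength-and-intensity coding.

        # Toy analogy of the two plasticity types (illustrative software
        # sketch, not the photonic hardware described above).
        synapses = {(0, 1): 0.4, (0, 2): 0.7, (1, 2): 0.2}

        def strengthen(pre, post, delta=0.1):
            """Synaptic plasticity: an existing connection grows stronger."""
            synapses[(pre, post)] = min(1.0, synapses[(pre, post)] + delta)

        def rewire(pre, post, threshold=0.05):
            """Structural plasticity: form a connection or prune a weak one."""
            if (pre, post) not in synapses:
                synapses[(pre, post)] = 0.1   # new connection formed
            elif synapses[(pre, post)] < threshold:
                del synapses[(pre, post)]     # weak connection eliminated

        strengthen(0, 1)
        rewire(2, 0)
        print(synapses)  # {(0, 1): 0.5, (0, 2): 0.7, (1, 2): 0.2, (2, 0): 0.1}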
    In comparison with traditional electronic processors, light-based processors offer significantly higher bandwidth, making it possible to carry out complex computing tasks with lower energy consumption. The new approach is still basic research. “Our aim is to develop an optical computing architecture which in the long term will make it possible to compute AI applications in a rapid and energy-efficient way,” says Frank Brückerhoff-Plückelmann, one of the lead authors.
    Methodology: The non-volatile phase-change material can be switched between an amorphous structure and a crystalline structure with a highly ordered atomic lattice. This feature allows permanent data storage even without an energy supply. The researchers tested the performance of the neural network by using an evolutionary algorithm to train it to distinguish between German and English texts. The recognition parameter they used was the number of vowels in the text.
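    The training idea can be re-created in miniature (a toy sketch in the spirit of the description above; the corpus, population size, and mutation rate are our assumptions): evolve a threshold on vowel frequency that separates German from English text.

        # Toy evolutionary training: evolve a vowel-frequency threshold that
        # separates German from English. Hypothetical four-sentence corpus.
        import random

        SAMPLES = [  # (text, is_german)
            ("the quick brown fox jumps over the lazy dog", False),
            ("crisp winds sweep across dry plains", False),
            ("der schnelle braune fuchs springt ueber den faulen hund", True),
            ("die sonne scheint ueber dem see", True),
        ]

        def vowel_fraction(text):
            return sum(c in "aeiou" for c in text) / len(text)

        def fitness(threshold):
            # Classify as German when the vowel fraction exceeds the threshold.
            return sum((vowel_fraction(t) > threshold) == g for t, g in SAMPLES)

        population = [random.uniform(0.2, 0.5) for _ in range(20)]
        for _ in range(50):
            population.sort(key=fitness, reverse=True)
            parents = population[:5]                  # keep the fittest
            population = [p + random.gauss(0, 0.01)   # mutated offspring
                          for p in parents for _ in range(4)]

        best = max(population, key=fitness)
        print(f"threshold={best:.3f}, correct={fitness(best)}/{len(SAMPLES)}")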

  • To excel at engineering design, generative AI must learn to innovate, study finds

    ChatGPT and other deep generative models are proving to be uncanny mimics. These AI supermodels can churn out poems, finish symphonies, and create new videos and images by automatically learning from millions of examples of previous works. These enormously powerful and versatile tools excel at generating new content that resembles everything they’ve seen before.
    But as MIT engineers say in a new study, similarity isn’t enough if you want to truly innovate in engineering tasks.
    “Deep generative models (DGMs) are very promising, but also inherently flawed,” says study author Lyle Regenwetter, a mechanical engineering graduate student at MIT. “The objective of these models is to mimic a dataset. But as engineers and designers, we often don’t want to create a design that’s already out there.”
    He and his colleagues make the case that if mechanical engineers want help from AI to generate novel ideas and designs, they will have to first refocus those models beyond “statistical similarity.”
    “The performance of a lot of these models is explicitly tied to how statistically similar a generated sample is to what the model has already seen,” says co-author Faez Ahmed, assistant professor of mechanical engineering at MIT. “But in design, being different could be important if you want to innovate.”
    In their study, Ahmed and Regenwetter reveal the pitfalls of deep generative models when they are tasked with solving engineering design problems. In a case study of bicycle frame design, the team shows that these models end up generating new frames that mimic previous designs but falter on engineering performance and requirements.
    When the researchers presented the same bicycle frame problem to DGMs that they specifically designed with engineering-focused objectives, rather than only statistical similarity, these models produced more innovative, higher-performing frames.
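    In loss terms, that prescription amounts to adding engineering objectives alongside the usual similarity term. A hedged PyTorch sketch of the general idea (the surrogate functions, names, and weights are ours, not the study's code):

        # Augment a generative model's similarity objective with explicit
        # engineering-performance terms (illustrative sketch only).
        import torch

        def frame_stiffness_surrogate(x):
            # Hypothetical differentiable surrogate for structural performance.
            return -torch.sum(x ** 2, dim=-1)

        def constraint_violation(x):
            # Hypothetical penalty, e.g. negative tube dimensions clamped to 0.
            return torch.clamp(-x, min=0.0).sum(dim=-1)

        def design_loss(generated, dataset_batch, lam=1.0):
            # Pure DGM objective: stay statistically close to prior designs...
            similarity = torch.mean((generated - dataset_batch) ** 2)
            # ...plus the engineering-focused terms the authors argue for.
            engineering = (-frame_stiffness_surrogate(generated).mean()
                           + constraint_violation(generated).mean())
            return similarity + lam * engineering

        generated = torch.randn(8, 16, requires_grad=True)  # candidate frames
        batch = torch.randn(8, 16)                          # prior designs
        design_loss(generated, batch).backward()            # trainable end to end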

  • Keeping a human in the loop: Managing the ethics of AI in medicine

    Artificial intelligence (AI) — of ChatGPT fame — is increasingly used in medicine to improve diagnosis and treatment of diseases, and to avoid unnecessary screening for patients. But AI medical devices could also harm patients and worsen health inequities if they are not designed, tested, and used with care, according to an international task force that included a University of Rochester Medical Center bioethicist.
    Jonathan Herington, PhD, was a member of the AI Task Force of the Society for Nuclear Medicine and Medical Imaging, which laid out recommendations on how to ethically develop and use AI medical devices in two papers published in the Journal of Nuclear Medicine. In short, the task force called for increased transparency about the accuracy and limits of AI and outlined ways to ensure all people have access to AI medical devices that work for them — regardless of their race, ethnicity, gender, or wealth.
    While the burden of proper design and testing falls to AI developers, health care providers are ultimately responsible for properly using AI and shouldn’t rely too heavily on AI predictions when making patient care decisions.
    “There should always be a human in the loop,” said Herington, who is assistant professor of Health Humanities and Bioethics at URMC and was one of three bioethicists added to the task force in 2021. “Clinicians should use AI as an input into their own decision making, rather than replacing their decision making.”
    This requires that doctors truly understand how a given AI medical device is intended to be used, how well it performs at that task, and any limitations — and they must pass that knowledge on to their patients. Doctors must weigh the relative risks of false positives versus false negatives for a given situation, all while taking structural inequities into account.
    When using an AI system to identify probable tumors in PET scans, for example, health care providers must know how well the system performs at identifying this specific type of tumor in patients of the same sex, race, ethnicity, etc., as the patient in question.
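    The weighing of false positives against false negatives mentioned above can be made concrete with textbook decision theory (our illustration, not from the task force papers): assumed harms for misses and false alarms imply a flagging threshold directly.

        # Expected-harm arithmetic: flag when p * c_fn > (1 - p) * c_fp,
        # i.e. when p > c_fp / (c_fp + c_fn). Both costs are assumptions.
        c_fn = 9.0   # harm of missing a tumor (false negative)
        c_fp = 1.0   # harm of an unnecessary follow-up (false positive)
        threshold = c_fp / (c_fp + c_fn)
        print(f"flag findings the model scores above p = {threshold:.2f}")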
    “What that means for the developers of these systems is that they need to be very transparent,” said Herington.

  • Ensuring fairness of AI in healthcare requires cross-disciplinary collaboration

    Pursuing fair artificial intelligence (AI) for healthcare requires collaboration between experts across disciplines, says a global team of scientists led by Duke-NUS Medical School in a new perspective published in npj Digital Medicine.
    While AI has demonstrated potential for healthcare insights, concerns around bias remain. “A fair model is expected to perform equally well across subgroups like age, gender and race. However, differences in performance may have underlying clinical reasons and may not necessarily indicate unfairness,” explained first author Ms Liu Mingxuan, a PhD candidate in the Quantitative Biology and Medicine (Biostatistics & Health Data Science) Programme and Centre for Quantitative Medicine (CQM) at Duke-NUS.
    “Focusing on equity — that is, recognising factors like race, gender, etc., and adjusting the AI algorithm or its application to make sure more vulnerable groups get the care they need — rather than complete equality, is likely a more reasonable approach for clinical AI,” said Dr Ning Yilin, Research Fellow with CQM and a co-first-author of the paper. “Patient preferences and prognosis are also crucial considerations, as equal treatment does not always mean fair treatment. An example of this is age, which frequently factors into treatment decisions and outcomes.”
    The paper highlights key misalignments between AI fairness research and clinical needs. “Various metrics exist to measure model fairness, but choosing suitable ones for healthcare is difficult as they can conflict. Trade-offs are often inevitable,” commented Associate Professor Liu Nan, also from Duke-NUS’ CQM and the paper’s senior and corresponding author.
    He added, “Differences detected between groups are frequently treated as biases to be mitigated in AI research. However, in the medical context, we must discern between meaningful differences and true biases requiring correction.”
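    The metric conflicts described above are easy to reproduce in a few lines (hypothetical predictions, not the paper's data): a model can satisfy one group-fairness criterion while violating another.

        # Two common group-fairness checks on hypothetical subgroup data.
        import numpy as np

        y_true = {"A": np.array([1, 1, 0, 0, 0]),
                  "B": np.array([1, 0, 0, 0, 0])}
        y_pred = {"A": np.array([1, 1, 1, 0, 0]),
                  "B": np.array([1, 0, 0, 0, 0])}

        for g in ("A", "B"):
            t, p = y_true[g], y_pred[g]
            print(f"group {g}: positive rate={p.mean():.2f} "
                  f"(demographic parity), TPR={p[t == 1].mean():.2f} "
                  f"(equal opportunity)")
        # Output: equal TPRs (1.00 vs 1.00) but unequal positive rates
        # (0.60 vs 0.20), so the two fairness criteria disagree here.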
    The authors emphasise the need to evaluate which attributes are considered ‘sensitive’ for each application. They say that actively engaging clinicians is vital for developing useful and fair AI models.
    “Variables like race and ethnicity need careful handling as they may represent systemic biases or biological differences,” said Assoc Prof Liu. “Clinicians can provide context, determine if differences are justified, and guide models towards equitable decisions.”
    Overall, the authors argue that pursuing fair AI for healthcare requires collaboration between experts in AI, medicine, ethics and beyond.

  • Eyes may be the window to your soul, but the tongue mirrors your health

    A 2000-year-old practice by Chinese herbalists — examining the human tongue for signs of disease — is now being embraced by computer scientists using machine learning and artificial intelligence.
    Tongue diagnostic systems are fast gaining traction due to an increase in remote health monitoring worldwide, and a study by Iraqi and Australian researchers provides more evidence of the increasing accuracy of this technology to detect disease.
    Engineers from Middle Technical University (MTU) in Baghdad and the University of South Australia (UniSA) used a USB web camera and computer to capture tongue images from 50 patients with diabetes, renal failure and anaemia, comparing their colours against a database of 9000 tongue images.
    Using image processing techniques, they correctly diagnosed the diseases in 94 per cent of cases, compared to laboratory results. A message specifying the tongue colour and the detected disease was also sent by text to the patient or a nominated health provider.
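    The colour-matching step can be sketched in a few lines (our illustration with invented reference hues, not the authors' pipeline): compute the image's mean hue and report the nearest labelled reference.

        # Nearest-reference colour matching on a tongue image. The reference
        # hue centres below are placeholders, not clinical values.
        import cv2
        import numpy as np

        REFERENCE_HUES = {
            "healthy": 5.0,
            "diabetes": 25.0,
            "renal failure": 150.0,
            "anaemia": 90.0,
        }

        frame = cv2.imread("tongue.jpg")                 # hypothetical frame
        hue = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)[:, :, 0]
        mean_hue = float(np.mean(hue))

        diagnosis = min(REFERENCE_HUES,
                        key=lambda k: abs(REFERENCE_HUES[k] - mean_hue))
        print(f"mean hue={mean_hue:.1f} -> closest reference: {diagnosis}")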
    MTU and UniSA Adjunct Associate Professor Ali Al-Naji and his colleagues have reviewed the worldwide advances in computer-aided disease diagnosis, based on tongue colour, in a new paper in AIP Conference Proceedings.
    “Thousands of years ago, Chinese medicine pioneered the practice of examining the tongue to detect illness,” Assoc Prof Al-Naji says.
    “Conventional medicine has long endorsed this method, demonstrating that the colour, shape, and thickness of the tongue can reveal signs of diabetes, liver issues, circulatory and digestive problems, as well as blood and heart diseases.”