More stories


    AI can speed design of health software

    Artificial intelligence helped clinicians to accelerate the design of diabetes prevention software, a new study finds.
    Published online March 6 in the Journal of Medical Internet Research, the study examined the capabilities of a form of artificial intelligence (AI) called generative AI, or GenAI, which predicts likely options for the next word in any sentence based on how billions of people have used words in context on the internet. A side effect of this next-word prediction is that generative AI “chatbots” like ChatGPT can generate replies to questions in realistic language and produce clear summaries of complex texts.
    Written by researchers at NYU Langone Health, the current paper explores the application of ChatGPT to the design of a software program that uses text messages to counter diabetes by encouraging patients to eat healthier and get exercise. The team tested whether AI-enabled interchanges between doctors and software engineers could hasten the development of such a personalized automatic messaging system (PAMS).
    In the current study, eleven evaluators in fields ranging from medicine to computer science successfully used ChatGPT to produce a version of the diabetes tool over 40 hours, whereas an original, non-AI-enabled effort had required more than 200 programmer hours.
    “We found that ChatGPT improves communications between technical and non-technical team members to hasten the design of computational solutions to medical problems,” says study corresponding author Danissa Rodriguez, PhD, assistant professor in the Department of Population Health at NYU Langone, and member of its Healthcare Innovation Bridging Research, Informatics and Design (HiBRID) Lab. “The chatbot drove rapid progress throughout the software development life cycle, from capturing original ideas, to deciding which features to include, to generating the computer code. If this proves to be effective at scale it could revolutionize healthcare software design.”
    AI as Translator
    Generative AI tools are sensitive to phrasing, say the study authors: asking the tool the same question in two subtly different ways may yield divergent answers. The skill required to frame questions for chatbots in a way that elicits the desired response, called prompt engineering, combines intuition and experimentation. Physicians and nurses, with their understanding of nuanced medical contexts, are well positioned to engineer strategic prompts that improve communication with engineers, without having to learn to write computer code.
    Such design efforts, however, in which care providers, the would-be users of new software, seek to advise engineers about what it must include, can be compromised when the two groups try to converse in “different” technical languages. In the current study, the clinical members of the team were able to type their ideas in plain English, enter them into ChatGPT, and ask the tool to convert their input into the kind of language required to guide coding work by the team’s software engineers. AI could take software design only so far before human software developers were needed for final code generation, but the overall process was greatly accelerated, say the authors.
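    In practice, a translation step like this can also be scripted. The sketch below is a minimal, hypothetical illustration rather than the team’s actual workflow: the study describes clinicians using ChatGPT conversationally, whereas this example assumes the OpenAI Python API, a particular model name, and invented prompt wording.

        from openai import OpenAI  # assumes the `openai` Python package is installed

        client = OpenAI()  # reads the OPENAI_API_KEY environment variable

        # Hypothetical example: a clinician describes a feature in plain English
        # and asks the model to restate it as an engineering-ready requirement.
        clinician_note = (
            "If a patient reports skipping their walk three days in a row, "
            "send a gentler, encouraging message instead of the usual reminder."
        )

        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system",
                 "content": "Rewrite the clinician's request as a precise software "
                            "requirement: trigger condition, message behavior, "
                            "and the data fields needed."},
                {"role": "user", "content": clinician_note},
            ],
        )
        print(response.choices[0].message.content)  # spec text handed to the engineers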
    “Our study found that ChatGPT can democratize the design of healthcare software by enabling doctors and nurses to drive its creation,” says senior study author Devin Mann, MD, director of the HiBRID Lab, and strategic director of Digital Health Innovation within NYU Langone Medical Center Information Technology (MCIT). “GenAI-assisted development promises to deliver computational tools that are usable, reliable, and in line with the highest coding standards.”
    Along with Rodriguez and Mann, study authors from the Department of Population Health at NYU Langone were Katharine Lawrence, MD, Beatrix Brandfield-Harvey, Lynn Xu, Sumaiya Tasneem, and Defne Levine. Javier Gonzalez, technical lead in the HiBRID Lab, was also a study author. This work was supported by the National Institute of Diabetes and Digestive and Kidney Diseases grant 1R18DK118545-01A1.


    Can you tell AI-generated people from real ones?

    If you recently had trouble figuring out if an image of a person is real or generated through artificial intelligence (AI), you’re not alone.
    A new study from University of Waterloo researchers found that people had more difficulty than expected distinguishing real people from artificially generated ones.
    The Waterloo study provided 260 participants with 20 unlabeled pictures: 10 of real people obtained from Google searches, and 10 generated by Stable Diffusion or DALL-E, two commonly used AI image generators.
    Participants were asked to label each image as real or AI-generated and explain why they made their decision. Only 61 per cent of participants could tell the difference between AI-generated people and real ones, far below the 85 per cent threshold that researchers expected.
    “People are not as adept at making the distinction as they think they are,” said Andreea Pocol, a PhD candidate in Computer Science at the University of Waterloo and the study’s lead author.
    Participants paid attention to details such as fingers, teeth, and eyes as possible indicators when looking for AI-generated content — but their assessments weren’t always correct.
    Pocol noted that the nature of the study allowed participants to scrutinize photos at length, whereas most internet users look at images in passing.

    “People who are just doomscrolling or don’t have time won’t pick up on these cues,” Pocol said.
    Pocol added that the extremely rapid rate at which AI technology is developing makes it particularly difficult to understand the potential for malicious or nefarious action posed by AI-generated images. Academic research and legislation often cannot keep pace: AI-generated images have become even more realistic since the study began in late 2022.
    These AI-generated images are particularly threatening as a political and cultural tool, which could see any user create fake images of public figures in embarrassing or compromising situations.
    “Disinformation isn’t new, but the tools of disinformation have been constantly shifting and evolving,” Pocol said. “It may get to a point where people, no matter how well trained they are, will still struggle to differentiate real images from fakes. That’s why we need to develop tools to identify and counter this. It’s like a new AI arms race.”
    The study, “Seeing Is No Longer Believing: A Survey on the State of Deepfakes, AI-Generated Humans, and Other Nonveridical Media,” appears in the journal Advances in Computer Graphics.


    Shortcut to Success: Toward fast and robust quantum control through accelerating adiabatic passage

    Researchers at Osaka University’s Institute of Scientific and Industrial Research (SANKEN) used the shortcuts-to-adiabaticity (STA) method to greatly speed up the adiabatic evolution of spin qubits. The spin-flip fidelity after pulse optimization can be as high as 97.8% in GaAs quantum dots. This work may be applicable to other adiabatic passage protocols and will be useful for fast, high-fidelity quantum control.
    A quantum computer uses the superposition of “0” and “1” states to perform information processing, which is completely different from classical computing and allows certain problems to be solved at a much faster rate. High-fidelity quantum state operation in sufficiently large programmable qubit spaces is required to achieve the “quantum advantage.” The conventional method for changing quantum states uses pulse control, which is sensitive to noise and control errors. In contrast, adiabatic evolution keeps the quantum system in its eigenstate at all times; it is robust to noise but requires a certain length of time.
    Recently, a team from SANKEN used the STA method to greatly accelerate the adiabatic evolution of spin qubits in gate-defined quantum dots for the first time. The theory they used was proposed by Xi Chen and colleagues. “We used the transitionless quantum driving style of STA, thus allowing the system to always remain in its ideal eigenstate even under rapid evolution,” co-author Takafumi Fujita explains.
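    For context, a standard textbook form of transitionless (counterdiabatic) driving, reconstructed here rather than quoted from the paper, augments a reference Hamiltonian $H_0(t)$ with instantaneous eigenstates $|n(t)\rangle$ by a correction that cancels diabatic transitions exactly:

        H(t) = H_0(t) + H_{\mathrm{CD}}(t), \qquad
        H_{\mathrm{CD}}(t) = i\hbar \sum_n \Big( \lvert \partial_t n(t) \rangle \langle n(t) \rvert
            - \langle n(t) \vert \partial_t n(t) \rangle \, \lvert n(t) \rangle \langle n(t) \rvert \Big)

    Driving the system with $H(t)$ keeps it in the instantaneous eigenstate of $H_0(t)$ at any speed, which is the property the experiment exploits to flip spins quickly without sacrificing fidelity.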
    To match the target evolution of the spin qubits, the group’s experiment adds an additional effective driving term to suppress diabatic errors, which guarantees a fast and nearly ideal adiabatic evolution. The dynamic properties were also investigated and confirmed the effectiveness of the method. Additionally, the optimized pulse was able to further suppress noise and improve the efficiency of quantum state control. Finally, the group achieved a spin-flip fidelity of up to 97.8%. According to their estimates, the acceleration of adiabatic passage should be even greater in Si or Ge quantum dots, which have less nuclear spin noise.
    “This provides a fast and high-fidelity quantum control method. Our results may also be useful for accelerating other adiabatic passages in quantum dots,” says corresponding author Akira Oiwa. As a promising candidate platform for quantum computing, gate-defined quantum dots have long coherence times and good compatibility with the modern semiconductor industry. The team is exploring further applications of gate-defined quantum dot systems, such as extension to larger numbers of spin qubits. They hope this method will offer a simpler and more feasible route to fault-tolerant quantum information processing.


    Running performance helped by mathematical research

    How to optimise running? A new mathematical model [1] has shown, with great precision, the impact that physiological and psychological parameters have on running performance, and it provides tips for optimised training. The model grew out of research conducted by a French-British team including two CNRS researchers [2], the results of which will appear on March 5th 2024 in the journal Frontiers in Sports and Active Living.
    This innovative model was developed thanks to extremely precise data [3] from the performances of Matthew Hudson-Smith (400m), Femke Bol (400m), and Jakob Ingebrigtsen (1500m) at the 2022 European Athletics Championships in Munich, and of Gaia Sabbatini (1500m) at the 2021 European Athletics U23 Championships in Tallinn. It led to an optimal control problem coupling finishing time, effort, and energy expenditure. This is the first time that such a model has also considered the variability of motor control, i.e., the role of the brain in producing movement. The simulations give the researchers access to the physiological parameters of the runners, especially oxygen consumption (VO2) [4] and energy expenditure during the race, and allow them to compute how these vary. Quantifying costs and benefits in the model provides immediate access to the best strategy for achieving the runner’s optimal performance.
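    As a rough sketch of what such an optimal control problem can look like, consider a Keller-Aftalion-type formulation (an illustrative reconstruction under stated assumptions, not the paper’s exact system, which also includes the motor-control variability term mentioned above):

        \min_{f(\cdot)} \; T \quad \text{subject to} \quad x(T) = d,
        \dot{x}(t) = v(t), \qquad
        \dot{v}(t) = f(t) - \frac{v(t)}{\tau}, \qquad
        \dot{e}(t) = \sigma(e(t)) - f(t)\,v(t),
        e(t) \ge 0, \qquad 0 \le f(t) \le f_{\max}.

    Here x is position, v velocity, f the propulsive force per unit mass, τ a friction-like time constant, e the anaerobic energy reserve, and σ the energy re-creation rate tied to oxygen uptake; the optimal pacing strategy is the force profile f(t) that minimises the finishing time T over the race distance d.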
    The study details multiple criteria, such as the importance of a quick start in the first 50 metres (due to the need for fast oxygen kinetics), or of limiting the decrease in velocity in a 400m race. The scientists also demonstrated that improving aerobic metabolism (oxygen uptake) and the ability to maintain VO2 are crucial to 1500m race performance.
    The development of this model represents considerable progress in studying variations in physiological parameters during championship races, for which in vivo measurements are not possible.
    Notes:
    [1] For more details on the model, see “Be a champion, 40 facts you didn’t know about sports and science,” Amandine Aftalion, Springer, to appear May 14th 2024.
    [2] From the Centre for Analysis and Social Mathematics (CNRS/EHESS), in collaboration with the Jacques-Louis Lions Laboratory (CNRS/Sorbonne Université/Université Paris Cité) and the Carnegie School of Sport at Leeds Beckett University.
    [3] Values measured every 100 milliseconds.
    [4] The rate at which oxygen is transformed into energy.


    Robotic-assisted surgery for gallbladder cancer as effective as traditional surgery

    Each year, approximately 2,000 people die of gallbladder cancer (GBC) in the U.S., and only one in five cases is diagnosed at an early stage. With GBC ranking as the most common biliary tract cancer and the 17th most deadly cancer worldwide, proper management of the disease demands urgent attention. For patients diagnosed, surgery is the most promising curative treatment. While minimally invasive surgical techniques, including laparoscopic and robotic surgery, have been increasingly adopted for gastrointestinal malignancies, there are reservations about using minimally invasive surgery for gallbladder cancer.
    A new study by researchers at Boston University Chobanian & Avedisian School of Medicine has found that robotic-assisted surgery for GBC is as effective as traditional open and laparoscopic methods, with added benefits in precision and quicker post-operative recovery.
    “Our study demonstrates the viability of robotic surgery for gallbladder cancer treatment, a field where minimally invasive approaches have been cautiously adopted due to concerns over oncologic efficacy and technical challenges,” says corresponding author Eduardo Vega, MD, assistant professor of surgery at the school.
    The researchers conducted a systematic review of the literature focusing on comparing patient outcomes following robotic, open and laparoscopic surgeries. This involved analyzing studies that reported on oncological results and perioperative benefits, such as operation time, blood loss and recovery period.
    According to the researchers, there has been reluctance to utilize robotic surgery for GBC due to fears of dissemination of the tumor via tumor manipulation, bile spillage and technical challenges, including liver resection and adequate removal of lymph nodes. “Since its early use, robotic surgery has advanced in ways that provide surgeons technical advantages over laparoscopic surgery, improving dexterity and visualization of the surgical field. Additionally, robotic assistance has eased the process of detailed dissection around blood vessels as well as knot tying and suturing, and provides high-definition, three-dimensional vision, allowing the surgeon to perform under improved ergonomics,” said Vega.
    The researchers believe these findings are significant since they suggest robotic surgery is a safer and potentially less painful option for gallbladder cancer treatment, with a faster recovery time. “Clinically, it could lead to the adoption of robotic surgery as a standard care option for gallbladder cancer, improving patient outcomes and potentially reducing healthcare costs due to shorter hospital stays,” he added.
    These findings appear online in the American Journal of Surgery.


    New AI model draws treasure maps to diagnose disease

    Medical diagnostics expert, doctor’s assistant, and cartographer are all fair titles for an artificial intelligence model developed by researchers at the Beckman Institute for Advanced Science and Technology.
    Their new model accurately identifies tumors and diseases in medical images and is programmed to explain each diagnosis with a visual map. The tool’s unique transparency allows doctors to easily follow its line of reasoning, double-check for accuracy, and explain the results to patients.
    “The idea is to help catch cancer and disease in its earliest stages — like an X on a map — and understand how the decision was made. Our model will help streamline that process and make it easier on doctors and patients alike,” said Sourya Sengupta, the study’s lead author and a graduate research assistant at the Beckman Institute.
    This research appeared in IEEE Transactions on Medical Imaging.
    Cats and dogs and onions and ogres
    First conceptualized in the 1950s, artificial intelligence — the concept that computers can learn to adapt, analyze, and problem-solve like humans do — has reached household recognition, due in part to ChatGPT and its extended family of easy-to-use tools.
    Machine learning, or ML, is one of many methods researchers use to create artificially intelligent systems. ML is to AI what driver’s education is to a 15-year-old: a controlled, supervised environment to practice decision-making, calibrating to new environments, and rerouting after a mistake or wrong turn.

    Deep learning — machine learning’s wiser and worldlier relative — can digest larger quantities of information to make more nuanced decisions. Deep learning models derive their decisive power from the closest computer simulations we have to the human brain: deep neural networks.
    These networks — just like humans, onions, and ogres — have layers, which makes them tricky to navigate. The more thickly layered, or nonlinear, a network’s intellectual thicket, the better it performs complex, human-like tasks.
    Consider a neural network trained to differentiate between pictures of cats and pictures of dogs. The model learns by reviewing images in each category and filing away their distinguishing features (like size, color, and anatomy) for future reference. Eventually, the model learns to watch out for whiskers and cry Doberman at the first sign of a floppy tongue.
    But deep neural networks are not infallible — much like overzealous toddlers, said Sengupta, who studies biomedical imaging in the University of Illinois Urbana-Champaign Department of Electrical and Computer Engineering.
    “They get it right sometimes, maybe even most of the time, but it might not always be for the right reasons,” he said. “I’m sure everyone knows a child who saw a brown, four-legged dog once and then thought that every brown, four-legged animal was a dog.”
    Sengupta’s gripe? If you ask a toddler how they decided, they will probably tell you.

    “But you can’t ask a deep neural network how it arrived at an answer,” he said.
    The black box problem
    Sleek, skilled, and speedy as they may be, deep neural networks struggle to master the seminal skill drilled into high school calculus students: showing their work. This is referred to as the black box problem of artificial intelligence, and it has baffled scientists for years.
    On the surface, coaxing a confession from the reluctant network that mistook a Pomeranian for a cat does not seem terribly crucial. But the gravity of the black box sharpens as the images in question become more life-altering. For example: X-ray images from a mammogram that may indicate early signs of breast cancer.
    The process of decoding medical images looks different in different regions of the world.
    “In many developing countries, there is a scarcity of doctors and a long line of patients. AI can be helpful in these scenarios,” Sengupta said.
    When time and talents are in high demand, automated medical image screening can be deployed as an assistive tool — in no way replacing the skill and expertise of doctors, Sengupta said. Instead, an AI model can pre-scan medical images and flag those containing something unusual — like a tumor or early sign of disease, called a biomarker — for a doctor’s review. This method saves time and can even improve the performance of the person tasked with reading the scan.
    These models work well, but their bedside manner leaves much to be desired when, for example, a patient asks why an AI system flagged an image as containing (or not containing) a tumor.
    Historically, researchers have answered questions like this with a slew of tools designed to decipher the black box from the outside in. Unfortunately, the researchers using them often find themselves in the plight of an unfortunate eavesdropper, leaning against a locked door with an empty glass to their ear.
    “It would be so much easier to simply open the door, walk inside the room, and listen to the conversation firsthand,” Sengupta said.
    To further complicate the matter, many variations of these interpretation tools exist. This means that any given black box may be interpreted in “plausible but different” ways, Sengupta said.
    “And now the question is: which interpretation do you believe?” he said. “There is a chance that your choice will be influenced by your subjective bias, and therein lies the main problem with traditional methods.”
    Sengupta’s solution? An entirely new type of AI model that interprets itself every time — that explains each decision instead of blandly reporting the binary of “tumor versus non-tumor,” Sengupta said.
    No water glass needed, in other words, because the door has disappeared.
    Mapping the model
    A yogi learning a new posture must practice it repeatedly. An AI model trained to tell cats from dogs studies countless images of both quadrupeds.
    An AI model functioning as a doctor’s assistant is raised on a diet of thousands of medical images, some with abnormalities and some without. When faced with something never before seen, it runs a quick analysis and spits out a number between 0 and 1. If the number is less than 0.5, the image is assumed not to contain a tumor; a number greater than 0.5 warrants a closer look.
    Sengupta’s new AI model mimics this setup with a twist: the model produces a value plus a visual map explaining its decision.
    The map — referred to by the researchers as an equivalency map, or E-map for short — is essentially a transformed version of the original X-ray, mammogram, or other medical image. Like a paint-by-numbers canvas, each region of the E-map is assigned a number. The greater the value, the more medically interesting the region is for predicting the presence of an anomaly. The model sums up the values to arrive at its final figure, which then informs the diagnosis.
    “For example, if the total sum is 1, and you have three values represented on the map — .5, .3, and .2 — a doctor can see exactly which areas on the map contributed more to that conclusion and investigate those more fully,” Sengupta said.
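    To make the arithmetic concrete, here is a minimal, hypothetical sketch of such a self-interpreting design in Python (an illustration of the idea described above, not the authors’ released code; the architecture, layer sizes, and names are assumptions): the final layer emits one contribution value per image region, and the diagnosis score is defined as the sum of the map, so the map itself is the explanation.

        import torch
        import torch.nn as nn

        class EMapNet(nn.Module):
            """Sketch of an equivalency-map-style classifier (illustrative only)."""
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, kernel_size=1),  # one value per spatial location
                )

            def forward(self, x):
                emap = self.features(x)           # the "equivalency map", shape (N, 1, H, W)
                score = emap.sum(dim=(1, 2, 3))   # region contributions sum to the decision value
                return torch.sigmoid(score), emap # probability in [0, 1], plus the map

        model = EMapNet()
        image = torch.randn(1, 1, 128, 128)       # stand-in for a mammogram patch
        prob, emap = model(image)
        # prob > 0.5 would warrant a closer look; emap shows which regions drove the call.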
    This way, doctors can double-check how well the deep neural network is working — like a teacher checking the work on a student’s math problem — and respond to patients’ questions about the process.
    “The result is a more transparent, trustable system between doctor and patient,” Sengupta said.
    X marks the spot
    The researchers trained their model on three different disease diagnosis tasks including more than 20,000 total images.
    First, the model reviewed simulated mammograms and learned to flag early signs of tumors. Second, it analyzed optical coherence tomography images of the retina, where it practiced identifying a buildup called drusen that may be an early sign of macular degeneration. Third, the model studied chest X-rays and learned to detect cardiomegaly, a heart enlargement condition that can lead to disease.
    Once the mapmaking model had been trained, the researchers compared its performance to existing black-box AI systems — the ones without a self-interpretation setting. The new model performed comparably to its counterparts in all three categories, with accuracy rates of 77.8% for mammograms, 99.1% for retinal OCT images, and 83% for chest X-rays, compared to the existing systems’ 77.8%, 99.1%, and 83.33%.
    These high accuracy rates are a product of the deep neural network, the non-linear layers of which mimic the nuance of human neurons.
    To create such a complicated system, the researchers peeled the proverbial onion and drew inspiration from linear neural networks, which are simpler and easier to interpret.
    “The question was: How can we leverage the concepts behind linear models to make non-linear deep neural networks also interpretable like this?” said principal investigator Mark Anastasio, a Beckman Institute researcher and the Donald Biggar Willet Professor and Head of the Illinois Department of Bioengineering. “This work is a classic example of how fundamental ideas can lead to some novel solutions for state-of-the-art AI models.”
    The researchers hope that future models will be able to detect and diagnose anomalies all over the body and even differentiate between them.
    “I am excited about our tool’s direct benefit to society, not only in terms of improving disease diagnoses, but also improving trust and transparency between doctors and patients,” Anastasio said.


    A key to the future of robots could be hiding in liquid crystals

    Robots and cameras of the future could be made of liquid crystals, thanks to a new discovery that significantly expands the potential of the chemicals already common in computer displays and digital watches.
    The findings, a simple and inexpensive way to manipulate the molecular properties of liquid crystals with light exposure, are now published in Advanced Materials.
    “Using our method, any lab with a microscope and a set of lenses can arrange the liquid crystal alignment in any pattern they’d want,” said author Alvin Modin, a doctoral researcher studying physics at Johns Hopkins. “Industrial labs and manufacturers could probably adopt the method in a day.”
    Liquid crystal molecules flow like a liquid, but they share a common orientation, as molecules in solids do, and this orientation can change in response to stimuli. They are useful in LCD screens, biomedical imaging instruments, and other devices that require precise control of light and subtle movements. But controlling their alignment in three dimensions requires costly and complicated techniques, Modin said.
    The team, which includes Johns Hopkins physics professor Robert Leheny and assistant research professor Francesca Serra, discovered they could manipulate the three-dimensional orientation of liquid crystals by controlling light exposures of a photosensitive material deposited on glass.
    They shined polarized and unpolarized light at the liquid crystals through a microscope. In polarized light, light waves oscillate in specific directions rather than randomly in all directions, as they would in unpolarized light. The team used the method to create a microscopic lens of liquid crystals able to focus light depending on the polarization of light shining through it.
    First, the team beamed polarized light to align the liquid crystals on a surface. Then, they used regular light to reorient the liquid crystals upward from that plane. This allowed them to control the orientation of two types of common liquid crystals and create patterns with features the size of a few micrometers, a fraction of the thickness of a human hair.

    The findings could lead to the creation of programmable tools that shapeshift in response to stimuli, like those needed in soft, rubberlike robots to handle complex objects and environments or camera lenses that automatically focus depending on lighting conditions, said Serra, who is also an associate professor at the University of Southern Denmark.
    “If I wanted to make an arbitrary three-dimensional shape, like an arm or a gripper, I would have to align the liquid crystals so that when it is subject to a stimulus, this material restructures spontaneously into those shapes,” Serra said. “The missing information until now was how to control this three-dimensional axis of the alignment of liquid crystals, but now we have a way to make that possible.”
    The scientists are working to obtain a patent for their discovery and plan to further test it with different types of liquid crystal molecules and solidified polymers made of these molecules.
    “Certain types of structures couldn’t be attempted before because we didn’t have the right control of the three-dimensional alignment of the liquid crystals,” Serra said. “But now we do, so it is just limited by one’s imagination in finding a clever structure to build with this method, using a three-dimensional varying alignment of liquid crystals.”


    New dressing robot can ‘mimic’ the actions of care-workers

    Scientists have developed a new robot that can ‘mimic’ the two-handed movements of care-workers as they dress an individual.
    Until now, assistive dressing robots, designed to help an elderly person or a person with a disability get dressed, have been created in the laboratory as one-armed machines, but research has shown that this can be uncomfortable or impractical for the person in care.
    To tackle this problem, Dr Jihong Zhu, a robotics researcher at the University of York’s Institute for Safe Autonomy, proposed a two-armed assistive dressing scheme, an approach not attempted in previous research but inspired by care-workers, whose practice shows that specific actions are required to reduce discomfort and distress for the individual in their care.
    It is thought that this technology could be significant in the social care system to allow care-workers to spend less time on practical tasks and more time on the health and mental well-being of individuals.
    Dr Zhu gathered important information on how care-workers moved during a dressing exercise by allowing a robot to observe and learn from human movements and then, through AI, generating a model that mimics how human helpers do their task.
    This allowed the researchers to gather enough data to illustrate that two hands were needed for dressing and not one, as well as information on the angles that the arms make, and the need for a human to intervene and stop or alter certain movements.
    Dr Zhu, from the University of York’s Institute for Safe Autonomy and the School of Physics, Engineering and Technology, said: “We know that practical tasks, such as getting dressed, can be done by a robot, freeing up a care-worker to concentrate more on providing companionship and observing the general well-being of the individual in their care. It has been tested in the laboratory, but for this to work outside of the lab we really needed to understand how care-workers did this task in real-time.

    “We adopted a method called learning from demonstration, which means that you don’t need an expert to programme a robot; a human just demonstrates the motion required of the robot, and the robot learns that action. It was clear that for care-workers two arms were needed to properly attend to the needs of individuals with different abilities.
    “One hand holds the individual’s hand to guide them comfortably through the arm of a shirt, for example, whilst at the same time the other hand moves the garment up and around or over. With the current one-armed machine scheme a patient is required to do too much work in order for a robot to assist them, moving their arm up in the air or bending it in ways that they might not be able to do.”
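    As a rough illustration of learning from demonstration, the sketch below shows its simplest variant, behavioural cloning, in Python (the data shapes, encodings, and regression model are hypothetical stand-ins; the York team’s actual pipeline is not described in this article):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Behavioural cloning sketch (illustrative shapes and placeholder data).
        # Each demonstration is a sequence of (robot state -> demonstrated action)
        # pairs recorded while a care-worker guides the two arms through a dressing motion.
        rng = np.random.default_rng(0)
        states = rng.normal(size=(500, 14))   # e.g., joint angles of both arms + garment pose
        actions = rng.normal(size=(500, 14))  # e.g., the joint velocities the demonstrator produced

        # Fit a policy that maps the current state to the demonstrated action.
        policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
        policy.fit(states, actions)

        # At run time, the learned policy proposes the next motion command,
        # which a supervising human can still halt or redirect by touch.
        next_action = policy.predict(states[:1])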
    The team also built algorithms that made the robotic arms flexible enough to perform the pulling and lifting actions, yet able to be stopped mid-action by the gentle touch of a human hand, or guided out of an action by a hand moving them left or right, up or down, without the robot resisting.
    Dr Zhu said: “Human modelling can really help with efficient and safe human and robot interactions, but it is not only important to ensure it performs the task, but that it can be halted or changed mid-action should an individual desire it. Trust is a significant part of this process, and the next step in this research is testing the robot’s safety limitations and whether it will be accepted by those who need it most.”
    The research, in collaboration with researchers from TU Delft and the Honda Research Institute Europe, was funded by the Honda Research Institute Europe.