More stories


    AI model finds cancer clues at lightning speed

    Researchers at the University of Gothenburg have developed an AI model that increases the potential for detecting cancer through sugar analyses. The AI model is faster and better at finding abnormalities than the current semi-manual method.
    Glycans, or structures of sugar molecules in our cells, can be measured by mass spectrometry. One important use is that the structures can indicate different forms of cancer in the cells.
    However, the data from the mass spectrometer measurement must be carefully analysed by humans to work out the structure from the glycan fragmentation. This process can take anywhere from hours to days for each sample and can only be carried out with high confidence by a small number of experts in the world, as it is essentially detective work learnt over many years.
    Automating the detective work
    The process is thus a bottleneck in the use of glycan analyses, for example for cancer detection, when there are many samples to be analysed.
    Researchers at the University of Gothenburg have developed an AI model to automate this detective work. The AI model, named Candycrunch, solves the task in just a few seconds per test. The results are reported in a scientific article in the journal Nature Methods.
    The AI model was trained using a database of over 500,000 examples of different fragmentations and associated structures of sugar molecules.
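    The core task, mapping a measured fragmentation spectrum to the most likely glycan structure from a reference database, can be sketched as a toy nearest-neighbour lookup. The database entries, fragment masses, and distance metric below are invented for illustration; the actual Candycrunch model is a deep neural network trained on the 500,000-example database described above.

    ```python
    # Toy sketch: match a measured fragment spectrum against a reference
    # database of known glycan structures. Masses and names are hypothetical.

    def spectrum_distance(spec_a, spec_b):
        """Symmetric difference of rounded fragment masses (toy metric)."""
        a = {round(m, 1) for m in spec_a}
        b = {round(m, 1) for m in spec_b}
        return len(a ^ b)

    # Hypothetical reference database: structure name -> fragment masses (m/z)
    REFERENCE = {
        "Gal(b1-4)GlcNAc": [204.1, 366.1, 528.2],
        "Neu5Ac(a2-3)Gal": [274.1, 292.1, 454.2],
    }

    def predict_structure(spectrum):
        """Return the reference structure whose spectrum is closest."""
        return min(REFERENCE, key=lambda s: spectrum_distance(spectrum, REFERENCE[s]))

    measured = [204.1, 366.1, 528.2]  # toy measured fragments
    print(predict_structure(measured))  # -> Gal(b1-4)GlcNAc
    ```

    A learned model replaces the hand-written distance with features learned from hundreds of thousands of annotated spectra, which is what makes the seconds-per-sample speed possible.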

    “The training has enabled Candycrunch to calculate the exact sugar structure in a sample in 90 per cent of cases,” says Daniel Bojar, Associate Senior Lecturer in Bioinformatics at the University of Gothenburg.
    Can find new biomarkers
    This means that the AI model could soon reach the same levels of accuracy as the sequencing of other biological sequences, such as DNA, RNA or proteins.
    Because the AI model is so fast and accurate in its answers, it can accelerate the discovery of glycan-based biomarkers for both diagnosis and prognosis of cancer.
    “We believe that glycan analyses will become a bigger part of biological and clinical research now that we have automated the biggest bottleneck,” says Daniel Bojar.
    The AI model Candycrunch is also able to identify structures that are often missed by human analyses due to their low concentrations. The model can therefore help researchers to find new glycan-based biomarkers.


    How powdered rock could help slow climate change

    On a banana plantation in rural Australia, a second-generation farming family spreads crushed volcanic rock between rows of ripening fruit. Eight thousand kilometers away, two young men in central India dust the same type of rock powder onto their dry-season rice paddy, while across the ocean, a farmer in Kenya sprinkles the powder by hand onto his potato plants. Far to the north in foggy Scotland, a plot of potatoes gets the same treatment, as do cattle pastures on sunny slopes in southern Brazil.

    And from Michigan to Mississippi, farmers are scattering volcanic rock dust on their wheat, soy and corn fields with ag spreaders typically reserved for dispersing crushed limestone to adjust soil acidity.


    ‘World record’ for data transmission speed

    Aston University researchers are part of a team that has sent data at a record rate of 402 terabits per second using commercially available optical fibre.
    This beats their previous record, announced in March 2024, of 301 terabits or 301,000,000 megabits per second using a single, standard optical fibre.
    Compared with Netflix’s recommended internet connection speed of 3 Mbit/s or higher for watching an HD movie, this speed is over 100 million times faster.
    The speed was achieved by using a wider spectrum, using six bands rather than the previous four, which increased capacity for data sharing. Normally just one or two bands are used.
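    The headline figures can be checked with simple arithmetic, assuming (as a simplification) that each wavelength band contributes roughly equal capacity:

    ```python
    # Back-of-envelope check of the article's figures: 402 Tbit/s versus
    # Netflix's 3 Mbit/s HD recommendation, and the gain over the previous
    # 301 Tbit/s record from widening four bands to six.

    record_bps   = 402e12   # 402 terabits per second
    previous_bps = 301e12   # March 2024 record
    netflix_bps  = 3e6      # 3 Mbit/s

    speedup = record_bps / netflix_bps
    print(f"{speedup:.0f}x Netflix HD speed")       # ~134,000,000 -> "over 100 million"

    gain = record_bps / previous_bps - 1
    print(f"{gain:.0%} above previous record")      # ~34% -> "approximately a third"
    ```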
    The international research team included Professor Wladek Forysiak and Dr Ian Philips, who are members of the University’s Aston Institute of Photonic Technologies (AIPT). Led by the Photonic Network Laboratory of the National Institute of Information and Communications Technology (NICT), based in Tokyo, Japan, the team also included Nokia Bell Labs of the USA.
    Together they achieved the feat by constructing the first optical transmission system covering all six wavelength bands (O, E, S, C, L and U) used in fibre optical communication. Aston University contributed specifically by building a set of U-band Raman amplifiers; the U-band is the longest-wavelength part of the combined spectrum, where conventional doped-fibre amplifiers are not presently available from commercial sources.
    Optical fibres are small tubular strands of glass that carry information using light, unlike regular copper cables, which cannot carry data at such speeds.

    As well as increasing capacity by approximately a third, the technique uses so-called “standard fibre” that is already deployed in huge quantities worldwide, so there would be no need to install new specialist cables.
    As demand for data from businesses and individuals increases, this new discovery could help keep broadband prices stable even as capacity and speed improve.
    Aston University’s Dr Philips said: “This finding could help increase capacity on a single fibre so the world would have a higher performing system.
    “The newly developed technology is expected to make a significant contribution to expand the communication capacity of the optical communication infrastructure as future data services rapidly increase demand.”
    His colleague Professor Wladek Forysiak added: “This is a ‘hero experiment’ made possible by a multi-national team effort and very recent technical advances in telecommunications research laboratories from across the world.”
    The results of the experiment were accepted as a post-deadline paper at the 47th International Conference on Optical Fiber Communications (OFC 2024) in the USA on 28 March.
    To help support some of its work in this area Aston University has received funding from EPSRC (UKRI), the Royal Society (RS Exchange grant with NICT) and the EU (European Training Network).


    New computational microscopy technique provides more direct route to crisp images

    For hundreds of years, the clarity and magnification of microscopes were ultimately limited by the physical properties of their optical lenses. Microscope makers pushed those boundaries by making increasingly complicated and expensive stacks of lens elements. Still, scientists had to decide between high resolution and a small field of view on the one hand or low resolution and a large field of view on the other.
    In 2013, a team of Caltech engineers introduced a microscopy technique called FPM (for Fourier ptychographic microscopy). This technology marked the advent of computational microscopy, the use of techniques that wed the sensing of conventional microscopes with computer algorithms that process detected information in new ways to create deeper, sharper images covering larger areas. FPM has since been widely adopted for its ability to acquire high-resolution images of samples while maintaining a large field of view using relatively inexpensive equipment.
    Now the same lab has developed a new method that can outperform FPM in its ability to obtain images free of blurriness or distortion, even while taking fewer measurements. The new technique, described in a paper that appeared in the journal Nature Communications, could lead to advances in such areas as biomedical imaging, digital pathology, and drug screening.
    The new method, dubbed APIC (for Angular Ptychographic Imaging with Closed-form method), has all the advantages of FPM without what could be described as its biggest weakness — namely, that to arrive at a final image, the FPM algorithm relies on starting at one or several best guesses and then adjusting a bit at a time to arrive at its “optimal” solution, which may not always be true to the original image.
    Under the leadership of Changhuei Yang, the Thomas G. Myers Professor of Electrical Engineering, Bioengineering, and Medical Engineering and an investigator with the Heritage Medical Research Institute, the Caltech team realized that it was possible to eliminate this iterative nature of the algorithm.
    Rather than relying on trial and error to try to home in on a solution, APIC solves a linear equation, yielding details of the aberrations, or distortions introduced by a microscope’s optical system. Once the aberrations are known, the system can correct for them, basically performing as though it is ideal, and yielding clear images covering large fields of view.
    “We arrive at a solution of the high-resolution complex field in a closed-form fashion, as we now have a deeper understanding in what a microscope captures, what we already know, and what we need to truly figure out, so we don’t need any iteration,” says Ruizhi Cao (PhD ’24), co-lead author on the paper, a former graduate student in Yang’s lab, and now a postdoctoral scholar at UC Berkeley. “In this way, we can basically guarantee that we are seeing the true final details of a sample.”
    As with FPM, the new method measures not only the intensity of the light seen through the microscope but also an important property of light called “phase,” which is related to the distance that light travels. This property goes undetected by human eyes but contains information that is very useful in terms of correcting aberrations. It was in solving for this phase information that FPM relied on a trial-and-error method, explains Cheng Shen (PhD ’23), co-lead author on the APIC paper, who also completed the work while in Yang’s lab and is now a computer vision algorithm engineer at Apple. “We have proven that our method gives you an analytical solution and in a much more straightforward way. It is faster, more accurate, and leverages some deep insights about the optical system.”
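    The contrast between an iterative fit and a closed-form solve can be illustrated with a toy linear model: if the measurements depend linearly on the unknown aberration coefficients, a single least-squares solve recovers them with no initial guess and no iteration. The three-coefficient "aberration" below is a stand-in for illustration, not the actual APIC forward model.

    ```python
    import numpy as np

    # Toy closed-form recovery: measurements = A @ aberration, so one
    # least-squares solve yields the coefficients directly.

    rng = np.random.default_rng(0)
    true_aberration = np.array([0.5, -1.2, 0.3])   # hypothetical coefficients
    A = rng.normal(size=(10, 3))                   # known linear forward model
    measurements = A @ true_aberration             # noise-free measurements

    recovered, *_ = np.linalg.lstsq(A, measurements, rcond=None)
    print(np.allclose(recovered, true_aberration))  # True: no iteration needed
    ```

    Once the aberration coefficients are known, the system can subtract their effect, which is the sense in which the corrected microscope "performs as though it is ideal."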

    Beyond eliminating the iterative nature of the phase-solving algorithm, the new technique also allows researchers to gather clear images over a large field of view without repeatedly refocusing the microscope. With FPM, if the height of the sample varied even a few tens of microns from one section to another, the person using the microscope would have to refocus in order to make the algorithm work. Since these computational microscopy techniques frequently involve stitching together more than 100 lower-resolution images to piece together the larger field of view, that means APIC can make the process much faster and prevent the possible introduction of human error at many steps.
    “We have developed a framework to correct for the aberrations and also to improve resolution,” says Cao. “Those two capabilities can be potentially fruitful for a broader range of imaging systems.”
    Yang says the development of APIC is vital to the broader scope of work his lab is currently working on to optimize image data input for artificial intelligence (AI) applications. “Recently, my lab showed that AI can outperform expert pathologists at predicting metastatic progression from simple histopathology slides from lung cancer patients,” says Yang. “That prediction ability is exquisitely dependent on obtaining uniformly in-focus and high-quality microscopy images, something that APIC is highly suited for.”


    Soft, stretchy electrode simulates touch sensations using electrical signals

    A team of researchers led by the University of California San Diego has developed a soft, stretchy electronic device capable of simulating the feeling of pressure or vibration when worn on the skin. This device, reported in a paper published in Science Robotics, represents a step towards creating haptic technologies that can reproduce a more varied and realistic range of touch sensations.
    The device consists of a soft, stretchable electrode attached to a silicone patch. It can be worn like a sticker on either the fingertip or forearm. The electrode, in direct contact with the skin, is connected to an external power source via wires. By sending a mild electrical current through the skin, the device can produce sensations of either pressure or vibration depending on the signal’s frequency.
    “Our goal is to create a wearable system that can deliver a wide gamut of touch sensations using electrical signals — without causing pain for the wearer,” said study co-first author Rachel Blau, a nano engineering postdoctoral researcher at the UC San Diego Jacobs School of Engineering.
    Existing technologies that recreate a sense of touch through electrical stimulation often induce pain due to the use of rigid metal electrodes, which do not conform well to the skin. The air gaps between these electrodes and the skin can result in painful electrical currents.
    To address these issues, Blau and a team of researchers led by Darren Lipomi, a professor in the Aiiso Yufeng Li Family Department of Chemical and Nano Engineering at UC San Diego, developed a soft, stretchy electrode that seamlessly conforms to the skin.
    The electrode is made of a new polymer material constructed from the building blocks of two existing polymers: a conductive, rigid polymer known as PEDOT:PSS, and a soft, stretchy polymer known as PPEGMEA. “By optimizing the ratio of these [polymer building blocks], we molecularly engineered a material that is both conductive and stretchable,” said Blau.
    The polymer electrode is laser-cut into a spring-shaped, concentric design and attached to a silicone substrate. “This design enhances the electrode’s stretchability and ensures that the electrical current targets a specific location on the skin, thus providing localized stimulation to prevent any pain,” said Abdulhameed Abdal, a Ph.D. student in the Department of Mechanical and Aerospace Engineering at UC San Diego and the study’s other co-first author. Abdal and Blau worked on the synthesis and fabrication of the electrode with UC San Diego nano engineering undergraduate students Yi Qie, Anthony Navarro and Jason Chin.

    In tests, the electrode device was worn on the forearm by 10 participants. In collaboration with behavioral scientists and psychologists at the University of Amsterdam, the researchers first identified the lowest level of electrical current detectable. They then adjusted the frequency of the electrical stimulation, allowing participants to experience sensations categorized as either pressure or vibration.
    “We found that by increasing the frequency, participants felt more vibration rather than pressure,” said Abdal. “This is interesting because biophysically, it was never known exactly how current is perceived by the skin.”
    The new insights could pave the way for the development of advanced haptic devices for applications such as virtual reality, medical prosthetics and wearable technology.
    This work was supported by the National Science Foundation Disability and Rehabilitation Engineering program (CBET-2223566). This work was performed in part at the San Diego Nanotechnology Infrastructure (SDNI) at UC San Diego, a member of the National Nanotechnology Coordinated Infrastructure, which is supported by the National Science Foundation (grant ECCS-1542148).


    Can A.I. tell you if you have osteoporosis? Newly developed deep learning model shows promise

    Osteoporosis is so difficult to detect in its early stages that it’s called the “silent disease.” What if artificial intelligence could help predict a patient’s chances of having the bone-loss disease before ever stepping into a doctor’s office?
    Tulane University researchers made progress toward that vision by developing a new deep learning algorithm that outperformed existing computer-based osteoporosis risk prediction methods, potentially leading to earlier diagnoses and better outcomes for patients at risk of osteoporosis.
    Their results were recently published in Frontiers in Artificial Intelligence.
    Deep learning models have gained notice for their ability to mimic human neural networks and find trends within large datasets without being specifically programmed to do so. Researchers tested the deep neural network (DNN) model against four conventional machine learning algorithms and a traditional regression model, using data from over 8,000 participants aged 40 and older in the Louisiana Osteoporosis Study. The DNN achieved the best overall predictive performance, measured by scoring each model’s ability to identify true positives and avoid mistakes.
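    "Identifying true positives and avoiding mistakes" is commonly summarized by ROC-AUC: the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. The scores and labels below are invented for illustration; the study compared a DNN against four conventional algorithms and a regression model on real data from over 8,000 participants.

    ```python
    # Toy model comparison via a hand-rolled ROC-AUC. All scores are invented.

    def roc_auc(labels, scores):
        """Probability a random positive outscores a random negative."""
        pos = [s for l, s in zip(labels, scores) if l == 1]
        neg = [s for l, s in zip(labels, scores) if l == 0]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    labels      = [1, 1, 1, 0, 0, 0]
    dnn_scores  = [0.9, 0.8, 0.6, 0.4, 0.3, 0.1]   # toy "DNN" risk scores
    base_scores = [0.9, 0.4, 0.5, 0.6, 0.3, 0.2]   # toy baseline scores

    print(roc_auc(labels, dnn_scores))   # 1.0 (perfect separation)
    print(roc_auc(labels, base_scores))  # ~0.78
    ```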
    “The earlier osteoporosis risk is detected, the more time a patient has for preventative measures,” said lead author Chuan Qiu, a research assistant professor at the Tulane School of Medicine Center for Biomedical Informatics and Genomics. “We were pleased to see our DNN model outperform other models in accurately predicting the risk of osteoporosis in an aging population.”
    In testing the algorithms using a large sample size of real-world health data, the researchers were also able to identify the 10 most important factors for predicting osteoporosis risk: weight, age, gender, grip strength, height, beer drinking, diastolic pressure, alcohol drinking, years of smoking, and income level.
    Notably, the simplified DNN model using these top 10 risk factors performed nearly as well as the full model which included all risk factors.
    While Qiu admitted that there is much more work to be done before an AI platform can be used by the public to predict an individual’s risk of osteoporosis, he said identifying the benefits of the deep learning model was a step in that direction.
    “Our final aim is to allow people to enter their information and receive highly accurate osteoporosis risk scores to empower them to seek treatment to strengthen their bones and reduce any further damage,” Qiu said.


    Wireless receiver blocks interference for better mobile device performance

    The growing prevalence of high-speed wireless communication devices, from 5G mobile phones to sensors for autonomous vehicles, is leading to increasingly crowded airwaves. This makes the ability to block interfering signals that can hamper device performance an even more important — and more challenging — problem.
    With these and other emerging applications in mind, MIT researchers demonstrated a new millimeter-wave multiple-input-multiple-output (MIMO) wireless receiver architecture that can handle stronger spatial interference than previous designs. MIMO systems have multiple antennas, enabling them to transmit and receive signals from different directions. Their wireless receiver senses and blocks spatial interference at the earliest opportunity, before unwanted signals have been amplified, which improves performance.
    Key to this MIMO receiver architecture is a special circuit that can target and cancel out unwanted signals, known as a nonreciprocal phase shifter. By making a novel phase shifter structure that is reconfigurable, low-power, and compact, the researchers show how it can be used to cancel out interference earlier in the receiver chain.
    Their receiver can block up to four times more interference than some similar devices. In addition, the interference-blocking components can be switched on and off as needed to conserve energy.
    In a mobile phone, such a receiver could help mitigate signal quality issues that can lead to slow and choppy Zoom calling or video streaming.
    “There is already a lot of utilization happening in the frequency ranges we are trying to use for new 5G and 6G systems. So, anything new we are trying to add should already have these interference-mitigation systems installed. Here, we’ve shown that using a nonreciprocal phase shifter in this new architecture gives us better performance. This is quite significant, especially since we are using the same integrated platform as everyone else,” says Negar Reiskarimian, the X-Window Consortium Career Development Assistant Professor in the Department of Electrical Engineering and Computer Science (EECS), a member of the Microsystems Technology Laboratories and Research Laboratory of Electronics (RLE), and the senior author of a paper on this receiver.
    Reiskarimian wrote the paper with EECS graduate students Shahabeddin Mohin, who is the lead author, Soroush Araei, and Mohammad Barzgari, an RLE postdoc. The work was recently presented at the IEEE Radio Frequency Circuits Symposium and received the Best Student Paper Award.

    Blocking interference
    Digital MIMO systems have an analog and a digital portion. The analog portion uses antennas to receive signals, which are amplified, down-converted, and passed through an analog-to-digital converter before being processed in the digital domain of the device. In this case, digital beamforming is required to retrieve the desired signal.
    But if a strong, interfering signal coming from a different direction hits the receiver at the same time as a desired signal, it can saturate the amplifier so the desired signal is drowned out. Digital MIMOs can filter out unwanted signals, but this filtering occurs later in the receiver chain. If the interference is amplified along with the desired signal, it is more difficult to filter out later.
    “The output of the initial low-noise amplifier is the first place you can do this filtering with minimal penalty, so that is exactly what we are doing with our approach,” Reiskarimian says.
    The researchers built and installed four nonreciprocal phase shifters immediately at the output of the first amplifier in each receiver chain, all connected to the same node. These phase shifters can pass signal in both directions and sense the angle of an incoming interfering signal. The devices can adjust their phase until they cancel out the interference.
    The phase of these devices can be precisely tuned, so they can sense and cancel an unwanted signal before it passes to the rest of the receiver, blocking interference before it affects any other parts of the receiver. In addition, the phase shifters can follow signals to continue blocking interference if it changes location.
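    The underlying trigonometric idea is simple: a tone shifted by π radians (180 degrees) is the exact negative of the original, so injecting the shifted copy cancels the interference while leaving the desired signal untouched. The frequencies and phases below are arbitrary; the actual receiver does this in the analog domain with nonreciprocal phase shifters at the first amplifier's output.

    ```python
    import math

    # Toy phase-based cancellation of an interfering tone.

    def tone(freq_hz, phase, n=64, fs=1000.0):
        return [math.sin(2 * math.pi * freq_hz * k / fs + phase) for k in range(n)]

    desired      = tone(50.0, 0.0)
    interference = tone(120.0, 0.3)
    received = [d + i for d, i in zip(desired, interference)]

    # Cancel: inject the interference shifted by pi, so the two copies sum to zero.
    canceller = tone(120.0, 0.3 + math.pi)
    cleaned = [r + c for r, c in zip(received, canceller)]

    residual = max(abs(c - d) for c, d in zip(cleaned, desired))
    print(f"max residual after cancellation: {residual:.2e}")  # ~0 (float rounding)
    ```

    Tracking a moving interferer, as the article describes, amounts to continuously re-estimating that phase offset.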

    “If you start getting disconnected or your signal quality goes down, you can turn this on and mitigate that interference on the fly. Because ours is a parallel approach, you can turn it on and off with minimal effect on the performance of the receiver itself,” Reiskarimian adds.
    A compact device
    In addition to making their novel phase shifter architecture tunable, the researchers designed them to use less space on the chip and consume less power than typical nonreciprocal phase shifters.
    Once the researchers had done the analysis to show their idea would work, their biggest challenge was translating the theory into a circuit that achieved their performance goals. At the same time, the receiver had to meet strict size restrictions and a tight power budget, or it wouldn’t be useful in real-world devices.
    In the end, the team demonstrated a compact MIMO architecture on a 3.2-square-millimeter chip that could block signals which were up to four times stronger than what other devices could handle. Simpler than typical designs, their phase shifter architecture is also more energy efficient.
    Moving forward, the researchers want to scale up their device to larger systems, as well as enable it to perform in the new frequency ranges utilized by 6G wireless devices. These frequency ranges are prone to powerful interference from satellites. In addition, they would like to adapt nonreciprocal phase shifters to other applications.
    This research was supported, in part, by the MIT Center for Integrated Circuits and Systems.


    Study reveals why AI models that analyze medical images can be biased

    Artificial intelligence models often play a role in medical diagnoses, especially when it comes to analyzing images such as X-rays. However, studies have found that these models don’t always perform well across all demographic groups, usually faring worse on women and people of color.
    These models have also been shown to develop some surprising abilities. In 2022, MIT researchers reported that AI models can make accurate predictions about a patient’s race from their chest X-rays — something that the most skilled radiologists can’t do.
    That research team has now found that the models that are most accurate at making demographic predictions also show the biggest “fairness gaps” — that is, discrepancies in their ability to accurately diagnose images of people of different races or genders. The findings suggest that these models may be using “demographic shortcuts” when making their diagnostic evaluations, which lead to incorrect results for women, Black people, and other groups, the researchers say.
    “It’s well-established that high-capacity machine-learning models are good predictors of human demographics such as self-reported race or sex or age. This paper re-demonstrates that capacity, and then links that capacity to the lack of performance across different groups, which has never been done,” says Marzyeh Ghassemi, an MIT associate professor of electrical engineering and computer science, a member of MIT’s Institute for Medical Engineering and Science, and the senior author of the study.
    The researchers also found that they could retrain the models in a way that improves their fairness. However, their approach to “debiasing” worked best when the models were tested on the same types of patients they were trained on, such as patients from the same hospital. When these models were applied to patients from different hospitals, the fairness gaps reappeared.
    “I think the main takeaways are, first, you should thoroughly evaluate any external models on your own data because any fairness guarantees that model developers provide on their training data may not transfer to your population. Second, whenever sufficient data is available, you should train models on your own data,” says Haoran Zhang, an MIT graduate student and one of the lead authors of the new paper. MIT graduate student Yuzhe Yang is also a lead author of the paper, which will appear in Nature Medicine. Judy Gichoya, an associate professor of radiology and imaging sciences at Emory University School of Medicine, and Dina Katabi, the Thuan and Nicole Pham Professor of Electrical Engineering and Computer Science at MIT, are also authors of the paper.
    Removing bias
    As of May 2024, the FDA has approved 882 AI-enabled medical devices, with 671 of them designed to be used in radiology. Since 2022, when Ghassemi and her colleagues showed that these diagnostic models can accurately predict race, they and other researchers have shown that such models are also very good at predicting gender and age, even though the models are not trained on those tasks.

    “Many popular machine learning models have superhuman demographic prediction capacity — radiologists cannot detect self-reported race from a chest X-ray,” Ghassemi says. “These are models that are good at predicting disease, but during training are learning to predict other things that may not be desirable.” In this study, the researchers set out to explore why these models don’t work as well for certain groups. In particular, they wanted to see if the models were using demographic shortcuts to make predictions that ended up being less accurate for some groups. These shortcuts can arise in AI models when they use demographic attributes to determine whether a medical condition is present, instead of relying on other features of the images.
    Using publicly available chest X-ray datasets from Beth Israel Deaconess Medical Center in Boston, the researchers trained models to predict whether patients had one of three different medical conditions: fluid buildup in the lungs, collapsed lung, or enlargement of the heart. Then, they tested the models on X-rays that were held out from the training data.
    Overall, the models performed well, but most of them displayed “fairness gaps” — that is, discrepancies between accuracy rates for men and women, and for white and Black patients.
    The models were also able to predict the gender, race, and age of the X-ray subjects. Additionally, there was a significant correlation between each model’s accuracy in making demographic predictions and the size of its fairness gap. This suggests that the models may be using demographic categorizations as a shortcut to make their disease predictions.
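    A "fairness gap" in this sense is just the difference in diagnostic accuracy between demographic groups. The predictions, labels, and group assignments below are invented; the study measured such gaps for sex and race on chest X-ray models.

    ```python
    # Toy computation of a fairness gap between two groups.

    def accuracy(preds, labels):
        return sum(p == l for p, l in zip(preds, labels)) / len(labels)

    def fairness_gap(preds, labels, groups):
        """Absolute accuracy difference between the two groups present."""
        accs = {}
        for g in set(groups):
            idx = [i for i, gi in enumerate(groups) if gi == g]
            accs[g] = accuracy([preds[i] for i in idx], [labels[i] for i in idx])
        a, b = accs.values()
        return abs(a - b)

    preds  = [1, 0, 1, 1, 0, 0, 1, 0]
    labels = [1, 0, 1, 0, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    print(fairness_gap(preds, labels, groups))  # 0.25: group A 75%, group B 100%
    ```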
    The researchers then tried to reduce the fairness gaps using two types of strategies. For one set of models, they trained them to optimize “subgroup robustness,” meaning that the models are rewarded for having better performance on the subgroup for which they have the worst performance, and penalized if their error rate for one group is higher than the others.
    In another set of models, the researchers forced them to remove any demographic information from the images, using “group adversarial” approaches. Both of these strategies worked fairly well, the researchers found.
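    The subgroup-robustness objective can be summarized in one line: instead of minimizing the average loss, the model is judged by its worst group's loss, so training pressure concentrates on whichever group it serves worst. The per-group error rates below are invented for illustration.

    ```python
    # Toy contrast between an average-loss objective and a worst-group
    # ("subgroup robustness") objective. Group losses are hypothetical.

    group_losses = {"white_male": 0.10, "white_female": 0.12,
                    "black_male": 0.15, "black_female": 0.25}

    average_loss = sum(group_losses.values()) / len(group_losses)
    worst_group, worst_loss = max(group_losses.items(), key=lambda kv: kv[1])

    print(f"average loss:     {average_loss:.3f}")
    print(f"robust objective: {worst_loss:.3f} (worst group: {worst_group})")
    ```

    Optimizing the second quantity rewards a model for improving its worst-performing subgroup, which is the behaviour the researchers describe.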

    “For in-distribution data, you can use existing state-of-the-art methods to reduce fairness gaps without making significant trade-offs in overall performance,” Ghassemi says. “Subgroup robustness methods force models to be sensitive to mispredicting a specific group, and group adversarial methods try to remove group information completely.”
    Not always fairer
    However, those approaches only worked when the models were tested on data from the same types of patients that they were trained on — for example, only patients from the Beth Israel Deaconess Medical Center dataset.
    When the researchers tested the models that had been “debiased” using the BIDMC data to analyze patients from five other hospital datasets, they found that the models’ overall accuracy remained high, but some of them exhibited large fairness gaps.
    “If you debias the model in one set of patients, that fairness does not necessarily hold as you move to a new set of patients from a different hospital in a different location,” Zhang says.
    This is worrisome because in many cases, hospitals use models that have been developed on data from other hospitals, especially in cases where an off-the-shelf model is purchased, the researchers say.
    “We found that even state-of-the-art models which are optimally performant in data similar to their training sets are not optimal — that is, they do not make the best trade-off between overall and subgroup performance — in novel settings,” Ghassemi says. “Unfortunately, this is actually how a model is likely to be deployed. Most models are trained and validated with data from one hospital, or one source, and then deployed widely.”
    The researchers found that the models that were debiased using group adversarial approaches showed slightly more fairness when tested on new patient groups than those debiased with subgroup robustness methods. They now plan to try to develop and test additional methods to see if they can create models that do a better job of making fair predictions on new datasets.
    The findings suggest that hospitals that use these types of AI models should evaluate them on their own patient population before beginning to use them, to make sure they aren’t giving inaccurate results for certain groups.
    The research was funded by a Google Research Scholar Award, the Robert Wood Johnson Foundation Harold Amos Medical Faculty Development Program, RSNA Health Disparities, the Lacuna Fund, the Gordon and Betty Moore Foundation, the National Institute of Biomedical Imaging and Bioengineering, and the National Heart, Lung, and Blood Institute.