More stories

  • A step towards AI-based precision medicine

    Artificial intelligence (AI) that finds patterns in complex biological data could eventually contribute to the development of individually tailored healthcare. Researchers at Linköping University, Sweden, have developed an AI-based method applicable to a range of medical and biological questions. Their models can, for instance, accurately estimate people’s chronological age and determine whether they have been smokers.
    Many factors can affect which of all our genes are used at any given point in time; smoking, dietary habits and environmental pollution are some of them. This regulation of gene activity can be likened to a power switch that determines which genes are switched on or off, without altering the actual genes, and is called epigenetics.
    Researchers at Linköping University have used epigenetic data from more than 75,000 human samples to train a large number of AI neural network models. They hope that such AI-based models could eventually be used in precision medicine to develop treatments and preventive strategies tailored to the individual. Their models are of the autoencoder type, which self-organises the information and finds interrelation patterns in the large amount of data.
    To test their model, the LiU researchers compared it with existing models. Models of the effects of smoking on the body already exist, building on the fact that specific epigenetic changes reflect how smoking affects the functioning of the lungs. These traces remain in the DNA long after a person has quit smoking, and models of this type can identify whether someone is a current, former or never smoker. Other models can estimate an individual’s chronological age from epigenetic markers, or group individuals according to whether they have a disease or are healthy.
    The LiU researchers trained their autoencoder and then used the result to answer three different questions: age determination, smoker status and diagnosis of the disease systemic lupus erythematosus, SLE. The existing models rely on selected epigenetic markers known to be associated with the condition they aim to classify; even so, it turned out that the LiU researchers’ autoencoders performed as well as or better than them.
    “Our models not only enable us to classify individuals based on their epigenetic data. We found that our models can identify previously known epigenetic markers used in other models, but also new markers associated with the condition we’re examining. One example of this is that our model for smoking identifies markers associated with respiratory diseases, such as lung cancer, and DNA damage,” says David Martínez, PhD student at Linköping University.
    The objective of the autoencoder models is to compress extremely complex biological data into a representation of the most relevant characteristics and patterns in the data.
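    To make that idea concrete, here is a minimal autoencoder sketch in Python/PyTorch. The layer sizes, the latent dimension and the use of methylation beta values as input are illustrative assumptions, not details taken from the study.

    ```python
    # Minimal autoencoder sketch: compress a high-dimensional epigenetic
    # profile into a small latent code, then reconstruct it.
    import torch
    import torch.nn as nn

    class Autoencoder(nn.Module):
        def __init__(self, n_sites=5_000, latent_dim=64):  # sizes are illustrative
            super().__init__()
            # Encoder squeezes the profile down to a compact representation
            self.encoder = nn.Sequential(
                nn.Linear(n_sites, 512), nn.ReLU(),
                nn.Linear(512, latent_dim),
            )
            # Decoder reconstructs the profile from that representation
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 512), nn.ReLU(),
                nn.Linear(512, n_sites), nn.Sigmoid(),  # methylation values lie in [0, 1]
            )

        def forward(self, x):
            z = self.encoder(x)        # compressed code
            return self.decoder(z), z

    model = Autoencoder()
    x = torch.rand(8, 5_000)                  # a toy batch of methylation profiles
    recon, z = model(x)
    loss = nn.functional.mse_loss(recon, x)   # training minimises reconstruction error
    # The latent code z can then feed downstream classifiers
    # (age, smoking status, SLE diagnosis).
    ```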

  • New easy-to-use optical chip can self-configure to perform various functions

    Researchers have developed an easy-to-use optical chip that can configure itself to achieve various functions. The positive real-valued matrix computation it supports gives the chip the potential to be used in applications requiring optical neural networks. Optical neural networks can be used for a variety of data-heavy tasks such as image classification, gesture interpretation and speech recognition.
    Photonic integrated circuits that can be reconfigured after manufacturing to perform different functions have been developed previously. However, they tend to be difficult to configure because the user needs to understand the internal structure and principles of the chip and individually adjust its basic units.
    “Our new chip can be treated as a black box, meaning users don’t need to understand its internal structure to change its function,” said research team leader Jianji Dong from Huazhong University of Science and Technology in China. “They only need to set a training objective, and, with computer control, the chip will self-configure to achieve the desired functionality based on the input and output.”
    In the journal Optical Materials Express, the researchers describe their new chip, which is based on a network of waveguide-based optical components called Mach-Zehnder interferometers (MZIs) arranged in a quadrilateral pattern. The researchers showed that the chip can self-configure to perform optical routing, low-loss light energy splitting and the matrix computations used to create neural networks.
    “In the future, we anticipate the realization of larger-scale on-chip programmable waveguide networks,” said Dong. “With additional development, it may become possible to achieve optical functions comparable to those of field-programmable gate arrays (FPGAs) — electrical integrated circuits that can be reprogrammed to perform any desired application after they are manufactured.”
    Creating the programmable MZI network
    The on-chip quadrilateral MZI network is potentially useful for applications involving optical neural networks, which are created from networks of interconnected nodes. To use an optical neural network effectively, the network must be trained with known data to determine the weights between each pair of nodes, a task that involves matrix multiplication.
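    As a rough illustration of the building block involved, the sketch below (Python/NumPy) models a single MZI as a tunable 2x2 operator; meshes of such devices compose larger matrices. The phase conventions are one common textbook choice, not necessarily those used on this chip.

    ```python
    # A Mach-Zehnder interferometer as a tunable 2x2 linear operator.
    import numpy as np

    BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # 50:50 beam splitter

    def mzi(theta, phi):
        """2x2 unitary of an MZI: internal phase theta, input phase phi."""
        internal = np.diag([np.exp(1j * theta), 1.0])
        external = np.diag([np.exp(1j * phi), 1.0])
        return BS @ internal @ BS @ external

    # theta tunes the power splitting between the two output ports:
    for theta in (0.0, np.pi / 2, np.pi):
        T = mzi(theta, 0.0)
        power = np.abs(T @ np.array([1.0, 0.0])) ** 2
        print(f"theta={theta:.2f} -> output powers {power.round(3)}")
    # Cascading many such elements lets a mesh realize the matrix
    # multiplications needed for an optical neural network layer.
    ```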

  • AI speeds up identification of brain tumor type

    What type of brain tumor does this patient have? AI technology can help determine this during surgery itself, within 1.5 hours, a process that normally takes a week. The new technology allows neurosurgeons to adjust their surgical strategies on the spot. Researchers from UMC Utrecht, together with pathologists and neurosurgeons from the Princess Máxima Center for pediatric oncology and Amsterdam UMC, have now published this study in Nature.
    Every year in the Netherlands, 1,400 adults and 150 children are diagnosed with a tumor in the brain or spinal cord. Surgery is often the first step in treatment. Currently, during surgery, neurosurgeons do not know precisely what type of brain tumor, or what degree of aggressiveness, they are dealing with. The exact diagnosis is usually only available one week after surgery, once the tumor tissue has been visually and molecularly analyzed by the pathologist.
    Deep-learning algorithm
    Researchers from UMC Utrecht have developed a new ‘deep-learning algorithm’, a form of artificial intelligence, which significantly speeds up diagnosis.
    Jeroen de Ridder, research group leader within UMC Utrecht and Oncode Institute: “Recently, Nanopore sequencing became available: a technology that helps to read DNA in real time. For this, we developed an algorithm that is equipped to learn from millions of simulated realistic ‘DNA snapshots’. With this algorithm, we can identify the tumor type within 20 to 40 minutes. And that is fast enough to directly adjust the surgical strategy, if necessary.”
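    A toy sketch of that training idea in Python/PyTorch: full methylation profiles are randomly subsampled into sparse “snapshots,” so the classifier learns to cope with whatever subset of sites has been sequenced so far. All sizes, rates and names here are assumptions for illustration only.

    ```python
    # Train a classifier on simulated sparse methylation 'snapshots'.
    import numpy as np
    import torch
    import torch.nn as nn

    n_sites, n_classes = 2_000, 5
    rng = np.random.default_rng(0)
    profiles = rng.random((n_classes, n_sites))  # per-class methylation rates

    def snapshot(label, coverage=0.05):
        """Sparse read-out: unobserved sites 0, observed sites +1/-1 calls."""
        observed = rng.random(n_sites) < coverage
        calls = np.where(rng.random(n_sites) < profiles[label], 1.0, -1.0)
        return np.where(observed, calls, 0.0).astype(np.float32)

    model = nn.Sequential(nn.Linear(n_sites, 128), nn.ReLU(),
                          nn.Linear(128, n_classes))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(2_000):  # the real setup trains on millions of snapshots
        labels = rng.integers(n_classes, size=32)
        batch = torch.tensor(np.stack([snapshot(y) for y in labels]))
        loss = loss_fn(model(batch), torch.tensor(labels))
        opt.zero_grad(); loss.backward(); opt.step()
    ```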
    Tested and trained with biobank
    Bastiaan Tops is in charge of the Pediatric Oncology Laboratory at the Princess Máxima Center. He brought together the new technology and the needs of the operating room. This was made possible by funding from the KiKa foundation and, more specifically, by the extensive biobank that the Máxima Center has maintained for years. Among other things, this biobank stores tissue from children with brain tumors. The algorithm was trained and tested using the biobank.

  • A new way to erase quantum computer errors

    Quantum computers of the future hold promise for solving all sorts of problems. For example, they could lead to more sustainable materials and new medicines, and even crack the hardest problems in fundamental physics. But compared to the classical computers in use today, rudimentary quantum computers are more prone to errors. Wouldn’t it be nice if researchers could just take out a special quantum eraser and get rid of the mistakes?
    Reporting in the journal Nature, a group of researchers led by Caltech is among the first to demonstrate a type of quantum eraser. The physicists show that they can pinpoint and correct mistakes, known as “erasure” errors, in quantum computing systems.
    “It’s normally very hard to detect errors in quantum computers, because just the act of looking for errors causes more to occur,” says Adam Shaw, co-lead author of the new study and a graduate student in the laboratory of Manuel Endres, a professor of physics at Caltech. “But we show that with some careful control, we can precisely locate and erase certain errors without consequence, which is where the name erasure comes from.”
    Quantum computers are based on the laws of physics that govern the subatomic realm, such as entanglement, a phenomenon in which particles remain connected to and mimic each other without being in direct contact. In the new study, the researchers focused on a type of quantum-computing platform that uses arrays of neutral atoms, or atoms without a charge. Specifically, they manipulated individual alkaline-earth neutral atoms confined inside “tweezers” made of laser light. The atoms were excited to high-energy states — or “Rydberg” states — in which neighboring atoms start interacting.
    “The atoms in our quantum system talk to each other and generate entanglement,” explains Pascal Scholl, the other co-lead author of the study and a former postdoctoral scholar at Caltech now working at a quantum computing company in France called PASQAL.
    Entanglement is what allows quantum computers to outperform classical computers. “However, nature doesn’t like to remain in these quantum entangled states,” Scholl explains. “Eventually, an error happens, which breaks the entire quantum state. These entangled states can be thought of as baskets full of apples, where the atoms are the apples. With time, some apples will start to rot, and if these apples are not removed from the basket and replaced by fresh ones, all the apples will rapidly become rotten. It is not clear how to fully prevent these errors from happening, so the only viable option nowadays is to detect and correct them.”
    The new error-catching system is designed in such a way that erroneous atoms fluoresce, or light up, when hit with a laser. “We have images of the glowing atoms that tell us where the errors are, so we can either leave them out of the final statistics or apply additional laser pulses to actively correct them,” Scholl says.
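    A toy Monte Carlo in Python, with made-up rates, shows why flagged errors are so valuable: discarding the runs in which a flag was seen sharply raises the fidelity of the statistics that remain.

    ```python
    # Post-selecting on 'no erasure flag seen' removes most errors.
    import numpy as np

    rng = np.random.default_rng(1)
    n_pairs = 100_000
    p_error = 0.05    # chance an entangling attempt fails (illustrative)
    p_detect = 0.95   # fraction of errors that light up as a flag

    errored = rng.random(n_pairs) < p_error
    flagged = errored & (rng.random(n_pairs) < p_detect)

    kept = ~flagged   # keep only the runs with no flag
    print(f"raw fidelity:            {1 - errored.mean():.4f}")
    print(f"after removing erasures: {1 - errored[kept].mean():.4f}")
    ```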
    The theory for implementing erasure detection in neutral atom systems was first developed by Jeff Thompson, a professor of electrical and computer engineering at Princeton University, and his colleagues. That team also recently reported demonstrating the technique in the journal Nature.
    By locating and removing errors in their Rydberg atom system, the Caltech team says that it can improve the overall rate of entanglement, or fidelity. In the new study, the team reports that only one in 1,000 pairs of atoms failed to become entangled. That’s a factor-of-10 improvement over what was achieved previously and is the highest entanglement rate ever observed in this type of system.
    Ultimately, these results bode well for quantum computing platforms that use Rydberg neutral atom arrays. “Neutral atoms are the most scalable type of quantum computer, but they didn’t have high entanglement fidelities until now,” says Shaw.

  • Powering AI could use as much electricity as a small country

    Artificial intelligence (AI) comes with promises to help coders code faster, make driving safer, and make daily tasks less time-consuming. But in a commentary published October 10 in the journal Joule, the founder of Digiconomist shows that the tool, if adopted widely, could have a large energy footprint, one that may in the future exceed the power demands of some countries.
    “Looking at the growing demand for AI services, it’s very likely that energy consumption related to AI will significantly increase in the coming years,” says author Alex de Vries, a Ph.D. candidate at Vrije Universiteit Amsterdam.
    Since 2022, generative AI, which can produce text, images, and other data, has undergone rapid growth, with OpenAI’s ChatGPT as a prominent example. Training these AI tools requires feeding the models large amounts of data, a process that is energy intensive. Hugging Face, an AI-developing company based in New York, reported that its multilingual text-generating AI tool consumed about 433 megawatt-hours (MWh) during training, enough to power 40 average American homes for a year.
    And AI’s energy footprint does not end with training. De Vries’s analysis shows that when the tool is put to work — generating data based on prompts — every time the tool generates a text or image, it also uses a significant amount of computing power and thus energy. For example, ChatGPT could cost 564 MWh of electricity a day to run.
    While companies around the world are working to improve the efficiency of AI hardware and software to make the tool less energy intensive, de Vries notes that increases in machine efficiency often increase demand. In the end, technological advances can lead to a net increase in resource use, a phenomenon known as Jevons paradox.
    “The result of making these tools more efficient and accessible can be that we just allow more applications of it and more people to use it,” de Vries says.
    Google, for example, has been incorporating generative AI into the company’s email service and is testing out powering its search engine with AI. The company currently processes up to 9 billion searches a day. Based on these figures, de Vries estimates that if every Google search used AI, Google would need about 29.2 TWh of power a year, which is equivalent to the annual electricity consumption of Ireland.
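    A quick back-of-envelope check of that figure; the per-request energy below is inferred from the article’s own numbers rather than independently sourced.

    ```python
    # Implied energy per AI-assisted search from the quoted totals.
    searches_per_year = 9e9 * 365        # ~3.3e12 requests a year
    total_wh = 29.2e12                   # 29.2 TWh expressed in watt-hours
    print(f"{total_wh / searches_per_year:.1f} Wh per search")  # ~8.9 Wh
    ```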
    This extreme scenario is unlikely to happen in the short term because of the high costs associated with additional AI servers and bottlenecks in the AI server supply chain, de Vries says. But the production of AI servers is projected to grow rapidly in the near future. By 2027, worldwide AI-related electricity consumption could increase by 85 to 134 TWh annually based on the projection of AI server production.
    The amount is comparable to the annual electricity consumption of countries such as the Netherlands, Argentina, and Sweden. Moreover, improvements in AI efficiency could also enable developers to repurpose some computer processing chips for AI use, which could further increase AI-related electricity consumption.
    “The potential growth highlights that we need to be very mindful about what we use AI for. It’s energy intensive, so we don’t want to put it in all kinds of things where we don’t actually need it,” de Vries says.

  • New study offers improved strategy for social media communications during wildfires

    In the last 20 years, disasters have claimed more than a million lives and caused nearly $3 trillion in economic losses worldwide, according to the United Nations.
    Disaster relief organizations (DROs) mobilize critical resources to help impacted communities, and they use social media to distribute information rapidly and broadly. Many DROs post content via multiple accounts within a single platform to represent both national and local levels.
    New research from the University of Notre Dame, conducted in collaboration with the Canadian Red Cross (CRC) and focused specifically on wildfires, contradicts existing crisis communication theory, which recommends that DROs speak with one voice throughout wildfire response operations.
    “Speak with One Voice? Examining Content Coordination and Social Media Engagement During Disasters” is forthcoming in Information Systems Research from Alfonso Pedraza-Martinez, the Greg and Patty Fox Collegiate Professor of IT, Analytics and Operations at the University of Notre Dame’s Mendoza College of Business.
    Social media informs victims about wildfires, but it also connects volunteers, donors and other supporters. Accounts can send coordinated messages targeting the same audience (match) or different audiences (mismatch).
    According to crisis communication theory, a disaster relief organization’s communication channels should speak with one voice through multiple accounts targeting the same audience, but the team’s study recommends a more nuanced approach.
    “We find the national and local levels should match audiences during the early wildfire response when uncertainty is very high, but they should mismatch audiences during recovery while the situation is still critical but uncertainty has decreased,” said Pedraza-Martinez, who specializes in humanitarian operations and disaster management. “We find that user engagement increases when the national headquarters lead the production of content and the local accounts follow either by tweeting to a matching or mismatching audience, depending on timing in the operation.”
    The study reveals that engagement improves by 4.3 percent from a match only during the uncertain and urgent response phase, while a divergence of content creation decisions, or mismatch, yields 29.6 percent more engagement once uncertainty subsides during the recovery phase.

  • What is the impact of predictive AI in the health care setting?

    Models built on machine learning in health care can be victims of their own success, according to researchers at the Icahn School of Medicine at Mount Sinai and the University of Michigan. Their study assessed the impact of implementing predictive models on the subsequent performance of those and other models. Their findings, that using the models to adjust how care is delivered can alter the baseline assumptions the models were “trained” on, often for the worse, were detailed in the October 9 online issue of Annals of Internal Medicine.
    “We wanted to explore what happens when a machine learning model is deployed in a hospital and allowed to influence physician decisions for the overall benefit of patients,” says first and corresponding author Akhil Vaid, M.D., Clinical Instructor of Data-Driven and Digital Medicine (D3M), part of the Department of Medicine at Icahn Mount Sinai. “For example, we sought to understand the broader consequences when a patient is spared from adverse outcomes like kidney damage or mortality. AI models possess the capacity to learn and establish correlations between incoming patient data and corresponding outcomes, but use of these models, by definition, can alter these relationships. Problems arise when these altered relationships are captured back into medical records.”
    The study simulated critical care scenarios at two major health care institutions, the Mount Sinai Health System in New York and Beth Israel Deaconess Medical Center in Boston, analyzing 130,000 critical care admissions. The researchers investigated three key scenarios:
    1. Model retraining after initial use
    Current practice suggests retraining models to address performance degradation over time. Retraining can improve performance initially by adapting to changing conditions, but the Mount Sinai study shows it can paradoxically lead to further degradation by disrupting the learned relationships between presentation and outcome.
    2. Creating a new model after one has already been in use
    Following a model’s predictions can save patients from adverse outcomes such as sepsis. However, death may follow sepsis, and the model effectively works to prevent both. Any new model developed in the future to predict death will now also be subject to the same disrupted relationships. Since we do not know the exact relationships between all possible outcomes, any data from patients whose care was influenced by machine learning may be inappropriate for training further models, as the toy simulation below illustrates.
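    This sketch (Python/scikit-learn, purely illustrative numbers) reproduces the feedback loop in miniature: a model trained on historical data triggers interventions that erase the correlation it learned, so a model retrained on post-deployment records performs worse on untreated patients.

    ```python
    # Deploying a model changes the data any retrained model will see.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    def cohort(n=20_000):
        x = rng.normal(size=(n, 5))
        risk = 1 / (1 + np.exp(-(2.0 * x[:, 0] - 1.0)))  # feature 0 drives outcome
        return x, risk

    x0, risk0 = cohort()                      # historical, untreated data
    y0 = rng.random(len(risk0)) < risk0
    original = LogisticRegression().fit(x0, y0)

    x1, risk1 = cohort()                      # deployment: flagged patients get treated
    flagged = original.predict_proba(x1)[:, 1] > 0.5
    y1 = rng.random(len(risk1)) < np.where(flagged, 0.2 * risk1, risk1)

    retrained = LogisticRegression().fit(x1, y1)  # learns the disrupted relationship

    x2, risk2 = cohort()                      # fresh, untreated patients
    y2 = rng.random(len(risk2)) < risk2
    print("original  AUC:", round(roc_auc_score(y2, original.predict_proba(x2)[:, 1]), 3))
    print("retrained AUC:", round(roc_auc_score(y2, retrained.predict_proba(x2)[:, 1]), 3))
    ```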

  • AI language models could help diagnose schizophrenia

    Scientists at the UCL Institute for Neurology have developed new tools, based on AI language models, that can characterise subtle signatures in the speech of patients diagnosed with schizophrenia.
    The research, published in PNAS, aims to understand how the automated analysis of language could help doctors and scientists diagnose and assess psychiatric conditions.
    Currently, psychiatric diagnosis is based almost entirely on talking with patients and those close to them, with only a minimal role for tests such as blood tests and brain scans.
    However, this lack of precision prevents a richer understanding of the causes of mental illness, and the monitoring of treatment.
    The researchers asked 26 participants with schizophrenia and 26 control participants to complete two verbal fluency tasks, in which they were asked to name, in five minutes, as many words as they could that either belonged to the category “animals” or started with the letter “p”.
    To analyse the answers given by participants, the team used an AI language model that had been trained on vast amounts of internet text to represent the meaning of words in a similar way to humans. They tested whether the words people spontaneously recalled could be predicted by the AI model, and whether this predictability was reduced in patients with schizophrenia.
    They found that the answers given by control participants were indeed more predictable by the AI model than those generated by people with schizophrenia, and that this difference was largest in patients with more severe symptoms.
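    A simplified stand-in for this kind of measure, in Python: score how close each recalled word sits to the rest of the list in an off-the-shelf embedding space. The embedding model and the scoring rule here are substitutes for the language model used in the study.

    ```python
    # Semantic coherence of a verbal-fluency word list.
    import numpy as np
    import gensim.downloader as api

    vectors = api.load("glove-wiki-gigaword-50")  # small pretrained embeddings

    def coherence(words):
        """Mean cosine similarity of each word to the centroid of the others."""
        vecs = np.stack([vectors[w] for w in words if w in vectors])
        vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
        sims = []
        for i in range(len(vecs)):
            rest = np.delete(vecs, i, axis=0).mean(axis=0)
            sims.append(float(vecs[i] @ rest / np.linalg.norm(rest)))
        return float(np.mean(sims))

    # A tightly clustered list scores higher than a scattered one; the
    # study found patients' answers were less predictable overall.
    print(coherence(["dog", "cat", "horse", "cow", "sheep"]))
    print(coherence(["dog", "piano", "volcano", "lawyer", "sheep"]))
    ```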