More stories

  • New platform optimizes selection of combination cancer therapies

    Researchers at The University of Texas MD Anderson Cancer Center have developed a new bioinformatics platform that predicts optimal treatment combinations for a given group of patients based on co-occurring tumor alterations. In retrospective validation studies, the tool selected combinations that resulted in improved patient outcomes across both pre-clinical and clinical studies.
    The findings were presented today at the American Association for Cancer Research (AACR) Annual Meeting 2022 by principal investigator Anil Korkut, Ph.D., assistant professor of Bioinformatics and Computational Biology. The study results also were published today in Cancer Discovery.
    The platform, called REcurrent Features LEveraged for Combination Therapy (REFLECT), integrates machine learning and cancer informatics algorithms to analyze biological tumor features — including genetic mutations, copy number changes, gene expression and protein expression aberrations — and identify frequent co-occurring alterations that could be targeted by multiple drugs.
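    The underlying co-occurrence analysis can be sketched briefly. The following Python fragment is a minimal, hedged illustration of the first step, counting alteration pairs that recur together across a patient cohort; the toy cohort, the names and the threshold are invented for illustration and are not the actual REFLECT implementation:

    ```python
    from collections import Counter
    from itertools import combinations

    # Toy cohort: each patient is a set of tumor alterations
    # (mutations, copy number changes, expression aberrations).
    # Values are illustrative only.
    cohort = [
        {"PIK3CA_mut", "ERBB2_amp"},
        {"PIK3CA_mut", "ERBB2_amp", "TP53_mut"},
        {"KRAS_mut", "CDKN2A_del"},
        {"PIK3CA_mut", "ERBB2_amp"},
    ]

    def cooccurring_pairs(patients, min_freq=0.25):
        """Return alteration pairs whose co-occurrence frequency across
        the cohort meets the threshold: candidates for dual targeting."""
        counts = Counter()
        for alterations in patients:
            counts.update(combinations(sorted(alterations), 2))
        n = len(patients)
        return {pair: c / n for pair, c in counts.items() if c / n >= min_freq}

    for (a, b), freq in sorted(cooccurring_pairs(cohort).items(),
                               key=lambda kv: -kv[1]):
        print(f"{a} + {b}: co-occurs in {freq:.0%} of patients")
    ```

    In the platform itself, such recurrence statistics are coupled with drug-target matching, so that each frequent pair can be mapped to a candidate drug combination.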
    “Our ultimate goal is to make precision oncology more effective and create meaningful patient benefit,” Korkut said. “We believe REFLECT may be one of the tools that can help overcome some of the current challenges in the field by facilitating both the discovery and the selection of combination therapies matched to the molecular composition of tumors.”
    Targeted therapies have improved clinical outcomes for many patients with cancer, but monotherapies against a single target often lead to treatment resistance. Cancer cells frequently rely on co-occurring alterations, such as mutations in two signaling pathways, to drive tumor progression. Increasing evidence suggests that identifying and targeting both alterations simultaneously could produce more durable responses, Korkut explained.
    Led by Korkut and postdoctoral fellow Xubin Li, Ph.D., the researchers built the REFLECT tool to provide a systematic and unbiased approach to matching patients with optimal combination therapies.

  • Engineering team develops new AI algorithms for high-accuracy and cost-effective medical image diagnostics

    Medical imaging is an important part of modern healthcare, enhancing the precision and reliability of diagnosis and the development of treatments for various diseases. Artificial intelligence has been widely used to further enhance the process.
    However, conventional AI-based medical image diagnosis requires large amounts of annotation as supervision signals for model training. To acquire accurate labels, radiologists prepare radiology reports for each of their patients as part of the clinical routine; annotation staff then extract and confirm structured labels from those reports using human-defined rules and existing natural language processing (NLP) tools. The ultimate accuracy of the extracted labels hinges on the quality of the human work and of the NLP tools, and the method comes at a heavy price, being both labour-intensive and time-consuming.
    An engineering team at the University of Hong Kong (HKU) has developed a new approach, “REFERS” (Reviewing Free-text Reports for Supervision), which can cut this human cost by 90% by automatically acquiring supervision signals from hundreds of thousands of radiology reports. It attains high prediction accuracy, surpassing conventional AI-based approaches to medical image diagnosis.
    The innovative approach marks a solid step towards realizing generalized medical artificial intelligence. The breakthrough was published in Nature Machine Intelligence in the paper titled “Generalized radiograph representation learning via cross-supervision between images and free-text radiology reports.”
    “AI-enabled medical image diagnosis has the potential to support medical specialists in reducing their workload and improving the diagnostic efficiency and accuracy, including but not limited to reducing the diagnosis time and detecting subtle disease patterns,” said Professor YU Yizhou, leader of the team from HKU’s Department of Computer Science under the Faculty of Engineering.
    “We believe abstract and complex logical reasoning sentences in radiology reports provide sufficient information for learning easily transferable visual features. With appropriate training, REFERS directly learns radiograph representations from free-text reports without the need to involve manpower in labelling,” Professor Yu remarked.
    To train REFERS, the research team used a public database of 370,000 X-ray images and their associated radiology reports, covering 14 common chest diseases including atelectasis, cardiomegaly, pleural effusion, pneumonia and pneumothorax. Using only 100 radiographs, the researchers built a radiograph recognition model that attained 83% prediction accuracy. When the number was increased to 1,000, the model reached an accuracy of 88.2%, surpassing a counterpart trained with 10,000 radiologist annotations (87.6%). With 10,000 radiographs, accuracy reached 90.1%. In general, prediction accuracy above 85% is useful in real-world clinical applications.
    REFERS achieves the goal by accomplishing two report-related tasks, i.e., report generation and radiograph-report matching. In the first task, REFERS translates radiographs into text reports by first encoding radiographs into an intermediate representation, which is then used to predict text reports via a decoder network. A cost function is defined to measure the similarity between predicted and real report texts, based on which gradient-based optimization is employed to train the neural network and update its weights.
    As for the second task, REFERS first encodes both radiographs and free-text reports into the same semantic space, where representations of each report and its associated radiographs are aligned via contrastive learning.
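    As a rough, hedged sketch of how these two objectives could be combined into one training loss (PyTorch-style; the loss weighting, temperature and toy tensor shapes are assumptions rather than the authors’ published code):

    ```python
    import torch
    import torch.nn.functional as F

    def refers_style_loss(image_emb, report_emb, decoder_logits, report_tokens,
                          temperature=0.07, alpha=0.5):
        """Combine the two report-related objectives described above:
        (1) report generation: cross-entropy between the decoder's
            predictions and the real report tokens;
        (2) radiograph-report matching: a contrastive loss that aligns
            each radiograph with its own report in a shared space.
        The 50/50 weighting (alpha) is an assumption."""
        # (1) generation loss over the report vocabulary
        gen_loss = F.cross_entropy(
            decoder_logits.reshape(-1, decoder_logits.size(-1)),
            report_tokens.reshape(-1),
        )

        # (2) contrastive alignment: matched pairs lie on the diagonal
        image_emb = F.normalize(image_emb, dim=-1)
        report_emb = F.normalize(report_emb, dim=-1)
        logits = image_emb @ report_emb.t() / temperature
        targets = torch.arange(logits.size(0))
        match_loss = (F.cross_entropy(logits, targets)
                      + F.cross_entropy(logits.t(), targets)) / 2

        return alpha * gen_loss + (1 - alpha) * match_loss

    # Toy shapes: batch of 8 radiographs, reports of 32 tokens, vocab of 1000
    img, rep = torch.randn(8, 256), torch.randn(8, 256)
    logits, tokens = torch.randn(8, 32, 1000), torch.randint(0, 1000, (8, 32))
    print(refers_style_loss(img, rep, logits, tokens))
    ```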
    “Compared to conventional methods that heavily rely on human annotations, REFERS has the ability to acquire supervision from each word in the radiology reports. We can reduce the amount of data annotation by 90%, and with it the cost of building medical artificial intelligence. It marks a significant step towards realizing generalized medical artificial intelligence,” said the paper’s first author Dr ZHOU Hong-Yu.
    Story Source:
    Materials provided by The University of Hong Kong.

  • The ethics of research on 'conscious' artificial brains

    One way in which scientists are studying how the human body grows and ages is by creating artificial organs in the laboratory. The most popular of these organs is currently the organoid, a miniaturized organ made from stem cells. Organoids have been used to model a variety of organs, but brain organoids are the most clouded by controversy.
    Current brain organoids differ in size and maturity from normal brains. More importantly, they do not produce any behavioral output, showing that they are still a primitive model of a real brain. However, as research generates brain organoids of higher complexity, they may eventually gain the ability to feel and think. In anticipation of this, Associate Professor Takuya Niikawa of Kobe University and Assistant Professor Tsutomu Sawai of Kyoto University’s Institute for the Advanced Study of Human Biology (WPI-ASHBi), in collaboration with other philosophers in Japan and Canada, have written a paper on the ethics of research using conscious brain organoids. The paper can be read in the academic journal Neuroethics.
    Working regularly with both bioethicists and neuroscientists who have created brain organoids, the team has been writing extensively about the need to construct guidelines on ethical research. In the new paper, Niikawa, Sawai and their coauthors lay out an ethical framework that assumes brain organoids already have consciousness rather than waiting for the day when we can fully confirm that they do.
    “We believe a precautionary principle should be taken,” Sawai said. “Neither science nor philosophy can agree on whether something has consciousness. Instead of arguing about whether brain organoids have consciousness, we decided they do as a precaution and for the consideration of moral implications.”
    To justify this assumption, the paper explains what brain organoids are and examines what different theories of consciousness suggest about brain organoids, inferring that some of the popular theories of consciousness permit them to possess consciousness.
    Ultimately, the framework proposed by the study recommends that research on human brain organoids follow ethical principles similar to those for animal experiments. The recommendations therefore include using the minimum number of organoids possible and doing the utmost to prevent pain and suffering, while considering the interests of the public and patients.
    “Our framework was designed to be simple and is based on valence experiences and the sophistication of those experiences,” said Niikawa.
    This, the paper explains, provides guidance on how strict the conditions for experiments should be. These conditions should be decided based upon several criteria, which include the physiological state of the organoid, the stimuli to which it responds, the neural structures it possesses, and its cognitive functions.
    Moreover, the paper argues that this framework is not exclusive to brain organoids. It can be applied to anything that is perceived to hold consciousness, such as fetuses, animals and even robots.
    “Our framework depends on the precautionary principle. Something that we believe does not have consciousness today may, through the development of consciousness studies, be found to have consciousness in the future. We can consider how we ought to treat these entities based on our ethical framework,” conclude Niikawa and Sawai.
    Story Source:
    Materials provided by Kyoto University.

  • New transistor could cut 5% from world’s digital energy budget

    A new spin on one of the 20th century’s smallest but grandest inventions, the transistor, could help feed the world’s ever-growing appetite for digital memory while slicing up to 5% of the energy from its power-hungry diet.
    Following years of innovations from the University of Nebraska-Lincoln’s Christian Binek and University at Buffalo’s Jonathan Bird and Keke He, the physicists recently teamed up to craft the first magneto-electric transistor.
    Along with curbing the energy consumption of any microelectronics that incorporate it, the team’s design could reduce the number of transistors needed to store certain data by as much as 75%, said Nebraska physicist Peter Dowben, leading to smaller devices. It could also lend those microelectronics steel-trap memory that remembers exactly where its users leave off, even after being shut down or abruptly losing power.
    “The implications of this most recent demonstration are profound,” said Dowben, who co-authored a recent paper on the work that graced the cover of the journal Advanced Materials.
    Many millions of transistors line the surface of every modern integrated circuit, or microchip, which itself is manufactured in staggering numbers — roughly 1 trillion in 2020 alone — from the industry-favorite semiconducting material, silicon. By regulating the flow of electric current within a microchip, the tiny transistor effectively acts as a nanoscopic on-off switch that’s essential to writing, reading and storing data as the 1s and 0s of digital technology.
    But silicon-based microchips are nearing their practical limits, Dowben said. Those limits have the semiconductor industry investigating and funding every promising alternative it can.

  • Innovative technology will use smart sensors to ensure vaccine safety

    A new study from Tel Aviv University enables developers, for the first time, to determine vaccine safety via smart sensors that measure objective physiological parameters. According to the researchers, most clinical trials testing the safety of new vaccines, including COVID-19 vaccines, rely on participants’ subjective reports, which can lead to biased results. In contrast, objective physiological data obtained through sensors attached to the body is clear and unambiguous.
    The study was led by Dr. Yftach Gepner of the Department of Epidemiology and Preventive Medicine at TAU’s Sackler Faculty of Medicine, together with Dr. Dan Yamin and Dr. Erez Shmueli from TAU’s Fleischman Faculty of Engineering. The paper was published in Communications Medicine, a journal from the Nature portfolio.
    Dr. Gepner: “In most methods used today, clinical trials designed to evaluate the safety of a new drug or vaccine employ self-report questionnaires, asking participants how they feel before and after receiving the treatment. This is clearly a totally subjective report. Even when Pfizer and Moderna developed their vaccines for the new COVID-19 virus, they used self-reports to prove their safety.”
    In the current study, researchers from Tel Aviv University demonstrated that smart sensors can be used to test new vaccines. The study was conducted when many Israelis received their second dose of the COVID-19 vaccine. The researchers equipped volunteers with innovative, FDA-approved sensors developed by the Israeli company Biobeat. Attached to their chests, these sensors measured physiological reactions from one day before to three days after receiving the vaccine. The sensors monitored 13 physiological parameters, such as heart rate, breathing rate, saturation (blood oxygen levels), heartbeat volume, temperature, cardiac output, and blood pressure.
    The surprising results: a significant discrepancy was found between subjective self-reports about side effects and actual measurements. That is, in nearly all objective measures, significant changes were identified after vaccination, even for subjects who reported having no reaction at all.
    In addition, the study found that side effects escalate over the first 48 hours, and then parameters return to the level measured before vaccination. In other words: a direct assessment of the vaccine’s safety identified physiological reactions during the first 48 hours, with levels restabilizing afterwards.
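    As a toy illustration of the paired before-and-after comparison underlying such an analysis (a hedged sketch with simulated numbers; the choice of test and all values are assumptions, not the study’s actual statistics):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Simulated heart rate (bpm) for 20 participants: baseline the day
    # before vaccination vs. 48 hours after. Values are invented.
    baseline = rng.normal(68, 5, size=20)
    post_48h = baseline + rng.normal(4, 3, size=20)  # simulated elevation

    # Paired test: did the parameter change significantly after vaccination?
    t_stat, p_value = stats.ttest_rel(post_48h, baseline)
    print(f"mean change: {np.mean(post_48h - baseline):+.1f} bpm, "
          f"p = {p_value:.4f}")
    ```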
    “The message from our study is clear,” says Dr. Gepner. “In 2022 the time has come to conduct continual, sensitive, objective testing of the safety of new vaccines and therapies. There is no reason to rely on self-reports or wait for the occurrence of rare side effects like myocarditis, an inflammation of the heart muscle, which occurs in one of 10,000 patients. Preliminary signs that predict such conditions can be detected with advanced sensors, identifying normal vs. extreme alterations in physiological parameters and any risk of inflammation. Today trial participants are invited to the clinic for blood pressure testing, but often their blood pressure rises just because the situation is stressful. Continual monitoring at home solves these problems with simple, convenient, inexpensive, and accurate means. This is the kind of medicine we should strive for in 2022.”
    Story Source:
    Materials provided by Tel Aviv University.

  • Trainee teachers made sharper assessments about learning difficulties after receiving feedback from AI

    A trial in which trainee teachers learning to identify pupils with potential learning difficulties had their work ‘marked’ by artificial intelligence has found that the approach significantly improved their reasoning.
    The study, with 178 trainee teachers in Germany, was carried out by a research team led by academics at the University of Cambridge and Ludwig-Maximilians-Universität München (LMU Munich). It provides some of the first evidence that artificial intelligence (AI) could enhance teachers’ ‘diagnostic reasoning’: the ability to collect and assess evidence about a pupil, and draw appropriate conclusions so they can be given tailored support.
    During the trial, trainees were asked to assess six fictionalised ‘simulated’ pupils with potential learning difficulties. They were given examples of their schoolwork, as well as other information such as behaviour records and transcriptions of conversations with parents. They then had to decide whether or not each pupil had learning difficulties such as dyslexia or Attention Deficit Hyperactivity Disorder (ADHD), and explain their reasoning.
    Immediately after submitting their answers, half of the trainees received a prototype ‘expert solution’, written in advance by a qualified professional, to compare with their own. This is typical of the practice material student teachers usually receive outside taught classes. The others received AI-generated feedback, which highlighted the correct parts of their solution and flagged aspects they might have improved.
    After completing the six preparatory exercises, the trainees then took two similar follow-up tests — this time without any feedback. The tests were scored by the researchers, who assessed both their ‘diagnostic accuracy’ (whether the trainees had correctly identified cases of dyslexia or ADHD), and their diagnostic reasoning: how well they had used the available evidence to make this judgement.
    The average score for diagnostic reasoning among trainees who had received AI feedback during the six preliminary exercises was an estimated 10 percentage points higher than that of trainees who had worked with the pre-written expert solutions.

  • From computer to benchtop: Researchers find clues to new mechanisms for coronavirus infections

    A group of bat viruses related to SARS-CoV-2 can also infect human cells but uses a different and unknown entryway.
    While researchers are still homing in on how these viruses infect cells, the findings could help in the development of new vaccines that prevent coronaviruses from causing another pandemic.
    Publishing in the journal eBioMedicine, a team of Washington State University researchers used a computational approach based on network science to distinguish a group of coronaviruses that can infect human cells from those that cannot. The researchers then confirmed their computational results in the laboratory, showing that a specific cluster of viruses can infect both human and bat cells.
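    The paper’s exact pipeline is not described here, but a network-science approach of this kind typically builds a similarity graph over viral sequences and separates it into clusters. The following is a minimal, hedged sketch using networkx; the similarity scores and threshold are illustrative assumptions, not the WSU team’s data or method:

    ```python
    import networkx as nx

    # Toy pairwise similarity scores between spike receptor-binding
    # domains (invented values for illustration)
    similarity = {
        ("SARS-CoV-2", "RaTG13"): 0.96,
        ("SARS-CoV-2", "Khosta-2"): 0.55,
        ("RaTG13", "Khosta-2"): 0.52,
        ("Khosta-2", "BM48-31"): 0.88,
        ("SARS-CoV-2", "BM48-31"): 0.60,
    }

    # Keep only edges above a similarity threshold, then cluster
    G = nx.Graph()
    for (a, b), s in similarity.items():
        if s >= 0.7:
            G.add_edge(a, b, weight=s)

    # Connected components group viruses that may share entry machinery;
    # a cluster with no member using a known receptor would be flagged
    # for laboratory follow-up.
    for cluster in nx.connected_components(G):
        print(sorted(cluster))
    ```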
    “What we find with these viruses is that they’re able to get into the cells through another mechanism or receptor, and that has a lot of implications for how, and if, they would be able to infect us,” said Michael Letko, co-senior author and assistant professor in the Paul Allen School of Global Health.
    Cross-species transmission of coronaviruses poses a serious threat to global health. While numerous coronaviruses have been discovered in wildlife, researchers haven’t been able to predict which pose the greatest threat to humans and are left scrambling to develop vaccines after viruses spill over.
    “As we encroach more and more on places where there are human and animal interactions, it’s quite likely that there will be many viruses that will need to be examined,” said Shira Broschat, professor in the School of Electrical Engineering and Computer Science, also co-senior author on the paper.

  • Toward high-powered telecommunication systems

    For all the recent advances in integrated lithium niobate photonic circuits — from frequency combs to frequency converters and modulators — one big component has remained frustratingly difficult to integrate: lasers.
    Long haul telecommunication networks, data center optical interconnects, and microwave photonic systems all rely on lasers to generate an optical carrier used in data transmission. In most cases, lasers are stand-alone devices, external to the modulators, making the whole system more expensive and less stable and scalable.
    Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) in collaboration with industry partners at Freedom Photonics and HyperLight Corporation, have developed the first fully integrated high-power laser on a lithium niobate chip, paving the way for high-powered telecommunication systems, fully integrated spectrometers, optical remote sensing, and efficient frequency conversion for quantum networks, among other applications.
    “Integrated lithium niobate photonics is a promising platform for the development of high-performance chip-scale optical systems, but getting a laser onto a lithium niobate chip has proved to be one of the biggest design challenges,” said Marko Loncar, the Tiantsai Lin Professor of Electrical Engineering and Applied Physics at SEAS and senior author of the study. “In this research, we used all the nano-fabrication tricks and techniques learned from previous developments in integrated lithium niobate photonics to overcome those challenges and achieve the goal of integrating a high-powered laser on a thin-film lithium niobate platform.”
    The research is published in the journal Optica.
    Loncar and his team used small but powerful distributed feedback lasers for their integrated chip. On chip, the lasers sit in small wells or trenches etched into the lithium niobate and deliver up to 60 milliwatts of optical power in the waveguides fabricated in the same platform. The researchers combined the laser with a 50 gigahertz electro-optic modulator in lithium niobate to build a high-power transmitter.
    “Integrating high-performance plug-and-play lasers would significantly reduce the cost, complexity, and power consumption of future communication systems,” said Amirhassan Shams-Ansari, a graduate student at SEAS and first author of the study. “It’s a building block that can be integrated into larger optical systems for a range of applications, in sensing, lidar, and data telecommunications.”
    By combining thin-film lithium niobate devices with high-power lasers using an industry-friendly process, this research represents a key step towards large-scale, low-cost, and high-performance transmitter arrays and optical networks. Next, the team aims to increase the laser’s power and scalability for even more applications.
    Harvard’s Office of Technology Development has protected the intellectual property arising from the Loncar Lab’s innovations in lithium niobate systems. Loncar is a cofounder of HyperLight Corporation, a startup which was launched to commercialize integrated photonic chips based on certain innovations developed in his lab.
    The research was co-authored by Dylan Renaud, Rebecca Cheng, Linbo Shao, Di Zhu and Mengjie Yu from SEAS; Hannah R. Grant and Leif Johansson from Freedom Photonics; and Lingyan He and Mian Zhang from HyperLight Corporation. It was supported by the Defense Advanced Research Projects Agency under grant HR0011-20-C-0137 and the Air Force Office of Scientific Research under grant FA9550-19-1-0376.