More stories

  • Deep learning algorithm may streamline lung cancer radiotherapy treatment

    Lung cancer, the most common cancer worldwide, is targeted with radiation therapy (RT) in nearly one-half of cases. RT planning is a manual, resource-intensive process that can take days to weeks to complete, and even highly trained physicians vary in their determinations of how much tissue to target with radiation. Furthermore, a shortage of radiation-oncology practitioners and clinics worldwide is expected to grow as cancer rates increase. Brigham and Women’s Hospital researchers and collaborators, working under the Artificial Intelligence in Medicine Program of Mass General Brigham, developed and validated a deep learning algorithm that can identify and outline (“segment”) a non-small cell lung cancer (NSCLC) tumor on a computed tomography (CT) scan within seconds. Their research, published in Lancet Digital Health, also demonstrates that radiation oncologists using the algorithm in simulated clinics performed as well as physicians not using the algorithm, while working 65 percent more quickly.
    “The biggest translation gap in AI applications to medicine is the failure to study how to use AI to improve human clinicians, and vice versa,” said corresponding author Raymond Mak, MD, of the Brigham’s Department of Radiation Oncology. “We’re studying how to make human-AI partnerships and collaborations that result in better outcomes for patients. The benefits of this approach for patients include greater consistency in segmenting tumors and accelerated times to treatment. The clinician benefits include a reduction in mundane but difficult computer work, which can reduce burnout and increase the time they can spend with patients.”
    The researchers used CT images from 787 patients to train their model to distinguish tumors from other tissues. They tested the algorithm’s performance using scans from over 1,300 patients from increasingly external datasets. Developing and validating the algorithm involved close collaboration between data scientists and radiation oncologists. For example, when the researchers observed that the algorithm was incorrectly segmenting CT scans involving the lymph nodes, they retrained the model with more of these scans to improve its performance.
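    To make that training workflow concrete, here is a minimal, self-contained sketch of how a voxel-wise tumour-segmentation model can be trained with a soft Dice loss. The tiny network, the loss formulation, and the random tensors standing in for CT volumes and tumour masks are illustrative assumptions only, not the Brigham team's published model or data pipeline.

```python
import torch
import torch.nn as nn

# Minimal sketch only: a toy 3D segmentation network trained with a soft Dice
# loss on random tensors standing in for CT volumes and tumour masks.
# Architecture, loss and data are illustrative assumptions, not the study's model.

class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 1, 1),            # per-voxel tumour logit
        )

    def forward(self, x):
        return self.net(x)

def soft_dice_loss(logits, target, eps=1e-6):
    """1 - Dice overlap between predicted probabilities and the reference mask."""
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)

model = TinySegNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(5):                                    # stand-in training loop
    ct = torch.randn(2, 1, 32, 32, 32)                   # fake CT patches
    mask = (torch.rand(2, 1, 32, 32, 32) > 0.9).float()  # fake tumour masks
    loss = soft_dice_loss(model(ct), mask)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"step {step}: dice loss = {loss.item():.3f}")
```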
    Finally, the researchers asked eight radiation oncologists to perform segmentation tasks as well as rate and edit segmentations produced by either another expert physician or the algorithm (they were not told which). There was no significant difference in performance between human-AI collaborations and human-produced (de novo) segmentations. Intriguingly, physicians worked 65 percent faster and with 32 percent less variation when editing an AI-produced segmentation compared to a manually produced one, even though they were unaware of which one they were editing. They also rated the quality of AI-drawn segmentations more highly than the human expert-drawn segmentations in this blinded study.
    Going forward, the researchers plan to combine this work with AI models they designed previously that can identify “organs at risk” of receiving undesired radiation during cancer treatment (such as the heart) and thereby exclude them from radiotherapy. They are continuing to study how physicians interact with AI to ensure that AI-partnerships help, rather than harm, clinical practice, and are developing a second, independent segmentation algorithm that can verify both human and AI-drawn segmentations.
    “This study presents a novel evaluation strategy for AI models that emphasizes the importance of human-AI collaboration,” said co-author Hugo Aerts, PhD, of the Department of Radiation Oncology. “This is especially necessary because in silico (computer-modeled) evaluations can give different results than clinical evaluations. Our approach can help pave the way towards clinical deployment.”
    This study was funded by the National Institutes of Health (U24CA194354, U01CA190234, and U01CA209414).
    Story Source:
    Materials provided by Brigham and Women’s Hospital. Note: Content may be edited for style and length.

  • Researchers demonstrate error correction in a silicon qubit system

    Researchers from RIKEN in Japan have achieved a major step toward large-scale quantum computing by demonstrating error correction in a three-qubit silicon-based quantum computing system. This work, published in Nature, could pave the way toward the achievement of practical quantum computers.
    Quantum computers are a hot area of research today, as they promise to solve certain important problems that are intractable on conventional computers. They use a completely different architecture, relying on the superposition states of quantum physics rather than the simple 1-or-0 binary bits of conventional computers. However, this very different design also makes them highly sensitive to environmental noise and other issues, such as decoherence, and they require error correction to perform precise calculations.
    One important challenge today is choosing which systems can best act as “qubits,” the basic units used to perform quantum calculations. Different candidate systems have their own strengths and weaknesses. Popular choices include superconducting circuits and trapped ions, which have the advantage that some form of error correction has already been demonstrated, allowing them to be put into actual use, albeit on a small scale. Silicon-based quantum technology, which has only begun to be developed over the past decade, has the advantage that it uses semiconductor nanostructures similar to those commonly used to integrate billions of transistors on a small chip, and therefore could take advantage of existing production technology.
    However, one major problem with silicon-based technology is that it lacks an established method for error correction. Researchers have previously demonstrated control of two qubits, but that is not enough for error correction, which requires a three-qubit system.
    In the current research, conducted at the RIKEN Center for Emergent Matter Science and the RIKEN Center for Quantum Computing, the group achieved this feat, demonstrating full control of a three-qubit system (one of the largest qubit systems in silicon) and thereby providing, for the first time, a prototype of quantum error correction in silicon. They achieved this by implementing a three-qubit Toffoli-type quantum gate.
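    For readers unfamiliar with Toffoli-based correction, the sketch below simulates the textbook three-qubit bit-flip repetition code, in which two CNOT gates encode one logical qubit into three physical qubits and a final Toffoli (CCNOT) gate performs the majority-vote correction. It is a generic plain-NumPy illustration of the principle, not RIKEN's silicon spin-qubit implementation.

```python
import numpy as np

# Hedged sketch: the textbook three-qubit bit-flip repetition code with a
# Toffoli correction gate, simulated as 8-dimensional state vectors.

def gate_on_basis(f):
    """Build an 8x8 permutation matrix from a function acting on 3-bit basis states."""
    U = np.zeros((8, 8))
    for b in range(8):
        bits = [(b >> 2) & 1, (b >> 1) & 1, b & 1]   # (q0, q1, q2)
        nb = f(bits)
        U[nb[0] * 4 + nb[1] * 2 + nb[2], b] = 1.0
    return U

def cnot(c, t):
    return gate_on_basis(lambda q: [q[i] ^ (q[c] if i == t else 0) for i in range(3)])

def x(k):
    return gate_on_basis(lambda q: [q[i] ^ (1 if i == k else 0) for i in range(3)])

def toffoli(c1, c2, t):
    return gate_on_basis(lambda q: [q[i] ^ ((q[c1] & q[c2]) if i == t else 0) for i in range(3)])

alpha, beta = 0.6, 0.8                  # arbitrary logical state alpha|0> + beta|1>
psi = np.zeros(8)
psi[0b000], psi[0b100] = alpha, beta    # data stored in qubit 0, ancillas in |00>

encode = cnot(0, 2) @ cnot(0, 1)        # |psi>|00>  ->  alpha|000> + beta|111>
decode = cnot(0, 2) @ cnot(0, 1)        # the same CNOTs undo the encoding
correct = toffoli(1, 2, 0)              # majority vote flips qubit 0 back if needed

for bad_qubit in range(3):              # a single bit-flip error on any one qubit
    state = correct @ decode @ x(bad_qubit) @ encode @ psi
    # after correction, qubit 0 should again hold alpha|0> + beta|1>
    recovered = np.array([state[0:4].sum(), state[4:8].sum()])
    print(f"error on qubit {bad_qubit}: recovered amplitudes ~ {recovered}")
```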
    According to Kenta Takeda, the first author of the paper, “The idea of implementing a quantum error-correcting code in quantum dots was proposed about a decade ago, so it is not an entirely new concept, but a series of improvements in materials, device fabrication, and measurement techniques allowed us to succeed in this endeavor. We are very happy to have achieved this.”
    According to Seigo Tarucha, the leader of the research group, “Our next step will be to scale up the system. For that, it would be nice to work with semiconductor industry groups capable of manufacturing silicon-based quantum devices at a large scale.”
    Story Source:
    Materials provided by RIKEN. Note: Content may be edited for style and length.

  • Helping autonomous vehicles navigate tricky highway merges

    If autonomous vehicles are ever going to achieve widespread adoption, we need to know they are capable of navigating complex traffic situations, such as merging into heavy traffic when lanes disappear on a highway. To that end, researchers from North Carolina State University have developed a technique that allows autonomous vehicle software to make the relevant calculations more quickly — improving both traffic and safety in simulated autonomous vehicle systems.
    “Right now, the programs designed to help autonomous vehicles navigate lane changes rely on making problems computationally simple enough to resolve quickly, so the vehicle can operate in real time,” says Ali Hajbabaie, corresponding author of a paper on the work and an assistant professor of civil, construction and environmental engineering at NC State. “However, simplifying the problem too much can actually create a new set of problems, since real world scenarios are rarely simple.
    “Our approach allows us to embrace the complexity of real-world problems. Rather than focusing on simplifying the problem, we developed a cooperative distributed algorithm. This approach essentially breaks a complex problem down into smaller sub-problems, and sends those to different processors to solve separately. This process, called parallelization, improves efficiency significantly.”
    At this point, the researchers have only tested their approach in simulations, where the sub-problems are shared among different cores in the same computing system. However, if autonomous vehicles ever use the approach on the road, the vehicles would network with each other and share the computing sub-problems.
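    As a rough illustration of that parallelization idea, and not the NC State algorithm itself, the sketch below splits a toy merge-planning task into per-vehicle sub-problems and solves them on separate worker processes. The "pick the gap with the largest headway" sub-problem and the vehicle names are invented stand-ins.

```python
from multiprocessing import Pool

# Generic sketch of the parallelisation idea only: a merge-planning problem is
# split into per-vehicle sub-problems, each solved on its own worker process,
# and the partial plans are then combined. The sub-problem here is a toy stand-in.

def plan_for_vehicle(sub_problem):
    """Toy sub-problem: choose the merge gap with the largest headway."""
    vehicle_id, gaps = sub_problem
    best_gap = max(range(len(gaps)), key=lambda i: gaps[i])
    return vehicle_id, best_gap

if __name__ == "__main__":
    # Each tuple: (vehicle id, headway in seconds for each candidate gap).
    sub_problems = [
        ("veh_1", [1.2, 2.5, 0.8]),
        ("veh_2", [0.9, 1.1, 3.0]),
        ("veh_3", [2.2, 0.7, 1.4]),
    ]
    with Pool(processes=3) as pool:       # sub-problems solved in parallel
        plans = pool.map(plan_for_vehicle, sub_problems)
    for vehicle_id, gap in plans:
        print(f"{vehicle_id}: merge into gap {gap}")
```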
    In proof-of-concept testing, the researchers looked at two things: whether their technique allowed autonomous vehicle software to solve merging problems in real time; and how the new “cooperative” approach affected traffic and safety compared to an existing model for navigating autonomous vehicles.
    In terms of computation time, the researchers found their approach allowed autonomous vehicles to navigate complex freeway lane merging scenarios in real time in moderate and heavy traffic, with spottier performance when traffic volumes got particularly high.
    But when it came to improving traffic and safety, the new technique did exceptionally well. In some scenarios, particularly when traffic volume was lower, the two approaches performed about the same. But in most instances, the new approach outperformed the previous model by a considerable margin. What’s more, the new technique had zero incidents where vehicles had to come to a stop or where there were “near crash conditions.” The other model’s results included multiple scenarios where there were literally thousands of stoppages and near crash conditions.
    “For a proof-of-concept test, we’re very pleased with how this technique has performed,” Hajbabaie says. “There is room for improvement, but we’re off to a great start.
    “The good news is that we’re developing these tools and tackling these problems now, so that we’re in a good position to ensure safe autonomous systems as they are adopted more widely.”
    Story Source:
    Materials provided by North Carolina State University. Original written by Matt Shipman. Note: Content may be edited for style and length.

  • New stable quantum batteries can reliably store energy in electromagnetic fields

    Quantum technologies, that is, devices built by engineering and manipulating quantum mechanical systems, are rapidly becoming a reality. The most prominent example is certainly the quantum computer, where the unit of information, the bit, is replaced by its quantum mechanical counterpart, informally called the qubit. Unlike classical computers, quantum computers promise to exploit the full quantum mechanical features of qubits in order to solve computational problems that are out of reach of classical machines. As an example, the Canadian company Xanadu recently claimed that its quantum computer solved, in just 36 microseconds, a computational task that would have required 9,000 years on state-of-the-art supercomputers.
    Quantum technologies need energy to operate. This simple consideration has led researchers, in the last ten years, to develop the idea of quantum batteries, which are quantum mechanical systems used as energy storage devices. In the very recent past, researchers at the Center for Theoretical Physics of Complex Systems (PCS) within the Institute for Basic Science (IBS), South Korea have been able to put tight constraints on the possible charging performance of a quantum battery. Specifically, they showed that a collection of quantum batteries can lead to an enormous improvement in charging speed compared to a classical charging protocol. This is thanks to quantum effects, which allow the cells in quantum batteries to be charged simultaneously.
    Despite these theoretical achievements, experimental realizations of quantum batteries are still scarce. The only notable recent example used a collection of two-level systems (very similar to the qubits introduced above) for energy storage, with the energy being provided by an electromagnetic field (a laser).
    Given the current situation, it is clearly of utmost importance to find new and more accessible quantum platforms that can be used as quantum batteries. With this motivation in mind, researchers from the same IBS PCS team, working in collaboration with Giuliano Benenti (University of Insubria, Italy), recently decided to revisit a quantum mechanical system that has been studied extensively in the past: the micromaser. A micromaser is a system in which a beam of atoms is used to pump photons into a cavity. Put simply, a micromaser can be thought of as the mirror image of the experimental quantum battery mentioned above: the energy is stored in the electromagnetic field, which is charged by a stream of qubits interacting with it one after another.
    The IBS PCS researchers and their collaborator showed that micromasers have features that make them excellent models of quantum batteries. One of the main concerns when trying to use an electromagnetic field to store energy is that, in principle, the field could absorb an enormous amount of energy, potentially much more than necessary. By analogy, this would correspond to a phone battery that, when plugged in, kept increasing its charge indefinitely. In such a scenario, forgetting that the phone is plugged in could be extremely risky, since there would be no mechanism to stop the charging.
    Luckily, the team’s numerical results show that this cannot happen in micromasers. The electromagnetic field quickly reaches a final configuration (technically called a steady state), whose energy can be fixed a priori when the micromaser is built. This property ensures protection against the risk of overcharging.
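    A toy calculation helps illustrate this self-limiting behaviour. The sketch below iterates the textbook micromaser pump process, in which each excited qubit deposits a photon with probability sin²(gτ√(n+1)) when n photons are already stored; choosing gτ so that this probability vanishes at a designed photon number creates a "trapping state" at which charging stops. The parameters are illustrative assumptions, and this is not the model used in the study.

```python
import numpy as np

# Toy illustration only: lossless micromaser pumping. Excited two-level atoms
# (qubits) cross the cavity one at a time and deposit a photon with probability
# sin^2(g*tau*sqrt(n+1)). Choosing g*tau so this probability vanishes at
# n = n_trap makes n_trap a "trapping state": charging stops there, i.e. no
# overcharging, mirroring the steady-state behaviour described above.

n_trap = 10                               # designed energy cap (photon number)
g_tau = np.pi / np.sqrt(n_trap + 1)       # coupling x transit time, sets the trap

n_max = 40
p = np.zeros(n_max + 1)
p[0] = 1.0                                # start from an empty (discharged) cavity

for atom in range(500):                   # stream of charging qubits
    p_next = np.zeros_like(p)
    for n in range(n_max):
        emit = np.sin(g_tau * np.sqrt(n + 1)) ** 2
        p_next[n + 1] += p[n] * emit      # photon handed to the field
        p_next[n] += p[n] * (1.0 - emit)  # atom leaves without depositing energy
    p_next[n_max] += p[n_max]
    p = p_next

mean_photons = np.dot(np.arange(n_max + 1), p)
print(f"mean stored photon number: {mean_photons:.2f} (designed cap: {n_trap})")
```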
    In addition, the researchers showed that the final configuration of the electromagnetic field is a pure state, meaning it retains no memory of the qubits used during charging. This latter property is particularly crucial for a quantum battery: it ensures that all the energy stored in the battery can be extracted and used whenever necessary, without needing to keep track of the qubits used during the charging process.
    Finally, it was shown that these appealing features are robust to changes in the specific parameters considered in the study. This is of clear importance when trying to build an actual quantum battery, since imperfections in the fabrication process are simply unavoidable.
    Interestingly, in a parallel series of papers, Stefan Nimmrichter and his collaborators have shown that quantum effects can make the charging process of the micromaser faster than classical charging. In other words, they have been able to show the presence of the previously mentioned quantum advantage during the charging of a micromaser battery.
    All these results suggest that the micromaser is a promising new platform for building quantum batteries. The fact that such systems have already been realized experimentally for many years could give a serious boost to building new, accessible quantum battery prototypes.
    To this end, the IBS PCS researchers and Giuliano Benenti are now starting a joint collaboration with Stefan Nimmrichter and his collaborators to further explore these promising models. The hope is that this new research collaboration will finally be able to benchmark and experimentally test the performance of micromaser-based quantum battery devices.

  • How the sounds we hear help us predict how things feel

    Researchers at the University of East Anglia have made an important discovery about the way our brains process the sensations of sound and touch.
    A new study published today shows how the brain’s different sensory systems are all closely interconnected — with regions that respond to touch also involved when we listen to specific sounds associated with touching objects.
    They found that these areas of the brain can tell the difference between sounds such as a ball bouncing and the sound of typing on a keyboard.
    It is hoped that understanding this key area of brain function may in future help people who are neurodivergent or who have conditions such as schizophrenia or anxiety. It could also lead to developments in brain-inspired computing and AI.
    Lead researcher Dr Fraser Smith, from UEA’s School of Psychology, said: “We know that when we hear a familiar sound, such as a ball bouncing, this leads us to expect to see a particular object. But what we have found is that it also leads the brain to represent what it might feel like to touch and interact with that object.
    “These expectations can help the brain process sensory information more efficiently.”
    The research team used an MRI scanner to collect brain imaging data while 10 participants listened to sounds generated by interacting with objects — such as bouncing a ball, knocking on a door, crushing paper, or typing on a keyboard.

  • Using digital media to relax is related to lower-quality parenting

    Caregivers who consume digital media for relaxation are more likely to engage in negative parenting practices, according to a new multinational study.
    The new study led by the University of Waterloo aimed to investigate the relationship between caregivers’ use of digital media, mental health, and parenting practices at the start of the COVID-19 pandemic. On average, caregivers spend three to four hours a day consuming digital media.
    “All members of the family matter when we try to understand families in a society saturated with technology,” said Jasmine Zhang, lead author of the study and a master’s candidate in clinical psychology at Waterloo. “It’s not just children who are often on devices. Parents use digital media for many reasons, and these behaviours can impact their children.”
    To conduct the study, the researchers surveyed 549 participants who are parents of at least two children between the ages of five and 18. Caregivers provided information about their digital use, their own mental health and their children’s, family functioning, and parenting practices.
    The researchers found that caregivers with higher levels of distress engaged in more screen-based activities and were more likely to turn to devices for relaxation. This consumption was correlated with negative parenting practices such as nagging and yelling. They also found that negative parenting behaviours were more likely when technology interrupted family interactions. The study didn’t focus on specific apps or websites that caregivers use, but rather found that caregivers who spend time on screens retreat from being present with their family, which is correlated with negative parenting practices.
    However, not all media consumption was correlated with negative outcomes: maintaining social connections through digital channels was related to lower levels of anxiety and depression and higher levels of positive parenting practices such as listening to their children’s ideas and speaking of the good their children do.
    “When we study how parents use digital media, we need to consider caregivers’ motivations for using devices in addition to how much time they spend on them,” Zhang said.
    Dillon Browne, Canada Research Chair in Child and Family Clinical Psychology and professor of psychology at Waterloo, expects these patterns to continue after the pandemic.
    “The family media landscape continues to grow and become more prominent,” said Browne, a co-author of the study. “Going forward, it’s important to consider the nuances of digital media as some behaviours are related to well-being, and others are related to distress.”
    The researchers plan to build on these findings and hope that their work will aid in creating guidelines that will help caregivers manage their screen-based behaviours.
    Story Source:
    Materials provided by University of Waterloo. Note: Content may be edited for style and length.

  • Reasons behind gamer rage in children are complex — and children are good at naming them

    Children’s outbursts of rage while playing digital games are causing both concern and public debate. Taking a novel approach to gamer rage, a new study conducted at the University of Eastern Finland examines the topic from a child’s perspective and finds complex reasons for gamer rage in children. As data, the researchers used interviews with, and essays by, children. The study examined children’s views on the reasons for, and background factors of, gamer rage, and analysed how rage manifested itself.
    The results show that a reason for gamer rage in children was often found in their own performance.
    “For example, repeated or last-minute in-game failures, or losing to a beginner, caused annoyance and rage. In digital gaming, competitiveness and social factors play a major role in general,” says Project Researcher Juho Kahila from the University of Eastern Finland.
    Children often compared their own performance to that of other players. Frustrating actions by other players, such as cheating or losing a game due to incompetent teammates, were perceived as a source of rage. In addition, out-of-game interruptions, such as having to do chores or homework, and technical problems, such as a poor internet connection, were also identified as sources of rage.
    Failure, humiliation, noise and hunger predispose to rage
    Some games were perceived as rage-triggering. For instance, playing against a real human, or being humiliated by another player, were identified as factors predisposing children to gamer rage. Besides the choice of game, the gaming environment also had an influence on rage.
    “Toxicity within the gaming community, such as unpleasant remarks or bullying by other players, as well as a noisy gaming environment, were identified as predisposing to rage. In addition, troubles in daily life, such as having a bad day at school, or feeling hungry, were also recognised as factors contributing to rage,” Kahila says.
    In the children’s essays, gamer rage often took the form of verbal expressions, but also physical ones, as well as quitting gaming. In an outburst of rage, children not only yelled and cursed, but also kicked, hit and threw whatever was at hand, such as their gaming equipment or pieces of furniture. Gaming sessions were often quit in anger. However, the results also showed that quitting a session, or switching to a less infuriating game, was often used as a preventive measure to avoid becoming even more enraged.
    The study shows that the reasons behind gamer rage in digital gaming are very complex — and that children are good at naming them. According to Kahila, many of the reasons leading to rage in digital gaming, such as in-game failures, cheating opponents, or a toxic gaming environment, are similar also in other gaming settings.
    “For example, feelings of outrage caused by one’s own mistakes, a penalty missed by a referee, or annoying behaviour by an opponent, are all familiar from ice hockey, just to pick an example,” Kahila points out.

  • AI-based method for dating archeological remains

    By analyzing DNA with the help of artificial intelligence (AI), an international research team led by Lund University in Sweden has developed a method that can accurately date human remains up to ten thousand years old.
    Accurately dating ancient humans is key when mapping how people migrated during world history.
    The standard dating method since the 1950s has been radiocarbon dating. The method, which is based on the ratio between two different carbon isotopes, has revolutionized archaeology. However, the technology is not always completely reliable in terms of accuracy, making it complicated to map ancient people, how they moved and how they are related.
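    For reference, the conventional radiocarbon age follows from the measured fraction of carbon-14 remaining in a sample, using the Libby mean life of 8,033 years. The short sketch below applies that textbook relation; it is background context, not part of the Lund study.

```python
import math

# Standard textbook radiocarbon relation (not from the Lund study): the
# conventional age follows from the fraction of carbon-14 remaining relative
# to a living organism, via the Libby mean life of 8033 years.

def radiocarbon_age(c14_fraction_remaining):
    """Conventional radiocarbon age in years before present."""
    return -8033.0 * math.log(c14_fraction_remaining)

# Example: a sample retaining half of its original carbon-14 dates to ~5568 years.
print(round(radiocarbon_age(0.5)))   # -> 5568
```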
    In a new study published in Cell Reports Methods, a research team has developed a dating method that could be of great interest to archaeologists and paleogenomicists.
    “Unreliable dating is a major problem, resulting in vague and contradictory results. Our method uses artificial intelligence to date genomes via their DNA with great accuracy,” says Eran Elhaik, researcher in molecular cell biology at Lund University.
    The method is called Temporal Population Structure (TPS) and can be used to date genomes that are up to 10,000 years old. In the study, the research team analyzed approximately 5,000 human remains — from the Late Mesolithic period (10,000-8,000 BC) to modern times. All of the studied samples could be dated with a rarely seen accuracy.
    “We show that information about the period in which people lived is encoded in the genetic material. By figuring out how to interpret it and position it in time, we managed to date it with the help of AI,” says Eran Elhaik.
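    As a hedged illustration of the general idea, and not the published TPS pipeline, the sketch below fits a standard regressor to synthetic "genetic" features whose values drift with a sample's known age, then predicts dates for held-out samples. The feature construction and the choice of model are assumptions made purely for demonstration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Generic sketch, not the published TPS method: treat per-sample genetic
# summaries (here, synthetic features) as inputs and the known sample date as
# the regression target, then predict dates for unseen genomes.

rng = np.random.default_rng(0)
n_samples, n_features = 500, 40
dates = rng.uniform(0, 10_000, n_samples)   # known ages in years before present

# Synthetic features that drift with time plus noise, standing in for real
# ancestry or allele-frequency components derived from ancient DNA.
drift = rng.normal(size=n_features)
X = dates[:, None] * drift[None, :] / 10_000 + rng.normal(scale=0.3, size=(n_samples, n_features))

X_train, X_test, y_train, y_test = train_test_split(X, dates, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

errors = np.abs(model.predict(X_test) - y_test)
print(f"mean absolute dating error on held-out synthetic genomes: {errors.mean():.0f} years")
```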
    The researchers do not expect TPS to eliminate radiocarbon dating but rather see the method as a complementary tool in the paleogeographic toolbox. The method can be used when there is uncertainty involving a radiocarbon dating result. One example is the famous human skull from Zlatý kůň in today’s Czech Republic, which could be anywhere between 15,000 and 34,000 years old.
    “Radiocarbon dating can be very unstable and is affected by the quality of the material being examined. Our method is based on DNA, which makes it very solid. Now we can seriously begin to trace the origins of ancient people and map their migration routes,” concludes Eran Elhaik.
    Story Source:
    Materials provided by Lund University. Note: Content may be edited for style and length.