More stories

  • Machine learning classifier accelerates the development of cellular immunotherapies

    Making a personalised T cell therapy for cancer patients currently takes at least six months; scientists at the German Cancer Research Center (DKFZ) and the University Medical Center Mannheim have shown that the laborious first step of identifying tumor-reactive T cell receptors for patients can be replaced with a machine learning classifier that halves this time.
    Personalized cellular immunotherapies are considered promising new treatment options for various types of cancer. One of the therapeutic approaches currently being tested involves so-called “T-cell receptor transgenic T-cells.” The idea behind this: immune T cells from a patient are equipped in the laboratory to recognize the patient’s own unique tumor, and then reinfused in large numbers to effectively kill the tumor cells.
    The development of such therapies is a complicated process. First, doctors isolate tumor-infiltrating T cells (TILs) from a sample of the patient’s tumor tissue. This cell population is then searched for T-cell receptors that recognize tumor-specific mutations and can thus kill tumor cells. This search is laborious and has so far required knowledge of the tumor-specific mutations that lead to protein changes recognized by the patient’s immune system. All the while, the tumor is constantly mutating and spreading, making this step a race against time.
    “Finding the right T cell receptors is like looking for a needle in a haystack, costly and time-consuming,” says Michael Platten, Head of Department at the DKFZ and Director of the Department of Neurology at the University Medical Center Mannheim. “With a method that allows us to identify tumor-reactive T-cell receptors independently of knowledge of the respective tumor epitopes, the process could be considerably simplified and accelerated.”
    A team led by Platten and co-study head Ed Green has now presented, in a recent publication, a new technology that achieves precisely this goal. As a starting point, the researchers isolated TILs from a melanoma patient’s brain metastasis and performed single cell sequencing to characterise each cell. The T cell receptors expressed by these TILs were then individually tested in the lab to identify those that recognised and killed patient tumor cells. The researchers then combined these data to train a machine learning model to predict tumor-reactive T cell receptors. The resulting classifier identifies tumor-reactive T cells from TILs with 90% accuracy, works across many different tumor types, and accommodates data from different cell sequencing technologies.
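    As a rough illustration of the general approach (not the published predicTCR pipeline), the sketch below trains an off-the-shelf classifier to separate tumor-reactive from non-reactive TCRs from per-cell features; the features, labels, and 500-cell dataset are entirely synthetic stand-ins for single-cell sequencing data and lab reactivity assays.

```python
# Illustrative sketch only: synthetic features/labels, not the predicTCR pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-in for per-TIL single-cell features (e.g. activation/exhaustion
# markers); labels stand in for the lab reactivity assay.
X = rng.normal(size=(500, 20))                  # 500 TILs x 20 features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# New TILs can then be screened in silico rather than testing every receptor:
reactivity_scores = clf.predict_proba(X_test)[:, 1]
```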
    “predicTCR enables us to cut the time it takes to identify personalised tumor-reactive T cell receptors from over three months to a matter of days, regardless of tumor type,” said Ed Green.
    “We are now focusing on bringing this technology into clinical practice here in Germany. To finance further development, we have founded the biotech start-up Tcelltech,” adds Michael Platten. “predicTCR is one of the key technologies of this new DKFZ spin-off.”

  • New study shows analog computing can solve complex equations and use far less energy

    A team of researchers including University of Massachusetts Amherst engineers has proven that their analog computing device, called a memristor, can complete complex scientific computing tasks while bypassing the limitations of digital computing.
    Many of today’s important scientific questions — from nanoscale material modeling to large-scale climate science — can be explored using complex equations. However, today’s digital computing systems are reaching their limit for performing these computations in terms of speed, energy consumption and infrastructure.
    Qiangfei Xia, UMass Amherst professor of electrical and computer engineering, and one of the corresponding authors of the research published in Science, explains that, with current computing methods, every time you want to store information or give a computer a task, it requires moving data between memory and computing units. With complex tasks moving larger amounts of data, you essentially get a processing “traffic jam” of sorts.
    One way traditional computing has aimed to solve this is by increasing bandwidth. Instead, Xia and his colleagues at UMass Amherst, the University of Southern California, and computing technology maker TetraMem Inc. have implemented in-memory computing with analog memristor technology as an alternative that can avoid these bottlenecks by reducing the number of data transfers.
    The team’s in-memory computing relies on an electrical component called a memristor — a combination of memory and resistor. A memristor controls the flow of electrical current in a circuit while also “remembering” its prior state even when the power is turned off, unlike today’s transistor-based computer chips, which can only hold information while there is power. The memristor device can also be programmed into multiple resistance levels, increasing the information density in one cell.
    When organized into a crossbar array, such a memristive circuit does analog computing by using physical laws in a massively parallel fashion, substantially accelerating matrix operations, the most frequently used but very power-hungry computations in neural networks. The computing is performed at the site of the device, rather than moving the data between memory and processing. Using the traffic analogy, Xia compares in-memory computing to the nearly empty roads seen at the height of the pandemic: “You eliminated traffic because [nearly] everybody worked from home,” he says. “We work simultaneously, but we only send the important data/results out.”
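    The physics behind that parallelism can be illustrated with a few lines of arithmetic: in a crossbar, Ohm's law multiplies each input voltage by a programmed conductance, and Kirchhoff's current law sums the resulting currents along every column, so a single read operation yields a full matrix-vector product. The values below are purely illustrative.

```python
# Toy illustration: a crossbar computes a matrix-vector product in one step,
# because Ohm's law multiplies (current = conductance x voltage) and
# Kirchhoff's current law sums the per-device currents along each column.
import numpy as np

G = np.array([[1.0, 0.2, 0.5],     # programmed conductances = the stored matrix
              [0.3, 0.8, 0.1],
              [0.6, 0.4, 0.9]])
V = np.array([0.2, 1.0, 0.5])      # input voltages applied to the rows

I = G.T @ V                        # column currents: the whole product at once
print(I)
```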
    Previously, these researchers demonstrated that their memristor can complete low-precision computing tasks, like machine learning. Other applications have included analog signal processing, radiofrequency sensing, and hardware security.

    “In this work, we propose and demonstrate a new circuit architecture and programming protocol that can efficiently represent high-precision numbers using a weighted sum of multiple, relatively low-precision analog devices, such as memristors, with a greatly reduced overhead in circuitry, energy and latency compared with existing quantization approaches,” says Xia.
    “The breakthrough for this particular paper is that we push the boundary further,” he adds. “This technology is not only good for low-precision, neural network computing, but it can also be good for high-precision, scientific computing.”
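    One simple way to picture the idea of building high precision from several low-precision devices is digit slicing: each device stores one low-precision "digit", and the digits are recombined with fixed weights. This is a sketch of the general concept only, not the circuit architecture or programming protocol reported in Science.

```python
# Digit-slicing sketch: one high-precision value spread over three devices
# that each hold only 8 levels (3 bits), recombined with fixed weights.
LEVELS = 8
N_DEVICES = 3

def encode(x):
    """Split an integer in [0, LEVELS**N_DEVICES) into per-device levels."""
    return [(x // LEVELS**k) % LEVELS for k in reversed(range(N_DEVICES))]

def decode(slices):
    """A weighted sum of the low-precision slices recovers the value."""
    return sum(s * LEVELS**k for s, k in zip(slices, reversed(range(N_DEVICES))))

x = 347
slices = encode(x)                 # [5, 3, 3] -> 5*64 + 3*8 + 3 = 347
print(slices, decode(slices) == x)
```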
    For the proof-of-principle demonstration, the memristor solved static and time-evolving partial differential equations, Navier-Stokes equations, and magnetohydrodynamics problems.
    “We pushed ourselves out of our own comfort zone,” he says, expanding beyond the low-precision requirements of edge computing neural networks to high-precision scientific computing.
    It took over a decade for the UMass Amherst team and collaborators to design a proper memristor device and build sizeable circuits and computer chips for analog in-memory computing. “Our research in the past decade has made analog memristor a viable technology. It is time to move such a great technology into the semiconductor industry to benefit the broad AI hardware community,” Xia says.

  • Vac to the future

    Scientists love a challenge. Or a friendly competition.
    Scientists at La Jolla Institute for Immunology (LJI) recently published the results of a competition that put researchers to the test. For the competition, part of the NIH-funded Computational Models of Immunity network, teams of researchers from different institutions offered up their best predictions regarding B. pertussis (whooping cough) vaccination.
    Each team tried to answer the same set of questions about vaccine responses in a diverse set of clinical study participants. Which study participants would show the highest antibody response to B. pertussis toxin 14 days post-vaccination? Which participants would show the highest increase of monocytes in their blood one day post-vaccination? And so on.
    The teams were given data on the study participants’ age, sex, and characteristics of their immune status prior to vaccination. The teams then developed computational models to predict vaccine responses in different patient groups.
    “We asked, ‘What do you think is the most important factor that drives vaccination outcome?’” says LJI Professor Bjoern Peters, Ph.D., who led the recent Cell Reports Methods study. “The idea was to make the teams really put their money where their mouth is.”
    Multiple computational models to predict vaccine responses have been developed previously, many of them based on complex patterns in immune state before and after vaccination. Surprisingly, the best predictor in the competition was based on a very simple correlation: antibody responses decrease with the calendar age of study participants.
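    A minimal sketch of what such an age-only baseline looks like, using synthetic numbers rather than the competition data: correlate participants' calendar age with their measured day-14 antibody responses and check the (negative) rank correlation.

```python
# Synthetic illustration of the age-only baseline: antibody responses that
# fall with calendar age produce a strongly negative rank correlation.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
age = rng.uniform(20, 70, size=60)                        # years
antibody_day14 = 10 - 0.08 * age + rng.normal(0, 1, 60)   # made-up titres

rho, p = spearmanr(age, antibody_day14)
print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")
```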
    The result may seem anti-climactic, but the competition sheds light on where more vaccine research is needed. “We know calendar age is important, but we still see a lot of variability in vaccination responses that we can’t explain,” says Peters.

    The competition has also helped rally scientists around further B. pertussis vaccine research. In the United States, B. pertussis vaccines were reformulated in the 1990s to address relatively minor adverse side effects. Research suggests the newer (aP vaccine) design may not be as effective as the older (wP vaccine) design in preventing disease transmission and infection.
    “We don’t know what’s missing from this current vaccine,” says Peters. “That’s an open question.”
    The prediction competition is shaping up to be an annual event, and previous entrants have gone back to the data to further hone their predictions. Perhaps, Peters hopes, this closer look at exactly what drives higher antibody responses in younger people can lead to better vaccines for all patient groups.
    “We are hoping to use this competition not just as a way to examine the capacity of people to predict vaccination outcomes, but also as a way to address an important public health question,” says Peters.
    The Peters Lab and the CMI-PB Team are currently finishing up their second invited challenge. They will host a public contest in or around August 2024. Researchers can join them at https://www.cmi-pb.org/
    Additional authors of the study, “A multi-omics systems vaccinology resource to develop and test computational models of immunity,” include Pramod Shinde, Ferran Soldevila, Joaquin Reyna, Minori Aoki, Mikkel Rasmussen, Lisa Willemsen, Mari Kojima, Brendan Ha, Jason A Greenbaum, James A Overton, Hector Guzman-Orozco, Somayeh Nili, Shelby Orfield, Jeremy P. Gygi, Ricardo da Silva Antunes, Alessandro Sette, Barry Grant, Lars Rønn Olsen, Anna Konstorum, Leying Guan, Ferhat Ay, and Steven H. Kleinstein.
    This study was supported by the National Institutes of Health’s (NIH) National Institute of Allergy and Infectious Diseases (NIAID; grants U01AI150753, U01AI141995, and U19AI142742).

  • Information overload is a personal and societal danger

    We are all aware of the dangers of pollution to our air, water, and earth. In a letter recently published in Nature Human Behaviour, scientists are advocating for the recognition and mitigation of another type of environmental pollution that poses equivalent personal and societal dangers: information overload.
    With smartphones putting the internet at our fingertips, we are exposed to an unprecedented amount of data, far beyond our ability to process. The result is an inability to evaluate information and make decisions. Further, it can lead us to limit our social activities, feel unsatisfied with our jobs, and become unmotivated and generally negative. Economists estimate that it all comes at a global cost of about $1 trillion. On top of the emotional and cognitive effects, contextual and environmental considerations may add to the personal and economic costs.
    The idea to explore information overload was incubated in a meeting of an international group of scientists two years ago, all of whom were supported by an E.U. grant for international collaboration. The E.U. team selected partners abroad including, for the third time, Rensselaer Polytechnic Institute’s Network Science and Technology Center (NeST), led by Boleslaw Szymanski, Ph.D., professor of computer science, in the United States.
    The researchers compare information overload to other historical shifts in society: open publishing brought about the need to filter out low-quality research from the vast number of accessible publications, the Industrial Revolution gave rise to air pollution, and environmental activists have helped usher in legal and economic changes to help curb pollution. Similarly, so-called “information pollution” or “data smog” must be addressed.
    Through the lens of computer science, there are at least three levels of information overload: “neural and cognitive mechanisms on the individual level… information and decisions at the group level… (and) societal level interactions among individuals, groups, and information providers.” These levels do not operate independently, so the flow of information may be treated as a multilevel network, in which overload at individual nodes can give rise to abrupt changes in the system as a whole. The researchers cite teamwork as an example: one team member’s information overload may hinder the performance of the entire group. It is a complex problem.
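    As a toy sketch of that network view (our own illustration, not a model from the letter), one can represent information flow as a directed graph, assign each person a processing capacity, and flag the nodes receiving more than they can handle.

```python
# Toy sketch (not from the letter): model information flow as a directed graph,
# give each person a processing capacity, and flag who is overloaded.
import networkx as nx

G = nx.DiGraph()
# edges carry "messages per day" from sender to receiver
G.add_weighted_edges_from([
    ("news_feed", "alice", 120), ("team_chat", "alice", 80), ("bob", "alice", 30),
    ("team_chat", "bob", 40),    ("alice", "bob", 10),
])
capacity = {"alice": 150, "bob": 150}   # messages each person can actually process

for person, cap in capacity.items():
    incoming = sum(d["weight"] for _, _, d in G.in_edges(person, data=True))
    status = "OVERLOADED" if incoming > cap else "ok"
    print(f"{person}: {incoming} msgs/day vs capacity {cap} -> {status}")
```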
    “We are calling for action in science, education, and legislation,” said Szymanski. “We need further interdisciplinary research on information overload. Information ecology must be taught in school. We also need to start the conversation on legislative possibilities, akin to the Clean Air Act in the U.K. decades ago.”
    “Information overload can have severe implications,” said Curt Breneman, Ph.D., dean of Rensselaer’s School of Science. “It begins by eroding our emotional health, job performance, and satisfaction, subsequently influencing the actions of groups and ultimately, entire societies. I hope that Dr. Szymanski’s letter, written with colleagues from across the world, will raise public awareness of the problem and enable solutions to be studied and implemented.”
    Szymanski was joined in authoring the letter by Janusz A. Hołyst of Warsaw University of Technology, the principal investigator of the E.U. grant; Philipp Mayr of the Leibniz Institute for the Social Sciences; Michael Thelwall of the University of Sheffield; Ingo Frommholz of the University of Wolverhampton; Shlomo Havlin and Alon Sela of Bar-Ilan University; Yoed N. Kenett of Technion — Israel Institute of Technology; Denis Helic of Modul University Vienna; Aljoša Rehar and Sebastijan R. Maček of the Slovenian Press Agency; Przemysław Kazienko and Tomasz Kajdanowicz of Wroclaw University of Science and Technology; Przemysław Biecek of Warsaw University of Technology and the University of Warsaw; and Julian Sienkiewicz of Warsaw University of Technology.

  • Advanced army robots more likely to be blamed for deaths

    Advanced killer robots are more likely to be blamed for civilian deaths than military machines, new research has revealed.
    The University of Essex study shows that high-tech bots will be held more responsible for fatalities in identical incidents.
    Led by the Department of Psychology’s Dr Rael Dawtry, it highlights the impact of autonomy and agency.
    It also showed that people perceive robots to be more culpable if they are described in a more advanced way.
    It is hoped the study — published in The Journal of Experimental Social Psychology — will help influence lawmakers as technology advances.
    Dr Dawtry said: “As robots are becoming more sophisticated, they are performing a wider range of tasks with less human involvement.
    “Some tasks, such as autonomous driving or military uses of robots, pose a risk to peoples’ safety, which raises questions about how — and where — responsibility will be assigned when people are harmed by autonomous robots.

    “This is an important, emerging issue for law and policy makers to grapple with, for example around the use of autonomous weapons and human rights.
    “Our research contributes to these debates by examining how ordinary people explain robots’ harmful behaviour and showing that the same processes underlying how blame is assigned to humans also lead people to assign blame to robots.”
    As part of the study Dr Dawtry presented different scenarios to more than 400 people.
    One saw them judge whether an armed humanoid robot was responsible for the death of a teenage girl.
    During a raid on a terror compound its machine guns “discharged” and fatally hit the civilian.
    When reviewing the incident, the participants blamed a robot more when it was described in more sophisticated terms despite the outcomes being the same.

    Other studies showed that simply labelling a variety of devices ‘autonomous robots’ led people to hold them more accountable than when they were labelled ‘machines’.
    Dr Dawtry added: “These findings show that how robots’ autonomy is perceived — and in turn, how blameworthy robots are — is influenced, in a very subtle way, by how they are described.
    “For example, we found that simply labelling relatively simple machines, such as those used in factories, as ‘autonomous robots’, led people to perceive them as agentic and blameworthy, compared to when they were labelled ‘machines’.
    “One implication of our findings is that, as robots become more objectively sophisticated, or are simply made to appear so, they are more likely to be blamed.”
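    For readers curious what such a comparison looks like in practice, the following is a purely illustrative analysis with made-up ratings, not the study's data or statistics: blame scores for the same incident under two labels, compared with a t-test.

```python
# Purely illustrative, with made-up ratings (not the study's data): compare
# blame for the same incident under an "autonomous robot" vs "machine" label.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
blame_robot_label = rng.normal(5.4, 1.2, 200)      # 1-7 blame ratings
blame_machine_label = rng.normal(4.6, 1.2, 200)    # same incident, other label

t, p = ttest_ind(blame_robot_label, blame_machine_label)
print(f"mean difference = {blame_robot_label.mean() - blame_machine_label.mean():.2f}, "
      f"t = {t:.2f}, p = {p:.3g}")
```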

  • Alzheimer’s drug fermented with help from AI and bacteria moves closer to reality

    Galantamine is a common medication used by people with Alzheimer’s disease and other forms of dementia around the world to treat their symptoms. Unfortunately, synthesizing the active compounds in a lab at the scale needed isn’t commercially viable. The active ingredient is extracted from daffodils through a time-consuming process, and unpredictable factors, such as weather and crop yields, can affect the supply and price of the drug.
    Now, researchers at The University of Texas at Austin have developed tools — including an artificial intelligence system and glowing biosensors — to one day harness microbes to do all the work instead.
    In a paper in Nature Communications, researchers outline a process using genetically modified bacteria to create a chemical precursor of galantamine as a byproduct of the microbe’s normal cellular metabolism. Essentially, the bacteria are programmed to convert food into medicinal compounds.
    “The goal is to eventually ferment medicines like this in large quantities,” said Andrew Ellington, a professor of molecular biosciences and author of the study. “This method creates a reliable supply that is much less expensive to produce. It doesn’t have a growing season, and it can’t be impacted by drought or floods.”
    Danny Diaz, a postdoctoral fellow with the Deep Proteins research group in UT’s Institute for Foundations of Machine Learning (IFML), developed an AI system called MutComputeX that is key to the process. It identifies how to mutate proteins inside the bacteria to improve their efficiency and operating temperature in order to maximize production of a needed medicinal chemical.
    “This system helped identify mutations that would make the bacteria more efficient at producing the target molecule,” Diaz said. “In some cases, it was up to three times as efficient as the natural system found in daffodils.”
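    MutComputeX itself is not described here in enough detail to reproduce, but the surrounding workflow can be sketched generically: enumerate candidate point mutations of an enzyme and rank them with a scoring model, keeping only the top few to build and test. The scoring function and sequence below are hypothetical placeholders, not the MutComputeX interface.

```python
# Generic sketch of ranking candidate point mutations for lab testing. The
# scoring function below is a hypothetical stand-in, NOT MutComputeX's API.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def score_variant(sequence: str) -> float:
    """Placeholder for a learned model that predicts variant fitness/stability."""
    return sum(ord(c) for c in sequence) % 100 / 100.0   # dummy score

wild_type = "MKTAYIAKQR"   # toy enzyme fragment, not a real sequence

candidates = []
for pos, wt_aa in enumerate(wild_type):
    for aa in AMINO_ACIDS:
        if aa == wt_aa:
            continue
        mutant = wild_type[:pos] + aa + wild_type[pos + 1:]
        candidates.append((f"{wt_aa}{pos + 1}{aa}", score_variant(mutant)))

# Keep the top-scoring handful to build and test in E. coli.
for name, score in sorted(candidates, key=lambda c: c[1], reverse=True)[:5]:
    print(name, round(score, 2))
```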
    The process of harnessing microbes to produce useful byproducts is nothing new. Brewers use yeast to make alcohol, and bacteria help create cheese and yogurt. Microbial fermentation is currently used to make certain types of insulin for diabetes treatment, hormones and recombinant proteins used in several drugs such as autoimmune treatments, and even vaccines. But applying AI in the process is relatively new and expands what is possible with microbial fermentation.

    The research team genetically modified E. coli to produce 4-O’Methyl-norbelladine, a chemical building block of galantamine. The complex molecule is in a family of compounds extracted from daffodils that have medicinal uses in treating conditions such as cancer, fungal infections and viral infections, but using microbial fermentation to create a chemical in this family is new.
    The scientists also created a fluorescent biosensor to quickly detect and analyze which bacteria were producing the desired chemicals and how much. When the biosensor, a specially created protein, comes into contact with the chemical the researchers wanted to create, it glows green.
    “The biosensor allows us to test and analyze samples in seconds when it used to take something like five minutes each,” said Simon d’Oelsnitz, a postdoctoral researcher formerly at UT Austin and now at Harvard University, the first author of the paper. “And the machine learning program allows us to easily narrow candidates from tens of thousands to tens. Put together, these are really powerful tools.”
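    The screening logic the biosensor enables can be sketched in a few lines: keep the wells whose green fluorescence stands out from background as candidate producers. The plate-reader numbers and the cutoff below are synthetic and arbitrary, purely for illustration.

```python
# Toy sketch of biosensor-based screening: wells whose green fluorescence
# exceeds a background threshold are kept as likely producers (synthetic data).
import numpy as np

rng = np.random.default_rng(3)
fluorescence = rng.lognormal(mean=2.0, sigma=0.6, size=10_000)  # plate-reader signal
threshold = np.percentile(fluorescence, 99)                     # arbitrary cutoff

hits = np.flatnonzero(fluorescence > threshold)
top = hits[np.argsort(fluorescence[hits])[::-1]][:10]
print(f"{hits.size} candidate producers; top wells: {top.tolist()}")
```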
    Wantae Kim, Daniel Acosta, Tyler Dangerfield, Mason Schechter, James Howard, Hannah Do, James Loy, Hal Alper and Y. Jessie Zhang of UT and Matthew Minus of Prairie View A&M University were also authors of the paper. The research was supported by the National Institute of Standards and Technology, the Air Force Office of Scientific Research and the National Institutes of Health, and the National Science Foundation supports IFML. Computing resources were provided by Advanced Micro Devices.
    Those involved in this research have submitted required financial disclosure forms with the University, and Ellington, Diaz and d’Oelsnitz have filed a patent application on materials described in this text. Diaz and d’Oelsnitz are each involved with startups related to this research.

  • AI for astrophysics: Algorithms help chart the origins of heavy elements

    The origin of heavy elements in our universe is theorized to be the result of neutron star collisions, which produce conditions hot and dense enough for free neutrons to merge with atomic nuclei and form new elements in a split-second window of time. Testing this theory and answering other astrophysical questions requires predictions for a vast range of masses of atomic nuclei. Los Alamos National Laboratory scientists are front and center in using machine learning algorithms (an application of artificial intelligence) to successfully model the atomic masses of the entire nuclide chart — all possible combinations of protons and neutrons that define elements and their isotopes.
    “Many thousands of atomic nuclei that have yet to be measured may exist in nature,” said Matthew Mumpower, a theoretical physicist and co-author on several recent papers detailing atomic masses research. “Machine learning algorithms are very powerful, as they can find complex correlations in data, a result that theoretical nuclear physics models struggle to efficiently produce. These correlations can provide information to scientists about ‘missing physics’ and can in turn be used to strengthen modern nuclear models of atomic masses.”
    Simulating the rapid neutron-capture process
    Most recently, Mumpower and his colleagues, including former Los Alamos summer student Mengke Li and postdoc Trevor Sprouse, authored a paper in Physics Letters B that described simulating an important astrophysical process with a physics-based machine learning mass model. The r process, or rapid neutron-capture process, is the astrophysical process that occurs in extreme environments, like those produced by neutron star collisions. Heavy elements may result from this “nucleosynthesis”; in fact, half of the heavy isotopes up to bismuth and all of thorium and uranium in the universe may have been created by the r process.
    But modeling the r process requires theoretical predictions of atomic masses currently beyond experimental reach. The team’s physics-informed machine-learning approach trains a model based on random selection from the Atomic Mass Evaluation, a large database of masses. Next the researchers use these predicted masses to simulate the r process. The model allowed the team to simulate r-process nucleosynthesis with machine-learned mass predictions for the first time — a significant feat, as machine learning predictions generally break down when extrapolating.
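    Stripped of the physics-informed ingredients that make the published model work, the basic data setup can be illustrated as follows: fit a regressor to a random selection of (Z, N) mass data and evaluate it on held-out nuclei. The "data" here come from a crude semi-empirical mass formula rather than the real Atomic Mass Evaluation, so this is only a schematic sketch.

```python
# Bare-bones illustration (not the physics-informed model in the paper):
# fit a regressor to a random selection of (Z, N) -> binding-energy data and
# check it on held-out nuclei. Data are a liquid-drop stand-in for the AME.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

def liquid_drop_binding(Z, N):
    """Crude semi-empirical binding energy (MeV): volume, surface, Coulomb, asymmetry."""
    A = Z + N
    return (15.8 * A - 18.3 * A ** (2 / 3)
            - 0.714 * Z * (Z - 1) / A ** (1 / 3)
            - 23.2 * (N - Z) ** 2 / A)

Z, N = np.meshgrid(np.arange(10, 100), np.arange(10, 150))
Z, N = Z.ravel(), N.ravel()
X = np.column_stack([Z, N])
y = liquid_drop_binding(Z, N)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("held-out RMSE (MeV):", np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2)))
```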
    “We’ve shown that machine learning atomic masses can open the door to predictions beyond where we have experimental data,” Mumpower said. “The critical piece is that we tell the model to obey the laws of physics. By doing so, we enable physics-based extrapolations. Our results are on par with or outperform contemporary theoretical models and can be immediately updated when new data is available.”
    Investigating nuclear structures
    The r-process simulations complement the research team’s application of machine learning to related investigations of nuclear structure. In a recent article in Physical Review C selected as an Editor’s Suggestion, the team used machine learning algorithms to reproduce nuclear binding energies with quantified uncertainties; that is, they were able to ascertain the energy needed to separate an atomic nucleus into protons and neutrons, along with an associated error bar for each prediction. The algorithm thus provides information that would otherwise take significant computational time and resources to obtain from current nuclear modeling.
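    One generic way to attach such error bars, not necessarily the uncertainty quantification used in the paper, is to train an ensemble on bootstrap resamples and report the spread of its predictions; the toy (Z, N) data below are invented for illustration.

```python
# Sketch: attach error bars by training an ensemble on bootstrap resamples and
# reporting mean +/- std per nucleus (a generic approach, not the paper's method).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.utils import resample

rng = np.random.default_rng(0)
X = rng.integers(10, 120, size=(800, 2)).astype(float)      # toy (Z, N) pairs
y = 8.0 * X.sum(axis=1) + rng.normal(0, 5, 800)             # toy binding energies

ensemble = []
for seed in range(20):
    Xb, yb = resample(X, y, random_state=seed)
    ensemble.append(GradientBoostingRegressor(random_state=seed).fit(Xb, yb))

X_new = np.array([[50.0, 70.0], [82.0, 126.0]])              # nuclei to predict
preds = np.stack([m.predict(X_new) for m in ensemble])
print("mean:", preds.mean(axis=0), "std (error bar):", preds.std(axis=0))
```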
    In related work, the team used their machine learning model to combine precision experimental data with theoretical knowledge. These results have motivated some of the first experimental campaigns at the new Facility for Rare Isotope Beams, which seeks to expand the known region of the nuclear chart and uncover the origin of the heavy elements.

  • Robot ANYmal can do parkour and walk across rubble

    ANYmal has for some time had no problem coping with the stony terrain of Swiss hiking trails. Now researchers at ETH Zurich have taught this quadrupedal robot some new skills: it is proving rather adept at parkour, an increasingly popular sport based on using athletic manoeuvres to smoothly negotiate obstacles in an urban environment. ANYmal is also proficient at dealing with the tricky terrain commonly found on building sites or in disaster areas.
    To teach ANYmal these new skills, two teams, both from the group led by ETH Professor Marco Hutter of the Department of Mechanical and Process Engineering, followed different approaches.
    Exhausting the mechanical options
    Working in one of the teams is ETH doctoral student Nikita Rudin, who does parkour in his free time. “Before the project started, several of my researcher colleagues thought that legged robots had already reached the limits of their development potential,” he says, “but I had a different opinion. In fact, I was sure that a lot more could be done with the mechanics of legged robots.”
    With his own parkour experience in mind, Rudin set out to further push the boundaries of what ANYmal could do. And he succeeded, by using machine learning to teach the quadrupedal robot new skills. ANYmal can now scale obstacles and perform dynamic manoeuvres to jump back down from them.
    In the process, ANYmal learned like a child would — through trial and error. Now, when presented with an obstacle, ANYmal uses its camera and artificial neural network to determine what kind of impediment it’s dealing with. It then performs movements that seem likely to succeed based on its previous training.
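    In code, "trial and error" boils down to a loop like the one below: try an action, observe whether it worked, and nudge the estimated value of that action accordingly. This is a deliberately tiny tabular example with an invented obstacle task; the real ANYmal training uses deep reinforcement learning in large-scale simulation.

```python
# Tiny trial-and-error learning loop on a made-up obstacle task, only to
# illustrate the idea; the real ANYmal training uses deep RL in simulation.
import numpy as np

rng = np.random.default_rng(0)
N_OBSTACLES, ACTIONS = 3, ["step_over", "jump", "climb"]
# Hidden "true" success probability of each action on each obstacle type
p_success = np.array([[0.90, 0.50, 0.20],
                      [0.10, 0.80, 0.40],
                      [0.05, 0.30, 0.90]])

Q = np.zeros((N_OBSTACLES, len(ACTIONS)))   # learned value of each action
alpha, epsilon = 0.1, 0.2
for episode in range(5000):
    obstacle = rng.integers(N_OBSTACLES)                 # what the camera "sees"
    if rng.random() < epsilon:                           # explore
        action = rng.integers(len(ACTIONS))
    else:                                                # exploit current knowledge
        action = int(Q[obstacle].argmax())
    reward = float(rng.random() < p_success[obstacle, action])   # did it get over?
    Q[obstacle, action] += alpha * (reward - Q[obstacle, action])

for i, row in enumerate(Q):
    print(f"obstacle {i}: best action = {ACTIONS[int(row.argmax())]}")
```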
    Is that the full extent of what’s technically possible? Rudin suggests that this is largely the case for each individual new skill. But he adds that this still leaves plenty of potential improvements. These include allowing the robot to move beyond solving predefined problems and instead asking it to negotiate difficult terrain like rubble-strewn disaster areas.
    Combining new and traditional technologies
    Getting ANYmal ready for precisely that kind of application was the goal of the other project, conducted by Rudin’s colleague and fellow ETH doctoral student Fabian Jenelten. But rather than relying on machine learning alone, Jenelten combined it with a tried-and-tested approach used in control engineering known as model-based control. This provides an easier way of teaching the robot accurate manoeuvres, such as how to recognise and get past gaps and recesses in piles of rubble. In turn, machine learning helps the robot master movement patterns that it can then flexibly apply in unexpected situations. “Combining both approaches lets us get the most out of ANYmal,” Jenelten says.
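    The pattern of pairing model-based control with learning can be sketched schematically: a nominal command computed from a simple known model, plus a learned residual correction for effects, such as slipping, that the model misses. The numbers and functions below are illustrative placeholders, not the ETH controllers.

```python
# Schematic sketch of combining model-based control with a learned residual:
# nominal command from a simple model plus a learned correction for what the
# model misses (e.g. slip). All numbers/functions are illustrative.
import numpy as np

def model_based_command(target_pos, current_pos, kp=4.0):
    """Proportional controller derived from a simple (known) model."""
    return kp * (target_pos - current_pos)

def learned_residual(state_features, weights):
    """Stand-in for a trained network correcting model errors."""
    return float(weights @ state_features)

weights = np.array([0.5, -0.3, 0.1])          # pretend these were learned
state_features = np.array([0.2, 0.7, 0.1])    # e.g. slip estimate, contact, tilt

u_nominal = model_based_command(target_pos=1.0, current_pos=0.8)
u = u_nominal + learned_residual(state_features, weights)
print(f"nominal command {u_nominal:.2f}, corrected command {u:.2f}")
```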
    As a result, the quadrupedal robot is now better at gaining a sure footing on slippery surfaces or unstable boulders. ANYmal is soon also to be deployed on building sites or anywhere that is too dangerous for people — for instance to inspect a collapsed house in a disaster area.