More stories

  • Two artificial intelligences talk to each other

    Performing a new task based solely on verbal or written instructions, and then describing it to others so that they can reproduce it, is a cornerstone of human communication that still resists artificial intelligence (AI). A team from the University of Geneva (UNIGE) has succeeded in modelling an artificial neural network capable of this cognitive prowess. After learning and performing a series of basic tasks, this AI was able to provide a linguistic description of them to a “sister” AI, which in turn performed them. These promising results, especially for robotics, are published in Nature Neuroscience.
    Performing a new task without prior training, on the sole basis of verbal or written instructions, is a unique human ability. What’s more, once we have learned the task, we are able to describe it so that another person can reproduce it. This dual capacity distinguishes us from other species, which need numerous trials, accompanied by positive or negative reinforcement signals, to learn a new task, and cannot communicate it to others of their kind.
    A sub-field of artificial intelligence, natural language processing, seeks to recreate this human faculty, with machines that understand and respond to vocal or textual data. This technique is based on artificial neural networks, inspired by our biological neurons and by the way they transmit electrical signals to each other in the brain. However, the neural calculations that would make it possible to achieve the cognitive feat described above are still poorly understood.
    “Currently, conversational agents using AI are capable of integrating linguistic information to produce text or an image. But, as far as we know, they are not yet capable of translating a verbal or written instruction into a sensorimotor action, let alone explaining it to another artificial intelligence so that it can reproduce it,” explains Alexandre Pouget, full professor in the Department of Basic Neurosciences at the UNIGE Faculty of Medicine.
    A model brain
    The researcher and his team have succeeded in developing an artificial neural model with this dual capacity, albeit with prior training. “We started with an existing model of artificial neurons, S-Bert, which has 300 million neurons and is pre-trained to understand language. We ‘connected’ it to another, simpler network of a few thousand neurons,” explains Reidar Riveland, a PhD student in the Department of Basic Neurosciences at the UNIGE Faculty of Medicine, and first author of the study.
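    To make the setup concrete, here is a rough sketch of that kind of architecture: a frozen, pre-trained sentence encoder feeding a much smaller recurrent sensorimotor network. The encoder checkpoint, layer sizes, and action format below are assumptions chosen for illustration, not the authors’ actual code.

    ```python
    import torch
    import torch.nn as nn
    from sentence_transformers import SentenceTransformer

    # Rough sketch of the architecture described above: a frozen pretrained sentence
    # encoder (a stand-in for S-BERT) feeding a much smaller recurrent "sensorimotor"
    # network. Model names, sizes, and the action format are assumptions.

    encoder = SentenceTransformer("all-MiniLM-L6-v2")     # frozen language model
    EMB_DIM = encoder.get_sentence_embedding_dimension()  # 384 for this checkpoint

    class SensorimotorNet(nn.Module):
        """A small network that turns an instruction plus stimuli into an action."""
        def __init__(self, stim_dim=2, hidden=64, action_dim=2):
            super().__init__()
            self.instr_proj = nn.Linear(EMB_DIM, hidden)
            self.rnn = nn.GRU(stim_dim + hidden, hidden, batch_first=True)
            self.action_head = nn.Linear(hidden, action_dim)  # e.g. point left/right

        def forward(self, instr_emb, stimuli):
            # broadcast the instruction embedding across the stimulus time steps
            ctx = self.instr_proj(instr_emb).unsqueeze(1).expand(-1, stimuli.size(1), -1)
            out, _ = self.rnn(torch.cat([stimuli, ctx], dim=-1))
            return self.action_head(out[:, -1])               # action at the last step

    net = SensorimotorNet()
    instruction = ["Point to the side opposite the stimulus."]
    emb = torch.tensor(encoder.encode(instruction))           # shape (1, 384)
    stimuli = torch.randn(1, 10, 2)                           # toy 10-step visual input
    action = net(emb, stimuli)
    print(action.shape)                                       # torch.Size([1, 2])
    ```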
    In the first stage of the experiment, the neuroscientists trained this network to simulate Wernicke’s area, the part of our brain that enables us to perceive and interpret language. In the second stage, the network was trained to reproduce Broca’s area, which, under the influence of Wernicke’s area, is responsible for producing and articulating words. The entire process was carried out on conventional laptop computers. Written instructions in English were then transmitted to the AI.
    For example: pointing to the location — left or right — where a stimulus is perceived; responding in the direction opposite to a stimulus; or, more complex still, choosing the brighter of two visual stimuli that differ only slightly in contrast. The scientists then evaluated the results of the model, which simulated the intention of moving, or in this case pointing. “Once these tasks had been learned, the network was able to describe them to a second network — a copy of the first — so that it could reproduce them. To our knowledge, this is the first time that two AIs have been able to talk to each other in a purely linguistic way,” says Alexandre Pouget, who led the research.
    For future humanoids
    This model opens new horizons for understanding the interaction between language and behaviour. It is particularly promising for the robotics sector, where the development of technologies that enable machines to talk to each other is a key issue. “The network we have developed is very small. Nothing now stands in the way of developing, on this basis, much more complex networks that would be integrated into humanoid robots capable of understanding us but also of understanding each other,” conclude the two researchers.

  • Holographic message encoded in simple plastic

    There are many ways to store data — digitally, on a hard disk, or using analogue storage technology, for example as a hologram. In most cases, it is technically quite complicated to create a hologram: High-precision laser technology is normally used for this.
    However, if the aim is simply to store data in a physical object, then holography can be done quite easily, as has now been demonstrated at TU Wien: A 3D printer can be used to produce a panel from normal plastic in which a QR code can be stored, for example. The message is read using terahertz rays — electromagnetic radiation that is invisible to the human eye.
    The hologram as a data storage device
    A hologram is completely different from an ordinary image. In an ordinary image, each pixel has a clearly defined position. If you tear off a piece of the picture, a part of the content is lost.
    In a hologram, however, the image is formed by contributions from all areas of the hologram simultaneously. If you take away a piece of the hologram, the rest can still create the complete image (albeit perhaps a blurrier version). With the hologram, the information is not stored pixel by pixel, but rather, all of the information is spread out over the whole hologram.
    “We have applied this principle to terahertz beams,” says Evan Constable from the Institute of Solid State Physics at TU Wien. “These are electromagnetic rays in the range of around one hundred to several thousand gigahertz, comparable to the radiation of a cell phone or a microwave oven — but with a significantly higher frequency.”
    This terahertz radiation is sent to a thin plastic plate. This plate is almost transparent to the terahertz rays, but it has a higher refractive index than the surrounding air, so at each point of the plate, it changes the incident wave a little. “A wave then emanates from each point of the plate, and all these waves interfere with each other,” says Evan Constable. “If you have adjusted the thickness of the plate in just the right way, point by point, then the superposition of all these waves produces exactly the desired image.”
    It is similar to throwing lots of little stones into a pond in a precisely calculated way so that the water waves from all these stones add up to a very specific overall wave pattern.
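    As a toy numerical version of this superposition picture, one can treat the plate as a point-wise phase mask and compute the resulting far-field interference with a Fourier transform. The refractive index, wavelength, and thickness values below are illustrative assumptions, not the actual experimental parameters.

    ```python
    import numpy as np

    # Toy version of the superposition picture above: a plastic plate of varying
    # thickness imprints a phase on the incoming terahertz wave, and the far-field
    # image is the interference of the waves leaving every point of the plate
    # (computed here with a Fourier transform). All numbers are illustrative.

    n_plastic = 1.55            # assumed refractive index of the printed plastic
    wavelength = 1.0e-3         # 1 mm, i.e. roughly 300 GHz terahertz radiation

    thickness = np.random.default_rng(0).uniform(0, 2e-3, (128, 128))  # metres
    phase = 2 * np.pi * (n_plastic - 1) * thickness / wavelength       # extra optical path

    field_after_plate = np.exp(1j * phase)                  # uniform wave, point-wise delayed
    far_field = np.fft.fftshift(np.fft.fft2(field_after_plate))
    image = np.abs(far_field) ** 2                          # intensity pattern on the detector
    print(image.shape, image.max())
    ```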

    A piece of cheap plastic as a high-tech storage unit for valuable items
    In this way, it was possible to encode a Bitcoin wallet address (consisting of 256 bits) in a piece of plastic. When terahertz rays of the correct wavelength are shone through this plastic plate, a terahertz image is created that reproduces exactly the desired code. “In this way, you can securely store a value of tens of thousands of euros in an object that only costs a few cents,” says Evan Constable.
    In order for the plate to generate the correct code, one first has to calculate how thick the plate has to be at each point so that it changes the terahertz wave in exactly the right way. Evan Constable and his collaborators have made the code for obtaining this thickness profile freely available on GitHub. “Once you have this thickness profile, all you need is an ordinary 3D printer to print the plate, and you have the desired information stored holographically,” explains Constable. The aim of the research was not only to make holography with terahertz waves possible, but also to demonstrate how far the technology for working with these waves has progressed, and how precisely this still rather unusual range of electromagnetic radiation can already be used today.
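    The group’s own design code is the reference implementation; purely as a generic illustration of how such a thickness profile can be obtained, the classic Gerchberg-Saxton iteration works roughly as follows (all material parameters below are assumed values, and the target is a random stand-in for a QR code):

    ```python
    import numpy as np

    # Generic Gerchberg-Saxton sketch for designing a phase (thickness) profile
    # that reproduces a target intensity pattern. This is NOT the TU Wien code.

    def design_plate(target_intensity, n_plastic=1.55, wavelength=1.0e-3, iters=200):
        target_amp = np.sqrt(target_intensity / target_intensity.sum())
        phase = np.random.default_rng(1).uniform(0, 2 * np.pi, target_amp.shape)
        for _ in range(iters):
            # forward: uniform illumination through the plate -> image plane
            image = np.fft.fft2(np.exp(1j * phase))
            # keep the propagated phase but impose the target amplitude
            image = target_amp * np.exp(1j * np.angle(image))
            # backward: return to the plate plane and keep only the phase there
            phase = np.angle(np.fft.ifft2(image))
        # convert the phase profile into a printable thickness profile
        thickness = (phase % (2 * np.pi)) * wavelength / (2 * np.pi * (n_plastic - 1))
        return thickness

    # Example: a coarse 64x64 binary pattern as a stand-in for a QR code
    target = np.random.default_rng(2).integers(0, 2, (64, 64)).astype(float)
    plate = design_plate(target)
    print("thickness range (mm):", plate.min() * 1e3, "-", plate.max() * 1e3)
    ```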

  • New study shows analog computing can solve complex equations and use far less energy

    A team of researchers, including University of Massachusetts Amherst engineers, has proven that their analog computing device, called a memristor, can complete complex scientific computing tasks while bypassing the limitations of digital computing.
    Many of today’s important scientific questions — from nanoscale material modeling to large-scale climate science — can be explored using complex equations. However, today’s digital computing systems are reaching their limit for performing these computations in terms of speed, energy consumption and infrastructure.
    Qiangfei Xia, UMass Amherst professor of electrical and computer engineering, and one of the corresponding authors of the research published in Science, explains that, with current computing methods, every time you want to store information or give a computer a task, it requires moving data between memory and computing units. With complex tasks moving larger amounts of data, you essentially get a processing “traffic jam” of sorts.
    One way traditional computing has aimed to solve this is by increasing bandwidth. Instead, Xia and his colleagues at UMass Amherst, the University of Southern California, and the computing technology maker TetraMem Inc. have implemented in-memory computing with analog memristor technology as an alternative that can avoid these bottlenecks by reducing the number of data transfers.
    The team’s in-memory computing relies on an electrical component called a memristor — a combination of a memory and a resistor. A memristor controls the flow of electrical current in a circuit while also “remembering” its prior state even when the power is turned off, unlike today’s transistor-based computer chips, which can hold information only while there is power. The memristor device can be programmed into multiple resistance levels, increasing the information density in one cell.
    When organized into a crossbar array, such a memristive circuit does analog computing by using physical laws in a massively parallel fashion, substantially accelerating matrix operations, the most frequently used and most power-hungry computations in neural networks. The computing is performed at the site of the device, rather than by moving data between memory and processing units. Using the traffic analogy, Xia compares in-memory computing to the nearly empty roads seen at the height of the pandemic: “You eliminated traffic because [nearly] everybody worked from home,” he says. “We work simultaneously, but we only send the important data/results out.”
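    In a crossbar, each crosspoint stores a conductance, and applying voltages to the columns yields row currents that are exactly the matrix-vector product, via Ohm’s and Kirchhoff’s laws. A minimal numerical sketch of that idea (conductance ranges and level counts are assumed, not TetraMem’s specifications):

    ```python
    import numpy as np

    # Toy simulation of analog matrix-vector multiplication in a memristor crossbar.
    # Each crosspoint stores a conductance G[i, j]; applying voltages V[j] to the
    # columns produces row currents I[i] = sum_j G[i, j] * V[j].

    rng = np.random.default_rng(0)

    levels = 8                                  # discrete conductance levels per cell
    g_min, g_max = 1e-6, 1e-4                   # conductance range in siemens (assumed)
    target = rng.uniform(g_min, g_max, (4, 4))  # matrix we want to program

    # Quantize the target matrix onto the available conductance levels
    step = (g_max - g_min) / (levels - 1)
    G = g_min + np.round((target - g_min) / step) * step

    V = rng.uniform(-0.1, 0.1, 4)               # input vector encoded as voltages
    I = G @ V                                   # currents read out on the rows

    print("analog result (A):", I)
    print("ideal result  (A):", target @ V)
    ```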
    Previously, these researchers demonstrated that their memristor can complete low-precision computing tasks, like machine learning. Other applications have included analog signal processing, radiofrequency sensing, and hardware security.

    “In this work, we propose and demonstrate a new circuit architecture and programming protocol that can efficiently represent high-precision numbers using a weighted sum of multiple, relatively low-precision analog devices, such as memristors, with a greatly reduced overhead in circuitry, energy and latency compared with existing quantization approaches,” says Xia.
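    As a generic illustration of that idea, rather than the paper’s actual circuit or programming protocol, a high-precision number can be represented as a weighted sum of several low-precision slices, each of which fits in one analog cell:

    ```python
    import numpy as np

    # Illustrative bit-slicing: represent a high-precision value as a weighted sum
    # of several low-precision "devices" (here, 3-bit slices). This sketches the
    # general idea only; it is not the circuit or protocol from the Science paper.

    def encode(x, n_slices=4, bits_per_slice=3, full_scale=1.0):
        """Split x in [0, full_scale) into n_slices values of bits_per_slice bits each."""
        base = 2 ** bits_per_slice
        code = int(x / full_scale * base ** n_slices)
        slices = []
        for _ in range(n_slices):
            slices.append(code % base)
            code //= base
        return slices[::-1]          # most-significant slice first

    def decode(slices, bits_per_slice=3, full_scale=1.0):
        base = 2 ** bits_per_slice
        code = 0
        for s in slices:
            code = code * base + s
        return code / base ** len(slices) * full_scale

    x = 0.637281
    slices = encode(x)
    print("slices:", slices)                 # e.g. [5, 0, 6, 2]
    print("reconstructed:", decode(slices))  # close to x, error below 2**-12
    ```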
    “The breakthrough for this particular paper is that we push the boundary further,” he adds. “This technology is not only good for low-precision, neural network computing, but it can also be good for high-precision, scientific computing.”
    For the proof-of-principle demonstration, the memristor solved static and time-evolving partial differential equations, Navier-Stokes equations, and magnetohydrodynamics problems.
    “We pushed ourselves out of our own comfort zone,” he says, expanding beyond the low-precision requirements of edge computing neural networks to high-precision scientific computing.
    It took over a decade for the UMass Amherst team and collaborators to design a proper memristor device and build sizeable circuits and computer chips for analog in-memory computing. “Our research in the past decade has made analog memristor a viable technology. It is time to move such a great technology into the semiconductor industry to benefit the broad AI hardware community,” Xia says.

  • Vac to the future

    Scientists love a challenge. Or a friendly competition.
    Scientists at La Jolla Institute for Immunology (LJI) recently published the results of a competition that put researchers to the test. For the competition, part of the NIH-funded Computational Models of Immunity network, teams of researchers from different institutions offered up their best predictions regarding B. pertussis (whooping cough) vaccination.
    Each team tried to answer the same set of questions about vaccine responses in a diverse set of clinical study participants. Which study participants would show the highest antibody response to B. pertussis toxin 14 days post-vaccination? Which participants would show the highest increase of monocytes in their blood one day post-vaccination? And so on.
    The teams were given data on the study participants’ age, sex, and characteristics of their immune status prior to vaccination. The teams then developed computational models to predict vaccine responses in different patient groups.
    “We asked, ‘What do you think is the most important factor that drives vaccination outcome?’” says LJI Professor Bjoern Peters, Ph.D., who led the recent Cell Reports Methods study. “The idea was to make the teams really put their money where their mouth is.”
    Multiple computational models to predict vaccine responses have been developed previously, many of them based on complex patterns in immune state before and after vaccination. Surprisingly, the best predictor in the competition was based on a very simple correlation: antibody responses decrease with the calendar age of study participants.
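    As a sense of what such a baseline looks like in practice, the sketch below ranks participants by age alone and scores the ranking with a Spearman correlation. The data are fabricated for illustration and are not from the CMI-PB challenge, and the scoring metric is just one common choice for ranked predictions.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    # Toy illustration of the winning style of model: rank participants purely by
    # calendar age (older -> lower predicted antibody response). The data below are
    # fabricated for illustration; they are not from the CMI-PB challenge.

    rng = np.random.default_rng(1)
    n = 30
    age = rng.uniform(20, 70, n)
    # hypothetical day-14 antibody response: declines with age, plus noise
    antibody = 10 - 0.1 * age + rng.normal(0, 1.0, n)

    prediction = -age                       # simple baseline: response decreases with age
    rho, p = spearmanr(prediction, antibody)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")
    ```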
    The result may seem anti-climactic, but the competition sheds light on where more vaccine research is needed. “We know calendar age is important, but we still see a lot of variability in vaccination responses that we can’t explain,” says Peters.

    The competition has also helped rally scientists around further B. pertussis vaccine research. In the United States, B. pertussis vaccines were reformulated in the 1990s to address relatively minor adverse side effects. Research suggests the newer acellular pertussis (aP) vaccine design may not be as effective as the older whole-cell (wP) design in preventing disease transmission and infection.
    “We don’t know what’s missing from this current vaccine,” says Peters. “That’s an open question.”
    The prediction competition is shaping up to be an annual event, and previous entrants have gone back to the data to further hone their predictions. Perhaps, Peters hopes, this closer look at exactly what drives higher antibody responses in younger people can lead to better vaccines for all patient groups.
    “We are hoping to use this competition not just as a way to examine the capacity of people to predict vaccination outcomes, but also as a way to address an important public health question,” says Peters.
    The Peters Lab and the CMI-PB Team are currently finishing up their second invited challenge. They will host a public contest in or around August 2024. Researchers can join them at https://www.cmi-pb.org/
    Additional authors of the study, “A multi-omics systems vaccinology resource to develop and test computational models of immunity,” include Pramod Shinde, Ferran Soldevila, Joaquin Reyna, Minori Aoki, Mikkel Rasmussen, Lisa Willemsen, Mari Kojima, Brendan Ha, Jason A Greenbaum, James A Overton, Hector Guzman-Orozco, Somayeh Nili, Shelby Orfield, Jeremy P. Gygi, Ricardo da Silva Antunes, Alessandro Sette, Barry Grant, Lars Rønn Olsen, Anna Konstorum, Leying Guan, Ferhat Ay, and Steven H. Kleinstein.
    This study was supported by the National Institutes of Health’s (NIH) National Institute of Allergy and Infectious Diseases (NIAID; grants U01AI150753, U01AI141995, and U19AI142742).

  • Information overload is a personal and societal danger

    We are all aware of the dangers of pollution to our air, water, and earth. In a letter recently published in Nature Human Behaviour, scientists are advocating for the recognition and mitigation of another type of environmental pollution that poses equivalent personal and societal dangers: information overload.
    With the internet at our fingertips via our smartphones, we are exposed to an unprecedented amount of data, far beyond our ability to process it. The result is an inability to evaluate information and make decisions. It can also lead us to limit our social activities and to feel unsatisfied with our jobs, unmotivated, and generally negative. Economists estimate that it all comes at a global cost of about $1 trillion. On top of the emotional and cognitive effects, contextual and environmental considerations may add to the personal and economic costs.
    The idea to explore information overload was incubated in a meeting of an international group of scientists two years ago, all of whom were supported by an E.U. grant for international collaboration. The E.U. team selected partners abroad including, for the third time, Rensselaer Polytechnic Institute’s Network Science and Technology Center (NeST), led by Boleslaw Szymanski, Ph.D., professor of computer science, in the United States.
    The researchers compare information overload to other historical shifts in society: open publishing brought about the need to filter out low-quality research from the vast number of accessible publications, the Industrial Revolution gave rise to air pollution, and environmental activists have helped usher in legal and economic changes to help curb pollution. Similarly, so-called “information pollution” or “data smog” must be addressed.
    Through the lens of computer science, there are at least three levels of information overload: “neural and cognitive mechanisms on the individual level… information and decisions at the group level… (and) societal level interactions among individuals, groups, and information providers.” These levels do not operate independently, so the flow of information may be treated as a multilevel network, in which overload at individual nodes can give rise to abrupt changes at higher levels. The researchers cite teamwork as an example: one team member’s information overload may hinder the whole group’s performance. It is a complex problem.
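    A toy model, not one from the letter itself, makes the teamwork example concrete: if every joint task needs timely input from all members, the group’s output is gated by its most overloaded member. All numbers below are invented for illustration.

    ```python
    import numpy as np

    # Toy illustration (not a model from the Nature Human Behaviour letter): a team
    # in which every task needs input from all members, so the group completes a
    # task only as fast as its most overloaded member responds.

    capacity = np.array([10.0, 10.0, 10.0, 10.0])  # messages/hour each member can handle
    load     = np.array([8.0,  8.0,  8.0,  8.0])   # messages/hour arriving at each member

    def group_performance(load, capacity):
        # fraction of incoming information each member manages to process in time
        handled = np.minimum(1.0, capacity / load)
        # a jointly produced output is gated by the slowest (most overloaded) member
        return handled.min()

    print("balanced load :", group_performance(load, capacity))            # 1.0

    load_overloaded = load.copy()
    load_overloaded[0] = 40.0                       # one member is flooded with messages
    print("one overloaded:", group_performance(load_overloaded, capacity))  # 0.25
    ```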
    “We are calling for action in science, education, and legislation,” said Szymanski. “We need further interdisciplinary research on information overload. Information ecology must be taught in school. We also need to start the conversation on legislative possibilities, akin to the Clean Air Act in the U.K. decades ago.”
    “Information overload can have severe implications,” said Curt Breneman, Ph.D., dean of Rensselaer’s School of Science. “It begins by eroding our emotional health, job performance, and satisfaction, subsequently influencing the actions of groups and ultimately, entire societies. I hope that Dr. Szymanski’s letter, written with colleagues from across the world, will raise public awareness of the problem and enable solutions to be studied and implemented.”
    Szymanski was joined in authoring the letter by Janusz A. Hołyst of Warsaw University of Technology, the principal investigator of the E.U. grant; Philipp Mayr of the Leibniz Institute for the Social Sciences; Michael Thelwall of the University of Sheffield; Ingo Frommholz of the University of Wolverhampton; Shlomo Havlin and Alon Sela of Bar-Ilan University; Yoed N. Kenett of Technion — Israel Institute of Technology; Denis Helic of Modul University Vienna; Aljoša Rehar and Sebastijan R. Maček of the Slovenian Press Agency; Przemysław Kazienko and Tomasz Kajdanowicz of Wroclaw University of Science and Technology; Przemysław Biecek of Warsaw University of Technology and the University of Warsaw; and Julian Sienkiewicz of Warsaw University of Technology.

  • Advanced army robots more likely to be blamed for deaths

    Advanced killer robots are more likely to be blamed for civilian deaths than military machines, new research has revealed.
    The University of Essex study shows that high-tech bots will be held more responsible for fatalities in identical incidents.
    Led by the Department of Psychology’s Dr Rael Dawtry, it highlights the impact of autonomy and agency.
    It showed that people perceive robots to be more culpable if they are described in a more advanced way.
    It is hoped the study — published in The Journal of Experimental Social Psychology — will help influence lawmakers as technology advances.
    Dr Dawtry said: “As robots are becoming more sophisticated, they are performing a wider range of tasks with less human involvement.
    “Some tasks, such as autonomous driving or military uses of robots, pose a risk to people’s safety, which raises questions about how — and where — responsibility will be assigned when people are harmed by autonomous robots.

    “This is an important, emerging issue for law and policy makers to grapple with, for example around the use of autonomous weapons and human rights.
    “Our research contributes to these debates by examining how ordinary people explain robots’ harmful behaviour and showing that the same processes underlying how blame is assigned to humans also lead people to assign blame to robots.”
    As part of the study Dr Dawtry presented different scenarios to more than 400 people.
    One saw them judge whether an armed humanoid robot was responsible for the death of a teenage girl.
    During a raid on a terror compound, its machine guns “discharged” and fatally hit the civilian.
    When reviewing the incident, the participants blamed a robot more when it was described in more sophisticated terms despite the outcomes being the same.

    Other studies showed that simply labelling a variety of devices ‘autonomous robots’ led people to hold them more accountable than when they were labelled ‘machines’.
    Dr Dawtry added: “These findings show that how robots’ autonomy is perceived — and in turn, how blameworthy robots are — is influenced, in a very subtle way, by how they are described.
    “For example, we found that simply labelling relatively simple machines, such as those used in factories, as ‘autonomous robots’, led people to perceive them as agentic and blameworthy, compared to when they were labelled ‘machines’.
    “One implication of our findings is that, as robots become more objectively sophisticated, or are simply made to appear so, they are more likely to be blamed.”

  • Alzheimer’s drug fermented with help from AI and bacteria moves closer to reality

    Galantamine is a common medication used by people with Alzheimer’s disease and other forms of dementia around the world to treat their symptoms. Unfortunately, synthesizing the active compounds in a lab at the scale needed isn’t commercially viable. The active ingredient is extracted from daffodils through a time-consuming process, and unpredictable factors, such as weather and crop yields, can affect supply and price of the drug.
    Now, researchers at The University of Texas at Austin have developed tools — including an artificial intelligence system and glowing biosensors — to harness microbes one day to do all the work instead.
    In a paper in Nature Communications, researchers outline a process using genetically modified bacteria to create a chemical precursor of galantamine as a byproduct of the microbe’s normal cellular metabolism. Essentially, the bacteria are programmed to convert food into medicinal compounds.
    “The goal is to eventually ferment medicines like this in large quantities,” said Andrew Ellington, a professor of molecular biosciences and author of the study. “This method creates a reliable supply that is much less expensive to produce. It doesn’t have a growing season, and it can’t be impacted by drought or floods.”
    Danny Diaz, a postdoctoral fellow with the Deep Proteins research group in UT’s Institute for Foundations of Machine Learning (IFML), developed an AI system called MutComputeX that is key to the process. It identifies how to mutate proteins inside the bacteria to improve their efficiency and operating temperature in order to maximize production of a needed medicinal chemical.
    “This system helped identify mutations that would make the bacteria more efficient at producing the target molecule,” Diaz said. “In some cases, it was up to three times as efficient as the natural system found in daffodils.”
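    The details of MutComputeX are in the paper; purely as a rough illustration of how per-residue amino-acid probabilities from such a model could be turned into a ranked list of candidate mutations, consider the following sketch, in which the sequence and probabilities are made up:

    ```python
    import numpy as np

    # Rough illustration only: how per-residue amino-acid probabilities from a model
    # such as MutComputeX *could* be turned into a ranked list of candidate mutations.
    # The probabilities and sequence below are made up; this is not the actual tool.

    AAS = list("ACDEFGHIKLMNPQRSTVWY")
    wild_type = "MKTLLVAGGF"                      # hypothetical 10-residue stretch

    rng = np.random.default_rng(7)
    probs = rng.dirichlet(np.ones(20), size=len(wild_type))  # stand-in model output

    candidates = []
    for pos, wt_aa in enumerate(wild_type):
        best = int(np.argmax(probs[pos]))
        predicted = AAS[best]
        if predicted != wt_aa:
            # score: how strongly the model prefers its choice over the wild type
            gain = probs[pos][best] - probs[pos][AAS.index(wt_aa)]
            candidates.append((gain, f"{wt_aa}{pos + 1}{predicted}"))

    for gain, mutation in sorted(candidates, reverse=True)[:5]:
        print(f"{mutation}: preference gain {gain:.2f}")
    ```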
    The process of harnessing microbes to produce useful byproducts is nothing new. Brewers use yeast to make alcohol, and bacteria help create cheese and yogurt. Microbial fermentation is currently used to make certain types of insulin for diabetes treatment, hormones and recombinant proteins used in several drugs such as autoimmune treatments, and even vaccines. But applying AI in the process is relatively new and expands what is possible with microbial fermentation.

    The research team genetically modified E. coli to produce 4-O-methylnorbelladine, a chemical building block of galantamine. The complex molecule is in a family of compounds extracted from daffodils that have medicinal uses in treating conditions such as cancer, fungal infections and viral infections, but using microbial fermentation to create a chemical in this family is new.
    The scientists also created a fluorescent biosensor to quickly detect and analyze which bacteria were producing the desired chemicals and how much. When the biosensor, a specially created protein, comes into contact with the chemical the researchers wanted to create, it glows green.
    “The biosensor allows us to test and analyze samples in seconds when it used to take something like five minutes each,” said Simon d’Oelsnitz, a postdoctoral researcher formerly at UT Austin and now at Harvard University, the first author of the paper. “And the machine learning program allows us to easily narrow candidates from tens of thousands to tens. Put together, these are really powerful tools.”
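    In practice, that screening step amounts to ranking variants by fluorescence and keeping the brightest few, as in this small sketch with invented readings:

    ```python
    import numpy as np

    # Illustrative screening step (hypothetical numbers): rank engineered strains by
    # the green fluorescence of the biosensor and keep only the brightest producers.

    rng = np.random.default_rng(3)
    n_variants = 20_000
    fluorescence = rng.lognormal(mean=2.0, sigma=0.8, size=n_variants)  # a.u., made up

    top_n = 20
    top_idx = np.argsort(fluorescence)[-top_n:][::-1]   # brightest first
    print("candidate strain IDs:", top_idx[:5])
    print("their fluorescence  :", np.round(fluorescence[top_idx[:5]], 1))
    ```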
    Wantae Kim, Daniel Acosta, Tyler Dangerfield, Mason Schechter, James Howard, Hannah Do, James Loy, Hal Alper and Y. Jessie Zhang of UT and Matthew Minus of Prairie View A&M University were also authors of the paper. The research was supported by the National Institute of Standards and Technology, the Air Force Office of Scientific Research and the National Institutes of Health, and the National Science Foundation supports IFML. Computing resources were provided by Advanced Micro Devices.
    Those involved in this research have submitted required financial disclosure forms with the University, and Ellington, Diaz and d’Oelsnitz have filed a patent application on materials described in this text. Diaz and d’Oelsnitz are each involved with startups related to this research.

  • AI for astrophysics: Algorithms help chart the origins of heavy elements

    The origin of heavy elements in our universe is theorized to be the result of neutron star collisions, which produce conditions hot and dense enough for free neutrons to merge with atomic nuclei and form new elements in a split-second window of time. Testing this theory and answering other astrophysical questions requires predictions for a vast range of masses of atomic nuclei. Los Alamos National Laboratory scientists are front and center in using machine learning algorithms (an application of artificial intelligence) to successfully model the atomic masses of the entire nuclide chart — all possible combinations of protons and neutrons that define the elements and their isotopes.
    “Many thousands of atomic nuclei that have yet to be measured may exist in nature,” said Matthew Mumpower, a theoretical physicist and co-author on several recent papers detailing atomic masses research. “Machine learning algorithms are very powerful, as they can find complex correlations in data, a result that theoretical nuclear physics models struggle to efficiently produce. These correlations can provide information to scientists about ‘missing physics’ and can in turn be used to strengthen modern nuclear models of atomic masses.”
    Simulating the rapid neutron-capture process
    Most recently, Mumpower and his colleagues, including former Los Alamos summer student Mengke Li and postdoc Trevor Sprouse, authored a paper in Physics Letters B that described simulating an important astrophysical process with a physics-based machine learning mass model. The r process, or rapid neutron-capture process, is the astrophysical process that occurs in extreme environments, like those produced by neutron star collisions. Heavy elements may result from this “nucleosynthesis”; in fact, half of the heavy isotopes up to bismuth and all of thorium and uranium in the universe may have been created by the r process.
    But modeling the r process requires theoretical predictions of atomic masses that are currently beyond experimental reach. The team’s physics-informed machine-learning approach trains a model on a random selection of masses from the Atomic Mass Evaluation, a large database of measured masses. The researchers then use the predicted masses to simulate the r process. The model allowed the team to simulate r-process nucleosynthesis with machine-learned mass predictions for the first time — a significant feat, as machine learning predictions generally break down when extrapolating.
    “We’ve shown that machine learning atomic masses can open the door to predictions beyond where we have experimental data,” Mumpower said. “The critical piece is that we tell the model to obey the laws of physics. By doing so, we enable physics-based extrapolations. Our results are on par with or outperform contemporary theoretical models and can be immediately updated when new data is available.”
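    A minimal sketch of what making a model “obey the laws of physics” can look like in this setting: fit a standard physics baseline (here, the semi-empirical mass formula) and let a regressor learn only the residual, so extrapolations stay anchored to the physics. The data below are synthetic stand-ins for measured masses, and this is not the Los Alamos model.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # Minimal sketch of a physics-informed mass fit (not the Los Alamos model):
    # start from the semi-empirical mass formula (SEMF) binding energy and let a
    # regressor learn only the residual between SEMF and "measured" values.

    def semf_binding(Z, N):
        """Semi-empirical mass formula binding energy in MeV (standard coefficients)."""
        A = Z + N
        a_v, a_s, a_c, a_a, a_p = 15.8, 18.3, 0.714, 23.2, 12.0
        pairing = np.where((Z % 2 == 0) & (N % 2 == 0), a_p / np.sqrt(A),
                   np.where((Z % 2 == 1) & (N % 2 == 1), -a_p / np.sqrt(A), 0.0))
        return (a_v * A - a_s * A ** (2 / 3)
                - a_c * Z * (Z - 1) / A ** (1 / 3)
                - a_a * (N - Z) ** 2 / A + pairing)

    rng = np.random.default_rng(0)
    Z = rng.integers(20, 90, 400)
    N = rng.integers(20, 140, 400)
    measured = semf_binding(Z, N) + rng.normal(0, 2.0, 400)   # synthetic "measurements"

    X = np.column_stack([Z, N, Z + N, (N - Z) / (Z + N)])     # physics-motivated features
    residual = measured - semf_binding(Z, N)
    model = GradientBoostingRegressor().fit(X, residual)

    # Predict a nucleus outside the training set: SEMF baseline + learned correction
    Z_new, N_new = np.array([95]), np.array([150])
    X_new = np.column_stack([Z_new, N_new, Z_new + N_new, (N_new - Z_new) / (Z_new + N_new)])
    prediction = semf_binding(Z_new, N_new) + model.predict(X_new)
    print(f"predicted binding energy: {prediction[0]:.1f} MeV")
    ```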
    Investigating nuclear structures
    The r-process simulations complement the research team’s application of machine learning to related investigations of nuclear structure. In a recent article in Physical Review C selected as an Editor’s Suggestion, the team used machine learning algorithms to reproduce nuclear binding energies with quantified uncertainties; that is, they were able to ascertain the energy needed to separate an atomic nucleus into protons and neutrons, along with an associated error bar for each prediction. The algorithm thus provides information that would otherwise take significant computational time and resources to obtain from current nuclear modeling.
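    One simple way to obtain such quantified uncertainties, offered only as an illustration and not necessarily the approach used in the Physical Review C study, is to train an ensemble on bootstrap resamples and report the spread of its predictions:

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # Illustrative error bars via a bootstrap ensemble: retrain on resampled data
    # and report the mean and standard deviation of the ensemble's predictions.

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, (300, 2))                               # stand-in features
    y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0, 0.1, 300)   # synthetic target

    def bootstrap_predict(X, y, X_new, n_models=20):
        preds = []
        for k in range(n_models):
            idx = np.random.default_rng(k).integers(0, len(y), len(y))  # resample
            model = GradientBoostingRegressor().fit(X[idx], y[idx])
            preds.append(model.predict(X_new))
        preds = np.asarray(preds)
        return preds.mean(axis=0), preds.std(axis=0)      # central value, error bar

    X_new = np.array([[0.5, 0.5]])
    mean, sigma = bootstrap_predict(X, y, X_new)
    print(f"prediction: {mean[0]:.2f} ± {sigma[0]:.2f}")
    ```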
    In related work, the team used their machine learning model to combine precision experimental data with theoretical knowledge. These results have motivated some of the first experimental campaigns at the new Facility for Rare Isotope Beams, which seeks to expand the known region of the nuclear chart and uncover the origin of the heavy elements.