More stories

  • Grasping exponential growth

    Most people underestimate exponential growth, including when it comes to the spread of the coronavirus. The ability to grasp the magnitude of exponential growth depends on the way in which it is communicated. Using the right framing helps to understand the benefit of mitigation measures.
    The coronavirus outbreak offered the public a crash course in statistics, with terms like doubling time, logarithmic scales, R factor, rolling averages, and excess mortality now on everyone’s tongue. However, simply having heard these terms does not mean that someone will be able to comprehend the speed of the spread.
    Exponential growth is a notoriously difficult concept to understand. This difficulty is illustrated by an old Indian legend in which a king is tricked by one of his advisers, who asks: “Noble lord, I want nothing more than a chessboard to be filled with grains of rice. Place one grain on the first square and double the amount of grain for each square that follows.”
    The king agreed to the deal, seemingly unaware of the explosive growth that would result from doubling the amount of grain for each of the 64 chessboard squares. At the end of the procedure, he would owe his adviser no less than 18 quintillion, 446 quadrillion, 744 trillion, 73 billion, 709 million, 551 thousand and 615 grains — the equivalent of around 11 billion train carriages full of rice.
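    For anyone who wants to verify the legend's arithmetic, a couple of lines of Python reproduce the total; doubling across the 64 squares is a geometric series that sums to 2^64 − 1:
```python
# Grains owed: 1 + 2 + 4 + ... + 2**63, a geometric series equal to 2**64 - 1
total = sum(2 ** square for square in range(64))
print(f"{total:,}")  # 18,446,744,073,709,551,615
```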
    The tendency to underestimate exponential growth can result in negative consequences during a pandemic. If people misjudge how quickly the virus can spread, then they are less likely to take measures such as mask wearing, social distancing, or working from home. Instead, people may perceive such measures as exaggerated.
    A new research paper in the journal PLOS ONE, by researchers from ETH Zurich’s Center for Law and Economics and the Lucerne University of Applied Sciences and Arts, has taken a closer look at this behavioural phenomenon, known as exponential growth bias. Martin Schonger, lecturer and director of a study programme at HSLU and Senior Research Fellow at ETH Zurich, and doctoral researcher Daniela Sele wanted to find out whether the way in which the exponential spread of infectious disease is communicated can affect the magnitude of this bias. From previous experiments, the researchers knew that people underestimate exponential growth even when they are aware of exponential growth bias. In other words, informing the public of the potential bias does little to improve perception: informed people still underestimate what exponential growth really means in practice, just like people who are unaware of the bias.
    Doubling time — a concept easier to understand than growth rate
    The research team conducted an experiment in which over 400 participants were presented with the same scenario: a country currently has a thousand cases, and this figure climbs by 26 percent every day. With this exponential spread of the virus, the country would reach one million cases in 30 days. However, there is a chance to reduce the growth rate from 26 percent to 9 percent by adopting mitigation measures.
    Researchers quizzed participants on the situation, framing their questions from different perspectives: How many cases can be prevented by adopting mitigation measures? By adopting the measures, how much time can be gained before reaching one million cases? How many cases will there be after 30 days if mitigation measures lengthen the doubling time from three days to eight days? Extending the doubling time in this way is equivalent to reducing the growth rate from 26 percent to 9 percent, an equivalence that few people recognise intuitively.
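    The scenario's arithmetic is easy to check for yourself. A short Python sketch, using only the figures given in the scenario, confirms that a 26 percent daily growth rate corresponds to roughly a three-day doubling time, 9 percent to roughly eight days, and that mitigation both prevents almost a million cases within 30 days and buys about 50 extra days before the millionth case:
```python
import math

cases0, high, low, days, target = 1_000, 1.26, 1.09, 30, 1_000_000

def doubling_time(daily_rate):
    """Days needed for cases to double at a given daily growth factor."""
    return math.log(2) / math.log(daily_rate)

print(round(doubling_time(high), 1), round(doubling_time(low), 1))  # 3.0 8.0

unmitigated = cases0 * high ** days   # ~1.03 million cases after 30 days
mitigated = cases0 * low ** days      # ~13,000 cases after 30 days
print(f"cases prevented: {unmitigated - mitigated:,.0f}")

def days_to_reach(daily_rate):
    """Days until the case count grows from cases0 to the target."""
    return math.log(target / cases0) / math.log(daily_rate)

print(f"days gained: {days_to_reach(low) - days_to_reach(high):.0f}")  # ~50
```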
    Researchers stated that they were surprised by the clear and consistent results of the experiment. Their first finding: talking about growth rates is an ineffective way of communicating the spread of pandemic diseases. Over 90 percent of participants drastically underestimated the number of infections after 30 days of exponential spread. They were much more on the mark, however, when the question was framed using doubling times.
    Imagining the impact of mitigation
    The researchers’ second finding was that people have trouble gauging how many infections can be prevented with mitigation measures. When asked how many infections could be prevented in the scenario above (starting from a thousand cases, a growth rate of 9 percent instead of 26 percent over 30 days), people responded with estimates that were extremely far off. The typical (median) participant believed that 8,600 cases could be prevented, when, in fact, the figure is almost one million.
    However, when participants were asked about the number of days that could be gained by adopting mitigation measures — for example, until hospitals are overloaded, or until there is a vaccine on the market — their estimates were significantly better.
    The experiment achieved its best results with questions framed from the perspective of time gained and the impact of slowing down doubling times. A statement that combines both of these would be, for example: “If each of us adopts preventive measures today, the spread of the virus will slow down: we can estimate that cases will double only every eight days, as opposed to every three days. This allows 50 additional days to implement preparatory measures to combat the virus (e.g., by providing much-needed supplies to hospitals, or finding treatments and vaccines) before reaching one million cases.”
    Choosing the right words
    The study, conducted during the Swiss partial lockdown in spring of 2020, did not focus on how public authorities and the media discussed the spread of the virus. However, Sele and Schonger have been following the way in which the drastic measures were communicated and comparing these observations with their research findings.
    According to the authors, the Federal Office of Public Health (FOPH) and the scientific task force often use doubling times rather than growth rates. In the experiment, they found that this method of framing communication surrounding the coronavirus improved people’s understanding. However, the FOPH made little mention of the potential for time gained, even though the research findings indicate that this information helps to better transmit the message.
    The researchers suspect that the direct impact of official communication is limited. Reporting in the press might play a more significant role, but the media mostly focus on case numbers and rarely frame communication in the context of time gained.
    Schonger and Sele see COVID measures as just one application of framing theory to communicating exponential growth: similar phenomena might also be observed in the banking and finance industry, or in legal and environmental policy-making.

  • Wearable sensor may signal you're developing COVID-19 — even if your symptoms are subtle

    A smart ring that generates continuous temperature data may foreshadow COVID-19, even in cases when infection is not suspected. The device, which may be a better illness indicator than a thermometer, could lead to earlier isolation and testing, curbing the spread of infectious diseases, according to a preliminary study led by UC San Francisco and UC San Diego.
    An analysis of data from 50 people previously infected with COVID-19, published online in the peer-reviewed journal Scientific Reports on Dec. 14, 2020, found that data obtained from the commercially available smart ring accurately identified higher temperatures in people with symptoms of COVID-19.
    While it is not known how effectively the smart ring can detect asymptomatic COVID-19, which affects between 10 and 70 percent of those infected according to the Centers for Disease Control and Prevention, the authors reported that for 38 of the 50 participants, fever was identified when symptoms were unreported or even unnoticed.
    Of note, the researchers analyzed weeks of temperature data to determine typical ranges for each of the 50 participants. “Many factors impact body temperature,” said principal investigator and senior author Ashley Mason, PhD, assistant professor in the UCSF Department of Psychiatry and faculty at the Osher Center for Integrative Medicine. “Single-point temperature measurement is not very meaningful. People go in and out of fever, and a temperature that is clearly elevated for one person may not be a major aberration for another person. Continual temperature information can better identify fever.”
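    A minimal sketch of that idea, assuming temperatures in degrees Celsius and an illustrative three-standard-deviation threshold (the study's actual analysis is more involved), shows how the same reading can be a fever sign for one wearer and unremarkable for another:
```python
import statistics

def personal_fever_flags(history, new_readings, sigmas=3.0):
    """Flag readings that are unusually high *for this wearer*,
    judged against weeks of their own baseline temperatures (deg C)."""
    baseline = statistics.mean(history)
    spread = statistics.stdev(history)
    return [t > baseline + sigmas * spread for t in new_readings]

# Wearer A runs cool: 37.2 deg C is a clear aberration for them ...
print(personal_fever_flags([36.0, 36.1, 35.9, 36.0, 36.2], [37.2]))  # [True]
# ... while the very same reading is unremarkable for wearer B.
print(personal_fever_flags([36.9, 37.1, 37.3, 36.8, 37.2], [37.2]))  # [False]
```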
    According to co-author Frederick Hecht, MD, professor of medicine and director of research at the UCSF Osher Center for Integrative Medicine, this work is “important for showing the potential of wearable devices in early detection of COVID-19, as well as other infectious diseases.”
    Asymptomatic Illness or Illness with Unreported/Unnoticed Symptoms?
    While the number of study participants was too small to extrapolate for the whole population, the authors said they were encouraged that the smart ring detected illness when symptoms were subtle or unnoticed. “This raises the question of how many asymptomatic cases are truly asymptomatic and how many might just be unnoticed or unreported,” said first author Benjamin Smarr, PhD, an assistant professor in the Department of Bioengineering and the Halıcıoğlu Data Science Institute at UC San Diego. “By using wearable technology, we’re able to query the body directly.”
    To conduct the study, the researchers used the Oura Ring, a wearable sensor made by the Finnish startup Oura, which pairs to a mobile app. The ring continuously measures sleep and wakefulness, heart and respiratory rates, and temperature. The researchers provided the rings to nearly 3,400 health care workers across the U.S., and worked with Oura to invite existing users to participate in the study via the Oura app, resulting in enrollment of more than 65,000 participants worldwide in a now concluded prospective, observational study, which the UC researchers are preparing for publication.
    The participants in the preliminary study reported that they had previously been infected with COVID-19, and a continuous record of their biomonitoring data remained available for analysis, spanning the weeks before their infection through enrollment to the end of the study.
    No-touch thermometers that detect infrared radiation from the forehead are used to quickly screen for fever in airports and offices and are believed to detect some COVID-19 cases, but many studies suggest their value is limited. The ring records temperature all the time, so each measurement is contextualized by the history of that individual, making relative elevations much easier to spot. “Context matters in temperature assessment,” Smarr emphasized.
    Heart Rate, Respiration Rate Provide Other Clues
    Other illness-associated changes detected by the rings included increased heart rate, reduced heart rate variability and increased respiration rate, but these changes were not as strongly correlated, the authors noted.
    The researchers are using data from the larger prospective study to develop an algorithm that can identify, from data collected by wearable devices, when a user appears to be getting sick. Mason’s team can then prompt the user to use a self-collection COVID-19 test kit. The researchers will evaluate the algorithm in a new study of 4,000 additional participants.
    “The hope is that people infected with COVID will be able to prepare and isolate sooner, call their doctor sooner, notify any folks they’ve been in contact with sooner, and not spread the virus,” Mason said.
    Co-Authors: Sarah Fisher, Anoushka Chowdhary, Karena Puldon, Adam Rao and Frederick Hecht from UCSF; Kirstin Aschbacher from Oura and UCSF; and Stephen Dilchert from CUNY, New York.
    Funding: Oura Health Oy.
    Disclosures: Aschbacher is an employee of Oura Health Oy, in addition to holding an adjunct associate professor position at UCSF. Smarr has worked as a paid consultant at Oura Health Oy within the last 12 months, although not during this research project.

  • Like adults, children by age 3 prefer seeing fractal patterns

    By the time children are 3 years old they already have an adult-like preference for visual fractal patterns commonly seen in nature, according to University of Oregon researchers.
    That discovery emerged among children who’ve been raised in a world of Euclidean geometry, such as houses with rooms constructed with straight lines in a simple non-repeating manner, said the study’s lead author Kelly E. Robles, a doctoral student in the UO’s Department of Psychology.
    “Unlike early humans who lived outside on savannahs, modern-day humans spend the majority of their early lives inside these humanmade structures,” Robles said. “So, since children are not heavily exposed to these natural low-to-moderate complexity fractal patterns, this preference must come from something earlier in development, or perhaps is innate.”
    The study was published online Nov. 25 in the Nature journal Humanities and Social Sciences Communications. In it, researchers explored how individual differences in processing styles may account for trends in fractal fluency. Previous research had suggested that a preference for fractal patterns may develop as a result of environmental and developmental factors acquired across a person’s lifespan.
    In the UO study, researchers exposed participants — 82 adults, ages 18-33, and 96 children, ages 3-10 — to images of fractal patterns, exact and statistical, ranging in complexity on computer screens.
    Exact fractals are highly ordered such that the same basic pattern repeats exactly at every scale and may possess spatial symmetry such as that seen in snowflakes. Statistical fractals, in contrast, repeat in a similar but not exact fashion across scale and do not possess spatial symmetry as seen in coastlines, clouds, mountains, rivers and trees. Both forms appear in art across many cultures.
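    The difference between the two kinds of pattern is easy to see in code. Below is a minimal sketch for intuition, not the stimuli used in the study: a one-dimensional profile built by midpoint displacement, where zero jitter yields an exact fractal (the same motif repeats at every scale) and nonzero jitter yields a statistical one (similar, but never identical, across scales):
```python
import random

def midpoint_profile(depth, roughness=0.5, jitter=0.0):
    """Build a 1-D profile by repeated midpoint displacement.
    jitter=0 gives an exact fractal: identical structure at every scale
    (a Takagi-like curve). jitter>0 gives a statistical fractal: similar,
    but never identical, across scales, like a coastline."""
    profile = [0.0, 0.0]
    for level in range(depth):
        amplitude = roughness ** (level + 1)   # smaller bumps at finer scales
        new = []
        for a, b in zip(profile, profile[1:]):
            offset = amplitude * (1 + jitter * random.gauss(0, 1))
            new += [a, (a + b) / 2 + offset]
        new.append(profile[-1])
        profile = new
    return profile

exact = midpoint_profile(depth=10)                     # snowflake-like order
statistical = midpoint_profile(depth=10, jitter=0.8)   # coastline-like irregularity
```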
    When viewing the fractal patterns, Robles said, subjects chose favorites between different pairs of images that differed in complexity. When looking at exact fractal patterns, selections involved different pairs of snowflake-like or tree-branch-like images. For the statistical fractals, selections involved choosing between pairs of cloud-like images.
    “Since people prefer a balance of simplicity and complexity, we were looking to confirm that people preferred low-to-moderate complexity in statistically repeating patterns, and that the presence of order in exact repeating patterns allowed for a tolerance of and preference for more complex patterns,” she said.
    Although there were some differences in the preferences of adults and children, the overall trend was similar: exact patterns with greater complexity were preferred more, while preference for statistical patterns peaked at low-moderate complexity and then decreased with additional complexity.
    In subsequent steps with the participants, the UO team was able to rule out the possibility that age-related perceptual strategies or biases may have driven different preferences for statistical and exact patterns.
    “We found that people prefer the most common natural pattern, the statistical fractal patterns of low-moderate complexity, and that this preference does not stem from or vary across decades of exposure to nature or to individual differences in how we process images,” Robles said. “Our preferences for fractals are set before our third birthdays, suggesting that our visual system is tuned to better process these patterns that are highly prevalent in nature.”
    The aesthetic experience of viewing nature’s fractals holds huge potential benefits, ranging from stress reduction to relief from mental fatigue, said co-author Richard Taylor, professor and head of the UO’s Department of Physics.
    “Nature provides these benefits for free, but we increasingly find ourselves surrounded by urban landscapes devoid of fractals,” he said. “This study shows that incorporating fractals into urban environments can begin providing benefits from a very early age.”
    Taylor is using fractal-inspired designs in his own research in an effort to create implants to treat macular degeneration. He and co-author Margaret Sereno, professor of psychology and director of the Integrative Perception Lab, also have published on the positive aesthetic benefits of installing fractal solar panels and window blinds.
    Fractal carpets, recently installed in the UO’s Phil and Penny Knight Campus for Accelerating Scientific Impact, are seen in the new facility’s virtual grand opening tour. Sereno and Taylor also are collaborating on future applications with Ihab Elzeyadi, a professor in the UO’s Department of Architecture.

  • New computational method validates images without 'ground truth'

    A realtor sends a prospective homebuyer a blurry photograph of a house taken from across the street. The homebuyer can compare it to the real thing — look at the picture, then look at the real house — and see that the bay window is actually two windows close together, the flowers out front are plastic and what looked like a door is actually a hole in the wall.
    What if you aren’t looking at a picture of a house, but something very small — like a protein? There is no way to see it without a specialized device so there’s nothing to judge the image against, no “ground truth,” as it’s called. There isn’t much to do but trust that the imaging equipment and the computer model used to create images are accurate.
    Now, however, researchers in the lab of Matthew Lew at the McKelvey School of Engineering at Washington University in St. Louis have developed a computational method to determine how much confidence a scientist should have that their measurements, at any given point, are accurate, given the model used to produce them.
    The research was published Dec. 11 in Nature Communications.
    “Fundamentally, this is a forensic tool to tell you if something is right or not,” said Lew, assistant professor in the Preston M. Green Department of Electrical & Systems Engineering. It’s not simply a way to get a sharper picture. “This is a whole new way of validating the trustworthiness of each detail within a scientific image.
    “It’s not about providing better resolution,” he added of the computational method, called Wasserstein-induced flux (WIF). “It’s saying, ‘This part of the image might be wrong or misplaced.'”
    The process used by scientists to “see” the very small — single-molecule localization microscopy (SMLM) — relies on capturing massive amounts of information from the object being imaged. That information is then interpreted by a computer model that ultimately strips away most of the data, reconstructing an ostensibly accurate image — a true picture of a biological structure, like an amyloid protein or a cell membrane.
    There are a few methods already in use to help determine whether an image is, generally speaking, a good representation of the thing being imaged. These methods, however, cannot determine how likely it is that any single data point within an image is accurate.
    Hesam Mazidi, a recent graduate who conducted this research as a PhD student in Lew’s lab, tackled the problem.
    “We wanted to see if there was a way we could do something about this scenario without ground truth,” he said. “If we could use modeling and algorithmic analysis to quantify if our measurements are faithful, or accurate enough.”
    The researchers didn’t have ground truth — no house to compare to the realtor’s picture — but they weren’t empty handed. They had a trove of data that is usually ignored. Mazidi took advantage of the massive amount of information gathered by the imaging device that usually gets discarded as noise. The distribution of noise is something the researchers can use as ground truth because it conforms to specific laws of physics.
    “He was able to say, ‘I know how the noise of the image is manifested, that’s a fundamental physical law,'” Lew said of Mazidi’s insight.
    “He went back to the noisy, imperfect domain of the actual scientific measurement,” Lew said, referring to all of the data points recorded by the imaging device. “There is real data there that people throw away and ignore.”
    Instead of ignoring it, Mazidi looked to see how well the model predicted the noise — given the final image and the model that created it.
    Analyzing so many data points is akin to running the imaging device over and over again, performing multiple test runs to calibrate it.
    “All of those measurements give us statistical confidence,” Lew said.
    WIF allows them to determine not whether the entire image is probable given the model, but whether any given point within the image is probable, given the assumptions built into the model.
    Ultimately, Mazidi developed a method that can say with strong statistical confidence that any given data point in the final image should or should not be in a particular spot.
    It’s as if the algorithm analyzed the picture of the house and — without ever having seen the place — it cleaned up the image, revealing the hole in the wall.
    In the end, the analysis yields a single number per data point, between -1 and 1. The closer to one, the more confident a scientist can be that a point on an image is, in fact, accurately representing the thing being imaged.
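    The actual Wasserstein-induced flux computation is considerably more sophisticated, but the core idea of auditing each point against the known physics of the noise can be caricatured in a few lines. The Poisson photon-count model is standard for light detection; the scene, the reconstruction, and the exact form of the score below are illustrative assumptions:
```python
import numpy as np

rng = np.random.default_rng(seed=1)

# A "true" scene (unknown in practice) and a partly wrong reconstruction.
truth = np.array([5.0, 5.0, 50.0, 5.0, 5.0])      # photon rate per pixel
estimate = np.array([5.0, 25.0, 30.0, 5.0, 5.0])  # model output to be audited

# Raw measurements: photon counts obey Poisson statistics, a physical
# law that holds whether or not we know the true scene.
counts = rng.poisson(truth, size=(5_000, truth.size))

# Per-pixel audit: if the reconstruction were right, the mean count
# would match it to within Poisson shot noise (variance = mean).
z = (counts.mean(axis=0) - estimate) / np.sqrt(estimate / counts.shape[0])
score = 2 * np.exp(-0.5 * z**2) - 1     # +1 = consistent, -1 = implausible
print(np.round(score, 2))               # wrong pixels score near -1
```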
    This process can also help scientists improve their models. “If you can quantify performance, then you can also improve your model by using the score,” Mazidi said. Without access to ground truth, “it allows us to evaluate performance under real experimental conditions rather than a simulation.”
    The potential uses for WIF are far-reaching. Lew said the next step is to use it to validate machine learning, where biased datasets may produce inaccurate outputs.
    How would a researcher know, in such a case, that their data was biased? “Using this model, you’d be able to test on data that has no ground truth, where you don’t know if the neural network was trained with data that are similar to real-world data.
    “Care has to be taken in every type of measurement you take,” Lew said. “Sometimes we just want to push the big red button and see what we get, but we have to remember, there’s a lot that happens when you push that button.”

  • Challenges of fusing robotics and neuroscience

    Combining neuroscience and robotic research has gained impressive results in the rehabilitation of paraplegic patients. A research team led by Prof. Gordon Cheng from the Technical University of Munich (TUM) was able to show that exoskeleton training not only helped patients to walk, but also stimulated their healing process. With these findings in mind, Prof. Cheng wants to take the fusion of robotics and neuroscience to the next level.
    Prof. Cheng, in your much-discussed study within the “Walk Again” project, in which a paraplegic patient trained with the exoskeleton, you found that patients regained a certain degree of control over the movement of their legs. Back then, this came as a complete surprise to you …
    … and it somehow still is. Even though we had this breakthrough four years ago, this was only the beginning. To my regret, none of these patients is walking around freely and unaided yet. We have only touched the tip of the iceberg. To develop better medical devices, we need to dig deeper in understanding how the brain works and how to translate this into robotics.
    In your paper published in Science Robotics this month, you and your colleague Prof. Nicolelis, a leading expert in neuroscience and in particular in the area of the human-machine interface, argue that some key challenges in the fusion of neuroscience and robotics need to be overcome in order to take the next steps. One of them is to “close the loop between the brain and the machine” — what do you mean by that?
    The idea behind this is that the coupling between the brain and the machine should work in a way where the brain thinks of the machine as an extension of the body. Let’s take driving as an example. While driving a car, you don’t think about your moves, do you? But we still don’t know how this really works. My theory is that the brain somehow adapts to the car as if it is a part of the body. With this general idea in mind, it would be great to have an exoskeleton that would be embraced by the brain in the same way.
    How could this be achieved in practice?
    The exoskeleton that we were using for our research so far is actually just a big chunk of metal and thus rather cumbersome for the wearer. I want to develop a “soft” exoskeleton — something that you can just wear like a piece of clothing that can both sense the user’s movement intentions and provide instantaneous feedback. Integrating this with recent advances in brain-machine interfaces that allow real-time measurement of brain responses enables the seamless adaptation of such exoskeletons to the needs of individual users. Given the recent technological advances and better understanding of how to decode the user’s momentary brain activity, the time is ripe for their integration into more human-centered or, better, brain-centered solutions.
    What other pieces are still missing? You talked about providing a “more realistic functional model” for both disciplines.
    We have to facilitate the transfer through new developments, for example robots that are closer to human behaviour and to the construction of the human body, and thus lower the threshold for the use of robots in neuroscience. This is why we need more realistic functional models, which means that robots should be able to mimic human characteristics. Let’s take the example of a humanoid robot actuated with artificial muscles. This natural construction, mimicking muscles instead of the traditional motorized actuation, would provide neuroscientists with a more realistic model for their studies. We think of this as a win-win situation that can facilitate better cooperation between neuroscience and robotics in the future.
    You are not alone in the mission of overcoming these challenges. In your Elite Graduate Program in Neuroengineering, the first and only one of its kind in Germany combining experimental and theoretical neuroscience with in-depth training in engineering, you are bringing together the best students in the field.
    As described above, combining the two disciplines of robotics and neuroscience is a tough exercise, and therefore one of the main reasons why I created this master’s program in Munich. To me, it is important to teach the students to think more broadly and across disciplines, to find previously unimagined solutions. This is why lecturers from various fields, for example hospitals or the sports department, are teaching our students. We need to create a new community and a new culture in the field of engineering. From my standpoint, education is the key factor.

    Story Source:
    Materials provided by Technical University of Munich (TUM). Note: Content may be edited for style and length.

  • 'The robot made me do it': Robots encourage risk-taking behavior in people

    New research has shown robots can encourage people to take greater risks in a simulated gambling scenario than they would if there were nothing to influence their behaviours. Increasing our understanding of whether robots can affect risk-taking could have clear ethical, practical and policy implications, which this study set out to explore.
    Dr Yaniv Hanoch, Associate Professor in Risk Management at the University of Southampton who led the study explained, “We know that peer pressure can lead to higher risk-taking behaviour. With the ever-increasing scale of interaction between humans and technology, both online and physically, it is crucial that we understand more about whether machines can have a similar impact.”
    This new research, published in the journal Cyberpsychology, Behavior, and Social Networking, involved 180 undergraduate students taking the Balloon Analogue Risk Task (BART), a computer assessment that asks participants to press the spacebar on a keyboard to inflate a balloon displayed on the screen. With each press of the spacebar, the balloon inflates slightly, and 1 penny is added to the player’s “temporary money bank.” The balloons can explode randomly, meaning the player loses any money they have won for that balloon and they have the option to “cash-in” before this happens and move on to the next balloon.
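    A minimal simulation of the task's incentive structure shows why an encouraged player can pop more balloons yet still earn more overall. The explosion schedule below, uniform over a fixed number of pumps, is a common BART variant and an assumption here, since the article does not spell out the exact schedule:
```python
import random

random.seed(4)

def bart_trial(keep_pumping, max_pumps=32, pence_per_pump=1):
    """One balloon: pump until the player cashes in or the balloon pops.
    The pop point is uniform over 1..max_pumps (an assumed schedule)."""
    pop_at = random.randint(1, max_pumps)
    bank = 0
    for pump in range(1, max_pumps + 1):
        if not keep_pumping(pump):
            return bank            # cashed in: temporary bank is secured
        if pump == pop_at:
            return 0               # balloon popped: temporary bank is lost
        bank += pence_per_pump
    return bank

# A cautious player stops after 8 pumps; an "encouraged" player pushes to 16.
cautious = sum(bart_trial(lambda pump: pump <= 8) for _ in range(1000))
encouraged = sum(bart_trial(lambda pump: pump <= 16) for _ in range(1000))
print(cautious, encouraged)        # the riskier policy earns more on average
```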
    One-third of the participants took the test in a room on their own (the control group), one-third took the test alongside a robot that provided them with the instructions but was silent the rest of the time, and the final third, the experimental group, took the test with a robot that provided the instructions and also made encouraging statements such as “Why did you stop pumping?”
    The results showed that the group who were encouraged by the robot took more risks, blowing up their balloons significantly more frequently than those in the other groups did. They also earned more money overall. There was no significant difference in the behaviours of the students accompanied by the silent robot and those with no robot.
    Dr Hanoch said: “We saw participants in the control condition scale back their risk-taking behaviour following a balloon explosion, whereas those in the experimental condition continued to take as much risk as before. So, receiving direct encouragement from a risk-promoting robot seemed to override participants’ direct experiences and instincts.”
    The researchers now believe that further studies are needed to see whether similar results would emerge from human interaction with other artificial intelligence (AI) systems, such as digital assistants or on-screen avatars.
    Dr Hanoch concluded, “With the widespread use of AI technology and its interactions with humans, this is an area that needs urgent attention from the research community.”
    “On the one hand, our results might raise alarms about the prospect of robots causing harm by increasing risky behavior. On the other hand, our data point to the possibility of using robots and AI in preventive programs, such as anti-smoking campaigns in schools, and with hard-to-reach populations, such as addicts.”

    Story Source:
    Materials provided by University of Southampton. Note: Content may be edited for style and length.

  • Artificial intelligence helps scientists develop new general models in ecology

    In ecology, millions of species interact in billions of different ways with one another and with their environment. Ecosystems often seem chaotic, or at least overwhelming, for someone trying to understand them and make predictions for the future.
    Artificial intelligence and machine learning are able to detect patterns and predict outcomes in ways that often resemble human reasoning. They pave the way to increasingly powerful cooperation between humans and computers.
    Within AI, evolutionary computation methods replicate in some sense the processes of evolution of species in the natural world. A particular method called symbolic regression allows the evolution of human-interpretable formulas that explain natural laws.
    “We used symbolic regression to demonstrate that computers are able to derive formulas that represent the way ecosystems or species behave in space and time. These formulas are also easy to understand. They pave the way for general rules in ecology, something that most methods in AI cannot do,” says Pedro Cardoso, curator at the Finnish Museum of Natural History, University of Helsinki.
    With the help of the symbolic regression method, an interdisciplinary team from Finland, Portugal, and France was able to explain why some species exist in some regions and not in others, and why some regions have more species than others.
    The researchers were able, for example, to find a new general model that explains why some islands have more species than others. Oceanic islands have a natural life-cycle, emerging from volcanoes and eventually submerging with erosion after millions of years. With no human input, the algorithm found that the number of species on an island increases with island age and peaks at intermediate ages, when erosion is still low.
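    Production systems use genetic programming, but the essence of symbolic regression, machines proposing human-readable formulas and keeping the ones that fit, can be sketched with brute random search over expression trees. The synthetic hump-shaped "island" data and the small operator set below are illustrative assumptions, not the study's model:
```python
import math, random

random.seed(2)

# Synthetic island data (an assumption, not the study's dataset):
# richness rises with island age, peaks, then falls as erosion sets in.
data = [(age, age * math.exp(-age / 3)) for age in range(1, 13)]

OPS = {
    "add": lambda x, y: x + y,
    "mul": lambda x, y: x * y,
    "decay": lambda x, y: x * math.exp(-abs(y)),
}

def random_expr(depth=3):
    """Grow a random expression tree over the variable 'age'."""
    if depth == 0 or random.random() < 0.3:
        return "age" if random.random() < 0.5 else round(random.uniform(0.05, 2), 2)
    op = random.choice(list(OPS))
    return (op, random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, age):
    if expr == "age":
        return age
    if not isinstance(expr, tuple):
        return expr                      # a numeric constant
    op, left, right = expr
    return OPS[op](evaluate(left, age), evaluate(right, age))

def sq_error(expr):
    return sum((evaluate(expr, a) - s) ** 2 for a, s in data)

# Crude search: sample many candidate formulas, keep the best-fitting one.
best = min((random_expr() for _ in range(50_000)), key=sq_error)
print(best, round(sq_error(best), 3))    # a readable formula, not a black box
```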
    “The explanation was known, a couple of formulas already existed, but we were able to find new ones that outperform the existing ones under certain circumstances,” says Vasco Branco, PhD student working on the automation of extinction risk assessments at the University of Helsinki.
    The research presents explainable artificial intelligence as a field worth exploring, one that promotes cooperation between humans and machines in ways that are only now beginning to be tapped.
    “Evolving free-form equations purely from data, often without prior human inference or hypotheses, may represent a very powerful tool in the arsenal of a discipline as complex as ecology,” says Luis Correia, computer science professor at the University of Lisbon.

    Story Source:
    Materials provided by University of Helsinki. Note: Content may be edited for style and length.

  • Artificial intelligence improves control of powerful plasma accelerators

    Researchers have used AI to control beams for the next generation of smaller, cheaper accelerators for research, medical and industrial applications.
    Experiments led by Imperial College London researchers, using the Science and Technology Facilities Council’s Central Laser Facility (CLF), showed that an algorithm was able to tune the complex parameters involved in controlling the next generation of plasma-based particle accelerators.
    The algorithm was able to optimize the accelerator much more quickly than a human operator, and could even outperform experiments on similar laser systems.
    These accelerators focus the energy of the world’s most powerful lasers down to a spot the size of a skin cell, producing electrons and x-rays with equipment a fraction of the size of conventional accelerators.
    The electrons and x-rays can be used for scientific research, such as probing the atomic structure of materials; in industrial applications, such as for producing consumer electronics and vulcanised rubber for car tyres; and could also be used in medical applications, such as cancer treatments and medical imaging.
    Several facilities using these new accelerators are in various stages of planning and construction around the world, including the CLF’s Extreme Photonics Applications Centre (EPAC) in the UK, and the new discovery could help them work at their best in the future. The results are published today in Nature Communications.
    First author Dr Rob Shalloo, who completed the work at Imperial and is now at the accelerator centre DESY, said: “The techniques we have developed will be instrumental in getting the most out of a new generation of advanced plasma accelerator facilities under construction within the UK and worldwide.
    “Plasma accelerator technology provides uniquely short bursts of electrons and x-rays, which are already finding uses in many areas of scientific study. With our developments, we hope to broaden accessibility to these compact accelerators, allowing scientists in other disciplines, and those wishing to use these machines for applications, to benefit from the technology without being experts in plasma accelerators.”
    The team worked with laser wakefield accelerators. These combine the world’s most powerful lasers with a source of plasma (ionised gas) to create concentrated beams of electrons and x-rays. Traditional accelerators need hundreds of metres to kilometres to accelerate electrons, but wakefield accelerators can manage the same acceleration within the space of millimetres, drastically reducing the size and cost of the equipment.
    However, because wakefield accelerators operate in the extreme conditions created when lasers are combined with plasma, they can be difficult to control and optimise to get the best performance. In wakefield acceleration, an ultrashort laser pulse is driven into plasma, creating a wave that is used to accelerate electrons. Both the laser and plasma have several parameters that can be tweaked to control the interaction, such as the shape and intensity of the laser pulse, or the density and length of the plasma.
    While a human operator can tweak these parameters, it is difficult to know how to optimise so many parameters at once. Instead, the team turned to artificial intelligence, creating a machine learning algorithm to optimise the performance of the accelerator.
    The algorithm set the values of up to six parameters controlling the laser and plasma, fired the laser, analysed the data and reset the parameters, repeating this loop many times in succession until the optimal parameter configuration was reached.
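    The team's optimiser is not spelled out here, and a real implementation would be far more sample-efficient, but the set-fire-measure-reset loop itself can be sketched generically. Everything below, from the bounded six-parameter space to the synthetic figure of merit, is an illustrative assumption:
```python
import random

random.seed(3)
N_PARAMS = 6      # e.g. pulse shape and intensity, plasma density and length

def fire_shot(params):
    """Stand-in for one laser shot plus diagnostics: returns a beam-quality
    figure of merit. The quadratic objective and noise level are assumptions;
    on the real machine this number comes from electron/x-ray diagnostics."""
    quality = -sum((p - 0.6) ** 2 for p in params)   # hidden optimum at 0.6
    return quality + random.gauss(0, 0.01)           # shot-to-shot jitter

best = [random.random() for _ in range(N_PARAMS)]
best_quality = fire_shot(best)

for shot in range(200):
    step = 0.3 * (1 - shot / 200)        # shrink the exploration over time
    trial = [min(1.0, max(0.0, p + random.gauss(0, step))) for p in best]
    quality = fire_shot(trial)
    if quality > best_quality:           # keep settings that improve the beam
        best, best_quality = trial, quality

print([round(p, 2) for p in best], round(best_quality, 3))
```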
    Lead researcher Dr Matthew Streeter, who completed the work at Imperial and is now at Queen’s University Belfast, said: “Our work resulted in an autonomous plasma accelerator, the first of its kind. As well as allowing us to efficiently optimise the accelerator, it also simplifies their operation and allows us to spend more of our efforts on exploring the fundamental physics behind these extreme machines.”
    The team demonstrated their technique using the Gemini laser system at the CLF, and have already begun to use it in further experiments to probe the atomic structure of materials in extreme conditions and in studying antimatter and quantum physics.
    The data gathered during the optimisation process also provided new insight into the dynamics of the laser-plasma interaction inside the accelerator, potentially informing future designs to further improve accelerator performance.