More stories

  • Revised code could help improve efficiency of fusion experiments

    An international team of researchers led by the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) has upgraded a key computer code for calculating forces acting on magnetically confined plasma in fusion energy experiments. The upgrade will be part of a suite of computational tools that will allow scientists to further improve the design of breakfast-cruller-shaped facilities known as stellarators. Together, the three codes in the suite could help scientists bring efficient fusion reactors closer to reality.
    The revised software lets researchers more easily determine the boundary of the plasma in stellarators. Used in concert with two other codes, it could help find a stellarator configuration that improves the performance of the design. The two complementary codes determine the optimal location for the plasma in a stellarator vacuum chamber to maximize the efficiency of the fusion reactions, and the shape that the external electromagnets must have to hold the plasma in the proper position.
    The revised software, called the “free-boundary stepped-pressure equilibrium code (SPEC),” is one of a set of tools scientists can use to tweak the performance of plasma to more easily create fusion energy. “We want to optimize both the plasma position and the magnetic coils to balance the force that makes the plasma expand while holding it in place,” said Stuart Hudson, a physicist and deputy head of the Theory Department at PPPL, and lead author of the paper reporting the results in Plasma Physics and Controlled Fusion.
    “That way we can create a stable plasma whose particles are more likely to fuse. The updated SPEC code enables us to know where the plasma will be for a given set of magnetic coils.”
    Fusion combines light elements in the form of plasma — the hot, charged state of matter composed of free electrons and atomic nuclei — and in the process generates massive amounts of energy in the sun and stars. Scientists are seeking to replicate fusion in devices on Earth for a virtually inexhaustible supply of safe and clean power to generate electricity.
    Plasma stability is crucial for fusion. If plasma bounces around inside a stellarator, it can escape, cool, and tamp down the fusion reactions, in effect quenching the fusion fire. An earlier version of the code, also developed by Hudson, could only calculate how forces were affecting a plasma if the researchers already knew the plasma’s location. Researchers, however, typically don’t have that information. “That’s one of the problems with plasmas,” Hudson said. “They move all over the place.”
    The new version of the SPEC code helps solve the problem by allowing researchers to calculate the plasma’s boundary without knowing its position beforehand. Used in coordination with a coil-design code called FOCUS and an optimization code called STELLOPT — both of which were also developed at PPPL — SPEC lets physicists simultaneously ensure that the plasma will have the best fusion performance and the magnets will not be too complicated to build. “There’s no point optimizing the shape of the plasma and then later finding out that the magnets would be incredibly difficult to construct,” Hudson said.
    One challenge that Hudson and colleagues faced was verifying that each step of the code upgrade was done correctly. Their slow-and-steady approach was crucial to making sure that the code makes accurate calculations. “Let’s say you are designing a component that will go on a rocket to the moon,” Hudson said. “It’s very important that that part works. So you test and test and test.”
    Updating any computer code calls for a number of interlocking steps:
    First, scientists must translate a set of mathematical equations describing the plasma into a programming language that a computer can understand;
    Next, scientists must determine the mathematical steps needed to solve the equations;
    Finally, the scientists must verify that the code produces correct results, either by comparing the results with those produced by a code that has already been verified or by using the code to solve simple equations whose answers are easy to check (a minimal illustration of this last step follows).
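    For readers who want a concrete picture of that last verification step, here is a minimal, generic sketch in Python. It has nothing to do with SPEC itself; it simply checks a basic numerical routine against an equation whose answer is known exactly, which is the pattern the verification step describes.

```python
# Minimal sketch of the "verify against a known answer" step described above.
# This is an illustrative stand-in, not the actual SPEC test suite: we check a
# simple numerical routine against an equation whose answer is easy to verify.
import math

def trapezoid(f, a, b, n):
    """Numerically integrate f on [a, b] with n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

# The exact value of the integral of sin(x) from 0 to pi is 2.
approx = trapezoid(math.sin, 0.0, math.pi, 1000)
assert abs(approx - 2.0) < 1e-5, f"verification failed: {approx}"
print(f"numerical result {approx:.6f} matches the analytic answer 2.0")
```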
    Hudson and colleagues performed the calculations with widely different methods. They used pencil and paper to determine the equations and solution steps, and powerful PPPL computers to verify the results. “We demonstrated that the code works,” Hudson said. “Now it can be used to study current experiments and design new ones.”
    Collaborators on the paper include researchers at the Max Planck Institute for Plasma Physics, the Australian National University, and the Swiss École Polytechnique Fédérale de Lausanne. The research was supported by the DOE’s Office of Science (Fusion Energy Sciences), the Euratom research and training program, the Australian Research Council, and the Simons Foundation.

    Story Source:
    Materials provided by DOE/Princeton Plasma Physics Laboratory. Original written by Raphael Rosen.

  • Virtual imaging trials optimize CT, radiography for COVID-19

    An open-access article in ARRS’ American Journal of Roentgenology (AJR) established a foundation for the use of virtual imaging trials in effective assessment and optimization of CT and radiography acquisitions and analysis tools to help manage the coronavirus disease (COVID-19) pandemic.
    Virtual imaging trials have two main components: representative models of targeted subjects and realistic models of imaging scanners. The authors of this AJR article developed the first computational models of patients with COVID-19 and showed, as proof of principle, how they can be combined with imaging simulators for COVID-19 imaging studies.
    “For the body habitus of the models,” lead author Ehsan Abadi explained, “we used the 4D extended cardiac-torso (XCAT) model that was developed at Duke University.”
    Abadi and his Duke colleagues then segmented the morphologic features of COVID-19 abnormalities from 20 CT images of patients with multidiagnostic confirmation of SARS-CoV-2 infection and incorporated them into XCAT models.
    “Within a given disease area, the texture and material of the lung parenchyma in the XCAT were modified to match the properties observed in the clinical images,” Abadi et al. continued.
    Using a specific CT scanner (Definition Flash, Siemens Healthineers) and a validated radiography simulator (DukeSim) to help illustrate utility, the team virtually imaged the three COVID-19 computational phantoms they had developed.
    “Subjectively,” the authors concluded, “the simulated abnormalities were realistic in terms of shape and texture,” adding their preliminary results showed that the contrast-to-noise ratios in the abnormal regions were 1.6, 3.0, and 3.6 for 5-, 25-, and 50-mAs images, respectively.
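    The contrast-to-noise ratio (CNR) quoted above is a standard image-quality metric. The sketch below shows one common way to compute it from region statistics; the image, region masks and noise estimate are invented for illustration and are not taken from the study.

```python
# Illustrative sketch of a contrast-to-noise ratio (CNR) calculation of the kind
# reported above. The image, region masks, and noise estimate here are
# hypothetical; the study's own processing pipeline is not described in detail.
import numpy as np

def cnr(image, lesion_mask, background_mask):
    """CNR = (mean lesion signal - mean background signal) / background noise."""
    lesion = image[lesion_mask]
    background = image[background_mask]
    return (lesion.mean() - background.mean()) / background.std()

# Toy example: a noisy background with a slightly brighter "abnormal" patch.
rng = np.random.default_rng(0)
img = rng.normal(100.0, 10.0, size=(64, 64))
img[20:30, 20:30] += 30.0                      # simulated abnormality
lesion = np.zeros_like(img, dtype=bool); lesion[20:30, 20:30] = True
bg = np.zeros_like(img, dtype=bool); bg[40:60, 40:60] = True
print(f"CNR = {cnr(img, lesion, bg):.2f}")
```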
     
     

    Story Source:
    Materials provided by American Roentgen Ray Society.

  • Building mechanical memory boards using origami

    The ancient Japanese art of paper folding, known as origami, can be used to create mechanical binary switches.
    In Applied Physics Letters, by AIP Publishing, researchers report the fabrication of such a paper device using a particular origami pattern known as the Kresling pattern. This device can act as a mechanical switch.
    By putting several of these together on a single platform, the investigators built a functioning mechanical memory board.
    Origami structures can be either rigid or nonrigid. In rigid origami, only the creases between panels of paper can deform while the panels stay fixed. In nonrigid origami, the panels themselves can also deform.
    The Kresling pattern is an example of nonrigid origami. Folding a piece of paper using this pattern generates a bellows-like structure that can flip between one orientation and another. The bellows act as a type of spring and can be controlled by vibrating a platform that holds them. This creates a switch, which the investigators refer to as a Kresling-inspired mechanical switch, or KIMS.
    The researchers found that oscillating a platform holding the KIMS up and down at a certain speed will cause it to flip, or switch, between its two stable states. They used an electrodynamic shaker to provide controlled movements of the base and monitored the upper surface of the KIMS using a laser. In this way, they were able to map out and analyze the basic physics that underlies the switching behavior.
    “We used the Kresling origami pattern to also develop a cluster of mechanical binary switches,” author Ravindra Masana said. “These can be forced to transition between two different static states using a single controlled input in the form of a harmonic excitation applied at the base of the switch.”
    The group first considered a 2-bit memory board created by placing two KIMS units on a single platform. Because each KIMS bit has two stable states, four distinct states identified as S00, S01, S10 and S11 can be obtained. Oscillations of the platform will cause switching between these four stable states. This proof of concept with just two bits could be extended to multiple KIMS units, creating a type of mechanical memory.
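    As a rough illustration of the 2-bit idea, the toy model below tracks two bistable units and labels the board state S00 through S11. The notion that each unit flips only when the base is driven near its own switching frequency is an assumption made for this sketch; the paper's actual switching protocol is not reproduced here.

```python
# Toy model of the 2-bit memory board: two bistable KIMS units, each in state 0 or 1.
# The real device switches bits via harmonic base excitation; here the mapping from
# an excitation "frequency" to which bit flips is purely illustrative.
from dataclasses import dataclass

@dataclass
class KimsBit:
    state: int = 0            # 0 or 1, the two stable Kresling configurations
    switch_freq: float = 0.0  # excitation frequency (Hz) that flips this bit (assumed)

    def excite(self, freq, tol=0.5):
        """Flip the bit if the base excitation is near its switching frequency."""
        if abs(freq - self.switch_freq) < tol:
            self.state ^= 1

board = [KimsBit(switch_freq=10.0), KimsBit(switch_freq=14.0)]

def label(bits):
    return "S" + "".join(str(b.state) for b in bits)

print(label(board))          # S00
for b in board:
    b.excite(10.0)           # drive the platform at 10 Hz: only bit 0 responds
print(label(board))          # S10
for b in board:
    b.excite(14.0)           # drive at 14 Hz: only bit 1 responds
print(label(board))          # S11
```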
    “Such switches can be miniaturized,” said Mohammed Daqaq, one of the authors and the director of the Laboratory of Applied Nonlinear Dynamics at NYU Abu Dhabi. “Instead of using a bulky electrodynamic shaker for actuation, the memory board can then be actuated using scalable piezoelectric and graphene actuators.”
    Miniaturized origami memory boards should have wide applicability and hold great promise for future device development.

    Story Source:
    Materials provided by American Institute of Physics.

  • Computer modeling used to predict reef health

    A UBC Okanagan researcher has developed a way to predict the future health of the planet’s coral reefs.
    Working with scientists from Australia’s Flinders University and privately owned research firm Nova Blue Environment, biology doctoral student Bruno Carturan has been studying the ecosystems of the world’s endangered reefs.
    “Coral reefs are among the most diverse ecosystems on Earth and they support the livelihoods of more than 500 million people,” says Carturan. “But coral reefs are also in peril. About 75 per cent of the world’s coral reefs are threatened by habitat loss, climate change and other human-caused disturbances.”
    Carturan, who studies resilience, biodiversity and complex systems under UBCO Professors Lael Parrott and Jason Pither, says nearly all the world’s reefs will be dangerously affected by 2050 if no effective measures are taken.
    There is hope, however, as he has determined a way to examine the reefs and explore why some reef ecosystems appear to be more resilient than others. Uncovering why, he says, could help stem the losses.
    “In other ecosystems, including forests and wetlands, experiments have shown that diversity is key to resilience,” says Carturan. “With more species, comes a greater variety of form and function — what ecologists call traits. And with this, there is a greater likelihood that some particular traits, or combination of traits, help the ecosystem better withstand and bounce back from disturbances.”
    The importance of diversity for the health and stability of ecosystems has been extensively investigated by ecologists, he explains. While the consensus is that ecosystems with more diversity are more resilient and function better, the hypothesis has rarely been tested experimentally with corals.

    Using an experiment to recreate the conditions found in real coral reefs is challenging for several reasons — one being that the required size, timeframe and number of different samples and replicates are just unmanageable.
    That’s where computer simulation modelling comes in.
    “Technically called an ‘agent-based model’, it can be thought of as a virtual experimental arena that enables us to manipulate species and different types of disturbances, and then examine their different influences on resilience in ways that are just not feasible in real reefs,” explains Carturan.
    In his simulation arena, individual coral colonies and algae grow, compete with one another, reproduce and die. And they do all this in realistic ways. By using agent-based models — with data collected by many researchers over decades — scientists can manipulate the initial diversity of corals, including their number and identity, and see how the virtual reef communities respond to threats.
    “This is crucial because these traits are the building blocks that give rise to ecosystem structure and function. For instance, corals come in a variety of forms — from simple spheres to complex branching — and this influences the variety of fish species these reefs host, and their susceptibility to disturbances such as cyclones and coral bleaching.”
    By running simulations over and over again, the model can identify combinations that can provide the greatest resilience. This will help ecologists design reef management and restoration strategies using predictions from the model, says collaborating Flinders researcher Professor Corey Bradshaw.
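    To make the agent-based idea more concrete, here is a deliberately bare-bones sketch of such a simulation. It is not the authors' model: the traits, rates, disturbance rule and replicate counts are invented, and real reef models track space, competition and recruitment in far more detail.

```python
# Bare-bones sketch of an agent-based reef simulation of the kind described above.
# This is not the authors' model; traits, rates and the disturbance rule are invented
# solely to illustrate how repeated runs can compare resilience across communities.
import random

class Colony:
    def __init__(self, growth, resistance):
        self.size = 1.0
        self.growth = growth          # trait: how fast cover is regained
        self.resistance = resistance  # trait: chance of surviving a disturbance

def run(traits, years=30, disturbance_year=10, seed=0):
    rng = random.Random(seed)
    reef = [Colony(g, r) for g, r in traits for _ in range(10)]
    for year in range(years):
        if year == disturbance_year:                  # e.g. a bleaching event
            reef = [c for c in reef if rng.random() < c.resistance]
        for c in reef:
            c.size *= 1.0 + c.growth                  # crude growth/recovery proxy
    return sum(c.size for c in reef)                  # total coral cover at the end

# Compare a low-diversity community with a more diverse one across replicate runs.
low_div  = [(0.05, 0.4)]
high_div = [(0.05, 0.4), (0.02, 0.8), (0.10, 0.2)]
for name, traits in [("low diversity", low_div), ("high diversity", high_div)]:
    covers = [run(traits, seed=s) for s in range(20)]
    print(name, round(sum(covers) / len(covers), 1))
```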

    “Sophisticated models like ours will be useful for coral-reef management around the world,” Bradshaw adds. “For example, Australia’s iconic Great Barrier Reef is in deep trouble from invasive species, climate change-driven mass bleaching and overfishing.”
    “This high-resolution coral ‘video game’ allows us to peek into the future to make the best possible decisions and avoid catastrophes.”
    The research, supported by grants from the Natural Sciences and Engineering Research Council of Canada and the Canada Foundation for Innovation, was published recently in eLife.

  • Faster, more efficient energy storage could stem from holistic study of layered materials

    A team led by the Department of Energy’s Oak Ridge National Laboratory developed a novel, integrated approach to track energy-transporting ions within an ultra-thin material, which could unlock its energy storage potential, leading toward faster-charging, longer-lasting devices.
    Scientists have for a decade studied the energy-storing possibilities of an emerging class of two-dimensional materials — those constructed in layers that are only a few atoms thick — called MXenes, pronounced “max-eens.”
    The ORNL-led team integrated theoretical data from computational modeling with experimental data to pinpoint potential locations of a variety of charged ions in titanium carbide, the most studied MXene phase. Through this holistic approach, they could track and analyze the ions’ motion and behavior from the single-atom to the device scale.
    “By comparing all the methods we employed, we were able to form links between theory and different types of materials characterization, ranging from very simple to very complex over a wide range of length and time scales,” said Nina Balke, ORNL co-author of the published study that was conducted within the Fluid Interface Reactions, Structures and Transport, or FIRST, Center. FIRST is a DOE-funded Energy Frontier Research Center located at ORNL.
    “We pulled all those links together to understand how ion storage works in layered MXene electrodes,” she added. The study’s results allowed the team to predict the material’s capacitance, or its ability to store energy. “And, in the end, after much discussion, we were able to unify all these techniques into one cohesive picture, which was really cool.”
    Layered materials can enhance energy stored and power delivered because the gaps between the layers allow charged particles, or ions, to move freely and quickly. However, ions can be difficult to detect and characterize, especially in a confined environment with multiple processes at play. A better understanding of these processes can advance the energy storage potential of lithium-ion batteries and supercapacitors.
    As a FIRST center project, the team focused on the development of supercapacitors — devices that charge quickly for short-term, high-power energy needs. In contrast, lithium-ion batteries have a higher energy capacity and provide electrical power longer, but the rates of discharge, and therefore their power levels, are lower.
    MXenes have the potential to bridge the benefits of these two concepts, Balke said, which is the overarching goal of fast-charging devices with greater, more efficient energy storage capacity. This would benefit a range of applications from electronics to electric vehicle batteries.
    Using computational modeling, the team simulated the conditions of five different charged ions within the layers confined in an aqueous solution, or “water shell.” The theoretical model is simple, but combined with experimental data, it created a baseline that provided evidence of where the ions within the MXene layers went and how they behaved in a complex environment.
    “One surprising outcome was we could see, within the simulation limits, different behavior for the different ions,” said ORNL theorist and co-author Paul Kent.
    The team hopes their integrated approach can guide scientists toward future MXene studies. “What we developed is a joint model. If we have a little bit of data from an experiment using a certain MXene, and if we knew the capacitance for one ion, we can predict it for the other ones, which is something that we weren’t able to do before,” Kent said.
    “Eventually, we’ll be able to trace those behaviors to more real-world, observable changes in the material’s properties,” he added.
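    As a rough illustration of the cross-prediction Kent describes, the sketch below fits a simple trend between a hypothetical ion descriptor and measured capacitance, then predicts an unmeasured ion. The descriptor, the numbers and the linear form are all assumptions made for illustration, not results from the ORNL study.

```python
# Minimal sketch of the cross-prediction idea described above: fit a simple trend
# between an ion descriptor and measured capacitance, then predict an unmeasured ion.
# The descriptor choice, the numbers, and the linear form are all illustrative
# assumptions, not values from the ORNL study.
import numpy as np

# Hypothetical (descriptor, capacitance) pairs for ions with known measurements.
descriptor = np.array([3.3, 3.6, 4.2])         # e.g. hydrated radius, in angstroms
capacitance = np.array([210.0, 190.0, 150.0])  # F/g, invented values

slope, intercept = np.polyfit(descriptor, capacitance, deg=1)

new_ion_descriptor = 3.8
predicted = slope * new_ion_descriptor + intercept
print(f"predicted capacitance for the unmeasured ion: {predicted:.0f} F/g")
```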

  • Bold proposal to tackle one of the biggest barriers to more renewable energy

    The phrase “too much of a good thing” may sound like a contradiction, but it encapsulates one of the key hurdles preventing the expansion of renewable energy generation. Too much of a service or commodity makes it harder for companies to sell it, so they curtail production.
    Usually that works out fine: The market reaches equilibrium and economists are happy. But external factors are bottlenecking renewable electricity despite the widespread desire to increase its capacity.
    UC Santa Barbara’s Sangwon Suh is all too familiar with this issue. The professor of industrial ecology has focused on it and related challenges for at least the past two years at the Bren School of Environmental Science & Management. “Curtailment is the biggest problem of renewable energy we are facing,” said Suh, who noted it will only escalate as renewable energy capacity increases.
    Now Suh, Bren doctoral student Jiajia Zheng and Andrew Chien at the University of Chicago have presented an innovative proposal to address this issue by routing workloads between data centers in different regions. The concept, published in the journal Joule, is cheap, efficient and requires minimal new infrastructure. Yet it could reduce thousands of tons of greenhouse gas emissions per year, all while saving companies money and encouraging the expansion of renewable energy.
    The main roadblock
    Curtailment comes into play when renewable energy sources generate more electricity than is required to meet demand. Modern power grids balance energy supply and demand in real-time, every minute of every day. Extra electricity would overwhelm them, so it needs to be either stored, sold off or curtailed.

    This occurs because reliable energy sources — like fossil fuel and nuclear power plants — are critical to grid stability, as well as meeting nighttime demand. These facilities have to operate above a minimum capacity, since shutting down and restarting them is both costly and inefficient. This sets a minimum for electricity from conventional power sources, and if renewables continue to generate more power, then the extra energy is effectively useless.
    California is a case study in the challenges of variable renewable electricity and the problem of curtailment. Presumably the state could sell its surplus electricity to neighbors. Unfortunately, many power grids are encountering the same problem, and the transmission network has limited capacity. As a result, the state has resorted to selling excess electricity at a negative price, essentially paying other states to take the energy.
    There are two other solutions for dealing with excess electricity aside from simply curtailing energy generation, Suh explained. Energy can be stored in batteries and even hydroelectric reservoirs. That said, batteries are incredibly expensive, and hydropower storage is only suitable for certain locations.
    The other option is to use the extra electricity to generate things of value that can be used later. “Whatever we produce will have to be stored and transported to where it’s needed,” Suh pointed out. “And this can be very expensive.
    “But,” he added, “transporting data and information is very cheap because we can use fiber optics to transmit the data literally at the speed of light.” As the authors wrote in the study, the idea behind data load migration is “moving bits, not watts.”
    An innovative idea

    The task ahead of the authors was clear. “The question we were trying to answer was can we process data using excess electricity?” Suh said. “If we can, then it’s probably the cheapest solution for transporting the product or service made using excess electricity.”
    Currently, Northern Virginia hosts most of the nation’s data centers. Unlike CAISO, California’s grid, the PJM grid that Northern Virginia sits on relies heavily on coal-fired power plants, “the dirtiest electricity that we can ever imagine,” in Suh’s words.
    Suh, Zheng and Chien propose that workloads from the PJM region could be sent to centers out west whenever California has excess electricity. The jobs can be accomplished using electricity that otherwise would have been curtailed or sold for a loss, and then the processed data can be sent wherever the service is needed. Data centers usually have average server usage rates below 50%, Zheng explained, meaning there is plenty of idle capacity ready to be tapped.
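    A toy version of this “moving bits, not watts” scheduling logic might look like the following. The hourly figures and the simple greedy rule are invented for illustration; the Joule paper's analysis is far more detailed.

```python
# Toy sketch of "moving bits, not watts": shift deferrable compute load from a
# coal-heavy grid (e.g. PJM) to a data center on a grid with curtailed renewables
# (e.g. CAISO), limited by curtailment and idle server capacity in each hour.
# All numbers here are invented for illustration; this is not the paper's model.

def migrate(load_mwh, curtailed_mwh, idle_capacity_mwh):
    """Return (migrated, remaining) energy for one hour of deferrable compute load."""
    shiftable = min(load_mwh, curtailed_mwh, idle_capacity_mwh)
    return shiftable, load_mwh - shiftable

# Hypothetical hourly data: deferrable PJM compute load, CAISO curtailment,
# and idle CAISO data-center capacity, all in MWh.
hours = [
    (40.0, 120.0, 60.0),
    (40.0,   0.0, 60.0),   # no curtailment this hour: nothing moves
    (40.0,  25.0, 60.0),
]

total_moved = 0.0
for load, curtailed, idle in hours:
    moved, remaining = migrate(load, curtailed, idle)
    total_moved += moved
    print(f"moved {moved:5.1f} MWh west, {remaining:5.1f} MWh stays on PJM")
print(f"total migrated: {total_moved:.1f} MWh of otherwise-curtailed energy absorbed")
```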
    This plan is not only environmentally sound; it represents significant savings for the companies using these services. “This approach could potentially save the data center operators tens of millions of dollars,” said lead author Zheng. Since the electricity would otherwise have been useless, its cost to the company is essentially zero.
    The authors analyzed CAISO’s historical curtailment data from 2015 through 2019. They found that load migration could have absorbed up to 62% of CAISO’s curtailed electricity in 2019. That’s nearly 600,000 megawatt-hours of previously wasted energy — roughly as much electricity as 100,000 Californian households consume in a year.
    At the same time, the strategy could have reduced the equivalent of up to 240,000 metric tons of CO2 emissions in 2019 using only existing data center capacity in California. “That is equivalent to the greenhouse gas emissions from 600 million miles of driving using average passenger vehicles,” Suh said. And, rather than costing money, each ton of CO2 emissions averted by switching power grids would actually provide around $240 in savings due to decreased spending on electricity.
    Untapped potential
    These findings were within what the authors expected to see. It was the ramifications that amazed them. “What surprised me was why we were not doing this before,” Suh said. “This seems very straightforward: There’s excess electricity, and electricity is a valuable thing, and information is very cheap to transmit from one location to another. So why are we not doing this?”
    Suh suspects it may be because data center operators are less inclined to cooperate with each other under current market pressures. Despite the environmental and financial benefits, these companies may be reluctant to outsource data processing to a facility run by a different firm.
    In fact, the data center industry is somewhat of a black box. “It was very challenging for us to get detailed information on the power usage characteristics and energy consumption data from the industry,” said Zheng.
    Harnessing the potential of curtailed renewable energy will require fluid coordination between the data center operators. Shifting the system may require changing the incentives currently at work. This could take the form of new regulations, a price on carbon emissions or collaborations between rival companies.
    “Two different things need to happen in parallel,” Suh said. “One is from the private sector: They need to cooperate and come up with the technological and managerial solutions to enable this. And from the government side, they need to think about the policy changes and incentives that can enable this type of change more quickly.”
    A widespread price on carbon emissions could provide the necessary nudge. California already has a carbon price, and Suh believes that, as additional states follow suit, it will become more economically attractive for companies to start using the strategies laid out in this study.
    And these strategies have huge growth potential. Data processing and renewable electricity capacity are both growing rapidly. Researchers predict that the datasphere will expand more than fivefold from 2018 to 2025. As a result, there is a lot of room for data centers to absorb additional processing needs using excess renewable energy in the future.
    This paper offers only a conservative estimate of the financial and environmental benefits of data load migration, Suh acknowledged. “As we increase the data center capacity, I think that the ability for a data center to be used as a de facto battery is actually increasing as well,” he said.
    “If we can think ahead and be prepared, I think that a substantial portion of the curtailment problem can be addressed in a very cost-effective way by piggybacking on the growth of data centers.”

  • Deep learning algorithm to speed up materials discovery in emerging tech industries

    Solid-state inorganic materials are critical to the growth and development of electric vehicle, cellphone, laptop battery and solar energy technologies. However, finding the ideal materials with the desired functions for these industries is extremely challenging. Jianjun Hu, an associate professor of computer science at the University of South Carolina, is the lead researcher on a project to generate new hypothetical materials.
    Because the chemical design space is vast and viable candidates are sparse, experimental trials and first-principles computational simulations cannot be used as screening tools to solve this problem. Instead, researchers have developed a deep learning-based algorithm that uses a generative adversarial network (GAN) model to improve the efficiency of the materials search by up to two orders of magnitude. It has the potential to greatly speed up the discovery of novel functional materials.
    The work, published in NPJ Computational Materials, was a collaboration between researchers at the University of South Carolina College of Engineering and Computing and Guizhou University, a research university located in Guiyang, China.
    Inspired by the deep learning technique used in Google’s AlphaGo, which learned implicit rules of the board game Go to defeat the game’s top players, the researchers used their GAN neural network to learn the implicit chemical composition rules of atoms in different elements to assemble chemically valid formulas. By training their deep learning models using the tens of thousands of known inorganic materials deposited in databases such as ICSD and OQMD, they created a generative machine learning model capable of generating millions of new hypothetical inorganic material formulas.
    “There is almost an infinite number of new materials that could exist, but they haven’t been discovered yet,” said Jianjun Hu. “Our algorithm, it’s like a generation engine. Using this model, we can generate a lot of new hypothetical materials that have very high likelihoods to exist.”
    Without explicitly modeling or enforcing chemical constraints such as charge neutrality and electronegativity, the deep learning-based smart algorithm learned to observe such rules when generating millions of hypothetical materials’ formulas. The predictive power of the algorithm has been verified both by known materials and recent findings in materials discovery literature. “One major advantage of our algorithm is the high validity, uniqueness and novelty, which are the three major evaluation metrics of such generative models,” said Shaobo Li, a professor at Guizhou University who was involved in this study.
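    As an illustration of the kind of rule the network learns implicitly, the sketch below checks a candidate formula for charge neutrality against a small table of common oxidation states. This explicit check is not the team's code, and the oxidation-state table is deliberately minimal.

```python
# Illustrative post-hoc check for charge neutrality of a generated formula.
# The paper's GAN learns such rules implicitly; this explicit check is only a
# sketch, and the small oxidation-state table below is not exhaustive.
from itertools import product

OXIDATION_STATES = {          # common states only, for illustration
    "Li": [1], "Na": [1], "Mg": [2], "Ti": [2, 3, 4],
    "Fe": [2, 3], "O": [-2], "Cl": [-1], "S": [-2],
}

def is_charge_neutral(formula):
    """formula: dict of element -> count, e.g. {'Fe': 2, 'O': 3}."""
    elements = list(formula)
    for states in product(*(OXIDATION_STATES[e] for e in elements)):
        total = sum(state * formula[e] for state, e in zip(states, elements))
        if total == 0:
            return True
    return False

print(is_charge_neutral({"Fe": 2, "O": 3}))   # True  (Fe2O3: 2*(+3) + 3*(-2) = 0)
print(is_charge_neutral({"Na": 1, "O": 1}))   # False with this restricted table
```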
    This is not the first time that an algorithm has been created for materials discovery. Past algorithms were also able to produce millions of potential new materials. However, very few of the materials discovered by these algorithms were synthesizable due to their high free energy and instability. In contrast, nearly 70 percent of the inorganic materials identified by Hu’s team are very likely to be stable and then possibly synthesizable.
    “You can get any number of formula combinations by putting elements’ symbols together. But it doesn’t mean the physics can exist,” said Ming Hu, an associate professor of mechanical engineering at UofSC who was also involved in the research. “So, our algorithm, and the next step, a structure prediction algorithm, will dramatically increase the speed of screening new functional materials by creating synthesizable compounds.”
    These new materials will help researchers in fields such as electric vehicles, green energy, solar energy and cellphone development as they continually search for new materials with optimized functionalities. With the current materials discovery process being so slow, these industries’ growth has been limited by the materials available to them.
    The next major step for the team is to predict the crystal structure of the generated formulas, which is currently a major challenge. However, the team has already started working on this challenge along with several leading international teams. Once solved, the two steps can be combined to discover many potential materials for energy conversion, storage and other applications.
    About University of South Carolina:
    The University of South Carolina is a globally recognized, high-impact research university committed to a superior student experience and dedicated to innovation in learning, research and community engagement. Founded in 1801, the university offers more than 350 degree programs and is the state’s only top-tier Carnegie Foundation research institution. More than 50,000 students are enrolled at one of 20 locations throughout the state, including the research campus in Columbia. With 56 nationally ranked academic programs including top-ranked programs in international business, the nation’s best honors college and distinguished programs in engineering, law, medicine, public health and the arts, the university is helping to build healthier, more educated communities in South Carolina and around the world.

  • Fifty new planets confirmed in machine learning first

    Fifty potential planets have had their existence confirmed by a new machine learning algorithm developed by University of Warwick scientists.
    For the first time, astronomers have used a process based on machine learning, a form of artificial intelligence, to analyse a sample of potential planets and determine which ones are real and which are ‘fakes’, or false positives, calculating the probability of each candidate being a true planet.
    Their results are reported in a new study published in the Monthly Notices of the Royal Astronomical Society, where they also perform the first large scale comparison of such planet validation techniques. Their conclusions make the case for using multiple validation techniques, including their machine learning algorithm, when statistically confirming future exoplanet discoveries.
    Many exoplanet surveys search through huge amounts of data from telescopes for the signs of planets passing between the telescope and their star, known as transiting. This results in a telltale dip in light from the star that the telescope detects, but it could also be caused by a binary star system, interference from an object in the background, or even slight errors in the camera. These false positives can be sifted out in a planetary validation process.
    Researchers from Warwick’s Departments of Physics and Computer Science, as well as The Alan Turing Institute, built a machine learning based algorithm that can separate out real planets from fake ones in the large samples of thousands of candidates found by telescope missions such as NASA’s Kepler and TESS.
    It was trained to recognise real planets using two large samples of confirmed planets and false positives from the now-retired Kepler mission. The researchers then used the algorithm on a dataset of still-unconfirmed planetary candidates from Kepler, resulting in fifty newly confirmed planets, the first to be validated by machine learning. Previous machine learning techniques have ranked candidates but never determined on their own the probability that a candidate was a true planet, a required step for planet validation.

    Those fifty planets range from worlds as large as Neptune to smaller than Earth, with orbital periods from as little as a single day to as long as 200 days. By confirming that these fifty planets are real, astronomers can now prioritise them for further observations with dedicated telescopes.
    Dr David Armstrong, from the University of Warwick Department of Physics, said: “The algorithm we have developed lets us take fifty candidates across the threshold for planet validation, upgrading them to real planets. We hope to apply this technique to large samples of candidates from current and future missions like TESS and PLATO.
    “In terms of planet validation, no-one has used a machine learning technique before. Machine learning has been used for ranking planetary candidates but never in a probabilistic framework, which is what you need to truly validate a planet. Rather than saying which candidates are more likely to be planets, we can now say what the precise statistical likelihood is. Where there is less than a 1% chance of a candidate being a false positive, it is considered a validated planet.”
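    As a generic illustration of probabilistic vetting with that threshold, the sketch below trains an off-the-shelf probabilistic classifier on synthetic labeled candidates and keeps only those with less than a 1% false-positive probability. It is not the Warwick team's algorithm, and the features and labels are fabricated for the example.

```python
# Generic sketch of probabilistic candidate vetting with the <1% false-positive
# threshold quoted above. This is not the Warwick team's algorithm; the features
# and labels below are synthetic, and scikit-learn's gradient boosting stands in
# for whatever probabilistic model is actually used.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 2000
# Synthetic "transit features" (e.g. depth, duration, signal-to-noise) and labels:
# 1 = confirmed planet, 0 = false positive.
X = rng.normal(size=(n, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = GradientBoostingClassifier().fit(X[:1500], y[:1500])

candidates = X[1500:]
p_planet = model.predict_proba(candidates)[:, 1]
validated = p_planet > 0.99          # i.e. < 1% chance of being a false positive
print(f"{validated.sum()} of {len(candidates)} candidates pass the validation threshold")
```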
    Dr Theo Damoulas from the University of Warwick Department of Computer Science, and Deputy Director, Data Centric Engineering and Turing Fellow at The Alan Turing Institute, said: “Probabilistic approaches to statistical machine learning are especially suited for an exciting problem like this in astrophysics that requires incorporation of prior knowledge — from experts like Dr Armstrong — and quantification of uncertainty in predictions. A prime example when the additional computational complexity of probabilistic methods pays off significantly.”
    Once built and trained, the algorithm is faster than existing techniques and can be completely automated, making it ideal for analysing the thousands of planetary candidates being observed in current surveys like TESS. The researchers argue that it should be one of the tools used collectively to validate planets in future.
    Dr Armstrong adds: “Almost 30% of the known planets to date have been validated using just one method, and that’s not ideal. Developing new methods for validation is desirable for that reason alone. But machine learning also lets us do it very quickly and prioritise candidates much faster.
    “We still have to spend time training the algorithm, but once that is done it becomes much easier to apply it to future candidates. You can also incorporate new discoveries to progressively improve it.
    “A survey like TESS is predicted to have tens of thousands of planetary candidates and it is ideal to be able to analyse them all consistently. Fast, automated systems like this that can take us all the way to validated planets in fewer steps let us do that efficiently.”