More stories

  • Artificial intelligence that can discover hidden physical laws in various data

    Researchers at Kobe University and Osaka University have successfully developed artificial intelligence technology that can extract hidden equations of motion from regular observational data and create a model that is faithful to the laws of physics.
    This technology could make it possible to discover hidden equations of motion behind phenomena whose governing laws were previously considered impossible to determine. For example, it may become possible to use physics-based knowledge and simulations to examine ecosystem sustainability.
    The research group consisted of Associate Professor YAGUCHI Takaharu and Ph.D. student CHEN Yuhan (Graduate School of System Informatics, Kobe University), and Associate Professor MATSUBARA Takashi (Graduate School of Engineering Science, Osaka University).
    These research achievements were made public on December 6, 2021, and were presented at the Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS 2021), a meeting on artificial intelligence technologies. The research was selected for the spotlight category, reserved for roughly the top 3% of submissions.
    Main Points
    • Being able to model (formularize) physical phenomena using artificial intelligence could result in extremely precise, high-speed simulations.
    • Current AI-based methods require data that has already been transformed to fit the assumed equation of motion, which makes them difficult to apply to actual observational data for which the equations of motion are unknown.
    • The research group used geometry to develop artificial intelligence that can find the hidden equation of motion in the supplied observational data, regardless of its format, and model it accordingly.
    • In the future, it may be possible to discover the hidden physical laws behind phenomena that had previously been considered incompatible with Newton’s Laws, such as ecosystem changes. This would make it possible to carry out investigations and simulations of these phenomena using the laws of physics, which could reveal previously unknown properties.
    Research Background
    Ordinarily, predictions of physical phenomena are carried out via simulations on supercomputers. These simulations use mathematical models based on the laws of physics; however, if the model is not highly reliable, the results will also lack reliability. It is therefore essential to develop methods for producing highly reliable models from the observational data of phenomena. Furthermore, in recent years the range of physics applications has expanded beyond expectations, and it has been demonstrated that Newton’s Laws can be applied to other areas, for example as part of a model of ecosystem changes. In many such cases, however, a concrete equation of motion has not yet been identified.
    Research Methodology
    This study developed a method for discovering novel equations of motion in observational data from phenomena to which Newton’s Laws can be applied. Research into discovering equations of motion from data has been conducted before; however, previous methods required the data to be in a format that fits their assumed special form of the equation of motion. In reality, there are many cases where it is not clear what data format is best to use, so it is difficult to apply these methods to realistic data.
    In response to this, the researchers considered that the appropriate transformation of observational data is akin to coordinate transformation in geometry, thus resolving the issue by applying the geometric idea of coordinate transformation invariance found in physics. For this, it is necessary to illuminate the unknown geometric properties behind phenomena. The research team subsequently succeeded in developing AI that can find these geometric properties in data. If equations of motion can be extracted from data, then it will be possible to use these equations to create models and simulations that are faithful to physical laws.
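    As a rough, purely illustrative sketch of the general idea of extracting an equation of motion from trajectory data (a generic regression-over-candidate-terms approach, not the geometric, coordinate-invariant method developed by this research group), the following Python example recovers the pendulum law from simulated observations; the pendulum system, the candidate-term library, and the noise level are assumptions chosen only for the illustration.

    ```python
    # Generic illustration of "equation discovery" from data: regress measured
    # accelerations onto a library of candidate terms. This is NOT the geometric
    # method described in the article; it only conveys the general idea.
    import numpy as np

    np.random.seed(0)

    # 1. Simulate noisy observations of a pendulum: theta'' = -(g/L) * sin(theta)
    g_over_L = 9.81
    dt, n = 0.001, 20000
    theta, omega = np.empty(n), np.empty(n)
    theta[0], omega[0] = 1.2, 0.0
    for k in range(n - 1):                         # simple explicit Euler integration
        omega[k + 1] = omega[k] - dt * g_over_L * np.sin(theta[k])
        theta[k + 1] = theta[k] + dt * omega[k]

    alpha_obs = np.gradient(omega, dt)             # "measured" angular acceleration
    alpha_obs += 0.05 * np.random.randn(n)         # observation noise

    # 2. Library of candidate right-hand-side terms the law might be built from.
    library = np.column_stack([np.ones(n), omega, np.sin(theta), theta ** 3])
    names = ["1", "omega", "sin(theta)", "theta^3"]

    # 3. Least-squares fit; keep only the dominant terms to reveal the law.
    coef, *_ = np.linalg.lstsq(library, alpha_obs, rcond=None)
    for name, c in zip(names, coef):
        if abs(c) > 1.0:
            print(f"theta'' ~ {c:+.2f} * {name}")  # expect roughly -9.81 * sin(theta)
    ```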
    Further Developments
    Physics simulations are carried out in a wide range of fields, including weather forecasting, drug discovery, building analyses, and car design, but they usually require extensive calculations. However, if AI can learn from the data of specific phenomena and construct small-scale models using the proposed method, then this will simplify and speed up calculations that are faithful to the laws of physics. This will contribute towards the development of the aforementioned fields.
    Furthermore, this method can be applied to areas that at first glance appear to be unrelated to physics. If equations of motion can be extracted in such cases, it will become possible to carry out physics-based investigations and simulations even for phenomena that have been considered impossible to explain using physics. For example, it may be possible to find a hidden equation of motion in animal population data that tracks the change in the number of individuals. This could be used to investigate ecosystem sustainability by applying the appropriate physical laws (e.g., the law of conservation of energy).
    Story Source:
    Materials provided by Kobe University. Note: Content may be edited for style and length.

  • Key step toward personalized medicine: Modeling biological systems

    A new study by the Oregon State University College of Engineering shows that machine learning techniques can offer powerful new tools for advancing personalized medicine, care that optimizes outcomes for individual patients based on unique aspects of their biology and disease features.
    The research with machine learning, a branch of artificial intelligence in which computer systems use algorithms and statistical models to look for trends in data, tackles long-unsolvable problems in biological systems at the cellular level, said Oregon State’s Brian D. Wood, who conducted the study with then OSU Ph.D. student Ehsan Taghizadeh and Helen M. Byrne of the University of Oxford.
    “Those systems tend to have high complexity — first because of the vast number of individual cells and second, because of the highly nonlinear way in which cells can behave,” said Wood, a professor of environmental engineering. “Nonlinear systems present a challenge for upscaling methods, which is the primary means by which researchers can accurately model biological systems at the larger scales that are often the most relevant.”
    A linear system in science or mathematics means any change to the system’s input results in a proportional change to the output; a linear equation, for example, might describe a slope that gains 2 feet vertically for every foot of horizontal distance.
    Nonlinear systems don’t work that way, and many of the world’s systems, including biological ones, are nonlinear.
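    To make the distinction concrete, here is a tiny Python illustration using invented toy functions (not the study's biological models): a linear response scales proportionally with its input, while a nonlinear one does not.

    ```python
    # Toy functions only, to illustrate "proportional change in gives proportional
    # change out"; they have nothing to do with the OSU models.
    def linear(x):
        return 2.0 * x        # like the article's slope: 2 feet up per foot across

    def nonlinear(x):
        return x ** 2         # doubling the input quadruples the output

    for f in (linear, nonlinear):
        print(f.__name__, f(1.0), f(2.0))
    # linear:    2.0 -> 4.0   (output doubles when input doubles)
    # nonlinear: 1.0 -> 4.0   (output quadruples when input doubles)
    ```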
    The new research, funded in part by the U.S. Department of Energy and published in the Journal of Computational Physics, is one of the first examples of using machine learning to address issues with modeling nonlinear systems and understanding complex processes that might occur in human tissues, Wood said.

  • Community of ethical hackers needed to prevent AI's looming 'crisis of trust'

    The Artificial Intelligence industry should create a global community of hackers and “threat modellers” dedicated to stress-testing the harm potential of new AI products in order to earn the trust of governments and the public before it’s too late.
    This is one of the recommendations made by an international team of risk and machine-learning experts, led by researchers at the University of Cambridge’s Centre for the Study of Existential Risk (CSER), who have authored a new “call to action” published today in the journal Science.
    They say that companies building intelligent technologies should harness techniques such as “red team” hacking, audit trails and “bias bounties” — paying out rewards for revealing ethical flaws — to prove their integrity before releasing AI for use on the wider public.
    Otherwise, the industry faces a “crisis of trust” in the systems that increasingly underpin our society, as public concern continues to mount over everything from driverless cars and autonomous drones to secret social media algorithms that spread misinformation and provoke political turmoil.
    The novelty and “black box” nature of AI systems, and ferocious competition in the race to the marketplace, have hindered the development and adoption of auditing or third-party analysis, according to lead author Dr Shahar Avin of CSER.
    The experts argue that incentives to increase trustworthiness should not be limited to regulation, but must also come from within an industry yet to fully comprehend that public trust is vital for its own future — and trust is fraying.

  • Machine learning decodes tremors of the universe

    Black holes are one of the greatest mysteries of our Universe — for example, a black hole with the mass of our Sun has a radius of only 3 kilometers. Black holes in orbit around each other give off gravitational radiation — oscillations of space and time predicted by Albert Einstein in 1916. This causes the orbit to become faster and tighter, and eventually, the black holes merge in a final burst of radiation. These gravitational waves propagate through the Universe at the speed of light, and are detected by observatories in the USA (LIGO) and Italy (Virgo). Scientists compare the data collected by the observatories against theoretical predictions to estimate the properties of the source, including how large the black holes are and how fast they are spinning. Currently, this procedure takes at least hours, often months.
    An interdisciplinary team of researchers from the Max Planck Institute for Intelligent Systems (MPI-IS) in Tübingen and the Max Planck Institute for Gravitational Physics (Albert Einstein Institute/AEI) in Potsdam is using state-of-the-art machine learning methods to speed up this process. They developed an algorithm using a deep neural network, a complex computer code built from a sequence of simpler operations, inspired by the human brain. Within seconds, the system infers all properties of the binary black-hole source. Their research results were published in Physical Review Letters, a flagship physics journal.
    “Our method can make very accurate statements in a few seconds about how big and massive the two black holes were that generated the gravitational waves when they merged. How fast do the black holes rotate, how far away are they from Earth and from which direction is the gravitational wave coming? We can deduce all this from the observed data and even make statements about the accuracy of this calculation,” explains Maximilian Dax, first author of the study Real-Time Gravitational Wave Science with Neural Posterior Estimation and Ph.D. student in the Empirical Inference Department at MPI-IS.
    The researchers trained the neural network with many simulations — predicted gravitational-wave signals for hypothetical binary black-hole systems combined with noise from the detectors. This way, the network learns the correlations between the measured gravitational-wave data and the parameters characterizing the underlying black-hole system. It takes ten days for the algorithm called DINGO (the abbreviation stands for Deep INference for Gravitational-wave Observations) to learn. Then it is ready for use: the network deduces the size, the spins, and all other parameters describing the black holes from data of newly observed gravitational waves in just a few seconds. The high-precision analysis decodes ripples in space-time almost in real-time — something that has never been done with such speed and precision. The researchers are convinced that the improved performance of the neural network as well as its ability to better handle noise fluctuations in the detectors will make this method a very useful tool for future gravitational-wave observations.
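    The following Python sketch conveys only the general "train once on simulations, then infer almost instantly" pattern described above; the toy damped-sinusoid signal, the small network, and the plain regression objective are invented stand-ins, not the DINGO architecture (which estimates full posterior distributions rather than point values).

    ```python
    # Toy sketch of amortized inference: learn a mapping from simulated noisy
    # signals to their source parameters, then reuse it instantly on new data.
    # The damped sinusoid and tiny network are stand-ins, not DINGO or a real
    # gravitational waveform model.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    t = torch.linspace(0.0, 1.0, 256)

    def simulate(freq, amp):
        """Toy 'waveform': a damped sinusoid plus white noise from the 'detector'."""
        clean = amp[:, None] * torch.sin(2 * torch.pi * freq[:, None] * t) * torch.exp(-3.0 * t)
        return clean + 0.1 * torch.randn(clean.shape)

    net = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),
                        nn.Linear(128, 64), nn.ReLU(),
                        nn.Linear(64, 2))              # outputs (freq, amp) estimates
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    # Expensive, one-off training phase on simulated signal/parameter pairs.
    for step in range(2000):
        freq = 5.0 + 15.0 * torch.rand(128)            # random source parameters
        amp = 0.5 + 1.5 * torch.rand(128)
        loss = nn.functional.mse_loss(net(simulate(freq, amp)),
                                      torch.stack([freq, amp], dim=1))
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Observation phase: estimating the parameters of new data is one forward
    # pass, with no further simulations needed ("amortized inference").
    observation = simulate(torch.tensor([12.0]), torch.tensor([1.0]))
    print("estimated (freq, amp):", net(observation).detach().numpy()[0])
    ```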
    “The further we look into space through increasingly sensitive detectors, the more gravitational-wave signals are detected. Fast methods such as ours are essential for analyzing all of this data in a reasonable amount of time,” says Stephen Green, senior scientist in the Astrophysical and Cosmological Relativity department at the AEI. “DINGO has the advantage that — once trained — it can analyze new events very quickly. Importantly, it also provides detailed uncertainty estimates on parameters, which have been hard to produce in the past using machine-learning methods.” Until now, researchers in the LIGO and Virgo collaborations have used computationally very time-consuming algorithms to analyze the data. They need millions of new simulations of gravitational waveforms for the interpretation of each measurement, which leads to computing times of several hours to months — DINGO avoids this overhead because a trained network does not need any further simulations for analyzing newly observed data, a process known as ‘amortized inference’.
    The method also holds promise for more complex gravitational-wave signals describing binary black-hole configurations that are very time-consuming to analyze with current algorithms, and for binary neutron stars. Whereas the collision of black holes releases energy exclusively in the form of gravitational waves, merging neutron stars also emit radiation in the electromagnetic spectrum. They are therefore also visible to telescopes, which have to be pointed at the relevant region of the sky as quickly as possible in order to observe the event. To do this, one needs to determine very quickly where the gravitational wave is coming from, which the new machine learning method facilitates. In the future, this information could be used to point telescopes in time to observe electromagnetic signals from the collisions of neutron stars, and of a neutron star with a black hole.
    Alessandra Buonanno, director at the AEI, and Bernhard Schölkopf, a Director at the MPI-IS, are thrilled with the prospect of taking their successful collaboration to the next level. Buonanno expects that “going forward, these approaches will also enable a much more realistic treatment of the detector noise and of the gravitational signals than is possible today using standard techniques,” and Schölkopf adds that such “simulation-based inference using machine learning could be transformative in many areas of science where we need to infer a complex model from noisy observations.”
    Story Source:
    Materials provided by Max Planck Institute for Intelligent Systems. Note: Content may be edited for style and length.

  • A new super-cooled microwave source boosts the scale-up of quantum computers

    Researchers in Finland have developed a circuit that produces the high-quality microwave signals required to control quantum computers while operating at temperatures near absolute zero. This is a key step towards moving the control system closer to the quantum processor, which may make it possible to greatly increase the number of qubits in the processor.
    One of the factors limiting the size of quantum computers is the mechanism used to control the qubits in quantum processors. This is normally accomplished using a series of microwave pulses, and because quantum processors operate at temperatures near absolute zero, the control pulses are normally brought into the cooled environment via broadband cables from room temperature.
    As the number of qubits grows, so does the number of cables needed. This limits the potential size of a quantum processor, because the refrigerators cooling the qubits would have to become larger to accommodate more and more cables while also working harder to cool them down — ultimately a losing proposition.
    A research consortium led by Aalto University and VTT Technical Research Centre of Finland has now developed a key component of the solution to this conundrum. ‘We have built a precise microwave source that works at the same extremely low temperature as the quantum processors, approximately -273 degrees,’ says Mikko Möttönen, Professor at Aalto University and VTT Technical Research Centre of Finland, who led the team.
    The new microwave source is an on-chip device that can be integrated with a quantum processor. Less than a millimetre in size, it potentially removes the need for high-frequency control cables connecting different temperatures. With this low-power, low-temperature microwave source, it may be possible to use smaller cryostats while still increasing the number of qubits in a processor.
    ‘Our device produces one hundred times more power than previous versions, which is enough to control qubits and carry out quantum logic operations,’ says Möttönen. ‘It produces a very precise sine wave, oscillating over a billion times per second. As a result, errors in qubits from the microwave source are very infrequent, which is important when implementing precise quantum logic operations.’
    However, a continuous-wave microwave source, such as the one produced by this device, cannot be used as is to control qubits. First, the microwaves must be shaped into pulses. The team is currently developing methods to quickly switch the microwave source on and off.
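    As a purely illustrative numerical sketch of the difference between a continuous-wave tone and a shaped control pulse (the carrier frequency, envelope, and time window below are arbitrary round numbers, not parameters of the Aalto/VTT device):

    ```python
    # Illustration only: a continuous microwave tone versus the same tone gated
    # by an envelope to form a control pulse. Numbers are arbitrary, not taken
    # from the device described in the article.
    import numpy as np

    f_carrier = 1.0e9                          # a 1 GHz carrier, for illustration
    t = np.linspace(0.0, 100e-9, 100_001)      # 100 ns window

    continuous_wave = np.sin(2 * np.pi * f_carrier * t)

    # A Gaussian envelope centred at 50 ns acts as the fast "on/off switch",
    # shaping the continuous tone into a short pulse suitable for qubit control.
    envelope = np.exp(-((t - 50e-9) ** 2) / (2 * (10e-9) ** 2))
    control_pulse = envelope * continuous_wave

    print("carrier periods in window:", f_carrier * t[-1])
    print("pulse energy / CW energy:", np.sum(control_pulse**2) / np.sum(continuous_wave**2))
    ```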
    Even without a switching solution to create pulses, an efficient, low-noise, low-temperature microwave source could be useful in a range of quantum technologies, such as quantum sensors.
    ‘In addition to quantum computers and sensors, the microwave source can act as a clock for other electronic devices. It can keep different devices in the same rhythm, allowing them to induce operations for several different qubits at the desired instant of time,’ explains Möttönen.
    The theoretical analysis and the initial design were carried out by Juha Hassel and others at VTT. Hassel, who started this work at VTT, is currently the head of engineering and development at IQM, a Finnish quantum-computing hardware company. The device was then built at VTT and operated by postdoctoral researcher Chengyu Yan and his colleagues at Aalto University using the OtaNano research infrastructure. Yan is currently an associate professor at Huazhong University of Science and Technology, China. The teams involved in this research are part of the Academy of Finland Centre of Excellence in Quantum Technology (QTF) and the Finnish Quantum Institute (InstituteQ).
    Story Source:
    Materials provided by Aalto University. Note: Content may be edited for style and length.

  • A tool to speed development of new solar cells

    In the ongoing race to develop ever-better materials and configurations for solar cells, there are many variables that can be adjusted to try to improve performance, including material type, thickness, and geometric arrangement. Developing new solar cells has generally been a tedious process of making small changes to one of these parameters at a time. While computational simulators have made it possible to evaluate such changes without having to actually build each new variation for testing, the process remains slow.
    Now, researchers at MIT and Google Brain have developed a system that makes it possible not just to evaluate one proposed design at a time, but to provide information about which changes will deliver the desired improvements. This could greatly increase the rate of discovery of new, improved configurations.
    The new system, called a differentiable solar cell simulator, is described in a paper published in the journal Computer Physics Communications, written by MIT junior Sean Mann, research scientist Giuseppe Romano of MIT’s Institute for Soldier Nanotechnologies, and four others at MIT and at Google Brain.
    Traditional solar cell simulators, Romano explains, take the details of a solar cell configuration and produce as their output a predicted efficiency — that is, what percentage of the energy of incoming sunlight actually gets converted to an electric current. But this new simulator both predicts the efficiency and shows how much that output is affected by any one of the input parameters. “It tells you directly what happens to the efficiency if we make this layer a little bit thicker, or what happens to the efficiency if we for example change the property of the material,” he says.
    In short, he says, “we didn’t discover a new device, but we developed a tool that will enable others to discover more quickly other higher performance devices.” Using this system, “we are decreasing the number of times that we need to run a simulator to give quicker access to a wider space of optimized structures.” In addition, he says, “our tool can identify a unique set of material parameters that has been hidden so far because it’s very complex to run those simulations.”
    While traditional approaches use essentially a random search of possible variations, Mann says, with his tool “we can follow a trajectory of change because the simulator tells you what direction you want to be changing your device. That makes the process much faster because instead of exploring the entire space of opportunities, you can just follow a single path” that leads directly to improved performance.
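    To make the "follow a trajectory of change" idea concrete, here is a minimal sketch of gradient-based design optimization against a differentiable model; the toy_efficiency function, the parameter names, and the optimizer settings are invented for illustration and are not the MIT/Google Brain simulator.

    ```python
    # Minimal sketch of why a *differentiable* simulator speeds up design search:
    # gradients point directly toward better parameters, so we can climb instead
    # of sampling at random. 'toy_efficiency' is an invented stand-in for a real
    # solar cell simulator.
    import torch

    def toy_efficiency(thickness_um, bandgap_ev):
        """Invented smooth 'efficiency' with a single optimum, for illustration."""
        return 0.30 * torch.exp(-((thickness_um - 2.0) ** 2) / 4.0) \
                    * torch.exp(-((bandgap_ev - 1.34) ** 2) / 0.5)

    # Start from an arbitrary design and follow the gradient uphill.
    params = torch.tensor([0.5, 1.0], requires_grad=True)   # [thickness, bandgap]
    opt = torch.optim.Adam([params], lr=0.05)

    for step in range(200):
        eff = toy_efficiency(params[0], params[1])
        loss = -eff                      # maximize efficiency = minimize its negative
        opt.zero_grad()
        loss.backward()                  # autograd supplies d(efficiency)/d(parameter)
        opt.step()

    print("optimized design:", params.detach().numpy())
    print("predicted efficiency:", toy_efficiency(*params).item())
    ```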

  • Stretchy, washable battery brings wearable devices closer to reality

    UBC researchers have created what could be the first battery that is both flexible and washable. It works even when twisted or stretched to twice its normal length, or after being tossed in the laundry.
    “Wearable electronics are a big market and stretchable batteries are essential to their development,” says Dr. Ngoc Tan Nguyen, a postdoctoral fellow at UBC’s faculty of applied science. “However, up until now, stretchable batteries have not been washable. This is an essential addition if they are to withstand the demands of everyday use.”
    The battery developed by Dr. Nguyen and his colleagues offers a number of engineering advances. In normal batteries, the internal layers are hard materials encased in a rigid exterior. The UBC team made the key compounds — in this case, zinc and manganese dioxide — stretchable by grinding them into small pieces and then embedding them in a rubbery plastic, or polymer. The battery comprises several ultra-thin layers of these polymers wrapped inside a casing of the same polymer. This construction creates an airtight, waterproof seal that ensures the integrity of the battery through repeated use.
    It was team member Bahar Iranpour, a PhD student, who suggested throwing the battery in the wash to test its seal. So far, the battery has withstood 39 wash cycles and the team expects to further improve its durability as they continue to develop the technology.
    “We put our prototypes through an actual laundry cycle in both home and commercial-grade washing machines. They came out intact and functional and that’s how we know this battery is truly resilient,” says Iranpour.
    The choice of zinc and manganese dioxide chemistry also confers another important advantage. “We went with zinc-manganese because for devices worn next to the skin, it’s a safer chemistry than lithium-ion batteries, which can produce toxic compounds when they break,” says Nguyen.
    An affordable option
    Ongoing work is underway to increase the battery’s power output and cycle life, but already the innovation has attracted commercial interest. The researchers believe that when the new battery is ready for consumers, it could cost the same as an ordinary rechargeable battery.
    “The materials used are incredibly low-cost, so if this is made in large numbers, it will be cheap,” says electrical and computer engineering professor Dr. John Madden, director of UBC’s Advanced Materials and Process Engineering Lab who supervised the work. In addition to watches and patches for measuring vital signs, the battery might also be integrated with clothing that can actively change colour or temperature.
    “Wearable devices need power. By creating a cell that is soft, stretchable and washable, we are making wearable power comfortable and convenient.”
    Story Source:
    Materials provided by University of British Columbia. Note: Content may be edited for style and length.

  • Analog computers now just one step from digital

    The future of computing may be analog.
    The digital design of our everyday computers is good for reading email and gaming, but today’s problem-solving computers work with vast amounts of data. The need to both store and process this information creates performance bottlenecks because of the way computers are built.
    The next computer revolution might be a new kind of hardware, called processing-in-memory (PIM), an emerging computing paradigm that merges the memory and processing unit and does its computations using the physical properties of the machine — no 1s or 0s needed to do the processing digitally.
    At Washington University in St. Louis, researchers from the lab of Xuan “Silvia” Zhang, associate professor in the Preston M. Green Department of Electrical & Systems Engineering at the McKelvey School of Engineering, have designed a new PIM circuit, which brings the flexibility of neural networks to bear on PIM computing. The circuit has the potential to increase PIM computing’s performance by orders of magnitude beyond its current theoretical capabilities.
    Their research was published online Oct. 27 in the journal IEEE Transactions on Computers. The work was a collaboration with Li Jiang at Shanghai Jiao Tong University in China.
    Traditionally designed computers are built using a von Neumann architecture. Part of this design separates the memory — where data is stored — from the processor — where the actual computing is performed.
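    As a generic illustration of how merging memory and processing can work in practice (a textbook resistive-crossbar picture, not a description of the specific circuit from Zhang's lab), the sketch below emulates the arithmetic such an analog array performs: a grid of programmable conductances stores the weights, and Ohm's and Kirchhoff's laws turn input voltages into output currents that equal a matrix-vector product.

    ```python
    # Generic illustration of in-memory analog computation with a resistive
    # crossbar (not the specific WashU / Shanghai Jiao Tong circuit): the matrix
    # of conductances G stores the weights, input voltages V drive the rows, and
    # the column currents emerge directly from Ohm's and Kirchhoff's laws, so the
    # multiply-accumulate happens in the physics, not in a digital ALU.
    import numpy as np

    rng = np.random.default_rng(0)
    G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # conductances in siemens = stored weights
    V = np.array([0.2, 0.5, 0.1, 0.3])         # input voltages on the 4 rows

    I = V @ G                                   # column currents: one analog "step"
    print("output currents (A):", I)            # equivalent to a 4x3 matrix-vector product
    ```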