More stories

  •

    Resolving the puzzles of graphene superconductivity

    A single layer of carbon atoms arranged in a honeycomb lattice makes up the promising nanomaterial called graphene. Research on a stack of three graphene sheets whose lattices are aligned but shifted against one another, a configuration known as rhombohedral trilayer graphene, revealed an unexpected superconducting state in which electrical resistance vanishes due to the quantum nature of the electrons. The discovery was published and debated in Nature, but its origin remained elusive. Now, Professor Maksym Serbyn and postdoc Areg Ghazaryan from the Institute of Science and Technology (IST) Austria, in collaboration with Professor Erez Berg and postdoc Tobias Holder from the Weizmann Institute of Science, Israel, have developed a theoretical framework of unconventional superconductivity that resolves the puzzles posed by the experimental data.
    The Puzzles and their Resolution
    Superconductivity relies on the pairing of free electrons in the material despite the repulsion arising from their equal negative charges. In conventional superconductors, this pairing happens between electrons of opposite spin and is mediated by vibrations of the crystal lattice. (Spin is a quantum property of particles comparable, but not identical, to rotation.) “Applied to trilayer graphene,” co-lead author Ghazaryan points out, “we identified two puzzles that seem difficult to reconcile with conventional superconductivity.”
    First, above a threshold temperature of roughly -260 °C, electrical resistance should rise linearly with increasing temperature. In the experiments, however, it remained constant up to -250 °C. Second, pairing between electrons of opposite spin implies a coupling that contradicts another experimentally observed feature: the presence of a nearby configuration with fully aligned spins, which we know as magnetism. “In the paper, we show that both observations are explainable,” group leader Maksym Serbyn summarizes, “if one assumes that an interaction between electrons provides the ‘glue’ that holds electrons together. This leads to unconventional superconductivity.”
    When one draws all possible states that electrons can have on a chart and then separates the occupied ones from the unoccupied ones with a line, this separation line is called a Fermi surface. Experimental data from trilayer graphene show two Fermi surfaces enclosing a ring-like shape. In their work, the researchers draw on a theory developed by Kohn and Luttinger in the 1960s and demonstrate that such ring-like Fermi surfaces favor a mechanism for superconductivity based on electron interactions alone. They also suggest experimental setups to test their argument and offer routes towards raising the critical temperature at which superconductivity sets in.
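    The ring-like Fermi sea described above can be illustrated with a toy calculation. The sketch below is purely illustrative: it assumes a simple "Mexican-hat" dispersion with made-up parameters rather than the actual band structure of rhombohedral trilayer graphene, and only shows how such a dispersion produces two concentric Fermi surfaces enclosing an annular region of occupied states.

    ```python
    # Purely illustrative toy model, not the rhombohedral-trilayer band structure:
    # a "Mexican-hat" dispersion whose Fermi sea is an annulus bounded by two
    # concentric Fermi surfaces, the ring-like situation described in the article.
    import numpy as np

    k0 = 1.0   # radius of the dispersion minimum (arbitrary units, assumed)
    mu = 0.3   # chemical potential measured from the band minimum (assumed)

    def dispersion(kx, ky):
        """Toy band whose minimum lies on the ring |k| = k0."""
        return (kx**2 + ky**2 - k0**2) ** 2

    # Sample momentum space on a grid and mark the occupied states (the Fermi sea).
    k = np.linspace(-2.0, 2.0, 801)
    KX, KY = np.meshgrid(k, k)
    occupied = dispersion(KX, KY) < mu

    # In two dimensions the Fermi "surfaces" are the circles where the dispersion equals mu.
    k_inner = np.sqrt(k0**2 - np.sqrt(mu))
    k_outer = np.sqrt(k0**2 + np.sqrt(mu))
    print(f"occupied fraction of sampled states: {occupied.mean():.3f}")
    print(f"two Fermi surfaces, at |k| = {k_inner:.3f} and |k| = {k_outer:.3f}")
    ```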
    The Benefits of Graphene Superconductivity
    While superconductivity has been observed in other bilayer and trilayer graphene systems, those materials must be specifically engineered and may be hard to control because of their low stability. Rhombohedral trilayer graphene, although rare, occurs naturally. The proposed theoretical solution has the potential to shed light on long-standing problems in condensed matter physics and to open the way to potential applications of both superconductivity and graphene.
    Story Source:
    Materials provided by Institute of Science and Technology Austria. Note: Content may be edited for style and length.

  •

    AI models microprocessor performance in real-time

    Computer engineers at Duke University have developed a new AI method for accurately predicting the power consumption of any type of computer processor more than a trillion times per second while barely using any computational power itself. Dubbed APOLLO, the technique has been validated on real-world, high-performance microprocessors and could help improve the efficiency and inform the development of new microprocessors.
    The approach is detailed in a paper published at MICRO-54: 54th Annual IEEE/ACM International Symposium on Microarchitecture, one of the top-tier conferences in computer architecture, where it was selected as the conference’s best publication.
    “This is an intensively studied problem that has traditionally relied on extra circuitry to address,” said Zhiyao Xie, first author of the paper and a PhD candidate in the laboratory of Yiran Chen, professor of electrical and computer engineering at Duke. “But our approach runs directly on the microprocessor in the background, which opens many new opportunities. I think that’s why people are excited about it.”
    In modern computer processors, cycles of computations are made on the order of 3 trillion times per second. Keeping track of the power consumed by such intensely fast transitions is important to maintain the entire chip’s performance and efficiency. If a processor draws too much power, it can overheat and cause damage. Sudden swings in power demand can cause internal electromagnetic complications that can slow the entire processor down.
    By implementing software that can predict and stop these undesirable extremes from happening, computer engineers can protect their hardware and increase its performance. But such schemes come at a cost. Keeping pace with modern microprocessors typically requires precious extra hardware and computational power.
    “APOLLO approaches an ideal power estimation algorithm that is both accurate and fast and can easily be built into a processing core at a low power cost,” Xie said. “And because it can be used in any type of processing unit, it could become a common component in future chip design.”
    The secret to APOLLO’s power comes from artificial intelligence. The algorithm developed by Xie and Chen uses AI to identify and select just 100 of a processor’s millions of signals that correlate most closely with its power consumption. It then builds a power consumption model from those 100 signals and monitors them to predict the entire chip’s performance in real time.
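    A rough sense of that two-step recipe, signal selection followed by a lightweight power model, is sketched below. This is not the APOLLO implementation: the synthetic toggle data, the use of Lasso as the selection step, and every parameter here are assumptions made purely for illustration.

    ```python
    # Minimal sketch of the idea described above (not Duke's APOLLO code): from many
    # candidate per-cycle signal activities, pick a small proxy subset that tracks
    # measured power, then fit a lightweight model that monitors only that subset.
    import numpy as np
    from sklearn.linear_model import Lasso, LinearRegression

    rng = np.random.default_rng(0)
    n_cycles, n_signals, n_proxies = 3000, 1000, 100

    # Synthetic stand-in for per-cycle signal toggling activity (0/1) and measured power.
    X = rng.integers(0, 2, size=(n_cycles, n_signals)).astype(float)
    true_w = np.zeros(n_signals)
    true_w[rng.choice(n_signals, 150, replace=False)] = rng.uniform(0.5, 2.0, 150)
    y = X @ true_w + rng.normal(0.0, 1.0, n_cycles)   # "measured" per-cycle power

    # Step 1: sparse selection of proxy signals (Lasso used here as a simple stand-in).
    selector = Lasso(alpha=0.05, max_iter=5000).fit(X, y)
    proxies = np.argsort(-np.abs(selector.coef_))[:n_proxies]

    # Step 2: a small linear power model that watches only the selected proxies.
    model = LinearRegression().fit(X[:, proxies], y)
    pred = model.predict(X[:, proxies])
    rel_err = np.mean(np.abs(pred - y)) / np.mean(y)
    print(f"selected {len(proxies)} proxy signals; mean relative error ~ {rel_err:.3f}")
    ```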
    Because this learning process is autonomous and data driven, it can be implemented on almost any computer processor architecture — even those that have yet to be invented. And while it doesn’t require any human designer expertise to do its job, the algorithm could help human designers do theirs.
    “After the AI selects its 100 signals, you can look at the algorithm and see what they are,” Xie said. “A lot of the selections make intuitive sense, but even if they don’t, they can provide feedback to designers by informing them which processes are most strongly correlated with power consumption and performance.”
    The work is part of a collaboration with Arm Research, a computer engineering research organization that aims to analyze the disruptions impacting industry and create advanced solutions, many years ahead of deployment. With the help of Arm Research, APOLLO has already been validated on some of today’s highest performing processors. But according to the researchers, the algorithm still needs testing and comprehensive evaluations on many more platforms before it would be adopted by commercial computer manufacturers.
    “Arm Research works with and receives funding from some of the biggest names in the industry, like Intel and IBM, and predicting power consumption is one of their major priorities,” Chen added. “Projects like this offer our students an opportunity to work with these industry leaders, and these are the types of results that make them want to continue working with and hiring Duke graduates.”
    This work was conducted under the high-performance AClass CPU research program at Arm Research and was partially supported by the National Science Foundation (NSF-2106828, NSF-2112562) and the Semiconductor Research Corporation (SRC).
    Story Source:
    Materials provided by Duke University. Original written by Ken Kingery. Note: Content may be edited for style and length.

  •

    Doctoral student finds alternative cell option for organs-on-chips

    Organ-on-a-chip technology has provided a push to discover new drugs for a variety of rare and neglected diseases for which current models either don’t exist or lack precision. In particular, these platforms can incorporate a patient’s own cells, enabling patient-specific drug discovery.
    As an example, even though sickle cell disease was first described in the early 1900s, the wide range of severity in the disease makes it challenging to treat patients. Because the disease is most prevalent among economically disadvantaged and underrepresented minorities, socioeconomic inequity has led to a general lack of incentive to develop new treatment strategies, making it one of the most serious orphan conditions globally.
    Tanmay Mathur, doctoral student in Dr. Abhishek Jain’s lab in the Department of Biomedical Engineering at Texas A&M University, is developing personalized blood vessels to improve knowledge and derive treatments against the vascular dysfunction seen in sickle cell disease and other rare diseases of the blood and vessels.
    Blood vessel models currently rely on induced pluripotent stem cells (IPSCs), which are derived from a patient’s endothelial cells. However, Mathur said these cells have limitations: they expire quickly and can’t be stored for long periods of time.
    Mathur’s research offers an alternative — blood outgrowth endothelial cells (BOECs), which can be isolated from a patient’s blood. All that is needed is 50 to 100 milliliters of blood.
    “The equipment and the reagents involved are also very cheap and available in most clinical settings,” Mathur said. “These cells are progenitor endothelial cells, meaning they have high proliferation, so if you keep giving them the food they want, within a month, we will have enough cells so that we can successfully keep on subculturing them forever.”
    The question, however, is whether BOECs work like IPSCs in the context of organs-on-chips, the microdevices that allow researchers to create these blood vessel models. That is a question Mathur recently answered in a paper published in the Journal of the American Heart Association.

  •

    Real-world study shows the potential of gait authentication to enhance smartphone security

    Real-world tests have shown that gait authentication could be a viable means of protecting smartphones and other mobile devices from cyber crime, according to new research.
    A study led by the University of Plymouth asked smartphone users to go about their daily activities while motion sensors within their mobile devices captured data about their stride patterns.
    The results showed the system was on average around 85% accurate in recognising an individual’s gait, with that figure rising to almost 90% when participants were walking normally or walking fast.
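    For readers curious how motion-sensor data can be turned into an authentication decision at all, here is a deliberately simplified sketch. It is not the system evaluated in the study; the sampling rate, window length, features and acceptance threshold are all illustrative assumptions.

    ```python
    # A deliberately simple gait-verification sketch: window the accelerometer trace,
    # extract a few features per window, and compare against an enrolled template.
    import numpy as np

    FS = 50            # sampling rate in Hz (assumed)
    WINDOW = 5 * FS    # 5-second analysis windows (assumed)

    def gait_features(acc_mag):
        """Per-window features from an accelerometer-magnitude trace."""
        windows = [acc_mag[i:i + WINDOW] for i in range(0, len(acc_mag) - WINDOW + 1, WINDOW)]
        feats = []
        for w in windows:
            spectrum = np.abs(np.fft.rfft(w - w.mean()))
            dominant_hz = np.argmax(spectrum) * FS / len(w)   # rough step-frequency estimate
            feats.append([w.mean(), w.std(), dominant_hz])
        return np.array(feats)

    def enroll(acc_mag):
        """Build a user template as the mean feature vector over enrollment windows."""
        return gait_features(acc_mag).mean(axis=0)

    def authenticate(template, acc_mag, threshold=0.5):
        """Accept if the new walk's average features are close enough to the template."""
        probe = gait_features(acc_mag).mean(axis=0)
        return float(np.linalg.norm(probe - template)) < threshold

    # Toy usage with synthetic signals standing in for real sensor data.
    rng = np.random.default_rng(1)
    t = np.arange(60 * FS) / FS
    owner_walk = 1.0 + 0.3 * np.sin(2 * np.pi * 1.8 * t) + 0.05 * rng.normal(size=t.size)
    other_walk = 1.0 + 0.3 * np.sin(2 * np.pi * 1.0 * t) + 0.05 * rng.normal(size=t.size)
    template = enroll(owner_walk)
    print("owner accepted:       ", authenticate(template, owner_walk))
    print("other person accepted:", authenticate(template, other_walk))
    ```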
    There are currently more than 6.3 billion smartphone users around the world, who use their devices to access a wide range of services and to store sensitive and confidential information.
    While authentication mechanisms — such as passwords, PINs and biometrics — exist, studies have shown the level of security and usability of such approaches varies considerably.
    Writing in Computers & Security, the researchers say the study illustrates that — within an appropriate framework — gait recognition could be a viable technique for protecting individuals and their data from potential crime.

  •

    Artificial intelligence that can discover hidden physical laws in various data

    Researchers at Kobe University and Osaka University have successfully developed artificial intelligence technology that can extract hidden equations of motion from regular observational data and create a model that is faithful to the laws of physics.
    This technology could enable us to discover hidden equations of motion behind phenomena whose governing laws were previously considered inexplicable. For example, it may become possible to use physics-based knowledge and simulations to examine ecosystem sustainability.
    The research group consisted of Associate Professor YAGUCHI Takaharu and PhD student CHEN Yuhan (Graduate School of System Informatics, Kobe University), and Associate Professor MATSUBARA Takashi (Graduate School of Engineering Science, Osaka University).
    These research achievements were made public on December 6, 2021, and were presented at the Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS 2021), a meeting on artificial intelligence technologies. The research was among the top 3% of submissions selected for the spotlight category.
    Main Points
    • Being able to model (formularize) physical phenomena using artificial intelligence could result in extremely precise, high-speed simulations.
    • Current methods using artificial intelligence require data that has been transformed to fit the assumed equation of motion, so it is difficult to apply them to actual observational data for which the equations of motion are unknown.
    • The research group used geometry to develop artificial intelligence that can find the hidden equation of motion in the supplied observational data (regardless of its format) and model it accordingly.
    • In the future, it may be possible to discover the hidden physical laws behind phenomena that had previously been considered incompatible with Newton’s Laws, such as ecosystem changes. This would enable investigations and simulations of these phenomena using the laws of physics, which could reveal previously unknown properties.
    Research Background
    Ordinarily, predictions of physical phenomena are carried out via simulations on supercomputers. These simulations use mathematical models based on the laws of physics, but if the model is not highly reliable, the results will also lack reliability. It is therefore essential to develop methods for producing highly reliable models from the observational data of phenomena. Furthermore, in recent years the range of physics applications has expanded beyond expectations; for example, it has been demonstrated that Newton’s Laws can be applied as part of a model of ecosystem changes. However, in many such cases a concrete equation of motion has not yet been revealed.
    Research Methodology
    This study developed a method of discovering novel equations of motion in observational data for phenomena to which Newton’s Laws can be applied. Research into discovering equations of motion from data has been conducted before, but prior methods required the data to be in a format that fits an assumed special form of the equation of motion. In reality, it is often unclear which data format is best to use, making it difficult to apply such methods to realistic data.
    In response to this, the researchers considered that the appropriate transformation of observational data is akin to a coordinate transformation in geometry, and resolved the issue by applying the geometric idea of coordinate-transformation invariance found in physics. For this, it is necessary to uncover the unknown geometric properties behind the phenomena, and the team succeeded in developing AI that can find these geometric properties in data. If equations of motion can be extracted from data, it becomes possible to use them to create models and simulations that are faithful to physical laws.
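    The general flavour of learning dynamics that respect physical structure can be conveyed with a small example. The sketch below fits a neural network representing a scalar energy-like function and derives the motion from it through Hamilton's equations; it is a generic, simplified stand-in rather than the geometric architecture proposed by the Kobe and Osaka researchers, and the pendulum data, network size and training settings are all assumptions.

    ```python
    # Minimal sketch of structure-preserving learning of dynamics: learn a scalar
    # function H(q, p) and predict the motion via Hamilton's equations
    #   dq/dt = dH/dp,  dp/dt = -dH/dq,
    # so the learned model respects that physical structure by construction.
    import torch

    def pendulum_derivs(state):            # "observed" derivatives: dq/dt = p, dp/dt = -sin(q)
        q, p = state[:, :1], state[:, 1:]
        return torch.cat([p, -torch.sin(q)], dim=1)

    states = torch.rand(512, 2) * 4 - 2    # random (q, p) samples standing in for observations
    derivs = pendulum_derivs(states)

    H = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
    opt = torch.optim.Adam(H.parameters(), lr=1e-3)

    for step in range(2000):
        x = states.clone().requires_grad_(True)
        grad = torch.autograd.grad(H(x).sum(), x, create_graph=True)[0]
        pred = torch.cat([grad[:, 1:], -grad[:, :1]], dim=1)   # (dH/dp, -dH/dq)
        loss = torch.mean((pred - derivs) ** 2)
        opt.zero_grad(); loss.backward(); opt.step()

    print(f"final fit error on the observed derivatives: {loss.item():.4f}")
    ```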
    Further Developments
    Physics simulations are carried out in a wide range of fields, including weather forecasting, drug discovery, building analyses, and car design, but they usually require extensive calculations. However, if AI can learn from the data of specific phenomena and construct small-scale models using the proposed method, then this will simplify and speed up calculations that are faithful to the laws of physics. This will contribute towards the development of the aforementioned fields.
    Furthermore, we can apply this method to areas that at first glance appear to be unrelated to physics. If equations of motion can be extracted in such cases, it will become possible to carry out physics-based investigations and simulations even for phenomena that have been considered impossible to explain using physics. For example, it may be possible to find a hidden equation of motion in animal population data that shows the change in the number of individuals. This could be used to investigate ecosystem sustainability by applying the appropriate physical laws (e.g., the law of conservation of energy).
    Story Source:
    Materials provided by Kobe University. Note: Content may be edited for style and length.

  •

    Key step toward personalized medicine: Modeling biological systems

    A new study by the Oregon State University College of Engineering shows that machine learning techniques can offer powerful new tools for advancing personalized medicine, care that optimizes outcomes for individual patients based on unique aspects of their biology and disease features.
    The research with machine learning, a branch of artificial intelligence in which computer systems use algorithms and statistical models to look for trends in data, tackles long-unsolvable problems in biological systems at the cellular level, said Oregon State’s Brian D. Wood, who conducted the study with then OSU Ph.D. student Ehsan Taghizadeh and Helen M. Byrne of the University of Oxford.
    “Those systems tend to have high complexity — first because of the vast number of individual cells and second, because of the highly nonlinear way in which cells can behave,” said Wood, a professor of environmental engineering. “Nonlinear systems present a challenge for upscaling methods, which is the primary means by which researchers can accurately model biological systems at the larger scales that are often the most relevant.”
    A linear system in science or mathematics means any change to the system’s input results in a proportional change to the output; a linear equation, for example, might describe a slope that gains 2 feet vertically for every foot of horizontal distance.
    Nonlinear systems don’t work that way, and many of the world’s systems, including biological ones, are nonlinear.
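    As a small numerical illustration of that distinction (with toy functions chosen purely for this example), doubling the input doubles the output of a linear map but not of a nonlinear one:

    ```python
    # Toy illustration: a proportional (linear) response versus a non-proportional one.
    def linear(x):
        return 2.0 * x     # e.g. a slope of 2: rise of 2 for every unit of run

    def nonlinear(x):
        return x ** 2      # output is not proportional to the input

    for f in (linear, nonlinear):
        print(f.__name__, "doubles with the input:", f(2.0) == 2 * f(1.0))
    ```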
    The new research, funded in part by the U.S. Department of Energy and published in the Journal of Computational Physics, is one of the first examples of using machine learning to address issues with modeling nonlinear systems and understanding complex processes that might occur in human tissues, Wood said.

  •

    Community of ethical hackers needed to prevent AI's looming 'crisis of trust'

    The Artificial Intelligence industry should create a global community of hackers and “threat modellers” dedicated to stress-testing the harm potential of new AI products in order to earn the trust of governments and the public before it’s too late.
    This is one of the recommendations made by an international team of risk and machine-learning experts, led by researchers at the University of Cambridge’s Centre for the Study of Existential Risk (CSER), who have authored a new “call to action” published today in the journal Science.
    They say that companies building intelligent technologies should harness techniques such as “red team” hacking, audit trails and “bias bounties” — paying out rewards for revealing ethical flaws — to prove their integrity before releasing AI for use on the wider public.
    Otherwise, the industry faces a “crisis of trust” in the systems that increasingly underpin our society, as public concern continues to mount over everything from driverless cars and autonomous drones to secret social media algorithms that spread misinformation and provoke political turmoil.
    The novelty and “black box” nature of AI systems, and the ferocious competition in the race to the marketplace, have hindered the development and adoption of auditing and third-party analysis, according to lead author Dr Shahar Avin of CSER.
    The experts argue that incentives to increase trustworthiness should not be limited to regulation, but must also come from within an industry yet to fully comprehend that public trust is vital for its own future — and trust is fraying.

  •

    Machine learning decodes tremors of the universe

    Black holes are among the greatest mysteries of our Universe. They are extraordinarily compact: a black hole with the mass of our Sun has a radius of only 3 kilometers. Black holes in orbit around each other give off gravitational radiation — oscillations of space and time predicted by Albert Einstein in 1916. This causes the orbit to become faster and tighter, and eventually the black holes merge in a final burst of radiation. These gravitational waves propagate through the Universe at the speed of light and are detected by observatories in the USA (LIGO) and Italy (Virgo). Scientists compare the data collected by the observatories against theoretical predictions to estimate the properties of the source, including how large the black holes are and how fast they are spinning. Currently, this procedure takes at least hours, often months.
    An interdisciplinary team of researchers from the Max Planck Institute for Intelligent Systems (MPI-IS) in Tübingen and the Max Planck Institute for Gravitational Physics (Albert Einstein Institute/AEI) in Potsdam is using state-of-the-art machine learning methods to speed up this process. They developed an algorithm using a deep neural network, a complex computer code built from a sequence of simpler operations, inspired by the human brain. Within seconds, the system infers all properties of the binary black-hole source. Their research results were published in the flagship journal of Physics, Physical Review Letters.
    “Our method can make very accurate statements in a few seconds about how big and massive the two black holes were that generated the gravitational waves when they merged. How fast do the black holes rotate, how far away are they from Earth and from which direction is the gravitational wave coming? We can deduce all this from the observed data and even make statements about the accuracy of this calculation,” explains Maximilian Dax, first author of the study Real-Time Gravitational Wave Science with Neural Posterior Estimation and Ph.D. student in the Empirical Inference Department at MPI-IS.
    The researchers trained the neural network with many simulations — predicted gravitational-wave signals for hypothetical binary black-hole systems combined with noise from the detectors. This way, the network learns the correlations between the measured gravitational-wave data and the parameters characterizing the underlying black-hole system. It takes ten days for the algorithm called DINGO (the abbreviation stands for Deep INference for Gravitational-wave Observations) to learn. Then it is ready for use: the network deduces the size, the spins, and all other parameters describing the black holes from data of newly observed gravitational waves in just a few seconds. The high-precision analysis decodes ripples in space-time almost in real-time — something that has never been done with such speed and precision. The researchers are convinced that the improved performance of the neural network as well as its ability to better handle noise fluctuations in the detectors will make this method a very useful tool for future gravitational-wave observations.
    “The further we look into space through increasingly sensitive detectors, the more gravitational-wave signals are detected. Fast methods such as ours are essential for analyzing all of this data in a reasonable amount of time,” says Stephen Green, senior scientist in the Astrophysical and Cosmological Relativity department at the AEI. “DINGO has the advantage that — once trained — it can analyze new events very quickly. Importantly, it also provides detailed uncertainty estimates on parameters, which have been hard to produce in the past using machine-learning methods.” Until now, researchers in the LIGO and Virgo collaborations have used computationally very time-consuming algorithms to analyze the data. They need millions of new simulations of gravitational waveforms for the interpretation of each measurement, which leads to computing times of several hours to months — DINGO avoids this overhead because a trained network does not need any further simulations for analyzing newly observed data, a process known as ‘amortized inference’.
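    The amortized-inference idea, paying the simulation and training cost once and then analyzing each new event with a single cheap pass through the trained network, can be sketched in a few lines. The toy "waveform" simulator, the small regression network and all settings below are assumptions made only for illustration; the actual DINGO method estimates full posterior distributions with far more sophisticated machinery.

    ```python
    # Minimal sketch of amortized inference: train once on simulated signal+noise
    # examples labelled with their source parameters, then read parameters off
    # newly observed data in a single forward pass.
    import math
    import torch

    def simulate(theta, t):
        """Toy stand-in for a waveform model: a damped chirp set by two parameters."""
        f0, amp = theta[:, :1], theta[:, 1:]
        return amp * torch.sin(2 * math.pi * f0 * t * t) * torch.exp(-t)

    t = torch.linspace(0.0, 1.0, 256).unsqueeze(0)
    theta_train = torch.rand(2048, 2) * torch.tensor([4.0, 2.0]) + torch.tensor([1.0, 0.5])
    data_train = simulate(theta_train, t) + 0.05 * torch.randn(2048, 256)  # signal plus noise

    # Expensive part, done once: train a network to map noisy data to source parameters.
    net = torch.nn.Sequential(torch.nn.Linear(256, 128), torch.nn.ReLU(), torch.nn.Linear(128, 2))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(1000):
        loss = torch.mean((net(data_train) - theta_train) ** 2)
        opt.zero_grad(); loss.backward(); opt.step()

    # Cheap part, repeated per event: a single forward pass on newly "observed" data.
    theta_true = torch.tensor([[3.0, 1.2]])
    observation = simulate(theta_true, t) + 0.05 * torch.randn(1, 256)
    estimate = net(observation).detach().numpy().round(2)
    print("true parameters:      ", theta_true.tolist())
    print("inferred in one pass: ", estimate.tolist())
    ```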
    The method also holds promise for more complex gravitational-wave signals, such as those from binary black-hole configurations whose analysis with current algorithms is very time-consuming, and for binary neutron stars. Whereas the collision of black holes releases energy exclusively in the form of gravitational waves, merging neutron stars also emit radiation in the electromagnetic spectrum. They are therefore also visible to telescopes, which have to be pointed at the respective region of the sky as quickly as possible in order to observe the event. To do this, one needs to determine very quickly where the gravitational wave is coming from, which the new machine learning method facilitates. In the future, this information could be used to point telescopes in time to observe electromagnetic signals from the collisions of neutron stars, or of a neutron star with a black hole.
    Alessandra Buonanno, director at the AEI, and Bernhard Schölkopf, a Director at the MPI-IS, are thrilled with the prospect of taking their successful collaboration to the next level. Buonanno expects that “going forward, these approaches will also enable a much more realistic treatment of the detector noise and of the gravitational signals than is possible today using standard techniques,” and Schölkopf adds that such “simulation-based inference using machine learning could be transformative in many areas of science where we need to infer a complex model from noisy observations.”
    Story Source:
    Materials provided by Max Planck Institute for Intelligent Systems. Note: Content may be edited for style and length.