More stories

  • Twisted science: New quantum ruler to explore exotic matter

    A single-atom-thick sheet of carbon known as graphene has remarkable properties on its own, but things can get even more interesting when you stack up multiple sheets. When two or more overlying sheets of graphene are slightly misaligned — twisted at certain angles relative to each other — they take on a plethora of exotic identities. Depending on the twist angle, these materials, known as moiré quantum matter, can suddenly generate their own magnetic fields, become superconductors with zero electrical resistance, or, conversely, turn into perfect insulators.
    Joseph A. Stroscio and his colleagues at the National Institute of Standards and Technology (NIST), along with an international team of collaborators, have developed a “quantum ruler” to measure and explore the strange properties of these twisted materials. The work may also lead to a new, miniaturized standard for electrical resistance that could calibrate electronic devices directly on the factory floor, eliminating the need to send them to an off-site standards laboratory.
    Collaborator Fereshte Ghahari, a physicist from George Mason University in Fairfax, Virginia, took two layers of graphene (known as bilayer graphene), measuring about 20 micrometers across, and twisted them relative to another two layers to create a moiré quantum matter device. Ghahari made the device using the nanofabrication facility at NIST’s Center for Nanoscale Science and Technology. NIST researchers Marlou Slot and Yulia Maximenko then chilled this twisted-material device to one-hundredth of a degree above absolute zero, reducing the random motion of atoms and electrons and heightening the electrons’ ability to interact. After reaching ultralow temperatures, they examined how the energy levels of electrons in the layers of graphene changed as they varied the strength of a strong external magnetic field. Measuring and manipulating the energy levels of electrons is critical for designing and manufacturing semiconductor devices.
    To measure the energy levels, the team used a versatile scanning tunneling microscope that Stroscio designed and built at NIST. When the researchers applied a voltage to the graphene bilayers in the magnetic field, the microscope recorded the tiny current from the electrons that “tunneled” out from the material to the microscope probe tip.
    In a magnetic field, electrons move in circular paths. Ordinarily, the circular orbits of the electrons in solid materials have a special relationship with an applied magnetic field: The area enclosed by each circular orbit, multiplied by the applied field, can only take on a set of fixed, discrete values, due to the quantum nature of electrons. In order to maintain that fixed product, if the magnetic field is halved, then the area enclosed by an orbiting electron must double. The difference in energy between successive energy levels that follow this pattern can be used like tick marks on a ruler to measure the material’s electronic and magnetic properties. Any subtle deviation from this pattern would represent a new quantum ruler that can reflect the orbital magnetic properties of the particular quantum moiré material researchers are studying.
    In fact, when the NIST researchers varied the magnetic field applied to the moiré graphene bilayers, they found evidence of a new quantum ruler at play. The area enclosed by the circular orbit of electrons multiplied by the applied magnetic field no longer equaled a fixed value. Instead, the product of those two numbers had shifted by an amount dependent on the magnetization of the bilayers.
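    In equation form, the usual pattern is the textbook flux-quantization condition, and the deviation can be written schematically as a magnetization-dependent offset (the exact expression the team derives is in their paper; the second line below is only a sketch):

        % Usual rule: orbit area A_n times field B takes fixed, discrete values
        B \, A_n = (n + \gamma)\,\frac{h}{e}

        % Schematic form of the observed deviation: an extra shift \delta that
        % depends on the magnetization M of the bilayers
        B \, A_n = (n + \gamma)\,\frac{h}{e} + \delta(M)

    Here n is an integer, \gamma is a constant offset, h is Planck’s constant, e is the elementary charge, and \delta(M) stands for the magnetization-dependent shift.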
    This deviation translated into a set of different tick marks for the energy levels of the electrons. The findings promise to shed new light on how electrons confined to twisted sheets of graphene give rise to new magnetic properties. More

  • AI-driven earthquake forecasting shows promise in trials

    A new attempt to predict earthquakes with the aid of artificial intelligence has raised hopes that the technology could one day be used to limit earthquakes’ impact on lives and economies. Developed by researchers at The University of Texas at Austin, the AI algorithm correctly predicted 70% of earthquakes a week before they happened during a seven-month trial in China.
    The AI was trained to detect statistical bumps in real-time seismic data that researchers had paired with previous earthquakes. The outcome was a weekly forecast in which the AI successfully predicted 14 earthquakes within about 200 miles of where it estimated they would happen and at almost exactly the calculated strength. It missed one earthquake and gave eight false warnings.
    It’s not yet known if the same approach will work at other locations, but the effort is a milestone in research for AI-driven earthquake forecasting.
    “Predicting earthquakes is the holy grail,” said Sergey Fomel, a professor in UT’s Bureau of Economic Geology and a member of the research team. “We’re not yet close to making predictions for anywhere in the world, but what we achieved tells us that what we thought was an impossible problem is solvable in principle.”
    The trial was part of an international competition held in China, in which the UT-developed AI placed first among some 600 other designs. UT’s entry was led by bureau seismologist and the AI’s lead developer, Yangkang Chen. Findings from the trial are published in the journal Bulletin of the Seismological Society of America.
    “You don’t see earthquakes coming,” said Alexandros Savvaidis, a senior research scientist who leads the bureau’s Texas Seismological Network Program (TexNet) — the state’s seismic network. “It’s a matter of milliseconds, and the only thing you can control is how prepared you are. Even with 70%, that’s a huge result and could help minimize economic and human losses and has the potential to dramatically improve earthquake preparedness worldwide.”
    The researchers said that their method had succeeded by following a relatively simple machine learning approach. The AI was given a set of statistical features based on the team’s knowledge of earthquake physics, then told to train itself on a five-year database of seismic recordings.
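    The paper’s exact pipeline is not spelled out in this summary. As a rough sketch of that general recipe (hand-crafted statistical features plus a standard learner), with every feature name, shape, and threshold below a stand-in rather than the UT team’s choices, one might write:

        # Rough sketch of a "statistical features + standard learner" forecaster.
        # Features, data shapes, and the alert threshold are illustrative only.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)

        # One row per week: summary statistics computed from continuous seismic
        # recordings (e.g., amplitude variance, event rate); values are synthetic.
        X_train = rng.normal(size=(5 * 52, 4))    # ~5 years of weekly feature rows
        y_train = rng.uniform(0, 6, size=5 * 52)  # target: strongest event next week

        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X_train, y_train)

        x_this_week = rng.normal(size=(1, 4))
        magnitude = model.predict(x_this_week)[0]
        if magnitude > 3.0:  # illustrative alert threshold
            print(f"Forecast: possible M{magnitude:.1f} event within the week")

    The simplicity is the point: the physics knowledge lives in the features, while the learner itself stays standard. More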

  • New open-source method to improve decoding of single-cell data

    Researchers at Memorial Sloan Kettering Cancer Center (MSK) have developed a new open-source computational method, dubbed Spectra, which improves the analysis of single-cell transcriptomic data.
    By guiding data analysis in a unique way, Spectra can offer new insights into the complex interplay between cells — like the interactions between cancer cells and immune cells, which are critical to improving immunotherapy treatments.
    The team’s approach and findings were recently published in Nature Biotechnology.
    Spectra, the researchers note, can cut through technical “noise” to identify functionally relevant gene expression programs, including those that are novel or highly specific to a particular biological context.
    The algorithm is well suited to study data from large patient cohorts and to suss out clinically meaningful patient characteristics, the MSK team writes in a research briefing that accompanies the study, adding that Spectra is ideal for identifying biomarkers and drug targets in the burgeoning field of immuno-oncology.
    Additionally, the MSK team has made Spectra freely available to researchers around the world.
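    Spectra’s actual interface is not reproduced here. As a generic illustration of the underlying idea, factoring a cell-by-gene expression matrix into a small set of interpretable gene expression programs, a plain matrix-factorization sketch might look like the following (the data and program count are made up):

        # Generic matrix-factorization sketch (not Spectra's API): decompose a
        # cell-by-gene counts matrix into gene "programs" and per-cell scores.
        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        counts = rng.poisson(1.0, size=(500, 2000)).astype(float)  # 500 cells, 2000 genes

        nmf = NMF(n_components=10, init="nndsvda", random_state=0, max_iter=400)
        cell_scores = nmf.fit_transform(counts)  # (500, 10): program activity per cell
        programs = nmf.components_               # (10, 2000): gene loadings per program

        # Top-loading genes for the first program (indices here; real data
        # would map these back to gene names)
        print(np.argsort(programs[0])[::-1][:10])

    Spectra, by contrast, guides this kind of factorization with prior biological knowledge, which is how it can pin factors to known biology and cut through technical noise; the unguided baseline above is only a starting point.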
    “I’m trained as a computer scientist,” says study senior author Dana Pe’er, PhD, who chairs the Computational and Systems Biology Program at MSK’s Sloan Kettering Institute. “Every single tool I build, I strive to make robust so it can be used in many contexts, not just one. I also try and make them as accessible as possible.”
    “I’m happy to discover new biology,” she continues. “And I’m just as happy — perhaps happier — to build a foundational tool that can be used by the wider community to make many biological discoveries.” More

  • Comfort with a smaller carbon footprint

    As organizations work to reduce their energy consumption and associated carbon emissions, one area that remains to be optimized is indoor heating and cooling. In fact, HVAC — which stands for Heating, Ventilation, and Air Conditioning — represents, on average, about 40% of a building’s total energy use. Methods that conserve electricity while still providing a comfortable indoor environment for workers could make a significant difference in the fight against climate change.
    Now, researchers from Osaka University have demonstrated significant energy savings through the application of a new, AI-driven algorithm for controlling HVAC systems. This method does not require complex physics modelling, or even detailed previous knowledge about the building itself.
    During cold weather, it is sometimes challenging for conventional sensor-based systems to determine when the heating should be shut off. This is due to thermal interference from lighting, equipment, or even the heat produced by the workers themselves. This can lead to the HVAC being activated when it should not be, wasting energy.
    To overcome these obstacles, the researchers employed a control algorithm that predicts the thermodynamic response of the building from collected data. This approach can be more effective than attempting to explicitly calculate the impact of the multitude of complex factors that affect the temperature, such as insulation and heat generation. With enough information, ‘data-driven’ approaches can often outperform even sophisticated physical models. Here, the HVAC control system was designed to ‘learn’ the symbolic relationships between the variables, including power consumption, from a large dataset.
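    As a toy sketch of what such data-driven predictive control can look like (the model form, features, and comfort rule below are illustrative assumptions, not the Osaka team’s design), one might fit a simple regression for the room’s thermal response and use it to decide when the heating can be switched off early:

        # Toy data-driven HVAC sketch: learn how room temperature responds to
        # heating, then switch the heater off once the model predicts comfort
        # will hold without it. All features and numbers are illustrative.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)

        # Hypothetical history rows: [room temp, outdoor temp, heater on, occupancy]
        X = rng.normal(loc=[21.0, 5.0, 0.5, 0.5], scale=[2.0, 5.0, 0.5, 0.5],
                       size=(1000, 4))
        # Target: room temperature 30 minutes later (synthetic response + noise)
        y = (X[:, 0] + 0.8 * X[:, 2] + 0.3 * X[:, 3]
             - 0.05 * (X[:, 0] - X[:, 1]) + rng.normal(0, 0.1, size=1000))

        model = LinearRegression().fit(X, y)

        def heater_command(temp_now, temp_out, occupancy, setpoint=21.0):
            """Turn the heater off whenever the model predicts the room stays warm."""
            predicted = model.predict([[temp_now, temp_out, 0.0, occupancy]])[0]
            return "off" if predicted >= setpoint else "on"

        print(heater_command(temp_now=22.5, temp_out=4.0, occupancy=1.0))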
    The algorithm was able to save energy while still allowing the building occupants to work in comfort. “Our autonomous system showed significant energy savings, of 30% or more for office buildings, by leveraging the predictive power of machine learning to optimize the times the HVAC should operate,” says lead author Dafang Zhao. “Importantly, the rooms were comfortably warm despite it being winter.”
    The algorithm worked to minimize the total energy consumed, the difference between the actual and desired room temperature, and the change in the rate of power output at peak demand. “Our system can be easily customized to prioritize energy conservation or temperature accuracy, depending on the needs of the situation,” adds senior author Ittetsu Taniguchi.
    To collectively achieve the goal of a carbon-neutral economy, it is highly likely that corporations will need to be at the vanguard of innovation. The researchers note that their approach may enjoy rapid adoption during times of rising energy costs, making their findings good for both the environment and company viability. More

  • New technology could reduce lag, improve reliability of online gaming, meetings

    Whether you’re battling foes in a virtual arena or collaborating with colleagues across the globe, lag-induced disruptions can be a major hindrance to seamless communication and immersive experiences.
    That’s why researchers with the University of Central Florida’s College of Optics and Photonics (CREOL) and the University of California, Los Angeles, have developed new technology to make data transfer over optical fiber communication faster and more efficient.
    Their new development, a novel class of optical modulators, is detailed in a study published recently in the journal Nature Communications. A modulator can be thought of as a light switch that controls certain properties of the data-carrying light in an optical communication system.
    “Carrying torrents of data between internet hubs and connecting servers, storage elements, and switches inside data centers, optical fiber communication is the backbone on which the digital world is built,” says Sasan Fathpour, the study’s co-author and CREOL professor. “The basic constituents of such links, the optical fiber, semiconductor laser, optical modulator and photoreceiver, all place limits on the bandwidth and the accuracy of data transmission.”
    Fathpour says the dispersion of optical fibers, or signal distortion over long distances, and the noise of semiconductor lasers, or unwanted signal interference, are two fundamental limitations of optical communication and signal-processing systems that affect data transmission and reliability.
    He says the team has invented a unique class of optical modulators that addresses both limitations simultaneously by taking advantage of phase diversity, or varied timing of signals, and differential operation, or comparison of light signals.
    By doing so, the researchers have created an advanced “light switch” that not only controls data transmission but does so while comparing the amount and timing of data moving through the system to ensure accurate and efficient transmission.
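    As a purely conceptual illustration of why differential operation helps (a toy model of the idea, not the device reported in the paper), encoding data with opposite signs on two channels lets subtraction cancel the noise that both channels share:

        # Toy illustration of differential operation (not the UCF/UCLA device):
        # encode data with opposite signs on two channels, add identical noise
        # to both, and recover the data by subtracting one channel from the other.
        import numpy as np

        rng = np.random.default_rng(0)
        data = rng.choice([-1.0, 1.0], size=10)      # toy bipolar data symbols
        common_noise = rng.normal(0, 0.5, size=10)   # noise shared by both channels

        channel_a = +data + common_noise
        channel_b = -data + common_noise

        recovered = (channel_a - channel_b) / 2      # differencing cancels the noise
        print(np.allclose(recovered, data))          # True: data recovered exactly

    The reported modulators implement this kind of comparison optically rather than in software. More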

  • Machine learning used to probe the building blocks of shapes

    Applying machine learning to find the properties of atomic pieces of geometry shows how AI has the power to accelerate discoveries in maths.
    Mathematicians from Imperial College London and the University of Nottingham have, for the first time, used machine learning to expand and accelerate work identifying ‘atomic shapes’ that form the basic pieces of geometry in higher dimensions. Their findings have been published in Nature Communications.
    The way they used artificial intelligence, in the form of machine learning, could transform how maths is done, say the authors. Dr Alexander Kasprzyk from the University of Nottingham said: “For mathematicians, the key step is working out what the pattern is in a given problem. This can be very difficult, and some mathematical theories can take years to discover.”
    Professor Tom Coates, from the Department of Mathematics at Imperial, added: “We have shown that machine learning can help uncover patterns within mathematical data, giving us both new insights and hints of how they can be proved.”
    PhD student Sara Veneziale, from the Department of Mathematics at Imperial, said: “This could be very broadly applicable, such that it could rapidly accelerate the pace at which maths discoveries are made. It’s like when computers were first used in maths research, or even calculators: it’s a step-change in the way we do maths.”
    Defining shapes
    Mathematicians describe shapes using equations, and by analysing these equations can break the shape down into fundamental pieces. These are the building blocks of shapes, the equivalent of atoms, and are called Fano varieties.
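    The pipeline behind the paper is not described in this summary; as a rough sketch of the general recipe (train a standard classifier on numerical invariants of candidate shapes, with every detail below a stand-in), it might look like this:

        # Rough sketch of "machine learning on mathematical data" (details are
        # stand-ins, not the Imperial/Nottingham pipeline): train a classifier to
        # predict a property of a geometric object from its numerical invariants.
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Hypothetical dataset: each row holds numerical invariants of one
        # candidate shape, each label a discrete property we want to predict.
        X = rng.normal(size=(5000, 20))
        y = rng.integers(0, 4, size=5000)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
        clf.fit(X_tr, y_tr)

        # High held-out accuracy on real invariants would hint at a hidden
        # pattern; on random data like this it stays near chance (~0.25).
        print("held-out accuracy:", clf.score(X_te, y_te))

    A strong score on real invariants is exactly the kind of “pattern found” signal that the authors describe turning into new insights and proofs. More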

  • Birders and AI push bird conservation to the next level

    For the first time, big data and artificial intelligence (AI) are being used to model hidden patterns in nature, not just for one bird species, but for entire ecological communities across continents. And the models follow each species’ full annual life cycle, from breeding to fall migration to nonbreeding grounds, and back north again during spring migration. It begins with the more than 900,000 birders who report their sightings to the Cornell Lab of Ornithology’s eBird program, one of the world’s largest biodiversity science projects. When combined with innovations in technology and artificial intelligence — the same innovations that power self-driving cars and real-time language translation — these sightings are revealing more than ever about patterns of bird biodiversity, and the processes that underlie them.
    The development and application of this revolutionary computational tool is the result of a collaboration between the Cornell Lab of Ornithology and the Cornell Institute for Computational Sustainability. This work is now published in the journal Ecology.
    “This method uniquely tells us which species occur where, when, with what other species, and under what environmental conditions,” said lead author Courtney Davis, a researcher at the Cornell Lab. “With that type of information, we can identify and prioritize landscapes of high conservation value — vital information in this era of ongoing biodiversity loss.”
    “This model is very general and is suitable for various tasks, provided there’s enough data,” said study co-author Carla Gomes, director of the Cornell Institute for Computational Sustainability. “This work on joint bird species distribution modeling is about predicting the presence and absence of species, but we are also developing models to estimate bird abundance — the number of individual birds per species. We’re also aiming to enhance the model by incorporating bird calls alongside visual observations.”
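    As a schematic of what joint presence/absence prediction means in code (a toy stand-in, not the Cornell model), one classifier can be fit per species over shared environmental features, yielding a whole predicted community at once:

        # Toy joint species-distribution sketch (not the Cornell model): predict
        # presence/absence of many species at once from environmental features.
        import numpy as np
        from sklearn.multioutput import MultiOutputClassifier
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)

        # Hypothetical checklist data: rows are surveyed sites/weeks, columns are
        # environmental covariates (habitat, elevation, week of year, effort...).
        X = rng.normal(size=(2000, 8))
        # Labels: one presence/absence column per species (25 fake species here).
        Y = (rng.random(size=(2000, 25)) < 0.3).astype(int)

        model = MultiOutputClassifier(RandomForestClassifier(n_estimators=100,
                                                             random_state=0))
        model.fit(X, Y)

        # Predicted community at a new site: a 25-long presence/absence vector,
        # which is what lets such a method say which species occur where, when,
        # and with what other species.
        print(model.predict(rng.normal(size=(1, 8)))[0])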
    Cross-disciplinary collaborations like this are necessary for the future of biodiversity conservation, according to Daniel Fink, researcher at the Cornell Lab and senior author of the study.
    “The task at hand is too big for ecologists to do on their own — we need the expertise of our colleagues in computer science and computational sustainability to develop targeted plans for landscape-scale conservation, restoration, and management around the world.”
    This work was funded by the National Science Foundation, The Leon Levy Foundation, The Wolf Creek Foundation, the Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship — a Schmidt Future program, the Air Force Office of Scientific Research, and the U.S. Department of Agriculture’s National Institute of Food and Agriculture. More

  • Could future AI crave a favorite food?

    Can artificial intelligence (AI) get hungry? Develop a taste for certain foods? Not yet, but a team of Penn State researchers is developing a novel electronic tongue that mimics how taste influences what we eat based on both needs and wants, providing a possible blueprint for AI that processes information more like a human being.
    Human behavior is complex, a nebulous compromise and interaction between our physiological needs and psychological urges. While artificial intelligence has made great strides in recent years, AI systems do not incorporate the psychological side of our human intelligence. For example, emotional intelligence is rarely considered as part of AI.
    “The main focus of our work was how could we bring the emotional part of intelligence to AI,” said Saptarshi Das, associate professor of engineering science and mechanics at Penn State and corresponding author of the study published recently in Nature Communications. “Emotion is a broad field and many researchers study psychology; however, for computer engineers, mathematical models and diverse data sets are essential for design purposes. Human behavior is easy to observe but difficult to measure and that makes it difficult to replicate in a robot and make it emotionally intelligent. There is no real way right now to do that.”
    Das noted that our eating habits are a good example of emotional intelligence and of the interaction between the physiological and psychological states of the body. What we eat is heavily influenced by the process of gustation, which refers to how our sense of taste helps us decide what to consume based on flavor preferences. This is different from hunger, the physiological reason for eating.
    “If you are someone fortunate to have all possible food choices, you will choose the foods you like most,” Das said. “You are not going to choose something that is very bitter, but likely try for something sweeter, correct?”
    Anyone who has felt full after a big lunch and still was tempted by a slice of chocolate cake at an afternoon workplace party knows that a person can eat something they love even when not hungry.
    “If you are given food that is sweet, you would eat it in spite of your physiological condition being satisfied, unlike if someone gave you say a hunk of meat,” Das said. “Your psychological condition still wants to be satisfied, so you will have the urge to eat the sweets even when not hungry.”
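    To make the needs-versus-wants interplay concrete, here is a deliberately simple toy model (an illustration of the concept only, not the Penn State hardware, which realizes this behavior in electronic circuits): an “eat” decision driven by a weighted mix of physiological hunger and psychological preference.

        # Deliberately simple toy of gustatory decision-making (an illustration,
        # not the Penn State electronic tongue): weigh physiological hunger
        # against psychological preference for the food on offer.
        import math

        def eat_probability(hunger, preference, w_phys=1.0, w_psych=0.8, bias=-0.5):
            """hunger, preference in [0, 1]; returns probability of choosing to eat."""
            drive = w_phys * hunger + w_psych * preference + bias
            return 1.0 / (1.0 + math.exp(-drive))  # logistic squashing to [0, 1]

        # Full after lunch (hunger ~0) but offered chocolate cake (preference ~1):
        print(f"cake: {eat_probability(hunger=0.1, preference=0.9):.2f}")
        # Full and offered a hunk of meat (low preference right now):
        print(f"meat: {eat_probability(hunger=0.1, preference=0.2):.2f}")

    Tuning the weights shifts the balance between needs and wants, which is the trade-off the researchers aim to emulate in hardware.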
    While there are still many questions regarding the neuronal circuits and molecular-level mechanisms within the brain that underlie hunger perception and appetite control, Das said, advances such as improved brain imaging have offered more information on how these circuits work in regard to gustation. More