More stories

  • Engineering the quantum states in solids using light

    A POSTECH research team led by Professors Gil-Ho Lee and Gil Young Cho (Department of Physics) has developed a platform that can both control the properties of solid materials with light and measure them. The findings were published in the journal Nature on March 15, 2022 (GMT).
    The electrical properties of a material are determined by how electrons move inside it. For example, a material is a metal if its electrons can move freely; otherwise it is an insulator. The electrical properties of solids have generally been changed by applying heat or pressure or by adding impurities, because shifting the positions of the atoms in a solid changes the movement of its electrons accordingly.
    In contrast, the Floquet state has been proposed: when light is irradiated on matter, replicas of the original quantum state appear. Using this concept, the quantum states of matter can be manipulated with light alone, which could be used effectively in quantum systems.
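    In textbook Floquet theory (general background, not a formula quoted from the paper), periodically driving a material at frequency ω dresses each quantum level with replicas shifted by whole photon energies:

```latex
% Floquet sidebands: a level at energy \varepsilon_0 acquires replica
% states separated by the drive photon energy \hbar\omega.
\varepsilon_n = \varepsilon_0 + n \hbar \omega, \qquad n = 0, \pm 1, \pm 2, \ldots
```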
    In previous experiments, the light intensity needed to realize a Floquet state in solids was enormous because of the high frequency of the light used. Moreover, the Floquet states lasted only about 250 femtoseconds (1 femtosecond is one quadrillionth of a second). Their transient nature has limited more quantitative studies of their characteristics.
    The POSTECH research team succeeded in experimentally realizing a steady Floquet state in a graphene Josephson junction (GJJ) by irradiating it with continuous microwaves. The light intensity was reduced to one trillionth of the value in previous experiments, significantly reducing heat generation and enabling continuously long-lasting Floquet states.
    The research team also developed a novel superconducting tunneling spectroscopy to measure the Floquet states with high energy resolution. This is necessary to quantitatively verify the characteristics of the Floquet states, which vary depending on the intensity, frequency and polarization of the light applied to the device.
    “This study is significant in that we have created a platform that can study the Floquet state in detail,” explained Professors Gil-Ho Lee and Gil Young Cho, who led the study. They added, “We plan to further investigate the correlation between properties of light, such as polarization, and the Floquet states.”
    This study was conducted with support from the Samsung Science and Technology Foundation, the National Research Foundation of Korea, the Institute for Basic Science, the Air Force Office of Scientific Research, and the Elemental Strategy Initiative conducted by MEXT.
    Story Source:
    Materials provided by Pohang University of Science & Technology (POSTECH). Note: Content may be edited for style and length.

  • Planet-scale MRI

    Earthquakes do more than buckle streets and topple buildings. Seismic waves generated by earthquakes pass through the Earth, acting like a giant MRI machine and providing clues to what lies inside the planet.
    Seismologists have developed methods to take wave signals from the networks of seismometers at the Earth’s surface and reverse engineer features and characteristics of the medium they pass through, a process known as seismic tomography.
    For decades, seismic tomography was based on ray theory, and seismic waves were treated like light rays. This served as a pretty good approximation and led to major discoveries about the Earth’s interior. But to improve the resolution of current seismic tomographic models, seismologists need to take into account the full complexity of wave propagation using numerical simulations, known as full-waveform inversion, says Ebru Bozdag, assistant professor in the Geophysics Department at the Colorado School of Mines.
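    Schematically, full-waveform inversion updates an Earth model by minimizing the mismatch between recorded seismograms and synthetic ones computed with full 3D wave simulations; a common least-squares form of the misfit (a generic statement, not necessarily the exact functional used for the GLAD models) is:

```latex
% Misfit between observed seismograms d_i and synthetics s_i computed
% from Earth model m, summed over N receivers; m is updated iteratively
% using gradients obtained from adjoint simulations.
\chi(\mathbf{m}) = \frac{1}{2} \sum_{i=1}^{N} \int_{0}^{T} \left| d_i(t) - s_i(t;\mathbf{m}) \right|^2 \, dt
```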
    “We are at a stage where we need to avoid approximations and corrections in our imaging techniques to construct these models of the Earth’s interior,” she said.
    Bozdag was the lead author of the first global-scale full-waveform inversion model, GLAD-M15, published in 2016 and based on full 3D wave simulations and 3D data sensitivities. The model used the open-source 3D global wave propagation solver SPECFEM3D_GLOBE (freely available from Computational Infrastructure for Geodynamics) and was created in collaboration with researchers from Princeton University, University of Marseille, King Abdullah University of Science and Technology (KAUST) and Oak Ridge National Laboratory (ORNL). The work was lauded in the press. Its successor, GLAD-M25 (Lei et al. 2020), came out in 2020 and brought prominent features like subduction zones, mantle plumes, and hotspots into view for further discussions on mantle dynamics.
    “We showed the feasibility of using full 3D wave simulations and data sensitivities to seismic parameters at the global scale in our 2016 and 2020 papers. Now, it’s time to use better parameterization to describe the physics of the Earth’s interior in the inverse problem,” she said.

  • New software to help discover valuable compounds

    Because the comparative metabolomics field lacks the sophisticated data analysis tools available to genomics and proteomics researchers, metabolomics researchers spend a lot of time hunting for candidate compounds that could serve as leads for the development of new pharmaceuticals or agrochemicals. To solve this problem, scientists have developed Metaboseek, a free, easy-to-use app that integrates multiple data analysis features for the metabolomics community.
    As a postdoctoral research associate in the lab of BTI faculty member Frank Schroeder, Max Helf saw his labmates continually struggle when they were analyzing data. So, he decided to do something about it and developed a free, open-source app called Metaboseek, which is now essential to the lab’s work.
    The Schroeder lab studies the roundworm Caenorhabditis elegans, one of the most successful model systems for human biology, to discover new metabolites that govern evolutionarily conserved signaling pathways and could be useful as leads for the development of new pharmaceuticals or agrochemicals. The researchers accomplish this task by comparing the metabolites between two different worm populations — a process called comparative metabolomics.
    Given that samples routinely have more than 100,000 compounds in them, computational approaches are essential to perform the analysis.
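    To illustrate the kind of comparative filtering such a tool automates (a hypothetical sketch of comparative metabolomics analysis; the function, column layout, and thresholds are invented for illustration and are not Metaboseek's actual code or interface):

```python
import pandas as pd
from scipy import stats

def compare_features(features, group_a, group_b, min_fold=10.0, max_p=0.05):
    """Flag mass-spec features enriched in group A relative to group B.

    `features` holds one row per detected compound (e.g. an m/z and
    retention-time pair) and one intensity column per sample; the
    fold-change and p-value thresholds here are purely illustrative.
    """
    mean_a = features[group_a].mean(axis=1)
    mean_b = features[group_b].mean(axis=1)
    fold = (mean_a + 1.0) / (mean_b + 1.0)  # +1 guards against zero intensities
    pvals = pd.Series(
        stats.ttest_ind(features[group_a], features[group_b], axis=1).pvalue,
        index=features.index,
    )
    keep = (fold >= min_fold) & (pvals <= max_p)
    return features[keep].assign(fold_change=fold[keep], p_value=pvals[keep])
```

    In practice, comparative metabolomics layers retention-time alignment, adduct grouping, and manual inspection on top of a statistical filter like this, which is where an integrated graphical tool earns its keep.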
    The team had been relying on software packages that did not offer the required level of flexibility to easily customize analysis parameters. That limitation, and the lack of a suitable graphical user interface, meant Helf’s colleagues faced the cumbersome task of visually inspecting mounds of data — for example, to spot possible false positives — and jumping between several other software tools to confirm and filter out those meaningless results.
    “It just seemed very inefficient to me, and I couldn’t get over the shortcomings of other software solutions for this problem,” Helf said. “I thought there had to be an easier way, so I started to write code for my own software.”
    Helf developed the initial version of his software in 2017, and continued to improve it over the next two years. “Besides addressing the problems my labmates were already facing, I talked to them about what else held them back — what they wanted to do but weren’t even trying — and built those features in the app,” said Helf, who is now a bioinformatics product manager at proteomics company Biognosys AG. “I wanted this new tool to be user-friendly and accessible to anyone who does chemical biology.”

  • How eye imaging technology could help robots and cars see better

    Even though robots don’t have eyes with retinas, the key to helping them see and interact with the world more naturally and safely may rest in optical coherence tomography (OCT) machines commonly found in the offices of ophthalmologists.
    One of the imaging technologies that many robotics companies are integrating into their sensor packages is Light Detection and Ranging, or LiDAR for short. Currently commanding great attention and investment from self-driving car developers, the approach essentially works like radar, but instead of sending out broad radio waves and looking for reflections, it uses short pulses of light from lasers.
    Traditional time-of-flight LiDAR, however, has drawbacks that limit its use in many 3D vision applications. Because it requires detection of very weak reflected light signals, other LiDAR systems or even ambient sunlight can easily overwhelm the detector. It also has limited depth resolution and can take a dangerously long time to densely scan a large area such as a highway or factory floor. To tackle these challenges, researchers are turning to a form of LiDAR called frequency-modulated continuous wave (FMCW) LiDAR.
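    In generic FMCW ranging (standard background, with symbols defined here rather than taken from the Duke paper), the laser frequency is swept linearly over a bandwidth B during a sweep time T; mixing the returning light with the outgoing sweep produces a beat frequency that encodes the target range:

```latex
% Light returning from a target at range R is delayed by 2R/c, so it
% lags the ongoing sweep by a beat frequency f_b proportional to R.
f_b = \frac{B}{T} \cdot \frac{2R}{c}
\quad \Longrightarrow \quad
R = \frac{c \, T \, f_b}{2B}
```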
    “FMCW LiDAR shares the same working principle as OCT, which the biomedical engineering field has been developing since the early 1990s,” said Ruobing Qian, a PhD student working in the laboratory of Joseph Izatt, the Michael J. Fitzpatrick Distinguished Professor of Biomedical Engineering at Duke. “But 30 years ago, nobody knew autonomous cars or robots would be a thing, so the technology focused on tissue imaging. Now, to make it useful for these other emerging fields, we need to trade in its extremely high resolution capabilities for more distance and speed.”
    In a paper appearing March 29 in the journal Nature Communications, the Duke team demonstrates how a few tricks learned from their OCT research can improve on previous FMCW LiDAR data-throughput by 25 times while still achieving submillimeter depth accuracy.
    OCT is the optical analogue of ultrasound, which works by sending sound waves into objects and measuring how long they take to come back. To time the light waves’ return, OCT devices measure how much their phase has shifted compared to that of identical light waves that have travelled the same distance but have not interacted with another object.

  • AI helps radiologists detect bone fractures

    Artificial intelligence (AI) is an effective tool for fracture detection that has potential to aid clinicians in busy emergency departments, according to a study in Radiology.
    Missed or delayed diagnosis of fractures on X-ray is a common error with potentially serious implications for the patient. A lack of timely access to expert opinion only makes the problem worse, as growth in imaging volumes continues to outpace radiologist recruitment.
    AI may help address this problem by acting as an aid to radiologists, helping to speed and improve fracture diagnosis.
    To learn more about the technology’s potential in the fracture setting, a team of researchers in England reviewed 42 existing studies comparing the diagnostic performance in fracture detection between AI and clinicians. Of the 42 studies, 37 used X-ray to identify fractures, and five used CT.
    The researchers found no statistically significant differences between clinician and AI performance. AI’s sensitivity for detecting fractures was 91-92%.
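    For reference, sensitivity is the standard measure of how many true fractures are caught (the textbook definition, not a formula given in the study):

```latex
% TP: fractures correctly flagged; FN: fractures missed.
\text{sensitivity} = \frac{TP}{TP + FN}
```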
    “We found that AI performed with a high degree of accuracy, comparable to clinician performance,” said study lead author Rachel Kuo, M.B. B.Chir., from the Botnar Research Centre, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences in Oxford, England. “Importantly, we found this to be the case when AI was validated using independent external datasets, suggesting that the results may be generalizable to the wider population.”
    The study results point to several promising educational and clinical applications for AI in fracture detection, Dr. Kuo said. It could reduce the rate of early misdiagnosis in challenging circumstances in the emergency setting, including cases where patients may sustain multiple fractures. It has potential as an educational tool for junior clinicians.
    “It could also be helpful as a ‘second reader,’ providing clinicians with either reassurance that they have made the correct diagnosis or prompting them to take another look at the imaging before treating patients,” Dr. Kuo said.
    Dr. Kuo cautioned that research into fracture detection by AI remains in a very early, pre-clinical stage. Only a minority of the studies that she and her colleagues looked at evaluated the performance of clinicians with AI assistance, and there was only one example where an AI was evaluated in a prospective study in a clinical environment.
    “It remains important for clinicians to continue to exercise their own judgment,” Dr. Kuo said. “AI is not infallible and is subject to bias and error.”
    Story Source:
    Materials provided by Radiological Society of North America. Note: Content may be edited for style and length.

  • Wally Broecker divined how the climate could suddenly shift

    It was the mid-1980s, at a meeting in Switzerland, when Wally Broecker’s ears perked up. Scientist Hans Oeschger was describing an ice core drilled at a military radar station in southern Greenland. Layer by layer, the 2-kilometer-long core revealed what the climate there was like thousands of years ago. Climate shifts, inferred from the amounts of carbon dioxide and of a form of oxygen in the core, played out surprisingly quickly — within just a few decades. It seemed almost too fast to be true.      

    Broecker returned home, to Columbia University’s Lamont-Doherty Earth Observatory, and began wondering what could cause such dramatic shifts. Some of Oeschger’s data turned out to be incorrect, but the seed they planted in Broecker’s mind flowered — and ultimately changed the way scientists think about past and future climate.

    A geochemist who studied the oceans, Broecker proposed that the shutdown of a major ocean circulation pattern, which he named the great ocean conveyor, could cause the North Atlantic climate to change abruptly. In the past, he argued, melting ice sheets released huge pulses of water into the North Atlantic, turning the water fresher and halting circulation patterns that rely on salty water. The result: a sudden atmospheric cooling that plunged the region, including Greenland, into a big chill. (In the 2004 movie The Day After Tomorrow, an overly dramatized oceanic shutdown coats the Statue of Liberty in ice.)

    It was a leap of insight unprecedented for the time, when most researchers had yet to accept that climate could shift abruptly, much less ponder what might cause such shifts.

    Broecker not only explained the changes seen in the Greenland ice core, he also went on to found a new field. He prodded, cajoled and brought together other scientists to study the entire climate system and how it could shift on a dime. “He was a really big thinker,” says Dorothy Peteet, a paleoclimatologist at NASA’s Goddard Institute for Space Studies in New York City who worked with Broecker for decades. “It was just his genuine curiosity about how the world worked.”

    Broecker was born in 1931 into a fundamentalist family who believed the Earth was 6,000 years old, so he was not an obvious candidate to become a pathbreaking geoscientist. Because of his dyslexia, he relied on conversations and visual aids to soak up information. Throughout his life, he did not use computers, a linchpin of modern science, yet became an expert in radiocarbon dating. And, contrary to the siloing common in the sciences, he worked expansively to understand the oceans, the atmosphere, the land, and thus the entire Earth system.

    By the 1970s, scientists knew that humans were pouring excess carbon dioxide into the atmosphere, through burning fossil fuels and cutting down carbon-storing forests, and that those changes were tinkering with Earth’s natural thermostat. Scientists knew that climate had changed in the past; geologic evidence over billions of years revealed hot or dry, cold or wet periods. But many scientists focused on long-term climate changes, paced by shifts in the way Earth rotates on its axis and circles the sun — both of which change the amount of sunlight the planet receives. A highly influential 1976 paper referred to these orbital shifts as the “pacemaker of the ice ages.”

    Ice cores from Antarctica and Greenland changed the game. In 1969, Willi Dansgaard of the University of Copenhagen and colleagues reported results from a Greenland ice core covering the last 100,000 years. They found large, rapid fluctuations in oxygen-18 that suggested wild temperature swings. Climate could oscillate quickly, it seemed — but it took another Greenland ice core and more than a decade before Broecker had the idea that the shutdown of the great ocean conveyor system could be to blame.
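
    Dansgaard and colleagues tracked the heavy isotope oxygen-18 through the standard delta notation (a textbook paleoclimate definition, not a formula from their paper), where R denotes the ratio of oxygen-18 to oxygen-16 and the result is expressed in parts per thousand:

```latex
% In Greenland ice, lower (more negative) \delta^{18}O generally records
% colder conditions when the snow fell; rapid swings imply rapid change.
\delta^{18}\mathrm{O} = \left( \frac{R_{\mathrm{sample}}}{R_{\mathrm{standard}}} - 1 \right) \times 1000
```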

    Pulled from southern Greenland beginning in 1979, the Dye-3 ice core (the drill used to retrieve the core is shown) revealed that abrupt climate change had occurred in the past. (Image: The Niels Bohr Institute)

    Broecker proposed that such a shutdown was responsible for a known cold snap that started around 12,900 years ago. As the Earth began to emerge from its orbitally influenced ice age, water melted off the northern ice sheets and washed into the North Atlantic. Ocean circulation halted, plunging Europe into a sudden chill, he said. The period, which lasted just over a millennium, is known as the Younger Dryas after an Arctic flower that thrived during the cold snap. It was the last hurrah of the last ice age.

    Evidence that an ocean conveyor shutdown could cause dramatic climate shifts soon piled up in Broecker’s favor. For instance, Peteet found evidence of rapid Younger Dryas cooling in bogs near New York City — thus establishing that the cooling was not just a European phenomenon but also extended to the other side of the Atlantic. Changes were real, widespread and fast.

    By the late 1980s and early ’90s, there was enough evidence supporting abrupt climate change that two major projects — one European, one American — began to drill a pair of fresh cores into the Greenland ice sheet. Richard Alley, a geoscientist at Penn State, remembers working through the layers and documenting small climatic changes over thousands of years. “Then we hit the end of the Younger Dryas and it was like falling off a cliff,” he says. It was “a huge change after many small changes,” he says. “Breathtaking.”

    The new Greenland cores cemented scientific recognition of abrupt climate change. Though the shutdown of the ocean conveyor could not explain all abrupt climate changes that had ever occurred, it showed how a single physical mechanism could trigger major planet-wide disruptions. It also opened discussions about how rapidly climate might change in the future.

    Broecker, who died in 2019, spent his last decades exploring abrupt shifts that are already happening. He worked, for example, with billionaire Gary Comer, who during a yacht trip in 2001 was shocked by the shrinking of Arctic sea ice, to brainstorm new directions for climate research and climate solutions.

    Broecker knew more than almost anyone about what might be coming. He often described Earth’s climate system as an angry beast that humans are poking with sticks. And one of his most famous papers was titled “Climatic change: Are we on the brink of a pronounced global warming?”

    It was published in 1975.

  • Quantum information theory: Quantum complexity grows linearly for an exponentially long time

    Physicists know about the huge chasm between quantum physics and the theory of gravity. However, in recent decades, theoretical physics has provided some plausible conjectures to bridge this gap and to describe the behaviour of complex quantum many-body systems, for example black holes and wormholes in the universe. Now, a theory group at Freie Universität Berlin and HZB, together with Harvard University, USA, has proven a mathematical conjecture about the behaviour of complexity in such systems, increasing the viability of this bridge. The work is published in Nature Physics.
    “We have found a surprisingly simple solution to an important problem in physics,” says Prof. Jens Eisert, a theoretical physicist at Freie Universität Berlin and HZB. “Our results provide a solid basis for understanding the physical properties of chaotic quantum systems, from black holes to complex many-body systems,” Eisert adds.
    Using only pen and paper, i.e. purely analytically, the Berlin physicists Jonas Haferkamp, Philippe Faist, Naga Kothakonda and Jens Eisert, together with Nicole Yunger Halpern (Harvard, now Maryland), have succeeded in proving a conjecture that has major implications for complex quantum many-body systems. “This plays a role, for example, when you want to describe the volume of black holes or even wormholes,” explains Jonas Haferkamp, PhD student in the team of Eisert and first author of the paper.
    Complex quantum many-body systems can be reconstructed by circuits of so-called quantum bits. The question, however, is: how many elementary operations are needed to prepare the desired state? On the surface, it seems that this minimum number of operations — the complexity of the system — is always growing. Physicists Adam Brown and Leonard Susskind from Stanford University formulated this intuition as a mathematical conjecture: the quantum complexity of a many-particle system should first grow linearly for astronomically long times and then — for even longer — remain in a state of maximum complexity. Their conjecture was motivated by the behaviour of theoretical wormholes, whose volume seems to grow linearly for an eternally long time. In fact, it is further conjectured that complexity and the volume of wormholes are one and the same quantity from two different perspectives. “This redundancy in description is also called the holographic principle and is an important approach to unifying quantum theory and gravity. Brown and Susskind’s conjecture on the growth of complexity can be seen as a plausibility check for ideas around the holographic principle,” explains Haferkamp.
    The group has now shown that the quantum complexity of random circuits indeed increases linearly with time until it saturates at a point in time that is exponential in the system size. Such random circuits are a powerful model for the dynamics of many-body systems. The difficulty in proving the conjecture arises from the fact that it can hardly be ruled out that there are “shortcuts,” i.e. random circuits with much lower complexity than expected. “Our proof is a surprising combination of methods from geometry and those from quantum information theory. This new approach makes it possible to solve the conjecture for the vast majority of systems without having to tackle the notoriously difficult problem for individual states,” says Haferkamp.
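    Stated schematically (a paraphrase of the proven behaviour, not the paper's formal theorem), for random circuits on n qubits:

```latex
% Complexity grows in proportion to the number of applied gates t,
% until it saturates at a value exponential in the qubit count n.
\mathcal{C}(t) \propto t \quad \text{for} \quad t \lesssim e^{O(n)},
\qquad \mathcal{C}_{\max} \sim e^{O(n)}
```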
    “The work in Nature Physics is a nice highlight of my PhD,” adds the young physicist, who will take up a position at Harvard University at the end of the year. As a postdoc, he can continue his research there, preferably in the classic way with pen and paper and in exchange with the best minds in theoretical physics.
    Story Source:
    Materials provided by Helmholtz-Zentrum Berlin für Materialien und Energie. Note: Content may be edited for style and length.

  • Chaos theory provides hints for controlling the weather

    Under a project led by the RIKEN Center for Computational Science, researchers have used computer simulations to show that weather phenomena such as sudden downpours could potentially be modified by making small adjustments to certain variables in the weather system. They did this by taking advantage of a system known as a “butterfly attractor” in chaos theory, in which a system can occupy one of two states — like the wings of a butterfly — and switches back and forth between them depending on small changes in certain conditions.
    While weather predictions have reached levels of high accuracy thanks to methods such as supercomputer-based simulations and data assimilation, where observational data is incorporated into simulations, scientists have long hoped to be able to control the weather. Research in this area has intensified due to climate change, which has led to more extreme weather events such as torrential rain and storms.
    There are methods at present for weather modification, but they have had limited success. Seeding the atmosphere to induce rain has been demonstrated, but it is only possible when the atmosphere is already in a state where it might rain. Geoengineering projects have been envisioned, but have not been carried out due to concerns about what unpredicted long-term effects they might have.
    As a promising approach, researchers from the RIKEN team have looked to chaos theory to create realistic possibilities for mitigating weather events such as torrential rain. Specifically, they have focused on a phenomenon known as a butterfly attractor, proposed by mathematician and meteorologist Edward Lorenz, one of the founders of modern chaos theory. Essentially, this refers to a system that can adopt one of two orbits that look like the wings of a butterfly, and that can switch between the orbits seemingly at random based on small fluctuations in the system.
    To perform the work, the RIKEN team ran one weather simulation to serve as the control, or “nature,” run, and then ran other simulations using small variations in a number of variables describing the convection — how heat moves through the system. They discovered that small changes in several of the variables together could lead to the system being in a certain state once a certain amount of time had elapsed.
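    As a minimal sketch of the same idea (illustrative only; the RIKEN team used their own low-dimensional model and observing-system simulation experiments, not this code), one can integrate a “nature” run of the classic Lorenz-63 system alongside a copy given a tiny nudge and watch chaos amplify the difference until the two runs occupy different wings of the butterfly:

```python
import numpy as np

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the classic Lorenz-63 system."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(s, dt=0.01, steps=3000):
    """Fourth-order Runge-Kutta integration; returns the full trajectory."""
    traj = np.empty((steps, 3))
    for i in range(steps):
        k1 = lorenz63(s)
        k2 = lorenz63(s + 0.5 * dt * k1)
        k3 = lorenz63(s + 0.5 * dt * k2)
        k4 = lorenz63(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = s
    return traj

nature = integrate(np.array([1.0, 1.0, 1.05]))          # unperturbed "nature" run
nudged = integrate(np.array([1.0, 1.0, 1.05 + 1e-5]))   # tiny intervention

# Chaos amplifies the 1e-5 nudge until the runs diverge completely;
# the sign of x indicates which wing of the attractor a state is on.
gap = np.linalg.norm(nature - nudged, axis=1)
print("first step where the runs differ by more than 1:", int(np.argmax(gap > 1.0)))
print("final wings (sign of x):", np.sign(nature[-1, 0]), np.sign(nudged[-1, 0]))
```

    Steering which wing the system ends up on with tiny, well-timed perturbations is precisely the control lever the RIKEN approach aims to exploit.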
    According to Takemasa Miyoshi of the RIKEN Center for Computational Science, who led the team, “This opens the path to research into the controllability of weather and could lead to weather control technology. If realized, this research could help us prevent and mitigate extreme windstorms, such as torrential rains and typhoons, whose risks are increasing with climate change.”
    “We have built a new theory and methodology for studying the controllability of weather,” he continues. “Based on the observing system simulation experiments used in previous predictability studies, we were able to design an experiment to investigate predictability based on the assumption that the true values (nature) cannot be changed, but rather that we can change the idea of what can be changed (the object to be controlled).”
    Looking to the future, he says, “In this case we used an ideal low-dimensional model to develop a new theory, and in the future we plan to use actual weather models to study the possible controllability of weather.”
    The work, published in Nonlinear Processes in Geophysics, was done as part of the Moonshot R&D Millennia program, contributing to the new Moonshot goal #8.
    Story Source:
    Materials provided by RIKEN. Note: Content may be edited for style and length.