More stories

  • Unprecedented look at the health status of a diverse patient population

    Researchers in the health and wellness space have typically relied on people to report their personal health data, like activity levels, heart rate or blood pressure, during brief snapshots in time.
    Wearable health devices, such as the popular Apple Watch, have changed the game, surfacing meaningful data that can give clinicians a more complete picture of patients’ daily lives and of their resulting health and disease.
    Early results from a landmark, three-year observational study called MIPACT, short for Michigan Predictive Activity & Clinical Trajectories, provide insight into the baseline health status of a representative group of thousands of people, as reported in a paper published in The Lancet Digital Health.
    “From both a research and clinical standpoint, as we design digital health interventions or make recommendations for our patients, it’s important to understand patients’ baseline activity levels,” said Jessica Golbus, M.D., of University of Michigan Health’s Division of Cardiovascular Medicine, and co-investigator on the study.
    The University of Michigan Health study is led by Sachin Kheterpal, M.D., the associate dean for Research Information Technology and professor of Anesthesiology, and launched in 2018 as a collaboration with Apple. The study aims to enroll a diverse set of participants across a range of ages, races, ethnicities and underlying health conditions.
    Golbus notes that one of the biggest successes of the study so far was their ability to recruit from groups that have largely been underrepresented or unrepresented in digital health research. For example, 18% of the more than 6,700 participants were 65 or older, 17% were Black, and 17% were Asian.

  • New synthesis process paves way for more efficient lasers, LEDs

    Researchers from North Carolina State University have developed a new process that makes use of existing industry standard techniques for making III-nitride semiconductor materials, but results in layered materials that will make LEDs and lasers more efficient.
    III-nitride semiconductor materials are wide-bandgap semiconductors that are of particular interest in optical and photonic applications because they can be used to create lasers and LEDs that produce light in the visible bandwidth range. And when it comes to large-scale manufacturing, III-nitride semiconductor materials are produced using a technique called metal organic chemical vapor deposition (MOCVD).
    Semiconductor devices require two materials, a “p-type” and an “n-type.” Electrons move from the n-type material to the p-type material. This is made possible by creating a p-type material that has “holes,” or spaces that electrons can move into.
    A long-standing challenge for LED and laser makers has been the limit on the concentration of holes that can be created in p-type III-nitride semiconductor materials grown using MOCVD. But that limit just went up.
    “We have developed a process that produces the highest concentration of holes in p-type material in any III-Nitride semiconductor made using MOCVD,” says Salah Bedair, co-author of a paper on the work and a distinguished professor of electrical and computer engineering at NC State. “And this is high quality material — very few defects — making it suitable for use in a variety of devices.”
    In practical terms, this means more of the energy input to LEDs is converted into light. For lasers, the reduced metal contact resistance means less of the energy input is wasted as heat.

  • Artificial intelligence sheds light on how the brain processes language

    In the past few years, artificial intelligence models of language have become very good at certain tasks. Most notably, they excel at predicting the next word in a string of text; this technology helps search engines and texting apps predict the next word you are going to type.
    The most recent generation of predictive language models also appears to learn something about the underlying meaning of language. These models can not only predict the word that comes next, but also perform tasks that seem to require some degree of genuine understanding, such as question answering, document summarization, and story completion.
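    The core task of next-word prediction can be illustrated with a toy model. The sketch below is not the large neural networks discussed in the study; it is a minimal bigram counter over a made-up corpus, included only to show what "predicting the next word" means in the simplest possible terms.

    ```python
    from collections import Counter, defaultdict

    # Count which word follows which in a tiny illustrative corpus.
    corpus = "the cat sat on the mat and the cat ran".split()
    follows = defaultdict(Counter)
    for w1, w2 in zip(corpus, corpus[1:]):
        follows[w1][w2] += 1

    def predict_next(word):
        """Return the most frequent successor of `word` and its conditional probability."""
        counts = follows[word]
        best, n = counts.most_common(1)[0]
        return best, n / sum(counts.values())

    # "the" is followed by "cat" twice and "mat" once, so "cat" is predicted.
    ```

    Modern language models do essentially this at vastly larger scale, replacing raw counts with learned representations trained over billions of words.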
    Such models were designed to optimize performance for the specific function of predicting text, without attempting to mimic anything about how the human brain performs this task or understands language. But a new study from MIT neuroscientists suggests the underlying function of these models resembles the function of language-processing centers in the human brain.
    Computer models that perform well on other types of language tasks do not show this similarity to the human brain, offering evidence that the human brain may use next-word prediction to drive language processing.
    “The better the model is at predicting the next word, the more closely it fits the human brain,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines (CBMM), and an author of the new study. “It’s amazing that the models fit so well, and it very indirectly suggests that maybe what the human language system is doing is predicting what’s going to happen next.”
    Joshua Tenenbaum, a professor of computational cognitive science at MIT and a member of CBMM and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL); and Evelina Fedorenko, the Frederick A. and Carole J. Middleton Career Development Associate Professor of Neuroscience and a member of the McGovern Institute, are the senior authors of the study, which appears this week in the Proceedings of the National Academy of Sciences. Martin Schrimpf, an MIT graduate student who works in CBMM, is the first author of the paper.

  • Superconductivity: New tricks for finding better materials

    Even after more than 30 years of research, high-temperature superconductivity remains one of the great unsolved mysteries of materials physics. The exact mechanism that allows certain materials to conduct electric current without any resistance even at relatively high temperatures is still not fully understood.
    Two years ago, a new class of promising superconductors was discovered: so-called layered nickelates. For the first time, a research team at TU Wien has now succeeded in determining important parameters of these novel superconductors by comparing theory and experiment. This means that for the first time a theoretical model is now available that can be used to understand the electronic mechanisms of high-temperature superconductivity in these materials.
    In search of high-temperature superconductors
    Many superconductors are known today, but most of them are only superconducting at extremely low temperatures, close to absolute zero. Materials that remain superconducting at higher temperatures are called “high-temperature superconductors” — even though these “high” temperatures (often below -200°C) are still extremely cold by human standards.
    Finding a material that still remains superconducting at significantly higher temperatures would be a revolutionary discovery that would open the door to many new technologies. For a long time, the so-called cuprates were considered particularly exciting candidates — a class of materials containing copper atoms. Now, however, another class of materials could turn out to be even more promising: Nickelates, which have a similar structure to cuprates, but with nickel instead of copper.
    “There has been a lot of research on cuprates, and it has been possible to dramatically increase the critical temperature up to which the material remains superconducting. If similar progress can be made with the newly discovered nickelates, it would be a huge step forward,” says Prof. Jan Kuneš from the Institute of Solid State Physics at TU Wien.
    Hard-to-access parameters
    Theoretical models describing the behaviour of such superconductors already exist. The problem, however, is that in order to use these models, one must know certain material parameters that are difficult to determine. “The charge transfer energy plays a key role,” explains Jan Kuneš. “This value tells us how much energy you have to add to the system to transfer an electron from a nickel atom to an oxygen atom.”
    Unfortunately, this value cannot be measured directly, and theoretical calculations are extremely complicated and imprecise. Therefore, Atsushi Hariki, a member of Jan Kuneš’ research group, developed a method to determine this parameter indirectly: When the material is examined with X-rays, the results also depend on the charge transfer energy. “We calculated details of the X-ray spectrum that are particularly sensitive to this parameter and compared our results with measurements of different X-ray spectroscopy methods,” explains Jan Kuneš. “In this way, we can determine the appropriate value — and this value can now be inserted into the computational models used to describe the superconductivity of the material.”
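    The indirect strategy (simulate a spectrum that depends on the unknown parameter, then pick the value that best reproduces the measurement) can be sketched with a deliberately simplified model. The Lorentzian line shape and the linear dependence of peak position on the charge-transfer energy below are illustrative assumptions, not the group's actual calculation.

    ```python
    import numpy as np

    # Toy model: an X-ray emission line whose centre shifts with the
    # charge-transfer energy Delta (hypothetical linear relation, for illustration).
    def model_spectrum(energies, delta, width=0.5):
        centre = 852.0 + 0.8 * delta  # made-up peak-position dependence on Delta
        return 1.0 / (1.0 + ((energies - centre) / width) ** 2)

    energies = np.linspace(850.0, 860.0, 2001)      # photon energy grid (eV)
    measured = model_spectrum(energies, delta=4.0)  # stand-in for the experiment

    # Grid search: the Delta whose simulated spectrum best matches "measurement".
    candidates = np.arange(2.0, 6.0, 0.01)
    errors = [np.sum((model_spectrum(energies, d) - measured) ** 2) for d in candidates]
    best_delta = candidates[int(np.argmin(errors))]
    ```

    The real comparison fits computed X-ray spectra to data from several spectroscopy methods, but the logic is the same: a parameter that cannot be measured directly is pinned down by the spectral features that are sensitive to it.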
    Important prerequisite for the search for better nickelates
    Thus, for the first time, it has now been possible to explain the electronic structure of the material precisely and to set up a parameterised theoretical model for describing superconductivity in nickelates. “With this, we can now get to the bottom of the question of how the mechanics of the effect can be explained at the electronic level,” says Jan Kuneš. “Which orbitals play a decisive role? Which parameters matter in detail? That’s what you need to know if you want to find out how to improve this material further, so that one day you might be able to produce new nickelates whose superconductivity persists up to even significantly higher temperatures.”
    Story Source:
    Materials provided by Vienna University of Technology. Note: Content may be edited for style and length.

  • Experiments confirm a quantum material’s unique response to circularly polarized laser light

    When the COVID-19 pandemic shut down experiments at the Department of Energy’s SLAC National Accelerator Laboratory early last year, Shambhu Ghimire’s research group was forced to find another way to study an intriguing research target: quantum materials known as topological insulators, or TIs, which conduct electric current on their surfaces but not through their interiors.
    Denitsa Baykusheva, a Swiss National Science Foundation Fellow, had joined his group at the Stanford PULSE Institute two years earlier with the goal of achieving high harmonic generation, or HHG, in these materials as a tool for probing their behavior. In HHG, laser light shining through a material shifts to higher energies and higher frequencies, called harmonics, much like pressing on a guitar string produces higher notes. If this could be done in TIs, which are promising building blocks for technologies like spintronics, quantum sensing and quantum computing, it would give scientists a new tool for investigating these and other quantum materials.
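    The frequency up-conversion at the heart of HHG can be illustrated numerically with a toy nonlinear response. A cubic polarization is a stand-in, chosen only because sin³(ωt) analytically contains a component at three times the drive frequency; real HHG in solids involves far higher orders and more complex dynamics.

    ```python
    import numpy as np

    # Drive field at 16 cycles per window; a cubic nonlinearity generates
    # the 3rd harmonic: sin^3(x) = (3*sin(x) - sin(3x)) / 4.
    n_cycles, samples = 16, 4096
    t = np.arange(samples) / samples              # time, in units of the window
    drive = np.sin(2 * np.pi * n_cycles * t)      # fundamental at bin 16
    response = drive ** 3                         # toy nonlinear medium

    spectrum = np.abs(np.fft.rfft(response))
    peaks = np.argsort(spectrum)[-2:]             # two strongest frequency bins
    ```

    The spectrum of the response shows energy at the fundamental (bin 16) and at the third harmonic (bin 48), the same qualitative shift to higher frequencies that the experiments detect.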
    With the experiment shut down midway, she and her colleagues turned to theory and computer simulations to come up with a new recipe for generating HHG in topological insulators. The results suggested that circularly polarized light, which spirals along the direction of the laser beam, would produce clear, unique signals from both the conductive surfaces and the interior of the TI they were studying, bismuth selenide — and would in fact enhance the signal coming from the surfaces.
    When the lab reopened for experiments with COVID safety precautions in place, Baykusheva set out to test that recipe for the first time. In a paper published today in Nano Letters, the research team reports that those tests went exactly as predicted, producing the first unique signature from the topological surface.
    “This material looks very different than any other material we’ve tried,” said Ghimire, who is a principal investigator at PULSE. “It’s really exciting being able to find a new class of material that has a very different optical response than anything else.”
    Over the past dozen years, Ghimire has carried out a series of experiments with PULSE Director David Reis showing that HHG can be produced in ways that were previously thought unlikely or even impossible: by beaming laser light into a crystal, a frozen argon gas or an atomically thin semiconductor material. Another study described how to use HHG to generate attosecond laser pulses, which can be used to observe and control the movements of electrons, by shining a laser through ordinary glass.

  • Quantum battles in attoscience: Following three debates

    The field of attoscience has been kickstarted by new advances in laser technology. Research began with studies of three particular processes. Firstly, ‘above-threshold ionization’ (ATI) describes atoms that absorb more photons than are required for ionization. Secondly, ‘high harmonic generation’ (HHG) occurs when a target is illuminated by an intense laser pulse, causing it to emit high-frequency harmonics as a nonlinear response. Finally, ‘laser-induced nonsequential double ionization’ (NSDI) occurs when the laser field induces correlated dynamics within systems of multiple electrons.
    Using powerful, ultrashort laser pulses, researchers can now study how these processes unfold on timescales of just 10⁻¹⁸ seconds. This gives opportunities to study phenomena such as the motions of electrons within atoms, the dynamics of charges within molecules, and oscillations of electric fields within laser pulses.
    Today, many theoretical approaches are used to study attosecond physics. Within this landscape, two broadly opposing viewpoints have emerged: the ‘analytical’ approach, in which systems are studied using suitable approximations of physical processes; and the ‘ab-initio’ approach, where systems are broken down into their elemental parts, then analysed using fundamental physics.
    Using ATI, HHG, and NSDI as case studies, the first of the Quantum Battles papers explores this tension through a dialogue between two hypothetical theorists, each representing viewpoints expressed by the workshop’s discussion panel. The study investigates three main questions: relating to the scope and nature of both approaches, their relative advantages and disadvantages, and their complementary roles in scientific discovery so far.
    Another source of tension within the attoscience community relates to quantum tunnelling — describing how quantum particles can travel directly through energy barriers. Here, a long-standing debate exists over whether tunnelling occurs instantaneously, or if it requires some time; and if so, how much.
    The second paper follows this debate through analysis of the panel’s viewpoints, as they discussed the physical observables of tunnelling experiments; theoretical approaches to assessing tunnelling time; and the nature of tunnelling itself. The study aims to explain why so many approaches reach differing conclusions, given the lack of any universally-agreed definition of tunnelling.
    The wave-like properties of matter are a further key concept in quantum mechanics. On attosecond timescales, intense laser fields can be used to exploit interference between matter waves of electrons. This allows researchers to create images with sub-atomic resolutions, while maintaining the ability to capture dynamics occurring on ultra-short timescales.
    The final ‘battle’ paper explores several questions which are rarely asked about this technique. In particular, it explores the physical differences between the roles of matter waves in HHG — which can be used to extend imaging capabilities; and ATI — which is used to generate packets of electron matter waves.
    The Quantum Battles workshop oversaw a wide variety of lively, highly interactive debates between a diverse range of participants: from leading researchers, to those just starting out in their careers. In many cases, the discussions clarified the points of tension that exist within the attoscience community. This format was seen as particularly innovative by the community and the general public, who could follow the discussions via dedicated social media platforms. One participant even referred to the Quantum Battles as a ‘breath of fresh air’.
    Quantum Battles promoted the view that while initial discoveries may stem from a specific perspective, scientific progress happens when representatives of many different viewpoints collaborate with each other. One immediate outcome is the “AttoFridays” online seminar series, which arose from the success of the workshop. With their fresh and open approach, Quantum Battles and AttoFridays will lead to more efficient and constructive discussions across institutional, scientific, and national borders.
    Story Source:
    Materials provided by Springer. Note: Content may be edited for style and length.

  • Novel advanced light design and fabrication process could revolutionize sensing technologies

    Vanderbilt and Penn State engineers have developed a novel approach to designing and fabricating thin-film infrared light sources with near-arbitrary spectral output driven by heat. Paired with a machine learning methodology called inverse design, the approach reduced the optimization time for these devices from weeks or months on a multi-core computer to a few minutes on a consumer-grade desktop.
    The ability to develop inexpensive, efficient, designer infrared light sources could revolutionize molecular sensing technologies. Additional applications include free-space communications, infrared beacons for search and rescue, molecular sensors for monitoring industrial gases, environmental pollutants and toxins.
    The research team’s approach, detailed today in Nature Materials, uses simple thin-film deposition, one of the most mature nano-fabrication techniques, aided by key advances in materials and machine learning.
    Standard thermal emitters, such as incandescent lightbulbs, generate broadband thermal radiation that restricts their use to simple applications. In contrast, lasers and light-emitting diodes offer the narrow frequency emission desired for many applications but are typically too inefficient and/or expensive. That has directed research toward wavelength-selective thermal emitters that provide the narrow bandwidth of a laser or LED, but with the simple design of a thermal emitter. However, to date most thermal emitters with user-defined output spectra have required patterned nanostructures fabricated with high-cost, low-throughput methods.
    The research team led by Joshua Caldwell, Vanderbilt associate professor of mechanical engineering, and Jon-Paul Maria, professor of materials science and engineering at Penn State, set out to conquer long-standing challenges and create a more efficient process. Their approach leverages the broad spectral tunability of the semiconductor cadmium oxide in concert with a one-dimensional photonic crystal fabricated with alternating layers of dielectrics referred to as a distributed Bragg reflector.
    The combination of these multiple layers of materials gives rise to a so-called “Tamm-polariton,” where the emission wavelength of the device is dictated by the interactions between these layers. Until now, such designs were limited to a single designed wavelength output. But creating multiple resonances at multiple frequencies with user-controlled wavelength, linewidth, and intensity is imperative for matching the absorption spectra of most molecules.
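    The distributed Bragg reflector in such a structure can be modeled with the standard transfer-matrix method for layered optics. The refractive indices, pair count, and design wavelength below are illustrative placeholders, not the actual parameters of the cadmium oxide device.

    ```python
    import numpy as np

    # Characteristic matrix of one dielectric layer at normal incidence.
    def layer_matrix(n, d, lam):
        delta = 2 * np.pi * n * d / lam
        return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                         [1j * n * np.sin(delta), np.cos(delta)]])

    def dbr_reflectance(lam, lam0=4.0, n_hi=2.3, n_lo=1.45,
                        pairs=8, n_in=1.0, n_sub=1.5):
        """Reflectance of a quarter-wave stack designed for wavelength lam0 (microns)."""
        d_hi, d_lo = lam0 / (4 * n_hi), lam0 / (4 * n_lo)  # quarter-wave thicknesses
        m = np.eye(2, dtype=complex)
        for _ in range(pairs):
            m = m @ layer_matrix(n_hi, d_hi, lam) @ layer_matrix(n_lo, d_lo, lam)
        num = n_in * (m[0, 0] + n_sub * m[0, 1]) - (m[1, 0] + n_sub * m[1, 1])
        den = n_in * (m[0, 0] + n_sub * m[0, 1]) + (m[1, 0] + n_sub * m[1, 1])
        return abs(num / den) ** 2
    ```

    At the design wavelength the quarter-wave stack reflects almost perfectly, while well outside the stop band most of the light passes through; a Tamm resonance arises when such a mirror is paired with a conducting layer like cadmium oxide.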

  • Physicists describe photons’ characteristics to protect future quantum computing

    Consumers need to be confident that transactions they make online are safe and secure. A main method to protect customer transactions and other information is through encryption, where vital information is encoded with a key using complex mathematical problems that are difficult even for computers to solve.
    But even that may have a weakness: Encrypted information could be decoded by future quantum computers that would try many keys simultaneously and rapidly find the right one.
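    The kind of key-based encryption at stake can be illustrated with a textbook-sized RSA example; the tiny primes below make it trivially breakable and are for illustration only. Its security rests on the difficulty of factoring the public modulus, which is exactly the kind of hard mathematical problem a future quantum computer could solve efficiently.

    ```python
    # Toy RSA with textbook parameters (insecure, illustration only).
    p, q = 61, 53
    n = p * q                  # 3233, the public modulus
    phi = (p - 1) * (q - 1)    # 3120
    e = 17                     # public exponent, coprime with phi
    d = pow(e, -1, phi)        # private exponent (modular inverse of e)

    message = 65
    ciphertext = pow(message, e, n)    # encrypt: m^e mod n
    recovered = pow(ciphertext, d, n)  # decrypt: c^d mod n, giving back 65
    ```

    Recovering the private exponent requires factoring n into p and q, which is infeasible for real key sizes on classical computers but within reach of a large quantum computer, hence the push for quantum-safe alternatives.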
    To prepare for this future possibility, researchers are working to develop codes that cannot be broken by quantum computers. These codes rely on distributing single photons — single particles of light — that share a quantum character solely among the parties that wish to communicate. The new quantum codes require these photons to have the same color, so they are impossible to distinguish from each other, and the resulting devices, networks, and systems form the backbone of a future “quantum internet.”
    Researchers at the University of Iowa have been studying the properties of photons emitted from solids and are now able to predict how sharp the color of each emitted photon can be. In a new study, the researchers describe theoretically how many of these indistinguishable photons can be sent simultaneously down a fiber-optical cable to establish secure communications, and how rapidly these quantum codes can send information.
    “Up to now, there has not been a well-founded quantitative description of the noise in the color of light emitted by these qubits, and the noise leading to loss of quantum coherence in the qubits themselves that’s essential for calculations,” says Michael Flatté, professor in the Department of Physics and Astronomy and the study’s corresponding author. “This work provides that.”
    Story Source:
    Materials provided by University of Iowa. Original written by Richard Lewis. Note: Content may be edited for style and length.