More stories

  •

    Mathematical discovery could shed light on secrets of the Universe

    How can Einstein’s theory of gravity be unified with quantum mechanics? It is a challenge that could give us deep insights into phenomena such as black holes and the birth of the universe. Now, a new article in Nature Communications, written by researchers from Chalmers University of Technology, Sweden, and MIT, USA, presents results that cast new light on important challenges in understanding quantum gravity.
    A grand challenge in modern theoretical physics is to find a ‘unified theory’ that can describe all the laws of nature within a single framework — connecting Einstein’s general theory of relativity, which describes the universe on a large scale, and quantum mechanics, which describes our world at the atomic level. Such a theory of ‘quantum gravity’ would include both a macroscopic and microscopic description of nature.
    “We strive to understand the laws of nature and the language in which these are written is mathematics. When we seek answers to questions in physics, we are often led to new discoveries in mathematics too. This interaction is particularly prominent in the search for quantum gravity — where it is extremely difficult to perform experiments,” explains Daniel Persson, Professor at the Department of Mathematical Sciences at Chalmers University of Technology.
    An example of a phenomenon that requires this type of unified description is black holes. A black hole forms when a sufficiently heavy star exhausts its fuel and collapses under its own gravitational force, so that all its mass is concentrated in an extremely small volume. The quantum mechanical description of black holes is still in its infancy but involves spectacularly advanced mathematics.
    A simplified model for quantum gravity
    “The challenge is to describe how gravity arises as an ’emergent’ phenomenon. Just as everyday phenomena — such as the flow of a liquid — emerge from the chaotic movements of individual droplets, we want to describe how gravity emerges from a quantum mechanical system at the microscopic level,” says Robert Berman, Professor at the Department of Mathematical Sciences at Chalmers University of Technology.

  •

    Toward ever-more powerful microchips and supercomputers

    The information age, built over nearly 60 years, has given the world the internet, smartphones and lightning-fast computers. Making this possible has been the doubling, roughly every two years, of the number of transistors that can be packed onto a computer chip, giving rise to billions of atomic-scale transistors that now fit on a fingernail-sized chip. At such lengths, individual atoms can be seen and counted.
    Physical limit
    With this doubling now rapidly approaching a physical limit, the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) has joined industry efforts to extend the process and develop new ways to produce ever-more capable, efficient, and cost-effective chips. Laboratory scientists have now accurately predicted through modeling a key step in atomic-scale chip fabrication in the first PPPL study under a Cooperative Research and Development Agreement (CRADA) with Lam Research Corp., a worldwide supplier of chip-making equipment.
    “This would be one little piece in the whole process,” said David Graves, associate laboratory director for low-temperature plasma surface interactions, a professor in the Princeton Department of Chemical and Biological Engineering and co-author of a paper that outlines the findings in the Journal of Vacuum Science & Technology B. Insights gained through modeling, he said, “can lead to all sorts of good things, and that’s why this effort at the Lab has got some promise.”
    While the shrinkage can’t go on much longer, “it hasn’t completely reached an end,” he said. “Industry has been successful to date in using mainly empirical methods to develop innovative new processes, but a deeper fundamental understanding will speed this process. Fundamental studies take time and require expertise industry does not always have,” he said. “This creates a strong incentive for laboratories to take on the work.”
    The PPPL scientists modeled what is called “atomic layer etching” (ALE), an increasingly critical fabrication step that aims to remove material from a surface one atomic layer at a time. This process can be used to etch complex three-dimensional structures with critical dimensions that are thousands of times thinner than a human hair into a film on a silicon wafer.
    Basic agreement
    “The simulations basically agreed with experiments as a first step and could lead to improved understanding of the use of ALE for atomic-scale etching,” said Joseph Vella, a post-doctoral fellow at PPPL and lead author of the journal paper. Improved understanding will enable PPPL to investigate such things as the extent of surface damage and the degree of roughness developed during ALE, he said, “and this all starts with building our fundamental understanding of atomic layer etching.”
    The model simulated the sequential use of chlorine gas and argon plasma ions to control the silicon etch process on an atomic scale. Plasma, or ionized gas, is a mixture consisting of free electrons, positively charged ions and neutral molecules. The plasma used in semiconductor device processing is near room temperature, in contrast to the ultra-hot plasma used in fusion experiments.
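    The self-limiting logic of an ALE cycle can be sketched in a few lines: a chlorine dose passivates only the exposed surface layer, and the ion step removes only that passivated layer, so each cycle etches exactly one layer no matter how heavily the chlorine step is over-dosed. The sketch below is a toy bookkeeping model for illustration, not the simulation the PPPL team used; the function name and parameters are invented.

    ```python
    # Toy illustration of a self-limiting atomic layer etching (ALE) cycle:
    # step 1 (chlorine dose) passivates only the single exposed silicon layer;
    # step 2 (argon ion bombardment) removes only the passivated layer.
    # Schematic bookkeeping only -- not the PPPL model.

    def ale_etch(layers: int, cycles: int, chlorine_doses_per_cycle: int = 1) -> int:
        """Return the number of silicon layers remaining after `cycles` ALE cycles."""
        for _ in range(cycles):
            # Chlorination is self-limiting: extra doses still passivate
            # only the one exposed surface layer.
            passivated = min(1, chlorine_doses_per_cycle)
            # The ion step removes only the passivated layer.
            layers -= passivated
            if layers <= 0:
                return 0
        return layers

    # One layer per cycle, regardless of over-dosing the chlorine step:
    print(ale_etch(100, 30, chlorine_doses_per_cycle=1))  # 70
    print(ale_etch(100, 30, chlorine_doses_per_cycle=5))  # 70
    ```

    The self-limiting character is the point: the etch depth is set by the number of cycles, not by how long each step runs, which is what makes atomic-scale precision possible.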
    “A surprise empirical finding from Lam Research was that the ALE process became particularly effective when the ion energies were quite a bit higher than the ones we started with,” Graves said. “So that will be our next step in the simulations — to see if we can understand what’s happening when the ion energy is much higher and why it’s so good.”
    Going forward, “the semiconductor industry as a whole is contemplating a major expansion in the materials and the types of devices to be used, and this expansion will also have to be processed with atomic scale precision,” he said. “The U.S. goal is to lead the world in using science to tackle important industrial problems,” he said, “and our work is part of that.”
    This study was partially supported by the DOE Office of Science. Coauthors included David Humbird of DWH Consulting in Centennial, Colorado.
    Story Source:
    Materials provided by DOE/Princeton Plasma Physics Laboratory. Original written by John Greenwald. Note: Content may be edited for style and length.

  •

    Researchers develop hybrid human-machine framework for building smarter AI

    From chatbots that answer tax questions to algorithms that drive autonomous vehicles and dish out medical diagnoses, artificial intelligence undergirds many aspects of daily life. Creating smarter, more accurate systems requires a hybrid human-machine approach, according to researchers at the University of California, Irvine. In a study published this month in Proceedings of the National Academy of Sciences, they present a new mathematical model that can improve performance by combining human and algorithmic predictions and confidence scores.
    “Humans and machine algorithms have complementary strengths and weaknesses. Each uses different sources of information and strategies to make predictions and decisions,” said co-author Mark Steyvers, UCI professor of cognitive sciences. “We show through empirical demonstrations as well as theoretical analyses that humans can improve the predictions of AI even when human accuracy is somewhat below [that of] the AI — and vice versa. And this accuracy is higher than combining predictions from two individuals or two AI algorithms.”
    To test the framework, researchers conducted an image classification experiment in which human participants and computer algorithms worked separately to correctly identify distorted pictures of animals and everyday items — chairs, bottles, bicycles, trucks. The human participants ranked their confidence in the accuracy of each image identification as low, medium or high, while the machine classifier generated a continuous score. The results showed large differences in confidence between humans and AI algorithms across images.
    “In some cases, human participants were quite confident that a particular picture contained a chair, for example, while the AI algorithm was confused about the image,” said co-author Padhraic Smyth, UCI Chancellor’s Professor of computer science. “Similarly, for other images, the AI algorithm was able to confidently provide a label for the object shown, while human participants were unsure if the distorted picture contained any recognizable object.”
    When predictions and confidence scores from both were combined using the researchers’ new Bayesian framework, the hybrid model led to better performance than either human or machine predictions achieved alone.
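    The intuition behind that kind of fusion can be sketched with a naive-Bayes-style combination: if the human’s and the machine’s calibrated label probabilities are treated as independent evidence, multiplying them and renormalizing yields a fused distribution in which a confident source outweighs a confused one. This is a simplified illustration, not the model from the PNAS paper; the numbers are invented.

    ```python
    import numpy as np

    # Naive-Bayes-style fusion of two probability vectors over the same labels.
    # Assumes both sources are calibrated and conditionally independent --
    # a simplification of the paper's Bayesian framework.

    def combine(p_human: np.ndarray, p_machine: np.ndarray) -> np.ndarray:
        """Fuse two label-probability vectors into one distribution."""
        fused = p_human * p_machine      # independence assumption
        return fused / fused.sum()       # renormalize to sum to 1

    # The human leans toward label 0 with medium confidence; the machine
    # is confused between labels 0 and 2.
    p_h = np.array([0.6, 0.3, 0.1])
    p_m = np.array([0.4, 0.1, 0.5])
    print(combine(p_h, p_m))  # label 0 dominates once the sources are fused
    ```

    The fused distribution is sharper than either input, which mirrors the study’s finding that the hybrid outperforms each source alone.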
    “While past research has demonstrated the benefits of combining machine predictions or combining human predictions — the so-called ‘wisdom of the crowds’ — this work forges a new direction in demonstrating the potential of combining human and machine predictions, pointing to new and improved approaches to human-AI collaboration,” Smyth said.
    This interdisciplinary project was facilitated by the Irvine Initiative in AI, Law, and Society. The convergence of the cognitive sciences, which focus on understanding how humans think and behave, with computer science, which produces new technologies, will provide further insight into how humans and machines can collaborate to build more accurate artificially intelligent systems, the researchers said.
    Additional co-authors include Heliodoro Tejada, a UCI graduate student in cognitive sciences, and Gavin Kerrigan, a UCI Ph.D. student in computer science.
    Funding for this study was provided by the National Science Foundation under award numbers 1927245 and 1900644 and the HPI Research Center in Machine Learning and Data Science at UCI.
    Story Source:
    Materials provided by University of California – Irvine.

  •

    Objection: No one can understand what you’re saying

    Legal documents, such as contracts or deeds, are notoriously difficult for nonlawyers to understand. A new study from MIT cognitive scientists has determined just why these documents are often so impenetrable.
    After analyzing thousands of legal contracts and comparing them to other types of texts, the researchers found that lawyers have a habit of frequently inserting long definitions in the middle of sentences. Linguists have previously demonstrated that this type of structure, known as “center-embedding,” makes text much more difficult to understand.
    While center-embedding had the most significant effect on comprehension difficulty, the MIT study found that the use of unnecessary jargon also contributes.
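    The effect of center-embedding is easy to see with a toy example: a definition dropped into the middle of a clause forces the reader to suspend the sentence’s subject until the embedded material is resolved. The sketch below uses parenthetical nesting as a crude proxy for embedding depth; the sentences and the depth counter are invented for illustration and are not from the MIT study.

    ```python
    # Crude proxy for center-embedding: track how deeply parenthetical
    # insertions nest inside a sentence. Both sentences are invented.

    flat = ("The lessee shall pay rent monthly. "
            "'Rent' means the amount listed in Schedule A.")
    embedded = ("The lessee (who, for purposes of this agreement (including "
                "any renewal thereof), is the party named above) shall pay "
                "rent monthly.")

    def max_embedding_depth(text: str) -> int:
        """Return the maximum parenthetical nesting depth in `text`."""
        depth = best = 0
        for ch in text:
            if ch == "(":
                depth += 1
                best = max(best, depth)
            elif ch == ")":
                depth -= 1
        return best

    print(max_embedding_depth(flat), max_embedding_depth(embedded))  # 0 2
    ```

    Pulling the definition out into its own sentence, as in the first version, is exactly the kind of restructuring the researchers suggest would make contracts easier to read.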
    “It’s not a secret that legal language is very hard to understand. It’s borderline incomprehensible a lot of the time,” says Edward Gibson, an MIT professor of brain and cognitive sciences and the senior author of the new paper. “In this study, we’re documenting in detail what the problem is.”
    The researchers hope that their findings will lead to greater awareness of this issue and stimulate efforts to make legal documents more accessible to the general public.
    “Making legal language more straightforward would help people understand their rights and obligations better, and therefore be less susceptible to being unnecessarily punished or not being able to benefit from their entitled rights,” says Eric Martinez, a recent law school graduate and licensed attorney who is now a graduate student in brain and cognitive sciences at MIT.

  •

    Physicists discover method for emulating nonlinear quantum electrodynamics in a laboratory setting

    On the big screen, in video games and in our imaginations, lightsabers flare and catch when they clash together. In reality, as in a laser light show, beams of light pass through each other, creating spiderweb patterns. That clashing, or interference, happens only in fiction and in places with enormous magnetic and electric fields, which occur in nature only near massive objects such as neutron stars. There, the strong magnetic or electric field reveals that the vacuum isn’t truly a void: when light beams intersect, they scatter into rainbows.
    A weak version of this effect has been observed in modern particle accelerators, but it is completely absent from our daily lives or even normal laboratory environments.
    Yuli Lyanda-Geller, professor of physics and astronomy in the College of Science at Purdue University, in collaboration with Aydin Keser and Oleg Sushkov from the University of New South Wales in Australia, discovered that it is possible to produce this effect in a class of novel materials involving bismuth, its solid solutions with antimony and tantalum arsenide.
    With this knowledge, the effect can be studied, potentially leading to vastly more sensitive sensors as well as supercapacitors for energy storage that could be turned on and off by a controlled magnetic field.
    “Most importantly, one of the deepest quantum mysteries in the universe can be tested and studied in a small laboratory experiment,” Lyanda-Geller said. “With these materials, we can study effects of the universe. We can study what happens in neutron stars from our laboratories.”
    Brief summary of methods
    Keser, Lyanda-Geller and Sushkov took nonperturbative quantum field theory methods used to describe high-energy particles and extended them to analyze the behavior of so-called Dirac materials, which have recently become a focus of interest. This expansion yielded results that go beyond both known high-energy results and the general framework of condensed matter and materials physics. They suggested various experimental configurations with applied electric and magnetic fields and identified the materials best suited to studying this quantum electrodynamic effect experimentally in a nonaccelerator setting.
    They subsequently discovered that their results better explained some magnetic phenomena that had been observed and studied in earlier experiments.
    Funding
    U.S. Department of Energy, Office of Basic Energy Sciences; Division of Materials Sciences and Engineering; and the Australian Research Council, Centre of Excellence in Future Low Energy Electronics Technologies
    Story Source:
    Materials provided by Purdue University. Original written by Brittany Steff.

  •

    Simulated human eye movement aims to train metaverse platforms

    Computer engineers at Duke University have developed virtual eyes that simulate how humans look at the world accurately enough for companies to train virtual reality and augmented reality programs. Called EyeSyn for short, the program will help developers create applications for the rapidly expanding metaverse while protecting user data.
    The results have been accepted and will be presented at the International Conference on Information Processing in Sensor Networks (IPSN), May 4-6, 2022, a leading annual forum on research in networked sensing and control.
    “If you’re interested in detecting whether a person is reading a comic book or advanced literature by looking at their eyes alone, you can do that,” said Maria Gorlatova, the Nortel Networks Assistant Professor of Electrical and Computer Engineering at Duke.
    “But training that kind of algorithm requires data from hundreds of people wearing headsets for hours at a time,” Gorlatova added. “We wanted to develop software that not only reduces the privacy concerns that come with gathering this sort of data, but also allows smaller companies who don’t have those levels of resources to get into the metaverse game.”
    The poetic insight describing eyes as the windows to the soul has been repeated since at least Biblical times for good reason: the tiny ways our eyes move and our pupils dilate provide a surprising amount of information. Human eyes can reveal whether we’re bored or excited, where our concentration is focused, whether we’re an expert or a novice at a given task, or even whether we’re fluent in a specific language.
    “Where you’re prioritizing your vision says a lot about you as a person, too,” Gorlatova said. “It can inadvertently reveal sexual and racial biases, interests that we don’t want others to know about, and information that we may not even know about ourselves.”
    Eye movement data is invaluable to companies building platforms and software in the metaverse. For example, reading a user’s eyes allows developers to tailor content to engagement responses or reduce resolution in their peripheral vision to save computational power.

  •

    Harnessing AI and Robotics to treat spinal cord injuries

    By employing artificial intelligence (AI) and robotics to formulate therapeutic proteins, a team led by Rutgers researchers has successfully stabilized an enzyme able to degrade scar tissue resulting from spinal cord injuries and promote tissue regeneration.
    The study, recently published in Advanced Healthcare Materials, details the team’s groundbreaking stabilization of the enzyme chondroitinase ABC (ChABC), offering new hope for patients coping with spinal cord injuries.
    “This study represents one of the first times artificial intelligence and robotics have been used to formulate highly sensitive therapeutic proteins and extend their activity by such a large amount. It’s a major scientific achievement,” said Adam Gormley, the project’s principal investigator and an assistant professor of biomedical engineering at Rutgers School of Engineering (SOE) at Rutgers University-New Brunswick.
    Gormley expressed that his research is also motivated, in part, by a personal connection to spinal cord injury.
    “I’ll never forget being at the hospital and learning a close college friend would likely never walk again after being paralyzed from the waist down after a mountain biking accident,” Gormley recalled. “The therapy we are developing may someday help people such as my friend lessen the scar on their spinal cords and regain function. This is a great reason to wake up in the morning and fight to further the science and potential therapy.”
    Shashank Kosuri, a biomedical engineering doctoral student at Rutgers SOE and a lead author of the study noted that spinal cord injuries, or SCIs, can negatively impact the physical, psychological, and socio-economic well-being of patients and their families. Soon after an SCI, a secondary cascade of inflammation produces a dense scar tissue that can inhibit or prevent nervous tissue regeneration.
    The enzyme successfully stabilized in the study, ChABC, is known to degrade scar tissue molecules and promote tissue regeneration, yet it is highly unstable at the human body temperature of 98.6°F and loses all activity within a few hours. Kosuri noted that this necessitates multiple, expensive infusions at very high doses to maintain therapeutic efficacy.
    Synthetic copolymers are able to wrap around enzymes such as ChABC and stabilize them in hostile microenvironments. In order to stabilize the enzyme, the researchers utilized an AI-driven approach with liquid handling robotics to synthesize and test the ability of numerous copolymers to stabilize ChABC and maintain its activity at 98.6° F.
    While the researchers were able to identify several copolymers that performed well, Kosuri reported that one copolymer combination even retained 30% of the enzyme’s activity for up to one week, a promising result for patients seeking care for spinal cord injuries.
    The study received support from grants funded by the National Institutes of Health, the National Science Foundation, and The New Jersey Commission on Spinal Cord research. In addition to Gormley and Kosuri, the Rutgers research team also included SOE Professor Li Cai and Distinguished Professor Martin Yarmush, as well as several SOE-affiliated students. Faculty and students from Princeton University’s Department of Chemical and Biological Engineering also collaborated on the project.
    Story Source:
    Materials provided by Rutgers University. Original written by Emily Everson Layden.

  •

    Some deep-sea octopuses aren’t the long-haul moms scientists thought they were

    Octopuses living in the deep sea off the coast of California are breeding far faster than expected.

    The animals lay their eggs near geothermal springs, and the warmer water speeds up embryonic development, researchers report February 28 at the virtual 2022 Ocean Sciences Meeting. That reproductive sleight of hand means that the octopus moms brood for less than two years, instead of the estimated 12.

    In 2018, scientists working off the coast of California discovered thousands of deep-sea octopuses (Muusoctopus robustus) congregated on a patch of seafloor about 3,200 meters below the surface. Many of the grapefruit-sized animals were females brooding clutches of eggs, leading researchers to dub the site the Octopus Garden.

    But with water temperatures hovering around a frigid 1.6° Celsius, growth in this garden was predicted to be leisurely. In octopuses, embryonic development tends to slow down at low temperatures, says marine ecologist Jim Barry of the Monterey Bay Aquarium Research Institute in Moss Landing, Calif. “When you get really cold, down near zero, that’s when brood periods get really long.”

    The record for the longest brood period of any animal, just over four years, is held by a different species of octopus living in warmer water (SN: 7/30/14). M. robustus, thriving in the chilly depths of the Octopus Garden, was therefore a serious contender to snatch that title, Barry says. “If you look at its predicted brood period at 1.6° C, it’s over 12 years.”

    To verify what would be a record-setting stint of motherhood, Barry and his colleagues repeatedly visited the Octopus Garden from 2019 to 2021 using a remotely operated vehicle. The team trained cameras at the octopus eggs, which resemble white fingers, to monitor their rate of development. With one of the submersible’s robotic arms, the researchers also gently nudged dozens of octopuses aside and measured the water temperature in their nests.

    The team found that relatively warm water — up to 10.5° C — bathed all the egg clutches. The female octopuses are preferentially laying their eggs in streams of geothermally heated water, the researchers realized. That discovery was a tip-off that these animals are not the long-haul moms people thought them to be, Barry says. “We’re virtually certain these animals are breeding far more rapidly than you’d expect.”

    Deep-sea octopuses (Muusoctopus robustus) brood clutches of eggs, which look like white fingers. (Image: Ocean Exploration Trust, NOAA)

    Based on observations of the developing eggs, Barry and colleagues calculated that the moms brooded for only about 600 days, or about a year and a half. That is much faster than predicted, says Jeffrey Drazen, a deep-sea ecologist at the University of Hawaii at Manoa who was not involved in the research. “They’re cutting a huge amount of time off of their parental care period.”
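    The numbers in the story imply an unusually steep temperature dependence. Plugging the roughly 12-year prediction at 1.6° C and the roughly 600-day observation at about 10.5° C into the standard Q10 temperature coefficient (a back-of-envelope calculation, not one the researchers report) gives a Q10 near 9, far above the 2 to 3 typical of biological rates:

    ```python
    # Back-of-envelope Q10 implied by the figures in the story:
    # ~12 years predicted at 1.6 C vs ~600 days observed at ~10.5 C.

    def q10(days_cold: float, days_warm: float,
            temp_cold_c: float, temp_warm_c: float) -> float:
        """Q10: the factor by which a rate changes per 10 C of warming."""
        # Brood duration is inversely proportional to developmental rate,
        # so the duration ratio (cold / warm) equals the rate ratio (warm / cold).
        return (days_cold / days_warm) ** (10.0 / (temp_warm_c - temp_cold_c))

    implied = q10(12 * 365, 600, 1.6, 10.5)
    print(round(implied, 1))  # roughly 9
    ```

    A coefficient that large is consistent with Barry’s observation that brood periods blow up as temperatures approach zero, and it shows why a few degrees of geothermal warmth can shave a decade off the brooding time.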

    There is also an evolutionary advantage to seeking out warmer water: Shorter brood periods mean that fewer eggs are likely to be gobbled up by predators. And these octopuses seem to know that, Barry says. “We believe they’re exploiting that thermal energy to improve reproductive success.”

    Only a few other marine animals, such as icefish in Antarctica’s Weddell Sea (SN: 1/13/22), are known to seek out warmer conditions when breeding. But there are probably other species that do the same, Drazen says. The challenge is finding them and their breeding grounds in the vast expanse of the deep ocean. “I imagine that as we keep looking, we will keep finding really interesting sites that are important to certain species,” he says.