More stories

  • Toward ever-more powerful microchips and supercomputers

    The information age, built up over nearly 60 years, has given the world the internet, smartphones and lightning-fast computers. Making this possible has been the doubling, roughly every two years, of the number of transistors that can be packed onto a computer chip, giving rise to billions of atomic-scale transistors that now fit on a fingernail-sized chip. Such “atomic scale” lengths are so tiny that individual atoms can be seen and counted at them.
    Physical limit
    With this doubling now rapidly approaching a physical limit, the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) has joined industry efforts to extend the process and develop new ways to produce ever-more capable, efficient, and cost-effective chips. In the first PPPL study under a Cooperative Research and Development Agreement (CRADA) with Lam Research Corp., a worldwide supplier of chip-making equipment, laboratory scientists have now accurately predicted through modeling a key step in atomic-scale chip fabrication.
    “This would be one little piece in the whole process,” said David Graves, associate laboratory director for low-temperature plasma surface interactions, a professor in the Princeton Department of Chemical and Biological Engineering and co-author of a paper that outlines the findings in the Journal of Vacuum Science & Technology B. Insights gained through modeling, he said, “can lead to all sorts of good things, and that’s why this effort at the Lab has got some promise.”
    While the shrinkage can’t go on much longer, “it hasn’t completely reached an end,” he said. “Industry has been successful to date in using mainly empirical methods to develop innovative new processes, but a deeper fundamental understanding will speed this process. Fundamental studies take time and require expertise industry does not always have,” he said. “This creates a strong incentive for laboratories to take on the work.”
    The PPPL scientists modeled what is called “atomic layer etching” (ALE), an increasingly critical fabrication step that aims to remove material from a surface one atomic layer at a time. This process can be used to etch complex three-dimensional structures, with critical dimensions thousands of times thinner than a human hair, into a film on a silicon wafer.
    Basic agreement
    “The simulations basically agreed with experiments as a first step and could lead to improved understanding of the use of ALE for atomic-scale etching,” said Joseph Vella, a post-doctoral fellow at PPPL and lead author of the journal paper. Improved understanding will enable PPPL to investigate such things as the extent of surface damage and the degree of roughness developed during ALE, he said, “and this all starts with building our fundamental understanding of atomic layer etching.”
    The model simulated the sequential use of chlorine gas and argon plasma ions to control the silicon etch process on an atomic scale. Plasma, or ionized gas, is a mixture consisting of free electrons, positively charged ions and neutral molecules. The plasma used in semiconductor device processing is near room temperature, in contrast to the ultra-hot plasma used in fusion experiments.
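    The cycle described above lends itself to a simple picture: a self-limiting chlorination step followed by an ion-bombardment step that ideally removes only the chlorinated layer. The toy sketch below illustrates that logic; every parameter value is an illustrative placeholder, not a number from the PPPL or Lam Research model.

    ```python
    # Toy sketch of an idealized atomic layer etching (ALE) cycle: a chlorine
    # adsorption step passivates the top silicon layer, then argon-ion
    # bombardment removes (ideally) just that layer. All parameter values are
    # illustrative placeholders, not taken from the PPPL/Lam Research model.

    def ale_cycle(layers_remaining: float,
                  chlorination_coverage: float = 0.95,  # fraction of surface passivated
                  removal_fraction: float = 0.9) -> float:  # fraction of that layer etched
        """Return the number of atomic layers remaining after one ALE cycle."""
        etched = min(1.0, chlorination_coverage * removal_fraction)
        return max(0.0, layers_remaining - etched)

    film = 20.0  # starting thickness, in atomic layers of silicon
    for cycle in range(5):
        film = ale_cycle(film)
        print(f"cycle {cycle + 1}: {film:.2f} layers remaining")
    ```

    In the ideal limit both steps are self-limiting, so each cycle removes exactly one layer; real processes fall short of that, which is one reason fundamental simulations like PPPL’s are useful.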
    “A surprise empirical finding from Lam Research was that the ALE process became particularly effective when the ion energies were quite a bit higher than the ones we started with,” Graves said. “So that will be our next step in the simulations — to see if we can understand what’s happening when the ion energy is much higher and why it’s so good.”
    Going forward, “the semiconductor industry as a whole is contemplating a major expansion in the materials and the types of devices to be used, and this expansion will also have to be processed with atomic scale precision,” he said. “The U.S. goal is to lead the world in using science to tackle important industrial problems,” he said, “and our work is part of that.”
    This study was partially supported by the DOE Office of Science. Coauthors included David Humbird of DWH Consulting in Centennial, Colorado.
    Story Source:
    Materials provided by DOE/Princeton Plasma Physics Laboratory. Original written by John Greenwald. Note: Content may be edited for style and length.

  • Researchers develop hybrid human-machine framework for building smarter AI

    From chatbots that answer tax questions to algorithms that drive autonomous vehicles and dish out medical diagnoses, artificial intelligence undergirds many aspects of daily life. Creating smarter, more accurate systems requires a hybrid human-machine approach, according to researchers at the University of California, Irvine. In a study published this month in Proceedings of the National Academy of Sciences, they present a new mathematical model that can improve performance by combining human and algorithmic predictions and confidence scores.
    “Humans and machine algorithms have complementary strengths and weaknesses. Each uses different sources of information and strategies to make predictions and decisions,” said co-author Mark Steyvers, UCI professor of cognitive sciences. “We show through empirical demonstrations as well as theoretical analyses that humans can improve the predictions of AI even when human accuracy is somewhat below [that of] the AI — and vice versa. And this accuracy is higher than combining predictions from two individuals or two AI algorithms.”
    To test the framework, researchers conducted an image classification experiment in which human participants and computer algorithms worked separately to correctly identify distorted pictures of animals and everyday items — chairs, bottles, bicycles, trucks. The human participants ranked their confidence in the accuracy of each image identification as low, medium or high, while the machine classifier generated a continuous score. The results showed large differences in confidence between humans and AI algorithms across images.
    “In some cases, human participants were quite confident that a particular picture contained a chair, for example, while the AI algorithm was confused about the image,” said co-author Padhraic Smyth, UCI Chancellor’s Professor of computer science. “Similarly, for other images, the AI algorithm was able to confidently provide a label for the object shown, while human participants were unsure if the distorted picture contained any recognizable object.”
    When predictions and confidence scores from both were combined using the researchers’ new Bayesian framework, the hybrid model led to better performance than either human or machine predictions achieved alone.
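    As a concrete illustration of the general idea (not the paper’s actual model), one simple way to fuse a classifier’s probability with a discrete human confidence rating is to add their log-odds, naive-Bayes style. The low/medium/high probability mapping below is an invented placeholder.

    ```python
    import math

    # Minimal sketch of human-machine fusion: convert each side's confidence
    # to log-odds, add them (treating the two sources as independent), and map
    # back to a probability. The low/medium/high -> probability table is an
    # invented placeholder; the PNAS paper's Bayesian model is more elaborate.

    HUMAN_CONF = {"low": 0.55, "medium": 0.70, "high": 0.90}

    def log_odds(p: float) -> float:
        return math.log(p / (1.0 - p))

    def fuse(machine_p: float, human_conf: str) -> float:
        """Fused probability that the label both sources agree on is correct."""
        total = log_odds(machine_p) + log_odds(HUMAN_CONF[human_conf])
        return 1.0 / (1.0 + math.exp(-total))

    print(fuse(0.80, "high"))  # both fairly sure -> ~0.97
    print(fuse(0.80, "low"))   # human unsure     -> ~0.83
    ```

    The arithmetic matches the quoted finding: even a source that is individually weaker can push the fused estimate in the right direction when its confidence carries independent information.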
    “While past research has demonstrated the benefits of combining machine predictions or combining human predictions — the so-called ‘wisdom of the crowds’ — this work forges a new direction in demonstrating the potential of combining human and machine predictions, pointing to new and improved approaches to human-AI collaboration,” Smyth said.
    This interdisciplinary project was facilitated by the Irvine Initiative in AI, Law, and Society. The convergence of the cognitive sciences, which focus on understanding how humans think and behave, with computer science, which produces the technologies, will provide further insight into how humans and machines can collaborate to build more accurate artificially intelligent systems, the researchers said.
    Additional co-authors include Heliodoro Tejada, a UCI graduate student in cognitive sciences, and Gavin Kerrigan, a UCI Ph.D. student in computer science.
    Funding for this study was provided by the National Science Foundation under award numbers 1927245 and 1900644 and the HPI Research Center in Machine Learning and Data Science at UCI.
    Story Source:
    Materials provided by University of California – Irvine. Note: Content may be edited for style and length.

  • Objection: No one can understand what you’re saying

    Legal documents, such as contracts or deeds, are notoriously difficult for nonlawyers to understand. A new study from MIT cognitive scientists has determined just why these documents are often so impenetrable.
    After analyzing thousands of legal contracts and comparing them to other types of texts, the researchers found that lawyers have a habit of frequently inserting long definitions in the middle of sentences (an invented example: “In the event that any payment, as defined in Section 2 of this agreement, is delayed, the buyer may terminate the contract”). Linguists have previously demonstrated that this type of structure, known as “center-embedding,” makes text much more difficult to understand.
    While center-embedding had the most significant effect on comprehension difficulty, the MIT study found that the use of unnecessary jargon also contributes.
    “It’s not a secret that legal language is very hard to understand. It’s borderline incomprehensible a lot of the time,” says Edward Gibson, an MIT professor of brain and cognitive sciences and the senior author of the new paper. “In this study, we’re documenting in detail what the problem is.”
    The researchers hope that their findings will lead to greater awareness of this issue and stimulate efforts to make legal documents more accessible to the general public.
    “Making legal language more straightforward would help people understand their rights and obligations better, and therefore be less susceptible to being unnecessarily punished or not being able to benefit from their entitled rights,” says Eric Martinez, a recent law school graduate and licensed attorney who is now a graduate student in brain and cognitive sciences at MIT.

  • Physicists discover method for emulating nonlinear quantum electrodynamics in a laboratory setting

    On the big screen, in video games and in our imaginations, lightsabers flare and catch when they clash together. In reality, as in a laser light show, beams of light pass through each other, creating spiderweb patterns. That clashing, or interference, happens only in fiction and in places with enormous magnetic and electric fields, which occur in nature only near massive objects such as neutron stars. There, the strong fields reveal that the vacuum isn’t truly a void: where light beams intersect, they scatter into rainbows.
    A weak version of this effect has been observed in modern particle accelerators, but it is completely absent from our daily lives or even normal laboratory environments.
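    For context, the nonlinearity at issue is captured by the textbook weak-field Euler-Heisenberg effective Lagrangian (a standard QED result quoted here for orientation, not a formula from the Purdue paper):

    ```latex
    \mathcal{L} = \frac{1}{2}\left(\mathbf{E}^{2}-\mathbf{B}^{2}\right)
      + \frac{2\alpha^{2}}{45\,m_{e}^{4}}
        \left[\left(\mathbf{E}^{2}-\mathbf{B}^{2}\right)^{2}
        + 7\left(\mathbf{E}\cdot\mathbf{B}\right)^{2}\right]
    ```

    Here α is the fine-structure constant and m_e the electron mass, in natural units (ħ = c = 1). The quartic terms couple light to light, which is what lets beams scatter off one another instead of passing through, but they are so small that enormous fields are needed to see the effect.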
    Yuli Lyanda-Geller, professor of physics and astronomy in the College of Science at Purdue University, in collaboration with Aydin Keser and Oleg Sushkov from the University of New South Wales in Australia, discovered that it is possible to produce this effect in a class of novel materials: bismuth, its solid solutions with antimony, and tantalum arsenide.
    With this knowledge, the effect can be studied, potentially leading to vastly more sensitive sensors as well as supercapacitors for energy storage that could be turned on and off by a controlled magnetic field.
    “Most importantly, one of the deepest quantum mysteries in the universe can be tested and studied in a small laboratory experiment,” Lyanda-Geller said. “With these materials, we can study effects of the universe. We can study what happens in neutron stars from our laboratories.”
    Brief summary of methods
    Keser, Lyanda-Geller and Sushkov took nonperturbative quantum field theory methods developed to describe high-energy particles and extended them to analyze the behavior of so-called Dirac materials, which have recently become a focus of interest. The extension yielded results that go beyond both the known high-energy results and the general framework of condensed matter and materials physics. They suggested various experimental configurations with applied electric and magnetic fields, and identified the materials best suited to studying this quantum electrodynamic effect experimentally in a nonaccelerator setting.
    They subsequently discovered that their results better explained some magnetic phenomena that had been observed and studied in earlier experiments.
    Funding
    U.S. Department of Energy, Office of Basic Energy Sciences; Division of Materials Sciences and Engineering; and the Australian Research Council, Centre of Excellence in Future Low Energy Electronics Technologies
    Story Source:
    Materials provided by Purdue University. Original written by Brittany Steff. Note: Content may be edited for style and length.

  • Simulated human eye movement aims to train metaverse platforms

    Computer engineers at Duke University have developed virtual eyes that simulate how humans look at the world accurately enough for companies to train virtual reality and augmented reality programs. Called EyeSyn for short, the program will help developers create applications for the rapidly expanding metaverse while protecting user data.
    The results have been accepted and will be presented at the International Conference on Information Processing in Sensor Networks (IPSN), May 4-6, 2022, a leading annual forum on research in networked sensing and control.
    “If you’re interested in detecting whether a person is reading a comic book or advanced literature by looking at their eyes alone, you can do that,” said Maria Gorlatova, the Nortel Networks Assistant Professor of Electrical and Computer Engineering at Duke.
    “But training that kind of algorithm requires data from hundreds of people wearing headsets for hours at a time,” Gorlatova added. “We wanted to develop software that not only reduces the privacy concerns that come with gathering this sort of data, but also allows smaller companies who don’t have those levels of resources to get into the metaverse game.”
    The poetic insight describing eyes as the windows to the soul has been repeated since at least Biblical times for good reason: the tiny movements of our eyes and the dilations of our pupils provide a surprising amount of information. Our eyes can reveal whether we’re bored or excited, where our concentration is focused, whether we’re expert or novice at a given task, and even whether we’re fluent in a specific language.
    “Where you’re prioritizing your vision says a lot about you as a person, too,” Gorlatova said. “It can inadvertently reveal sexual and racial biases, interests that we don’t want others to know about, and information that we may not even know about ourselves.”
    Eye movement data is invaluable to companies building platforms and software in the metaverse. For example, reading a user’s eyes allows developers to tailor content to engagement responses or reduce resolution in the user’s peripheral vision to save computational power.
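    The second use is commonly known as foveated rendering. The sketch below shows the gaze-contingent logic in toy form; the angular thresholds and falloff are invented assumptions for illustration and are not part of EyeSyn.

    ```python
    import math

    # Toy sketch of gaze-contingent ("foveated") rendering: full resolution
    # near the gaze point, coarser resolution with angular distance from it
    # (eccentricity). All thresholds are invented for illustration.

    def detail_level(eccentricity: float) -> float:
        """Fraction of full resolution to render at a given eccentricity (deg)."""
        if eccentricity < 5.0:        # foveal region: full detail
            return 1.0
        if eccentricity < 20.0:       # parafoveal: linear falloff
            return 1.0 - 0.04 * (eccentricity - 5.0)
        return 0.4                    # periphery: fixed low detail

    def eccentricity_deg(gaze, pixel, pixels_per_degree: float = 40.0) -> float:
        """Angular distance between the gaze point and a pixel (both in pixels)."""
        return math.hypot(pixel[0] - gaze[0], pixel[1] - gaze[1]) / pixels_per_degree

    gaze = (960, 540)  # user looking at the center of a 1920x1080 display
    for px in [(960, 540), (1200, 540), (1800, 900)]:
        ecc = eccentricity_deg(gaze, px)
        print(f"pixel {px}: {ecc:5.1f} deg -> {detail_level(ecc):.2f}x resolution")
    ```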

  • Harnessing AI and robotics to treat spinal cord injuries

    By employing artificial intelligence (AI) and robotics to formulate therapeutic proteins, a team led by Rutgers researchers has successfully stabilized an enzyme able to degrade scar tissue resulting from spinal cord injuries and promote tissue regeneration.
    The study, recently published in Advanced Healthcare Materials, details the team’s groundbreaking stabilization of the enzyme chondroitinase ABC (ChABC), offering new hope for patients coping with spinal cord injuries.
    “This study represents one of the first times artificial intelligence and robotics have been used to formulate highly sensitive therapeutic proteins and extend their activity by such a large amount. It’s a major scientific achievement,” said Adam Gormley, the project’s principal investigator and an assistant professor of biomedical engineering at Rutgers School of Engineering (SOE) at Rutgers University-New Brunswick.
    Gormley expressed that his research is also motivated, in part, by a personal connection to spinal cord injury.
    “I’ll never forget being at the hospital and learning a close college friend would likely never walk again after being paralyzed from the waist down after a mountain biking accident,” Gormley recalled. “The therapy we are developing may someday help people such as my friend lessen the scar on their spinal cords and regain function. This is a great reason to wake up in the morning and fight to further the science and potential therapy.”
    Shashank Kosuri, a biomedical engineering doctoral student at Rutgers SOE and a lead author of the study, noted that spinal cord injuries, or SCIs, can negatively impact the physical, psychological, and socio-economic well-being of patients and their families. Soon after an SCI, a secondary cascade of inflammation produces dense scar tissue that can inhibit or prevent nervous tissue regeneration.
    The enzyme successfully stabilized in the study, ChABC, is known to degrade scar tissue molecules and promote tissue regeneration, yet it is highly unstable at the human body temperature of 98.6 °F (37 °C) and loses all activity within a few hours. Kosuri noted that this necessitates multiple, expensive infusions at very high doses to maintain therapeutic efficacy.
    Synthetic copolymers are able to wrap around enzymes such as ChABC and stabilize them in hostile microenvironments. To stabilize the enzyme, the researchers used an AI-driven approach with liquid-handling robotics to synthesize and test the ability of numerous copolymers to stabilize ChABC and maintain its activity at 98.6 °F, in the kind of closed loop sketched below.
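    In outline, such a loop alternates between proposing candidate copolymer compositions, having the liquid-handling robot synthesize and assay them, and feeding the results back into the model. The sketch below is only a schematic of that loop; the monomer names, the random “assay,” and the naive proposer are hypothetical stand-ins, not the Rutgers pipeline.

    ```python
    import random

    # Schematic design-build-test loop for enzyme-stabilizing copolymers.
    # `assay_retained_activity` stands in for robotic synthesis plus an
    # activity assay at body temperature; a real loop would replace the
    # random proposer with a trained model. Everything here is hypothetical.

    MONOMERS = ["monomer_A", "monomer_B", "monomer_C", "monomer_D"]

    def assay_retained_activity(composition: dict) -> float:
        """Placeholder for the robotic synthesis + ChABC activity assay."""
        return random.random()  # real loop: measured fraction of activity retained

    def propose_candidates(history: list, n: int = 8) -> list:
        """Naive proposer: random compositions; a real loop would use the model."""
        candidates = []
        for _ in range(n):
            weights = [random.random() for _ in MONOMERS]
            total = sum(weights)
            candidates.append({m: w / total for m, w in zip(MONOMERS, weights)})
        return candidates

    history = []  # (composition, retained_activity) pairs
    for round_num in range(3):  # three design-build-test rounds
        for comp in propose_candidates(history):
            history.append((comp, assay_retained_activity(comp)))
        best_activity = max(score for _, score in history)
        print(f"round {round_num + 1}: best retained activity {best_activity:.2f}")
    ```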
    While the researchers identified several copolymers that performed well, Kosuri reported that one copolymer combination retained 30% of the enzyme’s activity for up to one week, a promising result for patients seeking care for spinal cord injuries.
    The study received support from grants funded by the National Institutes of Health, the National Science Foundation, and The New Jersey Commission on Spinal Cord research. In addition to Gormley and Kosuri, the Rutgers research team also included SOE Professor Li Cai and Distinguished Professor Martin Yarmush, as well as several SOE-affiliated students. Faculty and students from Princeton University’s Department of Chemical and Biological Engineering also collaborated on the project.
    Story Source:
    Materials provided by Rutgers University. Original written by Emily Everson Layden. Note: Content may be edited for style and length.

  • Event horizons are tunable factories of quantum entanglement

    LSU physicists have leveraged quantum information theory techniques to reveal a mechanism for amplifying, or “stimulating,” the production of entanglement in the Hawking effect in a controlled manner. Furthermore, these scientists propose a protocol for testing this idea in the laboratory using artificially produced event horizons. These results were recently published in Physical Review Letters as “Quantum aspects of stimulated Hawking radiation in an analog white-black hole pair,” in which Ivan Agullo, Anthony J. Brady and Dimitrios Kranas present these ideas and apply them to optical systems containing the analog of a white-black hole pair.
    Black holes are some of the most mystifying objects in our universe, largely due to the fact that their inner-workings are hidden behind a completely obscuring veil — the black hole’s event horizon.
    In 1974, Stephen Hawking added more mystique to the character of black holes by showing that, once quantum effects are considered, a black hole isn’t really black at all but instead emits radiation, as if it were a hot body, gradually losing mass in the so-called “Hawking evaporation process.” Further, Hawking’s calculations showed that the emitted radiation is quantum mechanically entangled with the bowels of the black hole itself. This entanglement is the quantum signature of the Hawking effect. This astounding result is difficult, if not impossible, to test on astrophysical black holes, since the faint Hawking radiation is outshone by other sources of radiation in the cosmos.
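    For scale, the temperature of that radiation is set by the standard Hawking formula (a textbook result quoted here for context, not a formula from the new paper):

    ```latex
    T_{H} = \frac{\hbar c^{3}}{8\pi G M k_{B}}
      \approx 6\times 10^{-8}\,\mathrm{K}\;\frac{M_{\odot}}{M}
    ```

    A black hole of one solar mass would thus radiate at roughly sixty billionths of a kelvin, far colder than the cosmic microwave background, which is why the astrophysical signal is swamped.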
    On the other hand, in the 1980s, a seminal article by William Unruh established that the spontaneous production of entangled Hawking particles occurs in any system that can support an effective event horizon. Such systems, which generally fall under the umbrella of “analog gravity,” opened a window for testing Hawking’s ideas in the laboratory.
    Serious experimental investigations into analog gravity systems — made of Bose-Einstein condensates, non-linear optical fibers, or even flowing water — have been underway for more than a decade. Stimulated and spontaneously generated Hawking radiation has recently been observed in several platforms, but measuring entanglement has proved elusive due to its faint and fragile character.
    “We show that, by illuminating the horizon, or horizons, with appropriately chosen quantum states, one can amplify the production of entanglement in Hawking’s process in a tunable manner,” said Associate Professor Ivan Agullo. “As an example, we apply these ideas to the concrete case of a pair of analog white-black holes sharing an interior and produced within a non-linear optical material.”
    “Many of the quantum information tools used in this research were from my graduate research with Professor Jonathan P. Dowling,” said 2021 PhD alumnus Anthony Brady, postdoctoral researcher at the University of Arizona. “Jon was a charismatic character, and he brought his charisma and unconventionality into his science, as well as his advising. He encouraged me to work on eccentric ideas, like analog black holes, and see if I could meld techniques from various fields of physics — like quantum information and analog gravity — in order to produce something novel, or ‘cute,’ as he liked to say.”
    “The Hawking process is one of the richest physical phenomena connecting seemingly unrelated fields of physics from the quantum theory to thermodynamics and relativity,” said Dimitrios Kranas, LSU graduate student. “Analog black holes came to add an extra flavor to the effect providing us, at the same time, with the exciting possibility of testing it in the laboratory. Our detailed numerical analysis allows us to probe new features of the Hawking process, helping us understand better the similarities and differences between astrophysical and analog black holes.”
    Story Source:
    Materials provided by Louisiana State University. Note: Content may be edited for style and length.

  • Researchers map magnetic fields in 3D, findings could improve device storage capacity

    Researchers from the University of New Hampshire have mapped magnetic fields in three dimensions, a major step toward solving what they call the “grand challenge” of revealing 3D magnetic configuration in magnetic materials. The work has implications for improving diagnostic imaging and capacity in storage devices.
    “The number three really represents a breakthrough in this field,” said Jiadong Zang, associate professor of physics. “Our brain is a three-dimensional object. It’s ironic that all our devices are two-dimensional. They’re underperforming compared to our brains.”
    The study, published recently in the journal Nature Materials, presents the results of three years of high-performance numerical simulations, mapping the three-dimensional structure of a 100-nanometer magnetic tetrahedron sample using only three projection angles of electron beams. Zang points to computed tomography medical imaging, or CT scans, as an analogy: instead of sending many X-ray beams through the body to map tissues, the same images could in principle be produced with only three beams.
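    To convey the flavor of reconstructing an object from only a few projection angles, the toy below performs unfiltered back-projection on a 2D scalar image with three angles. It is only an analogy: the UNH work reconstructs 3D vector magnetic configurations with far more sophisticated methods.

    ```python
    import numpy as np
    from scipy.ndimage import rotate

    # Toy few-angle tomography: project a 2D scalar image at three angles,
    # then reconstruct by unfiltered back-projection (smear each projection
    # across the image and rotate it back). Purely illustrative; the UNH
    # study reconstructs 3D *vector* fields with far better methods.

    def project(image: np.ndarray, angle_deg: float) -> np.ndarray:
        """Line-integral projection of the image at one angle (a 1D profile)."""
        return rotate(image, angle_deg, reshape=False, order=1).sum(axis=0)

    def back_project(profiles, angles_deg, size: int) -> np.ndarray:
        """Unfiltered back-projection of a set of 1D profiles."""
        recon = np.zeros((size, size))
        for profile, angle in zip(profiles, angles_deg):
            smear = np.tile(profile, (size, 1))   # smear profile across rows
            recon += rotate(smear, -angle, reshape=False, order=1)
        return recon / len(angles_deg)

    size = 64
    phantom = np.zeros((size, size))
    phantom[24:40, 28:36] = 1.0                   # a simple block "sample"
    angles = [0.0, 60.0, 120.0]                   # only three projection angles
    profiles = [project(phantom, a) for a in angles]
    recon = back_project(profiles, angles, size)
    print("reconstruction peaks near the block:", float(recon.max()))
    ```

    With only three angles, this naive reconstruction is blurry; recovering a faithful 3D magnetic configuration from so few projections is what makes the result notable.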
    Reducing electron beam exposure in fast three-dimensional magnetic imaging is one potential application for this collaborative research. The researchers’ findings also have implications for improving storage capacity of magnetic memory devices, which currently deposit circuits onto two-dimensional panels that are approaching maximum density.
    The method offered by this research will be a useful tool to detect and characterize three-dimensional magnetic circuits.
    Zang and Alexander Booth, a former UNH doctoral student, conducted the theoretical analysis. Researchers from Japan and the University of Wisconsin performed the physical experiments. Funds from the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES) under award number DE-SC0020221 helped support Zang and Booth’s contributions to this research.
    The University of New Hampshire inspires innovation and transforms lives in our state, nation and world. More than 16,000 students from all 50 states and 71 countries engage with an award-winning faculty in top-ranked programs in business, engineering, law, health and human services, liberal arts and the sciences across more than 200 programs of study. A Carnegie Classification R1 institution, UNH partners with NASA, NOAA, NSF and NIH, and received $260 million in competitive external funding in FY21 to further explore and define the frontiers of land, sea and space.
    Story Source:
    Materials provided by University of New Hampshire. Original written by Beth Potier. Note: Content may be edited for style and length.