More stories

  •

    Using sound to test devices, control qubits

Acoustic resonators are everywhere. In fact, there is a good chance you’re holding one in your hand right now. Most smartphones today use bulk acoustic resonators as radio-frequency filters to screen out noise that could degrade a signal. These filters are also used in most Wi-Fi and GPS systems.
    Acoustic resonators are more stable than their electrical counterparts, but they can degrade over time. There is currently no easy way to actively monitor and analyze the degradation of the material quality of these widely used devices.
Now, researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), in collaboration with researchers at the OxideMEMS Lab at Purdue University, have developed a system that uses atomic vacancies in silicon carbide to measure the stability and quality of acoustic resonators. What’s more, these vacancies could also be used for acoustically controlled quantum information processing, providing a new way to manipulate quantum states embedded in this commonly used material.
    “Silicon carbide, which is the host for both the quantum reporters and the acoustic resonator probe, is a readily available commercial semiconductor that can be used at room temperature,” said Evelyn Hu, the Tarr-Coyne Professor of Applied Physics and of Electrical Engineering and the Robin Li and Melissa Ma Professor of Arts and Sciences, and senior author of the paper. “As an acoustic resonator probe, this technique in silicon carbide could be used in monitoring the performance of accelerometers, gyroscopes and clocks over their lifetime and, in a quantum scheme, has potential for hybrid quantum memories and quantum networking.”
    The research was published in Nature Electronics.
    A look inside acoustic resonators
Silicon carbide is a common material for microelectromechanical systems (MEMS), which include bulk acoustic resonators.

  •

    Clinical trials could yield better data with fewer patients thanks to new tool

    University of Alberta researchers have developed a new statistical tool to evaluate the results of clinical trials, with the aim of allowing smaller trials to ask more complex research questions and get effective treatments to patients more quickly.
    In their paper, the team reports on their new “Chauhan Weighted Trajectory Analysis,” which they developed to improve on the Kaplan-Meier estimator, the standard tool since 1959.
    The Kaplan-Meier test limits researchers because it can only assess binary questions, such as whether patients survived or died on a treatment. It can’t include other factors such as adverse drug reactions or quality-of-life measures such as being able to walk or care for yourself. The new tool allows simultaneous evaluation and visualization of multiple outcomes in one graph.
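The contrast can be sketched in a few lines of code. This is an illustration only: the severity states and weights below are hypothetical, and the published Chauhan Weighted Trajectory Analysis may weight and aggregate outcomes differently.

```python
# Contrast a binary Kaplan-Meier estimate with a weighted multi-state
# trajectory. Severity states and weights here are hypothetical.

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate for binary (time, event) data.
    events[i] is 1 for death, 0 for censoring."""
    survival = 1.0
    curve = []
    at_risk = len(times)
    for t, e in sorted(zip(times, events)):
        if e:  # death observed at time t
            survival *= (at_risk - 1) / at_risk
        at_risk -= 1
        curve.append((t, survival))
    return curve

def weighted_trajectory(patient_states, weights):
    """Mean severity across patients at each time point, so multiple
    outcomes (sickness, hospitalization, death) share one curve."""
    n = len(patient_states)
    length = len(patient_states[0])
    return [sum(weights[p[t]] for p in patient_states) / n
            for t in range(length)]

# Hypothetical ordinal states: well < sick < hospitalized < dead
weights = {"well": 0.0, "sick": 1.0, "hospitalized": 2.0, "dead": 3.0}
patients = [
    ["well", "sick", "well", "well"],
    ["well", "sick", "hospitalized", "dead"],
    ["well", "well", "sick", "well"],
]
print(weighted_trajectory(patients, weights))  # severity can rise and fall
```

Unlike the binary curve, the weighted trajectory can go back down when patients recover, which is the "rise and fall" Chauhan describes below.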
    “In general, diseases aren’t binary,” explains first author Karsh Chauhan, a fourth-year MD student at the U of A. “Now we can capture the severity of diseases — whether they make patients sick, whether they put them in hospital, whether they lead to death — and we can capture both the rise and the fall of how patients do on different treatments.”
John Mackey, a breast cancer medical oncologist and professor emeritus of oncology, added that the tool allows researchers to run smaller, less expensive and quicker trials with fewer patients, and to bring the overall benefit of a new treatment to the world more rapidly.
    The two began working on the statistical tool three years ago when they were designing a clinical trial for a new device to prevent bedsores, which affect many patients with long-term illness. They wanted to look at how the severity of illness changed during treatment, but the Kaplan-Meier test wasn’t going to help.
“Dr. Mackey said to me, ‘If the tool doesn’t exist, then why don’t you build it yourself?’ That was very exciting,” says Chauhan, who also has a BSc in engineering physics, which he calls a degree in “problem-solving.”

  •

    Can AI grasp related concepts after learning only one?

    Humans have the ability to learn a new concept and then immediately use it to understand related uses of that concept — once children know how to “skip,” they understand what it means to “skip twice around the room” or “skip with your hands up.”
But are machines capable of this type of thinking? In the late 1980s, Jerry Fodor and Zenon Pylyshyn, philosophers and cognitive scientists, posited that artificial neural networks — the engines that drive artificial intelligence and machine learning — are not capable of making these connections, known as “compositional generalizations.” In the decades since, scientists have developed ways to instill this capacity in neural networks and related technologies, but with mixed success, keeping the debate alive.
    Researchers at New York University and Spain’s Pompeu Fabra University have now developed a technique — reported in the journal Nature — that advances the ability of these tools, such as ChatGPT, to make compositional generalizations. This technique, Meta-learning for Compositionality (MLC), outperforms existing approaches and is on par with, and in some cases better than, human performance. MLC centers on training neural networks — the engines driving ChatGPT and related technologies for speech recognition and natural language processing — to become better at compositional generalization through practice.
Developers of existing systems, including large language models, have hoped that compositional generalization will emerge from standard training methods, or have developed special-purpose architectures in order to achieve these abilities. MLC, in contrast, shows how explicitly practicing these skills allows these systems to unlock new powers, the authors note.
    “For 35 years, researchers in cognitive science, artificial intelligence, linguistics, and philosophy have been debating whether neural networks can achieve human-like systematic generalization,” says Brenden Lake, an assistant professor in NYU’s Center for Data Science and Department of Psychology and one of the authors of the paper. “We have shown, for the first time, that a generic neural network can mimic or exceed human systematic generalization in a head-to-head comparison.”
    In exploring the possibility of bolstering compositional learning in neural networks, the researchers created MLC, a novel learning procedure in which a neural network is continuously updated to improve its skills over a series of episodes. In an episode, MLC receives a new word and is asked to use it compositionally — for instance, to take the word “jump” and then create new word combinations, such as “jump twice” or “jump around right twice.” MLC then receives a new episode that features a different word, and so on, each time improving the network’s compositional skills.
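The episodic structure described above can be sketched with a toy example. This is not the authors’ actual architecture (MLC trains a neural network); the nonsense words, the modifier rules and the stand-in "model" below are all invented for illustration.

```python
# Toy sketch of MLC-style episodes: each episode introduces a new word
# mapped to a primitive action, then queries a composition of it.
import random

PRIMITIVES = {"dax": "JUMP", "zup": "SKIP", "blick": "TURN"}  # nonsense words
MODIFIERS = {"twice": 2, "thrice": 3}

def make_episode(rng):
    """One episode: a study example plus a compositional query."""
    word, action = rng.choice(list(PRIMITIVES.items()))
    mod, n = rng.choice(list(MODIFIERS.items()))
    study = (word, action)                   # e.g. "dax" -> JUMP
    query = (f"{word} {mod}", [action] * n)  # e.g. "dax twice" -> [JUMP, JUMP]
    return study, query

def compositional_model(study, query_phrase, learned_modifiers):
    """Apply a newly studied word using modifier rules acquired in
    earlier episodes (a stand-in for the trained network)."""
    word, action = study
    out = []
    for tok in query_phrase.split():
        if tok == word:
            out.append(action)
        elif tok in learned_modifiers:
            out = out * learned_modifiers[tok]
    return out

rng = random.Random(0)
learned = dict(MODIFIERS)  # assume modifiers were learned over past episodes
study, (phrase, target) = make_episode(rng)
print(compositional_model(study, phrase, learned) == target)  # True
```

The point of the episode stream is that the network never sees the same word twice in the same role, so the only thing it can learn is the composition skill itself.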
To test the effectiveness of MLC, Lake, co-director of NYU’s Minds, Brains, and Machines Initiative, and Marco Baroni, a researcher at the Catalan Institute for Research and Advanced Studies and professor at the Department of Translation and Language Sciences of Pompeu Fabra University, conducted a series of experiments with human participants on tasks identical to those performed by MLC.
In addition, rather than learning the meaning of actual words — terms humans would already know — participants also had to learn the meaning of nonsensical terms (e.g., “zup” and “dax”) as defined by the researchers and apply them in different ways. MLC performed as well as the human participants — and, in some cases, better. MLC and people also outperformed ChatGPT and GPT-4, which, despite their striking general abilities, showed difficulty with this learning task.
“Large language models such as ChatGPT still struggle with compositional generalization, though they have gotten better in recent years,” observes Baroni, a member of Pompeu Fabra University’s Computational Linguistics and Linguistic Theory research group. “But we think that MLC can further improve the compositional skills of large language models.”

  •

    Highest-resolution single-photon superconducting camera

    Researchers at the National Institute of Standards and Technology (NIST) and their colleagues have built a superconducting camera containing 400,000 pixels — 400 times more than any other device of its type.
    Superconducting cameras allow scientists to capture very weak light signals, whether from distant objects in space or parts of the human brain. Having more pixels could open up many new applications in science and biomedical research.
    The NIST camera is made up of grids of ultrathin electrical wires, cooled to near absolute zero, in which current moves with no resistance until a wire is struck by a photon. In these superconducting-nanowire cameras, the energy imparted by even a single photon can be detected because it shuts down the superconductivity at a particular location (pixel) on the grid. Combining all the locations and intensities of all the photons makes up an image.
    The first superconducting cameras capable of detecting single photons were developed more than 20 years ago. Since then, the devices have contained no more than a few thousand pixels — too limited for most applications.
Creating a superconducting camera with a greater number of pixels has posed a serious challenge: each of the camera’s superconducting components must be cooled to ultralow temperatures to function properly, and individually connecting every one of hundreds of thousands of chilled pixels to its own readout wire would be all but impossible.
    NIST researchers Adam McCaughan and Bakhrom Oripov and their collaborators at NASA’s Jet Propulsion Laboratory in Pasadena, California, and the University of Colorado Boulder overcame that obstacle by combining the signals from many pixels onto just a few room-temperature readout wires.
A general property of any superconducting wire is that it allows current to flow freely up to a certain maximum “critical” current. To take advantage of that behavior, the researchers applied a current just below the maximum to the sensors. Under that condition, if even a single photon strikes a pixel, it destroys the superconductivity. The current is no longer able to flow without resistance through the nanowire and is instead shunted to a small resistive heating element connected to each pixel. The shunted current creates an electrical signal that can rapidly be detected.
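The detection mechanism just described can be summarized numerically. The component values below are illustrative round numbers, not NIST’s actual device parameters.

```python
# Minimal sketch of single-photon detection in one superconducting
# nanowire pixel. Values are illustrative, not the real device's.

CRITICAL_CURRENT_UA = 10.0   # max supercurrent the nanowire supports (microamps)
BIAS_CURRENT_UA = 9.5        # bias held just below the critical current
SHUNT_RESISTANCE_OHM = 50.0  # small resistive element attached to the pixel

def pixel_voltage(photon_absorbed: bool) -> float:
    """Voltage pulse (microvolts) read out from one pixel.

    While superconducting, the wire has zero resistance and produces
    zero voltage. A photon hit breaks superconductivity, so the bias
    current is shunted through the resistor, producing a V = I*R pulse.
    """
    if not photon_absorbed:
        return 0.0  # current flows resistance-free through the nanowire
    return BIAS_CURRENT_UA * SHUNT_RESISTANCE_OHM

print(pixel_voltage(False))  # 0.0 — wire stays superconducting
print(pixel_voltage(True))   # 475.0 — detectable voltage pulse
```

Because a quiescent pixel contributes exactly zero signal, pulses from many pixels can share a readout line, which is what makes the few-wire multiplexing scheme workable.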

  •

    Climate change likely impacted human populations in the Neolithic and Bronze Age

    Human populations in Neolithic Europe fluctuated with changing climates, according to a study published October 25, 2023 in the open-access journal PLOS ONE by Ralph Großmann of Kiel University, Germany and colleagues.
    The archaeological record is a valuable resource for exploring the relationship between humans and the environment, particularly how each is affected by the other. In this study, researchers examined Central European regions rich in archaeological remains and geologic sources of climate data, using these resources to identify correlations between human population trends and climate change.
The three regions examined are the Circumharz region of central Germany, the Czech Republic/Lower Austria region, and the Northern Alpine Foreland of southern Germany. Researchers compiled over 3,400 published radiocarbon dates from archaeological sites in these regions to serve as indicators of ancient populations, following the logic that larger populations leave behind more materials and therefore yield more dates. Climate data came from cave formations in these regions, which provide datable information about ancient climate conditions. These data span 3550-1550 BC, from the Late Neolithic to the Early Bronze Age.
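The "dates as data" logic lends itself to a simple sketch: bin the radiocarbon dates per time slice as a population proxy, then correlate the counts with a climate proxy. The numbers below are invented for illustration; the study's actual statistics are more involved.

```python
# Sketch of correlating a radiocarbon-date population proxy with a
# climate proxy. All numbers below are hypothetical.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-century counts of radiocarbon dates (population proxy)
date_counts = [40, 55, 70, 52, 30, 28, 45, 60]
# Hypothetical speleothem-derived warmth/wetness index, same centuries
climate_index = [0.2, 0.5, 0.9, 0.6, 0.1, 0.0, 0.4, 0.8]

# A strongly positive value would mirror the study's finding that
# populations tended to grow in warm, wet periods.
print(round(pearson(date_counts, climate_index), 2))
```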
    The study found a notable correlation between climate and human populations. During warm and wet times, populations tended to increase, likely bolstered by improved crops and economies. During cold and dry times, populations often decreased, sometimes experiencing major cultural shifts with potential evidence of increasing social inequality, such as the emergence of high status “princely burials” of some individuals in the Circumharz region.
    These results suggest that at least some of the trends in human populations over time can be attributed to the effects of changing climates. The authors acknowledge that these data are susceptible to skewing by limitations of the archaeological record in these regions, and that more data will be important to support these results. This type of study is crucial for understanding human connectivity to the environment and the impacts of changing climates on human cultures.
The authors add: “Between 5500 and 3500 years ago, climate was a major factor in population development in the regions around the Harz Mountains, in the northern Alpine foreland and in the region of what is now the Czech Republic and Austria. However, not only the population size, but also the social structures changed with climate fluctuations.”

  •

    ‘Dim-witted’ pigeons use the same principles as AI to solve tasks

    A new study provides evidence that pigeons tackle some problems just as artificial intelligence would — allowing them to solve difficult tasks that would vex humans.
Previous research had shown that pigeons can learn to solve complex categorization tasks that human ways of thinking — such as selective attention and explicit rule use — would not help solve.
    Researchers had theorized that pigeons used a “brute force” method of solving problems that is similar to what is used in AI models, said Brandon Turner, lead author of the new study and professor of psychology at The Ohio State University.
    But this study may have proven it: Turner and a colleague tested a simple AI model to see if it could solve the problems in the way they thought pigeons did — and it worked.
    “We found really strong evidence that the mechanisms guiding pigeon learning are remarkably similar to the same principles that guide modern machine learning and AI techniques,” Turner said.
    “Our findings suggest that in the pigeon, nature may have found a way to make an incredibly efficient learner that has no ability to generalize or extrapolate like humans would.”
Turner conducted the study with Edward Wasserman, a professor of psychology at the University of Iowa. Their results were published recently in the journal iScience.
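The article does not specify the model's equations, but the "brute force" learning it alludes to is error-driven association: stimuli get linked to responses purely through reinforced trial and error, with no explicit rules. Below is a generic sketch of that idea, with invented stimuli; it is not the authors' published model.

```python
# Generic associative, error-driven categorization learner: strengthen
# the correct stimulus-category link, weaken an incorrect guess.
import random

def train_associative(stimuli_labels, epochs=50, lr=0.3, seed=0):
    """Learn stimulus -> category strengths by trial and error."""
    rng = random.Random(seed)
    strengths = {}  # (stimulus, category) -> association strength
    categories = sorted({label for _, label in stimuli_labels})
    for _ in range(epochs):
        for stim, label in stimuli_labels:
            # pick the currently strongest category (tiny random tiebreak)
            guess = max(categories,
                        key=lambda c: strengths.get((stim, c), 0.0)
                        + rng.random() * 1e-6)
            strengths[(stim, label)] = strengths.get((stim, label), 0.0) + lr
            if guess != label:
                strengths[(stim, guess)] = strengths.get((stim, guess), 0.0) - lr
    return strengths

data = [("red-circle", "A"), ("blue-square", "B"), ("red-square", "A")]
weights = train_associative(data)
predict = lambda s: max("AB", key=lambda c: weights.get((s, c), 0.0))
print([predict(s) for s, _ in data])  # ['A', 'B', 'A']
```

Note what such a learner cannot do: it has no mechanism to generalize to a stimulus it has never been reinforced on, echoing the quote above about efficient learning without human-like extrapolation.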

  •

    Single model predicts trends in employment, microbiomes, forests

    Researchers report that a single, simplified model can predict population fluctuations in three unrelated realms: urban employment, human gut microbiomes and tropical forests. The model will help economists, ecologists, public health authorities and others predict and respond to variability in multiple domains, the researchers say.
    The new findings are detailed in the Proceedings of the National Academy of Sciences.
    The model, which goes by the acronym SLRM, does not predict exact outcomes, but generates a narrow distribution of the most likely trajectories, said James O’Dwyer, a professor of plant biology at the University of Illinois Urbana-Champaign who developed the model with postdoctoral researcher Ashish George in the Carl R. Woese Institute for Genomic Biology at the U. of I. George is now a computational scientist at the Broad Institute in Cambridge, Massachusetts.
    “The model incorporates random events, so it predicts a range of outcomes. But the data fall right in the middle of that range of outcomes,” O’Dwyer said.
    The model divides each population into discrete sectors — for example job types such as healthcare, agriculture or retail trade — and assigns a “generation time” to each.
    “Generation time is the lifetime of a tree or microbe, or the time a person spends in a given employment sector,” George said. “It is measured in hours for microbes, years for job types, and decades for forests.” Analyzing the systems in terms of generation time for each sector revealed similarities in how all three systems behave.
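The generation-time idea can be sketched with a generic stochastic model: each sector fluctuates around a typical size, and its generation time sets how quickly fluctuations decay. This is an illustration of the concept only, not the published SLRM equations; all parameters below are invented.

```python
# Hedged sketch: sectors with longer generation times revert to their
# typical size more slowly, so their fluctuations persist longer.
import random

def simulate_sector(mean_size, generation_time, steps, noise=0.1, seed=0):
    """Random fluctuations around mean_size with mean reversion whose
    timescale is set by generation_time (in model steps)."""
    rng = random.Random(seed)
    x = float(mean_size)
    reversion = 1.0 / generation_time  # fraction of the gap closed per step
    sizes = []
    for _ in range(steps):
        shock = rng.gauss(0.0, noise * mean_size)
        x += reversion * (mean_size - x) + shock
        sizes.append(x)
    return sizes

# Same model, wildly different generation times (hypothetical values):
microbes = simulate_sector(1000, generation_time=2, steps=200)    # hours
jobs = simulate_sector(1000, generation_time=20, steps=200)       # years
trees = simulate_sector(1000, generation_time=100, steps=200)     # decades
```

Measuring each system in units of its own generation time is what lets one model describe microbiomes, job sectors and forests at once.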
The scientists relied on decades of research tracking changes in each of the different domains over time. For the employment analysis, they focused on the number of people employed in different economic sectors over time. The data came from the North American Industry Classification System and included monthly updates for 383 U.S. cities over a period of 17 years.

  •

    Bitcoin mining has ‘very worrying’ impacts on land and water, not only carbon

    As bitcoin and other cryptocurrencies have grown in market share, they’ve been criticized for their heavy carbon footprint: Cryptocurrency mining is an energy-intensive endeavor. Mining has massive water and land footprints as well, according to a new study that is the first to detail country-by-country environmental impacts of bitcoin mining. It serves as the foundation for a new United Nations (UN) report on bitcoin mining, also published today.
The study reveals how each country’s mix of energy sources defines the environmental footprint of its bitcoin mining and highlights the top 10 countries for energy, carbon, water and land use. The work was published in Earth’s Future, which publishes interdisciplinary research on the past, present and future of our planet and its inhabitants.
    “A lot of our exciting new technologies have hidden costs we don’t realize at the onset,” said Kaveh Madani, a Director at United Nations University who led the new study. “We introduce something, it gets adopted, and only then do we realize that there are consequences.”
Madani and his co-authors used energy, carbon, water and land use data from 2020 to 2021 to calculate country-specific environmental impacts for 76 countries known to mine bitcoin. They focused on bitcoin because it is older, more popular and better established than other cryptocurrencies.
    Madani said the results were “very interesting and very concerning,” in part because demand is rising so quickly. But even with more energy-efficient mining approaches, if demand continues to grow, so too will mining’s environmental footprints, he said.
    Electricity and carbon
If bitcoin mining were a country, it would be ranked 27th in energy use globally. Overall, bitcoin mining consumed about 173 terawatt hours of electricity in the two years from January 2020 to December 2021, about 60% more than the energy used for bitcoin mining in 2018-2019, the study found. Bitcoin mining emitted about 86 megatons of carbon, largely because of the dominance of fossil fuel-based energy in bitcoin-mining countries.
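Two quick back-of-the-envelope checks follow from the figures above. The 2018-2019 total is implied by the stated 60% growth rather than quoted directly, and the emissions intensity is a simple ratio of the two reported numbers.

```python
# Back-of-the-envelope arithmetic on the study's reported figures.
energy_2020_21_twh = 173.0
growth = 0.60  # 2020-21 usage was ~60% higher than 2018-19

# Implied earlier-period consumption: 173 / 1.6
energy_2018_19_twh = energy_2020_21_twh / (1 + growth)
print(f"Implied 2018-19 consumption: ~{energy_2018_19_twh:.0f} TWh")  # ~108 TWh

carbon_mt = 86.0  # megatons emitted over 2020-21
# 86e9 kg over 173e9 kWh -> average emissions intensity of mining
intensity_kg_per_kwh = (carbon_mt * 1e9) / (energy_2020_21_twh * 1e9)
print(f"~{intensity_kg_per_kwh:.2f} kg CO2 per kWh")  # ~0.50 kg/kWh
```

The roughly 0.5 kg per kWh figure is consistent with the study's point that fossil fuels dominate the energy mix of the main bitcoin-mining countries.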