More stories

  • Matter and antimatter seem to respond equally to gravity

    As part of an experiment to measure — to an extremely precise degree — the charge-to-mass ratios of protons and antiprotons, the RIKEN-led BASE collaboration at CERN, Geneva, Switzerland, has found that, within the uncertainty of the experiment, matter and antimatter respond to gravity in the same way.
    Matter and antimatter pose some of the most interesting problems in physics today. A particle and its antiparticle are essentially equivalent: where one carries a positive charge, the other carries a negative one, but in every other respect they appear identical. One of the great mysteries of physics, known as “baryon asymmetry,” is that despite this apparent equivalence, the universe seems to be made up almost entirely of matter, with very little antimatter. Naturally, scientists around the world are working hard to find some difference between the two that could explain why we exist.
    As part of this quest, scientists have explored whether matter and antimatter interact similarly with gravity, or whether antimatter would experience gravity in a different way than matter, which would violate Einstein’s weak equivalence principle. Now, the BASE collaboration has shown, within strict boundaries, that antimatter does in fact respond to gravity in the same way as matter.
    The finding, published in Nature, actually came from a different experiment, which was examining the charge-to-mass ratios of protons and antiprotons, one of the other important measurements that could determine the key difference between the two.
    The effort involved 18 months of work at CERN’s antimatter factory. To make the measurements, the team confined antiprotons and negatively charged hydrogen ions, which they used as a proxy for protons, in a Penning trap. In this device, a particle follows a cyclical trajectory with a frequency, close to the cyclotron frequency, that scales with the trap’s magnetic-field strength and the particle’s charge-to-mass ratio. By feeding antiprotons and negatively charged hydrogen ions into the trap, one at a time, they were able to measure the cyclotron frequencies of the two particle types under identical conditions, and thereby compare their charge-to-mass ratios. According to Stefan Ulmer, the leader of the project, “By doing this, we were able to obtain a result that they are essentially equivalent, to a degree four times more precise than previous measures. To this level of CPT invariance, causality and locality hold in the relativistic quantum field theories of the Standard Model.”
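    The physics behind the comparison is compact: the free-space cyclotron frequency is f_c = |q|B / (2πm), so the magnetic field cancels when two species are measured in the same trap, and the frequency ratio directly compares their charge-to-mass ratios. The sketch below illustrates only this principle; the 2 T field strength and the neglect of the H⁻ ion’s binding-energy corrections are simplifying assumptions for illustration, not the BASE collaboration’s actual parameters or analysis.

```python
# Minimal sketch of the Penning-trap comparison principle (not BASE's analysis).
# Assumptions for illustration: a 2 T trap field, and H- treated as a proton
# plus two electrons with binding energies neglected.
import math

E_CHARGE = 1.602176634e-19      # elementary charge, C (exact SI value)
M_PROTON = 1.67262192369e-27    # proton mass, kg (CODATA 2018)
M_ELECTRON = 9.1093837015e-31   # electron mass, kg (CODATA 2018)

B_FIELD = 2.0                   # trap magnetic field, tesla (illustrative only)

def cyclotron_frequency(charge: float, mass: float, b: float) -> float:
    """Free-space cyclotron frequency f_c = |q| B / (2 pi m), in hertz."""
    return abs(charge) * b / (2.0 * math.pi * mass)

# Antiproton: charge -e; its mass equals the proton's if CPT symmetry holds,
# which is the hypothesis the measurement tests.
f_antiproton = cyclotron_frequency(-E_CHARGE, M_PROTON, B_FIELD)

# H- proxy ion: charge -e, mass ~ proton + two electrons (binding energies ignored).
f_hminus = cyclotron_frequency(-E_CHARGE, M_PROTON + 2.0 * M_ELECTRON, B_FIELD)

# B cancels in the ratio, so the frequency ratio is a pure charge-to-mass comparison.
print(f"f_c(antiproton): {f_antiproton / 1e6:.4f} MHz")
print(f"f_c(H-):         {f_hminus / 1e6:.4f} MHz")
print(f"frequency ratio: {f_antiproton / f_hminus:.9f}")
```

    In the real analysis, the measured H⁻ frequency is corrected for the mass of the ion’s two electrons and their binding energies before the proton-antiproton comparison is made.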
    Interestingly, the group used the measurements to test a fundamental physics law known as the weak equivalence principle. According to this principle, different bodies in the same gravitational field should undergo the same acceleration in the absence of frictional forces. Because the BASE experiment sits on the surface of the Earth, the proton and antiproton cyclotron-frequency measurements were made in the planet’s gravitational field, and any difference between the gravitational interaction of protons and antiprotons would show up as a difference between their cyclotron frequencies.
    By sampling the gravitational field of the Earth as the planet orbited the Sun, the scientists found that matter and antimatter respond to gravity in the same way to within three parts in 100: any difference between the gravitational accelerations of matter and antimatter must be smaller than 3% of the acceleration they experience.
    Ulmer adds that these measurements could lead to new physics. He says, “The 3% accuracy of the gravitational interaction obtained in this study is comparable to the accuracy goal of the gravitational interaction between antimatter and matter that other research groups plan to measure using free-falling anti-hydrogen atoms. If the results of our study differ from those of the other groups, it could lead to the dawn of a completely new physics.”
    The research group, led by RIKEN, included scientists from international partners including CERN, the Max Planck Society, Germany’s national metrology institute PTB, the Universities of Mainz and Hannover, the University of Tokyo, and GSI Darmstadt.
    Story Source:
    Materials provided by RIKEN.

  • The first topological acoustic transistor

    Topological materials move electrons along their surface and edges without any loss, making them promising materials for dissipationless, high-efficiency electronics. Researchers are especially interested in using these materials as transistors, the backbone of all modern electronics. But there’s a problem: Transistors switch electronic current on and off, but it’s difficult to turn off the dissipationless flow of electrons in topological materials.
    Now, Harvard University researchers have designed and simulated the first topological acoustic transistors — with sound waves instead of electrons — and proposed a connection architecture to form a universal logic gate that can switch the flow of sound on and off.
    “Since the advent of topological materials around 2007, there has been a lot of interest in developing a topological electronic transistor,” said Jenny Hoffman, the Clowes Professor of Science at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Department of Physics. “Although the materials we used won’t yield an electronic topological transistor, our general design process applies to both quantum materials and photonic crystals, raising hopes that electronic and optical equivalents may not be far behind.”
    The research is published in Physical Review Letters.
    By using acoustic topological insulators, the researchers were able to sidestep the complicated quantum mechanics of electron topological insulators.
    “The equations for sound waves are exactly solvable, which allowed us to numerically find just the right combination of materials to design a topological acoustic waveguide that turns on when heated, and off when cooled,” said Harris Pirie, a former graduate student in the Department of Physics and first author of the paper.

  • System recognizes hand gestures to expand computer input on a keyboard

    Researchers are developing a new technology that uses hand gestures to carry out commands on computers.
    The prototype, called “Typealike,” works through a regular laptop webcam with a simple affixed mirror. The program recognizes the user’s hands beside or near the keyboard and prompts operations based on different hand positions.
    A user could, for example, place their right hand with the thumb pointing up beside the keyboard, and the program would recognize this as a signal to increase the volume. Different gestures and different combinations of gestures can be programmed to carry out a wide range of operations.
    The innovation in the field of human-computer interaction aims to make user experience faster and smoother, with less need for keyboard shortcuts or working with a mouse and trackpad.
    “It started with a simple idea about new ways to use a webcam,” said Nalin Chhibber, a recent master’s graduate from the University of Waterloo’s Cheriton School of Computer Science. “The webcam is pointed at your face, but the most interaction happening on a computer is around your hands. So we thought, what could we do if the webcam could pick up hand gestures?”
    The initial insight led to the development of a small mechanical attachment that redirects the webcam downwards towards the hands. The team then created a software program capable of understanding distinct hand gestures in variable conditions and for different users. The team used machine learning techniques to train the Typealike program.
    “It’s a neural network, so you need to show the algorithm examples of what you’re trying to detect,” said Fabrice Matulic, senior researcher at Preferred Networks Inc. and a former postdoctoral researcher at Waterloo. “Some people will make gestures a little bit differently, and hands vary in size, so you have to collect a lot of data from different people with different lighting conditions.”
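    The article does not spell out Typealike’s model, so the following is only a hedged sketch of the kind of convolutional classifier such a system might train, written in PyTorch. The gesture count, input size, and layer sizes are assumptions for illustration, not the published design.

```python
# Hedged sketch: a small gesture classifier of the kind Typealike might use.
# NUM_GESTURES, IMG_SIZE, and the architecture are illustrative assumptions.
import torch
import torch.nn as nn

NUM_GESTURES = 10   # assumed number of distinct hand poses
IMG_SIZE = 64       # assumed side length of the grayscale webcam crop

class GestureNet(nn.Module):
    """Two conv blocks followed by a linear classifier over gesture classes."""
    def __init__(self, num_classes: int = NUM_GESTURES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * (IMG_SIZE // 4) ** 2, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = GestureNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on a batch of (N, 1, IMG_SIZE, IMG_SIZE) hand crops."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

    As Matulic notes, such a network only generalizes to the hand shapes, gesture styles, and lighting conditions represented in its training data, which is why the team collected examples from many different people.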
    The team recorded a database of hand gestures with dozens of research volunteers. They also had the volunteers do tests and surveys to help the team understand how to make the program as functional and versatile as possible.
    “We’re always setting out to make things people can easily use,” said Daniel Vogel, an associate professor of computer science at Waterloo. “People look at something like Typealike, or other new tech in the field of human-computer interaction, and they say it just makes sense. That’s what we want. We want to make technology that’s intuitive and straightforward, but sometimes to do that takes a lot of complex research and sophisticated software.”
    The researchers say there are further applications for the Typealike program in virtual reality, where it could eliminate the need for handheld controllers.
    Story Source:
    Materials provided by University of Waterloo.

  • Sustainable silk material for biomedical, optical, food supply applications

    While silk is best known as a component of clothing and fabric, the material has plentiful uses spanning biomedicine to environmental science. In Applied Physics Reviews, from AIP Publishing, researchers from Tufts University discuss the properties of silk and recent and future applications of the material.
    Silk makes an important biomaterial, because it does not generate an immune response in humans and promotes the growth of cells. It has been used in drug delivery, and because the material is flexible and has favorable technological properties, it is ideal for wearable and implantable health monitoring sensors.
    As an optically transparent and easily manipulated material at the nano- and microscale, silk is also useful in optics and electronics. It is used to develop diffractive optics, photonic crystals, and waveguides, among other devices.
    More recently, silk has come to the forefront of sustainability research. The material is made in nature and can be reprocessed from recycled or discarded clothing and other textiles. The use of silk coatings may also reduce food waste, which is a significant component of the global carbon footprint.
    “We are continuing to improve the integration between different disciplines,” said author Giulia Guidetti. “For example, we can use silk as a biomedical device for drug delivery but also include an optical response in that same device. This same process could be used someday in the food supply chain. Imagine having a coating which preserves the food but also tells you when the food is spoiled.”
    Silk is versatile and often superior to more traditional materials, because it can be easily chemically modified and tuned for certain properties or assembled into a specific form depending on its final use. However, controlling and optimizing these aspects depends on understanding the material’s origin.
    The bottom-up assembly of silk by silkworms has been studied for a long time, but a full picture of its construction is still lacking. The team emphasized the importance of understanding these processes, because it could allow them to fabricate the material more effectively and with more control over the final function.
    “One big challenge is that nature is very good at doing things, like making silk, but it covers an enormous dimensional parameter space,” said author Fiorenzo Omenetto. “For technology, we want to make something with repeatability, which requires being able to control a process that has inherent variability and has been perfected over thousands of years.”
    The scientists hope to see more materials and devices use silk in the future, possibly as an integral component in sensors to obtain emergent data on humans and the environment.
    Story Source:
    Materials provided by American Institute of Physics.

  • Resolving the black hole ‘fuzzball or wormhole’ debate

    Black holes really are giant fuzzballs, a new study says.
    The study attempts to put to rest the debate over Stephen Hawking’s famous information paradox, the problem created by Hawking’s conclusion that any data that enters a black hole can never leave. This conclusion accorded with the laws of thermodynamics, but opposed the fundamental laws of quantum mechanics.
    “What we found from string theory is that all the mass of a black hole is not getting sucked in to the center,” said Samir Mathur, lead author of the study and professor of physics at The Ohio State University. “The black hole tries to squeeze things to a point, but then the particles get stretched into these strings, and the strings start to stretch and expand and it becomes this fuzzball that expands to fill up the entirety of the black hole.”
    The study, published Dec. 28 in the Turkish Journal of Physics, found that string theory almost certainly holds the answer to Hawking’s paradox, as the paper’s authors had originally believed. The physicists proved theorems to show that the fuzzball theory remains the most likely solution for Hawking’s information paradox. The researchers have also published an essay showing how this work may resolve longstanding puzzles in cosmology; the essay appeared in December in the International Journal of Modern Physics.
    Mathur published a study in 2004 that theorized black holes were similar to very large, very messy balls of yarn — “fuzzballs” that become larger and messier as new objects get sucked in.
    “The bigger the black hole, the more energy that goes in, and the bigger the fuzzball becomes,” Mathur said. The 2004 study found that string theory, the physics theory that holds that all particles in the universe are made of tiny vibrating strings, could be the solution to Hawking’s paradox. With this fuzzball structure, the hole radiates like any normal body, and there is no puzzle.

  • Africa’s ‘Great Green Wall’ could have far-reaching climate effects

    Africa’s “Great Green Wall” initiative is a proposed 8,000-kilometer line of trees meant to hold back the Sahara from expanding southward. New climate simulations looking to both the region’s past and future suggest this greening could have a profound effect on the climate of northern Africa, and even beyond.

    By 2030, the project aims to plant 100 million hectares of trees along the Sahel, the semiarid zone lining the desert’s southern edge. That completed tree line could as much as double rainfall within the Sahel and would also decrease average summer temperatures throughout much of northern Africa and into the Mediterranean, according to the simulations, presented December 14 during the American Geophysical Union’s fall meeting. But, the study found, temperatures in the hottest parts of the desert would become even hotter.

    Previous studies have shown that a “green Sahara” is linked to changes in the intensity and location of the West African monsoon. That major wind system blows hot, dry air southwestward across northern Africa during the cooler months and brings slightly wetter conditions northeastward during the hotter months.

    Such changes in the monsoon’s intensity as well as its northward or southward extent led to a green Sahara period that lasted from about 11,000 to 5,000 years ago, for example (SN: 1/18/17). Some of the strongest early evidence for that greener Sahara of the past came in the 1930s, when Hungarian explorer László Almásy — the basis for the protagonist of the 1996 movie The English Patient — discovered Neolithic cave and rock art in the Libyan Desert that depicted people swimming.

    Past changes in the West African monsoon are linked to cyclical variations in Earth’s orbit, which alters how much incoming solar radiation heats up the region. But orbital cycles don’t tell the whole story, says Francesco Pausata, a climate dynamicist at the Université du Québec à Montréal who ran the new simulations. Scientists now recognize that changes in plant cover and overall dustiness can dramatically intensify those monsoon shifts, he says.

    More vegetation “helps create a local pool of moisture,” with more water cycling from soil to atmosphere, increasing humidity and therefore rainfall, says Deepak Chandan, a paleoclimatologist at the University of Toronto who was not involved in the work. Plants also make for a darker land surface compared with blinding desert sands, so that the ground absorbs more heat, Chandan says. What’s more, vegetation reduces how much dust is in the atmosphere. Dust particles can reflect sunlight back to space, so less dust means more solar radiation can reach the land. Add it all up, and these effects lead to more heat and more humidity over the land relative to the ocean, creating a larger difference in atmospheric pressure. And that means stronger, more intense monsoon winds will blow.

    The idea for Africa’s Great Green Wall came in the 1970s and ’80s, when the once-fertile Sahel began to turn barren and dry as a result of changing climate and land use. Planting a protective wall of vegetation to hold back an expanding desert is a long-standing scheme. In the 1930s, President Franklin Roosevelt mobilized the U.S. Forest Service and the Works Progress Administration to plant walls of trees from the Great Plains to Texas to slow the growth of the Dust Bowl. Since the 1970s, China has engaged in its own massive desert vegetation project — also nicknamed the Great Green Wall — in an attempt to halt the southward march of sand dunes from the Gobi Desert (SN: 7/9/21).

    Led by the African Union, Africa’s Great Green Wall project launched in 2007 and is now roughly 15 percent complete. Proponents hope the completed tree line, which will extend from Senegal to Djibouti, will not only hold back the desert from expanding southward, but also bring improved food security and millions of jobs to the region.

    What effect the finished greening might ultimately have on the local, regional and global climate has been little studied — and it needs to be, Pausata says. The initiative is, essentially, a geoengineering project, he says, and when people want to do any type of geoengineering, they should study these possible impacts.

    To investigate those possible impacts, Pausata created high-resolution computer simulations of future global warming, both with and without a simulated wall of plants along the Sahel. Against the backdrop of global warming, the Great Green Wall would decrease average summertime temperatures in most of the Sahel by as much as 1.5 degrees Celsius.

    But the Sahel’s hottest areas would get even hotter, with average temperatures increasing by as much as 1.5 degrees C. The greening would also increase rainfall across the entire region, even doubling it in some places, the research suggests.

    These results are preliminary, Pausata says, and the data presented at the meeting were only for a high-emissions future warming scenario called RCP8.5 that may not end up matching reality (SN: 1/7/20). Simulations for moderate- and lower-emissions scenarios are ongoing.

    The effects of greening the Sahara might extend far beyond the region, the simulations suggest. A stronger West African monsoon could shift larger atmospheric circulation patterns westward, influencing other climate patterns such as the El Niño Southern Oscillation and altering the tracks of tropical cyclones.

    Chandan agrees that it’s important to understand just what impact such large-scale planting might have and notes that improvements in understanding what led to past changes in the Sahara are key to simulating its future. That the Great Green Wall’s impact could be far-ranging also makes sense, he says: “The climate system is full of interactions.”

  • Simple, accurate, and efficient: Improving the way computers recognize hand gestures

    In the 2002 science fiction blockbuster film Minority Report, Tom Cruise’s character John Anderton uses his hands, sheathed in special gloves, to interface with his wall-sized transparent computer screen. The computer recognizes his gestures to enlarge, zoom in, and swipe away. Although this futuristic vision of human-computer interaction is now 20 years old, people today still interface with computers using a mouse, keyboard, remote control, or small touch screen. Researchers have, however, devoted much effort to unlocking more natural forms of communication that do not require contact between the user and the device. Voice commands are a prominent example that has found its way into modern smartphones and virtual assistants, letting us interact with and control devices through speech.
    Hand gestures constitute another important mode of human communication that could be adopted for human-computer interactions. Recent progress in camera systems, image analysis, and machine learning has made optical-based gesture recognition a more attractive option in most contexts than approaches relying on wearable sensors or data gloves, as used by Anderton in Minority Report. However, current methods are hindered by a variety of limitations, including high computational complexity, low speed, poor accuracy, or a low number of recognizable gestures. To tackle these issues, a team led by Zhiyi Yu of Sun Yat-sen University, China, recently developed a new hand gesture recognition algorithm that strikes a good balance between complexity, accuracy, and applicability. As detailed in their paper, published in the Journal of Electronic Imaging, the team adopted innovative strategies to overcome key challenges and realize an algorithm that can be easily applied in consumer-level devices.
    One of the main features of the algorithm is adaptability to different hand types. The algorithm first tries to classify the hand type of the user as either slim, normal, or broad based on three measurements accounting for relationships between palm width, palm length, and finger length. If this classification is successful, subsequent steps in the hand gesture recognition process only compare the input gesture with stored samples of the same hand type. “Traditional simple algorithms tend to suffer from low recognition rates because they cannot cope with different hand types. By first classifying the input gesture by hand type and then using sample libraries that match this type, we can improve the overall recognition rate with almost negligible resource consumption,” explains Yu.
    Another key aspect of the team’s method is the use of a “shortcut feature” to perform a prerecognition step. While the recognition algorithm is capable of identifying an input gesture out of nine possible gestures, comparing all the features of the input gesture with those of the stored samples for every possible gesture would be very time consuming. To solve this problem, the prerecognition step calculates a ratio based on the area of the hand and uses it to narrow the nine possible gestures down to the three most likely candidates. The final gesture is then decided from these three using a much more complex and high-precision feature extraction based on “Hu invariant moments.” Yu says, “The gesture prerecognition step not only reduces the number of calculations and hardware resources required but also improves recognition speed without compromising accuracy.”
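    As a hedged sketch of this two-stage idea, the code below uses OpenCV’s Hu moments, which are invariant under rotation, translation, and scaling. The specific area-ratio feature (here, hand area over bounding-box area), the nearest-neighbor matching, and the sample-library layout are illustrative assumptions; the paper’s exact definitions may differ.

```python
# Hedged sketch of a two-stage gesture recognizer: a cheap area-ratio
# prerecognition step narrows nine gestures to three candidates, then
# Hu invariant moments decide among them. Feature definitions are assumptions.
import cv2
import numpy as np

def hand_contour(binary_mask: np.ndarray) -> np.ndarray:
    """Return the largest contour in a binary hand-segmentation mask."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

def area_ratio(contour: np.ndarray) -> float:
    """Cheap 'shortcut feature': hand area divided by bounding-box area."""
    x, y, w, h = cv2.boundingRect(contour)
    return cv2.contourArea(contour) / float(w * h)

def hu_features(contour: np.ndarray) -> np.ndarray:
    """Log-scaled Hu invariant moments of the contour (7-dimensional)."""
    hu = cv2.HuMoments(cv2.moments(contour)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def recognize(contour: np.ndarray, library: dict) -> str:
    """library maps gesture name -> (mean area ratio, mean Hu feature vector),
    built from stored samples matching the user's hand type."""
    # Stage 1: prerecognition keeps the 3 gestures whose stored area ratio
    # is closest to the input's, avoiding 9 full feature comparisons.
    r = area_ratio(contour)
    candidates = sorted(library, key=lambda g: abs(library[g][0] - r))[:3]
    # Stage 2: precise matching on Hu moments among those candidates only.
    hu = hu_features(contour)
    return min(candidates, key=lambda g: np.linalg.norm(library[g][1] - hu))
```

    Restricting the expensive Hu-moment comparison to three candidates is what buys the speedup Yu describes.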
    The team tested their algorithm on both a commercial PC processor and an FPGA platform, using a USB camera. They had 40 volunteers make the nine hand gestures multiple times to build up the sample library, and another 40 volunteers to determine the accuracy of the system. Overall, the results showed that the proposed approach could recognize hand gestures in real time with an accuracy exceeding 93%, even if the input gesture images were rotated, translated, or scaled. According to the researchers, future work will focus on improving the performance of the algorithm under poor lighting conditions and increasing the number of possible gestures.
    Gesture recognition has many promising fields of application and could open up new ways of controlling electronic devices. A revolution in human-computer interaction might be close at hand!
    Story Source:
    Materials provided by SPIE–International Society for Optics and Photonics.

  • ‘Pop-up’ electronic sensors could detect when individual heart cells misbehave

    Engineers at the University of California San Diego have developed a powerful new tool that monitors the electrical activity inside heart cells, using tiny “pop-up” sensors that poke into cells without damaging them. The device directly measures the movement and speed of electrical signals traveling within a single heart cell — a first — as well as between multiple heart cells. It is also the first to measure these signals inside the cells of 3D tissues.
    The device, described Dec. 23 in the journal Nature Nanotechnology, could enable scientists to gain more detailed insights into heart disorders and diseases such as arrhythmia (abnormal heart rhythm), heart attack and cardiac fibrosis (stiffening or thickening of heart tissue).
    “Studying how an electrical signal propagates between different cells is important to understand the mechanism of cell function and disease,” said first author Yue Gu, who recently received his Ph.D. in materials science and engineering at UC San Diego. “Irregularities in this signal can be a sign of arrhythmia, for example. If the signal cannot propagate correctly from one part of the heart to another, then some part of the heart cannot receive the signal so it cannot contract.”
    “With this device, we can zoom in to the cellular level and get a very high resolution picture of what’s going on in the heart; we can see which cells are malfunctioning, which parts are not synchronized with the others, and pinpoint where the signal is weak,” said senior author Sheng Xu, a professor of nanoengineering at the UC San Diego Jacobs School of Engineering. “This information could be used to help inform clinicians and enable them to make better diagnoses.”
    The device consists of a 3D array of microscopic field effect transistors, or FETs, that are shaped like sharp pointed tips. These tiny FETs pierce through cell membranes without damaging them and are sensitive enough to detect electrical signals — even very weak ones — directly inside the cells. To evade being seen as a foreign substance and remain inside the cells for long periods of time, the FETs are coated in a phospholipid bilayer. The FETs can monitor signals from multiple cells at the same time. They can even monitor signals at two different sites inside the same cell.
    “That’s what makes this device unique,” said Gu. “It can have two FET sensors penetrate inside one cell — with minimal invasiveness — and allow us to see which way a signal propagates and how fast it goes. This detailed information about signal transportation within a single cell has so far been unknown.”
    To build the device, the team first fabricated the FETs as 2D shapes, and then bonded select spots of these shapes onto a pre-stretched elastomer sheet. The researchers then loosened the elastomer sheet, causing the device to buckle and the FETs to fold into a 3D structure that could penetrate inside cells.