More stories

  •

    Tracking a levitated nanoparticle with a mirror

    Sensing with levitated nanoparticles has so far been limited by the precision of position measurements. Now, researchers at the University of Innsbruck, led by Tracy Northup, have demonstrated a new method for optical interferometry in which light scattered by a particle is reflected by a mirror. This opens up new possibilities for using levitated particles as sensors, particularly in quantum regimes.
    Levitated nanoparticles are promising tools for sensing ultra-weak forces of biological, chemical or mechanical origin and even for testing the foundations of quantum physics. However, such applications require precise position measurement. Researchers at the Department of Experimental Physics of the University of Innsbruck, Austria, have now demonstrated a new technique that boosts the efficiency with which the position of a sub-micron levitated object is detected. “Typically, we measure a nanoparticle’s position with a technique called optical interferometry, in which part of the light emitted by a nanoparticle is compared with the light from a reference laser,” says Lorenzo Dania, a PhD student in Tracy Northup’s research group. “A laser beam, however, has a very different shape from the light pattern emitted by a nanoparticle, known as dipole radiation.” That shape difference currently limits the measurement precision.
    Self-interference method
    The new technique demonstrated by Tracy Northup, a professor at the University of Innsbruck, and her team resolves this limitation by replacing the laser beam with the light of the particle reflected by a mirror. The technique builds on a method to track barium ions that has been developed in recent years by Rainer Blatt, also of the University of Innsbruck, and his team. Last year, researchers from the two teams proposed to extend this method to nanoparticles. Now, using a nanoparticle levitated in an electromagnetic trap, the researchers showed that this method outperformed other state-of-the-art detection techniques. The result opens up new possibilities for using levitated particles as sensors — for example, to measure tiny forces — and for bringing the particles’ motion into realms described by quantum mechanics.
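As a rough illustration of the interferometric readout described above (not the team's actual implementation): when the particle's own scattered light is overlapped with its mirror reflection, a displacement x toward the mirror changes the round-trip path by 2x, shifting the interference phase and hence the detected intensity. The wavelength and intensities below are illustrative values.

```python
import math

WAVELENGTH = 1.064e-6  # illustrative wavelength in metres

def detected_intensity(x, i_signal=1.0, i_ref=1.0):
    """Interference of the particle's light with its own mirror reflection.

    A displacement x toward the mirror changes the round-trip path by 2x,
    giving a phase shift phi = 4*pi*x / wavelength.
    """
    phi = 4 * math.pi * x / WAVELENGTH
    return i_signal + i_ref + 2 * math.sqrt(i_signal * i_ref) * math.cos(phi)

# Sensitivity is highest on the fringe slope (quarter-wave offset), where
# the intensity varies linearly with small displacements.
x0 = WAVELENGTH / 8  # quadrature point: phi = pi/2
dx = 1e-12           # picometre-scale displacement
slope = (detected_intensity(x0 + dx) - detected_intensity(x0 - dx)) / (2 * dx)
print(f"signal change per metre of displacement: {slope:.3e}")
```

The point of the sketch is only the geometry: because the mirror-reflected field has the same dipole shape as the particle's light, the two overlap far better than a laser reference beam would, so more of the scattered light contributes to the fringe signal.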
    Financial support for the research was provided, among others, by the European Union as well as by the Austrian Science Fund FWF, the Austrian Academy of Sciences and the Austrian Federal Ministry of Education, Science and Research.
    Story Source:
    Materials provided by University of Innsbruck. Note: Content may be edited for style and length.

  •

    Megatooth sharks may have been higher on the food chain than any ocean animal ever

    Whenever paleontologist Dana Ehret gives talks about the 15-meter-long prehistoric sharks known as megalodons, he likes to make a joke: “What did megalodon eat?” asks Ehret, Assistant Curator of Natural History at the New Jersey State Museum in Trenton. “Well,” he says, “whatever it wanted.”

    Now, there might be evidence that’s literally true. Some megalodons (Otodus megalodon) may have been “hyper apex predators,” higher up the food chain than any ocean animal ever known, researchers report in the June 22 Science Advances. Using chemical measurements of fossilized teeth, scientists compared the diets of marine animals — from polar bears to ancient great white sharks — and found that megalodons and their direct ancestors were often predators on a level never seen before.

    The finding contradicts another recent study, which placed megalodons at a level of the food chain similar to that of great white sharks (SN: 5/31/22). If true, the new results might change how researchers think about what drove megalodons to extinction around 3.5 million years ago.

    In the latest study, researchers examined dozens of fossilized teeth for varieties of nitrogen, called isotopes, that have different numbers of neutrons. In animals, one specific nitrogen isotope tends to be more common than another. A predator absorbs both when it eats prey, so the imbalance between the isotopes grows further up the food chain. 
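The enrichment logic can be made concrete with the standard trophic-position formula used in isotope ecology: each step up the food chain enriches the heavy isotope by roughly 3.4 per mil on average. The delta-15N values below are invented for illustration, not data from the study.

```python
# Sketch of the standard trophic-position calculation from nitrogen
# isotopes. The ~3.4 per-mil enrichment per trophic level is a commonly
# cited average; the delta-15N values are made up for illustration.

ENRICHMENT_PER_LEVEL = 3.4  # per mil, typical trophic enrichment factor

def trophic_position(d15n_consumer, d15n_base, base_level=1.0):
    """Estimate trophic position from nitrogen-isotope ratios (delta-15N)."""
    return base_level + (d15n_consumer - d15n_base) / ENRICHMENT_PER_LEVEL

# Hypothetical values: a primary-producer baseline vs. two predators.
baseline = 5.0
print(trophic_position(18.6, baseline))  # level 5
print(trophic_position(25.4, baseline))  # two levels higher, level 7
```

In this framing, the surprise reported below is that megatooth shark teeth gave isotope excesses implying positions above every marine animal measured this way.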

    For years, scientists have used this trend to learn about modern creatures’ diets. But researchers were almost never able to apply it to fossils millions of years old because the nitrogen levels were too low. In the new study, scientists got around this by feeding their samples to bacteria that digest the nitrogen into a chemical the team could more easily measure.

    The result: Megalodon and its direct ancestors, known collectively as megatooth sharks, showed nitrogen isotope excesses sometimes greater than any known marine animal. They were on average probably two levels higher on the food chain than today’s great white sharks, which is like saying that some megalodons would have eaten a beast that ate great whites.

    “I definitely thought that I’d just messed up in the lab,” says Emma Kast, a biogeochemist at the University of Cambridge. Yet on closer inspection, the data held up.

    The result is “eyebrow-raising,” says Robert Boessenecker, a paleontologist at the College of Charleston in South Carolina who was not involved in the study. “Even if megalodon was eating nothing but killer whales, it would still need to be getting some of this excess nitrogen from something else,” he says, “and there’s just nothing else in the ocean today that has nitrogen isotopes that are that concentrated.”

    “I don’t know how to explain it,” he says.

    There are possibilities. Megalodons may have eaten predatory sperm whales, though those went extinct before the megatooth sharks. Or megalodons could have been cannibals (SN: 10/5/20).  

    Another complication comes from the earlier, contradictory study. Those researchers examined the same food chain —  in some cases, even the same shark teeth — using a zinc isotope instead of nitrogen. They drew the opposite conclusion, finding megalodons were on a similar level as other apex predators.

    The zinc method is not as established as the nitrogen method, though nitrogen isotopes have also rarely been used this way before. “It could be that we don’t have a total understanding and grasp of this technique,” says Sora Kim, a paleoecologist at the University of California, Merced, who was involved in both studies. “But if [the newer study] is right, that’s crazy.”

    Confirming the results would be a step toward understanding why megalodons died off. If great whites had a similar diet, it could mean that they outcompeted megalodons for food, says Ehret, who was not involved in the study. The new findings suggest that’s unlikely, but leave room for the possibility that great whites competed with — or simply ate — juvenile megalodons (SN: 1/12/21). 

    Measuring more shark teeth with both techniques could solve the mystery and reconcile the studies. At the same time, Kast says, there’s plenty to explore with their method for measuring nitrogen isotopes in fossils. “There’s so many animals and so many different ecosystems and time periods,” she says. 

    Boessenecker agrees. When it comes to the ancient oceans, he says, “I guarantee we’re going to find out some really weird stuff.”

  •

    Robot overcomes uncertainty to retrieve buried objects

    For humans, finding a lost wallet buried under a pile of items is pretty straightforward — we simply remove things from the pile until we find the wallet. But for a robot, this task involves complex reasoning about the pile and objects in it, which presents a steep challenge.
    MIT researchers previously demonstrated a robotic arm that combines visual information and radio frequency (RF) signals to find hidden objects that were tagged with RFID tags (which reflect signals sent by an antenna). Building off that work, they have now developed a new system that can efficiently retrieve any object buried in a pile. As long as some items in the pile have RFID tags, the target item does not need to be tagged for the system to recover it.
    The algorithms behind the system, known as FuseBot, reason about the probable location and orientation of objects under the pile. Then FuseBot finds the most efficient way to remove obstructing objects and extract the target item. This reasoning enabled FuseBot to find more hidden items than a state-of-the-art robotics system, in half the time.
    This speed could be especially useful in an e-commerce warehouse. A robot tasked with processing returns could find items in an unsorted pile more efficiently with the FuseBot system, says senior author Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science and director of the Signal Kinetics group in the Media Lab.
    “What this paper shows, for the first time, is that the mere presence of an RFID-tagged item in the environment makes it much easier for you to achieve other tasks in a more efficient manner. We were able to do this because we added multimodal reasoning to the system — FuseBot can reason about both vision and RF to understand a pile of items,” adds Adib.
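The kind of multimodal reasoning described above can be caricatured in a few lines (this is not the FuseBot algorithm itself, and the items and probabilities are invented): given an estimate, fused from vision and RF tag locations, of how likely each visible item is to cover the target, remove the most likely one first.

```python
# Toy sketch of greedy pile reasoning, not FuseBot itself: pick the
# visible item with the highest estimated probability of hiding the
# target, remove it, and repeat. Items and probabilities are invented.

def next_item_to_remove(cover_probabilities):
    """Pick the visible item most likely to be covering the target.

    cover_probabilities: dict mapping item name -> estimated probability
    that the target lies underneath it (e.g. inferred from RF tag
    locations plus a visual model of the pile).
    """
    return max(cover_probabilities, key=cover_probabilities.get)

pile = {"box": 0.15, "sweater": 0.55, "mug": 0.10, "book": 0.20}
print(next_item_to_remove(pile))  # "sweater"
```

The real system has to be cleverer than this greedy loop, because each removal shifts the pile and updates the probability estimates, but the sketch shows why good location estimates translate directly into fewer removals.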
    Joining Adib on the paper are research assistants Tara Boroushaki, who is the lead author; Laura Dodds; and Nazish Naeem. The research will be presented at the Robotics: Science and Systems conference.

  •

    Identifying bird species by sound, the BirdNET app opens new avenues for citizen science

    The BirdNET app, a free machine-learning powered tool that can identify over 3,000 birds by sound alone, generates reliable scientific data and makes it easier for people to contribute citizen-science data on birds by simply recording sounds.
    An article publishing June 28th in the open access journal PLOS Biology by Connor Wood and colleagues at the K. Lisa Yang Center for Conservation Bioacoustics at the Cornell Lab of Ornithology, U.S., suggests that the BirdNET app lowers the barrier to citizen science because it doesn’t require bird-identification skills to participate. Users simply listen for birds and tap the app to record. BirdNET uses artificial intelligence to automatically identify the species by sound and captures the recording for use in research.
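To give a flavor of sound-based identification (BirdNET itself uses a deep neural network trained on spectrograms, which is far more capable than this), here is a deliberately crude toy: estimate a recording's dominant pitch and map frequency bands to labels. The species-to-frequency mapping is invented purely for illustration.

```python
import math

# Toy illustration only: BirdNET uses a neural network on spectrograms.
# This sketch just estimates dominant pitch from zero crossings and maps
# frequency bands to invented labels.

def dominant_frequency(samples, sample_rate):
    """Crude pitch estimate from zero crossings of a mono signal."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    duration = len(samples) / sample_rate
    return crossings / (2 * duration)  # two crossings per cycle

def classify(freq_hz):
    """Map a dominant-frequency band to a (hypothetical) species label."""
    if freq_hz < 2000:
        return "mourning dove (low-pitched coo)"
    if freq_hz < 5000:
        return "American robin (mid-range song)"
    return "cedar waxwing (high-pitched trill)"

# One second of a synthetic 3 kHz tone standing in for a recording.
rate = 44100
tone = [math.sin(2 * math.pi * 3000 * t / rate) for t in range(rate)]
print(classify(dominant_frequency(tone, rate)))
```

Real birdsong overlaps heavily in frequency, which is exactly why a learned model over full spectrograms, rather than a single pitch number, is needed to separate thousands of species.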
    “Our guiding design principles were that we needed an accurate algorithm and a simple user interface,” said study co-author Stefan Kahl in the Yang Center at the Cornell Lab, who led the technical development. “Otherwise, users would not return to the app.” The results exceeded expectations: Since its launch in 2018, more than 2.2 million people have contributed data.
    To test whether the app could generate reliable scientific data, the authors selected four test cases in which conventional research had already provided robust answers. Their results show that BirdNET app data successfully replicated known patterns of song dialects in North American and European songbirds and accurately mapped a bird migration on both continents.
    Validating the reliability of the app data for research purposes was the first step in what they hope will be a long-term, global research effort — not just for birds, but ultimately for all wildlife and indeed entire soundscapes. Data used in the four test cases is publicly available, and the authors are working on making the entire dataset open.
    “The most exciting part of this work is how simple it is for people to participate in bird research and conservation,” Wood adds. “You don’t need to know anything about birds, you just need a smartphone, and the BirdNET app can then provide both you and the research team with a prediction for what bird you’ve heard. This has led to tremendous participation worldwide, which translates to an incredible wealth of data. It’s really a testament to an enthusiasm for birds that unites people from all walks of life.”
    The BirdNET app is part of the Cornell Lab of Ornithology’s suite of tools, including the educational Merlin Bird ID app and citizen-science apps eBird, NestWatch, and Project FeederWatch, which together have generated more than 1 billion bird observations, sounds, and photos from participants around the world for use in science and conservation.
    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  •

    Investigating positron scattering from giant molecular targets

    Particle scattering is an important test of the quantum properties of atoms and larger molecules. While electrons have historically dominated these experiments, their positively charged antimatter counterparts, positrons, can be used in promising applications where the negatively charged particles aren’t suitable.
    A new paper published in EPJ D examines the scattering of positrons from rare gas atoms encapsulated inside fullerenes, so-called “rare gas endohedrals.” The paper is authored by Km Akanksha Dubey from the Indian Institute of Technology Patna, Bihta, India, and Marcelo Ciappina of the Guangdong Technion-Israel Institute of Technology, Shantou, China.
    “Our focus was to investigate positron scattering processes with rare gas endohedrals. As a reference to the endohedral system, we also considered positron scattering from bare C60 targets,” Ciappina says. “In our study, we chose rare gas atoms for encapsulation inside carbon 60 (C60), as they are probably the most popular and studied endohedrals. Rare gas endohedrals are very stable formations; the encapsulated atoms find their equilibrium position at almost the geometrical centre of the C60.”
    The study builds upon the findings of previous studies of collisions between positrons and giant targets like C60 and rare gas endohedrals. The major difference is that resonance scattering is elucidated for different sizes of the encaged atom, in comparison with bare C60 scattering; resonances are also tested under the different scattering fields of the projectile-target complex.
    “To our surprise, resonance formations in the rare gas endohedrals are altered as compared to the case of positron-C60 collision, despite the dominant scattering field in positron scattering being repulsive in nature,” Ciappina says. Resonances at lower energies are significantly affected by the various scattering fields considered.
    “Thus, scattering resonances in the positron scattering find their natural abode in the C60 and rare gas endohedrals, and the resonance states can be favourably manipulated by keeping the rare gas atoms inside it.”
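For readers unfamiliar with scattering resonances: an isolated resonance shows up as a sharp peak in the cross-section as a function of collision energy, often described by a Breit-Wigner (Lorentzian) profile. The sketch below is generic textbook physics, not the paper's calculation, and the resonance energy and width are arbitrary.

```python
# Illustrative only: a generic Breit-Wigner resonance profile, not the
# paper's computed cross-sections. Energy and width values are arbitrary.

def breit_wigner(energy, e_res, width, peak=1.0):
    """Relative cross-section near an isolated resonance.

    e_res is the resonance energy and width its full width at half
    maximum; the profile peaks at `peak` when energy == e_res.
    """
    return peak * (width / 2) ** 2 / ((energy - e_res) ** 2 + (width / 2) ** 2)

e_res, width = 2.0, 0.2  # eV, arbitrary illustrative values
for e in (1.0, 1.9, 2.0, 2.1, 3.0):
    print(f"E = {e:.1f} eV -> relative cross-section {breit_wigner(e, e_res, width):.3f}")
```

Shifting or reshaping such peaks is what the authors mean by resonances being "altered" and "favourably manipulated" when different atoms are caged inside the C60.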
    With insights into many aspects of such collision processes, potential applications for the findings of the paper could range from fields such as positron beam spectroscopy and the investigation of nanomaterials.
    Story Source:
    Materials provided by Springer. Note: Content may be edited for style and length.

  •

    Is AI good or bad for the climate? It's complicated

    As the world fights climate change, will the increasingly widespread use of artificial intelligence (AI) be a help or a hindrance? In a paper published this week in Nature Climate Change, a team of experts in AI, climate change, and public policy present a framework for understanding the complex and multifaceted relationship of AI with greenhouse gas emissions, and suggest ways to better align AI with climate change goals.
    “AI affects the climate in many ways, both positive and negative, and most of these effects are poorly quantified,” said David Rolnick, Assistant Professor of Computer Science at McGill University and a Core Academic Member of Mila — Quebec AI Institute, who co-authored the paper. “For example, AI is being used to track and reduce deforestation, but AI-based advertising systems are likely making climate change worse by increasing the amount that people buy.”
    The paper divides the impacts of AI on greenhouse gas emissions into three categories: 1) Impacts from the computational energy and hardware used to develop, train, and run AI algorithms, 2) immediate impacts caused by the applications of AI — such as optimizing energy use in buildings (which decreases emissions) or accelerating fossil fuel exploration (which increases emissions), and 3) system-level impacts caused by the ways in which AI applications affect behaviour patterns and society more broadly, such as via advertising systems and self-driving cars.
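The three-category framework can be sketched as a simple data structure; the entries and their signs below are examples drawn from the text, not quantified results from the paper.

```python
# Minimal sketch of the paper's three-category framework. The example
# entries and their signs ("+" increases emissions, "-" decreases them)
# are illustrations from the article text, not measured quantities.

impacts = [
    {"category": "computational", "example": "energy to train and run models", "sign": "+"},
    {"category": "immediate", "example": "optimizing building energy use", "sign": "-"},
    {"category": "immediate", "example": "accelerating fossil fuel exploration", "sign": "+"},
    {"category": "system-level", "example": "advertising that increases consumption", "sign": "+"},
    {"category": "system-level", "example": "autonomous tech supporting public transit", "sign": "-"},
]

def by_category(entries):
    """Group impact entries by framework category."""
    groups = {}
    for entry in entries:
        groups.setdefault(entry["category"], []).append(entry)
    return groups

for name, entries in by_category(impacts).items():
    print(name, [e["sign"] for e in entries])
```

The mixed signs inside each category are the paper's point: the same class of AI application can push emissions either way depending on how it is deployed.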
    “Climate change should be a key consideration when developing and assessing AI technologies,” said Lynn Kaack, Assistant Professor of Computer Science and Public Policy at the Hertie School, and lead author on the report. “We find that those impacts that are easiest to measure are not necessarily those with the largest impacts. So, evaluating the effect of AI on the climate holistically is important.”
    AI’s impacts on greenhouse gas emissions — a matter of choice
    The authors emphasize the ability of researchers, engineers, and policymakers to shape the impacts of AI, writing that its “… ultimate effect on the climate is far from predestined, and societal decisions will play a large role in shaping its overall impacts.” For example, the paper notes that AI-enabled autonomous vehicle technologies can help lower emissions if they are designed to facilitate public transportation, but they can increase emissions if they are used in personal cars and result in people driving more.
    The researchers also note that machine learning expertise is often concentrated among a limited set of actors. This raises potential challenges with respect to the governance and implementation of machine learning in the context of climate change, since it may create or widen the digital divide, or shift power from public to large private entities by virtue of who controls relevant data or intellectual capital.
    “The choices that we make implicitly as technologists can matter a lot,” said Prof. Rolnick. “Ultimately, AI for Good shouldn’t just be about adding beneficial applications on top of business as usual, it should be about shaping all the applications of AI to achieve the impact we want to see.”
    Story Source:
    Materials provided by McGill University. Note: Content may be edited for style and length.

  •

    Microfluidic-based soft robotic prosthetics promise relief for diabetic amputees

    Every 30 seconds, a leg is amputated somewhere in the world due to diabetes. These patients often suffer from neuropathy, a loss of sensation in the lower extremities, and are therefore unable to detect damage from an ill-fitting prosthesis, damage that can ultimately lead to further amputation.
    In Biomicrofluidics, by AIP Publishing, Canadian scientists reveal their development of a new type of prosthetic using microfluidics-enabled soft robotics that promises to greatly reduce skin ulcerations and pain in patients who have had an amputation between the ankle and knee.
    More than 80% of lower-limb amputations in the world are the result of diabetic foot ulcers, and the lower limb is known to swell at unpredictable times, resulting in volume changes of 10% or more.
    Typically, the prosthesis used after amputation includes fabric and silicone liners that can be added or removed to improve fit. The amputee needs to manually change the liners, but neuropathy leading to poor sensation makes this difficult and can lead to more damage to the remaining limb.
    “Rather than creating a new type of prosthetic socket, the typical silicone/fabric limb liner is replaced with a single layer of liner with integrated soft fluidic actuators as an interfacing layer,” said author Carolyn Ren, from the University of Waterloo. “These actuators are designed to be inflated to varying pressures based on the anatomy of the residual limb to reduce pain and prevent pressure ulcerations.”
    The scientists started with a recently developed device using pneumatic actuators to adjust the pressure of the prosthetic socket. This initial device was quite heavy, limiting its use in real-world situations.
    To address this problem, the group developed a way to miniaturize the actuators. They designed a microfluidic chip with 10 integrated pneumatic valves to control each actuator. The full system is controlled by a miniature air pump and two solenoid valves that provide air to the microfluidic chip. The control box is small and light enough to be worn as part of the prosthesis.
    Medical personnel with extensive experience in prosthetic devices were part of the team and provided a detailed map of desired pressures for the prosthetic socket. The group carried out extensive measurements of the contact pressure provided by each actuator and compared these to the desired pressure for a working prosthesis.
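As a caricature of that pressure-matching task (not the team's controller): each actuator is driven toward a prescribed contact pressure from the clinicians' map, closing part of the gap on each control step. The target pressures, gain, and number of steps below are all hypothetical.

```python
# Hypothetical proportional-control sketch for the kind of system
# described: drive each of the 10 actuators toward a prescribed contact
# pressure. Targets, gain, and valve interface are invented values.

TARGET_KPA = [12, 15, 10, 18, 14, 14, 18, 10, 15, 12]  # one per actuator

def control_step(measured_kpa, gain=0.5):
    """One proportional update: returns a pressure adjustment per actuator."""
    return [gain * (target - actual)
            for target, actual in zip(TARGET_KPA, measured_kpa)]

# Starting from a uniformly loose fit, each step closes half the gap.
readings = [8.0] * 10
for _ in range(5):
    readings = [r + d for r, d in zip(readings, control_step(readings))]
print([round(r, 2) for r in readings])
```

In the actual device the adjustments would be realized pneumatically, with the microfluidic chip's valves routing air from the pump to individual actuators rather than software adding numbers.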
    All 10 actuators were found to produce pressures in the desired range, suggesting the new device will work well in the field. Future research will test the approach on a more accurate biological model.
    The group plans additional research to integrate pressure sensors directly into the prosthetic liner, perhaps using newly available knitted soft fabric that incorporates pressure sensing material.
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  •

    Electrospinning promises major improvements in wearable technology

    Wearable technology has exploded in recent years. Spurred by advances in flexible sensors, transistors, energy storage, and harvesting devices, wearables encompass miniaturized electronic devices worn directly on the human skin for sensing a range of biophysical and biochemical signals or, as with smart watches, for providing convenient human-machine interfaces.
    Engineering wearables for optimal skin conformity, breathability, and biocompatibility without compromising the tunability of their mechanical, electrical, and chemical properties is no small task. The emergence of electrospinning — the fabrication of nanofibers with tunable properties from a polymer base — is an exciting development in the field.
    In APL Bioengineering, by AIP Publishing, researchers from Tufts University examined some of the latest advances in wearable electronic devices and systems being developed using electrospinning.
    “We show how the scientific community has realized many remarkable things using electrospun nanomaterials,” said author Sameer Sonkusale. “They have applied them for physical activity monitoring, motion tracking, measuring biopotentials, chemical and biological sensing, and even batteries, transistors, and antennas, among others.”
    Sonkusale and his colleagues showcase the many advantages electrospun materials have over conventional bulk materials.
    Their high surface-to-volume ratio endows them with enhanced porosity and breathability, which is important for long-term wearability. Also, with the appropriate blend of polymers, they can achieve superior biocompatibility.
    Conductive electrospun nanofibers provide high surface area electrodes, enabling both flexibility and performance improvements, including rapid charging and high energy storage capacities.
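The surface-to-volume advantage is easy to quantify: for a long cylindrical fiber of radius r, the ratio scales as 2/r, so shrinking the radius by three orders of magnitude multiplies the available surface per unit volume by the same factor. The radii below are illustrative, not values from the review.

```python
# Back-of-the-envelope illustration of why nanofibers offer so much
# surface area: for a long cylinder of radius r, S/V ~ 2/r (end caps
# neglected). The two radii below are illustrative.

def surface_to_volume(radius_m):
    """Surface-to-volume ratio of a long cylindrical fiber (per metre)."""
    return 2 / radius_m

nanofiber = surface_to_volume(100e-9)   # 100 nm electrospun fiber
bulk_wire = surface_to_volume(100e-6)   # 100 um conventional wire
print(f"nanofiber packs {nanofiber / bulk_wire:.0f}x more surface per volume")
```

That extra surface is what drives the porosity, breathability, and electrode-performance benefits listed above.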
    “Also, their nanoscale features mean they adhere well to the skin without need for chemical adhesives, which is important if you are interested in measuring biopotentials, like heart activity using electrocardiography or brain activity using electroencephalography,” said Sonkusale.
    Electrospinning is considerably less expensive and more user-friendly than photolithography for realizing nanoscale transistor morphologies with superior electronic transport.
    The researchers are confident electrospinning will further establish its claim as a versatile, feasible, and inexpensive technique for the fabrication of wearable devices in the coming years. They note there are areas for improvement to be considered, including broadening the choice for materials and improving the ease of integration with human physiology.
    They suggest the aesthetics of wearables may be improved by making them smaller and, perhaps, with the incorporation of transparent materials, “almost invisible.”
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.