More stories

  • A protein mines, sorts rare earths better than humans, paving way for green tech

    Rare earth elements, like neodymium and dysprosium, are critical components of almost all modern technologies, from smartphones to hard drives, but they are notoriously difficult to extract from the Earth’s crust and to separate from one another.
    Penn State scientists have discovered a new mechanism by which bacteria can select between different rare earth elements, using the ability of a bacterial protein to bind to another unit of itself, or “dimerize,” when it is bound to certain rare earths, but prefer to remain a single unit, or “monomer,” when bound to others.
    By figuring out how this molecular handshake works at the atomic level, the researchers have found a way to separate these similar metals from one another quickly, efficiently, and under normal room temperature conditions. This strategy could lead to more efficient, greener mining and recycling practices for the entire tech sector, the researchers state.
    “Biology manages to differentiate rare earths from all the other metals out there — and now, we can see how it even differentiates between the rare earths it finds useful and the ones it doesn’t,” said Joseph Cotruvo Jr., associate professor of chemistry at Penn State and lead author on a paper about the discovery published today (May 31) in the journal Nature. “We’re showing how we can adapt these approaches for rare earth recovery and separation.”
    Rare earth elements, which include the lanthanide metals, are in fact relatively abundant, Cotruvo explained, but they are what mineralogists call “dispersed,” meaning they’re mostly scattered throughout the planet in low concentrations.
    “If you can harvest rare earths from devices that we already have, then we may not be so reliant on mining them in the first place,” Cotruvo said. However, he added that regardless of the source, the challenge of separating one rare earth from another to obtain a pure substance remains.

    “Whether you are mining the metals from rock or from devices, you are still going to need to perform the separation. Our method, in theory, is applicable for any way in which rare earths are harvested,” he said.
    All the same — and completely different
    In simple terms, rare earths are 15 elements on the periodic table — the lanthanides, with atomic numbers 57 to 71 — and two other elements with similar properties that are often grouped with them. The metals behave similarly chemically, have similar sizes, and, for those reasons, they often are found together in the Earth’s crust. However, each one has distinct applications in technologies.
    Conventional rare earth separation practices require using large amounts of toxic chemicals like kerosene and phosphonates, similar to chemicals that are commonly used in insecticides, herbicides and flame retardants, Cotruvo explained. The separation process requires dozens or even hundreds of steps, using these highly toxic chemicals, to achieve high-purity individual rare earth oxides.
    “There is getting them out of the rock, which is one part of the problem, but one for which many solutions exist,” Cotruvo said. “But you run into a second problem once they are out, because you need to separate multiple rare earths from one another. This is the biggest and most interesting challenge, discriminating between the individual rare earths, because they are so alike. We’ve taken a natural protein, which we call lanmodulin or LanM, and engineered it to do just that.”
    Learning from nature

    Cotruvo and his lab turned to nature to find an alternative to the conventional solvent-based separation process, because biology has already been harvesting and harnessing the power of rare earths for millennia, especially in a class of bacteria called “methylotrophs” that often are found on plant leaves and in soil and water and play an important role in how carbon moves through the environment.
    Six years ago, the lab isolated lanmodulin from one of these bacteria, and showed that it was unmatched — over 100 million times better — in its ability to bind lanthanides over common metals like calcium. Through subsequent work they showed that it could purify rare earths as a group from dozens of other metals in mixtures too complex for traditional rare earth extraction methods. However, the protein was less adept at discriminating between the individual rare earths.
    Cotruvo explained that for the new study detailed in Nature, the team identified hundreds of other natural proteins that looked roughly like the first lanmodulin but homed in on one that was different enough — 70% different — that they suspected it would have some distinct properties. This protein is found naturally in a bacterium (Hansschlegelia quercus) isolated from English oak buds.
    The researchers found that the lanmodulin from this bacterium exhibited strong capabilities to differentiate between rare earths. Their studies indicated that this differentiation came from an ability of the protein to dimerize and perform a kind of handshake. When the protein binds one of the lighter lanthanides, like neodymium, the handshake (dimer) is strong. By contrast, when the protein binds to a heavier lanthanide, like dysprosium, the handshake is much weaker, such that the protein favors the monomer form.
    “This was surprising because these metals are very similar in size,” Cotruvo said. “This protein has the ability to differentiate at a scale that is unimaginable to most of us — a few trillionths of a meter, a difference that is less than a tenth of the diameter of an atom.”
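    In equilibrium terms, the handshake can be captured by a simple monomer-dimer model: the tighter a metal-bound protein dimerizes, the larger the fraction of dimer at a given protein concentration, and a big gap in that fraction between two metals is exactly what a separation can exploit. The Python sketch below illustrates the idea; the dissociation constants are invented placeholders, not values from the paper.

    ```python
    # Toy model of metal-dependent dimerization: 2M <=> D with Kd = [M]^2 / [D].
    # Kd values are invented for illustration, not taken from the Nature paper.
    import math

    def dimer_fraction(total_uM: float, kd_uM: float) -> float:
        """Fraction of protein chains locked in dimers at equilibrium."""
        # Solve (2/Kd)*m**2 + m - total = 0 for the monomer concentration m.
        m = kd_uM * (math.sqrt(1 + 8 * total_uM / kd_uM) - 1) / 4
        return 2 * (m**2 / kd_uM) / total_uM

    total = 10.0  # uM protein, illustrative
    print(f"tight handshake (Kd = 0.1 uM, light lanthanide): {dimer_fraction(total, 0.1):.0%} dimer")
    print(f"weak handshake (Kd = 100 uM, heavy lanthanide):  {dimer_fraction(total, 100.0):.0%} dimer")
    ```

    With these placeholder constants the light-lanthanide complex sits almost entirely in the dimer form while the heavy-lanthanide complex stays mostly monomeric, a contrast a column can sort on.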
    Fine-tuning rare earth separations
    To visualize the process at such a small scale, the researchers teamed up with Amie Boal, Penn State professor of chemistry, biochemistry and molecular biology, who is a co-author on the paper. Boal’s lab specializes in a technique called X-ray crystallography, which allows for high-resolution molecular imaging.
    The researchers determined that the protein’s lanthanide-dependent dimerization came down to a single amino acid — 1% of the whole protein — that occupied a different position when the protein bound lanthanum (which, like neodymium, is a light lanthanide) than when it bound dysprosium.
    Because this amino acid is part of a network of interconnected amino acids at the interface with the other monomer, this shift altered how the two protein units interacted. When an amino acid that is a key player in this network was removed, the protein was much less sensitive to rare earth identity and size. The findings revealed a new, natural principle for fine-tuning rare earth separations, based on the propagation of minuscule differences at the rare earth binding site to the dimer interface.
    Using this knowledge, their collaborators at Lawrence Livermore National Laboratory showed that the protein could be tethered to small beads in a column, and that it could separate the most important components of permanent magnets, neodymium and dysprosium, in a single step, at room temperature and without any organic solvents.
    “While we are by no means the first scientists to recognize that metal-sensitive dimerization could be a way of separating very similar metals, mostly with synthetic molecules,” Cotruvo said, “this is the first time that this phenomenon has been observed in nature with the lanthanides. This is basic science with applied outcomes. We’re revealing what nature is doing and it’s teaching us what we can do better as chemists.”
    Cotruvo believes that the concept of binding rare earths at a molecular interface, such that dimerization is dependent on the exact size of the metal ion, can be a powerful approach for accomplishing challenging separations.
    “This is the tip of the iceberg,” he said. “With further optimization of this phenomenon, the toughest problem of all — efficient separation of rare earths that are right next to each other on the periodic table — may be within reach.”
    A patent application was filed by Penn State based on this work and the team is currently scaling up operations, fine-tuning and streamlining the protein with the goal of commercializing the process.
    Other Penn State co-authors are Joseph Mattocks, Jonathan Jung, Chi-Yun Lin, Neela Yennawar, Emily Featherston and Timothy Hamilton. Ziye Dong, Christina Kang-Yun and Dan Park of the Lawrence Livermore National Laboratory also co-authored the paper.
    The work was funded by the U.S. Department of Energy, the National Science Foundation, the National Institutes of Health, the Jane Coffin Childs Memorial Fund for Medical Research, and the Critical Materials Institute, an Energy Innovation Hub funded by the DOE, Office of Energy Efficiency and Renewable Energy, Advanced Materials and Manufacturing Technologies Office. Part of the work was performed under the auspices of the DOE by Lawrence Livermore National Laboratory.

  • World’s fastest electron microscope

    Electron microscopes give us insight into the tiniest details of materials and can visualize, for example, the structure of solids, molecules or nanoparticles with atomic resolution. However, most materials in nature are not static. They constantly interact, move and reshape between initial and final configurations. One of the most general phenomena is the interaction between light and matter, which is omnipresent in materials such as solar cells, displays or lasers. These interactions are defined by electrons pushed and pulled around by the oscillations of light, and the dynamics are extremely fast: light fields oscillate on attosecond time scales — an attosecond is a billionth of a billionth of a second.
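    To put that time scale in numbers: even one full oscillation of near-infrared light lasts only a few femtoseconds, so resolving the dynamics within a single cycle demands attosecond resolution. A back-of-envelope check, with the wavelength as an illustrative choice:

    ```python
    # Optical period T = wavelength / c; dynamics within one cycle are faster still.
    wavelength = 1.0e-6   # metres (illustrative near-infrared value)
    c = 3.0e8             # speed of light, m/s
    period = wavelength / c
    print(f"one optical cycle: {period * 1e15:.1f} fs = {period * 1e18:.0f} as")
    ```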
    Until now, it has been very difficult to directly visualize these extremely fast processes in space and time, but that is exactly what a team of physicists from the University of Konstanz has now succeeded in doing. They recorded movies with attosecond time resolution in a transmission electron microscope, providing new insights into the functionality of nanomaterials and dielectric meta-atoms. They recently published their results in the scientific journal Nature.
    Generation of ultrashort electron pulses
    “If you look closely, almost all phenomena in optics, nanophotonics or metamaterials occur on time scales below one oscillation period of a light wave,” explains Peter Baum, physics professor and head of the Light and Matter Group at the University of Konstanz. “To film the ultrafast interactions between light and matter, we therefore need a time resolution of attoseconds.” To achieve such an extreme recording speed, Baum’s research group uses the fast oscillations of a continuous-wave laser to convert the electron beam of an electron microscope into a sequence of ultrashort electron pulses.
    In this process, a thin silicon membrane creates a periodic acceleration and deceleration of the electrons. “This modulation causes the electrons to catch up with each other. After some time, they convert into a train of ultrashort pulses,” explains David Nabben, doctoral student and first author of the study. Another laser wave creates the interaction with the sample object. The ultrashort electron pulses are then used to measure the object’s response to the laser light, frozen in time as in a stroboscope. In the end, the researchers obtain a movie of the processes with attosecond time resolution.
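    The catch-up mechanism can be illustrated with a purely classical toy calculation: impose a weak sinusoidal velocity modulation on a continuous stream of electrons and let them drift, and the faster electrons overtake the slower ones at a well-defined distance, piling the beam up into sharp pulses. All parameters below are illustrative choices, not the experimental values.

    ```python
    # Classical sketch of velocity bunching: a sinusoidal velocity modulation
    # plus free drift compresses a continuous electron beam into a pulse train.
    import numpy as np

    c = 3e8
    v0 = 0.7 * c                          # mean electron speed (illustrative)
    omega = 2 * np.pi * 3e14              # modulation frequency, ~1 um laser light
    drift = 0.5                           # drift distance to the sample, metres
    dv = v0**2 / (omega * drift)          # depth tuned so pulses compress at the sample

    t_emit = np.linspace(0, 8 * np.pi / omega, 200_000)  # four optical cycles
    v = v0 + dv * np.sin(omega * t_emit)                 # modulated velocities
    t_arrive = t_emit + drift / v                        # ballistic arrival times

    counts, _ = np.histogram(t_arrive, bins=400)
    print(f"peak/mean electron density after drift: {counts.max() / counts.mean():.0f}x")
    ```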
    Investigation of nanophotonic phenomena
    In their study, the scientists present several examples of time-resolved measurements in nanomaterials. The experiments show, for example, the emergence of chiral surface waves that can be controlled by the researchers to travel in a specific spatial direction, or characteristic time delays between different modes of radiation from nanoantennas. What is more, the scientists not only investigate such surface phenomena, but also film the electromagnetic processes inside a waveguide material.
    The results are highly interesting for further developments in nanophotonics, but also demonstrate the very broad application range of the new attosecond electron microscopy. “The direct measurement of the electromagnetic functionality of materials as a function of space and time is not only of great value for fundamental science, but also opens up the way for new developments in photonic integrated circuits or metamaterials,” says Nabben, summarizing the impact of the results.

  • Using AI to create better, more potent medicines

    While it can take years for the pharmaceutical industry to create medicines capable of treating or curing human disease, a new study suggests that using generative artificial intelligence could vastly accelerate the drug-development process.
    Today, most drug discovery is carried out by human chemists who rely on their knowledge and experience to select and synthesize the right molecules needed to become the safe and effective medicines we depend on. To identify the synthesis paths, scientists often employ a technique called retrosynthesis — a method for creating potential drugs by working backward from the desired molecules and searching for chemical reactions to make them.
    Yet because sifting through millions of potential chemical reactions can be an extremely challenging and time-consuming endeavor, researchers at The Ohio State University have created an AI framework called G2Retro to automatically generate reactions for any given molecule. The new study showed that compared to current manual-planning methods, the framework was able to cover an enormous range of possible chemical reactions as well as accurately and quickly discern which reactions might work best to create a given drug molecule.
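    To make the retrosynthetic idea concrete, the sketch below works backward from a target by enumerating single-bond disconnections on a small molecular graph and ranking the resulting precursor pairs. It is a deliberately crude stand-in for G2Retro’s learned graph neural networks; the atoms, bonds, and balance-based scoring heuristic are all invented for illustration.

    ```python
    # Toy retrosynthesis: cut one bond of a target molecule's graph and rank the
    # resulting precursor fragments. G2Retro learns which disconnections and
    # reactants are chemically sensible; a crude balance heuristic stands in here.
    atoms = ["C1", "C2", "O3", "C4", "N5", "C6"]              # invented target
    bonds = {("C1", "C2"), ("C2", "O3"), ("C2", "C4"),
             ("C4", "N5"), ("N5", "C6")}

    def components(remaining_bonds):
        """Connected fragments of the atom graph (simple union-find)."""
        parent = {a: a for a in atoms}
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x
        for a, b in remaining_bonds:
            parent[find(a)] = find(b)
        groups = {}
        for a in atoms:
            groups.setdefault(find(a), []).append(a)
        return list(groups.values())

    candidates = []
    for cut in bonds:
        frags = components(bonds - {cut})
        if len(frags) == 2:                                      # cut splits the target
            score = min(map(len, frags)) / max(map(len, frags))  # prefer balanced halves
            candidates.append((score, cut, frags))

    for score, cut, frags in sorted(candidates, reverse=True):
        print(f"disconnect {cut}: precursors {frags} (score {score:.2f})")
    ```

    For this toy target the central C2-C4 bond scores highest, mimicking how a learned model surfaces the most plausible disconnection first.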
    “Using AI for things critical to saving human lives, such as medicine, is what we really want to focus on,” said Xia Ning, lead author of the study and an associate professor of computer science and engineering at Ohio State. “Our aim was to use AI to accelerate the drug design process, and we found that it not only saves researchers time and money but provides drug candidates that may have much better properties than any molecules that exist in nature.”
    This study builds on previous research by Ning, in which her team developed a method named Modof that generated molecular structures exhibiting desired properties better than any existing molecules. “Now the question becomes how to make such generated molecules, and that is where this new study shines,” said Ning, also an associate professor of biomedical informatics in the College of Medicine.
    The study was published today in the journal Communications Chemistry.
    Ning’s team trained G2Retro on a dataset that contains 40,000 chemical reactions collected between 1976 and 2016. The framework “learns” from graph-based representations of given molecules, and uses deep neural networks to generate possible reactant structures that could be used to synthesize them. Its generative power is so impressive that, according to Ning, once given a molecule, G2Retro could come up with hundreds of new reaction predictions in only a few minutes.
    “Our generative AI method G2Retro is able to supply multiple different synthesis routes and options, as well as a way to rank different options for each molecule,” said Ning. “This is not going to replace current lab-based experiments, but it will offer more and better drug options so experiments can be prioritized and focused much faster.”
    To further test the AI’s effectiveness, Ning’s team conducted a case study to see if G2Retro could accurately predict four newly released drugs already in circulation: Mitapivat, a medication used to treat hemolytic anemia; Tapinarof, which is used to treat various skin diseases; Mavacamten, a drug used to treat hypertrophic cardiomyopathy; and Oteseconazole, used to treat fungal infections in females. G2Retro was able to correctly generate exactly the same patented synthesis routes for these medicines, and provided alternative synthesis routes that are also feasible and synthetically useful, Ning said.
    Having such a dynamic and effective tool at scientists’ disposal could enable the industry to manufacture stronger drugs at a quicker pace. But despite the edge AI might give scientists inside the lab, Ning emphasizes that the medicines G2Retro or any generative AI creates still need to be validated — a process in which the created molecules are tested in animal models and later in human trials.
    “We are very excited about generative AI for medicine, and we are dedicated to using AI responsibly to improve human health,” said Ning.
    This research was supported by Ohio State’s President’s Research Excellence Program and the National Science Foundation. Other Ohio State co-authors were Ziqi Chen, Oluwatosin Ayinde, James Fuchs and Huan Sun.

  • Fastest industry standard optical fiber

    An optical fibre about the thickness of a human hair can now carry the equivalent of more than 10 million fast home internet connections running at full capacity.
    A team of Japanese, Australian, Dutch, and Italian researchers has set a new speed record for an industry standard optical fibre, achieving 1.7 petabits per second over a 67 km length of fibre. The fibre, which contains 19 cores that can each carry a signal, meets the global standards for fibre size, ensuring that it can be adopted without massive infrastructure change. And it uses less digital processing, greatly reducing the power required per bit transmitted.
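    A quick sanity check on those headline figures; the per-connection speed is an assumption, since the announcement does not define a “fast home internet connection”:

    ```python
    # Back-of-envelope arithmetic on the reported record.
    total_bps = 1.7e15                  # 1.7 petabits per second
    cores = 19
    print(f"per core: {total_bps / cores / 1e12:.0f} Tb/s")               # ~89 Tb/s

    home_bps = 150e6                    # assume a 150 Mb/s home connection
    print(f"equivalent homes: {total_bps / home_bps / 1e6:.1f} million")  # ~11 million
    ```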
    Macquarie University researchers supported the invention by developing a 3D laser-printed glass chip that allows low loss access to the 19 streams of light carried by the fibre and ensures compatibility with existing transmission equipment.
    The fibre was developed by the Japanese National Institute of Information and Communications Technology (NICT, Japan) and Sumitomo Electric Industries, Ltd. (SEI, Japan) and the work was performed in collaboration with the Eindhoven University of Technology, University of L’Aquila, and Macquarie University.
    All the world’s internet traffic is carried through optical fibres which are each 125 microns thick (comparable to the thickness of a human hair). These industry standard fibres link continents, data centres, mobile phone towers, satellite ground stations and our homes and businesses.
    Back in 1988, the first subsea fibre-optic cable across the Atlantic had a capacity of 20 megabits per second, or 40,000 telephone calls, in two pairs of fibres. Known as TAT-8, it came just in time to support the development of the World Wide Web. But it was soon at capacity.

    The latest generation of subsea cables, such as the Grace Hopper cable that went into service in 2022, carries 22 terabits per second in each of 16 fibre pairs. That’s a million times more capacity than TAT-8, but it’s still not enough to meet the demand for streaming TV, video conferencing and all our other global communication.
    “Decades of optics research around the world has allowed the industry to push more and more data through single fibres,” says Dr Simon Gross from Macquarie University’s School of Engineering. “They’ve used different colours, different polarisations, light coherence and many other tricks to manipulate light.”
    Most current fibres have a single core that carries multiple light signals. But this current technology is practically limited to only a few terabits per second due to interference between the signals.
    “We could increase capacity by using thicker fibres. But thicker fibres would be less flexible, more fragile, less suitable for long-haul cables, and would require massive reengineering of optical fibre infrastructure,” says Dr Gross.
    “We could just add more fibres. But each fibre adds equipment overhead and cost and we’d need a lot more fibres.”
    To meet the exponentially growing demand for movement of data, telecommunication companies need technologies that offer greater data flow for reduced cost.

    “Here at Macquarie University, we’ve created a compact glass chip with a waveguide pattern etched into it by a 3D laser printing technology. It allows feeding of signals into the 19 individual cores of the fibre simultaneously with uniform low losses. Other approaches are lossy and limited in the number of cores,” says Dr Gross.
    “It’s been exciting to work with the Japanese leaders in optical fibre technology. I hope we’ll see this technology in subsea cables within five to 10 years.”
    Another researcher involved in the experiment, Professor Michael Withford from Macquarie University’s School of Mathematical and Physical Sciences, believes this breakthrough in optical fibre technology has far-reaching implications.
    “The optical chip builds on decades of research into optics at Macquarie University,” says Professor Withford. “The underlying patented technology has many applications including finding planets orbiting distant stars, disease detection, even identifying damage in sewage pipes.”

  • Symmetry breaking by ultrashort light pulses opens new quantum pathways for coherent phonons

    Atoms in a crystal form a regular lattice, in which they can move over small distances from their equilibrium positions. Such phonon excitations are represented by quantum states. A superposition of phonon states defines a so-called phonon wavepacket, which is connected with collective coherent oscillations of the atoms in the crystal. Coherent phonons can be generated by exciting the crystal with a femtosecond light pulse, and their motions in space and time can be followed by scattering an ultrashort x-ray pulse off the excited material. The pattern of scattered x-rays gives direct insight into the momentary positions of, and distances between, the atoms. A sequence of such patterns provides a ‘movie’ of the atomic motions.
    The physical properties of coherent phonons are determined by the symmetry of the crystal, which represents a periodic arrangement of identical unit cells. Weak optical excitation does not change the symmetry properties of the crystal. In this case, coherent phonons with identical atomic motions in all unit cells are excited. In contrast, strong optical excitation can break the symmetry of the crystal and make atoms in adjacent unit cells oscillate differently. While this mechanism holds potential for accessing other phonons, it has not been explored so far.
    In the journal Physical Review B, researchers from the Max Born Institute in Berlin, in collaboration with researchers from the University of Duisburg-Essen, have demonstrated a novel concept for exciting and probing coherent phonons in crystals of transiently broken symmetry. The key to this concept lies in reducing the symmetry of a crystal by appropriate optical excitation, as has been shown with the prototypical crystalline semimetal bismuth (Bi).
    Ultrafast mid-infrared excitation of electrons in Bi modifies the spatial charge distribution and, thus, transiently reduces the crystal symmetry. In the reduced symmetry, new quantum pathways for the excitation of coherent phonons open up. The symmetry reduction doubles the size of the unit cell, from two Bi atoms to four. In addition to unidirectional atomic motion, the enlarged unit cell allows for coherent phonon wavepackets with bidirectional atomic motions.
    Probing the transient crystal structure directly by femtosecond x-ray diffraction reveals oscillations of diffracted intensity, which persist on a picosecond time scale. The oscillations arise from coherent wavepacket motions along phonon coordinates in the crystal of reduced symmetry. Their frequency of 2.6 THz differs from that of phonon oscillations at low excitation levels. Interestingly, this behavior occurs only above a threshold of the optical pump fluence and reflects the highly nonlinear, so-called non-perturbative character of the optical excitation process.
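    What the x-ray probe records can be pictured with a minimal model: the diffracted intensity is modulated at the coherent-phonon frequency and decays as the wavepacket dephases. Only the 2.6 THz frequency below comes from the study; the modulation depth and damping time are assumed for illustration.

    ```python
    # Minimal model of the pump-probe signal: diffracted x-ray intensity oscillating
    # at the coherent-phonon frequency with an assumed picosecond dephasing time.
    import numpy as np

    f = 2.6e12                          # phonon frequency reported in the study (Hz)
    tau = 2e-12                         # assumed dephasing time (s)
    delays = np.linspace(0, 3e-12, 7)   # pump-probe delays (s)

    signal = 1 + 0.05 * np.cos(2 * np.pi * f * delays) * np.exp(-delays / tau)
    for t, s in zip(delays, signal):
        print(f"delay {t * 1e12:3.1f} ps -> relative diffracted intensity {s:.3f}")
    ```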
    In summary, optically induced symmetry breaking allows for modifying the excitation spectrum of a crystal on ultrashort time scales. These results may pave the way for steering material properties transiently and, thus, implementing new functions in optoacoustics and optical switching.

  • Self-driving cars lack social intelligence in traffic

    Should I go or give way? It is one of the most basic questions in traffic, whether merging onto a motorway or standing at the door of the metro. The decision is one that humans typically make quickly and intuitively, because doing so relies on social interactions trained from the time we begin to walk.
    Self-driving cars on the other hand, which are already on the road in several parts of the world, still struggle when navigating these social interactions in traffic. This has been demonstrated in new research conducted at the University of Copenhagen’s Department of Computer Science. Researchers analyzed an array of videos uploaded by YouTube users of self-driving cars in various traffic situations. The results show that self-driving cars have a particularly tough time understanding when to ‘yield’ — when to give way and when to drive on.
    “The ability to navigate in traffic is based on much more than traffic rules. Social interactions, including body language, play a major role when we signal each other in traffic. This is where the programming of self-driving cars still falls short. That is why it is difficult for them to consistently understand when to stop and when someone is stopping for them, which can be both annoying and dangerous,” says Professor Barry Brown, who has studied the evolution of self-driving car road behavior for the past five years.
    Sorry, it’s a self-driving car!
    Companies like Waymo and Cruise have launched taxi services with self-driving cars in parts of the United States. Tesla has rolled out their FSD model (full self-driving) to about 100,000 volunteer drivers in the US and Canada. And the media is brimming with stories about how well self-driving cars perform. But according to Professor Brown and his team, their actual road performance is a well-kept trade secret that very few have insight into. Therefore, the researchers performed in-depth analyses using 18 hours of YouTube footage filmed by enthusiasts testing cars from the back seat.
    One of their video examples shows a family of four standing by the curb of a residential street in the United States. There is no pedestrian crossing, but the family would like to cross the road. As the driverless car approaches, it slows, causing the two adults in the family to wave their hands as a sign for the car to drive on. Instead, the car stops right next to them for 11 seconds. Then, as the family begins walking across the road, the car starts moving again, causing them to jump back onto the sidewalk, whereupon the person in the back seat rolls down the window and yells, “Sorry, self-driving car!”
    “The situation is similar to the main problem we found in our analysis and demonstrates the inability of self-driving cars to understand social interactions in traffic. The driverless vehicle stops so as to not hit pedestrians, but ends up driving into them anyway because it doesn’t understand the signals. Besides creating confusion and wasted time in traffic, it can also be downright dangerous,” says Professor Brown.

    A drive in foggy Frisco
    In tech-centric San Francisco, the performance of self-driving cars can be judged up close. Here, driverless cars have been unleashed in several parts of the city as buses and taxis, navigating the hilly streets among pedestrians and other road users. And according to the researcher, this has created plenty of resistance among the city’s residents:
    “Self-driving cars are causing traffic jams and problems in San Francisco because they react inappropriately to other road users. Recently, the city’s media reported a chaotic traffic event in which fog caused the self-driving cars to overreact, stop and block traffic, even though fog is extremely common in the city,” says Professor Brown.
    Robotic cars have been in the works for 10 years, and the industry behind them has spent over DKK 40 billion to push their development. Yet the outcome has been cars that still make many mistakes, blocking other drivers and disrupting the smooth flow of traffic.
    Why do you think it’s so difficult to program self-driving cars to understand social interactions in traffic?
    “I think that part of the answer is that we take the social element for granted. We don’t think about it when we get into a car and drive — we just do it automatically. But when it comes to designing systems, you need to describe everything we take for granted and incorporate it into the design. The car industry could learn from having a more sociological approach. Understanding social interactions that are part of traffic should be used to design self-driving cars’ interactions with other road users, similar to how research has helped improve the usability of mobile phones and technology more broadly.”
    About the study: The researchers analyzed 18 hours of video footage of self-driving cars from 70 different YouTube videos. Using different video analysis techniques, the researchers studied the video sequences in depth, rather than making a broader, superficial analysis. The study, “The Halting Problem: Video analysis of self-driving cars in traffic,” was presented at the 2023 CHI Conference on Human Factors in Computing Systems, where it won the conference’s best paper award. The study was conducted by Barry Brown of the University of Copenhagen and Stockholm University, Mathias Broth of Linköping University, and Erik Vinkhuyzen of King’s College London.

  • New tool may help spot ‘invisible’ brain damage in college athletes

    An artificial intelligence computer program that processes magnetic resonance imaging (MRI) can accurately identify changes in brain structure that result from repeated head injury, a new study in student athletes shows. These variations are not captured by traditional medical imaging such as computerized tomography (CT) scans. The new technology, researchers say, may help design new diagnostic tools to better understand subtle brain injuries that accumulate over time.
    Experts have long known about potential risks of concussion among young athletes, particularly for those who play high-contact sports such as football, hockey, and soccer. Evidence is now mounting that repeated head impacts, even if they at first appear mild, may add up over many years and lead to cognitive loss. While advanced MRI identifies microscopic changes in brain structure that result from head trauma, researchers say the scans produce vast amounts of data that are difficult to navigate.
    Led by researchers in the Department of Radiology at NYU Grossman School of Medicine, the new study showed for the first time that the new tool, using an AI technique called machine learning, could accurately distinguish between the brains of male athletes who played contact sports like football versus noncontact sports like track and field. The results linked repeated head impacts with tiny, structural changes in the brains of contact-sport athletes who had not been diagnosed with a concussion.
    “Our findings uncover meaningful differences between the brains of athletes who play contact sports compared to those who compete in noncontact sports,” said study senior author and neuroradiologist Yvonne Lui, MD. “Since we expect these groups to have similar brain structure, these results suggest that there may be a risk in choosing one sport over another,” adds Lui, a professor and vice chair for research in the Department of Radiology at NYU Langone Health.
    Lui adds that beyond spotting potential damage, the machine-learning technique used in their investigation may also help experts to better understand the underlying mechanisms behind brain injury.
    The new study, which was published online May 22 in The Neuroradiology Journal, involved hundreds of brain images from 36 contact-sport college athletes (mostly football players) and 45 noncontact-sport college athletes (mostly runners and baseball players). The work was meant to clearly link changes detected by the AI tool in the brain scans of football players to head impacts. It builds on a previous study that had identified brain-structure differences in football players, comparing those with and without concussions to athletes who competed in noncontact sports.

    For the investigation, the researchers analyzed MRI scans from 81 male athletes taken between 2016 and 2018, none of whom had a known diagnosis of concussion within that time period. Contact-sport athletes played football, lacrosse, and soccer, while noncontact-sport athletes participated in baseball, basketball, track and field, and cross-country.
    As part of their analysis, the research team designed statistical techniques that gave their computer program the ability to “learn” how to predict exposure to repeated head impacts using mathematical models. These were based on data examples fed into them, with the program getting “smarter” as the amount of training data grew.
    The study team trained the program to identify unusual features in brain tissue and distinguish between athletes with and without repeated exposure to head injuries based on these factors. They also ranked how useful each feature was for detecting damage to help uncover which of the many MRI metrics might contribute most to diagnoses.
    Two metrics most accurately flagged structural changes that resulted from head injury, say the authors. The first, mean diffusivity, measures how easily water can move through brain tissue and is often used to spot strokes on MRI scans. The second, mean kurtosis, examines the complexity of brain-tissue structure and can indicate changes in the parts of the brain involved in learning, memory, and emotions.
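    The analysis pattern — train a classifier on diffusion metrics, then rank how much each metric contributes — can be sketched in a few lines of scikit-learn. The data below are synthetic placeholders with an arbitrary group effect; the study used real MRI-derived features and its own statistical design.

    ```python
    # Sketch of the study's analysis pattern on synthetic stand-in data:
    # classify contact vs. noncontact athletes from two diffusion-MRI metrics,
    # then inspect which feature the model leaned on most.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n = 81                                  # athletes in the study
    contact = rng.integers(0, 2, n)         # 1 = contact sport, 0 = noncontact
    mean_diffusivity = 0.80 + 0.05 * contact + rng.normal(0, 0.03, n)
    mean_kurtosis = 0.90 - 0.04 * contact + rng.normal(0, 0.03, n)
    X = np.column_stack([mean_diffusivity, mean_kurtosis])

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, contact)
    for name, w in zip(["mean diffusivity", "mean kurtosis"], clf.feature_importances_):
        print(f"{name}: importance {w:.2f}")
    ```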
    “Our results highlight the power of artificial intelligence to help us see things that we could not see before, particularly ‘invisible injuries’ that do not show up on conventional MRI scans,” said study lead author Junbo Chen, MS, a doctoral candidate at NYU Tandon School of Engineering. “This method may provide an important diagnostic tool not only for concussion, but also for detecting the damage that stems from subtler and more frequent head impacts.”
    Chen adds that the study team next plans to explore the use of their machine-learning technique for examining head injury in female athletes.
    Funding for the study was provided by National Institutes of Health grants P41EB017183 and C63000NYUPG118117. Further funding was provided by Department of Defense grant W81XWH2010699.
    In addition to Lui and Chen, other NYU researchers involved in the study were Sohae Chung, PhD; Tianhao Li, MS; Els Fieremans, PhD; Dmitry Novikov, PhD; and Yao Wang, PhD.

  • Source-shifting metastructures composed of only one resin for location camouflaging

    The field of transformation optics has flourished over the past decade, allowing scientists to design metamaterial-based structures that shape and guide the flow of light. One of the most dazzling inventions potentially unlocked by transformation optics is the invisibility cloak — a theoretical fabric that bends incoming light away from the wearer, rendering them invisible. Interestingly, such illusions are not restricted to the manipulations of light alone.
    Many of the techniques used in transformation optics have been applied to sound waves, giving rise to the parallel field of transformation acoustics. In fact, researchers have already made substantial progress by developing the “acoustic cloak,” the analog of the invisibility cloak for sounds. While research on acoustic illusion has focused on the concept of masking the presence of an object, not much progress has been made on the problem of location camouflaging.
    The concept of an acoustic source-shifter utilizes a structure that makes the location of a sound source appear different from its actual location. Such devices capable of “acoustic location camouflaging” could find applications in advanced holography and virtual reality. Unfortunately, location camouflaging has scarcely been studied, and developing accessible materials and surfaces that deliver decent performance has proven challenging.
    Against this backdrop, Professor Garuda Fujii, affiliated with the Institute of Engineering and Energy Landscape Architectonics Brain Bank (ELab2) at Shinshu University, Japan, has now made progress in developing high-performance source-shifters. In a recent study published in the Journal of Sound and Vibration online on May 5, 2023, Prof. Fujii presented an innovative approach to designing source-shifter structures out of acrylonitrile butadiene styrene (ABS), an elastic polymer commonly used in 3D printing.
    Prof. Fujii’s approach is centered around a core concept: inverse design based on topology optimization. The numerical approach builds on reproducing the pressure field (sound) emitted by a virtual source, i.e., the source that nearby listeners would mistakenly perceive as real. The pressure field emitted by the actual source is then manipulated to camouflage its location, making the sound appear to come from a different point in space. This can be achieved with the optimum design of a metastructure that, by virtue of its geometry and elastic properties, minimizes the difference between the pressure fields emitted from the actual and virtual sources.
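    In computational terms, the inverse problem is: choose structure parameters that minimize the normalized mismatch between the field radiated past the structure by the actual source and the field a bare source at the virtual location would radiate. The sketch below uses a toy forward model and random search purely to show the shape of that objective; the study instead optimizes a full material-density layout with finite-element simulations of the coupled acoustic-elastic fields.

    ```python
    # Toy version of the source-shifter objective: minimize the relative mismatch
    # between (actual source + structure) and (bare source at the virtual spot).
    # The two-parameter "structure" and the field model are invented stand-ins.
    import numpy as np

    listeners = np.linspace(-1.0, 1.0, 64)        # observation points (m)

    def field(src_x, params, x):
        """Toy monopole field plus a structure-dependent perturbation."""
        r = np.abs(x - src_x) + 0.2               # offset avoids the singularity
        return np.cos(40 * r) / r + params[0] * np.sin(params[1] * x)

    target = field(0.3, np.zeros(2), listeners)   # bare source at the virtual location

    best, best_err = None, np.inf
    for trial in np.random.default_rng(1).uniform(-2, 2, (4000, 2)):
        p = field(0.0, trial, listeners)          # actual source behind the structure
        err = np.linalg.norm(p - target) / np.linalg.norm(target)
        if err < best_err:
            best, best_err = trial, err

    print(f"best structure params {best}, relative field mismatch {best_err:.1%}")
    ```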
    Utilizing this approach, Prof. Fujii implemented an iterative algorithm to numerically determine the optimal design of ABS resin source-shifters according to various design criteria. His models and simulations had to account for the acoustic-elastic interactions between fluids (air) and solid elastic structures, as well as the actual limitations of modern manufacturing technology.
    The simulation results revealed that the optimized structures could reduce the difference between the emitted pressure fields of the masked source and those of a bare source at the virtual location to as low as 0.6%. “The optimal structure configurations obtained via topology optimization exhibited good performances at camouflaging the actual source location despite the simple composition of ABS that did not comprise complex acoustic metamaterials,” remarks Prof. Fujii.
    To shed more light on the underlying camouflaging mechanisms, Prof. Fujii analyzed the importance of the distance between the virtual and actual sources. He found that a greater distance did not necessarily degrade the source-shifter’s performance. He also investigated the effect of changing the frequency of the emitted sound on the performance as the source-shifters had been optimized for only one target frequency. Finally, he explored whether a source-shifter could be topologically optimized to operate at multiple sound frequencies.
    While his approach requires further fine-tuning, the findings of this study will surely help advance illusion acoustics. He concludes, “The proposed optimization method for designing high-performance source-shifters will help in the development of acoustic location camouflage and the advancement of holography technology.”