More stories

  • Is it topological? A new materials database has the answer

    What will it take to make our electronics smarter, faster, and more resilient? One idea is to build them from materials that are topological.
    Topology is a branch of mathematics that studies shapes which can be manipulated or deformed without losing certain core properties. A donut is a common example: if it were made of rubber, a donut could be twisted and squeezed into a completely new shape, such as a coffee mug, while retaining a key trait: its center hole, which becomes the cup’s handle. The hole, in this case, is a topological trait, robust against certain deformations.
    In recent years, scientists have applied concepts of topology to the discovery of materials with similarly robust electronic properties. In 2007, researchers predicted the first electronic topological insulators: materials in which electrons behave in ways that are “topologically protected,” or persistent in the face of certain disruptions.
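    To make the idea of a protected integer concrete, here is a minimal sketch (an illustration of the concept, not code from the Science paper): it computes the winding number of the textbook Su-Schrieffer-Heeger (SSH) chain, a one-dimensional topological invariant that stays fixed under smooth deformations, much like the donut’s hole.

    ```python
    import numpy as np

    # The SSH chain's Bloch Hamiltonian traces the curve
    # h(k) = (v + w*cos k, w*sin k) as k sweeps the Brillouin zone.
    # The integer number of times h(k) winds around the origin is a
    # topological invariant: it cannot change under smooth deformations.

    def ssh_winding_number(v, w, n_k=2001):
        """Count how many times h(k) winds around the origin."""
        k = np.linspace(-np.pi, np.pi, n_k)
        hx = v + w * np.cos(k)
        hy = w * np.sin(k)
        angles = np.unwrap(np.arctan2(hy, hx))
        return int(round((angles[-1] - angles[0]) / (2 * np.pi)))

    print(ssh_winding_number(v=1.0, w=0.5))  # 0: trivial phase
    print(ssh_winding_number(v=0.5, w=1.0))  # 1: topological phase
    ```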
    Since then, scientists have searched for more topological materials with the aim of building better, more robust electronic devices. Until recently, only a handful of such materials had been identified, and topological materials were therefore assumed to be rare.
    Now researchers at MIT and elsewhere have discovered that, in fact, topological materials are everywhere, if you know how to look for them.
    In a paper published in Science, the team, led by Nicolas Regnault of Princeton University and the École Normale Supérieure Paris, reports harnessing the power of multiple supercomputers to map the electronic structure of more than 96,000 natural and synthetic crystalline materials. They applied sophisticated filters to determine whether and what kind of topological traits exist in each structure.

  • Human behavior is key to building a better long-term COVID forecast

    From extreme weather to another wave of COVID-19, forecasts give decision-makers valuable time to prepare. When it comes to COVID, though, long-term forecasting is a challenge, because it involves human behavior.
    While it can sometimes seem like there is no logic to human behavior, new research is working to improve COVID forecasts by incorporating that behavior into prediction models.
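    The general idea can be sketched with a toy compartmental model (a hedged illustration of behavior-coupled forecasting, not the authors’ published model): as recent deaths rise, simulated contact rates fall, which feeds back into transmission.

    ```python
    import numpy as np

    # Toy SEIR model with a behavioral feedback loop: rising recent deaths
    # reduce contacts, which lowers transmission. All parameter values are
    # illustrative assumptions, not those of the published study.

    def seir_with_behavior(days=365, N=1_000_000, beta0=0.5, sigma=1/4,
                           gamma=1/7, ifr=0.006, alpha=2.0):
        S, E, I, R = N - 1.0, 0.0, 1.0, 0.0
        daily_deaths = []
        for _ in range(days):
            # Behavioral response: transmission falls as the two-week
            # average of deaths per 100k rises.
            recent = np.mean(daily_deaths[-14:]) if daily_deaths else 0.0
            beta = beta0 / (1.0 + alpha * recent / N * 1e5)
            new_E = beta * S * I / N
            new_I = sigma * E
            new_R = gamma * I
            S -= new_E
            E += new_E - new_I
            I += new_I - new_R
            R += new_R
            daily_deaths.append(ifr * new_R)
        return daily_deaths

    deaths = seir_with_behavior()
    print(f"peak daily deaths: {max(deaths):.1f}")
    ```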
    Allied Health researcher Ran Xu of the UConn College of Agriculture, Health and Natural Resources, along with collaborators Hazhir Rahmandad of the Massachusetts Institute of Technology and Navid Ghaffarzadegan of Virginia Tech, has a paper out today in PLOS Computational Biology detailing how they applied relatively simple but nuanced variables to enhance modeling capabilities, with the result that their approach outperformed a majority of the models currently used to inform decisions made by the federal Centers for Disease Control and Prevention (CDC).
    Xu explains that he and his collaborators are methodologists, and they were interested in examining which parameters impacted the forecasting accuracy of the COVID prediction models. To begin, they turned to the CDC prediction hub, which serves as a repository of models from across the United States.
    “Currently there are over 70 different models, mostly from universities and some from companies, that are updated weekly,” says Xu. “Each week, these models give predictions for cases and number of deaths in the next couple of weeks. The CDC uses this information to inform their decisions; for example, where to strategically focus their efforts or whether to advise people to do social distancing.”
    The Human Factor
    The data comprised over 490,000 point forecasts of weekly deaths across 57 US locations over the course of one year. The researchers analyzed how forecast accuracy varied with the length of the prediction horizon, out to 14 weeks ahead. On further analysis, Xu says, they noticed something interesting when they categorized the models based on their methodologies.
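    A simplified sketch of that kind of evaluation might look like the following (the file and column names here are hypothetical, not the study’s actual data layout): score each model’s point forecasts against observed weekly deaths and see how error grows with the prediction horizon.

    ```python
    import pandas as pd

    # Hypothetical layout: forecasts.csv has columns model, location,
    # target_week, horizon_weeks, predicted; observed.csv has columns
    # location, target_week, actual.
    forecasts = pd.read_csv("forecasts.csv")
    observed = pd.read_csv("observed.csv")

    df = forecasts.merge(observed, on=["location", "target_week"])
    df["abs_error"] = (df["predicted"] - df["actual"]).abs()

    # Mean absolute error per model at each horizon (1 to 14 weeks ahead),
    # showing how accuracy degrades as forecasts reach further out.
    mae = (df.groupby(["model", "horizon_weeks"])["abs_error"]
             .mean()
             .unstack("horizon_weeks"))
    print(mae.round(1))
    ```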

  • New process revolutionizes microfluidic fabrication

    Microfluidic devices use tiny spaces to manipulate very small quantities of liquids and gases by taking advantage of the properties they exhibit at the microscale. They have demonstrated usefulness in applications from inkjet printing to chemical analysis and have great potential in personalized medicine, where they can miniaturize many tests that now require a full lab, lending them the name lab-on-a-chip.
    Researchers at Kyoto University’s Institute for Integrated Cell-Material Sciences (iCeMS) approached microfluidic fabrication from a new direction and came up with an innovative process to make devices with some distinctive properties and advantages.
    A description of the new process created by Dr. Detao Qin of iCeMS’ Pureosity team, led by Professor Easan Sivaniah, appears in Nature Communications.
    Until now, making devices with microfluidic channels has required assembling them from several components, introducing possible points of failure. The Pureosity team’s process needs no such assembly. Instead, a novel photolithography technique uses common light-sensitized polymers and micro-LED light sources to create self-enclosed, porous, high-resolution channels capable of carrying aqueous solutions and separating small biomolecules from one another.
    The Pureosity team’s latest development builds upon their Organized Microfibrillation (OM) technology, a printing process previously published in Nature (2019). Owing to a unique feature of the OM process, the microfluidic channels display structural color that is linked to pore size; this correlation ties flow rate to color as well.
    “We see great potential in this new process,” says Prof. Sivaniah. “We see it as a completely new platform for microfluidic technology, not just for personal diagnostics, but also for miniaturized sensors and detectors.”
    Microfluidic devices are already being used in the biomedical field in point-of-care diagnostics to analyze DNA and proteins. In the future, devices may allow patients to monitor their vital health markers by simply wearing a small patch, so that healthcare providers can respond immediately to dangerous symptoms.
    “It was exciting to finally use our technology for biomedical applications,” says Assistant Professor Masateru Ito, a co-author on the current paper. “We are taking the first steps, but it is encouraging that relevant biomolecules such as insulin and the SARS-CoV-2 shell protein were compatible with our channels. I think that diagnostic devices are a promising future for this technology.”
    Story Source:
    Materials provided by Kyoto University.

  • Low-cost battery-like device absorbs CO2 emissions while it charges

    Researchers have developed a low-cost device that can selectively capture carbon dioxide gas while it charges. Then, when it discharges, the CO2 can be released in a controlled way and collected to be reused or disposed of responsibly.
    The supercapacitor device, which is similar to a rechargeable battery, is the size of a two-pence coin, and is made in part from sustainable materials including coconut shells and seawater.
    Designed by researchers from the University of Cambridge, the supercapacitor could help power carbon capture and storage technologies at much lower cost. Around 35 billion tonnes of CO2 are released into the atmosphere per year and solutions are urgently needed to eliminate these emissions and address the climate crisis. The most advanced carbon capture technologies currently require large amounts of energy and are expensive.
    The supercapacitor consists of two electrodes, one of positive and one of negative charge. In work led by Trevor Binford while completing his Master’s degree at Cambridge, the team tried alternating from a negative to a positive voltage to extend the charging time beyond that of previous experiments. This improved the supercapacitor’s ability to capture carbon.
    “We found that by slowly alternating the current between the plates we can capture double the amount of CO2 than before,” said Dr Alexander Forse from Cambridge’s Yusuf Hamied Department of Chemistry, who led the research.
    “The charging-discharging process of our supercapacitor potentially uses less energy than the amine heating process used in industry now,” said Forse. “Our next questions will involve investigating the precise mechanisms of CO2 capture and improving them. Then it will be a question of scaling up.”
    The results are reported in the journal Nanoscale.

  • New thermal management technology for electronic devices reduces bulk while improving cooling

    Electronic devices generate heat, and that heat must be dissipated. If it isn’t, the high temperatures can compromise device function, or even damage the devices and their surroundings.
    Now, a team from UIUC and UC Berkeley has published a paper in Nature Electronics detailing a new cooling method that offers a host of benefits, not least a space efficiency that gives devices a substantial increase in power per unit volume over conventional approaches.
    Tarek Gebrael, the lead author and a UIUC Ph.D. student in mechanical engineering, explains that the existing solutions suffer from three shortcomings. “First, they can be expensive and difficult to scale up,” he says. Heat spreaders made of diamond, for example, are sometimes used at the chip level, but they aren’t cheap.
    Second, conventional heat spreading approaches generally require that the heat spreader and a heat sink — a device for dissipating heat efficiently, toward which the spreader directs the heat — be attached on top of the electronic device. Unfortunately, “in many cases, most of the heat is generated underneath the electronic device,” meaning that the cooling mechanism isn’t where it needs to be for optimal performance.
    Third, state-of-the-art heat spreaders can’t be installed directly on the surface of the electronics; a layer of “thermal interface material” must be sandwiched between them to ensure good contact. Because of its poor heat-transfer characteristics, however, that middle layer degrades thermal performance.
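    The cost of that middle layer is easy to see with a back-of-the-envelope series-resistance estimate (the numbers below are generic assumptions, not figures from the paper): each layer contributes R = t / (kA), and a thin, low-conductivity TIM can dominate the whole stack.

    ```python
    # One-dimensional conduction: layers in a stack add their thermal
    # resistances in series, R = thickness / (conductivity * area).

    def layer_resistance(thickness_m, conductivity_W_mK, area_m2):
        return thickness_m / (conductivity_W_mK * area_m2)

    area = 1e-4  # 1 cm^2 chip (assumed)
    copper_spreader = layer_resistance(1e-3, 400.0, area)   # 1 mm copper
    tim = layer_resistance(50e-6, 3.0, area)                # 50 um typical TIM

    total = copper_spreader + tim
    print(f"spreader: {copper_spreader:.3f} K/W, TIM: {tim:.3f} K/W")
    print(f"TIM share of total resistance: {tim / total:.0%}")  # ~87%
    ```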
    The new solution addresses all three of those problems.

  • Spin keeps electrons in line in iron-based superconductor

    Researchers from PSI’s Spectroscopy of Quantum Materials group, together with scientists from Beijing Normal University, have solved a puzzle at the forefront of research into iron-based superconductors: the origin of FeSe’s electronic nematicity. Using resonant inelastic X-ray scattering (RIXS) at the Swiss Light Source (SLS), they discovered that, surprisingly, this electronic phenomenon is primarily spin driven. Electronic nematicity is believed to be an important ingredient in high-temperature superconductivity, but whether it helps or hinders it is still unknown. The findings are published in Nature Physics.
    Near PSI, where the Swiss forest is ever present in people’s lives, you often see log piles: incredibly neat log piles. Wedge-shaped logs for firewood are stacked carefully lengthways but with little thought to their rotation. When particles in a material spontaneously line up, like the logs in these log piles, such that they break rotational symmetry but preserve translational symmetry, the material is said to be in a nematic state. In a liquid crystal, this means that the rod-shaped molecules are able to flow like a liquid in the direction of their alignment, but not in other directions. Electronic nematicity occurs when the electron orbitals in a material align in this way. Typically, this electronic nematicity manifests itself as anisotropic electronic properties: for example, resistivity or conductivity exhibiting vastly different magnitudes when measured along different axes.
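    One common way to quantify such anisotropy in experiments is a normalized resistivity difference between the two in-plane axes, sketched here with made-up numbers:

    ```python
    # Made-up values for illustration: in-plane resistivities along the
    # two crystal axes. A nonzero difference signals broken rotational
    # symmetry, i.e. electronic nematic order.
    rho_a, rho_b = 1.00, 1.35  # arbitrary units

    nematic_order = (rho_b - rho_a) / (rho_b + rho_a)
    print(f"nematic order parameter: {nematic_order:.2f} (0 = isotropic)")
    ```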
    Since their discovery in 2008, iron-based superconductors have attracted enormous interest. Alongside the well-studied cuprate superconductors, these materials exhibit the mysterious phenomenon of high-temperature superconductivity. The electronic nematic state is a ubiquitous feature of iron-based superconductors. Yet, until now, the physical origin of this electronic nematicity has been a puzzle; in fact, arguably one of the most important puzzles in the study of iron-based superconductors.
    But why is electronic nematicity so interesting? The answer lies in an ever-exciting conundrum: understanding how electrons pair up and achieve superconductivity at high temperatures. The stories of electronic nematicity and superconductivity are inextricably linked, but exactly how, and indeed whether they compete or cooperate, is a hotly debated issue.
    The drive to understand electronic nematicity has led researchers to turn their attention to one particular iron-based superconductor, iron selenide (FeSe). FeSe is something of an enigma, simultaneously possessing the simplest crystal structure of all the iron-based superconductors and the most baffling electronic properties.
    FeSe enters its superconducting phase below a critical temperature (Tc) of 9 K but tantalisingly boasts a tunable Tc, meaning that this temperature can be raised by applying pressure to the material or doping it. The quasi-2D layered material possesses an extended electronic nematic phase, which appears below approximately 90 K. Curiously, this electronic nematicity appears without the long-range magnetic order that it would typically go hand in hand with, leading to lively debate surrounding its origins: namely, whether these are driven by orbital or spin degrees of freedom. The absence of long-range magnetic order in FeSe offers a clearer view of the electronic nematicity and its interplay with superconductivity. As a result, many researchers feel that FeSe may hold the key to solving the puzzle of electronic nematicity across the family of iron-based superconductors.

  • Researchers magnify hidden biological structures with MAGNIFIERS

    A research team from Carnegie Mellon University and Columbia University has combined two emerging imaging technologies to better view a wide range of biomolecules, including proteins, lipids and DNA, at the nanoscale. Their technique, which brings together expansion microscopy and stimulated Raman scattering microscopy, is detailed in Advanced Science.
    Biomolecules are traditionally imaged using fluorescence microscopy, but that technique has its limitations. Fluorescence microscopy relies on fluorophore-carrying tags that bind to and label molecules of interest. These tags emit fluorescent light over a broad range of wavelengths, so researchers can use only three to four fluorescent colors in the visible spectrum at a time to label molecules of interest.
    Unlike fluorescence microscopy, stimulated Raman scattering (SRS) microscopy visualizes the chemical bonds of biomolecules by capturing their vibrational fingerprints. In this sense, SRS doesn’t need labels to see the different types of biomolecules, or even different isotopes, within a sample. In addition, a rainbow of dyes with unique vibrational spectra can be used to image multiple targets. However, SRS has a diffraction limit of about 300 nanometers, making it unable to visualize many of the crucial nanoscale structures found in cells and tissue.
    “Each type of molecule has its own vibrational fingerprint. SRS allows us to see the type of molecule we want by tuning in to the characteristic frequency of its vibrations. Something like switching between radio stations,” said Carnegie Mellon Eberly Family Associate Professor of Biological Sciences Yongxin (Leon) Zhao.
    Zhao’s lab has been developing new imaging tools based on expansion microscopy — a technique that addresses the problem of diffraction limits in a wide range of biological imaging. Expansion microscopy takes biological samples and transforms them into water-soluble hydrogels. The hydrogels can then be treated and made to expand to more than 100 times their original volume. The expanded samples can then be imaged using standard techniques.
    “Just as SRS was able to surmount the limitations of fluorescence microscopy, expansion microscopy surmounts the limitations of SRS,” said Zhao.
    The Carnegie Mellon and Columbia researchers combined SRS and expansion microscopy to create Molecule Anchorable Gel-enabled Nanoscale Imaging of Fluorescence and stimulated Raman Scattering microscopy (MAGNIFIERS). Zhao’s expansion microscopy technique was able to expand samples up to 7.2-fold, allowing the team to use SRS to image smaller molecules and structures than would be possible without expansion.
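    The resolution gain follows directly from the two numbers above: physically expanding the sample improves the effective resolution of a diffraction-limited method by roughly the expansion factor (a back-of-the-envelope estimate, not a figure quoted from the paper).

    ```python
    # Effective resolution of a diffraction-limited method improves by
    # the physical expansion factor (rough estimate from stated numbers).
    srs_diffraction_limit_nm = 300   # stated SRS diffraction limit
    expansion_factor = 7.2           # expansion reported for MAGNIFIERS

    effective_nm = srs_diffraction_limit_nm / expansion_factor
    print(f"~{effective_nm:.0f} nm effective resolution")  # ~42 nm
    ```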
    In the recently published study, the research team showed that MAGNIFIERS could be used for high-resolution metabolic imaging of protein aggregates, such as those formed in conditions like Huntington’s disease. They also showed that MAGNIFIERS could map the nanoscale location of eight different markers in brain tissue at one time.
    The researchers plan to continue to develop the MAGNIFIERS technique to achieve higher resolution and higher throughput imaging for understanding the pathology of complex diseases, such as cancer and brain disorders.
    Additional study co-authors include Alexsandra Klimas, Brendan Gallagher, Zhangu Cheng, Feifei Fu, Piyumi Wijesekara and Xi Ren from Carnegie Mellon; and Yupeng Miao, Lixue Shi and Wei Min from Columbia.
    This research was funded by the National Institutes of Health (DP2 OD025926-01, R01 GM128214, R01 GM132860, and R01 EB029523), Carnegie Mellon University, the DSF Charitable Foundation and U.S. Department of Defense (VR190139).
    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Jocelyn Duffy.

  • Accelerating the pace of machine learning

    Machine learning happens a lot like erosion.
    Data is hurled at a mathematical model like grains of sand skittering across a rocky landscape. Some of those grains simply sail along with little or no impact. But some of them make their mark: testing, hardening, and ultimately reshaping the landscape according to inherent patterns and fluctuations that emerge over time.
    Effective? Yes. Efficient? Not so much.
    Rick Blum, the Robert W. Wieseman Professor of Electrical and Computer Engineering at Lehigh University, seeks to bring efficiency to distributed learning techniques emerging as crucial to modern artificial intelligence (AI) and machine learning (ML). In essence, his goal is to hurl far fewer grains of data without degrading the overall impact.
    In the paper “Distributed Learning With Sparsified Gradient Differences,” published in a special ML-focused issue of the IEEE Journal of Selected Topics in Signal Processing, Blum and collaborators propose the use of “Gradient Descent method with Sparsification and Error Correction,” or GD-SEC, to improve the communications efficiency of machine learning conducted in a “worker-server” wireless architecture. The issue was published May 17, 2022.
    “Problems in distributed optimization appear in various scenarios that typically rely on wireless communications,” he says. “Latency, scalability, and privacy are fundamental challenges.”
    “Various distributed optimization algorithms have been developed to solve this problem,” he continues, “and one primary method is to employ classical GD in a worker-server architecture. In this environment, the central server updates the model’s parameters after aggregating data received from all workers, and then broadcasts the updated parameters back to the workers. But the overall performance is limited by the fact that each worker must transmit all of its data all of the time. When training a deep neural network, this can be on the order of 200 MB from each worker device at each iteration. This communication step can easily become a significant bottleneck on overall performance, especially in federated learning and edge AI systems.”
    Through the use of GD-SEC, Blum explains, communication requirements are significantly reduced. The technique employs a data compression approach where each worker sets small magnitude gradient components to zero — the signal-processing equivalent of not sweating the small stuff. The worker then only transmits to the server the remaining non-zero components. In other words, meaningful, usable data are the only packets launched at the model.
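    A minimal sketch of that sparsify-and-error-correct idea reads as follows (an illustration of the general family of techniques GD-SEC builds on, not the paper’s exact algorithm): small components are withheld and carried forward in a local residual so that no information is permanently discarded.

    ```python
    import numpy as np

    # Each worker transmits only its largest-magnitude gradient
    # components; the withheld remainder is accumulated in a residual
    # and added back at the next round (error correction).

    def sparsify_with_error_feedback(gradient, residual, keep_fraction=0.1):
        """Return a sparse update and the new residual for one worker."""
        corrected = gradient + residual              # add back withheld mass
        k = max(1, int(keep_fraction * corrected.size))
        threshold = np.sort(np.abs(corrected))[-k]   # k-th largest magnitude
        mask = np.abs(corrected) >= threshold
        sparse_update = np.where(mask, corrected, 0.0)  # transmit only these
        new_residual = corrected - sparse_update        # withhold the rest
        return sparse_update, new_residual

    rng = np.random.default_rng(0)
    grad = rng.normal(size=1000)
    update, resid = sparsify_with_error_feedback(grad, np.zeros_like(grad))
    print(f"nonzeros sent: {np.count_nonzero(update)} of {grad.size}")
    ```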
    “Current methods create a situation where each worker has expensive computational cost; GD-SEC is relatively cheap where only one GD step is needed at each round,” says Blum.
    Professor Blum’s collaborators on this project include his former student Yicheng Chen ’19G ’21PhD, now a software engineer with LinkedIn; Martin Takác, an associate professor at the Mohamed bin Zayed University of Artificial Intelligence; and Brian M. Sadler, a Life Fellow of the IEEE, U.S. Army Senior Scientist for Intelligent Systems, and Fellow of the Army Research Laboratory.
    Story Source:
    Materials provided by Lehigh University.