More stories

  • Interplay between charge order and superconductivity at nanoscale

    High-temperature superconductivity is something of a holy grail for researchers studying quantum materials. Superconductors, which conduct electricity without dissipating energy, promise to revolutionize our energy and telecommunications systems. However, superconductors typically work only at extremely low temperatures, requiring elaborate freezers or expensive coolants. For this reason, scientists have been working relentlessly to understand the fundamental mechanisms underlying high-temperature superconductivity, with the ultimate goal of designing and engineering new quantum materials that superconduct close to room temperature.
    Fabio Boschini, a professor at the Institut national de la recherche scientifique (INRS), and fellow North American scientists studied the dynamics of the superconductor yttrium barium copper oxide (YBCO), which becomes superconducting at higher temperatures than conventional superconductors, via time-resolved resonant X-ray scattering at the Linac Coherent Light Source (LCLS) free-electron laser at SLAC (US). The research was published on May 19 in the journal Science. In this new study, the researchers were able to track how charge density waves in YBCO react to a sudden “quenching” of the superconductivity induced by an intense laser pulse.
    “We are learning that charge density waves — self-organized electrons behaving like ripples in water — and superconductivity are interacting at the nanoscale on ultrafast timescales. There is a very deep connection between superconductivity emergence and charge density waves,” says Fabio Boschini, co-investigator on this project and affiliate investigator at the Stewart Blusson Quantum Matter Institute (Blusson QMI).
    “Up until a few years ago, researchers underestimated the importance of the dynamics inside these materials,” said Giacomo Coslovich, lead investigator and Staff Scientist at the SLAC National Accelerator Laboratory in California. “Until this collaboration came together, we really didn’t have the tools to assess the charge density wave dynamics in these materials. The opportunity to look at the evolution of charge order is only possible thanks to teams like ours sharing resources, and by the use of a free-electron laser to offer new insight into the dynamical properties of matter.”
    With a better picture of the dynamical interactions underlying high-temperature superconductivity, the researchers are optimistic that they can work with theoretical physicists to develop a framework for a more nuanced understanding of how high-temperature superconductivity emerges.
    Collaboration is key
    The present work grew out of a collaboration among researchers from several leading research centres and beamlines. “We began running our first experiments at the end of 2015 with the first characterization of the material at the Canadian Light Source,” says Boschini. “Over time, the project came to involve many Blusson QMI researchers, such as MengXing Na, whom I mentored and introduced to this work. She was integral to the data analysis.”
    “This work is meaningful for a number of reasons, but it also really showcases the importance of forming long-lasting, meaningful collaborations and relationships,” said Na. “Some projects take a really long time, and it’s a credit to Giacomo’s leadership and perseverance that we got here.”
    The project has linked at least three generations of scientists, following some as they progressed through their postdoctoral careers and into faculty positions. The researchers are excited to expand upon this work, by using light as an optical knob to control the on-off state of superconductivity.
    Story Source:
    Materials provided by Institut national de la recherche scientifique – INRS. Original written by Audrey-Maude Vézina. Note: Content may be edited for style and length.

  • Virtual immune system roadmap unveiled

    An article published May 20 in Nature’s npj Digital Medicine provides a step-by-step plan for an international effort to create a digital twin of the human immune system.
    “This paper outlines a road map that the scientific community should take in building, developing and applying a digital twin of the immune system,” said Tomas Helikar, a University of Nebraska-Lincoln biochemist who is one of 10 co-authors from six universities around the world. Earlier this year, the National Institutes of Health renewed a five-year, $1.8 million grant for Helikar to continue his work in the area.
    “This is an effort that will require the collaboration of computational biologists, immunologists, clinicians, mathematicians and computer scientists,” he said. “Trying to break this complexity down into measurable and achievable steps has been a challenge. This paper is addressing that.”
    A digital twin of the immune system would be a breakthrough that could offer precision medicine for a wide array of ailments, including cancer, autoimmune disease and viral infections like COVID-19.
    Helikar’s involvement has been inspired in part by his 7-year-old son, who required a lung transplant as an infant. This has meant a lifelong, careful balancing act: powerful immunosuppression drugs to prevent organ rejection, while still keeping infections and other diseases at bay.
    While the first step is to create a generic model that reflects common biological mechanisms, the eventual goal is to make virtual models at the individual level. That would enable doctors to deliver treatments precisely designed for the individual.

  • Using everyday WiFi to help robots see and navigate better indoors

    Engineers at the University of California San Diego have developed a low-cost, low-power technology to help robots accurately map their way indoors, even in poor lighting and without recognizable landmarks or features.
    The technology consists of sensors that use WiFi signals to help the robot map where it’s going. It’s a new approach to indoor robot navigation: most systems rely on optical sensors such as cameras and LiDARs. The so-called “WiFi sensors” instead use radio frequency signals rather than light or visual cues to see, so they can work in conditions where cameras and LiDARs struggle — in low light, changing light, and repetitive environments such as long corridors and warehouses.
    And by using WiFi, the technology could offer an economical alternative to expensive and power-hungry LiDARs, the researchers noted.
    A team of researchers from the Wireless Communication Sensing and Networking Group, led by UC San Diego electrical and computer engineering professor Dinesh Bharadia, will present their work at the 2022 International Conference on Robotics and Automation (ICRA), which will take place from May 23 to 27 in Philadelphia.
    “We are surrounded by wireless signals almost everywhere we go. The beauty of this work is that we can use these everyday signals to do indoor localization and mapping with robots,” said Bharadia.
    “Using WiFi, we have built a new kind of sensing modality that fills in the gaps left behind by today’s light-based sensors, and it can enable robots to navigate in scenarios where they currently cannot,” added Aditya Arun, who is an electrical and computer engineering Ph.D. student in Bharadia’s lab and the first author of the study.

  • Is it topological? A new materials database has the answer

    What will it take to make our electronics smarter, faster, and more resilient? One idea is to build them from materials that are topological.
    Topology is a branch of mathematics that studies shapes that can be manipulated or deformed without losing certain core properties. A donut is a common example: if it were made of rubber, a donut could be twisted and squeezed into a completely new shape, such as a coffee mug, while retaining a key trait — namely, its center hole, which takes the form of the cup’s handle. The hole, in this case, is a topological trait, robust against certain deformations.
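    In slightly more formal terms (standard topology background, not something introduced by the study): for a closed orientable surface, the number of holes is the genus $g$, which is tied to the Euler characteristic by $\chi = 2 - 2g$; a sphere has $g = 0$ and $\chi = 2$, while the surface of a donut or a coffee mug has $g = 1$ and $\chi = 0$, and no amount of smooth stretching or squeezing changes either number.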
    In recent years, scientists have applied concepts of topology to the discovery of materials with similarly robust electronic properties. In 2007, researchers predicted the first electronic topological insulators — materials in which electrons behave in ways that are “topologically protected,” or persistent in the face of certain disruptions.
    Since then, scientists have searched for more topological materials with the aim of building better, more robust electronic devices. Until recently, only a handful of such materials had been identified, and topological materials were therefore assumed to be a rarity.
    Now researchers at MIT and elsewhere have discovered that, in fact, topological materials are everywhere, if you know how to look for them.
    In a paper published in Science, the team, led by Nicolas Regnault of Princeton University and the École Normale Supérieure Paris, reports harnessing the power of multiple supercomputers to map the electronic structure of more than 96,000 natural and synthetic crystalline materials. They applied sophisticated filters to determine whether and what kind of topological traits exist in each structure.

  • Human behavior is key to building a better long-term COVID forecast

    From extreme weather to another wave of COVID-19, forecasts give decision-makers valuable time to prepare. When it comes to COVID, though, long-term forecasting is a challenge, because it involves human behavior.
    While it can sometimes seem like there is no logic to human behavior, new research is working to improve COVID forecasts by incorporating that behavior into prediction models.
    Ran Xu, an Allied Health researcher in the UConn College of Agriculture, Health and Natural Resources, along with collaborators Hazhir Rahmandad from the Massachusetts Institute of Technology and Navid Ghaffarzadegan from Virginia Tech, has a paper out today in PLOS Computational Biology detailing how they applied relatively simple but nuanced variables to enhance modelling capabilities. Their approach outperformed a majority of the models currently used to inform decisions made by the federal Centers for Disease Control and Prevention (CDC).
    Xu explains that he and his collaborators are methodologists, and they were interested in examining which parameters impacted the forecasting accuracy of the COVID prediction models. To begin, they turned to the CDC prediction hub, which serves as a repository of models from across the United States.
    “Currently there are over 70 different models, mostly from universities and some from companies, that are updated weekly,” says Xu. “Each week, these models give predictions for cases and number of deaths in the next couple of weeks. The CDC uses this information to inform their decisions; for example, where to strategically focus their efforts or whether to advise people to do social distancing.”
    The Human Factor
    The data comprised over 490,000 point forecasts of weekly deaths across 57 US locations over the course of one year. The researchers analyzed the prediction horizon and the relative accuracy of the predictions across a period of 14 weeks. On further analysis, Xu says, they noticed something interesting when they categorized the models based on their methodologies.
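    To make that kind of horizon-by-horizon accuracy analysis concrete, here is a minimal sketch (not the authors' code; the file names, column names, and error metric are assumptions) of how one might score each model's point forecasts against reported weekly deaths and average the error by how many weeks ahead each forecast was made:

    ```python
    import pandas as pd

    # Hypothetical inputs: one row per (model, location, forecast date, target week) point forecast,
    # and one row per (location, target week) of reported deaths.
    forecasts = pd.read_csv("point_forecasts.csv")  # model, location, forecast_date, target_date, predicted_deaths
    observed = pd.read_csv("observed_deaths.csv")   # location, target_date, deaths

    df = forecasts.merge(observed, on=["location", "target_date"])

    # Forecast horizon in weeks: how far ahead of the forecast date the target week lies.
    df["horizon_weeks"] = (
        pd.to_datetime(df["target_date"]) - pd.to_datetime(df["forecast_date"])
    ).dt.days // 7

    # Absolute error of each point forecast against the reported weekly deaths.
    df["abs_error"] = (df["predicted_deaths"] - df["deaths"]).abs()

    # Mean absolute error per model and horizon; accuracy typically degrades as the horizon grows.
    mae_by_horizon = (
        df.groupby(["model", "horizon_weeks"])["abs_error"].mean().unstack("horizon_weeks")
    )
    print(mae_by_horizon.round(1))
    ```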

  • New process revolutionizes microfluidic fabrication

    Microfluidic devices use tiny spaces to manipulate very small quantities of liquids and gases by taking advantage of the properties they exhibit at the microscale. They have demonstrated usefulness in applications from inkjet printing to chemical analysis and have great potential in personal medicine, where they can miniaturize many tests that now require a full lab, lending them the name lab-on-a-chip.
    Researchers at Kyoto University’s Institute for Integrated Cell-Material Sciences (iCeMS) approached microfluidic fabrication from a new direction and came up with an innovative process to make devices with some distinctive properties and advantages.
    A description of the new process created by Dr. Detao Qin of iCeMS’ Pureosity team, led by Professor Easan Sivaniah, appears in Nature Communications.
    Until now, making devices with microfluidic channels has required assembling them from several components, introducing possible points of failure. The Pureosity team’s process needs no such assembly. Instead, it uses light-sensitized common polymers and micro-LED light sources to create self-enclosed, porous, high-resolution channels, capable of carrying aqueous solutions and separating small biomolecules from each other, through a novel photolithography technique.
    The Pureosity team’s latest development builds upon their Organized Microfibrillation (OM) technology, a printing process previously published in Nature (2019). Due to a unique feature of the OM process, the microfluidic channels display structural color that is linked to the pore size. This correlation ties flow rate to the color as well.
    “We see great potential in this new process,” says Prof. Sivaniah. “We see it as a completely new platform for microfluidic technology, not just for personal diagnostics, but also for miniaturized sensors and detectors.”
    Microfluidic devices are already being used in the biomedical field in point-of-care diagnostics to analyze DNA and proteins. In the future, devices may allow patients to monitor their vital health markers by simply wearing a small patch, so that healthcare providers can respond immediately to dangerous symptoms.
    “It was exciting to finally use our technology for biomedical applications,” says Assistant Professor Masateru Ito, a co-author on the current paper. “We are taking the first steps, but it is encouraging that relevant biomolecules such as insulin and the SARS-CoV-2 shell protein were compatible with our channels. I think that diagnostic devices are a promising future for this technology.”
    Story Source:
    Materials provided by Kyoto University. Note: Content may be edited for style and length.

  • Low-cost battery-like device absorbs CO2 emissions while it charges

    Researchers have developed a low-cost device that can selectively capture carbon dioxide gas while it charges. Then, when it discharges, the CO2 can be released in a controlled way and collected to be reused or disposed of responsibly.
    The supercapacitor device, which is similar to a rechargeable battery, is the size of a two-pence coin, and is made in part from sustainable materials including coconut shells and seawater.
    Designed by researchers from the University of Cambridge, the supercapacitor could help power carbon capture and storage technologies at much lower cost. Around 35 billion tonnes of CO2 are released into the atmosphere per year and solutions are urgently needed to eliminate these emissions and address the climate crisis. The most advanced carbon capture technologies currently require large amounts of energy and are expensive.
    The supercapacitor consists of two electrodes of positive and negative charge. In work led by Trevor Binford while completing his Master’s degree at Cambridge, the team tried alternating from a negative to a positive voltage to extend the charging time from previous experiments. This improved the supercapacitor’s ability to capture carbon.
    “We found that by slowly alternating the current between the plates we can capture double the amount of CO2 compared to before,” said Dr Alexander Forse from Cambridge’s Yusuf Hamied Department of Chemistry, who led the research.
    “The charging-discharging process of our supercapacitor potentially uses less energy than the amine heating process used in industry now,” said Forse. “Our next questions will involve investigating the precise mechanisms of CO2 capture and improving them. Then it will be a question of scaling up.”
    The results are reported in the journal Nanoscale.

  • New thermal management technology for electronic devices reduces bulk while improving cooling

    Electronic devices generate heat, and that heat must be dissipated. If it isn’t, the high temperatures can compromise device function, or even damage the devices and their surroundings.
    Now, a team from UIUC and UC Berkeley has published a paper in Nature Electronics detailing a new cooling method that offers a host of benefits, not the least of which is a space efficiency that enables a substantial increase in devices’ power per unit volume compared with conventional approaches.
    Tarek Gebrael, the lead author and a UIUC Ph.D. student in mechanical engineering, explains that the existing solutions suffer from three shortcomings. “First, they can be expensive and difficult to scale up,” he says. Heat spreaders made of diamond, for example, are sometimes used at the chip level, but they aren’t cheap.
    Second, conventional heat spreading approaches generally require that the heat spreader and a heat sink — a device for dissipating heat efficiently, toward which the spreader directs the heat — be attached on top of the electronic device. Unfortunately, “in many cases, most of the heat is generated underneath the electronic device,” meaning that the cooling mechanism isn’t where it needs to be for optimal performance.
    Third, state-of-the-art heat spreaders can’t be installed directly on the surface of the electronics; a layer of “thermal interface material” must be sandwiched between them to ensure good contact. Because of its poor heat-transfer characteristics, however, that middle layer degrades thermal performance.
    The new solution addresses all three of those problems.