More stories


    Designing better battery electrolytes

    Looking at the future of battery materials.
    Designing a battery is a three-part process. You need a positive electrode, you need a negative electrode, and — importantly — you need an electrolyte that works with both electrodes.
    An electrolyte is the battery component that transfers ions — charge-carrying particles — back and forth between the battery’s two electrodes, causing the battery to charge and discharge. For today’s lithium-ion batteries, electrolyte chemistry is relatively well-defined. For future generations of batteries being developed around the world and at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, however, the question of electrolyte design is wide open.
    “While we are locked into a particular concept for electrolytes that will work with today’s commercial batteries, for beyond-lithium-ion batteries the design and development of different electrolytes will be crucial,” said Shirley Meng, chief scientist at the Argonne Collaborative Center for Energy Storage Science (ACCESS) and professor of molecular engineering at the Pritzker School of Molecular Engineering of The University of Chicago. “Electrolyte development is one key to the progress we will achieve in making these cheaper, longer-lasting and more powerful batteries a reality, and taking one major step towards continuing to decarbonize our economy.”
    In a new paper published in Science, Meng and colleagues laid out their vision for electrolyte design in future generations of batteries.
    Even relatively small departures from today’s batteries will require a rethinking of electrolyte design, according to Meng. Switching from a nickel-containing oxide to a sulfur-based material as the main constituent of a lithium-ion battery’s positive electrode could yield significant performance benefits and reduce costs if scientists can figure out how to rejigger the electrolyte, she said.


    Study shows how machine learning could predict rare disastrous events, like earthquakes or pandemics

    When it comes to predicting disasters brought on by extreme events (think earthquakes, pandemics or “rogue waves” that could destroy coastal structures), computational modeling faces an almost insurmountable challenge: Statistically speaking, these events are so rare that there’s just not enough data on them to use predictive models to accurately forecast when they’ll happen next.
    But a team of researchers from Brown University and Massachusetts Institute of Technology say it doesn’t have to be that way.
    In a new study in Nature Computational Science, the scientists describe how they combined statistical algorithms — which need less data to make accurate, efficient predictions — with a powerful machine learning technique developed at Brown and trained it to predict scenarios, probabilities and sometimes even the timeline of rare events despite the lack of historical record on them.
    Doing so, the research team found that this new framework can provide a way to circumvent the need for massive amounts of data that are traditionally needed for these kinds of computations, instead essentially boiling down the grand challenge of predicting rare events to a matter of quality over quantity.
    “You have to realize that these are stochastic events,” said George Karniadakis, a professor of applied mathematics and engineering at Brown and a study author. “An outburst of pandemic like COVID-19, environmental disaster in the Gulf of Mexico, an earthquake, huge wildfires in California, a 30-meter wave that capsizes a ship — these are rare events and because they are rare, we don’t have a lot of historical data. We don’t have enough samples from the past to predict them further into the future. The question that we tackle in the paper is: What is the best possible data that we can use to minimize the number of data points we need?”
    The researchers found the answer in a sequential sampling technique called active learning. These types of statistical algorithms are not only able to analyze data input into them, but more importantly, they can learn from the information to label new relevant data points that are equally or even more important to the outcome that’s being calculated. At the most basic level, they allow more to be done with less.
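    At its simplest, this kind of sequential sampling is a loop: fit a cheap surrogate model, query the point the model is least certain about, add the new label, and refit. The sketch below is a generic uncertainty-sampling toy; the threshold "event" function and logistic surrogate are illustrative assumptions, not the machine learning framework developed at Brown.

```python
import numpy as np

# Toy "rare event" indicator: the event occurs only for inputs x > 0.7.
def event(x):
    return float(x > 0.7)

pool = np.linspace(0.0, 1.0, 1001)   # candidate inputs we could label
X, y = [0.0, 1.0], [0.0, 1.0]        # two labeled seed points

def fit_logistic(X, y, lr=1.0, steps=2000):
    """Fit a 1D logistic model p(event | x) by gradient descent."""
    X, y = np.asarray(X), np.asarray(y)
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w * X + b)))
        w -= lr * np.mean((p - y) * X)
        b -= lr * np.mean(p - y)
    return w, b

for _ in range(15):
    w, b = fit_logistic(X, y)
    p = 1.0 / (1.0 + np.exp(-(w * pool + b)))
    # Uncertainty sampling: query the input the surrogate is least sure about.
    x_next = pool[int(np.argmin(np.abs(p - 0.5)))]
    X.append(float(x_next))
    y.append(event(x_next))

w, b = fit_logistic(X, y)
boundary = -b / w   # estimated event threshold; converges near the true 0.7
```

    Fifteen carefully chosen labels pin down the event threshold that random sampling would need far more data to locate, which is the "quality over quantity" point made above.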


    When using virtual reality as a teaching tool, context and 'feeling real' matter

    A new study by UCLA psychologists reveals that when VR is used to teach language, context and realism matter.
    The research is published in the journal npj Science of Learning.
    “The context in which we learn things can help us remember them better,” said Jesse Rissman, the paper’s corresponding author and a UCLA associate professor of psychology. “We wanted to know if learning foreign languages in virtual reality environments could improve recall, especially when there was the potential for two sets of words to interfere with each other.”
    Researchers asked 48 English-speaking participants to try to learn 80 words in two phonetically similar African languages, Swahili and Chinyanja, as they navigated virtual reality settings.
    Wearing VR headsets, participants explored one of two environments — a fantasy fairyland or a science fiction landscape — where they could click to learn the Swahili or Chinyanja names for the objects they encountered. Some participants learned both languages in the same VR environment; others learned one language in each environment.
    Participants navigated through the virtual worlds four times over the course of two days, saying the translations aloud each time. One week later, the researchers followed up with a pop quiz to see how well the participants remembered what they had learned.


    Artificial intelligence detects an early sign of osteoarthritis in x-ray images

    Researchers from the University of Jyväskylä and the Central Finland Health Care District have developed an AI-based neural network to detect early knee osteoarthritis from x-ray images. The AI matched doctors’ diagnoses in 87% of cases. The result is important because x-rays are the primary diagnostic method for early knee osteoarthritis, and an early diagnosis can spare the patient unnecessary examinations, treatments and even knee joint replacement surgery.
    Osteoarthritis is the most common joint-related ailment globally. In Finland alone, it causes as many as 600,000 medical visits every year. It has been estimated to cost the national economy up to €1 billion every year.
    The new AI-based method was trained to detect a radiological feature that is predictive of osteoarthritis in x-rays. The feature is not currently included in the diagnostic criteria, but orthopaedic specialists consider it an early sign of osteoarthritis. The method was developed in the Digital Health Intelligence Lab at the University of Jyväskylä as part of the AI Hub Central Finland project, and it builds on widely used neural network technologies.
    “The aim of the project was to train the AI to recognise an early feature of osteoarthritis from an x-ray. Something that experienced doctors can visually distinguish from the image, but cannot be done automatically,” explains Anri Patron, the researcher responsible for the development of the method.
    In practice, the AI tries to detect whether there is spiking on the tibial tubercles in the knee joint or not. Tibial spiking can be a sign of osteoarthritis.
    The reliability of the method was evaluated together with specialists from the Central Finland Health Care District.
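    The 87% figure reported above is a raw agreement rate. Evaluations of this kind often also report Cohen's kappa, which discounts the agreement expected by chance. A minimal sketch with made-up labels (not the study's data):

```python
import numpy as np

# Hypothetical per-image labels: 1 = tibial spiking present, 0 = absent.
ai     = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
doctor = np.array([1, 0, 1, 0, 0, 0, 1, 0, 1, 1])

agreement = np.mean(ai == doctor)   # raw agreement rate, here 0.8

# Cohen's kappa: observed agreement corrected for chance agreement.
p_o = agreement
p_e = (np.mean(ai) * np.mean(doctor)
       + np.mean(1 - ai) * np.mean(1 - doctor))
kappa = (p_o - p_e) / (1 - p_e)     # here 0.6
```

    Kappa is lower than raw agreement whenever the two raters would already agree often by chance, which is why it is the preferred statistic for diagnostic comparisons.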


    Dynamical fractal discovered in clean magnetic crystal

    The nature and properties of materials depend strongly on dimension. Imagine how different life in a one-dimensional or two-dimensional world would be from the three dimensions we’re commonly accustomed to. With this in mind, it is perhaps not surprising that fractals – objects with fractional dimension — have garnered significant attention since their discovery. Despite their apparent strangeness, fractals arise in surprising places — from snowflakes and lightning strikes to natural coastlines.
    Researchers at the University of Cambridge, the Max Planck Institute for the Physics of Complex Systems in Dresden, the University of Tennessee, and the Universidad Nacional de La Plata have uncovered an altogether new type of fractal appearing in a class of magnets called spin ices. The discovery was surprising because the fractals were seen in a clean three-dimensional crystal, where they conventionally would not be expected. Even more remarkably, the fractals are visible in dynamical properties of the crystal, and hidden in static ones. These features motivated the appellation of “emergent dynamical fractal.”
    The fractals were discovered in crystals of the material dysprosium titanate, where the electron spins behave like tiny bar magnets. These spins cooperate through ice rules that mimic the constraints that protons experience in water ice. For dysprosium titanate, this leads to very special properties.
    Jonathan Hallén of the University of Cambridge is a PhD student and the lead author on the study. He explains that “at temperatures just slightly above absolute zero the crystal spins form a magnetic fluid.” This is no ordinary fluid, however.
    “With tiny amounts of heat, the ice rules get broken at a small number of sites, and their north and south poles, making up the flipped spin, separate from each other, traveling as independent magnetic monopoles.”
    The motion of these magnetic monopoles led to the discovery here. As Professor Claudio Castelnovo, also from the University of Cambridge, points out: “We knew there was something really strange going on. Results from 30 years of experiments didn’t add up.”
    Referring to a new study on the magnetic noise from the monopoles published earlier this year, Castelnovo continued, “After several failed attempts to explain the noise results, we finally had a eureka moment, realizing that the monopoles must be living in a fractal world and not moving freely in three dimensions, as had always been assumed.”
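    The noise analysis behind that eureka moment compares the measured spectral exponent of the magnetisation noise against the value expected for free diffusion. As a toy baseline (not the authors' spin-ice model), the magnetisation from freely diffusing monopoles behaves like a random walk, whose power spectrum falls off as 1/f²; a measurably different exponent is the fingerprint of motion on a fractal network:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy baseline: magnetisation noise from freely diffusing monopoles modeled
# as a plain random walk. Its power spectrum falls off as 1/f^2; monopoles
# confined to a fractal network would show a different spectral exponent.
n_walks, n_steps = 200, 4096
spectra = []
for _ in range(n_walks):
    walk = np.cumsum(rng.standard_normal(n_steps))
    spectra.append(np.abs(np.fft.rfft(walk)) ** 2)
psd = np.mean(spectra, axis=0)[1:]        # average periodogram, drop DC bin
freqs = np.fft.rfftfreq(n_steps)[1:]

# Estimate the spectral exponent from the log-log slope at mid frequencies.
sel = (freqs > 1e-3) & (freqs < 1e-1)
slope = np.polyfit(np.log(freqs[sel]), np.log(psd[sel]), 1)[0]
# slope comes out near -2, the free-diffusion value
```

    In the experiments it was the departure of this exponent from the free-diffusion value that could not be explained until the fractal picture was adopted.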


    A shield for 2D materials that adds vibrations to reduce vibration problems

    Monash University researchers have demonstrated a new, counterintuitive way to protect atomically thin electronics: adding vibrations to reduce vibrations.
    By ‘squeezing’ a thin droplet of liquid gallium, the researchers paint graphene devices with a protective coating of glass, gallium oxide.
    This oxide is remarkably thin, less than 100 atoms thick, yet it covers centimetre-wide scales, making it potentially applicable to industrial large-scale fabrication. Current frontier “2nm” transistors from IBM use gates of similar thickness, close to 10nm (140 atoms).
    “Mechanically transferring such large-area nanosheets is quite novel,” says lead author Matthew Gebert.
    The oxide provides a new method of device protection, whilst also improving device performance:
    “The oxide not only enhances and protects our devices when we first transfer it, but also later, during subsequent processing and fabrication,” says co-author Semonti Bhattacharyya.


    Mitigating corrosion by liquid tin could lead to better cooling in fusion reactors

    Researchers at Tokyo Institute of Technology and the National Institute for Fusion Science have clarified the chemical compatibility between high-temperature liquid tin (Sn) and reduced-activation ferritic/martensitic steel, a candidate structural material for fusion reactors. This discovery paves the way for the development of a liquid metal tin divertor, an advanced heat-removal component of fusion reactors. A divertor is installed in a fusion reactor to maintain the purity of the plasma, and there has been demand for liquid metals that can withstand the extremely large heat loads it receives from the high-temperature plasma.
    Background
    Fusion reactors are being actively developed throughout the world as a sustainable, zero-carbon energy source: their fuel can be extracted from an inexhaustible supply of seawater, and they do not emit greenhouse gases. In addition to ITER, a tokamak being built through the collaboration of seven of the world’s leading countries and regions (Japan, the EU, the United States, South Korea, China, Russia and India), fusion development by the private sector is also accelerating.
    One of the most important components in these fusion reactors is the divertor, the component that gasifies impurities in the plasma and sends the gas to an exhaust pump. During operation of a fusion reactor, some structural components of the divertor are exposed to extremely large heat loads, comparable to those on the space shuttle during atmospheric re-entry. Researchers are working to develop a solid divertor, in which a block of heat-resistant material such as tungsten is placed in contact with the plasma and cooled with high-temperature, high-pressure water; this system is used in ITER and in prototype fusion reactor designs. As an alternative mechanism for withstanding the large heat load from the plasma, researchers have also considered the concept of a liquid metal divertor, which protects the divertor structure by covering it with a liquid metal that possesses excellent cooling performance.
    Tin (Sn) is a metal that has long been used in daily life, for example in tableware and as a component of solder. It has a relatively low melting point of 232°C and is well suited for use in the liquid state. Another useful property is that its vapor pressure at high temperatures is lower than that of other liquid metals: when liquid tin covers and protects the structural surface of a liquid metal divertor, it evaporates little even when heated to high temperature by the plasma, and the metal vapor that does form is less likely to mix with the plasma. However, corrosion of structural materials by liquid tin has remained a technical concern for researchers.
    Research results
    Kondo’s laboratory has focused on the chemical compatibility of liquid metal coolants, which are attracting attention in next-generation energy fields such as fusion, with various structural and functional materials. The researchers concentrated on liquid tin, which has the inconvenient property of being highly reactive at high temperatures, and worked to clarify the corrosion mechanism of fusion reactor structural materials and to identify materials that exhibit corrosion resistance.


    Researchers develop all-optical approach to pumping chip-based nanolasers

    Researchers have developed a new all-optical method for driving multiple highly dense nanolaser arrays. The approach could enable chip-based optical communication links that process and move data faster than today’s electronic-based devices.
    “The development of optical interconnects equipped with high-density nanolasers would improve information processing in the data centers that move information across the internet,” said research team leader Myung-Ki Kim from Korea University. “This could allow streaming of ultra-high-definition movies, enable larger-scale interactive online encounters and games, accelerate the expansion of the Internet of Things and provide the fast connectivity needed for big data analytics.”
    In Optica, Optica Publishing Group’s journal for high-impact research, the researchers demonstrate that densely integrated nanolaser arrays — in which the lasers are just 18 microns apart — can be fully driven and programmed with light from a single optical fiber.
    “Optical devices integrated onto a chip are a promising alternative to electronic integrated devices, which are struggling to keep up with today’s data processing demands,” said Kim. “By eliminating the large and complex electrodes typically used to drive laser arrays, we reduced the overall dimensions of the laser array while also eliminating the heat generation and processing delays that come with electrode-based drivers.”
    Replacing electrodes with light
    The new nanolasers could be used in optical integrated circuit systems, which detect, generate, transmit and process information on a microchip via light. Instead of the fine copper wires used in electronic chips, optical circuits use optical waveguides, which allow much higher bandwidths while generating less heat. However, because the size of optical integrated circuits is quickly reaching into the nanometer regime, there is a need for new ways to drive and control their nano-sized light sources efficiently.
    To emit light, lasers need to be supplied with energy in a process called pumping. For nanolaser arrays, this is typically accomplished using a pair of electrodes for each laser within an array, which requires significant on-chip space and energy consumption while also causing processing delays. To overcome this critical limitation, the researchers replaced these electrodes with a unique optical driver that creates programmable patterns of light via interference. This pump light travels through an optical fiber onto which nanolasers are printed.
    To demonstrate this approach, the researchers used a high-resolution transfer-printing technique to fabricate multiple photonic crystal nanolasers spaced 18 microns apart. These arrays were applied onto the surface of a 2-micron-diameter optical microfiber. This had to be done in a way that precisely aligned the nanolaser arrays with the interference pattern. The interference pattern could also be modified by adjusting the driving beam’s polarization and pulse width.
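    The alignment requirement can be pictured with textbook two-beam interference: two plane waves crossing at half-angle theta produce fringes with period lambda / (2 sin theta), so the angle sets the spacing of the pump maxima to match the nanolaser pitch. The wavelength below is an assumed value for illustration, not a parameter from the paper:

```python
import numpy as np

wavelength = 1.55e-6   # assumed pump wavelength (m); illustration only
spacing = 18e-6        # desired fringe period = 18-micron nanolaser pitch (m)

# Fringe period of two plane waves crossing at half-angle theta:
#     period = wavelength / (2 * sin(theta))
theta = np.arcsin(wavelength / (2 * spacing))   # roughly 2.5 degrees here

# Equal-amplitude two-beam interference pattern along the fiber axis.
x = np.linspace(0.0, 5 * spacing, 5000)
intensity = 2.0 + 2.0 * np.cos(2 * np.pi * x / spacing)

# Interior intensity maxima land exactly one pitch apart, where the
# nanolasers would be printed.
interior = (intensity[1:-1] > intensity[:-2]) & (intensity[1:-1] > intensity[2:])
peaks = x[1:-1][interior]
```

    Because the fringe period depends only on wavelength and crossing angle, tuning the driving beam (as the researchers did via polarization and pulse width) reshapes which lasers in the array receive pump power.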
    Laser driving with a single fiber
    The experiments showed that the design allowed multiple nanolaser arrays to be driven using light traveling through a single fiber. The results matched well with numerical calculations and showed that the printed nanolaser arrays could be fully controlled by the pump beam interference patterns.
    “Our all-optical laser driving and programming technology can also be applied to chip-based silicon photonics systems, which could play a key role in the development of chip-to-chip or on-chip optical interconnects,” said Kim. “However, it would be necessary to prove how independently the modes of a silicon waveguide can be controlled. If this can be done, it would be a huge leap forward in the advancement of on-chip optical interconnects and optical integrated circuits.”
    Story Source:
    Materials provided by Optica.