More stories

  •

    Why there is no speed limit in the superfluid universe

    Physicists from Lancaster University have established why objects moving through superfluid helium-3 lack a speed limit in a continuation of earlier Lancaster research.
    Helium-3 is a rare isotope of helium with one fewer neutron than the common helium-4. It becomes superfluid at extremely low temperatures, giving rise to unusual properties such as frictionless motion for objects passing through it.
    It was thought that the speed of objects moving through superfluid helium-3 was fundamentally limited to the critical Landau velocity, and that exceeding this speed limit would destroy the superfluid. Prior experiments at Lancaster had found that this is not a strict rule, and that objects can move at much greater speeds without destroying the fragile superfluid state.
    Now scientists from Lancaster University have found the reason for the absence of the speed limit: exotic particles that stick to all surfaces in the superfluid.
    The discovery may guide applications in quantum technology, even quantum computing, where multiple research groups already aim to make use of these unusual particles.
    To shake the bound particles into sight, the researchers cooled superfluid helium-3 to within one ten-thousandth of a degree of absolute zero (0.0001 K, about -273.15°C). They then moved a wire through the superfluid at high speed and measured how much force was needed to keep it moving. Apart from an extremely small force related to rearranging the bound particles as the wire starts to move, the measured force was zero.
    Lead author Dr Samuli Autti said: “Superfluid helium-3 feels like vacuum to a rod moving through it, although it is a relatively dense liquid. There is no resistance, none at all. I find this very intriguing.”
    PhD student Ash Jennings added: “By making the rod change its direction of motion we were able to conclude that the rod will be hidden from the superfluid by the bound particles covering it, even when its speed is very high.”
    “The bound particles initially need to move around to achieve this, and that exerts a tiny force on the rod, but once this is done, the force just completely disappears,” said Dr Dmitry Zmeev, who supervised the project.
    The Lancaster researchers included Samuli Autti, Sean Ahlstrom, Richard Haley, Ash Jennings, George Pickett, Malcolm Poole, Roch Schanen, Viktor Tsepelin, Jakub Vonka, Tom Wilcox, Andrew Woods and Dmitry Zmeev. The results are published in Nature Communications.

    Story Source:
    Materials provided by Lancaster University. Note: Content may be edited for style and length.

  •

    New design principles for spin-based quantum materials

    As our lives become increasingly intertwined with technology — whether supporting communication while working remotely or streaming our favorite show — so too does our reliance on the data these devices create. Data centers supporting these technology ecosystems produce a significant carbon footprint, consuming some 200 terawatt-hours of energy each year — more than the annual energy consumption of Iran. To balance ecological concerns while meeting growing demand, advances in microelectronic processors — the backbone of many Internet of Things (IoT) devices and data hubs — must be both efficient and environmentally friendly.
    Northwestern University materials scientists have developed new design principles that could help spur development of future quantum materials used to advance IoT devices and other resource-intensive technologies while limiting ecological damage.
    “New path-breaking materials and computing paradigms are required to make data centers more energy-lean in the future,” said James Rondinelli, professor of materials science and engineering and the Morris E. Fine Professor in Materials and Manufacturing at the McCormick School of Engineering, who led the research.
    The study marks an important step in Rondinelli’s efforts to create new materials that are non-volatile, energy efficient, and generate less heat — important aspects of future ultrafast, low-power electronics and quantum computers that can help meet the world’s growing demand for data.
    Whereas conventional semiconductors use the electron’s charge in transistors to power computing, solid-state spin-based materials utilize the electron’s spin, and have the potential to support low-energy memory devices. In particular, materials with a high-quality persistent spin texture (PST) can exhibit a long-lived persistent spin helix (PSH), which can be used to track or control the spin-based information in a transistor.
    Although many spin-based materials already encode information using spins, that information can be corrupted as the spins propagate in the active portion of the transistor. The researchers’ novel PST protects that spin information in helix form, making it a potential platform for ultralow-energy, ultrafast spin-based logic and memory devices.
    The research team used quantum-mechanical models and computational methods to develop a framework to identify and assess the spin textures in a group of non-centrosymmetric crystalline materials. The ability to control and optimize the spin lifetimes and transport properties in these materials is vital to realizing the future of quantum microelectronic devices that operate with low energy consumption.
    “The limiting characteristic of spin-based computing is the difficulty in attaining both long-lived and fully controllable spins from conventional semiconductor and magnetic materials,” Rondinelli said. “Our study will help future theoretical and experimental efforts aimed at controlling spins in otherwise non-magnetic materials to meet future scaling and economic demands.”
    Rondinelli’s framework used microscopic effective models and group theory to identify three materials design criteria that would produce useful spin textures: carrier density (the number of electrons propagating through an effective magnetic field), Rashba anisotropy (the ratio between the intrinsic spin-orbit coupling parameters of the material), and momentum-space occupation (the PST region active in the electronic band structure). These features were then assessed using quantum-mechanical simulations to discover high-performing PSHs in a range of oxide-based materials.
    The researchers used these principles and numerical solutions to a series of differential spin-diffusion equations to assess the spin texture of each material and predict the spin lifetimes for the helix in the strong spin-orbit coupling limit. They also found they could adjust and improve the PST performance using atomic distortions at the picometer scale. The group determined an optimal PST material, Sr3Hf2O7, which showed a substantially longer spin lifetime for the helix than in any previously reported material.
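A minimal sketch of the kind of calculation involved: a one-dimensional spin-diffusion equation, dS/dt = D d²S/dx² - S/tau, integrated with explicit finite differences, where a spin packet spreads and decays with an intrinsic lifetime. The coefficients, grid, and initial packet below are invented for illustration; the paper solves a far richer system of coupled spin-diffusion equations for real materials.

```python
# Toy 1-D spin-diffusion model (illustrative only; not the paper's system):
#   dS/dt = D * d2S/dx2 - S / tau
# solved with an explicit finite-difference scheme.
import math

D, tau = 1.0, 5.0            # diffusion constant and intrinsic spin lifetime (invented)
nx, dx, dt = 101, 0.1, 0.002  # grid points, spacing, time step (D*dt/dx^2 = 0.2, stable)
steps = 1000                  # integrate to t = 2.0

# Initial spin polarization: a narrow Gaussian packet in the channel.
S = [math.exp(-((i - nx // 2) * dx) ** 2 / 0.05) for i in range(nx)]

for _ in range(steps):
    S_new = S[:]              # boundaries stay at their (near-zero) initial values
    for i in range(1, nx - 1):
        lap = (S[i - 1] - 2 * S[i] + S[i + 1]) / dx**2
        S_new[i] = S[i] + dt * (D * lap - S[i] / tau)
    S = S_new

# Total remaining polarization decays roughly as exp(-t/tau): the lifetime
# is read off from how fast this integral shrinks.
total = sum(S) * dx
print(f"remaining polarization after t = {steps * dt:.1f}: {total:.3f}")
```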
    “Our approach provides a unique chemistry-agnostic strategy to discover, identify, and assess symmetry-protected persistent spin textures in quantum materials using intrinsic and extrinsic criteria,” Rondinelli said. “We proposed a way to expand the number of space groups hosting a PST, which may serve as a reservoir from which to design future PST materials, and found yet another use for ferroelectric oxides — compounds with a spontaneous electrical polarization. Our work also will help guide experimental efforts aimed at implementing the materials in real device structures.”

    Story Source:
    Materials provided by Northwestern University. Original written by Alex Gerage. Note: Content may be edited for style and length.

  •

    Solar storm forecasts for Earth improved with help from the public

    Solar storm analysis carried out by an army of citizen scientists has helped researchers devise a new and more accurate way of forecasting when Earth will be hit by harmful space weather. Scientists at the University of Reading added analysis carried out by members of the public to computer models designed to predict when coronal mass ejections (CMEs) — huge solar eruptions that are harmful to satellites and astronauts — will arrive at Earth.
    The team found forecasts were 20% more accurate, and uncertainty was reduced by 15%, when information from the volunteers’ analysis about the size and shape of the CMEs was incorporated. The data were captured by thousands of members of the public during the latest activity in the Solar Stormwatch citizen science project, which was devised by Reading researchers and has been running since 2010.
    The findings support the inclusion of wide-field CME imaging cameras on board space weather monitoring missions currently being planned by agencies like NASA and ESA.
    Dr Luke Barnard, space weather researcher at the University of Reading’s Department of Meteorology, who led the study, said: “CMEs are sausage-shaped blobs made up of billions of tonnes of magnetised plasma that erupt from the Sun’s atmosphere at a million miles an hour. They are capable of damaging satellites, overloading power grids and exposing astronauts to harmful radiation.
    “Predicting when they are on a collision course with Earth is therefore extremely important, but is made difficult by the fact the speed and direction of CMEs vary wildly and are affected by solar wind, and they constantly change shape as they travel through space.
    “Solar storm forecasts are currently based on observations of CMEs as soon as they leave the Sun’s surface, meaning they come with a large degree of uncertainty. The volunteer data offered a second stage of observations at a point when the CME was more established, which gave a better idea of its shape and trajectory.


    “The value of additional CME observations demonstrates how useful it would be to include cameras on board spacecraft in future space weather monitoring missions. More accurate predictions could help prevent catastrophic damage to our infrastructure and could even save lives.”
    In the study, published in AGU Advances, the scientists used a new solar wind model, developed by Reading co-author Professor Mathew Owens, for the first time to create CME forecasts.
    The simplified model is able to run up to 200 simulations — compared to around 20 currently used by more complex models — to provide improved estimates of the solar wind speed and its impact on the movement of CMEs, the most harmful of which can reach Earth in 15-18 hours.
    Adding the public CME observations to the model’s predictions helped provide a clearer picture of the likely path the CME would take through space, reducing the uncertainty in the forecast. The new method could also be applied to other solar wind models.
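A toy illustration of this ensemble idea (not the Reading/Owens model): many simplified CME runs span the initial uncertainty in launch speed, and a later observation of the CME in flight, such as one derived from volunteer-traced imagery, discards inconsistent ensemble members and narrows the predicted arrival window. All numbers below are invented.

```python
# Toy ensemble CME arrival forecast (illustrative only): constant-speed
# transit over 1 AU, with a mid-flight observation used to prune the ensemble.
import random

AU_KM = 1.496e8               # Sun-Earth distance in km
random.seed(0)

# 200-member ensemble of launch speeds, drawn from a wide initial uncertainty.
ensemble = [random.gauss(1000, 150) for _ in range(200)]   # km/s (invented)

def arrival_hours(speed_kms):
    """Transit time to Earth assuming constant speed (a big simplification)."""
    return AU_KM / speed_kms / 3600

prior = [arrival_hours(v) for v in ensemble]

# A second observation further from the Sun (e.g. from a heliospheric imager)
# rejects members whose speed disagrees with the measurement.
observed, tolerance = 950, 75   # km/s; illustrative measurement and its error
posterior = [arrival_hours(v) for v in ensemble if abs(v - observed) <= tolerance]

def spread(times):
    return max(times) - min(times)

print(f"prior:     {len(prior)} members, arrival spread {spread(prior):.1f} h")
print(f"posterior: {len(posterior)} members, arrival spread {spread(posterior):.1f} h")
```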
    The Solar Stormwatch project was led by Reading co-author Professor Chris Scott. It asked volunteers to trace the outline of thousands of past CMEs captured by Heliospheric Imagers — specialist, wide-angle cameras — on board two NASA STEREO spacecraft, which orbit the Sun and monitor the space between it and Earth.
    The scientists retrospectively applied their new forecasting method to the same CMEs the volunteers had analysed to test how much more accurate their forecasts were with the additional observations.
    Using the new method for future solar storm forecasts would require swift real-time analysis of the images captured by the spacecraft camera, which would provide warning of a CME being on course for Earth several hours or even days in advance of its arrival.

    Story Source:
    Materials provided by University of Reading. Note: Content may be edited for style and length.

  •

    Biologists create new genetic systems to neutralize gene drives

    In the past decade, researchers have engineered an array of new tools that control the balance of genetic inheritance. Based on CRISPR technology, such gene drives are poised to move from the laboratory into the wild where they are being engineered to suppress devastating diseases such as mosquito-borne malaria, dengue, Zika, chikungunya, yellow fever and West Nile. Gene drives carry the power to immunize mosquitoes against malarial parasites, or act as genetic insecticides that reduce mosquito populations.
    Although the newest gene drives have been proven to spread efficiently as designed in laboratory settings, concerns have been raised regarding the safety of releasing such systems into wild populations. Questions have emerged about the predictability and controllability of gene drives and whether, once let loose, they can be recalled in the field if they spread beyond their intended application region.
    Now, scientists at the University of California San Diego and their colleagues have developed two new active genetic systems that address such risks by halting or eliminating gene drives in the wild. On Sept. 18, 2020, in the journal Molecular Cell, research led by Xiang-Ru Xu, Emily Bulger and Valentino Gantz in the Division of Biological Sciences offers two new solutions based on elements developed in the common fruit fly.
    “One way to mitigate the perceived risks of gene drives is to develop approaches to halt their spread or to delete them if necessary,” said Distinguished Professor Ethan Bier, the paper’s senior author and science director for the Tata Institute for Genetics and Society. “There’s been a lot of concern that there are so many unknowns associated with gene drives. Now we have saturated the possibilities, both at the genetic and molecular levels, and developed mitigating elements.”
    The first neutralizing system, called e-CHACR (erasing Constructs Hitchhiking on the Autocatalytic Chain Reaction) is designed to halt the spread of a gene drive by “shooting it with its own gun.” e-CHACRs use the CRISPR enzyme Cas9 carried on a gene drive to copy itself, while simultaneously mutating and inactivating the Cas9 gene. Xu says an e-CHACR can be placed anywhere in the genome.
    “Without a source of Cas9, it is inherited like any other normal gene,” said Xu. “However, once an e-CHACR confronts a gene drive, it inactivates the gene drive in its tracks and continues to spread across several generations ‘chasing down’ the drive element until its function is lost from the population.”
    The second neutralizing system, called ERACR (Element Reversing the Autocatalytic Chain Reaction), is designed to eliminate the gene drive altogether. ERACRs are inserted at the site of the gene drive, where they use the Cas9 from the gene drive to attack either side of the drive element, cutting it out. Once the gene drive is deleted, the ERACR copies itself and replaces the gene drive.
    “If the ERACR is also given an edge by carrying a functional copy of a gene that is disrupted by the gene drive, then it races across the finish line, completely eliminating the gene drive with unflinching resolve,” said Bier.
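The population-level dynamics described above can be caricatured with a deliberately crude model, not taken from the paper: a gene drive converts wild-type alleles at some rate, while a neutralizer uses the drive's own Cas9 to copy itself and inactivate drive copies on contact. All rates and starting frequencies below are invented, and real gene-drive genetics is far more involved.

```python
# Crude deterministic toy of drive vs. neutralizer allele frequencies over
# discrete generations (illustrative only; all parameters invented).
CONV, NEUT = 0.4, 0.5            # drive conversion rate; neutralization rate
drive, neutralizer = 0.05, 0.05  # starting allele frequencies
wild = 1.0 - drive - neutralizer

for gen in range(1, 41):
    converted = CONV * drive * wild          # drive copies itself into wild-type
    blocked = NEUT * drive * neutralizer     # neutralizer inactivates drive copies
    drive += converted - blocked
    wild -= converted
    neutralizer += blocked                   # inactivated drives join the neutralized pool
    if gen % 10 == 0:
        print(f"gen {gen:2d}: drive {drive:.3f}  wild {wild:.3f}  neutralized {neutralizer:.3f}")
```

The qualitative behavior matches the verbal description: the drive rises at first, then declines as the neutralizer, fueled by encounters with the drive, chases it down.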
    The researchers rigorously tested and analyzed e-CHACRs and ERACRs, as well as the resulting DNA sequences, in meticulous detail at the molecular level. Bier estimates that the research team, which includes mathematical modelers from UC Berkeley, spent an estimated combined 15 years of effort to comprehensively develop and analyze the new systems. Still, he cautions there are unforeseen scenarios that could emerge, and the neutralizing systems should not be used with a false sense of security for field-implemented gene drives.
    “Such braking elements should just be developed and kept in reserve in case they are needed since it is not known whether some of the rare exceptional interactions between these elements and the gene drives they are designed to corral might have unintended activities,” he said.
    According to Bulger, gene drives have enormous potential to alleviate suffering, but responsibly deploying them depends on having control mechanisms in place should unforeseen consequences arise. ERACRs and e-CHACRs offer ways to stop the gene drive from spreading and, in the case of the ERACR, can potentially revert an engineered DNA sequence to a state much closer to the naturally occurring sequence.
    “Because ERACRs and e-CHACRs do not possess their own source of Cas9, they will only spread as far as the gene drive itself and will not edit the wild type population,” said Bulger. “These technologies are not perfect, but we now have a much more comprehensive understanding of why and how unintended outcomes influence their function and we believe they have the potential to be powerful gene drive control mechanisms should the need arise.”

  •

    Engineers produce a fisheye lens that's completely flat

    To capture panoramic views in a single shot, photographers typically use fisheye lenses — ultra-wide-angle lenses made from multiple pieces of curved glass, which distort incoming light to produce wide, bubble-like images. Their spherical, multipiece design makes fisheye lenses inherently bulky and often costly to produce.
    Now engineers at MIT and the University of Massachusetts at Lowell have designed a wide-angle lens that is completely flat. It is the first flat fisheye lens to produce crisp, 180-degree panoramic images. The design is a type of “metalens,” a wafer-thin material patterned with microscopic features that work together to manipulate light in a specific way.
    In this case, the new fisheye lens consists of a single flat, millimeter-thin piece of glass covered on one side with tiny structures that precisely scatter incoming light to produce panoramic images, just as a conventional curved, multielement fisheye lens assembly would. The lens works in the infrared part of the spectrum, but the researchers say it could be modified to capture images using visible light as well.
    The new design could potentially be adapted for a range of applications, with thin, ultra-wide-angle lenses built directly into smartphones and laptops, rather than physically attached as bulky add-ons. The low-profile lenses might also be integrated into medical imaging devices such as endoscopes, as well as in virtual reality glasses, wearable electronics, and other computer vision devices.
    “This design comes as somewhat of a surprise, because some have thought it would be impossible to make a metalens with an ultra-wide field of view,” says Juejun Hu, associate professor in MIT’s Department of Materials Science and Engineering. “The fact that this can actually realize fisheye images is completely outside expectation. This isn’t just light-bending — it’s mind-bending.”
    Hu and his colleagues have published their results in the journal Nano Letters. Hu’s MIT coauthors are Mikhail Shalaginov, Fan Yang, Peter Su, Dominika Lyzwa, Anuradha Agarwal, and Tian Gu, along with Sensong An and Hualiang Zhang of UMass Lowell.


    Design on the back side
    Metalenses, while still largely at an experimental stage, have the potential to significantly reshape the field of optics. Previously, scientists have designed metalenses that produce high-resolution and relatively wide-angle images of up to 60 degrees. To expand the field of view further would traditionally require additional optical components to correct for aberrations, or blurriness — a workaround that would add bulk to a metalens design.
    Hu and his colleagues instead came up with a simple design that does not require additional components and keeps a minimum element count. Their new metalens is a single transparent piece made from calcium fluoride with a thin film of lead telluride deposited on one side. The team then used lithographic techniques to carve a pattern of optical structures into the film.
    Each structure, or “meta-atom,” as the team refers to them, is shaped into one of several nanoscale geometries, such as a rectangular or a bone-shaped configuration, that refracts light in a specific way. For instance, light may take longer to scatter, or propagate, off one shape than another — a phenomenon known as phase delay.
    In conventional fisheye lenses, the curvature of the glass naturally creates a distribution of phase delays that ultimately produces a panoramic image. The team determined the corresponding pattern of meta-atoms and carved this pattern into the back side of the flat glass.
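The link between curvature and phase delay can be made concrete: an ideal flat lens focuses light by imposing, point by point, the phase profile that a curved surface would otherwise create, and the meta-atom pattern realizes that profile. The sketch below computes the standard hyperbolic phase profile of an idealized flat lens; the wavelength and focal length are invented, and real metalens design, especially a wide-angle one, involves much more than this textbook formula.

```python
# Illustrative sketch (not the MIT design): the phase delay an ideal flat
# lens must impose at radius r so every ray reaches a common focus.
import math

wavelength_um = 5.2    # mid-infrared wavelength (invented)
focal_um = 2000.0      # focal length (invented)

def required_phase(r_um):
    """Hyperbolic phase profile of an ideal flat lens, in radians:
    phi(r) = (2*pi/lambda) * (f - sqrt(r^2 + f^2)),
    chosen so the optical path from every radius r to the focus is equal."""
    return 2 * math.pi / wavelength_um * (
        focal_um - math.sqrt(r_um**2 + focal_um**2))

# Each meta-atom only needs to realize this phase modulo 2*pi, so the
# continuous profile is wrapped before picking a geometry for each site.
for r in (0, 250, 500, 750, 1000):
    phi = required_phase(r)
    wrapped = phi % (2 * math.pi)
    print(f"r = {r:4d} um  phase = {phi:9.1f} rad  wrapped = {wrapped:.2f} rad")
```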


    “We’ve designed the back side structures in such a way that each part can produce a perfect focus,” Hu says.
    On the front side, the team placed an optical aperture, or opening for light.
    “When light comes in through this aperture, it will refract at the first surface of the glass, and then will get angularly dispersed,” Shalaginov explains. “The light will then hit different parts of the backside, from different and yet continuous angles. As long as you design the back side properly, you can be sure to achieve high-quality imaging across the entire panoramic view.”
    Across the panorama
    In one demonstration, the new lens is tuned to operate in the mid-infrared region of the spectrum. The team used an imaging setup equipped with the metalens to snap pictures of a striped target. They then compared the quality of pictures taken at various angles across the scene, and found the new lens produced images of the stripes that were crisp and clear, even at the edges of the camera’s view, spanning nearly 180 degrees.
    “It shows we can achieve perfect imaging performance across almost the whole 180-degree view, using our methods,” Gu says.
    In another study, the team designed the metalens to operate at a near-infrared wavelength using amorphous silicon nanoposts as the meta-atoms. They plugged the metalens into a simulation used to test imaging instruments. Next, they fed the simulation a scene of Paris, composed of black and white images stitched together to make a panoramic view. They then ran the simulation to see what kind of image the new lens would produce.
    “The key question was, does the lens cover the entire field of view? And we see that it captures everything across the panorama,” Gu says. “You can see buildings and people, and the resolution is very good, regardless of whether you’re looking at the center or the edges.”
    The team says the new lens can be adapted to other wavelengths of light. To make a similar flat fisheye lens for visible light, for instance, Hu says the optical features may have to be made smaller than they are now, to better refract that particular range of wavelengths. The lens material would also have to change. But the general architecture that the team has designed would remain the same.
    The researchers are exploring applications for their new lens, not just as compact fisheye cameras, but also as panoramic projectors, as well as depth sensors built directly into smartphones, laptops, and wearable devices.
    “Currently, all 3D sensors have a limited field of view, which is why when you put your face away from your smartphone, it won’t recognize you,” Gu says. “What we have here is a new 3D sensor that enables panoramic depth profiling, which could be useful for consumer electronic devices.”

  •

    Promising computer simulations for stellarator plasmas

    For the fusion researchers at IPP, who want to develop a power plant modeled on the sun, turbulence formation in its fuel — a hydrogen plasma — is a central research topic. The small eddies carry particles and heat out of the hot plasma centre and thus reduce the thermal insulation of the magnetically confined plasma. Because the size, and thus the electricity price, of a future fusion power plant depends on this insulation, one of the most important goals is to understand, predict and influence this “turbulent transport.”
    Since an exact computational description of plasma turbulence would require solving highly complex systems of equations in countless computational steps, code development aims at reasonable simplifications. The GENE code developed at IPP is based on a set of simplified, so-called gyrokinetic equations, which disregard all phenomena in the plasma that do not play a major role in turbulent transport. Although the computational effort can be reduced by many orders of magnitude in this way, the world’s fastest and most powerful supercomputers have always been needed to develop the code further. GENE can now describe well the formation and propagation of small, low-frequency plasma eddies in the plasma interior, and can reproduce and explain experimental results — but originally only for fusion devices of the tokamak type, whose axisymmetric construction makes them comparatively simple to model.
    For example, calculations with GENE showed that fast ions can greatly reduce turbulent transport in tokamak plasmas. Experiments at the ASDEX Upgrade tokamak at Garching confirmed this result. The required fast ions were provided by plasma heating using radio waves of the ion cyclotron frequency.
    A tokamak code for stellarators
    In stellarators, this turbulence suppression by fast ions had not been observed experimentally so far. However, the latest calculations with GENE now suggest that this effect should also exist in stellarator plasmas: In the Wendelstein 7-X stellarator at IPP at Greifswald, it could theoretically reduce turbulence by more than half. As IPP scientists Alessandro Di Siena, Alejandro Bañón Navarro and Frank Jenko show in the journal Physical Review Letters, the optimal ion temperature depends strongly on the shape of the magnetic field. Professor Frank Jenko, head of the Tokamak Theory department at IPP in Garching: “If this calculated result is confirmed in future experiments with Wendelstein 7-X in Greifswald, this could open up a path to interesting high-performance plasmas.”
    In order to use GENE for turbulence calculation in the more complicated shaped plasmas of stellarators, major code adjustments were necessary. Without the axial symmetry of the tokamaks, one has to cope with a much more complex geometry for stellarators.
    For Professor Per Helander, head of the Stellarator Theory department at IPP in Greifswald, the stellarator simulations performed with GENE are “very exciting physics.” He hopes that the results can be verified in the Wendelstein 7-X stellarator at Greifswald. “Whether the plasma values in Wendelstein 7-X are suitable for such experiments can be investigated when, in the coming experimental period, the radio wave heating system will be put into operation in addition to the current microwave and particle heating,” says Professor Robert Wolf, whose department is responsible for plasma heating.
    GENE becomes GENE-3D
    According to Frank Jenko, it was another “enormous step” to make GENE not only approximately, but completely fit for the complex, three-dimensional shape of stellarators. After almost five years of development work, the code GENE-3D, now presented in the “Journal of Computational Physics” by Maurice Maurer and co-authors, provides a “fast and yet realistic turbulence calculation also for stellarators,” says Frank Jenko. In contrast to other stellarator turbulence codes, GENE-3D describes the full dynamics of the system, i.e. the turbulent motion of the ions and also of the electrons over the entire inner volume of the plasma, including the resulting fluctuations of the magnetic field.

    Story Source:
    Materials provided by Max-Planck-Institut für Plasmaphysik (IPP). Note: Content may be edited for style and length.

  •

    New mathematical tool can select the best sensors for the job

    In the 2019 Boeing 737 Max crash, data recovered from the black box hinted that a failed pressure sensor may have caused the ill-fated aircraft to nose-dive. This incident and others have fueled a larger debate about sensor selection, number and placement to prevent such tragedies from recurring.
    Texas A&M University researchers have now developed a comprehensive mathematical framework that can help engineers make informed decisions about which sensors to use and where they must be positioned in aircraft and other machines.
    “During the early design stage for any control system, critical decisions have to be made about which sensors to use and where to place them so that the system is optimized for measuring certain physical quantities of interest,” said Dr. Raktim Bhattacharya, associate professor in the Department of Aerospace Engineering. “With our mathematical formulation, engineers can feed the model with information on what needs to be sensed and with what precision, and the model’s output will be the fewest sensors needed and their accuracies.”
    The researchers detailed their mathematical framework in the June issue of the Institute of Electrical and Electronics Engineers’ Control System Letters.
    Whether a car or an airplane, complex systems have internal properties that need to be measured. For instance, in an airplane, sensors for angular velocity and acceleration are placed at specific locations to estimate the velocity.
    Sensors can also have different accuracies. In technical terms, accuracy is measured by the noise, or the wiggles, in the sensor measurements. This noise impacts how accurately the internal properties can be predicted. However, accuracy may be defined differently depending on the system and the application. For instance, some systems may require that noise in the predictions does not exceed a certain amount, while others may need the square of the noise to be as small as possible. In all cases, the required prediction accuracy has a direct impact on the cost of the sensor.


    “If you want to get sensor accuracy that is two times more accurate, the cost is likely to be more than double,” said Bhattacharya. “Furthermore, in some cases, very high accuracy is not even required. For example, an expensive 4K HD vehicle camera for object detection is unnecessary because first, fine features are not needed to distinguish humans from other cars and second, data processing from high-definition cameras becomes an issue.”
    Bhattacharya added that even if the sensors are extremely precise, knowing where to put the sensor is critical because one might place an expensive sensor at a location where it is not needed. Thus, he said the ideal solution balances cost and precision by optimizing the number of sensors and their positions.
    To test this rationale, Bhattacharya and his team designed a mathematical model using a set of equations that described the model of an F-16 aircraft. In their study, the researchers’ objective was to estimate the forward velocity, the direction of wind angle with respect to the airplane (the angle of attack), the angle between where the airplane is pointed and the horizon (the pitch angle) and pitch rate for this aircraft. Available to them were sensors that are normally in aircraft for measuring acceleration, angular velocity, pitch rate, pressure and the angle of attack. In addition, the model was also provided with expected accuracies for each sensor.
    Their model revealed that not all of the sensors were needed to accurately estimate forward velocity; readings from angular velocity sensors and pressure sensors were enough. These sensors also sufficed to estimate the other physical states, like the angle of attack, precluding the need for an additional angle-of-attack sensor. In fact, these sensors, although a surrogate for measuring the angle of attack, had the effect of introducing redundancy in the system, resulting in higher system reliability.
    Bhattacharya said the mathematical framework has been designed so that it always indicates the fewest sensors needed, even if it is provided with a large repertoire of sensors to choose from.
    “Let’s assume a designer wants to put every type of sensor everywhere. The beauty of our mathematical model is that it will take out the unnecessary sensors and then give you the minimum number of sensors needed and their position,” he said.
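A toy version of this selection logic, not the authors' framework: for a linear measurement model y = H x + noise, the least-squares estimation-error covariance is (H^T R^-1 H)^-1, and comparing its trace across sensor subsets shows which sensors an estimate actually needs. The states, sensors and noise variances below are all invented for illustration.

```python
# Toy sensor-subset comparison (illustrative only; not the Texas A&M model).
# Two states to estimate: forward velocity and angle of attack. Each sensor
# is (name, measurement row h, noise variance r). The rows are axis-aligned,
# so the information matrix H^T R^-1 H is diagonal and easy to invert.
from itertools import combinations

SENSORS = [
    ("pressure",     (1.0, 0.0), 0.5),
    ("angular-rate", (0.0, 1.0), 0.2),
    ("AoA vane",     (0.0, 1.0), 1.0),  # redundant with angular-rate, and noisier
]

def error_trace(subset):
    """Trace of (H^T R^-1 H)^-1 for this subset (diagonal information matrix);
    smaller is better, infinity means a state is unobservable."""
    info = [0.0, 0.0]
    for _, h, r in subset:
        for k in range(2):
            info[k] += h[k] ** 2 / r
    if 0.0 in info:
        return float("inf")
    return sum(1.0 / i for i in info)

for size in (2, 3):
    for subset in combinations(SENSORS, size):
        names = " + ".join(s[0] for s in subset)
        print(f"{names:36s} error proxy = {error_trace(subset):.3f}")
```

Running this shows the pattern the article describes: the pressure and angular-rate pair already achieves nearly the accuracy of all three sensors, so the angle-of-attack vane adds redundancy rather than essential information.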
    Furthermore, the researchers noted that although the study was conducted from an aerospace engineering perspective, their mathematical model is general and can be applied to other systems as well.
    “As engineering systems become bigger and more complex, the question of where to put the sensor becomes more and more difficult,” said Bhattacharya. “So, for example, if you are building a really long wind turbine blade, some physical properties of the system need to be estimated using sensors and these sensors need to be placed at optimal locations to make sure the structure does not fail. This is nontrivial and that’s where our mathematical framework comes in.”


    Shedding light on the development of efficient blue-emitting semiconductors

    Artificial light accounts for approximately 20% of the total electricity consumed globally. Given the present environmental crisis, this makes the discovery of energy-efficient light-emitting materials particularly important, especially those that produce white light. Over the last decade, technological advances in solid-state lighting, the subfield of semiconductor research concerned with light-emitting compounds, have led to the widespread use of white LEDs. However, most of these LEDs are actually a blue LED chip coated with a yellow luminescent material; the emitted yellow light combines with the remaining blue light to produce the white color.
    Therefore, a way to reduce the energy consumption of modern white LED lights is to find better blue-emitting semiconductors. Unfortunately, no known blue-emitting compounds were simultaneously highly efficient, easily processible, durable, eco-friendly, and made from abundant materials — until now.
    In a recent study, published in Advanced Materials, a team of scientists from Tokyo Institute of Technology, Japan, discovered a new alkali copper halide, Cs5Cu3Cl6I2, that fulfills all of these criteria. Unlike Cs3Cu2I5, another promising blue-emitting candidate for future devices, the proposed compound contains two different halides, chloride and iodide. Although mixed-halide materials have been tried before, Cs5Cu3Cl6I2 has unique properties that emerge specifically from the combined use of I− and Cl− ions.
    It turns out that Cs5Cu3Cl6I2 forms a one-dimensional zigzag chain out of two different subunits, and the links in the chain are exclusively bridged by I− ions. The scientists also found another important feature: its valence band, which describes the energy levels of electrons at different positions in the material’s crystalline structure, is almost flat (of constant energy). In turn, this characteristic makes photo-generated holes, positively charged pseudoparticles that represent the absence of a photoexcited electron, “heavier.” These holes tend to become immobilized by their strong interaction with I− ions, and they easily bind with nearby free electrons to form a bound pair known as an exciton.
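The connection between a flat band and “heavy” holes follows from the textbook effective-mass relation of solid-state physics (a general result, not one specific to this paper):

```latex
m^{*} = \hbar^{2} \left( \frac{d^{2}E}{dk^{2}} \right)^{-1}
```

A nearly flat valence band has a small curvature d²E/dk², so the effective hole mass m* is large; such heavy holes move slowly and are easily localized, which is what allows them to be pinned by the I− ions.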
    Excitons induce distortions in the crystal structure. Much as a person would have trouble moving atop a large suspended net that is considerably deformed by their own weight, the excitons become trapped in place by their own effect. This is crucial for the highly efficient generation of blue light. Professor Junghwan Kim, who led the study, explains: “The self-trapped excitons are localized forms of optically excited energy; the eventual recombination of their constituent electron-hole pairs causes photoluminescence, the emission of blue light in this case.”
    In addition to its efficiency, Cs5Cu3Cl6I2 has other attractive properties. It is composed exclusively of abundant elements, making it relatively inexpensive. Moreover, it is much more stable in air than Cs3Cu2I5 and other alkali copper halide compounds. The scientists found that the performance of Cs5Cu3Cl6I2 did not degrade when stored in air for three months, while similar light-emitting compounds performed worse after merely a few days. Finally, Cs5Cu3Cl6I2 does not require lead, a highly toxic element, making it eco-friendly overall.
    Excited about the results of the study, Prof. Kim concludes: “Our findings provide a new perspective for the development of new alkali copper halide candidates and demonstrate that Cs5Cu3Cl6I2 could be a promising blue-emitting material.” The light shed by this team of scientists will hopefully lead to more efficient and eco-friendly lighting technology.

    Story Source:
    Materials provided by Tokyo Institute of Technology. Note: Content may be edited for style and length.