More stories

  • Promising computer simulations for stellarator plasmas

    For the fusion researchers at IPP, who want to develop a power plant modelled on the sun, turbulence in its fuel, a hydrogen plasma, is a central research topic. Small eddies carry particles and heat out of the hot plasma centre and thus degrade the thermal insulation of the magnetically confined plasma. Because the size, and hence the electricity price, of a future fusion power plant depends on this transport, understanding, predicting and influencing this “turbulent transport” is one of the most important research goals.
    Since an exact computational description of plasma turbulence would require solving highly complex systems of equations over countless computational steps, code development aims at physically reasonable simplifications. The GENE code developed at IPP is based on a set of simplified, so-called gyrokinetic equations, which neglect all phenomena in the plasma that do not play a major role in turbulent transport. Although this reduces the computational effort by many orders of magnitude, the world’s fastest and most powerful supercomputers have always been needed to develop the code further. GENE can now describe the formation and propagation of small, low-frequency plasma eddies in the plasma interior and reproduce and explain the experimental results, but originally it could do so only for fusion devices of the tokamak type, whose axisymmetric construction keeps the geometry comparatively simple.
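    For orientation, the gyrokinetic simplification behind GENE can be summarized schematically (a standard textbook sketch of the reduction, not the specific equations implemented in GENE): the fast gyration of each particle around a magnetic field line is averaged out, so the six-dimensional kinetic description collapses to five dimensions.

```latex
% Schematic of the gyrokinetic reduction (standard form, not GENE's exact equations).
% Full kinetic description: distribution function in six phase-space dimensions,
f = f(\mathbf{x}, \mathbf{v}, t) \quad \text{(6D)}.
% Averaging over the fast gyro-motion leaves the guiding-centre position $\mathbf{R}$,
% parallel velocity $v_\parallel$ and the magnetic moment $\mu$ (an adiabatic invariant):
\bar{f} = \bar{f}(\mathbf{R}, v_\parallel, \mu, t) \quad \text{(5D)}, \qquad
\mu = \frac{m v_\perp^{2}}{2B}.
% Removing the fast gyro-frequency time scale and one velocity dimension is what
% cuts the computational cost by orders of magnitude.
```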
    For example, calculations with GENE showed that fast ions can greatly reduce turbulent transport in tokamak plasmas. Experiments at the ASDEX Upgrade tokamak in Garching confirmed this result. The required fast ions were provided by plasma heating with radio waves at the ion cyclotron frequency.
    A tokamak code for stellarators
    In stellarators, this turbulence suppression by fast ions had not yet been observed experimentally. However, the latest calculations with GENE suggest that the effect should also exist in stellarator plasmas: in the Wendelstein 7-X stellarator at IPP in Greifswald, it could theoretically reduce turbulence by more than half. As IPP scientists Alessandro Di Siena, Alejandro Bañón Navarro and Frank Jenko show in the journal Physical Review Letters, the optimal ion temperature depends strongly on the shape of the magnetic field. Professor Frank Jenko, head of the Tokamak Theory department at IPP in Garching, says: “If this calculated result is confirmed in future experiments with Wendelstein 7-X in Greifswald, this could open up a path to interesting high-performance plasmas.”
    Using GENE to calculate turbulence in the more complicated plasma shapes of stellarators required major code adjustments: without the axial symmetry of tokamaks, a much more complex geometry has to be handled.
    For Professor Per Helander, head of the Stellarator Theory department at IPP in Greifswald, the stellarator simulations performed with GENE are “very exciting physics.” He hopes that the results can be verified in the Wendelstein 7-X stellarator at Greifswald. “Whether the plasma values in Wendelstein 7-X are suitable for such experiments can be investigated when, in the coming experimental period, the radio wave heating system will be put into operation in addition to the current microwave and particle heating,” says Professor Robert Wolf, whose department is responsible for plasma heating.
    GENE becomes GENE-3D
    According to Frank Jenko, it was another “enormous step” to make GENE fit not just approximately but fully for the complex, three-dimensional shape of stellarators. After almost five years of development work, the GENE-3D code, now presented in the Journal of Computational Physics by Maurice Maurer and co-authors, provides “fast and yet realistic turbulence calculation also for stellarators,” says Frank Jenko. In contrast to other stellarator turbulence codes, GENE-3D describes the full dynamics of the system, i.e. the turbulent motion of both the ions and the electrons over the entire inner volume of the plasma, including the resulting fluctuations of the magnetic field.

    Story Source:
    Materials provided by Max-Planck-Institut für Plasmaphysik (IPP). Note: Content may be edited for style and length.

  • New mathematical tool can select the best sensors for the job

    In the 2019 Boeing 737 Max crash, the black box recovered from the wreckage hinted that a failed pressure sensor may have caused the ill-fated aircraft to nose dive. This incident and others have fueled a larger debate on sensor selection, number and placement to prevent the recurrence of such tragedies.
    Texas A&M University researchers have now developed a comprehensive mathematical framework that can help engineers make informed decisions about which sensors to use and where they must be positioned in aircraft and other machines.
    “During the early design stage for any control system, critical decisions have to be made about which sensors to use and where to place them so that the system is optimized for measuring certain physical quantities of interest,” said Dr. Raktim Bhattacharya, associate professor in the Department of Aerospace Engineering. “With our mathematical formulation, engineers can feed the model with information on what needs to be sensed and with what precision, and the model’s output will be the fewest sensors needed and their accuracies.”
    The researchers detailed their mathematical framework in the June issue of the Institute of Electrical and Electronics Engineers’ Control Systems Letters.
    Whether in a car or an airplane, complex systems have internal properties that need to be measured. For instance, in an airplane, sensors for angular velocity and acceleration are placed at specific locations to estimate the velocity.
    Sensors can also have different accuracies. In technical terms, accuracy is measured by the noise, or wiggles, in the sensor measurements. This noise affects how accurately the internal properties can be predicted. However, accuracy requirements may be defined differently depending on the system and the application. For instance, some systems may require that the noise in the predictions does not exceed a certain amount, while others may need the square of the noise to be as small as possible. In all cases, the required prediction accuracy has a direct impact on the cost of the sensor.

    “If you want a sensor that is twice as accurate, the cost is likely to be more than double,” said Bhattacharya. “Furthermore, in some cases very high accuracy is not even required. For example, an expensive 4K HD vehicle camera is unnecessary for object detection because, first, fine features are not needed to distinguish humans from other cars and, second, processing the data from high-definition cameras becomes an issue.”
    Bhattacharya added that even if the sensors are extremely precise, knowing where to put the sensor is critical because one might place an expensive sensor at a location where it is not needed. Thus, he said the ideal solution balances cost and precision by optimizing the number of sensors and their positions.
    To test this rationale, Bhattacharya and his team designed a mathematical model using a set of equations that described the model of an F-16 aircraft. In their study, the researchers’ objective was to estimate the forward velocity, the direction of wind angle with respect to the airplane (the angle of attack), the angle between where the airplane is pointed and the horizon (the pitch angle) and pitch rate for this aircraft. Available to them were sensors that are normally in aircraft for measuring acceleration, angular velocity, pitch rate, pressure and the angle of attack. In addition, the model was also provided with expected accuracies for each sensor.
    Their model revealed that not all of the sensors were needed to accurately estimate forward velocity; readings from the angular velocity and pressure sensors were enough. These sensors were also enough to estimate the other physical states, such as the angle of attack, precluding the need for an additional angle-of-attack sensor. In fact, these sensors, although only a surrogate for measuring the angle of attack, introduced redundancy into the system, resulting in higher system reliability.
    Bhattacharya said the mathematical framework is designed so that it always returns the fewest sensors needed, even when it is given a large repertoire of sensors to choose from.
    “Let’s assume a designer wants to put every type of sensor everywhere. The beauty of our mathematical model is that it will take out the unnecessary sensors and then give you the minimum number of sensors needed and their position,” he said.
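    As a rough illustration of this kind of trade-off (a minimal greedy sketch, not the authors’ formulation; the candidate sensors, their measurement directions and noise levels below are invented for the example), one can score each candidate by how much it reduces the estimation error of the quantities of interest and keep adding sensors until a target error is reached:

```python
# Minimal greedy sensor-selection sketch (illustrative only, not the paper's
# framework). Candidate sensor i measures y_i = c_i . x + noise with variance
# sigma2[i]; we greedily add the sensor that most reduces the total estimation
# variance (trace of the posterior covariance) of the state x.
import numpy as np

rng = np.random.default_rng(1)
n_state, n_candidates = 4, 10                   # e.g. 4 physical quantities to estimate
C = rng.normal(size=(n_candidates, n_state))    # hypothetical measurement directions
sigma2 = rng.uniform(0.01, 0.5, n_candidates)   # hypothetical sensor noise variances

info = np.eye(n_state) * 0.1                    # weak prior information about the state
target_error = 1.0                              # required total estimation variance
chosen = []

while np.trace(np.linalg.inv(info)) > target_error and len(chosen) < n_candidates:
    best, best_err = None, np.inf
    for i in range(n_candidates):
        if i in chosen:
            continue
        trial = info + np.outer(C[i], C[i]) / sigma2[i]   # information added by sensor i
        err = np.trace(np.linalg.inv(trial))
        if err < best_err:
            best, best_err = i, err
    chosen.append(best)
    info += np.outer(C[best], C[best]) / sigma2[best]

print("selected sensors:", chosen)
print("achieved total estimation variance:", np.trace(np.linalg.inv(info)))
```

    The greedy rule is only a stand-in for the paper's optimization, but it captures the idea that each additional sensor is judged by how much estimation accuracy it buys relative to its noise level.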
    Furthermore, the researchers noted that although the study is from an aerospace engineering perspective, their mathematical model is very general and can impact other systems as well.
    “As engineering systems become bigger and more complex, the question of where to put the sensor becomes more and more difficult,” said Bhattacharya. “So, for example, if you are building a really long wind turbine blade, some physical properties of the system need to be estimated using sensors and these sensors need to be placed at optimal locations to make sure the structure does not fail. This is nontrivial and that’s where our mathematical framework comes in.”

  • Shedding light on the development of efficient blue-emitting semiconductors

    Artificial light accounts for approximately 20% of the total electricity consumed globally. Considering the present environmental crisis, this makes the discovery of energy-efficient light-emitting materials particularly important, especially those that produce white light. Over the last decade, technological advances in solid-state lighting, the subfield of semiconductor research concerned with light-emitting compounds, have led to the widespread use of white LEDs. However, most of these LEDs are actually a blue LED chip coated with a yellow luminescent material; the emitted yellow light combined with the remaining blue light produces the white color.
    Therefore, a way to reduce the energy consumption of modern white LED lights is to find better blue-emitting semiconductors. Unfortunately, no known blue-emitting compounds were simultaneously highly efficient, easily processible, durable, eco-friendly, and made from abundant materials — until now.
    In a recent study published in Advanced Materials, a team of scientists from Tokyo Institute of Technology, Japan, discovered a new alkali copper halide, Cs5Cu3Cl6I2, that meets all of these criteria. Unlike Cs3Cu2I5, another promising blue-emitting candidate for future devices, the proposed compound contains two different halides, chloride and iodide. Although mixed-halide materials have been tried before, Cs5Cu3Cl6I2 has unique properties that emerge specifically from the combined use of I− and Cl− ions.
    It turns out that Cs5Cu3Cl6I2 forms a one-dimensional zigzag chain out of two different subunits, with the links in the chain bridged exclusively by I− ions. The scientists also found another important feature: its valence band, which describes the energy levels of electrons at different positions in the material’s crystalline structure, is almost flat (of nearly constant energy). This characteristic makes photo-generated holes, positively charged quasiparticles that represent the absence of a photoexcited electron, “heavier.” These holes tend to become immobilized by their strong interaction with I− ions, and they readily bind with nearby free electrons to form bound pairs known as excitons.
    Excitons induce distortions in the crystal structure. Much as a person would struggle to move across a large suspended net that sags under their own weight, the excitons become trapped in place by the very distortion they create. This self-trapping is crucial for the highly efficient generation of blue light. Professor Junghwan Kim, who led the study, explains: “The self-trapped excitons are localized forms of optically excited energy; the eventual recombination of their constituent electron-hole pair causes photoluminescence, the emission of blue light in this case.”
    In addition to its efficiency, Cs5Cu3Cl6I2 has other attractive properties. It is exclusively composed of abundant materials, making it relatively inexpensive. Moreover, it is much more stable in air than Cs3Cu2I5 and other alkali copper halide compounds. The scientists found that the performance of Cs5Cu3Cl6I2 did not degrade when stored in air for three months, while similar light-emitting compounds performed worse after merely days. Finally, Cs5Cu3Cl6I2 does not require lead, a highly toxic element, making it eco-friendly overall.
    Excited about the results of the study, Prof. Kim concludes: “Our findings provide a new perspective for the development of new alkali copper halide candidates and demonstrate that Cs5Cu3Cl6I2 could be a promising blue-emitting material.” The light shed by this team of scientists will hopefully lead to more efficient and eco-friendly lighting technology.

    Story Source:
    Materials provided by Tokyo Institute of Technology. Note: Content may be edited for style and length.

  • The brain's memory abilities inspire AI experts in making neural networks less 'forgetful'

    Artificial intelligence (AI) experts at the University of Massachusetts Amherst and the Baylor College of Medicine report that they have successfully addressed what they call a “major, long-standing obstacle to increasing AI capabilities” by drawing inspiration from a human brain memory mechanism known as “replay.”
    First author and postdoctoral researcher Gido van de Ven and principal investigator Andreas Tolias at Baylor, with Hava Siegelmann at UMass Amherst, write in Nature Communications that they have developed a new method to protect deep neural networks, “surprisingly efficiently,” from “catastrophic forgetting”: upon learning new lessons, the networks forget what they had learned before.
    Siegelmann and colleagues point out that deep neural networks are the main drivers behind recent AI advances, but progress is held back by this forgetting.
    They write, “One solution would be to store previously encountered examples and revisit them when learning something new. Although such ‘replay’ or ‘rehearsal’ solves catastrophic forgetting,” they add, “constantly retraining on all previously learned tasks is highly inefficient and the amount of data that would have to be stored becomes unmanageable quickly.”
    Unlike AI neural networks, humans are able to continuously accumulate information throughout their life, building on earlier lessons. An important mechanism in the brain believed to protect memories against forgetting is the replay of neuronal activity patterns representing those memories, they explain.
    Siegelmann says the team’s major insight is in “recognizing that replay in the brain does not store data.” Rather, “the brain generates representations of memories at a high, more abstract level with no need to generate detailed memories.” Inspired by this, she and colleagues created an artificial brain-like replay, in which no data is stored. Instead, like the brain, the network generates high-level representations of what it has seen before.
    The “abstract generative brain replay” proved extremely efficient, and the team showed that replaying just a few generated representations is sufficient to retain older memories while learning new ones. Generative replay not only prevents catastrophic forgetting and provides a new, more streamlined path for system learning, but also allows the system to generalize learning from one situation to another, they state.
    For example, “if our network with generative replay first learns to separate cats from dogs, and then to separate bears from foxes, it will also tell cats from foxes without specifically being trained to do so. And notably, the more the system learns, the better it becomes at learning new tasks,” says van de Ven.
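    A toy illustration of the replay idea (not the authors’ brain-inspired network; here the “generator” is just a per-class Gaussian fitted to the first task, and the classes and data are invented) shows how replaying generated samples, rather than stored data, lets a simple classifier keep its earlier knowledge while learning something new:

```python
# Toy generative replay (illustration only, not the paper's method): learn Task A,
# fit a simple Gaussian generative model per class, discard the raw data, then
# replay generated samples while learning Task B so Task A is not forgotten.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def make_task(centers, label_offset=0, n=500):
    X = np.vstack([rng.normal(c, 0.5, size=(n, 2)) for c in centers])
    y = np.repeat(np.arange(len(centers)) + label_offset, n)
    return X, y

XA, yA = make_task([(-2.0, 0.0), (2.0, 0.0)])                  # Task A: classes 0 and 1
XB, yB = make_task([(0.0, -2.0), (0.0, 2.0)], label_offset=2)  # Task B: classes 2 and 3

clf = SGDClassifier(loss="log_loss", random_state=0)
clf.partial_fit(XA, yA, classes=[0, 1, 2, 3])                  # learn Task A

# "Generator": per-class mean and covariance of Task A; the raw data is then discarded.
generator = {c: (XA[yA == c].mean(axis=0), np.cov(XA[yA == c].T)) for c in (0, 1)}

for _ in range(20):                                            # learn Task B with replay
    X_replay = np.vstack([rng.multivariate_normal(m, cov, 50) for m, cov in generator.values()])
    y_replay = np.repeat([0, 1], 50)
    clf.partial_fit(np.vstack([XB, X_replay]), np.concatenate([yB, y_replay]))

print("Task A accuracy after learning Task B:", clf.score(XA, yA))
```

    The paper replays abstract hidden representations produced by the network itself rather than Gaussian samples in input space, but the training loop has the same shape: new-task data is always interleaved with generated stand-ins for the old tasks.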
    He and colleagues write, “We propose a new, brain-inspired variant of replay in which internal or hidden representations are replayed that are generated by the network’s own, context-modulated feedback connections. Our method achieves state-of-the-art performance on challenging continual learning benchmarks without storing data, and it provides a novel model for abstract level replay in the brain.”
    Van de Ven says, “Our method makes several interesting predictions about the way replay might contribute to memory consolidation in the brain. We are already running an experiment to test some of these predictions.”

    Story Source:
    Materials provided by University of Massachusetts Amherst. Note: Content may be edited for style and length.

  • All-optical method sets record for ultrafast high-spatial-resolution imaging: 15 trillion frames per second

    High-speed cameras can take pictures in quick succession. This makes them useful for visualizing ultrafast dynamic phenomena, such as femtosecond laser ablation for precise machining and manufacturing processes, fast ignition for nuclear fusion energy systems, shock-wave interactions in living cells, and certain chemical reactions.
    Among the various parameters in photography, the sequential imaging of microscopic ultrafast dynamic processes requires high frame rates and high spatial and temporal resolutions. In current imaging systems, these characteristics are in a tradeoff with one another.
    However, scientists at Shenzhen University, China, have recently developed an all-optical ultrafast imaging system with high spatial and temporal resolutions, as well as a high frame rate. Because the method is all-optical, it’s free from the bottlenecks that arise from scanning with mechanical and electronic components.
    Their design focuses on non-collinear optical parametric amplifiers (OPAs). An OPA is a crystal that, when simultaneously irradiated with a desired signal light beam and a higher-frequency pump light beam, amplifies the signal beam and produces another light beam known as an idler. Because the crystal used in this study is non-collinear, the idler is fired in a different direction from that of the signal beam. But how is such a device useful in a high-speed imaging system?
    The answer lies in cascading OPAs. The information of the target, contained in the signal beam, is mapped onto the idler beam by the OPA while the pump beam is active. Because the idler moves in a different direction, it can be captured using a conventional charge-coupled device (CCD) camera “set to the side” while the signal beam moves toward the next stage in the OPA cascade.
    Just as water descends a waterfall, the signal beam travels on to the next OPA, which is activated by a pump beam generated from the same laser source; this time, however, a delay line makes the pump beam arrive slightly later, so the CCD camera next to the second-stage OPA takes its picture slightly later. Through a cascade of four OPAs with four associated CCD cameras and four different delay lines for the pump laser, the scientists created a system that can take four pictures in extremely quick succession.
    How quickly consecutive pictures can be captured is limited by how small the difference between two laser delay lines can be made. In this regard, the system achieved an effective frame rate of 15 trillion frames per second, a record shutter speed for high-spatial-resolution cameras. The temporal resolution, in turn, depends on the duration of the laser pulses triggering the OPAs and generating the idler signals; in this case, the pulse width was 50 fs (fifty millionths of a nanosecond). Coupled with the incredibly fast frame rate, the method can observe ultrafast physical phenomena such as an air plasma grating and a rotating optical field spinning at 10 trillion radians per second.
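    As a quick sanity check on these numbers (an illustrative back-of-the-envelope estimate using the vacuum speed of light, not figures quoted in the paper):

```latex
% Frame interval implied by the effective frame rate:
\Delta t = \frac{1}{15 \times 10^{12}\ \mathrm{s^{-1}}} \approx 67\ \mathrm{fs},
% which is comparable to the 50 fs pump-pulse duration, and corresponds to a
% delay-line path difference of only
\Delta L = c\,\Delta t \approx (3 \times 10^{8}\ \mathrm{m/s})(67 \times 10^{-15}\ \mathrm{s}) \approx 20\ \mu\mathrm{m}.
```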
    According to Anatoly Zayats, Co-Editor-in-Chief of Advanced Photonics, “The team at Shenzhen University has demonstrated ultrafast photographic imaging with the record fastest shutter speed. This research opens up new opportunities for studies of ultrafast processes in various fields.”
    This imaging method has scope for improvement but could easily become a new microscopy technique. Future research will unlock the potential of this approach to give us a clearer picture of ultrafast transient phenomena.

  • Algorithm boosts efficiency, nutrition for food bank ops

    Cornell University systems engineers examined data from a busy New York state food bank and, using a new algorithm, found ways to better distribute and allocate food, and elevate nutrition among its patrons in the process.
    “In order to serve thousands of people and combat food insecurity, our algorithm helps food banks manage their food resources more efficiently — and patrons get more nutrition,” said lead researcher Faisal Alkaabneh, Cornell’s first doctoral graduate in systems engineering.
    Alkaabneh and his adviser, Oliver Gao, professor of civil and environmental engineering, are co-authors of “A Unified Framework for Efficient, Effective and Fair Resource Allocation by Food Banks Using an Approximate Dynamic Programming Approach,” published in the journal Omega.
    The researchers reviewed data from the Food Bank of the Southern Tier, which serves six counties in upstate New York. In 2019, the food bank distributed 10.9 million meals to about 21,700 people each week. Nearly 19% of its patrons are seniors and about 41% are children, according to the group’s data.
    Last year, the food bank distributed 2.8 million pounds of fresh fruit through 157 partner agencies, and moved about 3.4 million pounds of food through local mobile pantries.
    The algorithm Gao and his team used to allocate several food categories efficiently, based on pantry requests, demonstrated a 7.73% improvement in efficiency from 2018 to 2019 compared with standard food bank allocation practices. Their calculations also showed a 3% improvement in nutrition from using a wider variety of food, Alkaabneh said.
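    For a sense of the allocation problem the algorithm tackles (a deliberately simplified proportional split in Python, not the approximate dynamic programming model in the paper; the agencies, food categories and quantities are invented), scarce food categories can be divided among partner agencies in proportion to what they request:

```python
# Toy proportional allocation of scarce food categories across partner agencies
# (illustration only; the paper uses an approximate dynamic programming model,
# which this sketch does not implement). All numbers are invented.
supply = {"produce": 800, "protein": 300, "grains": 500}        # pounds on hand
requests = {                                                     # pounds requested
    "agency_A": {"produce": 400, "protein": 200, "grains": 100},
    "agency_B": {"produce": 300, "protein": 150, "grains": 300},
    "agency_C": {"produce": 500, "protein": 100, "grains": 200},
}

allocation = {agency: {} for agency in requests}
for item, available in supply.items():
    total_requested = sum(r[item] for r in requests.values())
    fill_ratio = min(1.0, available / total_requested)           # scale down if scarce
    for agency, r in requests.items():
        allocation[agency][item] = round(r[item] * fill_ratio, 1)

for agency, alloc in allocation.items():
    print(agency, alloc)
```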
    “We hope our research is used as a baseline model for food banks improving practices,” Gao said, “and boosting nutrition and policies to help people at risk for hunger.”

    Story Source:
    Materials provided by Cornell University. Original written by Blaine Friedlander. Note: Content may be edited for style and length.

  • Mathematical modelling to prevent fistulas

    It is better to invest in measures that make it easier for women to visit a doctor during pregnancy than in measures to repair birth injuries afterwards. This is the conclusion of two mathematicians at Linköping University (LiU), using Uganda as an example.
    A fistula is a connection between the vagina and bladder or between the vagina and the rectum. It can arise in women during prolonged childbirth or as a result of violent rape. The connection causes incontinence, in which the patient cannot control either urine or faeces, which in turn leads to several further medical problems and to major physical, mental and social suffering. The medical care system in Uganda is well-developed, particularly in metropolitan areas, but despite this the occurrence of fistulas during childbirth is among the highest in Africa. It has been calculated that between 1.63 and 2.25 percent of women of childbearing age, 15-49 years, are affected.
    Betty Nannyonga, postdoc at LiU who also works at Makerere University in Kampala, Uganda, and Martin Singull, associate professor in mathematical statistics at the Department of Mathematics at LiU, have published an article in the scientific journal PLOS ONE that demonstrates how the available resources can be put to best use.
    “We have tried to construct a mathematical model to show how to prevent (obstetric) fistulas in women during prolonged childbirth. This is hugely important in a country such as Uganda,” says Martin Singull.
    The study analysed data from the Uganda Demographic Health Survey 2016. This survey collected information from 18,506 women aged 15-49 years and living in 15 regions in Uganda. Some special clinics, known as “fistula camps,” have been set up in recent years to provide surgery for affected women. Data from two of these, in different parts of the country, were also included in the study.
    The research found that significantly fewer women have received surgery at the clinics than expected.

    “Our results show that Uganda has a huge backlog of women who should be treated for fistula. In one of the regions we looked at, we found that for each woman who had undergone an operation, at least eight more should have received care,” says Betty Nannyonga.
    The researchers compared the resources required to provide surgery for women after they have been injured with the resources needed to give women access to professional care during pregnancy, including the availability of delivery by Caesarean section where required. Another chilling statistic to consider is that not only does the woman suffer a fistula: the baby survives in only 9% of such cases.
    The mathematical models demonstrate that the number of women who suffer from fistula decreases most rapidly if the resources are put into preventive maternity care, and making it possible for the women to give birth in hospital.
    The authors point out, however, that several difficulties contribute to the relatively high occurrence of fistula.
    “Even if professional healthcare and medical care are available in Uganda, most women do not enjoy good maternity care during their pregnancy. In some cases, this is because the distance to the healthcare providers is too far. Other reasons are the women do not have the money needed, or that they require permission from their husbands. It won’t do any good to invest money in health centres if the women don’t attend,” says Betty Nannyonga.
    The regions differ considerably in how they provide care. A woman in a town is twice as likely to see a doctor as a woman living in a rural area. Education is another factor: those with upper secondary education are twice as likely to see a doctor as those without, and education beyond upper secondary level increases the probability by a further factor of two. Social status also plays a major role: the probability of seeing a doctor at some point during the pregnancy is directly linked to income.
    One positive trend shown by the statistics is an increase in the fraction of women who receive professional care during the delivery itself (although the figures are for only those cases in which a child is born alive). This has risen from 37% in the period before 2001, to 42% in 2006, 58% in 2011 and 74% in 2016.
    “Our results show that professional care and surgery by themselves cannot prevent all cases of fistula: other measures will be needed,” concludes Betty Nannyonga.

    Story Source:
    Materials provided by Linköping University. Original written by Monica Westman Svenselius. Note: Content may be edited for style and length.

  • World's smallest ultrasound detector created

    Researchers at Helmholtz Zentrum München and the Technical University of Munich (TUM) have developed the world’s smallest ultrasound detector. It is based on miniaturized photonic circuits on top of a silicon chip. With a size 100 times smaller than an average human hair, the new detector can visualize features that are much smaller than previously possible, leading to what is known as super-resolution imaging.
    Since the development of medical ultrasound imaging in the 1950s, the core detection technology for ultrasound waves has primarily relied on piezoelectric detectors, which convert the pressure from ultrasound waves into electric voltage. The imaging resolution achieved with ultrasound depends on the size of the piezoelectric detector employed: reducing this size leads to higher resolution and allows smaller, densely packed one- or two-dimensional ultrasound arrays with an improved ability to discriminate features in the imaged tissue or material. However, reducing the size of piezoelectric detectors much further impairs their sensitivity dramatically, making them unusable for practical applications.
    Using computer chip technology to create an optical ultrasound detector
    Silicon photonics technology is widely used to miniaturize optical components and densely pack them on the small surface of a silicon chip. While silicon does not exhibit any piezoelectricity, its ability to confine light in dimensions smaller than the optical wavelength has already been widely exploited for the development of miniaturized photonic circuits.
    Researchers at Helmholtz Zentrum München and TUM capitalized on the advantages of those miniaturized photonic circuits and built the world’s smallest ultrasound detector: the silicon waveguide-etalon detector, or SWED. Instead of recording voltage from piezoelectric crystals, SWED monitors changes in the intensity of light propagating through the miniaturized photonic circuits.
    “This is the first time that a detector smaller than the size of a blood cell is used to detect ultrasound using the silicon photonics technology,” says Rami Shnaiderman, developer of SWED. “If a piezoelectric detector was miniaturized to the scale of SWED, it would be 100 million times less sensitive.”
    Super-resolution imaging

    “The degree to which we were able to miniaturize the new detector while retaining high sensitivity due to the use of silicon photonics was breathtaking,” says Prof. Vasilis Ntziachristos, lead of the research team. The SWED is about half a micron (0.0005 millimeters) in size. This corresponds to an area at least 10,000 times smaller than that of the smallest piezoelectric detectors employed in clinical imaging applications. The SWED is also up to 200 times smaller than the ultrasound wavelength employed, which means that it can be used to visualize features smaller than one micrometer, leading to what is called super-resolution imaging.
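    A rough consistency check of these figures (assuming a sound speed of about 1,500 m/s, typical of water and soft tissue; this value is not taken from the study):

```latex
% If the ~0.5 micron detector is up to 200 times smaller than the ultrasound wavelength,
% the wavelength is on the order of
\lambda \approx 200 \times 0.5\ \mu\mathrm{m} = 100\ \mu\mathrm{m},
% which, at a sound speed of c \approx 1500 m/s, corresponds to a frequency of roughly
f = \frac{c}{\lambda} \approx \frac{1500\ \mathrm{m/s}}{100 \times 10^{-6}\ \mathrm{m}} = 15\ \mathrm{MHz}.
```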
    Inexpensive and powerful
    As the technology capitalizes on the robustness and easy manufacturability of the silicon platform, large numbers of detectors can be produced at a small fraction of the cost of piezoelectric detectors, making mass production feasible. This is important for developing a number of different detection applications based on ultrasound waves. “We will continue to optimize every parameter of this technology — the sensitivity, the integration of SWED in large arrays, and its implementation in hand-held devices and endoscopes,” adds Shnaiderman.
    Future development and applications
    “The detector was originally developed to propel the performance of optoacoustic imaging, which is a major focus of our research at Helmholtz Zentrum München and TUM. However, we now foresee applications in a broader field of sensing and imaging,” says Ntziachristos.
    While the researchers are primarily aiming for applications in clinical diagnostics and basic biomedical research, industrial applications may also benefit from the new technology. The increased imaging resolution may make it possible to study ultra-fine details in tissues and materials. A first line of investigation involves super-resolution optoacoustic (photoacoustic) imaging of cells and micro-vasculature in tissues, but the SWED could also be used to study fundamental properties of ultrasonic waves and their interactions with matter on a scale that was not possible before.