More stories

  • New theory hints at more efficient way to develop quantum algorithms

    In 2019, Google claimed it was the first to demonstrate a quantum computer performing a calculation beyond the abilities of today’s most powerful supercomputers.
    But most of the time, creating a quantum algorithm that stands a chance at beating a classical computer is an accidental process, Purdue University scientists say. To bring more guidance to this process and make it less arbitrary, these scientists developed a new theory that may eventually lead to more systematic design of quantum algorithms.
    The new theory, described in a paper published in the journal Advanced Quantum Technologies, is the first known attempt to determine which quantum states can be created and processed with an acceptable number of quantum gates to outperform a classical algorithm.
    Physicists refer to this concept of having the right number of gates to control each state as “complexity.” Since the complexity of a quantum algorithm is closely related to the complexity of the quantum states involved, the theory could bring order to the search for quantum algorithms by characterizing which quantum states meet that complexity criterion.
    An algorithm is a sequence of steps to perform a calculation. The algorithm is usually implemented on a circuit.
    In classical computers, circuits have gates that switch bits to either a 0 or 1 state. A quantum computer instead relies on computational units called “qubits” that store 0 and 1 states simultaneously in superposition, allowing more information to be processed.

    What would make a quantum computer faster than a classical computer is simpler information processing, characterized by the enormous reduction in the number of quantum gates in a quantum circuit compared with a classical circuit.
    In classical computers, the number of gates in a circuit increases exponentially with the size of the problem of interest. This exponential growth is so fast that the circuit becomes physically impossible to realize for even a moderately sized problem.
    “For example, even a small protein molecule may contain hundreds of electrons. If each electron can only take two forms, then to simulate 300 electrons would require 2^300 classical states, which is more than the number of all the atoms in the universe,” said Sabre Kais, a professor in Purdue’s Department of Chemistry and member of the Purdue Quantum Science and Engineering Institute.
    For quantum computers, there is a way for the number of quantum gates to scale up “polynomially” — rather than exponentially, as on a classical computer — with the size of the problem (such as the number of electrons in the last example). “Polynomial” means that drastically fewer steps (gates) would be needed to process the same amount of information, making a quantum algorithm superior to a classical algorithm.
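    To make that contrast concrete, the small Python comparison below is purely illustrative: the cubic growth law is an arbitrary stand-in for “polynomial” and is not taken from the paper.

    ```python
    # Illustrative only: exponential vs. polynomial growth of resource counts.
    # The cubic n**3 is an arbitrary stand-in for "polynomial", not a claim
    # about any particular quantum algorithm.
    for n in (10, 50, 300):  # e.g. number of electrons to simulate
        # 2**300 is roughly 2e90, far more than the ~1e80 atoms in the observable universe.
        print(f"n = {n:>3}: exponential 2**n = {2**n:.3e}, polynomial n**3 = {n**3:,}")
    ```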
    Researchers so far haven’t had a good way to identify which quantum states could satisfy this condition of polynomial complexity.
    “There is a very large search space for finding the states and sequence of gates that match up in complexity to create a useful quantum algorithm capable of performing calculations faster than a classical algorithm,” said Kais, whose research group is developing quantum algorithms and quantum machine learning methods.
    Kais and Zixuan Hu, a Purdue postdoctoral associate, used the new theory to identify a large group of quantum states with polynomial complexity. They also showed that these states may share a coefficient feature that could be used to better identify them when designing a quantum algorithm.
    “Given any quantum state, we are now able to design an efficient coefficient sampling procedure to determine if it belongs to the class or not,” Hu said.
    This work is supported by the U.S. Department of Energy (Office of Basic Energy Sciences) under Award No. DE-SC0019215. The Purdue Quantum Science and Engineering Institute is part of Purdue’s Discovery Park.

    Story Source:
    Materials provided by Purdue University. Original written by Kayla Wiles. Note: Content may be edited for style and length.

  • Team’s flexible micro LEDs may reshape future of wearable technology

    University of Texas at Dallas researchers and their international colleagues have developed a method to create micro LEDs that can be folded, twisted, cut and stuck to different surfaces.
    The research, published online in June in the journal Science Advances, helps pave the way for the next generation of flexible, wearable technology.
    Used in products ranging from brake lights to billboards, LEDs are ideal components for backlighting and displays in electronic devices because they are lightweight, thin, energy efficient and visible in different types of lighting. Micro LEDs, which can be as small as 2 micrometers and bundled to be any size, provide higher resolution than other LEDs. Their size makes them a good fit for small devices such as smart watches, but they can be bundled to work in flat-screen TVs and other larger displays. LEDs of all sizes, however, are brittle and typically can only be used on flat surfaces.
    The researchers’ new micro LEDs aim to fill a demand for bendable, wearable electronics.
    “The biggest benefit of this research is that we have created a detachable LED that can be attached to almost anything,” said Dr. Moon Kim, Louis Beecherl Jr. Distinguished Professor of materials science and engineering at UT Dallas and a corresponding author of the study. “You can transfer it onto your clothing or even rubber — that was the main idea. It can survive even if you wrinkle it. If you cut it, you can use half of the LED.”
    Researchers in the Erik Jonsson School of Engineering and Computer Science and the School of Natural Sciences and Mathematics helped develop the flexible LED through a technique called remote epitaxy, which involves growing a thin layer of LED crystals on the surface of a sapphire crystal wafer, or substrate.

    Typically, the LED would remain on the wafer. To make it detachable, researchers added a nonstick layer to the substrate, which acts similarly to the way parchment paper protects a baking sheet and allows for the easy removal of cookies, for instance. The added layer, made of a one-atom-thick sheet of carbon called graphene, prevents the new layer of LED crystals from sticking to the wafer.
    “The graphene does not form chemical bonds with the LED material, so it adds a layer that allows us to peel the LEDs from the wafer and stick them to any surface,” said Kim, who oversaw the physical analysis of the LEDs using an atomic resolution scanning/transmission electron microscope at UT Dallas’ Nano Characterization Facility.
    Colleagues in South Korea carried out laboratory tests of the LEDs by adhering them to curved surfaces, as well as to materials that were subsequently twisted, bent and crumpled. In another demonstration, they adhered an LED to the legs of a Lego minifigure posed in different positions.
    Bending and cutting do not affect the quality or electronic properties of the LED, Kim said.
    The bendy LEDs have a variety of possible uses, including flexible lighting, clothing and wearable biomedical devices. From a manufacturing perspective, the fabrication technique offers another advantage: Because the LED can be removed without breaking the underlying wafer substrate, the wafer can be used repeatedly.
    “You can use one substrate many times, and it will have the same functionality,” Kim said.
    In ongoing studies, the researchers also are applying the fabrication technique to other types of materials.
    “It’s very exciting; this method is not limited to one type of material,” Kim said. “It’s open to all kinds of materials.”

    Story Source:
    Materials provided by University of Texas at Dallas. Original written by Kim Horner. Note: Content may be edited for style and length.

  • Intelligent software tackles plant cell jigsaw puzzle

    Imagine working on a jigsaw puzzle with so many pieces that even the edges seem indistinguishable from others at the puzzle’s centre. The solution seems nearly impossible. And, to make matters worse, this puzzle is in a futuristic setting where the pieces are not only numerous, but ever-changing. In fact, you not only must solve the puzzle, but “un-solve” it to parse out how each piece brings the picture wholly into focus.
    That’s the challenge molecular and cellular biologists face in sorting through cells to study an organism’s structural origin and the way it develops, known as morphogenesis. If only there was a tool that could help. An eLife paper out this week shows there now is.
    An EMBL research group led by Anna Kreshuk, a computer scientist and expert in machine learning, joined the DFG-funded FOR2581 consortium of plant biologists and computer scientists to develop a tool that could solve this cellular jigsaw puzzle. Starting with computer code and moving on to a more user-friendly graphical interface called PlantSeg, the team built a simple open-access method to provide the most accurate and versatile analysis of plant tissue development to date. The group included expertise from EMBL, Heidelberg University, the Technical University of Munich, and the Max Planck Institute for Plant Breeding Research in Cologne.
    “Building something like PlantSeg that can take a 3D perspective of cells and actually separate them all is surprisingly hard to do, considering how easy it is for humans,” Kreshuk says. “Computers aren’t as good as humans when it comes to most vision-related tasks, as a rule. With all the recent development in deep learning and artificial intelligence at large, we are closer to solving this now, but it’s still not solved — not for all conditions. This paper is the presentation of our current approach, which took some years to build.”
    If researchers want to look at morphogenesis of tissues at the cellular level, they need to image individual cells. Lots of cells means they also have to separate or “segment” them to see each cell individually and analyse the changes over time.
    “In plants, you have cells that look extremely regular, which in a cross-section look like rectangles or cylinders,” Kreshuk says. “But you also have cells with so-called ‘high lobeness’ that have protrusions, making them look more like puzzle pieces. These are more difficult to segment because of their irregularity.”
    Kreshuk’s team trained PlantSeg on 3D microscope images of reproductive organs and developing lateral roots of a common plant model, Arabidopsis thaliana, also known as thale cress. The algorithm needed to factor in the inconsistencies in cell size and shape. Sometimes cells were more regular, sometimes less. As Kreshuk points out, this is the nature of tissue.

    A beautiful side of this research came from the microscopy and images it provided to the algorithm. The results manifested themselves in colourful renderings that delineated the cellular structures, making it easier to truly “see” segmentation.
    “We have giant puzzle boards with thousands of cells and then we’re essentially colouring each one of these puzzle pieces with a different colour,” Kreshuk says.
    Plant biologists have long needed this kind of tool, as morphogenesis is at the crux of many developmental biology questions. This kind of algorithm allows for all kinds of shape-related analysis, for example, analysis of shape changes through development or under a change in environmental conditions, or between species. The paper gives some examples, such as characterising developmental changes in ovules, studying the first asymmetric cell division which initiates the formation of the lateral root, and comparing and contrasting the shape of leaf cells between two different plant species.
    While this tool currently targets plants specifically, Kreshuk points out that it could be tweaked to be used for other living organisms as well.
    Machine learning-based algorithms, like the ones used at the core of PlantSeg, are trained from correct segmentation examples. The group has trained PlantSeg on many plant tissue volumes, so that now it generalises quite well to unseen plant data. The underlying method is, however, applicable to any tissue with cell boundary staining and one could easily retrain it for animal tissue.
    “If you have tissue where you have a boundary staining, like cell walls in plants or cell membranes in animals, this tool can be used,” Kreshuk says. “With this staining and at high enough resolution, plant cells look very similar to our cells, but they are not quite the same. The tool right now is really optimised for plants. For animals, we would probably have to retrain parts of it, but it would work.”
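    As a rough sketch of that boundary-to-segmentation idea, the toy Python example below turns a synthetic 2D boundary map into labelled “cells” with a simple seeded watershed. It only illustrates the general principle; PlantSeg itself couples a 3D neural-network boundary predictor with graph partitioning.

    ```python
    # A minimal sketch, not PlantSeg's pipeline: segment "cells" from a
    # boundary-probability map with a seeded watershed (scikit-image).
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.segmentation import watershed

    def segment_from_boundaries(boundary_prob, seed_threshold=0.3):
        """boundary_prob: 2D/3D array in [0, 1], high where cell walls/membranes are."""
        seeds, _ = ndi.label(boundary_prob < seed_threshold)  # confident cell interiors
        return watershed(boundary_prob, markers=seeds)        # flood uphill to the walls

    # Toy image: two "cells" separated by a bright wall.
    toy = np.zeros((64, 64))
    toy[:, 30:34] = 1.0
    print(np.unique(segment_from_boundaries(toy)))  # -> [1 2], i.e. two cells
    ```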
    Currently, PlantSeg is an independent tool, but Kreshuk’s team will eventually merge it into another tool her lab is working on, the ilastik Multicut workflow.

  • Algorithm aims to alert consumers before they use illicit online pharmacies

    Consumers are expected to spend more than $100 billion at online pharmacies in the next few years, but not all of these businesses are legitimate. Without proper quality control, these illicit online pharmacies are more than just a commercial threat; they can pose serious health risks.
    In a study, a team of Penn State researchers report that an algorithm they developed may be able to spot illicit online pharmacies that could be providing customers with substandard medications without their knowledge, among other potential problems.
    “There are several problems with illicit online pharmacies,” said Soundar Kumara, the Allen E. Pearce and Allen M. Pearce Professor of Industrial Engineering. “One is they might put bad content into a pill, and the other problem is they might reduce the content of a medicine, so, for example, instead of taking 200 milligrams of a medication, the customers are only taking 100 milligrams — and they probably never realize it.”
    Besides often selling sub-standard and counterfeit drugs, illicit pharmacies may provide potentially dangerous and addictive drugs, such as opioids, without a prescription, according to the researchers, who report their findings in the Journal of Medical Internet Research, a top-tier peer-reviewed open-access journal in health and medical informatics. The paper is titled “Managing Illicit Online Pharmacies: Web Analytics and Predictive Models Study.”
    The researchers designed the computer model to approach the problem of weeding out good online pharmacies from bad in much the same way that people make comparisons, said Kumara, who is also an associate of Penn State’s Institute for Computational and Data Sciences.
    “The essential question in this study is, how do you know what is good or bad — you create a baseline of what is good and then you compare that baseline with anything else you encounter, which normally tells you whether something is not good,” said Kumara. “This is how we recognize things that might be out of the norm. The same thing applies here. You look at a good online pharmacy and find out what the features are of that site and then you collect the features of other online pharmacies and do a comparison.”
    Hui Zhao, associate professor of supply chain and information systems and the Charles and Lilian Binder Faculty Fellow in the Smeal College of Business, said that sorting legitimate online pharmacies from illicit ones can be a daunting task.

    “It’s very challenging to develop these tools for two reasons,” said Zhao. “First is just the huge scale of the problem. There are at least 32,000 to 35,000 online pharmacies. Second, the nature of online channels because these online pharmacies are so dynamic. They come and go quickly — around 20 a day.”
    According to Sowmyasri Muthupandi, a former research assistant in industrial engineering and currently a data engineer at Facebook, the team looked at several attributes of online pharmacies but identified the relationships between the pharmacies and other sites as a critical attribute in determining whether a business was legitimate or not.
    “One novelty of the algorithm is that we focused mostly on websites that link to these particular pharmacies,” said Muthupandi. “And among all the attributes we found that it’s these referral websites that paint a clearer picture when it comes to classifying online pharmacies.”
    She added that if a pharmacy is mainly reached from referral websites that mostly link to or refer illicit pharmacies, then this pharmacy is more likely to be illicit.
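    A toy numerical illustration of that referral signal might look like the sketch below; the site names, scores and threshold are hypothetical and are not the team’s actual features or model.

    ```python
    # Hypothetical sketch of the referral-site feature described above.
    def referral_illicit_score(referring_sites, illicit_link_fraction):
        """Share of a pharmacy's referrers that mostly link to known-illicit pharmacies."""
        if not referring_sites:
            return 0.0
        bad = sum(1 for site in referring_sites
                  if illicit_link_fraction.get(site, 0.0) > 0.5)
        return bad / len(referring_sites)

    # Example: three of four referrers mostly point at known-illicit pharmacies.
    score = referral_illicit_score(
        ["siteA", "siteB", "siteC", "siteD"],
        {"siteA": 0.9, "siteB": 0.8, "siteC": 0.7, "siteD": 0.1},
    )
    print(score)  # 0.75 -> this pharmacy warrants closer scrutiny
    ```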
    Zhao said that the algorithm the team developed could help consumers identify illicit online pharmacies, which are estimated to represent up to 75% of all online drug merchants. As an added danger, most consumers are unaware of the prevalence and danger of these illicit pharmacies and consequently use such sites without knowing the potential risks, she said.

    The researchers said a warning system could be developed that alerts the consumer before a purchase that the site may be an illicit pharmacy. Search engines, social media, online markets, such as Amazon, and payment or credit card companies could also use the algorithm to filter out illicit online pharmacies, or take the status of the online pharmacies into consideration when ranking search results, deciding advertising allocations, making payments, or disqualifying vendors.
    Policy makers, government agencies, patient advocacy groups and drug manufacturers could use such a system to identify, monitor and curb illicit online pharmacies and to educate consumers.
    According to Muthupandi, for future work, researchers may want to consider expanding the number of websites and attributes for analysis to further improve the algorithm’s ability to detect illicit online pharmacies.
    This work was funded through the Smeal Commercialization of Research (SCOR) Grant, established for “Research with Impact.” This particular project was funded collaboratively by the Farrell Center for Corporate Innovation and Entrepreneurship, the College of Engineering’s ENGINE Program and the Penn State Fund for Innovation. The team has also received a patent — U.S. Patent No. 10,672,048 — for this work.

  • Brain-inspired electronic system could vastly reduce AI’s carbon footprint

    Extremely energy-efficient artificial intelligence is now closer to reality after a study by UCL researchers found a way to improve the accuracy of a brain-inspired computing system.
    The system, which uses memristors to create artificial neural networks, is at least 1,000 times more energy efficient than conventional transistor-based AI hardware, but has until now been more prone to error.
    Existing AI is extremely energy-intensive — training one AI model can generate 284 tonnes of carbon dioxide, equivalent to the lifetime emissions of five cars. Replacing the transistors that make up all digital devices with memristors, a novel electronic device first built in 2008, could reduce this to a fraction of a tonne of carbon dioxide — equivalent to emissions generated in an afternoon’s drive.
    Since memristors are so much more energy-efficient than existing computing systems, they can potentially pack huge amounts of computing power into hand-held devices, removing the need to be connected to the Internet.
    This is especially important as over-reliance on the Internet is expected to become problematic in future due to ever-increasing data demands and the difficulties of increasing data transmission capacity past a certain point.
    In the new study, published in Nature Communications, engineers at UCL found that accuracy could be greatly improved by getting memristors to work together in several sub-groups of neural networks and averaging their calculations, meaning that flaws in each of the networks could be cancelled out.

    Memristors, described as “resistors with memory” because they remember the amount of electric charge that flowed through them even after being turned off, were considered revolutionary when they were first built over a decade ago, a “missing link” in electronics to supplement the resistor, capacitor and inductor. They have since been manufactured commercially in memory devices, but the research team say they could be used to develop AI systems within the next three years.
    Memristors offer vastly improved efficiency because they operate not just in a binary code of ones and zeros, but at multiple levels between zero and one at the same time, meaning more information can be packed into each bit.
    Moreover, memristors are often described as a neuromorphic (brain-inspired) form of computing because, like in the brain, processing and memory are implemented in the same adaptive building blocks, in contrast to current computer systems that waste a lot of energy in data movement.
    In the study, Dr Adnan Mehonic, PhD student Dovydas Joksas (both UCL Electronic & Electrical Engineering), and colleagues from the UK and the US tested the new approach in several different types of memristors and found that it improved the accuracy of all of them, regardless of material or particular memristor technology. It also worked for a number of different problems that may affect memristors’ accuracy.
    The researchers found that their approach increased the accuracy of the neural networks for typical AI tasks to a level comparable to software tools run on conventional digital hardware.
    Dr Mehonic, director of the study, said: “We hoped that there might be more generic approaches that improve not the device-level, but the system-level behaviour, and we believe we found one. Our approach shows that, when it comes to memristors, several heads are better than one. Arranging the neural network into several smaller networks rather than one big network led to greater accuracy overall.”
    Dovydas Joksas further explained: “We borrowed a popular technique from computer science and applied it in the context of memristors. And it worked! Using preliminary simulations, we found that even simple averaging could significantly increase the accuracy of memristive neural networks.”
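    A minimal numpy sketch of that averaging idea is shown below; the noisy linear layers stand in for imperfect memristor crossbars, and the sizes and noise levels are made up for illustration.

    ```python
    # Illustration only: independent device errors partly cancel when several
    # imperfect copies of the same network are averaged.
    import numpy as np

    rng = np.random.default_rng(0)
    W_ideal = rng.standard_normal((4, 8))   # "true" trained weights
    x = rng.standard_normal(8)
    y_true = W_ideal @ x

    def noisy_output(noise=0.2):
        """One sub-network realised with memristor-like weight errors."""
        return (W_ideal + noise * rng.standard_normal(W_ideal.shape)) @ x

    single = noisy_output()
    ensemble = np.mean([noisy_output() for _ in range(10)], axis=0)
    print("single-network error:", np.linalg.norm(single - y_true))
    print("10-network average  :", np.linalg.norm(ensemble - y_true))  # typically much smaller
    ```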
    Professor Tony Kenyon (UCL Electronic & Electrical Engineering), a co-author on the study, added: “We believe now is the time for memristors, on which we have been working for several years, to take a leading role in a more energy-sustainable era of IoT devices and edge computing.”

  • Civilization may need to 'forget the flame' to reduce CO2 emissions

    Just as a living organism continually needs food to maintain itself, an economy consumes energy to do work and keep things going. That consumption comes with the cost of greenhouse gas emissions and climate change, though. So, how can we use energy to keep the economy alive without burning out the planet in the process?
    In a paper in PLOS ONE, University of Utah professor of atmospheric sciences Tim Garrett, with mathematician Matheus Grasselli of McMaster University and economist Stephen Keen of University College London, report that current world energy consumption is tied to unchangeable past economic production. And the way out of an ever-increasing rate of carbon emissions may not necessarily be ever-increasing energy efficiency — in fact it may be the opposite.
    “How do we achieve a steady-state economy where economic production exists, but does not continually increase our size and add to our energy demands?” Garrett says. “Can we survive only by repairing decay, simultaneously switching existing fossil infrastructure to a non-fossil appetite? Can we forget the flame?”
    Thermoeconomics
    Garrett is an atmospheric scientist. But he recognizes that atmospheric phenomena, including rising carbon dioxide levels and climate change, are tied to human economic activity. “Since we model the earth system as a physical system,” he says, “I wondered whether we could model economic systems in a similar way.”
    He’s not alone in thinking of economic systems in terms of physical laws. There’s a field of study, in fact, called thermoeconomics. Just as thermodynamics describes how heat and entropy (disorder) flow through physical systems, thermoeconomics explores how matter, energy, entropy and information flow through human systems.

    Many of these studies looked at correlations between energy consumption and current production, or gross domestic product. Garrett took a different approach; his concept of an economic system begins with the centuries-old idea of a heat engine. A heat engine consumes energy at high temperatures to do work and emits waste heat. But it only consumes. It doesn’t grow.
    Now envision a heat engine that, like an organism, uses energy to do work not just to sustain itself but also to grow. Due to past growth, it requires an ever-increasing amount of energy to maintain itself. For humans, the energy comes from food. Most goes to sustenance and a little to growth. And from childhood to adulthood our appetite grows. We eat more and exhale an ever-increasing amount of carbon dioxide.
    “We looked at the economy as a whole to see if similar ideas could apply to describe our collective maintenance and growth,” Garrett says. While societies consume energy to maintain day to day living, a small fraction of consumed energy goes to producing more and growing our civilization.
    “We’ve been around for a while,” he adds. “So it is an accumulation of this past production that has led to our current size, and our extraordinary collective energy demands and CO2 emissions today.”
    Growth as a symptom
    To test this hypothesis, Garrett and his colleagues used economic data from 1980 to 2017 to quantify the relationship between past cumulative economic production and the current rate at which we consume energy. Regardless of the year examined, they found that every trillion inflation-adjusted year-2010 U.S. dollars of worldwide economic production corresponded with an enlarged civilization that required an additional 5.9 gigawatts of power production to sustain itself. In a fossil economy, that’s equivalent to around 10 coal-fired power plants, Garrett says, leading to about 1.5 million tons of CO2 emitted to the atmosphere each year. Our current energy usage, then, is the natural consequence of our cumulative previous economic production.
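    As a back-of-the-envelope check of the quoted figures (assuming a typical coal-fired plant delivers roughly 0.6 gigawatts, an assumption not taken from the paper):

    ```python
    # Rough sanity check of the "about 10 coal-fired power plants" figure.
    gw_per_trillion_2010_usd = 5.9   # from the study, as quoted above
    gw_per_coal_plant = 0.6          # assumed typical plant output, not from the paper
    print(gw_per_trillion_2010_usd / gw_per_coal_plant)  # ~9.8 plants per trillion dollars
    ```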

    They came to two surprising conclusions. First, although improving efficiency through innovation is a hallmark of efforts to reduce energy use and greenhouse gas emissions, efficiency has the side effect of making it easier for civilization to grow and consume more.
    Second, that the current rates of world population growth may not be the cause of rising rates of energy consumption, but a symptom of past efficiency gains.
    “Advocates of energy efficiency for climate change mitigation may seem to have a reasonable point,” Garrett says, “but their argument only works if civilization maintains a fixed size, which it doesn’t. Instead, an efficient civilization is able to grow faster. It can more effectively use available energy resources to make more of everything, including people. Expansion of civilization accelerates rather than declines, and so do its energy demands and CO2 emissions.”
    A steady-state decarbonized future?
    So what do those conclusions mean for the future, particularly in relation to climate change? We can’t just stop consuming energy today any more than we can erase the past, Garrett says. “We have inertia. Pull the plug on energy consumption and civilization stops emitting but it also becomes worthless. I don’t think we could accept such starvation.”
    But is it possible to undo the economic and technological progress that have brought civilization to this point? Can we, the species who harnessed the power of fire, now “forget the flame,” in Garrett’s words, and decrease efficient growth?
    “It seems unlikely that we will forget our prior innovations, unless collapse is imposed upon us by resource depletion and environmental degradation,” he says, “which, obviously, we hope to avoid.”
    So what kind of future, then, does Garrett’s work envision? It’s one in which the economy manages to hold at a steady state — where the energy we use is devoted to maintaining our civilization and not expanding it.
    It’s also one where the energy of the future can’t be based on fossil fuels. Those have to stay in the ground, he says.
    “At current rates of growth, just to maintain carbon dioxide emissions at their current level will require rapidly constructing renewable and nuclear facilities, about one large power plant a day. And somehow it will have to be done without inadvertently supporting economic production as well, in such a way that fossil fuel demands also increase.”
    It’s a “peculiar dance,” he says, between eliminating the prior fossil-based innovations that accelerated civilization expansion, while innovating new non-fossil fuel technologies. Even if this steady-state economy were to be implemented immediately, stabilizing CO2 emissions, the pace of global warming would be slowed — not eliminated. Atmospheric levels of CO2 would still reach double their pre-industrial level before equilibrating, the research found.
    By looking at the global economy through a thermodynamic lens, Garrett acknowledges that there are unchangeable realities. Any form of an economy or civilization needs energy to do work and survive. The trick is balancing that with the climate consequences.
    “Climate change and resource scarcity are defining challenges of this century,” Garrett says. “We will not have a hope of surviving our predicament by ignoring physical laws.”
    Future work
    This study marks the beginning of the collaboration between Garrett, Grasselli and Keen. They’re now working to connect the results of this study with a full model for the economy, including a systematic investigation of the role of matter and energy in production.
    “Tim made us focus on a pretty remarkable empirical relationship between energy consumption and cumulative economic output,” Grasselli says. “We are now busy trying to understand what this means for models that include notions that are more familiar to economists, such as capital, investment and the always important question of monetary value and inflation.”

  • Student research team develops hybrid rocket engine

    In a year defined by obstacles, a University of Illinois at Urbana-Champaign student rocket team persevered. Working together across five time zones, they successfully designed a hybrid rocket engine that uses paraffin and a novel nitrous oxide-oxygen mixture called Nytrox. The team has its sights set on launching a rocket with the new engine at the 2021 Intercollegiate Rocketry and Engineering Competition.
    “Hybrid propulsion powers Virgin Galactic’s suborbital tourist spacecraft and the development of that engine has been challenging. Our students are now experiencing those challenges first hand and learning how to overcome them,” said faculty adviser to the team Michael Lembeck.
    Last year the team witnessed a number of catastrophic failures with hybrid engines utilizing nitrous oxide. The propellant frequently overheated in the New Mexico desert, where the IREC competition is held. Lembeck said this motivated the team to find an alternative oxidizer that could remain stable at the desert’s high temperatures. Nytrox surfaced as the solution to the problem.
    As the team began working on the engine this past spring semester, excitement to conduct hydrostatic testing of the ground oxidizer tank vessel quickly turned to frustration as the team lacked a safe test location.
    Team leader Vignesh Sella said, “We planned to conduct the test at the U of I’s Willard airport retired jet engine testing facility. But the Department of Aerospace Engineering halted all testing until safety requirements could be met.”
    Sella said they were disheartened at first, but rallied by creating a safety review meeting along with another student rocket group to examine their options.

    “As a result of that meeting, we came up with a plan to move the project forward. The hybrid team rigorously evaluated our safety procedures, and had our work reviewed by Dr. Dassou Nagassou, the Aerodynamics Research Lab manager. He became a great resource for us, and a very helpful mentor.”
    Sella and Andrew Larkey also approached Purdue University to draw from its extensive experience in rocket propulsion. They connected with Chris Nielson, a graduate student and lab manager at Purdue, did preliminary design reviews over the phone, and were eventually invited to conduct their hydrostatic and cold-flow testing at Purdue’s Zucrow Laboratories, a facility dedicated to rocket propulsion testing with several experts in the field on-site.
    “We sent a few of the members there to scout the location and take notes before bringing the whole team there for a test,” Sella said. “These meetings, relationships, and advances, although they may sound smooth and easy to establish, were arduous and difficult to attain. It was a great relief to us to have the support from the department, a pressure vessel expert as our mentor, and Zucrow Laboratories available to our team.”
    The extended abstract, which the team had submitted much earlier to the AIAA Propulsion and Energy conference, assumed the engine would have been assembled and tested before the documentation process began. Sella said they wanted to document hard test data but had to switch tactics in March. The campus move to online-only classes also curtailed all in-person activities, including those of registered student organizations like the Illinois Space Society (ISS).
    “As the disruptions caused by COVID-19 required us to work remotely, we pivoted the paper by focusing on documenting the design processes and decisions we made for the engine. This allowed us to work remotely and complete a paper that wasn’t too far from the original abstract. Our members, some of whom are international, met on Zoom and Discord to work on the paper together virtually, over five time zones,” Sella said.

    Sella said he and the entire team are proud of what they have accomplished and are “returning this fall with a vengeance.”
    The Illinois Space Society is a technical, professional, and educational outreach student organization at the U of I in the Department of Aerospace Engineering. The society consists of 150 active members. The hybrid rocket engine team consisted of 20 members and is one of the five technical projects within ISS. The project began in 2013 with the goal of constructing a subscale hybrid rocket engine before transitioning to a full-scale engine. The subscale hybrid rocket engine was successfully constructed and hot fired in the summer of 2018, yielding the positive test results necessary to move onto designing and manufacturing a full-scale engine.
    “After the engine completes its testing, the next task will be integrating the engine into the rocket vehicle,” said Sella. “This will require fitting key flight hardware components within the geometric constraints of a rocket body tube and structurally securing the engine to the vehicle.”
    In June 2021, the rocket will be transported to Spaceport America in Truth or Consequences for its first launch.
    This work was supported by the U of I Student Sustainability Committee, the Office of Undergraduate Research, and the Illinois Space Society. Technical support was provided by the Department of Aerospace Engineering, the School of Chemical Sciences Machine Shop, Zucrow Laboratories and Christopher D. Nilsen at Purdue University, Stephen A. Whitmore of Utah State University, and Dassou Nagassou of the Aerodynamics Research Laboratory at Illinois.

  • Artificial intelligence learns continental hydrology

    Changes in the water masses stored on the continents can be detected with the help of satellites. The data sets on the Earth’s gravitational field required for this come from the GRACE and GRACE-FO satellite missions. Because these data sets capture only large-scale mass anomalies, they allow no conclusions about small-scale structures such as the actual distribution of water masses in rivers and their branches. Using the South American continent as an example, Earth system modellers at the German Research Centre for Geosciences GFZ have developed a new deep-learning method that quantifies both small- and large-scale changes in water storage with the help of satellite data. The method combines deep learning, hydrological models and Earth observations from gravimetry and altimetry.
    So far, it is not precisely known how much water a continent really stores. The continental water masses are also constantly changing, affecting the Earth’s rotation and acting as a link in the water cycle between atmosphere and ocean. Amazon tributaries in Peru, for example, carry huge amounts of water in some years but only a fraction of that in others. In addition to the water masses of rivers and other bodies of fresh water, considerable amounts of water are also stored in soil, snow and underground reservoirs, which are difficult to quantify directly.
    The research team led by first author Christopher Irrgang has now developed a new method to infer the water quantities stored on the South American continent from the coarsely resolved satellite data. “For the so-called downscaling, we are using a convolutional neural network, or CNN, in combination with a newly developed training method,” Irrgang says. “CNNs are particularly well suited for processing spatial Earth observations, because they can reliably extract recurrent patterns such as lines, edges or more complex shapes and characteristics.”
    To learn the connection between continental water storage and the corresponding satellite observations, the CNN was trained with simulation data from a numerical hydrological model covering the period from 2003 to 2018. Data from satellite altimetry in the Amazon region was additionally used for validation. Remarkably, the CNN continuously self-corrects and self-validates in order to make the most accurate statements possible about the distribution of water storage. “This CNN therefore combines the advantages of numerical modelling with high-precision Earth observation,” according to Irrgang.
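    A minimal PyTorch sketch of such a downscaling setup is shown below; the layer sizes, the upsampling factor and the random stand-ins for the coarse GRACE-like grids and hydrological-model targets are illustrative assumptions, not the GFZ architecture.

    ```python
    # Hypothetical sketch: a small CNN that maps a coarse water-storage grid to a
    # finer one, trained against hydrological-model output (illustrative only).
    import torch
    import torch.nn as nn

    class DownscaleCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, kernel_size=3, padding=1),
            )

        def forward(self, coarse):        # coarse: (batch, 1, H, W)
            return self.net(coarse)       # fine:   (batch, 1, 4H, 4W)

    model = DownscaleCNN()
    coarse = torch.randn(8, 1, 16, 16)       # stand-in for monthly gravimetry grids
    fine_target = torch.randn(8, 1, 64, 64)  # stand-in for hydrological-model output
    loss = nn.functional.mse_loss(model(coarse), fine_target)
    loss.backward()                           # one illustrative training step
    ```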
    The study shows that the new deep-learning method is particularly reliable for the tropical regions of the South American continent north of 20° S, where rainforests, vast surface waters and large groundwater basins are located, and likewise for the groundwater-rich western part of the continent’s southern tip. The downscaling works less well in dry and desert regions, which can be explained by the comparatively low variability of the already low water storage there; it has only a marginal effect on the training of the neural network. For the Amazon region, however, the researchers were able to show that the forecast of the validated CNN was more accurate than that of the numerical model used.
    In the future, large-scale as well as regional analyses and forecasts of global continental water storage will be urgently needed. Further development of numerical models, combined with innovative deep-learning methods, will play an increasingly important role in gaining comprehensive insight into continental hydrology. Beyond purely geophysical investigations, there are many other possible applications, such as studying the impact of climate change on continental hydrology, identifying stress factors for ecosystems such as droughts or floods, and developing water management strategies for agricultural and urban regions.

    Story Source:
    Materials provided by GFZ GeoForschungsZentrum Potsdam, Helmholtz Centre. Note: Content may be edited for style and length.