More stories

  • Predictive model could improve hydrogen station availability

    Consumer confidence in driving hydrogen-fueled vehicles could be improved by having station operators adopt a predictive model that helps them anticipate maintenance needs, according to researchers at the U.S. Department of Energy’s National Renewable Energy Laboratory (NREL) and Colorado State University (CSU).
    Stations shutting down for unscheduled maintenance reduces hydrogen fueling availability to consumers and may slow adoption of these types of fuel cell electric vehicles, the researchers noted. The use of what is known as a prognostics health monitoring (PHM) model would allow hydrogen stations to reduce these unscheduled events.
    “Motorists expect to be able to fuel their vehicles without any problems. We want to ensure motorists who drive hydrogen-fueled cars have the same experience,” said Jennifer Kurtz, lead author of the new paper, “Hydrogen Station Prognostics and Health Monitoring Model,” which appears in the International Journal of Hydrogen Energy. “This predictive model can let station operators know in advance when a problem might occur and minimize any disruptions that motorists might experience with hydrogen fueling.”
    Co-authored by Spencer Gilleon of NREL and Thomas Bradley of CSU, the article posits the PHM model would improve station availability and consumer confidence.
    The availability of hydrogen as a vehicle fuel is low compared to the ubiquity of gasoline, a fact reflected in the number of stations that dispense the low-emission fuel. While California has more than 10,000 gasoline stations, the Hydrogen Fuel Cell Partnership counts just 59 retail hydrogen stations across the state. With relatively few choices, motorists who rely on hydrogen must be confident their needed fuel is available. Station operators must make any necessary repairs to meet the demands of consumers, but they also must investigate the causes of any failures to avoid future problems.
    Data from the National Fuel Cell Technology Evaluation Center reveals the single most common reason hydrogen stations shut down for unscheduled maintenance is problems with the dispenser system, which encompasses such items as the hoses and dispenser valves as well as the user interface. By using a data-based PHM, station operators could reduce the frequency of unscheduled maintenance and increase the frequency of preventive maintenance. The researchers have dubbed this particular PHM “hydrogen station prognostics health monitoring,” or H2S PHM.
    The H2S PHM calculates the probability that a component will continue working without a failure, based on how many fills the station has completed. The model can also be used to estimate the remaining useful life of each component, thereby lowering maintenance costs and making the stations more reliable. Using a hypothetical example, the researchers considered a dispenser valve that the H2S PHM has flagged as needing attention. The station operator can then prepare for the upcoming maintenance and schedule a technician to come when demand for hydrogen will be low, cutting down on the time the station is unable to fuel vehicles. If the valve were to fail without warning, the operator would have to call a technician, then wait for the technician to arrive and diagnose the problem, all while unable to provide fuel.
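    The study's exact model is not reproduced here, but the core calculation, the probability that a component survives a given number of fills and the fills remaining before that probability drops too low, can be sketched with a standard two-parameter Weibull reliability curve; the shape and scale values below are illustrative, not taken from the paper:

```python
import math

def reliability(fills, shape=1.5, scale=20000):
    """Probability a component survives `fills` fills without failure
    (two-parameter Weibull; parameters here are illustrative)."""
    return math.exp(-((fills / scale) ** shape))

def remaining_useful_life(fills, threshold=0.5, shape=1.5, scale=20000):
    """Fills remaining until reliability drops below `threshold`."""
    # Invert R(t) = exp(-(t/scale)^shape) to find t at the threshold.
    t_threshold = scale * (-math.log(threshold)) ** (1.0 / shape)
    return max(0.0, t_threshold - fills)

print(reliability(10000))            # survival probability after 10,000 fills
print(remaining_useful_life(10000))  # fills left before reliability < 0.5
```

    Feeding each component's fill count through a curve like this is what would let an operator rank components by remaining useful life and schedule a technician before a failure occurs.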
    Kurtz, the director of NREL’s Energy Conversion and Storage Systems Center, noted that limitations exist when applying H2S PHM to the reliability of a hydrogen station. The method would not predict sudden failures, which can be caused by human error. The H2S PHM is also only as good as the available data, and more data is needed.
    The Department of Energy’s Hydrogen and Fuel Cell Technologies Office funded the research.
    NREL is the Department of Energy’s primary national laboratory for renewable energy and energy efficiency research and development. NREL is operated for DOE by the Alliance for Sustainable Energy LLC.

  • Buried ancient Roman glass formed substance with modern applications

    Some 2,000 years ago in ancient Rome, glass vessels carrying wine or water, or perhaps an exotic perfume, tumbled from a table in a marketplace and shattered to pieces on the street. As centuries passed, the fragments were covered by layers of dust and soil and exposed to a continuous cycle of changes in temperature, moisture, and surrounding minerals.
    Now these tiny pieces of glass are being uncovered from construction sites and archaeological digs and reveal themselves to be something extraordinary. On their surface is a mosaic of iridescent colors of blue, green and orange, with some displaying shimmering gold-colored mirrors.
    These beautiful glass artifacts are often set in jewelry as pendants or earrings, while larger, more complete objects are displayed in museums.
    For Fiorenzo Omenetto and Giulia Guidetti, professors of engineering at the Tufts University Silklab and experts in materials science, what’s fascinating is how the molecules in the glass rearranged and recombined with minerals over thousands of years to form what are called photonic crystals — ordered arrangements of atoms that filter and reflect light in very specific ways.
    Photonic crystals have many applications in modern technology. They can be used to create waveguides, optical switches and other devices for very fast optical communications in computers and over the internet. Since they can be engineered to block certain wavelengths of light while allowing others to pass, they are used in filters, lasers, mirrors, and anti-reflection (stealth) devices.
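    As a rough illustration of why periodic layering filters light (an idealized two-material stack, not the actual structure of the corroded glass), the center wavelength of the first-order reflection band follows the Bragg condition for one period of the stack:

```python
def bragg_center_wavelength(n1, d1, n2, d2):
    """Center wavelength (same units as d1, d2) of the first-order
    reflection band for an idealized two-layer periodic stack:
    lambda = 2 * (n1*d1 + n2*d2)."""
    return 2 * (n1 * d1 + n2 * d2)

# Hypothetical values: a silica-like layer alternating with a denser
# mineralized layer; the numbers are illustrative, not measured.
print(bragg_center_wavelength(n1=1.46, d1=100, n2=2.0, d2=60))  # 532.0 nm, green
```

    Tuning the layer thicknesses shifts the reflected band across the visible spectrum, which is one way a stack of corrosion layers can produce the mosaic of colors described above.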
    In a recent study published in the Proceedings of the National Academy of Sciences (PNAS) USA, Omenetto, Guidetti and collaborators report on the unique atomic and mineral structures that built up from the glass’ original silicate and mineral constituents, modulated by the pH of the surrounding environment, and the fluctuating levels of groundwater in the soil.
    The project started by chance during a visit to the Italian Institute of Technology’s (IIT) Center for Cultural Heritage Technology. “This beautiful sparkling piece of glass on the shelf attracted our attention,” said Omenetto. “It was a fragment of Roman glass recovered near the ancient city of Aquileia, Italy.” Arianna Traviglia, director of the Center, said her team referred to it affectionately as the ‘wow glass.’ They decided to take a closer look.

  • AI and machine learning can successfully diagnose polycystic ovary syndrome

    Artificial intelligence (AI) and machine learning (ML) can effectively detect and diagnose Polycystic Ovary Syndrome (PCOS), the most common hormone disorder among women, typically affecting those between the ages of 15 and 45, according to a new study by the National Institutes of Health. Researchers systematically reviewed published scientific studies that used AI/ML to analyze data to diagnose and classify PCOS and found that AI/ML-based programs were able to successfully detect PCOS.
    “Given the large burden of under- and mis-diagnosed PCOS in the community and its potentially serious outcomes, we wanted to identify the utility of AI/ML in the identification of patients that may be at risk for PCOS,” said Janet Hall, M.D., senior investigator and endocrinologist at the National Institute of Environmental Health Sciences (NIEHS), part of NIH, and a study co-author. “The effectiveness of AI and machine learning in detecting PCOS was even more impressive than we had thought.”
    PCOS occurs when the ovaries do not work properly, and in many cases, is accompanied by elevated levels of testosterone. The disorder can cause irregular periods, acne, extra facial hair, or hair loss from the head. Women with PCOS are often at an increased risk for developing type 2 diabetes, as well as sleep, psychological, cardiovascular, and other reproductive disorders such as uterine cancer and infertility.
    “PCOS can be challenging to diagnose given its overlap with other conditions,” said Skand Shekhar, M.D., senior author of the study and assistant research physician and endocrinologist at the NIEHS. “These data reflect the untapped potential of incorporating AI/ML in electronic health records and other clinical settings to improve the diagnosis and care of women with PCOS.”
    Study authors suggested integrating large population-based studies with electronic health datasets and analyzing common laboratory tests to identify sensitive diagnostic biomarkers that can facilitate the diagnosis of PCOS.
    Diagnosis is based on widely accepted standardized criteria that have evolved over the years; these typically include clinical features (e.g., acne, excess hair growth, and irregular periods) along with laboratory findings (e.g., high blood testosterone) and radiological findings (e.g., multiple small cysts and increased ovarian volume on ovarian ultrasound). However, because some of the features of PCOS can co-occur with other disorders such as obesity, diabetes, and cardiometabolic disorders, it frequently goes unrecognized.
    AI refers to the use of computer-based systems or tools to mimic human intelligence and to help make decisions or predictions. ML is a subdivision of AI focused on learning from previous events and applying this knowledge to future decision-making. AI can process massive amounts of distinct data, such as that derived from electronic health records, making it an ideal aid in the diagnosis of difficult-to-diagnose disorders like PCOS.
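    The review covers many different AI/ML approaches; as a minimal sketch of the kind of classification involved, here is a toy nearest-centroid classifier on hypothetical feature values (the features, numbers, and labels are illustrative, not from any study in the review):

```python
def train_centroids(samples):
    """Compute per-class mean feature vectors (centroids)."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def classify(features, centroids):
    """Assign the label of the nearest centroid (squared Euclidean)."""
    def dist(c):
        return sum((x - y) ** 2 for x, y in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical (testosterone ng/dL, cycle length in days) training data.
training = [
    ((80, 45), "PCOS"), ((95, 60), "PCOS"),
    ((30, 28), "no PCOS"), ((40, 30), "no PCOS"),
]
centroids = train_centroids(training)
print(classify((85, 50), centroids))  # → PCOS
```

    Real diagnostic models in the reviewed studies are far more sophisticated, but they share this basic shape: learn a decision rule from labeled patient records, then apply it to new cases.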

  • Top scientists, engineers choose startups over tech behemoths for reasons other than money

    Fledgling technology startups need to hire skilled scientists and engineers to bring their cutting-edge products from the proverbial Silicon Valley garage to the market. But to attract the best and the brightest, startups also must routinely compete with established firms for top talent.
    Commonly held views on job-choice decision-making would point to highly sought-after tech workers choosing jobs with established companies that offer the highest pay and benefits, ostensibly leaving resource-constrained startups to sift through a weaker talent pool. But new research co-written by a University of Illinois Urbana-Champaign expert in technology entrepreneurship and scientific labor markets proposes an alternative theory: Some high-ability, in-demand tech workers would prefer to join startup firms despite the lower pay and riskier prospects for the company’s long-term survival because they’re attracted to the startup culture and environment.
    Non-monetary benefits such as independence, autonomy and the ability to work on innovative technologies are among the key selling points for talented scientists and engineers who spurn working for a bigger technology firm in favor of a startup, said Michael Roach, a professor of business administration at the Gies College of Business at Illinois.
    “Certain workers are willing to take a job for lower pay in exchange for other benefits such as working for a smaller firm and feeling like they’re contributing to the creation of something new and novel,” he said. “For some high-ability tech workers, there’s more significance to being employee number 20 than employee number 2,000.”
    The paper, which was published by the journal Management Science, was co-written by Henry Sauermann of the European School of Management and Technology Berlin.
    Using a longitudinal survey that followed more than 2,300 science and engineering doctoral students from graduate school through their first job, the researchers found that both an individual’s ability and career preferences strongly predicted post-graduate employment with a startup as opposed to a bigger, more-established tech firm.
    “There’s a lot of evidence using U.S. Census and other administrative data that shows that employees at small firms are paid less, which has been interpreted as startups not being able to attract high-ability people,” Roach said. “But we found that startups are able to recruit high-ability workers despite paying their new hires approximately 20% less than established firms.”
    The findings are consistent with preference-based job sorting in that working at a startup may be a better fit for some workers, Roach said.

  • What the French Revolution can teach us about inflation

    More than 200 years later, historians are still gleaning unexpected insights from the French Revolution: not about tyranny or liberty, but about inflation.
    “Revolutionary France experienced the first modern hyperinflation,” said Louis Rouanet, Ph.D., assistant professor at The University of Texas at El Paso. “Although it happened more than two centuries ago, it offers relevant lessons for today.”
    Rouanet is the lead author of the new study, “Assignats or death: The politics and dynamics of hyperinflation in revolutionary France,” published recently in the European Economic Review. A faculty member of the UTEP Department of Economics and Finance, Rouanet is an expert in economic history, specializing in revolutionary France, and a Frenchman himself. The study advances a new framework for understanding the monetary phenomenon of hyperinflation, a period of rapid and extreme price increases.
    Rouanet’s analysis found that political instability and shifting public expectations were key in explaining the scenario that unfolded between May 1794 and May 1796, when the French revolutionary governments’ decision to issue a paper currency called the assignat led to extreme inflation. Price levels increased more than 50% per month, complicating an already volatile economic situation. The currency was primarily supported by a political group known as the Jacobins, a party whose power waned throughout the revolution.
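    Compounding makes that figure starker than it may sound. A back-of-the-envelope calculation (assuming a constant 50% per month, though actual rates varied) shows how quickly the price level multiplies:

```python
def price_multiplier(monthly_rate, months):
    """Factor by which the price level grows after `months` of constant
    `monthly_rate` inflation (e.g. 0.5 for 50% per month)."""
    return (1 + monthly_rate) ** months

for months in (6, 12, 24):
    print(months, round(price_multiplier(0.5, months), 1))
```

    At a constant 50% per month, prices roughly double every 1.7 months; after two years, the price level is more than 16,000 times its starting value.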
    The French Revolution began at the end of the 18th century when extreme popular discontent with feudal institutions erupted into revolution, Rouanet said. The conflict reshaped the French government and led to the end of the feudal system, a hierarchical system of government that placed the king at the top, nobility and clergy below him, and peasants below all.
    During the revolution, the government was bankrupt and expropriated substantial amounts of land and assets held by the Catholic Church in order to sell them. However, they were unable to sell the land fast enough to pay back creditors. To stimulate purchases, the government began issuing a paper currency called assignat. In order to prevent inflation, revolutionary officials promised to retire the assignat from circulation and burn the notes once they were used to buy property, but this commitment was not always honored, prompting public mistrust.
    At the same time, the Jacobin party was weakening. With failed insurrections in Paris and the establishment of a new regime known as the Directory, the assignat’s key political backers were on their way out.

  • Brain inspires more robust AI

    Most artificially intelligent systems are based on neural networks, algorithms inspired by biological neurons found in the brain. These networks can consist of multiple layers, with inputs coming in one side and outputs going out of the other. The outputs can be used to make automatic decisions, for example, in driverless cars. Attacks to mislead a neural network can involve exploiting vulnerabilities in the input layers, but typically only the initial input layer is considered when engineering a defense. For the first time, researchers augmented a neural network’s inner layers with a process involving random noise to improve its resilience.
    Artificial intelligence (AI) has become a relatively common thing; chances are you have a smartphone with an AI assistant or you use a search engine powered by AI. While it’s a broad term that can include many different ways to essentially process information and sometimes make decisions, AI systems are often built using artificial neural networks (ANN) analogous to those of the brain. And like the brain, ANNs can sometimes get confused, either by accident or by the deliberate actions of a third party. Think of something like an optical illusion — it might make you feel like you are looking at one thing when you are really looking at another.
    The difference between things that confuse an ANN and things that might confuse us, however, is that some visual input could appear perfectly normal, or at least might be understandable to us, but may nevertheless be interpreted as something completely different by an ANN. A trivial example might be an image-classifying system mistaking a cat for a dog, but a more serious example could be a driverless car mistaking a stop signal for a right-of-way sign. And it’s not just the already controversial example of driverless cars; there are medical diagnostic systems, and many other sensitive applications that take inputs and inform, or even make, decisions that can affect people.
    As inputs aren’t necessarily visual, it’s not always easy to analyze why a system might have made a mistake at a glance. Attackers trying to disrupt a system based on ANNs can take advantage of this, subtly altering an anticipated input pattern so that it will be misinterpreted, and the system will behave wrongly, perhaps even problematically. There are some defense techniques for attacks like these, but they have limitations. Recent graduate Jumpei Ukita and Professor Kenichi Ohki from the Department of Physiology at the University of Tokyo Graduate School of Medicine devised and tested a new way to improve ANN defense.
    “Neural networks typically comprise layers of virtual neurons. The first layers will often be responsible for analyzing inputs by identifying the elements that correspond to a certain input,” said Ohki. “An attacker might supply an image with artifacts that trick the network into misclassifying it. A typical defense for such an attack might be to deliberately introduce some noise into this first layer. It sounds counterintuitive that this might help, but by doing so, the network can adapt better to a visual scene or other set of inputs. However, this method is not always effective, and we thought we could improve matters by looking beyond the input layer to further inside the network.”
    Ukita and Ohki aren’t just computer scientists. They have also studied the human brain, and this inspired them to apply a phenomenon they knew from neuroscience to an ANN: adding noise not only to the input layer, but to deeper layers as well. Deep-layer noise is typically avoided out of fear that it will degrade the network’s performance under normal conditions. But the duo found this not to be the case; instead, the noise promoted greater adaptability in their test ANN, which reduced its susceptibility to simulated adversarial attacks.
    “Our first step was to devise a hypothetical method of attack that strikes deeper than the input layer. Such an attack would need to withstand the resilience of a network with a standard noise defense on its input layer. We call these feature-space adversarial examples,” said Ukita. “These attacks work by supplying an input intentionally far from, rather than near to, the input that an ANN can correctly classify. But the trick is to present subtly misleading artifacts to the deeper layers instead. Once we demonstrated the danger from such an attack, we injected random noise into the deeper hidden layers of the network to boost their adaptability and therefore defensive capability. We are happy to report it works.”
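    The published architecture and noise parameters are not reproduced here, but the core idea, injecting random noise into hidden-layer activations rather than only the input, can be sketched for a toy feed-forward network (layer sizes and noise scale are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, weights, hidden_noise_std=0.1, train=True):
    """Forward pass through a small MLP, injecting Gaussian noise into
    every hidden layer's activations (the defense sketched here), not
    just the input.  Sizes and noise scale are illustrative."""
    h = x
    for i, w in enumerate(weights):
        h = h @ w
        if i < len(weights) - 1:          # hidden layers only
            h = np.maximum(h, 0.0)        # ReLU activation
            if train and hidden_noise_std > 0:
                h = h + rng.normal(0.0, hidden_noise_std, h.shape)
    return h

# Toy network: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs.
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 8)), rng.normal(size=(8, 2))]
x = rng.normal(size=(1, 4))
print(forward(x, weights).shape)  # (1, 2)
```

    The noise is applied only during training; at inference time (`train=False`) the pass is deterministic, which mirrors common practice for noise-based regularization.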
    While the new idea does prove robust, the team wishes to develop it further to make it even more effective against anticipated attacks, as well as other kinds of attacks they have not yet tested it against. At present, the defense only works on this specific kind of attack.
    “Future attackers might try to consider attacks that can escape the feature-space noise we considered in this research,” said Ukita. “Indeed, attack and defense are two sides of the same coin; it’s an arms race that neither side will back down from, so we need to continually iterate, improve and innovate new ideas in order to protect the systems we use every day.”

  • Making AI smarter with an artificial, multisensory integrated neuron

    The feel of a cat’s fur can reveal some information, but seeing the feline provides critical details: is it a housecat or a lion? While the sound of fire crackling may be ambiguous, its scent confirms the burning wood. Our senses synergize to give a comprehensive understanding, particularly when individual signals are subtle. The collective sum of biological inputs can be greater than their individual contributions. Robots tend to follow more straightforward addition, but Penn State researchers have now harnessed the biological concept for application in artificial intelligence (AI) to develop the first artificial, multisensory integrated neuron.
    Led by Saptarshi Das, associate professor of engineering science and mechanics at Penn State, the team published their work on September 15 in Nature Communications.
    “Robots make decisions based on the environment they are in, but their sensors do not generally talk to each other,” said Das, who also has joint appointments in electrical engineering and in materials science and engineering. “A collective decision can be made through a sensor processing unit, but is that the most efficient or effective method? In the human brain, one sense can influence another and allow the person to better judge a situation.”
    For instance, a car might have one sensor scanning for obstacles, while another senses darkness to modulate the intensity of the headlights. Individually, these sensors relay information to a central unit which then instructs the car to brake or adjust the headlights. According to Das, this process consumes more energy. Allowing sensors to communicate directly with each other can be more efficient in terms of energy and speed — particularly when the inputs from both are faint.
    “Biology enables small organisms to thrive in environments with limited resources, minimizing energy consumption in the process,” said Das, who is also affiliated with the Materials Research Institute. “The requirements for different sensors are based on the context — in a dark forest, you’d rely more on listening than seeing, but we don’t make decisions based on just one sense. We have a complete sense of our surroundings, and our decision making is based on the integration of what we’re seeing, hearing, touching, smelling, etcetera. The senses evolved together in biology, but separately in AI. In this work, we’re looking to combine sensors and mimic how our brains actually work.”
    The team focused on integrating a tactile sensor and a visual sensor so that the output of one sensor modifies the other, with the help of visual memory. According to Muhtasim Ul Karim Sadaf, a third-year doctoral student in engineering science and mechanics, even a short-lived flash of light can significantly enhance the chance of successful movement through a dark room.
    “This is because visual memory can subsequently influence and aid the tactile responses for navigation,” Sadaf said. “This would not be possible if our visual and tactile cortex were to respond to their respective unimodal cues alone. We have a photo memory effect, where light shines and we can remember. We incorporated that ability into a device through a transistor that provides the same response.”
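    A toy threshold model captures the super-additive behavior described here: either faint cue alone fails to trigger a response, but together, with a cross-modal coupling term, they do. The parameters are illustrative, not measured from the device:

```python
def integrated_response(visual, tactile, threshold=1.0, coupling=0.5):
    """Toy model of super-additive multisensory integration: each cue
    alone may sit below the firing threshold, but their sum plus a
    cross-modal coupling term can exceed it.  Parameters are
    illustrative, not taken from the study."""
    drive = visual + tactile + coupling * visual * tactile
    return drive >= threshold  # True means the neuron "spikes"

print(integrated_response(0.6, 0.0))  # faint light alone: no spike
print(integrated_response(0.0, 0.6))  # faint touch alone: no spike
print(integrated_response(0.6, 0.6))  # combined faint cues: spike
```

    The multiplicative coupling term is what makes the whole greater than the sum of its parts, the property the researchers highlight as missing from robots whose sensors never talk to each other.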
    The researchers fabricated the multisensory neuron by connecting a tactile sensor to a phototransistor based on a monolayer of molybdenum disulfide, a compound that exhibits unique electrical and optical characteristics useful for detecting light and supporting transistors. The sensor generates electrical spikes in a manner reminiscent of neurons processing information, allowing it to integrate both visual and tactile cues.

  • Groundbreaking soft valve technology enabling sensing and control integration in soft robots

    Soft inflatable robots have emerged as a promising paradigm for applications that require inherent safety and adaptability. However, the integration of sensing and control systems in these robots has posed significant challenges without compromising their softness, form factor, or capabilities. Addressing this obstacle, a research team jointly led by Professor Jiyun Kim (Department of New Material Engineering, UNIST) and Professor Jonbum Bae (Department of Mechanical Engineering, UNIST) has developed groundbreaking “soft valve” technology — an all-in-one solution that integrates sensors and control valves while maintaining complete softness.
    Traditionally, soft robot bodies coexisted with rigid electronic components for perception purposes. The study conducted by this research team introduces a novel approach to overcome this limitation by creating soft analogs of sensors and control valves that operate without electricity. The resulting tube-shaped part serves dual functions: detecting external stimuli and precisely controlling driving motion using only air pressure. By eliminating the need for electricity-dependent components, these all-soft valves enable safe operation underwater or in environments where sparks may pose risks — while simultaneously reducing weight burdens on robotic systems. Moreover, each component is inexpensive at approximately 800 Won.
    “Previous soft robots had flexible bodies but relied on hard electronic parts for stimulus detection sensors and drive control units,” explained Professor Kim. “Our study focuses on making both sensors and drive control parts using soft materials.”
    The research team showcased various applications utilizing this groundbreaking technology. They created universal tongs capable of delicately picking up fragile items such as potato chips — preventing breakage caused by excessive force exerted by conventional rigid robot hands. Additionally, they successfully employed these all-soft components to develop wearable elbow assist robots designed to reduce muscle burden caused by repetitive tasks or strenuous activities involving arm movements. The elbow support automatically adjusts according to the angle at which an individual’s arm is bent — a breakthrough contributing to a 63% average decrease in the force exerted on the elbow when wearing the robot.
    The soft valve operates by utilizing air flow within a tube-shaped structure. When tension is applied to one end of the tube, a helically wound thread inside compresses it, controlling inflow and outflow of air. This accordion-like motion allows for precise and flexible movements without relying on electrical power.
    Furthermore, the research team confirmed that by programming different structures or numbers of threads within the tube, they could accurately control airflow variations. This programmability enables customized adjustments to suit specific situations and requirements — providing flexibility in driving unit response even with consistent external forces applied to the end of the tube.
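    A toy model of that programmability might look like the following, where tube-end tension and thread count together set how far the valve closes; all constants are chosen for illustration, not taken from the study:

```python
def airflow(tension, n_threads, max_flow=1.0, k=0.8):
    """Toy model of the soft valve: applied tension makes the helical
    threads compress the tube, throttling flow, and more threads means
    stronger compression per unit tension.  All constants are
    hypothetical.  Returns flow as a fraction of the fully open rate."""
    closure = min(1.0, k * n_threads * tension)  # 0 = open, 1 = pinched shut
    return max_flow * (1.0 - closure)

# The same tension throttles a 4-thread tube more than a 2-thread tube.
print(airflow(tension=0.3, n_threads=2))  # partially closed
print(airflow(tension=0.3, n_threads=4))  # nearly shut
```

    Swapping the thread count at fabrication time changes the tension-to-flow curve without any electronics, which is the sense in which the valve's response is "programmed" into the material.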
    “These newly developed components can be easily employed using material programming alone, eliminating electronic devices,” said Professor Bae. “This breakthrough will significantly contribute to advancements in various wearable systems.”
    This groundbreaking soft valve technology marks a significant step toward fully soft, electronics-free robots capable of autonomous operation — a crucial milestone for enhancing safety and adaptability across numerous industries.
    Support for this work was provided by various organizations including Korea’s National Research Foundation (NRF), Korea Institute of Materials Science (KIMS), and Korea Evaluation Institute of Industrial Technology (KEIT).