More stories

  • Assessing unintended consequences in AI-based neurosurgical training

    Virtual reality simulators can help learners improve their technical skills faster and with no risk to patients. In the field of neurosurgery, they allow medical students to practice complex operations before using a scalpel on a real patient. When combined with artificial intelligence, these tutoring systems can offer tailored feedback like a human instructor, identifying areas where the students need to improve and making suggestions on how to achieve expert performance.
    A new study from the Neurosurgical Simulation and Artificial Intelligence Learning Centre at The Neuro (Montreal Neurological Institute-Hospital) of McGill University, however, shows that human instruction is still necessary to detect and compensate for unintended, and sometimes negative, changes in neurosurgeon behaviour after virtual reality AI training.
    In the study, 46 medical students performed a tumour removal procedure on a virtual reality simulator. Half of them were randomly selected to receive instruction from an AI-powered intelligent tutor called the Virtual Operative Assistant (VOA), which uses a machine learning algorithm to teach surgical techniques and provide personalized feedback. The other half served as a control group by receiving no feedback. The students’ work was then compared to performance benchmarks selected by a team of established neurosurgeons.
    The results showed that AI-tutored students caused 55 per cent less damage to healthy tissues than the control group. AI-tutored students also showed a 59 per cent reduction in average distance between instruments in each hand and applied 46 per cent less maximum force, both important safety measures.
    However, AI-tutored students also showed some negative outcomes. For example, their dominant hand movements had 50 per cent lower velocity and 45 per cent lower acceleration than the control group, making their operations less efficient. The speed at which they removed tumour tissue was also 29 per cent lower in the AI-tutored group than the control group.
    These unintended outcomes underline the importance of human instructors in the learning process, to promote both safety and efficiency in students.
    “AI systems are not perfect,” says Ali Fazlollahi, a medical student researcher at the Neurosurgical Simulation and Artificial Intelligence Learning Centre and the study’s first author. “Achieving mastery will still require some level of apprenticeship from an expert. Programs adopting AI will enable learners to monitor their competency and focus their intraoperative learning time with instructors more efficiently and on their individual tailored learning goals. We’re currently working towards finding an optimal hybrid mode of instruction in a crossover trial.”
    Fazlollahi says his findings have implications beyond neurosurgery because many of the same principles are applied in other fields of skills training.
    “This includes surgical education, not just neurosurgery, and also a range of other fields from aviation to military training and construction,” he says. “Using AI alone to design and run a technical skills curriculum can lead to unintended outcomes that will require oversight from human experts to ensure excellence in training and patient care.”
    “Intelligent tutors powered by AI are becoming a valuable tool in the evaluation and training of the next generation of neurosurgeons,” says Dr. Rolando Del Maestro, the study’s senior author. “However, it is essential that surgical educators are an integral part of the development, application, and monitoring of these AI systems to maximize their ability to increase the mastery of neurosurgical skills and improve patient outcomes.”

  • Predictive model could improve hydrogen station availability

    Consumer confidence in driving hydrogen-fueled vehicles could be improved by having station operators adopt a predictive model that helps them anticipate maintenance needs, according to researchers at the U.S. Department of Energy’s National Renewable Energy Laboratory (NREL) and Colorado State University (CSU).
    Stations shutting down for unscheduled maintenance reduces hydrogen fueling availability to consumers and may slow adoption of these types of fuel cell electric vehicles, the researchers noted. The use of what is known as a prognostics health monitoring (PHM) model would allow hydrogen stations to reduce these unscheduled events.
    “Motorists expect to be able to fuel their vehicles without any problems. We want to ensure motorists who drive hydrogen-fueled cars have the same experience,” said Jennifer Kurtz, lead author of the new paper, “Hydrogen Station Prognostics and Health Monitoring Model,” which appears in the International Journal of Hydrogen Energy. “This predictive model can let station operators know in advance when a problem might occur and minimize any disruptions that motorists might experience with hydrogen fueling.”
    Co-authored by Spencer Gilleon of NREL and Thomas Bradley of CSU, the article posits the PHM model would improve station availability and consumer confidence.
    The availability of hydrogen as a vehicle fuel is low compared to the ubiquity of gasoline, a fact reflected in the number of stations that dispense the low-emission fuel. While California has more than 10,000 gasoline stations, the Hydrogen Fuel Cell Partnership counts just 59 retail hydrogen stations across the state. With relatively few choices, motorists who rely on hydrogen must be confident their needed fuel is available. Station operators must make any necessary repairs to meet the demands of consumers, but they also must investigate the causes of any failures to avoid future problems.
    Data from the National Fuel Cell Technology Evaluation Center reveals the single most common reason hydrogen stations shut down for unscheduled maintenance is problems with the dispenser system, which encompasses such items as the hoses and dispenser valves as well as the user interface. By using a data-based PHM, station operators could reduce the frequency of unscheduled maintenance and increase the frequency of preventive maintenance. The researchers have dubbed this particular PHM “hydrogen station prognostics health monitoring,” or H2S PHM.
    The H2S PHM calculates the probability that a component will continue working without failure, based on how many fills the station has completed. The model can also estimate the remaining useful life of each component, thereby lowering maintenance costs and making stations more reliable. Using a hypothetical example, the researchers considered a dispenser valve that the H2S PHM has flagged as needing attention. The station operator can then prepare for the upcoming maintenance and schedule a technician to come when demand for hydrogen will be low. That cuts down on the amount of time a station is unable to fuel vehicles. If the valve were instead to fail without warning, the operator would have to call a technician and wait for them to arrive and diagnose the problem, all while being unable to provide fuel.
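    The paper's exact failure model is not reproduced here; as a rough sketch of the idea, the snippet below scores a component with an assumed Weibull survival curve, where the shape and scale parameters, the 90 per cent reliability threshold, and the 20,000-fill example are all invented for illustration.

```python
import math

def reliability(fills, shape=1.5, scale=50000):
    """Probability a component survives `fills` dispensing cycles,
    under an assumed Weibull failure model (parameters are illustrative)."""
    return math.exp(-((fills / scale) ** shape))

def remaining_useful_life(fills, shape=1.5, scale=50000, threshold=0.9):
    """Fills remaining until reliability drops below `threshold`,
    i.e. roughly when maintenance should be scheduled."""
    # Invert R(n) = threshold for n, then subtract the fills already completed.
    n_at_threshold = scale * (-math.log(threshold)) ** (1 / shape)
    return max(0.0, n_at_threshold - fills)

print(reliability(20000))            # survival probability after 20,000 fills
print(remaining_useful_life(20000))  # 0 here: the threshold was already crossed
```

    A real deployment would fit the shape and scale parameters to station maintenance logs rather than assume them.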
    Kurtz, the director of NREL’s Energy Conversion and Storage Systems Center, noted that limitations exist when applying H2S PHM to the reliability of a hydrogen station. The method would not predict sudden failures, which can be caused by human error. The H2S PHM is also only as good as the available data, and more data is needed.
    The Department of Energy’s Hydrogen and Fuel Cell Technologies Office funded the research.
    NREL is the Department of Energy’s primary national laboratory for renewable energy and energy efficiency research and development. NREL is operated for DOE by the Alliance for Sustainable Energy LLC.

  • Buried ancient Roman glass formed substance with modern applications

    Some 2,000 years ago in ancient Rome, glass vessels carrying wine or water, or perhaps an exotic perfume, tumbled from a table in a marketplace and shattered to pieces on the street. As centuries passed, the fragments were covered by layers of dust and soil and exposed to a continuous cycle of changes in temperature, moisture, and surrounding minerals.
    Now these tiny pieces of glass are being uncovered at construction sites and archaeological digs, revealing themselves to be something extraordinary. On their surface is a mosaic of iridescent colors of blue, green and orange, with some displaying shimmering gold-colored mirrors.
    These beautiful glass artifacts are often set in jewelry as pendants or earrings, while larger, more complete objects are displayed in museums.
    For Fiorenzo Omenetto and Giulia Guidetti, professors of engineering at the Tufts University Silklab and experts in materials science, what’s fascinating is how the molecules in the glass rearranged and recombined with minerals over thousands of years to form what are called photonic crystals — ordered arrangements of atoms that filter and reflect light in very specific ways.
    Photonic crystals have many applications in modern technology. They can be used to create waveguides, optical switches and other devices for very fast optical communications in computers and over the internet. Since they can be engineered to block certain wavelengths of light while allowing others to pass, they are used in filters, lasers, mirrors, and anti-reflection (stealth) devices.
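    The wavelength such a periodic structure reflects most strongly can be sketched with the first-order Bragg condition for a two-layer stack. The refractive indices and layer thicknesses below are illustrative values, not measurements from the study.

```python
def bragg_peak_wavelength(n1, d1, n2, d2):
    """First-order reflection peak (same units as d1, d2) of a periodic
    two-layer stack, from the Bragg condition: lambda = 2*(n1*d1 + n2*d2)."""
    return 2 * (n1 * d1 + n2 * d2)

# Hypothetical silica-like layer and void layer, each ~100 nm thick
peak = bragg_peak_wavelength(1.46, 100, 1.0, 100)
print(peak)  # ~492 nm, i.e. in the blue-green part of the visible spectrum
```

    Varying the layer thicknesses shifts the peak across the spectrum, which is consistent with the range of colors seen on the patina.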
    In a recent study published in the Proceedings of the National Academy of Sciences (PNAS) USA, Omenetto, Guidetti and collaborators report on the unique atomic and mineral structures that built up from the glass’ original silicate and mineral constituents, modulated by the pH of the surrounding environment, and the fluctuating levels of groundwater in the soil.
    The project started by chance during a visit to the Italian Institute of Technology’s (IIT) Center for Cultural Heritage Technology. “This beautiful sparkling piece of glass on the shelf attracted our attention,” said Omenetto. “It was a fragment of Roman glass recovered near the ancient city of Aquileia, Italy.” Arianna Traviglia, director of the Center, said her team referred to it affectionately as the ‘wow glass’. They decided to take a closer look.

  • AI and machine learning can successfully diagnose polycystic ovary syndrome

    Artificial intelligence (AI) and machine learning (ML) can effectively detect and diagnose polycystic ovary syndrome (PCOS), the most common hormone disorder among women, typically those between ages 15 and 45, according to a new study by the National Institutes of Health. Researchers systematically reviewed published scientific studies that used AI/ML to analyze data to diagnose and classify PCOS and found that AI/ML-based programs were able to successfully detect PCOS.
    “Given the large burden of under- and mis-diagnosed PCOS in the community and its potentially serious outcomes, we wanted to identify the utility of AI/ML in the identification of patients that may be at risk for PCOS,” said Janet Hall, M.D., senior investigator and endocrinologist at the National Institute of Environmental Health Sciences (NIEHS), part of NIH, and a study co-author. “The effectiveness of AI and machine learning in detecting PCOS was even more impressive than we had thought.”
    PCOS occurs when the ovaries do not work properly, and in many cases, is accompanied by elevated levels of testosterone. The disorder can cause irregular periods, acne, extra facial hair, or hair loss from the head. Women with PCOS are often at an increased risk for developing type 2 diabetes, as well as sleep, psychological, cardiovascular, and other reproductive disorders such as uterine cancer and infertility.
    “PCOS can be challenging to diagnose given its overlap with other conditions,” said Skand Shekhar, M.D., senior author of the study and assistant research physician and endocrinologist at the NIEHS. “These data reflect the untapped potential of incorporating AI/ML in electronic health records and other clinical settings to improve the diagnosis and care of women with PCOS.”
    Study authors suggested integrating large population-based studies with electronic health datasets and analyzing common laboratory tests to identify sensitive diagnostic biomarkers that can facilitate the diagnosis of PCOS.
    Diagnosis is based on widely accepted, standardized criteria that have evolved over the years and typically includes clinical features (e.g., acne, excess hair growth, and irregular periods) accompanied by laboratory findings (e.g., high blood testosterone) and radiological findings (e.g., multiple small cysts and increased ovarian volume on ovarian ultrasound). However, because some features of PCOS can co-occur with other disorders such as obesity, diabetes, and cardiometabolic disorders, it frequently goes unrecognized.
    AI refers to the use of computer-based systems or tools to mimic human intelligence and to help make decisions or predictions. ML is a subdivision of AI focused on learning from previous events and applying this knowledge to future decision-making. AI can process massive amounts of distinct data, such as that derived from electronic health records, making it an ideal aid in the diagnosis of difficult-to-diagnose disorders like PCOS.
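    As a minimal sketch of how such a classifier might work (on synthetic data with invented feature distributions, not the clinical datasets used by the reviewed studies), one can score patients along a crude discriminant direction and check how well the score separates cases from controls:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Synthetic cohort, purely illustrative: label 1 = PCOS, 0 = control
y = rng.integers(0, 2, n)
testosterone = rng.normal(45 + 25 * y, 10)   # ng/dL, elevated in PCOS
bmi = rng.normal(26 + 4 * y, 4)
cycle = rng.normal(29 + 10 * y, 5)           # days, longer/irregular in PCOS
X = np.column_stack([testosterone, bmi, cycle])

# Minimal "ML": weight each feature by its z-scored class separation
mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
w = (mu1 - mu0) / X.std(axis=0)              # crude discriminant direction
score = X @ w

# AUC: probability a random case scores above a random control
pos, neg = score[y == 1], score[y == 0]
auc = (pos[:, None] > neg[None, :]).mean()
print(f"ROC AUC on synthetic data: {auc:.2f}")
```

    The studies in the review used far richer models and real electronic health records; the point here is only the pipeline shape: features in, a learned score out, and a separability metric to evaluate it.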

  • Top scientists, engineers choose startups over tech behemoths for reasons other than money

    Fledgling technology startups need to hire skilled scientists and engineers to bring their cutting-edge products from the proverbial Silicon Valley garage to the market. But to attract the best and the brightest, startups also must routinely compete with established firms for top talent.
    Commonly held views on job-choice decision-making would point to highly sought-after tech workers choosing jobs with established companies that offer the highest pay and benefits, ostensibly leaving resource-constrained startups to sift through a weaker talent pool. But new research co-written by a University of Illinois Urbana-Champaign expert in technology entrepreneurship and scientific labor markets proposes an alternative theory: Some high-ability, in-demand tech workers would prefer to join startup firms despite the lower pay and riskier prospects for the company’s long-term survival because they’re attracted to the startup culture and environment.
    Non-monetary benefits such as independence, autonomy and the ability to work on innovative technologies are among the key selling points for talented scientists and engineers who spurn working for a bigger technology firm in favor of a startup, said Michael Roach, a professor of business administration at the Gies College of Business at Illinois.
    “Certain workers are willing to take a job for lower pay in exchange for other benefits such as working for a smaller firm and feeling like they’re contributing to the creation of something new and novel,” he said. “For some high-ability tech workers, there’s more significance to being employee number 20 than employee number 2,000.”
    The paper, which was published by the journal Management Science, was co-written by Henry Sauermann of the European School of Management and Technology Berlin.
    Using a longitudinal survey that followed more than 2,300 science and engineering doctoral students from graduate school through their first job, the researchers found that both an individual’s ability and career preferences strongly predicted post-graduate employment with a startup as opposed to a bigger, more-established tech firm.
    “There’s a lot of evidence using U.S. Census and other administrative data that shows that employees at small firms are paid less, which has been interpreted as startups not being able to attract high-ability people,” Roach said. “But we found that startups are able to recruit high-ability workers despite paying their new hires approximately 20% less than established firms.”
    The findings are consistent with preference-based job sorting in that working at a startup may be a better fit for some workers, Roach said.

  • What the French Revolution can teach us about inflation

    More than 200 years later, historians are still gleaning unexpected insights from the French Revolution — not about tyranny or liberty, but about inflation.
    “Revolutionary France experienced the first modern hyperinflation,” said Louis Rouanet, Ph.D., assistant professor at The University of Texas at El Paso. “Although it happened more than two centuries ago, it offers relevant lessons for today.”
    Rouanet is the lead author of the new study, “Assignats or death: The politics and dynamics of hyperinflation in revolutionary France,” published recently in the European Economic Review. A faculty member of the UTEP Department of Economics and Finance, Rouanet is an expert in economic history, specializing in revolutionary France, and a Frenchman himself. The study advances a new framework for understanding the monetary phenomenon hyperinflation, a period of rapid and extreme price increases.
    Rouanet’s analysis found that political instability and shifting public expectations were key in explaining the scenario that unfolded between May 1794 and May 1796, when the French revolutionary governments’ decision to issue a paper currency called the assignat led to extreme inflation. Price levels increased more than 50% per month, complicating an already volatile economic situation. The currency was primarily supported by a political group known as the Jacobins, a party whose power waned throughout the revolution.
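    The scale of a 50 per cent monthly rise is easy to understate; compounding it over a year shows the price level multiplying roughly 130-fold:

```python
monthly_inflation = 0.50                       # 50% per month, from the study
factor_per_year = (1 + monthly_inflation) ** 12
print(f"Price level multiplies by about {factor_per_year:.0f}x in a year")
```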
    The French Revolution began at the end of the 18th century when extreme popular discontent with feudal institutions erupted into revolution, Rouanet said. The conflict reshaped the French government and led to the end of the feudal system, a hierarchical system of government that placed the king at the top, nobility and clergy below him, and peasants below all.
    During the revolution, the government was bankrupt and expropriated substantial amounts of land and assets held by the Catholic Church in order to sell them. However, they were unable to sell the land fast enough to pay back creditors. To stimulate purchases, the government began issuing a paper currency called assignat. In order to prevent inflation, revolutionary officials promised to retire the assignat from circulation and burn the notes once they were used to buy property, but this commitment was not always honored, prompting public mistrust.
    At the same time, the strength of the Jacobin party was weakening. With failed insurrections in Paris and the establishment of a new regime known as the Directory, the key backers of the assignat were on their way out.

  • Brain inspires more robust AI

    Most artificially intelligent systems are based on neural networks, algorithms inspired by biological neurons found in the brain. These networks can consist of multiple layers, with inputs coming in one side and outputs going out of the other. The outputs can be used to make automatic decisions, for example, in driverless cars. Attacks to mislead a neural network can involve exploiting vulnerabilities in the input layers, but typically only the initial input layer is considered when engineering a defense. For the first time, researchers augmented a neural network’s inner layers with a process involving random noise to improve its resilience.
    Artificial intelligence (AI) has become a relatively common thing; chances are you have a smartphone with an AI assistant or you use a search engine powered by AI. While it’s a broad term that can include many different ways to essentially process information and sometimes make decisions, AI systems are often built using artificial neural networks (ANN) analogous to those of the brain. And like the brain, ANNs can sometimes get confused, either by accident or by the deliberate actions of a third party. Think of something like an optical illusion — it might make you feel like you are looking at one thing when you are really looking at another.
    The difference between things that confuse an ANN and things that might confuse us, however, is that some visual input could appear perfectly normal, or at least might be understandable to us, but may nevertheless be interpreted as something completely different by an ANN. A trivial example might be an image-classifying system mistaking a cat for a dog, but a more serious example could be a driverless car mistaking a stop signal for a right-of-way sign. And it’s not just the already controversial example of driverless cars; there are medical diagnostic systems, and many other sensitive applications that take inputs and inform, or even make, decisions that can affect people.
    As inputs aren’t necessarily visual, it’s not always easy to analyze why a system might have made a mistake at a glance. Attackers trying to disrupt a system based on ANNs can take advantage of this, subtly altering an anticipated input pattern so that it will be misinterpreted, and the system will behave wrongly, perhaps even problematically. There are some defense techniques for attacks like these, but they have limitations. Recent graduate Jumpei Ukita and Professor Kenichi Ohki from the Department of Physiology at the University of Tokyo Graduate School of Medicine devised and tested a new way to improve ANN defense.
    “Neural networks typically comprise layers of virtual neurons. The first layers will often be responsible for analyzing inputs by identifying the elements that correspond to a certain input,” said Ohki. “An attacker might supply an image with artifacts that trick the network into misclassifying it. A typical defense for such an attack might be to deliberately introduce some noise into this first layer. It sounds counterintuitive that this might help, but by doing so, it allows for greater adaptation to a visual scene or other set of inputs. However, this method is not always so effective and we thought we could improve the matter by looking beyond the input layer to further inside the network.”
    Ukita and Ohki aren’t just computer scientists. They have also studied the human brain, and this inspired them to use a phenomenon they knew about there in an ANN. This was to add noise not only to the input layer, but to deeper layers as well. This is typically avoided as it’s feared that it will impact the effectiveness of the network under normal conditions. But the duo found this not to be the case, and instead the noise promoted greater adaptability in their test ANN, which reduced its susceptibility to simulated adversarial attacks.
    “Our first step was to devise a hypothetical method of attack that strikes deeper than the input layer. Such an attack would need to withstand the resilience of a network with a standard noise defense on its input layer. We call these feature-space adversarial examples,” said Ukita. “These attacks work by supplying an input intentionally far from, rather than near to, the input that an ANN can correctly classify. But the trick is to present subtly misleading artifacts to the deeper layers instead. Once we demonstrated the danger from such an attack, we injected random noise into the deeper hidden layers of the network to boost their adaptability and therefore defensive capability. We are happy to report it works.”
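    This is not the authors' implementation; the sketch below only illustrates the general mechanism of injecting Gaussian noise into the hidden (feature-space) activations rather than the input alone. The network sizes and the noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, weights, noise_std=0.0):
    """Forward pass of a small MLP that injects Gaussian noise into every
    hidden activation, not just the input layer."""
    h = x
    for i, W in enumerate(weights):
        h = h @ W
        if i < len(weights) - 1:          # hidden layers only
            h = np.maximum(h, 0.0)        # ReLU
            if noise_std > 0:
                # Feature-space noise: perturb the deeper representations
                h = h + rng.normal(0.0, noise_std, h.shape)
    return h

# Toy 8 -> 16 -> 16 -> 2 network with random weights
weights = [rng.normal(0, 0.1, (8, 16)),
           rng.normal(0, 0.1, (16, 16)),
           rng.normal(0, 0.1, (16, 2))]
x = rng.normal(0, 1, (1, 8))
clean = forward(x, weights)
noisy = forward(x, weights, noise_std=0.05)
print(clean.shape, noisy.shape)
```

    In training, this per-layer perturbation plays the role the paper describes: an attack crafted against the clean hidden representation no longer lands on exactly the features it targeted.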
    While the new idea does prove robust, the team wishes to develop it further to make it even more effective against anticipated attacks, as well as other kinds of attacks they have not yet tested it against. At present, the defense only works on this specific kind of attack.
    “Future attackers might try to consider attacks that can escape the feature-space noise we considered in this research,” said Ukita. “Indeed, attack and defense are two sides of the same coin; it’s an arms race that neither side will back down from, so we need to continually iterate, improve and innovate new ideas in order to protect the systems we use every day.”

  • Making AI smarter with an artificial, multisensory integrated neuron

    The feel of a cat’s fur can reveal some information, but seeing the feline provides critical details: is it a housecat or a lion? While the sound of fire crackling may be ambiguous, its scent confirms the burning wood. Our senses synergize to give a comprehensive understanding, particularly when individual signals are subtle. The collective sum of biological inputs can be greater than their individual contributions. Robots tend to follow more straightforward addition, but Penn State researchers have now harnessed the biological concept for application in artificial intelligence (AI) to develop the first artificial, multisensory integrated neuron.
    Led by Saptarshi Das, associate professor of engineering science and mechanics at Penn State, the team published their work on September 15 in Nature Communications.
    “Robots make decisions based on the environment they are in, but their sensors do not generally talk to each other,” said Das, who also has joint appointments in electrical engineering and in materials science and engineering. “A collective decision can be made through a sensor processing unit, but is that the most efficient or effective method? In the human brain, one sense can influence another and allow the person to better judge a situation.”
    For instance, a car might have one sensor scanning for obstacles, while another senses darkness to modulate the intensity of the headlights. Individually, these sensors relay information to a central unit which then instructs the car to brake or adjust the headlights. According to Das, this process consumes more energy. Allowing sensors to communicate directly with each other can be more efficient in terms of energy and speed — particularly when the inputs from both are faint.
    “Biology enables small organisms to thrive in environments with limited resources, minimizing energy consumption in the process,” said Das, who is also affiliated with the Materials Research Institute. “The requirements for different sensors are based on the context — in a dark forest, you’d rely more on listening than seeing, but we don’t make decisions based on just one sense. We have a complete sense of our surroundings, and our decision making is based on the integration of what we’re seeing, hearing, touching, smelling, etcetera. The senses evolved together in biology, but separately in AI. In this work, we’re looking to combine sensors and mimic how our brains actually work.”
    The team focused on integrating a tactile sensor and a visual sensor so that the output of one sensor modifies the other, with the help of visual memory. According to Muhtasim Ul Karim Sadaf, a third-year doctoral student in engineering science and mechanics, even a short-lived flash of light can significantly enhance the chance of successful movement through a dark room.
    “This is because visual memory can subsequently influence and aid the tactile responses for navigation,” Sadaf said. “This would not be possible if our visual and tactile cortex were to respond to their respective unimodal cues alone. We have a photo memory effect, where light shines and we can remember. We incorporated that ability into a device through a transistor that provides the same response.”
    The researchers fabricated the multisensory neuron by connecting a tactile sensor to a phototransistor based on a monolayer of molybdenum disulfide, a compound that exhibits unique electrical and optical characteristics useful for detecting light and supporting transistors. The sensor generates electrical spikes in a manner reminiscent of neurons processing information, allowing it to integrate both visual and tactile cues.
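    The device itself is hardware, but the integration principle it exploits can be caricatured in a few lines: each cue alone stays below the firing threshold, while the combined drive crosses it, so the joint response exceeds what either sense produces on its own. The threshold and input strengths below are arbitrary toy values, not device measurements.

```python
def spikes(visual, tactile, threshold=1.0):
    """Toy integrate-and-fire readout: the unit fires only when the
    summed sensory drive crosses threshold (superadditive integration)."""
    return (visual + tactile) >= threshold

print(spikes(0.6, 0.0))   # False: visual cue alone is subthreshold
print(spikes(0.0, 0.6))   # False: tactile cue alone is subthreshold
print(spikes(0.6, 0.6))   # True: faint cues combined trigger a spike
```

    This is the sense in which "the collective sum of biological inputs can be greater than their individual contributions": two individually silent channels jointly produce a response.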