More stories

  • AI speeds up development of new high-entropy alloys

    Developing new materials takes considerable time, money and effort. A POSTECH research team has recently taken a step closer to creating new materials by applying AI to the development of high-entropy alloys (HEAs), which have been dubbed the “alloy of alloys.”
    A joint research team led by Professor Seungchul Lee, Ph.D. candidate Soo Young Lee, Professor Hyungyu Jin and Ph.D. candidate Seokyeong Byeon of the Department of Mechanical Engineering, together with Professor Hyoung Seop Kim of the Department of Materials Science and Engineering, has developed a technique for predicting the phase of HEAs using AI. The findings were published in the latest issue of Materials and Design, an international journal on materials science.
    Metal materials are conventionally made by mixing a principal element, chosen for the desired property, with two or three auxiliary elements. In contrast, HEAs are made with equal or similar proportions of five or more elements and no principal element. The number of alloys that can be made this way is theoretically infinite, and they can have exceptional mechanical, thermal, physical and chemical properties. Alloys resistant to corrosion or to extremely low temperatures, as well as high-strength alloys, have already been discovered.
    Until now, however, designing new high-entropy alloys has relied on trial and error, requiring considerable time and money. It has been even more difficult to determine in advance the phase and the mechanical and thermal properties of a high-entropy alloy under development.
    To address this, the joint research team focused on developing deep-learning models that predict HEA phases with improved accuracy and explainability. They applied deep learning from three perspectives: model optimization, data generation and parameter analysis. In particular, they built a data-augmentation model based on a conditional generative adversarial network. This allowed the AI models to account for HEA compositions that have not yet been discovered, improving phase-prediction accuracy over conventional methods.
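    A rough picture of that data-generation step, as a minimal sketch only: the conditional-GAN idea can be illustrated in PyTorch with a generator that produces synthetic HEA descriptor vectors conditioned on a target phase label and a discriminator that judges them. The feature count, phase labels and layer sizes below are assumptions for illustration, not the team's published architecture.

```python
# Minimal conditional-GAN sketch for augmenting tabular HEA data.
# Illustrative only: feature count, phase labels, and layer sizes are assumptions.
import torch
import torch.nn as nn

N_FEATURES = 16   # assumed number of composition-derived descriptors
N_PHASES = 3      # assumed label set, e.g. FCC / BCC / multi-phase
LATENT_DIM = 32

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + N_PHASES, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_FEATURES),
        )

    def forward(self, z, phase_onehot):
        # Condition the noise vector on the desired phase label.
        return self.net(torch.cat([z, phase_onehot], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES + N_PHASES, 64), nn.ReLU(),
            nn.Linear(64, 1),   # real/fake logit
        )

    def forward(self, x, phase_onehot):
        return self.net(torch.cat([x, phase_onehot], dim=1))

# Once trained, synthetic samples for a chosen phase can be mixed into the
# training set of the phase-prediction classifier.
gen = Generator()
z = torch.randn(8, LATENT_DIM)
phase = torch.eye(N_PHASES)[torch.randint(0, N_PHASES, (8,))]
synthetic_samples = gen(z, phase)   # shape: (8, N_FEATURES)
```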
    In addition, the research team developed an explainable-AI-based HEA phase-prediction model that provides interpretability for deep-learning models, which otherwise act as black boxes, and offers guidance on the key design parameters for creating HEAs with particular phases.
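    In the same spirit, the parameter-analysis idea can be approximated with off-the-shelf feature-attribution tools. The sketch below uses scikit-learn's permutation importance on a stand-in classifier with hypothetical HEA descriptors; it is not the paper's descriptive-AI method.

```python
# Illustrative feature-attribution sketch (not the paper's descriptive-AI model):
# rank which design parameters most influence a stand-in phase classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["mixing_entropy", "atomic_size_difference",
                 "electronegativity_difference", "valence_electron_concentration"]
X = rng.normal(size=(200, len(feature_names)))   # placeholder descriptor values
y = rng.integers(0, 3, size=200)                 # placeholder phase labels

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```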
    “This research drastically improves on the limitations of existing work by incorporating AI into HEAs, which have recently been drawing much attention,” remarked Professor Seungchul Lee. He added, “It is significant that the joint research team’s multidisciplinary collaboration has produced results that can accelerate AI-based fabrication of new materials.”
    Professor Hyungyu Jin added, “The results of the study are expected to greatly reduce the time and cost of developing new materials, and to be actively used to develop new high-entropy alloys in the future.”

    Story Source:
    Materials provided by Pohang University of Science & Technology (POSTECH). Note: Content may be edited for style and length.

  • Survey of COVID-19 research provides fresh overview

    Researchers at Karolinska Institutet in Sweden have explored all COVID-19 research published during the initial phase of the pandemic. The results, which were achieved by using a machine learning-based approach and published in the Journal of Medical Internet Research, will make it easier to direct future research to where it is most needed.
    In the wake of the rapid spread of COVID-19, research on the disease has escalated dramatically. Over 60,000 COVID-19-related articles have been indexed to date in the medical database PubMed. This body of research is too large to be assessed by traditional methods, such as systematic and scoping reviews, which makes it difficult to gain a comprehensive overview of the science.
    “Despite COVID-19 being a novel disease, several systematic reviews have already been published,” says Andreas Älgå, medical doctor and researcher at the Department of Clinical Science and Education, Södersjukhuset, at Karolinska Institutet. “However, such reviews are extremely time- and resource-consuming, generally lag far behind the latest published evidence, and focus only on a specific aspect of the pandemic.”
    To obtain a fuller overview, Andreas Älgå and his colleagues employed a machine learning technique that enabled them to map the key areas of a research field and track their development over time. The study included 16,670 scientific papers on COVID-19 published from 14 February to 1 June 2020, divided into 14 topics.
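    The kind of topic mapping described here can be sketched with standard tooling. The snippet below is an illustrative latent Dirichlet allocation pipeline in scikit-learn; the number of topics matches the 14 reported above, but the preprocessing and model choices are assumptions rather than the study's actual method.

```python
# Illustrative topic-modelling sketch (not the study's exact pipeline):
# cluster COVID-19 abstracts into 14 topics, the number reported above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "hospital surge capacity and health care response to covid-19 ...",
    "clinical manifestations of covid-19 in hospitalised adults ...",
    "psychosocial impact of lockdown measures on mental health ...",
]  # in practice: the ~16,670 PubMed-indexed papers

vectorizer = CountVectorizer(stop_words="english", max_features=5000)
X = vectorizer.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=14, random_state=0)
doc_topics = lda.fit_transform(X)   # per-paper topic proportions

# The top words per topic give each cluster a human-readable label.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```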
    The study shows that the most common research topics were health care response, clinical manifestations, and psychosocial impact. Some topics, like health care response, declined over time, while others, such as clinical manifestations and protective measures, showed a growing trend of publications.
    Protective measures, immunology, and clinical manifestations were the research topics published in journals with the highest average scientific ranking. The countries that accounted for the majority of publications (the USA, China, Italy and the UK) were also amongst the ones hardest hit by the pandemic.
    “Our results indicate how the scientific community has reacted to the current pandemic, what issues were prioritised during the early phase and where in the world the research was conducted,” says fellow researcher Martin Nordberg, medical doctor and researcher at the Department of Clinical Science and Education, Södersjukhuset.
    The researchers have also developed a website where regular updates on the evolving COVID-19 evidence base can be found (http://www.c19research.org).
    “We hope that our results, including the website, could help researchers and policy makers to form a structured view of the research on COVID-19 and direct future research efforts accordingly,” says Dr Älgå.

    Story Source:
    Materials provided by Karolinska Institutet. Note: Content may be edited for style and length.

  • Machine learning models to predict critical illness and mortality in COVID-19 patients

    Mount Sinai researchers have developed machine learning models that predict the likelihood of critical events and mortality in COVID-19 patients within clinically relevant time windows. The new models outlined in the study — one of the first to use machine learning for risk prediction in COVID-19 patients among a large and diverse population, and published November 6 in the Journal of Medical Internet Research — could aid clinical practitioners at Mount Sinai and across the world in the care and management of COVID-19 patients.
    “From the initial outburst of COVID-19 in New York City, we saw that COVID-19 presentation and disease course are heterogeneous and we have built machine learning models using patient data to predict outcomes,” said Benjamin Glicksberg, PhD, Assistant Professor of Genetics and Genomic Sciences at the Icahn School of Medicine at Mount Sinai, member of the Hasso Plattner Institute for Digital Health at Mount Sinai and Mount Sinai Clinical Intelligence Center (MSCIC), and one of the study’s principal investigators. “Now in the early stages of a second wave, we are much better prepared than before. We are currently assessing how these models can aid clinical practitioners in managing care of their patients in practice.”
    In the retrospective study, researchers and clinicians from the MSCIC used electronic health records from more than 4,000 adult patients admitted to five Mount Sinai Health System hospitals from March to May. They analyzed the characteristics of COVID-19 patients, including past medical history, comorbidities, vital signs, and laboratory test results at admission, to predict critical events such as intubation and mortality within clinically relevant time windows that capture patients’ short- and medium-term risk over the course of hospitalization.
    The researchers used the models to predict a critical event or mortality within windows of 3, 5, 7, and 10 days from admission. At the one-week mark — which performed best overall, correctly flagging the most critical events while returning the fewest false positives — acute kidney injury, fast breathing, high blood sugar, and elevated lactate dehydrogenase (LDH) indicating tissue damage or disease were the strongest drivers in predicting critical illness. Older age, blood level imbalance, and C-reactive protein levels indicating inflammation were the strongest drivers in predicting mortality.
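    As a rough sketch of this kind of window-based risk model, the example below trains a gradient-boosted classifier on admission-time features to flag a critical event within one week. The model family, feature set and synthetic data are assumptions for illustration; this is not the Mount Sinai pipeline.

```python
# Illustrative window-based risk model trained on synthetic admission data.
# Model family, features, and labels are assumptions, not Mount Sinai's pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([
    rng.normal(65, 15, n),     # age (years)
    rng.normal(1.0, 0.4, n),   # creatinine, a kidney-function marker
    rng.normal(22, 6, n),      # respiratory rate (breaths/min)
    rng.normal(300, 120, n),   # lactate dehydrogenase (LDH)
    rng.normal(60, 50, n),     # C-reactive protein
])
# Stand-in label: critical event (e.g. intubation) or death within 7 days of admission.
y = (rng.random(n) < 0.2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUROC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```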
    “We have created high-performing predictive models using machine learning to improve the care of our patients at Mount Sinai,” said Girish Nadkarni, MD, Assistant Professor of Medicine (Nephrology) at the Icahn School of Medicine, Clinical Director of the Hasso Plattner Institute for Digital Health at Mount Sinai, and Co-Chair of MSCIC. “More importantly, we have created a method that identifies important health markers that drive likelihood estimates for acute care prognosis and can be used by health institutions across the world to improve care decisions, at both the physician and hospital level, and more effectively manage patients with COVID-19.”

    Story Source:
    Materials provided by The Mount Sinai Hospital / Mount Sinai School of Medicine. Note: Content may be edited for style and length.

  • Black hole or no black hole: On the outcome of neutron star collisions

    A new study led by GSI scientists and international colleagues investigates black-hole formation in neutron star mergers. Computer simulations show that the properties of dense nuclear matter play a crucial role, directly linking the astrophysical merger event to heavy-ion collision experiments at GSI and FAIR. These properties will be studied more precisely at the future FAIR facility. The results have now been published in Physical Review Letters. With the award of the 2020 Nobel Prize in Physics for the theoretical description of black holes and for the discovery of a supermassive object at the center of our galaxy, the topic is currently receiving a great deal of attention.
    But under which conditions does a black hole actually form? This is the central question of a study led by the GSI Helmholtzzentrum für Schwerionenforschung in Darmstadt within an international collaboration. Using computer simulations, the scientists focus on a particular formation channel: the merger of two neutron stars.
    Neutron stars consist of highly compressed, dense matter: about one and a half solar masses are squeezed into an object just a few kilometers across. This corresponds to densities similar to, or even higher than, those inside atomic nuclei. When two neutron stars merge, the matter is compressed further during the collision, bringing the merger remnant to the brink of collapse into a black hole. Black holes are the most compact objects in the universe; not even light can escape them, so they cannot be observed directly.
    “The critical parameter is the total mass of the neutron stars. If it exceeds a certain threshold, the collapse to a black hole is inevitable,” summarizes Dr. Andreas Bauswein from the GSI theory department. However, the exact threshold mass depends on the properties of highly dense nuclear matter, which are still not completely understood. This is why research labs like GSI collide atomic nuclei, recreating the conditions of a neutron star merger on a much smaller scale. In fact, heavy-ion collisions produce conditions very similar to those in neutron star mergers. Based on theoretical developments and heavy-ion experiments, it is possible to compute models of neutron star matter, so-called equations of state.
    Employing many of these equations of state, the new study calculated the threshold mass for black-hole formation. If neutron star matter is easily compressible (that is, if the equation of state is “soft”), even the merger of relatively light neutron stars leads to the formation of a black hole. If nuclear matter is “stiffer” and less compressible, the remnant is stabilized against gravitational collapse and a massive, rotating neutron star remnant forms from the collision. The threshold mass for collapse therefore itself carries information about the properties of high-density matter. The new study furthermore revealed that the threshold for collapse may even clarify whether nucleons dissolve into their constituents, the quarks, during the collision.
    “We are very excited about these results because we expect that future observations can reveal the threshold mass,” adds Professor Nikolaos Stergioulas of the Department of Physics at the Aristotle University of Thessaloniki in Greece. Just a few years ago, a neutron star merger was observed for the first time through the gravitational waves from the collision. Telescopes also found the “electromagnetic counterpart” and detected light from the merger event. If a black hole forms directly during the collision, the optical emission of the merger is rather dim, so the observational data indicate whether a black hole was created. At the same time, the gravitational-wave signal carries information about the total mass of the system: the more massive the stars, the stronger the signal, which allows the threshold mass to be determined.
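    The dependence on the equation of state is often summarized with a simple parametrization; the form below is illustrative only, and the actual fit coefficients come from merger simulations in the literature rather than from this summary. The threshold mass scales with the maximum mass of a cold, non-rotating neutron star and its compactness:

```latex
% Illustrative parametrization of the collapse threshold; the EoS-dependent
% factor k is obtained from fits to merger simulations in the literature.
M_{\mathrm{thresh}} \simeq k\!\left(C_{\mathrm{max}}\right)\, M_{\mathrm{max}},
\qquad
C_{\mathrm{max}} = \frac{G\,M_{\mathrm{max}}}{c^{2} R_{\mathrm{max}}}
```

    Here M_max and R_max are the maximum mass and corresponding radius allowed by a given equation of state; a “stiffer”, less compressible equation of state supports a larger M_max and hence a larger threshold, consistent with the description above.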
    While gravitational-wave detectors and telescopes wait for the next neutron star merger, the course is being set in Darmstadt for even more detailed knowledge. The new accelerator facility FAIR, currently under construction at GSI, will create conditions even more similar to those in neutron star mergers. Ultimately, only the combination of astronomical observations, computer simulations and heavy-ion experiments can settle the questions about the fundamental building blocks of matter and their properties, and in doing so clarify how the collapse to a black hole occurs.

    Story Source:
    Materials provided by GSI Helmholtzzentrum für Schwerionenforschung GmbH. Note: Content may be edited for style and length.

  • Computer model can predict how COVID-19 spreads in cities

    A team of researchers has created a computer model that accurately predicted the spread of COVID-19 in 10 major cities this spring by analyzing three factors that drive infection risk: where people go in the course of a day, how long they linger and how many other people are visiting the same place at the same time.
    “We built a computer model to analyze how people of different demographic backgrounds, and from different neighborhoods, visit different types of places that are more or less crowded. Based on all of this, we could predict the likelihood of new infections occurring at any given place or time,” said Jure Leskovec, the Stanford computer scientist who led the effort, which involved researchers from Northwestern University.
    The study, published today in the journal Nature, merges demographic data, epidemiological estimates and anonymous cellphone location information, and appears to confirm that most COVID-19 transmissions occur at “superspreader” sites, like full-service restaurants, fitness centers and cafes, where people remain in close quarters for extended periods. The researchers say their model’s specificity could serve as a tool for officials to help minimize the spread of COVID-19 as they reopen businesses by revealing the tradeoffs between new infections and lost sales if establishments open, say, at 20 percent or 50 percent of capacity.
    Study co-author David Grusky, a professor of sociology at Stanford’s School of Humanities and Sciences, said this predictive capability is particularly valuable because it provides useful new insights into the factors behind the disproportionate infection rates of minority and low-income people. “In the past, these disparities have been assumed to be driven by preexisting conditions and unequal access to health care, whereas our model suggests that mobility patterns also help drive these disproportionate risks,” he said.
    Grusky, who also directs the Stanford Center on Poverty and Inequality, said the model shows how reopening businesses with lower occupancy caps tend to benefit disadvantaged groups the most. “Because the places that employ minority and low-income people are often smaller and more crowded, occupancy caps on reopened stores can lower the risks they face,” Grusky said. “We have a responsibility to build reopening plans that eliminate — or at least reduce — the disparities that current practices are creating.”
    Leskovec said the model “offers the strongest evidence yet” that stay-at-home policies enacted this spring reduced the number of trips outside the home and slowed the rate of new infections.

    Following footsteps
    The study traced the movements of 98 million Americans in 10 of the nation’s largest metropolitan areas through half a million different establishments, from restaurants and fitness centers to pet stores and new car dealerships.
    The team included Stanford PhD students Serina Chang, Pang Wei Koh and Emma Pierson, who graduated this summer, and Northwestern University researchers Jaline Gerardin and Beth Redbird, who assembled study data for the 10 metropolitan areas. In population order, these cities include: New York, Los Angeles, Chicago, Dallas, Washington, D.C., Houston, Atlanta, Miami, Philadelphia and San Francisco.
    SafeGraph, a company that aggregates anonymized location data from mobile applications, provided the researchers data showing which of 553,000 public locations such as hardware stores and religious establishments people visited each day; for how long; and, crucially, what the square footage of each establishment was so that researchers could determine the hourly occupancy density.
    The researchers analyzed data from March 8 to May 9 in two distinct phases. In phase one, they fed their model mobility data and designed their system to calculate a crucial epidemiological variable: the transmission rate of the virus under a variety of circumstances in the 10 metropolitan areas. In real life, it is impossible to know in advance when and where an infectious and a susceptible person come into contact to create a potential new infection. But in their model, the researchers developed and refined a series of equations to compute the probability of infectious events at different places and times. The equations could solve for the unknown variables because the researchers fed the computer one important known fact: how many COVID-19 infections were reported to health officials in each city each day.
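    A toy version of that per-venue calculation (not the paper's actual equations; the functional form and parameter values here are assumptions) makes the three risk factors concrete: how many infectious people are present, how densely packed the venue is, and how long visitors stay.

```python
# Toy per-venue hazard sketch: risk grows with infectious density and dwell time.
# The functional form and numbers are illustrative, not the study's fitted model.
import numpy as np

def expected_new_infections(beta, susceptible_visits, infectious_visits,
                            area_sqft, dwell_hours):
    """Expected new infections at one venue during one hour (illustrative)."""
    infectious_density = infectious_visits / area_sqft   # infectious people per sq ft
    hazard = beta * infectious_density * dwell_hours     # per-susceptible exposure
    return susceptible_visits * (1.0 - np.exp(-hazard))  # convert hazard to probability

# A small, crowded restaurant vs. a large, sparsely occupied store.
print(expected_new_infections(beta=10.0, susceptible_visits=80,
                              infectious_visits=5, area_sqft=2_000, dwell_hours=1.5))
print(expected_new_infections(beta=10.0, susceptible_visits=80,
                              infectious_visits=5, area_sqft=40_000, dwell_hours=0.5))
```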

    The researchers refined the model until it was able to determine the transmission rate of the virus in each city. The rate varied from city to city depending on factors ranging from how often people ventured out of the house to which types of locations they visited.
    Once the researchers obtained transmission rates for the 10 metropolitan areas, they tested the model during phase two by asking it to multiply the rate for each city against their database of mobility patterns to predict new COVID-19 infections. The predictions tracked closely with the actual reports from health officials, giving the researchers confidence in the model’s reliability.
    Predicting infections
    By combining their model with demographic data from a database of 57,000 census block groups (neighborhoods of 600 to 3,000 people), the researchers show that minority and low-income people leave home more often because their jobs require it, and shop at smaller, more crowded establishments than people with higher incomes, who can work from home, use home delivery to avoid shopping and patronize roomier businesses when they do go out. For instance, the study revealed that grocery shopping is roughly twice as risky for non-white populations as for white ones. “By merging mobility, demographic and epidemiological datasets, we were able to use our model to analyze the effectiveness and equity of different reopening policies,” Chang said.
    The team has made its tools and data publicly available so other researchers can replicate and build on the findings.
    “In principle, anyone can use this model to understand the consequences of different stay-at-home and business closure policy decisions,” said Leskovec, whose team is now working to develop the model into a user-friendly tool for policymakers and public health officials.
    Jure Leskovec is an associate professor of computer science at Stanford Engineering, a member of Stanford Bio-X and the Wu Tsai Neurosciences Institute. David Grusky is Edward Ames Edmonds Professor in the School of Humanities and Sciences, and a senior fellow at the Stanford Institute for Economic Policy Research (SIEPR).
    This research was supported by the National Science Foundation, the Stanford Data Science Initiative, the Wu Tsai Neurosciences Institute and the Chan Zuckerberg Biohub.

  • Skills development in Physical AI could give birth to lifelike intelligent robots

    The research suggests that teaching materials science, mechanical engineering, computer science, biology and chemistry as a combined discipline could help students develop the skills they need, as researchers, to create lifelike artificially intelligent (AI) robots.
    Known as Physical AI, these robots would be designed to look and behave like humans or other animals while possessing intellectual capabilities normally associated with biological organisms. Such robots could in the future help humans at work and in daily life, performing tasks that are dangerous for people and assisting in medicine, caregiving, security, construction and industry.
    Although machines and biological beings exist separately, the intelligence capabilities of the two have not yet been combined. There have so far been no autonomous robots that interact with the surrounding environment and with humans in a similar way to how current computer and smartphone-based AI does.
    Co-lead author Professor Mirko Kovac of Imperial’s Department of Aeronautics and the Swiss Federal Laboratories for Materials Science and Technology (Empa)’s Materials and Technology Centre of Robotics said: “The development of robot ‘bodies’ has significantly lagged behind the development of robot ‘brains’. Unlike digital AI, which has been intensively explored in the last few decades, breathing physical intelligence into them has remained comparatively unexplored.”
    The researchers say that the reason for this gap might be that no systematic educational approach has yet been developed for teaching students and researchers to create robot bodies and brains combined as whole units.
    The new research, published today in Nature Machine Intelligence, defines the term Physical AI. It also suggests an approach for closing the skills gap by integrating scientific disciplines, helping future researchers create lifelike robots with capabilities associated with intelligent organisms, such as bodily control, autonomy and sensing, developed together rather than in isolation.

    The authors identified five main disciplines that are essential for creating Physical AI: materials science, mechanical engineering, computer science, biology and chemistry.
    Professor Kovac said: “The notion of AI is often confined to computers, smartphones and data intensive computation. We are proposing to think of AI in a broader sense and co-develop physical morphologies, learning systems, embedded sensors, fluid logic and integrated actuation. This Physical AI is the new frontier in robotics research and will have major impact in the decades to come, and co-evolving students’ skills in an integrative and multidisciplinary way could unlock some key ideas for students and researchers alike.”
    The researchers say that achieving nature-like functionality in robots requires combining conventional robotics and AI with other disciplines to create Physical AI as its own discipline.
    Professor Kovac said: “We envision Physical AI robots being evolved and grown in the lab by using a variety of unconventional materials and research methods. Researchers will need a much broader stock of skills for building lifelike robots. Cross-disciplinary collaborations and partnerships will be very important.”
    One example of such a partnership is the Imperial-Empa joint Materials and Technology Centre of Robotics that links up Empa’s material science expertise with Imperial’s Aerial Robotics Laboratory.

    The authors also propose intensifying research activities in Physical AI by supporting teachers at both the institutional and community levels. They suggest hiring and supporting faculty members whose priority will be multidisciplinary Physical AI research.
    Co-lead author Dr Aslan Miriyev of Empa and the Department of Aeronautics at Imperial said: “Such backing is especially needed because working in this multidisciplinary playground requires daring to leave the comfort zones of narrow disciplinary knowledge for the sake of high-risk research and career uncertainty.
    “Creating lifelike robots has thus far been an impossible task, but it could be made possible by including Physical AI in the higher education system. Developing skills and research in Physical AI could bring us closer than ever to redefining human-robot and robot-environment interaction.”
    The researchers hope that their work will encourage active discussion of the topic and will lead to integration of Physical AI disciplines in the educational mainstream.
    The researchers intend to implement the Physical AI methodology in their research and education activities to pave the way to human-robot ecosystems.

    Story Source:
    Materials provided by Imperial College London. Original written by Caroline Brogan. Note: Content may be edited for style and length.

  • Five mistakes people make when sharing COVID-19 data visualizations on Twitter

    The frantic swirl of coronavirus-related information sharing that took place this year on social media is the subject of a new analysis led by researchers at the School of Informatics and Computing at IUPUI.
    Published in the open-access journal Informatics, the study focuses on the sharing of data visualizations on Twitter — by health experts and average citizens alike — during the initial struggle to grasp the scope of the COVID-19 pandemic, and its effects on society. Many social media users continue to encounter similar charts and graphs every day, especially as a new wave of coronavirus cases has begun to surge across the globe.
    The work found that more than half of the analyzed visualizations from average users contained one of five common errors that reduced their clarity, accuracy or trustworthiness.
    “Experts have not yet begun to explore the world of casual visualizations on Twitter,” said Francesco Cafaro, an assistant professor in the School of Informatics and Computing, who led the study. “Studying the new ways people are sharing information online to understand the pandemic and its effect on their lives is an important step in navigating these uncharted waters.”
    Casual data visualizations refer to charts and graphs that rely upon tools available to average users in order to visually depict information in a personally meaningful way. These visualizations differ from traditional data visualization because they aren’t generated or distributed by the traditional “gatekeepers” of health information, such as the Centers for Disease Control and Prevention or the World Health Organization, or by the media.
    “The reality is that people depend upon these visualizations to make major decisions about their lives: whether or not it’s safe to send their kids back to school, whether or not it’s safe to take a vacation, and where to go,” Cafaro said. “Given their influence, we felt it was important to understand more about them, and to identify common issues that can cause people creating or viewing them to misinterpret data, often unintentionally.”
    For the study, IU researchers crawled Twitter to identify 5,409 data visualizations shared on the social network between April 14 and May 9, 2020. Of these, 540 were randomly selected for analysis — with full statistical analysis reserved for 435 visualizations based upon additional criteria. Of these, 112 were made by average citizens.
    Broadly, Cafaro said the study identified five pitfalls common to the data visualizations analyzed. In addition to identifying these problems, the study’s authors suggest steps to overcome or reduce their negative impact:
    Mistrust: Over 25 percent of the posts analyzed failed to clearly identify the source of their data, sowing distrust in their accuracy. This information was often obscured by poor design — such as bad color choices, busy layouts or typos — rather than intentional obfuscation. To overcome these issues, the study’s authors suggest clearly labeling data sources and placing this information on the graphic itself rather than in the accompanying text, since images are often separated from their original post during social sharing.
    Proportional reasoning: Eleven percent of posts exhibited issues related to proportional reasoning, which refers to users’ ability to compare variables based on ratios or fractions. Understanding infection rates across different geographic locations is a challenge of proportional reasoning, for example, since similar numbers of infections can indicate different levels of severity in low- versus high-population settings. To overcome this challenge, the study’s authors suggest using labels such as the number of infections per 1,000 people to compare regions with disparate populations, as this metric is easier to understand than absolute numbers or percentages (a short worked example follows this list).
    Temporal reasoning: The researchers identified issues related to temporal reasoning, which refers to users’ ability to understand change over time, in 7 percent of the posts. These included visualizations that compared the number of flu deaths in a full year with the number of COVID-19 deaths in a few months, or that failed to account for the delay between the date of infection and the date of death. Recommendations for addressing these issues included splitting metrics that depend on different time scales into separate charts rather than conveying the data in a single chart.
    Cognitive bias: A small percentage of posts (0.5 percent) contained text that seemed to encourage users to misinterpret data based upon the creator’s “biases related to race, country and immigration.” The researchers state that information should be presented with clear, objective descriptions carefully separated from any accompanying political commentary.
    Misunderstanding of the virus: Two percent of visualizations were based upon misunderstandings about the novel coronavirus, such as the use of data related to SARS or influenza.
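    As a brief illustration of the per-1,000 normalization suggested under “Proportional reasoning” above (all numbers below are made up):

```python
# Made-up numbers illustrating per-1,000 normalization for proportional reasoning.
regions = {
    "Small County": {"cases": 900, "population": 30_000},
    "Big Metro": {"cases": 9_000, "population": 3_000_000},
}
for name, data in regions.items():
    per_1000 = 1000 * data["cases"] / data["population"]
    print(f"{name}: {data['cases']} cases, {per_1000:.1f} per 1,000 residents")
# Big Metro has ten times the cases but a far lower rate (3.0 vs 30.0 per 1,000).
```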
    The study also found certain types of data visualizations performed strongest on social media. Data visualizations that showed change over time, such as line or bar graphs, were most commonly shared. They also found that users engaged more frequently with charts conveying numbers of deaths as opposed to numbers of infections or impact on the economy, suggesting that people were more interested in the virus’s lethality than its other negative health or societal effects.
    “The challenge of accurately conveying information visually is not limited to information-sharing on Twitter, but we feel these communications should be considered especially carefully given the influence of social media on people’s decision-making,” Cafaro said. “We believe our findings can help government agencies, news media and average people better understand the types of information about which people care the most, as well as the challenges people may face while interpreting visual information related to the pandemic.”
    Additional leading authors on the study are Milka Trajkova, A’aeshah Alhakamy, Sanika Vedak, Rashmi Mallappa and Sreekanth R. Kankara, research assistants in the School of Informatics and Computing at IUPUI at the time of the study. Alhakamy is currently a lecturer at the University of Tabuk in Saudi Arabia.

    Story Source:
    Materials provided by Indiana University. Note: Content may be edited for style and length.

  • Scientists develop AI-powered 'electronic nose' to sniff out meat freshness

    A team of scientists led by Nanyang Technological University, Singapore (NTU Singapore) has invented an artificial olfactory system that mimics the mammalian nose to assess the freshness of meat accurately.
    The ‘electronic nose’ (e-nose) comprises a ‘barcode’ that changes colour over time in reaction to the gases produced by meat as it decays, and a barcode ‘reader’ in the form of a smartphone app powered by artificial intelligence (AI). The e-nose has been trained to recognise and predict meat freshness from a large library of barcode colours.
    When tested on commercially packaged chicken, fish and beef samples that were left to age, the deep convolutional neural network that powers the e-nose predicted the freshness of the meats with 98.5 per cent accuracy. As a comparison, the research team assessed the prediction accuracy of a commonly used algorithm for analysing the responses of sensors like the barcode used in this e-nose; that approach achieved an overall accuracy of 61.7 per cent.
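    The article does not detail the network itself, so the following is only a schematic PyTorch sketch of the general approach: a small convolutional network that maps a photo of the 20-bar barcode to one of three freshness classes. The architecture and input resolution are assumptions.

```python
# Schematic CNN sketch for barcode-image freshness classification.
# Architecture and input size are assumptions, not the published e-nose model.
import torch
import torch.nn as nn

class BarcodeFreshnessCNN(nn.Module):
    def __init__(self, n_classes=3):   # fresh, less fresh, spoiled
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, x):              # x: (batch, 3, height, width) barcode photo
        return self.classifier(self.features(x))

model = BarcodeFreshnessCNN()
dummy_photo = torch.randn(1, 3, 64, 160)   # assumed input resolution
logits = model(dummy_photo)
print(logits.shape)   # torch.Size([1, 3])
```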
    The e-nose, described in a paper published in the scientific journal Advanced Materials in October, could help to reduce food wastage by confirming to consumers whether meat is fit for consumption, more accurately than a ‘Best Before’ label could, said the research team from NTU Singapore, who collaborated with scientists from Jiangnan University, China, and Monash University, Australia.
    Co-lead author Professor Chen Xiaodong, the Director of Innovative Centre for Flexible Devices at NTU, said: “Our proof-of-concept artificial olfactory system, which we tested in real-life scenarios, can be easily integrated into packaging materials and yields results in a short time without the bulky wiring used for electrical signal collection in some e-noses that were developed recently.
    “These barcodes help consumers to save money by ensuring that they do not discard products that are still fit for consumption, which also helps the environment. The biodegradable and non-toxic nature of the barcodes also means they could be safely applied in all parts of the food supply chain to ensure food freshness.”
    A patent has been filed for this method of real-time monitoring of food freshness, and the team is now working with a Singapore agribusiness company to extend this concept to other types of perishables.

    A nose for freshness
    The e-nose developed by NTU scientists and their collaborators comprises two elements: a coloured ‘barcode’ that reacts with gases produced by decaying meat; and a barcode ‘reader’ that uses AI to interpret the combination of colours on the barcode. To make the e-nose portable, the scientists integrated it into a smartphone app that can yield results in 30 seconds.
    The e-nose mimics how a mammalian nose works. When gases produced by decaying meat bind to receptors in the mammalian nose, signals are generated and transmitted to the brain. The brain then collects these responses and organises them into patterns, allowing the mammal to identify the odour present as meat ages and rots.
    In the e-nose, the 20 bars in the barcode act as the receptors. Each bar is made of chitosan (a natural sugar) embedded on a cellulose derivative and loaded with a different type of dye. These dyes react with the gases emitted by decaying meat and change colour in response to the different types and concentrations of gases, resulting in a unique combination of colours that serves as a ‘scent fingerprint’ for the state of any meat.
    For instance, the first bar in the barcode contains a yellow dye that is weakly acidic. When exposed to nitrogen-containing compounds produced by decaying meat (called bioamines), this yellow dye changes into blue as the dye reacts with these compounds. The colour intensity changes with an increasing concentration of bioamines as meat decays further.
    For this study, the scientists first developed a classification system (fresh, less fresh, or spoiled) using an international standard for determining meat freshness. They did this by extracting and measuring the amounts of ammonia and two other bioamines in fish packages wrapped in widely used transparent PVC (polyvinyl chloride) packaging film, stored at 4°C (39°F) and sampled at different intervals over five days.
    They concurrently monitored the freshness of these fish packages using barcodes glued to the inner side of the PVC film, without touching the fish. Images of the barcodes were taken at different intervals over the five days.