More stories


    AI helps detect pancreatic cancer

    An artificial intelligence (AI) tool is highly effective at detecting pancreatic cancer on CT, according to a study published in Radiology, a journal of the Radiological Society of North America (RSNA).
    Pancreatic cancer has the lowest five-year survival rate among cancers. It is projected to become the second leading cause of cancer death in the United States by 2030. Early detection is the best way to improve the dismal outlook, as prognosis worsens significantly once the tumor grows beyond 2 centimeters.
    CT is the key imaging method for detecting pancreatic cancer, but it misses about 40% of tumors smaller than 2 centimeters. There is therefore an urgent need for an effective tool to help radiologists improve pancreatic cancer detection.
    Researchers in Taiwan have been studying a computer-aided detection (CAD) tool that uses a type of AI called deep learning to detect pancreatic cancer. They previously showed that the tool could accurately distinguish pancreatic cancer from noncancerous pancreas. However, that study relied on radiologists manually identifying the pancreas on imaging — a labor-intensive process known as segmentation. In the new study, the AI tool identified the pancreas automatically. This is an important advance considering that the pancreas borders multiple organs and structures and varies widely in shape and size.
    The researchers developed the tool with an internal test set consisting of 546 patients with pancreatic cancer and 733 control participants. The tool achieved 90% sensitivity and 96% specificity in the internal test set.
    Validation followed with a set of 1,473 individual CT exams from institutions throughout Taiwan. The tool achieved 90% sensitivity and 93% specificity in distinguishing pancreatic cancer from controls in that set. Sensitivity for detecting pancreatic cancers less than 2 centimeters was 75%.
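    For readers unfamiliar with these metrics, sensitivity is the fraction of actual cancers the tool flags and specificity is the fraction of cancer-free controls it correctly clears. Here is a minimal sketch in Python; the counts below are invented for illustration and are not the study's data:

```python
# Sensitivity and specificity from confusion-matrix counts.
# The counts used below are hypothetical, chosen only to illustrate the formulas.

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: share of actual cancers that are detected."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: share of controls correctly cleared."""
    return tn / (tn + fp)

# Hypothetical counts: 90 of 100 cancers flagged, 930 of 1,000 controls cleared.
print(f"sensitivity = {sensitivity(tp=90, fn=10):.0%}")   # 90%
print(f"specificity = {specificity(tn=930, fp=70):.0%}")  # 93%
```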
    “The performance of the deep learning tool seemed on par with that of radiologists,” said study senior author Weichung Wang, Ph.D., professor at National Taiwan University and director of the university’s MeDA Lab. “Specifically, in this study, the sensitivity of the deep learning computer-aided detection tool for pancreatic cancer was comparable with that of radiologists in a tertiary referral center regardless of tumor size and stage.”
    The CAD tool has the potential to provide a wealth of information to assist clinicians, Dr. Wang said. It could indicate the region of suspicion to speed radiologist interpretation.
    “The CAD tool may serve as a supplement for radiologists to enhance the detection of pancreatic cancer,” said the study’s co-senior author, Wei-Chi Liao, M.D., Ph.D., from National Taiwan University and National Taiwan University Hospital.
    The researchers are planning further studies. In particular, they want to look at the tool’s performance in more diverse populations and, because the current study was retrospective, to see how the tool performs prospectively in real-world clinical settings.
    Story Source:
    Materials provided by Radiological Society of North America.


    Healthcare researchers must be wary of misusing AI

    An international team of researchers, writing in the journal Nature Medicine, cautions that great care must be taken not to misuse or overuse machine learning (ML) in healthcare research.
    “I absolutely believe in the power of ML but it has to be a relevant addition,” said neurosurgeon-in-training and statistics editor Dr Victor Volovici, first author of the comment, from Erasmus MC University Medical Center, The Netherlands. “Sometimes ML algorithms do not perform better than traditional statistical methods, leading to the publication of papers that lack clinical or scientific value.”
    Real-world examples have shown that the misuse of algorithms in healthcare can perpetuate human prejudices or inadvertently cause harm when the machines are trained on biased datasets.
    “Many believe ML will revolutionise healthcare because machines make choices more objectively than humans. But without proper oversight, ML models may do more harm than good,” said Associate Professor Nan Liu, senior author of the comment, from the Centre for Quantitative Medicine and Health Services & Systems Research Programme at Duke-NUS Medical School, Singapore.
    “If, through ML, we uncover patterns that we otherwise would not see — like in radiology and pathology images — we should be able to explain how the algorithms got there, to allow for checks and balances.”
    Together with a group of scientists from the UK and Singapore, the researchers highlight that although guidelines have been formulated to regulate the use of ML in clinical research, these guidelines are only applicable once a decision to use ML has been made and do not ask whether or when its use is appropriate in the first place.
    For example, companies have successfully trained ML algorithms to recognise faces and road objects using billions of images and videos. But when it comes to healthcare, models are often trained on datasets numbering only in the tens, hundreds or thousands. “This underscores the relative poverty of big data in healthcare and the importance of working towards achieving sample sizes that have been attained in other industries, as well as the importance of a concerted, international big data sharing effort for health data,” the researchers write.
    Another issue is that most ML and deep learning algorithms, which do not receive explicit instructions regarding the outcome, are still regarded as a ‘black box’. For example, at the start of the COVID-19 pandemic, scientists published an algorithm that appeared to predict coronavirus infections from lung images. It later turned out that the algorithm had drawn its conclusions from the imprint of the letter ‘R’ (for ‘Right Lung’), which was always found in a slightly different spot on the scans.
    “We have to get rid of the idea that ML can discover patterns in data that we cannot understand,” said Dr Volovici about the incident. “ML can very well discover patterns that we cannot see directly, but then you have to be able to explain how you came to that conclusion. In order to do that, the algorithm has to be able to show what steps it took, and that requires innovation.”
    The researchers advise that ML algorithms should be evaluated against traditional statistical approaches (when applicable) before they are used in clinical research. And when deemed appropriate, they should complement clinician decision-making, rather than replace it. “ML researchers should recognise the limits of their algorithms and models in order to prevent their overuse and misuse, which could otherwise sow distrust and cause patient harm,” the researchers write.
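    As a concrete illustration of that recommendation, a study team might benchmark an ML model against a traditional statistical baseline on the same data before adopting it. Below is a hedged sketch using scikit-learn; the synthetic dataset and the two model choices are our assumptions, not anything prescribed in the comment:

```python
# Benchmark an ML model against a traditional statistical baseline.
# The synthetic data and model choices are illustrative assumptions only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

models = {
    "logistic regression (traditional)": LogisticRegression(max_iter=1000),
    "random forest (ML)": RandomForestClassifier(random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f}")

# If the ML model does not clearly beat the baseline, the simpler and more
# interpretable model is usually the better choice for clinical research.
```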
    The team is working on organising an international effort to provide guidance on the use of ML and traditional statistics, and also to set up a large database of anonymised clinical data that can harness the power of ML algorithms.
    Story Source:
    Materials provided by Duke-NUS Medical School.


    New method to identify symmetries in data using Bayesian statistics

    Symmetries in nature make things beautiful; symmetries in data make data handling efficient. However, the complexity of identifying such patterns in data has always bedeviled researchers. Scientists from Osaka Metropolitan University and their colleagues have taken a major step towards detecting symmetries in multi-dimensional data by utilizing Bayesian statistics. Their findings were published in The Annals of Statistics.
    Bayesian statistics has been in the spotlight in recent years due to improvements in computer performance and its potential applications in artificial intelligence. It is a statistical approach that, even when data are insufficient, derives the probability of an event by first setting a prior probability and then, whenever new information is obtained, calculating a posterior probability — an update to the prior — that the event will occur. Because calculating posterior probabilities requires complex integrals, they are often only computed approximately.
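    To make the prior-to-posterior update concrete, here is a minimal beta-binomial example in Python. It is our own illustration of Bayes' rule, not the machinery of the paper; conjugacy is used precisely because it sidesteps the difficult integrals mentioned above:

```python
# Beta-binomial Bayesian update: prior belief + new data -> posterior belief.
# Illustrative example of Bayes' rule only; not the paper's method.
from scipy import stats

# Prior belief about an event's probability: Beta(2, 2), centered on 0.5.
alpha_prior, beta_prior = 2, 2

# New information arrives: the event occurred in 7 of 10 trials.
successes, trials = 7, 10

# With a Beta prior and binomial data, the posterior is again a Beta
# distribution, so no numerical integration is needed.
posterior = stats.beta(alpha_prior + successes,
                       beta_prior + (trials - successes))

print(f"posterior mean = {posterior.mean():.3f}")            # ~0.643
print(f"95% credible interval = {posterior.interval(0.95)}")
```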
    The international team including Professor Hideyuki Ishi from Osaka Metropolitan University, Professor Piotr Graczyk from the University of Angers, Professor Bartosz Kołodziejek from Warsaw University of Technology, and the late Professor Hélène Massam from York University (Toronto) has succeeded in deriving new exact integral formulas, and in developing a method to search for symmetries in multi-dimensional data using Bayesian statistical techniques.
    When the amount of data to be handled increases, the optimal pattern must be selected from a vast number of patterns, making it difficult to solve the problem precisely. Addressing this challenge, the team has also developed an efficient algorithm for obtaining an approximate solution even in such cases.
    In the words of Professor Ishi, “Symmetries in data are ubiquitous in a wide variety of models. Once symmetries are identified, the number of parameters required to display the structure of the data, and the number of samples required to determine the parameters, can be significantly reduced. In the future, the results of this research are expected to contribute to genetic analysis, discovering chromosomes that have the same function in different locations.”
    Story Source:
    Materials provided by Osaka Metropolitan University.


    Optical rule was made to be broken

    If you’re going to break a rule with style, make sure everybody sees it. That’s the goal of engineers at Rice University who hope to improve screens for virtual reality, 3D displays and optical technologies in general.
    Gururaj Naik, an associate professor of electrical and computer engineering at Rice’s George R. Brown School of Engineering, and Applied Physics Graduate Program alumna Chloe Doiron found a way to manipulate light at the nanoscale that breaks the Moss rule, which describes a trade-off between a material’s optical absorption and how it refracts light.
    Apparently, it’s more like a guideline than an actual rule, because a number of “super-Mossian” semiconductors do exist. Fool’s gold, aka iron pyrite, is one of them.
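    For context, the Moss rule is usually written as a fixed trade-off between a semiconductor's refractive index and its band gap. The relation below is the standard textbook form, not an equation taken from the new paper, and the constant varies between sources:

```latex
% Moss rule (textbook form): n is the refractive index, E_g the band gap.
% The constant is commonly quoted as roughly 77 eV (95 eV in older versions).
\[
  n^{4} E_{g} \approx 77\ \mathrm{eV}
\]
% A high refractive index therefore normally forces a small band gap and
% stronger optical absorption; "super-Mossian" materials such as iron
% pyrite beat the index this relation implies for their band gap.
```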
    For their study in Advanced Optical Materials, Naik, Doiron and co-author Jacob Khurgin, a professor of electrical and computer engineering at Johns Hopkins University, found that iron pyrite works particularly well as a nanophotonic material and could lead to better and thinner displays for wearable devices.
    More important, they have established a method for finding materials that surpass the Moss rule and offer useful light-handling properties for displays and sensing applications.
    “In optics, we’re still limited to a very few materials,” Naik said. “Our periodic table is really small. But there are so many materials that are simply unknown, just because we haven’t developed any insight on how to find them.”


    The thermodynamics of life taking shape

    Revealing the scientific laws that govern our world is often considered the ‘holy grail’ by scientists, as such discoveries have wide-ranging implications. In an exciting development from Japan, scientists have shown how to use geometric representations to encode the laws of thermodynamics, and apply these representations to obtain generalized predictions. This work may significantly improve our understanding of the theoretical limits that apply within chemistry and biology.
    While living systems are bound by the laws of physics, they often find creative ways to take advantage of these rules in ways that non-living physical systems rarely can. For example, every living organism finds a way to reproduce itself. At a fundamental level, this relies on autocatalytic cycles in which a certain molecule can spur the production of identical molecules, or a set of molecules produce each other. As part of this, the compartment in which the molecules exist grows in volume. However, scientific knowledge lacks a complete thermodynamic representation of such self-replicating processes, which would enable scientists to understand how living systems can emerge from non-living objects.
    Now, in two related articles published in Physical Review Research, researchers from the Institute of Industrial Science at The University of Tokyo used a geometric technique to characterize the conditions that correspond to the growth of a self-reproducing system. The guiding principle is the famous second law of thermodynamics, which requires that entropy — generally understood to mean disorder — can only increase. A local increase in order is still possible, as when a bacterium absorbs nutrients to divide into two bacteria, but only at the cost of increased entropy somewhere else. “Self-replication is a hallmark of living systems, and our theory helps explain how environmental conditions determine their fate, whether growing, shrinking, or reaching equilibrium,” says senior author Tetsuya J. Kobayashi.
    The main insight was to represent the thermodynamic relationships as hypersurfaces in a multidimensional space. The researchers could then study what happens as various operations are performed on these surfaces, in this case the Legendre transformation, which describes how a surface is mapped into a different geometric object with a significant thermodynamic meaning.
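    For readers unfamiliar with it, the Legendre transformation in its standard form trades a function's variable for the conjugate slope variable. The sketch below gives the general textbook definition and a classic thermodynamic instance; it is not the specific construction used in the papers:

```latex
% Legendre(-Fenchel) transform: standard definition, not the papers' notation.
\[
  f^{*}(p) = \sup_{x}\bigl[\, p\,x - f(x) \,\bigr]
\]
% Classic thermodynamic instance: the Helmholtz free energy F(T) arises from
% the internal energy U(S) by trading the entropy S for the temperature T.
\[
  F(T) = \inf_{S}\bigl[\, U(S) - T S \,\bigr]
\]
```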
    “The results were obtained solely on the basis of the second law of thermodynamics that the total entropy must increase. Because of this, assumptions of an ideal gas or other simplifications about the types of interactions in the system were not required,” says first author Yuki Sughiyama. Being able to calculate the rate of entropy production can be vital for evaluating biophysical systems. This research can help put the study of the thermodynamics of living systems on a more solid theoretical footing, which may improve our understanding of biological reproduction.
    Story Source:
    Materials provided by Institute of Industrial Science, The University of Tokyo.


    Sport, sleep or screens: New app reveals the 'just right' day for kids

    Not too sport heavy, not too sleep deprived — finding the ‘just right’ balance in a child’s busy day can be a challenge. But while parents may struggle to squeeze in homework amid extracurricular commitments and downtime, a world-first app could provide a much-needed solution.
    Developed by the University of South Australia in partnership with the Murdoch Children’s Research Institute, the Healthy-Day-App is helping parents understand which combination of activities can best support their child’s mental, physical, and academic outcomes.
    The study found that shifting 60 minutes of screen time to 60 minutes of physical activity resulted in 4.2 per cent lower body fat, 2.5 per cent improved wellbeing and 0.9 per cent higher academic performance.
    Lead researcher, UniSA’s Dr Dot Dumuid says that the app will help parents and health professionals better understand the relationships between children’s time use, health, and academic outcomes.
    “How children use their time can have a big impact on their health, wellbeing, and productivity,” Dr Dumuid says.
    “We know that screens are not great for children’s wellbeing, so if they’re choosing to play video games at the expense of playing sport, it’s easy to guess the negative effects on their health.”


    New AI system predicts how to prevent wildfires

    Wildfires are a growing threat in a world shaped by climate change. Now, researchers at Aalto University have developed a neural network model that can accurately predict the occurrence of fires in peatlands. They used the new model to assess the effect of different strategies for managing fire risk and identified a suite of interventions that would reduce fire incidence by 50-76%.
    The study focused on the Central Kalimantan province of Borneo in Indonesia, which has the highest density of peatland fires in Southeast Asia. Drainage to support agriculture or residential expansion has made peatlands increasingly vulnerable to recurring fires. In addition to threatening lives and livelihoods, peatland fires release significant amounts of carbon dioxide. However, prevention strategies have faced difficulties because of the lack of clear, quantified links between proposed interventions and fire risk.
    The new model uses measurements taken before each fire season in 2002-2019 to predict the distribution of peatland fires. While the findings can be broadly applied to peatlands elsewhere, a new analysis would have to be done for other contexts. ‘Our methodology could be used for other contexts, but this specific model would have to be re-trained on the new data,’ says Alexander Horton, the postdoctoral researcher who carried out the study.
    The researchers used a convolutional neural network to analyse 31 variables, such as the type of land cover and pre-fire indices of vegetation and drought. Once trained, the network predicted the likelihood of a peatland fire at each spot on the map, producing an expected distribution of fires for the year.
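    A minimal sketch of this kind of per-pixel classifier in Python with PyTorch is shown below; the layer sizes, names and tile dimensions are our assumptions for illustration, and the actual network in the study is more elaborate:

```python
# Sketch of a per-cell fire-probability CNN over a 31-channel raster input.
# Layer sizes, names and the 64x64 tile size are illustrative assumptions.
import torch
import torch.nn as nn

class FireRiskCNN(nn.Module):
    def __init__(self, in_channels: int = 31):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),  # one logit per map cell
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 31, height, width) raster of pre-season variables
        return torch.sigmoid(self.net(x))     # fire probability per cell

model = FireRiskCNN()
tile = torch.randn(1, 31, 64, 64)             # one synthetic map tile
print(model(tile).shape)                      # torch.Size([1, 1, 64, 64])
```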
    Overall, the neural network’s predictions were correct 80-95% of the time. However, while the model was usually right in predicting a fire, it also missed many fires that actually occurred. About half of the observed fires weren’t predicted by the model, meaning that it isn’t suitable as an early-warning predictive system. Larger groupings of fires tended to be predicted well, while isolated fires were often missed by the network. With further work, the researchers hope to improve the network’s performance so it can also serve as an early-warning system.
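    In standard classification terms, the model showed high precision (a predicted fire usually materialized) but low recall (about half of actual fires were never flagged). A small Python illustration with invented counts that mirror this pattern:

```python
# Precision vs. recall, with invented counts mirroring the described pattern.
tp, fp, fn = 85, 15, 85   # hypothetical true/false positives, false negatives

precision = tp / (tp + fp)   # how often a predicted fire actually occurred
recall = tp / (tp + fn)      # how many actual fires were predicted

print(f"precision = {precision:.2f}")  # 0.85 -> predictions usually right
print(f"recall    = {recall:.2f}")     # 0.50 -> half the fires were missed
```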
    The team took advantage of the fact that fire predictions were usually correct to test the effect of different land management strategies. By simulating different interventions, they found that the most effective plausible strategy would be to convert shrubland and scrubland into swamp forests, which would reduce fire incidence by 50%. If this were combined with blocking all of the drainage canals except the major ones, fires would decrease by 70% in total.
    However, such a strategy would have clear economic drawbacks. ‘The local community is in desperate need of long-term, stable cultivation to bolster the local economy,’ says Horton.
    An alternative strategy would be to establish more plantations, since well-managed plantations dramatically reduce the likelihood of fire. However, plantations are among the key drivers of forest loss, and Horton points out that ‘the plantations are mostly owned by larger corporations, often based outside Borneo, which means the profits aren’t directly fed back into the local economy beyond the provision of labour for the local workforce.’
    Ultimately, fire prevention strategies have to balance risks, benefits, and costs, and this research provides the information to do that, explains Professor Matti Kummu, who led the study team. ‘We tried to quantify how the different strategies would work. It’s more about informing policy-makers than providing direct solutions.’
    Story Source:
    Materials provided by Aalto University.


    A carbon footprint life cycle assessment can cut down on greenwashing

    Today, you can buy a pair of sneakers partially made from carbon dioxide pulled out of the atmosphere. But measuring the carbon-reduction benefits of making that pair of sneakers with CO2 is complex. There’s the fossil fuel that stayed in the ground, a definite carbon savings. But what about the energy cost of cooling the CO2 into liquid form and transporting it to a production facility? And what about when your kid outgrows the shoes in six months and they can’t be recycled into a new product because those systems aren’t in place yet?

    As companies try to reduce their carbon footprint, many are doing life cycle assessments (LCAs) to quantify the full carbon cost of products, from procurement of materials to energy use in manufacturing to product transport to user behavior and end-of-life disposal. It’s a mind-bogglingly difficult metric, but such bean-counting is needed to hold the planet to a livable temperature, says low-carbon systems expert Andrea Ramirez Ramirez of the Delft University of Technology in the Netherlands.

    Carbon accounting is easy to get wrong, she says. Differences in starting points for determining a product’s “lifetime” or assumptions about the energy sources can all affect the math.

    Carbon use can be reduced at many points along the production chain—by using renewable energy in the manufacturing process, for instance, or by adding atmospheric CO2 to the product. But if other points along the chain are energy-intensive or emit CO2, she notes, the final tally may show a positive rather than a negative number.
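    As a toy illustration of that tally, summing signed emissions across life cycle stages shows how a product that incorporates captured CO2 can still come out net positive. All stage names and numbers below are invented for the example:

```python
# Toy life cycle tally (all numbers invented): negative entries remove CO2
# from the atmosphere, positive entries emit it.
stages_kg_co2 = {
    "captured CO2 locked into product": -1.0,
    "CO2 liquefaction and cooling":     +0.4,
    "transport to factory":             +0.2,
    "manufacturing energy":             +0.6,
    "end-of-life disposal":             +0.1,
}

total = sum(stages_kg_co2.values())
print(f"net footprint = {total:+.1f} kg CO2")  # +0.3 -> a net emitter
print("carbon negative" if total < 0 else "not carbon negative")
```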

    A product is carbon negative only when its production actually removes carbon from the environment, temporarily or permanently. The Global CO2 Initiative, with European and American universities, has created a set of LCA guidelines to standardize measurement so that carbon accounting is consistent and terms such as “carbon neutral” or “carbon negative” have a verifiable meaning.

    In the rush to create products that can be touted as fighting climate change, however, some firms have been accused of “greenwashing” – making products or companies appear more environmentally friendly than they really are. Examples of greenwashing, according to a March 2022 analysis by mechanical engineers Grant Faber and Volker Sick of the University of Michigan in Ann Arbor, include labeling plastic garbage bags as recyclable when their whole purpose is to be thrown away; using labels such as “eco-friendly” or “100% Natural” without official certification; and claiming a better carbon footprint without acknowledging the existence of even better choices. An example is “fuel-efficient” sport utility vehicles, which are fuel efficient only when compared with other SUVs rather than with smaller cars, public transit or bicycles.

    Good LCA analysis, Sick says, can distinguish companies that are carbon-friendly in name only from those that are truly helping the world clear the air.