More stories

  • New research infuses equity principles into the algorithm development process

    In the U.S., where a person is born, their social and economic background, the neighborhoods in which they spend their formative years, and where they grow old account for 25% to 60% of deaths in any given year, partly because these forces play a significant role in the occurrence and outcomes of heart disease, cancer, unintentional injuries, chronic lower respiratory diseases, and cerebrovascular diseases, the five leading causes of death.
    While data on such “macro” factors is critical to tracking and predicting health outcomes for individuals and communities, analysts who apply machine-learning tools to health outcomes tend to rely on “micro” data confined to clinical settings and driven by data and processes inside the hospital, leaving the factors that could shed light on healthcare disparities in the dark.
    Researchers at the NYU Tandon School of Engineering and the NYU School of Global Public Health (NYU GPH) aim to activate the machine learning community to account for these “macro” factors and their impact on health in a new perspective, “Machine learning and algorithmic fairness in public and population health,” published in Nature Machine Intelligence. Thinking outside the clinical “box” and beyond the strict limits of individual factors, Rumi Chunara, associate professor of computer science and engineering at NYU Tandon and of biostatistics at NYU GPH, developed a new approach to incorporating this larger web of relevant data into predictive modeling of individual and community health outcomes.
    “Research of what causes and reduces equity shows that to avoid creating more disparities it is essential to consider upstream factors as well,” explained Chunara. She noted, on the one hand, the large body of work on AI and machine learning implementation in healthcare in areas like image analysis, radiography, and pathology, and on the other the strong awareness and advocacy focused on such areas as structural racism, police brutality, and healthcare disparities that came to light around the COVID-19 pandemic.
    “Our goal is to take that work and the explosion of data-rich machine learning in healthcare, and create a holistic view beyond the clinical setting, incorporating data about communities and the environment.”
    Chunara, along with her doctoral students Vishwali Mhasawade and Yuan Zhao, at NYU Tandon and NYU GPH, respectively, leveraged the Social Ecological Model, a framework for understanding how the health, habits, and behavior of an individual are affected by factors such as public policies at the national and international levels and the availability of health resources within a community and neighborhood. The team shows how principles of this model can guide algorithm development so that algorithms are designed and used more equitably.
    The researchers organized existing work into a taxonomy of the tasks for which machine learning and AI are used in this space (prediction, interventions, identifying effects, and allocation) and gave examples of how a multi-level perspective can be leveraged for each. The authors also show how the same framework applies to considerations of data privacy, governance, and best practices that shift the burden of healthcare away from individuals and toward improving equity.
    As an example of such approaches, members of the same team recently presented “causal multi-level fairness” at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society: a new approach that draws on this larger web of relevant attributes when assessing the fairness of algorithms. The work builds on the field of “algorithmic fairness,” which to date has focused almost exclusively on individual-level attributes such as gender and race.
    In this work, Mhasawade and Chunara formalized a novel approach to understanding fairness relationships using tools from causal inference, giving investigators a means to assess and account for the effects of sensitive macro-level attributes, not merely individual-level factors. They developed an algorithm for the approach, specified the settings under which it is applicable, and illustrated the method on data, showing how predictions based merely on labels such as race, income, and gender are of limited value if sensitive attributes are not accounted for, or are accounted for without proper context.
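    To make the distinction concrete, here is a minimal sketch, on synthetic data with hypothetical variable names, of why conditioning on a macro-level factor changes a fairness assessment. It illustrates the general idea only, not the authors’ algorithm.

    ```python
    # Illustrative sketch (synthetic data, hypothetical names; not the authors'
    # algorithm): a naive group-fairness gap versus one computed within levels
    # of a macro-level factor such as neighborhood resources.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Assumed causal structure: the macro-level factor influences both the
    # sensitive attribute's distribution and the model's predictions.
    neighborhood = rng.integers(0, 2, size=n)                       # 0 = low-resource
    group = rng.binomial(1, np.where(neighborhood == 1, 0.7, 0.3))  # sensitive attribute
    prediction = rng.binomial(1, np.where(neighborhood == 0, 0.7, 0.2))  # model output

    def positive_rate(mask):
        return prediction[mask].mean()

    # Naive demographic-parity gap: the model looks unfair across groups...
    naive_gap = abs(positive_rate(group == 1) - positive_rate(group == 0))
    print(f"naive gap, macro level ignored: {naive_gap:.3f}")

    # ...but within each macro level the gap nearly vanishes: the disparity is
    # carried by neighborhood-level disadvantage, not the attribute itself.
    for nb in (0, 1):
        gap = abs(positive_rate((group == 1) & (neighborhood == nb))
                  - positive_rate((group == 0) & (neighborhood == nb)))
        print(f"neighborhood={nb}: within-level gap = {gap:.3f}")
    ```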
    “As in healthcare, algorithmic fairness tends to be focused on labels — men and women, Black versus white, etc. — without considering multiple layers of influence from a causal perspective to decide what is fair and unfair in predictions,” said Chunara. “Our work presents a framework for thinking not only about equity in algorithms but also what types of data we use in them.”

  • Artificial Intelligence learns better when distracted

    How should you train your AI system? This question is pertinent because many deep learning systems are still black boxes. Computer scientists from the Netherlands and Spain have now determined how a deep learning system well suited for image recognition learns to recognize its surroundings. They were able to simplify the learning process by forcing the system’s focus toward secondary characteristics.
    Convolutional Neural Networks (CNNs) are a form of bio-inspired deep learning in artificial intelligence. The interaction of thousands of ‘neurons’ mimics the way our brain learns to recognize images. ‘These CNNs are successful, but we don’t fully understand how they work’, says Estefanía Talavera Martinez, lecturer and researcher at the Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence of the University of Groningen in the Netherlands.
    Food
    She has made use of CNNs herself to analyse images made by wearable cameras in the study of human behaviour. Among other things, Talavera Martinez has been studying our interactions with food, so she wanted the system to recognize the different settings in which people encounter food. ‘I noticed that the system made errors in the classification of some pictures and needed to know why this happened.’
    By using heat maps, she analysed which parts of the images were used by the CNNs to identify the setting. ‘This led to the hypothesis that the system was not looking at enough details’, she explains. For example, if an AI system has taught itself to use mugs to identify a kitchen, it will wrongly classify living rooms, offices and other places where mugs are used. The solution that was developed by Talavera Martinez and her colleagues David Morales (Andalusian Research Institute in Data Science and Computational Intelligence, University of Granada) and Beatriz Remeseiro (Department of Computer Science, Universidad de Oviedo), both in Spain, is to distract the system from its primary targets.
    Blurred
    They trained CNNs using a standard image set of planes or cars and identified through heat maps which parts of the images were used for classification. Then, these parts were blurred in the image set, which was then used for a second round of training. ‘This forces the system to look elsewhere for identifiers. And by using this extra information, it becomes more fine-grained in its classification.’
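    A minimal sketch of this two-round idea, assuming PyTorch and using simple input-gradient saliency as a stand-in for the heat maps (the team’s exact attribution method may differ):

    ```python
    # Sketch of heat-map-guided retraining: find the pixels the trained model
    # relies on, blur them, and train again on the blurred set.
    import torch
    import torch.nn.functional as F

    def saliency(model, images, labels):
        """Gradient-based heat map: which pixels drive the current prediction."""
        images = images.clone().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        return images.grad.abs().mean(dim=1, keepdim=True)  # (N, 1, H, W)

    def blur_salient_regions(images, heat, top_fraction=0.2):
        """Blur the most-attended pixels so a second round of training must
        look elsewhere for identifying features."""
        k = int(top_fraction * heat[0].numel())
        thresh = heat.flatten(1).topk(k, dim=1).values[:, -1].view(-1, 1, 1, 1)
        mask = (heat >= thresh).float()                          # 1 where attention is high
        blurred = F.avg_pool2d(images, 9, stride=1, padding=4)   # cheap box blur
        return mask * blurred + (1 - mask) * images

    # Round 1: train `model` normally. Round 2 (hypothetical usage):
    # heat = saliency(model, images, labels)
    # images_round2 = blur_salient_regions(images, heat)
    ```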
    The approach worked well on the standard image sets, and was also successful with the images Talavera Martinez had collected from the wearable cameras. ‘Our training regime gives us results similar to other approaches, but is much simpler and requires less computing time.’ Previous attempts to increase fine-grained classification included combining different sets of CNNs; the approach developed by Talavera Martinez and her colleagues is much more lightweight. ‘This study gave us a better idea of how these CNNs learn, and that has helped us to improve the training program.’
    Story Source:
    Materials provided by University of Groningen.

  • New information storage and processing device

    A team of scientists has developed a means to create a new type of memory, marking a notable breakthrough in the increasingly sophisticated field of artificial intelligence.
    “Quantum materials hold great promise for improving the capacities of today’s computers,” explains Andrew Kent, a New York University physicist and one of the senior investigators. “The work draws upon their properties in establishing a new structure for computation.”
    The creation, designed in partnership with researchers from the University of California, San Diego (UCSD) and the University of Paris-Saclay, is reported in Scientific Reports.
    “Since conventional computing has reached its limits, new computational methods and devices are being developed,” adds Ivan Schuller, a UCSD physicist and one of the paper’s authors. “These have the potential of revolutionizing computing and in ways that may one day rival the human brain.”
    In recent years, scientists have sought to make advances in what is known as “neuromorphic computing” — a process that seeks to mimic the functionality of the human brain. Because of its human-like characteristics, it may offer more efficient and innovative ways to process data, through approaches not achievable with existing computational methods.
    In the Scientific Reports work, the researchers created a new device that builds on major progress already made in this area.
    To do so, they built a nanoconstriction spintronic resonator to manipulate known physical properties in innovative ways.
    Resonators are capable of generating and storing waves of well-defined frequencies — akin to the sound box of a string instrument. Here, the scientists constructed a new type of resonator — one capable of storing and processing information much as synapses and neurons do in the brain. The device described in Scientific Reports combines the unique properties of quantum materials with those of spintronic magnetic devices.
    Spintronic devices are electronics that use an electron’s spin in addition to its electrical charge to process information, in ways that reduce energy consumption while increasing storage and processing capacity relative to more traditional approaches. One widely used such device, the “spin torque oscillator,” operates at a specific frequency; combining it with a quantum material allows that frequency to be tuned, broadening its applicability considerably.
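    As a loose analogy for what tunability buys, consider a toy model (not the device physics from the paper): a damped, driven resonator responds most strongly near its natural frequency, and shifting a single parameter, standing in here for the quantum material’s tunable state, moves that peak.

    ```python
    # Toy analogy: the amplitude of a damped, driven oscillator peaks near its
    # natural frequency; changing that frequency re-tunes the resonator.
    import numpy as np

    def response_amplitude(drive_freq, natural_freq, damping=0.05):
        """Steady-state amplitude of a damped, driven harmonic oscillator."""
        return 1.0 / np.sqrt((natural_freq**2 - drive_freq**2)**2
                             + (2 * damping * natural_freq * drive_freq)**2)

    freqs = np.linspace(0.5, 2.0, 500)
    for natural in (1.0, 1.3):  # 'tuning' shifts where the response peaks
        peak = freqs[np.argmax(response_amplitude(freqs, natural))]
        print(f"natural frequency {natural}: response peaks near {peak:.2f}")
    ```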
    “This is a fundamental advance that has applications in computing, particularly in neuromorphic computing, where such resonators can serve as connections among computing components,” observes Kent.
    Story Source:
    Materials provided by New York University.

  • Adapting roots to a hotter planet could ease pressure on food supply

    The shoots of plants get all of the glory, with their fruit and flowers and visible structure. But it’s the portion that lies below the soil — the branching, reaching arms of roots and hairs pulling up water and nutrients — that interests plant physiologist and computer scientist Alexander Bucksch, associate professor of Plant Biology at the University of Georgia.
    The health and growth of the root system has deep implications for our future.
    Our ability to grow enough food to support the population despite a changing climate, and to fix carbon from the atmosphere in the soil, is critical to our survival and that of other species. The solutions, Bucksch believes, lie in the qualities of roots.
    “When there is a problem in the world, humans can move. But what does the plant do?” he asked. “It says, ‘Let’s alter our genome to survive.’ It evolves.”
    Until recently, farmers and plant breeders didn’t have a good way to gather information about the root systems of plants, or to choose the seeds best suited to growing deep roots.
    In a paper published this month in Plant Physiology, Bucksch and colleagues introduce DIRT/3D (Digital Imaging of Root Traits), an image-based 3D root phenotyping platform that can measure 18 architecture traits from mature field-grown maize root crowns excavated using the Shovelomics technique.

  • Systems intelligent organizations succeed – regardless of structures

    Matrix, process, or something else? The structure of an organisation is of little significance for its success as long as there is systems intelligence, according to a study conducted by doctoral student Juha Törmänen together with Professor Esa Saarinen and Professor Emeritus Raimo P. Hämäläinen, based on a survey of 470 British and US citizens in 2018-2019.
    Systems Intelligence is a concept created by Saarinen and Hämäläinen that connects human sensitivity with engineering thinking and takes the comprehensive interaction between individuals and their environments into account. It examines people and organisations through systemic perception, attitude, reflection, positive engagement, attunement, spirited discovery, wise action, and effective responsiveness.
    The researchers ascertained how well these different dimensions of Systems Intelligence explained an organisation’s success.
    ‘A systems intelligent organisation is successful. By its nature, a systems intelligent organisation is also one that is capable of learning and development. The employees of a systems intelligent organisation have models of behaviour and action, which enable learning’, Törmänen says.
    About 60 percent of the respondents were employees and about 40 percent were in a managerial or leadership position. The respondents evaluated statements about their organisations, such as whether people there are warm and accepting of one another and whether they bring out the best in others. The metrics of Systems Intelligence were compared with the Dimensions of the Learning Organization Questionnaire (DLOQ), the most common scale used for evaluating learning organisations. Questions from both metrics were placed randomly in the questionnaire. In addition, respondents were asked to choose from among ten options to rate how successful the organisations they represent are in their fields.
    The study revealed that Systems Intelligence and the DLOQ are approximately equally good at explaining an organisation’s success. Respondents who gave their organisation the highest success rating typically also gave it higher marks in both the dimensions of Systems Intelligence and the different areas of the DLOQ.
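    How one compares two questionnaires’ power to explain an outcome can be made concrete with a small sketch on synthetic data (illustrative only; not the study’s data or analysis code): fit a linear model per scale and compare the variance each explains.

    ```python
    # Toy comparison of two questionnaires as predictors of rated success.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 470  # matching the survey's sample size

    # Hypothetical respondent scores: 8 Systems Intelligence dimensions and
    # 7 DLOQ dimensions, plus a success rating they jointly influence.
    si = rng.normal(size=(n, 8))
    dloq = rng.normal(size=(n, 7))
    success = (5 + 0.3 * (si @ rng.normal(size=8))
                 + 0.3 * (dloq @ rng.normal(size=7))
                 + rng.normal(size=n))

    def r_squared(X, y):
        X1 = np.column_stack([np.ones(len(X)), X])  # add an intercept column
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        return 1 - resid.var() / y.var()

    print(f"R^2 (Systems Intelligence): {r_squared(si, success):.2f}")
    print(f"R^2 (DLOQ):                 {r_squared(dloq, success):.2f}")
    ```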

  • Dark mode may not save your phone's battery life as much as you think, but there are a few silver linings

    When Android and Apple operating system updates started giving users the option to put their smartphones in dark mode, the feature showed potential for saving the battery life of newer phones with screens that allow darker-colored pixels to use less power than lighter-colored pixels.
    But dark mode is unlikely to make a big difference to battery life with the way that most people use their phones on a daily basis, says a new study by Purdue University researchers.
    That doesn’t mean that dark mode can’t be helpful, though.
    “When the industry rushed to adopt dark mode, it didn’t have the tools yet to accurately measure power draw by the pixels,” said Charlie Hu, Purdue’s Michael and Katherine Birck Professor of Electrical and Computer Engineering. “But now we’re able to give developers the tools they need to give users more energy-efficient apps.”
    Based on findings from the tools they built, the researchers clarify how dark mode actually affects battery life and recommend ways that users can take better advantage of the feature’s power savings.
    The study looked at six of the most-downloaded apps on Google Play: Google Maps, Google News, Google Phone, Google Calendar, YouTube and Calculator. The researchers analyzed how dark mode affects 60 seconds of activity within each of these apps on the Pixel 2, Moto Z3, Pixel 4 and Pixel 5.
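    The underlying intuition is that on an OLED screen each lit subpixel draws power roughly in proportion to its brightness. A first-order sketch of that model (illustrative coefficients; not the Purdue team’s calibrated per-phone tool):

    ```python
    # Crude OLED power model: panel power scales with per-channel brightness,
    # so a mostly-dark screen draws far less than a mostly-white one.
    import numpy as np

    def oled_power_estimate(image, coeffs=(0.6, 1.0, 1.5), base=0.1):
        """Relative panel power for an RGB image with values in [0, 1].
        `coeffs` are illustrative per-channel weights (blue subpixels are
        typically least efficient); `base` is static display overhead."""
        r, g, b = image[..., 0], image[..., 1], image[..., 2]
        return base + coeffs[0] * r.mean() + coeffs[1] * g.mean() + coeffs[2] * b.mean()

    light_ui = np.full((100, 100, 3), 0.9)  # mostly-white app screen
    dark_ui = np.full((100, 100, 3), 0.1)   # the same screen in dark mode
    ratio = oled_power_estimate(light_ui) / oled_power_estimate(dark_ui)
    print(f"light/dark power ratio: {ratio:.1f}x")
    ```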

  • Machine learning fuels personalized cancer medicine

    The Biomedical Genomics laboratory at IRB Barcelona has developed a computational tool that identifies cancer driver mutations for each tumour type. This and other developments produced by the same lab seek to accelerate cancer research and provide tools to help oncologists choose the best treatment for each patient. The study has been published in the journal Nature.
    Each tumour — each patient — accumulates many mutations, but not all of them are relevant for the development of cancer. Researchers led by ICREA researcher Dr. Núria López-Bigas at IRB Barcelona have developed a tool, based on machine learning methods, that evaluates the potential contribution of all possible mutations in a gene in a given type of tumour to the development and progression of cancer.
    In previous work that is already available to the scientific and medical community, the laboratory developed a method to identify those genes responsible for the onset, progression, and spread of cancer. “BoostDM goes further: it simulates each possible mutation within each gene for a specific type of cancer and indicates which ones are key in the cancer process. This information helps us to understand how a tumour is caused at the molecular level and it can facilitate medical decisions regarding the most appropriate therapy for a patient,” explains Dr. López-Bigas, head of the Biomedical Genomics lab. In addition, the tool will contribute to a better understanding of the initial processes of tumour development in different tissues.
    The new tool has been integrated into the IntOGen platform, developed by the same group and designed to be used by the scientific and medical community in research projects, and into the Cancer Genome Interpreter, also developed by this group and which is more focused on clinical decision-making by medical oncologists.
    BoostDM currently works with the mutational profiles of 28,000 genomes analysed from 66 types of cancer. The scope of BoostDM will grow as a result of the foreseeable increase in publicly accessible cancer genomes.
    An advance founded on evolutionary biology
    To identify the mutations involved in cancer, the scientists drew on a key concept in evolution: positive selection. Mutations that drive the growth and development of cancer appear in greater numbers across tumour samples than would occur randomly.
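    A toy version of that excess-mutation test (hypothetical numbers; not BoostDM’s actual model) asks whether a gene carries more mutations across samples than a background mutation rate would produce by chance:

    ```python
    # Positive selection as excess mutations: compare the observed count in a
    # gene against the expectation under a uniform background rate.
    from scipy.stats import binomtest

    background_rate = 1e-6      # assumed per-base, per-sample mutation rate
    gene_length = 2_000         # bases in the hypothetical gene
    n_samples = 28_000          # genomes screened
    observed_mutations = 120    # mutations seen in this gene across samples

    expected = background_rate * gene_length * n_samples
    result = binomtest(observed_mutations, n_samples * gene_length,
                       background_rate, alternative='greater')
    print(f"expected by chance: {expected:.1f}, observed: {observed_mutations}")
    print(f"p-value for positive selection: {result.pvalue:.2e}")
    ```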

  • Now in 3D: Deep learning techniques help visualize X-ray data in three dimensions

    Computers have been able to quickly process 2D images for some time. Your cell phone can snap digital photographs and manipulate them in a number of ways. Much more difficult, however, is processing an image in three dimensions, and doing it in a timely manner. The mathematics is more complex, and crunching those numbers, even on a supercomputer, takes time.
    That’s the challenge a group of scientists from the U.S. Department of Energy’s (DOE) Argonne National Laboratory is working to overcome. Artificial intelligence has emerged as a versatile solution to the issues posed by big data processing. For scientists who use the Advanced Photon Source (APS), a DOE Office of Science User Facility at Argonne, to process 3D images, it may be the key to turning X-ray data into visible, understandable shapes at a much faster rate. A breakthrough in this area could have implications for astronomy, electron microscopy and other areas of science dependent on large amounts of 3D data.
    The research team, which includes scientists from three Argonne divisions, has developed a new computational framework called 3D-CDI-NN, and has shown that it can create 3D visualizations from data collected at the APS hundreds of times faster than traditional methods can. The team’s research was published in Applied Physics Reviews, a publication of the American Institute of Physics.
    CDI stands for coherent diffraction imaging, an X-ray technique that involves bouncing ultra-bright X-ray beams off samples. The scattered beams are then collected by detectors as data, and it takes some computational effort to turn that data into images. Part of the challenge, explains Mathew Cherukara, leader of the Computational X-ray Science group in Argonne’s X-ray Science Division (XSD), is that the detectors capture only some of the information from the beams.
    But important information is contained in the missing data, and scientists rely on computers to fill it in. As Cherukara notes, while this takes some time in 2D, it takes even longer with 3D images. The solution is to train an artificial intelligence to recognize objects, and the microscopic changes they undergo, directly from the raw data, without having to fill in the missing information.
    To do this, the team started with simulated X-ray data to train the neural network (the “NN” in the framework’s title), a series of algorithms that can teach a computer to predict outcomes based on the data it receives. Henry Chan, the lead author on the paper and a postdoctoral researcher in the Center for Nanoscale Materials (CNM), a DOE Office of Science User Facility at Argonne, led this part of the work.
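    In outline, generating such training pairs amounts to simulating objects, computing the diffraction intensities a detector would record, and discarding the phase. A minimal sketch of that setup (assumed NumPy; the real 3D-CDI-NN training data is far larger and more physically detailed):

    ```python
    # Build (input, target) pairs for a phase-retrieval network: diffraction
    # intensities in, the 3D objects that produced them out.
    import numpy as np

    def make_sample(shape=(32, 32, 32)):
        """A random blocky 3D 'crystal' used as a ground-truth object."""
        obj = np.zeros(shape)
        x, y, z = (np.random.randint(4, s - 12) for s in shape)
        obj[x:x+8, y:y+8, z:z+8] = 1.0
        return obj

    def diffraction_pattern(obj):
        """What the detector sees: the intensity (squared magnitude) of the
        object's Fourier transform. The phase, which the network must learn
        to recover, is discarded here, as it is in a real measurement."""
        return np.abs(np.fft.fftn(obj)) ** 2

    dataset = [(diffraction_pattern(o), o) for o in (make_sample() for _ in range(100))]
    ```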