More stories

  • Artificial Intelligence learns better when distracted

    How should you train your AI system? This question is pertinent, because many deep learning systems are still black boxes. Computer scientists from the Netherlands and Spain have now determined how a deep learning system well suited for image recognition learns to recognize its surroundings. They were able to simplify the learning process by forcing the system’s focus toward secondary characteristics.
    Convolutional Neural Networks (CNNs) are a form of bio-inspired deep learning in artificial intelligence. The interaction of thousands of ‘neurons’ mimics the way our brain learns to recognize images. ‘These CNNs are successful, but we don’t fully understand how they work’, says Estefanía Talavera Martinez, lecturer and researcher at the Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence of the University of Groningen in the Netherlands.
    Food
    She has made use of CNNs herself to analyse images made by wearable cameras in the study of human behaviour. Among other things, Talavera Martinez has been studying our interactions with food, so she wanted the system to recognize the different settings in which people encounter food. ‘I noticed that the system made errors in the classification of some pictures and needed to know why this happened.’
    By using heat maps, she analysed which parts of the images were used by the CNNs to identify the setting. ‘This led to the hypothesis that the system was not looking at enough details’, she explains. For example, if an AI system has taught itself to use mugs to identify a kitchen, it will wrongly classify living rooms, offices and other places where mugs are used. The solution developed by Talavera Martinez and her colleagues David Morales (Andalusian Research Institute in Data Science and Computational Intelligence, University of Granada) and Beatriz Remeseiro (Department of Computer Science, Universidad de Oviedo), both in Spain, is to distract the system from its primary targets.
    Blurred
    They trained CNNs using a standard image set of planes or cars and identified through heat maps which parts of the images were used for classification. Then, these parts were blurred in the image set, which was then used for a second round of training. ‘This forces the system to look elsewhere for identifiers. And by using this extra information, it becomes more fine-grained in its classification.’
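    The paper’s code is not reproduced here, but the blur-and-retrain idea can be sketched in a few lines of PyTorch. The heat-map function, threshold and blur size below are illustrative assumptions, not the authors’ implementation.

```python
# Sketch of heat-map-guided blurring followed by a second round of training.
# Assumes a trained CNN `model`, a loader of (image, label) batches, and a
# saliency function such as Grad-CAM that returns maps in [0, 1] with shape
# (batch, 1, H, W). Names and the 0.6 threshold are hypothetical.
import torch
import torchvision.transforms.functional as TF

def blur_salient_regions(images, heatmaps, threshold=0.6, kernel_size=15):
    """Blur only the image regions the CNN relied on most."""
    blurred = TF.gaussian_blur(images, kernel_size=kernel_size)
    mask = (heatmaps > threshold).float()        # 1 where the CNN was looking
    return mask * blurred + (1 - mask) * images

def second_training_round(model, loader, compute_heatmap, optimizer, loss_fn):
    model.train()
    for images, labels in loader:
        with torch.no_grad():
            heatmaps = compute_heatmap(model, images)   # e.g. Grad-CAM
        distracted = blur_salient_regions(images, heatmaps)
        optimizer.zero_grad()
        loss = loss_fn(model(distracted), labels)       # forces new identifiers
        loss.backward()
        optimizer.step()
```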
    The approach worked well in the standard image sets, and was also successful in the images Talavera Martinez had collected herself from the wearable cameras. ‘Our training regime gives us results similar to other approaches, but is much simpler and requires less computing time.’ Previous attempts to increase fine-grained classification included combining different sets of CNNs. The approach developed by Talavera Martinez and her colleagues is much more lightweight. ‘This study gave us a better idea of how these CNNs learn, and that has helped us to improve the training program.’
    Story Source:
    Materials provided by University of Groningen. Note: Content may be edited for style and length.

  • New information storage and processing device

    A team of scientists has developed a means to create a new type of memory, marking a notable breakthrough in the increasingly sophisticated field of artificial intelligence.
    “Quantum materials hold great promise for improving the capacities of today’s computers,” explains Andrew Kent, a New York University physicist and one of the senior investigators. “The work draws upon their properties in establishing a new structure for computation.”
    The creation, designed in partnership with researchers from the University of California, San Diego (UCSD) and the University of Paris-Saclay, is reported in Scientific Reports.
    “Since conventional computing has reached its limits, new computational methods and devices are being developed,” adds Ivan Schuller, a UCSD physicist and one of the paper’s authors. “These have the potential of revolutionizing computing and in ways that may one day rival the human brain.”
    In recent years, scientists have sought to make advances in what is known as “neuromorphic computing” — a process that seeks to mimic the functionality of the human brain. Because of its human-like characteristics, it may offer more efficient and innovative ways to process data, using approaches not achievable with existing computational methods.
    In the Scientific Reports work, the researchers created a new device that marks major progress in this area.
    To do so, they built a nanoconstriction spintronic resonator to manipulate known physical properties in innovative ways.
    Resonators are capable of generating and storing waves of well-defined frequencies, akin to the sound box of a string instrument. Here, the scientists constructed a new type of resonator, one capable of storing and processing information much as synapses and neurons do in the brain. The one described in Scientific Reports combines the unique properties of quantum materials with those of spintronic magnetic devices.
    Spintronic devices are electronics that use an electron’s spin in addition to its electrical charge to process information, reducing energy use while increasing storage and processing capacity relative to more traditional approaches. One widely used such device, the “spin torque oscillator,” operates at a specific frequency. Combining it with a quantum material allows that frequency to be tuned, broadening its applicability considerably.
    “This is a fundamental advance that has applications in computing, particularly in neuromorphic computing, where such resonators can serve as connections among computing components,” observes Kent.
    Story Source:
    Materials provided by New York University. Note: Content may be edited for style and length.

  • Adapting roots to a hotter planet could ease pressure on food supply

    The shoots of plants get all of the glory, with their fruit and flowers and visible structure. But it’s the portion that lies below the soil — the branching, reaching arms of roots and hairs pulling up water and nutrients — that interests plant physiologist and computer scientist Alexander Bucksch, associate professor of Plant Biology at the University of Georgia.
    The health and growth of the root system has deep implications for our future.
    Our ability to grow enough food to support the population despite a changing climate, and to fix carbon from the atmosphere into the soil, is critical to our survival and that of other species. The solutions, Bucksch believes, lie in the qualities of roots.
    “When there is a problem in the world, humans can move. But what does the plant do?” he asked. “It says, ‘Let’s alter our genome to survive.’ It evolves.”
    Until recently, farmers and plant breeders didn’t have a good way to gather information about the root system of plants, or make decisions about the optimal seeds to grow deep roots.
    In a paper published this month in Plant Physiology, Bucksch and colleagues introduce DIRT/3D (Digital Imaging of Root Traits), an image-based 3D root phenotyping platform that can measure 18 architecture traits from mature field-grown maize root crowns excavated using the Shovelomics technique.
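    The paper itself is the reference for how DIRT/3D computes its 18 traits; purely as an illustration of image-based trait extraction, two very simple geometric measurements can be read off a reconstructed root-crown point cloud. The function, traits and units below are hypothetical examples, not part of DIRT/3D.

```python
# Toy illustration of extracting simple geometric traits from a reconstructed
# 3D root-crown point cloud (an N x 3 array of x, y, z coordinates in cm).
# This is not the DIRT/3D implementation; the two traits are only examples
# of the kind of architecture measurements such a platform reports.
import numpy as np

def simple_root_traits(points):
    depth = np.ptp(points[:, 2])                              # rooting depth of the crown
    spread = max(np.ptp(points[:, 0]), np.ptp(points[:, 1]))  # maximum crown spread
    return {"depth_cm": float(depth), "spread_cm": float(spread)}

# Example with a random stand-in point cloud:
print(simple_root_traits(np.random.rand(1000, 3) * 30))
```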

  • Systems intelligent organizations succeed – regardless of structures

    Matrix, process, or something else? The structure of an organisation is of little significance for its success as long as there is systems intelligence, according to a study conducted by doctoral student Juha Törmänen together with Professor Esa Saarinen and Professor Emeritus Raimo P. Hämäläinen, based on a survey of 470 British and US citizens in 2018-2019.
    Systems Intelligence is a concept created by Saarinen and Hämäläinen that connects human sensitivity with engineering thinking and takes the comprehensive interaction between individuals and their environments into account. It examines people and organisations through systemic perception, attitude, reflection, positive engagement, attunement, spirited discovery, wise action, and effective responsiveness.
    The researchers ascertained how well these different dimensions of Systems Intelligence explained an organisation’s success.
    ‘A systems intelligent organisation is successful. By its nature, a systems intelligent organisation is also one that is capable of learning and development. The employees of a systems intelligent organisation have models of behaviour and action, which enable learning’, Törmänen says.
    About 60 percent of the respondents were employees and about 40 percent were in a managerial or leadership position. The respondents evaluated statements about their organisations, such as whether people there are warm and accepting of one another and whether they bring out the best in others. The metrics of Systems Intelligence were compared with the Dimensions of the Learning Organization Questionnaire (DLOQ), the most common scale used for evaluating learning organisations. Questions from both metrics were placed randomly in the questionnaire. In addition, respondents were asked to choose from among ten options on how successful the organisations that they represent are in their fields.
    The study revealed that Systems Intelligence and the DLOQ are approximately equally good at explaining an organisation’s success. Respondents who rated their organisation as highly successful typically also gave it higher marks both on the dimensions of Systems Intelligence and on the different areas of the DLOQ.

  • Dark mode may not save your phone's battery life as much as you think, but there are a few silver linings

    When Android and Apple operating system updates started giving users the option to put their smartphones in dark mode, the feature showed potential for saving battery life on newer phones, whose screens allow darker-colored pixels to use less power than lighter-colored pixels.
    But dark mode is unlikely to make a big difference to battery life with the way that most people use their phones on a daily basis, says a new study by Purdue University researchers.
    That doesn’t mean that dark mode can’t be helpful, though.
    “When the industry rushed to adopt dark mode, it didn’t have the tools yet to accurately measure power draw by the pixels,” said Charlie Hu, Purdue’s Michael and Katherine Birck Professor of Electrical and Computer Engineering. “But now we’re able to give developers the tools they need to give users more energy-efficient apps.”
    Based on findings made with the tools they built, the researchers clarify the effects of dark mode on battery life and recommend ways that users can already take better advantage of the feature’s power savings.
    The study looked at six of the most-downloaded apps on Google Play: Google Maps, Google News, Google Phone, Google Calendar, YouTube and Calculator. The researchers analyzed how dark mode affects 60 seconds of activity within each of these apps on the Pixel 2, Moto Z3, Pixel 4 and Pixel 5.
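    The Purdue per-pixel power tools are not shown here, but the effect they measure rests on a simple property of OLED-style screens: each pixel emits its own light, so darker pixels draw less power. A deliberately crude way to compare the relative display energy of a light frame and a dark frame follows; the linear brightness-to-power assumption and the synthetic frames are illustrative, not the researchers’ calibrated per-phone model.

```python
# Crude illustration: on an OLED panel, per-pixel power grows with brightness,
# so a frame's mean brightness is a rough proxy for relative display power.
# This is NOT the Purdue tool, which calibrates power per phone and per colour;
# the linear assumption here is only for intuition.
import numpy as np

def relative_display_power(rgb_array):
    """rgb_array: H x W x 3 uint8 screenshot; mean brightness as a power proxy."""
    return rgb_array.astype(float).mean() / 255.0

# Synthetic stand-ins: a mostly white light-mode frame vs a mostly black dark one.
light_frame = np.full((1080, 2160, 3), 235, dtype=np.uint8)
dark_frame = np.full((1080, 2160, 3), 20, dtype=np.uint8)

ratio = relative_display_power(dark_frame) / relative_display_power(light_frame)
print(f"dark frame draws roughly {ratio:.0%} of the light frame's display power")
```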

  • Machine learning fuels personalized cancer medicine

    The Biomedical Genomics laboratory at IRB Barcelona has developed a computational tool that identifies cancer driver mutations for each tumour type. This and other developments produced by the same lab seek to accelerate cancer research and provide tools to help oncologists choose the best treatment for each patient. The study has been published in the journal Nature.
    Each tumour — each patient — accumulates many mutations, but not all of them are relevant for the development of cancer. Researchers led by ICREA researcher Dr. Núria López-Bigas at IRB Barcelona have developed a tool, based on machine learning methods, that evaluates the potential contribution of all possible mutations in a gene in a given type of tumour to the development and progression of cancer.
    In previous work that is already available to the scientific and medical community, the laboratory developed a method to identify those genes responsible for the onset, progression, and spread of cancer. “BoostDM goes further: it simulates each possible mutation within each gene for a specific type of cancer and indicates which ones are key in the cancer process. This information helps us to understand how a tumour is caused at the molecular level and it can facilitate medical decisions regarding the most appropriate therapy for a patient,” explains Dr. López-Bigas, head of the Biomedical Genomics lab. In addition, the tool will contribute to a better understanding of the initial processes of tumour development in different tissues.
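    The published models are not reproduced here, but the general shape of the approach, scoring every possible mutation in a gene for one tumour type with a boosting-based classifier trained on mutational features, can be sketched. The features, labels and threshold below are placeholders, not BoostDM’s actual inputs or outputs.

```python
# Sketch of a per-gene, per-tumour-type driver-mutation classifier in the
# spirit of BoostDM. Features, labels and the 0.9 cut-off are placeholders;
# they are not the published model or its training data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Each row describes one observed mutation, e.g. position in the gene,
# consequence type, conservation score, overlap with a functional domain...
X_train = rng.random((500, 4))
y_train = (X_train[:, 0] + X_train[:, 2] > 1).astype(int)   # 1 = driver-like

clf = GradientBoostingClassifier().fit(X_train, y_train)

# Enumerate a feature vector for every possible mutation in the gene
# (in silico saturation), then score them all.
X_all_possible = rng.random((3000, 4))
driver_probability = clf.predict_proba(X_all_possible)[:, 1]
print(f"{(driver_probability > 0.9).sum()} of {len(X_all_possible)} "
      f"possible mutations look driver-like")
```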
    The new tool has been integrated into the IntOGen platform, developed by the same group and designed to be used by the scientific and medical community in research projects, and into the Cancer Genome Interpreter, also developed by this group and which is more focused on clinical decision-making by medical oncologists.
    BoostDM currently works with the mutational profiles of 28,000 genomes analysed from 66 types of cancer. The scope of BoostDM will grow as a result of the foreseeable increase in publicly accessible cancer genomes.
    An advance founded on evolutionary biology
    To identify the mutations involved in cancer, the scientists drew on a key concept in evolution: positive selection. Mutations that drive the growth and development of cancer appear across tumour samples in higher numbers than would be expected if mutations occurred purely at random.
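    A back-of-the-envelope reading of positive selection is to compare how often a mutation is actually observed across tumour samples with how often the background mutation rate alone would predict. The counts below are invented for illustration and are not from the study.

```python
# Toy positive-selection check: is a specific mutation seen in more tumour
# samples than the background (neutral) mutation rate would predict?
# All numbers are invented for illustration.
from scipy.stats import poisson

observed = 40          # samples carrying this exact mutation
expected = 3.2         # samples expected under the background rate
excess = observed / expected                     # fold enrichment
p_value = poisson.sf(observed - 1, expected)     # P(X >= observed) under neutrality
print(f"{excess:.1f}x excess, p = {p_value:.2e}")
```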

  • Now in 3D: Deep learning techniques help visualize X-ray data in three dimensions

    Computers have been able to quickly process 2D images for some time. Your cell phone can snap digital photographs and manipulate them in a number of ways. Much more difficult, however, is processing an image in three dimensions, and doing it in a timely manner. The mathematics are more complex, and crunching those numbers, even on a supercomputer, takes time.
    That’s the challenge a group of scientists from the U.S. Department of Energy’s (DOE) Argonne National Laboratory is working to overcome. Artificial intelligence has emerged as a versatile solution to the issues posed by big data processing. For scientists who use the Advanced Photon Source (APS), a DOE Office of Science User Facility at Argonne, to process 3D images, it may be the key to turning X-ray data into visible, understandable shapes at a much faster rate. A breakthrough in this area could have implications for astronomy, electron microscopy and other areas of science dependent on large amounts of 3D data.
    The research team, which includes scientists from three Argonne divisions, has developed a new computational framework called 3D-CDI-NN, and has shown that it can create 3D visualizations from data collected at the APS hundreds of times faster than traditional methods can. The team’s research was published in Applied Physics Reviews, a publication of the American Institute of Physics.
    CDI stands for coherent diffraction imaging, an X-ray technique that involves bouncing ultra-bright X-ray beams off samples. Those beams are then collected by detectors as data, and it takes some computational effort to turn that data into images. Part of the challenge, explains Mathew Cherukara, leader of the Computational X-ray Science group in Argonne’s X-ray Science Division (XSD), is that the detectors only capture some of the information from the beams.
    But there is important information contained in the missing data, and scientists rely on computers to fill it in. As Cherukara notes, while this takes some time in 2D, it takes even longer with 3D images. The solution, then, is to train an artificial intelligence to recognize objects, and the microscopic changes they undergo, directly from the raw data, without having to fill in the missing information.
    To do this, the team started with simulated X-ray data to train the neural network (the ‘NN’ in the framework’s name). A neural network is a series of algorithms that can teach a computer to predict outcomes based on the data it receives. Henry Chan, the lead author on the paper and a postdoctoral researcher in the Center for Nanoscale Materials (CNM), a DOE Office of Science User Facility at Argonne, led this part of the work.
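    The published network is described in Applied Physics Reviews; as a minimal sketch of the general idea, a small 3D convolutional encoder-decoder can be trained on simulated (diffraction, structure) pairs to map a raw diffraction volume straight to a real-space 3D image. The layer sizes and volume shape below are guesses, not the 3D-CDI-NN architecture.

```python
# Illustrative 3D encoder-decoder mapping a raw coherent-diffraction volume
# directly to a real-space 3D reconstruction, trained on simulated pairs.
# Layer sizes and the 64^3 volume shape are guesses, not 3D-CDI-NN.
import torch
import torch.nn as nn

class DiffractionTo3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),            # 64 -> 32
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),           # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),              # 32 -> 64
        )

    def forward(self, diffraction):                  # (batch, 1, 64, 64, 64)
        return self.decoder(self.encoder(diffraction))

model = DiffractionTo3D()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on a simulated pair (random stand-ins here).
sim_diffraction = torch.rand(2, 1, 64, 64, 64)
sim_structure = torch.rand(2, 1, 64, 64, 64)
loss = loss_fn(model(sim_diffraction), sim_structure)
loss.backward()
optimizer.step()
```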

  • Researchers demonstrate technique for recycling nanowires in electronics

    Researchers at North Carolina State University demonstrated a low-cost technique for retrieving nanowires from electronic devices that have reached the end of their utility and then using those nanowires in new devices. The work is a step toward more sustainable electronics.
    “There is a lot of interest in recycling electronic materials because we want to both reduce electronic waste and maximize the use we get out of rare or costly materials,” says Yuxuan Liu, first author of a paper on the work and a Ph.D. student at NC State. “We’ve demonstrated an approach that allows us to recycle nanowires, and that we think could be extended to other nanomaterials — including nanomaterials containing noble and rare-earth elements.”
    “Our recycling technique differs from conventional recycling,” says Yong Zhu, corresponding author of the paper and the Andrew A. Adams Distinguished Professor of Mechanical and Aerospace Engineering at NC State. “When you think about recycling a glass bottle, it is completely melted down before being used to create another glass object. In our approach, a silver nanowire network is separated from the rest of the materials in a device. That network is then disassembled into a collection of separate silver nanowires in solution. Those nanowires can then be used to create a new network and incorporated into a new sensor or other devices.”
    The new recycling technique takes into account the entire life cycle of a device. The first step is to design devices using polymers that are soluble in solvents that will not also dissolve the nanowires. Once a device has been used, the polymer matrix containing the silver nanowires is dissolved, leaving behind the nanowire network. The network is then placed in a separate solvent and hit with ultrasound. This disperses the nanowires, separating them out of the network.
    In a proof-of-concept demonstration, the researchers created a wearable health sensor patch that could be used to track a patient’s temperature and hydration. The sensor consisted of silver nanowire networks embedded in a polymer material. The researchers tested the sensors to ensure that they were fully functional. Once used, a sensor patch is normally discarded.
    But for their demonstration, the researchers dissolved the polymer in water, removed the nanowire network, broke it down into a collection of individual nanowires, and then used those nanowires to create a brand-new wearable sensor. While there was minor degradation in the properties of the nanowire network after each “life cycle,” the researchers found that the nanowires could be recycled four times without harming the sensor’s performance.