More stories

  • Improving computer vision for AI

    Researchers from UTSA, the University of Central Florida (UCF), the Air Force Research Laboratory (AFRL) and SRI International have developed a new method that improves how artificial intelligence learns to see.
    Led by Sumit Jha, professor in the Department of Computer Science at UTSA, the team has changed the conventional approach to explaining machine learning decisions, which relies on a single injection of noise into the input layer of a neural network.
    The team shows that adding noise — also known as pixelation — along multiple layers of a network provides a more robust representation of an image that’s recognized by the AI and creates more robust explanations for AI decisions. This work aids in the development of what’s been called “explainable AI,” which seeks to enable high-assurance applications of AI such as medical imaging and autonomous driving.
    “It’s about injecting noise into every layer,” Jha said. “The network is now forced to learn a more robust representation of the input in all of its internal layers. If every layer experiences more perturbations in every training, then the image representation will be more robust and you won’t see the AI fail just because you change a few pixels of the input image.”
    Computer vision — the ability to recognize images — has many business applications. Computer vision can better identify areas of concern in the livers and brains of cancer patients. This type of machine learning can also be employed in many other industries. Manufacturers can use it to detect defect rates, drones can use it to help detect pipeline leaks, and agriculturists have begun using it to spot early signs of crop disease to improve their yields.
    Through deep learning, a computer is trained to perform behaviors, such as recognizing speech, identifying images or making predictions. Instead of organizing data to run through set equations, deep learning works within basic parameters about a data set and trains the computer to learn on its own by recognizing patterns using many layers of processing.
    The team’s work, led by Jha, is a major advance over previous work he has conducted in this field. In a 2019 paper presented at the AI Safety workshop co-located with that year’s International Joint Conference on Artificial Intelligence (IJCAI), Jha, his students and colleagues from Oak Ridge National Laboratory demonstrated how poor conditions in nature can lead to dangerous neural network performance. A computer vision system was asked to recognize a minivan on a road, and did so correctly. His team then added a small amount of fog and posed the same query again to the network: the AI identified the minivan as a fountain. The paper was a best paper candidate at the workshop.
    In most models that rely on neural ordinary differential equations (ODEs), a machine is trained with one input through one network, and the signal then spreads through the hidden layers to create one response in the output layer. This team of UTSA, UCF, AFRL and SRI researchers uses a more dynamic approach known as neural stochastic differential equations (SDEs). Exploiting the connection between neural networks and dynamical systems, they show that neural SDEs lead to less noisy, visually sharper, and quantitatively more robust attributions than those computed using neural ODEs.
    The SDE approach learns not just from one image but from a set of nearby images due to the injection of noise in multiple layers of the neural network. As more noise is injected, the machine will learn evolving approaches and find better ways to make explanations or attributions simply because the model created at the outset is based on evolving characteristics and/or the conditions of the image. It’s an improvement on several other attribution approaches, including saliency maps and integrated gradients.
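    To make the layer-wise idea concrete, here is a minimal sketch (not the authors’ code; it uses plain NumPy, and the layer sizes and noise scale are made-up assumptions) contrasting the conventional scheme, which perturbs only the input, with noise injected into the activations of every layer:

      import numpy as np

      rng = np.random.default_rng(0)

      def relu(x):
          return np.maximum(x, 0.0)

      # Toy 3-layer network; random weights stand in for trained ones.
      weights = [rng.standard_normal((64, 32)),
                 rng.standard_normal((32, 16)),
                 rng.standard_normal((16, 10))]

      def forward_input_noise(x, sigma=0.1):
          # Conventional scheme: perturb only the input, then propagate.
          h = x + sigma * rng.standard_normal(x.shape)
          for w in weights[:-1]:
              h = relu(h @ w)
          return h @ weights[-1]

      def forward_layerwise_noise(x, sigma=0.1):
          # SDE-style scheme: perturb the activations of every layer as well.
          h = x + sigma * rng.standard_normal(x.shape)
          for w in weights[:-1]:
              h = relu(h @ w)
              h = h + sigma * rng.standard_normal(h.shape)  # noise in this layer
          return h @ weights[-1]

      x = rng.standard_normal(64)
      print(forward_input_noise(x)[:3])
      print(forward_layerwise_noise(x)[:3])

    Training against the layer-wise perturbations is what pushes every internal layer toward a representation that does not change drastically when a few input pixels do.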
    Jha’s new research is described in the paper “On Smoother Attributions using Neural Stochastic Differential Equations.” Fellow contributors to this novel approach include UCF’s Rickard Ewetz, AFRL’s Alvaro Velasquez and SRI International’s Susmit Jha. The lab is funded by the Defense Advanced Research Projects Agency, the Office of Naval Research and the National Science Foundation. Their research will be presented at IJCAI 2021, a conference with about a 14% acceptance rate for submissions. Past presenters at this highly selective conference have included Facebook and Google.
    “I am delighted to share the fantastic news that our paper on explainable AI has just been accepted at IJCAI,” Jha added. “This is a big opportunity for UTSA to be part of the global conversation on how a machine sees.”
    Story Source:
    Materials provided by University of Texas at San Antonio. Original written by Milady Nazir. Note: Content may be edited for style and length.

  • The path to more human-like robot object manipulation skills

    What if a robot could organize your closet or chop your vegetables? A sous chef in every home could someday be a reality.
    However, while advances in artificial intelligence and machine learning have made better robotics possible, there is still quite a wide gap between what humans and robots can do. Closing that gap will require overcoming a number of obstacles in robot manipulation, or the ability of robots to manipulate environments and adapt to changing stimuli.
    Ph.D. candidate Jinda Cui and Jeff Trinkle, Professor and Chair of the Department of Computer Science and Engineering at Lehigh University, are interested in those challenges. They work in an area called learned robot manipulation, in which robots are “trained” through machine learning to manipulate objects and environments like humans do.
    “I’ve always felt that for robots to be really useful they have to pick stuff up, they have to be able to manipulate it and put things together and fix things, to help you off the floor and all that,” says Trinkle, who has conducted decades of research in robot manipulation and is well known for his pioneering work in simulating multibody systems under contact constraints. “It takes so many technical areas together to look at a problem like that.”
    “In robot manipulation, learning is a promising alternative to traditional engineering methods and has demonstrated great success, especially in pick-and-place tasks,” says Cui, whose work has been focused on the intersection of robot manipulation and machine learning. “Although many research questions still need to be answered, learned robot manipulation could potentially bring robot manipulators into our homes and businesses. Maybe we will see robots mopping our tables or organizing closets in the near future.”
    In a review article in Science Robotics called “Toward next-generation learned robot manipulation,” Cui and Trinkle summarize, compare and contrast research in learned robot manipulation through the lens of adaptability and outline promising research directions for the future.

  • Slender robotic finger senses buried items

    Over the years, robots have gotten quite good at identifying objects — as long as they’re out in the open.
    Discerning buried items in granular material like sand is a taller order. To do that, a robot would need fingers that were slender enough to penetrate the sand, mobile enough to wriggle free when sand grains jam, and sensitive enough to feel the detailed shape of the buried object.
    MIT researchers have now designed a sharp-tipped robot finger equipped with tactile sensing to meet the challenge of identifying buried objects. In experiments, the aptly named Digger Finger was able to dig through granular media such as sand and rice, and it correctly sensed the shapes of submerged items it encountered. The researchers say the robot might one day perform various subterranean duties, such as finding buried cables or disarming buried bombs.
    The research will be presented at the next International Symposium on Experimental Robotics. The study’s lead author is Radhen Patel, a postdoc in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Co-authors include CSAIL PhD student Branden Romero, Harvard University PhD student Nancy Ouyang, and Edward Adelson, the John and Dorothy Wilson Professor of Vision Science in CSAIL and the Department of Brain and Cognitive Sciences.
    Seeking to identify objects buried in granular material — sand, gravel, and other types of loosely packed particles — isn’t a brand new quest. Previously, researchers have used technologies that sense the subterranean from above, such as Ground Penetrating Radar or ultrasonic vibrations. But these techniques provide only a hazy view of submerged objects. They might struggle to differentiate rock from bone, for example.
    “So, the idea is to make a finger that has a good sense of touch and can distinguish between the various things it’s feeling,” says Adelson. “That would be helpful if you’re trying to find and disable buried bombs, for example.” Making that idea a reality meant clearing a number of hurdles.

  • The world's smallest fruit picker controlled by artificial intelligence

    The goal of Kaare Hartvig Jensen, Associate Professor at DTU Physics, was to reduce the need for harvesting, transporting, and processing crops for the production of biofuels, pharmaceuticals, and other products. The new method of extracting the necessary substances, which are called plant metabolites, also eliminates the need for chemical and mechanical processes.
    Plant metabolites consist of a wide range of extremely important chemicals. Many, such as the malaria drug artemisinin, have remarkable therapeutic properties, while others, like natural rubber or biofuel from tree sap, have mechanical properties.
    Harvesting cell by cell
    Because most plant metabolites are isolated in individual cells, the method of extracting the metabolites is also important, since the procedure affects both product purity and yield.
    Usually the extraction involves grinding, centrifugation, and chemical treatment using solvents. This results in considerable pollution, which contributes to the high financial and environmental processing costs.
    “All the substances are produced and stored inside individual cells in the plant. That’s where you have to go in if you want the pure material. When you harvest the whole plant or separate the fruit from the branches, you also harvest a whole lot of tissue that doesn’t contain the substance you’re interested in,” explains Kaare Hartvig Jensen.

  • New tools to battle cancer, advance genomics research

    University of Virginia School of Medicine scientists have developed important new resources that will aid the battle against cancer and advance cutting-edge genomics research.
    UVA’s Chongzhi Zang, PhD, and his colleagues and students have developed a new computational method to map the folding patterns of our chromosomes in three dimensions from experimental data. This is important because the configuration of genetic material inside our chromosomes actually affects how our genes work. In cancer, that configuration can go wrong, so scientists want to understand the genome architecture of both healthy cells and cancerous ones. This will help them develop better ways to treat and prevent cancer, in addition to advancing many other areas of medical research.
    Using their new approaches, Zang and his colleagues and students have already unearthed a treasure trove of useful data, and they are making their techniques and findings available to their fellow scientists. To advance cancer research, they’ve even built an interactive website that brings together their findings with vast amounts of data from other resources. They say their new website, bartcancer.org, can provide “unique insights” for cancer researchers.
    “The folding pattern of the genome is highly dynamic; it changes frequently and differs from cell to cell. Our new method aims to link this dynamic pattern to the control of gene activities,” said Zang, a computational biologist with UVA’s Center for Public Health Genomics and UVA Cancer Center. “A better understanding of this link can help unravel the genetic cause of cancer and other diseases and can guide future drug development for precision medicine.”
    Bet on BART
    Zang’s new approach to mapping the folding of our genome is called BART3D. Essentially, it compares available three-dimensional configuration data about one region of a chromosome with many of its neighbors. It can then extrapolate from this comparison to fill in blanks in the blueprints of genetic material using “Binding Analysis for Regulation of Transcription,” or BART, a novel algorithm they recently developed. The result is a map that offers unprecedented insights into how our genes interact with the “transcriptional regulators” that control their activity. Identifying these regulators helps scientists understand what turns particular genes on and off — information they can use in the battle against cancer and other diseases.
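    As a rough illustration of the comparison step only (this sketch is not the BART3D implementation; the window size, the toy contact matrices and the scoring rule are assumptions made for illustration), one can score each genomic bin by how much its contacts with neighboring bins change between two conditions:

      import numpy as np

      def differential_contact_scores(contacts_a, contacts_b, window=5):
          # One score per genomic bin: the mean change, between conditions A
          # and B, of its contact frequencies with bins in a local window.
          n = contacts_a.shape[0]
          scores = np.zeros(n)
          for i in range(n):
              lo, hi = max(0, i - window), min(n, i + window + 1)
              scores[i] = np.mean(contacts_b[i, lo:hi] - contacts_a[i, lo:hi])
          return scores

      rng = np.random.default_rng(1)
      a = rng.random((100, 100)); a = (a + a.T) / 2  # symmetric toy contact map
      b = rng.random((100, 100)); b = (b + b.T) / 2
      print(differential_contact_scores(a, b)[:5])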
    The researchers have built a web server, BARTweb, to offer the BART tool to their fellow scientists. It’s available, for free, at http://bartweb.org. The source code is available at https://github.com/zanglab/bart2. Test runs demonstrated that the server outperformed several existing tools for identifying the transcriptional regulators that control particular sets of genes, the researchers report.
    The UVA team also built the BART Cancer database to advance research into 15 different types of cancer, including breast, lung, colorectal and prostate cancer. Scientists can search the interactive database to see which regulators are more active and which are less active in each cancer.
    “While a cancer researcher can browse our database to screen potential drug targets, any biomedical scientist can use our web server to analyze their own genetic data,” Zang said. “We hope that the tools and resources we develop can benefit the whole biomedical research community by accelerating scientific discoveries and future therapeutic development.”
    The work was supported by the National Institutes of Health, grants R35GM133712 and K22CA204439; a Phi Beta Psi Sorority Research Grant; and a Seed Award from the Jayne Koskinas Ted Giovanis Foundation for Health and Policy.
    Story Source:
    Materials provided by University of Virginia Health System. Note: Content may be edited for style and length.

  • Hacking and loss of driving skills are major consumer concerns for self-driving cars

    A new study from the University of Kent, Toulouse Business School, ESSCA School of Management (Paris) and ESADE Business School (Spain) has revealed the three primary risks and benefits perceived by consumers towards autonomous vehicles (self-driving cars).
    The increased development of autonomous vehicles worldwide inspired the researchers to uncover how consumers feel about the growing market, and particularly what dissuades them from purchasing, in order to understand the challenges of marketing the product. The following perceptions, gained through qualitative interviews and quantitative surveys, are key to consumer decision making around autonomous vehicles.
    The three key perceived risks for autonomous vehicles, according to surveyed consumers, can be classified as:
    1. Performance (safety) risks of the vehicles’ artificial intelligence and sensor systems
    2. Loss of competencies by the driving public (primarily the ability to drive and use roads)
    3. Privacy and security breaches, similar to a personal computer or online account being hacked.
    These concerns, particularly regarding road and passenger safety, have long been present in how automotive companies have marketed their products. Marketers have advertised continued improvements to the product’s technology in a bid to ease safety concerns. However, concerns about the loss of driving skills and privacy breaches remain and will need addressing as these products become more widespread.
    The three perceived benefits to consumers were:
    1. Freeing of time (spent instead of driving)
    2. Removing the issue of human error (accidents caused by human drivers)
    3. Outperforming human capacity, such as improved route and traffic prediction and handling speed.
    Ben Lowe, Professor of Marketing at the University of Kent and co-author of the study, said: ‘The results of this study illustrate the perceived benefits of autonomous vehicles for consumers and how marketers can appeal to consumers in this growing market. However, we will now see how the manufacturers respond to concerns of these key perceived risks as they are major factors in the decision making of consumers, with the safety of the vehicles’ performance the greatest priority. Our methods used in this study will help clarify for manufacturers and marketers that, second to the issue of online account security, they will now have to address concerns that their product is reducing the autonomy of the consumer.’
    Story Source:
    Materials provided by University of Kent. Original written by Sam Wood. Note: Content may be edited for style and length.

  • The last 30 years were the hottest on record for the United States

    There’s a new normal for U.S. weather. On May 4, the National Oceanic and Atmospheric Administration announced an official change to its reference values for temperature and precipitation. Instead of using the average values from 1981 to 2010, NOAA’s new “climate normals” will be the averages from 1991 to 2020.

    This new period is the warmest on record for the country. Compared with the previous 30-year span, for example, the average temperature across the contiguous United States rose from 11.6° Celsius (52.8° Fahrenheit) to 11.8° C (53.3° F). Some of the largest increases were in the South and Southwest — and that same region also showed a dramatic decrease in precipitation (SN: 8/17/20).

    The United States and other members of the World Meteorological Organization are required to update their climate normals every 10 years. These data put daily weather events in historical context and also help track changes in drought conditions, energy use and freeze risks for farmers.

    That moving window of averages for the United States also tells a stark story about the accelerating pace of climate change. When each 30-year period is compared with the average temperatures from 1901 to 2000, no part of the country is cooler now than it was during the 20th century. And temperatures in large swaths of the country, from the American West to the Northeast, are 1 to 2 degrees Fahrenheit higher.
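    The bookkeeping behind a climate normal is simply a 30-year mean compared against a fixed baseline. A minimal sketch with synthetic annual temperatures (the numbers below are made up for illustration, not NOAA data):

      import numpy as np

      rng = np.random.default_rng(2)
      years = np.arange(1901, 2021)
      # Synthetic annual-mean temperatures (deg C) with a mild warming trend.
      temps = 11.3 + 0.01 * (years - 1901) + 0.2 * rng.standard_normal(years.size)

      baseline = temps[(years >= 1901) & (years <= 2000)].mean()  # 20th-century average
      normal_1981_2010 = temps[(years >= 1981) & (years <= 2010)].mean()
      normal_1991_2020 = temps[(years >= 1991) & (years <= 2020)].mean()

      print(f"baseline 1901-2000: {baseline:.2f} C")
      print(f"normal 1981-2010: {normal_1981_2010:.2f} C ({normal_1981_2010 - baseline:+.2f})")
      print(f"normal 1991-2020: {normal_1991_2020:.2f} C ({normal_1991_2020 - baseline:+.2f})")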

  • Scientific software – Quality not always good

    Computational tools are indispensable in almost all scientific disciplines. Especially in cases where large amounts of research data are generated and need to be quickly processed, reliable, carefully developed software is crucial for analyzing and correctly interpreting such data. Nevertheless, scientific software can have quality deficiencies. To evaluate software quality in an automated way, computer scientists at Karlsruhe Institute of Technology (KIT) and the Heidelberg Institute for Theoretical Studies (HITS) have designed the SoftWipe tool.
    “Adherence to coding standards is rarely considered in scientific software, although it can even lead to incorrect scientific results,” says Professor Alexandros Stamatakis, who works both at HITS and at the Institute of Theoretical Informatics (ITI) of KIT. The open-source SoftWipe software tool provides a fast, reliable, and cost-effective approach to addressing this problem by automatically assessing adherence to software development standards. Besides designing the tool, the computer scientists benchmarked 48 scientific software tools from different research areas to assess the degree to which they meet coding standards.
    “SoftWipe can also be used in the review process of scientific software and support the software selection process,” adds Adrian Zapletal. The Master’s student and his fellow student Dimitri Höhler have substantially contributed to the development of SoftWipe. To select assessment criteria, they relied on existing standards that are used in safety-critical environments, such as at NASA or CERN.
    “Our research revealed enormous discrepancies in software quality,” says co-author Professor Carsten Sinz of ITI. Many programs, such as covid-sim, which is used in the UK for mathematical modeling of the COVID-19 disease, had a very low quality score and thus performed poorly in the ranking. The researchers recommend using programs such as SoftWipe by default in the selection and review process of software for scientific purposes.
    How Does SoftWipe Work?
    SoftWipe is a pipeline written in the Python3 programming language that uses several static and dynamic code analyzers (most of them freely available) to assess the code quality of software written in C/C++. In this process, SoftWipe compiles the software and then executes it so that programming errors can be detected during execution. Based on the output of the code analysis tools used, SoftWipe calculates quality scores between 0 (poor) and 10 (excellent) and combines them into an overall final score.
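    A minimal sketch of that scoring step (not SoftWipe’s actual code; the tool names, the issues-per-KLOC mapping and the plain average are assumptions made for illustration) could aggregate each analyzer’s findings into a 0-10 value and then combine them:

      def tool_score(issue_count, kloc, worst_rate=50.0):
          # Map an issue rate (issues per 1,000 lines of code) onto a 0-10 scale.
          rate = issue_count / max(kloc, 1e-9)
          return max(0.0, 10.0 * (1.0 - min(rate, worst_rate) / worst_rate))

      def overall_score(results, kloc):
          # `results` maps an analyzer name to its issue count for one program.
          scores = {tool: tool_score(count, kloc) for tool, count in results.items()}
          return scores, sum(scores.values()) / len(scores)

      # Hypothetical analyzer output for a 12,000-line C/C++ project.
      findings = {"clang-tidy": 180, "cppcheck": 40, "sanitizers": 3}
      per_tool, final = overall_score(findings, kloc=12.0)
      print(per_tool, round(final, 2))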
    Story Source:
    Materials provided by Karlsruher Institut für Technologie (KIT). Note: Content may be edited for style and length.