More stories

  • Breaking AIs to make them better

    Today’s artificial intelligence systems used for image recognition are incredibly powerful with massive potential for commercial applications. Nonetheless, current artificial neural networks — the deep learning algorithms that power image recognition — suffer one massive shortcoming: they are easily broken by images that are even slightly modified.
    This lack of ‘robustness’ is a significant hurdle for researchers hoping to build better AIs. However, exactly why this phenomenon occurs, and the underlying mechanisms behind it, remain largely unknown.
    Aiming to one day overcome these flaws, researchers at Kyushu University’s Faculty of Information Science and Electrical Engineering have published in PLOS ONE a method called ‘Raw Zero-Shot’ that assesses how neural networks handle elements unknown to them. The results could help researchers identify common features that make AIs ‘non-robust’ and develop methods to rectify their problems.
    “There is a range of real-world applications for image recognition neural networks, including self-driving cars and diagnostic tools in healthcare,” explains Danilo Vasconcellos Vargas, who led the study. “However, no matter how well trained the AI, it can fail with even a slight change in an image.”
    In practice, image recognition AIs are ‘trained’ on many sample images before being asked to identify one. For example, if you want an AI to identify ducks, you would first train it on many pictures of ducks.
    Nonetheless, even the best-trained AIs can be misled. In fact, researchers have found that an image can be manipulated such that — while it may appear unchanged to the human eye — an AI cannot accurately identify it. Even a single-pixel change in the image can cause confusion.
    To better understand why this happens, the team began investigating different image recognition AIs with the hope of identifying patterns in how they behave when faced with samples that they had not been trained with, i.e., elements unknown to the AI.
    “If you give an image to an AI, it will try to tell you what it is, no matter if that answer is correct or not. So, we took the twelve most common AIs today and applied a new method called ‘Raw Zero-Shot Learning,’” continues Vargas. “Basically, we gave the AIs a series of images with no hints or training. Our hypothesis was that there would be correlations in how they answered. They would be wrong, but wrong in the same way.”
    What they found was just that. In all cases, the image recognition AI would produce an answer, and the answers — while wrong — would be consistent, that is to say they would cluster together. The density of each cluster would indicate how the AI processed the unknown images based on its foundational knowledge of different images.
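    A minimal sketch of that analysis, assuming a hypothetical `model.predict` that returns a score vector per image and using k-means as the clustering step (the paper’s exact procedure may differ):
    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_density_score(model, unknown_images, n_clusters=10):
        """Cluster a model's predictions on images from classes it was never
        trained on; denser clusters mean more consistent 'wrong' answers."""
        # Output vectors (e.g., softmax scores) for the unseen-class images.
        outputs = np.stack([model.predict(img) for img in unknown_images])
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(outputs)
        # Mean distance to the assigned centroid: lower spread means denser
        # clusters, i.e. the model is "wrong in the same way" across images.
        spread = np.linalg.norm(
            outputs - km.cluster_centers_[km.labels_], axis=1
        ).mean()
        return 1.0 / (1.0 + spread)
    ```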
    “If we understand what the AI was doing and what it learned when processing unknown images, we can use that same understanding to analyze why AIs break when faced with images with single-pixel changes or slight modifications,” Vargas states. “Utilization of the knowledge we gained trying to solve one problem by applying it to a different but related problem is known as Transferability.”
    The team observed that Capsule Networks, also known as CapsNet, produced the densest clusters, giving them the best transferability among the neural networks tested; the researchers believe this may be due to the dynamic nature of CapsNet.
    “While today’s AIs are accurate, they lack the robustness for further utility. We need to understand what the problem is and why it’s happening. In this work, we showed a possible strategy to study these issues,” concludes Vargas. “Instead of focusing solely on accuracy, we must investigate ways to improve robustness and flexibility. Then we may be able to develop a true artificial intelligence.”
    Story Source:
    Materials provided by Kyushu University. Note: Content may be edited for style and length.

  • Algorithm predicts crime a week in advance, but reveals bias in police response

    Advances in machine learning and artificial intelligence have sparked interest from governments that would like to use these tools for predictive policing to deter crime. Early efforts at crime prediction have been controversial, however, because they do not account for systemic biases in police enforcement and its complex relationship with crime and society.
    Data and social scientists from the University of Chicago have developed a new algorithm that forecasts crime by learning patterns in time and geographic locations from public data on violent and property crimes. The model can predict future crimes one week in advance with about 90% accuracy.
    In a separate model, the research team also studied the police response to crime by analyzing the number of arrests following incidents and comparing those rates among neighborhoods of different socioeconomic status. They found that crime in wealthier areas resulted in more arrests, while arrests in disadvantaged neighborhoods dropped; crime in poorer neighborhoods did not produce a corresponding rise in arrests, suggesting bias in police response and enforcement.
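    The underlying comparison can be pictured with a small sketch; the data layout and column names below are hypothetical, not the study’s:
    ```python
    import pandas as pd

    def arrest_rates_by_ses(incidents: pd.DataFrame, arrests: pd.DataFrame) -> pd.Series:
        """incidents: columns 'neighborhood', 'ses_tier', 'n_incidents';
        arrests: columns 'neighborhood', 'n_arrests'.
        Returns mean arrests-per-incident for each SES tier."""
        merged = incidents.merge(arrests, on="neighborhood")
        merged["arrest_rate"] = merged["n_arrests"] / merged["n_incidents"]
        # A persistent gap between tiers is the kind of disparity the study
        # reads as a signature of enforcement bias.
        return merged.groupby("ses_tier")["arrest_rate"].mean()
    ```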
    “What we’re seeing is that when you stress the system, it requires more resources to arrest more people in response to crime in a wealthy area and draws police resources away from lower socioeconomic status areas,” said Ishanu Chattopadhyay, PhD, Assistant Professor of Medicine at UChicago and senior author of the new study, which was published this week in Nature Human Behaviour.
    The tool was tested and validated using historical data from the City of Chicago on two broad categories of reported events: violent crimes (homicides, assaults, and batteries) and property crimes (burglaries, thefts, and motor vehicle thefts). These data were used because they were the most likely to be reported to police even in urban areas with historical distrust of and limited cooperation with law enforcement. Such crimes are also less prone to enforcement bias than drug crimes, traffic stops, and other misdemeanor infractions.
    Previous efforts at crime prediction often use an epidemic or seismic approach, where crime is depicted as emerging in “hotspots” that spread to surrounding areas. These tools miss out on the complex social environment of cities, however, and don’t consider the relationship between crime and the effects of police enforcement.
    “Spatial models ignore the natural topology of the city,” said sociologist and co-author James Evans, PhD, Max Palevsky Professor at UChicago and the Santa Fe Institute. “Transportation networks respect streets, walkways, train and bus lines. Communication networks respect areas of similar socio-economic background. Our model enables discovery of these connections.”
    The new model isolates crime by looking at the time and spatial coordinates of discrete events and detecting patterns to predict future events. It divides the city into spatial tiles roughly 1,000 feet across and predicts crime within these areas instead of relying on traditional neighborhood or political boundaries, which are also subject to bias. The model performed just as well with data from seven other U.S. cities: Atlanta, Austin, Detroit, Los Angeles, Philadelphia, Portland, and San Francisco.
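    As an illustration of the event-binning step (field names and coordinate handling are our assumptions, not the authors’ code), the city can be gridded into tiles and each tile given its own event-count time series:
    ```python
    import pandas as pd

    TILE_FEET = 1000.0  # approximate tile width reported in the story

    def to_tile_series(events: pd.DataFrame) -> pd.DataFrame:
        """events: columns 'x_ft', 'y_ft' (projected coordinates) and 'date'.
        Returns daily event counts per ~1,000-foot tile: the raw material
        for learning temporal patterns and one-week-ahead forecasts."""
        tiles = events.assign(
            tile_x=(events["x_ft"] // TILE_FEET).astype(int),
            tile_y=(events["y_ft"] // TILE_FEET).astype(int),
            day=pd.to_datetime(events["date"]).dt.floor("D"),
        )
        return (tiles.groupby(["tile_x", "tile_y", "day"])
                     .size()
                     .rename("n_events")
                     .reset_index())
    ```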
    “We demonstrate the importance of discovering city-specific patterns for the prediction of reported crime, which generates a fresh view on neighborhoods in the city, allows us to ask novel questions, and lets us evaluate police action in new ways,” Evans said.
    Chattopadhyay is careful to note that the tool’s accuracy does not mean that it should be used to direct law enforcement, with police departments using it to swarm neighborhoods proactively to prevent crime. Instead, it should be added to a toolbox of urban policies and policing strategies to address crime.
    “We created a digital twin of urban environments. If you feed it data from what happened in the past, it will tell you what’s going to happen in the future. It’s not magical, there are limitations, but we validated it and it works really well,” Chattopadhyay said. “Now you can use this as a simulation tool to see what happens if crime goes up in one area of the city, or there is increased enforcement in another area. If you apply all these different variables, you can see how the system evolves in response.”
    The study, “Event-level Prediction of Urban Crime Reveals Signature of Enforcement Bias in U.S. Cities,” was supported by the Defense Advanced Research Projects Agency and the Neubauer Collegium for Culture and Society. Additional authors include Victor Rotaru, Yi Huang, and Timmy Li from the University of Chicago.

  • Common gene used to profile microbial communities

    Part of a gene is better than none when identifying a species of microbe. But for Rice University computer scientists, part was not nearly enough in their pursuit of a program to identify all the species in a microbiome.
    Emu, their microbial community profiling software, effectively identifies bacterial species by leveraging long DNA sequences that span the entire length of the gene under study.
    The Emu project, led by computer scientist Todd Treangen and graduate student Kristen Curry of Rice’s George R. Brown School of Engineering, facilitates the analysis of a key gene that microbiome researchers use to sort out species of bacteria that could be harmful — or helpful — to humans and the environment.
    Their target, the 16S gene, encodes a component of ribosomal RNA (rRNA, or ribosomal ribonucleic acid); its use for identifying microbes was pioneered by Carl Woese in 1977. The gene is highly conserved in bacteria and archaea and also contains variable regions that are critical for separating distinct genera and species.
    “It’s commonly used for microbiome analysis because it’s present in all bacteria and most archaea,” said Curry, in her third year in the Treangen group. “Because of that, there are regions that have been conserved over the years that make it easy to target. In DNA sequencing, we need parts of it to be the same in all bacteria so we know what to look for, and then we need parts to be different so we can tell bacteria apart.”
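    A toy example of the conserved-versus-variable intuition Curry describes: score each column of a multiple sequence alignment by how uniform it is. The alignment input here is hypothetical, and Emu’s actual method is far more sophisticated.
    ```python
    from collections import Counter

    def column_conservation(aligned_seqs: list[str]) -> list[float]:
        """Fraction of sequences sharing the most common base per column.
        Highly conserved stretches make good sequencing targets; variable
        stretches discriminate between species."""
        n = len(aligned_seqs)
        scores = []
        for column in zip(*aligned_seqs):
            most_common_count = Counter(column).most_common(1)[0][1]
            scores.append(most_common_count / n)
        return scores

    # Fully conserved columns score 1.0, variable columns score lower.
    print(column_conservation(["ACGT", "ACGA", "ACTA"]))  # [1.0, 1.0, 0.67, 0.67]
    ```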
    The Rice team’s study, with collaborators in Germany and at the Houston Methodist Research Institute, Baylor College of Medicine and Texas Children’s Hospital, appears in the journal Nature Methods.

  • Capturing an elusive shadow: State-by-state gun ownership

    Policy-makers are faced with an exceptional challenge: how to reduce harm caused by firearms while maintaining citizens’ right to bear arms and protect themselves. This is especially true as the Supreme Court has hobbled New York State regulations restricting who can carry a concealed weapon.
    Meaningful legislation requires an understanding of how access to firearms is associated with different harmful outcomes, and that understanding in turn calls for accurate, highly resolved data on firearm possession, data that is presently unavailable because there is no comprehensive national firearm ownership registry.
    Newly published research from data scientist and firearm proliferation researcher Maurizio Porfiri, Institute Professor at the NYU Tandon School of Engineering, and co-authors Roni Barak Ventura, a post-doctoral researcher in Porfiri’s Dynamical Systems Lab, and Manuel Ruiz Marin of the Universidad Politécnica de Cartagena, Spain, describes a spatio-temporal model that predicts trends in firearm prevalence at the state level by fusing two available proxies: background checks per capita and suicides committed with a firearm in a given state. The study, “A spatiotemporal model of firearm ownership in the United States,” published in the Cell Press journal Patterns, details how the team calibrated their results against yearly survey data and determined that the two proxies can be considered simultaneously to draw precise information regarding firearm ownership.
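    As a rough sketch of the proxy-fusion idea, one could regress survey-measured ownership on the two proxies and use the fitted model to estimate ownership everywhere. The column names and the simple linear form below are our assumptions; the paper’s spatio-temporal model is considerably richer.
    ```python
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    def fuse_proxies(df: pd.DataFrame) -> pd.Series:
        """df columns: 'bg_checks_pc', 'firearm_suicides_pc', and
        'survey_ownership' (present only for calibration state-years)."""
        proxies = ["bg_checks_pc", "firearm_suicides_pc"]
        # Fit only on rows where survey data exists...
        calib = df.dropna(subset=["survey_ownership"])
        model = LinearRegression().fit(calib[proxies], calib["survey_ownership"])
        # ...then estimate ownership for every state-year, including those
        # without survey coverage.
        return pd.Series(model.predict(df[proxies]), index=df.index,
                         name="ownership_est")
    ```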
    Porfiri, who in 2020 received one of the first newly authorized NSF federal grants for $2 million to study the “firearm ecosystem” in the U.S., has spent the last few years exploring gun acquisition trends and how they relate to and are influenced by a number of factors, from media coverage of mass shootings, to the influence of the sitting President.
    “There is very limited knowledge on when and where guns are acquired in the country, and even less is known regarding future ownership trends,” said Porfiri, professor of mechanical and aerospace, biomedical, and civil and urban engineering and incoming director of the Center for Urban Science and Progress (CUSP) at NYU Tandon. “Prior studies have largely relied on the use of a single, select proxy to make some inference of gun prevalence, typically within simple correlation schemes. Our results show that there is a need to combine proxies of sales and violence to draw precise inferences on firearm prevalence.” He added that most research aggregates measured counts within states and does not consider interference between states or spillover effects.
    Their study shows how their model can be used to better understand the relationships between media coverage, mass shootings, and firearm ownership, uncovering causal associations that are masked when the proxies are used individually.
    The researchers found, for example, that media coverage of firearm control is causally associated with firearm ownership; they also discovered that the firearm ownership profile the model generates for a state is a strong predictor of mass shootings in that state.
    “The potential link between mass shootings and firearm purchases is a unique contribution of our model,” said Ruiz Marin. “Such a link can only be detected by scratching the surface on the exact gun counts in the country.”
    “We combined publicly available data variables into one measure of ownership. Because it has a spatial component, we could also track gun flow from one state to another based on political and cultural similarities,” said Barak Ventura, adding that the spatial component of the work is novel. “Prior studies looked at a correlation of two variables, such as increasing background checks and an increase in gun violence.”
    Barak Ventura said the team is now using the model to explore which policies are effective in reducing gun deaths in a state and surrounding regions, and how the relationship between gun ownership and violent outcomes is disrupted by different legislation.
    The research was supported by the National Science Foundation and by RAND’s National Collaborative on Gun Violence Research through a postdoctoral fellowship award. Roni Barak Ventura’s work was supported by a scholarship from the Mitsui USA Foundation. This study was also part of the collaborative activities carried out under the programs of the region of Murcia (Spain) as well as the Ministerio de Ciencia, Innovación y Universidades.

  • 'Fake' data helps robots learn the ropes faster

    In a step toward robots that can learn on the fly like humans do, a new approach expands training data sets for robots that work with soft objects like ropes and fabrics, or in cluttered environments.
    Developed by robotics researchers at the University of Michigan, it could cut learning time for new materials and environments down to a few hours rather than a week or two.
    In simulations, the expanded training data set improved the success rate of a robot looping a rope around an engine block by more than 40% and nearly doubled the successes of a physical robot for a similar task.
    That task is among those a robot mechanic would need to be able to do with ease. But using today’s methods, learning how to manipulate each unfamiliar hose or belt would require huge amounts of data, likely gathered for days or weeks, says Dmitry Berenson, U-M associate professor of robotics and senior author of a paper presented today at Robotics: Science and Systems in New York City.
    In that time, the robot would play around with the hose — stretching it, bringing the ends together, looping it around obstacles and so on — until it understood all the ways the hose could move.
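    The appeal of synthetic variations is that many such configurations can be generated from a single real interaction. As a heavily hedged illustration (an assumption about the general approach, not the authors’ algorithm), one could augment a recorded rope state with rigid transforms that preserve its physical plausibility:
    ```python
    import numpy as np

    def augment_rope_states(points: np.ndarray, n_variants: int = 10,
                            rng: np.random.Generator | None = None) -> list[np.ndarray]:
        """points: (N, 3) array of rope vertex positions from one real example.
        Returns cheap, physically plausible variations of the same state."""
        rng = rng or np.random.default_rng()
        variants = []
        for _ in range(n_variants):
            theta = rng.uniform(0, 2 * np.pi)
            # Rotate about the vertical axis and translate in the plane.
            rot = np.array([[np.cos(theta), -np.sin(theta), 0],
                            [np.sin(theta),  np.cos(theta), 0],
                            [0,              0,             1]])
            shift = np.append(rng.uniform(-0.1, 0.1, size=2), 0.0)
            variants.append(points @ rot.T + shift)
        return variants
    ```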
    “If the robot needs to play with the hose for a long time before being able to install it, that’s not going to work for many applications,” Berenson said.

  • Artificial intelligence techniques used to obtain antibiotic resistance patterns

    The Universidad Carlos III de Madrid (UC3M) is conducting research that analyses antibiotic resistance patterns with the aim of finding trends that can help decide which treatment to apply to each type of patient and stop the spread of bacteria. This study, recently published in the scientific journal Nature Communications, has been carried out together with the University of Exeter, the University of Birmingham (both in the United Kingdom) and the Westmead Hospital in Sydney (Australia).
    In order to observe a bacterial pathogen’s resistance to an antibiotic in clinical environments, a measure called MIC (Minimum Inhibitory Concentration) is used, which is the minimum concentration of antibiotic capable of inhibiting bacterial growth. The greater the MIC of a bacterium against an antibiotic, the greater its resistance.
    However, most public databases only contain the frequency of resistant pathogens, which is aggregated data calculated from MIC measurements and predefined resistance thresholds. “For example, for a given pathogen, the antibiotic resistance threshold may be 4: if a bacterium has an MIC of 16, it is considered resistant and is counted when calculating the resistance frequency,” says Pablo Catalán, lecturer and researcher in the UC3M Mathematics Department and author of the study. In this regard, the resistance reports that are carried out nationally and by organisations such as the WHO are prepared using this aggregated resistance frequency data.
    To conduct this research, the team analysed a ground-breaking database that contains raw data on antibiotic resistance. This database, called ATLAS, is managed by Pfizer and has been public since 2018. The working group led by UC3M examined data from 600,000 patients in over 70 countries and used machine learning methods (a type of artificial intelligence technique) to extract patterns in the evolution of resistance.
    By analysing this data, the research team has discovered that there are resistance evolution patterns that can be detected when using the raw data (MIC), but which are undetectable using the aggregated data. “A clear example of this is a pathogen whose MIC is slowly increasing over time, but below the resistance threshold. Using this frequency data we wouldn’t be able to say anything, since the resistance frequency remains constant. However, by using MIC data we can detect such a case and be on alert. In the paper, we discuss several clinically relevant cases which have these characteristics. Furthermore, we are the first team to describe this database in depth,” says Catalán.
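    A toy computation makes the paper’s point concrete; the numbers below are invented for illustration.
    ```python
    import numpy as np

    THRESHOLD = 4  # resistance breakpoint for this hypothetical pathogen

    mics_2015 = np.array([0.5, 0.5, 1, 1, 2, 16])  # one resistant isolate
    mics_2020 = np.array([1,   2,   2, 2, 2, 16])  # MICs rising, same count

    for year, mics in [("2015", mics_2015), ("2020", mics_2020)]:
        freq = (mics > THRESHOLD).mean()  # the aggregated statistic
        print(year, f"resistance frequency={freq:.2f}",
              f"median MIC={np.median(mics):.1f}")
    # The frequency is 0.17 in both years, but the median MIC has doubled:
    # a warning signal visible only in the raw MIC data.
    ```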
    This study makes it possible to design antibiotic treatments that are more effective in controlling infections and curbing the rise of resistance, which causes many clinical problems. “The research uses mathematical ideas to find new ways of extracting antibiotic resistance patterns from 6.5 million data points,” concludes the research author.
    Story Source:
    Materials provided by Universidad Carlos III de Madrid. Note: Content may be edited for style and length.

  • Tracking a levitated nanoparticle with a mirror

    Sensing with levitated nanoparticles has so far been limited by the precision of position measurements. Now, researchers at the University of Innsbruck led by Tracy Northup have demonstrated a new method for optical interferometry in which light scattered by a particle is reflected by a mirror. This opens up new possibilities for using levitated particles as sensors, particularly in quantum regimes.
    Levitated nanoparticles are promising tools for sensing ultra-weak forces of biological, chemical or mechanical origin and even for testing the foundations of quantum physics. However, such applications require precise position measurement. Researchers at the Department of Experimental Physics of the University of Innsbruck, Austria, have now demonstrated a new technique that boosts the efficiency with which the position of a sub-micron levitated object is detected. “Typically, we measure a nanoparticle’s position with a technique called optical interferometry, in which part of the light emitted by a nanoparticle is compared with the light from a reference laser,” says Lorenzo Dania, a PhD student in Tracy Northup’s research group. “A laser beam, however, has a much different shape than the light pattern emitted by a nanoparticle, known as dipole radiation.” That shape difference currently limits the measurement precision.
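    To see why the shape difference matters, consider the textbook two-field interference relation (our notation, not the paper’s): the position signal rides on a cross term that is scaled by the overlap of the two optical modes.
    ```latex
    % Standard two-field interference with imperfect mode overlap:
    I(t) = I_s + I_r + 2\sqrt{I_s I_r}\,\eta\,\cos\varphi(t),
    \qquad
    \eta = \frac{\left|\int E_s^{*}\,E_r\,\mathrm{d}A\right|^{2}}
                {\int |E_s|^{2}\,\mathrm{d}A\,\int |E_r|^{2}\,\mathrm{d}A}
    ```
    A Gaussian reference beam overlaps poorly with dipole radiation, so the overlap factor η stays well below 1 and part of the signal is lost; reflecting the particle’s own scattered field back onto itself makes the two interfering modes identical in shape, pushing η toward 1.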
    Self-interference method
    The new technique demonstrated by Tracy Northup, a professor at the University of Innsbruck, and her team resolves this limitation by replacing the laser beam with the light of the particle reflected by a mirror. The technique builds on a method to track barium ions that has been developed in recent years by Rainer Blatt, also of the University of Innsbruck, and his team. Last year, researchers from the two teams proposed to extend this method to nanoparticles. Now, using a nanoparticle levitated in an electromagnetic trap, the researchers showed that this method outperformed other state-of-the-art detection techniques. The result opens up new possibilities for using levitated particles as sensors — for example, to measure tiny forces — and for bringing the particles’ motion into realms described by quantum mechanics.
    Financial support for the research was provided, among others, by the European Union as well as by the Austrian Science Fund FWF, the Austrian Academy of Sciences and the Austrian Federal Ministry of Education, Science and Research.
    Story Source:
    Materials provided by University of Innsbruck. Note: Content may be edited for style and length.

  • Robot overcomes uncertainty to retrieve buried objects

    For humans, finding a lost wallet buried under a pile of items is pretty straightforward — we simply remove things from the pile until we find the wallet. But for a robot, this task involves complex reasoning about the pile and objects in it, which presents a steep challenge.
    MIT researchers previously demonstrated a robotic arm that combines visual information and radio frequency (RF) signals to find hidden objects that were tagged with RFID tags (which reflect signals sent by an antenna). Building off that work, they have now developed a new system that can efficiently retrieve any object buried in a pile. As long as some items in the pile have RFID tags, the target item does not need to be tagged for the system to recover it.
    The algorithms behind the system, known as FuseBot, reason about the probable location and orientation of objects under the pile. Then FuseBot finds the most efficient way to remove obstructing objects and extract the target item. This reasoning enabled FuseBot to find more hidden items than a state-of-the-art robotics system, in half the time.
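    Schematically, that reasoning can be framed as an information-gain loop over a probability map of where the untagged target might be; the sketch below is our illustrative framing, not FuseBot’s actual algorithm.
    ```python
    import numpy as np

    def entropy(p: np.ndarray) -> float:
        p = p[p > 0]
        return float(-(p * np.log(p)).sum())

    def choose_next_removal(prob_map: np.ndarray, candidates: dict) -> str:
        """prob_map: normalized probabilities over pile cells for the target.
        candidates: item name -> boolean mask of cells that removing the
        item would expose (masks come from vision plus RF reasoning)."""
        best_item, best_gain = None, -1.0
        h0 = entropy(prob_map)
        for item, mask in candidates.items():
            # With probability p_hit the target is exposed (entropy drops
            # to zero); otherwise the map renormalizes over remaining cells.
            p_hit = prob_map[mask].sum()
            remaining = prob_map.copy()
            remaining[mask] = 0.0
            if remaining.sum() > 0:
                remaining /= remaining.sum()
            expected_h = (1 - p_hit) * entropy(remaining)
            gain = h0 - expected_h
            if gain > best_gain:
                best_item, best_gain = item, gain
        return best_item
    ```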
    This speed could be especially useful in an e-commerce warehouse. A robot tasked with processing returns could find items in an unsorted pile more efficiently with the FuseBot system, says senior author Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science and director of the Signal Kinetics group in the Media Lab.
    “What this paper shows, for the first time, is that the mere presence of an RFID-tagged item in the environment makes it much easier for you to achieve other tasks in a more efficient manner. We were able to do this because we added multimodal reasoning to the system — FuseBot can reason about both vision and RF to understand a pile of items,” adds Adib.
    Joining Adib on the paper are research assistants Tara Boroushaki, who is the lead author; Laura Dodds; and Nazish Naeem. The research will be presented at the Robotics: Science and Systems conference.