More stories

  • Tool helps clear biases from computer vision

    Researchers at Princeton University have developed a tool that flags potential biases in sets of images used to train artificial intelligence (AI) systems. The work is part of a larger effort to remedy and prevent the biases that have crept into AI systems that influence everything from credit services to courtroom sentencing programs.
    Although the sources of bias in AI systems are varied, one major cause is stereotypical images contained in large sets of images collected from online sources that engineers use to develop computer vision, a branch of AI that allows computers to recognize people, objects and actions. Because the foundation of computer vision is built on these data sets, images that reflect societal stereotypes and biases can unintentionally influence computer vision models.
    To help stem this problem at its source, researchers in the Princeton Visual AI Lab have developed an open-source tool that automatically uncovers potential biases in visual data sets. The tool allows data set creators and users to correct issues of underrepresentation or stereotypical portrayals before image collections are used to train computer vision models. In related work, members of the Visual AI Lab published a comparison of existing methods for preventing biases in computer vision models themselves, and proposed a new, more effective approach to bias mitigation.
    The first tool, called REVISE (REvealing VIsual biaSEs), uses statistical methods to inspect a data set for potential biases or issues of underrepresentation along three dimensions: object-based, gender-based and geography-based. A fully automated tool, REVISE builds on earlier work that involved filtering and balancing a data set’s images in a way that required more direction from the user. The study was presented Aug. 24 at the virtual European Conference on Computer Vision.
    REVISE takes stock of a data set’s content using existing image annotations and measurements such as object counts, the co-occurrence of objects and people, and images’ countries of origin. Among these measurements, the tool exposes patterns that differ from median distributions.
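    REVISE is open source; purely as a rough, hypothetical illustration of the kind of statistic such a tool computes (not its actual code or annotation schema), the Python sketch below tallies object and gender co-occurrence from image annotations and flags pairs whose rate deviates sharply from the base rate of each label.

    ```python
    from collections import Counter

    # Hypothetical annotations: one record per image, in the spirit of
    # the object/gender metadata such a tool consumes.
    images = [
        {"objects": ["flower", "person"], "gender": "male"},
        {"objects": ["flower", "person"], "gender": "female"},
        {"objects": ["laptop", "person"], "gender": "male"},
        # ... thousands more ...
    ]

    pair_counts = Counter()    # (object, gender) co-occurrences
    gender_counts = Counter()  # images per gender label

    for img in images:
        gender_counts[img["gender"]] += 1
        for obj in set(img["objects"]):
            pair_counts[(obj, img["gender"])] += 1

    total = sum(gender_counts.values())
    for (obj, gender), n in pair_counts.items():
        # Share of this object's images showing this gender, compared
        # with the gender's base rate across the whole data set.
        obj_total = sum(c for (o, _), c in pair_counts.items() if o == obj)
        observed = n / obj_total
        expected = gender_counts[gender] / total
        if observed > 1.5 * expected:  # arbitrary flagging threshold
            print(f"possible skew: {obj!r} with {gender}: "
                  f"{observed:.0%} vs. base rate {expected:.0%}")
    ```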
    For example, in one of the tested data sets, REVISE showed that images including both people and flowers differed between males and females: Males more often appeared with flowers in ceremonies or meetings, while females tended to appear in staged settings or paintings. (The analysis was limited to annotations reflecting the perceived binary gender of people appearing in images.)
    Once the tool reveals these sorts of discrepancies, “then there’s the question of whether this is a totally innocuous fact, or if something deeper is happening, and that’s very hard to automate,” said Olga Russakovsky, an assistant professor of computer science and principal investigator of the Visual AI Lab. Russakovsky co-authored the paper with graduate student Angelina Wang and Arvind Narayanan, an associate professor of computer science.
    For example, REVISE revealed that in one of the data sets, objects such as airplanes, beds and pizzas tended to loom larger in the images containing them than a typical object does. Such an issue might not perpetuate societal stereotypes, but could be problematic for training computer vision models. As a remedy, the researchers suggest collecting images of airplanes that also include the labels mountain, desert or sky.
    The underrepresentation of regions of the globe in computer vision data sets, however, is likely to lead to biases in AI algorithms. Consistent with previous analyses, the researchers found that for images’ countries of origin (normalized by population), the United States and European countries were vastly overrepresented in data sets. Beyond this, REVISE showed that for images from other parts of the world, image captions were often not in the local language, suggesting that many of them were captured by tourists and potentially leading to a skewed view of a country.
    Researchers who focus on object detection may overlook issues of fairness in computer vision, said Russakovsky. “However, this geography analysis shows that object recognition can still be quite biased and exclusionary, and can affect different regions and people unequally,” she said.
    “Data set collection practices in computer science haven’t been scrutinized that thoroughly until recently,” said co-author Angelina Wang, a graduate student in computer science. She said images are mostly “scraped from the internet, and people don’t always realize that their images are being used [in data sets]. We should collect images from more diverse groups of people, but when we do, we should be careful that we’re getting the images in a way that is respectful.”
    “Tools and benchmarks are an important step … they allow us to capture these biases earlier in the pipeline and rethink our problem setup and assumptions as well as data collection practices,” said Vicente Ordonez-Roman, an assistant professor of computer science at the University of Virginia who was not involved in the studies. “In computer vision there are some specific challenges regarding representation and the propagation of stereotypes. Works such as those by the Princeton Visual AI Lab help elucidate and bring to the attention of the computer vision community some of these issues and offer strategies to mitigate them.”
    A related study from the Visual AI Lab examined approaches to prevent computer vision models from learning spurious correlations that may reflect biases, such as overpredicting activities like cooking in images of women, or computer programming in images of men. Visual cues such as the fact that zebras are black and white, or basketball players often wear jerseys, contribute to the accuracy of the models, so developing effective models while avoiding problematic correlations is a significant challenge in the field.
    In research presented in June at the virtual Conference on Computer Vision and Pattern Recognition, electrical engineering graduate student Zeyu Wang and colleagues compared four different techniques for mitigating biases in computer vision models.
    They found that a popular technique known as adversarial training, or “fairness through blindness,” harmed the overall performance of image recognition models. In adversarial training, the model cannot consider information about the protected variable — in the study, the researchers used gender as a test case. A different approach, known as domain-independent training, or “fairness through awareness,” performed much better in the team’s analysis.
    “Essentially, this says we’re going to have different frequencies of activities for different genders, and yes, this prediction is going to be gender-dependent, so we’re just going to embrace that,” said Russakovsky.
    The technique outlined in the paper mitigates potential biases by considering the protected attribute separately from other visual cues.
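    As a minimal sketch of that “fairness through awareness” idea (assuming a PyTorch setup and hypothetical names, not the paper’s released code), the model below keeps one classification head per value of the protected attribute, trains each head on the matching examples, and combines the heads at inference so no group label is required.

    ```python
    import torch
    import torch.nn as nn

    class DomainIndependentClassifier(nn.Module):
        """Sketch: one prediction head per protected-attribute group."""
        def __init__(self, feat_dim: int, n_classes: int, n_groups: int = 2):
            super().__init__()
            self.backbone = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU())
            # Separate heads: predictions may legitimately be group-dependent.
            self.heads = nn.ModuleList(
                [nn.Linear(256, n_classes) for _ in range(n_groups)]
            )

        def forward(self, x, group):
            # Training: route each example through the head for its group.
            h = self.backbone(x)
            logits = torch.stack([head(h) for head in self.heads], dim=1)
            return logits[torch.arange(x.size(0)), group]

        @torch.no_grad()
        def predict(self, x):
            # Inference: combine the group-conditional logits (here by
            # summing), so no protected-attribute label is needed.
            h = self.backbone(x)
            return sum(head(h) for head in self.heads).argmax(dim=1)
    ```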
    “How we really address the bias issue is a deeper problem, because of course we can see it’s in the data itself,” said Zeyu Wang. “But in the real world, humans can still make good judgments while being aware of our biases” — and computer vision models can be set up to work in a similar way, he said.

  • AI can detect COVID-19 in the lungs like a virtual physician, new study shows

    A University of Central Florida researcher is part of a new study showing that artificial intelligence can be nearly as accurate as a physician in diagnosing COVID-19 in the lungs.
    The study, recently published in Nature Communications, shows the new technique can also overcome some of the challenges of current testing.
    Researchers demonstrated that an AI algorithm could be trained to classify COVID-19 pneumonia in computed tomography (CT) scans with up to 90 percent accuracy, as well as correctly identify positive cases 84 percent of the time and negative cases 93 percent of the time.
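    Those three figures correspond to the standard accuracy, sensitivity and specificity of a binary diagnostic test; the short sketch below (with illustrative counts chosen to mirror the reported rates, not the study’s data) shows how they fall out of a confusion matrix.

    ```python
    def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int):
        """Accuracy, sensitivity (true-positive rate) and specificity
        (true-negative rate) from confusion-matrix counts."""
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        sensitivity = tp / (tp + fn)   # share of COVID-19 cases caught
        specificity = tn / (tn + fp)   # share of non-cases correctly cleared
        return accuracy, sensitivity, specificity

    # Illustrative counts only:
    print(diagnostic_metrics(tp=84, fn=16, tn=93, fp=7))
    # -> (0.885, 0.84, 0.93): 84% sensitivity, 93% specificity
    ```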
    CT scans offer deeper insight into COVID-19 diagnosis and progression than the often-used reverse transcription polymerase chain reaction, or RT-PCR, tests, which suffer from high false negative rates, processing delays and other challenges.
    Another benefit to CT scans is that they can detect COVID-19 in people without symptoms, in those who have early symptoms, during the height of the disease and after symptoms resolve.
    However, CT is not always recommended as a diagnostic tool for COVID-19 because the disease often looks similar to influenza-associated pneumonias on the scans.
    The new UCF co-developed algorithm can overcome this problem by accurately identifying COVID-19 cases, as well as distinguishing them from influenza, thus serving as a great potential aid for physicians, says Ulas Bagci, an assistant professor in UCF’s Department of Computer Science.
    Bagci was a co-author of the study and helped lead the research.
    “We demonstrated that a deep learning-based AI approach can serve as a standardized and objective tool to assist healthcare systems as well as patients,” Bagci says. “It can be used as a complementary test tool in very specific limited populations, and it can be used rapidly and at large scale in the unfortunate event of a recurrent outbreak.”
    Bagci is an expert in developing AI to assist physicians, including using it to detect pancreatic and lung cancers in CT scans.
    He also has two large National Institutes of Health grants exploring these topics, including $2.5 million for using deep learning to examine pancreatic cystic tumors and more than $2 million to study the use of artificial intelligence for lung cancer screening and diagnosis.
    To perform the study, the researchers trained a computer algorithm to recognize COVID-19 in lung CT scans of 1,280 multinational patients from China, Japan and Italy.
    Then they tested the algorithm on CT scans of 1,337 patients with lung diseases ranging from COVID-19 to cancer and non-COVID pneumonia.
    When they compared the computer’s diagnoses with ones confirmed by physicians, they found that the algorithm was extremely proficient in accurately diagnosing COVID-19 pneumonia in the lungs and distinguishing it from other diseases, especially when examining CT scans in the early stages of disease progression.
    “We showed that robust AI models can achieve up to 90 percent accuracy in independent test populations, maintain high specificity in non-COVID-19 related pneumonias, and demonstrate sufficient generalizability to unseen patient populations and centers,” Bagci says.
    The UCF researcher is a longtime collaborator with study co-authors Baris Turkbey and Bradford J. Wood. Turkbey is an associate research physician at the NIH’s National Cancer Institute Molecular Imaging Branch, and Wood is the director of NIH’s Center for Interventional Oncology and chief of interventional radiology with NIH’s Clinical Center.
    This research was supported with funds from the NIH Center for Interventional Oncology and the Intramural Research Program of the National Institutes of Health, intramural NIH grants, the NIH Intramural Targeted Anti-COVID-19 program, the National Cancer Institute and NIH.
    Bagci received his doctorate in computer science from the University of Nottingham in England and joined UCF’s Department of Computer Science, part of the College of Engineering and Computer Science, in 2015. He is the Science Applications International Corp (SAIC) chair in UCF’s Department of Computer Science and a faculty member of UCF’s Center for Research in Computer Vision. SAIC is a Virginia-based government support and services company.

  • Drugs aren't typically tested on women — artificial intelligence could correct that bias

    Researchers at Columbia University have developed AwareDX — Analysing Women At Risk for Experiencing Drug toXicity — a machine learning algorithm that identifies and predicts differences in adverse drug effects between men and women by analyzing 50 years’ worth of reports in an FDA database. The algorithm, described September 22 in the journal Patterns, automatically corrects for the biases in these data that stem from an overrepresentation of male subjects in clinical research trials.
    Though men and women can have different responses to medications — the sleep aid Ambien, for example, metabolizes more slowly in women, causing next-day grogginess — even doctors may not know about these differences because most clinical trial data are themselves biased toward men. This trickles down to impact prescribing guidelines, drug marketing and, ultimately, patients’ health.
    “Pharma has a history of ignoring complex problems. Traditionally, clinical trials have not even included women in their studies. The old-fashioned way used to be to get a group of healthy guys together to give them the drug, make sure it didn’t kill them, and you’re off to the races. As a result, we have a lot less information about how women respond to drugs than men,” says Nicholas Tatonetti (@nicktatonetti), an associate professor of biomedical informatics at Columbia University and a co-author on the paper. “We haven’t had the ability to evaluate these differences before, or even to quantify them.”
    Tatonetti teamed up with one of his students — Payal Chandak, a senior biomedical informatics major at Columbia University and the other co-author on the paper. Together they developed AwareDX. Because it is a machine learning algorithm, AwareDX can automatically adjust for sex-based biases in a way that would take concerted effort to do manually.
    “Machine learning is definitely a buzzword, but essentially the idea is to correct for these biases before you do any other statistical analysis by building a balanced subset of patients with equal parts men and women for each drug,” says Chandak.
    The algorithm uses data from the FDA Adverse Event Reporting System (FAERS), which contains reports of adverse drug effects from consumers, healthcare providers, and manufacturers all the way back to 1968. AwareDX groups the data into sex-balanced subsets before looking for patterns and trends. To improve the results, the algorithm then repeats the whole process 25 times.
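    As a minimal sketch of that balancing step (with hypothetical column names; the real FAERS schema and the published method are more involved), the pandas snippet below downsamples each drug’s reports so both sexes are equally represented, repeating the draw 25 times so downstream statistics can be averaged across resamples.

    ```python
    import pandas as pd

    def balanced_subsets(reports: pd.DataFrame, n_repeats: int = 25):
        """Yield sex-balanced resamples of adverse-event reports.

        Assumes columns 'drug' and 'sex'; real FAERS fields differ.
        """
        for i in range(n_repeats):
            parts = []
            for _, drug_reports in reports.groupby("drug"):
                by_sex = drug_reports.groupby("sex")
                n = by_sex.size().min()  # equal parts men and women
                parts.append(by_sex.sample(n=n, random_state=i))
            yield pd.concat(parts, ignore_index=True)

    # Downstream, the usual adverse-event statistics would be computed
    # on each balanced subset and aggregated across the 25 repeats.
    ```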
    The researchers compiled the results into a bank of over 20,000 potential sex-specific drug effects, which can then be verified either by looking back at older data or by conducting new studies down the line. Though there is a lot of work left to do, the researchers have already had success verifying the results for several drugs based on previous genetic research.
    For example, the ABCB1 gene, which affects how much of a drug is usable by the body and for how long, is known to be more active in men than women. Because of this, the researchers expected to see a greater risk of muscle aches for men taking simvastatin — a cholesterol medication — and a greater risk of slowing heart rate for women taking risperidone — an antipsychotic. AwareDX successfully predicted both of these effects.
    “The most exciting thing to me is that not only do we have a database of adverse events that we’ve developed from this FDA resource, but we’ve shown that for some of these events, there is preexisting knowledge of genetic differences between men and women,” says Chandak. “Using that knowledge, we can actually predict different responses that men and women should have and validate our method against those. That gives us a lot of confidence in the method itself.”
    By continuing to verify their results, the researchers hope that the insights from AwareDX will help doctors make more informed choices when prescribing drugs, especially to women. “Doctors actually look at adverse effect information specific to the drug they prescribe. So once this information is studied further and corroborated, it’s actually going to impact drug prescriptions and people’s health,” says Tatonetti.
    This work was supported by the National Institutes of Health.

    Story Source:
    Materials provided by Cell Press.

  • Screen time can change visual perception — and that's not necessarily bad

    The coronavirus pandemic has shifted many of our interactions online, with Zoom video calls replacing in-person classes, work meetings, conferences and other events. Will all that screen time damage our vision?
    Maybe not. It turns out that our visual perception is highly adaptable, according to research from Psychology Professor and Cognitive and Brain Sciences Coordinator Peter Gerhardstein’s lab at Binghamton University.
    Gerhardstein, Daniel Hipp and Sara Olsen — his former doctoral students — will publish “Mind-Craft: Exploring the Effect of Digital Visual Experience on Changes in Orientation Sensitivity in Visual Contour Perception,” in an upcoming issue of the academic journal Perception. Hipp, the lead author and main originator of the research, is now at the VA Eastern Colorado Health Care System’s Laboratory for Clinical and Translational Research. Olsen, who designed stimuli for the research and aided in the analysis of the results, is now at the University of Minnesota’s Department of Psychiatry.
    “The finding in the work is that the human perceptual system rapidly adjusts to a substantive alteration in the statistics of the visual world, which, as we show, is what happens when someone is playing video games,” Gerhardstein said.
    The experiments
    The research focuses on a basic element of vision: our perception of orientation in the environment.
    Take a walk through the Binghamton University Nature Preserve and look around. Stimuli — trees, branches, bushes, the path — are oriented in many different angles. According to an analysis by Hipp, there is a slight predominance of horizontal and then vertical planes — think of the ground and the trees — but no shortage of oblique angles.
    Then consider the “carpentered world” of a cityscape — downtown Binghamton, perhaps. The percentage of horizontal and vertical orientations increases dramatically, while the obliques fall away. Buildings, roofs, streets, lampposts: The cityscape is a world of sharp angles, like the corner of a rectangle. The digital world ramps up the predominance of the horizontal and vertical planes, Gerhardstein explained.
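    Those scene statistics come from measuring edge orientations; a minimal sketch of that kind of analysis (a standard technique, not the study’s code) is below, estimating a gradient-weighted orientation histogram for a grayscale image with NumPy.

    ```python
    import numpy as np

    def orientation_histogram(gray: np.ndarray, n_bins: int = 18):
        """Histogram of edge orientations (0-180 degrees), weighted by
        gradient magnitude; `gray` is a 2-D float array."""
        gy, gx = np.gradient(gray)
        mag = np.hypot(gx, gy)
        # Edge orientation is perpendicular to the gradient direction.
        ang = (np.degrees(np.arctan2(gy, gx)) + 90.0) % 180.0
        hist, _ = np.histogram(ang, bins=n_bins, range=(0, 180), weights=mag)
        return hist / hist.sum()

    # In a cityscape, mass piles up in the bins near 0 (horizontal) and
    # 90 (vertical) degrees; a forest scene spreads across oblique bins.
    ```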
    Research shows that we tend to pay more attention to horizontal and vertical orientations, at least in the lab; in real-world environments, these differences probably aren’t noticeable, although they likely still drive behavior. Painters, for example, tend to exaggerate these distinctions in their work, a focus of a different research group.
    Orientation is a fundamental aspect of how our brain and eyes work together to build the visual world. Interestingly, it’s not fixed; our visual system can adapt to changes swiftly, as the group’s two experiments show.
    The first experiment established a method of eye tracking that doesn’t require an overt response, such as touching a screen. The second had college students play four hours of Minecraft — one of the most popular computer games in the world — with visual stimuli presented before and after the gaming session. Researchers then measured subjects’ ability to perceive phenomena in the oblique and vertical/horizontal orientations using the eye-tracking method from the first experiment.
    A single session produced a clearly detectable change. While the screen-less control group showed no changes in their perception, the game-players detected horizontal and vertical orientations more easily. Neither group changed their perception in oblique orientations.
    We still don’t know how temporary these changes are, although Gerhardstein speculates that the vision of the game-playing research subjects likely returned to normal quickly.
    “So, the immediate takeaway is the impressive extent to which the young adult visual system can rapidly adapt to changes in the statistics of the visual environment,” he said.
    In the next phase of research, Gerhardstein’s lab will track the visual development of two groups of children, one assigned to regularly play video games and the other to avoid screen-time, including television. If the current experiment is any indication, there may be no significant differences, at least when it comes to orientation sensitivity. The pandemic has put in-person testing plans on hold, although researchers have given a survey about children’s playing habits to local parents and will use the results to design a study.
    Adaptive vision
    Other research groups who have examined the effects of digital exposure on other aspects of visual perception have concluded that long-term changes do take place, at least some of which are seen as helpful.
    Helpful? Like other organisms, humans tend to adapt fully to the environment they experience. The first iPhone came out in 2007 and the first iPad in 2010. Children who are around 10 to 12 years old have grown up with these devices, and will live and operate in a digital world as adults, Gerhardstein pointed out.
    “Is it adaptive for them to develop a visual system that is highly sensitive to this particular environment? Many would argue that it is,” he said. “I would instead suggest that a highly flexible system that can shift from one perceptual ‘set’ to another rapidly, so that observers are responding appropriately to the statistics of a digital environment while interacting with digital media, and then shifting to respond appropriately to the statistics of a natural scene or a cityscape, would be most adaptive.”

  • New detector breakthrough pushes boundaries of quantum computing

    Physicists at Aalto University and VTT Technical Research Centre of Finland have developed a new detector for measuring energy quanta at unprecedented resolution. This discovery could help bring quantum computing out of the laboratory and into real-world applications. The results have been published today in Nature.
    The type of detector the team works on is called a bolometer, which measures the energy of incoming radiation by measuring how much it heats up the detector. Professor Mikko Möttönen’s Quantum Computing and Devices group at Aalto has been developing its expertise in bolometers for quantum computing over the past decade and has now developed a device that can match current state-of-the-art detectors used in quantum computers.
    ‘It is amazing how we have been able to improve the specs of our bolometer year after year, and now we embark on an exciting journey into the world of quantum devices,’ says Möttönen.
    Measuring the energy of qubits is at the heart of how quantum computers operate. Most quantum computers currently measure a qubit’s energy state by measuring the voltage induced by the qubit. However, there are three problems with voltage measurements: firstly, measuring the voltage requires extensive amplification circuitry, which may limit the scalability of the quantum computer; secondly, this circuitry consumes a lot of power; and thirdly, the voltage measurements carry quantum noise which introduces errors in the qubit readout. Quantum computer researchers hope that by using bolometers to measure qubit energy, they can overcome all of these complications, and now Professor Möttönen’s team have developed one that is fast enough and sensitive enough for the job.
    ‘Bolometers are now entering the field of quantum technology and perhaps their first application could be in reading out the quantum information from qubits. The bolometer speed and accuracy seems now right for it,’ says Professor Möttönen.
    The team had previously produced a bolometer made of a gold-palladium alloy with unparalleled low noise levels in its measurements, but it was still too slow to measure qubits in quantum computers. The breakthrough in this new work was achieved by swapping from making the bolometer out of gold-palladium alloys to making them out of graphene. To do this, they collaborated with Professor Pertti Hakonen’s NANO group — also at Aalto University — who have expertise in fabricating graphene-based devices. Graphene has a very low heat capacity, which means that it is possible to detect very small changes in its energy quickly. It is this speed in detecting the energy differences that makes it perfect for a bolometer with applications in measuring qubits and other experimental quantum systems. By swapping to graphene, the researchers have produced a bolometer that can make measurements in well below a microsecond, as fast as the technology currently used to measure qubits.
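    The role of heat capacity can be seen in the textbook bolometer relations (a back-of-the-envelope sketch, not the paper’s analysis): an absorbed energy packet E produces a temperature rise set by the heat capacity C, and the detector recovers with a thermal time constant set by C and the thermal conductance G to its surroundings, so lowering C, as graphene does, both boosts the signal and speeds up the detector.

    ```latex
    \Delta T = \frac{E}{C}, \qquad \tau = \frac{C}{G}
    ```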
    ‘Changing to graphene increased the detector speed by 100 times, while the noise level remained the same. After these initial results, there is still a lot of optimisation we can do to make the device even better,’ says Professor Hakonen.
    Now that the new bolometers can compete when it comes to speed, the hope is to utilise the other advantages bolometers have in quantum technology. While the bolometers reported in the current work perform on par with the current state-of-the-art voltage measurements, future bolometers have the potential to outperform them. Current technology is limited by Heisenberg’s uncertainty principle: voltage measurements will always have quantum noise, but bolometers do not. This higher theoretical accuracy, combined with the lower energy demands and smaller size — the graphene flake could fit comfortably inside a single bacterium — means that bolometers are an exciting new device concept for quantum computing.
    The next steps for their research are to resolve the smallest energy packets ever observed using bolometers in real time and to use the bolometer to measure the quantum properties of microwave photons, which have exciting applications not only in quantum technologies such as computing and communications, but also in the fundamental understanding of quantum physics.
    Many of the scientists involved in the research also work at IQM, a spin-out of Aalto University developing technology for quantum computers. “IQM is constantly looking for new ways to enhance its quantum-computer technology and this new bolometer certainly fits the bill,” explains Dr Kuan Yen Tan, co-founder of IQM, who was also involved in the research.

    Story Source:
    Materials provided by Aalto University.

  • 'Liking' an article online may mean less time spent reading it

    When people have the option to click “like” on a media article they encounter online, they spend less time actually reading the text, a new study suggests.
    In a lab experiment, researchers found that people spent about 7 percent less time reading articles on controversial topics when they had the opportunity to upvote or downvote them than if there was no interactive element.
    The finding was strongest when an article agreed with the reader’s point of view.
    The results suggest that the ability to interact with online content may change how we consume it, said Daniel Sude, who led the work while earning a doctoral degree in communication at The Ohio State University.
    “When people are voting whether they like or dislike an article, they’re expressing themselves. They are focused on their own thoughts and less on the content in the article,” Sude said.
    “It is like the old phrase, ‘If you’re talking, you’re not listening.’ People were talking back to the articles without listening to what they had to say.”
    In another finding, people’s existing views on controversial topics like gun control or abortion became stronger after voting on articles that agreed with their views, even when they spent less time reading them.
    “Just having the ability to like an article you agreed with was enough to amplify your attitude,” said study co-author Silvia Knobloch-Westerwick, professor of communication at Ohio State.
    “You didn’t need to read the article carefully, you didn’t have to learn anything new, but you are more committed to what you already believed.”
    The study, also co-authored by former Ohio State doctoral student George Pearson, was published online recently in the journal Computers in Human Behavior and will appear in the January 2021 print edition.
    The study involved 235 college students. Before the study, the researchers measured their views on four controversial topics used in the experiment: abortion, welfare benefits, gun control and affirmative action.
    Participants were then shown four versions of an online news website created for the study, each on one of the controversial topics. Each webpage showed headlines and first paragraphs for four articles, two with a conservative slant and two with a liberal slant. Participants could click on the headlines to read the full stories.
    Two versions of the websites had a banner that said, “Voting currently enabled for this topic,” and each article had an up arrow or down arrow that participants could click on to express their opinion.
    The other two websites had a banner that said, “Voting currently disabled for this topic.”
    Participants were given three minutes to browse each website as they wished, although they were not told about the time limit. The researchers measured the time participants spent on each story and whether they voted if they had the opportunity.
    As expected, for each website, participants spent more time reading articles that agreed with their views (about 1.5 minutes) than opposing views (less than a minute).
    But they spent about 12 seconds less time reading the articles they agreed with if they could vote.
    In addition, people voted on about 12 percent of articles that they didn’t select to read, the study showed.
    “Rather than increasing engagement with website content, having the ability to interact may actually distract from it,” Sude said.
    The researchers measured the participants’ views on the four topics again after they read the websites to see if their attitudes had changed at all.
    Results showed that when participants were not able to vote, time spent reading articles that agreed with their original views strengthened these views. The more time they spent reading, the stronger their views became.
    When participants were able to vote, their voting behavior was as influential as their reading time. Even if they stopped reading and upvoted an article, their attitudes still became stronger.
    “It is important that people’s views still became stronger by just having the opportunity to vote,” Knobloch-Westerwick said.
    “When they had the opportunity to vote on the articles, their attitudes were getting more extreme with limited or no input from the articles themselves. They were in an echo chamber of one.”
    Sude said there is a better way to interact with online news.
    “Don’t just click the like button. Read the article and leave thoughtful comments that are more than just a positive or negative rating,” he said.
    “Say why you liked or disliked the article. The way we express ourselves is important and can influence the way we think about an issue.”

  • The secretive networks used to move money offshore

    In 2016, the world’s largest-ever data leak, dubbed “The Panama Papers,” exposed a vast global network of people — including celebrities and world leaders — who used offshore tax havens, anonymous transactions through intermediaries and shell corporations to hide their wealth, grow their fortunes and avoid taxes.
    Researchers at the USC Viterbi School of Engineering have now conducted a deep analysis of the entities, and their interrelationships, originally revealed in the 11.5 million files leaked to the International Consortium of Investigative Journalists. They discovered a uniquely fragmented network, vastly different from more traditional social or organizational networks, which demonstrates why these systems of transactions and associations are so robust and difficult to infiltrate or take down. The work has been published in Applied Network Science.
    Lead author Mayank Kejriwal is an assistant professor working in the Daniel J. Epstein Department of Industrial and Systems Engineering and USC’s Information Sciences Institute who studies complex (typically, social) systems like online trafficking markets using computational methods and network science. He said the research team’s aim was to study the Panama Papers network as a whole, in the same way you might study a social network like Facebook, to try to understand what the network behavior can tell us about how money can be moved.
    “In general, in any social network like LinkedIn or Facebook, there is something called ‘Small World Phenomenon’, which means that you’re only ever around six people away from anyone in the world,” Kejriwal said.
    “For instance, if you want to get from yourself to Bill Gates, on average you would be around six connections away,” he said.
    However the team discovered that the Panama Papers network was about as far removed from this traditional social or organizational network behavior as it could possibly be. Instead of a network of highly integrated connections, the researchers discovered a series of secretive disconnected fragments, with entities, intermediaries and individuals involved in transactions and corporations exhibiting very few connections with other entities in the system.
    “It was really unusual. The degree of fragmentation is something I have never seen before,” said Kejriwal. “I’m not aware of any other network that has this kind of fragmentation.”
    “So (without any documentation or leak), if you wanted to find the chain between one organization and another organization, you would not be able to find it, because the chances are that there is no chain — it’s completely disconnected,” Kejriwal said.
    Most social, friendship or organizational networks contain a series of triangular structures in a pattern known as the “friend of a friend” phenomenon.
    “The simple notion is that a friend of a friend is also a friend,” Kejriwal said. “And we can measure that by counting the number of triangles in the network.”
    However, the team discovered that this triangular structure was not a feature of the Panama Papers network.
    “It turns out that not only is it not prevalent, but it’s far less prevalent than even for a random network,” Kejriwal said. “If you literally randomly connect things in a haphazard fashion and then count the triangles in that network, this network is even sparser than that.” He added, “Compared to a random network, in this type of network, links between financial entities are scrambled until they are essentially meaningless (so that anyone can be transacting with anyone else).”
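    Both diagnostics described here, triangle density and fragmentation, are standard network measures. A minimal sketch with networkx (on a toy graph, not the Panama Papers data) is below: it compares a graph’s transitivity and connected-component count against an Erdős–Rényi random graph with the same number of nodes and edges.

    ```python
    import networkx as nx

    def fragmentation_report(G: nx.Graph, seed: int = 0):
        n, m = G.number_of_nodes(), G.number_of_edges()
        # Random baseline with the same size and density.
        R = nx.gnm_random_graph(n, m, seed=seed)
        print(f"transitivity: {nx.transitivity(G):.4f} "
              f"(random baseline {nx.transitivity(R):.4f})")
        print(f"components: {nx.number_connected_components(G)} "
              f"(random baseline {nx.number_connected_components(R)})")

    # Toy stand-in graph; the paper's finding is that the real network's
    # transitivity falls below even this random baseline, and its
    # component count is enormous for a network of its size.
    fragmentation_report(nx.gnm_random_graph(1000, 1200, seed=1))
    ```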
    It is precisely this disconnectedness that makes the system of secret global financial dealings so robust. Because there was no way to trace relationships between entities, the network could not be easily compromised.
    “So what this suggests is that secrecy is built into the system and you cannot penetrate it,” Kejriwal said.
    “In an interconnected world, we don’t expect anyone to be impenetrable. Everyone has a weak link,” Kejriwal said. “But not in this network. The fact it is so fragmented actually protects them.”
    Kejriwal said the network behavior demonstrates that those involved in the Panama Papers network of offshore entities and transactions were very sophisticated, knowing exactly how to move money around in a way that it becomes untraceable and they are not vulnerable through their connections to others in the system. Because it is a global network, there are few options for national or international bodies to intervene in order to recoup taxes and investigate corruption and money laundering.
    “I don’t know how anyone would try to bring this down, and I’m not sure that they would be able to. The system seems unattackable,” Kejriwal said.

  • App analyzes coronavirus genome on a smartphone

    A new mobile app has made it possible to analyse the genome of the SARS-CoV-2 virus on a smartphone in less than half an hour.
    Cutting-edge nanopore devices have enabled scientists to read or ‘sequence’ the genetic material in a biological sample outside a laboratory; however, analysing the raw data has still required access to high-end computing power — until now.
    The app Genopo, developed by the Garvan Institute of Medical Research, in collaboration with the University of Peradeniya in Sri Lanka, makes genomics more accessible to remote or under-resourced regions, as well as the hospital bedside.
    “Not everyone has access to the high-power computing resources that are required for DNA and RNA analysis, but most people have access to a smartphone,” says co-senior author Dr Ira Deveson, who heads the Genomic Technologies Group at Garvan’s Kinghorn Centre for Clinical Genomics.
    “Fast, real-time genomic analysis is more crucial today than ever, as a central method for tracking the spread of coronavirus. Our app makes genomic analysis more accessible, literally placing the technology into the pockets of scientists around the world.”
    The researchers report the app Genopo in the journal Communications Biology.
    Taking genome analysis off-line
    Genomic sequencing no longer requires a sophisticated lab setup.
    The size of a USB stick, portable devices such as the Oxford Nanopore Technologies MinION sequencer can rapidly generate genomic sequences from a sample in the field or the clinic. The technology has been used for Ebola surveillance in West Africa, to profile microbial communities in the Arctic and to track coronavirus evolution during the current pandemic.
    However, analysing genome sequencing data requires powerful computation. Scientists need to piece the many strings of genetic letters from the raw data into a single sequence and pinpoint the instances of genetic variation that shed light on how a virus evolves.
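    In practice that step is done by purpose-built aligners and variant callers; purely as a toy illustration of the idea, the sketch below takes reads already aligned to a reference, forms a majority-vote consensus per position, and reports where the consensus disagrees with the reference.

    ```python
    from collections import Counter

    def consensus_variants(reference, aligned_reads):
        """Majority-vote consensus from (offset, read) pairs aligned to
        `reference`; returns [(position, ref_base, consensus_base), ...]."""
        piles = [Counter() for _ in reference]
        for offset, read in aligned_reads:
            for i, base in enumerate(read):
                piles[offset + i][base] += 1
        variants = []
        for pos, (ref_base, pile) in enumerate(zip(reference, piles)):
            if pile:
                call, _ = pile.most_common(1)[0]
                if call != ref_base:
                    variants.append((pos, ref_base, call))
        return variants

    # Toy data: both reads support a T at position 2 of the reference.
    print(consensus_variants("ACGTAC", [(0, "ACTTA"), (1, "CTTAC")]))
    # -> [(2, 'G', 'T')]
    ```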
    “Until now, genomic analysis has required the processing power of high-end server computers or cloud services. We set out to change that,” explains co-senior author Hasindu Gamaarachchi, Genomics Computing Systems Engineer at the Garvan Institute.
    “To enable in situ genomic sequencing and analysis, in real time and without major laboratory infrastructure, we developed an app that could execute bioinformatics workflows on nanopore sequencing datasets that are downloaded to a smartphone. The reengineering process, spearheaded by first author Hiruna Samarakoon, required overcoming a number of technical challenges due to various resource constraints in smartphones. The app Genopo combines a number of available bioinformatics tools into a single Android application, ‘miniaturised’ to work on the processing power of a consumer Android device.”
    Coronavirus testing
    The researchers tested Genopo on the raw sequencing data of virus samples isolated from nine Sydney patients infected with SARS-CoV-2, which involved extracting and amplifying the virus RNA from a swab sample, sequencing the amplified DNA with a MinION device and analysing the data on a smartphone. The researchers tested their app on different Android devices, including models from Nokia, Huawei, LG and Sony.
    The Genopo app took an average of 27 minutes to determine the complete SARS-CoV-2 genome sequence from the raw data, which the researchers say opens up the possibility of genomic analysis at the point of care, in real time. The researchers also showed that Genopo can be used to profile DNA methylation — a modification which changes gene activity — in a sample of the human genome.
    “This illustrates a flexible, efficient architecture that is suitable to run many popular bioinformatics tools and accommodate small or large genomes,” says Dr Deveson. “We hope this will make genomics much more accessible to researchers to unlock the information in DNA or RNA to the benefit of human health, including in the current pandemic.”
    Genopo is a free, open-source application available through the Google Play store (https://play.google.com/store/apps/details?id=com.mobilegenomics.genopo&hl=en).
    This project was supported by a Medical Research Future Fund (grant APP1173594), a Cancer Institute NSW Early Career Fellowship and The Kinghorn Foundation. Garvan is affiliated with St Vincent’s Hospital Sydney and UNSW Sydney.