More stories

  • One in three who are aware of deepfakes say they have inadvertently shared them on social media

    A Nanyang Technological University, Singapore (NTU Singapore) study has found that despite being aware of ‘deepfakes’ in general, some Singaporeans report having circulated deepfake content on social media that they only later discovered was a hoax.
    Deepfakes, a portmanteau of ‘deep learning’ and ‘fake’, are ultrarealistic fake videos made with artificial intelligence (AI) software to depict people doing things they have never done — not just slowing them down or changing the pitch of their voice, but also making them appear to say things that they have never said at all.
    In a survey of 1,231 Singaporeans led by NTU Singapore’s Assistant Professor Saifuddin Ahmed, 54 per cent of respondents said they were aware of deepfakes, and of those, one in three reported having shared content on social media that they subsequently learnt was a deepfake.
    The study also found that more than one in five of those who are aware of deepfakes said that they regularly encounter deepfakes online.
    The survey findings, reported in the journal Telematics and Informatics in October, come amid a rising number of deepfake videos identified online. Sensity, a deepfake detection technology firm, estimates that the number of deepfake videos identified online doubled to 49,081 over the six months to June 2020.
    Deepfakes that have gone viral include a 2018 video of former President Barack Obama using an expletive to describe President Donald Trump, and a video last year of Facebook founder Mark Zuckerberg claiming to control the future, thanks to stolen data.

    Assistant Professor Saifuddin of NTU’s Wee Kim Wee School of Communication and Information said: “Fake news refers to false information published under the guise of being authentic news to mislead people, and deepfakes are a new, far more insidious form of fake news. In some countries, we are already witnessing how such deepfakes can be used to create non-consensual porn, incite fear and violence, and influence civic mistrust. As the AI technology behind the creation of deepfakes evolves, it will be even more challenging to discern fact from fiction.”
    “While tech companies like Facebook, Twitter and Google have started to label what they have identified as manipulated online content like deepfakes, more efforts will be required to educate the citizenry in effectively negating such content.”
    Americans more likely than Singaporeans to share deepfakes
    The study benchmarked the findings on Singaporeans’ understanding of deepfakes against a similar demographic and number of respondents in the United States.
    Respondents in the US were more aware of deepfakes (61% in the US vs. 54% in SG), and said they were both more concerned about deepfakes and more frequently exposed to them. A larger share of US respondents than Singaporean respondents reported sharing content that they later learnt was a deepfake (39% vs. 33%).
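    Note that these sharing rates apply only to respondents who were already aware of deepfakes. A quick calculation with the survey percentages quoted above (an illustration derived from those figures, not a number reported by the study) gives the implied share of all respondents who shared a deepfake:

    ```python
    # Implied share of ALL respondents who shared a deepfake,
    # combining awareness rates with sharing rates among the aware.
    aware = {"SG": 0.54, "US": 0.61}
    shared_given_aware = {"SG": 0.33, "US": 0.39}

    for country in ("SG", "US"):
        overall = aware[country] * shared_given_aware[country]
        print(f"{country}: {overall:.0%} of all respondents")  # SG ~18%, US ~24%
    ```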

    Asst Prof Saifuddin said: “These differences are not surprising, given the more widespread relevance and public discussion surrounding deepfakes in the US. More recently, a rise in the number of deepfakes, including those of President Donald Trump, has raised anxieties regarding the destructive potential of this form of disinformation.
    “On the other hand, Singapore has not witnessed direct impacts of deepfakes, and the government has introduced the Protection from Online Falsehoods and Manipulation Act (POFMA) to limit the threat posed by disinformation, including deepfakes.”
    But legislation alone is not enough, he added, citing a 2018 survey by the global independent market research agency Ipsos, which found that while four in five Singaporeans say they can confidently spot fake news, more than 90 per cent mistakenly identified at least one in five fake headlines as real.
    “The government’s legislation to inhibit the pervasive threat of disinformation has also been helpful, but we need to continue improving digital media literacy going forward, especially for those who are less capable of discerning facts from disinformation,” said Asst Prof Saifuddin, whose research interests include social media and public opinion.
    The NTU study on deepfake awareness was funded by the University and Singapore’s Ministry of Education, and the findings are part of a longer-term study that examines citizens’ trust in AI technology.

  • Measuring risk-taking – by watching how people move a computer mouse

    How you move a computer mouse while deciding whether to click on a risky bet or a safe choice may reveal how much of a risk-taker you really are.
    Researchers found that people whose mouse drifted toward the safe option on the computer screen — even when they ended up taking the risky bet — may be more risk-averse than their choice would indicate. Those who moved the mouse toward the risk before accepting the safe option may be more open to risk than it seems.
    “We could see the conflict people were feeling making the choice through their hand movements with the mouse,” said Paul Stillman, lead author of the study, who received his Ph.D. in psychology at The Ohio State University.
    “How much their hand is drawn to the choice they didn’t make can reveal a lot about how difficult the decision was for them,” said Stillman, who is now a postdoctoral researcher in marketing at Yale University.
    Stillman conducted the study with Ian Krajbich, associate professor of psychology and economics at Ohio State, and Melissa Ferguson, professor of psychology at Yale. It was published today in the Proceedings of the National Academy of Sciences.
    The researchers were surprised at how accurate mouse tracking was at predicting how people would react to other similar risk choices.

    “In many cases, we could accurately predict how people would behave in the future after we observed them just once choosing to take a gamble or not,” Krajbich said.
    “It is rare to get predictive accuracy with just a single decision in an experiment like this.”
    The researchers conducted three studies with a total of 652 people. They measured participants’ mouse movements as they made 215 decisions on various gambles. Each gamble was different, with some being bigger risks than others.
    Each participant’s mouse always started at the bottom center of the screen. Each trial began with two boxes appearing on the top left and right corners of the screen.
    One box offered them a 50/50 gamble, such as a 50% chance of gaining $10 and a 50% chance of losing $5. The other box contained a certain option, a guaranteed outcome that was usually $0.

    The question was: How would people move the mouse toward their ultimate choice?
    In some cases, participants took a relatively straight path from where they started to the choice they made. The researchers interpreted that as indicating the person was confident about their choice from the start and didn’t have much internal conflict.
    But sometimes participants veered toward one option before settling on the other. That suggests they did feel some conflict.
    This tells the researchers much more about the participants than simply observing what they finally chose, Krajbich said.
    “Choice data is not very useful for many purposes. You don’t know the strength of a person’s preference or how close they were to making the other choice,” he said. “And that’s what the mouse-tracking measure can give us.”
    For example, in one analysis, the researchers looked at people who all made the same choice on one gamble. Could they tell which ones would flip to the opposite choice on a similar gamble?
    It turns out they could, simply by measuring the mouse trajectories to see if they had veered toward the opposite choice the first time.
    “We could very nicely differentiate between people, even when they made the same choice,” Stillman said. “It gives us a much richer picture of risk aversion and loss aversion in people.”
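    As a rough illustration of the kind of trajectory measure involved (a sketch under assumed data formats, not the exact metric used in the published study), the following computes a cursor path’s maximum deviation from the straight start-to-choice line, signed so that bowing toward the unchosen option counts as positive:

    ```python
    import numpy as np

    def max_deviation(traj_xy, chosen_xy, unchosen_xy):
        """Maximum perpendicular deviation of a cursor path from the straight
        start-to-choice line, signed so that bowing toward the unchosen option
        is positive.

        traj_xy: (n, 2) array of cursor samples; the first row is the start position.
        chosen_xy, unchosen_xy: screen coordinates of the two response boxes.
        """
        traj = np.asarray(traj_xy, dtype=float)
        start = traj[0]
        end = np.asarray(chosen_xy, dtype=float)
        line = end - start
        line = line / np.linalg.norm(line)

        # Signed perpendicular distance of every sample from the start -> choice line.
        rel = traj - start
        perp = rel[:, 0] * line[1] - rel[:, 1] * line[0]

        # Orient the sign so deviation toward the unchosen box is positive.
        unchosen_rel = np.asarray(unchosen_xy, dtype=float) - start
        toward_unchosen = np.sign(unchosen_rel[0] * line[1] - unchosen_rel[1] * line[0])
        return float((perp * toward_unchosen).max())
    ```

    A path that hugs the straight line to the chosen box yields a value near zero, while one that first arcs toward the other box yields a larger positive value.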
    In one of the studies, the researchers tested whether they could manipulate how much risk people were willing to take — and whether it would be visible in their mouse trajectories.
    In this study, the researchers told some participants to treat the gambles like a stock trader would. They were told not to focus so much on individual gambles, but to see if they could build a “portfolio” of winning choices.
    “When we told them to think like a trader, we could see from the mouse tracking that they were less conflicted when they accepted gambles and more conflicted when they rejected them — just as we would expect,” Krajbich said.
    While this study looked at mouse trajectories, the results suggest other motor movements might also provide information about our decision-making, according to the researchers.
    “Scrolling on a phone may also provide information on how people are making a decision,” Krajbich said.
    “What we’re measuring is a physical manifestation of hesitation. Anything like that, such as scrolling, could yield a similar glimpse of this internal conflict.”

  • Optimizing complex modeling processes through machine learning technologies

    Engineering a spaceship is as difficult as it sounds. Modeling accounts for much of the time and effort it takes to create spaceships and other complex engineering systems: it requires extensive physics calculations, sifting through a multitude of candidate models, and drawing on tribal knowledge to pin down individual parts of a system’s design.
    Dr. Zohaib Hasnain’s research shows that data-driven techniques used in autonomous systems hold the potential to solve these complex modeling problems more accurately and efficiently. Applying high-functioning artificial intelligence to physics-based processes, he aims to “automate” modeling, reducing the time it takes to produce solutions and cutting production costs.
    “If I am trying to undertake something along the lines of, say, designing a pencil, there’s a process involved in designing that pencil,” Hasnain said. “I have a certain set of steps that I would undertake given the knowledge that I have available to me based on what others have done in the past. Anything that can be described by a process or an algorithm on paper can be automated and analyzed in the context of an autonomous system.”
    An assistant professor in the J. Mike Walker ’66 Department of Mechanical Engineering, Hasnain realized, while working in the aerospace industry, how often projects were delayed by modeling efforts. While conducting traditional modeling processes, scientists and researchers must create various models, many of which require testing. Additionally, sifting through individual models takes far too long to produce answers. One example of traditional modeling for space systems is computational fluid dynamics, or CFD, which uses numerical analysis to determine solutions, resulting in hefty computational costs and substantial human labor for verification.
    “I always thought that there was work to be cut out because there are autonomous systems and machines that seemed capable of handling the bottleneck that is modeling,” Hasnain said. “My research is a first step in understanding how and when data-driven techniques are beneficial, with the ultimate goal of taking a process that consumes months or weeks to solve, and producing a solution in hours or days.”
    Hasnain, accompanied by assistant professor Dr. Vinayak R. Krishnamurthy and graduate research assistant Kaustubh Tangsali, conducted a study to understand how commonly used machine-learning architectures such as convolutional neural networks (CNNs) and physics-informed neural networks (PINNs) fare when applied to the problem of fluidic prediction. The data-driven approach uses a pre-existing modeling database to train a model over carefully controlled variations in the fundamental physics of the fluid, as well as in the geometries over which the fluid flows. The trained model is then used to make predictions. Their research found that both CNNs and PINNs have the potential to optimize modeling processes when targeted at very specific aspects of the solution process. They are now working on a hybrid learning approach to achieve their final goal of speeding up the design process.
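    For a sense of what such a data-driven surrogate looks like, here is a minimal convolutional encoder-decoder that maps a geometry mask to a predicted flow field. The layer counts, channel sizes, and output fields are illustrative assumptions, not the configuration reported in the paper:

    ```python
    import torch
    import torch.nn as nn

    class FlowFieldCNN(nn.Module):
        """Toy encoder-decoder mapping a geometry mask to a predicted flow field."""
        def __init__(self, out_channels=3):  # e.g., u, v velocity components and pressure
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, out_channels, 4, stride=2, padding=1),
            )

        def forward(self, geometry_mask):  # (batch, 1, H, W) binary obstacle mask
            return self.decoder(self.encoder(geometry_mask))

    # Such a surrogate would be trained by regressing against precomputed CFD
    # solutions, e.g. with a mean-squared-error loss over the flow field.
    ```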
    “We’re looking at a different set of tools that will replace the old tools,” said Hasnain. “We are trying to understand how these new tools behave in the context of applications traditionally governed by first principles-based solution techniques.”
    The researchers published their findings in the Journal of Mechanical Design. Their article, “Generalizability of Convolutional Encoder-Decoder Networks for Aerodynamic Flow-field Prediction Across Geometric and Physical-Fluidic Variations,” focuses on understanding data-driven tools that have the potential to replace the modeling tools that are the current industry standard.
    From the research results, Hasnain hopes to build an autonomous infrastructure that pulls from a collection of data to produce modeling solutions through hybrid machine-learning architectures. Through algorithms and pre-existing data, the infrastructure will be a modeling process that can be applied to various systems in real-life applications. Eventually, he plans to share this infrastructure for widespread, free usage.
    “I would like this infrastructure to be a community initiative that’s offered free to everyone,” Hasnain said. “Perhaps more importantly, because it can produce near on-demand solutions as opposed to the current modeling state-of-the-art, which is extremely time-consuming.”
    The infrastructure is in its early stages of development. Hasnain and his fellow researchers are working to produce a prototype in the near future.

    Story Source:
    Materials provided by Texas A&M University. Original written by Michelle Revels. Note: Content may be edited for style and length.

  • World's smallest atom-memory unit created

    Faster, smaller, smarter and more energy-efficient chips for everything from consumer electronics to big data to brain-inspired computing could soon be on the way after engineers at The University of Texas at Austin created the smallest memory device yet. And in the process, they figured out the physics dynamic that unlocks dense memory storage capabilities for these tiny devices.
    The research, published recently in Nature Nanotechnology, builds on a discovery from two years ago, when the researchers created what was then the thinnest memory storage device. In this new work, the researchers reduced the size even further, shrinking the cross-sectional area down to just a single square nanometer.
    Getting a handle on the physics that packs dense memory storage capability into these devices enabled the researchers to make them much smaller. Defects, or holes in the material, provide the key to unlocking this high-density memory storage capability.
    “When a single additional metal atom goes into that nanoscale hole and fills it, it confers some of its conductivity into the material, and this leads to a change or memory effect,” said Deji Akinwande, professor in the Department of Electrical and Computer Engineering.
    Though they used molybdenum disulfide — also known as MoS2 — as the primary nanomaterial in their study, the researchers think the discovery could apply to hundreds of related atomically thin materials.
    The race to make smaller chips and components is all about power and convenience. With smaller processors, you can make more compact computers and phones. But shrinking down chips also decreases their energy demands and increases capacity, which means faster, smarter devices that take less power to operate.
    “The results obtained in this work pave the way for developing future generation applications that are of interest to the Department of Defense, such as ultra-dense storage, neuromorphic computing systems, radio-frequency communication systems and more,” said Pani Varanasi, program manager for the U.S. Army Research Office, which funded the research.
    The original device — dubbed “atomristor” by the research team — was at the time the thinnest memory storage device ever recorded, with a single atomic layer of thickness. But shrinking a memory device is not just about making it thinner but also building it with a smaller cross-sectional area.
    “The scientific holy grail for scaling is going down to a level where a single atom controls the memory function, and this is what we accomplished in the new study,” Akinwande said.
    Akinwande’s device falls under the category of memristors, a popular area of memory research centered on electrical components that can modify the resistance between their two terminals without the need for a third terminal in the middle, known as the gate. That means they can be smaller than today’s memory devices and boast more storage capacity.
    This version of the memristor — developed using the advanced facilities at the Oak Ridge National Laboratory — promises capacity of about 25 terabits per square centimeter. That is 100 times higher memory density per layer compared with commercially available flash memory devices.
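    As a back-of-the-envelope check on those figures (assuming one bit per cell and ignoring any spacing or peripheral circuitry, details the article does not give):

    ```python
    # Upper bound on areal density if every 1 nm^2 cell cross-section stored one bit.
    nm_per_cm = 1e7
    cells_per_cm2 = nm_per_cm ** 2              # 1e14 one-square-nanometer cells per cm^2
    max_terabits_per_cm2 = cells_per_cm2 / 1e12
    print(max_terabits_per_cm2)  # 100 Tb/cm^2 upper bound; the reported ~25 Tb/cm^2
                                 # sits below it, leaving room for cell spacing and overhead.
    ```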

    Story Source:
    Materials provided by University of Texas at Austin. Note: Content may be edited for style and length.

  • Direct visualization of quantum dots reveals shape of quantum wave function

    Trapping and controlling electrons in bilayer graphene quantum dots yields a promising platform for quantum information technologies. Researchers at UC Santa Cruz have now achieved the first direct visualization of quantum dots in bilayer graphene, revealing the shape of the quantum wave function of the trapped electrons.
    The results, published November 23 in Nano Letters, provide important fundamental knowledge needed to develop quantum information technologies based on bilayer graphene quantum dots.
    “There has been a lot of work to develop this system for quantum information science, but we’ve been missing an understanding of what the electrons look like in these quantum dots,” said corresponding author Jairo Velasco Jr., assistant professor of physics at UC Santa Cruz.
    While conventional digital technologies encode information in bits represented as either 0 or 1, a quantum bit, or qubit, can represent both states at the same time due to quantum superposition. In theory, technologies based on qubits will enable a massive increase in computing speed and capacity for certain types of calculations.
    A variety of systems, based on materials ranging from diamond to gallium arsenide, are being explored as platforms for creating and manipulating qubits. Bilayer graphene (two layers of graphene, which is a two-dimensional arrangement of carbon atoms in a honeycomb lattice) is an attractive material because it is easy to produce and work with, and quantum dots in bilayer graphene have desirable properties.
    “These quantum dots are an emergent and promising platform for quantum information technology because of their suppressed spin decoherence, controllable quantum degrees of freedom, and tunability with external control voltages,” Velasco said.

    Understanding the nature of the quantum dot wave function in bilayer graphene is important because this basic property determines several relevant features for quantum information processing, such as the electron energy spectrum, the interactions between electrons, and the coupling of electrons to their environment.
    Velasco’s team used a method he had developed previously to create quantum dots in monolayer graphene using a scanning tunneling microscope (STM). With the graphene resting on an insulating hexagonal boron nitride crystal, a large voltage applied with the STM tip creates charges in the boron nitride that serve to electrostatically confine electrons in the bilayer graphene.
    “The electric field creates a corral, like an invisible electric fence, that traps the electrons in the quantum dot,” Velasco explained.
    The researchers then used the scanning tunneling microscope to image the electronic states inside and outside of the corral. In contrast to theoretical predictions, the resulting images showed a broken rotational symmetry, with three peaks instead of the expected concentric rings.
    “We see circularly symmetric rings in monolayer graphene, but in bilayer graphene the quantum dot states have a three-fold symmetry,” Velasco said. “The peaks represent sites of high amplitude in the wave function. Electrons have a dual wave-particle nature, and we are visualizing the wave properties of the electron in the quantum dot.”
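    As a purely illustrative picture of the difference between a circularly symmetric state and one with three-fold symmetry, the toy Gaussian model below (an assumption made only for visualization, not the actual bilayer-graphene wave function) plots a probability density with and without a cos(3θ) modulation:

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    # Toy model: Gaussian probability density, optionally modulated with C3 symmetry.
    x = np.linspace(-3, 3, 300)
    X, Y = np.meshgrid(x, x)
    r, theta = np.hypot(X, Y), np.arctan2(Y, X)

    circular = np.exp(-r**2)                                      # circularly symmetric density
    trigonal = np.exp(-r**2) * (1 + 0.8 * np.cos(3 * theta))**2   # three peaks (three-fold symmetry)

    fig, axes = plt.subplots(1, 2, figsize=(8, 4))
    for ax, density, title in zip(axes, (circular, trigonal), ("circular", "three-fold")):
        ax.imshow(density, extent=(-3, 3, -3, 3), origin="lower")
        ax.set_title(title)
    plt.show()
    ```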
    This work provides crucial information, such as the energy spectrum of the electrons, needed to develop quantum devices based on this system. “It is advancing the fundamental understanding of the system and its potential for quantum information technologies,” Velasco said. “It’s a missing piece of the puzzle, and taken together with the work of others, I think we’re moving toward making this a useful system.”
    In addition to Velasco, the authors of the paper include co-first authors Zhehao Ge, Frederic Joucken, and Eberth Quezada-Lopez at UC Santa Cruz, along with coauthors at the Federal University of Ceara, Brazil, the National Institute for Materials Science in Japan, University of Minnesota, and UCSC’s Baskin School of Engineering. This work was funded by the National Science Foundation and the Army Research Office.

  • Misinformation or artifact: A new way to think about machine learning

    Deep neural networks, multilayered systems built to process images and other data through the use of mathematical modeling, are a cornerstone of artificial intelligence.
    They are capable of seemingly sophisticated results, but they can also be fooled in ways that range from relatively harmless — misidentifying one animal as another — to potentially deadly if the network guiding a self-driving car misinterprets a stop sign as one indicating it is safe to proceed.
    A philosopher with the University of Houston suggests in a paper published in Nature Machine Intelligence that common assumptions about the cause behind these supposed malfunctions may be mistaken, information that is crucial for evaluating the reliability of these networks.
    As machine learning and other forms of artificial intelligence become more embedded in society, used in everything from automated teller machines to cybersecurity systems, Cameron Buckner, associate professor of philosophy at UH, said it is critical to understand the source of apparent failures caused by what researchers call “adversarial examples”: cases in which a deep neural network system misjudges images or other data when confronted with information outside the training inputs used to build the network. They’re rare and are called “adversarial” because they are often created or discovered by another machine learning network — a sort of brinksmanship in the machine learning world between increasingly sophisticated methods to create adversarial examples and increasingly sophisticated methods to detect and avoid them.
    “Some of these adversarial events could instead be artifacts, and we need to better know what they are in order to know how reliable these networks are,” Buckner said.
    In other words, the misfire could be caused by the interaction between what the network is asked to process and the actual patterns involved. That’s not quite the same thing as being completely mistaken.
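    For concreteness, one widely used way of constructing adversarial examples is the fast gradient sign method (FGSM). The sketch below is a generic illustration of that technique, not the specific cases analyzed in Buckner’s paper, and the model, inputs, and labels are hypothetical placeholders:

    ```python
    # Minimal FGSM sketch (generic illustration; `model`, `x`, `y` are hypothetical).
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=0.01):
        """Nudge inputs x in the direction that most increases the classifier's loss."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # A tiny step along the sign of the input gradient is often enough to flip
        # the prediction while the image looks unchanged to a human observer.
        return (x_adv + epsilon * x_adv.grad.sign()).detach()
    ```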

    “Understanding the implications of adversarial examples requires exploring a third possibility: that at least some of these patterns are artifacts,” Buckner wrote. “… Thus, there are presently both costs in simply discarding these patterns and dangers in using them naively.”
    Adversarial events that cause these machine learning systems to make mistakes aren’t necessarily caused by intentional malfeasance, but that’s where the highest risk comes in.
    “It means malicious actors could fool systems that rely on an otherwise reliable network,” Buckner said. “That has security applications.”
    A security system based upon facial recognition technology could be hacked to allow a breach, for example, or decals could be placed on traffic signs that cause self-driving cars to misinterpret the sign, even though they appear harmless to the human observer.
    Previous research has found that, counter to earlier assumptions, there are some naturally occurring adversarial examples — times when a machine learning system misinterprets data through an unanticipated interaction rather than through an error in the data. They are rare and can be discovered only through the use of artificial intelligence.

    But they are real, and Buckner said that suggests the need to rethink how researchers approach the anomalies, or artifacts.
    These artifacts haven’t been well understood; Buckner offers the analogy of a lens flare in a photograph — a phenomenon that isn’t caused by a defect in the camera lens but is instead produced by the interaction of light with the camera.
    The lens flare potentially offers useful information — the location of the sun, for example — if you know how to interpret it. That, he said, raises the question of whether adverse events in machine learning that are caused by an artifact also have useful information to offer.
    Equally important, Buckner said, is that this new way of thinking about how artifacts can affect deep neural networks suggests that a misreading by the network shouldn’t automatically be taken as evidence that deep learning isn’t valid.
    “Some of these adversarial events could be artifacts,” he said. “We have to know what these artifacts are so we can know how reliable the networks are.”

    Story Source:
    Materials provided by University of Houston. Original written by Jeannie Kever. Note: Content may be edited for style and length.

  • Algorithm accurately predicts COVID-19 patient outcomes

    With communities across the nation experiencing a wave of COVID-19 infections, clinicians need effective tools that will enable them to aggressively and accurately treat each patient based on their specific disease presentation, health history, and medical risks.
    In research recently published online in Medical Image Analysis, a team of engineers demonstrated how a new algorithm they developed was able to successfully predict whether or not a COVID-19 patient would need ICU intervention. This artificial intelligence-based approach could be a valuable tool in determining a proper course of treatment for individual patients.
    The research team, led by Pingkun Yan, an assistant professor of biomedical engineering at Rensselaer Polytechnic Institute, developed this method by combining chest computed tomography (CT) images that assess the severity of a patient’s lung infection with non-imaging data, such as demographic information, vital signs, and laboratory blood test results. By combining these data points, the algorithm is able to predict patient outcomes, specifically whether or not a patient will need ICU intervention.
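    One common way to combine imaging and tabular inputs of this kind is to concatenate CNN-derived image features with the clinical variables before a small classification head. The sketch below illustrates that general pattern only; the layer sizes, feature counts, and names are assumptions, not the architecture published by the Rensselaer team:

    ```python
    import torch
    import torch.nn as nn

    class FusionICUPredictor(nn.Module):
        """Toy image + tabular fusion classifier (illustrative pattern only)."""
        def __init__(self, num_tabular_features=10):
            super().__init__()
            # Small CNN summarizing a single-channel CT slice into a feature vector.
            self.image_encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            # MLP head over concatenated image features and clinical variables.
            self.classifier = nn.Sequential(
                nn.Linear(32 + num_tabular_features, 64), nn.ReLU(),
                nn.Linear(64, 1),  # logit for "needs ICU intervention"
            )

        def forward(self, ct_slice, tabular):
            features = self.image_encoder(ct_slice)
            return self.classifier(torch.cat([features, tabular], dim=1))
    ```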
    The algorithm was tested on datasets collected from a total of 295 patients from three different hospitals — one in the United States, one in Iran, and one in Italy. Researchers were able to compare the algorithm’s predictions to what kind of treatment a patient actually ended up needing.
    “As a practitioner of AI, I do believe in its power,” said Yan, who is a member of the Center for Biotechnology and Interdisciplinary Studies (CBIS) at Rensselaer. “It really enables us to analyze a large quantity of data and also extract the features that may not be that obvious to the human eye.”
    This development is the result of research supported by a recent National Institutes of Health grant, which was awarded to provide solutions during this worldwide pandemic. As the team continues its work, Yan said, researchers will integrate their new algorithm with another that Yan had previously developed to assess a patient’s risk of cardiovascular disease using chest CT scans.
    “We know that a key factor in COVID mortality is whether a patient has underlying conditions and heart disease is a significant comorbidity,” Yan said. “How much this contributes to their disease progress is, right now, fairly subjective. So, we have to have a quantification of their heart condition and then determine how we factor that into this prediction.”
    “This critical work, led by Professor Yan, is offering an actionable solution for clinicians who are in the middle of a worldwide pandemic,” said Deepak Vashishth, the director of CBIS. “This project highlights the capabilities of Rensselaer expertise in bioimaging combined with important partnerships with medical institutions.”
    Yan is joined at Rensselaer by Ge Wang, an endowed chair professor of biomedical engineering and member of CBIS, as well as graduate students Hanqing Chao, Xi Fang, and Jiajin Zhang. The Rensselaer team is working in collaboration with Massachusetts General Hospital. When this work is complete, Yan said, the team hopes to translate its algorithm into a method that doctors at Massachusetts General can use to assess their patients.
    “We actually are seeing that the impact could go well beyond COVID diseases. For example, patients with other lung diseases,” Yan said. “Assessing their heart disease condition, together with their lung condition, could better predict their mortality risk so that we can help them to manage their condition.”

    Story Source:
    Materials provided by Rensselaer Polytechnic Institute. Original written by Torie Wells. Note: Content may be edited for style and length.

  • After more than a decade, ChIP-seq may be quantitative after all

    For more than a decade, scientists studying epigenetics have used a powerful method called ChIP-seq to map changes in proteins and other critical regulatory factors across the genome. While ChIP-seq provides invaluable insights into the underpinnings of health and disease, it also faces a frustrating challenge: its results are often viewed as qualitative rather than quantitative, making interpretation difficult.
    But, it turns out, ChIP-seq may have been quantitative all along, according to a recent report selected as an Editors’ Pick by, and featured on the cover of, the Journal of Biological Chemistry.
    “ChIP-seq is the backbone of epigenetics research. Our findings challenge the belief that additional steps are required to make it quantitative,” said Brad Dickson, Ph.D., a staff scientist at Van Andel Institute and the study’s corresponding author. “Our new approach provides a way to quantify results, thereby making ChIP-seq more precise, while leaving standard protocols untouched.”
    Previous attempts to quantify ChIP-seq results have led to additional steps being added to the protocol, including the use of “spike-ins,” which are additives designed to normalize ChIP-seq results and reveal histone changes that otherwise may be obscured. These extra steps increase the complexity of experiments while also adding variables that could interfere with reproducibility. Importantly, the study also identifies a sensitivity issue in spike-in normalization that has not previously been discussed.
    Using a predictive physical model, Dickson and his colleagues developed a novel approach called the sans-spike-in method for Quantitative ChIP-sequencing, or siQ-ChIP. It allows researchers to follow the standard ChIP-seq protocol, eliminating the need for spike-ins, and also outlines a set of common measurements that should be reported for all ChIP-seq experiments to ensure reproducibility as well as quantification.
    By leveraging the binding reaction at the immunoprecipitation step, siQ-ChIP defines a physical scale for sequencing results that allows comparison between experiments. The quantitative scale is based on the binding isotherm of the immunoprecipitation products.
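    For intuition, a binding isotherm relates the fraction of target captured during immunoprecipitation to antibody concentration and affinity. The snippet below evaluates the simplest textbook (Langmuir) form with made-up numbers as a generic illustration; it is not the specific scale defined by siQ-ChIP:

    ```python
    # Generic Langmuir binding isotherm (illustration only; not the siQ-ChIP formula).
    # For simple 1:1 binding with antibody in excess:
    #     fraction_bound = [antibody] / (Kd + [antibody])

    def langmuir_fraction_bound(antibody_conc_nM: float, kd_nM: float) -> float:
        return antibody_conc_nM / (kd_nM + antibody_conc_nM)

    # Example with hypothetical values: 5 nM antibody, Kd = 1 nM -> ~83% captured.
    print(langmuir_fraction_bound(5.0, 1.0))
    ```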

    Story Source:
    Materials provided by Van Andel Research Institute. Note: Content may be edited for style and length.