More stories

  • New AI makes better permafrost maps

    New insights from artificial intelligence about permafrost coverage in the Arctic may soon give policy makers and land managers the high-resolution view they need to predict climate-change-driven threats to infrastructure such as oil pipelines, roads and national security facilities.
    “The Arctic is warming four times faster than the rest of the globe, and permafrost is a component of the Arctic that’s changing really rapidly,” said Evan Thaler, a Chick Keller Postdoctoral Fellow at Los Alamos National Laboratory. Thaler is corresponding author of a paper published in the journal Earth and Space Science on an innovative application of AI to permafrost data.
    “Current models don’t give the resolution needed to understand how permafrost thaw is changing the environment and affecting infrastructure,” Thaler said. “Our model creates high-resolution maps telling us where permafrost is now and where it is likely to change in the future.”
    The AI models also identify the landscape and ecological features driving the predictions, such as vegetative greenness, landscape slope angle and the duration of snow cover.
    AI versus field data
    Thaler was part of a team with fellow Los Alamos researchers Joel Rowland, Jon Schwenk and Katrina Bennett, plus collaborators from Lawrence Berkeley National Laboratory, that used a form of AI called supervised machine learning. The work tested the accuracy of three different AI approaches against field data collected by Los Alamos researchers from three watersheds with patchy permafrost on the Seward Peninsula in Alaska.
    Permafrost, or ground that stays below freezing for two years or more, covers about one-sixth of the exposed land in the Northern Hemisphere, Thaler said. Thawing permafrost is already disrupting roads, oil pipelines and other facilities built over it, and it carries a range of environmental hazards as well.

    As air temperatures warm under climate change, the thawing ground releases water. It flows to lower terrain, rivers, lakes and the ocean, causing land-surface subsidence, transporting minerals, altering the direction of groundwater, changing soil chemistry and releasing carbon to the atmosphere.
    Useful results
    The resolution of the most widely used current pan-arctic permafrost model is about one-third of a square mile, far too coarse to predict how changing permafrost will undermine a road or pipeline, for instance. The new Los Alamos AI model determines surface permafrost coverage at a resolution of just under 100 square feet, smaller than a typical parking space and far more practical for assessing risk at a specific location.
    Using their AI model trained on data from three sites on the Seward Peninsula, the team generated a map showing large areas without any permafrost around the Seward sites, matching the field data with 83% accuracy. Using the pan-arctic model for comparison, the team generated a map of the same sites with only 50% accuracy.
    “It’s the highest accuracy pan-arctic product to date, but it obviously isn’t good enough for site-specific predictions,” Thaler said. “The pan-arctic product predicts 100% of that site is permafrost, but our model predicts only 68%, which we know is closer to the real percentage based on field data.”
    Feeding the AI models
    This initial study proved the concept of the Los Alamos model on the Seward data, delivering acceptable accuracy for terrain similar to the location where the field data was collected. To measure each model’s transferability, the team also trained it on data from one site, then ran it on data from a second site with different terrain that it had not been trained on. None of the models transferred well: none produced a map matching the actual findings at the second site.

    Thaler said the team will do additional work on the AI algorithms to improve the model’s transferability to other areas across the Arctic. “We want to be able to train on one data set and then apply the model to a place it hasn’t seen before. We just need more data from more diverse landscapes to train the models, and we hope to collect that data soon,” he said.
    Part of the study involved comparing the accuracy of three different AI approaches — extremely randomized trees, support vector machines and an artificial neural network — to see which model came closest to matching the “ground truth” data gathered in field observations at the Seward Peninsula. Part of that data was used to train the AI models. Each model then generated a map based on unseen data predicting the extent of near-surface permafrost.
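    To make that comparison concrete, here is a minimal Python sketch of the three-way model test, including the cross-site transferability check described above. The feature set (greenness, slope, snow-cover duration), the synthetic data and all model settings are illustrative assumptions, not the paper’s actual pipeline.

    ```python
    # Hypothetical stand-in for the three-model comparison; not the authors' code.
    import numpy as np
    from sklearn.ensemble import ExtraTreesClassifier
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    def load_site(n=500):
        """Stand-in for one watershed's field data: 3 features, 0/1 permafrost label."""
        X = rng.normal(size=(n, 3))  # e.g., greenness, slope angle, snow duration
        y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
        return X, y

    X_train, y_train = load_site()  # site the models are trained on
    X_other, y_other = load_site()  # a second, unseen site, to probe transfer

    models = {
        "extremely randomized trees": ExtraTreesClassifier(n_estimators=200, random_state=0),
        "support vector machine": SVC(kernel="rbf"),
        "artificial neural network": MLPClassifier(hidden_layer_sizes=(32, 32),
                                                   max_iter=1000, random_state=0),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        print(name, "transfer accuracy:", accuracy_score(y_other, model.predict(X_other)))
    ```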
    While the Los Alamos research demonstrated a marked improvement over the best — and widely used — pan-arctic model, the results from the team’s three AI models were mixed, with the support vector machines showing the most promise for transferability.

  • Online versus reality: Social media influences perceptions

    People may form inaccurate impressions of us from what we post on social media, finds new Cornell University research that its authors believe is the first to examine perceptions of our personalities based on such posts.
    An analysis of Facebook status updates found substantial discrepancies, across a range of personality traits, between how viewers saw the authors and how the authors saw themselves. For example, viewers on average rated the Facebook users as having lower self-esteem and being more self-revealing than the users rated themselves.
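    As a rough illustration of the self-versus-viewer comparison this involves, the sketch below pairs each author’s self-ratings with averaged viewer ratings per trait and tests the gap. All numbers are synthetic stand-ins, not study data, and the paired t-test is only a plausible choice of analysis.

    ```python
    # Illustrative self-vs-viewer discrepancy check; synthetic data throughout.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_authors = 50
    traits = ["self-esteem", "extraversion", "self-disclosure"]

    self_ratings = rng.normal(3.5, 0.6, size=(n_authors, len(traits)))
    # Viewers rate lower self-esteem and higher self-disclosure, per the study.
    viewer_ratings = self_ratings + rng.normal([-0.4, 0.0, 0.5], 0.5,
                                               size=(n_authors, len(traits)))

    for j, trait in enumerate(traits):
        gap = viewer_ratings[:, j] - self_ratings[:, j]
        t, p = stats.ttest_rel(viewer_ratings[:, j], self_ratings[:, j])
        print(f"{trait}: mean viewer-self gap {gap.mean():+.2f} (p={p:.3f})")
    ```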
    Status updates containing photos, video or links in addition to text facilitated more accurate assessments than those with just text, the researchers found. Overall, they said, the study sheds light on the dynamic process by which a cyber audience tries to make sense of who we are from isolated fragments of shared information, jointly constructing our digital identity.
    “The impression people form about us on social media based on what we post can differ from the way we view ourselves,” said Qi Wang, professor of psychology and director of the Culture & Cognition Lab. “A mismatch between who we are and how people perceive us could influence our ability to feel connected online and the benefits of engaging in social media interaction.”
    Wang is the lead author of “The Self Online: When Meaning-Making is Outsourced to the Cyber Audience,” published in PLOS One.
    Prior research has focused on perceptions of personality traits gleaned from personal websites, such as blogs or online profiles, finding that readers can assess them accurately. The Cornell researchers believe their study is the first to investigate audience perceptions of social media users through their posts, on platforms where users often don’t share cohesive personal narratives while interacting with “friends” they may know only a little or sometimes not at all.
    Interestingly, the study found that Facebook status updates generated perceptions of users that were consistent with cultural norms in offline contexts concerning gender and ethnicity — even though viewers were blind to their identities. For example, female Facebook users were rated as more extraverted than male users, in line with general findings that women score higher on extraversion. White Facebook users were seen as being more extraverted and having greater self-esteem than Asian users, whose cultures place more emphasis on modesty, Wang said.

    “We present ourselves in line with our cultural frameworks,” she said, “and others can discern our ‘cultured persona’ through meaning making of our posts.”
    The scholars said future research should explore this “outsourced meaning-making process” with larger samples of posts, and on other popular platforms such as Instagram and X, formerly known as Twitter.
    Wang said the findings could help developers design interfaces that allow people to express themselves most authentically. For users, misunderstandings about who they are on social media might not cause direct harm, she said, but could hinder their efforts to foster good communication and relationships.
    “If people’s view of us is very different from who we actually are, or how we would like to be perceived,” Wang said, “it could undermine our social life and well-being.”

  • New deepfake detector designed to be less biased

    The image spoke for itself.
    University at Buffalo computer scientist and deepfake expert Siwei Lyu created a photo collage out of the hundreds of faces that his detection algorithms had incorrectly classified as fake — and the new composition clearly had a predominantly darker skin tone.
    “A detection algorithm’s accuracy should be statistically independent from factors like race,” Lyu says, “but obviously many existing algorithms, including our own, inherit a bias.”
    Lyu, PhD, co-director of the UB Center for Information Integrity, and his team have now developed what they believe are the first-ever deepfake detection algorithms specifically designed to be less biased.
    Their two machine learning methods — one that makes algorithms aware of demographics and one that leaves them demographically blind — reduced disparities in accuracy across races and genders while, in some cases, still improving overall accuracy.
    The research was presented at the Winter Conference on Applications of Computer Vision (WACV), held Jan. 4-8, and was supported in part by the U.S. Defense Advanced Research Projects Agency (DARPA).
    Lyu, the study’s senior author, collaborated with his former student, Shu Hu, PhD, now an assistant professor of computer and information technology at Indiana University-Purdue University Indianapolis, as well as George Chen, PhD, assistant professor of information systems at Carnegie Mellon University. Other contributors include Yan Ju, a PhD student in Lyu’s Media Forensic Lab at UB, and postdoctoral researcher Shan Jia.

    Ju, the study’s first author, says detection tools are often less scrutinized than the artificial intelligence tools they keep in check, but that doesn’t mean they don’t need to be held accountable, too.
    “Deepfakes have been so disruptive to society that the research community was in a hurry to find a solution,” she says, “but even though these algorithms were made for a good cause, we still need to be aware of their collateral consequences.”
    Demographic-aware vs. demographic-agnostic
    Recent studies have found large disparities in deepfake detection algorithms’ error rates — up to a 10.7% difference in one study — among different races. In particular, it’s been shown that some are better at guessing the authenticity of lighter-skinned subjects than darker-skinned ones.
    This can result in certain groups being more at risk of having their real image pegged as a fake, or perhaps even more damaging, a doctored image of them pegged as real.
    The problem is not necessarily the algorithms themselves, but the data they’ve been trained on. Middle-aged white men are often overrepresented in such photo and video datasets, so the algorithms are better at analyzing them than they are at analyzing underrepresented groups, says Lyu, SUNY Empire Professor in the UB Department of Computer Science and Engineering, within the School of Engineering and Applied Sciences.

    “Say one demographic group has 10,000 samples in the dataset and the other only has 100. The algorithm will sacrifice accuracy on the smaller group in order to minimize errors on the larger group,” he adds. “So it reduces overall errors, but at the expense of the smaller group.”
    While other studies have attempted to make databases more demographically balanced — a time-consuming process — Lyu says his team’s study is the first attempt to actually improve the fairness of the algorithms themselves.
    To explain their method, Lyu uses an analogy of a teacher being evaluated by student test scores.
    “If a teacher has 80 students do well and 20 students do poorly, they’ll still end up with a pretty good average,” he says. “So instead we want to give a weighted average to the students around the middle, forcing them to focus more on everyone instead of the dominating group.”
    First, their demographic-aware method supplied algorithms with datasets that labeled subjects’ gender — male or female — and race — white, Black, Asian or other — and instructed them to minimize errors on the less represented groups.
    “We’re essentially telling the algorithms that we care about overall performance, but we also want to guarantee that the performance of every group meets certain thresholds, or at least is only so much below the overall performance,” Lyu says.
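    One plausible way to encode that guarantee, sketched below in Python with PyTorch, is to blend the average loss with the worst-group loss. The function, the alpha trade-off knob and the binary real/fake setup are assumptions for illustration, not the authors’ exact formulation.

    ```python
    # Sketch of a demographic-aware training loss; hypothetical, not the paper's code.
    import torch
    import torch.nn.functional as F

    def demographic_aware_loss(logits, labels, groups, num_groups, alpha=0.5):
        """Blend the overall loss with the worst group's loss so no group
        falls too far below overall performance."""
        per_sample = F.binary_cross_entropy_with_logits(
            logits, labels.float(), reduction="none")
        overall = per_sample.mean()
        group_losses = [per_sample[groups == g].mean()
                        for g in range(num_groups) if (groups == g).any()]
        worst = torch.stack(group_losses).max()
        # alpha trades off average accuracy against the worst-off group.
        return (1 - alpha) * overall + alpha * worst
    ```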
    However, datasets typically aren’t labeled for race and gender. Thus, the team’s demographic-agnostic method classifies deepfake videos based not on the subjects’ demographics but on features of the video not immediately visible to the human eye.
    “Maybe a group of videos in the dataset corresponds to a particular demographic group or maybe it corresponds with some other feature of the video, but we don’t need demographic information to identify them,” Lyu says. “This way, we do not have to handpick which groups should be emphasized. It’s all automated based on which groups make up that middle slice of data.”
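    Under the same caveats, the demographic-agnostic variant can be sketched by clustering the detector’s internal features into pseudo-groups and reusing the weighted loss above with cluster IDs in place of demographic labels; the cluster count and the feature source are hypothetical choices.

    ```python
    # Pseudo-groups from hidden features; a stand-in for the agnostic method.
    import torch
    from sklearn.cluster import KMeans

    def pseudo_groups(features, k=4):
        """features: (N, d) penultimate-layer activations from the detector."""
        ids = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
            features.detach().cpu().numpy())
        return torch.as_tensor(ids)  # usable as `groups` in the loss above
    ```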
    Improving fairness — and accuracy
    The team tested their methods using the popular FaceForensics++ dataset and the state-of-the-art Xception detection algorithm. The methods improved all of the algorithm’s fairness metrics, such as the gap in false positive rates among races, with the demographic-aware method performing best of all.
    Most importantly, Lyu says, their methods actually increased the overall detection accuracy of the algorithm — from 91.49% to as high as 94.17%.
    However, when using the Xception algorithm with different datasets, or the FaceForensics++ dataset with different algorithms, the methods — while still improving most fairness metrics — slightly reduced overall detection accuracy.
    “There can be a small tradeoff between performance and fairness, but we can guarantee that the performance degradation is limited,” Lyu says. “Of course, the fundamental solution to the bias problem is improving the quality of the datasets, but for now, we should incorporate fairness into the algorithms themselves.”

  • ‘Smart glove’ can boost hand mobility of stroke patients

    This month, a group of stroke survivors in B.C. will test a new technology designed to aid their recovery, and ultimately restore use of their limbs and hands.
    Participants will wear a groundbreaking new “smart glove” capable of tracking their hand and finger movements during rehabilitation exercises supervised by Dr. Janice Eng, a leading stroke rehabilitation specialist and professor of medicine at UBC.
    The glove incorporates a sophisticated network of highly sensitive sensor yarns and pressure sensors that are woven into a comfortable stretchy fabric, enabling it to track, capture and wirelessly transmit even the smallest hand and finger movements.
    “With this glove, we can monitor patients’ hand and finger movements without the need for cameras. We can then analyze and fine-tune their exercise programs for the best possible results, even remotely,” says Dr. Eng.
    Precision in a wearable device
    UBC electrical and computer engineering professor Dr. Peyman Servati, PhD student Arvin Tashakori and their team at their startup, Texavie, created the smart glove for the collaboration on the stroke project. Dr. Servati highlighted a number of breakthroughs, described in a paper published last week in Nature Machine Intelligence.
    “This is the most accurate glove we know of that can track hand and finger movement and grasping force without requiring motion-capture cameras. Thanks to machine learning models we developed, the glove can accurately determine the angles of all finger joints and the wrist as they move. The technology is highly precise and fast, capable of detecting small stretches and pressures and predicting movement with at least 99-per-cent accuracy — matching the performance of costly motion-capture cameras.”
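    The pipeline Dr. Servati describes, sensor readings in and joint angles out, can be pictured with the small regression sketch below. The number of sensor yarns, the number of tracked joint angles and the linear synthetic data are assumptions, not Texavie’s actual models.

    ```python
    # Illustrative sensor-to-joint-angle regression; synthetic data throughout.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    n_samples, n_sensors, n_joints = 2000, 16, 21  # hypothetical glove layout

    X = rng.normal(size=(n_samples, n_sensors))  # yarn stretch/pressure readings
    true_map = rng.normal(size=(n_sensors, n_joints))
    y = X @ true_map + rng.normal(scale=0.05, size=(n_samples, n_joints))  # angles

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
    model.fit(X_tr, y_tr)
    print("held-out R^2:", model.score(X_te, y_te))
    ```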
    Unlike other products in the market, the glove is wireless and comfortable, and can be easily washed after removing the battery. Dr. Servati and his team have developed advanced methods to manufacture the smart gloves and related apparel at a relatively low cost locally.

    Augmented reality and robotics
    Dr. Servati envisions a seamless transition of the glove into the consumer market with ongoing improvements, in collaboration with different industrial partners. The team also sees potential applications in virtual reality and augmented reality, animation and robotics.
    “Imagine being able to accurately capture hand movements and interactions with objects and have it automatically display on a screen. There are endless applications. You can type text without needing a physical keyboard, control a robot, or translate American Sign Language into written speech in real time, providing easier communication for individuals who are deaf or hard of hearing.”

  • Advancement in thermoelectricity could light up the Internet of Things

    Imagine stoplights and cars communicating with each other to optimize the flow of traffic. This isn’t science fiction — it’s the Internet of Things (IoT), i.e., objects that sense their surroundings and respond via the internet. As the global population rises and such technologies continue to develop, you might wonder — what will power this digital world of tomorrow?
    Wind, solar, yes. Something all around us might not immediately come to mind though — heat. Now, in a study recently published in Nature Communications, a multi-institutional research team including Osaka University has unveiled a breakthrough in clean energy: greatly improved thermoelectric conversion. One of its many potential applications? That’s right, the IoT.
    Large-scale, global integration of the IoT is limited by the lack of a suitable energy supply. Realistically, an energy supply for the IoT must be local and small scale. Miniaturization of thermoelectric conversion can help solve this energy-supply problem by applying the otherwise wasted heat from microelectronics as a source of electricity. However, for practical applications, the efficiency of current thermoelectric-energy conversion is insufficient. Improving this efficiency was the goal of the research team’s study.
    “In our work, we demonstrate a two-dimensional electron gas (2DEG) system with multiple subbands that uses gallium arsenide. The system is different from conventional methods of thermoelectric conversion,” explain Yuto Uematsu and Yoshiaki Nakamura, lead and senior authors of the study. “Our system facilitates better conversion from temperature (heat) to electricity, and improves the mobility of electrons in their 2D sheet. This readily benefits everyday devices like semiconductors.”
    Incredibly, the researchers were able to improve the power factor of thermoelectric conversion by a factor of 4 compared with conventional 2DEG systems. Other technologies like resonant scattering have not been as efficient for thermoelectric conversion.
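    For context, the thermoelectric power factor is conventionally defined as PF = S^2 * sigma, the square of the Seebeck coefficient times the electrical conductivity, so a four-fold gain can come, for example, from doubling S at fixed conductivity. The snippet below only illustrates that arithmetic; the numbers are ballpark values, not taken from the paper.

    ```python
    # Power factor PF = S^2 * sigma; illustrative numbers only.
    def power_factor(seebeck_V_per_K, conductivity_S_per_m):
        return seebeck_V_per_K**2 * conductivity_S_per_m  # units: W/(m*K^2)

    baseline = power_factor(200e-6, 5e4)
    improved = power_factor(400e-6, 5e4)  # doubling S quadruples PF
    print(improved / baseline)            # -> 4.0
    ```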
    The team’s findings could open the way to a sustainable power source for the IoT. Thin thermoelectric films on substrates made of gallium arsenide would be suitable for IoT application. For example, these could power environmental monitoring systems in remote locations or wearable devices for medical monitoring.
    “We’re excited because we have expanded upon the principles of a process that is crucial to clean energy and the development of a sustainable IoT,” says Yoshiaki Nakamura, senior author. “What’s more, our methodology can be applied to any element-based material; the practical applications are far reaching.”
    This work is an important step forward in maximizing the utility of thermoelectric power generation in modern microelectronics and is especially suitable for the IoT. As the results are not limited to gallium arsenide, further advancements to the system are possible, with sustainability and the IoT potentially benefitting greatly.

  • Do violent video games numb us towards real violence?

    Neuroscientists from the University of Vienna and the Karolinska Institute in Stockholm have investigated whether playing violent video games leads to a reduction in human empathy. To do this, they had adult test subjects play a violent video game repeatedly over the course of an experiment lasting several weeks. Before and after, their empathic responses to the pain of another person were measured. It was found that the violent video game had no discernible effect on empathy and underlying brain activity. These results have now been published in the journal eLife.
    Video games have become an integral part of the everyday life of many children and adults. Many of the most popular video games contain explicit depictions of extreme violence. Therefore, concerns have been raised that these games may blunt the empathy of their players and could therefore lower the inhibition threshold for real violence. An international research team led by Viennese neuroscientists Claus Lamm and Lukas Lengersdorff has now investigated whether this is actually the case.
    The Austrian and Swedish researchers invited a total of 89 adult male subjects to take part in the study. A key selection criterion was that the subjects had had little or no previous contact with violent video games, ensuring that the results were not influenced by differing prior experience with these games. In a first experimental session, the baseline level of empathy of the test subjects was assessed: brain scans recorded how the subjects reacted when a second person was administered painful electric shocks.
    Then the video game phase began, during which the test subjects came to the research laboratory seven times to play a video game for one hour each time. Participants in the experimental group played a highly violent version of the game Grand Theft Auto V and were given the task of killing as many other game characters as possible. In the control group, all violence had been removed from the game and the participants were given the task of taking photos of other game characters. Finally, after the video game phase was over, the test subjects were examined a second time to determine whether their empathic responses had changed.
    The analysis of the data showed that the video game violence had no discernible effect on the empathic abilities of the test subjects. The reactions of the participants in the experimental group who were confronted with extreme depictions of violence did not differ statistically from those of the participants who only had to take photos. In addition, there were no significant differences in the activity of brain regions that had been identified in other studies as being associated with empathy — such as the anterior insular and anterior midcingulate cortex.
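    As a rough sketch of the pre/post, experimental-versus-control logic behind that analysis, the Python below compares each group’s change in empathy response. The data are synthetic and the simple t-test on change scores is a stand-in for the study’s actual statistical models.

    ```python
    # Illustrative group-by-time comparison; synthetic data, not study results.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n = 44  # roughly half of the 89 subjects per group

    pre_exp, post_exp = rng.normal(0, 1, n), rng.normal(0, 1, n)  # violent GTA V
    pre_ctl, post_ctl = rng.normal(0, 1, n), rng.normal(0, 1, n)  # photo-taking

    change_exp = post_exp - pre_exp
    change_ctl = post_ctl - pre_ctl
    t, p = stats.ttest_ind(change_exp, change_ctl)
    print(f"group-by-time proxy: t={t:.2f}, p={p:.3f}")  # no effect expected here
    ```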
    Does that mean that concerns about violence in video games are unfounded? The authors advise against jumping to conclusions. “Precisely because this is such a sensitive topic, we have to be very careful when interpreting these results,” explains lead author Lukas Lengersdorff, who carried out the study as part of his doctoral studies. “The conclusion should not be that violent video games are now definitively proven to be harmless. Our study lacks the data to make such statements.” According to the neuroscientist and statistician, the value of the study lies rather in the fact that it allows a sober look at previous results. “A few hours of video game violence have no significant influence on the empathy of mentally healthy adult test subjects. We can clearly draw this conclusion. Our results thus contradict those of previous studies, in which negative effects were reported after just a few minutes of play.” In these previous studies, participants had played the violent video game immediately before data collection. “Such experimental designs are not able to distinguish the short-term and long-term effects of video games,” explains Lengersdorff.
    According to research group leader and co-author Claus Lamm, the study also sets a new standard for future research in this area: “Strong experimental controls and longitudinal research designs that allow causal conclusions to be drawn are needed to make clear statements about the effects of violent video games. We wanted to take a step in this direction with our study.” Now it is the task of further research to check whether there are no negative consequences even after significantly longer exposure to video game violence — and whether this is also the case for vulnerable subpopulations. “The most important question is of course: are children and young people also immune to violence in video games? The young brain is highly plastic, so repeated exposure to depictions of violence could have a much greater effect. But of course these questions are difficult to investigate experimentally without running up against the limits of scientific ethics,” says Lamm.

  • Experiment could test quantum nature of large masses for the first time

    An experiment outlined by a UCL (University College London)-led team of scientists from the UK and India could test whether relatively large masses have a quantum nature, resolving the question of whether quantum mechanical description works at a much larger scale than that of particles and atoms.
    Quantum theory is typically seen as describing nature at the tiniest scales, and quantum effects have not been observed in a laboratory for objects more massive than about 10^(-20) g, a hundredth of a quintillionth of a gram.
    The new experiment, described in a paper published in Physical Review Letters and involving researchers at UCL, the University of Southampton and the Bose Institute in Kolkata, India, could in principle test the quantumness of an object regardless of its mass or energy.
    The proposed experiment exploits the principle in quantum mechanics that the act of measurement of an object can change its nature. (The term measurement encompasses any interaction of the object with a probe — for instance, if light shines on it, or if it emits light or heat).
    The experiment focuses on a pendulum-like object oscillating like a ball on a string. A light is shone on one half of the area of oscillation, revealing information about the location of the object (i.e., if scattered light is not observed, then it can be concluded that the object is not in that half). A second light is shone, showing the location of the object further along on its swing.
    If the object is quantum, the first measurement (the first flash of light) will disturb its path (by measurement induced collapse — a property inherent to quantum mechanics), changing the likelihood of where it will be at the second flash of light, whereas if it is classical then the act of observation will make no difference. Researchers can then compare scenarios in which they shine a light twice to ones where only the second flash of light occurs to see if there is a difference in the final distributions of the object.
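    The logic of that comparison can be illustrated with a toy two-state simulation: for a quantum system, inserting the first “which half?” measurement changes the final statistics, while a classical probability distribution pushed through the same steps would be unaffected by unselective observation. The qubit analogy below is illustrative, not the paper’s calculation.

    ```python
    # Toy model of measurement-induced collapse; not the proposed experiment itself.
    import numpy as np

    def rot(theta):
        """Unitary 'swing' mixing the left/right halves of the oscillation."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s], [s, c]])

    psi0 = np.array([1.0, 0.0])  # start definitely in the left half
    U = rot(np.pi / 4)

    # No first flash: evolve twice, then measure position.
    p_no_flash = abs((U @ (U @ psi0))[0]) ** 2

    # First flash: projective left/right measurement between the two steps.
    psi1 = U @ psi0
    p_left = abs(psi1[0]) ** 2
    left = np.array([psi1[0], 0]) / np.sqrt(p_left)    # collapsed branches
    right = np.array([0, psi1[1]]) / np.sqrt(1 - p_left)
    p_flash = (p_left * abs((U @ left)[0]) ** 2
               + (1 - p_left) * abs((U @ right)[0]) ** 2)

    print(f"P(left) without first flash: {p_no_flash:.2f}")  # 0.00
    print(f"P(left) with first flash:    {p_flash:.2f}")     # 0.50
    ```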
    Lead author Dr Debarshi Das (UCL Physics & Astronomy and the Royal Society) said: “A crowd at a football match cannot affect the result of the game simply by staring strongly. But with quantum mechanics, the act of observation or measurement itself changes the system.

    “Our proposed experiment can test if an object is classical or quantum by seeing if an act of observation can lead to a change in its motion.”
    The proposal, the researchers say, could be implemented with current technologies using nanocrystals or, in principle, even using mirrors at LIGO (Laser Interferometer Gravitational-Wave Observatory) in the United States which have an effective mass of 10kg.
    The four LIGO mirrors, which each weigh 40kg but together vibrate as if they were a single 10kg object, have already been cooled to the minimum-energy state (a fraction above absolute zero) that would be required in any experiment seeking to detect quantum behaviour.
    Senior author Professor Sougato Bose (UCL Physics & Astronomy) said: “Our scheme has wide conceptual implications. It could test whether relatively large objects have definite properties, i.e., their properties are real, even when we are not measuring them. It could extend the domain of quantum mechanics and probe whether this fundamental theory of nature is valid only at certain scales or if it holds true for larger masses too.
    “If we do not encounter a mass limit to quantum mechanics, this makes ever more acute the problem of trying to reconcile quantum theory with reality as we experience it.”
    In quantum mechanics, objects do not have definite properties until they are observed or interact with their environment. Prior to observation they do not exist in a definite location but may be in two places at once (a state of superposition). This led to Einstein’s remark: “Is the moon there when no one is looking at it?”
    Quantum mechanics may seem at odds with our experience of reality but its insights have helped the development of computers, smartphones, broadband, GPS, and magnetic resonance imaging.

    Most physicists believe quantum mechanics holds true at larger scales, but is merely harder to observe due to the isolation required to preserve a quantum state. To detect quantum behaviour in an object, its temperature and vibrations must be reduced to their lowest possible level (the ground state), and it must be kept in a vacuum so that almost no atoms interact with it. That is because a quantum state will collapse, a process called decoherence, if the object interacts with its environment.
    The new proposed experiment is a development of an earlier quantum test devised by Professor Bose and colleagues in 2018. A project to conduct an experiment using this methodology, which will test the quantum nature of a nanocrystal numbering a billion atoms, is already underway, funded by the Engineering and Physical Sciences Research Council (EPSRC) and led by the University of Southampton.
    That project already aims for a jump in terms of mass, with previous attempts to test the quantum nature of a macroscopic object limited to hundreds of thousands of atoms. The newly published scheme, meanwhile, could be achieved with current technologies using a nanocrystal with trillions of atoms.
    The new paper was co-authored by Dr Das and Professor Bose at UCL along with Professor Dipankar Home of India’s Bose Institute (who also co-authored the 2018 paper) and Professor Hendrik Ulbricht of the University of Southampton.

  • Artificial intelligence helped scientists create a new type of battery

    In the hunt for new materials, scientists have traditionally relied on tinkering in the lab, guided by intuition, with a hefty serving of trial and error.

    But now a new battery material has been discovered by combining two computing superpowers: artificial intelligence and supercomputing. It’s a discovery that highlights the potential for using computers to help scientists discover materials suited to specific needs, from batteries to carbon capture technologies to catalysts. 

    Calculations winnowed down more than 32 million candidate materials to just 23 promising options, researchers from Microsoft and Pacific Northwest National Laboratory, or PNNL, report in a paper submitted January 8 to arXiv.org. The team then synthesized and tested one of those materials and created a working battery prototype.
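    The winnowing can be pictured as a funnel in which a fast learned filter prunes most candidates before costlier physics-based checks. The sketch below is a scaled-down, entirely hypothetical stand-in for the Microsoft/PNNL pipeline; both stage functions and the thresholds are invented for illustration.

    ```python
    # Hypothetical screening funnel: cheap ML filter, then an expensive check.
    import numpy as np

    rng = np.random.default_rng(4)
    candidates = rng.normal(size=(32_000, 8))  # scaled-down stand-in for 32M

    def ml_stability_score(X):
        """Cheap surrogate model (stand-in for the learned filter)."""
        return 1 / (1 + np.exp(-X @ rng.normal(size=8)))

    def physics_check(X):
        """Stand-in for a costly first-principles (e.g., DFT-style) step."""
        return X[:, 0] + X[:, 1] > 1.0

    survivors = candidates[ml_stability_score(candidates) > 0.9]
    finalists = survivors[physics_check(survivors)]
    print(len(candidates), "->", len(survivors), "->", len(finalists))
    ```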