More stories

  • Study identifies new findings on implant positioning and stability during robotic-assisted knee revision surgery

    An innovative study at Marshall University, published in Arthroplasty Today, explores the use of robotic-assisted joint replacement in revision knee scenarios, comparing the pre- and post-revision implant positions in a series of revision total knee arthroplasties (TKA) performed with a state-of-the-art robotic arm system.
    In this retrospective study, the orthopaedic team at the Marshall University Joan C. Edwards School of Medicine and Marshall Health performed 25 revision knee replacements with a robotic-assisted computer system. The procedure involved placing new implants at the end of the thighbone and top of the shinbone with the computer’s aid to ensure the knee was stable and balanced throughout the range of motion. Researchers then carefully compared the initial positions of the primary implants with the final planned positions of the robotic revision implants for each patient, assessing the differences in millimeters and degrees.
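    As a rough illustration of that kind of comparison (not the study’s actual software), the sketch below computes the translational offset in millimeters and the angular offset in degrees between a primary implant pose and a hypothetical planned revision pose; all coordinate values are invented for the example.

```python
import numpy as np

def rotation_about_z(deg):
    """Rotation matrix for a rotation of `deg` degrees about the z-axis."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def pose_difference(pos_a_mm, rot_a, pos_b_mm, rot_b):
    """Return (translational offset in mm, angular offset in degrees) between two implant poses."""
    translation_mm = np.linalg.norm(np.asarray(pos_b_mm) - np.asarray(pos_a_mm))
    relative = rot_a.T @ rot_b  # rotation taking orientation A onto orientation B
    cos_angle = np.clip((np.trace(relative) - 1.0) / 2.0, -1.0, 1.0)
    return translation_mm, np.degrees(np.arccos(cos_angle))

# Invented example: a primary femoral component versus a planned revision component,
# both expressed in the same fixed bone coordinate frame.
primary_pos, primary_rot = [0.0, 0.0, 0.0], rotation_about_z(0.0)     # mm
revision_pos, revision_rot = [1.5, -0.8, 2.0], rotation_about_z(2.5)  # mm, 2.5 deg twist

shift_mm, tilt_deg = pose_difference(primary_pos, primary_rot, revision_pos, revision_rot)
print(f"translation: {shift_mm:.1f} mm, rotation: {tilt_deg:.1f} deg")
```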
    The analysis found that exceedingly small changes in implant position significantly influence the function of the knee replacement. Robotic assistance during revision surgery has the potential to measure these slight differences. In addition, the computer system can help the surgeon predict what size implant to use as well as help to balance the knee for stability.
    “Robotic-assisted surgery has the potential to change the way surgeons think about revision knee replacement,” said Matthew Bullock, D.O., associate professor of orthopaedic surgery and co-author on the study. “The precision offered by robotic-assisted surgery not only enhances the surgical process but also holds promise for improved patient outcomes. Besides infection, knee replacements usually fail because they become loose from the bone or because they are unbalanced leading to pain and instability. When this happens patients can have difficulty with activities of daily living such as walking long distances or negotiating stairs.”
    The study underscores the importance of aligning the prosthesis during revision surgery. The research also suggests potential advantages, including appropriately sized implants that can influence ligament tension, which is crucial for functional knee revisions.
    “These findings open new doors in the realm of revision knee arthroplasty,” said Alexander Caughran, M.D., assistant professor of orthopaedic surgery and co-author on the study. “We continue to collect more data for future studies on patient outcomes after robotic revision knee replacement. We anticipate that further research and technological advancements in the realm of artificial intelligence will continue to shape the landscape of orthopaedic surgery.”
    In addition to Bullock and Caughran, co-authors from Marshall University include Micah MacAskill, M.D., resident physician; Richard Peluso, M.D., resident physician; Jonathan Lash, M.D., resident physician; and Timothy Hewett, Ph.D., professor.

  • Chemists create a 2D heavy fermion

    Researchers at Columbia University have successfully synthesized the first 2D heavy fermion material. They introduce the new material, a layered intermetallic crystal composed of cerium, silicon, and iodine (CeSiI), in a research article published today in Nature.
    Heavy fermion compounds are a class of materials with electrons that are up to 1000x heavier than usual. In these materials, electrons get tangled up with magnetic spins that slow them down and increase their effective mass. Such interactions are thought to play important roles in a number of enigmatic quantum phenomena, including superconductivity, the movement of electrical current with zero resistance.
    Researchers have been exploring heavy fermions for decades, but in the form of bulky, 3D crystals. The new material synthesized by PhD student Victoria Posey in the lab of Columbia chemist Xavier Roy will allow researchers to drop a dimension.
    “We’ve laid a new foundation to explore fundamental physics and to probe unique quantum phases,” said Posey.
    One of the latest materials to come out of the Roy lab, CeSiI is a van der Waals crystal that can be peeled into layers that are just a few atoms thick. That makes it easier than a bulk crystal to manipulate and combine with other materials, in addition to potentially hosting quantum properties that occur in 2D. “It’s amazing that Posey and the Roy lab could make a heavy fermion so small and thin,” said senior author Abhay Pasupathy, a physicist at Columbia and Brookhaven National Laboratory. “Just like we saw with the recent Nobel Prize to quantum dots, you can do many interesting things when you shrink dimensions.”
    With its middle sheet of silicon sandwiched between magnetic cerium atoms, Posey and her colleagues suspected that CeSiI, first described in a paper in 1998, might have some interesting electronic properties. Its first stop (after Posey figured out how to prepare the extremely air-sensitive crystal for transport) was a scanning tunneling microscope (STM) in Abhay Pasupathy’s physics lab at Columbia. With the STM, they observed a particular spectrum shape characteristic of heavy fermions. Posey then synthesized a non-magnetic equivalent to CeSiI and weighed the electrons of both materials via their heat capacities. CeSiI’s electrons were heavier. “By comparing the two — one with magnetic spins and one without — we can confirm we’ve created a heavy fermion,” said Posey.
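    For readers curious how electrons are “weighed” this way, here is a minimal sketch of the standard bookkeeping, with entirely invented numbers: at low temperature the electronic heat capacity grows roughly linearly with temperature, and in a simple free-electron picture its slope (the Sommerfeld coefficient) scales with the electrons’ effective mass, so comparing the slopes of the magnetic material and its non-magnetic analogue gives a rough mass-enhancement factor. This is not the authors’ analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sommerfeld_coefficient(temperatures_K, electronic_heat_capacity):
    """Fit C_el = gamma * T at low temperature; return gamma (least-squares slope through the origin)."""
    t = np.asarray(temperatures_K)
    c = np.asarray(electronic_heat_capacity)
    return np.sum(t * c) / np.sum(t * t)

# Invented low-temperature data (arbitrary units): the heavy-fermion candidate has a far
# steeper electronic heat capacity slope than its non-magnetic counterpart.
T = np.linspace(0.5, 2.0, 8)
gamma_heavy = sommerfeld_coefficient(T, 400.0 * T + rng.normal(0, 5, T.size))
gamma_light = sommerfeld_coefficient(T, 5.0 * T + rng.normal(0, 0.1, T.size))

# In a free-electron picture gamma scales with the effective mass, so the ratio gives
# a rough mass-enhancement factor for the magnetic material.
print(f"approximate mass enhancement: {gamma_heavy / gamma_light:.0f}x")
```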
    Samples then made their way across campus and the country for additional analyses, including to Pasupathy’s lab at Brookhaven National Laboratory for photoemission spectroscopy; to Philip Kim’s lab at Harvard for electron transport measurements; and to the National High Magnetic Field Laboratory in Florida to study its magnetic properties. Along the way, theorists Andrew Millis at Columbia and Angel Rubio at Max Planck helped explain the teams’ observations.
    From here, Columbia’s researchers will do what they do best with 2D materials: stack, strain, poke, and prod them to see what unique quantum behaviors can be coaxed out of them. Pasupathy plans to add CeSiI to his arsenal of materials in the search for quantum criticality, the point where a material shifts from one unique phase to another. At the crossover, interesting phenomena like superconductivity may await.
    “Manipulating CeSiI at the 2D limit will let us explore new pathways to achieve quantum criticality,” said Michael Ziebel, a postdoc in the Roy group and co-corresponding author, “and this can guide us in the design of new materials.”
    Back in the chemistry department, Posey, who has perfected the air-free synthesis techniques needed, is systematically replacing the atoms in the crystal — for example, swapping silicon for other metals, like aluminum or gallium — to create related heavy fermions with their own unique properties to study. “We initially thought CeSiI was a one-off,” said Roy. “But this project has blossomed into a new kind of chemistry in my group.”

  • Higher measurement accuracy opens new window to the quantum world

    A team at HZB has developed a new measurement method that, for the first time, accurately detects tiny temperature differences in the range of 100 microkelvin in the thermal Hall effect. Previously, these temperature differences could not be measured quantitatively due to thermal noise. Using the well-known terbium titanate as an example, the team demonstrated that the method delivers highly reliable results. The thermal Hall effect provides information about coherent multi-particle states in quantum materials, based on their interaction with lattice vibrations (phonons).
    The laws of quantum physics apply to all materials. However, in so-called quantum materials, these laws give rise to particularly unusual properties. For example, magnetic fields or changes in temperature can cause excitations, collective states or quasiparticles that are accompanied by phase transitions to exotic states. This can be utilised in a variety of ways, provided it can be understood, managed and controlled: For example, in future information technologies that can store or process data with minimal energy requirements.
    The thermal Hall effect (THE) plays a key role in identifying exotic states in condensed matter. The effect is based on tiny transverse temperature differences that occur when a thermal current is passed through a sample and a perpendicular magnetic field is applied. In particular, quantitative measurement of the thermal Hall effect makes it possible to separate the exotic excitations from conventional behaviour. The thermal Hall effect is observed in a variety of materials, including spin liquids, spin ice, parent phases of high-temperature superconductors and materials with strongly polar properties. However, the temperature differences that occur perpendicular to the temperature gradient in the sample are extremely small: in typical millimetre-sized samples, they are in the range of microkelvins to millikelvins. Until now, it has been difficult to detect these temperature differences experimentally because the heat introduced by the measurement electronics and sensors masks the effect.
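    To make the quantity concrete, here is a minimal sketch of how a thermal Hall conductivity is commonly estimated from such measurements, using the standard small-Hall-angle relation and invented sample values; this is not HZB’s analysis code.

```python
def thermal_hall_conductivity(kappa_xx, delta_T_long, delta_T_trans, length_m, width_m):
    """
    Small-Hall-angle estimate of kappa_xy from measured temperature differences,
    assuming no transverse heat current flows in steady state:
        kappa_xy ~ kappa_xx * (dT/dy) / (dT/dx),
    with the gradients approximated by the measured differences over the contact separations.
    """
    grad_long = delta_T_long / length_m    # K/m along the applied heat current
    grad_trans = delta_T_trans / width_m   # K/m perpendicular to it
    return kappa_xx * grad_trans / grad_long

# Invented numbers for a millimetre-sized crystal with a ~100 microkelvin transverse signal.
kappa_xy = thermal_hall_conductivity(
    kappa_xx=5.0,          # W/(K m), longitudinal thermal conductivity
    delta_T_long=50e-3,    # 50 mK applied along the sample
    delta_T_trans=100e-6,  # 100 microkelvin measured across it
    length_m=4e-3,
    width_m=2e-3,
)
print(f"kappa_xy = {kappa_xy:.2e} W/(K m)")
```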
    A novel sample holder
    The team led by PD Dr Klaus Habicht has now carried out pioneering work. Together with specialists from the HZB sample environment, they have developed a novel sample rod with a modular structure that can be inserted into various cryomagnets. The sample head measures the thermal Hall effect using capacitive thermometry, which takes advantage of the temperature dependence of the capacitance of specially manufactured miniature capacitors. With this setup and several further innovations, the experts have succeeded in significantly reducing heat transfer through the sensors and electronics and in attenuating interference signals and noise. To validate the measurement method, they analysed a sample of terbium titanate, whose thermal conductivity in different crystal directions under a magnetic field is well known. The measured data were in excellent agreement with the literature.
    Further improvement of the measurement method
    “The ability to resolve temperature differences in the sub-millikelvin range fascinates me greatly and is a key to studying quantum materials in more detail,” says first author Dr Danny Kojda. “We have now jointly developed a sophisticated experimental design, clear measurement protocols and precise analysis procedures that allow high-resolution and reproducible measurements.” Department head Klaus Habicht adds: “Our work also provides information on how to further improve the resolution in future instruments designed for low sample temperatures. I would like to thank everyone involved, especially the sample environment team. I hope that the experimental setup will be firmly integrated into the HZB infrastructure and that the proposed upgrades will be implemented.”
    Outlook: Topological properties of phonons
    Habicht’s group will now use measurements of the thermal Hall effect to investigate the topological properties of lattice vibrations or phonons in quantum materials. “The microscopic mechanisms and the physics of the scattering processes for the thermal Hall effect in ionic crystals are far from being fully understood. The exciting question is why electrically neutral quasiparticles in non-magnetic insulators are nevertheless deflected in the magnetic field,” says Habicht. With the new instrument, the team has now created the prerequisites to answer this question.

  • Ultrafast laser pulses could lessen data storage energy needs

    A discovery from an experiment with magnets and lasers could be a boon to energy-efficient data storage.
    “We wanted to study the physics of light-magnet interaction,” said Rahul Jangid, who led the data analysis for the project while earning his Ph.D. in materials science and engineering at UC Davis under associate professor Roopali Kukreja. “What happens when you hit a magnetic domain with very short pulses of laser light?”
    Domains are regions within a magnet whose magnetization can be flipped between north and south orientations. This property is used for data storage, for example in computer hard drives.
    Jangid and his colleagues found that when a magnet is hit with a pulsed laser, the domain walls in the ferromagnetic layers move at a speed of approximately 66 km/s, about 100 times faster than the previously assumed speed limit.
    Domain walls moving at this speed could drastically affect the way data is stored and processed, offering a means of faster, more stable memory and reducing energy consumption in spintronic devices such as hard disk drives that use the spin of electrons within magnetic metallic multilayers to store, process or transmit information.
    “No one thought it was possible to move these walls that fast because they should hit their limit,” said Jangid. “It sounds absolutely bananas, but it’s true.”
    It’s “bananas” because of the Walker breakdown phenomenon, which holds that domain walls can only be driven up to a certain velocity before their structure effectively breaks down and they stop moving. This research, however, gives evidence that the domain walls can be driven at previously unknown velocities using lasers.
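    As a quick back-of-the-envelope illustration of how such a speed compares with the conventional scale, the snippet below uses the ~66 km/s figure from the study together with otherwise invented numbers; it is not taken from the paper’s analysis.

```python
# Back-of-the-envelope check of the speeds quoted above. Only the ~66 km/s figure comes
# from the article; the displacement and time window are invented for illustration.
displacement_nm = 6.6    # hypothetical domain-wall displacement observed
time_window_fs = 100.0   # hypothetical observation window after the laser pulse

speed_m_per_s = (displacement_nm * 1e-9) / (time_window_fs * 1e-15)
print(f"inferred wall speed: {speed_m_per_s / 1e3:.0f} km/s")  # -> 66 km/s

# Conventional field- or current-driven walls are limited to roughly hundreds of m/s,
# so a wall at ~66 km/s is on the order of 100x that scale, as the study reports.
conventional_scale_m_per_s = 660.0  # illustrative value only
print(f"ratio to the conventional scale: {speed_m_per_s / conventional_scale_m_per_s:.0f}x")
```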

    While most personal devices like laptops and cell phones use faster flash drives, data centers use cheaper, slower hard disk drives. However, each time a bit of information is processed, or flipped, the drive generates a magnetic field by running current through a coil of wire, which wastes a lot of energy as heat. If, instead, a drive could use laser pulses on the magnetic layers, the device would operate at a lower voltage and bit flips would take significantly less energy to process.
    Current projections indicate that by 2030, information and communications technology will account for 21% of the world’s energy demand, exacerbating climate change. This finding, which was highlighted in a paper by Jangid and co-authors titled “Extreme Domain Wall Speeds under Ultrafast Optical Excitation” in the journal Physical Review Letters on Dec. 19, comes at a time when finding energy-efficient technologies is paramount.
    When laser meets magnet
    To conduct the experiment, Jangid and his collaborators, including researchers from the National Institute of Standards and Technology; UC San Diego; the University of Colorado, Colorado Springs; and Stockholm University, used the Free Electron Laser Radiation for Multidisciplinary Investigations, or FERMI, a free-electron laser facility based in Trieste, Italy.
    “Free electron lasers are insane facilities,” Jangid said. “It’s a 2-mile-long vacuum tube, and you take a small number of electrons, accelerate them up to the speed of light, and at the end wiggle them to create X-rays so bright that, if you’re not careful, your sample could be vaporized. Think of it like taking all the sunlight falling on the Earth and focusing it all on a penny — that’s how much photon flux we have at free electron lasers.”
    At FERMI, the group utilized X-rays to measure what occurs when a nano-scale magnet with multiple layers of cobalt, iron and nickel is excited by femtosecond pulses. A femtosecond is 10⁻¹⁵ of a second, or one-millionth of one-billionth of a second.

    “There are more femtoseconds in one second than there are days in the age of the universe,” Jangid said. “These are extremely small, extremely fast measurements that are difficult to wrap your head around.”
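    A quick arithmetic check of that comparison, using the commonly cited ~13.8-billion-year age of the universe:

```python
# Quick sanity check of the comparison in the quote above.
femtoseconds_per_second = 1e15              # 1 s = 10^15 fs
age_of_universe_days = 13.8e9 * 365.25      # ~13.8 billion years expressed in days

print(f"femtoseconds in one second:      {femtoseconds_per_second:.1e}")
print(f"days in the age of the universe: {age_of_universe_days:.1e}")
print(femtoseconds_per_second > age_of_universe_days)  # True, by a factor of roughly 200
```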
    Jangid, who was analyzing the data, saw that it was these ultrafast laser pulses exciting the ferromagnetic layers that caused the domain walls to move. Based on how fast those domain walls were moving, the study posits that these ultrafast laser pulses can switch a stored bit of information approximately 1,000 times faster than the magnetic field or spin current-based methods being used now.
    The future of ultrafast phenomena
    The technology is far from being practically applied, as current lasers consume a lot of power. However, a process similar to the way compact discs, or CDs, use lasers to store information and CD players use lasers to play it back could potentially work in the future, Jangid said.
    The next steps include further exploring the physics of mechanisms that enable ultrafast domain wall velocities higher than the previously known limits, as well as imaging the domain wall motion.
    This research will continue at UC Davis under Kukreja. Jangid is now pursuing similar research at National Synchrotron Light Source 2 at Brookhaven National Laboratory.
    “There are so many aspects of ultrafast phenomena that we are just starting to understand,” Jangid said. “I’m eager to tackle the open questions that could unlock transformative advancements in low power spintronics, data storage, and information processing.”

  • Tiny AI-based bio-loggers revealing the interesting bits of a bird’s day

    Have you ever wondered what wild animals do all day? Documentaries offer a glimpse into their lives, but animals rarely do anything interesting while under a watchful eye. The true essence of their behaviors remains elusive. Now, researchers from Japan have developed a camera that allows us to capture these behaviors.
    In a study recently published in PNAS Nexus, researchers from Osaka University have created a small sensor-based data logger (called a bio-logger) that automatically detects and records video of infrequent behaviors in wild seabirds without supervision by researchers.
    Infrequent behaviors, such as diving into the water for food, can lead to new insights or even new directions in research. But observing enough of these behaviors to infer any results is difficult, especially when these behaviors take place in an environment that is not hospitable to humans, such as the open ocean. As a result, the detailed behaviors of these animals remain largely unknown.
    “Video cameras attached to the animal are an excellent way to observe behavior,” says Kei Tanigaki, lead author of the study. However, video cameras are very power hungry, and this leads to a trade-off. “Either the video only records until the battery runs out, in which case you might miss the rare behavior, or you use a larger, heavier battery, which is not suitable for the animal.”
    To avoid having to make this choice for the wild seabirds under study, the team uses low-power sensors, such as accelerometers, to determine when an unusual behavior is taking place. The camera is then turned on, the behavior is recorded, and the camera powers off until the next time. This bio-logger is the first to use artificial intelligence for this task.
    “We use a method called an isolation forest,” says Takuya Maekawa, senior author. “This method detects outlier events well, but like many other artificial intelligence algorithms, it is computationally complex. This means, like the video cameras, it is power hungry.” For the bio-loggers, the researchers needed a light-weight algorithm, so they trained the original isolation forest on their data and then used it as a “teacher” to train a smaller “student” outlier detector installed on the bio-logger.
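    A minimal sketch of that teacher-student idea, using scikit-learn’s IsolationForest as the teacher and a small decision tree as the student that learns to reproduce the teacher’s anomaly scores; the accelerometer features, thresholds and model sizes here are invented, and the on-device model used in the study may differ.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Stand-in accelerometer features (e.g. mean, variance, dominant frequency per window);
# real bio-logger data would replace this.
ordinary_behaviour = rng.normal(0.0, 1.0, size=(5000, 3))
rare_behaviour = rng.normal(4.0, 1.0, size=(50, 3))
X = np.vstack([ordinary_behaviour, rare_behaviour])

# Teacher: a full isolation forest, too heavy to run on the logger itself.
teacher = IsolationForest(n_estimators=200, random_state=0).fit(X)
teacher_scores = teacher.score_samples(X)  # lower = more anomalous

# Student: a small tree that mimics the teacher's scores and is cheap enough
# to evaluate on the bio-logger's microcontroller.
student = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, teacher_scores)

# On the device, a new window of sensor features would trigger the camera when the
# student's predicted score falls below a calibrated threshold.
threshold = np.quantile(teacher_scores, 0.01)
new_window = np.array([[4.2, 3.8, 4.1]])
trigger_camera = student.predict(new_window)[0] < threshold
print("camera on" if trigger_camera else "camera stays off")
```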
    The final bio-logger weighs 23 g, which is less than 5% of the body weight of the Streaked Shearwaters under study. Eighteen bio-loggers were deployed, collecting a total of 205 hours of low-power sensor data and 76 five-minute videos. The researchers were able to collect enough data to reveal novel aspects of head-shaking and foraging behaviors of the birds.
    This approach, which overcomes the battery-life limitation of most bio-loggers, will help us understand the behaviors of wildlife that venture into human-inhabited areas. It will also enable animals in extreme environments inaccessible to humans to be observed. This means that many other rare behaviors — from sweet-potato washing by Japanese monkeys to penguins feeding on jellyfish — can now be studied in the future.

  • New AI makes better permafrost maps

    New insights from artificial intelligence about permafrost coverage in the Arctic may soon give policy makers and land managers the high-resolution view they need to predict climate-change-driven threats to infrastructure such as oil pipelines, roads and national security facilities.
    “The Arctic is warming four times faster than the rest of the globe, and permafrost is a component of the Arctic that’s changing really rapidly,” said Evan Thaler, a Chick Keller Postdoctoral Fellow at Los Alamos National Laboratory. Thaler is corresponding author of a paper published in the journal Earth and Space Science on an innovative application of AI to permafrost data.
    “Current models don’t give the resolution needed to understand how permafrost thaw is changing the environment and affecting infrastructure,” Thaler said. “Our model creates high-resolution maps telling us where permafrost is now and where it is likely to change in the future.”
    The AI models also identify the landscape and ecological features driving the predictions, such as vegetative greenness, landscape slope angle and the duration of snow cover.
    AI versus field data
    Thaler was part of a team with fellow Los Alamos researchers Joel Rowland, Jon Schwenk and Katrina Bennett, plus collaborators from Lawrence Berkeley National Laboratory, that used a form of AI called supervised machine learning. The work tested the accuracy of three different AI approaches against field data collected by Los Alamos researchers from three watersheds with patchy permafrost on the Seward Peninsula in Alaska.
    Permafrost, or ground that stays below freezing temperature for two years or more, covers about one-sixth of the exposed land in the Northern Hemisphere, Thaler said. Thawing permafrost is already disrupting roads, oil pipelines and other facilities built over it and carries a range of environmental hazards as well.

    As air temperatures warm under climate change, the thawing ground releases water. It flows to lower terrain, rivers, lakes and the ocean, causing land-surface subsidence, transporting minerals, altering the direction of groundwater, changing soil chemistry and releasing carbon to the atmosphere.
    Useful results
    The resolution of the most widely used current pan-arctic model for permafrost is about one-third of a square mile, far too coarse to predict how changing permafrost will undermine a road or pipeline, for instance. The new Los Alamos AI model determines surface permafrost coverage to a resolution of just under 100 square feet, smaller than a typical parking space and far more practical for assessing risk at a specific location.
    Using their AI model trained on data from three sites on the Seward Peninsula, the team generated a map showing large areas without any permafrost around the Seward sites, matching the field data with 83% accuracy. Using the pan-arctic model for comparison, the team generated a map of the same sites with only 50% accuracy.
    “It’s the highest accuracy pan-arctic product to date, but it obviously isn’t good enough for site-specific predictions,” Thaler said. “The pan-arctic product predicts 100% of that site is permafrost, but our model predicts only 68%, which we know is closer to the real percentage based on field data.”
    Feeding the AI models
    This initial study proved the concept of the Los Alamos model on the Seward data, delivering acceptable accuracy for terrain similar to the location where the field data was collected. To measure each model’s transferability, the team also trained it on data from one site and then ran it on data from a second site with different terrain that the model had not been trained on. None of the models transferred well; their maps did not match the actual findings at the second site.

    Thaler said the team will do additional work on the AI algorithms to improve the model’s transferability to other areas across the Arctic. “We want to be able to train on one data set and then apply the model to a place it hasn’t seen before. We just need more data from more diverse landscapes to train the models, and we hope to collect that data soon,” he said.
    Part of the study involved comparing the accuracy of three different AI approaches — extremely randomized trees, support vector machines and an artificial neural network — to see which model came closest to matching the “ground truth” data gathered in field observations at the Seward Peninsula. Part of that data was used to train the AI models. Each model then generated a map based on unseen data predicting the extent of near-surface permafrost.
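    A schematic version of that comparison, not the Los Alamos code or data: three scikit-learn models are trained on labeled points from one invented site, then scored both on held-out points from the same site and on a second, different site, mirroring the accuracy and transferability tests described above.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def make_site(n, shift):
    """Toy 'site': features such as greenness, slope and snow duration, plus a permafrost label."""
    X = rng.normal(shift, 1.0, size=(n, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(0, 0.5, n) > shift).astype(int)
    return X, y

X_a, y_a = make_site(2000, shift=0.0)  # training site
X_b, y_b = make_site(2000, shift=1.5)  # different terrain, never seen during training

X_train, X_test, y_train, y_test = train_test_split(X_a, y_a, test_size=0.3, random_state=0)

models = {
    "extremely randomized trees": ExtraTreesClassifier(n_estimators=200, random_state=0),
    "support vector machine": SVC(kernel="rbf"),
    "neural network": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    same_site = accuracy_score(y_test, model.predict(X_test))
    other_site = accuracy_score(y_b, model.predict(X_b))
    print(f"{name}: same-site accuracy {same_site:.2f}, transfer accuracy {other_site:.2f}")
```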
    While the Los Alamos research demonstrated a marked improvement over the best — and widely used — pan-arctic model, the results from the team’s three AI models were mixed, with the support vector machines showing the most promise for transferability.

  • Online versus reality: Social media influences perceptions

    People may form inaccurate impressions about us from our social media posts, finds new Cornell University research that is the first to examine perceptions of our personalities based on online posts.
    An analysis of Facebook status updates found substantial discrepancies, across a range of personality traits, between how viewers saw the authors and the authors’ self-perceptions. Viewers rated the Facebook users on average as having lower self-esteem and being more self-revealing, for example, than the users rated themselves.
    Status updates containing photos, video or links in addition to text facilitated more accurate assessments than those with just text, the researchers found. Overall, they said, the study sheds light on the dynamic process by which a cyber audience tries to make sense of who we are from isolated fragments of shared information, jointly constructing our digital identity.
    “The impression people form about us on social media based on what we post can differ from the way we view ourselves,” said Qi Wang, professor of psychology and director of the Culture & Cognition Lab. “A mismatch between who we are and how people perceive us could influence our ability to feel connected online and the benefits of engaging in social media interaction.”
    Wang is the lead author of “The Self Online: When Meaning-Making is Outsourced to the Cyber Audience,” published in PLOS One.
    Prior research has focused on perceptions of personality traits gleaned from personal websites, such as blogs or online profiles, finding that readers can assess them accurately. The Cornell researchers believe their study is the first to investigate audience perceptions of social media users through their posts, on platforms where users often don’t share cohesive personal narratives while interacting with “friends” they may know only a little or sometimes not at all.
    Interestingly, the study found that Facebook status updates generated perceptions of users that were consistent with cultural norms in offline contexts concerning gender and ethnicity — even though viewers were blind to their identities. For example, female Facebook users were rated as more extraverted than male users, in line with general findings that women score higher on extraversion. White Facebook users were seen as being more extraverted and having greater self-esteem than Asian users, whose cultures place more emphasis on modesty, Wang said.

    “We present ourselves in line with our cultural frameworks,” she said, “and others can discern our ‘cultured persona’ through meaning making of our posts.”
    The scholars said future research should explore this “outsourced meaning-making process” with larger samples of posts, and on other popular platforms such as Instagram and X, formerly known as Twitter.
    Wang said the findings could help developers design interfaces that allow people to express themselves most authentically. For users, misunderstandings about who they are on social media might not cause direct harm, she said, but could hinder their efforts to foster good communication and relationships.
    “If people’s view of us is very different from who we actually are, or how we would like to be perceived,” Wang said, “it could undermine our social life and well-being.”

  • New deepfake detector designed to be less biased

    The image spoke for itself.
    University at Buffalo computer scientist and deepfake expert Siwei Lyu created a photo collage out of the hundreds of faces that his detection algorithms had incorrectly classified as fake — and the new composition clearly had a predominantly darker skin tone.
    “A detection algorithm’s accuracy should be statistically independent from factors like race,” Lyu says, “but obviously many existing algorithms, including our own, inherit a bias.”
    Lyu, PhD, co-director of the UB Center for Information Integrity, and his team have now developed what they believe are the first-ever deepfake detection algorithms specifically designed to be less biased.
    Their two machine learning methods — one that makes algorithms aware of demographics and one that leaves them blind to demographic information — reduced disparities in accuracy across races and genders, while, in some cases, still improving overall accuracy.
    The research was presented at the Winter Conference on Applications of Computer Vision (WACV), held Jan. 4-8, and was supported in part by the U.S. Defense Advanced Research Projects Agency (DARPA).
    Lyu, the study’s senior author, collaborated with his former student, Shu Hu, PhD, now an assistant professor of computer and information technology at Indiana University-Purdue University Indianapolis, as well as George Chen, PhD, assistant professor of information systems at Carnegie Mellon University. Other contributors include Yan Ju, a PhD student in Lyu’s Media Forensic Lab at UB, and postdoctoral researcher Shan Jia.

    Ju, the study’s first author, says detection tools are often less scrutinized than the artificial intelligence tools they keep in check, but that doesn’t mean they don’t need to be held accountable, too.
    “Deepfakes have been so disruptive to society that the research community was in a hurry to find a solution,” she says, “but even though these algorithms were made for a good cause, we still need to be aware of their collateral consequences.”
    Demographic aware vs. demographic agnostic
    Recent studies have found large disparities in deepfake detection algorithms’ error rates — up to a 10.7% difference in one study — among different races. In particular, it’s been shown that some are better at guessing the authenticity of lighter-skinned subjects than darker-skinned ones.
    This can result in certain groups being more at risk of having their real image pegged as a fake or, perhaps even more damaging, of having a doctored image of them pegged as real.
    The problem is not necessarily the algorithms themselves, but the data they’ve been trained on. Middle-aged white men are often overrepresented in such photo and video datasets, so the algorithms are better at analyzing them than they are at analyzing underrepresented groups, says Lyu, SUNY Empire Professor in the UB Department of Computer Science and Engineering, within the School of Engineering and Applied Sciences.

    “Say one demographic group has 10,000 samples in the dataset and the other only has 100. The algorithm will sacrifice accuracy on the smaller group in order to minimize errors on the larger group,” he adds. “So it reduces overall errors, but at the expense of the smaller group.”
    While other studies have attempted to make databases more demographically balanced — a time-consuming process — Lyu says his team’s study is the first attempt to actually improve the fairness of the algorithms themselves.
    To explain their method, Lyu uses an analogy of a teacher being evaluated by student test scores.
    “If a teacher has 80 students do well and 20 students do poorly, they’ll still end up with a pretty good average,” he says. “So instead we want to give a weighted average to the students around the middle, forcing them to focus more on everyone instead of the dominating group.”
    First, their demographic-aware method supplied algorithms with datasets that labeled subjects’ gender — male or female — and race — white, Black, Asian or other — and instructed them to minimize errors on the less represented groups.
    “We’re essentially telling the algorithms that we care about overall performance, but we also want to guarantee that the performance of every group meets certain thresholds, or at least is only so much below the overall performance,” Lyu says.
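    As an illustration only, not the researchers’ actual loss function, the snippet below shows one simple way to encode that idea when demographic labels are available: compute per-group error rates and up-weight any group whose error exceeds the overall error by more than a chosen tolerance before the next training pass. All arrays and thresholds are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def demographic_aware_weights(errors, groups, tolerance=0.02, boost=2.0):
    """
    errors: 1 where the detector misclassified a sample, 0 otherwise.
    groups: a demographic label per sample (available in the 'aware' setting).
    Returns per-sample weights that up-weight any group whose error rate exceeds
    the overall error rate by more than `tolerance`.
    """
    errors = np.asarray(errors, dtype=float)
    groups = np.asarray(groups)
    overall_error = errors.mean()
    weights = np.ones_like(errors)
    for g in np.unique(groups):
        mask = groups == g
        if errors[mask].mean() > overall_error + tolerance:
            weights[mask] *= boost
    return weights

# Invented example: the detector errs far more often on the under-represented group "B".
groups = np.array(["A"] * 10000 + ["B"] * 100)
errors = np.concatenate([rng.binomial(1, 0.05, 10000), rng.binomial(1, 0.30, 100)])
weights = demographic_aware_weights(errors, groups)
print({g: round(float(weights[groups == g].mean()), 2) for g in ("A", "B")})
```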
    However, datasets typically aren’t labeled for race and gender. Thus, the team’s demographic-agnostic method classifies deepfake videos not based on the subjects’ demographics, but on features in the video that are not immediately visible to the human eye.
    “Maybe a group of videos in the dataset corresponds to a particular demographic group or maybe it corresponds with some other feature of the video, but we don’t need demographic information to identify them,” Lyu says. “This way, we do not have to handpick which groups should be emphasized. It’s all automated based on which groups make up that middle slice of data.”
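    In the same spirit, a demographic-agnostic variant can be sketched by clustering the detector’s feature embeddings and up-weighting whichever clusters currently have above-average error, without ever using demographic labels. Again, this is only a conceptual illustration with invented data, not the paper’s algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def demographic_agnostic_weights(features, errors, n_clusters=8, boost=2.0):
    """Group samples by feature similarity (no demographic labels) and
    up-weight the clusters whose current error rate is above average."""
    errors = np.asarray(errors, dtype=float)
    clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
    overall_error = errors.mean()
    weights = np.ones_like(errors)
    for c in range(n_clusters):
        mask = clusters == c
        if mask.any() and errors[mask].mean() > overall_error:
            weights[mask] *= boost
    return weights

# Invented stand-ins for detector feature embeddings and their current mistakes.
features = rng.normal(size=(2000, 16))
errors = rng.binomial(1, 0.1, 2000)
weights = demographic_agnostic_weights(features, errors)
print(f"fraction of samples up-weighted: {float((weights > 1).mean()):.2f}")
```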
    Improving fairness — and accuracy
    The team tested their methods using the popular FaceForensics++ dataset and the state-of-the-art Xception detection algorithm. This improved all of the algorithm’s fairness metrics, such as equal false positive rates among races, with the demographic-aware method performing best of all.
    Most importantly, Lyu says, their methods actually increased the overall detection accuracy of the algorithm — from 91.49% to as high as 94.17%.
    However, when using the Xception algorithm with different datasets and the FF+ dataset with different algorithms, the methods — while still improving most fairness metrics — slightly reduced overall detection accuracy.
    “There can be a small tradeoff between performance and fairness, but we can guarantee that the performance degradation is limited,” Lyu says. “Of course, the fundamental solution to the bias problem is improving the quality of the datasets, but for now, we should incorporate fairness into the algorithms themselves.”