More stories

  • NFTs offer new method to control personal health information

    NFTs, or nonfungible tokens, created using blockchain technology, first made a splash in the art world as a platform to buy and sell digital art backed by a digital contract. But could NFT digital contracts be useful in other marketplaces? A global, multidisciplinary team of scholars in ethics, law and informatics led by bioethicists at Baylor College of Medicine wrote one of the first commentaries on how this emerging technology could be repurposed for the healthcare industry.
    In a new publication in the journal Science, the researchers propose that the tool could help patients gain more control over their personal health information. NFT digital contracts could provide an opportunity for patients to specify who can access their personal health information and to track how it is shared.
    “Our personal health information is completely outside of our control in terms of what happens to it once it is digitalized into an electronic health record and how it gets commercialized and exchanged from there,” said Dr. Kristin Kostick-Quenet, first author of the paper and assistant professor at the Center for Medical Ethics and Health Policy at Baylor. “NFTs could be used to democratize health data and help individuals regain control and participate more in decisions about who can see and use their health information.”
    “In the era of big data, health information is its own currency; it has become commodified and profitable,” said Dr. Amy McGuire, senior author of the paper and Leon Jaworski Professor of Biomedical Ethics and director of the Center for Medical Ethics and Health Policy at Baylor. “Using NFTs for health data is the perfect storm between a huge marketplace that’s evolving and the popularity of cryptocurrency, but there are also many ethical, legal and social implications to consider.”
    The researchers point out that NFTs are still vulnerable to data security flaws, privacy issues, and disputes over intellectual property rights. Further, the complexity of NFTs may prevent the average citizen from capitalizing on their potential. The researchers believe it is important to consider potential benefits and challenges as NFTs emerge as a potential avenue to transform the world of health data.
    “Federal regulations already give patients the right to connect an app of their choice to their doctor’s electronic health record and download their data in a computable format,” said Dr. Kenneth Mandl, co-author of the paper, director of the Computational Health Informatics Program at Boston Children’s Hospital and Donald A.B. Lindberg Professor of Pediatrics and Biomedical Informatics at Harvard Medical School. “It’s intriguing to contemplate whether NFTs or NFT-like technology could enable intentional sharing of those data under smart contracts in the future.”
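    The commentary itself contains no code, but as a rough conceptual sketch of what an NFT-style health data contract might track (a patient-controlled permission list plus an auditable access log), something like the following Python stand-in illustrates the idea. All names here are hypothetical, and a real implementation would live in a blockchain smart contract rather than an in-memory object.

    ```python
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class HealthDataToken:
        """Conceptual stand-in for an NFT-like record tied to a patient's
        health data: the patient maintains a permission list, and every
        access attempt is appended to an audit log."""
        owner: str
        permitted: set = field(default_factory=set)
        access_log: list = field(default_factory=list)

        def grant(self, party: str) -> None:
            self.permitted.add(party)

        def revoke(self, party: str) -> None:
            self.permitted.discard(party)

        def access(self, party: str, purpose: str) -> bool:
            # Record every attempt, allowed or not, so the owner can audit sharing.
            allowed = party in self.permitted
            self.access_log.append({
                "party": party,
                "purpose": purpose,
                "allowed": allowed,
                "time": datetime.now(timezone.utc).isoformat(),
            })
            return allowed

    # Hypothetical usage: a patient grants a hospital access, an unknown
    # data broker is refused, and the log shows who tried to use the record.
    token = HealthDataToken(owner="patient-123")
    token.grant("general-hospital")
    token.access("general-hospital", "treatment")   # True
    token.access("data-broker-x", "marketing")      # False
    print(token.access_log)
    ```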
    Dr. Timo Minssen, I. Glenn Cohen, Dr. Urs Gasser and Dr. Isaac Kohane also contributed to this publication. They are from the following institutions: Boston Children’s Hospital, Harvard Medical School, Harvard Law School, University of Copenhagen and Technical University of Munich. See the publication for a full list of funding for these researchers.
    Story Source:
    Materials provided by Baylor College of Medicine. Note: Content may be edited for style and length.

  • Neuroscientists use deep learning model to simulate brain topography

    Damage to a part of the brain that processes visual information — the inferotemporal (IT) cortex — can be devastating, especially for adults. Those affected may lose the ability to read (a disorder known as alexia), or recognize faces (prosopagnosia) or objects (agnosia), and there is currently not much doctors can do.
    A more accurate model of the visual system may help neuroscientists and clinicians develop better treatments for these conditions. Carnegie Mellon University researchers have developed a computational model that allows them to simulate the spatial organization or topography of the IT and learn more about how neighboring clusters of brain tissue are organized and interact. This could also help them understand how damage to that area affects the ability to recognize faces, objects and scenes.
    The researchers — Nicholas Blauch, a Ph.D. student in the Program in Neural Computation, and his advisors David C. Plaut and Marlene Behrmann, both professors in the Department of Psychology and the Neuroscience Institute at CMU — described the model in the Jan. 18 issue of the Proceedings of the National Academy of Sciences.
    Blauch said the paper may help cognitive neuroscientists answer longstanding questions about how different parts of the brain work together.
    “We have been wondering for a long time if we should be thinking of the network of regions in the brain that responds to faces as a separate entity just for recognizing faces, or if we should think of it as part of a broader neural architecture for object recognition,” Blauch said. “We’re trying to come at this problem using a computational model that assumes this simpler, general organization, and seeing whether this model can then account for the specialization we see in the brain through learning to perform tasks.”
    To do so, the researchers developed a deep learning model endowed with additional features of biological brain connectivity, hypothesizing that the model could reveal the spatial organization, or topography, of the IT.
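    The release does not describe the model's architecture in detail. As a hedged illustration of the general idea of giving network units positions on a cortical sheet and biasing training toward locally similar connectivity, here is a minimal PyTorch-style sketch; the layer name, grid layout and wiring-cost penalty are assumptions for illustration, not the authors' published implementation.

    ```python
    import torch
    import torch.nn as nn

    class TopographicLayer(nn.Module):
        """A fully connected layer whose units are assigned positions on a
        2-D sheet. A spatial wiring-cost penalty encourages units that are
        close together to develop similar connectivity, which is one simple
        way topography can emerge in a trained network."""
        def __init__(self, in_features, grid_size=16):
            super().__init__()
            n_units = grid_size * grid_size
            self.linear = nn.Linear(in_features, n_units)
            # Fixed (x, y) coordinates for each unit on the sheet.
            xs, ys = torch.meshgrid(
                torch.arange(grid_size, dtype=torch.float32),
                torch.arange(grid_size, dtype=torch.float32),
                indexing="ij",
            )
            coords = torch.stack([xs.flatten(), ys.flatten()], dim=1)
            self.register_buffer("dists", torch.cdist(coords, coords))

        def forward(self, x):
            return torch.relu(self.linear(x))

        def wiring_cost(self):
            # Penalize dissimilar incoming weights between spatially nearby units.
            w = self.linear.weight                  # (n_units, in_features)
            weight_diff = torch.cdist(w, w)         # pairwise weight distance
            proximity = torch.exp(-self.dists)      # nearby unit pairs count more
            return (proximity * weight_diff).mean()

    # Usage: add the penalty to the task loss during training.
    layer = TopographicLayer(in_features=512)
    x = torch.randn(8, 512)
    out = layer(x)
    loss = out.pow(2).mean() + 0.1 * layer.wiring_cost()
    loss.backward()
    ```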

  • The brain’s secret to life-long learning can now come as hardware for artificial intelligence

    When the human brain learns something new, it adapts. But when artificial intelligence learns something new, it tends to forget information it already learned.
    As companies use more and more data to improve how AI recognizes images, learns languages and carries out other complex tasks, a paper publishing in Science this week shows a way that computer chips could dynamically rewire themselves to take in new data like the brain does, helping AI to keep learning over time.
    “The brains of living beings can continuously learn throughout their lifespan. We have now created an artificial platform for machines to learn throughout their lifespan,” said Shriram Ramanathan, a professor in Purdue University’s School of Materials Engineering who specializes in discovering how materials could mimic the brain to improve computing.
    Unlike the brain, which constantly forms new connections between neurons to enable learning, the circuits on a computer chip don’t change. A circuit that a machine has been using for years isn’t any different than the circuit that was originally built for the machine in a factory.
    This is a problem for making AI more portable, such as for autonomous vehicles or robots in space that would have to make decisions on their own in isolated environments. If AI could be embedded directly into hardware rather than just running on software as AI typically does, these machines would be able to operate more efficiently.
    In this study, Ramanathan and his team built a new piece of hardware that can be reprogrammed on demand through electrical pulses. Ramanathan believes that this adaptability would allow the device to take on all of the functions that are necessary to build a brain-inspired computer.

  • Satellites have located the world’s methane ‘ultra-emitters’

    A small number of “ultra-emitters” of methane from oil and gas production contribute as much as 12 percent of emissions of the greenhouse gas to the atmosphere every year — and now scientists know where many of these sources are.

    Analyses of satellite images from 2019 and 2020 reveal that a majority of the 1,800 biggest methane sources come from six major oil- and gas-producing countries: Turkmenistan led the pack, followed by Russia, the United States, Iran, Kazakhstan and Algeria.

    Plugging those leaks would not only be a boon to the planet, but also could save those countries billions in U.S. dollars, climate scientist Thomas Lauvaux of the University of Paris-Saclay and colleagues report in the Feb. 4 Science.

    Ultra-emitters are sources that spurt at least 25 metric tons of methane per hour into the atmosphere. These occasional massive bursts make up only a fraction — but a sizable one — of the methane shunted into Earth’s atmosphere annually.

    Cleaning up such leaks would be a big first step in reducing overall emissions, says Euan Nisbet, a geochemist at Royal Holloway, University of London in Egham, who was not involved in the study. “If you see somebody badly injured in a road accident, you bandage up the bits that are bleeding hardest.”

    Methane has about 80 times the atmosphere-warming potential of carbon dioxide, though it tends to have a much shorter lifetime in the atmosphere — 10 to 20 years or so, compared with hundreds of years. The greenhouse gas can seep into the atmosphere from both natural and human-made sources (SN: 2/19/20).
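    To put those numbers in rough perspective, here is a quick back-of-the-envelope calculation; the 25-ton-per-hour threshold and the roughly 80-fold near-term warming factor come from the article, while the one-day leak duration is purely an illustrative assumption.

    ```python
    # Back-of-envelope CO2-equivalent of a single ultra-emitter event.
    rate_t_per_hr = 25      # minimum "ultra-emitter" release rate (metric tons CH4/hour)
    hours = 24              # assumed duration: one day (illustrative only)
    gwp_near_term = 80      # approximate near-term warming potency of CH4 vs. CO2

    methane_t = rate_t_per_hr * hours          # 600 metric tons of methane
    co2_equiv_t = methane_t * gwp_near_term    # about 48,000 metric tons CO2-equivalent
    print(f"{methane_t} t CH4 is roughly {co2_equiv_t:,} t CO2-equivalent")
    ```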

    In oil and gas production, massive methane bursts might be the result of accidents or leaky pipelines or other facilities, Lauvaux says. But these leaks are often the result of routine maintenance practices, the team found. Rather than shut down for days to clear gas from pipelines, for example, managers might open valves on both ends of the line, releasing and burning off the gas quickly. That sort of practice stood out starkly in satellite images as “two giant plumes” along a pipeline track, Lauvaux says.

    Stopping such practices and repairing leaky facilities are relatively easy, which is why such changes may be the low-hanging fruit when it comes to addressing greenhouse gas emissions. But identifying the particular sources of those huge methane emissions has been the challenge. Airborne studies can help pinpoint some large sources, such as landfills, dairy farms and oil and gas producers, but such flights are limited by being both regional and of short duration (SN: 11/14/19).

    Satellite instruments, such as the Tropospheric Monitoring Instrument, or TROPOMI, aboard the European Space Agency’s Sentinel-5P satellite, offer a much bigger window in both space and time. Scientists have previously used TROPOMI to estimate the overall leakiness of oil and gas production in Texas’s massive Permian Basin, finding that the region sends twice as much methane to the atmosphere as previously thought (SN: 4/22/20).

    In the new study, the team didn’t include sources in the Permian Basin among the ultra-emitters; the large emissions from that region are the result of numerous tightly clustered but smaller emissions sources. Because TROPOMI doesn’t peer well through clouds, other regions around the globe, such as Canada and the equatorial tropics, also weren’t included.

    But that doesn’t mean those regions are off the hook, Lauvaux says. “There’s just no data available.” On the heels of this broad-brush view from TROPOMI, Lauvaux and other scientists are now working to plug those data gaps using other satellites with better resolution and the ability to penetrate clouds.

    Stopping all of these big leaks, which amount to an estimated 8 to 12 percent of total annual methane emissions, could save these countries billions of dollars, the researchers say. And the reduction in those emissions would be about as beneficial to the planet as cutting all emissions from Australia since 2005, or removing 20 million vehicles from the roads for a year.

    Such a global map can also be helpful to countries in meeting their goals under the Global Methane Pledge launched in November at the United Nations’ annual climate summit, says Daniel Jacob, an atmospheric chemist at Harvard University who was not involved in the study (SN: 1/11/22).

    Signatories to the pledge agreed to reduce global emissions of the gas by at least 30 percent relative to 2020 levels by 2030. These new findings, Jacob says, can help achieve that target because the work “encourages action rather than despair.”

  • Observation of quantum transport at room temperature in a 2.8-nanometer CNT transistor

    National Institute for Materials Science, Japan. “Observation of quantum transport at room temperature in a 2.8-nanometer CNT transistor: Semiconductor nanochannels created within metallic CNTs by thermally and mechanically altering the helical structure.” ScienceDaily, 3 February 2022. www.sciencedaily.com/releases/2022/02/220203123008.htm

  • Researchers find new way to amplify trustworthy news content on social media without shielding bias

    Social media sites continue to amplify misinformation and conspiracy theories. To address this concern, an interdisciplinary team of computer scientists, physicists and social scientists led by the University of South Florida (USF) has found a solution to ensure social media users are exposed to more reliable news sources.
    In their study published in the journal Nature Human Behaviour, the researchers focused on the recommendation algorithm that is used by social media platforms to prioritize content displayed to users. Rather than measuring engagement based on the number of users and pageviews, the researchers looked at what content gets amplified on a newsfeed, focusing on a news source’s reliability score and the political diversity of its audience.
    “Low-quality content is engaging because it conforms to what we already know and like, regardless of whether it is accurate or not,” said Giovanni Luca Ciampaglia, assistant professor of computer science and engineering at USF. “As a result, misinformation and conspiracy theories often go viral within like-minded audiences. The algorithm ends up picking the wrong signal and keeps promoting it further. To break this cycle, one should look for content that is engaging, but for a diverse audience, not for a like-minded one.”
    In collaboration with researchers at Indiana University and Dartmouth College, the team created a new algorithm using data on the web traffic and self-reported partisanship of 6,890 individuals who reflect the diversity of the United States in sex, race and political affiliation. The data was provided by online polling company YouGov. They also reviewed the reliability scores of 3,765 news sources based on the NewsGuard Reliability Index, which rates news sources on several journalistic criteria, such as editorial responsibility, accountability and financial transparency.
    They found that incorporating the partisan diversity of a news audience can increase the reliability of recommended sources while still providing users with relevant recommendations. Since the algorithm isn’t exclusively based on engagement or popularity, it is still able to promote reliable sources, regardless of their partisanship.
    “This is especially welcome news for social media platforms, especially since they have been reluctant to introduce changes to their algorithms for fear of criticism about partisan bias,” said co-author Filippo Menczer, Distinguished Luddy Professor of Informatics and Computer Science at Indiana University.
    Researchers say that platforms would easily be able to incorporate audience diversity into their own recommendation algorithms because diversity measures can be derived from engagement data, and platforms already log this type of data whenever users click “like” or share something on a newsfeed. Ciampaglia and his colleagues propose social media platforms adopt this new strategy in order to help prevent the spread of misinformation.
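    The release does not give the algorithm's formula. Purely as an illustrative sketch of the underlying idea (rank sources not by engagement alone but by engagement weighted against the partisan diversity of their audience), a minimal Python version might look like the following; the field names, the entropy-based diversity measure and the blending weight are assumptions, not the published method.

    ```python
    import math
    from collections import Counter

    def partisan_diversity(audience_leanings):
        """Shannon entropy of the audience's self-reported partisanship,
        normalized to [0, 1]. Higher values mean a more politically
        diverse audience for the source."""
        counts = Counter(audience_leanings)
        total = sum(counts.values())
        probs = [c / total for c in counts.values()]
        entropy = -sum(p * math.log(p) for p in probs if p > 0)
        max_entropy = math.log(len(counts)) if len(counts) > 1 else 1.0
        return entropy / max_entropy

    def rank_sources(sources, alpha=0.5):
        """Rank news sources by a blend of engagement and audience diversity.
        Each source is a dict with hypothetical fields: 'name', 'engagement'
        (normalized clicks/shares) and 'audience' (partisanship labels)."""
        def score(src):
            diversity = partisan_diversity(src["audience"])
            return alpha * src["engagement"] + (1 - alpha) * diversity
        return sorted(sources, key=score, reverse=True)

    # Toy example: a source popular only with one partisan group ranks
    # below a slightly less engaging source with a politically mixed audience.
    sources = [
        {"name": "HyperPartisanBuzz", "engagement": 0.9,
         "audience": ["left"] * 95 + ["right"] * 5},
        {"name": "BroadAppealDaily", "engagement": 0.7,
         "audience": ["left"] * 45 + ["right"] * 45 + ["independent"] * 10},
    ]
    for src in rank_sources(sources):
        print(src["name"])
    ```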
    Story Source:
    Materials provided by University of South Florida (USF Innovation). Note: Content may be edited for style and length.

  • Missing the bar: How people misinterpret data in bar graphs

    Thanks to their visual simplicity, bar graphs are popular tools for representing data. But do we really understand how to read them? New research from Wellesley College published in the Journal of Vision has found that bar graphs are frequently misunderstood. The study demonstrates that people who view exactly the same graph often walk away with completely different understandings of the facts it represents.
    “Our work reveals that bar graphs are not the clear communication tools many had supposed,” said Sarah H. Kerns, a 2019 graduate of Wellesley, research associate in its psychology department, and first author of the paper, entitled “Two graphs walk into a bar: Readout-based measurement reveals the Bar-Tip Limit error, a common, categorical misinterpretation of mean bar graphs.”
    “Bar graphs that depict mean values are ubiquitous in politics, science, education, and government, and they are used to convey data over a wide range of topics including climate change, public health, and the economy,” said co-author Jeremy Wilmer, associate professor of psychology at Wellesley. “A lack of clarity in domains such as these could have far-reaching negative impacts on public discourse.”
    Kerns and Wilmer’s revelation about bar graphs was made possible by a powerful new measurement technique that they developed. This technique relies upon having a person draw, on paper, their interpretation of the graph. “Drawing tasks are particularly effective at capturing visuospatial thinking in a way that is concrete, expressive, and detailed,” said Kerns. “Drawings have long been used in psychology as a way to reveal the contents of one’s thoughts, but they have not previously been used to study graph interpretation.”
    The research team asked hundreds of people to show where they believed the data underlying a bar graph would be by drawing dots on the graphs themselves. A striking pattern emerged. About one in five graph readers categorically misinterpreted bar graphs that depicted averages. “These readers sketched all, or nearly all, of the data points below the average,” said Wilmer. “The average is the balanced center point of the data, so it is not possible for all, or nearly all, of the data to fall below it. We call this mistake the bar-tip limit error, because the viewer has misinterpreted the bar’s tip as the outer limit of the data.” The error was equally prevalent across ages, genders, education levels, and nationalities.
    Given the severity of this error, how could decades of graph interpretation research have missed it? “Previous research typically asked rather abstract, indirect questions: about predictions, probabilities, and payoffs,” said Kerns. “It is difficult to read a person’s thoughts from their answers to such questions. It is like looking through frosted glass — one may gain a vague sense of what is there, but it lacks definition. Our measurement approach is more concrete, more direct, more detailed. The drawings provide a clear window into the graph interpreter’s thinking.”
    “A major lesson from this work is that simplification in graph design can yield more confusion than clarification,” said Wilmer. “The whole point of replacing individual values with a summary statistic like an average is to simplify the visual display and make it easier to read. But this simplification misleads many viewers, and not only about the location of the individual data points that have been removed — it misleads them also about the average, which is the one thing the graph actually depicts.”
    The team suggests some changes in data visualization practices based on their findings. First, they recommend that a bar be used only to convey a single number, such as a count (150 hospital beds) or quantity ($5.75): “In that case, no data is hidden,” said Kerns. “In contrast, our research shows that a bar used to depict the average of multiple numbers risks severe confusion.” Their second recommendation is to think twice before replacing concrete, detailed information (e.g., individual data points) with visually simpler yet conceptually more abstract information (e.g., an average value). “Our work provides a case-in-point that abstraction in data communication risks serious misunderstanding,” said Wilmer.
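    As a quick illustration of that recommendation, the following matplotlib sketch (with made-up numbers) contrasts a mean-only bar, the format prone to the bar-tip limit error, with a plot that shows the individual data points alongside the mean.

    ```python
    import matplotlib.pyplot as plt
    import numpy as np

    # Hypothetical data: several values lie above the mean, which a bar
    # drawn only up to the mean visually hides.
    values = np.array([3, 4, 5, 6, 6, 7, 9, 12])
    mean = values.mean()

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3), sharey=True)

    # Left: a mean bar graph, which many readers misinterpret.
    ax1.bar(["group A"], [mean])
    ax1.set_title("Mean only (ambiguous)")

    # Right: the same data shown as individual points plus a mean marker.
    jitter = np.random.uniform(-0.05, 0.05, size=values.size)
    ax2.scatter(np.zeros(values.size) + jitter, values, zorder=3)
    ax2.hlines(mean, -0.2, 0.2, linewidth=2, label="mean")
    ax2.set_xticks([0])
    ax2.set_xticklabels(["group A"])
    ax2.set_title("Points plus mean (explicit)")
    ax2.legend()

    plt.tight_layout()
    plt.show()
    ```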
    The team’s education-focused recommendations include the use of data sketching tasks to teach data literacy. “Once a student’s interpretation is made explicit and visible on paper, it is easy to discuss and, if necessary, correct,” Wilmer said. They also suggest having students work with real data. “Data is fundamentally concrete,” Kerns said. “There is value to reading about it in the abstract, but that will always be a bit like reading a book to learn how to ride a bike. There is no substitute for hands-on experience.”
    Collection, visualization, and analysis of data now form a centerpiece of all of Wilmer’s courses. An enabling tool in this effort is a free-access suite of data visualization web apps he created at ShowMyData.org, which allow the user, in a matter of seconds, to build and curate attractive, high-quality graphs with individual datapoints. “Such graphs avoid the sorts of errors that our research reveals,” says Kerns. “And they are easily interpreted, even by young children,” adds Wilmer, whose children, aged 11 and 7, are “two of my most astute (and ruthless) app development and data communication consultants.”
    In a political and scientific milieu where information spreads fast, and where misunderstanding can have a profound impact on popular opinion and public policy, clear data communication and robust data literacy are increasingly important. “From the grocery store to the doctor’s office to the ballot box, data informs our decisions,” Kerns said. “We hope our work will help to enhance data comprehension and smooth the path to informed decision-making by institutions and individuals alike.”

  • People prefer interacting with female robots in hotels, study finds

    People are more comfortable talking to female rather than male robots working in service roles in hotels, according to a study by Washington State University researcher Soobin Seo.
    The study, which surveyed about 170 people on hypothetical service robot scenarios, also found that the preference was stronger when the robots were described as having more human features. The findings are detailed in a paper published online in the International Journal of Hospitality Management.
    “People have a tendency to feel more comfort in being cared for by females because of existing gender stereotyping about service roles,” said Seo, an assistant professor of hospitality management at WSU’s Carson College of Business in Everett. “That gender stereotype appears to transfer to robot interactions, and it is more amplified when the robots are more human-like.”
    Even before the pandemic, the hotel industry struggled with high turnover of employees, and Seo noted that some hotels have turned to robots and automation for a variety of functions from dishwashing and room cleaning to customer service such as greeting guests and delivering luggage.
    Examples range from the female humanized robots named “Pepper” at the Mandarin Oriental Hotel in Las Vegas to the fully automated FlyZoo hotel chain in China where guests interact only with robots and artificial intelligence (AI) features.
    For the study, survey participants were presented with one of four scenarios about interacting with an AI service robot at a hotel. In one scenario, they were greeted by a male service robot named “Alex” who was described as having a face and human-like body. A second scenario was worded exactly the same with just two changes: the robot’s gender was female, and its name was “Sara.” In two other scenarios, the robots were both gendered and named differently but described as “machine-like” with an interactive screen instead of a face.
    The respondents were then asked to rank how they felt about the interactions. The participants who were presented with the female robot scenarios rated the experience as more pleasant and satisfying than those who had scenarios with male robots. The preference for the female robot was more pronounced when the robots were described as looking more human.
    Seo cautioned that replacing human hospitality workers with AI robots of any gender raises many issues that need further research. For instance, if a robot breaks down or fails in service in some way, such as losing luggage or getting a reservation wrong, customers may want a human employee to help them.
    The WSU business researcher is also in the process of investigating how the personality of AI robots may impact customers’ perceptions, such as if they are extroverted and talkative or introverted and quiet.
    These are important considerations for AI robot developers as well as for hospitality employers to consider as they think about adopting robots more widely, Seo said.
    “We may start to see more robots as replacements of human employees in hotels and restaurants in the future, so we may find that some of the psychological relationships that we see in human-to-human interaction are also implemented in robot interactions,” she said.
    Story Source:
    Materials provided by Washington State University. Original written by Sara Zaske. Note: Content may be edited for style and length.