Machine learning, blockchain technology could help counter spread of fake news
A proposed machine learning framework and expanded use of blockchain technology could help counter the spread of fake news by allowing content creators to focus on areas where the misinformation is likely to do the most public harm, according to new research from Binghamton University, State University of New York.
The research, led by Thi Tran, assistant professor of management information systems at Binghamton University’s School of Management, expands on existing studies by offering tools for recognizing patterns in misinformation and helping content creators zero in on the worst offenders.
“I hope this research helps us educate more people about being aware of the patterns,” Tran said, “so they know when to verify something before sharing it and are more alert to mismatches between the headline and the content itself, which would keep the misinformation from spreading unintentionally.”
Tran’s research proposes machine learning systems (a branch of artificial intelligence and computer science that uses data and algorithms to imitate the way humans learn, gradually improving in accuracy) to help gauge how much harm a piece of content could cause its audience.
Examples could include stories that circulated during the height of the COVID-19 pandemic touting false alternative treatments in place of the vaccine.
The framework would use data and algorithms to spot indicators of misinformation and use those examples to inform and improve the detection process. It would also consider user characteristics from people with prior experience or knowledge of fake news to help piece together a harm index. The index would reflect the severity of possible harm to a person in certain contexts if they were exposed to and victimized by the misinformation.
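To make the idea of a harm index concrete, the sketch below shows one way such a score could be assembled. The feature names, weights, and the simple weighted-average formula are illustrative assumptions, not the model described in the study, which would learn these relationships from data and from users with prior experience of fake news.

```python
from dataclasses import dataclass

@dataclass
class ContentSignals:
    """Illustrative inputs; the study's actual features may differ."""
    misinfo_probability: float     # classifier confidence the item is misinformation (0-1)
    headline_body_mismatch: float  # mismatch between headline and article body (0-1)
    topic_severity: float          # how consequential the topic is, e.g. health guidance (0-1)
    audience_vulnerability: float  # susceptibility of the exposed audience (0-1)

def harm_index(signals: ContentSignals,
               weights=(0.4, 0.1, 0.3, 0.2)) -> float:
    """Combine the signals into a single 0-1 harm score.

    A hand-set weighted average is used purely for illustration; the
    proposed framework would instead learn such a mapping from labeled
    examples and user-experience input rather than fix weights by hand.
    """
    w_prob, w_mismatch, w_topic, w_audience = weights
    score = (w_prob * signals.misinfo_probability
             + w_mismatch * signals.headline_body_mismatch
             + w_topic * signals.topic_severity
             + w_audience * signals.audience_vulnerability)
    return max(0.0, min(1.0, score))

# Example: a false "alternative treatment" story reaching a vulnerable audience
story = ContentSignals(misinfo_probability=0.9,
                       headline_body_mismatch=0.7,
                       topic_severity=0.95,
                       audience_vulnerability=0.8)
print(f"harm index: {harm_index(story):.2f}")  # high score -> flag for review
```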
“We’re most likely to care about fake news if it causes a harm that impacts readers or audiences. If people perceive there’s no harm, they’re more likely to share the misinformation,” Tran said. “The harms come from whether audiences act according to claims from the misinformation, or if they refuse the proper action because of it. If we have a systematic way of identifying where misinformation will do the most harm, that will help us know where to focus on mitigation.”
Based on the information gathered, Tran said, the machine learning system could help fake news mitigators discern which messages are likely to be the most damaging if allowed to spread unchallenged.
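In practice, a mitigation team could use such scores to triage a stream of flagged items, reviewing the highest-scoring ones first. The snippet below is a hypothetical continuation of the earlier sketch; the function and field names are assumptions, not part of the study.

```python
def triage(items: list[tuple[str, ContentSignals]], top_k: int = 3):
    """Return the top_k flagged items, most harmful first.

    `items` pairs an identifier (e.g. a post URL) with its signals.
    This prioritization step is one assumed use of the harm index,
    not a procedure taken from the study itself.
    """
    ranked = sorted(items, key=lambda pair: harm_index(pair[1]), reverse=True)
    return ranked[:top_k]
```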
