Science & Society

New tool uses AI to flag fake news for media fact-checkers

A new artificial intelligence (AI) tool could help social media networks and news organizations weed out false stories.

The tool, developed by researchers at the University of Waterloo, uses deep-learning AI algorithms to determine whether claims made in posts or stories are supported by other posts and stories on the same subject.

“If they are, great, it’s probably a real story,” said Alexander Wong, a professor of systems design engineering at Waterloo. “But if most of the other material isn’t supportive, it’s a strong indication you’re dealing with fake news.”
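In rough terms, the rule Wong describes amounts to a majority vote over stance judgments. The sketch below is a hypothetical illustration, not the Waterloo system: stance_of is a stand-in for a trained deep-learning stance model, and the function names, threshold, and toy data are all assumptions made for the example.

```python
# Hypothetical sketch of the flagging rule described above: a claim is
# flagged for review when most related stories fail to support it.
# "stance_of" is a stand-in for a trained stance-detection model.

def stance_of(claim: str, story: str) -> str:
    """Placeholder stance model returning 'support' or 'not_support'.
    A real system would use a trained deep-learning classifier here;
    this toy heuristic just checks lexical overlap."""
    overlap = set(claim.lower().split()) & set(story.lower().split())
    return "support" if len(overlap) >= 3 else "not_support"

def flag_claim(claim: str, related_stories: list[str]) -> bool:
    """Flag the claim if a majority of related stories do not support it,
    mirroring the rule quoted above."""
    stances = [stance_of(claim, s) for s in related_stories]
    unsupportive = sum(1 for s in stances if s == "not_support")
    return unsupportive > len(stances) / 2

if __name__ == "__main__":
    claim = "City council votes to ban plastic bags starting in June"
    stories = [
        "The city council voted on Tuesday to ban plastic bags in June.",
        "Council confirms the plastic ban will take effect in June.",
        "Local bakery wins award for best sourdough bread.",
    ]
    # Most stories are supportive here, so the claim is not flagged.
    print("flag for review:", flag_claim(claim, stories))
```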

The researchers were motivated to develop the tool by the proliferation of online posts and news stories that are fabricated to deceive or mislead readers, typically for political or economic gain.

Their system advances ongoing efforts to develop fully automated technology capable of detecting fake news by achieving 90 percent accuracy in a key area of research known as stance detection.

Given a claim in one post or story, and other posts and stories on the same subject that have been collected for comparison, the system can correctly determine whether or not they support it nine times out of 10.

That is a new benchmark for accuracy by researchers using a large dataset created for a 2017 scientific competition called the Fake News Challenge.
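The Fake News Challenge (FNC-1) dataset is publicly available on GitHub (FakeNewsChallenge/fnc-1), so readers can experiment with the same benchmark. A minimal loading sketch, assuming the standard file layout of that public release, in which train_stances.csv pairs headlines with body IDs and stance labels, and train_bodies.csv holds the article text:

```python
# Minimal sketch of loading the public FNC-1 dataset, assuming the
# standard layout of the GitHub release (FakeNewsChallenge/fnc-1).
import pandas as pd

# train_stances.csv columns: Headline, Body ID, Stance
# (stances are agree / disagree / discuss / unrelated)
stances = pd.read_csv("train_stances.csv")

# train_bodies.csv columns: Body ID, articleBody
bodies = pd.read_csv("train_bodies.csv")

# Join each headline (the "claim") to its article body (the "story").
pairs = stances.merge(bodies, on="Body ID")
print(pairs["Stance"].value_counts())  # label distribution of the training pairs
```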

While researchers around the world continue working toward a fully automated system, the Waterloo technology could be used in the meantime as a screening tool by human fact-checkers at social media and news organizations.

“It augments their capabilities and flags information that doesn’t look quite right for verification,” said Wong, a founding member of the Waterloo Artificial Intelligence Institute. “It isn’t designed to replace people, but to help them fact-check faster and more reliably.”

The AI algorithms at the heart of the system were shown a large number of claims paired with stories that either supported or did not support them. Over time, the system learned to determine support or non-support itself when shown new claim-story pairs.
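The training setup described here can be pictured with a deliberately simple stand-in model. The sketch below is not the Waterloo team's deep-learning system; it trains a bag-of-words classifier on labeled claim-story pairs purely to show the pairing and the support/non-support labels. The toy data is invented for the example, and a real run would use tens of thousands of pairs, such as the FNC-1 data loaded earlier.

```python
# Illustrative stand-in for the training setup described above: a simple
# bag-of-words classifier over claim-story pairs. The real system uses
# deep learning; this sketch only demonstrates the pairing and labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical labeled claim-story pairs (claim, story, label).
pairs = [
    ("new drug cures flu", "a large trial shows the new drug cures flu", "support"),
    ("new drug cures flu", "regulators report the drug failed its flu trial", "not_support"),
    ("mayor resigns today", "the mayor announced her resignation today", "support"),
    ("mayor resigns today", "the mayor denied any plan to resign", "not_support"),
    ("team wins title", "the team clinched the title last night", "support"),
    ("team wins title", "the team was eliminated in the semifinal", "not_support"),
    ("bridge reopens monday", "officials confirm the bridge reopens monday", "support"),
    ("bridge reopens monday", "repairs will keep the bridge closed for months", "not_support"),
]

# Represent each pair as one string so the bag-of-words model sees both sides.
texts = [claim + " ||| " + story for claim, story, _ in pairs]
labels = [label for _, _, label in pairs]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, stratify=labels, random_state=0
)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_train, y_train)

# On real data this held-out pairwise accuracy is the figure the article
# reports (about 90 percent); on this toy set the number is not meaningful.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```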

“We need to empower journalists to uncover the truth and keep us informed,” said Chris Dulhanty, a graduate student who led the project. “This represents one effort in a larger body of work to mitigate the spread of disinformation.”
