In the age of viral headlines and endless scrolling, misinformation travels faster than the truth. Even careful readers can be swayed by stories that sound factual but twist logic in subtle ways, quietly distorting reality without ever quite crossing the line into a lie.
That’s where Skeptik comes in.
Developed by a cross-disciplinary team from Arizona State University, Skeptik is a new browser-based tool designed to help readers recognize these hidden flaws. The system, created by researchers from the School of Computing and Augmented Intelligence, part of the Ira A. Fulton Schools of Engineering, and ASU’s Center for Strategic Communication, uses large language models, the same kind of technology that powers modern artificial intelligence (AI) chatbots. Skeptik combines these models with human communication theory to automatically identify and explain logical fallacies in online news articles.
“Our goal isn’t to tell people what to think,” says Fan Lei, a researcher who led the project until he received his computer science doctoral degree from the Fulton Schools in 2025. “It’s to help them see how an argument is built, where it’s solid and where it might be taking shortcuts. We want to empower readers to think critically, not passively consume information.”
Seeing through the spin
Traditional fact-checking can verify whether a claim is true, but it often misses the deeper structure of persuasion and the rhetorical sleights of hand that make falsehoods seem reasonable. The Skeptik framework fills that gap by scanning news text for logical inconsistencies and marking suspect sentences directly within the article.
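The article doesn’t include Skeptik’s code, but the scanning step it describes can be sketched in a few lines. The snippet below is a minimal illustration, assuming an OpenAI-style chat API; the model name, prompt wording, sentence splitting and fallacy label set are all assumptions made for the sketch, not the team’s published pipeline.

```python
# Minimal sketch of LLM-based fallacy scanning. NOT Skeptik's actual code:
# the prompt, model name and label set here are illustrative assumptions.
import json
from openai import OpenAI

FALLACY_LABELS = ["vagueness", "straw man", "false cause", "appeal to emotion", "none"]

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def scan_sentence(sentence: str) -> dict:
    """Ask the model to label one sentence with a fallacy type and a rationale."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works for this sketch
        messages=[
            {"role": "system",
             "content": ("You are a logical-fallacy detector. Reply with JSON: "
                         f'{{"label": one of {FALLACY_LABELS}, "rationale": "..."}}')},
            {"role": "user", "content": sentence},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

def scan_article(text: str) -> list[dict]:
    """Naive sentence split, then collect every sentence the model flags."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [
        {"sentence": s, **verdict}
        for s in sentences
        if (verdict := scan_sentence(s))["label"] != "none"
    ]
```

A production system would need a real sentence segmenter and character offsets for in-article highlighting, but the core loop, classify each sentence and keep the flagged ones, is the part this sketch shows.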
Readers can then click to reveal brief explanations and external evidence, or even start a live chat with an AI model that provides deeper clarification. In the system’s prototype, each fallacy type is color-coded and linked to an interactive sidebar. A vague statement, for instance, might appear underlined in purple, while a red line could flag a straw man argument. Hovering reveals a short explanation, and clicking opens multi-layered “intervention” panels that guide readers through progressively deeper insights.
The first layer offers a simple clarification, explaining why the reasoning may be misleading. The second layer provides supporting evidence and counterarguments to help readers evaluate the claim more critically. The third layer offers proactive context, anticipating similar misinformation patterns before the reader encounters them again.
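One way to picture the annotations and their three-layer interventions is as a small data model. The sketch below is based only on the behavior described above; the field names and the color mapping are assumptions for illustration, not Skeptik’s actual schema.

```python
# Sketch of a flagged fallacy and its layered "intervention" panels,
# reconstructed from the description above. Names are hypothetical.
from dataclasses import dataclass

# Color-coding as described in the prototype: purple for a vague statement,
# red for a straw man argument. Anything else falls back to gray.
FALLACY_COLORS = {
    "vagueness": "purple",
    "straw man": "red",
}

@dataclass
class Intervention:
    clarification: str      # layer 1: why the reasoning may be misleading
    evidence: str           # layer 2: supporting evidence and counterarguments
    proactive_context: str  # layer 3: similar misinformation patterns to expect

@dataclass
class FallacyAnnotation:
    sentence: str           # the flagged span within the article
    fallacy_type: str       # e.g. "vagueness" or "straw man"
    intervention: Intervention

    @property
    def color(self) -> str:
        """Underline color shown in the article view."""
        return FALLACY_COLORS.get(self.fallacy_type, "gray")
```

In a live system the second and third layers would likely be generated or retrieved on demand rather than stored up front, but the progression from a quick clarification to evidence to forward-looking context maps directly onto these three fields.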
“People don’t always fall for misinformation because they’re careless,” Lei says. “They fall for it because persuasive writing often feels logical. We wanted to give readers a way to pause and ask, ‘Does this conclusion really follow from the evidence?’”
Read the full story on Engineering News.