Towards Robust Fact-Checking: A Multi-Agent System with Advanced Evidence Retrieval
Code associated with the NAACL 2025 paper "COVE: COntext and VEracity prediction for out-of-context images"
This repository provides scripts and workflows for translating fact-checking datasets and automating claim classification using large language models (LLMs).
debunkr.org Dashboard is a browser extension that helps you analyze suspicious content on the web using AI-powered analysis. Simply highlight text on any website, right-click, and let our egalitarian AI analyze it for bias, manipulation, and power structures.
Tathya (तथ्य, "truth") is an agentic fact-checking system that verifies claims using multiple sources, including Google Search, DuckDuckGo, Wikidata, and news APIs. It provides structured analysis with confidence scores, detailed explanations, and transparent source attribution through a modern Streamlit interface and FastAPI backend.
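A multi-source checker like this ultimately has to combine per-source verdicts into one answer. A minimal sketch of one way that aggregation could work (a confidence-weighted vote; the source names, weights, and `SourceVerdict` type are illustrative assumptions, not taken from the project):

```python
from dataclasses import dataclass

@dataclass
class SourceVerdict:
    source: str        # e.g. "google_search", "wikidata"
    supports: bool     # does this source's evidence support the claim?
    confidence: float  # 0.0 - 1.0

def aggregate(verdicts):
    """Confidence-weighted vote over per-source verdicts."""
    score = sum(v.confidence * (1 if v.supports else -1) for v in verdicts)
    total = sum(v.confidence for v in verdicts)
    label = "supported" if score > 0 else "refuted"
    return label, abs(score) / total  # overall verdict + confidence

verdicts = [
    SourceVerdict("google_search", True, 0.8),
    SourceVerdict("duckduckgo", True, 0.6),
    SourceVerdict("wikidata", False, 0.3),
]
label, conf = aggregate(verdicts)
print(label, round(conf, 2))  # → supported 0.65
```

Disagreeing sources reduce the overall confidence rather than being discarded, which keeps the final verdict honest about conflicting evidence.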
OpenSiteTrust is an open, explainable, and reusable website scoring ecosystem
🔍 ABCheckers 💬 is a data-driven project that analyzes Twitter discourse to uncover misinformation around 🇵🇭 inflation and the weakening peso, empowering users with contextual insights.
Media Literacy System powered by AI - Analyze news for bias and manipulation.
An advanced AI-powered fake news detection system that verifies text, images, and social media posts using Gemini AI, FastAPI, and Next.js. Includes a modern web interface, a lightweight Streamlit app, and a Chrome extension for real-time fake content detection. Built to combat misinformation with explainable AI results and contextual source links.
This project implements a complete NLP pipeline for Persian tweets to classify topics and detect fake news. Using a Random Forest classifier, it compares tweet content with trusted news sources, achieving 70% accuracy in fake news detection.
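The described pipeline (text features plus a Random Forest classifier) can be sketched in a few lines. Everything below is illustrative, not the repository's code: the toy data is English rather than Persian, and TF-IDF is one plausible featurization choice:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Toy training data: tweets labeled real (1) / fake (0).
tweets = [
    "Central bank confirms new inflation figures",
    "Miracle cure hidden by the government, share now!",
    "Official report released on fuel prices",
    "Celebrity secretly controls the stock market",
]
labels = [1, 0, 1, 0]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),              # word + bigram features
    RandomForestClassifier(n_estimators=100, random_state=0),
)
model.fit(tweets, labels)

pred = model.predict(["New official inflation report published"])[0]
print(pred)
```

In the real system the comparison against trusted news sources would feed additional features into the classifier; the sketch shows only the text-classification skeleton.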
Fine-tuned roberta-base classifier on the LIAR dataset. Accepts multiple input types (text, URLs, and PDFs) and outputs a prediction with a confidence score. It also leverages google/flan-t5-base to generate explanations and uses agentic AI with LangGraph to orchestrate agents for planning, retrieval, execution, fallback, and reasoning.
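The "prediction with a confidence score" in classifiers like this is typically the softmax probability of the top class over the model's output logits. A minimal sketch (the logits are made up; LIAR's six veracity labels are shown for illustration):

```python
import numpy as np

def predict_with_confidence(logits):
    """Turn raw classifier logits into a label index and confidence,
    as a RoBERTa-style sequence classifier typically does."""
    exp = np.exp(logits - np.max(logits))  # numerically stable softmax
    probs = exp / exp.sum()
    label = int(np.argmax(probs))
    return label, float(probs[label])

# Example logits for LIAR's six classes:
# (pants-fire, false, barely-true, half-true, mostly-true, true)
logits = np.array([0.1, 2.3, 0.4, 0.2, -0.5, -1.0])
label, conf = predict_with_confidence(logits)
print(label, round(conf, 3))  # → 1 0.675  ("false" with 67.5% confidence)
```

A low top-class probability is a useful signal to route the input to a fallback agent, which is where an orchestration layer like LangGraph comes in.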
A modern, responsive website showcasing the Financial Misinformation Detection (FMD) project. It compares three AI models—Logistic Regression, RNN (LSTM), and BERT—on the COLING25-FMD dataset, featuring interactive comparisons, performance metrics, model insights, and advanced text analysis in a sleek, animated UI/UX.
Trinetra AI is a tool that verifies AI-generated text and other potential falsehoods. It offers a fast, accessible web experience where users receive verdicts, confidence scores, and topical categories, along with trend graphics and community feedback.
A React + Vite + Tailwind CSS web app that verifies text for potential misinformation in real time using Gemini AI. Delivers a minimal, responsive UI with clear verdicts, confidence scores, and category tags. Includes a dashboard-ready structure and components for insights and community upvoting.
Node.js + Express API that powers misinformation verification by integrating Gemini AI and MongoDB. Exposes endpoints for verification, category summaries, upvoting, and health checks, designed for low-latency responses. Persists flagged content with confidence and metadata for analytics and auditability.
Imagine Hashing embeds cryptographic hashes into images using steganography and SHA256 to ensure authenticity, integrity, and resilience against tampering or manipulation.
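The core idea (embed a SHA-256 digest of the image into its own least-significant bits, so tampering breaks verification) can be sketched with the standard library alone. This is a minimal illustration operating on a raw byte buffer rather than a real image file, and it is not the repository's implementation:

```python
import hashlib

HASH_BITS = 256  # SHA-256 output length in bits

def _clear_lsbs(pixels: bytes) -> bytearray:
    """Zero the LSBs of the carrier bytes so hashing is repeatable."""
    out = bytearray(pixels)
    for i in range(HASH_BITS):
        out[i] &= 0xFE
    return out

def embed_hash(pixels: bytes) -> bytearray:
    """Embed SHA-256 of the LSB-cleared data into the first 256 LSBs."""
    out = _clear_lsbs(pixels)
    digest = hashlib.sha256(bytes(out)).digest()
    for i in range(HASH_BITS):
        bit = (digest[i // 8] >> (7 - i % 8)) & 1
        out[i] |= bit
    return out

def verify_hash(pixels: bytes) -> bool:
    """Extract the embedded hash and check it against the data."""
    bits = [pixels[i] & 1 for i in range(HASH_BITS)]
    embedded = bytes(
        sum(bits[j * 8 + k] << (7 - k) for k in range(8)) for j in range(32)
    )
    expected = hashlib.sha256(bytes(_clear_lsbs(pixels))).digest()
    return embedded == expected

image = bytes(range(256)) * 4            # stand-in for raw pixel data
stego = embed_hash(image)
print(verify_hash(stego))                # → True: untouched image verifies
tampered = bytearray(stego)
tampered[-1] ^= 0xFF                     # flip one pixel byte
print(verify_hash(bytes(tampered)))      # → False: tampering is detected
```

Because the digest covers every byte except the carrier LSBs themselves, changing any pixel after embedding invalidates the stored hash.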
NLP classifier for misinformation detection. TruStorE™ blends linguistic heuristics, sentiment drift analysis, and Word Pair Logic™ to flag manipulative tone in news articles, a tell-tale sign of fake news in which emotional appeals substitute for facts. Built for reproducibility, modular deployment, and artifact-grade impact.
Code for the paper "Evaluating AI capabilities in detecting conspiracy theories on YouTube".
Source Code for the Bachelor's Project: Hybrid Small Language Models for Accurate Multimodal Disinformation and Misinformation Analysis
Watermarking System | AI-Generated Media Detection A system for detecting and flagging AI-generated images using ML and steganography. Ensures authenticity with imperceptible, resilient watermarks embedded at creation.