Yes, LLMs hallucinate, so you might be wary of their ability to fact-check. But fact-checking is a very good use case for AI, as long as you give it the right instructions. 

NOTE: This approach works for general fact-checking of claims about publicly available information. To fact-check proprietary or esoteric information, you will want to build and use a RAG (retrieval-augmented generation) pipeline instead.

Here is a great fact-checking template you can copy/paste:

You are a tireless researcher and excellent at fact-checking.   

Thoroughly fact-check the text I provide below. Analyze every factual claim, statistic, date, name, technical specification, and verifiable statement.

Your response should only include a "fact-check list" section with:

**Claims that should be verified:**
1. [Specific factual claim 1]
2. [Specific factual claim 2]
N. [More...]

**Information to double-check:**
- [Statistics or data points]
- [Dates or timeframes]
- [Technical specifications]

**Questionable claims or potentially inaccurate:**
- [List claims that seem false or dubious]
- [Include contradictions within the text]
- [Note implausible statements]

**Vague or misleading statements:**
- [Statements that lack specificity]
- [Claims that need sources or context]

**Confidence levels:**
- High confidence: [Claims you're very sure about]
- Medium confidence: [Claims that might need verification]
- Low confidence: [Claims you're uncertain about]

--
Text to fact-check: [PASTE YOUR TEXT HERE]
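If you want to use the template from code rather than copy/paste, it drops into a helper function easily. Here is a minimal sketch; the template string is condensed (paste the full version in practice), and the function name is my own, not part of the template:

```python
# Minimal sketch: fill the fact-checking template with the text to verify.
# The template is condensed here; use the full version from above in practice.

FACT_CHECK_TEMPLATE = """You are a tireless researcher and excellent at fact-checking.

Thoroughly fact-check the text I provide below. Analyze every factual claim, \
statistic, date, name, technical specification, and verifiable statement.

Your response should only include a "fact-check list" section.

--
Text to fact-check: {text}"""


def build_fact_check_prompt(text: str) -> str:
    """Return the template with the text to check pasted in."""
    return FACT_CHECK_TEMPLATE.format(text=text)


prompt = build_fact_check_prompt("The Eiffel Tower is 330 meters tall.")
```

From here you would send `prompt` to whichever LLM you use, exactly as you would paste it into a chat window.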

An example:

You are a tireless researcher and excellent at fact-checking.   

Thoroughly fact-check the text I provide below. Analyze every factual claim, statistic, date, name, technical specification, and verifiable statement.

Your response should only include a "fact-check list" section with:

**Claims that should be verified:**
1. [Specific factual claim 1]
2. [Specific factual claim 2]
N. [More...]

**Information to double-check:**
- [Statistics or data points]
- [Dates or timeframes]
- [Technical specifications]

**Questionable claims or potentially inaccurate:**
- [List claims that seem false or dubious]
- [Include contradictions within the text]
- [Note implausible statements]

**Vague or misleading statements:**
- [Statements that lack specificity]
- [Claims that need sources or context]

**Confidence levels:**
- High confidence: [Claims you're very sure about]
- Medium confidence: [Claims that might need verification]
- Low confidence: [Claims you're uncertain about]

--
Text to fact-check: OpenAI released GPT-5 in early 2023 with a trillion-parameter architecture using a mixture-of-experts design similar to Google’s Switch Transformer. The model was trained entirely on publicly available datasets totaling 10 trillion tokens. GPT-5 can pass the U.S. Bar Exam with a perfect score and is certified by the American Medical Association for clinical-grade diagnostic recommendations. The system also runs entirely on consumer-grade GPUs like the NVIDIA RTX 3080, thanks to its new low-precision compute scheme.

Why this works:

You are giving the LLM very specific instructions, both positive (what to include) and negative (what to leave out), and the LLM will fill in that structure. AI models, after all, are trained to please. Hallucinations are still possible, but these guardrails reduce their likelihood.
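To run the template programmatically instead of pasting it into a chat window, a call to an OpenAI-style chat API might look like the sketch below. The `openai` package, the model name, and the helper name are my assumptions here, not something prescribed by the template:

```python
# Hypothetical helper: send the fact-checking prompt to an OpenAI-style
# chat API. Requires the `openai` package and an OPENAI_API_KEY env var;
# the model name is an assumption, so swap in whichever model you use.

def fact_check(text: str, model: str = "gpt-4o") -> str:
    from openai import OpenAI  # imported lazily so the helper stays optional

    prompt = (
        "You are a tireless researcher and excellent at fact-checking.\n\n"
        "Thoroughly fact-check the text I provide below. Analyze every "
        "factual claim, statistic, date, name, technical specification, "
        "and verifiable statement.\n\n"
        f"Text to fact-check: {text}"
    )
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```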

Next up: The step-by-step guide


Discover more from johnmaconline

Subscribe to get the latest posts sent to your email.
