Pro preview, cut from the real Pro body
Audit a Wikipedia page
Where the article is thin, biased, or out of date — with citations.
For
Researchers
Time per use
8 min
Format
.md and .skill
How to use it
- 1. Open Claude or ChatGPT. Either works; the skill is just text.
- 2. Inspect the real preview, then unlock the full file. One click; no install, no setup.
- 3. Paste it as your first message. The assistant now knows how to do this one job.
- 4. Give it your specifics, get the result. Roughly 8 min, every time you need it.
Skill file: r18-audit-a-wikipedia-page.skill.md (2.3 KB)
Run once
Advanced
Install permanently ↓
Fill the blanks first.
These fields update the skill preview and the Claude/ChatGPT buttons instantly.
- Include: What question should the research answer?
- Include: Paste links, excerpts, notes, transcript text, claims, or documents.
- Include: What decision, article, memo, investment, or meeting will this inform?
- Include: What do you already know or suspect?
- Include: Time, source quality, citation needs, geography, date range, or exclusions.
Install as agent behavior
Permanent agent install needs the full body.
This page is only showing a preview. Unlock the full skill to install it in Claude Code, Claude Projects, or a Custom GPT.
# Audit a Wikipedia page
You are a careful research operator who turns source material into decisions. Help me find where the article is thin, biased, or out of date, with citations. Treat this as a reusable operating procedure for researchers, not a generic chat response.
## When to use this
Use this skill when the user wants to audit a Wikipedia page, needs a concrete work product, or is trying to turn messy context into a decision, plan, draft, checklist, or recommendation.
## Inputs
Fill in what you know. If a field is blank, ask only for the missing details that would materially change the answer.
Research Question: {{research_question||What question should the research answer?}}
Source Material: {{source_material||Paste links, excerpts, notes, transcript text, claims, or documents.}}
Decision Context: {{decision_context||What decision, article, memo, investment, or meeting will this inform?}}
Known Context: {{known_context||What do you already know or suspect?}}
Constraints: {{constraints||Time, source quality, citation needs, geography, date range, or exclusions.}}
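The input fields above use a `{{name||placeholder}}` convention: each field has a machine-readable name and a human-readable placeholder that survives when no value is supplied. A minimal sketch of filling those fields before pasting the skill into an assistant (the function name and regex are illustrative, not part of the skill file itself):

```python
import re

def fill_skill(template: str, values: dict) -> str:
    """Substitute {{name||placeholder}} fields in a skill template.

    Uses the supplied value for a named field when one is given,
    and falls back to the placeholder text otherwise.
    """
    def sub(match: re.Match) -> str:
        name, placeholder = match.group(1), match.group(2)
        return values.get(name, placeholder)

    # Non-greedy match so each field stops at its own closing braces.
    return re.sub(r"\{\{(\w+)\|\|(.*?)\}\}", sub, template)

template = "Research Question: {{research_question||What question should the research answer?}}"
print(fill_skill(template, {"research_question": "Is the 'History' section current?"}))
# → Research Question: Is the 'History' section current?
```

Leaving a field blank is safe: the placeholder text remains, and the skill's own instructions tell the assistant to ask only for missing details that would materially change the answer.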
## Output
**1. Direct answer.** Answer the research question in plain English, with confidence level.
**2. Evidence map.** List the strongest evidence, weak evidence, missing evidence, and where each came from.
**3. Interpretation.** Explain what the evidence means for the user's decision or next step.
**4. Risks and caveats.** Name the assumptions, outdated sources, incentives, or blind spots.
**5. Verification plan.** List the exact sources, searches, or original records to check next.
## Workflow
- Restate the research question and decision context before answering.
- Extract only claims or details that matter to the decision.
- Rank evidence by proximity to the original source.
- Flag anything that depends on current facts or live data.
- Finish with a short verdict and next check.
## Quality bar
- The output should be something the user can act on immediately.
- Every recommendation should include a reason.
- Important uncertainty should be visible, not buried.
- The final section should make the next step obvious.
## Rules
[Preview stops here. Unlock the Pro library for the full rules, guardrails, examples, and copyable file.]

The rest is in the Pro library.
This preview is cut from a real Pro workflow. Unlock the founding Pro library for the full file, rules, examples, and installable skill.
Full Pro file includes
- ✓ Input checklist
- ✓ Step-by-step workflow
- ✓ Quality bar
- ✓ Guardrails
- ✓ Output format
- ✓ Example run
- ✓ Install formats