4. **Remove exact duplicates**: `python3 $TOOLS_DIR/duplicates.py <file.bib> --exact` — this comments out entries that are identical (same key, same type, same fields). Safe to auto-remove, since no information is lost.
5. **Run field comparison**: `python3 $TOOLS_DIR/compare.py <file.bib>` — this programmatically compares every entry against CrossRef and returns exact field-level mismatches. Do NOT skip this step or rely on visual comparison alone. The output is a JSON list; each element has `key`, `versions` (a list of alternative CrossRef candidate matches for the same entry, each with `mismatches`, `url`, `doi`, etc.), and `error`. When multiple versions are returned, choose the best-matching candidate; do not combine fields from different versions. **Skip rule**: if an entry has zero mismatches across all versions and no error in the compare.py output, skip it entirely — do NOT investigate, modify, or add comments to it. Only proceed with entries that compare.py flagged (mismatches or errors).
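As one way to apply the skip rule programmatically, here is a minimal sketch. The field names (`key`, `versions`, `mismatches`, `error`) follow the description above, but `compare.py`'s actual output may differ in detail, and the sample data is synthetic:

```python
def flagged_keys(compare_output):
    """Apply the skip rule: keep only entries with a mismatch or an error."""
    flagged = []
    for item in compare_output:
        has_error = bool(item.get("error"))
        has_mismatch = any(v.get("mismatches") for v in item.get("versions", []))
        if has_error or has_mismatch:
            flagged.append(item["key"])
    return flagged

# Synthetic data shaped like the description above:
sample = [
    {"key": "clean2021", "versions": [{"mismatches": []}], "error": None},
    {"key": "dirty2020", "versions": [{"mismatches": [{"field": "pages"}]}], "error": None},
    {"key": "lost2019", "versions": [], "error": "No exact title match"},
]
print(flagged_keys(sample))  # → ['dirty2020', 'lost2019']
```

Entries like `clean2021` above fall under the skip rule and never reach the verification or editing steps.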
6. **Verify every planned modification with web search** — for entries that compare.py flagged with mismatches or errors, verify the planned action via web search. For `fix` patches, gather one or more source URLs. Entries where `compare.py` returned an error (e.g. "No exact title match") still need full verification — the verification agent should search for the paper and check all fields. **Important: after selecting the best-matching version, verification agents MUST NOT override that selected version's `compare.py` field values.** CrossRef is the authoritative source for metadata (pages, volume, number, etc.) because it receives data directly from publishers via DOI registration. When web search finds a conflicting value (e.g. different page numbers on a conference website), always use the CrossRef value and add `% bibtidy: REVIEW` if desired — but do NOT keep the old value.
7. **Flag hallucinated/non-existent references** — if compare.py returned an error (e.g. "No CrossRef results found" or "No exact title match in CrossRef results") AND web search also finds no matching paper, the reference likely does not exist. Add `% bibtidy: NOT FOUND — no matching paper on CrossRef or web search; verify this reference exists` above the entry, then comment out the entire entry (prefix every line with `% `). Do NOT add a URL line.
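Commenting out an entry is mechanical and worth scripting rather than doing by hand. A sketch, where the annotation string is the literal text specified above and the sample entry is invented:

```python
def flag_not_found(entry_text):
    """Prefix the NOT FOUND annotation and comment out every line of the entry."""
    note = "% bibtidy: NOT FOUND — no matching paper on CrossRef or web search; verify this reference exists"
    body = "\n".join("% " + line for line in entry_text.splitlines())
    return note + "\n" + body

entry = "@article{ghost2023,\n  title={A Paper That May Not Exist},\n  year={2023},\n}"
print(flag_not_found(entry))
```

Every output line starts with `% `, so the entry stays in the file for human review but is invisible to BibTeX.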
8. Apply fixes **sequentially** using `edit.py` — do NOT edit the .bib file directly with agent editing tools (for example, Claude Code Edit or Codex `apply_patch`), and do NOT rewrite the entire file. Build a patches.json for each entry (or batch) and run `python3 $TOOLS_DIR/edit.py <file.bib> <patches.json>`. This ensures the commented original, source URLs, and explanation are always included. After selecting the correct version, you MUST apply **every** mismatch from that selected version — do not skip any field (including `author`, `number`, `pages`, `volume`). In particular, if the bib entry uses `and others` but CrossRef returns the full author list, you MUST replace the truncated list with the complete one from CrossRef. Use the `crossref_value` exactly as given (do NOT rephrase, reformat, or partially apply it). For title mismatches on preprint→published upgrades, replace the entire title with the CrossRef title — do NOT try to edit parts of the old title. Never reject a CrossRef value because another source disagrees. Every patch MUST include `urls` (list of source URLs) and `explanation` (what changed and why). Include the CrossRef URL from compare.py's `url` field when available, plus any other authoritative source (DOI URL, venue page) found via web search.
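A patch list might be built like the sketch below. The schema shown is illustrative of the requirements above (every patch carries `urls` and `explanation`); check `edit.py` for the exact fields it expects. The entry key, field values, and URL are hypothetical placeholders:

```python
import json

# Illustrative `fix` patch; the exact schema expected by edit.py may differ.
# Key, field values, and URL below are hypothetical placeholders.
patches = [
    {
        "key": "smith2020deep",
        "action": "fix",
        "fields": {
            "volume": "34",
            "number": "2",
            "pages": "1123--1135",
        },
        "urls": [
            "https://api.crossref.org/works/10.0000/hypothetical-doi",
        ],
        "explanation": "volume/number/pages corrected to the CrossRef values "
                       "from the selected compare.py version.",
    }
]

with open("patches.json", "w") as f:
    json.dump(patches, f, indent=2)

# Then apply: python3 $TOOLS_DIR/edit.py refs.bib patches.json
```

Writing the file and then invoking `edit.py` (rather than editing the .bib directly) is what guarantees the commented original, source URLs, and explanation are recorded with every change.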
9. **Post-fix exact duplicate removal**: `python3 $TOOLS_DIR/duplicates.py <file.bib> --exact` — entries that were different before fixing may now be identical after metadata corrections. Comment out any new exact duplicates.
10. **Detect near-duplicates**: `python3 $TOOLS_DIR/duplicates.py <file.bib>` — flag entries that share the same key, DOI, or title (with a shared author), plus likely preprint→published pairs with the same lead author and overlapping significant title words, but are not identical. Apply `duplicate` patches via `edit.py` to add `% bibtidy: DUPLICATE of <other_key>` comments. Do NOT delete or comment out near-duplicates.
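A `duplicate` patch is annotation-only. A sketch of what one might look like, with the same caveat that the schema is assumed rather than taken from `edit.py`, and both entry keys are hypothetical:

```python
# Assumed shape of a `duplicate` patch: it adds a comment line above the
# entry but never removes it. Both keys here are hypothetical.
dup_patch = {
    "key": "smith2020deep_arxiv",
    "action": "duplicate",
    "comment": "% bibtidy: DUPLICATE of smith2020deep",
}
print(dup_patch["comment"])  # → % bibtidy: DUPLICATE of smith2020deep
```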
11. Run format validation; fix violations and re-run until clean
12. Delete backup: `rm <file>.bib.orig`
13. Print a Markdown summary table with headers `Metric | Count` and exactly these rows: total entries, verified, fixed, not found, exact duplicates removed, near-duplicates flagged. Do NOT include a separate "needs manual review" row.
## Parallel Verification with Subagents
Use subagents, when available, to verify multiple entries concurrently. This dramatically reduces wall-clock time (e.g., 7 entries: ~1 min parallel vs ~5 min sequential; 100 entries: ~3 min vs ~40 min). If subagents are unavailable, do the same verification work sequentially yourself.
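Subagent dispatch is environment-specific, but the fan-out/fan-in pattern is the same as ordinary concurrent mapping. A thread-based sketch with a stubbed verifier (the keys and the `verify_entry` stub are invented for illustration, not part of the toolchain):

```python
from concurrent.futures import ThreadPoolExecutor

def verify_entry(key):
    # Stub: a real verifier would web-search the entry and return
    # a JSON summary of confirmed/rejected mismatches.
    return {"key": key, "confirmed": True}

flagged = ["smith2020deep", "lost2019", "dirty2020"]  # hypothetical keys

# Fan out one verification task per flagged entry; pool.map preserves order.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(verify_entry, flagged))

print([r["key"] for r in results])  # → ['smith2020deep', 'lost2019', 'dirty2020']
```

Because each entry is verified independently, the results can be collected in any order and applied sequentially afterwards, which is exactly what makes the parallel dispatch safe.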
**Step 1 — Dispatch verification agents:** For entries that `compare.py` flagged with mismatches or errors, launch a subagent that:
- For mismatches: uses web search to confirm the CrossRef data (especially for preprint upgrades and author changes)
- For errors (e.g. paper not found in CrossRef): uses web search to verify **every** field from scratch — title, author, journal/booktitle, volume, number, pages, year. Do NOT skip number or other fields just because they look plausible.
- Returns a JSON summary: key, whether each mismatch is confirmed, source URL, CrossRef URL (if there is a CrossRef match), any additional corrections found
## Duplicate Detection
Duplicate handling has three phases (see workflow steps 4, 9, 10):
**Exact duplicates** (same key, type, and all field values): `python3 $TOOLS_DIR/duplicates.py <file.bib> --exact` comments them out automatically. Run before and after metadata fixes.
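Exact duplication can be defined as an identical (key, type, fields) signature. A sketch of how `duplicates.py --exact` might detect it (the real implementation may differ; the sample entries are invented):

```python
def exact_duplicate_indices(entries):
    """Indices of entries identical to an earlier one (same key, type, and fields)."""
    seen = {}
    dups = []
    for i, e in enumerate(entries):
        # Sorting field items makes field order irrelevant to the comparison.
        sig = (e["key"], e["type"], tuple(sorted(e["fields"].items())))
        if sig in seen:
            dups.append(i)
        else:
            seen[sig] = i
    return dups

entries = [
    {"key": "a1", "type": "article", "fields": {"title": "X", "year": "2020"}},
    {"key": "a1", "type": "article", "fields": {"year": "2020", "title": "X"}},
    {"key": "b2", "type": "article", "fields": {"title": "Y", "year": "2021"}},
]
print(exact_duplicate_indices(entries))  # → [1]
```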
**Near-duplicates** (same key, DOI, or title with shared author, plus likely preprint→published pairs with the same lead author and overlapping significant title words, but different content): `python3 $TOOLS_DIR/duplicates.py <file.bib>` returns a JSON array of pairs. For each, apply a `duplicate` patch via `edit.py` to add `% bibtidy: DUPLICATE of <other_key>`. Do NOT delete or comment out near-duplicates.
## Per-Entry Checks
For each `@article`, `@inproceedings`, `@book`, etc.:
**1. Verify existence** — Search for `"<title>" <first author last name>`. If not found: `% bibtidy: NOT FOUND — verify manually`
**2. Cross-check metadata** — Always search via`crossref.py search "<title>"`. If DOI exists, also fetch via `crossref.py doi <DOI>`. If neither finds a match, fall back to `crossref.py bibliographic "<title>"`. Compare title, year, authors, journal, volume, number, pages.
**2. Cross-check metadata** — `compare.py` runs both `crossref.py search "<title>"` and `crossref.py bibliographic "<title>"` unconditionally, plus `crossref.py doi <DOI>` when a DOI exists, deduplicating results by DOI. Only exact normalized title matches are kept. Compare title, year, authors, journal, volume, number, pages.
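The exact normalization `compare.py` applies is not specified here; one plausible sketch of exact-normalized-title matching and DOI-based deduplication:

```python
import re

def normalize_title(title):
    """Lowercase, strip punctuation, collapse whitespace (assumed normalization)."""
    return re.sub(r"\s+", " ", re.sub(r"[^a-z0-9 ]", " ", title.lower())).strip()

def dedup_by_doi(candidates):
    """Keep the first candidate per DOI; candidates without a DOI are all kept."""
    seen, unique = set(), []
    for c in candidates:
        doi = (c.get("doi") or "").lower()
        if doi in seen:
            continue
        if doi:
            seen.add(doi)
        unique.append(c)
    return unique

print(normalize_title("The Quick,  Brown Fox!"))  # → the quick brown fox
```

Normalizing before comparison is what lets a title differing only in punctuation or capitalization still count as an exact match, while anything with different words is rejected.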
**3. Check for published preprints** — If journal contains "arxiv"/"biorxiv"/"chemrxiv", search for published version. Update title, venue, year, volume, pages, entry type. Only update if confirmed via DOI or two independent sources.
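The preprint check above can be sketched as a simple substring test on the venue field (the sample entries are invented):

```python
PREPRINT_MARKERS = ("arxiv", "biorxiv", "chemrxiv")

def looks_like_preprint(entry):
    """True if the journal/booktitle field names a known preprint server."""
    venue = (entry.get("journal") or entry.get("booktitle") or "").lower()
    return any(marker in venue for marker in PREPRINT_MARKERS)

print(looks_like_preprint({"journal": "arXiv preprint arXiv:2101.00001"}))  # → True
print(looks_like_preprint({"journal": "Nature Machine Intelligence"}))      # → False
```

A hit here only triggers the search for a published version; the entry itself is updated only after the published record is confirmed via DOI or two independent sources.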