Fetch. Summarize.
Understand.
A Python toolkit for summarizing research papers from arXiv. Fetch papers, generate summaries, and compare research—all powered by 100+ LLMs via LiteLLM.
Features
Everything You Need
A complete toolkit for fetching, summarizing, and analyzing research papers from arXiv.
Paper Fetching
Fetch papers from arXiv by ID or URL. Supports all arXiv formats including new IDs, old IDs, and direct PDF links.
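Accepting "all arXiv formats" means normalizing new-style IDs, old-style archive IDs, and abs/pdf URLs down to one bare identifier. The helper below is an illustrative sketch of that normalization, not thom's actual parser:

```python
import re

def normalize_arxiv_id(ref: str) -> str:
    """Reduce an arXiv reference (ID, abs/pdf URL, old-style ID) to a bare ID.

    Illustrative sketch only -- not thom's real implementation.
    """
    # Strip a leading arxiv.org URL (abs or pdf page) if present
    ref = re.sub(r"^https?://arxiv\.org/(abs|pdf)/", "", ref)
    # Drop a trailing .pdf extension from direct PDF links
    ref = re.sub(r"\.pdf$", "", ref)
    # New-style: YYMM.NNNNN (optional version); old-style: archive/NNNNNNN
    if re.fullmatch(r"\d{4}\.\d{4,5}(v\d+)?", ref) or \
       re.fullmatch(r"[a-z-]+(\.[A-Z]{2})?/\d{7}(v\d+)?", ref):
        return ref
    raise ValueError(f"Unrecognized arXiv reference: {ref!r}")
```

With this shape, `normalize_arxiv_id("https://arxiv.org/pdf/1706.03762.pdf")` and `normalize_arxiv_id("1706.03762")` both resolve to the same ID.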
AI Summarization
Generate intelligent summaries with key points using any LLM. Choose brief, medium, or detailed summaries in any language.
arXiv Search
Search arXiv with powerful query syntax. Filter by author, category, date, and more. Sort by relevance or date.
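arXiv's search API expresses these filters with field prefixes such as `au:` (author), `cat:` (category), and `all:` (all fields), joined by boolean operators. A query builder along these lines could produce the string thom sends; the field prefixes are arXiv's documented syntax, but the helper itself is a hypothetical sketch:

```python
def build_arxiv_query(terms=None, author=None, category=None):
    """Assemble an arXiv search_query string (illustrative sketch only)."""
    parts = []
    if terms:
        parts.append(f'all:"{terms}"')      # free-text search over all fields
    if author:
        parts.append(f'au:"{author}"')      # author filter
    if category:
        parts.append(f"cat:{category}")     # category filter, e.g. cs.CL
    return " AND ".join(parts)
```

For example, `build_arxiv_query(terms="transformer attention", category="cs.CL")` yields `all:"transformer attention" AND cat:cs.CL`.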
Paper Comparison
Compare multiple papers side-by-side. Identify common themes, differences, and how research builds upon prior work.
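At its core, a comparison like this packs each paper's title and abstract into a single prompt and asks the model for themes, differences, and lineage. A rough sketch of such a prompt builder (hypothetical, not thom's internal code):

```python
def build_comparison_prompt(papers):
    """papers: list of (title, abstract) pairs. Illustrative sketch only."""
    blocks = [
        f"Paper {i}: {title}\nAbstract: {abstract}"
        for i, (title, abstract) in enumerate(papers, 1)
    ]
    instructions = (
        "Compare the following papers. Identify common themes, "
        "key differences, and how later work builds on earlier work.\n\n"
    )
    return instructions + "\n\n".join(blocks)
```

The resulting string would then be sent through LiteLLM to whichever model the user selected.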
100+ LLM Models
Powered by LiteLLM for maximum flexibility. Use OpenAI, Anthropic, Google, Cohere, Ollama, and many more providers.
CLI & Python API
Use as a command-line tool or import as a library. Intuitive interface for both quick tasks and complex workflows.
Quick Start
Simple by Design
Get started in seconds. Set your API key and start summarizing papers.
```python
import thom

# Fetch a paper from arXiv
paper = thom.fetch_paper("1706.03762")
print(paper.title)  # Attention Is All You Need

# Generate a summary
summary = thom.summarize(paper)
print(summary.summary)
print(summary.key_points)

# Search for papers
papers = thom.search("transformer attention", max_results=5)

# Compare multiple papers
analysis = thom.compare([paper1, paper2])
```
```python
# Use local Ollama models - no API key needed!
import thom

paper = thom.fetch_paper("2301.00001")

# Summarize with local Llama, Mistral, or Gemma
summary = thom.summarize(paper, model="ollama/gemma3:1b")
summary = thom.summarize(paper, model="ollama/llama3")
summary = thom.summarize(paper, model="ollama/mistral")

# Or use cloud providers
summary = thom.summarize(paper, model="gpt-4o")
summary = thom.summarize(paper, model="claude-3-5-sonnet-20241022")
```
CLI Reference
The thom Command
A powerful command-line interface for research paper analysis.
thom summarize: --model to choose the LLM, --detail (brief/medium/detailed), --language, --json.
thom fetch: --json for structured output.
thom search: -n for max results, --sort (relevance/lastUpdatedDate/submittedDate).
"What limits the true is not the false, but the insignificant." (René Thom)
Examples
Common Workflows
See thom in action with real-world examples.
```shell
# Summarize the famous Transformer paper
$ thom summarize 1706.03762
Fetching paper: 1706.03762...
Summarizing with gpt-4o-mini...

# Use a local model with Ollama
$ thom summarize 1706.03762 --model ollama/gemma3:1b
============================================================
Attention Is All You Need
============================================================

# Search for recent papers on a topic
$ thom search "large language models" -n 5
Found 5 papers:
1. [2401.xxxxx] Paper Title...

# Compare papers in a research area
$ thom compare 1706.03762 1810.04805 2005.14165
Comparing with gpt-4o-mini...
```