Claude 4.7 Opus
Anthropic's latest frontier Opus model, purpose-built for advanced software engineering, long-horizon agent work, and high-resolution multimodal reasoning.
Model Details
Provider
Anthropic
Model Type
text
Context Window
1,000,000 tokens
Pricing
Input (per 1M tokens): $5.00
Output (per 1M tokens): $25.00
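The listed rates translate directly into per-request cost. A minimal sketch (the function name is illustrative, not part of any SDK):

```python
# Cost estimate at the listed rates: $5.00 per 1M input tokens,
# $25.00 per 1M output tokens.
INPUT_PER_M = 5.00
OUTPUT_PER_M = 25.00

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed per-1M-token rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000
```

For example, a request that consumes 1M input tokens and produces 100K output tokens costs $5.00 + $2.50 = $7.50.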
Capabilities
1. State-of-the-art software engineering
- A notable upgrade over Opus 4.6 on the hardest coding tasks, with users reporting they can hand off work that previously required close supervision.
- Early partners reported double-digit gains on real-world benchmarks — e.g., Cursor saw CursorBench jump from 58% to 70%, and Rakuten-SWE-Bench resolution tripled versus Opus 4.6.
- Handles complex, long-running tasks with rigor: plans carefully, catches its own logical faults, and verifies its outputs before reporting back.
2. Long-horizon agent reliability
- Full 1M token context window at standard pricing, with state-of-the-art long-context consistency.
- Far fewer tool errors, stronger recovery from tool failures, and better follow-through on multi-step workflows — designed for async work like CI/CD, automations, and managing multiple agents in parallel.
- Stronger file-system-based memory, retaining useful notes across long, multi-session runs.
3. Sharper instruction following and honesty
- Takes instructions literally and precisely — existing prompts may need re-tuning since earlier models were more lenient.
- More honest about its own limits: reports missing data instead of fabricating plausible-but-wrong answers, and resists dissonant-data traps that tripped up Opus 4.6.
4. Substantially improved vision and multimodal reasoning
- Accepts images up to 2,576 px on the long edge (~3.75 MP) — over 3x more than prior Claude models.
- Unlocks dense-screenshot computer use, complex diagram extraction, and pixel-perfect reference tasks.
- Stronger document reasoning for enterprise analysis (e.g., 21% fewer errors than Opus 4.6 on Databricks' OfficeQA Pro).
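To stay within the 2,576 px long-edge limit mentioned above, oversized screenshots need downscaling before upload. A small helper for the arithmetic (hypothetical, not part of any SDK; only the 2,576 px constant comes from this page):

```python
# Hypothetical helper for the 2,576 px long-edge image limit described
# above. Function names are illustrative, not SDK API.
MAX_LONG_EDGE = 2576

def fit_scale(width: int, height: int, limit: int = MAX_LONG_EDGE) -> float:
    """Return the scale factor (<= 1.0) that fits the long edge within limit."""
    long_edge = max(width, height)
    return min(1.0, limit / long_edge)

def scaled_size(width: int, height: int, limit: int = MAX_LONG_EDGE) -> tuple[int, int]:
    """Return (width, height) after uniform downscaling to fit the limit."""
    s = fit_scale(width, height, limit)
    return (round(width * s), round(height * s))
```

A 5152x2000 screenshot, for instance, would be halved to 2576x1000, while anything already within the limit is left untouched.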
5. Top-tier professional knowledge work
- State-of-the-art on the Finance Agent evaluation and GDPval-AA, with tighter, more professional finance analyses, models, and presentations.
- Strong on legal work — e.g., 90.9% on BigLaw Bench at high effort, with better-calibrated reasoning on review tables and ambiguous edits.
- Noted by design-focused partners as the best model for building dashboards and data-rich interfaces.
6. Modern effort and budget controls
- Introduces a new `xhigh` effort level between `high` and `max` for finer control over reasoning vs. latency.
- Task budgets (public beta) let developers guide token spend across long runs.
- Recommended to start with `high` or `xhigh` effort for coding and agentic use cases.