
GEPA Unveils 'optimize_anything': Universal AI API to Optimize Code, Prompts, and Agents

GEPA AI has launched 'optimize_anything,' a groundbreaking open-source API that uses AI-driven search to optimize any text-based artifact—from code and prompts to agent architectures—by leveraging diagnostic feedback and Pareto-efficient multi-objective search. The tool has demonstrated dramatic improvements across eight domains, including a 47% speed boost for Claude Code and an ARC-AGI agent rising from 32.5% to 89.5% accuracy.

In a significant leap forward for artificial intelligence automation, GEPA AI has open-sourced optimize_anything, a universal API designed to optimize any text-based artifact—whether it’s source code, LLM prompts, agent configurations, or system scheduling policies—by treating optimization as a search problem grounded in measurable outcomes. Unlike traditional optimization tools that rely on hand-tuned heuristics or single-metric gradients, optimize_anything employs a novel framework where an AI proposer iteratively refines candidates based on rich diagnostic feedback, such as stack traces, profiler outputs, and rendered visualizations, enabling targeted, context-aware improvements.
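The loop the article describes (propose a candidate, evaluate it, feed the diagnostics back into the next proposal) can be sketched in plain Python. Everything below is illustrative: the `evaluate`/`propose` names, the `(score, diagnostics)` return convention, and the toy artifact are assumptions for exposition, not the library's actual API, and the rule-based `propose` merely stands in for an LLM proposer.

```python
# Illustrative sketch of a diagnostic-driven optimization loop.
# The function names and the (score, diagnostics) convention are
# assumptions for exposition, not GEPA's actual API.

def evaluate(candidate: str) -> tuple[float, str]:
    """Score a candidate and return diagnostic feedback.

    The toy 'artifact' is a string of digits and the goal is to
    maximize their sum; the diagnostic names the weakest position.
    """
    digits = [int(c) for c in candidate]
    score = sum(digits) / (9 * len(digits))  # normalize to [0, 1]
    weakest = min(range(len(digits)), key=lambda i: digits[i])
    return score, f"position {weakest} holds the lowest digit ({digits[weakest]})"

def propose(candidate: str, diagnostics: str) -> str:
    """Revise the candidate using the diagnostic (stands in for an LLM)."""
    pos = int(diagnostics.split("position ")[1].split(" ")[0])
    return candidate[:pos] + "9" + candidate[pos + 1:]

candidate = "120"
for _ in range(5):
    score, diag = evaluate(candidate)
    if score == 1.0:
        break
    candidate = propose(candidate, diag)

print(candidate)  # → 999
```

The point of the sketch is the shape of the feedback channel: the evaluator returns *why* the candidate is weak, and the proposer acts on that reason rather than searching blindly.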

According to GEPA’s official blog, the API extends the company’s earlier breakthrough, GEPA (Genetic-Pareto), beyond natural language prompts to encompass a wide array of computational artifacts. The system operates by accepting a seed candidate and a custom evaluator function that returns both a scalar score and diagnostic metadata. The AI then conducts a Pareto-efficient search across multiple competing objectives—such as accuracy, speed, memory usage, and cost—preserving specialized strengths rather than averaging them into mediocrity. This approach has yielded remarkable results across eight distinct domains, including a 47% reduction in inference latency for Claude Code while simultaneously pushing its accuracy to near-perfect levels, and a cloud scheduling algorithm that slashed operational costs by 40%.
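Pareto-efficient selection, as described here, keeps every candidate that no other candidate beats on all objectives at once, instead of collapsing the objectives into a single average. A minimal sketch of that filter, with invented candidate names and (accuracy, speed) scores:

```python
# Minimal Pareto-front filter over (accuracy, speed) pairs, both
# higher-is-better. Candidate names and scores are invented for
# illustration; this is not GEPA's internal selection code.

def dominates(a, b):
    """True if a is at least as good as b everywhere and better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep candidates that no other candidate dominates."""
    return {
        name: scores
        for name, scores in candidates.items()
        if not any(dominates(other, scores)
                   for o_name, other in candidates.items() if o_name != name)
    }

candidates = {
    "fast_but_sloppy": (0.70, 0.95),   # (accuracy, speed)
    "slow_but_sharp":  (0.95, 0.40),
    "balanced":        (0.85, 0.80),
    "strictly_worse":  (0.60, 0.30),   # dominated by every candidate above
}

print(sorted(pareto_front(candidates)))
# → ['balanced', 'fast_but_sloppy', 'slow_but_sharp']
```

Note that all three survivors remain in play: the fast specialist, the accurate specialist, and the balanced candidate each win somewhere, which is what "preserving specialized strengths" means in practice.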

One of the most striking demonstrations involved an ARC-AGI agent, a benchmark for abstract reasoning in AI systems. The agent’s performance jumped from 32.5% to 89.5% accuracy after a few hundred optimization cycles, showcasing the API’s ability to unlock latent capabilities in under-tuned models. Similarly, CUDA kernels optimized with optimize_anything outperformed hand-crafted baselines, while its circle-packing solutions surpassed those generated by AlphaEvolve, a prior state-of-the-art evolutionary optimizer. Even on blackbox optimization tasks, typically the domain of tools like Optuna, optimize_anything matched and in some cases exceeded the established baselines, demonstrating the framework’s versatility.

What sets optimize_anything apart is its treatment of diagnostics as a first-class citizen. Traditional AI optimizers often treat evaluation as a black box—returning only a score. GEPA’s innovation is to feed the LLM proposer structured diagnostic information, allowing it to understand why a candidate failed. For example, if a Python script crashes with a memory error, the API doesn’t just score it low—it passes the traceback and memory usage logs to the proposer, which then generates a revised version that explicitly addresses the bottleneck. This mimics how human engineers debug systems, but at machine scale and speed.
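This failure-as-feedback pattern can be sketched with Python's standard traceback module. The evaluator shape below (a `(score, diagnostics)` pair) is an assumption for illustration, not GEPA's actual interface:

```python
# Sketch of an evaluator that runs a candidate program and, on failure,
# returns the full traceback as diagnostics rather than a bare low score.
# The (score, diagnostics) convention is illustrative, not GEPA's API.
import traceback

def evaluate_program(source: str) -> tuple[float, str]:
    """Run candidate source; 1.0 on success, 0.0 plus traceback on crash."""
    try:
        exec(compile(source, "<candidate>", "exec"), {})
        return 1.0, "ok"
    except Exception:
        # A bare 0.0 tells the proposer nothing; the traceback tells it
        # exactly which line failed and why.
        return 0.0, traceback.format_exc()

score, diag = evaluate_program("result = [1, 2, 3][10]")
print(score)                  # → 0.0
print("IndexError" in diag)   # → True
```

Handed that traceback, an LLM proposer can target the exact failing expression, which is the debugging behavior the article compares to a human engineer's.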

While consumer-focused platforms like Ten Forums offer tutorials on optimizing Windows 10 performance through manual defragmentation and system tweaks, GEPA’s tool operates at a fundamentally different level: it automates the optimization of digital artifacts that power the very systems those tutorials seek to improve. In essence, optimize_anything doesn’t just tweak settings—it rewrites the logic behind them.

Developers can install the tool via pip install gepa and begin optimizing within minutes. GEPA provides runnable code examples for all eight case studies, making it accessible to researchers and engineers alike. The open-source nature of the project, hosted on GitHub, invites community contributions and domain-specific evaluators, potentially accelerating innovation across AI, systems engineering, and computational science.

As AI systems grow in complexity, the need for automated, multi-objective optimization tools becomes increasingly critical. optimize_anything represents not just a new library, but a paradigm shift—transforming optimization from an art practiced by experts into a reproducible, scalable process accessible to all. The implications span from reducing cloud infrastructure costs to accelerating the development of next-generation AI agents, making it one of the most promising open-source tools to emerge in 2026.
