Google is introducing a new agentic AI tool called Gemini CLI, aimed at integrating its Gemini AI models directly into developers’ coding environments.
Announced on Wednesday, Gemini CLI runs locally in the terminal and lets developers interact with Google's Gemini AI models using natural language. Users can ask the tool to explain complex code, write new features, debug issues, or run commands.
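
For illustration, that kind of interaction can also be scripted rather than typed into an interactive session. The sketch below is a hypothetical Python example: it assumes the CLI is already installed (for instance via npm), that the executable is named `gemini`, and that it accepts a one-shot prompt flag (`-p`); these details should be verified against the project's GitHub repository.

```python
import subprocess

def ask_gemini(prompt: str) -> str:
    """Send a single natural-language prompt to Gemini CLI and return its reply."""
    result = subprocess.run(
        ["gemini", "-p", prompt],  # assumed invocation; check the CLI docs for exact flags
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    # Example: ask the model to explain a file in the current project.
    print(ask_gemini("Explain what src/main.py does and list any obvious bugs."))
```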

This launch is part of Google’s strategy to encourage developers to incorporate its AI models into their workflows. Alongside Gemini CLI, Google offers other AI coding tools, including Gemini Code Assist and the asynchronous coding agent Jules. Gemini CLI, however, enters a field with established command-line AI tools such as OpenAI’s Codex CLI and Anthropic’s Claude Code, both of which are often seen as easier to integrate and more efficient.
Since the introduction of Gemini 2.5 Pro in April, Google’s AI models have gained traction among developers, but much of that usage has come through third-party AI coding tools such as Cursor and GitHub Copilot rather than Google’s own products. By offering in-house tools like Gemini CLI, Google aims to build a more direct relationship with developers.
While primarily designed for coding tasks, Gemini CLI can also handle other work, such as creating videos with Google’s Veo 3 model, generating research reports with the Deep Research agent, or retrieving real-time information via Google Search. It can also connect to MCP (Model Context Protocol) servers, enabling access to external databases.
To promote adoption, Google is open-sourcing Gemini CLI under the Apache 2.0 license and encouraging developer contributions on GitHub. The company is also offering generous usage limits: free users can make up to 60 model requests per minute and 1,000 requests per day, approximately double the average number of requests developers make with comparable tools.
Despite the rising popularity of AI coding tools, developers remain cautious. A 2024 Stack Overflow survey found that only 43% of developers trust the accuracy of AI tools, and studies indicate that code-generating models can introduce errors or overlook security vulnerabilities.
SOURCE: TechCrunch