Resolve Persistent HTTP 429 (Too Many Requests) by Re-authenticating OAuth Session #24384

@alieonsido

Description

TL;DR:
alias gemini="if [ -f ~/.gemini/oauth_creds.json ]; then jq 'del(.access_token) | .expiry_date = 0' ~/.gemini/oauth_creds.json > ~/.gemini/oauth_creds.tmp && mv ~/.gemini/oauth_creds.tmp ~/.gemini/oauth_creds.json; fi; \gemini"

Add this to your ~/.bashrc (or your shell's startup file) to force the Gemini server to refresh the access_token each time you run gemini from the CLI.
In my setup, this mitigates the 429 problem for at least half an hour.
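For readability, the alias above can also be written as a shell function (a sketch; it assumes jq is installed and that credentials live at ~/.gemini/oauth_creds.json, as in the one-liner):

```shell
# Readable function form of the alias above.
# Deleting the cached access_token and zeroing expiry_date forces the
# CLI to exchange the refresh_token for a fresh access token on startup.
gemini_fresh() {
  local creds="$HOME/.gemini/oauth_creds.json"
  if [ -f "$creds" ]; then
    jq 'del(.access_token) | .expiry_date = 0' "$creds" > "$creds.tmp" \
      && mv "$creds.tmp" "$creds"
  fi
  # `command` skips aliases/functions and runs the real binary.
  command gemini "$@"
}
```

Unlike the alias, the function passes its arguments through to the real gemini binary, so flags like `-p` keep working.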

Problem Overview

  • Widespread Issue: Many users are encountering frequent HTTP 429 (Too Many Requests) errors when using the gemini-3.1-pro-preview model in gemini-cli.

  • Unsustainability of Downgrading: While a common suggestion in the community is to downgrade the version, this is not a sustainable solution for long-term development.

  • Effective Workaround: Testing has shown that re-authenticating (logging in again via OAuth) can immediately and effectively clear this rate-limited state.

Technical Hypothesis

  • Credential Decay: This phenomenon may not be actual traffic congestion. Instead, expired or corrupted OAuth tokens might cause the server to relegate requests to a default pool with extremely low quotas.

  • Quota Re-association: Re-running the login process to obtain a fresh token forces the server to re-associate the requests with the correct Google Cloud project quota.

  • Cache Clearance: Re-authentication also clears local cache issues that might lead to background retry loops, which often trigger automatic API gateway protection.
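The credential-decay hypothesis is easy to check locally. The sketch below assumes, as above, that oauth_creds.json stores expiry_date in epoch milliseconds; the function name is hypothetical:

```shell
# Diagnostic sketch: report whether the cached OAuth token has expired.
# Assumes expiry_date is epoch milliseconds, as in oauth_creds.json.
check_token_expiry() {
  local creds="${1:-$HOME/.gemini/oauth_creds.json}"
  local now_ms expiry_ms
  now_ms=$(( $(date +%s) * 1000 ))
  expiry_ms=$(jq -r '.expiry_date // 0' "$creds")
  if [ "$expiry_ms" -le "$now_ms" ]; then
    echo "token expired; a refresh (or re-login) is likely needed"
  else
    echo "token still valid for $(( (expiry_ms - now_ms) / 1000 ))s"
  fi
}
```

If the token reads as expired right before a burst of 429s, that would support the hypothesis that stale credentials, not real traffic, are being throttled.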

Suggested Workaround for Users

If you are facing persistent HTTP 429 errors, please try the following before considering a downgrade:

  1. Log out of your current session or clear existing credentials.

  2. Re-run the OAuth login process (e.g., executing the auth login command).

  3. Ensure that your default project is correctly configured in your environment variables or config file.
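The three steps above might look like this in a shell session. The exact login command varies by gemini-cli version, and the project id shown is a placeholder; removing the cached credential file is the surest way to force the OAuth flow on the next launch:

```shell
# 1. Clear existing credentials (forces a fresh OAuth flow next launch).
rm -f ~/.gemini/oauth_creds.json

# 2. Re-run the OAuth login: the next `gemini` start will prompt for it
#    (or use the /auth command inside an interactive session).

# 3. Ensure the default project is configured so requests are attributed
#    to the right quota. "your-project-id" is a hypothetical placeholder.
export GOOGLE_CLOUD_PROJECT="your-project-id"
```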

Result: This process appears to assign a fresh request counter and session quota, typically restoring normal operation immediately.

Request for Feedback

I encourage other users facing this issue to report whether this workaround works for them, too. This will help the maintainers determine whether the root cause lies in credential management or in the backend API's quota-allocation logic.

Metadata

    Labels

    area/platform: Issues related to Build infra, Release mgmt, Testing, Eval infra, Capacity, Quota mgmt
    status/need-triage: Issues that need to be triaged by the triage automation.
