# Copilot API Proxy
English | 中文
> [!NOTE]
> **About This Fork**
>
> This project is forked from ericc-ch/copilot-api. The original author has discontinued maintenance, and the upstream project no longer supports the new API, so this fork has been redesigned and rewritten.
>
> Special thanks to @ericc-ch for the original work and contribution!
> [!WARNING]
> This is a reverse-engineered proxy for the GitHub Copilot API. It is not supported by GitHub and may break unexpectedly. Use at your own risk.
> [!WARNING]
> **GitHub Security Notice**
>
> Excessive automated or scripted use of Copilot (including rapid or bulk requests, such as via automated tools) may trigger GitHub's abuse-detection systems. You may receive a warning from GitHub Security, and further anomalous activity could result in temporary suspension of your Copilot access. GitHub prohibits use of its servers for excessive automated bulk activity or any activity that places undue burden on its infrastructure.
>
> Please review GitHub's Acceptable Use Policies and use this proxy responsibly to avoid account restrictions.
Note: If you are using opencode, you do not need this project; opencode supports the GitHub Copilot provider out of the box.
## Project Overview
A reverse-engineered proxy for the GitHub Copilot API that exposes OpenAI-compatible, Anthropic-compatible, and Gemini-compatible interfaces. The gateway routes requests based on each model's `supported_endpoints` capabilities and performs protocol translation when needed, so clients speaking OpenAI Chat Completions, OpenAI Responses, Anthropic Messages, or Gemini `generateContent`-style calls can all work against the same backend (including Claude Code).
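For example, an OpenAI-compatible client only needs to point its base URL at the proxy. A minimal TypeScript sketch, assuming the proxy listens on `localhost:4141` and that `gpt-4o` appears in your model list (both are assumptions; adjust them to your deployment):

```typescript
// Call the proxy's OpenAI-compatible ingress with plain fetch.
const res = await fetch("http://localhost:4141/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "gpt-4o", // any model your deployment exposes
    messages: [{ role: "user", content: "Hello from the proxy" }],
  }),
});

const data = await res.json();
console.log(data.choices[0].message.content);
```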
## Architecture
The project currently works as a capability-driven routing gateway, not a single-path passthrough proxy:

- It exposes OpenAI-, Anthropic-, and Gemini-compatible ingress endpoints.
- It selects upstream endpoint paths dynamically from each model's `supported_endpoints` (see the sketch after the diagram below).
- The ingress protocol and the final upstream protocol may differ, with bidirectional format translation between them.
```mermaid
flowchart TB
    subgraph Clients["Clients"]
        C1[OpenAI-compatible clients]
        C2[Anthropic-compatible clients]
        C3[Gemini-compatible clients]
    end

    subgraph Proxy["copilot-api"]
        direction TB

        subgraph Ingress["Ingress"]
            I1["/v1/chat/completions"]
            I2["/v1/messages"]
            I3["/v1/responses"]
            I4["/v1beta/models/{model}:generateContent<br/>:streamGenerateContent"]
        end

        subgraph Router["Capability-driven routing"]
            R1[Route by supported_endpoints]
        end

        subgraph Upstream["Copilot upstream endpoints"]
            U1["/chat/completions"]
            U2["/v1/messages"]
            U3["/responses"]
        end

        subgraph Admin["Management & state"]
            A1["/admin"]
            A2["/usage"]
            A3["/token"]
            A4["config.json + runtime state"]
        end
    end

    C1 --> I1
    C1 --> I3
    C2 --> I2
    C3 --> I4

    I1 --> R1
    I2 --> R1
    I3 --> R1
    I4 --> R1

    R1 --> U1
    R1 --> U2
    R1 --> U3
```
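The routing step can be pictured as a preference-ordered lookup against `supported_endpoints`. The following TypeScript sketch is illustrative only; the type names and the `selectUpstream` helper are inventions for this README, not the project's actual code:

```typescript
type Endpoint = "chat" | "messages" | "responses";

interface ModelInfo {
  id: string;
  supported_endpoints?: Endpoint[];
}

// Upstream paths as shown in the diagram above.
const UPSTREAM_PATHS: Record<Endpoint, string> = {
  chat: "/chat/completions",
  messages: "/v1/messages",
  responses: "/responses",
};

// Pick the first upstream endpoint, in the ingress route's preference
// order, that the model actually supports.
function selectUpstream(model: ModelInfo, preference: Endpoint[]): string {
  const caps = model.supported_endpoints;
  if (!caps || caps.length === 0) {
    // Missing or empty capability metadata: default to the chat path
    // (mirrors the chat-ingress rule described below).
    return UPSTREAM_PATHS.chat;
  }
  for (const candidate of preference) {
    if (caps.includes(candidate)) return UPSTREAM_PATHS[candidate];
  }
  // The model declares capabilities, but none match: surfaced as HTTP 400.
  throw new Error(`model supports none of: ${preference.join(", ")}`);
}
```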
## Request Flow (Current)
### `/v1/messages` (Anthropic ingress)

- If the model supports `messages` -> use `/v1/messages`
- Else if the model supports `responses` -> translate and use `/responses`
- Else -> translate and use `/chat/completions` (see the translation sketch below)
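The final fallback implies an Anthropic-to-OpenAI payload translation. A simplified sketch of that mapping for text-only messages (field names follow the public Anthropic and OpenAI schemas; the real translation also has to handle content blocks, tools, and streaming):

```typescript
interface AnthropicMessagesRequest {
  model: string;
  system?: string;
  max_tokens: number;
  messages: { role: "user" | "assistant"; content: string }[];
}

// Map an Anthropic Messages body onto an OpenAI Chat Completions body.
// Anthropic carries the system prompt as a top-level field; OpenAI
// expects it as the first message.
function anthropicToChat(req: AnthropicMessagesRequest) {
  return {
    model: req.model,
    max_tokens: req.max_tokens,
    messages: [
      ...(req.system
        ? [{ role: "system" as const, content: req.system }]
        : []),
      ...req.messages,
    ],
  };
}
```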
### `/v1/chat/completions` (OpenAI Chat ingress)

- If the model supports `chat` -> use `/chat/completions`
- Else if the model supports `messages` -> fall back to `/v1/messages`
- Else if the model supports `responses` -> fall back to `/responses`
- If the model declares `supported_endpoints` but none match -> return 400
- If the endpoint metadata is missing or empty -> default to the chat path (see the usage note below)
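In terms of the `selectUpstream` sketch above, this ingress resolves as follows (again illustrative, with a made-up model):

```typescript
// Chat ingress preference order, derived from the rules in this section.
const model: ModelInfo = {
  id: "example-model",
  supported_endpoints: ["responses"],
};

const path = selectUpstream(model, ["chat", "messages", "responses"]);
// "chat" and "messages" are absent here, so this resolves to "/responses".
// A model with an undefined or empty supported_endpoints list defaults to
// "/chat/completions"; a model whose list matches none of the three
// candidates raises the error that is surfaced as HTTP 400.
```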
### `/v1/responses` (OpenAI Responses ingress)

- Allowed only when the model supports `responses`
- If not supported -> return 400 directly (no multi-endpoint fallback; see the guard sketch below)
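A sketch of the corresponding strict guard (the helper name is hypothetical, and the real handler produces an HTTP 400 response rather than throwing):

```typescript
// Unlike the other ingress routes, /v1/responses has no fallback chain:
// either the model declares the "responses" capability, or the request
// is rejected outright.
function assertResponsesSupported(model: {
  supported_endpoints?: string[];
}): void {
  if (!model.supported_endpoints?.includes("responses")) {
    // Surfaced to the client as an HTTP 400.
    throw new Error("model does not support the responses endpoint");
  }
}
```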
### `/v1beta/models/{model}:generateContent` / `:streamGenerateContent` (Gemini-compatible ingress)

- Fixed chat-only design: the Gemini request is always translated to `/chat/completions`
- Execution order: validate model capability first, then transform the Gemini payload to the Chat format
- If `chat` is not supported -> return 400 directly (no messages/responses fallback)
- Currently text input only, via `contents.parts.text` (see the translation sketch below)
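A simplified sketch of that text-only translation (the `geminiToChat` helper is hypothetical; real Gemini requests can also carry `systemInstruction`, inline data, and tool calls, none of which are handled here):

```typescript
interface GeminiContent {
  role?: "user" | "model";
  parts: { text?: string }[];
}

interface GeminiGenerateContentRequest {
  contents: GeminiContent[];
}

// Map a text-only Gemini request onto an OpenAI Chat Completions body.
// Gemini's "model" role corresponds to OpenAI's "assistant"; the target
// model comes from the URL path, not the request body.
function geminiToChat(model: string, req: GeminiGenerateContentRequest) {
  return {
    model,
    messages: req.contents.map((c) => ({
      role: c.role === "model" ? ("assistant" as const) : ("user" as const),
      // Concatenate text parts; non-text parts are ignored in this sketch.
      content: c.parts.map((p) => p.text ?? "").join("\n"),
    })),
  };
}
```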
## Features