Update - We are continuing to investigate and test solutions internally while working with our model provider on a deeper investigation into the cause. We will update again when we have identified a mitigation.
Oct 01, 2025 - 18:16 UTC
Update - We are testing other internal mitigations so that we can return to the higher maximum input length. We are still working with our upstream model provider to understand the contributing factors for this sudden decrease in input limits.
Oct 01, 2025 - 17:37 UTC
Update - We are experiencing a service regression for the Gemini 2.5 Pro model in Copilot Chat, VS Code and other Copilot products. The maximum input length of Gemini 2.5 prompts has been decreased, so long prompts or large context windows may result in errors. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.

Other models are available and working as expected.

Oct 01, 2025 - 16:49 UTC
Investigating - We are investigating reports of degraded performance for Copilot
Oct 01, 2025 - 16:43 UTC

About This Site

For the status of GitHub Enterprise Cloud - EU, please visit: eu.githubstatus.com
For the status of GitHub Enterprise Cloud - Australia, please visit: au.githubstatus.com
For the status of GitHub Enterprise Cloud - US, please visit: us.githubstatus.com

Git Operations - Operational
Webhooks - Operational
Visit www.githubstatus.com for more information - Operational
API Requests - Operational
Issues - Operational
Pull Requests - Operational
Actions - Operational
Packages - Operational
Pages - Operational
Codespaces - Operational
Copilot - Partial Outage
Oct 1, 2025
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Oct 1, 16:55 UTC
Update - We are seeing some recovery for image queueing and continuing to monitor.
Oct 1, 16:27 UTC
Update - We are continuing work to restore capacity for our macOS ARM runners.
Oct 1, 14:41 UTC
Update - Our team continues to work hard on restoring capacity for the Mac runners.
Oct 1, 13:58 UTC
Update - Work continues on restoring capacity on the Mac runners.
Oct 1, 13:12 UTC
Update - macOS ARM runners continue to be at reduced capacity, causing queuing of jobs. Investigation is ongoing.
Oct 1, 12:32 UTC
Update - Work continues to bring the full runner capacity back online. Resources are focused on improving the recovery of certain runner types.
Oct 1, 11:51 UTC
Update - We are continuing to see recovery of some runner capacity and investigating slow recovery of certain runner types.
Oct 1, 11:11 UTC
Update - We are seeing recovery of some runner capacity, while also investigating slow recovery of certain runner types.
Oct 1, 10:30 UTC
Update - macOS runners are coming back online and starting to process queued work.
Oct 1, 09:44 UTC
Update - We are continuing to deploy the necessary changes to restore macOS runner capacity.
Oct 1, 08:59 UTC
Update - We have identified the cause and are deploying a change to restore macOS runner capacity.
Oct 1, 08:27 UTC
Update - Customers using GitHub Actions macOS runners are experiencing job start delays and failures. We are aware of this issue and actively investigating.
Oct 1, 08:17 UTC
Update - Actions is experiencing degraded performance. We are continuing to investigate.
Oct 1, 08:09 UTC
Investigating - We are currently investigating this issue.
Oct 1, 07:59 UTC
Sep 30, 2025

No incidents reported.

Sep 29, 2025
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Sep 29, 19:12 UTC
Update - The upstream model provider has resolved the issue, and we are seeing full availability for Gemini 2.5 Pro and Gemini 2.0 Flash.
Sep 29, 19:12 UTC
Update - We are experiencing degraded availability for the Gemini 2.5 Pro & Gemini 2.0 Flash models in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.

Other models are available and working as expected.

Sep 29, 18:40 UTC
Investigating - We are currently investigating this issue.
Sep 29, 18:39 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Sep 29, 17:33 UTC
Update - Customers are getting 404 responses when connecting to the GitHub MCP server. We have reverted a change we believe is contributing to the impact, and are seeing resolution in deployed environments.
Sep 29, 17:28 UTC
Investigating - We are currently investigating this issue.
Sep 29, 16:45 UTC
Sep 28, 2025

No incidents reported.

Sep 27, 2025

No incidents reported.

Sep 26, 2025

No incidents reported.

Sep 25, 2025
Resolved - On September 26, 2025, between 16:22 UTC and 18:32 UTC, raw file access was degraded for a small set of four repositories. The raw file access error rate averaged 0.01% and peaked at 0.16% of requests. This was due to a caching bug exposed by excessive traffic to a handful of repositories.

We mitigated the incident by resetting the state of the cache for raw file access and are working to improve cache usage and testing to prevent issues like this in the future.

Sep 25, 17:36 UTC
Update - We are seeing issues serving raw file access for a small percentage of requests.
Sep 25, 17:06 UTC
Investigating - We are currently investigating this issue.
Sep 25, 17:00 UTC
Sep 24, 2025
Resolved - On September 23, 2025, between 15:29 UTC and 17:38 UTC, and again on September 24, 2025, between 15:02 UTC and 15:12 UTC, email deliveries were delayed by up to 50 minutes, which resulted in significant delays for most types of email notifications. This occurred due to an unusually high volume of traffic, which caused resource contention on some of our outbound email servers.

We have updated the configuration we use to better allocate capacity when there is a high volume of traffic, and we are also updating our monitors so we can detect this type of issue before it becomes a customer-impacting incident.

Sep 24, 15:36 UTC
Update - We are seeing delays in email delivery, which are impacting notifications and user signup email verification. We are investigating and working on mitigation.
Sep 24, 14:55 UTC
Investigating - We are currently investigating this issue.
Sep 24, 14:46 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Sep 24, 09:18 UTC
Update - Between around 8:16 UTC and 8:51 UTC we saw elevated errors for Claude Opus 4 and Opus 4.1, with up to 49% of requests failing. The error rate has since recovered to around 4% of requests, and we are monitoring recovery.
Sep 24, 09:16 UTC
Investigating - We are currently investigating this issue.
Sep 24, 09:08 UTC
Resolved - Between 20:06 UTC September 23 and 04:58 UTC September 24, 2025, the Copilot service experienced degraded availability for Claude Sonnet 4 and 3.7 model requests.

During this period, 0.46% of Claude 4 requests and 7.83% of Claude 3.7 requests failed.

The reduced availability resulted from Copilot disabling routing to an upstream provider that was experiencing issues and reallocating capacity to other providers to manage requests for Claude Sonnet 3.7 and 4.
We are continuing to investigate the source of the issues with this provider and will provide an update as more information becomes available.

Sep 24, 00:26 UTC
Update - The issues with our upstream model provider have been resolved, and Claude Sonnet 3.7 and Claude Sonnet 4 are once again available in Copilot Chat, VS Code and other Copilot products.

We will continue monitoring to ensure stability, but mitigation is complete.

Sep 24, 00:26 UTC
Update - We are experiencing degraded availability for the Claude Sonnet 3.7 and Claude Sonnet 4 models in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.

Other models are available and working as expected.

Sep 23, 22:22 UTC
Investigating - We are investigating reports of degraded performance for Copilot
Sep 23, 22:22 UTC
Sep 23, 2025
Resolved - On September 23, between 17:11 and 17:40 UTC, customers experienced failures and delays when running workflows on GitHub Actions and building or deploying GitHub Pages. The issue was caused by a faulty configuration change that disrupted service-to-service communication in GitHub Actions. During this period, in-progress jobs were delayed and new jobs would not start due to a failure to acquire runners, and about 30% of all jobs failed. GitHub Pages users were unable to build or deploy their Pages during this period.

The offending change was rolled back within 15 minutes of its deployment, after which Actions workflows and Pages deployments began to succeed. Actions customers continued to experience delays for about 15 minutes after the rollback was completed while services worked through the backlog of queued jobs. We are planning to implement additional rollout checks to help detect and prevent similar issues in the future.

Sep 23, 17:41 UTC
Update - We are investigating delays in Actions Workflows.
Sep 23, 17:33 UTC
Investigating - We are investigating reports of degraded performance for Actions and Pages
Sep 23, 17:28 UTC
Resolved - On September 23, 2025, between 15:29 UTC and 17:38 UTC, and again on September 24, 2025, between 15:02 UTC and 15:12 UTC, email deliveries were delayed by up to 50 minutes, which resulted in significant delays for most types of email notifications. This occurred due to an unusually high volume of traffic, which caused resource contention on some of our outbound email servers.

We have updated the configuration we use to better allocate capacity when there is a high volume of traffic, and we are also updating our monitors so we can detect this type of issue before it becomes a customer-impacting incident.

Sep 23, 17:40 UTC
Update - We're seeing delays related to outbound emails and are investigating.
Sep 23, 16:50 UTC
Investigating - We are currently investigating this issue.
Sep 23, 16:46 UTC
Sep 22, 2025

No incidents reported.

Sep 21, 2025

No incidents reported.

Sep 20, 2025

No incidents reported.

Sep 19, 2025

No incidents reported.

Sep 18, 2025

No incidents reported.

Sep 17, 2025
Resolved - On September 17, 2025 between 13:23 and 16:51 UTC some users in West Europe experienced issues with Codespaces that had shut down due to network disconnections and subsequently failed to restart. Codespace creations and resumes were failed over to another region at 15:01 UTC. While many of the impacted instances self-recovered after mitigation efforts, approximately 2,000 codespaces remained stuck in a "shutting down" state while the team evaluated possible methods to recover unpushed data from the latest active session of affected codespaces. Unfortunately, recovery of that data was not possible. We unblocked shutdown of those codespaces, with all instances either shut down or available by 8:26 UTC on September 19.

The disconnects were triggered by an exhaustion of resources in the network relay infrastructure in that region, but the lack of self-recovery was caused by an unhandled error impacting the local agent, which led to an unclean shutdown.

We are improving the resilience of the local agent to disconnect events to ensure shutdown of codespaces is always clean without data loss. We have also addressed the exhausted resources in the network relay and will be investing in improved detection and resilience to reduce the impact of similar events in the future.

Sep 17, 17:55 UTC
Update - We have confirmed that the original failover mitigation has resolved the issue causing Codespaces to become unavailable. We are evaluating whether there is a path to recover unpushed data from the approximately 2,000 Codespaces that are currently in the shutting down state. We will be resolving this incident and will detail the next steps in our public summary.
Sep 17, 17:55 UTC
Update - For Codespaces that were stuck in the shutting down state and have been resumed, we've identified an issue that is causing the contents of the Codespace to be irrecoverably lost, which has impacted approximately 250 Codespaces. We are actively working on a mitigation that stops any more Codespaces currently in this state from being forced to shut down, to avoid further potential data loss.
Sep 17, 16:51 UTC
Update - We're continuing to see improvement with Codespaces that were stuck in the shutting down state, and we anticipate the remainder should self-resolve in about an hour.
Sep 17, 16:07 UTC
Update - Some users with Codespaces in West Europe were unable to connect to their Codespaces. We have failed over that region, and users should be able to create new Codespaces. If a user has a Codespace in a shutting down state, we are still investigating potential fixes and mitigations.
Sep 17, 15:31 UTC
Investigating - We are investigating reports of degraded performance for Codespaces
Sep 17, 15:04 UTC