GitHub Status

All Systems Operational

About This Site

For the status of GitHub Enterprise Cloud - EU, please visit: eu.githubstatus.com
For the status of GitHub Enterprise Cloud - Australia, please visit: au.githubstatus.com
For the status of GitHub Enterprise Cloud - US, please visit: us.githubstatus.com

Git Operations: Operational
Webhooks: Operational
Visit www.githubstatus.com for more information: Operational
API Requests: Operational
Issues: Operational
Pull Requests: Operational
Actions: Operational
Packages: Operational
Pages: Operational
Codespaces: Operational
Copilot: Operational

Status key: Operational / Degraded Performance / Partial Outage / Major Outage / Maintenance
Sep 30, 2025

No incidents reported today.

Sep 29, 2025
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Sep 29, 19:12 UTC
Update - The upstream model provider has resolved the issue and we are seeing full availability for Gemini 2.5 Pro and Gemini 2.0 Flash.
Sep 29, 19:12 UTC
Update - We are experiencing degraded availability for the Gemini 2.5 Pro & Gemini 2.0 Flash models in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.

Other models are available and working as expected.

Sep 29, 18:40 UTC
Investigating - We are currently investigating this issue.
Sep 29, 18:39 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Sep 29, 17:33 UTC
Update - Customers are getting 404 responses when connecting to the GitHub MCP server. We have reverted a change we believe is contributing to the impact, and are seeing resolution in deployed environments.
Sep 29, 17:28 UTC
Investigating - We are currently investigating this issue.
Sep 29, 16:45 UTC
Sep 28, 2025

No incidents reported.

Sep 27, 2025

No incidents reported.

Sep 26, 2025

No incidents reported.

Sep 25, 2025
Resolved - On September 25, 2025, between 16:22 UTC and 18:32 UTC, raw file access was degraded for four repositories. On average, the raw file access error rate was 0.01% of requests, peaking at 0.16%. This was due to a caching bug exposed by excessive traffic to a handful of repositories.

We mitigated the incident by resetting the state of the cache for raw file access and are working to improve cache usage and testing to prevent issues like this in the future.
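
For illustration only (this is not GitHub's raw-file serving code), the sketch below shows why resetting cache state is an effective mitigation: a read-through cache that has stored a bad entry keeps serving it until the entry expires or the cache is cleared.

```python
# Illustrative only: a toy read-through cache showing how a bad cached
# entry can keep being served under heavy traffic until the cache state
# is reset. Names and TTLs are assumptions, not GitHub's implementation.
import time


class RawFileCache:
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._entries = {}  # key -> (value, stored_at)

    def get(self, key, fetch):
        entry = self._entries.get(key)
        if entry is not None:
            value, stored_at = entry
            if time.time() - stored_at < self.ttl:
                # A cached error response is returned just like a cached
                # file, which is the failure mode a reset clears.
                return value
        value = fetch(key)
        self._entries[key] = (value, time.time())
        return value

    def reset(self):
        # Mitigation described above: clear all cached state so the next
        # request for each path goes back to the source of truth.
        self._entries.clear()
```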

Sep 25, 17:36 UTC
Update - We are seeing issues related to our ability to serve raw file access across a small percentage of our requests.
Sep 25, 17:06 UTC
Investigating - We are currently investigating this issue.
Sep 25, 17:00 UTC
Sep 24, 2025
Resolved - On September 23, 2025, between 15:29 UTC and 17:38 UTC, and again on September 24, 2025, between 15:02 UTC and 15:12 UTC, email deliveries were delayed by up to 50 minutes, which resulted in significant delays for most types of email notifications. This occurred due to an unusually high volume of traffic that caused resource contention on some of our outbound email servers.

We have updated the configuration we use to better allocate capacity during periods of high traffic, and we are also updating our monitors so we can detect this type of issue before it becomes a customer-impacting incident.
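
A minimal sketch of the kind of monitor described above, assuming access to the enqueue timestamps of outbound messages; the threshold and alerting hook are placeholders rather than GitHub's actual tooling.

```python
# Hypothetical delivery-delay monitor: alert when the oldest queued
# outbound email has been waiting longer than a threshold, so delays are
# caught before they become customer impacting.
import time

DELAY_THRESHOLD_SECONDS = 10 * 60  # assumed alerting threshold


def oldest_queued_age_seconds(enqueue_timestamps):
    """enqueue_timestamps: iterable of epoch seconds for queued messages."""
    timestamps = list(enqueue_timestamps)
    if not timestamps:
        return 0.0
    return time.time() - min(timestamps)


def check_outbound_email(enqueue_timestamps, alert):
    age = oldest_queued_age_seconds(enqueue_timestamps)
    if age > DELAY_THRESHOLD_SECONDS:
        alert(f"Outbound email delayed: oldest message queued {age / 60:.1f} minutes ago")
```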

Sep 24, 15:36 UTC
Update - We are seeing delays in email delivery, which is impacting notifications and user signup email verification. We are investigating and working on mitigation.
Sep 24, 14:55 UTC
Investigating - We are currently investigating this issue.
Sep 24, 14:46 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Sep 24, 09:18 UTC
Update - Between approximately 8:16 UTC and 8:51 UTC we saw elevated errors on Claude Opus 4 and Opus 4.1, with up to 49% of requests failing. This has recovered to around 4% of requests failing, and we are monitoring recovery.
Sep 24, 09:16 UTC
Investigating - We are currently investigating this issue.
Sep 24, 09:08 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Sep 24, 00:26 UTC
Update - The issues with our upstream model provider have been resolved, and Claude Sonnet 3.7 and Claude Sonnet 4 are once again available in Copilot Chat, VS Code and other Copilot products.

We will continue monitoring to ensure stability, but mitigation is complete.

Sep 24, 00:26 UTC
Update - We are experiencing degraded availability for the Claude Sonnet 3.7 and Claude Sonnet 4 models in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.

Other models are available and working as expected.

Sep 23, 22:22 UTC
Investigating - We are investigating reports of degraded performance for Copilot.
Sep 23, 22:22 UTC
Sep 23, 2025
Resolved - On September 23, between 17:11 and 17:40 UTC, customers experienced failures and delays when running workflows on GitHub Actions and building or deploying GitHub Pages. The issue was caused by a faulty configuration change that disrupted service-to-service communication in GitHub Actions. During this period, in-progress jobs were delayed and new jobs would not start due to a failure to acquire runners, and about 30% of all jobs failed. GitHub Pages users were unable to build or deploy their Pages during this period.

The offending change was rolled back within 15 minutes of its deployment, after which Actions workflows and Pages deployments began to succeed. Actions customers continued to experience delays for about 15 minutes after the rollback was completed while services worked through the backlog of queued jobs. We are planning to implement additional rollout checks to help detect and prevent similar issues in the future.
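
A hedged sketch of the kind of rollout check mentioned above: after a configuration change reaches a small slice of traffic, compare the observed job failure rate to a baseline and roll back automatically if it regresses. The budget and callbacks are assumptions, not GitHub's deployment system.

```python
# Illustrative rollout gate: halt and roll back a configuration change if
# the canary slice shows a failure-rate regression beyond a fixed budget.
# All names and thresholds are placeholders for this sketch.
FAILURE_RATE_BUDGET = 0.02  # assumed: allow at most a 2-point regression


def rollout_gate(baseline_failure_rate, canary_failure_rate, rollback, proceed):
    """Return True if the rollout may continue, False if it was rolled back."""
    if canary_failure_rate - baseline_failure_rate > FAILURE_RATE_BUDGET:
        rollback()
        return False
    proceed()
    return True
```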

Sep 23, 17:41 UTC
Update - We are investigating delays in Actions Workflows.
Sep 23, 17:33 UTC
Investigating - We are investigating reports of degraded performance for Actions and Pages.
Sep 23, 17:28 UTC
Resolved - On September 23, 2025, between 15:29 UTC and 17:38 UTC, and again on September 24, 2025, between 15:02 UTC and 15:12 UTC, email deliveries were delayed by up to 50 minutes, which resulted in significant delays for most types of email notifications. This occurred due to an unusually high volume of traffic that caused resource contention on some of our outbound email servers.

We have updated the configuration we use to better allocate capacity during periods of high traffic, and we are also updating our monitors so we can detect this type of issue before it becomes a customer-impacting incident.

Sep 23, 17:40 UTC
Update - We're seeing delays related to outbound emails and are investigating.
Sep 23, 16:50 UTC
Investigating - We are currently investigating this issue.
Sep 23, 16:46 UTC
Sep 22, 2025

No incidents reported.

Sep 21, 2025

No incidents reported.

Sep 20, 2025

No incidents reported.

Sep 19, 2025

No incidents reported.

Sep 18, 2025

No incidents reported.

Sep 17, 2025
Resolved - On September 17, 2025, between 13:23 and 16:51 UTC, some users in West Europe experienced issues with Codespaces that had shut down due to network disconnections and subsequently failed to restart. Codespace creations and resumes were failed over to another region at 15:01 UTC. While many of the impacted instances self-recovered after mitigation efforts, approximately 2,000 codespaces remained stuck in a "shutting down" state while the team evaluated possible methods to recover unpushed data from the latest active session of affected codespaces. Unfortunately, recovery of that data was not possible. We unblocked shutdown of those codespaces, with all instances either shut down or available by 8:26 UTC on September 19.

The disconnects were triggered by an exhaustion of resources in the network relay infrastructure in that region, but the lack of self-recovery was caused by an unhandled error impacting the local agent, which led to an unclean shutdown.

We are improving the resilience of the local agent to disconnect events to ensure shutdown of codespaces is always clean without data loss. We have also addressed the exhausted resources in the network relay and will be investing in improved detection and resilience to reduce the impact of similar events in the future.
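
As a rough sketch of the resilience improvement described (not the actual Codespaces agent), the idea is to treat both relay disconnects and unexpected errors as reasons to run the clean-shutdown path, so session state is persisted before the codespace stops.

```python
# Illustrative only: ensure a relay disconnect or an unhandled error still
# results in a clean shutdown that persists unpushed work. The connection
# object and callbacks are assumptions for this sketch.
import logging


def run_agent(connection, handle_event, persist_unpushed_work, shut_down_cleanly):
    try:
        for event in connection.events():  # blocks until disconnect or error
            handle_event(event)
    except ConnectionError:
        logging.warning("Relay disconnected; starting clean shutdown")
    except Exception:
        # Previously an unhandled error here could abort the agent and leave
        # the codespace stuck shutting down; now it falls through to cleanup.
        logging.exception("Unexpected agent error; starting clean shutdown")
    finally:
        persist_unpushed_work()   # save session state before exiting
        shut_down_cleanly()
```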

Sep 17, 17:55 UTC
Update - We have confirmed that the original failover mitigation has resolved the issue causing Codespaces to become unavailable. We are evaluating whether there is a path to recover unpushed data from the approximately 2,000 Codespaces that are currently in the shutting-down state. We will be resolving this incident and will detail the next steps in our public summary.
Sep 17, 17:55 UTC
Update - For Codespaces that were stuck in the shutting-down state and have been resumed, we've identified an issue that causes the contents of the Codespace to be irrecoverably lost; this has impacted approximately 250 Codespaces. We are actively working on a mitigation to prevent any more Codespaces currently in this state from being forced to shut down, to avoid further potential data loss.
Sep 17, 16:51 UTC
Update - We're continuing to see improvement with Codespaces that were stuck in the shutting-down state, and we anticipate the remainder should self-resolve in about an hour.
Sep 17, 16:07 UTC
Update - Some users with Codespaces in West Europe were unable to connect to their Codespaces. We have failed over that region, and users should now be able to create new Codespaces. If a user has a Codespace in a shutting-down state, we are still investigating potential fixes and mitigations.
Sep 17, 15:31 UTC
Investigating - We are investigating reports of degraded performance for Codespaces.
Sep 17, 15:04 UTC
Sep 16, 2025
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Sep 16, 18:30 UTC
Update - We have mitigated the issue and are monitoring the results.
Sep 16, 18:29 UTC
Update - Git Operations is experiencing degraded performance. We are continuing to investigate.
Sep 16, 18:02 UTC
Update - A recent change to our API routing inadvertently added an authentication requirement to the anonymous route for LFS requests. We're in the process of fixing the change, but in the interim, retrying should eventually succeed.
Sep 16, 17:55 UTC
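
Since retrying was the suggested interim workaround, here is a minimal retry-with-backoff sketch for an anonymous request that is intermittently rejected with 401; the URL handling, attempt count, and delays are assumptions, not GitHub guidance.

```python
# Interim-workaround sketch: retry an anonymous request that intermittently
# receives 401 during the incident, backing off between attempts.
import time
import urllib.error
import urllib.request


def fetch_with_retry(url, attempts=5, base_delay=1.0):
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url) as response:
                return response.read()
        except urllib.error.HTTPError as err:
            if err.code != 401 or attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
```
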
Investigating - We are currently investigating this issue.
Sep 16, 17:55 UTC
Resolved - Between 16:26 UTC on September 15th and 18:30 UTC on September 16th, anonymous REST API calls to approximately 20 endpoints were incorrectly rejected because they were not authenticated. While this caused unauthenticated requests to be rejected by these endpoints, all authenticated requests were unaffected, and no protected endpoints were exposed.

This resulted in 100% of requests to these endpoints failing at peak, representing less than 0.1% of GitHub’s overall request volume. Over the roughly 26-hour impact window spanning September 15th and 16th, the error rate for these endpoints averaged less than 50% and peaked at 100%. API requests to the impacted endpoints were rejected with a 401 error code. This was due to a mismatch in authentication policies for specific endpoints during a system migration.

The errors went undetected for as long as they did because the issue affected only a low percentage of overall traffic.

We mitigated the incident by reverting the policy in question and correcting the logic associated with the degraded endpoints. We are working to improve our test suite to better catch policy mismatches, and we are refining our monitors for proactive detection.
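
A hedged sketch of the kind of test-suite check described above: assert that endpoints intended to be anonymous never start demanding authentication. The endpoint list and client are placeholders, not GitHub's internal test suite.

```python
# Illustrative regression test: endpoints documented as anonymous must not
# reject unauthenticated requests with 401. The endpoint list is a placeholder.
import urllib.error
import urllib.request

ANONYMOUS_ENDPOINTS = [
    "https://api.github.com/zen",  # example public endpoint, for illustration
]


def test_anonymous_endpoints_do_not_require_auth():
    for url in ANONYMOUS_ENDPOINTS:
        try:
            urllib.request.urlopen(url).close()
        except urllib.error.HTTPError as err:
            assert err.code != 401, f"{url} unexpectedly requires authentication"
```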

Sep 16, 17:45 UTC
Update - We have mitigated the issue and are monitoring the results.
Sep 16, 17:27 UTC
Update - A recent change to our API routing inadvertently added an authentication requirement to the anonymous route for creating GitHub Apps. We're in the process of fixing the change, but in the interim, retrying should eventually succeed.
Sep 16, 17:15 UTC
Investigating - We are currently investigating this issue.
Sep 16, 17:14 UTC