GitHub Status

All Systems Operational

About This Site

For the status of GitHub Enterprise Cloud - EU, please visit: eu.githubstatus.com
For the status of GitHub Enterprise Cloud - Australia, please visit: au.githubstatus.com
For the status of GitHub Enterprise Cloud - US, please visit: us.githubstatus.com

Git Operations: Operational
Webhooks: Operational
API Requests: Operational
Issues: Operational
Pull Requests: Operational
Actions: Operational
Packages: Operational
Pages: Operational
Codespaces: Operational
Copilot: Operational
Status key: Operational · Degraded Performance · Partial Outage · Major Outage · Maintenance
Sep 19, 2025

No incidents reported today.

Sep 18, 2025

No incidents reported.

Sep 17, 2025
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Sep 17, 17:55 UTC
Update - We have confirmed the original mitigation to failover has resolved the issue causing Codespaces to become unavailable. We are evaluating if there is a path to recover unpushed data from the approximately 2000 Codespaces that are currently in the shutting down state. We will be resolving this incident and will detail the next steps in our public summary.
Sep 17, 17:55 UTC
Update - For Codespaces that were stuck in the shutting down state and have been resumed, we've identified an issue that is causing the contents of the Codespace to be irrecoverably lost, which has impacted approximately 250 Codespaces. We are actively working on a mitigation to keep any more Codespaces currently in this state from being forced to shut down and losing data.
Sep 17, 16:51 UTC
Update - We're continuing to see improvement with Codespaces that were stuck in the shutting down state, and we anticipate the remaining ones should self-resolve in about an hour.
Sep 17, 16:07 UTC
Update - Some users with Codespaces in West Europe were unable to connect to their Codespaces. We have failed over that region, and users should now be able to create new Codespaces. For Codespaces stuck in a shutting down state, we are still investigating potential fixes and mitigations.
Sep 17, 15:31 UTC
Investigating - We are investigating reports of degraded performance for Codespaces.
Sep 17, 15:04 UTC
Sep 16, 2025
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Sep 16, 18:30 UTC
Update - We have mitigated the issue and are monitoring the results.
Sep 16, 18:29 UTC
Update - Git Operations is experiencing degraded performance. We are continuing to investigate.
Sep 16, 18:02 UTC
Update - A recent change to our API routing inadvertently added an authentication requirement to the anonymous route for LFS requests. We're in the process of fixing the change, but in the interim retrying should eventually succeed.
Sep 16, 17:55 UTC
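As an illustration of the retry workaround mentioned above (a sketch, not GitHub tooling), the snippet below retries an anonymous Git LFS batch request with a simple backoff until transient 401/403 responses clear. The repository name, object ID, and size are placeholders.

```python
import time

import requests

# Placeholder repository and LFS object details -- for illustration only.
LFS_BATCH_URL = "https://github.com/OWNER/REPO.git/info/lfs/objects/batch"
PAYLOAD = {
    "operation": "download",
    "transfers": ["basic"],
    "objects": [{"oid": "0" * 64, "size": 12345}],  # placeholder SHA-256 OID and size
}
HEADERS = {
    "Accept": "application/vnd.git-lfs+json",
    "Content-Type": "application/vnd.git-lfs+json",
}


def fetch_lfs_batch_with_retry(max_attempts=5, delay=2.0):
    """Retry the anonymous LFS batch call, backing off on transient 401/403 responses."""
    for attempt in range(1, max_attempts + 1):
        resp = requests.post(LFS_BATCH_URL, json=PAYLOAD, headers=HEADERS, timeout=10)
        if resp.status_code not in (401, 403):
            return resp  # success, or a non-auth error that retrying will not fix
        time.sleep(delay * attempt)  # back off before the next attempt
    return resp
```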
Investigating - We are currently investigating this issue.
Sep 16, 17:55 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Sep 16, 17:45 UTC
Update - We have mitigated the issue and are monitoring the results.
Sep 16, 17:27 UTC
Update - A recent change to our API routing inadvertently added an authentication requirement to the anonymous route for creating GitHub Apps. We're in the process of fixing the change, but in the interim retrying should eventually succeed.
Sep 16, 17:15 UTC
Investigating - We are currently investigating this issue.
Sep 16, 17:14 UTC
Sep 15, 2025
Resolved - At around 18:45 UTC on Friday, September 12, 2025, a change was deployed that unintentionally affected search index management. As a result, approximately 25% of repositories were temporarily missing from search results.

By 12:45 UTC on Saturday, September 13, most missing repositories were restored from an earlier search index snapshot, and repositories updated between the snapshot and the restoration were reindexed. This backfill was completed at 21:25 UTC.

After these repairs, about 98.5% of repositories were once again searchable. We are performing a full reconciliation of the search index and customers can expect to see records being updated and content becoming searchable for all repos again between now and Sept 25.

NOTE: Users who notice missing or outdated repositories in search results can force reindexing by starring or un-starring the repository. Other repository actions such as adding topics, or updating the repository description, will also result in reindexing. In general, changes to searchable artifacts in GitHub will also update their respective search index in near-real time.
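As an example of forcing reindexing programmatically, the sketch below stars and then un-stars a repository through the REST API's starring endpoints. OWNER, REPO, and the token are placeholders, and note that the un-star also clears any star you already had on the repository.

```python
import os

import requests

# Placeholders: a token with permission to star repositories, and the target repository.
TOKEN = os.environ["GITHUB_TOKEN"]
OWNER, REPO = "OWNER", "REPO"
HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "Accept": "application/vnd.github+json",
}
url = f"https://api.github.com/user/starred/{OWNER}/{REPO}"

# Star, then un-star, the repository; either action should nudge it back into the search index.
requests.put(url, headers=HEADERS, timeout=10).raise_for_status()
requests.delete(url, headers=HEADERS, timeout=10).raise_for_status()
```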

User impact has been mitigated with the exception of the 1.5% of repos that are missing from the search index. The change responsible for the search issue has been reverted, and full reconciliation of the search index is underway, expected to complete by September 23. We have added additional checks to our indexing model to ensure this failure does not happen again. We are also investigating faster repair alternatives.

To avoid resource contention and possible further issues, we are not repairing repositories or organizations individually at this time. No repository data was lost, and other search types were not affected.

Sep 15, 21:01 UTC
Update - Most searchable repositories should again be visible in search results. Up to 1.5% of repositories may still be missing from search results.

Many different actions synchronize the repository state with the search index, so we expect natural recovery for repositories that see more frequent user and API-driven interactions.

A complete index reconciliation is underway to restore stagnant repositories that were deleted from the index. We will update again once we have a clear timeline of when we expect full recovery for those missing search results.

Sep 13, 22:39 UTC
Update - Customers are not seeing repositories they expect to see in search results. We have restored a snapshot of this search index from Fri 12 Sep at 21:00 UTC. Changes made since then will be unavailable while we work to backfill the rest of the search index. Any new changes will be available in near-real time as expected.
Sep 13, 12:49 UTC
Investigating - We are currently investigating this issue.
Sep 13, 12:44 UTC
Resolved - On September 15th between 17:55 and 18:20 UTC, Copilot experienced degraded availability for all features. This was due to a partial deployment of a feature flag to a global rate limiter. The flag triggered behavior that unintentionally rate limited all requests, resulting in 100% of them returning 403 errors. The issue was resolved by reverting the feature flag, which resulted in immediate recovery.

The root cause of the incident was an undetected edge case in our rate limiting logic. The flag was meant to scale down rate limiting for a subset of users, but unintentionally put our rate limiting configuration into an invalid state.

To prevent this from happening again, we have addressed the bug with our rate limiting. We are also adding additional monitors to detect anomalies in our traffic patterns, which will allow us to identify similar issues during future deployments. Furthermore, we are exploring ways to test our rate limit scaling in our internal environment to enhance our pre-production validation process.
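The summary does not include GitHub's rate limiting code, so the sketch below is purely a hypothetical illustration of this class of edge case: a partially deployed flag intended to scale down rate limiting instead evaluates to zero, collapsing the limit and rejecting every request with a 403.

```python
# Hypothetical illustration only -- not GitHub's implementation.
BASE_LIMIT = 1000  # requests allowed per window under normal conditions


def effective_limit(flag_value):
    """A flag meant to scale down rate limiting (allow more requests) for some users.

    During a partial rollout the flag can be absent; treating "absent" as 0
    silently collapses the limit to 0 -- an invalid configuration state.
    """
    scale = flag_value if flag_value is not None else 0  # bug: should default to 1
    return BASE_LIMIT * scale


def handle_request(requests_in_window, flag_value=None):
    if requests_in_window >= effective_limit(flag_value):
        return 403  # with a zero limit, every request is rejected
    return 200


print(handle_request(0))     # 403: even the first request exceeds the collapsed limit
print(handle_request(0, 2))  # 200: with the flag fully deployed the limit is 2000
```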

Sep 15, 18:28 UTC
Investigating - We are currently investigating this issue.
Sep 15, 18:21 UTC
Sep 14, 2025

No incidents reported.

Sep 13, 2025
Sep 12, 2025

No incidents reported.

Sep 11, 2025

No incidents reported.

Sep 10, 2025
Resolved - On September 10, 2025 between 13:00 and 14:15 UTC, Actions users experienced failed jobs and run start delays for Ubuntu 24 and Ubuntu 22 jobs on standard runners in private repositories. Additionally, larger runner customers experienced run start delays for runner groups with private networking configured in the eastus2 region. This was due to an outage in an underlying compute service provider in eastus2. During this period, 1.06% of Ubuntu 24 jobs and 0.16% of Ubuntu 22 jobs failed. Jobs for larger runners using private networking in the eastus2 region were unable to start for the duration of the incident.

We have identified and are working on improvements to our resilience to single partner-region outages for standard runners, so that impact is reduced in similar scenarios in the future.

Sep 10, 14:02 UTC
Update - Actions hosted runners are taking longer to come online, leading to high wait times or job failures.
Sep 10, 13:31 UTC
Investigating - We are investigating reports of degraded performance for Actions.
Sep 10, 13:23 UTC
Sep 9, 2025

No incidents reported.

Sep 8, 2025

No incidents reported.

Sep 7, 2025

No incidents reported.

Sep 6, 2025

No incidents reported.

Sep 5, 2025

No incidents reported.