GitHub Status
All Systems Operational

About This Site

For the status of GitHub Enterprise Cloud - EU, please visit: eu.githubstatus.com

Git Operations: Operational
Webhooks: Operational
Visit www.githubstatus.com for more information: Operational
API Requests: Operational
Issues: Operational
Pull Requests: Operational
Actions: Operational
Packages: Operational
Pages: Operational
Codespaces: Operational
Copilot: Operational
Status key: Operational | Degraded Performance | Partial Outage | Major Outage | Maintenance
Past Incidents
Jan 28, 2025

No incidents reported today.

Jan 27, 2025
Resolved - This incident has been resolved.
Jan 27, 23:41 UTC
Update - Our Audit Log Streaming service is experiencing degradation, but there is no data outage.
Jan 27, 23:32 UTC
Investigating - We are currently investigating this issue.
Jan 27, 23:32 UTC
Jan 26, 2025

No incidents reported.

Jan 25, 2025

No incidents reported.

Jan 24, 2025

No incidents reported.

Jan 23, 2025
Resolved - On January 23, 2025, between 9:49 and 17:00 UTC, the available capacity of large hosted runners was degraded. On average, 26% of jobs requiring large runners had a >5min delay getting a runner assigned. This was caused by the rollback of a configuration change and a latent bug in event processing, which was triggered by the mixed data shape that resulted from the rollback. The processing would reprocess the same events unnecessarily and cause the background job that manages large runner creation and deletion to run out of resources. It would automatically restart and continue processing, but the jobs were not able to keep up with production traffic. We mitigated the impact by using a feature flag to bypass the problematic event processing logic. While these changes had been rolling out in stages over the last few months and had been safely rolled back previously, an unrelated change prevented rollback from causing this problem in earlier stages.

We are reviewing and updating the feature flags in this event processing workflow to ensure that we have high confidence in rollback in all rollout stages. We are also improving observability of the event processing to reduce the time to diagnose and mitigate similar issues going forward.
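
To illustrate the kind of mitigation described above, the sketch below shows how a feature flag can gate a problematic event-reprocessing path so traffic falls back to a simpler code path while a fix is prepared. This is a minimal, hypothetical example: the flag name, the Event type, and the processing functions are made up for illustration and are not GitHub's actual implementation.

    # Hypothetical sketch of flag-gated event processing; all names are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Event:
        id: str
        kind: str      # e.g. "runner_created", "runner_deleted"
        payload: dict

    class FeatureFlags:
        """Toy in-memory flag store; a real system would query a flag service."""
        def __init__(self, flags=None):
            self._flags = flags or {}

        def enabled(self, name: str) -> bool:
            return self._flags.get(name, False)

    def process_runner_event(event: Event, flags: FeatureFlags, seen_ids: set) -> None:
        # Mitigation path: when the bypass flag is on, skip the complex
        # reprocessing logic entirely and apply a simple state update.
        if flags.enabled("bypass_runner_event_reprocessing"):
            apply_simple_state_update(event)
            return
        # Normal path: deduplicate so the same event is not reprocessed
        # repeatedly, the failure mode described in the summary above.
        if event.id in seen_ids:
            return
        seen_ids.add(event.id)
        apply_full_reprocessing(event)

    def apply_simple_state_update(event: Event) -> None:
        print(f"simple update for {event.kind} ({event.id})")

    def apply_full_reprocessing(event: Event) -> None:
        print(f"full reprocessing for {event.kind} ({event.id})")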

Jan 23, 17:27 UTC
Update - We are seeing recovery with the latest mitigation. Queue times for a very small percentage of larger runner jobs are still longer than expected, so we are monitoring those for full recovery before going green.
Jan 23, 17:03 UTC
Update - We are actively applying mitigations to help improve larger runner start times. We are currently seeing delays starting about 25% of larger runner jobs.
Jan 23, 16:25 UTC
Update - We are still actively investigating a slowdown in larger runner assignment and are working to apply additional mitigations.
Jan 23, 15:33 UTC
Update - We're still applying mitigations to unblock queueing of Actions jobs on large runners. We are monitoring for full recovery.
Jan 23, 14:53 UTC
Update - We are applying further mitigations to fix the issues with delayed queuing for Actions jobs in large runners. We continue to monitor for full recovery.
Jan 23, 14:17 UTC
Update - We are investigating further mitigations for queueing Actions jobs in large runners. We continue to watch telemetry and are monitoring for full recovery.
Jan 23, 13:42 UTC
Update - We've applied a mitigation to fix the issues with queuing and running Actions jobs. We are seeing improvements in telemetry and are monitoring for full recovery.
Jan 23, 13:09 UTC
Update - The team continues to apply mitigations for the delays, seen by a small number of customers, in enqueueing some Actions jobs for larger runners. We will continue providing updates on the progress towards full mitigation.
Jan 23, 12:36 UTC
Update - The team continues to apply mitigations for the delays in enqueueing some Actions jobs for larger runners. We will continue providing updates on the progress towards full mitigation.
Jan 23, 12:03 UTC
Update - The team continues to investigate delays in enqueueing some Actions jobs for larger runners. We will continue providing updates on the progress towards mitigation.
Jan 23, 11:31 UTC
Update - The team continues to investigate issues with some Actions jobs having delays in being queued for larger runners. We will continue providing updates on the progress towards mitigation.
Jan 23, 10:58 UTC
Investigating - We are investigating reports of degraded performance for Actions.
Jan 23, 10:25 UTC
Jan 22, 2025

No incidents reported.

Jan 21, 2025

No incidents reported.

Jan 20, 2025

No incidents reported.

Jan 19, 2025

No incidents reported.

Jan 18, 2025

No incidents reported.

Jan 17, 2025

No incidents reported.

Jan 16, 2025
Resolved - This incident has been resolved.
Jan 16, 09:40 UTC
Update - The incident has been resolved, but please note that affected pull requests will self-repair when any commit is pushed to their base or head branch. If you encounter problems with a rebase and merge, either click the "update branch" button or push a commit to the PR's branch.
Jan 16, 09:39 UTC
Update - We have mitigated the incident, and any new pull request rebase merges should be recovered. We are working on recovery steps for any pull requests that attempted to merge during this incident.
Jan 16, 09:18 UTC
Update - We believe we have found the root cause and are in the process of verifying the mitigation.
Jan 16, 08:37 UTC
Update - We are continuing to investigate.
Jan 16, 07:38 UTC
Update - We are still experiencing failures for rebase merges in pull requests and are continuing to investigate.
Jan 16, 07:05 UTC
Investigating - We are investigating reports of degraded performance for Pull Requests.
Jan 16, 06:22 UTC
Jan 15, 2025

No incidents reported.

Jan 14, 2025
Resolved - On January 14, 2025, between 19:13 UTC and 21:20 UTC, the Codespaces service was degraded, leading to connection failures with running codespaces (a 7.6% connection failure rate during the degradation). Users with bad connections could not use impacted codespaces until they were stopped and restarted.

This was caused by stale connections, left behind after a deployment in an upstream dependency, that the Codespaces service continued to hand out to clients. The incident self-mitigated as new connections replaced the stale ones. We are coordinating to ensure connection stability during future deployments of this nature.
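
As a general illustration of the fix direction, the sketch below shows a connection pool that recycles connections created before an upstream dependency's last deployment instead of handing them back to clients. The class and field names are hypothetical and are not Codespaces internals.

    # Hypothetical sketch: drop pooled connections that predate an upstream deploy.
    import time
    from dataclasses import dataclass, field

    @dataclass
    class Connection:
        target: str
        created_at: float = field(default_factory=time.time)
        healthy: bool = True

    class ConnectionPool:
        def __init__(self, target: str):
            self.target = target
            self._pool = []                  # idle connections
            self.last_upstream_deploy = 0.0  # updated from a deploy signal

        def note_upstream_deploy(self) -> None:
            """Record that the upstream dependency was redeployed."""
            self.last_upstream_deploy = time.time()

        def acquire(self) -> Connection:
            # Discard stale or unhealthy connections rather than giving them
            # to clients; stale connections were the failure mode above.
            self._pool = [
                c for c in self._pool
                if c.healthy and c.created_at >= self.last_upstream_deploy
            ]
            return self._pool.pop() if self._pool else Connection(self.target)

        def release(self, conn: Connection) -> None:
            self._pool.append(conn)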

Jan 14, 21:20 UTC
Update - We are beginning to see recovery for users connecting to Codespaces. Any users continuing to see impact should attempt a restart.
Jan 14, 21:19 UTC
Update - We are investigating reports of timeouts for Codespaces users creating new or connecting to existing Codespaces. We will continue to keep users updated on progress towards mitigation.
Jan 14, 20:55 UTC
Investigating - We are investigating reports of degraded performance for Codespaces.
Jan 14, 20:55 UTC
Resolved - On January 13, 2025, between 23:35 UTC and 00:24 UTC, all Git operations were unavailable due to a configuration change that caused our internal load balancer to drop requests between services that Git relies upon.

We mitigated the incident by rolling back the configuration change.

We are improving our monitoring and deployment practices to reduce our time to detection and automated mitigation for issues like this in the future.
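
As an illustration of the monitoring-and-rollback practice mentioned above, the sketch below applies a configuration change, samples a request success rate, and automatically restores the previous configuration if the rate regresses. Every name and threshold here is made up for illustration; this is not GitHub's deployment tooling.

    # Hypothetical sketch of a health-gated config rollout with automatic rollback.
    from typing import Callable, Dict

    def rollout_config(
        apply: Callable[[Dict[str, str]], None],
        new_config: Dict[str, str],
        old_config: Dict[str, str],
        success_rate: Callable[[], float],
        min_success_rate: float = 0.999,
    ) -> bool:
        """Apply new_config, check request health, and roll back on regression."""
        apply(new_config)
        # In practice you would wait for a bake period before sampling metrics.
        if success_rate() < min_success_rate:
            apply(old_config)   # automated mitigation instead of a manual rollback
            return False
        return True

    # Example usage with stubbed-in monitoring:
    kept = rollout_config(
        apply=lambda cfg: print("applied", cfg),
        new_config={"route": "git-backend-v2"},
        old_config={"route": "git-backend-v1"},
        success_rate=lambda: 0.95,   # simulated regression triggers rollback
    )
    print("rollout kept" if kept else "rolled back")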

Jan 14, 00:28 UTC
Update - We've identified a cause of the degraded Git operations, which may affect other GitHub services that rely upon Git. We're working to remediate.
Jan 14, 00:15 UTC
Update - Actions is experiencing degraded performance. We are continuing to investigate.
Jan 13, 23:57 UTC
Update - Pages is experiencing degraded performance. We are continuing to investigate.
Jan 13, 23:46 UTC
Investigating - We are investigating reports of degraded availability for Git Operations.
Jan 13, 23:44 UTC