All Systems Operational

About This Site

For the status of GitHub Enterprise Cloud - EU, please visit: eu.githubstatus.com

Git Operations Operational
Webhooks Operational
Visit www.githubstatus.com for more information Operational
API Requests Operational
Issues Operational
Pull Requests Operational
Actions Operational
Packages Operational
Pages Operational
Codespaces Operational
Copilot Operational
Past Incidents
Dec 17, 2024

No incidents reported today.

Dec 16, 2024

No incidents reported.

Dec 15, 2024

No incidents reported.

Dec 14, 2024

No incidents reported.

Dec 13, 2024

No incidents reported.

Dec 12, 2024

No incidents reported.

Dec 11, 2024

No incidents reported.

Dec 10, 2024

No incidents reported.

Dec 9, 2024

No incidents reported.

Dec 8, 2024

No incidents reported.

Dec 7, 2024

No incidents reported.

Dec 6, 2024
Resolved - Upon further investigation, the degradation in migrations in the EU was caused by an internal configuration issue, which was promptly identified and resolved. No customer migrations were impacted during this time; the issue only affected GitHub Enterprise Cloud - EU and had no impact on GitHub.com. The service is now fully operational. We are following up by improving our processes for these internal configuration changes to prevent a recurrence, and by ensuring that incidents affecting GitHub Enterprise Cloud - EU are reported on https://eu.githubstatus.com/.
Dec 6, 17:17 UTC
Update - Migrations are failing for a subset of users in the EU region with data residency. We believe we have resolved the issue and are monitoring for recovery.
Dec 6, 17:17 UTC
Investigating - We are currently investigating this issue.
Dec 6, 16:58 UTC
Dec 5, 2024

No incidents reported.

Dec 4, 2024
Resolved - This incident has been resolved.
Dec 4, 19:27 UTC
Update - Pull Requests is operating normally.
Dec 4, 19:26 UTC
Update - Pull Requests is experiencing degraded performance. We are continuing to investigate.
Dec 4, 19:21 UTC
Update - Issues is operating normally.
Dec 4, 19:20 UTC
Update - API Requests is operating normally.
Dec 4, 19:18 UTC
Update - Webhooks is operating normally.
Dec 4, 19:17 UTC
Update - We have identified the change causing timeouts for users across multiple services. This change has been rolled back and we are seeing recovery. We will continue to monitor for complete recovery.
Dec 4, 19:11 UTC
Update - Issues is experiencing degraded performance. We are continuing to investigate.
Dec 4, 19:07 UTC
Update - API Requests is experiencing degraded performance. We are continuing to investigate.
Dec 4, 19:05 UTC
Update - Webhooks is experiencing degraded performance. We are continuing to investigate.
Dec 4, 19:05 UTC
Investigating - We are currently investigating this issue.
Dec 4, 18:58 UTC
Dec 3, 2024
Resolved - On December 3rd, between 23:29 and 23:43 UTC, Pull Requests experienced a brief outage, and teams have confirmed the issue to be resolved. Due to the brevity of the incident, it was not publicly statused at the time; however, an RCA will be conducted and shared in due course.
Dec 3, 23:30 UTC
Resolved - On December 3, 2024, between 19:35 UTC and 20:05 UTC, API Requests, Actions, Pull Requests and Issues were degraded. Web and API requests for Pull Requests experienced a 3.5% error rate and Issues had a 1.2% error rate. The highest impact was for users who experienced errors while creating and commenting on Pull Requests and Issues. Actions had a 3.3% error rate in jobs and delays on some updates during this time.

This was due to an erroneous database credential change impacting write access to Issues and Pull Requests data. We mitigated the incident by reverting the credential change at 19:52 UTC. We continued to monitor service recovery before resolving the incident at 20:05 UTC.

There are a few improvements we are making in response to this. We are investing in safeguards to the change management process in order to prevent erroneous database credential changes. Additionally, the initial rollback attempt was unsuccessful, which led to a longer time to mitigate. We were able to revert through an alternative method and are updating our playbooks to document this mitigation strategy.

Dec 3, 20:05 UTC
Update - Pull Requests is operating normally.
Dec 3, 20:05 UTC
Update - Actions is operating normally.
Dec 3, 20:04 UTC
Update - API Requests is operating normally.
Dec 3, 20:02 UTC
Update - We have taken mitigating actions and are starting to see recovery but are continuing to monitor and ensure full recovery. Some users may still see errors.
Dec 3, 19:59 UTC
Update - Some users will experience problems with certain features of pull requests, actions, issues and other areas. We are aware of the issue, know the cause, and are working on a mitigation.
Dec 3, 19:54 UTC
Investigating - We are investigating reports of degraded performance for API Requests, Actions and Pull Requests.
Dec 3, 19:48 UTC
Resolved - Between Dec 3 03:35 UTC and 04:35 UTC, availability of large hosted runners for Actions was degraded due to failures in background VM provisioning jobs. This was a shorter recurrence of the issue that occurred the previous day. Users would see workflows queued waiting for a large runner. On average, 13.5% of all workflows requiring large runners over the course of the incident were affected, peaking at 46% of requests. Standard and Mac runners were not affected.

Following the Dec 1 incident, we had disabled non-critical paths in the provisioning job and believed that would eliminate any impact while we understood and addressed the timeouts. Unfortunately, the timeouts were a symptom of broader job health issues, so those changes did not prevent this second occurrence the following day. We now understand that other jobs on these agents had issues that resulted in them hanging and consuming available job agent capacity. The reduced capacity led to saturation of the remaining agents and significant performance degradation in the running jobs.

In addition to the immediate improvements shared in the previous incident summary, we have initiated regular recycles of all agents in this area while we continue to address the issues in both the jobs themselves and the resiliency of the agents. We also continue to improve our detection to ensure these delays are detected automatically.

Dec 3, 04:39 UTC
Update - We saw a recurrence of the large hosted runner incident (https://www.githubstatus.com/incidents/qq1m7mqcl6zk) from 12/1/2024. We've applied the same mitigation and are seeing improvements. We will continue to work on a long-term solution.
Dec 3, 04:38 UTC
Update - We are investigating reports of degraded performance for Hosted Runners.
Dec 3, 04:16 UTC
Investigating - We are currently investigating this issue.
Dec 3, 04:11 UTC