GitHub Status

Disruption with some GitHub services
Update - We are still actively investigating this issue.
Jul 28, 2025 - 18:11 UTC
Update - Git Operations is experiencing degraded performance. We are continuing to investigate.
Jul 28, 2025 - 17:30 UTC
Update - We are investigating errors affecting some archive and raw file downloads. Users may experience rate limit warnings or server errors until this is resolved.
Jul 28, 2025 - 17:17 UTC
Investigating - We are currently investigating this issue.
Jul 28, 2025 - 16:50 UTC

About This Site

For the status of GitHub Enterprise Cloud - EU, please visit: eu.githubstatus.com
For the status of GitHub Enterprise Cloud - Australia, please visit: au.githubstatus.com
For the status of GitHub Enterprise Cloud - US, please visit: us.githubstatus.com

Git Operations: Partial Outage
Webhooks: Operational
Visit www.githubstatus.com for more information: Operational
API Requests: Operational
Issues: Operational
Pull Requests: Operational
Actions: Operational
Packages: Operational
Pages: Operational
Codespaces: Operational
Copilot: Operational
Jul 28, 2025

Unresolved incident: Disruption with some GitHub services.

Jul 27, 2025

No incidents reported.

Jul 26, 2025

No incidents reported.

Jul 25, 2025

No incidents reported.

Jul 24, 2025

No incidents reported.

Jul 23, 2025
Resolved - On July 23rd, 2025, from approximately 14:30 to 16:30 UTC, GitHub Actions experienced delayed job starts for workflows in private repos using Ubuntu-24 standard hosted runners. This was due to resource provisioning failures in one of our datacenter regions. During this period, approximately 2% of Ubuntu-24 hosted runner jobs on private repos were delayed. Other hosted runners, self-hosted runners, and public repo workflows were unaffected.

To mitigate the issue, additional worker capacity was added from a different datacenter region at 15:35 UTC and further increased at 16:00 UTC. By 16:30 UTC, job queues were healthy and service was operating normally. Since the incident, we have deployed changes to improve how regional health is accounted for when allocating new runners, and we are investigating further improvements to our automated capacity scaling logic and manual overrides to prevent a recurrence.
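
For illustration only, a health-weighted allocator of the shape described above might look like the following sketch. The Region type, its fields, and the weighting are hypothetical, not our internal scheduler.

# Hypothetical sketch: weight new runner allocation by recent regional
# health so a degraded region stops receiving jobs before queues back up.
# All names and thresholds here are illustrative, not GitHub internals.
from dataclasses import dataclass
import random

@dataclass
class Region:
    name: str
    capacity: int        # runners this region can still provision
    failure_rate: float  # provisioning failures over a sliding window, 0.0 to 1.0

def pick_region(regions: list[Region]) -> Region:
    # Score each region by free capacity discounted by its failure rate;
    # a region failing every provision gets weight 0 and is never chosen.
    weights = [r.capacity * (1.0 - r.failure_rate) for r in regions]
    if sum(weights) == 0:
        raise RuntimeError("no healthy capacity in any region")
    return random.choices(regions, weights=weights, k=1)[0]

regions = [
    Region("region-a", capacity=500, failure_rate=0.95),  # the degraded region
    Region("region-b", capacity=300, failure_rate=0.01),
]
print(pick_region(regions).name)  # almost always region-b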

Jul 23, 16:30 UTC
Update - We are applying mitigations to increase Actions Hosted Runners capacity, and are starting to see recovery. We’re monitoring to ensure continued stability.
Jul 23, 16:11 UTC
Update - We're investigating delays provisioning Actions Hosted Runners. Customers may see delays of over 5 minutes before jobs start.
Jul 23, 15:36 UTC
Investigating - We are investigating reports of degraded performance for Actions.
Jul 23, 15:31 UTC
Jul 22, 2025
Resolved - On July 22nd, 2025, between 17:58 and 18:35 UTC, the Copilot service experienced degraded availability for Claude Sonnet 4 requests. 4.7% of Claude Sonnet 4 requests failed during this time. No other models were impacted. The issue was caused by an upstream problem affecting our ability to serve requests.

We mitigated by rerouting capacity and monitoring recovery. We are improving detection and mitigation to reduce future impact.
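
As a rough illustration of the rerouting pattern (not our production code), the sketch below falls back from a failing primary upstream to an alternate with a short backoff; the Upstream class and provider names are hypothetical.

# Hypothetical sketch of the "reroute capacity" mitigation: try the
# primary upstream model endpoint and fail over to an alternate after
# repeated errors. The Upstream class and names are illustrative.
import time

class Upstream:
    def __init__(self, name: str, healthy: bool = True):
        self.name = name
        self.healthy = healthy

    def complete(self, prompt: str) -> str:
        if not self.healthy:
            raise ConnectionError(f"{self.name} unavailable")
        return f"[{self.name}] response to: {prompt}"

def complete_with_failover(prompt: str, upstreams: list, retries: int = 2) -> str:
    last_err = None
    for upstream in upstreams:
        for attempt in range(retries):
            try:
                return upstream.complete(prompt)
            except ConnectionError as err:
                last_err = err
                time.sleep(0.1 * (attempt + 1))  # brief backoff before retrying
    raise RuntimeError("all upstreams failed") from last_err

primary = Upstream("provider-primary", healthy=False)  # simulated outage
fallback = Upstream("provider-fallback")
print(complete_with_failover("hello", [primary, fallback]))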

Jul 22, 18:49 UTC
Update - We are experiencing degraded availability for the Claude Sonnet 4 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.

Other models are available and working as expected.


Jul 22, 18:35 UTC
Investigating - We are investigating reports of degraded performance for Copilot.
Jul 22, 18:35 UTC
Jul 21, 2025
Resolved - On July 21st, 2025, between 07:00 UTC and 09:45 UTC, the API, Codespaces, Copilot, Issues, Package Registry, Pull Requests and Webhook services were degraded and experienced dropped requests and increased latency. At the peak of this incident (a two-minute period around 07:00 UTC), error rates reached 11% before falling shortly after. Average web request load times rose to 1 second during the same window. Traffic then gradually recovered, but error rates and latency remained slightly elevated until the end of the incident.

This incident was triggered by a kernel bug that crashed some of our load balancers when a scheduled process ran after a kernel upgrade. To mitigate the incident, we halted the rollout of the upgrade and rolled back the impacted instances. We are working to ensure the affected kernel version is fully removed from our fleet. As a precaution, we temporarily paused the scheduled process to prevent it from triggering the bug on the affected kernel. We also tuned our alerting so we can more quickly detect and mitigate future incidents where we experience a sudden drop in load-balancing capacity.
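
A minimal sketch of an alert of this shape follows; the window, threshold, and class are hypothetical rather than our actual monitoring configuration.

# Hypothetical sketch of the tuned alert: fire on a sharp relative drop
# in healthy load balancers within a short window, instead of waiting
# for an absolute floor. Thresholds and names are illustrative.
from collections import deque

class CapacityDropAlert:
    def __init__(self, window: int = 5, max_drop: float = 0.2):
        self.samples = deque(maxlen=window)  # recent healthy-instance counts
        self.max_drop = max_drop             # alert when we lose this fraction

    def observe(self, healthy_lbs: int) -> bool:
        # Record a sample and compare it against the recent peak.
        self.samples.append(healthy_lbs)
        peak = max(self.samples)
        if peak == 0:
            return False
        return (peak - healthy_lbs) / peak >= self.max_drop

alert = CapacityDropAlert(window=5, max_drop=0.2)
for count in [40, 40, 39, 31, 28]:  # crashing instances drop out of rotation
    if alert.observe(count):
        print(f"ALERT: healthy load balancers fell to {count}")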

Jul 21, 09:48 UTC
Update - API Requests and Codespaces are operating normally.
Jul 21, 09:47 UTC
Update - Copilot is operating normally.
Jul 21, 09:45 UTC
Update - Webhooks is operating normally.
Jul 21, 09:44 UTC
Update - Mitigations have been applied and we are seeing recovery. We are continuing to closely monitor the situation to ensure complete recovery has been achieved.
Jul 21, 09:41 UTC
Update - Issues is operating normally.
Jul 21, 09:34 UTC
Update - Packages is operating normally.
Jul 21, 09:19 UTC
Update - We are currently implementing mitigations for this issue.
Jul 21, 09:00 UTC
Update - Copilot is experiencing degraded performance. We are continuing to investigate.
Jul 21, 08:48 UTC
Update - We continue to investigate reports of degraded performance and intermittent timeouts across GitHub.com.
Jul 21, 08:27 UTC
Update - Pull Requests is operating normally.
Jul 21, 08:10 UTC
Update - We're continuing to investigate reports of degraded performance and intermittent timeouts across GitHub.com.
Jul 21, 07:46 UTC
Update - Pull Requests is experiencing degraded performance. We are continuing to investigate.
Jul 21, 07:25 UTC
Update - Packages is experiencing degraded performance. We are continuing to investigate.
Jul 21, 07:23 UTC
Update - API Requests is experiencing degraded performance. We are continuing to investigate.
Jul 21, 07:22 UTC
Update - Codespaces is experiencing degraded performance. We are continuing to investigate.
Jul 21, 07:19 UTC
Investigating - We are investigating reports of degraded performance for Issues and Webhooks.
Jul 21, 07:15 UTC
Resolved - On July 21, 2025, between 07:20 UTC and 08:00 UTC, the Copilot service experienced degraded availability for Claude 4 requests. 2% of Claude 4 requests failed during this time. The issue was caused by an upstream problem affecting our ability to serve requests.
We mitigated by rerouting capacity and monitoring recovery. We are improving detection and mitigation to reduce future impact.

Jul 21, 07:50 UTC
Investigating - We are currently investigating this issue.
Jul 21, 07:36 UTC
Jul 20, 2025

No incidents reported.

Jul 19, 2025

No incidents reported.

Jul 18, 2025

No incidents reported.

Jul 17, 2025

No incidents reported.

Jul 16, 2025
Resolved - On July 16, 2025, between 05:20 UTC and 08:30 UTC, the Copilot service experienced degraded availability for Claude 3.7 requests. Around 10% of Claude 3.7 requests failed during this time. The issue was caused by an upstream problem affecting our ability to serve requests.
We mitigated by rerouting capacity and monitoring recovery. We are improving detection and mitigation to reduce future impact.

Jul 16, 08:58 UTC
Update - We have seen recovery on our provider's side but have not yet confirmed if the issue is fully resolved. We will update our status in the next 20 minutes as we know more.
Jul 16, 08:45 UTC
Update - We are experiencing degraded availability for the Claude 3.7 Sonnet model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.
Jul 16, 08:21 UTC
Investigating - We are currently investigating this issue.
Jul 16, 08:16 UTC
Jul 15, 2025
Resolved - On July 15th, 2025, between 19:55 and 19:58 UTC, requests to GitHub had a high failure rate, while successful requests suffered up to 10x the expected latency.

Browser-based requests saw a failure rate of up to 20%, GraphQL had up to a 9% failure rate, and 2% of REST API requests failed. Any downstream service dependent on GitHub APIs was also affected during this short window.

The failure stemmed from a database query change; our deployment tooling automatically detected the issue and rolled the change back. We will continue to invest in automated detection and rollback with the goal of minimizing time to recovery.
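
For illustration, a rollback check of this shape might look like the sketch below; the function and thresholds are hypothetical, not our deployment tooling.

# Hypothetical sketch of an automated-rollback check: compare the error
# rate after a change ships against the pre-deploy baseline, and revert
# when it regresses. Function name and thresholds are illustrative.
def should_rollback(baseline_error_rate: float,
                    current_error_rate: float,
                    max_ratio: float = 2.0,
                    min_absolute: float = 0.01) -> bool:
    # Require both an absolute floor (to ignore noise at tiny rates) and
    # a relative regression against the baseline before reverting.
    if current_error_rate < min_absolute:
        return False
    return current_error_rate >= max_ratio * max(baseline_error_rate, 1e-6)

# Browser requests failing at up to 20% against a normally sub-1%
# baseline trips a check like this immediately:
print(should_rollback(baseline_error_rate=0.005, current_error_rate=0.20))  # True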

Jul 15, 20:00 UTC
Jul 14, 2025

No incidents reported.