GitHub Status
All Systems Operational

About This Site

For the status of GitHub Enterprise Cloud - EU, please visit: eu.githubstatus.com
For the status of GitHub Enterprise Cloud - Australia, please visit: au.githubstatus.com

Git Operations: Operational
Webhooks: Operational
API Requests: Operational
Issues: Operational
Pull Requests: Operational
Actions: Operational
Packages: Operational
Pages: Operational
Codespaces: Operational
Copilot: Operational
Status key: Operational | Degraded Performance | Partial Outage | Major Outage | Maintenance
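
For reference, the component statuses above can also be read programmatically. The sketch below assumes the standard Statuspage v2 summary endpoint exposed by this site (https://www.githubstatus.com/api/v2/summary.json) and its usual response shape; it is an illustration, not part of the status page itself.

    # Minimal sketch: fetch overall and per-component status from the public
    # Statuspage v2 summary endpoint (assumed standard Statuspage response shape).
    import json
    import urllib.request

    SUMMARY_URL = "https://www.githubstatus.com/api/v2/summary.json"

    with urllib.request.urlopen(SUMMARY_URL, timeout=10) as resp:
        summary = json.load(resp)

    # Overall banner, e.g. "All Systems Operational".
    print(summary["status"]["description"])

    # Per-component status, mirroring the list above (Git Operations, Webhooks, ...).
    for component in summary["components"]:
        print(f'{component["name"]}: {component["status"]}')
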
Past Incidents
Mar 11, 2025

No incidents reported today.

Mar 10, 2025

No incidents reported.

Mar 9, 2025

No incidents reported.

Mar 8, 2025
Resolved - This incident has been resolved.
Mar 8, 18:11 UTC
Update - Actions is operating normally.
Mar 8, 18:11 UTC
Update - Actions run start delays are mitigated. Actions runs that failed will need to be re-run, and impacted Pages updates will need their deployments re-run (a re-run sketch via the REST API follows this incident's timeline).
Mar 8, 18:10 UTC
Update - Pages is operating normally.
Mar 8, 18:00 UTC
Update - We are investigating Actions run start delays: about 40% of runs are not starting within five minutes, and Pages deployments on GitHub-hosted runners are impacted.
Mar 8, 17:50 UTC
Investigating - We are investigating reports of degraded performance for Actions and Pages.
Mar 8, 17:45 UTC
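
As noted in the 18:10 UTC update, runs and Pages deployments that failed during this window need to be re-run. A minimal sketch using the documented REST API endpoints for re-running failed jobs and requesting a Pages build is below; the owner, repository, run ID, and token handling are placeholders, not values from this incident.

    # Minimal sketch: re-run failed Actions jobs and request a fresh Pages build
    # through the GitHub REST API. OWNER, REPO, and RUN_ID are placeholder values.
    import os
    import urllib.request

    TOKEN = os.environ["GITHUB_TOKEN"]  # token with access to the repository
    OWNER, REPO, RUN_ID = "example-org", "example-repo", 1234567890

    def post(path: str) -> int:
        req = urllib.request.Request(
            f"https://api.github.com{path}",
            method="POST",
            headers={
                "Authorization": f"Bearer {TOKEN}",
                "Accept": "application/vnd.github+json",
            },
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status

    # Re-run only the failed jobs of an affected workflow run.
    print(post(f"/repos/{OWNER}/{REPO}/actions/runs/{RUN_ID}/rerun-failed-jobs"))

    # Request a new Pages build for a branch-built site; Pages sites deployed
    # via Actions are instead recovered by re-running their workflow as above.
    print(post(f"/repos/{OWNER}/{REPO}/pages/builds"))
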
Mar 7, 2025
Resolved - On March 7, 2025, from 09:30 UTC to 11:07 UTC, we experienced a networking event that disrupted connectivity to our search infrastructure, impacting about 25% of search queries and indexing attempts. Searches for PRs, Issues, Actions workflow runs, Packages, Releases, and other products were impacted, resulting in failed requests or stale data. The connectivity issue self-resolved after 90 minutes. The backlog of indexing jobs was fully processed soon after, and queries to all indexes immediately returned to normal throughput.

We are working with our cloud provider to identify the root cause and are researching additional layers of redundancy to reduce customer impact in the future for issues like this one. We are also exploring mitigation strategies for faster resolution.

Mar 7, 11:24 UTC
Update - We are continuing to investigate a degraded experience when searching for issues, pull requests, and Actions workflow runs.
Mar 7, 10:54 UTC
Update - Actions is experiencing degraded performance. We are continuing to investigate.
Mar 7, 10:27 UTC
Update - Searches for issues and pull requests may be slower than normal and may time out for some users.
Mar 7, 10:12 UTC
Update - Pull Requests is experiencing degraded performance. We are continuing to investigate.
Mar 7, 10:06 UTC
Update - Issues is experiencing degraded performance. We are continuing to investigate.
Mar 7, 10:05 UTC
Investigating - We are currently investigating this issue.
Mar 7, 10:03 UTC
Mar 6, 2025

No incidents reported.

Mar 5, 2025

No incidents reported.

Mar 4, 2025

No incidents reported.

Mar 3, 2025
Resolved - On March 3, 2025, between 04:07 UTC and 09:36 UTC, various GitHub services were degraded, with an average error rate of 0.03% and a peak error rate of 9%. This issue impacted web requests, API requests, and Git operations.

This incident was triggered because a network node in one of GitHub's datacenter sites partially failed, resulting in silent packet drops for traffic served by that site. At 09:22 UTC, we identified the failing network node, and at 09:36 UTC we addressed the issue by removing the faulty network node from production.

In response to this incident, we are improving our monitoring capabilities to identify and respond to similar silent errors more effectively in the future.

Mar 3, 05:31 UTC
Update - We have seen recovery across our services and impact is mitigated.
Mar 3, 05:30 UTC
Update - Git Operations is operating normally.
Mar 3, 05:20 UTC
Update - Webhooks is operating normally.
Mar 3, 05:20 UTC
Update - We are investigating intermittent connectivity issues between our backend and databases and will provide further updates as we have them. The current impact is elevated latency when using our services.
Mar 3, 04:54 UTC
Update - We are seeing intermittent timeouts across our various services. We are currently investigating and will provide updates.
Mar 3, 04:23 UTC
Update - Webhooks is experiencing degraded performance. We are continuing to investigate.
Mar 3, 04:21 UTC
Investigating - We are investigating reports of degraded performance for API Requests, Git Operations, and Issues.
Mar 3, 04:20 UTC
Mar 2, 2025

No incidents reported.

Mar 1, 2025
Completed - The scheduled maintenance has been completed.
Mar 1, 02:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Feb 28, 17:00 UTC
Scheduled - Codespaces will be undergoing maintenance in Europe and Southeast Asia from 17:00 UTC Friday February 28 to 02:00 UTC Saturday March 1. Maintenance will begin in North Europe at 17:00 UTC Friday February 28, followed by Southeast Asia, concluding in UK South. Each region will take 2-3 hours to complete.

During this time period, users may experience connectivity issues with new and existing Codespaces.

Please ensure that any uncommitted changes that you may need during the maintenance window are committed and pushed (see the sketch after this notice). Codespaces with any uncommitted changes will be accessible as usual once maintenance is complete.

Thank you for your patience as we work to improve our systems.

Feb 27, 21:09 UTC
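
For anyone who wants to automate the pre-maintenance check described above, a minimal sketch is below. It is meant to run inside a codespace, uses only standard git commands via Python's subprocess module, and assumes the current branch already has an upstream to push to.

    # Minimal sketch: commit and push pending work inside a codespace ahead of
    # the maintenance window. Assumes git is available and an upstream is set.
    import subprocess

    def git(*args: str) -> str:
        return subprocess.run(
            ["git", *args], check=True, capture_output=True, text=True
        ).stdout

    # Any output from --porcelain means the working tree has uncommitted changes.
    if git("status", "--porcelain").strip():
        git("add", "--all")
        git("commit", "-m", "Checkpoint before Codespaces maintenance")

    git("push")
    print("Working tree committed and pushed.")
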
Feb 28, 2025
Resolved - On February 28th, 2025, between 05:49 UTC and 06:55 UTC, a newly deployed background job caused increased load on GitHub’s primary database hosts, resulting in connection pool exhaustion. This led to degraded performance, manifesting as increased latency for write operations and elevated request timeout rates across multiple services.

The incident was mitigated by halting execution of the problematic background job and disabling the feature flag controlling the job execution. To prevent similar incidents in the future, we are collaborating on a plan to improve our production signals to better detect and respond to query performance issues.

Feb 28, 06:55 UTC
Update - Issues and Pull Requests are experiencing degraded performance. We are continuing to investigate.
Feb 28, 06:29 UTC
Investigating - We are currently investigating this issue.
Feb 28, 06:12 UTC
Feb 27, 2025
Resolved - On February 27, 2025, between 11:30 UTC and 12:22 UTC, Actions experienced degraded performance, leading to delays in workflow runs. On average, 5% of Actions workflow runs were delayed by 31 minutes. The delays were caused by updates in a dependent service that led to failures in Redis connectivity in one region. We mitigated the incident by failing over the impacted service and re-routing the service’s traffic out of that region. We are working to improve monitoring and processes of failover to reduce our time to detection and mitigation of issues like this one in the future.
Feb 27, 12:22 UTC
Update - The team is confident that recovery is complete. Thank you for your patience as this issue was investigated.
Feb 27, 12:22 UTC
Update - Our mitigations have rolled out successfully, and we have seen Actions run start times recover to within the expected range. Users should see Actions runs working normally.

We will keep this incident open for a short time while we continue to validate these results.

Feb 27, 12:16 UTC
Update - We have identified the cause of the delays in starting Actions runs.

Our team is working to roll out mitigations and we hope to see recovery as these take effect in our systems over the next 10-20 minutes.

Further updates as we have more information.

Feb 27, 12:01 UTC
Update - We have been seeing an increase in run start delays since 11:04 UTC. This is impacting ~3% of Actions runs at this time.

The team is working to understand the causes of this and to mitigate impact. We will continue to update as we have more information.

Feb 27, 11:39 UTC
Update - Actions is experiencing degraded performance. We are continuing to investigate.
Feb 27, 11:31 UTC
Investigating - We are currently investigating this issue.
Feb 27, 11:28 UTC
Feb 26, 2025
Resolved - On February 26, 2025, between 14:51 UTC and 17:19 UTC, GitHub Packages experienced a service degradation, leading to billing-related failures when uploading and downloading Packages. During this period, the billing usage and budget pages were also inaccessible. Initially, we reported that GitHub Actions was affected, but we later determined that the impact was limited to jobs interacting with Packages services, while jobs that did not upload or download Packages remained unaffected.

The incident occurred due to an error in newly introduced code, which caused containers to get into a bad state, ultimately leading to billing API calls failing with 503 errors. We mitigated the issue by rolling back the contributing change. In response to this incident, we are enhancing error handling, improving the resiliency of our billing API calls to minimize customer impact, and improving change rollout practices to catch these potential issues prior to deployment.

Feb 26, 17:19 UTC
Update - Actions and Packages are operating normally.
Feb 26, 17:19 UTC
Update - We're continuing to investigate issues with the billing interfaces and the retrieval of packages that are causing Actions workflow run failures.
Feb 26, 16:41 UTC
Update - We’re investigating issues related to billing and the retrieval of packages that are causing Actions workflow run failures.
Feb 26, 16:17 UTC
Update - We're investigating issues related to the Billing interfaces and Packages downloads failing for enterprise customers.
Feb 26, 15:56 UTC
Investigating - We are investigating reports of degraded performance for Actions and Packages.
Feb 26, 15:51 UTC
Feb 25, 2025
Resolved - On February 25, 2025, between 14:25 UTC and 16:44 UTC, email and web notifications experienced delivery delays. At the peak of the incident, ~10% of all notifications took over 10 minutes to be delivered, with the remaining ~90% delivered within 5-10 minutes. This was due to insufficient capacity in worker pools as a result of increased load during peak hours.

We also encountered delivery delays for a small number of webhooks, with deliveries delayed by up to 2.5 minutes.

We mitigated the incident by scaling out the service to meet the demand.

The increase in capacity gives us extra headroom, and we are working to improve our capacity planning to prevent issues like this occurring in the future.

Feb 25, 16:50 UTC
Update - Web and email notifications are caught up, resolving the incident.
Feb 25, 16:49 UTC
Update - We're continuing to investigate delayed web and email notifications.
Feb 25, 16:16 UTC
Update - We're continuing to investigate delayed web and email notifications.
Feb 25, 15:43 UTC
Update - We're investigating delays in web and email notifications impacting all customers.
Feb 25, 15:13 UTC
Investigating - We are currently investigating this issue.
Feb 25, 15:12 UTC
Resolved - On February 25, 2025 between 13:40 UTC and 15:45 UTC the Claude 3.7 Sonnet model for GitHub Copilot Chat experienced degraded performance. During the impact, occasional requests to Claude would result in an immediate error to the user. This was due to upstream errors with one of our infrastructure providers, which have since been mitigated.

We are working with our infrastructure providers to reduce time to detection and implement additional failover options, to mitigate issues like this one in the future.

Feb 25, 15:45 UTC
Update - We have disabled Claude 3.7 Sonnet models in Copilot Chat and across IDE integrations (VSCode, Visual Studio, JetBrains) due to an issue with our provider.

Users may still see these models as available for a brief period but we recommend switching to a different model. Other models were not impacted and are available.

Once our provider has resolved the issues impacting Claude 3.7 Sonnet models, we will re-enable them.

Feb 25, 15:25 UTC
Update - Copilot is experiencing degraded performance. We are continuing to investigate.
Feb 25, 14:44 UTC
Update - We are currently experiencing partial availability for the Claude 3.7 Sonnet and Claude 3.7 Thinking models in Copilot Chat, VSCode and other Copilot products. This is due to problems with an upstream provider. We are working to resolve these issues and will update with more information as it is made available.

Other Copilot models are available and working as expected.

Feb 25, 14:43 UTC
Investigating - We are currently investigating this issue.
Feb 25, 14:40 UTC
Resolved - On February 25, 2025, between 00:17 UTC and 01:08 UTC, GitHub Packages experienced a service degradation, leading to failures uploading and downloading packages, along with increased latency for all requests to GitHub Packages registry. At peak impact, about 14% of uploads and downloads failed, and all Packages requests were delayed by an average of 7 seconds. The incident was caused by the rollout of a database configuration change that resulted in a degradation in database performance. We mitigated the incident by rolling back the contributing change and failing over the database. In response to this incident, we are tuning database configurations and resolving a source of deadlocks. We are also redistributing certain workloads to read replicas to reduce latency and enhance overall database performance.
Feb 25, 01:08 UTC
Update - We have confirmed recovery for the majority of our systems. Some systems may still experience higher than normal latency as they catch up.
Feb 25, 01:08 UTC
Update - We have identified the issue impacting packages and have rolled out a fix. We are seeing signs of recovery and continue to monitor the situation.
Feb 25, 00:41 UTC
Investigating - We are investigating reports of degraded performance for Packages.
Feb 25, 00:17 UTC