
All Systems Operational

About This Site

For the status of GitHub Enterprise Cloud - EU, please visit: eu.githubstatus.com
For the status of GitHub Enterprise Cloud - Australia, please visit: au.githubstatus.com
For the status of GitHub Enterprise Cloud - US, please visit: us.githubstatus.com

Git Operations: Operational
Webhooks: Operational
API Requests: Operational
Issues: Operational
Pull Requests: Operational
Actions: Operational
Packages: Operational
Pages: Operational
Codespaces: Operational
Copilot: Operational
Status legend: Operational, Degraded Performance, Partial Outage, Major Outage, Maintenance
Aug 16, 2025

No incidents reported today.

Aug 15, 2025

No incidents reported.

Aug 14, 2025
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Aug 14, 18:37 UTC
Update - The NPM registry has now returned to normal functioning.
Aug 14, 18:37 UTC
Update - The NPM registry service is currently experiencing intermittent availability issues. Other package registries should be unaffected. Investigations are ongoing.
Aug 14, 18:11 UTC
Investigating - We are investigating reports of degraded performance for Packages
Aug 14, 18:06 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Aug 14, 06:23 UTC
Update - We are investigating reports of issues with service(s): Actions. We will continue to keep users updated on progress towards mitigation.
Aug 14, 05:42 UTC
Investigating - We are investigating reports of degraded performance for Actions
Aug 14, 05:03 UTC
Aug 13, 2025

No incidents reported.

Aug 12, 2025
Resolved - On August 12, 2025, between 13:30 UTC and 17:14 UTC, GitHub search was in a degraded state. Users experienced inaccurate or incomplete results, failures to load certain pages (like Issues, Pull Requests, Projects, and Deployments), and broken components like Actions workflow and label filters.

Most user impact occurred between 14:00 UTC and 15:30 UTC, when up to 75% of search queries failed, and updates to search results were delayed by up to 100 minutes.

The incident was triggered by intermittent connectivity issues between our load balancers and search hosts. Retry logic initially masked these problems, but the retry queues eventually overwhelmed the load balancers, causing widespread query failures. The query failures were mitigated at 15:30 UTC after we throttled our search indexing pipeline to reduce load and stabilize retries. The connectivity failures were resolved at 17:14 UTC after the automated reboot of a search host, which allowed the rest of the system to recover.

We have improved internal monitors and playbooks, and tuned our search cluster load balancer to further mitigate the recurrence of this failure mode. We continue to invest in resolving the underlying connectivity issues.
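The pattern described above is common: per-request retries hide intermittent failures until the retry traffic itself becomes the overload. The sketch below is a minimal, hypothetical illustration in Python (not GitHub's code; the failure rate, per-request retry limit, and shared retry budget are invented parameters) of how capping total retries bounds that amplification, at the cost of shedding some requests.

# Hedged illustration (not GitHub's code): when a backend flaps, per-request
# retries multiply total load; a bounded, shared retry budget caps that amplification.
import random

def call_backend(failure_rate):
    """Simulate one request to an intermittently failing search host."""
    return random.random() > failure_rate

def simulate(requests, failure_rate, retry_budget):
    """Send `requests` queries, retrying failures while the shared budget lasts."""
    attempts = succeeded = 0
    budget = retry_budget
    for _ in range(requests):
        tries = 0
        while True:
            attempts += 1
            tries += 1
            if call_backend(failure_rate):
                succeeded += 1
                break
            if budget == 0 or tries > 5:   # give up: shed this request instead of queueing
                break
            budget -= 1
    return {"attempts": attempts, "succeeded": succeeded,
            "amplification": round(attempts / requests, 2)}

if __name__ == "__main__":
    random.seed(1)
    # With 40% of calls failing, an effectively unlimited budget sends roughly 1.6x the
    # offered traffic to the backend; a small shared budget keeps it close to 1x.
    print("unlimited retry budget:", simulate(1000, 0.4, retry_budget=10**9))
    print("small retry budget:   ", simulate(1000, 0.4, retry_budget=100))

Throttling the indexing pipeline during the incident served a similar purpose: it reduced the total work offered to the search hosts so retries could stabilize.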

Aug 12, 17:56 UTC
Update - Service availability has been mostly restored, but some users will continue to see increased request latency and stale search results. We are still working towards full recovery.
Aug 12, 17:07 UTC
Update - Service availability has been mostly restored, but increased load/query latency and stale search results persist. We continue to work towards full mitigation.
Aug 12, 16:33 UTC
Update - We are seeing partial recovery in service availability, but still see inconsistent experiences and stale search data across services. Investigation and mitigations are underway.
Aug 12, 15:48 UTC
Update - We are experiencing increased latency in our API layers and inconsistently degraded experiences when loading or querying issues, pull requests, labels, packages, releases, workflow runs, projects, and repositories, among others. Investigation is underway.
Aug 12, 15:20 UTC
Update - We are investigating reports of degraded performance in services backed by search. The team continues to investigate why requests are failing to reach our search clusters.
Aug 12, 14:53 UTC
Update - Packages is experiencing degraded performance. We are continuing to investigate.
Aug 12, 14:30 UTC
Investigating - We are investigating reports of degraded performance for API Requests, Actions, Issues and Pull Requests
Aug 12, 14:12 UTC
Aug 11, 2025
Resolved - On August 11, 2025, from 18:41 to 18:57 UTC, GitHub customers experienced errors and increased latency when loading GitHub’s web interface. A configuration change intended to improve our UI deployment system caused a surge in requests to a backend datastore. The resulting spike in connection attempts saturated the datastore's connection backlog, leading to intermittent failures to serve required UI content and elevated error rates for frontend requests.

The incident was mitigated by reverting the configuration, which restored normal service.

Following mitigation, we are evaluating improvements to our alerting thresholds and exploring architectural changes to reduce load to this datastore and improve the resilience of our UI delivery pipeline.
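As a rough illustration of the failure mode: when each frontend request opens a fresh connection, a request surge becomes a connection surge against the datastore's listen backlog. The sketch below is a hypothetical, minimal Python example (the class names and pool size are invented; GitHub's actual datastore and client are not specified here) of a fixed-size client-side connection pool that keeps the number of open connections constant regardless of request volume.

# Hedged sketch (not GitHub's architecture): a fixed-size client-side connection
# pool. Reusing connections keeps a surge of requests from becoming a surge of
# new connection attempts against the datastore's listen backlog.
import queue
import threading

class FakeConnection:
    """Stand-in for a datastore connection; counts how many are ever opened."""
    opened = 0
    def __init__(self):
        FakeConnection.opened += 1
    def query(self, _sql):
        return "ok"

class ConnectionPool:
    """At most `size` connections exist; callers block until one is free."""
    def __init__(self, size):
        self._conns = queue.Queue()
        for _ in range(size):
            self._conns.put(FakeConnection())

    def query(self, sql):
        conn = self._conns.get()       # blocks until a connection is available
        try:
            return conn.query(sql)
        finally:
            self._conns.put(conn)      # return the connection for reuse

if __name__ == "__main__":
    pool = ConnectionPool(size=10)
    threads = [threading.Thread(target=pool.query, args=("SELECT 1",))
               for _ in range(200)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # 200 concurrent requests, but only 10 connections ever opened.
    print("connections opened:", FakeConnection.opened)

The same idea applies regardless of the specific datastore: bounding client-side connections, or queuing requests behind a pool, keeps a traffic spike from exhausting the server's connection backlog.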

Aug 11, 18:57 UTC
Update - Logged out users may see intermittent errors when loading github.com webpages. Investigation is ongoing.
Aug 11, 18:53 UTC
Investigating - We are currently investigating this issue.
Aug 11, 18:51 UTC
Aug 10, 2025

No incidents reported.

Aug 9, 2025

No incidents reported.

Aug 8, 2025

No incidents reported.

Aug 7, 2025

No incidents reported.

Aug 6, 2025

No incidents reported.

Aug 5, 2025
Resolved - At 15:33 UTC on August 5, 2025, we initiated a production database migration to drop a column from a table backing pull request functionality. While the column was no longer in direct use, our ORM continued to reference the dropped column in a subset of pull request queries. As a result, there were elevated error rates across pushes, webhooks, notifications, and pull requests with impact peaking at approximately 4% of all web and REST API traffic.

We mitigated the issue by deploying a change that instructed the ORM to ignore the removed column. Most affected services recovered by 16:13 UTC. However, that fix was initially applied only to our largest production environment; an update to some of our custom and canary environments did not pick up the fix, which triggered a secondary incident affecting ~0.1% of pull request traffic. That incident was fully resolved by 19:45 UTC.

While migrations have protections such as progressive roll-outs that first target validation environments and acknowledgement gates, this incident identified a gap in application monitoring that, had it been in place, would have stopped the rollout when impact was observed. We will add additional automation and safeguards to prevent future incidents of this kind without requiring human intervention. We are also already working on a way to streamline some types of changes across environments, which would have prevented the second incident from occurring.
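The failure mode here, an ORM whose mapping still lists a column the database no longer has, is easy to reproduce. The sketch below is a hypothetical Python illustration (the table, the "legacy_flag" column, and the toy mapper are invented for this example; GitHub's actual ORM is not specified in this report). It shows reads breaking after the column is dropped and recovering once the mapping is told to ignore it; ALTER TABLE ... DROP COLUMN requires SQLite 3.35 or newer.

# Hedged, minimal reproduction (not GitHub's schema or ORM): a query layer that
# still selects a dropped column fails until the mapping ignores that column.
import sqlite3

class PullRequestMapper:
    """Toy ORM mapping: builds SELECT statements from a declared column list."""
    columns = ["id", "title", "legacy_flag"]   # "legacy_flag" is an invented column
    ignored_columns = set()

    @classmethod
    def fetch_all(cls, conn):
        cols = [c for c in cls.columns if c not in cls.ignored_columns]
        return conn.execute(f"SELECT {', '.join(cols)} FROM pull_requests").fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pull_requests (id INTEGER, title TEXT, legacy_flag INTEGER)")
conn.execute("INSERT INTO pull_requests VALUES (1, 'Fix bug', 0)")

# The migration drops the column while the application's mapping still references it.
conn.execute("ALTER TABLE pull_requests DROP COLUMN legacy_flag")
try:
    PullRequestMapper.fetch_all(conn)
except sqlite3.OperationalError as err:
    print("query fails:", err)                 # "no such column: legacy_flag"

# Mitigation analogous to the one described above: tell the ORM to ignore the column.
PullRequestMapper.ignored_columns = {"legacy_flag"}
print("query succeeds:", PullRequestMapper.fetch_all(conn))

A common safeguard for this class of change is ordering: deploy the code change that stops referencing the column everywhere first, confirm nothing still selects it, and only then run the migration that drops it.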

Aug 5, 19:46 UTC
Update - Pull Requests is operating normally.
Aug 5, 19:46 UTC
Update - We continue to investigate issues with PRs. Impact remains limited to less than 2% of users.
Aug 5, 19:20 UTC
Update - We continue to investigate issues with PRs impacting less than 2% of customers.
Aug 5, 18:49 UTC
Update - We continue to investigate issues with PRs impacting less than 2% of customers.
Aug 5, 18:23 UTC
Update - We're seeing issues related to PRs and are investigating. Less than 2% of users are impacted.
Aug 5, 18:07 UTC
Investigating - We are investigating reports of degraded performance for Pull Requests
Aug 5, 17:53 UTC
Resolved - At 15:33 UTC on August 5, 2025, we initiated a production database migration to drop a column from a table backing pull request functionality. While the column was no longer in direct use, our ORM continued to reference the dropped column in a subset of pull request queries. As a result, there were elevated error rates across pushes, webhooks, notifications, and pull requests with impact peaking at approximately 4% of all web and REST API traffic.

We mitigated the issue by deploying a change that instructed the ORM to ignore the removed column. Most affected services recovered by 16:13 UTC. However, that fix was initially applied only to our largest production environment; an update to some of our custom and canary environments did not pick up the fix, which triggered a secondary incident affecting ~0.1% of pull request traffic. That incident was fully resolved by 19:45 UTC.

While migrations have protections such as progressive roll-outs that first target validation environments and acknowledgement gates, this incident identified a gap in application monitoring that, had it been in place, would have stopped the rollout when impact was observed. We will add additional automation and safeguards to prevent future incidents of this kind without requiring human intervention. We are also already working on a way to streamline some types of changes across environments, which would have prevented the second incident from occurring.

Aug 5, 16:14 UTC
Update - Actions is operating normally.
Aug 5, 16:14 UTC
Update - Pull Requests is operating normally.
Aug 5, 16:14 UTC
Update - Issues is operating normally.
Aug 5, 16:14 UTC
Update - Webhooks is operating normally.
Aug 5, 16:14 UTC
Update - Git Operations is operating normally.
Aug 5, 16:13 UTC
Update - We have fully mitigated this issue and all services are operating normally.
Aug 5, 16:13 UTC
Update - Webhooks is experiencing degraded performance. We are continuing to investigate.
Aug 5, 16:08 UTC
Update - Pull Requests is experiencing degraded performance. We are continuing to investigate.
Aug 5, 16:06 UTC
Update - We have identified a change that was made in the Pull Request area of GitHub. Users may be unable to use certain pull request and issues features and may see some webhooks impacted. We have identified the issue, applied a mitigation, and are starting to see recovery, but will continue to monitor and post updates as we have them.
Aug 5, 16:05 UTC
Update - Webhooks is experiencing degraded availability. We are continuing to investigate.
Aug 5, 15:56 UTC
Update - Git Operations is experiencing degraded performance. We are continuing to investigate.
Aug 5, 15:55 UTC
Update - Pull Requests is experiencing degraded availability. We are continuing to investigate.
Aug 5, 15:54 UTC
Update - Actions is experiencing degraded performance. We are continuing to investigate.
Aug 5, 15:51 UTC
Update - Pull Requests is experiencing degraded performance. We are continuing to investigate.
Aug 5, 15:51 UTC
Investigating - We are investigating reports of degraded performance for Issues and Webhooks
Aug 5, 15:42 UTC
Aug 4, 2025

No incidents reported.

Aug 3, 2025

No incidents reported.

Aug 2, 2025

No incidents reported.