
All Systems Operational

About This Site

Check GitHub Enterprise Cloud status by region:
- Australia: au.githubstatus.com
- EU: eu.githubstatus.com
- Japan: jp.githubstatus.com
- US: us.githubstatus.com

Component status over the past 90 days:
- Git Operations: Operational (99.92% uptime)
- Webhooks: Operational (99.88% uptime)
- Visit www.githubstatus.com for more information: Operational
- API Requests: Operational (99.91% uptime)
- Issues: Operational (99.76% uptime)
- Pull Requests: Operational (99.77% uptime)
- Actions: Operational (99.46% uptime)
- Packages: Operational (99.96% uptime)
- Pages: Operational (99.9% uptime)
- Codespaces: Operational (99.68% uptime)
- Copilot: Operational (99.6% uptime)
Feb 27, 2026
Resolved - Between February 26, 2026 at 22:10 UTC and February 27, 2026 at 05:50 UTC, the repository browsing UI was degraded and users were unable to load pages for files and directories with non-ASCII characters (including Japanese, Chinese, and other non-Latin scripts). On average, the error rate was 0.014% of requests to the service, peaking at 0.06%. Affected users saw 404 errors when navigating to repository directories and files with non-ASCII names. This was due to a code change that altered how file and directory names were processed, which caused incorrectly formatted data to be stored in an application cache.

We mitigated the incident by deploying a fix that invalidated the affected cache entries and progressively rolling it out across all production environments.

We are working to improve our pre-production testing to cover non-ASCII character handling, establish better cache invalidation mechanisms, and enhance our monitoring to detect this type of failure mode earlier, to reduce our time to detection and mitigation of issues like this one in the future.
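As a rough illustration of the pre-production coverage described above, the sketch below shows one way a test could catch mishandled non-ASCII path data, assuming cache keys are derived from user-visible paths; the function and key format are hypothetical, not GitHub's internal code.

```python
# Hypothetical sketch: normalizing non-ASCII repository paths before they are
# used as cache keys, so "日本語.md" and its percent-encoded form map to the
# same well-formed entry. The cache key layout is invented for this example.
import unicodedata
from urllib.parse import unquote


def cache_key_for_path(repo_id: int, raw_path: str) -> str:
    # Decode any percent-encoding coming from the browser, then normalize
    # Unicode so equivalent forms (NFC vs NFD) produce one canonical key.
    decoded = unquote(raw_path)
    canonical = unicodedata.normalize("NFC", decoded)
    return f"tree:{repo_id}:{canonical}"


# A pre-production check for exactly this failure mode: non-ASCII directory
# and file names must round-trip to a stable key.
assert cache_key_for_path(1, "docs/%E6%97%A5%E6%9C%AC%E8%AA%9E.md") == \
       cache_key_for_path(1, "docs/日本語.md")
```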

Feb 27, 06:04 UTC
Update - We have cleared all caches and everything is operating normally.
Feb 27, 06:03 UTC
Update - We have mitigated the issue but are working on invalidating caches in order to fix the issue for all impacted repos.
Feb 27, 05:21 UTC
Update - We have performed a mitigation but some repositories may still see issues. We are working on a full mitigation.
Feb 27, 04:17 UTC
Update - We are looking into recent code changes to mitigate the error loading some code view pages.
Feb 27, 03:28 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Feb 27, 03:08 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Feb 27, 00:04 UTC
Update - Webhooks is experiencing degraded performance. We are continuing to investigate.
Feb 27, 00:02 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Feb 27, 00:01 UTC
Feb 26, 2026
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Feb 26, 11:06 UTC
Update - Copilot is operating normally.
Feb 26, 11:06 UTC
Investigating - We are investigating reports of degraded performance for Copilot
Feb 26, 10:22 UTC
Feb 25, 2026
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Feb 25, 16:44 UTC
Investigating - We are investigating reports of degraded performance for Copilot
Feb 25, 16:38 UTC
Feb 24, 2026
Resolved - Between 2026-02-23 19:10 UTC and 2026-02-24 00:46 UTC, all lexical code search queries on GitHub.com and in the code search API were significantly slowed, and during this incident between 5% and 10% of search queries timed out. This was caused by a single customer who had created a network of hundreds of orchestrated accounts, which searched with a uniquely expensive search query. This query concentrated load on a single hot shard within the search index, slowing down all queries. After we identified the source of the load and stopped the traffic, latency returned to normal.

To avoid this situation occurring again in the future, we are making a number of improvements to our systems, including: improved rate limiting that accounts for highly skewed load on hot shards, improved system resilience for when a small number of shards time out, improved tooling to recognize abusive actors, and capabilities that will allow us to shed load on a single shard in emergencies.
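As a rough illustration of rate limiting that accounts for skewed per-shard load, the sketch below tracks query cost per shard and sheds load once a hot shard exceeds its budget; the class, thresholds, and shard names are invented for the example and are not GitHub's search infrastructure.

```python
# Illustrative only: per-shard load accounting that throttles queries once a
# single "hot" shard exceeds its budget, so one expensive query pattern
# cannot slow the whole search cluster.
import time
from collections import defaultdict


class ShardBudget:
    def __init__(self, max_cost_per_minute: float = 10_000.0):
        self.max_cost = max_cost_per_minute
        self.window_start = time.monotonic()
        self.spent = defaultdict(float)  # shard_id -> cost used this window

    def allow(self, shard_id: str, estimated_cost: float) -> bool:
        now = time.monotonic()
        if now - self.window_start > 60:
            # Start a fresh one-minute accounting window.
            self.spent.clear()
            self.window_start = now
        if self.spent[shard_id] + estimated_cost > self.max_cost:
            return False  # shed load on this shard; caller degrades or rejects
        self.spent[shard_id] += estimated_cost
        return True


budget = ShardBudget()
if not budget.allow(shard_id="code-search-07", estimated_cost=350.0):
    print("shard over budget: rejecting or degrading this query")
```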

Feb 24, 00:46 UTC
Update - We have identified a cause for the latency and timeouts and have implemented a fix. We are observing initial recovery now.
Feb 24, 00:38 UTC
Update - Customers using code search continue to see increased latency and timeout errors. We are working to mitigate issues on the affected shard.
Feb 23, 23:10 UTC
Update - Elevated latency and timeouts for code search are isolated to a single shard experiencing elevated CPU. We are taking steps to isolate and mitigate the affected shard.
Feb 23, 22:22 UTC
Update - Elevated latency and timeouts for code search are isolated to a single shard experiencing elevated CPU. We are continuing to investigate the cause and possible mitigation steps.
Feb 23, 21:18 UTC
Update - We are continuing to investigate elevated latency and timeouts for code search.
Feb 23, 20:33 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Feb 23, 19:59 UTC
Feb 23, 2026
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Feb 23, 21:30 UTC
Update - Some customers are seeing timeout errors when searching for issues or pull requests. The team is currently investigating a fix.
Feb 23, 21:24 UTC
Investigating - We are investigating reports of degraded performance for Issues and Pull Requests
Feb 23, 21:16 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Feb 23, 17:03 UTC
Investigating - We are investigating reports of degraded performance for Actions
Feb 23, 16:17 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Feb 23, 16:19 UTC
Update - Copilot is operating normally.
Feb 23, 15:59 UTC
Update - The issues with our upstream model provider have been resolved, and Haiku 4.5 is once again available in Copilot Chat and across IDE integrations.

We will continue monitoring to ensure stability, but mitigation is complete.

Feb 23, 15:59 UTC
Update - Our provider has recovered and we are no longer seeing errors, but we are awaiting confirmation from them that the issue will not regress before we return the status to green.
Feb 23, 15:13 UTC
Update - We are experiencing degraded availability for the Haiku 4.5 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.

Other models are available and working as expected.

Feb 23, 14:56 UTC
Investigating - We are investigating reports of degraded performance for Copilot
Feb 23, 14:56 UTC
Feb 22, 2026

No incidents reported.

Feb 21, 2026

No incidents reported.

Feb 20, 2026
Resolved - On February 20, 2026, between 17:45 UTC and 20:41 UTC, 4.2% of workflows running on GitHub Larger Hosted Runners were delayed by an average of 18 minutes. Standard, Mac, and Self-Hosted Runners were not impacted.

The delays were caused by communication failures between backend services for one deployment of larger runners. Those failures prevented expected automated scaling and provisioning of larger hosted runner capacity within that deployment. This was mitigated when the affected infrastructure was recycled, larger runner pools in the affected deployment successfully scaled up, and queued jobs processed.

We are working to improve the time to detect and diagnose this class of failures and improve the performance of recovery mechanisms for this degraded network state. In addition, we have architectural changes underway that will enable other deployments to pick up work in similar situations, so there is no customer impact due to deployment-specific infrastructure issues like this.
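The deployment-failover direction mentioned above could look roughly like the sketch below, which routes queued jobs to a healthy deployment when one deployment's backend becomes unreachable; the deployment names, data shapes, and health signal are hypothetical, not the actual runner control plane.

```python
# Hypothetical illustration: pick a healthy larger-runner deployment so queued
# jobs are not stranded behind a deployment whose backend cannot scale.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Deployment:
    name: str
    backend_reachable: bool
    idle_runners: int


def pick_deployment(deployments: list[Deployment]) -> Optional[Deployment]:
    healthy = [d for d in deployments if d.backend_reachable]
    if not healthy:
        return None  # nothing can take the job; escalate instead of queueing silently
    # Prefer the healthy deployment with the most spare capacity.
    return max(healthy, key=lambda d: d.idle_runners)


pools = [
    Deployment("larger-runners-east", backend_reachable=False, idle_runners=0),
    Deployment("larger-runners-west", backend_reachable=True, idle_runners=12),
]
target = pick_deployment(pools)
print(f"routing queued job to {target.name}" if target else "no healthy deployment")
```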

Feb 20, 20:41 UTC
Update - The team continues to investigate issues with some larger runner jobs being queued for a long time, though we are seeing improvement in queue times. We will continue providing updates on progress towards mitigation.
Feb 20, 20:36 UTC
Update - We are investigating reports of degraded performance for Larger Hosted Runners
Feb 20, 20:01 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Feb 20, 20:00 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Feb 20, 11:41 UTC
Update - The issues with our upstream model provider have been resolved, and GPT 5.1 Codex is once again available in Copilot Chat and across IDE integrations (VS Code, Visual Studio, JetBrains).
We will continue monitoring to ensure stability, but mitigation is complete.

Feb 20, 11:19 UTC
Update - We are still experiencing degraded availability for the GPT 5.1 Codex model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.

Feb 20, 10:36 UTC
Update - We are experiencing degraded availability for the GPT 5.1 Codex model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.
Other models are available and working as expected.

Feb 20, 10:02 UTC
Investigating - We are investigating reports of degraded performance for Copilot
Feb 20, 10:02 UTC
Feb 19, 2026

No incidents reported.

Feb 18, 2026
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Feb 18, 19:20 UTC
Update - We have seen significant recovery in the merge queue and are continuing to monitor for any other degraded services.
Feb 18, 19:18 UTC
Update - We are investigating reports of issues with merge queue. We will continue to keep users updated on progress towards mitigation.
Feb 18, 18:27 UTC
Update - Pull Requests is experiencing degraded performance. We are continuing to investigate.
Feb 18, 18:26 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Feb 18, 18:25 UTC
Feb 17, 2026
Resolved - On February 17, 2026, between 17:07 UTC and 19:06 UTC, some customers experienced intermittent authentication failures affecting GitHub Actions, parts of Git operations, and other authentication-dependent requests. On average, the Actions error rate was approximately 0.6% of affected API requests. The SSH read error rate for Git operations was approximately 0.29%, while SSH write and HTTP operations were not impacted. During the incident, a subset of requests failed because token verification lookups were intermittently failing, leading to 401 errors and degraded reliability for impacted workflows.

The issue was caused by elevated replication lag in the token verification database cluster. In the days leading up to the incident, the token store’s write volume grew enough to exceed the cluster’s available capacity. Under peak load, older replica hosts were unable to keep up, replica lag increased, and some token lookups became inconsistent, resulting in intermittent authentication failures.

We mitigated the incident by adjusting the database replica topology to route reads away from lagging replicas and by bringing additional replica capacity online. Service health improved progressively after the change, with GitHub Actions recovering by approximately 19:00 UTC and the incident resolved at 19:06 UTC.

We are working to prevent recurrence by improving the resilience and scalability of our underlying token verification data stores to better handle continued growth.
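A minimal sketch of the read-routing mitigation described above, assuming replication lag is observable per replica; the host names, lag threshold, and data shapes are illustrative only, not GitHub's topology management code.

```python
# Hypothetical sketch: send token verification reads only to replicas whose
# replication lag is under a threshold, falling back to the primary when no
# replica qualifies.
MAX_LAG_SECONDS = 2.0


def choose_read_host(replicas: list[dict], primary: str) -> str:
    # Each replica dict is assumed to look like {"host": "...", "lag_s": 1.3}.
    candidates = [r for r in replicas if r["lag_s"] <= MAX_LAG_SECONDS]
    if not candidates:
        return primary  # consistent reads, at the cost of more primary load
    return min(candidates, key=lambda r: r["lag_s"])["host"]


replicas = [
    {"host": "tokens-replica-1", "lag_s": 0.4},
    {"host": "tokens-replica-2", "lag_s": 45.0},  # lagging; skip it
]
print(choose_read_host(replicas, primary="tokens-primary"))
```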

Feb 17, 19:06 UTC
Update - We are continuing to monitor the mitigation and continuing to see signs of recovery.
Feb 17, 18:55 UTC
Update - We have rolled out a mitigation and are seeing signs of recovery and are continuing to monitor.
Feb 17, 18:18 UTC
Update - We have identified a low rate of authentication failures affecting GitHub App server to server tokens, GitHub Actions authentication tokens, and git operations. Some customers may experience intermittent API request failures when using these tokens. We believe we've identified the cause and are working to mitigate impact.
Feb 17, 17:46 UTC
Investigating - We are investigating reports of degraded performance for Actions and Git Operations
Feb 17, 17:46 UTC
Feb 16, 2026

No incidents reported.

Feb 15, 2026

No incidents reported.

Feb 14, 2026

No incidents reported.

Feb 13, 2026
Resolved - On February 13, 2026, between 21:46 UTC and 22:58 UTC (72 minutes), the GitHub file upload service was degraded and users uploading from a web browser on GitHub.com were unable to upload files to repositories, create release assets, or upload manifest files. During the incident, successful upload completions dropped by ~85% from baseline levels. This was due to a code change that inadvertently modified browser request behavior and violated CORS (Cross-Origin Resource Sharing) policy requirements, causing upload requests to be blocked before reaching the upload service.

We mitigated the incident by reverting the code change that introduced the issue.

We are working to improve automated testing for browser-side request changes and to add monitoring/automated safeguards for upload flows to reduce our time to detection and mitigation of similar issues in the future.
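To illustrate the failure mode, the sketch below mirrors the check a browser performs against an upload service's CORS policy during preflight; the allowed origins, methods, headers, and the extra custom header are made up for the example and do not reflect GitHub's actual configuration.

```python
# Hedged illustration: a browser upload request is blocked when the headers it
# sends are not covered by the service's CORS policy. Policy values are
# invented for this example.
ALLOWED_ORIGINS = {"https://github.com"}
ALLOWED_METHODS = {"POST", "PUT"}
ALLOWED_HEADERS = {"content-type", "authorization"}


def preflight_allows(origin: str, method: str, request_headers: list[str]) -> bool:
    # Mirrors the browser-side check: every requested header must appear in
    # Access-Control-Allow-Headers, and origin/method must be permitted.
    return (
        origin in ALLOWED_ORIGINS
        and method in ALLOWED_METHODS
        and all(h.lower() in ALLOWED_HEADERS for h in request_headers)
    )


# A code change that starts sending an extra custom header (here a made-up
# "X-Upload-Trace") fails preflight until the policy is updated or the
# change is reverted.
print(preflight_allows("https://github.com", "POST", ["Content-Type"]))
print(preflight_allows("https://github.com", "POST", ["Content-Type", "X-Upload-Trace"]))
```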

Feb 13, 22:58 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Feb 13, 22:30 UTC