Update - We identified an issue causing increased errors when accessing Pull Requests. The mitigation is being applied across our infrastructure and we will continue to provide updates as the mitigation rolls out.
Mar 31, 2026 - 17:16 UTC
Update - Latency and timeouts for requests related to pull requests are recovering, though 500 errors remain elevated. While we continue to investigate, we are applying a mitigation and expect further recovery once it is applied.
Mar 31, 2026 - 16:35 UTC
Update - We are continuing to investigate increased 500 errors affecting GitHub services. You may experience intermittent failures when using Pull Requests and other features. We are actively working to identify and resolve the underlying cause.
Mar 31, 2026 - 16:15 UTC
Update - We are investigating increased 500 errors affecting GitHub services. You may experience intermittent failures when using Pull Requests and other features. We are actively working to identify and resolve the underlying cause.
Mar 31, 2026 - 15:39 UTC
Update - We are seeing a higher than average number of 500s due to timeouts across GitHub services. We have a potential mitigation in flight and are continuing to investigate.
Mar 31, 2026 - 15:06 UTC
Investigating - We are investigating reports of degraded performance for Pull Requests
Mar 31, 2026 - 15:05 UTC

About This Site

Check GitHub Enterprise Cloud status by region:
- Australia: au.githubstatus.com
- EU: eu.githubstatus.com
- Japan: jp.githubstatus.com
- US: us.githubstatus.com

Current component status (uptime over the past 90 days):
- Git Operations: Operational (99.78% uptime)
- Webhooks: Operational (99.64% uptime)
- API Requests: Operational (99.92% uptime)
- Issues: Operational (99.68% uptime)
- Pull Requests: Partial Outage (99.69% uptime)
- Actions: Operational (99.33% uptime)
- Packages: Operational (99.97% uptime)
- Pages: Operational (99.92% uptime)
- Codespaces: Operational (99.61% uptime)
- Copilot: Operational (99.61% uptime)
Mar 31, 2026
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 31, 15:10 UTC
Monitoring - The degradation has been mitigated. We are monitoring to ensure stability.
Mar 31, 15:01 UTC
Update - We have applied mitigations to a data store related to billing reports, and are seeing partial recovery to billing report generation. We continue to monitor for full recovery.
Mar 31, 14:59 UTC
Update - We are seeing a high number of 500s due to timeouts across GitHub services. We are redeploying some of our core services and expect this to allow us to recover.
Mar 31, 14:56 UTC
Update - We're continuing to see high failure rates on billing report generation, and are working on mitigations for a data store related to billing reports.
Mar 31, 14:39 UTC
Update - We're seeing issues related to metered billing reports, intermittently affecting metered usage graphs and reports on the billing page. We have identified an issue with a data store, and are working on mitigations.
Mar 31, 13:56 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Mar 31, 13:47 UTC
Mar 30, 2026
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 30, 13:25 UTC
Update - The degradation has been mitigated. We are monitoring to ensure stability.
Mar 30, 13:25 UTC
Monitoring - The degradation affecting Actions and Pull Requests has been mitigated. We are monitoring to ensure stability.
Mar 30, 13:20 UTC
Investigating - We are investigating reports of degraded performance for Actions and Pull Requests
Mar 30, 13:02 UTC
Mar 29, 2026

No incidents reported.

Mar 28, 2026

No incidents reported.

Mar 27, 2026
Resolved - On March 27, 2026, from 02:30 to 04:56 UTC, a misconfiguration in our rate limiting system caused users on Copilot Free, Student, Pro, and Pro+ plans to experience unexpected rate limit errors. The configuration that was incorrectly applied was intended solely for internal staff testing of rate-limiting experiences. Copilot Business and Copilot Enterprise accounts were not affected.

During this period, affected users received error messages instructing them to retry after a certain time. Approximately 32% of active Free users, 35% of active Student users, 46% of active Pro users, and 66% of active Pro+ users were affected.

After identifying the root cause, we reverted the change and restored the expected rate limits. We are reviewing our deployment and validation processes to help ensure configurations used for internal testing cannot be inadvertently applied to production environments.

Mar 27, 05:00 UTC
Mar 26, 2026

No incidents reported.

Mar 25, 2026

No incidents reported.

Mar 24, 2026
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 24, 20:56 UTC
Update - We are investigating elevated error rates affecting multiple GitHub services including Actions, Issues, Pull Requests, Webhooks, Codespaces, and login functionality. Some users may have experienced errors when accessing these features. Most services are now showing signs of recovery. We'll post another update by 21:00 UTC.
Mar 24, 20:38 UTC
Update - Issues is experiencing degraded performance. We are continuing to investigate.
Mar 24, 20:23 UTC
Update - Pull Requests is experiencing degraded performance. We are continuing to investigate.
Mar 24, 20:23 UTC
Update - Webhooks is experiencing degraded performance. We are continuing to investigate.
Mar 24, 20:20 UTC
Investigating - We are investigating reports of degraded performance for Actions
Mar 24, 20:18 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 24, 19:51 UTC
Update - We are experiencing degraded availability from Azure Teams APIs, which is impacting notifications from GitHub to Microsoft Teams. We are awaiting resolution from Azure.
Mar 24, 18:50 UTC
Update - We are experiencing degraded availability from Azure APIs, which is impacting notifications from GitHub to Microsoft Teams. We are working with Azure to resolve the issue.
Mar 24, 17:43 UTC
Update - We have found an issue impacting notifications from GitHub to Microsoft Teams. We are working on a mitigation and will keep users updated on our progress.
Mar 24, 17:09 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Mar 24, 16:59 UTC
Mar 23, 2026

No incidents reported.

Mar 22, 2026
Resolved - On March 22, 2026, between 09:05 UTC and 10:02 UTC, users may have experienced intermittent errors and increased latency when performing Git HTTP read operations. On average, the error rate was 3.84%, and it peaked at 15.55% of requests to the service. The issue was caused by elevated latency in an internal authentication service within one of our regional clusters. We mitigated the issue by redirecting traffic away from the affected cluster at 09:39 UTC, after which error rates returned to normal. The incident was fully resolved at 10:02 UTC.

We are working to scale the authentication service and reduce our time to detection and mitigation of issues like this one in the future.

Mar 22, 10:02 UTC
Update - We are investigating intermittently high latency and errors from Git operations.
Mar 22, 09:27 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Mar 22, 09:08 UTC
Mar 21, 2026

No incidents reported.

Mar 20, 2026
Resolved - On March 19, 2026, between 01:05 UTC and 02:52 UTC, and again on March 20, 2026, between 00:42 UTC and 01:58 UTC, the Copilot Coding Agent service was degraded and users were unable to start new Copilot Agent sessions or view existing ones. During the first incident, the average error rate was ~53% and peaked at ~93% of requests to the service. During the second incident, the average error rate was ~99% and peaked at ~100% of requests, with significant retry amplification. Both incidents were caused by the same underlying system authentication issue, which prevented the service from connecting to its backing datastore.

We mitigated each incident by rotating the affected credentials, which restored connectivity and returned error rates to normal. The mitigation time was 01:24. The second occurrence was due to an incomplete remediation of the first.

We are implementing automated monitoring for credential lifecycle events and improving operational processes to reduce our time to detection and mitigation of issues like this one in the future.

Mar 20, 01:58 UTC
Update - We are rolling out our mitigation and are seeing recovery.
Mar 20, 01:26 UTC
Update - We are seeing widespread issues starting and viewing Copilot Agent sessions. We understand the cause and are working on remediation.
Mar 20, 01:00 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Mar 20, 00:58 UTC
Resolved - On March 19, 2026, between 16:10 UTC and 00:05 UTC (March 20), Git operations (clone, fetch, push) from the US west coast experienced elevated latency and degraded throughput. Users reported clone speeds dropping to under 1 MiB/s in extreme cases. The root cause was network transport link saturation at our Seattle edge site, where a fiber cut affecting our backbone transport resulted in congestion and packet loss. A planned scale-up for the site was already in progress and was accelerated to relieve the backbone capacity pressure. We also brought additional edge capacity online in a cloud region and redirected some users there. Current scale with the upgraded network capacity is sufficient to prevent recurrence, as we upgraded total capacity on this path from 800 Gbps to 3.2 Tbps. We will continue to monitor network health and respond to any further issues.
Mar 20, 00:05 UTC
Update - We have reached stability with git operations through our changes deployed today.
Mar 20, 00:05 UTC
Update - We are seeing early signs of improvement. We are working on one more small change to further improve traffic routing on the west coast.
Mar 19, 23:52 UTC
Update - We have completed the rollout of our new network path and are monitoring its impact.
Mar 19, 22:57 UTC
Update - We are beginning the rollout of our new network path. During this change, users will continue to see higher latency from the west coast. We will provide another update when the rollout is complete.
Mar 19, 21:59 UTC
Update - We are working to enable a new network path on the west coast to reduce load and will monitor its impact on latency for Git Operations.
Mar 19, 18:27 UTC
Update - We are still seeing elevated latency for Git operations on the west coast and are continuing to investigate.
Mar 19, 17:49 UTC
Update - We are redirecting traffic back to our Seattle region, and customers should see a decrease in latency for Git operations.
Mar 19, 17:01 UTC
Investigating - We are investigating reports of degraded performance for Git Operations
Mar 19, 16:25 UTC
Mar 19, 2026
Resolved - On March 19, 2026, between 01:05 UTC and 02:52 UTC, and again on March 20, 2026, between 00:42 UTC and 01:58 UTC, the Copilot Coding Agent service was degraded and users were unable to start new Copilot Agent sessions or view existing ones. During the first incident, the average error rate was ~53% and peaked at ~93% of requests to the service. During the second incident, the average error rate was ~99% and peaked at ~100% of requests, with significant retry amplification. Both incidents were caused by the same underlying system authentication issue, which prevented the service from connecting to its backing datastore.

We mitigated each incident by rotating the affected credentials, which restored connectivity and returned error rates to normal. The mitigation time was 01:24. The second occurrence was due to an incomplete remediation of the first.

We are implementing automated monitoring for credential lifecycle events and improving operational processes to reduce our time to detection and mitigation of issues like this one in the future.

Mar 19, 14:32 UTC
Update - Copilot is operating normally.
Mar 19, 14:06 UTC
Update - We are investigating reports that Copilot Coding Agent session logs are not available in the UI.
Mar 19, 14:02 UTC
Update - Copilot is experiencing degraded performance. We are continuing to investigate.
Mar 19, 13:45 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Mar 19, 13:44 UTC
Resolved - On March 19, 2026, between 01:05 UTC and 02:52 UTC, and again on March 20, 2026, between 00:42 UTC and 01:58 UTC, the Copilot Coding Agent service was degraded and users were unable to start new Copilot Agent sessions or view existing ones. During the first incident, the average error rate was ~53% and peaked at ~93% of requests to the service. During the second incident, the average error rate was ~99% and peaked at ~100% of requests, with significant retry amplification. Both incidents were caused by the same underlying system authentication issue, which prevented the service from connecting to its backing datastore.

We mitigated each incident by rotating the affected credentials, which restored connectivity and returned error rates to normal. The mitigation time was 01:24. The second occurrence was due to an incomplete remediation of the first.

We are implementing automated monitoring for credential lifecycle events and improving operational processes to reduce our time to detection and mitigation of issues like this one in the future.

Mar 19, 02:52 UTC
Update - We have rolled out our mitigation and are seeing recovery for Copilot Coding Agent sessions
Mar 19, 02:46 UTC
Update - We are seeing widespread issues starting and viewing Copilot Agent sessions. We have a hypothesis for the cause and are working on remediation.
Mar 19, 02:25 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Mar 19, 02:05 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 19, 01:44 UTC
Update - We are seeing recovery in git operations for customers on the West Coast of the US.
Mar 19, 01:43 UTC
Update - We continue to investigate the slow performance of Git Operations affecting the US West Coast.
Mar 19, 00:56 UTC
Update - We continue to investigate degraded performance for git operations from the US West Coast.
Mar 19, 00:10 UTC
Update - We are continuing to investigate degraded performance for git operations from the US West Coast.
Mar 18, 23:33 UTC
Update - We are experiencing increased latency when performing git operations, especially large pushes and pulls from customers on the west coast of the US. We are not seeing an increase in failures. We are continuing to investigate.
Mar 18, 22:48 UTC
Update - Git Operations is experiencing degraded performance. We are continuing to investigate.
Mar 18, 22:36 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Mar 18, 22:36 UTC
Mar 18, 2026
Resolved - On March 18, 2026, between 18:18 UTC and 19:46 UTC, all webhook deliveries experienced elevated latency. During this time, average delivery latency increased from a baseline of approximately 5 seconds to a peak of approximately 160 seconds. This was due to resource constraints in the webhook delivery pipeline, which caused queue backlog growth and increased delivery latency. We mitigated the incident by shifting traffic and adding capacity, after which webhook delivery latency returned to normal. We are working to improve capacity management and detection in the webhook delivery pipeline to help prevent similar issues in the future.
Mar 18, 19:46 UTC
Update - We are seeing recovery and are continuing to monitor latency for webhook deliveries.
Mar 18, 19:25 UTC
Investigating - We are investigating reports of degraded performance for Webhooks
Mar 18, 18:51 UTC
Mar 17, 2026

No incidents reported.