Update - At 19:43 UTC on 2025-11-19, we paused the queue that processes Mannequin Reclaiming work done at the end of a migration.

This was done after observing load that threatened the health of the overall system.

The cause has been identified, and a fix is underway.

In the current state:
- all requests to Reclaim Mannequins will be held in a queue
- those requests will be processed once the repair work is complete and the queue is unpaused, at which point the incident will be closed

This does not impact processing of migration runs, only mannequin reclamation.
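
For teams waiting on held reclaim requests, a minimal polling sketch like the one below can watch for this incident to close. It assumes the standard Statuspage v2 endpoint that githubstatus.com exposes and matches incidents by keyword, which is an approximation rather than an official signal.

```python
# Minimal sketch: poll githubstatus.com (standard Statuspage v2 API) until no
# unresolved incident mentions mannequin reclamation, i.e. the queue is likely
# unpaused and held Reclaim Mannequin requests should start processing.
import json
import time
import urllib.request

STATUS_URL = "https://www.githubstatus.com/api/v2/incidents/unresolved.json"

def unresolved_incidents():
    with urllib.request.urlopen(STATUS_URL, timeout=10) as resp:
        return json.load(resp)["incidents"]

def mannequin_incident_open() -> bool:
    # Match on incident name/update text; the exact wording is an assumption.
    for incident in unresolved_incidents():
        text = incident["name"] + " " + " ".join(
            update.get("body", "") for update in incident.get("incident_updates", [])
        )
        if "mannequin" in text.lower():
            return True
    return False

if __name__ == "__main__":
    while mannequin_incident_open():
        print("Mannequin reclamation incident still open; checking again in 5 minutes...")
        time.sleep(300)
    print("No open mannequin-related incident; queued reclaim requests should resume.")
```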

Nov 19, 2025 - 16:13 UTC
Investigating - We are currently investigating this issue.
Nov 19, 2025 - 16:13 UTC

About This Site

For the status of GitHub Enterprise Cloud - EU, please visit: eu.githubstatus.com
For the status of GitHub Enterprise Cloud - Australia, please visit: au.githubstatus.com
For the status of GitHub Enterprise Cloud - US, please visit: us.githubstatus.com

Git Operations: Operational
Webhooks: Operational
API Requests: Operational
Issues: Operational
Pull Requests: Operational
Actions: Operational
Packages: Operational
Pages: Operational
Codespaces: Operational
Copilot: Operational
Nov 19, 2025

Unresolved incident: Disruption with some GitHub services.

Nov 18, 2025
Resolved - From Nov 18, 2025 20:30 UTC to Nov 18, 2025 21:34 UTC we experienced failures on all Git operations, including both SSH and HTTP Git client interactions, as well as raw file access. These failures also impacted products that rely on Git operations.

The root cause was an expired TLS certificate used for internal service-to-service communication. We mitigated the incident by replacing the expired certificate and restarting impacted services. Once those services were restarted we saw a full recovery.

We have updated our alerting to cover the expired certificate and are performing an audit of other certificates in this area to ensure they also have the proper alerting and automation before expiration. In parallel, we are accelerating efforts to eliminate our remaining manually managed certificates, ensuring all service-to-service communication is fully automated and aligned with modern security practices.
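
As an illustration of the kind of expiry alerting described here (not GitHub's internal tooling), a small check can read the certificate a service presents and warn when it is close to expiring; the hostnames and 30-day threshold below are placeholders.

```python
# Minimal sketch of certificate-expiry alerting: connect to an endpoint,
# read the served TLS certificate, and warn when it expires within a
# threshold. Hostnames and the threshold are illustrative placeholders.
import socket
import ssl
from datetime import datetime, timezone

WARN_WITHIN_DAYS = 30
ENDPOINTS = ["github.com", "api.github.com"]  # replace with internal endpoints

def days_until_expiry(host: str, port: int = 443) -> float:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
    )
    return (expires - datetime.now(timezone.utc)).total_seconds() / 86400

if __name__ == "__main__":
    for host in ENDPOINTS:
        remaining = days_until_expiry(host)
        if remaining < WARN_WITHIN_DAYS:
            print(f"ALERT: {host} certificate expires in {remaining:.1f} days")
        else:
            print(f"OK: {host} certificate valid for {remaining:.1f} more days")
```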

Nov 18, 21:59 UTC
Update - Git Operations is operating normally.
Nov 18, 21:56 UTC
Update - We are seeing full recovery after rolling out the fix and all services are operational.
Nov 18, 21:55 UTC
Update - Codespaces is operating normally.
Nov 18, 21:55 UTC
Update - We have shipped a fix and are seeing recovery in some areas. We will continue to provide updates.
Nov 18, 21:36 UTC
Update - We have identified the likely cause of the incident and are working on a fix. We will provide another update as we get closer to deploying the fix.
Nov 18, 21:27 UTC
Update - Codespaces is experiencing degraded availability. We are continuing to investigate.
Nov 18, 21:25 UTC
Update - We are currently investigating failures on all Git operations, including both SSH and HTTP.
Nov 18, 21:11 UTC
Update - We are seeing failures for some Git HTTP operations and are investigating.
Nov 18, 20:52 UTC
Update - Git Operations is experiencing degraded availability. We are continuing to investigate.
Nov 18, 20:39 UTC
Investigating - We are currently investigating this issue.
Nov 18, 20:39 UTC
Resolved - Between November 17, 2025 21:24 UTC and November 18, 2025 00:04 UTC the gists service was degraded and users were unable to create gists via the web UI. 100% of gist creation requests failed with a 404 response. This was due to a change in the web middleware that inadvertently triggered a routing error. We resolved the incident by rolling back the change. We are working on more effective monitoring to reduce the time it takes to detect similar issues and evaluating our testing approach for middleware functionality.
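
The kind of monitoring mentioned above could, for example, track the 404 rate for a route over a sliding window and alert when it spikes; the sketch below is illustrative, with hypothetical thresholds rather than GitHub's actual monitors.

```python
# Illustrative sketch: track the fraction of 404 responses for a route over
# a sliding window of recent requests and alert when it crosses a threshold.
from collections import deque

class RouteErrorRateMonitor:
    def __init__(self, window_size: int = 500, threshold: float = 0.5):
        self.window = deque(maxlen=window_size)  # recent status codes
        self.threshold = threshold               # alert above this 404 rate

    def record(self, status_code: int) -> None:
        self.window.append(status_code)

    def error_rate(self) -> float:
        if not self.window:
            return 0.0
        return sum(1 for s in self.window if s == 404) / len(self.window)

    def should_alert(self) -> bool:
        # Require a reasonably full window so a handful of requests can't page.
        return len(self.window) >= 100 and self.error_rate() > self.threshold

monitor = RouteErrorRateMonitor()
for status in [201] * 50 + [404] * 450:   # simulated traffic for a creation route
    monitor.record(status)
print(monitor.error_rate(), monitor.should_alert())  # 0.9 True
```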
Nov 18, 00:10 UTC
Update - We are investigating reports of 404s creating gists.
Nov 17, 23:01 UTC
Investigating - We are currently investigating this issue.
Nov 17, 23:01 UTC
Nov 17, 2025
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Nov 17, 19:08 UTC
Update - We continue to see recovery and Dependabot jobs are currently processing as expected.
Nov 17, 18:54 UTC
Update - We are applying a configuration change and will monitor for recovery.
Nov 17, 18:18 UTC
Update - We are continuing to investigate Dependabot failures and are preparing a configuration change to mitigate them.
Nov 17, 17:50 UTC
Update - We are investigating Dependabot job failures affecting approximately 50% of version updates and 25% of security updates.
Nov 17, 17:15 UTC
Investigating - We are currently investigating this issue.
Nov 17, 16:52 UTC
Nov 16, 2025

No incidents reported.

Nov 15, 2025

No incidents reported.

Nov 14, 2025

No incidents reported.

Nov 13, 2025
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Nov 13, 15:13 UTC
Update - Git Operations is experiencing degraded performance. We are continuing to investigate.
Nov 13, 15:03 UTC
Investigating - We are currently investigating this issue.
Nov 13, 15:00 UTC
Nov 12, 2025
Resolved - On November 12, 2025, between 22:10 UTC and 23:04 UTC, Codespaces used internally at GitHub were impacted; there was no impact to external customers. The scope of impact was not clear during the initial incident response, so the incident was treated as public until confirmed otherwise. As a follow-up, we will more clearly distinguish internal from public impact for similar failures to better inform our status decisions going forward.
Nov 12, 23:04 UTC
Update - We are continuing to investigate connectivity issues with Codespaces.
Nov 12, 22:51 UTC
Update - We are investigating reports of codespaces no longer appearing in the UI or API. Users may experience connectivity issues to the impacted codespaces.
Nov 12, 22:26 UTC
Investigating - We are currently investigating this issue.
Nov 12, 22:26 UTC
Resolved - On November 12, 2025, from 13:10 to 17:40 UTC, the notifications service was degraded, with increased web notification latency and growing delays in notification deliveries. A change to the notifications settings access path introduced additional load on the settings system, degrading its response times. This impacted both web notification requests (with p99 response times as high as 1.5s, while lower percentiles remained stable) and notification deliveries, which reached a peak average delay of 24 minutes.

System capacity was increased around 15:10 UTC and the problematic change was fully reverted soon after, restoring web notification latency and increasing delivery throughput, which reduced the delays. The notification queue was fully drained around 17:40 UTC.

We are working to adjust capacity in the affected systems and to improve the time needed to address these capacity issues.
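
To make the figures above concrete (p99 of 1.5s, peak average delay of 24 minutes), a monitor along these lines could compute the percentile and delay from recent samples; the thresholds and sample data are illustrative only, not GitHub's actual alerting.

```python
# Illustrative sketch: compute p99 request latency and average delivery delay
# from recent samples and flag the conditions described in this incident.
import math

def percentile(samples: list[float], pct: float) -> float:
    ordered = sorted(samples)
    # Nearest-rank percentile: the value at rank ceil(pct/100 * n).
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

request_latencies_s = [0.12] * 980 + [1.5] * 20       # web notification requests
delivery_delays_min = [2, 3, 5, 18, 24, 21, 9, 4]     # minutes behind real time

p99 = percentile(request_latencies_s, 99)
avg_delay = sum(delivery_delays_min) / len(delivery_delays_min)

if p99 > 1.0:
    print(f"ALERT: web notification p99 latency {p99:.2f}s exceeds 1.0s")
if avg_delay > 10:
    print(f"ALERT: average delivery delay {avg_delay:.1f} min exceeds 10 min")
```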

Nov 12, 17:39 UTC
Update - We are continuing to monitor our mitigations to delays in notification deliveries. Some users may still experience delays of over 10 minutes.
Nov 12, 17:15 UTC
Update - We are continuing to work on mitigating delays in notification deliveries. Some users may still experience delays of over 10 minutes.
Nov 12, 15:35 UTC
Update - We are continuing to work on mitigating delays in notification deliveries. Some users may experience delays of over 10 minutes.
Nov 12, 14:58 UTC
Update - We are investigating delays of up to 10 minutes in notification deliveries. Our team has identified the likely cause and is actively working to mitigate the issue.
Nov 12, 14:25 UTC
Investigating - We are currently investigating this issue.
Nov 12, 14:23 UTC
Nov 11, 2025
Resolved - On November 11, 2025, between 16:28 UTC and 20:54 UTC, GitHub Actions larger hosted runners experienced degraded performance, with 0.4% of overall workflow runs and 8.8% of larger hosted runner jobs failing to start within 5 minutes. The majority of impact was mitigated by 18:44, with a small tail of organizations taking longer to recover.

The impact was caused by the same database infrastructure issue that caused similar larger hosted runner performance degradation on October 23, 2025. In this case, it was triggered by a brief infrastructure event rather than by a database change.

Through this incident, we identified and implemented a better solution for both prevention and faster mitigation. In addition to this, a durable solution for the underlying database issue is rolling out soon.
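
Customers who want to check whether their own runs were affected can compare a run's creation time against its start time via the public REST API; the sketch below is a rough illustration using the 5-minute threshold cited above, with a placeholder repository and token.

```python
# Illustrative sketch: list recent workflow runs for a repository via the
# GitHub REST API and flag runs that took more than 5 minutes to start.
# Repository and token are placeholders; field names follow the public
# "list workflow runs for a repository" response.
import json
import os
import urllib.request
from datetime import datetime

REPO = "your-org/your-repo"                    # placeholder
TOKEN = os.environ.get("GITHUB_TOKEN", "")     # placeholder

def parse(ts: str) -> datetime:
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")

req = urllib.request.Request(
    f"https://api.github.com/repos/{REPO}/actions/runs?per_page=50",
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/vnd.github+json"},
)
with urllib.request.urlopen(req, timeout=10) as resp:
    runs = json.load(resp)["workflow_runs"]

for run in runs:
    if not run.get("run_started_at"):
        continue  # run has not started yet
    wait = (parse(run["run_started_at"]) - parse(run["created_at"])).total_seconds()
    if wait > 300:
        print(f"Run {run['id']} waited {wait / 60:.1f} min to start")
```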

Nov 11, 20:54 UTC
Update - Mitigation is complete and new jobs targeting Larger Hosted Runners should no longer experience delays.
Nov 11, 20:53 UTC
Update - The team is continuing to apply the mitigation for Larger Hosted Runners. We will provide updates as we progress.
Nov 11, 19:40 UTC
Update - The team continues to investigate delays with Larger Hosted Runners. We will continue providing updates on the progress towards mitigation.
Nov 11, 18:37 UTC
Investigating - We are currently investigating this issue.
Nov 11, 18:02 UTC
Nov 10, 2025

No incidents reported.

Nov 9, 2025

No incidents reported.

Nov 8, 2025

No incidents reported.

Nov 7, 2025

No incidents reported.

Nov 6, 2025
Resolved - Between November 5, 2025 23:27 UTC and November 6, 2025 00:06 UTC, ghost text requests experienced errors from upstream model providers. This was a continuation of the Copilot service disruption we had reported earlier that day, although more limited in scope.

During the service disruption, users were again automatically re-routed to healthy model hosts, minimizing impact. We are updating our monitors and failover mechanism to mitigate similar issues in the future.
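
The re-routing described above amounts to failing over across model hosts. A minimal sketch of that pattern is shown below; the endpoints are hypothetical and this is not Copilot's actual implementation.

```python
# Illustrative sketch: try a primary completion host first and fall back to
# alternate hosts on failure. Host URLs are hypothetical placeholders.
import urllib.error
import urllib.request

MODEL_HOSTS = [
    "https://primary.example.com/complete",     # placeholder endpoints
    "https://fallback-1.example.com/complete",
    "https://fallback-2.example.com/complete",
]

def complete(prompt: str) -> str:
    last_error = None
    for host in MODEL_HOSTS:
        try:
            req = urllib.request.Request(host, data=prompt.encode(), method="POST")
            with urllib.request.urlopen(req, timeout=5) as resp:
                return resp.read().decode()
        except (urllib.error.URLError, TimeoutError) as exc:
            last_error = exc   # host unhealthy; re-route to the next one
    raise RuntimeError(f"all model hosts failed: {last_error}")
```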

Nov 6, 00:06 UTC
Update - We have recovered from our earlier performance issues. Copilot code completions should be functioning normally at this time.
Nov 6, 00:06 UTC
Update - Copilot Code Completions are partially unavailable. Our engineering team is engaged and investigating.
Nov 5, 23:41 UTC
Investigating - We are investigating reports of degraded performance for Copilot
Nov 5, 23:41 UTC
Nov 5, 2025
Resolved - On November 5, 2025, between 21:46 and 23:36 UTC, ghost text requests experienced errors from upstream model providers that resulted in 0.9% of users seeing elevated error rates.

During the service disruption, users were automatically re-routed to healthy model hosts but may have experienced increased latency in response times as a result of re-routing.

We are updating our monitors and tuning our failover mechanism to more quickly mitigate issues like this in the future.

Nov 5, 23:26 UTC
Update - We have identified and resolved the underlying issues with Code Completions. Customers should see full recovery.
Nov 5, 23:26 UTC
Update - We are investigating increased error rates affecting Copilot Code Completions. Some users may experience delays or partial unavailability. Our engineering team is monitoring the situation and working to identify the cause.
Nov 5, 22:57 UTC
Investigating - We are investigating reports of degraded performance for Copilot
Nov 5, 22:56 UTC