GitHub Status

All Systems Operational

About This Site

For the status of GitHub Enterprise Cloud - EU, please visit: eu.githubstatus.com
For the status of GitHub Enterprise Cloud - Australia, please visit: au.githubstatus.com
For the status of GitHub Enterprise Cloud - US, please visit: us.githubstatus.com

Git Operations - Operational
Webhooks - Operational
Visit www.githubstatus.com for more information - Operational
API Requests - Operational
Issues - Operational
Pull Requests - Operational
Actions - Operational
Packages - Operational
Pages - Operational
Codespaces - Operational
Copilot - Operational
Status key: Operational, Degraded Performance, Partial Outage, Major Outage, Maintenance
Nov 18, 2025
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Nov 18, 21:59 UTC
Update - Git Operations is operating normally.
Nov 18, 21:56 UTC
Update - We are seeing full recovery after rolling out the fix and all services are operational.
Nov 18, 21:55 UTC
Update - Codespaces is operating normally.
Nov 18, 21:55 UTC
Update - We have shipped a fix and are seeing recovery in some areas. We will continue to provide updates.
Nov 18, 21:36 UTC
Update - We have identified the likely cause of the incident and are working on a fix. We will provide another update as we get closer to deploying the fix.
Nov 18, 21:27 UTC
Update - Codespaces is experiencing degraded availability. We are continuing to investigate.
Nov 18, 21:25 UTC
Update - We are currently investigating failures on all Git operations, including both SSH and HTTP.
Nov 18, 21:11 UTC
Update - We are seeing failures for some Git HTTP operations and are investigating.
Nov 18, 20:52 UTC
Update - Git Operations is experiencing degraded availability. We are continuing to investigate.
Nov 18, 20:39 UTC
Investigating - We are currently investigating this issue.
Nov 18, 20:39 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Nov 18, 00:10 UTC
Update - We are investigating reports of 404 errors when creating gists.
Nov 17, 23:01 UTC
Investigating - We are currently investigating this issue.
Nov 17, 23:01 UTC
Nov 17, 2025
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Nov 17, 19:08 UTC
Update - We continue to see recovery, and Dependabot jobs are currently processing as expected.
Nov 17, 18:54 UTC
Update - We are applying a configuration change and will monitor for recovery.
Nov 17, 18:18 UTC
Update - We are continuing to investigate Dependabot failures and are working on a configuration change to mitigate them.
Nov 17, 17:50 UTC
Update - We are investigating Dependabot job failures affecting approximately 50% of version updates and 25% of security updates.
Nov 17, 17:15 UTC
Investigating - We are currently investigating this issue.
Nov 17, 16:52 UTC
Nov 16, 2025

No incidents reported.

Nov 15, 2025

No incidents reported.

Nov 14, 2025

No incidents reported.

Nov 13, 2025
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Nov 13, 15:13 UTC
Update - Git Operations is experiencing degraded performance. We are continuing to investigate.
Nov 13, 15:03 UTC
Investigating - We are currently investigating this issue.
Nov 13, 15:00 UTC
Nov 12, 2025
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Nov 12, 23:04 UTC
Update - We are continuing to investigate connectivity issues with codespaces.
Nov 12, 22:51 UTC
Update - We are investigating reports of codespaces no longer appearing in the UI or API. Users may experience connectivity issues with the impacted codespaces.
Nov 12, 22:26 UTC
Investigating - We are currently investigating this issue.
Nov 12, 22:26 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Nov 12, 17:39 UTC
Update - We are continuing to monitor our mitigations for delays in notification deliveries. Some users may still experience delays of over 10 minutes.
Nov 12, 17:15 UTC
Update - We are continuing to work on mitigating delays in notification deliveries. Some users may still experience delays of over 10 minutes.
Nov 12, 15:35 UTC
Update - We are continuing to work on mitigating delays in notification deliveries. Some users may experience delays of over 10 minutes.
Nov 12, 14:58 UTC
Update - We are investigating delays of up to 10 minutes in notification deliveries. Our team has identified the likely cause and is actively working to mitigate the issue.
Nov 12, 14:25 UTC
Investigating - We are currently investigating this issue.
Nov 12, 14:23 UTC
Nov 11, 2025
Resolved - On November 11, 2025, between 16:28 UTC and 20:54 UTC, GitHub Actions larger hosted runners experienced degraded performance, with 0.4% of overall workflow runs and 8.8% of larger hosted runner jobs failing to start within 5 minutes. The majority of the impact was mitigated by 18:44 UTC, with a small tail of organizations taking longer to recover.

The impact was caused by the same database infrastructure issue behind the similar larger hosted runner degradation on October 23, 2025. In this case, it was triggered by a brief infrastructure event rather than a database change.

Through this incident, we identified and implemented a better solution for both prevention and faster mitigation. In addition, a durable fix for the underlying database issue is rolling out soon.

Nov 11, 20:54 UTC
Update - Mitigation is complete and new jobs targeting larger hosted runners should no longer experience delays.
Nov 11, 20:53 UTC
Update - The team is continuing to apply the mitigation for larger hosted runners. We will provide updates as we progress.
Nov 11, 19:40 UTC
Update - The team continues to investigate delays with larger hosted runners. We will continue providing updates on the progress towards mitigation.
Nov 11, 18:37 UTC
Investigating - We are currently investigating this issue.
Nov 11, 18:02 UTC
Nov 10, 2025

No incidents reported.

Nov 9, 2025

No incidents reported.

Nov 8, 2025

No incidents reported.

Nov 7, 2025

No incidents reported.

Nov 6, 2025
Resolved - Between November 5, 2025 23:27 UTC and November 6, 2025 00:06 UTC, ghost text requests experienced errors from upstream model providers. This was a continuation of the service disruption for which we statused Copilot earlier that day, although more limited in scope.

During the service disruption, users were again automatically re-routed to healthy model hosts, minimizing impact. We are updating our monitors and failover mechanism to mitigate similar issues in the future.

Nov 6, 00:06 UTC
Update - We have recovered from our earlier performance issues. Copilot code completions should be functioning normally at this time.
Nov 6, 00:06 UTC
Update - Copilot Code Completions are partially unavailable. Our engineering team is engaged and investigating.
Nov 5, 23:41 UTC
Investigating - We are investigating reports of degraded performance for Copilot.
Nov 5, 23:41 UTC
Nov 5, 2025
Resolved - On November 5, 2025, between 21:46 and 23:36 UTC, ghost text requests experienced errors from upstream model providers that resulted in 0.9% of users seeing elevated error rates.

During the service disruption, users were automatically re-routed to healthy model hosts but may have experienced increased latency in response times as a result of re-routing.

We are updating our monitors and tuning our failover mechanism to more quickly mitigate issues like this in the future.
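
The automatic re-routing described above is, in effect, client-side failover across redundant model hosts. As a rough sketch only, assuming hypothetical endpoints, timeout, and ordering (this is not GitHub's actual implementation), such a fallback loop might look like this in Python:

```python
import urllib.request
import urllib.error

# Hypothetical endpoints; the real model hosts and routing policy are internal to GitHub.
MODEL_HOSTS = [
    "https://models-primary.example.com/v1/completions",
    "https://models-fallback.example.com/v1/completions",
]

def complete_with_failover(payload: bytes, timeout: float = 2.0) -> bytes:
    """Try each model host in order, falling back when a request errors or times out."""
    last_error = None
    for host in MODEL_HOSTS:
        request = urllib.request.Request(
            host, data=payload, headers={"Content-Type": "application/json"}
        )
        try:
            with urllib.request.urlopen(request, timeout=timeout) as response:
                return response.read()
        except (urllib.error.URLError, TimeoutError) as error:
            # Record the failure and try the next host; each fallback adds latency,
            # which is consistent with the increased response times noted above.
            last_error = error
    raise RuntimeError(f"all model hosts failed: {last_error}")
```

A production router would typically also track per-host health so traffic shifts before individual requests fail, which is the kind of monitor and failover tuning described above.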

Nov 5, 23:26 UTC
Update - We have identified and resolved the underlying issues with Code Completions. Customers should see full recovery.
Nov 5, 23:26 UTC
Update - We are investigating increased error rates affecting Copilot Code Completions. Some users may experience delays or partial unavailability. Our engineering team is monitoring the situation and working to identify the cause.
Nov 5, 22:57 UTC
Investigating - We are investigating reports of degraded performance for Copilot.
Nov 5, 22:56 UTC
Nov 4, 2025
Resolved - On November 4, 2025, GitHub Enterprise Importer experienced a period of degraded migration performance and elevated error rates between 18:04 UTC and 23:36 UTC. During this interval, customers queueing and running migrations experienced prolonged queue times and slower processing.

The degradation was ultimately connected to higher-than-normal system load; once load was reduced, error rates returned to normal. The investigation is ongoing to pinpoint the precise root cause and prevent future recurrence.

Long-term work is planned to strengthen system resilience under high load and provide better visibility into migration status for customers.

Nov 4, 22:00 UTC