GitHub Status

All Systems Operational

About This Site

Check GitHub Enterprise Cloud status by region:
- Australia: au.githubstatus.com
- EU: eu.githubstatus.com
- Japan: jp.githubstatus.com
- US: us.githubstatus.com
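
For anyone who wants to poll these pages programmatically, here is a minimal sketch in Python. It assumes each regional page exposes the standard Statuspage /api/v2/status.json endpoint, which is the case for the global www.githubstatus.com page but is an assumption for the regional domains listed above:

    # check_regional_status.py -- hypothetical helper, not an official GitHub tool.
    import json
    import urllib.request

    # Regional GitHub Enterprise Cloud status pages listed above, plus the global page.
    PAGES = {
        "Australia": "https://au.githubstatus.com",
        "EU": "https://eu.githubstatus.com",
        "Japan": "https://jp.githubstatus.com",
        "US": "https://us.githubstatus.com",
        "Global": "https://www.githubstatus.com",
    }

    for region, base in PAGES.items():
        try:
            # Statuspage-hosted sites expose a JSON summary at /api/v2/status.json
            # (assumed to hold for the regional domains as well).
            with urllib.request.urlopen(f"{base}/api/v2/status.json", timeout=10) as resp:
                payload = json.load(resp)
            print(f"{region}: {payload['status']['description']}")
        except Exception as exc:  # network failures, unexpected payloads, etc.
            print(f"{region}: status unavailable ({exc})")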

Git Operations Operational
Webhooks Operational
API Requests Operational
Issues Operational
Pull Requests Operational
Actions Operational
Packages Operational
Pages Operational
Codespaces Operational
Copilot Operational
Dec 30, 2025

No incidents reported today.

Dec 29, 2025

No incidents reported.

Dec 28, 2025

No incidents reported.

Dec 27, 2025

No incidents reported.

Dec 26, 2025

No incidents reported.

Dec 25, 2025

No incidents reported.

Dec 24, 2025

No incidents reported.

Dec 23, 2025
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Dec 23, 10:32 UTC
Update - Issues and Pull Requests are operating normally.
Dec 23, 10:32 UTC
Update - We are seeing recovery in search indexing for Issues and Pull Requests. The queue has returned to normal processing times, and we continue to monitor service health. We'll post another update by 11:00 UTC.
Dec 23, 10:29 UTC
Update - We're experiencing delays in search indexing for Issues and Pull Requests. Search results may show data up to three minutes old due to elevated processing times in our indexing pipeline. We're working to restore normal performance. We'll post another update by 10:30 UTC.
Dec 23, 09:58 UTC
Investigating - We are investigating reports of degraded performance for Issues and Pull Requests.
Dec 23, 09:56 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Dec 23, 00:17 UTC
Update - All services are at healthy levels. We're finalizing the change to prevent future degradations from the same source.
Dec 23, 00:06 UTC
Update - We're investigating elevated traffic affecting GitHub services, primarily impacting logged-out users with some increased latency on Issues. We're preparing additional mitigations to prevent further spikes.
Dec 22, 23:32 UTC
Update - We are experiencing elevated traffic affecting some GitHub services, primarily impacting logged-out users. We're actively investigating the full scope and working to restore normal service. We'll post another update by 23:45 UTC.
Dec 22, 22:57 UTC
Update - Issues is experiencing degraded performance. We are continuing to investigate.
Dec 22, 22:48 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Dec 22, 22:31 UTC
Dec 22, 2025
Dec 21, 2025

No incidents reported.

Dec 20, 2025

No incidents reported.

Dec 19, 2025

No incidents reported.

Dec 18, 2025
Resolved - On December 18, 2025, between 16:25 UTC and 19:09 UTC, the service underlying Copilot policies was degraded, and users, organizations, and enterprises were not able to update any policies related to Copilot. No other GitHub services, including other Copilot services, were impacted. This was due to a database migration causing schema drift.

We mitigated the incident by synchronizing the schema. We have hardened the service to make sure schema drift does not cause any further incidents, and will investigate improvements in our deployment pipeline to shorten time to mitigation in the future.

Dec 18, 19:09 UTC
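
As a general illustration of the kind of schema-drift check the resolution above describes (this is not GitHub's tooling, and the table and column names below are invented for the example), a service can compare the columns it expects against what the live database actually reports:

    # schema_drift_check.py -- illustrative sketch only; table/column names are made up.
    import sqlite3  # stand-in database; a production check would query the real one

    # Columns the service's code assumes exist (hypothetical).
    EXPECTED = {
        "copilot_policies": {"id", "owner_id", "policy_name", "policy_value", "updated_at"},
    }

    def live_columns(conn, table):
        # SQLite exposes column metadata via PRAGMA table_info; other databases
        # expose the same information through information_schema.columns.
        return {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}

    conn = sqlite3.connect("example.db")
    conn.execute("CREATE TABLE IF NOT EXISTS copilot_policies "
                 "(id INTEGER, owner_id INTEGER, policy_name TEXT, policy_value TEXT, updated_at TEXT)")
    for table, expected in EXPECTED.items():
        actual = live_columns(conn, table)
        missing, extra = expected - actual, actual - expected
        if missing or extra:
            print(f"schema drift on {table}: missing={sorted(missing)} extra={sorted(extra)}")
        else:
            print(f"{table}: schema matches expectations")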
Update - Copilot is operating normally.
Dec 18, 19:09 UTC
Update - We have observed full recovery in updating Copilot policy settings and are validating that there is no further impact.
Dec 18, 19:05 UTC
Update - Copilot is experiencing degraded performance. We are continuing to investigate.
Dec 18, 18:43 UTC
Update - We have identified the source of this regression and are preparing a fix for deployment. We will update again in one hour.
Dec 18, 18:10 UTC
Update - We are seeing an increase in errors on the user and organization policy settings pages when updating policies. The errors affect both the user Copilot policies settings page and the organization Copilot policies settings page when a policy is updated.

Dec 18, 17:36 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Dec 18, 17:36 UTC
Resolved - On December 18th, 2025, from 08:15 UTC to 17:11 UTC, some GitHub Actions runners experienced intermittent timeouts for GitHub API calls, which led to failures during runner setup and workflow execution. This was caused by network packet loss between runners in the West US region and one of GitHub’s edge sites. Approximately 1.5% of jobs on larger and standard hosted runners in the West US region were impacted, or 0.28% of all Actions jobs during this period.

By 17:11 UTC, all traffic was routed away from the affected edge site, mitigating the timeouts. We are working to improve early detection of cross-cloud connectivity issues and faster mitigation paths to reduce the impact of similar issues in the future.

Dec 18, 17:41 UTC
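
The mitigation above routed traffic around the faulty network path on GitHub's side; on the client side, a workflow script can also be made more tolerant of this kind of intermittent timeout with retries and backoff. A minimal sketch of that generic pattern (not GitHub's fix; the endpoint is just an example):

    # retry_api_call.py -- generic client-side resilience sketch, not GitHub's mitigation.
    import time
    import urllib.error
    import urllib.request

    def get_with_retries(url, attempts=4, timeout=10):
        """Fetch a URL, retrying with exponential backoff on timeouts or network errors."""
        for attempt in range(attempts):
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.read()
            except (urllib.error.URLError, TimeoutError) as exc:
                if attempt == attempts - 1:
                    raise  # out of retries; surface the last error
                backoff = 2 ** attempt  # 1s, 2s, 4s, ...
                print(f"attempt {attempt + 1} failed ({exc}); retrying in {backoff}s")
                time.sleep(backoff)

    if __name__ == "__main__":
        print(get_with_retries("https://api.github.com/meta")[:200])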
Update - We are observing recovery with requests from GitHub-hosted Actions runners and will continue to monitor.
Dec 18, 17:29 UTC
Update - Since approximately 08:00 UTC, we have observed intermittent failures on GitHub-hosted Actions runners. The failures have been observed during both runner setup and workflow execution. We are continuing to investigate.

Self-hosted runners are not impacted.

Dec 18, 16:35 UTC
Investigating - We are investigating reports of degraded performance for Actions.
Dec 18, 16:33 UTC
Dec 17, 2025

No incidents reported.

Dec 16, 2025
Resolved - From 11:50 to 12:25 UTC, Copilot Coding Agent was unable to process new agent requests. This affected all users creating new jobs during this timeframe, while existing jobs remained unaffected. The cause was a change to the Actions configuration where Copilot Coding Agent runs, which caused setup of the Actions runner to fail; the issue was resolved by rolling back this change.
As a short-term solution, we hope to tighten our alerting criteria so that we are alerted more quickly when an incident occurs, and in the long term we hope to harden our runner configuration to be more resilient against errors.

Dec 16, 12:00 UTC
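
On the alerting point above, one simple criterion is a rolling failure-rate threshold over recent runner setups. A sketch with made-up window and threshold values (illustrative only, not GitHub's monitoring):

    # setup_failure_alert.py -- illustrative alerting sketch; numbers are invented.
    from collections import deque

    WINDOW = 200        # number of most recent runner setups to consider
    THRESHOLD = 0.05    # alert once more than 5% of them have failed

    recent = deque(maxlen=WINDOW)

    def record_setup(succeeded):
        """Record one runner setup result and alert if the failure rate is too high."""
        recent.append(succeeded)
        failures = recent.count(False)
        if len(recent) == WINDOW and failures / WINDOW > THRESHOLD:
            print(f"ALERT: {failures}/{WINDOW} recent runner setups failed")

    # Example: setups succeed, then a bad configuration change makes half of them fail.
    for i in range(300):
        record_setup(i < 250 or i % 2 == 0)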