GitHub Status
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Apr 02, 2025 - 17:00 UTC
Scheduled - Codespaces will be undergoing maintenance in all regions from 17:00 UTC on Wednesday, April 2 until 17:00 UTC on Thursday, April 3. Maintenance will begin in the Southeast Asia, Central India, Australia Central, and Australia East regions. Once that batch is complete, maintenance will start in UK South and West Europe, followed by East US, East US2, West US2, and West US3. Each batch of regions will take approximately three to four hours to complete.

During this time period, users may experience connectivity issues with new and existing Codespaces.

If you have uncommitted changes you may need during the maintenance window, commit and push them before maintenance starts. Codespaces with uncommitted changes will be accessible as usual once maintenance is complete.
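
For example, from a terminal inside the codespace, the following standard Git commands will confirm and push any pending work (a minimal sketch; the commit message is a placeholder, and pushing HEAD assumes your current branch is the one you want updated):

    # Show any uncommitted changes in the working tree
    git status
    # Stage and commit everything, then push the current branch
    git add -A
    git commit -m "Save work before Codespaces maintenance"
    git push origin HEAD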

Apr 2, 2025 17:00 - Apr 3, 2025 17:00 UTC

About This Site

For the status of GitHub Enterprise Cloud - EU, please visit: eu.githubstatus.com
For the status of GitHub Enterprise Cloud - Australia, please visit: au.githubstatus.com

Git Operations: Operational
Webhooks: Operational
API Requests: Operational
Issues: Operational
Pull Requests: Operational
Actions: Operational
Packages: Operational
Pages: Operational
Codespaces: Under Maintenance
Copilot: Operational
Apr 2, 2025
Resolved - This incident has been resolved.
Apr 2, 20:20 UTC
Update - We are aware that the generation of the Dormant Users Report is delayed for some of our customers, and that the resulting report may be inaccurate. We are actively investigating the root cause and a possible remediation.
Apr 2, 19:09 UTC
Investigating - We are currently investigating this issue.
Apr 2, 19:08 UTC
Apr 1, 2025
Resolved - This incident has been resolved.
Apr 1, 09:29 UTC
Update - The Audit Log is experiencing an increase in failed queries due to availability issues with the associated data store, and Audit Log data is delayed. We have identified the issue and are deploying mitigating measures.
Apr 1, 09:04 UTC
Investigating - We are currently investigating this issue.
Apr 1, 08:31 UTC
Mar 31, 2025
Resolved - Between March 29 7:00 UTC and March 31 17:00 UTC, users were unable to unsubscribe from GitHub marketing email subscriptions due to a service outage. Additionally, on March 31, 2025 from 7:00 UTC to 16:40 UTC, users were unable to submit eBook and event registration forms on resources.github.com, also due to a service outage.

The incident occurred due to expired credentials used for an internal service. We mitigated it by renewing the credentials and redeploying the affected services. To improve future response times and prevent similar issues, we are enhancing our credential expiry detection, rotation processes, and on-call observability and alerting.

Mar 31, 17:57 UTC
Update - We are currently applying a mitigation to resolve an issue with managing marketing email subscriptions.
Mar 31, 16:46 UTC
Investigating - We are currently investigating this issue.
Mar 31, 16:27 UTC
Mar 30, 2025

No incidents reported.

Mar 29, 2025

No incidents reported.

Mar 28, 2025
Resolved - Beginning at 21:24 UTC on March 28 and lasting until 21:50 UTC, some customers of github.com had issues with PR tracking refs not being updated due to processing delays and increased failure rates. We did not post a status update before we completed the rollback, and the incident is now resolved. We are sorry for the delayed post on githubstatus.com.
Mar 28, 22:50 UTC
Resolved - This incident was opened by mistake. Public services are currently functional.
Mar 28, 18:14 UTC
Investigating - We are currently investigating this issue.
Mar 28, 17:53 UTC
Resolved - Between March 27, 2025, 23:45 UTC and March 28, 2025, 01:40 UTC, the Pull Requests service was degraded and failed to update refs for repositories with higher traffic activity. This was due to a large repository migration that enqueued a larger than usual number of jobs while simultaneously impacting the Git fileservers where the problematic repository was hosted. Retries of the failing jobs increased queue depth, delaying jobs that did not originate from the migration.

We declared an incident once we confirmed that this issue was not isolated to the problematic migration and other repositories were also failing to process ref updates.

We mitigated the issue by stopping the migration and short-circuiting the remaining jobs. Additionally, we increased the worker pool for this job to reduce the time required to recover.

As a result of this incident, we are revisiting our repository migration process and are working to isolate potentially problematic migration workloads from non-migration workloads.

Mar 28, 01:40 UTC
Update - This issue has been mitigated and we are operating normally.
Mar 28, 01:40 UTC
Update - We are continuing to monitor for recovery.
Mar 28, 00:54 UTC
Update - We believe we have identified the source of the issue and are monitoring for recovery.
Mar 28, 00:20 UTC
Update - Pull Requests is experiencing degraded performance. We are continuing to investigate.
Mar 27, 23:52 UTC
Investigating - We are currently investigating this issue.
Mar 27, 23:49 UTC
Mar 27, 2025

No incidents reported.

Mar 26, 2025

No incidents reported.

Mar 25, 2025

No incidents reported.

Mar 24, 2025

No incidents reported.

Mar 23, 2025
Resolved - Between 2025-03-23 18:10 UTC and 2025-03-24 16:10 UTC, migration jobs submitted through the GitHub UI experienced processing delays and increased failure rates. This issue only affected migrations initiated via the web interface. Migrations started through the API or the command line tool continued to function normally. We are sorry for the delayed post on githubstatus.com.
Mar 23, 18:00 UTC
Mar 22, 2025

No incidents reported.

Mar 21, 2025
Resolved - On March 21st, 2025, between 11:45 UTC and 13:20 UTC, users were unable to interact with GitHub Copilot Chat in GitHub. The issue was caused by a recently deployed Ruby change that unintentionally overwrote a global value. This led to GitHub Copilot Chat in GitHub being misconfigured with an invalid URL, preventing it from connecting to our chat server. Other Copilot clients were not affected.

We mitigated the incident by identifying the source of the problematic change and rolling back the deployment.

We are reviewing our deployment tooling to reduce the time to mitigate similar incidents in the future. In parallel, we are improving our test coverage for this category of error to prevent such errors from reaching production.

Mar 21, 13:44 UTC
Update - Copilot is operating normally.
Mar 21, 13:44 UTC
Update - Mitigation is complete and we are seeing full recovery for GitHub Copilot Chat in GitHub.
Mar 21, 13:43 UTC
Update - We have identified the problem and have a mitigation in progress.
Mar 21, 13:16 UTC
Update - Copilot is experiencing degraded performance. We are continuing to investigate.
Mar 21, 13:00 UTC
Update - We are investigating issues with GitHub Copilot Chat in GitHub. We will continue to keep users updated on progress toward mitigation.
Mar 21, 12:42 UTC
Investigating - We are currently investigating this issue.
Mar 21, 12:40 UTC
Resolved - On March 21st, 2025, between 05:43 UTC and 08:49 UTC, the Actions service experienced degradation, leading to workflow run failures. During the incident, approximately 2.45% of workflow runs failed due to an infrastructure failure. This incident was caused by intermittent failures in communicating with an underlying service provider. We are working to improve our resilience to downtime in this service provider and to reduce the time to mitigate in any future recurrences.
Mar 21, 09:34 UTC
Update - Actions is operating normally.
Mar 21, 09:34 UTC
Update - We have made progress understanding the source of these errors and are working on a mitigation.
Mar 21, 09:05 UTC
Update - We're continuing to investigate elevated errors during GitHub Actions workflow runs. At this stage our monitoring indicates that these errors are impacting no more than 3% of all runs.
Mar 21, 08:20 UTC
Update - We're continuing to investigate intermittent failures with GitHub Actions workflow runs.
Mar 21, 07:27 UTC
Update - We're seeing errors reported with a subset of GitHub Actions workflow runs, and are continuing to investigate.
Mar 21, 06:55 UTC
Investigating - We are investigating reports of degraded performance for Actions
Mar 21, 06:21 UTC
Resolved - On March 21, 2025 between 01:00 UTC and 02:45 UTC, the Codespaces service was degraded and users in various regions experienced intermittent connection failures. At peak, 30% of connection attempts failed across 38% of Codespaces. This was due to a service deployment.

The incident was mitigated by completing the deployment to the impacted regions.

We are working with the service team to identify the cause of the connection losses and perform necessary repairs to avoid future occurrences.

Mar 21, 03:08 UTC
Update - Codespaces is operating normally.
Mar 21, 03:08 UTC
Update - We have seen full recovery in the last 15 minutes for Codespaces connections. GitHub Codespaces are healthy. For users who are still seeing connection problems, restarting the Codespace may help resolve the issue (see the command sketch after this timeline).
Mar 21, 03:08 UTC
Update - We are continuing to investigate issues with failed connections to Codespaces. We are seeing recovery over the last 10 minutes.
Mar 21, 02:53 UTC
Update - Customers may be experiencing issues connecting to Codespaces on GitHub.com. We are currently investigating the underlying issue.
Mar 21, 02:19 UTC
Investigating - We are investigating reports of degraded performance for Codespaces
Mar 21, 02:12 UTC
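
As a practical note on the restart suggestion in the update above: a codespace can be stopped and reconnected from the GitHub CLI (a sketch; the codespace name below is a placeholder for your own):

    # List your codespaces to find the affected one's name
    gh codespace list
    # Stop the affected codespace, then reconnect to it
    gh codespace stop -c my-codespace-name
    gh codespace ssh -c my-codespace-name
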
Mar 20, 2025
Resolved - On March 20, 2025, between 19:24 UTC and 20:42 UTC the GitHub Pages experience was degraded and returned 503s for some customers. We saw an error rate of roughly 2% for Pages views, and new page builds were unable to complete successfully before timing out.

This was due to replication failure at the database layer between a write destination and read destination. We mitigated the incident by redirecting reads to the same destination as writes.

The replication error occurred during a transitory phase: we are in the process of migrating the underlying data for Pages to new database infrastructure. Additionally, our monitors failed to detect the error.

We are addressing the underlying cause of the failed replication and telemetry.

Mar 20, 20:54 UTC
Update - We have resolved the issue for Pages. If you're still experiencing issues with your GitHub Pages site, please trigger a rebuild (see the command sketch after this timeline).
Mar 20, 20:53 UTC
Update - Customers may not be able to create or make changes to their GitHub Pages sites. Customers who rely on webhook events from Pages builds may also be affected.
Mar 20, 20:38 UTC
Update - Webhooks is experiencing degraded performance. We are continuing to investigate.
Mar 20, 20:33 UTC
Investigating - We are investigating reports of degraded performance for Pages
Mar 20, 20:04 UTC
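
As a practical note on the rebuild suggestion in the update above: a Pages build can be re-requested without changing site content, either by pushing an empty commit to the publishing branch or by calling the Pages REST API through the GitHub CLI (a sketch; OWNER and REPO are placeholders):

    # Option 1: push an empty commit to the publishing branch
    git commit --allow-empty -m "Trigger Pages rebuild"
    git push
    # Option 2: request a build via the Pages REST API
    gh api -X POST repos/OWNER/REPO/pages/builds
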
Mar 19, 2025
Completed - The scheduled maintenance has been completed.
Mar 19, 05:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 18, 21:00 UTC
Scheduled - Migrations will be undergoing maintenance starting at 21:00 UTC on Tuesday, March 18, 2025, with an expected duration of up to eight hours.

During this maintenance period, users will experience delays importing repositories into GitHub.

Once the maintenance period is complete, all pending imports will automatically proceed.

Mar 18, 19:28 UTC
Resolved - On March 18th, 2025, between 23:20 UTC and March 19th, 2025 00:15 UTC, the Actions service experienced degradation, leading to run start delays. During the incident, about 0.3% of all workflow runs queued during this window failed to start, about 0.67% of all workflow runs were delayed by an average of 10 minutes, and about 0.16% of all workflow runs ultimately ended with an infrastructure failure. This was due to a networking issue with an underlying service provider. At 00:15 UTC the service provider mitigated their issue, and service was restored immediately for Actions. We are working to improve our resilience to downtime in this service provider to reduce the time to mitigate any future recurrences.
Mar 19, 00:55 UTC
Update - Actions is operating normally.
Mar 19, 00:55 UTC
Update - The provider has reported full mitigation of the underlying issue, and Actions has been healthy since approximately 00:15 UTC.
Mar 19, 00:55 UTC
Update - We are continuing to investigate issues with delayed or failed workflow runs with Actions. We are engaged with a third-party provider who is also investigating issues and has confirmed we are impacted.
Mar 19, 00:22 UTC
Update - Some customers may be experiencing delays or failures when queueing workflow runs
Mar 18, 23:45 UTC
Investigating - We are investigating reports of degraded performance for Actions
Mar 18, 23:45 UTC