All Systems Operational
North America (N. Virginia): Operational (99.91% uptime over the past 90 days)
Web Application: Operational (99.93% uptime)
IDE: Operational (99.76% uptime)
Scheduled Jobs: Operational (99.8% uptime)
API: Operational (99.98% uptime)
Metadata API: Operational (100.0% uptime)
Metadata Ingestion: Operational (100.0% uptime)
AWS kms-us-east-1: Operational
AWS s3-us-standard: Operational
AWS ec2-us-east-1: Operational
Auth0 User Authentication: Operational
Auth0 Multi-factor Authentication: Operational
AWS sns-us-east-1: Operational
Auth0 Management API: Operational
Europe (Frankfurt): Operational (99.94% uptime over the past 90 days)
Web Application: Operational (99.93% uptime)
IDE: Operational (99.88% uptime)
Scheduled Jobs: Operational (99.84% uptime)
Metadata API: Operational (100.0% uptime)
Metadata Ingestion: Operational (100.0% uptime)
API: Operational (99.98% uptime)
AWS kms-eu-central-1: Operational
AWS s3-eu-central-1: Operational
AWS ec2-eu-central-1: Operational
Auth0 User Authentication: Operational
Auth0 Multi-factor Authentication: Operational
Auth0 Management API: Operational
Australia (Sydney): Operational (99.97% uptime over the past 90 days)
Web Application: Operational (99.93% uptime)
IDE: Operational (99.93% uptime)
Scheduled Jobs: Operational (100.0% uptime)
API: Operational (99.98% uptime)
Metadata API: Operational (100.0% uptime)
Metadata Ingestion: Operational (100.0% uptime)
AWS ec2-ap-southeast-2: Operational
AWS kms-ap-southeast-2: Operational
AWS s3-ap-southeast-2: Operational
Auth0 User Authentication: Operational
Auth0 Management API: Operational
Auth0 Multi-factor Authentication: Operational
North America Cell 1: Operational (99.94% uptime over the past 90 days)
Web Application: Operational (99.94% uptime)
IDE: Operational (99.83% uptime)
Scheduled Jobs: Operational (99.89% uptime)
API: Operational (99.98% uptime)
Metadata API: Operational (100.0% uptime)
Metadata Ingestion: Operational (100.0% uptime)
North America Cell 2: Operational (99.97% uptime over the past 90 days)
Web Application: Operational (99.94% uptime)
IDE: Operational (99.93% uptime)
Scheduled Jobs: Operational (100.0% uptime)
API: Operational (99.98% uptime)
Metadata API: Operational (100.0% uptime)
Metadata Ingestion: Operational (100.0% uptime)
External Dependencies: Operational (100.0% uptime over the past 90 days)
Slack: Operational
Slack Messaging: Operational
Slack Apps/Integrations/APIs: Operational
GitHub Git Operations: Operational
GitHub Webhooks: Operational
GitHub API Requests: Operational
Atlassian Bitbucket Git via SSH: Operational
PostMark Email Delivery: Operational (100.0% uptime)
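For context on the figures above, an uptime percentage over the 90-day window translates directly into an implied amount of downtime. A minimal Python sketch of that arithmetic (illustrative only, not part of the status page):

    # Convert a 90-day uptime percentage into the downtime it implies.
    WINDOW_MINUTES = 90 * 24 * 60  # 129,600 minutes in 90 days

    def downtime_minutes(uptime_pct: float, window_minutes: int = WINDOW_MINUTES) -> float:
        """Minutes of downtime implied by an uptime percentage over the window."""
        return window_minutes * (1 - uptime_pct / 100)

    for pct in (99.91, 99.76, 99.98, 100.0):
        print(f"{pct}% uptime -> ~{downtime_minutes(pct):.0f} minutes of downtime")
    # 99.91% -> ~117 min, 99.76% -> ~311 min, 99.98% -> ~26 min, 100.0% -> 0 min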
Status legend: Operational, Degraded Performance, Partial Outage, Major Outage, Maintenance.
Past Incidents
Jul 26, 2024

No incidents reported today.

Jul 25, 2024

No incidents reported.

Jul 24, 2024

No incidents reported.

Jul 23, 2024

No incidents reported.

Jul 22, 2024
Postmortem - Read details
Jul 26, 22:19 EDT
Resolved - The issue has been resolved, and all affected systems are now functioning normally.
Please contact Support via email at support@getdbt.com if you continue to experience any issues and are unsure of the root cause.
We understand how critical dbt Cloud is to your ability to get work done day-to-day and your experience matters to us. We’re grateful to you for your patience during this incident.

Jul 22, 00:03 EDT
Update - We believe this issue is now resolved. Please note that any prior runs may still be stuck and may need to be re-triggered. Please contact us at support@getdbt.com if you require any assistance or have any questions or concerns.
Jul 21, 23:17 EDT
Monitoring - We have deployed a fix for the reported issue that was causing jobs to queue and/or time out. Please reach out to support@getdbt.com if you're experiencing any issues or have any questions or concerns.
Jul 21, 22:59 EDT
Update - We are continuing to investigate this issue. Please reach out to us at support@getdbt.com with any questions or concerns.
Jul 21, 21:51 EDT
Investigating - We're investigating an issue with jobs on dbt Cloud accounts hosted on the AU instance being stuck in a "starting" stage. The team is working on a resolution and we will provide updates at approximately 30 minute intervals or as soon as new information becomes available.
Jul 21, 20:50 EDT
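The resolution note above mentions that prior runs stuck in a "starting" state may need to be re-triggered. A minimal sketch of re-triggering a job through the dbt Cloud Administrative API v2, assuming a hypothetical account ID and job ID, an API token in DBT_CLOUD_API_TOKEN, and a base URL that varies by instance (cloud.getdbt.com shown here; substitute the host for your region):

    import os
    import requests

    BASE_URL = os.environ.get("DBT_CLOUD_HOST", "https://cloud.getdbt.com")  # varies by instance
    ACCOUNT_ID = 12345  # hypothetical account ID
    JOB_ID = 67890      # hypothetical job ID
    TOKEN = os.environ["DBT_CLOUD_API_TOKEN"]

    # Trigger a fresh run of the job in place of the stuck one.
    resp = requests.post(
        f"{BASE_URL}/api/v2/accounts/{ACCOUNT_ID}/jobs/{JOB_ID}/run/",
        headers={"Authorization": f"Token {TOKEN}", "Content-Type": "application/json"},
        json={"cause": "Re-triggering a run that was stuck in 'starting'"},
    )
    resp.raise_for_status()
    print(resp.json()["data"]["id"])  # ID of the newly created run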
Jul 21, 2024
Jul 20, 2024

No incidents reported.

Jul 19, 2024
Resolved - We believe all services have recovered. Please contact support at support@getdbt.com with any questions or concerns.
Jul 19, 06:50 EDT
Monitoring - We are observing signs of recovery on dbt Cloud's side. Latest update from Azure:
"We’ve determined the underlying cause. A backend cluster management workflow deployed a configuration change causing backend access to be blocked between a subset of Azure Storage clusters and compute resources in the Central US region. This resulted in the compute resources automatically restarting when connectivity was lost to virtual disks. We are currently applying mitigation. Customers should see signs of recovery at this time as mitigation applies across resources in the region."

Jul 18, 22:55 EDT
Update - Update from Azure:
"Customers should see signs of recovery at this time as mitigation applies across resources in the region."

Jul 18, 22:21 EDT
Update - Update from Azure:
"We’ve determined the underlying cause and are currently applying mitigation through multiple workstreams."

Jul 18, 21:29 EDT
Identified - Based on the latest update from Microsoft Azure, this is an ongoing issue. Their latest report is as follows:
"We are aware of this issue and have engaged multiple teams to investigate. As part of the investigation, we are reviewing previous deployments, and are running other workstreams to investigate for an underlying cause. The next update will be provided in 60 minutes, or as events warrant.
Customers with disaster recovery procedures set up can consider taking steps to failover their services to another region"

We will continue monitoring this closely. Please contact us at support@getdbt.com with any questions or concerns.

Jul 18, 20:18 EDT
Update - The current outage has been reported on the Microsoft Azure status page. More information is available here: https://azure.status.microsoft/en-us/status.
Jul 18, 19:48 EDT
Investigating - We're investigating an issue with Single Tenant instances hosted on Azure in the Central US region experiencing an outage.
This only impacts Central US region Single Tenant accounts hosted on Azure and does not affect other accounts. The team is working on a resolution and we will provide updates as soon as possible.

Jul 18, 19:42 EDT
Resolved - This incident has been resolved.
Jul 19, 06:46 EDT
Monitoring - We are observing signs of recovery on dbt Cloud's side. Latest update from Azure:
"We’ve determined the underlying cause. A backend cluster management workflow deployed a configuration change causing backend access to be blocked between a subset of Azure Storage clusters and compute resources in the Central US region. This resulted in the compute resources automatically restarting when connectivity was lost to virtual disks. We are currently applying mitigation. Customers should see signs of recovery at this time as mitigation applies across resources in the region."

Jul 18, 22:55 EDT
Update - Update from Azure:
"Customers should see signs of recovery at this time as mitigation applies across resources in the region."

Jul 18, 22:20 EDT
Update - Update from Azure:
"We’ve determined the underlying cause and are currently applying mitigation through multiple workstreams."

Jul 18, 21:31 EDT
Identified - Due to an ongoing outage affecting Microsoft Azure's Central US region, some accounts may experience repository cloning failures in their jobs or development sessions. The issue has been reported and is being monitored by Azure. Their status page can be found here: https://azure.status.microsoft/en-us/status. Please contact support at support@getdbt.com with any questions or concerns.
Jul 18, 20:37 EDT
Jul 18, 2024
Resolved - We identified a database error that led to blocked runs and have rolled out a fix. After monitoring for several hours, we've determined this issue is resolved. Please contact support for any issues or questions relating to this incident.
Jul 18, 17:45 EDT
Update - We are continuing to monitor for further issues.
Jul 18, 10:13 EDT
Monitoring - The problem has been remediated, but we are still working on a long-term fix. We will continue monitoring for several hours to ensure the issue does not occur again.
Jul 18, 10:05 EDT
Update - We are still investigating the root cause so that a long-term fix can be implemented.
Jul 18, 09:43 EDT
Update - The short-term fix is still in place, so at this time jobs should be running again. We are still investigating the root cause so that a long-term fix can be implemented.
Jul 18, 09:15 EDT
Update - Our team has implemented a short-term fix to alleviate the issue while continuing to investigate the root cause.
Jul 18, 08:32 EDT
Investigating - We are currently investigating an issue where jobs are not running and are stuck in the queue. We will provide additional updates in the next 30 minutes.
Jul 18, 07:58 EDT
Jul 17, 2024

No incidents reported.

Jul 16, 2024
Resolved - A network configuration change led to downtime with the Semantic Layer API between 02:43 PM UTC and 03:07 PM UTC.

The issue has been resolved, and all affected systems are now functioning normally as of 03:07 PM UTC.

Please contact Support via email support@getdbt.com if you continue to experience delays or have any follow up questions.

Jul 16, 11:00 EDT
Jul 15, 2024

No incidents reported.

Jul 14, 2024

No incidents reported.

Jul 13, 2024
Resolved - This issue is resolved.
Jul 13, 16:19 EDT
Monitoring - We were able to determine that this issue was caused by a change by the database provider and are awaiting confirmation that the cause of the issue has been rolled back on their end. We will provide another update shortly.
Jul 13, 13:01 EDT
Update - We are continuing to investigate this issue.
Jul 13, 12:29 EDT
Update - We are continuing to investigate this issue.
Jul 13, 11:43 EDT
Investigating - We're investigating an issue with Databricks production runs that is causing job failures returning the message "invalid literal for int() with base 10: 'Created'". This is currently only impacting our US multi-tenant instance. The team is working on a resolution and we will provide updates at approximately 30 minute intervals or as soon as new information becomes available.
Jul 13, 11:40 EDT
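The message quoted in the update above, "invalid literal for int() with base 10: 'Created'", is the standard ValueError Python raises when int() is handed a non-numeric string such as a status label. A minimal reproduction of the error shape (illustrative only, not dbt Cloud's actual code path):

    # Passing a non-numeric string, e.g. a status label, to int() raises the
    # same ValueError text that appeared in the failed Databricks runs.
    try:
        int("Created")
    except ValueError as exc:
        print(exc)  # invalid literal for int() with base 10: 'Created'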
Jul 12, 2024
Resolved - The issue has been resolved, and all affected systems are now functioning normally as of 4:35 PM EDT.
Jul 12, 16:35 EDT
Monitoring - We've observed that development and deployment environments have returned to their normal states. We will monitor the situation for the next 30 minutes to ensure everything is back to normal.
Jul 12, 15:45 EDT
Update - We are continuing to investigate this issue.
Jul 12, 14:22 EDT
Update - We are still identifying the root cause of the issue, and recommend re-entering key pair credentials in your environment settings in the meantime.
Jul 12, 14:19 EDT
Update - We are continuing to investigate this issue.
Jul 12, 13:58 EDT
Investigating - We're investigating an issue with Snowflake user credentials being wiped from development and deployment environments in dbt Cloud. This is impacting key pair credentials in development and deployment environments in our US MT instance. The team is working on a resolution and we will provide updates at approximately 30 minute intervals or as soon as new information becomes available.
Jul 12, 13:54 EDT
Resolved - We would like to confirm that the issue has been resolved. The reported issue was that users were unable to re-authenticate successfully with the Snowflake OAuth option. A fix was applied and users should now be able to re-authenticate successfully. Please feel free to contact Support at support@getdbt.com if you continue to experience any further issues.
Jul 12, 13:19 EDT
Identified - We have identified an issue preventing re-authentication of Snowflake users from dbt Cloud. A fix is being implemented, and we will provide an update shortly.
Jul 12, 12:05 EDT
Update - We are still investigating the issue where reconnecting to Snowflake returns the server error: Runtime Error Credentials in profile "user", target "default" invalid: None is not of type 'string'.

Our team is in the process of trying to reproduce this issue in order to identify the cause, and test an appropriate solution. We will update again within the next 30 minutes.

Jul 12, 11:28 EDT
Update - We are continuing to investigate this issue.
Jul 12, 10:39 EDT
Investigating - We are currently investigating a server error seen when trying to reconnect or authenticate with Snowflake OAuth. The error may look like this:
Encountered an error: Runtime Error Credentials in profile "user", target "default" invalid: None is not of type 'string'

We will provide more updates as the investigation moves forward.

Jul 12, 10:01 EDT
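The error text above, "None is not of type 'string'", has the shape of a JSON Schema validation failure: a field that must be a string (here a credential value) is None. A minimal sketch with the jsonschema package, using a hypothetical profile fragment rather than dbt's actual schema:

    from jsonschema import Draft7Validator

    # Hypothetical fragment: the "user" credential must be a string.
    schema = {"type": "object", "properties": {"user": {"type": "string"}}}
    profile = {"user": None}  # e.g. a credential value that was wiped

    for error in Draft7Validator(schema).iter_errors(profile):
        print(error.message)  # None is not of type 'string'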
Resolved - The condition keeping the IDE and production jobs from starting has been resolved.
Jul 12, 13:12 EDT
Monitoring - We have deployed a fix for the pod issue on AWS ST instances that was causing problems with invoking dbt in the IDE and in production jobs. We are continuing to monitor progress.
Jul 12, 11:25 EDT
Identified - We have identified an issue with pods failing to schedule in our AWS ST instances that resulted in an inability to access the IDE and an inability to spin up new job runs. A fix is being implemented, and we will provide an update shortly. Thanks for your patience.
Jul 12, 10:56 EDT