dbt Cloud

Write-up
Delays in artifact processing for completed job runs
Degraded performance
Summary

Between approximately 6:00 AM UTC on April 6 and 4:00 AM UTC on April 7, 2026, model execution data stopped refreshing in the dbt Discovery API and dbt Catalog. This affected information related to which models had run, their run times, and their success or failure status. At the peak of the issue, this data was up to 22 hours out of date.

Impact

This issue affected Discovery API data and the dbt Catalog UI, causing runs to appear stale or missing. Runs continued to execute normally throughout the incident; only the reporting and visibility of those runs in the API and Catalog were impacted. All data is now current.

Root Cause

During a planned infrastructure update intended to improve how dbt processes model execution data, a bug was introduced that caused the new data pipeline to stop recording run results without triggering any visible errors. The API itself remained fully operational throughout; only the freshness of run state data was affected.

As a result, the Discovery API and dbt Catalog served stale data: model run history, statuses, and timing that had not updated since the issue began.

Mitigation

Once the root cause was identified, the team immediately rolled back the affected pipeline to restore data freshness. A targeted fix was then developed and deployed on April 7, 2026, and has been running in production since then.

Next Steps and Preventative Measures
Planned Remediation
  • Expand end-to-end test coverage for high-impact user flows to improve reliability and catch critical regressions earlier.

  • Build proactive data freshness monitoring for our internal metadata database, ensuring timely alerts that surface staleness issues before they impact downstream processes.
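The freshness monitoring described above amounts to comparing each table's most recent write against a staleness threshold and alerting when it is exceeded. A minimal sketch in Python, where the table names, threshold, and 22-hour example lag are all hypothetical illustrations rather than dbt's actual implementation:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical threshold; a real deployment would tune this per pipeline.
STALENESS_THRESHOLD = timedelta(minutes=30)

def is_stale(last_updated: datetime, now: datetime,
             threshold: timedelta = STALENESS_THRESHOLD) -> bool:
    """Return True when the most recent write is older than the threshold."""
    return (now - last_updated) > threshold

def check_freshness(tables: dict[str, datetime], now: datetime) -> list[str]:
    """Return the names of metadata tables whose data has gone stale."""
    return [name for name, ts in tables.items() if is_stale(ts, now)]

# Example: a run-results table last written 22 hours ago (the peak lag in
# this incident) trips the check, while a recently updated table does not.
now = datetime(2026, 4, 7, 4, 0, tzinfo=timezone.utc)
tables = {
    "run_results": now - timedelta(hours=22),
    "model_timing": now - timedelta(minutes=5),
}
print(check_freshness(tables, now))  # ['run_results']
```

A check like this would have surfaced the silent pipeline failure within the threshold window instead of after nearly a day of staleness.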