This release includes performance improvements and bug fixes.
CLI version 7.6.8 is available.
Version 7.6.7
This release includes performance improvements and bug fixes.
CLI version 7.6.7 is available.
Version 7.6.6
Bug Fixes
Fixed: Accounts with an email address as the username in Snowflake were receiving connection errors.
Fixed: Unsupported data type changes failed on the first deploy, but succeeded on the second attempt with no code changes.
Fixed: Nodes weren’t deploying in order under certain circumstances.
Fixed: CLI versions greater than 7.4.3 would return a RangeError: Invalid string length when running plan.
Version 7.6
Version 7.6.5
Updates
A new locations.yml file is part of commits. It contains the default storage location and a list of storage locations.
You can now retry a Job from failure in the Coalesce app. When you retry a Job, the run details will have a link to the previous run.
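For illustration, a committed locations.yml along these lines would capture a default storage location plus the full list; the key names below are assumptions for the sketch, not the documented schema:

```yaml
# Hypothetical sketch of a locations.yml committed with a workspace.
# Key names are illustrative assumptions, not the documented schema.
defaultStorageLocation: TARGET
storageLocations:
  - name: TARGET
  - name: STAGING
```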
We’ve improved the error logging for the CLI.
You can now extend and modify Packages.
Bug Fixes
We fixed a bug where the API did not return the correct run URLs.
We fixed a bug where if there were too many Hash columns, scrolling wasn’t enabled.
We fixed a bug where users who were connected to a large number of workspaces weren't able to log in using OAuth.
We fixed a bug where insert statements were being run despite a Snowflake network error.
Documentation
We’re launching a new documentation site! You can check it out at https://preview-docs.coalesce.io/docs/.
What can you expect?
The week of September 29, 2024, Coalesce will implement the new site at https://docs.coalesce.io/docs. The documentation site URL is not changing.
Some page URLs have changed and redirects have been created.
Version 7.5.8
We fixed a bug where large jobs were failing due to repeated processing of the same information.
We’ve released CLI version 7.5.8
Coalesce Not Affected by Global CrowdStrike Outage
The global IT issues caused by CrowdStrike, while widespread, are not currently affecting Coalesce's systems. Coalesce does utilize CrowdStrike, but this issue is isolated and is not affecting our production cloud infrastructure. We are continuously monitoring our systems and are available via support for any questions.