v0.6.0 Release Notes

Release date: May 2026

Release version: v0.6.0

SynxDB Cloud v0.6.0 continues to strengthen the platform’s capabilities in running and managing FoundationDB-backed deployments at scale, with multi-coordinator support, safer scale-down and cleanup workflows, and a warehouse-scoped view of catalog topology. This release also extends lakehouse access by enabling Hadoop Compatible FileSystem (HCFS) reads for Iceberg workloads through the Gopher v4.0.25 upgrade.

New features

Enhanced FoundationDB operations

Configurable catalog count for FoundationDB-backed accounts

The DBaaS Admin Console now enables the catalog count input in the Create Account and Edit Resource dialogs, and raises the per-account upper limit from 1 to 10. You can provision and resize FoundationDB-backed accounts with multiple catalog nodes directly from the console, without editing region configuration files.

See Create an account.

Multiple dispatch-mode Coordinators on FoundationDB catalog

SynxDB Cloud supports deploying multiple dispatch-mode Coordinator nodes when using FoundationDB as the metadata backend, distributing dispatch workloads across nodes for higher availability and concurrency. When you reduce the coordinator replica count on a FoundationDB-backed account, the DBaaS Admin Console automatically removes the stale coordinator entries from gp_segment_configuration before tearing down the corresponding pods, keeping the kernel topology consistent with the Kubernetes deployment.
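After a scale-down, the coordinator topology can be inspected directly in the catalog. A minimal sketch, assuming the standard gp_segment_configuration columns, where coordinator entries are identified by content = -1:

```sql
-- Sketch: after reducing the coordinator replica count, the remaining
-- coordinator entries (content = -1) should match the running pods.
SELECT dbid, content, role, hostname
FROM   gp_segment_configuration
WHERE  content = -1
ORDER  BY dbid;
```

If an entry lingers here without a corresponding pod, it indicates the automatic cleanup did not complete and manual removal may be needed.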

Unified local coordinator selection and removal UDF

In multi-warehouse and multi-coordinator deployments, SynxDB Cloud now resolves the local coordinator deterministically using a single matcher on content, warehouseid, and dbid, eliminating a class of edge cases where a node could bind to the wrong coordinator during cluster operations.

Administrators can also use the new gp_remove_coordinator_by_dbid() UDF to remove a specific coordinator entry by its dbid. The function runs only on the coordinator in utility mode, requires superuser privileges, and acquires an AccessExclusiveLock, providing an explicit, auditable cleanup path when scaling down or replacing coordinator nodes.

See gp_remove_coordinator_by_dbid.
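A minimal invocation sketch, assuming the function takes the target dbid as its only argument; the dbid value here is illustrative, and the session must be a superuser connection to the coordinator in utility mode as described above:

```sql
-- Remove the stale coordinator entry with dbid 7 (hypothetical value).
-- Acquires AccessExclusiveLock; run during a maintenance window.
SELECT gp_remove_coordinator_by_dbid(7);
```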

Warehouse-scoped filter for gp_segment_configuration

SynxDB Cloud introduces the cloud.gp_segconfig_filter_warehouse GUC, which injects a warehouseid = current_warehouse OR content = -1 predicate into scans of gp_segment_configuration. The filter is applied in the planner hook before standard_planner and recurses into subqueries and CTEs, so INSERT sub-selects and UDF SPI queries also see only the segments relevant to the current warehouse. This keeps tooling and regression behavior correct in multi-warehouse deployments that share a catalog.

See Configuration parameters.
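A session-level sketch of the filter in action, assuming the warehouseid column is visible on gp_segment_configuration (the release notes reference it in the injected predicate):

```sql
-- Sketch: enable the warehouse-scoped filter for the current session.
SET cloud.gp_segconfig_filter_warehouse = on;

-- With the filter on, this scan sees only segments belonging to the
-- current warehouse, plus coordinator entries (content = -1).
SELECT dbid, content, warehouseid
FROM   gp_segment_configuration;
```

Because the predicate is injected before standard_planner and recurses into subqueries and CTEs, the same restriction applies to INSERT sub-selects and SPI queries issued from UDFs.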

Lakehouse access

Iceberg Hadoop Compatible FileSystem (HCFS) access via Gopher v4.0.25

SynxDB Cloud upgrades the Gopher component to v4.0.25, enabling Iceberg data to be read through the Hadoop Compatible FileSystem (HCFS) interface. The upgrade also adds a retry path for concurrent file opens, making access more resilient when the Gopher store evicts metadata under load.

Tools and utilities

Conditional display of ML Cluster information

The user detail page now hides the ML Cluster information block in both the DBaaS Admin Console and the user console when ML Cluster is disabled for the account, and uses a consistent display name across the two consoles. This prevents empty or misleading panels from appearing on accounts where the feature is turned off.

Product change information

UnionStore hidden by default

In previous releases, the UnionStore module was visible in the console by default. Starting with this release, the console hides UnionStore by default, joining the FoundationDB management, TP Server, AI Bot - Data, AI Bot - Doc, and ML Cluster modules that were already opt-in. To enable UnionStore, set enable-union-store to true under dbaas.region.features in the Helm values file before deploying the cluster. When the flag is off, the UnionStore management page, UnionStore-related versions, and the UnionStore metadata option in the Create Account dialog are all hidden.

See Prepare the configuration file.
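A sketch of the relevant Helm values fragment, with surrounding keys elided; the exact file layout is described in Prepare the configuration file:

```yaml
# Helm values fragment (sketch): opt in to UnionStore before deploying.
dbaas:
  region:
    features:
      enable-union-store: true  # UnionStore is hidden when false (the default)
```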

GUC configuration parameters

Newly added GUCs

  • cloud.gp_segconfig_filter_warehouse: default off. Controls whether scans of gp_segment_configuration are restricted to the current warehouse. When set to on, the planner injects a warehouseid = current_warehouse OR content = -1 predicate into such scans, including those nested in subqueries and CTEs. See Configuration parameters.

Removed GUCs

The following GUC, available in v0.5.0, is removed in v0.6.0:

  • cloud.debug_print_virtualcat

Conditionally registered GUCs

Because UnionStore is hidden by default in v0.6.0 (see UnionStore hidden by default), the following UnionStore-related GUCs are no longer registered in default deployments. They become available again only when UnionStore is explicitly enabled in the Helm values file:

  • unionstore.enable_slru_check

  • unionstore.file_cache_path

  • unionstore.file_cache_size_limit

  • unionstore.flush_output_after

  • unionstore.lsn_cache_size

  • unionstore.max_cluster_size

  • unionstore.max_file_cache_size

  • unionstore.pageserver_connstring

  • unionstore.readahead_buffer_size

  • unionstore.relsize_hash_size

  • unionstore.safekeeper_connect_timeout

  • unionstore.safekeeper_reconnect_timeout

  • unionstore.safekeeper_token_env

  • unionstore.safekeepers

  • unionstore.tenant_id

  • unionstore.timeline_id

  • unionstore.wal_send_timeout

  • unionstore.walsender_buffer_size

Changed default values

The default values of the following GUCs have changed in v0.6.0:

  • gp_autostats_mode: changed from none to on_no_stats. The system now automatically collects statistics when a query inserts or updates rows in a table that has no statistics.

  • gp_interconnect_tcp_listener_backlog: changed from 256 to 128, reducing the pending-connection queue size for the TCP interconnect listener.

  • gp_workfile_limit_files_per_query: changed from 100000 to 10000. A single query is now limited to creating at most 10,000 spill files before being terminated.

  • statement_mem: changed from 128000 kB to 131072 kB (128 MB), aligning the per-query memory default with a power-of-two value.
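After upgrading, the new defaults can be reviewed and, where needed, overridden per session. A sketch, assuming standard SHOW/SET behavior for these GUCs:

```sql
-- Review the changed defaults after upgrading to v0.6.0 (sketch):
SHOW gp_autostats_mode;
SHOW gp_interconnect_tcp_listener_backlog;
SHOW gp_workfile_limit_files_per_query;
SHOW statement_mem;

-- Defaults can still be overridden for a session, for example:
SET statement_mem = '256MB';
```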

Bug fixes

  • Fix a 401 Authentication failed error on the DBaaS Admin Console login page on first load by adding the settings API endpoint to the authentication whitelist.

  • Fix the i18n entry for the Catalog label on the Create Account page in the Russian locale, keeping terminology consistent with other locales.

  • Fix incorrect Docker image name prefixes in distributor-specific builds, so distributors pull images under the correct, distributor-appropriate path.

  • Fix a stale configChecksum overwrite in WarehouseOperatorReconciler.run() and suspend() by removing the redundant second status patch that re-applied configChecksum=null from an in-memory snapshot, so Warehouse CRs persist the correct checksum written by notifyCoordinator().

  • Fix orphaned FileCleaner rows accumulating after account deletion by adding a FileCleaner cleanup block to both AbstractAccountProvider.cleanAccountRelatedResources() and FakeAccountProvider.cleanAccountRelatedResources().

  • Replace a vendor-specific hardcoded placeholder for hdw.encryptor.password in the Hive Meta Sync template (and its test fixtures) with a neutral <your-password> placeholder, so distributors do not expose a vendor-branded sample password and users are clearly prompted to substitute their own value.

  • Fix label truncation in the AI Bot - Data and AI Bot - Doc create modals when the Russian locale is used.

  • Fix a postmaster crash caused by calling the async-signal-unsafe backtrace() from signal-handler paths. errstart() now calls backtrace() only when elevel >= ERROR, and errprintstack(true) callers defer collection to errprintstack() itself, preventing crashes when SIGCHLD interrupts _dl_runtime_resolve mid-execution.

  • Fix systable scan behavior on dispatched catalog toast relations by skipping the store scan only for TP Server toast relations, restoring correct catalog access for queries that hit catalog toast data.

  • Fix a build issue in the datalake_fdw extension’s Makefile so the foreign data wrapper builds cleanly from source.

  • Improve diagnosability of warehouse status-sync issues by logging the current CR status in the Run Warehouse instance entry of WarehouseOperatorReconciler.run(), and by logging both the stored and the newly computed checksums in notifyCoordinator() before the comparison that decides whether to skip the dbaas-cli sync.

  • Fix layout issues on the Organizations, UnionStore, FoundationDB, and related detail pages in narrow content areas and with long-label locales by adding global responsive CSS rules (ProTable overflow, query-filter responsive columns, Descriptions word-break), a shared responsive-cols constant module, and a useContainerColumns hook based on ResizeObserver for split-pane pages. The change also updates ResourceMetricCard, Metric/base, and Version/list to use flex-wrap and ellipsis instead of fixed viewport breakpoints.

  • Speed up DROP DATABASE on FoundationDB-backed catalogs by skipping per-table file enumeration over libpq, because the catalog archive process already handles file cleanup.

  • Shorten incremental build times by introducing a stamp-file mechanism so the three Java contrib modules only re-run mvn package when their sources change.