v0.5.0 Release Notes
Release date: April 2026
Release version: v0.5.0
SynxDB Cloud v0.5.0 continues to strengthen the platform’s capabilities in query optimization, FoundationDB-backed operations, and cloud-native administration.
Query and storage optimizations
- Bulk INSERT INTO…VALUES optimization: Falls back to the Postgres planner for large VALUES clauses, delivering over 10x faster bulk ingestion.
- Common subquery SPJ to CTE optimization: Rewrites shareable subqueries as CTEs to eliminate redundant execution.
Expanded FoundationDB catalog capabilities
- Time Travel on FDB catalog: Queries historical data and recovers deleted objects on FDB-backed deployments.
- Directory tables on FDB catalog: Brings directory table DDL, DFS file management, and COPY to FDB-backed deployments.
Cloud-native operations
- AWS Marketplace deployment via Omnistrate: Subscribe to SynxDB Cloud as a BYOC offering and let Omnistrate provision EKS, RDS, and S3 in your own AWS account.
- TP Server monitoring: Surfaces instance details, resource usage, performance indicators, and cache stats in the DBaaS Admin Console.
- Feature flags for optional modules: Hides FoundationDB, TP Server, AI Bot - Data, AI Bot - Doc, and ML Cluster by default; enable via Helm values.
New features
Query processing and optimization
Ordered insertion for foreign tables
SynxDB Cloud’s query planner injects a Sort operator before the final insertion step when you use INSERT INTO ... SELECT ... ORDER BY to write to a foreign table, ensuring rows are delivered to the data handler in the specified order despite parallel segment execution. This guarantees deterministic, sorted output for ETL pipelines that write to external tables.
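A minimal sketch of the pattern (table and column names are illustrative; sales_ext is assumed to be a foreign table backed by a data handler):

```sql
-- The planner adds a Sort on event_time before the final insertion step,
-- so the foreign data handler receives rows in the specified order even
-- though segments execute the SELECT in parallel.
INSERT INTO sales_ext (id, event_time, amount)
SELECT id, event_time, amount
FROM sales
ORDER BY event_time;
```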
Common subquery SPJ to CTE optimization
SynxDB Cloud’s query optimizer detects common shareable subqueries in a query statement and rewrites them as CTEs, eliminating redundant subquery execution and reducing query time for complex analytical queries.
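For illustration, a query of the following shape, where the same select-project-join (SPJ) subquery appears twice, is a candidate for the rewrite (table names are illustrative):

```sql
-- Both branches contain the same join-and-aggregate subquery; rewriting it
-- as a shared CTE lets the optimizer compute the result once and reuse it
-- instead of executing the subquery twice.
SELECT r.region, r.total
FROM (SELECT region, SUM(amount) AS total
      FROM orders JOIN customers USING (customer_id)
      GROUP BY region) r
WHERE r.total > (SELECT AVG(total)
                 FROM (SELECT region, SUM(amount) AS total
                       FROM orders JOIN customers USING (customer_id)
                       GROUP BY region) t);
```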
Bulk INSERT INTO…VALUES optimization
SynxDB Cloud automatically falls back to the Postgres planner when the number of rows in a VALUES clause exceeds optimizer_valuescan_threshold (default: 100), because GPORCA’s Value Scan plan becomes expensive at scale. The Postgres planner redistributes data across all segments for parallel processing, achieving over 10x faster execution for bulk inserts common in ETL and data ingestion pipelines.
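A sketch of the behavior, assuming the parameter can be set at session level (table name is illustrative):

```sql
-- Lower the threshold from its default of 100 to trigger the fallback
-- sooner for demonstration purposes.
SET optimizer_valuescan_threshold = 50;

-- A VALUES list with more than 50 rows now bypasses GPORCA's Value Scan
-- plan; the Postgres planner redistributes the rows across all segments
-- so the insert runs in parallel.
INSERT INTO events (id, payload)
VALUES (1, 'a'), (2, 'b'), (3, 'c') /* ... more rows ... */;
```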
Storage
Prefetch micro partition reader and file group
SynxDB Cloud prefetches the micro partition reader and the first group of a file before computation begins, overlapping I/O with processing. This hides I/O latency behind computation, reducing query latency and improving scan throughput when scanning Cloud-format tables on cloud storage.
Directory tables on FDB catalog
SynxDB Cloud supports directory tables on the FoundationDB (FDB) catalog backend, enabling DDL operations, DFS tablespace file management, and COPY operations in deployments with separated storage and compute. Use the TPSERVER clause to specify the target TP Server node when creating a directory table, or set cloud.default_tpserver to route automatically. To replace a file via COPY BINARY, remove it with remove_file() and commit before uploading.
See Directory tables and Create directory tables on TpServer.
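A sketch of the workflow described above; the TPSERVER clause, cloud.default_tpserver, and remove_file() are named in this release note, but the object names and file path here are illustrative, and the exact syntax is in the linked pages:

```sql
-- Create a directory table pinned to a specific TP Server node.
CREATE DIRECTORY TABLE doc_store TPSERVER tpserver1;

-- Alternatively, set a default so new directory tables route automatically.
SET cloud.default_tpserver = 'tpserver1';

-- Replacing a file via COPY BINARY: remove the old file and commit first.
SELECT remove_file('doc_store', 'reports/2026-04.bin');
-- After the removal commits, upload the replacement with COPY ... BINARY.
```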
Asynchronous file cleanup through FDB catalog
SynxDB Cloud uses a dedicated cleanup service to remove obsolete object storage files asynchronously after VACUUM on FDB-backed deployments, keeping VACUUM operations lightweight and preventing file cleanup from affecting concurrent query performance.
Time Travel on FDB catalog
SynxDB Cloud now supports Time Travel on the FoundationDB (FDB) catalog backend, allowing you to query the historical state of data at a specific point in the past and recover accidentally deleted database objects on FDB-backed deployments.
Data federation and lakehouse integration
Independent deployment for Hive Meta Sync (experimental)
Hive Meta Sync can now be deployed as an independent component, managed separately from the database cluster through the DBaaS Admin Console. When creating a sync configuration, you must select a Profile to allocate resources for the sync service, and can optionally specify an Environment Spec for the Kubernetes runtime. Sync tasks support in-place editing after creation. This feature is experimental; do not use it in production.
HDFS block location retrieval for data-locality-aware scheduling
Setting datalake.enable_get_block_location to on enables data-locality-aware scheduling for HDFS-backed tables by retrieving block location information during directory listing. Enable it only when database segments are co-located with HDFS data nodes; the parameter is disabled by default because retrieving block locations adds overhead.
See HDFS and OSS-related configuration parameters and Configuration parameters.
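For example, in a session running against segments that are co-located with the HDFS data nodes:

```sql
-- Off by default: retrieving block locations during directory listing
-- adds overhead, so enable it only for co-located deployments.
SET datalake.enable_get_block_location = on;

-- Scans of HDFS-backed tables issued after this point can schedule work
-- on the segments that hold the relevant HDFS blocks locally.
```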
Tools and utilities
TOML-based configuration for AI Bot - Data and AI Bot - Doc instances
When creating AI Bot - Data or AI Bot - Doc instances in the DBaaS Admin Console, you can supply a service configuration through the Config Content field in TOML format. For AI Bot - Data, the configuration defines LLM providers (model name, endpoint URL, API key, and model identifier); for AI Bot - Doc, it defines service endpoints and model providers for embedding and language tasks. Click View Example to load a starter template, then Apply to Content to populate the editor.
In addition, AI Bot - Doc now supports selecting one or more ML Clusters when creating an instance.
See Manage AI Platforms.
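As a rough sketch only, a Config Content for AI Bot - Data might group the documented fields (model name, endpoint URL, API key, and model identifier) like this; the section and key names below are hypothetical, and the authoritative layout comes from the View Example template in the console:

```toml
# Hypothetical layout; click View Example in the console for the real keys.
[[llm_providers]]
name = "default-llm"                              # model name
endpoint = "https://llm.internal.example.com/v1"  # endpoint URL
api_key = "REPLACE_ME"                            # API key
model = "my-model-id"                             # model identifier
```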
Monitor TP Server metrics in the Admin Console
You can now view monitoring metrics for TP Servers directly in the DBaaS Admin Console. Opening a TP Server from the Account Detail page displays a monitoring panel covering instance details, infrastructure resource usage, core database performance indicators, and storage and cache performance. You can select a specific Pod from the dropdown menu at the top of each tab and use the top-right controls to set a custom time range or enable auto-refresh.
GUC parameter configuration enhancements
The DBaaS Admin Console now supports two additional configurable GUC parameters: max_locks_per_transaction controls the average number of object locks allocated per transaction, and pg_gophermeta.gopher_local_capacity_mb controls how much local disk space the GopherMeta process can use for caching. These parameters are now available in the GUC management interface, giving DBAs fine-grained control without manual configuration file edits.
The platform also enforces cross-validation between statement_mem and max_statement_mem: if statement_mem exceeds max_statement_mem, the platform rejects the update with an error. This validation prevents a misconfiguration that could cause query memory allocation failures.
Private registry support for offline deployments
For air-gapped or offline environments, set the image.registry field in each component’s Helm values file to redirect container image pulls to your private registry, enabling full offline installation without access to public container registries.
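A minimal sketch, assuming a component chart that nests the field under image (the registry host is illustrative):

```yaml
# values.yaml fragment for an air-gapped install
image:
  registry: registry.internal.example.com   # your private registry
```

Applying this override in each component’s Helm values file (for example, via helm upgrade --install -f values.yaml) redirects image pulls to the private registry instead of public ones.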
AWS Marketplace deployment via Omnistrate
SynxDB Cloud is now available on AWS Marketplace as a BYOC (Bring Your Own Cloud) offering through the Omnistrate platform. Subscribe from AWS Marketplace, connect your AWS account via CloudFormation, and Omnistrate automatically provisions the EKS cluster, RDS database, and S3 buckets required to run SynxDB Cloud in your own AWS account.
Standalone FileCleaner deployment
SynxDB Cloud deploys FileCleaner as a standalone Kubernetes Deployment, decoupled from the Catalog sidecar with its own resource specification, improving resource isolation and reducing the impact of FileCleaner failures on the Catalog component. For FDB-backed accounts, configure a FileCleaner Profile and Count (1–99) when creating the account; create the resource specification on the Profile page beforehand.
See FileCleaner and Create an account.
Multi-catalog support for FDB-backed accounts
FDB-backed accounts now support multiple catalog instances, enabling multi-tenant catalog isolation so you can partition metadata across different workloads or teams. When creating an account with FoundationDB as the metadata backend, the catalog count defaults to 1 for the initial account setup.
See Create an account.
Broadcast logout and channel management
When a user logs out, the DBaaS Admin Console broadcasts the logout event and closes all associated channels across distributed console instances, ensuring consistent session state and preventing stale sessions.
Product change information
GUC configuration parameters
Newly added GUCs
- max_locks_per_transaction: default 128. Controls the average number of object locks per transaction. Increase this value if you encounter “out of shared memory” errors when a single transaction touches many tables (for example, DROP SCHEMA ... CASCADE). See Configuration parameters.
- pg_gophermeta.gopher_local_capacity_mb: default 1024000 (MB). Controls the local disk space available to the GopherMeta process for caching. A recommended starting point is approximately 30% of available disk space. See Configuration parameters.
- datalake.enable_get_block_location: default off. Controls whether HDFS block location information is retrieved during directory listing, enabling data-locality-aware scheduling for HDFS-backed tables. Enable only when database segments are co-located with HDFS data nodes. See Configuration parameters.
Bug fixes
- Fix pathkey inheritance in GroupAggregate that incorrectly preserved ordering assumptions from subpaths for columns used as aggregate arguments, preventing invalid Gather Motion plans.
- Fix gp_session_id assertion failure in bootstrap mode by adding an IsBootstrapProcessingMode() check during utility-mode processing.
- Remove redundant auto-dependency between a materialized view and its auxiliary entry, simplifying catalog operations during creation and deletion.
- Replace incorrect heap_getnext() with table_scan_getnext() when fetching tuples during database drop cleanup in cloud mode.
- Fix gplogfilter memory error caused by StringIO.truncate(0) not resetting the write cursor; add an explicit seek(0) before each row write to prevent unbounded buffer growth when processing large log files.
- Close directories in LocalFileSystem::ListDirectory on exception to prevent file descriptor leaks in the PAX storage engine.
- Remove unnecessary PaxResourceList management for local files in PAX, relying on C++ destructors for cleanup instead.
- Always consume future objects in the PAX TableReader close function when scanning stops before a micro partition is fully read, preventing resource leaks during query cancellation or LIMIT-based early termination.
- Fix static variable inheritance and contamination issues across forked backend processes.
- Release resources from CacheMemoryContext and TopMemoryContext after catalog access in cloud_cleanup_dbfiles(), and use temporary memory contexts for transient data, preventing OOM when dropping databases with many tables.
- Validate hash function existence using get_op_hash_functions() in ORCA before generating HashAgg plans, moving error detection from execution time to plan time for invalid hash operator configurations.
- Check IsSorted before calling Sort in ORCA’s CDatumSortedSet to reduce comparisons for pre-sorted IN-list predicates, improving query optimization time.
- Change error level to notice for repeated operations to ensure idempotent behavior and enable safe retries.
- Fix the Create button overlapping tab headers on the account detail page, unstable pagination in the database parameter config list, and unsorted pod list ordering in the console.
- Fix distributor brand switching by setting the distributor in build environment parameters and adding correct aliases for logo and icon paths.
- Fix a NullPointerException in the console backend.
- Fix a generic type inference warning in the Java codebase.
- Refactor the UnionStore detail page layout for improved usability.
- Prompt users to restart the ML Cluster after editing its profile to ensure configuration changes take effect.
- Allow revoking privileges from warehouse or tablespace owners.
- Align the count field width to the warehouse popup, replace the GPU enabled checkbox with a workload type radio group for ML Cluster creation, and adjust the warehouse creation popup for visual consistency.
- Show the AI Bot - Doc option when the account metadata type is UnionStore.
- Centralize public key generation into a shared CertificateService to eliminate duplicated code.
- Set the FDB exporter container image from configuration instead of hard-coding the value.
- Remove the hostPath type constraint for /etc/localtime to enable deployment on AWS Bottlerocket nodes.
- Fix multiple typos in the console codebase and UI text.
- Remove a redundant Custom Resource Definition from the Kubernetes operator.