Manage Database Connections
In SynxDB Cloud, you can centrally manage connections to external data sources, such as HDFS, Hive Connector, and Kerberos (KDC), through the Database Config page in the DBaaS Admin Console. This GUI-based configuration method replaces the previous, cumbersome process of modifying configuration files with the kubectl command-line tool, making connection management more intuitive and convenient.
Access the Database Config page
Log in to the SynxDB Cloud DBaaS Admin Console.
In the left navigation pane, click Database Config.
View and manage existing connection configurations
On the Database Config page, you can find, view, and manage all your existing data source connections. The process for managing configurations is the same for HDFS, Hive, and Kerberos.
Click the tab for the type of connection you want to manage: HDFS, Hive Connector, KDC, or Iceberg OSS. The list will show all existing configurations for the selected type.
(Optional) To find a specific configuration, use the filter fields at the top of the list:
Select the Organization Name and/or Account Name from the dropdown lists.
Click Query.
To clear the filters, click Reset.
In the configuration list, locate the configuration you want to manage and perform one of the following actions in the Action column:
Delete: Permanently removes the configuration.
Activate/Deactivate: Toggles the status of the configuration. An Active status means the connection is enabled and can be used by the system.
Clone: Creates a copy of the existing configuration. This is useful when you need to create a new configuration that is similar to an existing one.
Create connection configurations
This section describes the step-by-step process to create new connection configurations for HDFS, Hive, and Kerberos.
Configure an HDFS connection
Configuring an HDFS connection is a three-step process where you provide basic information, specify the HDFS plugin details, and then review your configuration.
Note
If your hdfs_namenode_host is a hostname (rather than an IP address), or if the HDFS DataNodes return hostnames during block transfer (typical when dfs.datanode.hostname or dfs.client.use.datanode.hostname=true is set on the HDFS cluster), the SynxDB Cloud cluster must be able to resolve those hostnames. This typically requires an administrator to update the cluster’s CoreDNS configuration to map the hostnames to their corresponding IP addresses.
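If such hostname mappings are needed, the administrator can typically add them with the CoreDNS hosts plugin. The following Corefile fragment is only an illustrative sketch: the hostnames and IP addresses are placeholders, and the exact layout depends on your cluster's existing CoreDNS configuration.

```
.:53 {
    hosts {
        # Map HDFS hostnames to their IP addresses (example values).
        10.13.9.156 namenode1.example.com
        10.13.9.157 namenode2.example.com
        fallthrough
    }
    forward . /etc/resolv.conf
    cache 30
}
```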
On the HDFS tab, click + Create.
In the Basic Information step, select the Organization and Account for this configuration, then click Next.
In the HDFS Plugin Configuration step, provide the connection details for your HDFS cluster.
Click + Add HDFS Plugin.
Select the Authentication Method for your HDFS cluster. The UI supports both Simple Authentication and Kerberos Authentication.
Provide the HDFS configuration using one of the following methods:
Manual Input: Paste the HDFS configuration directly into the text field. The required parameters change based on the selected authentication method.
Here is an example for Simple Authentication:

```yaml
hdfs-cluster-1:
  hdfs_namenode_host: mycluster
  hdfs_namenode_port: 9000
```
Here is a comprehensive example for Kerberos Authentication with high availability (HA):

Note

You need to replace the configuration options with your own values. For a detailed description of each option, see the tables below.

In the configuration files, configuration options under cluster names must be indented relative to the cluster name lines. For example, in the following example, the configuration options (such as hdfs_namenode_host and hdfs_namenode_port) under hdfs-cluster-1 must be indented.

```yaml
hdfs-cluster-1:
  hdfs_namenode_host: mycluster
  hdfs_namenode_port: 9000
  hdfs_auth_method: kerberos
  krb_principal: hdfs/10-13-9-156@EXAMPLE.COM
  krb_principal_keytab: /etc/kerberos/keytab/hdfs.keytab
  krb_service_principal: hdfs/10-13-9-156@EXAMPLE.COM
  is_ha_supported: true
  hadoop_rpc_protection: authentication
  data_transfer_protocol: true
  dfs.nameservices: mycluster
  dfs.ha.namenodes.mycluster: nn1,nn2
  dfs.namenode.rpc-address.mycluster.nn1: 10.13.9.156:9000
  dfs.namenode.rpc-address.mycluster.nn2: 10.13.9.157:9000
  dfs.client.failover.proxy.provider.mycluster: org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
```
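If you want to sanity-check a configuration block's indentation before pasting it, a small script can catch the most common mistake: option lines not indented under their cluster name. The following Python sketch is an illustrative helper, not part of SynxDB Cloud, and is not a full YAML parser.

```python
def parse_cluster_config(text):
    """Parse the two-level 'cluster-name: / option: value' layout shown above.

    Raises ValueError when an option line is not indented under a cluster name.
    """
    result, current = {}, None
    for raw in text.splitlines():
        if not raw.strip():
            continue
        indented = raw[0] in " \t"
        # Split on the first colon only, so values like host:port survive.
        key, _, value = raw.strip().partition(":")
        if indented:
            if current is None:
                raise ValueError(f"indented option before any cluster name: {raw!r}")
            result[current][key] = value.strip()
        else:
            if value.strip():
                raise ValueError(f"option line must be indented under its cluster: {raw!r}")
            current = key
            result[current] = {}
    return result

config = """\
hdfs-cluster-1:
  hdfs_namenode_host: mycluster
  hdfs_namenode_port: 9000
"""
print(parse_cluster_config(config)["hdfs-cluster-1"]["hdfs_namenode_port"])  # 9000
```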
The following table describes the available configuration options. You can get this information from the hdfs-site.xml and core-site.xml files of the target HDFS cluster.

General options:

| Option name | Description | Default value |
| --- | --- | --- |
| hdfs_namenode_host | Configures the host information of HDFS. For example, hdfs://mycluster, where hdfs:// can be omitted. | / |
| hdfs_namenode_port | The port of the HDFS NameNode RPC service. Set this to match fs.defaultFS in the HDFS cluster's core-site.xml. If omitted, defaults to 9000. Common values are 9000 (the Hadoop 2.x default) and 8020 (the Hadoop 3.x default). | 9000 |
| hdfs_auth_method | Configures the HDFS authentication method. Use simple for regular HDFS. Use kerberos for HDFS with Kerberos. | / |
| hadoop_rpc_protection | Matches the hadoop.rpc.protection setting in core-site.xml. Can be authentication, integrity, or privacy. | / |
Kerberos options:

| Option name | Description | Default value |
| --- | --- | --- |
| krb_principal | Kerberos principal. Required when hdfs_auth_method is set to kerberos. | / |
| krb_principal_keytab | The location on the cluster where the user-generated keytab is placed. | / |
| krb_service_principal | The service principal for the HDFS service. Required for Kerberos. | / |
| data_transfer_protection | The quality of protection for data transfer. Can be authentication, integrity, or privacy. | / |
| data_transfer_protocol | When the HDFS cluster has block data transfer encryption enabled (that is, dfs.encrypt.data.transfer=true in hdfs-site.xml), set this to true. | / |
High availability (HA) options:

| Option name | Description | Default value |
| --- | --- | --- |
| is_ha_supported | Set to true to enable high availability (HA) support. | false |

If is_ha_supported is set to true, you must also provide the following HA-specific properties. Replace <nameservice> with your actual HDFS nameservice ID.

| HA option name | Description |
| --- | --- |
| dfs.nameservices | The logical name for the HA name service. |
| dfs.ha.namenodes.<nameservice> | The unique identifiers for each NameNode in the name service (for example, nn1,nn2). |
| dfs.namenode.rpc-address.<nameservice>.<namenode_id> | The fully qualified RPC address for each NameNode to listen on. |
| dfs.client.failover.proxy.provider.<nameservice> | The Java class that HDFS clients use to contact the Active NameNode. |
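For reference, these HA options correspond one-to-one to properties in the HDFS cluster's hdfs-site.xml. The values typically come from a fragment like the following, where the nameservice ID, NameNode IDs, and addresses are illustrative examples:

```xml
<!-- Example hdfs-site.xml fragment; names and addresses are illustrative. -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>10.13.9.156:9000</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>10.13.9.157:9000</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```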
When Kerberos Authentication is selected, you also need to upload the corresponding keytab file. The process is identical to the one described in the Configure a Kerberos connection section.
File Upload: Upload an HDFS configuration file (for example, gphdfs.conf or hdfs-site.xml). Supported formats are .xml, .conf, and .txt.
(Optional) To add another HDFS cluster to this configuration, click + Add Another Plugin and repeat the steps above.
Click Next.
In the Configuration Preview step, carefully review all the details you have entered.
If everything is correct, click Submit to create the HDFS connection configuration.
Configure a Hive connection
Configuring a Hive connection follows a similar three-step process: providing basic information, specifying the Hive connector details, and then reviewing your configuration before submission.
On the Hive Connector tab, click + Create.
In the Basic Information step, select the Organization and Account for this configuration, then click Next.
In the Hive Connector Configuration step, provide the connection details for your Hive Metastore.
Click + Add Hive Connector.
Select the Authentication Method:
Simple Authentication: For Hive clusters without Kerberos.
Kerberos Authentication: For Hive clusters secured with Kerberos.
Provide the Hive configuration using one of the following methods:
Manual Input: Paste the Hive configuration directly into the text field. The required parameters change based on the selected authentication method.
Here is an example for Simple Authentication:

```yaml
hive-cluster-1:
  uris: thrift://10.13.9.156:9083
  auth_method: simple
```
Here is an example for Kerberos Authentication with high availability (HA):

```yaml
hive-cluster-1:
  uris: thrift://10.13.9.156:9083,thrift://10.13.9.157:9083
  auth_method: kerberos
  krb_service_principal: hive/_HOST@EXAMPLE.COM
  krb_client_principal: hive/10-13-9-156@EXAMPLE.COM
  krb_client_keytab: /etc/kerberos/keytab/hive.keytab
```
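Because a malformed uris value is a common source of connection failures, you may want to validate the comma-separated list before submitting. The following standalone Python sketch (illustrative, not part of SynxDB Cloud) checks that each entry has the thrift://host:port shape:

```python
from urllib.parse import urlparse

def split_hms_uris(uris):
    """Split a comma-separated Hive Metastore 'uris' value into (host, port) pairs."""
    endpoints = []
    for uri in uris.split(","):
        parsed = urlparse(uri.strip())
        if parsed.scheme != "thrift" or parsed.hostname is None or parsed.port is None:
            raise ValueError(f"expected thrift://host:port, got {uri!r}")
        endpoints.append((parsed.hostname, parsed.port))
    return endpoints

print(split_hms_uris("thrift://10.13.9.156:9083,thrift://10.13.9.157:9083"))
# [('10.13.9.156', 9083), ('10.13.9.157', 9083)]
```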
The following table describes the available configuration options. You can typically find this information in the hive-site.xml file of the target Hive cluster.

| Option name | Description | Default value |
| --- | --- | --- |
| uris | The listening address of the Hive Metastore Service (the HMS hostname). For high availability (HA), you can provide multiple URIs separated by commas. | / |
| auth_method | The authentication method for the Hive Metastore Service: simple or kerberos. | simple |
| krb_service_principal | The service principal required for Kerberos authentication of the Hive Metastore Service. When using the HMS HA feature, configure the instance in the principal as _HOST, for example, hive/_HOST@EXAMPLE.COM. | / |
| krb_client_principal | The client principal required for Kerberos authentication of the Hive Metastore Service. | / |
| krb_client_keytab | The keytab file of the client principal required for Kerberos authentication. | / |
| debug | The debug flag for the Hive Connector: true or false. | false |

If you select Kerberos Authentication, you must also upload the corresponding keytab file.
File Upload: Upload a Hive configuration file (for example, gphive.conf or hive-site.xml). Supported formats are .xml, .conf, and .txt.
Click Next.
In the Configuration Preview step, review all the details you have entered.
If everything is correct, click Submit to create the Hive connection configuration.
Configure a Kerberos connection
Configuring a Kerberos (KDC) connection involves three main steps: providing basic information, supplying the Kerberos configuration details, and reviewing the setup before submission.
Prerequisites
Before you begin, ensure that the SynxDB Cloud cluster can resolve the hostnames of your KDC server and any other Kerberized services (such as HDFS NameNodes or Hive Metastore servers) if you are using hostnames instead of IP addresses in your configuration files. This might require an administrator to update the cluster’s CoreDNS configuration to map the hostnames to their corresponding IP addresses.
Steps to configure the KDC connection
On the KDC tab, click + Create.
In the Basic Information step, select the Organization and Account for this configuration, then click Next.
In the Kerberos Configuration step, you need to provide the krb5.conf content and, optionally, any Kerberos snippets.
Krb5.conf Configuration: Provide the main Kerberos configuration file content using one of these methods:
Manual Input: Paste the content of your krb5.conf file directly into the text field.
Here is an example of a typical krb5.conf file. You need to replace the placeholder values (for example, <kdc_ip> and <admin_server_ip>) with the actual IP addresses or fully qualified domain names (FQDNs) of your servers.

```
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 pkinit_anchors = FILE:/etc/pki/tls/certs/ca-bundle.crt
 default_realm = EXAMPLE.COM

[realms]
 EXAMPLE.COM = {
  kdc = <kdc_ip>
  admin_server = <admin_server_ip>
 }

[domain_realm]
 .example.com = EXAMPLE.COM
 example.com = EXAMPLE.COM
```

File Upload: Upload your krb5.conf file. Supported formats are .conf and .txt.
Kerberos Snippets (optional): If you have additional Kerberos configuration snippets, upload them as a single file.
Click Next.
In the Configuration Preview step, carefully review all the details you have provided.
If the configuration is correct, click Submit.
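As a quick sanity check before pasting krb5.conf content in the Kerberos Configuration step, you can confirm that it contains the expected section headers (such as [libdefaults] and [realms]). This small Python sketch is illustrative and not part of SynxDB Cloud:

```python
import re

def krb5_sections(text):
    """Return the [section] names that appear in a krb5.conf body."""
    return re.findall(r"^\s*\[(\w+)\]", text, flags=re.MULTILINE)

sample = """
[libdefaults]
 default_realm = EXAMPLE.COM

[realms]
 EXAMPLE.COM = {
  kdc = kdc.example.com
 }
"""
print(krb5_sections(sample))  # ['libdefaults', 'realms']
```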
Configure an Iceberg OSS connection
Configuring an Iceberg OSS connection allows SynxDB Cloud to access Iceberg tables on S3-compatible object storage. This process is completed through a three-step wizard where you provide basic information, specify the S3 connection details, and then review your configuration. This GUI-based method simplifies the setup process, replacing the need to manually create and manage s3.conf files on the cluster.
On the Iceberg OSS tab, click + Create.
In the Basic Information step, provide the following details, then click Next:
Organization: Select the organization for this configuration.
Account: Select the account for this configuration.
Service Configuration Template: Select a service configuration template.
In the Iceberg OSS Configuration step, provide the S3 connection details using one of the following methods:
Manual Input: Paste the S3 configuration directly into the text field.
Here is an example configuration:

```yaml
s3_cluster:
  # The following configuration options are required.
  fs.s3a.endpoint: http://127.0.0.1:8000
  fs.s3a.access.key: admin
  fs.s3a.secret.key: password
  fs.s3a.aws.credentials.provider: org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider
  # The following configuration options are optional and have default values.
  fs.s3a.path.style.access: true
  fs.defaultFS: s3a://
  fs.s3a.impl: org.apache.hadoop.fs.s3a.S3AFileSystem
```
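As the comments in the example indicate, the first four options are required. Before submitting, you can diff your option names against that required set; the following Python sketch is illustrative and not part of SynxDB Cloud:

```python
# Required options, per the example above.
REQUIRED_S3A_KEYS = {
    "fs.s3a.endpoint",
    "fs.s3a.access.key",
    "fs.s3a.secret.key",
    "fs.s3a.aws.credentials.provider",
}

def missing_s3a_keys(options):
    """Return the required S3A options absent from a parsed configuration dict."""
    return sorted(REQUIRED_S3A_KEYS - options.keys())

print(missing_s3a_keys({"fs.s3a.endpoint": "http://127.0.0.1:8000"}))
# ['fs.s3a.access.key', 'fs.s3a.aws.credentials.provider', 'fs.s3a.secret.key']
```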
The following table describes the available configuration options.

| Option name | Description |
| --- | --- |
| fs.s3a.endpoint | The endpoint URL for the S3-compatible object storage service. |
| fs.s3a.access.key | The access key for authenticating with the S3 service. |
| fs.s3a.secret.key | The secret key for authenticating with the S3 service. |
| fs.s3a.path.style.access | (Optional) Set to true to use path-style access to buckets, which is common for private cloud S3 implementations. |
| fs.defaultFS | (Optional) The default file system name. For S3, this should be set to s3a://. |
| fs.s3a.impl | (Optional) The Java class that implements the S3A file system client. |
File Upload: Click or drag your S3 configuration file (for example, s3.conf) to the upload area. Supported formats are .conf and .txt.
After providing the configuration, click Next.
In the Configuration Preview step, carefully review all the details you have entered.
If everything is correct, click Submit to create the Iceberg OSS connection configuration.
Configure Hive Metadata Auto Sync
Warning
Hive Metadata Auto Sync is an experimental feature in the current version. Do not use it in production environments.
The Hive Meta Sync tab on the Database Config page is the console-side entry point for Hive Metadata Auto Sync. Because the feature also needs preparation on the Hive cluster and inside the target SynxDB Cloud database, the full setup lives in a separate document. See Configure Hive Metadata Auto Sync for the end-to-end procedure, including how to install the listener plugin on Hive, prepare the target database, fill in the Meta Sync YAML, and verify the synchronization.