Manage Database Connections

In SynxDB Cloud, you can centrally manage connections to external data sources, such as HDFS, Hive Connector, Kerberos (KDC), and Iceberg OSS, through the Database Config page in the DBaaS Admin Console. This GUI-based configuration method replaces the previous, more cumbersome process of editing configuration files with the kubectl command-line tool, making connection management more intuitive and convenient.

Access the Database Config page

  1. Log in to the SynxDB Cloud DBaaS Admin Console.

  2. In the left navigation pane, click Database Config.

View and manage existing connection configurations

On the Database Config page, you can find, view, and manage all your existing data source connections. The management process is the same for all connection types.

  1. Click the tab for the type of connection you want to manage: HDFS, Hive Connector, KDC, or Iceberg OSS. The list will show all existing configurations for the selected type.

  2. (Optional) To find a specific configuration, use the filter fields at the top of the list:

    1. Select the Organization Name and/or Account Name from the dropdown lists.

    2. Click Query.

    3. To clear the filters, click Reset.

  3. In the configuration list, locate the configuration you want to manage and perform one of the following actions in the Action column:

    • Delete: Permanently removes the configuration.

    • Activate/Deactivate: Toggles the status of the configuration. An Active status means the connection is enabled and can be used by the system.

    • Clone: Creates a copy of the existing configuration. This is useful when you need to create a new configuration that is similar to an existing one.

Create connection configurations

This section describes the step-by-step process to create new connection configurations for HDFS, Hive, Kerberos, and Iceberg OSS.

Configure an HDFS connection

Configuring an HDFS connection is a three-step process where you provide basic information, specify the HDFS plugin details, and then review your configuration.

Note

If your hdfs_namenode_host is a hostname (rather than an IP address), or if the HDFS DataNodes return hostnames during block transfer (typical when dfs.datanode.hostname is set or dfs.client.use.datanode.hostname=true on the HDFS cluster), the SynxDB Cloud cluster must be able to resolve those hostnames. This typically requires an administrator to update the cluster’s CoreDNS configuration to map the hostnames to their corresponding IP addresses.
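If your cluster runs CoreDNS, one way to add such mappings is a hosts block in the CoreDNS Corefile (typically held in the coredns ConfigMap in the kube-system namespace). The following is a minimal sketch; the hostnames and IP addresses are illustrative, and the exact procedure depends on your Kubernetes distribution:

    hosts {
        10.13.9.156 nn1.hadoop.example.com
        10.13.9.157 nn2.hadoop.example.com
        fallthrough
    }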

  1. On the HDFS tab, click + Create.

  2. In the Basic Information step, select the Organization and Account for this configuration, then click Next.

  3. In the HDFS Plugin Configuration step, provide the connection details for your HDFS cluster.

    1. Click + Add HDFS Plugin.

    2. Select the Authentication Method for your HDFS cluster. The UI supports both Simple Authentication and Kerberos Authentication.

    3. Provide the HDFS configuration using one of the following methods:

      • Manual Input: Paste the HDFS configuration directly into the text field. The required parameters change based on the selected authentication method.

        Here is an example for Simple Authentication:

        hdfs-cluster-1:
            hdfs_namenode_host: mycluster
            hdfs_namenode_port: 9000
        

        Here is a comprehensive example for Kerberos Authentication with high availability (HA):

        Note

        • Replace the configuration values with your own. For detailed descriptions of each option, see the tables below.

        • In the configuration, the options under a cluster name must be indented relative to the cluster-name line. For example, in the following example, the options (such as hdfs_namenode_host and hdfs_namenode_port) under hdfs-cluster-1 must be indented.

        hdfs-cluster-1:
            hdfs_namenode_host: mycluster
            hdfs_namenode_port: 9000
            hdfs_auth_method: kerberos
            krb_principal: hdfs/10-13-9-156@EXAMPLE.COM   
            krb_principal_keytab: /etc/kerberos/keytab/hdfs.keytab
            krb_service_principal: hdfs/10-13-9-156@EXAMPLE.COM
            is_ha_supported: true
            hadoop_rpc_protection: authentication
            data_transfer_protocol: true
            dfs.nameservices: mycluster
            dfs.ha.namenodes.mycluster: nn1,nn2
            dfs.namenode.rpc-address.mycluster.nn1: 10.13.9.156:9000
            dfs.namenode.rpc-address.mycluster.nn2: 10.13.9.157:9000
            dfs.client.failover.proxy.provider.mycluster: org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
        

        The following table describes the available configuration options. You can get this information from the hdfs-site.xml and core-site.xml files of the target HDFS cluster.

        General options:

        • hdfs_namenode_host: Configures the host information of HDFS. For example, hdfs://mycluster, where the hdfs:// prefix can be omitted. Default: /.

        • hdfs_namenode_port: The port of the HDFS NameNode RPC service. Set this to match fs.defaultFS in the HDFS cluster’s core-site.xml. Common values are 9000 (Hadoop 2.x default) and 8020 (Hadoop 3.x default). Default: 9000.

        • hdfs_auth_method: Configures the HDFS authentication method. Use simple for regular HDFS, or kerberos for HDFS secured with Kerberos. Default: /.

        • hadoop_rpc_protection: Matches the hadoop.rpc.protection setting in core-site.xml. Can be authentication, integrity, or privacy. Default: /.
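        For reference, hdfs_namenode_host and hdfs_namenode_port usually correspond to the fs.defaultFS entry in the target cluster’s core-site.xml, and hadoop_rpc_protection to hadoop.rpc.protection. Illustrative entries (host, port, and protection level are examples only):

        <property>
          <name>fs.defaultFS</name>
          <value>hdfs://mycluster:9000</value>
        </property>
        <property>
          <name>hadoop.rpc.protection</name>
          <value>authentication</value>
        </property>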

        Kerberos options:

        • krb_principal: The Kerberos principal. Required when hdfs_auth_method is set to kerberos. Default: /.

        • krb_principal_keytab: The location on the cluster where the user-generated keytab is placed. Default: /.

        • krb_service_principal: The service principal for the HDFS service. Required for Kerberos. Default: /.

        • data_transfer_protection: The quality of protection for data transfer. Can be authentication, integrity, or privacy. Default: /.

        • data_transfer_protocol: Set this to true when the HDFS cluster has block data transfer encryption enabled (that is, dfs.encrypt.data.transfer=true in hdfs-site.xml). Default: /.
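        The two data-transfer options mirror settings in the target cluster’s hdfs-site.xml: data_transfer_protection typically corresponds to dfs.data.transfer.protection, and (as noted above) data_transfer_protocol follows dfs.encrypt.data.transfer. Illustrative entries for a cluster with wire encryption enabled:

        <property>
          <name>dfs.data.transfer.protection</name>
          <value>privacy</value>
        </property>
        <property>
          <name>dfs.encrypt.data.transfer</name>
          <value>true</value>
        </property>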

        High availability (HA) options:

        • is_ha_supported: Set to true to enable high availability (HA) support. Default: false.

        If is_ha_supported is set to true, you must also provide the following HA-specific properties. Replace <nameservice> with your actual HDFS nameservice ID.

        • dfs.nameservices: The logical name for the HA name service.

        • dfs.ha.namenodes.<nameservice>: The unique identifiers for each NameNode in the name service (for example, nn1,nn2).

        • dfs.namenode.rpc-address.<nameservice>.<namenode_id>: The fully qualified RPC address on which each NameNode listens.

        • dfs.client.failover.proxy.provider.<nameservice>: The Java class that HDFS clients use to contact the active NameNode.

        When Kerberos Authentication is selected, you also need to upload the corresponding Keytab file. The process is identical to the one described in the Configure a Kerberos connection section.
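        If you still need to generate that keytab, one common way on an MIT Kerberos KDC host is kadmin.local; the principal name and output path below are illustrative:

        # Export the HDFS principal’s key into a keytab file (run on the KDC host).
        kadmin.local -q "ktadd -k /etc/kerberos/keytab/hdfs.keytab hdfs/10-13-9-156@EXAMPLE.COM"
        # List the keytab entries to verify it before uploading.
        klist -kt /etc/kerberos/keytab/hdfs.keytab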

      • File Upload: Upload an HDFS configuration file (for example, gphdfs.conf or hdfs-site.xml). Supported formats are .xml, .conf, and .txt.

    4. (Optional) To add another HDFS cluster to this configuration, click + Add Another Plugin and repeat the steps above.

    5. Click Next.

  4. In the Configuration Preview step, carefully review all the details you have entered.

  5. If everything is correct, click Submit to create the HDFS connection configuration.

Configure a Hive connection

Configuring a Hive connection follows a similar three-step process: providing basic information, specifying the Hive connector details, and then reviewing your configuration before submission.

  1. On the Hive Connector tab, click + Create.

  2. In the Basic Information step, select the Organization and Account for this configuration, then click Next.

  3. In the Hive Connector Configuration step, provide the connection details for your Hive Metastore.

    1. Click + Add Hive Connector.

    2. Select the Authentication Method:

      • Simple Authentication: For Hive clusters without Kerberos.

      • Kerberos Authentication: For Hive clusters secured with Kerberos.

    3. Provide the Hive configuration using one of the following methods:

      • Manual Input: Paste the Hive configuration directly into the text field. The required parameters change based on the selected authentication method.

        Here is an example for Simple Authentication:

        hive-cluster-1:
            uris: thrift://10.13.9.156:9083
            auth_method: simple
        

        Here is an example for Kerberos Authentication with high availability (HA):

        hive-cluster-1:
            uris: thrift://10.13.9.156:9083,thrift://10.13.9.157:9083
            auth_method: kerberos
            krb_service_principal: hive/_HOST@EXAMPLE.COM
            krb_client_principal: hive/10-13-9-156@EXAMPLE.COM
            krb_client_keytab: /etc/kerberos/keytab/hive.keytab
        

        The following table describes the available configuration options. You can typically find this information in the hive-site.xml file of the target Hive cluster.

        • uris: The listening address of the Hive Metastore Service (the HMS hostname). For high availability (HA), you can provide multiple URIs separated by commas. Default: /.

        • auth_method: The authentication method for the Hive Metastore Service: simple or kerberos. Default: simple.

        • krb_service_principal: The service principal required for Kerberos authentication of the Hive Metastore Service. When using the HMS HA feature, set the instance part of the principal to _HOST, for example, hive/_HOST@EXAMPLE.COM.

        • krb_client_principal: The client principal required for Kerberos authentication of the Hive Metastore Service.

        • krb_client_keytab: The keytab file of the client principal required for Kerberos authentication.

        • debug: The debug flag for the Hive Connector: true or false. Default: false.
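        For reference, these options typically correspond to the following hive-site.xml entries on the Hive cluster (values are illustrative):

        <property>
          <name>hive.metastore.uris</name>
          <value>thrift://10.13.9.156:9083</value>
        </property>
        <property>
          <name>hive.metastore.kerberos.principal</name>
          <value>hive/_HOST@EXAMPLE.COM</value>
        </property>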

        If you select Kerberos Authentication, you must also upload the corresponding Keytab file.

      • File Upload: Upload a Hive configuration file (for example, gphive.conf or hive-site.xml). Supported formats are .xml, .conf, and .txt.

    4. Click Next.

  4. In the Configuration Preview step, review all the details you have entered.

  5. If everything is correct, click Submit to create the Hive connection configuration.

Configure a Kerberos connection

Configuring a Kerberos (KDC) connection involves three main steps: providing basic information, supplying the Kerberos configuration details, and reviewing the setup before submission.

Prerequisites

Before you begin, ensure that the SynxDB Cloud cluster can resolve the hostnames of your KDC server and any other Kerberized services (such as HDFS NameNodes or Hive Metastore servers) if you are using hostnames instead of IP addresses in your configuration files. This might require an administrator to update the cluster’s CoreDNS configuration to map the hostnames to their corresponding IP addresses.

Steps to configure the KDC connection

  1. On the KDC tab, click + Create.

  2. In the Basic Information step, select the Organization and Account for this configuration, then click Next.

  3. In the Kerberos Configuration step, you need to provide the krb5.conf content and optionally any Kerberos snippets.

    1. Krb5.conf Configuration: Provide the main Kerberos configuration file content using one of these methods:

      • Manual Input: Paste the content of your krb5.conf file directly into the text field.

        Here is an example of a typical krb5.conf file. You need to replace the placeholder values (for example, <kdc_ip> and <admin_server_ip>) with the actual IP addresses or fully qualified domain names (FQDNs) of your servers.

        [logging]
        default = FILE:/var/log/krb5libs.log
        kdc = FILE:/var/log/krb5kdc.log
        admin_server = FILE:/var/log/kadmind.log
        
        [libdefaults]
        dns_lookup_realm = false
        ticket_lifetime = 24h
        renew_lifetime = 7d
        forwardable = true
        rdns = false
        pkinit_anchors = FILE:/etc/pki/tls/certs/ca-bundle.crt
        default_realm = EXAMPLE.COM
        
        [realms]
        EXAMPLE.COM = {
          kdc = <kdc_ip>
          admin_server = <admin_server_ip>
        }

        [domain_realm]
        .example.com = EXAMPLE.COM
        example.com = EXAMPLE.COM
        
      • File Upload: Upload your krb5.conf file. Supported formats are .conf and .txt.

    2. Kerberos Snippets: (Optional) If you have additional Kerberos configuration snippets, upload them as a single file.
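      For example, a snippet file might carry additional realm mappings that are not in the main krb5.conf; the domain and realm below are illustrative:

      [domain_realm]
      .hadoop.example.com = EXAMPLE.COM
      hadoop.example.com = EXAMPLE.COM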

    3. Click Next.

  4. In the Configuration Preview step, carefully review all the details you have provided.

  5. If the configuration is correct, click Submit.
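To troubleshoot a KDC configuration, you can sanity-check the krb5.conf content and a matching keytab from any machine with the MIT Kerberos client tools installed; the paths and principal below are illustrative:

    # Point the Kerberos client tools at the candidate krb5.conf.
    export KRB5_CONFIG=/path/to/krb5.conf
    # Obtain a ticket with the keytab, then confirm it was granted.
    kinit -kt /etc/kerberos/keytab/hdfs.keytab hdfs/10-13-9-156@EXAMPLE.COM
    klist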

Configure an Iceberg OSS connection

Configuring an Iceberg OSS connection allows SynxDB Cloud to access Iceberg tables on S3-compatible object storage. This process is completed through a three-step wizard where you provide basic information, specify the S3 connection details, and then review your configuration. This GUI-based method simplifies the setup process, replacing the need to manually create and manage s3.conf files on the cluster.

  1. On the Iceberg OSS tab, click + Create.

  2. In the Basic Information step, provide the following details, then click Next:

    • Organization: Select the organization for this configuration.

    • Account: Select the account for this configuration.

    • Service Configuration Template: Select a service configuration template.

  3. In the Iceberg OSS Configuration step, provide the S3 connection details using one of the following methods:

    • Manual Input: Paste the S3 configuration directly into the text field.

      Here is an example configuration:

      s3_cluster:
         # The following configuration options are required.
         fs.s3a.endpoint: http://127.0.0.1:8000
         fs.s3a.access.key: admin
         fs.s3a.secret.key: password
         fs.s3a.aws.credentials.provider: org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider
      
         # The following configuration options are optional and have default values.
         fs.s3a.path.style.access: true
         fs.defaultFS: s3a:// 
         fs.s3a.impl: org.apache.hadoop.fs.s3a.S3AFileSystem
      

      The following table describes the available configuration options.

      • fs.s3a.endpoint: The endpoint URL of the S3-compatible object storage service.

      • fs.s3a.access.key: The access key for authenticating with the S3 service.

      • fs.s3a.secret.key: The secret key for authenticating with the S3 service.

      • fs.s3a.path.style.access: (Optional) Set to true to use path-style access to buckets, which is common for private cloud S3 implementations.

      • fs.defaultFS: (Optional) The default file system name. For S3, set this to s3a://.

      • fs.s3a.impl: (Optional) The Java class that implements the S3A file system client.
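      To confirm the endpoint and credentials before submitting, you can run a quick listing with any S3-compatible client, for example the AWS CLI; the values below match the example above and are illustrative:

      AWS_ACCESS_KEY_ID=admin AWS_SECRET_ACCESS_KEY=password AWS_DEFAULT_REGION=us-east-1 \
        aws --endpoint-url http://127.0.0.1:8000 s3 ls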

    • File Upload: Click or drag your S3 configuration file (for example, s3.conf) to the upload area. Supported formats are .conf and .txt.

    After providing the configuration, click Next.

  4. In the Configuration Preview step, carefully review all the details you have entered.

  5. If everything is correct, click Submit to create the Iceberg OSS connection configuration.

Configure Hive Metadata Auto Sync

Warning

Hive Metadata Auto Sync is an experimental feature in the current version. Do not use it in production environments.

The Hive Meta Sync tab on the Database Config page is the console-side entry point for Hive Metadata Auto Sync. Because the feature also needs preparation on the Hive cluster and inside the target SynxDB Cloud database, the full setup lives in a separate document. See Configure Hive Metadata Auto Sync for the end-to-end procedure, including how to install the listener plugin on Hive, prepare the target database, fill in the Meta Sync YAML, and verify the synchronization.