Deploy on Single Node

This document introduces how to manually deploy SynxDB on a single physical or virtual machine.

SynxDB is not fully compatible with PostgreSQL, and some features and syntax are SynxDB-specific. If your business already relies on SynxDB and you want to use SynxDB-specific syntax and features on a single node to avoid compatibility issues with PostgreSQL, you can consider deploying SynxDB without segments.

SynxDB provides the single-computing-node deployment mode. This mode runs under the utility gp_role, with only one coordinator (QD) node and one coordinator standby node, without a segment node or data distribution. You can directly connect to the coordinator and run queries as if you were connecting to a regular multi-node cluster. Note that some SQL statements are not effective in this mode because data distribution does not exist, and some SQL statements are not supported. See User-behavior changes for details.

How to deploy

Step 1. Prepare to deploy

Log in to the host as the root user, and modify the system settings in the order of the following sections.

Add gpadmin admin user

Follow the example below to create the user group and user gpadmin, set both the group ID and user ID to 520, and create and specify the home directory /home/gpadmin/.

groupadd -g 520 gpadmin  # Adds user group gpadmin.
useradd -g 520 -u 520 -m -d /home/gpadmin/ -s /bin/bash gpadmin  # Adds username gpadmin and creates the home directory.
passwd gpadmin  # Sets a password for gpadmin. Follow the prompts to input the password after executing.

Disable SELinux and firewall software

Disable SELinux by setting the SELINUX parameter to disabled in the /etc/selinux/config file:

SELINUX=disabled

Run systemctl status firewalld to view the firewall status. If the firewall is on, disable it using the following commands:

systemctl stop firewalld.service
systemctl disable firewalld.service

Set system parameters

Set the parameters in the /etc/sysctl.conf file, and then run the sysctl -p command to reload the configuration.

The sysctl.conf parameters listed in this section are intended to improve performance, tunability, and consistency across environments. Adjust these configurations according to your specific situation. Details and recommended settings for some parameters are provided below. For more best practices on system configuration, see System Configuration Best Practices.

# kernel.shmall = _PHYS_PAGES / 2 # See shared memory settings
kernel.shmall = 197951838
# kernel.shmmax = kernel.shmall * PAGE_SIZE # See shared memory settings
kernel.shmmax = 810810728448
kernel.shmmni = 32768
vm.overcommit_memory = 2 # See shared memory settings
vm.overcommit_ratio = 95 # See shared memory settings

net.ipv4.ip_local_port_range = 10000 65535 # See port settings
kernel.sem = 32000 1048576000 1000 32768
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 32768
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ipfrag_high_thresh = 41943040
net.ipv4.ipfrag_low_thresh = 31457280
net.ipv4.ipfrag_time = 60
net.core.netdev_max_backlog = 10000
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
vm.swappiness = 1
vm.zone_reclaim_mode = 0
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100

vm.dirty_background_bytes = 1610612736 # See system memory settings
vm.dirty_background_ratio = 0 # See system memory settings
vm.dirty_ratio = 0 # See system memory settings
vm.dirty_bytes = 4294967296 # See system memory settings

Shared memory settings

In the /etc/sysctl.conf configuration file:

  • kernel.shmall represents the total amount of available shared memory, in pages. kernel.shmmax represents the maximum size of a single shared memory segment, in bytes. You can define these values using the operating system’s _PHYS_PAGES and PAGE_SIZE parameters:

    kernel.shmall = ( _PHYS_PAGES / 2)
    kernel.shmmax = ( _PHYS_PAGES / 2) * PAGE_SIZE
    

    To get the values of these two operating system parameters, you can use getconf, as shown below:

    $ echo $(expr $(getconf _PHYS_PAGES) / 2)
    $ echo $(expr $(getconf _PHYS_PAGES) / 2 \* $(getconf PAGE_SIZE))
    
  • vm.overcommit_memory is a Linux kernel parameter that controls how the system handles memory overcommit. Setting vm.overcommit_memory to 2 enables strict accounting: the kernel refuses memory allocations once the total committed memory exceeds swap space plus vm.overcommit_ratio percent of physical RAM.

  • vm.overcommit_ratio is a kernel parameter that represents the percentage of physical RAM that can be committed to application processes when strict overcommit accounting is enabled. The default value on CentOS is 50. The formula to calculate vm.overcommit_ratio is as follows (a worked example follows this list):

    vm.overcommit_ratio = (RAM - 0.026 * gp_vmem) / RAM
    

    The method to calculate gp_vmem is as follows:

    # If the system memory is less than 256 GB, use the following formula:
    gp_vmem = ((SWAP + RAM) - (7.5GB + 0.05 * RAM)) / 1.7
    
    # If the system memory is greater than or equal to 256 GB, use the following formula:
    gp_vmem = ((SWAP + RAM) - (7.5GB + 0.05 * RAM)) / 1.17
    
    # In the formulas above, SWAP represents the swap space on the host, in GB.
    # RAM represents the installed memory on the host, in GB.
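
As a worked example of the formulas above, the following is a minimal sketch for a hypothetical host with 256 GB of RAM and 32 GB of swap. The values and variable names are illustrative only; substitute the figures for your own machine:

# Hypothetical host: RAM = 256 GB, SWAP = 32 GB (adjust to your environment).
RAM=256; SWAP=32
# RAM >= 256 GB, so the divisor is 1.17:
gp_vmem=$(echo "(($SWAP + $RAM) - (7.5 + 0.05 * $RAM)) / 1.17" | bc -l)   # about 228.8 GB
# Express the ratio from the formula above as a percentage for vm.overcommit_ratio:
ratio=$(echo "($RAM - 0.026 * $gp_vmem) / $RAM * 100" | bc -l)            # about 97.7; round down
echo "gp_vmem = ${gp_vmem} GB, vm.overcommit_ratio = ${ratio}"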
    
IP fragmentation settings

When SynxDB uses the UDP protocol for the interconnect (that is, UDPIFC as the interconnect type), the network interface card (NIC) handles IP packet fragmentation and reassembly. If a UDP message exceeds the network maximum transmission unit (MTU), the IP layer fragments the message.

To address this, the following optimizations are recommended—especially on ARM-based servers. Increasing the memory buffers for fragment reassembly can greatly reduce interconnect-related errors.

  • net.ipv4.ipfrag_high_thresh: Sets the upper threshold (in bytes) for memory allocated to IP fragment reassembly. Once the upper threshold is reached, additional fragments are dropped until memory usage falls to the lower threshold. Increasing this value allows the system to handle more IP fragment reassembly requests without dropping fragments due to insufficient memory.

  • net.ipv4.ipfrag_low_thresh: Sets the lower threshold (in bytes) for memory allocated to IP fragment reassembly. Increasing this value helps ensure there is enough space to receive and reassemble new IP fragments even when memory usage is relatively low.

  • net.ipv4.ipfrag_time: A kernel parameter that controls how long, in seconds, IP fragments are kept for reassembly. The default value is 30 seconds.

For systems with more than 16 GB of memory, the following starting values are recommended:

net.ipv4.ipfrag_high_thresh = 536870912
net.ipv4.ipfrag_low_thresh = 429496730
net.ipv4.ipfrag_time = 60

Note

  • On systems with more than 16 GB of memory, it is recommended to set the initial value of net.ipv4.ipfrag_high_thresh to 536870912 (512 MB). If interconnect errors persist, increase this value. This parameter can be raised up to 5% of available physical memory.

  • It is recommended to set net.ipv4.ipfrag_low_thresh to approximately 80% of net.ipv4.ipfrag_high_thresh; for example, an initial value of 429496730.
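
As a rough sketch, these bounds can be derived from the installed physical memory as follows (the variable name is illustrative only):

# Upper bound for net.ipv4.ipfrag_high_thresh: 5% of physical memory, in bytes.
mem_bytes=$(awk '/MemTotal/ {print $2 * 1024}' /proc/meminfo)
echo "ipfrag_high_thresh can be raised up to $(( mem_bytes / 20 )) bytes"
# net.ipv4.ipfrag_low_thresh should then stay at roughly 80% of ipfrag_high_thresh.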

System memory

  • If the server memory exceeds 64 GB, the following parameters are recommended in the /etc/sysctl.conf configuration file:

    vm.dirty_background_ratio = 0
    vm.dirty_ratio = 0
    vm.dirty_background_bytes = 1610612736 # 1.5 GB
    vm.dirty_bytes = 4294967296 # 4 GB
    
  • If the server memory is less than 64 GB, you do not need to set vm.dirty_background_bytes or vm.dirty_bytes. It is recommended to set the following parameters in the /etc/sysctl.conf configuration file:

    vm.dirty_background_ratio = 3
    vm.dirty_ratio = 10
    
  • To deal with emergency situations when the system is under memory pressure, it is recommended to add the vm.min_free_kbytes parameter to the /etc/sysctl.conf configuration file to control the amount of available memory reserved by the system. It is recommended to set vm.min_free_kbytes to 3% of the system’s physical memory, with the following command:

    awk 'BEGIN {OFMT = "%.0f";} /MemTotal/ {print "vm.min_free_kbytes =", $2 * .03;}' /proc/meminfo >> /etc/sysctl.conf
    
  • It is not recommended to set vm.min_free_kbytes to more than 5% of the system’s physical memory.
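
As a quick sanity check, assuming the awk command above has been run and sysctl -p applied, you can compare the active value against the 3% recommendation and the 5% ceiling:

awk '/MemTotal/ {printf "3%% of RAM: %.0f kB, 5%% ceiling: %.0f kB\n", $2 * 0.03, $2 * 0.05}' /proc/meminfo
sysctl vm.min_free_kbytes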

Resource limit settings

Edit the /etc/security/limits.conf file and add the following content, which sets the soft and hard limits on the number of open files (nofile) and user processes (nproc).

* soft nofile 524288
* hard nofile 524288
* soft nproc 131072
* hard nproc 131072
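
These limits take effect for new login sessions. To verify them, open a fresh shell as the gpadmin user and check the current limits:

ulimit -n   # open file limit; expected 524288
ulimit -u   # user process limit; expected 131072
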
CORE DUMP settings
  1. Add the following parameter to the /etc/sysctl.conf configuration file:

    kernel.core_pattern=/var/core/core.%h.%t
    
  2. Run the following command to make the configuration effective:

    sysctl -p
    
  3. Add the following parameter to /etc/security/limits.conf:

    * soft core unlimited
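
The kernel.core_pattern value set in step 1 writes core files under /var/core, so that directory must exist and be writable before any core dump occurs. A minimal sketch (the permissions shown are illustrative; choose a mode appropriate for your site):

mkdir -p /var/core
chmod 777 /var/core          # illustrative only; tighten as needed
sysctl kernel.core_pattern   # verify the value after running sysctl -p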
    
Set mount options for the XFS file system

XFS is the file system for the data directory of SynxDB. XFS has the following mount options:

rw,nodev,noatime,inode64

You can set up the XFS mount in the /etc/fstab file. See the following example commands, and adjust the device and file path to your actual situation:

mkdir -p /data0/
mkfs.xfs -f /dev/vdc
echo "/dev/vdc /data0 xfs rw,nodev,noatime,nobarrier,inode64 0 0"  /etc/fstab
mount /data0
chown -R gpadmin:gpadmin /data0/

Run the following command to check whether the mounting is successful:

df -h

Blockdev value

The read-ahead (blockdev) value for each disk device file should be 16384, measured in 512-byte sectors. To verify the blockdev value of a disk device, you can use the following command:

sudo /sbin/blockdev --getra <devname>

For example, to verify the blockdev value of the hard disk of the example server:

sudo /sbin/blockdev --getra /dev/vdc

To modify the blockdev value of a device file, you can use the following command:

sudo /sbin/blockdev --setra <value> <devname>

For example, to modify the blockdev value of the hard disk of the example server:

sudo /sbin/blockdev --setra 16384 /dev/vdc
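
Note that blockdev --setra does not persist across reboots. One common way to reapply the setting automatically is a udev rule; the sketch below assumes the example device vdc and a hypothetical rule file name:

# /etc/udev/rules.d/90-readahead.rules (hypothetical file name)
ACTION=="add|change", KERNEL=="vdc", RUN+="/sbin/blockdev --setra 16384 /dev/%k"
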
I/O scheduling policy settings for disks

The recommended I/O scheduling policies for SynxDB, by storage device type and operating system, are as follows:

Storage device type    OS        Recommended scheduling policy
NVMe                   RHEL 8    none
NVMe                   Ubuntu    none
SSD                    RHEL 8    none
SSD                    Ubuntu    none
Other                  RHEL 8    mq-deadline
Other                  Ubuntu    mq-deadline

Refer to the following command to modify the scheduling policy. Note that this command is only a temporary modification, and the modification will become invalid after the server is restarted.

echo <schedulername> > /sys/block/<devname>/queue/scheduler

For example, to temporarily modify the disk I/O scheduling policy of the example server:

echo mq-deadline > /sys/block/vdc/queue/scheduler

To permanently modify the scheduling policy, use the system utility grubby. The modification takes effect after you restart the server. The sample command is as follows:

grubby --update-kernel=ALL --args="elevator=deadline"

You can view the kernel parameter settings by using the following command:

grubby --info=ALL

Disable Transparent Huge Pages (THP)

You need to disable Transparent Huge Pages (THP), because it reduces SynxDB performance. The command is as follows:

grubby --update-kernel=ALL --args="transparent_hugepage=never"

Check the status of THP:

cat /sys/kernel/mm/*transparent_hugepage/enabled
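
The grubby change takes effect after the next reboot. Once THP is disabled, the output shows never selected in brackets, for example:

always madvise [never]
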
Disable IPC object deletion

Disable IPC object deletion by setting the value of RemoveIPC to no. You can set this parameter in the /etc/systemd/logind.conf file.

RemoveIPC=no

After changing the setting, run the following command to restart the systemd-logind service and make it take effect:

service systemd-logind restart

SSH connection threshold

To set the SSH connection threshold, you need to modify the MaxStartups and MaxSessions parameters in the /etc/ssh/sshd_config configuration file. Either of the following forms is acceptable:

# Form 1
MaxStartups 200
MaxSessions 200

# Form 2
MaxStartups 10:30:200
MaxSessions 200

Run the following command to restart the sshd service to make the settings take effect:

service sshd restart
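
To confirm that the new values are active, you can print the effective sshd configuration (run as root):

sshd -T | grep -iE 'maxstartups|maxsessions'
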
Clock synchronization

SynxDB requires clock synchronization to be configured for all hosts, and the clock synchronization service should start when a host starts. You can choose one of the following synchronization methods:

  • Use the coordinator node’s time as the source, and have the other hosts synchronize their clocks with the coordinator host.

  • Synchronize clocks using an external clock source.

The example in this document uses an external clock source for synchronization, that is, adding the following configuration to the /etc/chrony.conf configuration file:

# Use public servers from the pool.ntp.org project
# Please consider joining the pool (http://www.pool.ntp.org/join.html)
server 0.centos.pool.ntp.org iburst

After setting, you can run the following command to check the clock synchronization status:

systemctl status chronyd
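
You can also verify the configured time sources and the current offset with the chronyc utility:

chronyc sources -v   # lists the time sources and their reachability
chronyc tracking     # shows the current offset from the selected source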

Step 2: Install SynxDB via RPM package

  1. Download the SynxDB RPM package to the gpadmin home directory /home/gpadmin/:

    wget -P /home/gpadmin <download address>
    
  2. Install the RPM package in the /home/gpadmin directory.

    When running the following command, you need to replace <RPM package path> with the actual RPM package path, and execute it as the root user. During installation, the default installation directory /usr/local/synxdb/ will be automatically created.

    cd /home/gpadmin
    yum install <RPM package path>
    
  3. Grant the gpadmin user permission for the installation directory:

    chown -R gpadmin:gpadmin /usr/local
    chown -R gpadmin:gpadmin /usr/local/synxdb*
    
  4. Configure local SSH login for the node. As the gpadmin user:

    ssh-keygen
    ssh-copy-id localhost
    ssh `hostname` # Ensure the local SSH login works properly
    

Step 3: Deploy SynxDB with a single computing node

Use the scripting tool gpdemo to quickly deploy SynxDB. gpdemo is included in the RPM package and is installed in the $GPHOME/bin directory along with the management scripts (gpinitsystem, gpstart, gpstop, and so on). It supports quickly deploying SynxDB with a single computing node. For more details about this tool, refer to gpdemo.

In Set mount options for the XFS file system above, the XFS file system’s data directory is mounted at /data0. The following commands deploy a single-computing-node cluster in this data directory:

cd /data0
NUM_PRIMARY_MIRROR_PAIRS=0 gpdemo  # Setting 0 segment pairs deploys the single-computing-node mode

When gpdemo is running, a warning is output: [WARNING]:-SinglenodeMode has been enabled, no segment will be created., which indicates that SynxDB is being deployed in the single-computing-node mode.

Common issues

How to check the deployment mode of a cluster

Perform the following steps to confirm the deployment mode of the current SynxDB cluster:

  1. Connect to the coordinator node.

  2. Execute SHOW gp_role; to view the operating mode of the cluster.

    • If the result returns utility, it indicates that the cluster is in Utility mode, which is the maintenance mode where only the coordinator node is available.

      At this point, continue to run SHOW gp_internal_is_singlenode; to see whether the cluster is in the single-computing-node mode.

      • If the result returns on, it indicates that the current cluster is in the single-computing-node mode.

      • If the result returns off, it indicates that the current cluster is in regular utility maintenance mode.

    • If the result returns dispatch, it indicates that the current cluster is a regular cluster containing segment nodes. You can further confirm the number of segments, their status, ports, data directories, and other information by running SELECT * FROM gp_segment_configuration;.
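
For example, you can run these checks from a shell on the coordinator host as follows. The database name postgres is an assumption; connect to any database you have access to:

psql -d postgres -c "SHOW gp_role;"
psql -d postgres -c "SHOW gp_internal_is_singlenode;"
psql -d postgres -c "SELECT * FROM gp_segment_configuration;"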

Where is the data directory

gpdemo automatically creates a data directory in the current path ($PWD). For the single-computing-node deployment:

  • The default directory of the coordinator is ./datadirs/singlenodedir.

  • The default directory of the coordinator standby node is ./datadirs/standby.

How it works

When you deploy SynxDB in the single-computing-node mode, the deployment script gpdemo writes gp_internal_is_singlenode = true to the configuration file postgresql.conf and starts a coordinator and a coordinator standby node with the gp_role = utility parameter setting. All data is written locally, without segments or data distribution.
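
A quick way to confirm this on disk, assuming the default gpdemo data directory described above, is to check the coordinator’s postgresql.conf:

grep gp_internal_is_singlenode ./datadirs/singlenodedir/postgresql.conf
# Expected: gp_internal_is_singlenode = true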

User-behavior changes

In the single-computing-node mode, the product behavior of SynxDB has the following changes. You should pay attention to these changes before performing related operations:

  • When you execute CREATE TABLE to create a table, the DISTRIBUTED BY clause no longer takes effect, and a warning is output (see the example after this list): WARNING: DISTRIBUTED BY clause has no effect in singlenode mode.

  • The SCATTER BY clause of the SELECT statement is no longer effective. A warning is output: WARNING: SCATTER BY clause has no effect in singlenode mode.

  • Other statements that are not supported (for example, ALTER TABLE SET DISTRIBUTED BY) are rejected with an error.

  • The lock level of UPDATE and DELETE statements will be reduced from ExclusiveLock to RowExclusiveLock to provide better concurrency performance, because there is only a single node without global transactions or global deadlocks. This behavior is consistent with PostgreSQL.
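
For example, the following illustrative session (the table name t_demo and the database name postgres are assumptions) shows the warning described in the list above:

psql -d postgres -c "CREATE TABLE t_demo (id int) DISTRIBUTED BY (id);"
# Expected output includes:
# WARNING:  DISTRIBUTED BY clause has no effect in singlenode mode
# CREATE TABLE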