
Understanding MariaDB Galera Cluster Architecture
MariaDB Galera Cluster provides synchronous multi-master replication for high-availability database systems. Unlike traditional master-slave setups, every node can accept both reads and writes simultaneously.
This architecture eliminates single points of failure. When one node fails, applications can fail over to the remaining cluster members, usually without any manual intervention on the database side.
Galera uses certification-based replication. Each transaction must pass cluster-wide validation before committing. This ensures data consistency across all nodes but adds slight latency compared to asynchronous replication.
Prerequisites and Server Requirements
You'll need three Ubuntu 24.04 VPS instances for this MariaDB Galera cluster setup. Two-node clusters risk split-brain scenarios. Three nodes provide proper quorum.
Each server requires at least 2GB RAM and 20GB disk space. Network connectivity between nodes should be stable with low latency.
HostMyCode VPS instances in the same data center work perfectly for this configuration.
Open these ports in your firewall:
- 3306 - MySQL client connections
- 4444 - State Snapshot Transfer (SST)
- 4567 - Galera cluster replication
- 4568 - Incremental State Transfer (IST)
Installing MariaDB on All Cluster Nodes
Start by updating package repositories on each server:
sudo apt update
sudo apt upgrade -y
Install MariaDB server and Galera packages:
sudo apt install mariadb-server mariadb-backup galera-4 -y
Stop the MariaDB service after installation. We'll configure clustering before starting:
sudo systemctl stop mariadb
Verify the installation includes Galera support:
ls /usr/lib/galera/libgalera_smm.so
If the Galera provider library is present, wsrep (Write Set Replication) support is available. Note that plain mysqld --version does not mention wsrep; once the cluster is running, you can confirm support with SHOW GLOBAL STATUS LIKE 'wsrep_provider_name'.
Configuring Galera Cluster Settings
Create the main cluster configuration file on all nodes. Edit /etc/mysql/mariadb.conf.d/60-galera.cnf:
sudo nano /etc/mysql/mariadb.conf.d/60-galera.cnf
For the first node, use this configuration. Replace IP addresses with your actual server IPs:
[galera]
bind-address = 0.0.0.0
default_storage_engine = InnoDB
binlog_format = ROW
innodb_autoinc_lock_mode = 2
innodb_flush_log_at_trx_commit = 0
innodb_buffer_pool_size = 512M
wsrep_on = ON
wsrep_provider = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name = "production_cluster"
wsrep_cluster_address = "gcomm://10.0.1.10,10.0.1.11,10.0.1.12"
wsrep_sst_method = rsync
wsrep_node_address = "10.0.1.10"
wsrep_node_name = "node1"
Adjust the wsrep_node_address and wsrep_node_name values for each server. Node2 uses 10.0.1.11 and node2. Node3 uses 10.0.1.12 and node3.
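The per-node edits above can be scripted instead of made by hand. A minimal sketch using sed, with placeholder values for the node being configured:

```shell
#!/bin/bash
# Placeholder values -- set these to the server's own IP and name.
NODE_IP="10.0.1.11"
NODE_NAME="node2"

# Rewrite the two per-node lines in the shared Galera configuration.
sudo sed -i \
  "s|^wsrep_node_address.*|wsrep_node_address = \"$NODE_IP\"|; s|^wsrep_node_name.*|wsrep_node_name = \"$NODE_NAME\"|" \
  /etc/mysql/mariadb.conf.d/60-galera.cnf
```

This lets you copy one template file to all three servers and stamp in the node-specific values afterwards.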
On a dedicated database server, a common guideline is to give innodb_buffer_pool_size 50-70% of available RAM; the 512M above suits the 2GB minimum spec, so adjust upward on larger servers. Also note that innodb_flush_log_at_trx_commit = 0 trades some single-node crash durability for write speed, which is usually acceptable in Galera because a crashed node resynchronizes from the cluster.
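As a sizing aid, the buffer pool value can be derived from the server's actual memory. A sketch using a 60% target (an assumed middle-ground ratio; Linux-only, since it reads /proc/meminfo):

```shell
# Print a suggested buffer pool setting at ~60% of total RAM,
# rounded down to whole megabytes.
awk '/^MemTotal:/ {printf "innodb_buffer_pool_size = %dM\n", ($2 * 0.6) / 1024}' /proc/meminfo
```

Paste the resulting line into 60-galera.cnf on each node.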
Bootstrapping the First Cluster Node
Initialize the cluster from the first node using the bootstrap command:
sudo galera_new_cluster
This starts MariaDB with special clustering parameters. Check the cluster status:
sudo mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
You should see a cluster size of 1. Now secure the MariaDB installation:
sudo mysql_secure_installation
Create a database user for remote administration and health checks (the rsync SST method itself does not need a database user). Granting ALL on '%' is convenient for a lab setup but should be restricted by host and privilege in production:
sudo mysql -e "CREATE USER 'cluster_user'@'%' IDENTIFIED BY 'SecurePassword123!';"
sudo mysql -e "GRANT ALL PRIVILEGES ON *.* TO 'cluster_user'@'%' WITH GRANT OPTION;"
sudo mysql -e "FLUSH PRIVILEGES;"
Adding Additional Nodes to the Cluster
Start MariaDB on the second and third nodes normally:
sudo systemctl start mariadb
sudo systemctl enable mariadb
These nodes will automatically join the existing cluster. They use the addresses specified in the configuration.
Monitor the join process by checking cluster size from any node:
sudo mysql -e "SHOW STATUS LIKE 'wsrep_%';" | grep -E '(cluster_size|local_state_comment|ready)'
A successful join shows wsrep_local_state_comment as "Synced". It also shows wsrep_ready as "ON".
If nodes fail to join, check the error log:
sudo journalctl -u mariadb -f
If your installation logs to a file instead of the journal, check /var/log/mysql/error.log.
Testing Multi-Master Replication
Create a test database on the first node:
sudo mysql -e "CREATE DATABASE test_cluster;"
sudo mysql -e "USE test_cluster; CREATE TABLE sample (id INT AUTO_INCREMENT PRIMARY KEY, data VARCHAR(50));"
sudo mysql -e "USE test_cluster; INSERT INTO sample (data) VALUES ('Node 1 data');"
Check if the data appears on other nodes:
sudo mysql -e "USE test_cluster; SELECT * FROM sample;"
Insert data from the second node:
sudo mysql -e "USE test_cluster; INSERT INTO sample (data) VALUES ('Node 2 data');"
Verify synchronization across all cluster members. Each node should show both records immediately.
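The verification step above can be run against all three members at once. A sketch that loops over the node IPs, assuming the cluster_user created earlier is allowed to log in over TCP from wherever you run it:

```shell
#!/bin/bash
# Query the same table on every node and print its row count;
# matching counts indicate the writes have replicated.
for host in 10.0.1.10 10.0.1.11 10.0.1.12; do
    count=$(mysql -h "$host" -u cluster_user -p'SecurePassword123!' -N \
        -e "SELECT COUNT(*) FROM test_cluster.sample;")
    echo "$host: $count rows"
done
```

All three lines should report the same count immediately after each insert.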
For comprehensive database management and monitoring, consider HostMyCode managed VPS hosting. It includes automated database maintenance and performance optimization.
Configuring Load Balancing with HAProxy
Install HAProxy on a separate server (if you co-locate it on a cluster node, bind it to a port other than 3306 to avoid clashing with MariaDB):
sudo apt install haproxy -y
Configure HAProxy to distribute database connections. Edit /etc/haproxy/haproxy.cfg:
global
    daemon
    user haproxy
    group haproxy

defaults
    mode tcp
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

listen mysql_cluster
    bind *:3306
    mode tcp
    balance leastconn
    option mysql-check user cluster_user
    server node1 10.0.1.10:3306 check
    server node2 10.0.1.11:3306 check
    server node3 10.0.1.12:3306 check
Note that mysql-check performs a bare login with no password, so the user it names must be able to connect from the HAProxy host without one; either create a dedicated passwordless check user or use option tcp-check instead. Restart HAProxy to apply the configuration:
sudo systemctl restart haproxy
sudo systemctl enable haproxy
Applications can now connect to HAProxy's IP address on port 3306. Failed nodes are automatically removed from rotation.
Monitoring Cluster Health and Performance
Create a monitoring script to check cluster status regularly. Save this as /usr/local/bin/galera_monitor.sh:
#!/bin/bash
echo "=== Galera Cluster Status ==="
mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
mysql -e "SHOW STATUS LIKE 'wsrep_local_state_comment';"
mysql -e "SHOW STATUS LIKE 'wsrep_ready';"
echo "=== Node Status ==="
mysql -e "SHOW STATUS LIKE 'wsrep_cluster_status';"
mysql -e "SHOW STATUS LIKE 'wsrep_flow_control_paused';"
echo "=== Replication Lag ==="
mysql -e "SHOW STATUS LIKE 'wsrep_local_recv_queue_avg';"
Make the script executable and run it:
sudo chmod +x /usr/local/bin/galera_monitor.sh
sudo /usr/local/bin/galera_monitor.sh
Set up automated monitoring by adding this to your crontab:
*/5 * * * * /usr/local/bin/galera_monitor.sh >> /var/log/galera_status.log 2>&1
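The cron job above appends to /var/log/galera_status.log indefinitely. A minimal logrotate policy keeps it in check (retention values here are an assumption; adjust to taste). Save as /etc/logrotate.d/galera_status:

```
/var/log/galera_status.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```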
Handling Node Failures and Recovery
When a node fails, the cluster continues operating with remaining members. Check cluster size to confirm:
mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
To recover a failed node, simply restart MariaDB. It automatically rejoins and synchronizes:
sudo systemctl start mariadb
If the entire cluster shuts down, identify the node with the most recent data. Use grastate.dat:
sudo cat /var/lib/mysql/grastate.dat
Bootstrap from the node showing safe_to_bootstrap: 1 using galera_new_cluster. If every node shows safe_to_bootstrap: 0 with seqno: -1, recover each node's last committed position with sudo -u mysql mariadbd --wsrep-recover, edit grastate.dat to set safe_to_bootstrap: 1 on the node with the highest recovered sequence number, and bootstrap from there.
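Comparing grastate.dat across nodes is easier with a small helper. A sketch that assumes the standard key: value layout Galera writes:

```shell
#!/bin/bash
# Print the recovery-relevant fields from a grastate.dat file.
# Defaults to the standard MariaDB datadir location.
GRASTATE="${1:-/var/lib/mysql/grastate.dat}"

awk -F': *' '/^seqno:/ {print "seqno=" $2}
             /^safe_to_bootstrap:/ {print "safe_to_bootstrap=" $2}' "$GRASTATE"
```

Run it on every node and bootstrap from whichever reports safe_to_bootstrap=1 (or the highest seqno).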
For detailed database troubleshooting techniques, refer to our VPS database performance optimization guide.
Security Hardening for Production Clusters
Encrypt inter-node communication by adding SSL configuration to each node:
[galera]
wsrep_provider_options="socket.ssl_key=/etc/mysql/ssl/server-key.pem;socket.ssl_cert=/etc/mysql/ssl/server-cert.pem;socket.ssl_ca=/etc/mysql/ssl/ca-cert.pem"
Generate SSL certificates for each cluster member. Restrict network access using UFW:
sudo ufw allow from 10.0.1.0/24 to any port 3306
sudo ufw allow from 10.0.1.0/24 to any port 4567
sudo ufw allow from 10.0.1.0/24 to any port 4444
sudo ufw allow from 10.0.1.0/24 to any port 4568
Implement regular security updates and monitoring.
Our VPS server hardening guide covers additional security measures for database servers.
Disable unused features and services to reduce attack surface. Review user privileges regularly and implement proper backup strategies.
Building and maintaining a MariaDB Galera cluster requires reliable infrastructure and ongoing management. HostMyCode managed VPS hosting includes automated database monitoring, security updates, and 24/7 technical support. Our database hosting solutions are optimized for high-availability clustering with low-latency networking between nodes.
Frequently Asked Questions
How many nodes should I use in a MariaDB Galera cluster?
Use an odd number of nodes (3, 5, or 7) to prevent split-brain scenarios. Three nodes provide good balance between availability and complexity for most applications.
What happens if two nodes fail simultaneously?
The cluster becomes non-operational until quorum is restored. With three nodes, losing two means the remaining node cannot accept writes. Plan for geographic distribution or consider five-node clusters for critical applications.
Can I mix different MariaDB versions in the same cluster?
All nodes should run the same major MariaDB version. Mixed versions are tolerated temporarily during a rolling upgrade (upgrade one node at a time), but do not operate a mixed-version cluster long term.
How do I backup a Galera cluster safely?
Use mysqldump or mariadb-backup (installed earlier) from any cluster node; note that Percona XtraBackup does not support recent MariaDB versions. Because replication is synchronous, any node provides a consistent snapshot; for strict point-in-time consistency you can temporarily desynchronize the backup node with wsrep_desync=ON.
What's the performance impact of Galera clustering?
Writes pay roughly one network round trip to the slowest node plus certification overhead, so write latency depends heavily on inter-node latency. Read performance scales with additional nodes. Optimize by keeping nodes on a low-latency network, ideally with a dedicated interface for cluster traffic.