Yum Deployment

You can use the yum tool to quickly deploy and start a CubeFS cluster on CentOS 7 and later.

Get Software

The RPM dependencies of this tool can be installed with the following command:

Note

The cluster is managed through Ansible, so make sure Ansible is installed first. Ansible can be installed with: pip3 install ansible

# x86 version
$ yum install https://cubefs-rs.heytapdownload.com/rpm/3.3.2/cfs-install-3.3.2-el7.x86_64.rpm
# arm version
$ yum install https://cubefs-rs.heytapdownload.com/rpm/3.3.2/cfs-install-3.3.2-el7.aarch64.rpm
$ cd /cfs/install
$ tree -L 3
 .
 ├── install_cfs.yml
 ├── install.sh
 ├── iplist
 ├── src
 └── template
     ├── client.json.j2
     ├── create_vol.sh.j2
     ├── datanode.json.j2
     ├── grafana
     │   ├── grafana.ini
     │   ├── init.sh
     │   └── provisioning
     ├── master.json.j2
     ├── metanode.json.j2
     └── objectnode.json.j2

Note

The arm version deployment requires glibc version 2.32 or above.
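To confirm the glibc requirement is met on an arm machine, you can print the installed version before deploying (a quick sanity check, not part of the original install steps):

```shell
# Print the installed glibc version; the arm build requires 2.32 or above.
ldd --version | head -n 1
```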

Configuration Instructions

You can modify the parameters of the CubeFS cluster in the iplist file according to the actual environment.

  • The master, datanode, metanode, objectnode, monitor, and client sections contain the IP addresses of each module's members.
  • The cfs:vars section defines the SSH login information for all nodes; the login name and password must be the same on every node in the cluster, set up in advance.

Master Config

Defines the startup parameters of each Master node.

Parameter | Type | Description | Required
--------- | ---- | ----------- | --------
master_clusterName | string | Cluster name | Yes
master_listen | string | Port number for the HTTP service to listen on | Yes
master_prof | string | Port number for golang pprof | Yes
master_logDir | string | Directory for storing log files | Yes
master_logLevel | string | Log level, default is info | No
master_retainLogs | string | How many raft logs to keep | Yes
master_walDir | string | Directory for storing raft wal logs | Yes
master_storeDir | string | Directory for storing RocksDB data. This directory must exist; if it does not, the service cannot start. | Yes
master_exporterPort | int | Port for Prometheus to obtain monitoring data | No
master_metaNodeReservedMem | string | Reserved memory size for metadata nodes. If the remaining memory is less than this value, the MetaNode becomes read-only. Unit: bytes, default: 1073741824 | No
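For illustration, a master section of iplist might look like the following. All port numbers and paths below are hypothetical examples, not defaults from the CubeFS documentation; adjust them to your environment.

```ini
#master config
master_clusterName = "cubefs01"
master_listen = "17010"
master_prof = "17020"
master_logDir = "/cfs/master/log"
master_logLevel = "info"
master_retainLogs = "20000"
master_walDir = "/cfs/master/data/wal"
master_storeDir = "/cfs/master/data/store"
master_exporterPort = 9500
master_metaNodeReservedMem = "1073741824"
```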

For more configuration information, please refer to Master Configuration Instructions.

DataNode Config

Defines the startup parameters of each DataNode.

Parameter | Type | Description | Required
--------- | ---- | ----------- | --------
datanode_listen | string | Port for DataNode to start TCP listening as a server | Yes
datanode_prof | string | Port used by DataNode to provide the HTTP interface | Yes
datanode_logDir | string | Path to store logs | Yes
datanode_logLevel | string | Log level, default is info | No
datanode_raftHeartbeat | string | Port used by raft to send heartbeat messages between nodes | Yes
datanode_raftReplica | string | Port used by raft to send log messages | Yes
datanode_raftDir | string | Path to store raft debugging logs, default is the binary's startup path | No
datanode_exporterPort | string | Port for the monitoring system to collect data | No
datanode_disks | string array | Format: PATH:RETAIN. PATH is the disk mount path; RETAIN is the minimum reserved space under that path, in bytes. The disk is considered full when its remaining space falls below this value. (Recommended: 20 GB to 50 GB) | Yes
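For illustration, a datanode section might look like the following; the ports and paths are hypothetical examples, and the datanode_disks value follows the PATH:RETAIN format described above with a 20 GB reserve.

```ini
#datanode config
datanode_listen = "17310"
datanode_prof = "17320"
datanode_logDir = "/cfs/datanode/log"
datanode_logLevel = "info"
datanode_raftHeartbeat = "17330"
datanode_raftReplica = "17340"
datanode_exporterPort = "9502"
datanode_disks = '"/data0:21474836480"'
```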

For more configuration information, please refer to DataNode Configuration Instructions.

MetaNode Config

Defines the startup parameters of the MetaNode.

Parameter | Type | Description | Required
--------- | ---- | ----------- | --------
metanode_listen | string | Port for listening and accepting requests | Yes
metanode_prof | string | Port for the debugging and administrator API interface | Yes
metanode_logLevel | string | Log level, default is info | No
metanode_metadataDir | string | Directory for storing metadata snapshots | Yes
metanode_logDir | string | Directory for storing logs | Yes
metanode_raftDir | string | Directory for storing raft wal logs | Yes
metanode_raftHeartbeatPort | string | Port for raft heartbeat communication | Yes
metanode_raftReplicaPort | string | Port for raft data transmission | Yes
metanode_exporterPort | string | Port for Prometheus to obtain monitoring data | No
metanode_totalMem | string | Maximum available memory. This value must be higher than metaNodeReservedMem in the master configuration. Unit: bytes. | Yes
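For illustration, a metanode section might look like the following; ports and paths are hypothetical examples, and metanode_totalMem reuses the value shown in the iplist sample later in this page.

```ini
#metanode config
metanode_listen = "17210"
metanode_prof = "17220"
metanode_logLevel = "info"
metanode_metadataDir = "/cfs/metanode/data/meta"
metanode_logDir = "/cfs/metanode/log"
metanode_raftDir = "/cfs/metanode/data/raft"
metanode_raftHeartbeatPort = "17230"
metanode_raftReplicaPort = "17240"
metanode_exporterPort = "9501"
metanode_totalMem = "28589934592"
```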

For more configuration information, please refer to MetaNode Configuration Instructions.

ObjectNode Config

Defines the startup parameters of the ObjectNode.

Parameter | Type | Description | Required
--------- | ---- | ----------- | --------
objectnode_listen | string | IP address and port number for the HTTP service to listen on | Yes
objectnode_domains | string array | Domain names for S3-compatible interfaces to support DNS-style access to resources. Format: DOMAIN | No
objectnode_logDir | string | Path to store logs | Yes
objectnode_logLevel | string | Log level, default is error | No
objectnode_exporterPort | string | Port for Prometheus to obtain monitoring data | No
objectnode_enableHTTPS | string | Whether to support the HTTPS protocol | Yes
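For illustration, an objectnode section might look like the following; the port and the domain name are hypothetical examples, not values from the CubeFS documentation.

```ini
#objectnode config
objectnode_listen = "17410"
objectnode_domains = '"object.cfs.example.com"'
objectnode_logDir = "/cfs/objectnode/log"
objectnode_logLevel = "error"
objectnode_exporterPort = "9503"
objectnode_enableHTTPS = "false"
```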

For more configuration information, please refer to ObjectNode Configuration Instructions.

Client Config

Defines the startup parameters of the FUSE client.

Parameter | Type | Description | Required
--------- | ---- | ----------- | --------
client_mountPoint | string | Mount point | Yes
client_volName | string | Volume name | No
client_owner | string | Volume owner | Yes
client_SizeGB | string | If the volume does not exist, a volume of this size will be created. Unit: GB. | No
client_logDir | string | Path to store logs | Yes
client_logLevel | string | Log level: debug, info, warn, error; default is info | No
client_exporterPort | string | Port for Prometheus to obtain monitoring data | Yes
client_profPort | string | Port for golang pprof debugging | No
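For illustration, a client section might look like the following; the volume name, owner, and ports are hypothetical examples, and the mount point matches the /cfs/mountpoint path used for verification at the end of this page.

```ini
#client config
client_mountPoint = "/cfs/mountpoint"
client_volName = "ltptest"
client_owner = "ltptest"
client_SizeGB = "30"
client_logDir = "/cfs/client/log"
client_logLevel = "info"
client_exporterPort = "9504"
client_profPort = "17510"
```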

For more configuration information, please refer to Client Configuration Instructions.

[master]
10.196.59.198
10.196.59.199
10.196.59.200
[datanode]
...
[cfs:vars]
ansible_ssh_port=22
ansible_ssh_user=root
ansible_ssh_pass="password"
...
#master config
...
#datanode config
...
datanode_disks =  '"/data0:10737418240","/data1:10737418240"'
...
#metanode config
...
metanode_totalMem = "28589934592"
...
#objectnode config
...

Note

CubeFS supports mixed deployment. If mixed deployment is adopted, modify the port configuration of each module to avoid port conflicts. In addition, the paths specified in datanode_disks must be created manually before the DataNode can start.

Start the Cluster

Use the install.sh script to start the CubeFS cluster. Make sure to start the Master first.

$ bash install.sh -h
Usage: install.sh -r | --role [datanode | metanode | master | objectnode | client | all | createvol ]
$ bash install.sh -r master
$ bash install.sh -r metanode
$ bash install.sh -r datanode
$ bash install.sh -r objectnode

$ bash install.sh -r createvol
$ bash install.sh -r client

After all roles are started, you can log in to the node where the client role is located to verify whether the mount point /cfs/mountpoint has been mounted to the CubeFS file system.
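One quick way to verify the mount (assuming the default mount point /cfs/mountpoint from this guide) is to check /proc/mounts on the client node:

```shell
# Report whether /cfs/mountpoint appears in the kernel's mount table.
grep -qs '/cfs/mountpoint' /proc/mounts && echo "mounted" || echo "not mounted"
```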
