Introduction from the official site
Gluster is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace.
Put simply, it is an open-source distributed file system, and it can also be accessed over NFS or Samba.
For this lab, node1 acts as the client, while node2, node3, and node4 make up the GlusterFS cluster.
1、Prepare the environment on node2, node3, and node4
[root@node2 ~]# vim prepare_bricks.sh
[root@node2 ~]# cat prepare_bricks.sh
#!/bin/bash
# Prepare bricks
pvcreate /dev/sdb
vgcreate -s 4M vol /dev/sdb
lvcreate -l 100%FREE -T vol/pool
lvcreate -V 10G -T vol/pool -n brick
mkfs.xfs -i size=512 /dev/vol/brick
mkdir -p /data/brick${1}
echo "/dev/vol/brick /data/brick${1} xfs defaults 0 0" >> /etc/fstab
mount /data/brick${1}
# Install package and start service
yum install -y glusterfs-server
systemctl start glusterd
systemctl enable glusterd
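The script takes the brick number as its only argument. The session below copies it to node3 and node4 by hand; the same could be scripted in one loop. A minimal sketch, assuming passwordless SSH from node2 to the other nodes:

[root@node2 ~]# for i in 3 4; do scp prepare_bricks.sh node${i}:/root/ && ssh node${i} sh /root/prepare_bricks.sh ${i}; done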
[root@node2 ~]# sh prepare_bricks.sh 2
...omitted
[root@node2 ~]# df -Th
Filesystem             Type  Size  Used Avail Use% Mounted on
...omitted
/dev/mapper/vol-brick  xfs    10G   33M   10G   1% /data/brick2
[root@node2 ~]# scp prepare_bricks.sh node3:/root
root@node3's password:
prepare_bricks.sh                          100%  418   104.4KB/s   00:00
[root@node2 ~]# scp prepare_bricks.sh node4:/root
root@node4's password:
prepare_bricks.sh                          100%  418    88.8KB/s   00:00
[root@node3 ~]# sh prepare_bricks.sh 3
...omitted
[root@node4 ~]# sh prepare_bricks.sh 4
...omitted

Join the storage pool
[root@node2 ~]# gluster peer probe node3
peer probe: success.
[root@node2 ~]# gluster peer probe node4
peer probe: success.
[root@node2 ~]# gluster pool list
UUID                                  Hostname   State
483b4b06-bcdb-4399-bc1c-a36e7a0e5274  node3      Connected
fa553747-3feb-4422-8dad-b5f61a93aa39  node4      Connected
19d39a4f-4e92-4ff4-a3a2-539d44358dec  localhost  Connected
[root@node2 ~]# gluster peer status
Number of Peers: 2

Hostname: node3
Uuid: 483b4b06-bcdb-4399-bc1c-a36e7a0e5274
State: Peer in Cluster (Connected)

Hostname: node4
Uuid: fa553747-3feb-4422-8dad-b5f61a93aa39
State: Peer in Cluster (Connected)
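Probing only needs to run on one member; the pool view is symmetric. As a quick check, the same command from another node should list node2 by hostname rather than localhost, with all peers Connected:

[root@node3 ~]# gluster pool list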
2、Create volumes

There are five volume types:
1) Distributed
The default type. Each file is stored whole on a single brick, chosen by hashing the file name.
[root@node2 ~]# gluster volume create vol_distributed node2:/data/brick2/distributed node3:/data/brick3/distributed node4:/data/brick4/distributed
volume create: vol_distributed: success: please start the volume to access data
[root@node2 ~]# gluster volume info vol_distributed

Volume Name: vol_distributed
Type: Distribute
Volume ID: ecd70c34-5808-46ee-b813-9ed6f707b1a3
Status: Created
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node2:/data/brick2/distributed
Brick2: node3:/data/brick3/distributed
Brick3: node4:/data/brick4/distributed
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
[root@node2 ~]# gluster volume start vol_distributed
volume start: vol_distributed: success
[root@node2 ~]# gluster volume status
Status of volume: vol_distributed
Gluster process                          TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node2:/data/brick2/distributed     49152     0          Y       1821
Brick node3:/data/brick3/distributed     49152     0          Y       1770
Brick node4:/data/brick4/distributed     49152     0          Y       16476

Task Status of Volume vol_distributed
------------------------------------------------------------------------------
There are no active volume tasks
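To see the hash-based placement in action, create a few files once the volume is mounted (section 3 below) and list each brick; every file lands on exactly one brick. A sketch:

[root@node1 ~]# touch /mnt/distributed/file{1..6}
[root@node2 ~]# ls /data/brick2/distributed/    # some subset of file1..file6
[root@node3 ~]# ls /data/brick3/distributed/    # another subset
[root@node4 ~]# ls /data/brick4/distributed/    # the rest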
2) Replicated

Data is written to every brick in the set simultaneously, keeping the specified number of copies.
[root@node2 ~]# gluster volume create vol_replicated replica 3 node2:/data/brick2/replicated node3:/data/brick3/replicated node4:/data/brick4/replicated
volume create: vol_replicated: success: please start the volume to access data
[root@node2 ~]# gluster volume info vol_replicated

Volume Name: vol_replicated
Type: Replicate
Volume ID: e50727b4-d71b-4dab-b74a-cfd2a0027bb3
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node2:/data/brick2/replicated
Brick2: node3:/data/brick3/replicated
Brick3: node4:/data/brick4/replicated
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
[root@node2 ~]# gluster volume start vol_replicated
volume start: vol_replicated: success
[root@node2 ~]# gluster volume status vol_replicated
Status of volume: vol_replicated
Gluster process                          TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node2:/data/brick2/replicated      49153     0          Y       1873
Brick node3:/data/brick3/replicated      49153     0          Y       1828
Brick node4:/data/brick4/replicated      49153     0          Y       1811
Self-heal Daemon on localhost            N/A       N/A        Y       1894
Self-heal Daemon on node4                N/A       N/A        Y       1832
Self-heal Daemon on node3                N/A       N/A        Y       1849

Task Status of Volume vol_replicated
------------------------------------------------------------------------------
There are no active volume tasks
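The Self-heal Daemon shown in the status output is what re-synchronises a brick after it has been offline. Whether anything is pending can be checked with the standard heal command:

[root@node2 ~]# gluster volume heal vol_replicated info

It lists, per brick, the entries still waiting to be healed; zero entries everywhere means the three copies are in sync.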
3) Dispersed

Similar to RAID 5: files are split into fragments spread across the bricks, with the equivalent of one brick used for parity.
[root@node2 ~]# gluster volume create vol_dispersed disperse 3 redundancy 1 node2:/data/brick2/dispersed node3:/data/brick3/dispersed node4:/data/brick4/dispersed
volume create: vol_dispersed: success: please start the volume to access data
[root@node2 ~]# gluster volume info vol_dispersed

Volume Name: vol_dispersed
Type: Disperse
Volume ID: e3894a96-7823-43c7-8f24-c5b628eb86ed
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: node2:/data/brick2/dispersed
Brick2: node3:/data/brick3/dispersed
Brick3: node4:/data/brick4/dispersed
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
[root@node2 ~]# gluster volume start vol_dispersed
volume start: vol_dispersed: success
[root@node2 ~]# gluster volume status vol_dispersed
Status of volume: vol_dispersed
Gluster process                          TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node2:/data/brick2/dispersed       49154     0          Y       2028
Brick node3:/data/brick3/dispersed       49154     0          Y       1918
Brick node4:/data/brick4/dispersed       49154     0          Y       16630
Self-heal Daemon on localhost            N/A       N/A        Y       1930
Self-heal Daemon on node4                N/A       N/A        Y       16558
Self-heal Daemon on node3                N/A       N/A        Y       1851

Task Status of Volume vol_dispersed
------------------------------------------------------------------------------
There are no active volume tasks
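Usable capacity follows from the geometry: with disperse 3 redundancy 1, only 2 of every 3 fragments carry data, so three 10G bricks yield (3 − 1) × 10G = 20G of usable space while tolerating the loss of any one brick. This matches the 20G reported by df later on, when this volume is mounted over CIFS.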
4) Distributed Replicated

Distributed and replicated at the same time: files are replicated within each replica set and hashed across the sets.
[root@node2 ~]# gluster volume create vol_distributed_replicated replica 3 node2:/data/brick2/distributed_replicated21 node3:/data/brick3/distributed_replicated31 node4:/data/brick4/distributed_replicated41 node2:/data/brick2/distributed_replicated22 node3:/data/brick3/distributed_replicated32 node4:/data/brick4/distributed_replicated42
volume create: vol_distributed_replicated: success: please start the volume to access data
[root@node2 ~]# gluster volume info vol_distributed_replicated

Volume Name: vol_distributed_replicated
Type: Distributed-Replicate
Volume ID: b8049701-9587-49ac-9cb2-1861421125c2
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: node2:/data/brick2/distributed_replicated21
Brick2: node3:/data/brick3/distributed_replicated31
Brick3: node4:/data/brick4/distributed_replicated41
Brick4: node2:/data/brick2/distributed_replicated22
Brick5: node3:/data/brick3/distributed_replicated32
Brick6: node4:/data/brick4/distributed_replicated42
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
[root@node2 ~]# gluster volume start vol_distributed_replicated
volume start: vol_distributed_replicated: success
[root@node2 ~]# gluster volume status vol_distributed_replicated
Status of volume: vol_distributed_replicated
Gluster process                                    TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node2:/data/brick2/distributed_replicated21  49155     0          Y       2277
Brick node3:/data/brick3/distributed_replicated31  49155     0          Y       2166
Brick node4:/data/brick4/distributed_replicated41  49155     0          Y       2141
Brick node2:/data/brick2/distributed_replicated22  49156     0          Y       2297
Brick node3:/data/brick3/distributed_replicated32  49156     0          Y       2186
Brick node4:/data/brick4/distributed_replicated42  49156     0          Y       2161
Self-heal Daemon on localhost                      N/A       N/A        Y       1894
Self-heal Daemon on node3                          N/A       N/A        Y       1849
Self-heal Daemon on node4                          N/A       N/A        Y       1832

Task Status of Volume vol_distributed_replicated
------------------------------------------------------------------------------
There are no active volume tasks
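Brick order matters for this type: each consecutive group of replica (here 3) bricks forms one replica set. Cycling through the hosts as above produces

Replica set 1: node2/...21, node3/...31, node4/...41
Replica set 2: node2/...22, node3/...32, node4/...42

so every file's three copies always sit on three different servers. Listing all of one host's bricks first would instead place copies of the same file on a single machine.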
5) Distributed Dispersed

Distributed and dispersed at the same time: files are hashed across multiple dispersed subvolumes.
[root@node2 ~]# gluster volume create vol_distributed_dispersed disperse-data 2 redundancy 1 \
> node2:/data/brick2/distributed_dispersed21 \
> node3:/data/brick3/distributed_dispersed31 \
> node4:/data/brick4/distributed_dispersed41 \
> node2:/data/brick2/distributed_dispersed22 \
> node3:/data/brick3/distributed_dispersed32 \
> node4:/data/brick4/distributed_dispersed42
volume create: vol_distributed_dispersed: success: please start the volume to access data
[root@node2 ~]# gluster volume info vol_distributed_dispersed

Volume Name: vol_distributed_dispersed
Type: Distributed-Disperse
Volume ID: 797e2e88-61e0-4df4-a308-16c5140b2480
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: node2:/data/brick2/distributed_dispersed21
Brick2: node3:/data/brick3/distributed_dispersed31
Brick3: node4:/data/brick4/distributed_dispersed41
Brick4: node2:/data/brick2/distributed_dispersed22
Brick5: node3:/data/brick3/distributed_dispersed32
Brick6: node4:/data/brick4/distributed_dispersed42
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
[root@node2 ~]# gluster volume start vol_distributed_dispersed
volume start: vol_distributed_dispersed: success
[root@node2 ~]# gluster volume status vol_distributed_dispersed
Status of volume: vol_distributed_dispersed
Gluster process                                    TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node2:/data/brick2/distributed_dispersed21   49157     0          Y       2529
Brick node3:/data/brick3/distributed_dispersed31   49157     0          Y       17071
Brick node4:/data/brick4/distributed_dispersed41   49157     0          Y       17051
Brick node2:/data/brick2/distributed_dispersed22   49158     0          Y       2549
Brick node3:/data/brick3/distributed_dispersed32   49158     0          Y       17091
Brick node4:/data/brick4/distributed_dispersed42   49158     0          Y       17071
Self-heal Daemon on localhost                      N/A       N/A        Y       1894
Self-heal Daemon on node4                          N/A       N/A        Y       1832
Self-heal Daemon on node3                          N/A       N/A        Y       1849

Task Status of Volume vol_distributed_dispersed
------------------------------------------------------------------------------
There are no active volume tasks
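Either distributed flavour can be grown later by adding bricks in multiples of the subvolume size (three here: 2 data + 1 redundancy) and rebalancing. A sketch with hypothetical brick paths:

[root@node2 ~]# gluster volume add-brick vol_distributed_dispersed \
> node2:/data/brick2/distributed_dispersed23 \
> node3:/data/brick3/distributed_dispersed33 \
> node4:/data/brick4/distributed_dispersed43
[root@node2 ~]# gluster volume rebalance vol_distributed_dispersed start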
Check the directories that were created:

[root@node2 ~]# tree /data/brick2/
/data/brick2/
├── dispersed
├── distributed
├── distributed_dispersed21
├── distributed_dispersed22
├── distributed_replicated21
├── distributed_replicated22
└── replicated

7 directories, 0 files

With all the volumes created, take a fresh snapshot of node2, node3, and node4.
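The snapshot here is presumably of the lab machines themselves, for easy rollback. Worth noting: because prepare_bricks.sh placed every brick on a thin-provisioned LV, GlusterFS's own volume snapshots are also usable, which require exactly that thin-LVM layout underneath the bricks. For example:

[root@node2 ~]# gluster snapshot create snap1 vol_replicated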
3、Mount on the client
A volume can be mounted in three ways:
1) Mount via glusterfs
[root@node1 ~]# yum install -y glusterfs glusterfs-fuse
[root@node1 ~]# mkdir /mnt/distributed
[root@node1 ~]# mount -t glusterfs node2:/vol_distributed /mnt/distributed/
[root@node1 ~]# df -Th
Filesystem              Type            Size  Used Avail Use% Mounted on
...omitted
node2:/vol_distributed  fuse.glusterfs   30G  407M   30G   2% /mnt/distributed

To avoid losing access if node2 goes down, list several nodes for the mount:
[root@node1 ~]# vim /etc/fstab
[root@node1 ~]# cat /etc/fstab
...omitted
node2:/vol_distributed,node3:/vol_distributed,node4:/vol_distributed /mnt/distributed glusterfs defaults,_netdev 0 0
[root@node1 ~]# umount /mnt/distributed/
[root@node1 ~]# mount -a
[root@node1 ~]# df -Th
Filesystem              Type            Size  Used Avail Use% Mounted on
...omitted
node2:/vol_distributed  fuse.glusterfs   30G  407M   30G   2% /mnt/distributed
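For a one-off mount, the backup-volfile-servers option gives the same failover as the comma-separated fstab syntax:

[root@node1 ~]# mount -t glusterfs node2:/vol_distributed /mnt/distributed -o backup-volfile-servers=node3:node4

Either way, the extra servers are only consulted to fetch the volume file at mount time; once mounted, the FUSE client talks to all bricks directly, so data-path failover does not depend on node2.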
2) Mount via NFS

Export the volume with NFS-Ganesha:
[root@node2 ~]# yum install -y nfs-ganesha nfs-ganesha-gluster
[root@node2 ~]# vim /etc/ganesha/ganesha.conf
[root@node2 ~]# cat /etc/ganesha/ganesha.conf
EXPORT
{
    Export_Id = 1;
    Path = "/vol_replicated";

    FSAL {
        name = GLUSTER;
        hostname = "node2";
        volume = "vol_replicated";
    }

    Access_type = RW;
    Squash = No_root_squash;
    Disable_ACL = TRUE;
    Pseudo = "/vol_replicated_pseudo";
    Protocols = "3,4";
    Transports = "UDP,TCP";
    SecType = "sys";
}
[root@node2 ~]# systemctl start nfs-ganesha
[root@node2 ~]# systemctl enable nfs-ganesha

Mount on the client:
[root@node1 ~]# yum install -y nfs-utils
[root@node1 ~]# showmount -e node2
Export list for node2:
/vol_replicated (everyone)
[root@node1 ~]# mkdir /mnt/replicated
[root@node1 ~]# mount -t nfs node2:/vol_replicated /mnt/replicated/
[root@node1 ~]# df -Th
Filesystem             Type  Size  Used Avail Use% Mounted on
...omitted
node2:/vol_replicated  nfs    10G  135M  9.9G   2% /mnt/replicated
[root@node1 ~]# echo "Here is node1" > /mnt/replicated/welcome.txt
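Because vol_replicated keeps a full copy on every brick, the file just written can be verified directly on the servers; each of these should print its contents:

[root@node2 ~]# cat /data/brick2/replicated/welcome.txt
[root@node3 ~]# cat /data/brick3/replicated/welcome.txt
[root@node4 ~]# cat /data/brick4/replicated/welcome.txt

Reading bricks directly is fine for inspection, but writes should always go through a client mount.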
3) Mount via Samba

Prepare the server side:
[root@node2 ~]# yum install -y samba samba-vfs-glusterfs
...omitted
[root@node2 ~]# useradd glusteruser
[root@node2 ~]# smbpasswd -a glusteruser
New SMB password:
Retype new SMB password:
Added user glusteruser.
[root@node2 ~]# vim /etc/samba/smb.conf
[root@node2 ~]# cat /etc/samba/smb.conf
...omitted
[gluster_vol_dispersed]
        comment = For samba share of volume vol_dispersed
        vfs objects = glusterfs
        glusterfs:volume = vol_dispersed
        glusterfs:logfile = /var/log/samba/glusterfs.%M.log
        glusterfs:loglevel = 7
        path = /
        read only = no
        guest ok = yes
        kernel share modes = no
[root@node2 ~]# systemctl start smb
[root@node2 ~]# systemctl enable smb

Mount on the client
If you mount over CIFS right away, nothing can be written; first mount the volume over FUSE, loosen the permissions on the volume root, then remount over CIFS and writes work:
[root@node1 ~]# mkdir /mnt/dispersed_temp
[root@node1 ~]# mount -t glusterfs node2:/vol_dispersed /mnt/dispersed_temp/
[root@node1 ~]# echo "Here is node1" > /mnt/dispersed_temp/welcome.txt
[root@node1 ~]# chmod 777 /mnt/dispersed_temp/
[root@node1 ~]# umount /mnt/dispersed_temp/

Mount via CIFS:
[root@node1 ~]# yum install -y cifs-utils samba-client
[root@node1 ~]# smbclient -L node2 -U glusteruser
Enter SAMBA\glusteruser's password:

        Sharename             Type      Comment
        ---------             ----      -------
        print$                Disk      Printer Drivers
        gluster_vol_dispersed Disk      For samba share of volume repvol
        IPC$                  IPC       IPC Service (Samba 4.10.4)
        glusteruser           Disk      Home Directories
Reconnecting with SMB1 for workgroup listing.

        Server               Comment
        ---------            -------

        Workgroup            Master
        ---------            -------

[root@node1 ~]# mkdir /mnt/dispersed
[root@node1 ~]# mount -t cifs -o username=glusteruser //node2/gluster_vol_dispersed /mnt/dispersed
[root@node1 ~]# df -Th
Filesystem                     Type  Size  Used Avail Use% Mounted on
...omitted
//node2/gluster_vol_dispersed  cifs   20G  272M   20G   2% /mnt/dispersed
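To make the CIFS mount persistent, a credentials file keeps the password out of /etc/fstab itself. A minimal sketch (the /root/.smbcred path is only an example):

[root@node1 ~]# cat /root/.smbcred
username=glusteruser
password=********
[root@node1 ~]# chmod 600 /root/.smbcred
[root@node1 ~]# grep dispersed /etc/fstab
//node2/gluster_vol_dispersed /mnt/dispersed cifs credentials=/root/.smbcred,_netdev 0 0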