34%
15.02.2012
File size range         Count
32KB  < size < 128KB       64
128KB < size < 256KB        0
256KB < size < 512KB        2
512KB < size < 1MB          3
1MB   < size < 10MB        87
10MB  < size < 100MB        0
100MB < size < 1GB
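The snippet does not show the tool that gathered these counts; a minimal sketch that builds a comparable histogram with find and awk (the scanned directory and the bucket edges are assumptions):

$ find /home -type f -printf '%s\n' | awk '
    $1 <  32768      { next }           # below the smallest bucket shown
    $1 <  131072     { c[1]++; next }   # 32KB  < size < 128KB
    $1 <  262144     { c[2]++; next }   # 128KB < size < 256KB
    $1 <  524288     { c[3]++; next }   # 256KB < size < 512KB
    $1 <  1048576    { c[4]++; next }   # 512KB < size < 1MB
    $1 <  10485760   { c[5]++; next }   # 1MB   < size < 10MB
    $1 <  104857600  { c[6]++; next }   # 10MB  < size < 100MB
    $1 <  1073741824 { c[7]++ }         # 100MB < size < 1GB
    END { for (i = 1; i <= 7; i++) print c[i] + 0 }'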
34%
16.03.2021
.4MBps and random reads 1.9MBps. The good news is that whereas random writes dropped slightly to 1.2MBps (Listing 6), random reads nearly doubled in throughput, reaching 3.3MBps
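The article's Listing 6 is not reproduced in this excerpt; a hedged sketch of the kind of fio run that yields such random-I/O figures, with the target file, size, and runtime all assumed:

$ fio --name=randread --filename=/mnt/test/fio.dat --size=1g \
      --bs=4k --rw=randread --direct=1 --runtime=60 --time_based

Substituting --rw=randwrite gives the corresponding random-write figure.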
34%
21.08.2012
Listing 3: Installing ganglia-gmond into the Master Node
[root@test1 RPMS]# yum install ganglia-gmond-3.4.0-1.el6.i686.rpm
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror ...
34%
30.01.2020
/dev/nvme0n1 lvm2 --- <232.89g <232.89g
/dev/sdb lvm2 --- <6.37t <6.37t
Next, I add both volumes into a new volume group labeled vg-cache,
$ sudo vgcreate vg-cache /dev/nvme0n1 /dev/sdb
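A quick sanity check (not part of the original snippet) confirms that the group now spans both devices:

$ sudo vgs vg-cache
$ sudo pvs -o pv_name,vg_name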
34%
21.08.2012
, with the use of chkconfig, that ganglia always starts when the master node boots:
[root@test1 ganglia]# chkconfig --list | more
NetworkManager 0:off 1:off 2:on 3:on 4:on 5:on 6:off
acpid
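The listing breaks off above, and the snippet never reaches the command that actually turns the service on; assuming the init script shipped with the ganglia-gmond package is named gmond, it would be:

[root@test1 ganglia]# chkconfig gmond on
[root@test1 ganglia]# chkconfig --list gmond
gmond           0:off   1:off   2:on    3:on    4:on    5:on    6:off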
34%
07.10.2014
.snap
s ntestvm1.img 5 8.0 GB 292 MB 2.4 GB 2014-03-01 11:42 982a3a 2 mar.snap
s ntestvm1.img 6 8.0 GB 128 MB 2.6 GB 2014-03-10 19:48 982a3b 2 mar2.snap
ntestvm1.img 0 8.0 GB
34%
11.04.2016
(512 MB) copied, 49.1424 s, 10.4 MB/s
If you want to empty the read and write cache for benchmark purposes, you can do so using:
sync; echo 3 > /proc/sys/vm/drop_caches
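For example, to make sure a timed read pass hits the disk rather than the page cache, drop the caches immediately before the run (the test file path is a placeholder):

$ sync; echo 3 > /proc/sys/vm/drop_caches
$ dd if=/mnt/test/ddfile of=/dev/null bs=1M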
Sequential access
34%
19.11.2019
volume "/dev/sdb" successfully created.
Then, I verify that the volumes have been appropriately labeled:
$ sudo pvs
PV VG Fmt Attr PSize PFree
/dev/nvme0n1 lvm2 --- <232.89g <232.89g
/dev/sdb lvm2 --- <6.37t <6.37t
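The success message at the top of this excerpt is what pvcreate prints, so the step preceding this verification was presumably along these lines (hypothetical reconstruction):

$ sudo pvcreate /dev/nvme0n1 /dev/sdb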
33%
25.03.2020
Personalities : [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdd1[5] sde1[4] sdc1[2] sdb1[1] nvme0n1p1[0](J)
20508171264 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UU
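The (J) suffix marks nvme0n1p1 as the array's write journal. As a sketch, an array matching this output could be created with mdadm's --write-journal option; the device names follow the output above, everything else is assumed:

$ sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
       --chunk=512 --write-journal /dev/nvme0n1p1 \
       /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1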
33%
20.05.2014
Viewing Server Topology
# numactl --hardware
available: 8 nodes (0-7)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9
node 0 size: 16373 MB
node 0 free: 15837 MB
node 1 cpus: 10 11 12 13 14 15 16 17 18 19
node 1
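Once the topology is known, a job can be pinned to a single node so that its memory allocations stay local; the binary name here is a placeholder:

# numactl --cpunodebind=0 --membind=0 ./myapp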