54%
01.08.2012
| 54 kB 00:00
(2/7): perl-5.10.1-119.el6_1.1.x86_64.rpm | 10 MB 00:08
(3/7): perl-Module-Pluggable-3.90-119.el6 ...
Warewulf 3 Listing 1
54%
26.01.2012
32KB < ... < 128KB: 64
128KB < ... < 256KB: 0
256KB < ... < 512KB: 2
512KB < ... < 1MB: 3
1MB < ... < 10MB: 87
10MB < ... < 100MB: 0
100MB < ... < 1GB:
54%
15.02.2012
32KB < ... < 128KB: 64
128KB < ... < 256KB: 0
256KB < ... < 512KB: 2
512KB < ... < 1MB: 3
1MB < ... < 10MB: 87
10MB < ... < 100MB: 0
100MB < ... < 1GB:
54%
16.03.2021
.4MBps and random reads 1.9MBps. The good news is that although random writes dropped slightly to 1.2MBps (Listing 6), random reads nearly doubled in throughput, reaching 3.3MBps
54%
21.08.2012
Listing 3: Installing ganglia-gmond on the Master Node
[root@test1 RPMS]# yum install ganglia-gmond-3.4.0-1.el6.i686.rpm
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror ...
Listing 3: Warewulf – Part 4
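To confirm the package actually installed and to bring the monitoring daemon up, a quick check might look like the following. This is only a sketch; it assumes the service shipped by ganglia-gmond is registered as gmond, the usual name for these RHEL 6-era packages:
[root@test1 RPMS]# rpm -q ganglia-gmond
[root@test1 RPMS]# service gmond start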
53%
30.01.2020
PV           VG Fmt  Attr PSize    PFree
/dev/nvme0n1    lvm2 ---  <232.89g <232.89g
/dev/sdb        lvm2 ---    <6.37t   <6.37t
Next, I add both volumes into a new volume group labeled vg-cache,
$ sudo vgcreate vg-cache /dev/nvme0n1 /dev/sdb
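A quick way to confirm that the new volume group exists and spans both devices is vgs; this is just a verification sketch, reusing the vg-cache name from the command above:
$ sudo vgs vg-cache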
53%
21.08.2012
, with the use of chkconfig, that ganglia always starts when the master node boots:
[root@test1 ganglia]# chkconfig --list | more
NetworkManager 0:off 1:off 2:on 3:on 4:on 5:on 6:off
acpid
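The listing above (truncated here) only shows the current runlevel settings for each service; to make sure the ganglia agent itself is enabled at boot and then confirm the change, the usual invocation would be something like the following, assuming the agent is registered with chkconfig under the name gmond:
[root@test1 ganglia]# chkconfig gmond on
[root@test1 ganglia]# chkconfig --list gmond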
53%
07.10.2014
.snap
s ntestvm1.img 5 8.0 GB 292 MB 2.4 GB 2014-03-01 11:42 982a3a 2 mar.snap
s ntestvm1.img 6 8.0 GB 128 MB 2.6 GB 2014-03-10 19:48 982a3b 2 mar2.snap
ntestvm1.img 0 8.0 GB
53%
11.04.2016
(512 MB) copied, 49.1424 s, 10.4 MB/s
If you want to empty the read and write cache for benchmark purposes, you can do so using:
sync; echo 3 > /proc/sys/vm/drop_caches
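Keep in mind that the redirection into /proc has to happen as root; with sudo, the redirect is still performed by your unprivileged shell, so the usual workarounds are to wrap the whole line in a root shell or to go through sysctl:
sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'
sync; sudo sysctl -w vm.drop_caches=3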
Sequential access
53%
19.11.2019
volume "/dev/sdb" successfully created.
Then, I verify that the volumes have been appropriately labeled:
$ sudo pvs
PV VG Fmt Attr PSize PFree
/dev/nvme0n1 lvm2 --- <232.89g <232.89g
/dev/sdb lvm2 --- <6