12.03.2013
Filesystem:          rMB_nor/s   wMB_nor/s  rMB_dir/s  wMB_dir/s   rMB_svr/s  wMB_svr/s  ops/s  rops/s  wops/s
192.168.1.250:/home  1230649.19  1843536.81      0.00       0.00  1229407.77    1843781
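This per-mount report matches the output format of the sysstat variant of nfsiostat. Assuming that tool, a report in megabyte units every two seconds, five times, would be requested with:

$ nfsiostat -m 2 5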
26.02.2014
reqs merged: 3.78/s
Write reqs completed: 2.10/s
Read BW: 0.00 MB/s
Write BW: 0.02 MB/s
Avg sector size issued 23.78
Avg
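The excerpt doesn't show which tool printed these figures, but rates like these can be derived straight from the kernel. As a rough sketch, assuming a placeholder device sda and a 5-second sampling window:

#!/bin/bash
# Sketch: derive per-second write-request rates for one block device
# from /proc/diskstats. Device name and interval are placeholders.
# Fields used (Documentation/admin-guide/iostats.rst): $8 = writes
# completed, $9 = writes merged, $10 = sectors written.
DEV=sda
INT=5
read w1 m1 s1 < <(awk -v d="$DEV" '$3 == d { print $8, $9, $10 }' /proc/diskstats)
sleep "$INT"
read w2 m2 s2 < <(awk -v d="$DEV" '$3 == d { print $8, $9, $10 }' /proc/diskstats)
awk -v i="$INT" -v w="$((w2 - w1))" -v m="$((m2 - m1))" -v s="$((s2 - s1))" 'BEGIN {
    printf "Write reqs merged:    %.2f/s\n", m / i
    printf "Write reqs completed: %.2f/s\n", w / i
    if (w > 0) printf "Avg sectors per write: %.2f\n", s / w
}'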
11.02.2016
.40 <- < 71% idle >
0     1.00  0.00  0.37  0.00  0.00  0.06  0.00  0.00  0.00  98.57
1   100.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00   0.00
2     3.77  0.00
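These per-core rows have the column layout of mpstat -P ALL (%usr through %idle). Assuming that is the source, a one-shot report over a 2-second interval would be produced with:

$ mpstat -P ALL 2 1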
05.12.2014
? S Oct 19 7-00:03:58 bro
(+) root 11792 11766 11.4 3.3 150888 51400 ? S Oct 19 2-07:05:05 bro
Listing 3: netstats
[BroControl] > netstats
bro: 1415509669
05.11.2018
p100 up infinite 4/9/3/16 node[212-213,215-218,220-229]
This example lists the status, time limit, node information, and node list of the p100 partition.
sbatch
To submit a batch serial job to Slurm, use the sbatch command.
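A minimal batch script for such a job might look like the following sketch, saved as, e.g., serial.sh (the job name, output file, and executable are placeholders; the p100 partition is taken from the sinfo output above):

#!/bin/bash
#SBATCH --job-name=serial_test   # placeholder job name
#SBATCH --partition=p100         # partition reported by sinfo
#SBATCH --ntasks=1               # a serial job needs a single task
#SBATCH --time=00:10:00          # wall-clock limit (HH:MM:SS)
#SBATCH --output=serial_%j.out   # %j expands to the Slurm job ID

./serial_app                     # placeholder for the real executable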
13.12.2018
Listing 2: sinfo
$ sinfo -s
PARTITION AVAIL TIMELIMIT NODES(A/I/O/T) NODELIST
p100 up infinite 4/9/3/16 node[212-213,215-218,220-229]
sbatch
To submit a batch serial job to Slurm, use the sbatch command.
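Assuming the script sketched earlier is saved as serial.sh, submitting it and the resulting confirmation (the job ID is illustrative) look like this:

$ sbatch serial.sh
Submitted batch job 12345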
01.08.2012
| 54 kB 00:00
(2/7): perl-5.10.1-119.el6_1.1.x86_64.rpm | 10 MB 00:08
(3/7): perl-Module-Pluggable-3.90-119.el6 ...
Warewulf 3, Listing 1
21.08.2012
=/vnfs
The VNFS grew a little from the Ganglia additions, to 72.3 MB, which is still pretty small.
Now I can boot the compute node. Once it comes up (check this by ssh'ing to the node as a user
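As a sketch of that rebuild-and-check cycle, with a placeholder chroot path and node name (wwvnfs is the Warewulf 3 command that rebuilds the VNFS image; the rebuild runs as root):

# wwvnfs --chroot /var/chroots/sl6.2
$ ssh n0001 uptime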
27.08.2014
(1) sequential write testing with 1MB record size, (2) sequential read testing with 1MB record size, and (3) random write and read (4KB). In running these tests, I wanted to see what block layer information ioprof revealed.
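The excerpt doesn't name the load generator behind these tests. As a sketch, assuming fio and a placeholder test file, the three runs could be driven like this:

$ fio --name=seqwrite --filename=/mnt/test/file --rw=write     --bs=1m --size=4g --direct=1
$ fio --name=seqread  --filename=/mnt/test/file --rw=read      --bs=1m --size=4g --direct=1
$ fio --name=randwr   --filename=/mnt/test/file --rw=randwrite --bs=4k --size=4g --direct=1
$ fio --name=randrd   --filename=/mnt/test/file --rw=randread  --bs=4k --size=4g --direct=1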
The system I
11.04.2016
(512 MB) copied, 49.1424 s, 10.4 MB/s
If you want to empty the read and write caches for benchmarking purposes, you can do so (as root) with:
$ sync; echo 3 > /proc/sys/vm/drop_caches
Sequential access
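A cold-cache sequential read of a previously written test file might then be measured like this (testfile is a placeholder name; dropping the caches requires root):

$ sync; echo 3 > /proc/sys/vm/drop_caches
$ dd if=testfile of=/dev/null bs=1M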