11.02.2016
Device:  rrqm/s  wrqm/s     r/s      w/s   rkB/s      wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
sda        2.00    8.00    2.00  9500.00   16.00  151948.00    31.99     1.07   0.11    4.00    0.11   0.09  88.40
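Extended device statistics in this form come from iostat (part of the sysstat package); a typical invocation that produces such a report might be:

$ iostat -dx 1 3    # extended device stats only, 1-second interval, 3 reports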
If your read or write
30.11.2025
active_checks_enabled    1
passive_checks_enabled   0
check_command            check_ssh
max_check_attempts       3
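In Nagios, directives like these sit inside an object definition in a .cfg file. A minimal sketch of a service definition built around them (the host name and contact are hypothetical):

define service {
    host_name                node01        ; hypothetical monitored host
    service_description      SSH
    check_command            check_ssh     ; plugin command from the listing
    max_check_attempts       3             ; attempts before a HARD state
    check_interval           5             ; minutes between regular checks
    retry_interval           1             ; minutes between retries
    check_period             24x7
    notification_interval    30
    notification_period      24x7
    contacts                 nagiosadmin   ; hypothetical contact
    active_checks_enabled    1             ; Nagios schedules the check itself
    passive_checks_enabled   0             ; externally submitted results ignored
}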
30.11.2025
YZCD
# DaemonOpts: -f /var/log/collectl -r00:00,7 -m -F60 -s+YZCD --iosize
################################################################################
# Collectl: V3.6.1-4 HiRes: 1 Options
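The DaemonOpts line records the switches the collectl daemon was started with; run in the foreground, the equivalent command would be:

$ collectl -f /var/log/collectl -r00:00,7 -m -F60 -s+YZCD --iosize

Here, -s+YZCD adds slab (Y), process (Z), CPU (C), and disk (D) detail data on top of the default subsystems; -r00:00,7 rolls the logfile at midnight and keeps a week of files; and --iosize includes I/O request sizes in the disk statistics.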
13.12.2018
Listing 2 shows the summarized output of sinfo:
$ sinfo -s
PARTITION AVAIL TIMELIMIT NODES(A/I/O/T) NODELIST
p100 up infinite 4/9/3/16 node[212-213,215-218,220-229]
This example lists the status, time limit, node information, and node list of the p100 partition.

sbatch

To submit a batch serial job to Slurm, use the sbatch command.
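A minimal batch script for such a serial job might look like the following sketch (job name, time limit, output file, and executable are hypothetical; the partition matches the sinfo output above):

#!/bin/bash
#SBATCH --job-name=serial_test        # hypothetical job name
#SBATCH --partition=p100              # partition shown by sinfo above
#SBATCH --ntasks=1                    # a serial job needs one task
#SBATCH --time=00:10:00               # 10-minute wall-clock limit
#SBATCH --output=serial_test.%j.out   # %j expands to the job ID

./my_serial_app                       # hypothetical serial executable

Saved as run_serial.sh (also a hypothetical name), it is submitted with sbatch run_serial.sh.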
20.02.2012
Elapsed time: 11.79 secs
Data transferred: 2.47 MB
Response time: 0.22 secs
Transaction rate: 35.79 trans/sec
Throughput: 0
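At 35.79 trans/sec over 11.79 secs, this run completed roughly 420 transactions. A summary like this is printed by siege at the end of a run; a hypothetical invocation (the excerpt does not show the actual command or URL):

$ siege -c 10 -t 30S http://www.example.com/    # 10 concurrent users for 30 seconds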
30.01.2020
=test
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.12
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][w=654MiB/s][w=167k IOPS][eta 00m:00s]
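The job line above maps directly onto fio switches; a command line that reproduces this job (the size and direct settings are assumptions, because the excerpt cuts off the command itself):

$ fio --name=test --rw=randwrite --bs=4k --ioengine=libaio \
      --iodepth=32 --direct=1 --size=1G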
19.11.2019
, ioengine=libaio, iodepth=32
fio-3.12
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][w=475MiB/s][w=122k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=1634: Mon Oct 14 22:18:59 2019
write: IOPS=118k, BW=463MiB/s
21.08.2012
=/vnfs
The Ganglia additions grew the VNFS a little, to 72.3MB, which is still pretty small.
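Before booting, the image has to be rebuilt from the chroot. Assuming Warewulf 3 and the /vnfs chroot path from the fragment above, that is a single call:

$ wwvnfs --chroot=/vnfs    # repackage the chroot into the VNFS image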
Now I can boot the compute node. Once it comes up (check this by ssh'ing to the node as a user
27.08.2014
record size, (2) sequential read testing with 1MB record size, and (3) random write and read (4KB). In running these tests, I wanted to see what block layer information ioprof revealed.
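The record-size vocabulary suggests an iozone-style benchmark; a hypothetical reconstruction of the three tests (the excerpt does not name the actual tool, file size, or target path):

$ iozone -i 0 -i 1 -r 1m -s 4g -f /mnt/test/iofile   # (1)+(2) sequential write, then read, 1MB records
$ iozone -i 0 -i 2 -r 4k -s 4g -f /mnt/test/iofile   # (3) random read/write, 4KB records (test 0 creates the file)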
The system I