30.11.2025
        friday          00:00-24:00
        saturday        00:00-24:00
}

define timeperiod{
        timeperiod_name wochentags
        alias           Robot Robot
30.11.2025
# DaemonOpts: -f /var/log/collectl -r00:00,7 -m -F60 -s+YZCD --iosize
################################################################################
# Collectl: V3.6.1-4 HiRes: 1 Options
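The DaemonOpts line records the switches the collectl daemon was started with. As a rough sketch, a raw file written to the directory named there could later be replayed with collectl's playback switch; the file name below is a hypothetical placeholder, only the -p option and the /var/log/collectl path come from the header above:

$ collectl -p /var/log/collectl/hostname-20251130.raw.gz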
13.12.2018
disk reads: 1306 MB in 3.00 seconds = 434.77 MB/sec
federico@cybertron:~$ sudo hdparm -W /dev/sdb
/dev/sdb:
write-caching = 1 (on)
federico@cybertron:~$ sudo hdparm -W 0 /dev/sdb
/dev/sdb:
write
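The throughput figure at the top of this excerpt is the kind of number hdparm's buffered read test reports, and the session leaves the write cache disabled. A minimal sketch of how such a session might continue, assuming the cache should be switched back on afterwards (neither command appears in the excerpt):

federico@cybertron:~$ sudo hdparm -t /dev/sdb    # repeat the buffered read timing test
federico@cybertron:~$ sudo hdparm -W 1 /dev/sdb  # re-enable write caching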
11.02.2016
Device:  rrqm/s  wrqm/s    r/s      w/s  rkB/s      wkB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm  %util
sda        2.00    8.00   2.00  9500.00  16.00  151948.00     31.99      1.07   0.11     4.00     0.11   0.09  88.40
If your read or write
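For orientation, per-device rows like the one above come from iostat's extended statistics mode; a minimal sketch of an invocation that would print them, assuming the sysstat iostat and a one-second reporting interval (the excerpt does not show the command actually used):

$ iostat -x 1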
30.01.2020
=test
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.12
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][w=654MiB/s][w=167k IOPS][eta 00m:00s]
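A rough reconstruction of the job definition behind this run, based only on the parameters fio echoes back in its first line (random writes, 4 KiB blocks, libaio, queue depth 32); the target file, size, and direct I/O setting are assumptions, not taken from the excerpt:

$ fio --name=test --filename=/tmp/fiotest --size=1G --direct=1 \
      --rw=randwrite --bs=4k --ioengine=libaio --iodepth=32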
29.09.2020
sitting at less than 50 MB (and using less than half the RAM of a standard cluster), the binary that runs K3s is a sight to behold and well worth getting your hands on. Especially when it's deemed production
19.11.2019
, ioengine=libaio, iodepth=32
fio-3.1
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=1401KiB/s][r=0,w=350 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=3104: Sat Oct 12 14:39:08 2019
write: IOPS=352
05.11.2018
infinite 4/9/3/16 node[212-213,215-218,220-229]
This example lists the status, time limit, node information, and node list of the p100 partition.
sbatch
To submit a batch serial job to Slurm, use the sbatch command.
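A minimal sketch of such a batch script, assuming a simple serial job aimed at the p100 partition from the sinfo output above; the job name, time limit, and program name are placeholders rather than details from the article:

$ cat serial.sh
#!/bin/bash
# Minimal serial batch job; job name, time limit, and program are placeholders
#SBATCH --job-name=serial-test
#SBATCH --partition=p100
#SBATCH --ntasks=1
#SBATCH --time=00:10:00
./my_program
$ sbatch serial.sh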
13.12.2018
Listing 2: sinfo
$ sinfo -s
PARTITION  AVAIL  TIMELIMIT  NODES(A/I/O/T)  NODELIST
p100       up     infinite   4/9/3/16        node[212-213,215-218,220-229]
sbatch
To submit a batch serial
20.02.2012
on five hits on the web stack (assuming you pushed all static assets to S3 or something static), the probability that each user is affected by a 1 percent failure rate (i.e., 100 percent minus availability
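Worked through under the assumption that the five hits fail independently, each with the quoted 1 percent failure rate: a page view succeeds only if all five requests do, which happens with probability 0.99^5, roughly 0.951, so about 1 - 0.99^5, or 4.9 percent, of page views hit at least one failure, noticeably worse than the per-request figure suggests.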