# slurm.conf file generated by configurator.html.
#
# See the slurm.conf man page for more information.
#
ClusterName=compute-cluster
ControlMachine=slurm-ctrl
#
SlurmUser=slurm
SlurmctldPort=6817
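A minimal working configuration also needs compute node and partition definitions further down in slurm.conf. A sketch along these lines (host names, CPU counts, and memory sizes are assumptions to adapt to your own cluster, not values from this listing):

NodeName=linux[1-4] CPUs=4 RealMemory=7900 State=UNKNOWN
PartitionName=compute Nodes=linux[1-4] Default=YES MaxTime=INFINITE State=UP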
Timing buffered disk reads: 1306 MB in 3.00 seconds = 434.77 MB/sec
federico@cybertron:~$ sudo hdparm -W /dev/sdb
/dev/sdb:
 write-caching = 1 (on)
federico@cybertron:~$ sudo hdparm -W 0 /dev/sdb
/dev/sdb:
 setting drive write-caching to 0 (off)
 write-caching = 0 (off)
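After testing, the same switch turns the write cache back on:

federico@cybertron:~$ sudo hdparm -W 1 /dev/sdb

To keep the cache disabled across reboots on Debian-family systems (an assumption about the distribution), a stanza like this in /etc/hdparm.conf does the same job:

/dev/sdb {
    write_cache = off
}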
package main

import "fmt"

const size = 2 // array bounds in Go must be compile-time constants

func main() {
	var array [size][size]int // elements start at the zero value, 0

	for i := 0; i < size; i++ {
		for j := 0; j < size; j++ {
			array[i][j]++
		}
	}

	fmt.Println(array) // [[1 1] [1 1]]
}
Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=1401KiB/s][r=0,w=350 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=3104: Sat Oct 12 14:39:08 2019
  write: IOPS=352, BW=1410KiB/s (1444kB/s)(82.8Mi
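Output of this shape comes from a small random-write job; an invocation along the following lines would produce it (block size, file size, and I/O engine are assumptions, not the exact command behind this listing):

$ fio --name=test --rw=randwrite --bs=4k --size=256m --ioengine=libaio --direct=1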
.57, 0.00, 12.76, 85, 0
2012-01-09 21:09:21, 84, 4.84, 0, 0.29, 17.36, 0.00, 5.09, 90, 0
2012-01-09 21:09:47, 80, 4
Jobs: 1 (f=1): [w(1)][100.0%][w=654MiB/s][w=167k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=1225: Sat Oct 12 19:20:18 2019
write: IOPS=168k, BW=655MiB/s (687MB/s)(10.0GiB/15634msec); 0
The fio job creates a 256MB file in the current directory, along with a process for the job. This process reads the complete file content in random order. Fio records the areas that have already been read and reads each area only once.
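An invocation matching that description could look like this (the job name and I/O engine are assumptions; --size=256m and --rw=randread follow from the text):

$ fio --name=randread-test --rw=randread --bs=4k --size=256m --ioengine=libaio --direct=1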
The test system configuration:

Scientific Linux 6.2
2.6.32-220.4.1.el6.x86_64 kernel
Gigabyte GA-MA78GM-US2H motherboard
AMD Phenom II X4 920 CPU (four cores)
8GB of memory (DDR2-800)

The OS and boot drive are on an IBM DTLA
The test cluster has just two nodes: test1, which is the master node, and n0001, which is the first compute node:
[laytonjb@test1 ~]$ pdsh -w test1,n0001 uptime
test1: 18:57:17 up 2:40, 5 users, load average: 0.00, 0.00
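Listing every host with -w gets tedious on larger clusters; pdsh also reads its default target list from the file named by the WCOLL environment variable (the path below is an assumption):

[laytonjb@test1 ~]$ export WCOLL=/home/laytonjb/hosts
[laytonjb@test1 ~]$ pdsh uptime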
Field  Description                                                 Example
…      … by the process                                            12m (12MB)
S      Status of process: S = sleeping, R = running, Z = zombie    R
%CPU   Percent CPU being used by the process on a per-CPU basis
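To capture these columns non-interactively, top's batch mode writes one snapshot to standard output (standard procps options):

$ top -b -n 1 | head -15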