22%
21.01.2020
 8        2   488383488  sda2
 8       16  6836191232  sdb
 8       64  6836191232  sde
 8       80    39078144  sdf
 8       48  6836191232  sdd
 8       32  6836191232  sdc
11        0     1048575  sr0
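A listing in this format (major and minor device numbers, size in blocks, device name) is the kernel's partition table, which can be printed directly with:
$ cat /proc/partitions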
22%
25.03.2020
11 0 1048575 sr0
With the parted utility, you can create a single partition on each entire HDD:
$ for i in sdb sdc sdd sde; do sudo parted --script /dev/$i mklabel gpt mkpart primary 1MB 100%; done
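One quick way to confirm the result afterwards (assuming the same four device names) is to list the block devices again:
$ lsblk /dev/sdb /dev/sdc /dev/sdd /dev/sde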
22%
30.01.2020
]
test: (groupid=0, jobs=1): err= 0: pid=1225: Sat Oct 12 19:20:18 2019
write: IOPS=168k, BW=655MiB/s (687MB/s)(10.0GiB/15634msec); 0 zone resets
[ ... ]
Run status group 0 (all jobs):
WRITE: bw=655MiB/s (687MB/s)
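Output of this shape comes from a single-job fio write test; an invocation along the following lines would reproduce it, where the target path, I/O engine, queue depth, and direct-I/O flag are assumptions, and only the job name, the roughly 4 KiB block size implied by the IOPS/bandwidth ratio, and the 10 GiB size are taken from the result itself:
$ sudo fio --name=test --filename=/dev/sdb --rw=write --bs=4k --size=10G --ioengine=libaio --iodepth=64 --direct=1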
22%
06.10.2022
419430400 bytes (419 MB, 400 MiB) copied, 0.535233 s, 784 MB/s
root@focal:~# dd if=/dev/zero of=/dev/mapper/encrypted-ram0 bs=4k count=100k
102400+0 records in
102400+0 records out
419430400 bytes (419 MB, 400 Mi
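The /dev/mapper/encrypted-ram0 target written to here can be recreated by stacking dm-crypt on a RAM-backed block device; a rough sketch, assuming a 4 GiB brd ramdisk and a throwaway random key in plain mode:
root@focal:~# modprobe brd rd_nr=1 rd_size=4194304      # 4 GiB ramdisk at /dev/ram0 (example size)
root@focal:~# cryptsetup open --type plain --cipher aes-xts-plain64 --key-file /dev/urandom /dev/ram0 encrypted-ram0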
22%
17.02.2015
        service_description             PING
        check_command                   check_ping!100.0,20%!500.0,60%
        }
define service{
        use                             generic-service         ; Name of service template to use
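The excerpt breaks off inside the second definition; for reference, a complete check_ping service block in the same stock style looks like this (the host_name value is an assumed example):
define service{
        use                     generic-service
        host_name               localhost
        service_description     PING
        check_command           check_ping!100.0,20%!500.0,60%
        }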
22%
19.11.2019
Jobs: 1 (f=1): [w(1)][100.0%][w=654MiB/s][w=167k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=1225: Sat Oct 12 19:20:18 2019
write: IOPS=168k, BW=655MiB/s (687MB/s)(10.0GiB/15634msec); 0 zone resets
22%
28.03.2012
2 4 24 12 0 12 2 0 sda 0 0 0 0 0 0 0 0 0 0 0
20120310 13:39:20 sdb 0 0 0 1 3 17 12 0 27 8 1 sda 0 0 0 0 0 0 0 0 0 0 0
20120310 13:39:30 sdb 0 0 0 0 0 0 0 0 0 0 0 sda 0 0 0 0 0 0 0 0 0 0 0
20120310 13
22%
20.05.2014
Viewing Server Topology
# numactl --hardware
available: 8 nodes (0-7)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9
node 0 size: 16373 MB
node 0 free: 15837 MB
node 1 cpus: 10 11 12 13 14 15 16 17 18 19
node 1
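Beyond viewing the topology, numactl can also bind a workload to one of the listed nodes; a minimal sketch, where the node number and the command being launched are placeholders:
# numactl --cpunodebind=0 --membind=0 ./my_workload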
21%
21.01.2021
and the T3E were arguably highly successful systems. A 1,480-processor T3E was the first system on the TOP500 to exceed 1 TFLOPS (10^12 FLOPS) while running a scientific application.
Cray did not just develop
21%
18.07.2013
100  100  000  Old_age  Always  -  2456
 12  Power_Cycle_Count  0x0032  100  100  000  Old_age  Always
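Rows like these belong to the SMART attribute table that smartctl reports; a typical invocation (device path assumed) is:
# smartctl -A /dev/sda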