19%
25.03.2021
B/s (1444kB/s)(82.9MiB/60173msec); 0 zone resets
[ ... ]
Run status group 0 (all jobs):
WRITE: bw=1410KiB/s (1444kB/s), 1410KiB/s-1410KiB/s (1444kB/s-1444kB/s), io=82.9MiB (86.9MB), run=60173-60173msec
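The fio command line that produced this summary is not shown in the excerpt. A minimal sketch of a 60-second, time-based sequential write job that reports a run status group like the one above (job name, target file, block size, and I/O engine are assumptions, not taken from the article):
$ fio --name=seqwrite --filename=/mnt/test/fio.dat --rw=write --bs=1M \
      --size=4g --runtime=60 --time_based --ioengine=libaio --direct=1 \
      --group_reporting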
19%
12.05.2021
30.85 72.31 13.16 20.40 0.26 70.44 83.89 1.97 3.52
nvme0n1 58.80 12.22 17720.47 48.71 230.91 0.01 79.70 0.08 0.42 0.03 0
19%
02.08.2021
%util
sda 10.91 6.97 768.20 584.64 4.87 18.20 30.85 72.31 13.16 20.40 0.26 70.44 83.89 1.97 3.52
nvme0n1 58.80 12.22 17720.47 48.71 230
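Per-device rows ending in a %util column like those above are the extended device statistics printed by iostat; a hedged example that samples them once per second for five reports (interval and count are assumptions, not taken from the article):
$ iostat -x 1 5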
19%
18.07.2013
Serial Number: XXXXXXXXXXXXXXXXXX
Firmware Revision: 2CV102HD
Transport: Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6
Standards:
Used: ATA/ATAPI-7 T13 1532
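Drive identification fields such as Firmware Revision, Transport, and the Standards block are typically produced by hdparm's identify query; a hedged example (the device name is an assumption):
$ sudo hdparm -I /dev/sda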
18%
28.03.2012
#<--------CPU--------><----------Disks-----------><----------Network---------->
#cpu sys inter ctxsw KBRead Reads KBWrit Writes KBIn PktIn KBOut PktOut
3 1 1421 2168 0 0 41000 90 0 2 0 0
3 2 1509 2198 64 2 49712
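A header of #cpu sys inter ctxsw followed by disk and network columns matches collectl's brief output with the CPU, disk, and network subsystems selected; a hedged example with a one-second interval (flags and interval are assumptions, not taken from the article):
$ collectl -scdn -i 1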
17%
27.08.2014
was the sequential write test using 1MB record sizes:
./iozone -i 0 -c -e -w -r 1024k -s 32g -t 2 -+n > iozone_write_1.out
To gather the block statistics, I ran ioprof in a different terminal window before I ran
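The excerpt cuts off before the ioprof invocation itself. As a sketch only: Intel's ioprof is usually run as a Perl script in trace mode against a block device for a fixed number of seconds, so an invocation might look like the line below; the mode, device, and runtime flags are assumptions, not taken from the article.
$ sudo ./ioprof.pl -m trace -d /dev/sdb -r 60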
17%
14.11.2013
significantly fewer processes compete for the available CPU cores – and thus fewer context switches are needed. Far more memory is available for the buffer cache or shared pool because the minimum 350MB of SGA
17%
16.07.2015
command:
[laytonjb@home4 ~]$ module avail
-------------------------- /cluster/modulefiles/Core ---------------------------
gnu/4.8 lmod/6.0.1 settarg/6.0.1
Use ...
Lmod 6.0: Exploring the Latest Edition of the Powerful Environment Module System
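Once module avail shows what is installed, a module from the list is activated with module load and confirmed with module list; an illustrative example using the gnu/4.8 module shown above (not taken from the article):
[laytonjb@home4 ~]$ module load gnu/4.8
[laytonjb@home4 ~]$ module list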
17%
30.11.2020
84c5f6e03bf0 45 hours ago 104MB
registry 2 2d4f4b5309b1 2 months ago 26.2MB
$ docker tag redis:latest 192.168.1.48:5000/redis:latest
and check
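The excerpt ends at "and check." A hedged sketch of the usual next steps, pushing the newly tagged image to the local registry at 192.168.1.48:5000 and checking its contents through the registry's HTTP API (the curl call is an assumption, not taken from the article):
$ docker push 192.168.1.48:5000/redis:latest
$ curl http://192.168.1.48:5000/v2/_catalog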
17%
14.06.2017
to compress the data to 91.53% of its uncompressed size, or to 328MB (327.34MB).
Notice that I used the time command to measure how long the command took to run. The results were:
real 0m7.675s
user 0m
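The compressor used in the article is not visible in this excerpt; as a generic illustration of wrapping a compression run in the time command (tool, flags, and file name are assumptions), time then prints the real, user, and sys values as shown above:
$ time gzip -9 -k data.bin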