31%
12.05.2021
30.85 72.31 13.16 20.40 0.26 70.44 83.89 1.97 3.52
nvme0n1 58.80 12.22 17720.47 48.71 230.91 0.01 79.70 0.08 0.42 0.03 0.00
29%
16.03.2021
ioengine=libaio, iodepth=32
fio-3.12
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][w=1420KiB/s][w=355 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=3377: Sat Jan 9 15:31:04 2021
write: IOPS=352, BW=1410Ki
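A minimal sketch of a fio invocation that could produce output of this shape, assuming a 4KiB random-write job with libaio at queue depth 32 (the job name matches the output above; the file path and size are placeholders):
$ fio --name=test --filename=/mnt/test/fio.dat --size=1g \
      --rw=randwrite --bs=4k --ioengine=libaio --iodepth=32 \
      --direct=1 --numjobs=1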
29%
15.08.2016
link
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
The set command adds a device that already exists in the host system, as sketched below.
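A minimal sketch, assuming the set command here means moving an interface that already exists on the host into a network namespace; the interface eth1 and the namespace demo are placeholders:
$ sudo ip netns add demo
$ sudo ip link set eth1 netns demo
$ sudo ip netns exec demo ip link show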
29%
21.08.2012
| 119 kB 00:00
(6/19): glib2-2.22.5-6.el6.i686.rpm | 1.1 MB 00:00
(7/19): libX11-1.3-2.el6.i686.rpm
29%
05.03.2014
12:15:01 PM all 2.08 0.00 0.96 0.02 0.00 96.94
12:25:01 PM all 1.96 0.00 0.82 0.06 0.00 97.16
12:35:01 PM all 1.22 0.00 0.73 0.00 0.00 98.05
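For reference, a report of this shape typically comes from sar's CPU view, reading either today's data file or an explicit one (the file name sa05 is a placeholder and its path varies by distribution):
$ sar -u
$ sar -u -f /var/log/sa/sa05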
29%
20.03.2014
12:35:01 PM all 1.22 0.00 0.73 0.00 0.00 98.05
12:45:01 PM all 1.32 0.00 0.72 0.01 0.00 97.95
12:55:01 PM all 1.79 0.00 0.75 0
28%
25.03.2021
0[0]
2094080 blocks super 1.2 [2/2] [UU]
unused devices: <none>
The initialization time should be relatively quick here. Also, verify the RAID1 mirror details (Listing 13) and rerun the random
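A short sketch of that verification step, assuming the mirror is assembled as /dev/md0 (the device name is a placeholder):
$ cat /proc/mdstat
$ sudo mdadm --detail /dev/md0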
28%
30.01.2020
| %|Source code
------+----------+-------------+-------------+-------+-----------
1| 0| 0| 0| 0.00%|# md test code
2| 0| 0| 0| 0.00
28%
18.06.2015
XC40 before the system is placed into regular service.
The Excalibur system comes with 101,184 processors, and the Stanford team had access to 22,000 of them. The team was working on a new scalability
28%
05.09.2011
can see how the ARP cache poisoning works:
$ sudo nemesis arp -v -r -d eth0 -S 192.168.1.2 \
-D 192.168.1.133 -h 00:22:6E:71:04:BB -m 00:0C:29:B2:78:9E \
-H 00:22:6E:71:04:BB -M 00:0C:29:B2:78:9E
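Read loosely, and assuming nemesis's usual ARP-mode options: -S/-D give the spoofed source and target IP addresses, -h/-m the sender and target hardware addresses inside the ARP payload, -H/-M the source and destination MACs of the Ethernet frame, and -r sends the packet as an unsolicited ARP reply, which is what poisons the victim's cache.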