30.05.2021
on a fabric simultaneously. Existing Gen5 (16Gbps) and Gen6 (32Gbps) FC SANs can run FC NVMe over existing SAN fabrics with little change, because NVMe meets all specifications, according to the Fibre Channel
09.06.2018
machines connected back-to-back with 100Gbps (100G) network adapters and OpenSSL, our lab test ran the s_time application on the client and s_server on the server, with and without inline TLS enabled and using
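A minimal sketch of that kind of s_server/s_time measurement (the certificate, port 4433, and run times here are illustrative, not the lab's actual settings):

```shell
# Create a throwaway self-signed certificate for the test server
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=localhost" -keyout key.pem -out cert.pem

# Server side: accept TLS connections on port 4433
openssl s_server -key key.pem -cert cert.pem -accept 4433 -quiet &
SERVER_PID=$!
sleep 1

# Client side: report connections per second over a 10-second run
openssl s_time -connect localhost:4433 -time 10

kill $SERVER_PID
```

On back-to-back machines you would point s_time at the server's address instead of localhost and compare the connections-per-second figures with inline TLS enabled and disabled.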
07.10.2025
are no longer looking at bandwidths in the 1Gbps range; instead, 25Gbps is the norm, and even 400Gbps is no longer uncommon. Other packet filters for Linux, most notably the now obsolete iptables, are simply too
14.06.2018
POWER9 processors and six NVIDIA Tesla V100 graphics processing unit (GPU) accelerators. The compute servers are interconnected with Mellanox EDR 100Gbps InfiniBand. The system has more than 10 petabytes
18.12.2013
common in HPC to illustrate these differences: C, Fortran 90, and Python (2.x series). I run the examples on a single 64-bit system with CentOS 6.2 using the default GNU compilers, GCC and GFortran (4.4.6
28.08.2013
uses a single core. Sixty-three cores sit idle until gzip finishes. Moreover, using a single core to gzip a file on a 2PB Lustre system capable of 20GBps is like draining
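The single-core bottleneck is easy to see in practice. A minimal sketch (the file name and size are illustrative, and pigz is one of several parallel gzip replacements):

```shell
# Create a sample file to compress (1MB of zeros here; imagine terabytes)
dd if=/dev/zero of=bigfile bs=1M count=1 2>/dev/null

# Standard gzip runs on exactly one core, however many sit idle
gzip -k bigfile            # -k keeps the original file

# pigz, if installed, splits the input across all available cores:
# pigz -p 64 -k bigfile
```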
21.08.2012
Listing 4: Installing ganglia-gmetad on the Master Node
[root@test1 RPMS]# yum install ganglia-gmetad-3.4.0-1.el6.i686.rpm
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading
19.05.2014
of completeness, I’m repeating some of the details from the original SSHFS article.
On my desktop I have a Samsung 840 SSD attached via a SATA III connection (6Gbps) and mounted as /data
. It is formatted with ext4
16.05.2013
root@alice:~# quantum port-list
+--------------------------------------+------+-------------------+
| 0c478fa6-c12c-...                    |      | fa:16:3e:29
15.02.2012
…           ,369   28,438   42,656   36,786   46,779   26,834   26,883   47,224
getdents       8        6        8        6        6        6        6       10
fseek          0