16.03.2021
, ioengine=libaio, iodepth=32
fio-3.12
Starting 1 process
Jobs: 1 (f=1)
test: (groupid=0, jobs=1): err= 0: pid=5872: Sat Jan 9 16:35:08 2021
read: IOPS=251k, BW=979MiB/s (1026MB/s)(2045MiB/2089msec)
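A fio invocation consistent with this output might look as follows; this is only a sketch, and the access pattern, block size, and target size are assumptions not shown in the excerpt:

$ fio --name=test --ioengine=libaio --iodepth=32 --rw=read --bs=4k --direct=1 --size=2G --numjobs=1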
12.05.2020
REPOSITORY TAG IMAGE ID CREATED SIZE
cuda 10.1-base-ubuntu19.04-octave b01ee7a9eb2d 47 seconds ago 873MB
nvidia/cuda 10.1-base-ubuntu18.04 3b55548ae91f 4 months ago 106MB
hello
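The image in the first row was evidently built locally a few seconds earlier. A sketch of how such an image could be produced; the Dockerfile contents are assumptions based only on the names in the listing:

$ cat Dockerfile
FROM nvidia/cuda:10.1-base-ubuntu19.04
RUN apt-get update && apt-get install -y octave
$ docker build -t cuda:10.1-base-ubuntu19.04-octave .
$ docker images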
25.03.2021
(2045.00 MiB 2144.34 MB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Sat Jan 9 16:36:21 2021
State : clean, resyncing
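A sketch of how a three-device array like this one is typically created and inspected with mdadm; the device names and the RAID level are assumptions, since the excerpt doesn't show them:

$ sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
$ sudo mdadm --detail /dev/md0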
11.04.2016
(512 MB) copied, 49.1424 s, 10.4 MB/s
If you want to empty the read and write cache for benchmark purposes, you can do so using:
sync; echo 3 > /proc/sys/vm/drop_caches
Sequential access
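A sketch of a simple sequential read test with dd after clearing the caches; the source device and the transfer sizes are assumptions:

$ sync; echo 3 > /proc/sys/vm/drop_caches
$ dd if=/dev/sda of=/dev/null bs=1M count=512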
07.11.2023
Filesystem Size Used Avail Use% Mounted on
tmpfs 5.3M 8.2k 5.3M 1% /run/lock
/dev/nvme1n1p1 1.1T 488G 468G 52% /home
/dev/nvme0n1p1 536M 6.4M 530M 2% /boot/efi
/dev/sda1 6.0T 3.4T 2.4T 60% /home2
tmpfs
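The listing above is typical df output; the SI-style sizes (lowercase k) suggest the -H flag rather than -h. A sketch:

$ df -H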
30.11.2020
integration
integral = trapezoidal(local_a, local_b, local_n, h)

# Add up the integrals calculated by each process
if my_rank == 0:
    total = integral
    for source in range(1, p):
        # the loop body is cut off in the excerpt; receiving each
        # worker's partial integral is the usual mpi4py pattern here
        total += comm.recv(source=source)
else:
    comm.send(integral, dest=0)  # assumed continuation: workers send their share
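A sketch of how a script like this is usually launched, assuming mpi4py is installed and the file is named, hypothetically, trapezoid.py:

$ mpirun -np 4 python3 trapezoid.py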
06.10.2019
": executable file not found in $PATH
0a2091b63bc5de710238fadc68ba3f5e0f9af8800ec7f76fd52a84c49a1ab0a7
Listing 3 shows that I do have a working container, so I'll deal with the network namespace error now.
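A sketch of commands commonly used to inspect a container's network namespace from the host; the ID prefix is taken from the output above, and the nsenter step is an assumption about how one might proceed:

$ PID=$(docker inspect --format '{{.State.Pid}}' 0a2091b63bc5)
$ sudo nsenter --target $PID --net ip addr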
29.09.2020
Building a Docker Container
$ docker build -t dockly .
Sending build context to Docker daemon 16.52MB
Step 1/9 : FROM node:8-alpine
Get https://registry-1.docker.io/v2/library/node/manifests/8-alpine
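The output shows a nine-step build starting from node:8-alpine. A sketch of the sort of Dockerfile behind such a build; apart from the base image, every step here is an assumption:

FROM node:8-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]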
05.11.2018
One way to share HPC systems among several users is to use a software tool called a resource manager. Slurm, probably the most common job scheduler in use today, is open source, scalable, and easy ...
p100 up infinite 4/9/3/16 node[212-213,215-218,220-229]
This example lists the status, time limit, node information, and node list of the p100 partition; the NODES(A/I/O/T) column counts allocated, idle, other, and total nodes.
sbatch
To submit a batch serial job to Slurm, use the sbatch command.
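A minimal sketch of a batch serial job script and its submission; the script contents are assumptions, with only the partition name taken from the listing:

$ cat serial_job.sh
#!/bin/bash
#SBATCH --job-name=serial_test
#SBATCH --partition=p100
#SBATCH --ntasks=1
#SBATCH --time=00:10:00
./my_app
$ sbatch serial_job.sh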
13.12.2018
Listing 2: sinfo
$ sinfo -s
PARTITION AVAIL TIMELIMIT NODES(A/I/O/T) NODELIST
p100 up infinite 4/9/3/16 node[212-213,215-218,220-229]