22.06.2012
1-12:00:00
20   144400   1-16:06:40
21   160000   1-20:26:40
22   176400   2-01:00:00
23   193600   2-05:46:40
24
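The rows above pair a runtime given in seconds with the same runtime written in D-HH:MM:SS walltime notation (for example, 144400 seconds is 1 day, 16 hours, 6 minutes, and 40 seconds). As a minimal sketch, not taken from the source article, the conversion can be done in Python like this:

def seconds_to_walltime(total_seconds):
    # Convert seconds to D-HH:MM:SS walltime notation.
    days, rem = divmod(total_seconds, 86400)
    hours, rem = divmod(rem, 3600)
    minutes, seconds = divmod(rem, 60)
    return "%d-%02d:%02d:%02d" % (days, hours, minutes, seconds)

# Reproduces the pairings visible in the table fragment above.
for secs in (144400, 160000, 176400, 193600):
    print(secs, seconds_to_walltime(secs))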
16.10.2012
6), and start stream blocking (line 7), which executes the command and waits for the response. Now, write the output to a variable (lines 9-12), close the stream (line 14), and send the response
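The numbered listing that these line references point to is not included in this excerpt. Purely as a sketch of the same pattern, and assuming an SSH library such as Paramiko (an assumption; the host, user, and command below are hypothetical), the sequence is: execute the command on a blocking channel, read the output into a variable, then close the stream.

import paramiko

host, command = "node01", "uptime"   # hypothetical host and command

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(host, username="admin")

# Open a session and execute the command; blocking mode waits for the response.
channel = client.get_transport().open_session()
channel.exec_command(command)
channel.setblocking(1)

# Write the output to a variable.
output = b""
while True:
    data = channel.recv(4096)
    if not data:
        break
    output += data

# Close the stream, then use the response.
channel.close()
client.close()
print(output.decode())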
20.06.2012
/local
53G 29G 22G 57% /vnfs/usr/local
From the output, it can be seen that only 217MB of memory is used on the compute node for storing the local OS. Given that you can easily and inexpensively buy 8GB
18.12.2013
__ == "__main__":
12
13 local_dict = {'x':0, 'y':0, 'z':0,'value':0.0};
14 my_record = []; # define list
15
16 counter_limit = 2000;
17
18 f = open('test.bin', 'r+')
19 for counter in range(1,counter
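The body of the loop is cut off in the excerpt. Purely as an illustration, and assuming a record layout of three integers plus a double (an assumption, not taken from the article), a loop like this could read fixed-size binary records from test.bin into a list of dictionaries with Python's struct module:

import struct

# Assumed record layout: three 4-byte ints (x, y, z) and one 8-byte double (value).
record_format = 'iiid'
record_size = struct.calcsize(record_format)

# Write a small test.bin first so the sketch is self-contained.
with open('test.bin', 'wb') as f:
    for i in range(10):
        f.write(struct.pack(record_format, i, i + 1, i + 2, 0.5 * i))

my_record = []
counter_limit = 2000

with open('test.bin', 'rb') as f:
    for counter in range(1, counter_limit):
        chunk = f.read(record_size)
        if len(chunk) < record_size:          # stop at end of file
            break
        x, y, z, value = struct.unpack(record_format, chunk)
        my_record.append({'x': x, 'y': y, 'z': z, 'value': value})

print("read %d records" % len(my_record))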
21.08.2012
### 8/5/2012

### Set the job name
#PBS -N mpi_pi_fortran90

### Run in the queue named "batch"
#PBS -q batch

### Specify the number of cpus for your job. This example
25.10.2011
-256-cbc;
}
policy pfs2-aes256-sha1 {
    perfect-forward-secrecy {
        keys group2;
    }
    proposals aes256-sha1;
}
vpn racoonvpn {
    bind
17.01.2023
47 k
pixman x86_64 0.38.4-2.el8 appstream 256 k
slurm-contribs-ohpc x86_64 22.05.2-14.1.ohpc.2.6 OpenHPC-updates 22 k
slurm
05.11.2018
# for your environment.
#
#
# slurm.conf file generated by configurator.html.
#
# See the slurm.conf man page for more information.
#
ClusterName=compute-cluster
ControlMachine=slurm-ctrl
13.12.2018
# slurm.conf file generated by configurator.html.
#
# See the slurm.conf man page for more information.
#
ClusterName=compute-cluster
ControlMachine=slurm-ctrl
#
SlurmUser=slurm
Slurmctld