09.01.2013
log 127.0.0.1 local0
maxconn 4000
daemon
uid 99
gid 99

defaults
log global
mode http
option httplog
option dontlognull
timeout server 5s
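A quick, hedged way to sanity-check a configuration like this (the path below is an assumption; use wherever your config actually lives) is HAProxy's built-in check mode:

haproxy -c -f /etc/haproxy/haproxy.cfg

With -c, HAProxy parses the file, reports any errors, and exits without starting the proxy.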
18.08.2021
The command I used for Darshan to gather I/O statistics on the application was:
env LD_PRELOAD=/home/laytonjb/bin/darshan-3.3.1/lib/libdarshan.so ./ex1
I copied the Darshan file from the log location
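From there, a hedged next step is Darshan's darshan-parser utility, which converts the binary log into readable text (the log file name below is a placeholder, not the actual file produced by this run):

darshan-parser --all ex1.darshan > ex1_io_report.txt

The --all option asks darshan-parser for its full set of report sections rather than just the default counter dump.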
18.12.2013
(One-by-One)
#include <stdio.h>

/* Our structure */
struct rec
{
    int x,y,z;
    float value;
};

int main()
{
    int counter;
    struct rec my_record;
    int counter_limit;

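    /* The excerpt ends here. What follows is a minimal, hypothetical
       completion that writes the records one at a time with fwrite();
       the file name, record count, and field values are all assumptions,
       not the article's actual code. */
    FILE *fp;

    counter_limit = 100;                      /* assumed number of records */

    fp = fopen("test.bin", "wb");             /* hypothetical output file */
    if (fp == NULL) {
        printf("Cannot open output file\n");
        return 1;
    }

    for (counter = 0; counter < counter_limit; counter++) {
        my_record.x = counter;                /* sample values */
        my_record.y = counter;
        my_record.z = counter;
        my_record.value = 10.0f * (float)counter;
        fwrite(&my_record, sizeof(struct rec), 1, fp);  /* one record per call */
    }

    fclose(fp);
    return 0;
}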
05.08.2024
repositories. The most notable distribution that had picked up on this nifty tool was openSUSE [3]. Today, you can install it either with the usual git clone command or by going through a somewhat easier approach
20.10.2013
of the drives (more on that later).
Smartmontools is compatible with all S.M.A.R.T. features and supports ATA/ATAPI/SATA-3 to -8 disks and SCSI disks and tape devices. It also supports the major Linux RAID cards
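As a quick, hedged example of everyday use (the device name /dev/sda is an assumption; substitute your own drive):

smartctl -H /dev/sda
smartctl -a /dev/sda

The -H option prints the drive's overall health self-assessment, and -a dumps all available SMART information, including the attribute table and the error and self-test logs.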
31.10.2025
[3] to solving the questions. Although not the most efficient or fastest method for solving the problem, it does illustrate how one can use OpenMP to parallelize applications.
I will be using
04.04.2023
TLS use case that we are focusing on in this article. Note that although a SPIFFE ID looks very much like a URI, it has no meaning in the DNS sense, and plays no role in establishing the initial layer 4 TCP connection.
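For illustration, a SPIFFE ID takes the form spiffe://<trust domain>/<workload path>; the trust domain and path below are hypothetical:

spiffe://example.org/billing/payments-api

The trust domain (example.org) names the identity authority that issued the ID, and the path identifies a particular workload within it; neither part is ever resolved through DNS.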
30.01.2020
Zhao through his summer internship at IBM Research. The dm-cache module was integrated into the Linux kernel tree as of version 3.9. It is an all-purpose caching module and is written and designed to run
31.10.2025
will provide sub-par performance. A deeper treatment of these issues can be found in a recent article called "Will HPC Work in the Cloud?" [3].
Finally, any remote computation scheme needs to address
21.11.2012
of the solution matrix, u
(lines 119-123), (2) the iteration loop (lines 137-142), and (3) the update of the solution (lines 146-151). Loops are wonderful places for parallelization – they can use a great deal
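To make the pattern concrete, below is a minimal, hypothetical C sketch (not the article's listing, whose line numbers are cited above) with the same three loop nests decorated with OpenMP directives; the grid size, iteration count, and boundary condition are assumptions:

#include <stdio.h>

#define N 128          /* hypothetical grid size */
#define ITERS 1000     /* hypothetical iteration count */

static double u[N][N], unew[N][N];

int main(void)
{
    /* (1) initialize the solution matrix in parallel */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            u[i][j] = (i == 0) ? 1.0 : 0.0;   /* fixed value along one edge */

    for (int iter = 0; iter < ITERS; iter++) {
        /* (2) the iteration loop: each interior point depends only on
           the previous sweep, so rows can be computed independently */
        #pragma omp parallel for
        for (int i = 1; i < N - 1; i++)
            for (int j = 1; j < N - 1; j++)
                unew[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] +
                                     u[i][j-1] + u[i][j+1]);

        /* (3) update the solution from the new values */
        #pragma omp parallel for
        for (int i = 1; i < N - 1; i++)
            for (int j = 1; j < N - 1; j++)
                u[i][j] = unew[i][j];
    }

    printf("center value after %d sweeps: %f\n", ITERS, u[N/2][N/2]);
    return 0;
}

Compiled with gcc -fopenmp, each #pragma omp parallel for directive splits the iterations of the loop that follows it across the available threads.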