17.09.2013
), either coming from or going to the hardware or in the software stack. One key technology is ECC memory (error-correcting code memory).
The standard ECC memory used in systems today can detect and correct
08.04.2014
return x
The computed results are stored on disk in the JOBLIB directory below the directory defined by the cachedir parameter. Here again, each memoized function has its own subdirectory, which, among
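A minimal sketch of the on-disk memoization described above, assuming the joblib library is installed; the temporary cache directory stands in for whatever cachedir would be in a real setup:

```python
# Sketch of joblib's on-disk memoization (joblib assumed installed;
# the temporary directory here stands in for a real cachedir).
import tempfile
from joblib import Memory

cache_dir = tempfile.mkdtemp()           # stand-in for the cachedir parameter
memory = Memory(cache_dir, verbose=0)    # results land below <cache_dir>/joblib

@memory.cache
def square(x):
    # Runs only on a cache miss; repeat calls read the pickled
    # result back from the function's own subdirectory.
    return x * x

square(3)   # computed and written to disk
square(3)   # served from the on-disk cache
```

Repeat calls with the same arguments never re-execute the function body; joblib keys the cache on the function's module, name, and argument values.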
02.04.2013
describe later.
To enable strong authentication in your Google account, go to https://accounts.google.com/SmsAuthConfig
and enter the phone number of your smartphone. Google sends a link to this number
30.03.2012
Neighborhood Watch
In these darker times, when you set a freshly baked server live on the Internet, it’s critical to know how it’s faring. Knowing how much bandwidth your server is using and how much email
10.07.2012
class when I’m working on my MCSE?” My answer was a lengthy explanation of the origins of Unix, DOS, and Windows that ended with, “… and, because real environments are mixed. Few companies, of any size
19.05.2014
with my /home/layton directory on my local system (host = desktop). I also access an HPC system that has its own /home/jlayton directory (the login node is login1). On the HPC system I only keep some
09.01.2019
+MPI), which has been a prevalent programming technique for quite a while. Classically, each core assigned to an application is assigned an MPI rank and communicates over whatever network exists between
17.07.2013
Manager, an ApplicationMaster, application Containers, and NodeManagers. The ResourceManager is a pure scheduler. Its sole purpose is to manage available resources among multiple applications on the cluster
16.10.2012
The problem faced by many system administrators is how to use a command-line scripting language, other than shell scripts, for automation.
PHP, now in its fifth major version, is one of the most
18.08.2021
the training, which seems to be counterintuitive because DL training involves repeatedly going through the dataset in a different order for each epoch. Two things affect this: (1) TensorFlow has an efficient