30%
07.01.2013
OS
name: centos
summary: CentOS installation with BoxGrinder
os:
  name: centos
  version: 6
hardware:
  partitions:
    "/":
      size: 4
    "/home":
      size: 1
30%
18.02.2018
  public_key = "${file("${var.ssh_pub_key}")}"
}

resource "digitalocean_droplet" "mywebapp" {
  image  = "docker-16-04"
  name   = "guest"
  region = "fra1"
  size   = "512mb"
  ssh
30%
09.10.2017
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('prosnapshot')
bucket.download_file('hello.txt', 'hello-down.txt')
Figure 2 ... Data on AWS S3 is not necessarily stuck there. If you want your data back, you can siphon it out all at once with a little Python pump. ... Data Exchange with AWS S3 ... Getting data from AWS S3 via Python scripts
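The listing above fetches a single object; to siphon a bucket out "all at once," as the teaser puts it, you can iterate over the bucket's object collection. A minimal sketch with boto3, assuming the same prosnapshot bucket and flat object keys (keys containing slashes would need local directories created first):

import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('prosnapshot')

# Download every object, reusing the S3 key as the local filename.
for obj in bucket.objects.all():
    bucket.download_file(obj.key, obj.key)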
30%
30.01.2020
test: (groupid=0, jobs=1): err= 0: pid=7055: Sat Oct 12 19:09:53 2019
write: IOPS=34.8k, BW=136MiB/s (143MB/s)(9.97GiB/75084msec); 0 zone resets
[ ... ]
Run status group 0 (all jobs):
WRITE: bw=136MiB/s (143MB
30%
02.02.2021
2020-11-09T15:38:03.586Z BindFamily IPv4 Mapped IPv6
Listing 2: docker images
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
endlessh latest 6fc
29%
30.11.2020
cri-o-${CRIO_VERSION}
The following NEW packages will be installed
cri-o-1.17
0 to upgrade, 1 to newly install, 0 to remove and 0 not to upgrade.
Need to get 17.3 MB of archives.
After this operation
29%
11.02.2016
. Level 5 displays whether a file has changed; level 6, however, lists each processed file:
# rdiff-backup -v5 /etc/ /mnt/backup
[...]
Incrementing mirror file /mnt/backup
Processing changed file X11
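The command above runs at verbosity level 5; to see every processed file, as described, you would raise the level to 6. A minimal Python sketch (a hypothetical wrapper, assuming rdiff-backup is installed and the same source and mirror paths as above):

import subprocess

# Same source and mirror paths as in the example above; adjust as needed.
source = "/etc/"
mirror = "/mnt/backup"

# -v6 lists every processed file, not only the changes reported at -v5;
# rdiff-backup's output streams straight to the terminal.
subprocess.run(["rdiff-backup", "-v6", source, mirror], check=True)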
29%
19.11.2019
(f=1): [w(1)][100.0%][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=7055: Sat Oct 12 19:09:53 2019
write: IOPS=34.8k, BW=136MiB/s (143MB/s)(9.97GiB/75084msec); 0 zone resets
[ ... ]
Run
29%
25.09.2013
)                     8 (4)    7.2    0.9
Nehalem-EP (2009)     8 (4)   32      4
Westmere-EP (2010)   12 (6)   42      3.5
Westmere-EP (2010)    8 (4)   42
29%
25.03.2020
  7        1      56008 loop1
  7        2      56184 loop2
  7        3      91264 loop3
259        0  244198584 nvme0n1
  8        0  488386584 sda
  8        1       1024 sda1
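The columns in this /proc/partitions excerpt are major number, minor number, size in 1 KiB blocks, and device name. A short parsing sketch (assuming a standard Linux /proc layout; the header row and the blank line after it are skipped):

# Print each block device from /proc/partitions with its size.
# Columns: major, minor, #blocks (1 KiB each), name.
with open("/proc/partitions") as f:
    lines = f.read().splitlines()

for line in lines[2:]:  # skip the header row and the blank line after it
    major, minor, blocks, name = line.split()
    print(f"{name}: {int(blocks) / 1024**2:.1f} GiB")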