Automated orchestration of a horizontally scalable build pipeline

Automated Jenkins CI

Ansible Modules and the Shell

The attentive reader might notice that the playbooks use many shell commands rather than Ansible modules. This is intentional: exposing the commands behind the modules aids the learning process and helps readers who might want to try some of the steps manually, without the aid of Ansible.
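To illustrate the contrast between the two styles (a sketch for illustration only, not a task taken from the article's playbooks), the same step can be expressed either way:

```yaml
# Illustrative only: ensuring the Docker service is running.

# With a raw shell command (shows exactly what happens, but Ansible
# cannot tell whether anything actually changed):
- name: Start Docker (shell)
  shell: systemctl start docker

# With the dedicated module (idempotent and self-documenting):
- name: Start Docker (module)
  service:
    name: docker
    state: started
```

The shell version makes the underlying command visible, which is exactly why it is favored in this exercise.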

Bootstrapping Jenkins

At this stage, it is worth mentioning an important point about bootstrapping the Jenkins Master Docker container. To handle the Jenkins command-line and API commands used later in the deployment, a small set of Jenkins plugins has to be installed from the outset. This step normally happens when Jenkins is first launched: it requests the initial token/key for activation, offers to install basic plugins, and sets up an initial user account.

To accomplish this in an automated way, a baseline Jenkins configuration is cloned from the jenkins_home GitHub repository at https://github.com/blissnd/jenkins_home . The git clone command is executed toward the beginning of the Jenkins container bootstrapping process in the corresponding Ansible playbook.
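In playbook terms, that step amounts to a task along these lines (a sketch; the destination directory is an assumed example, not necessarily the path the actual playbook uses):

```yaml
# Sketch: clone the baseline Jenkins configuration into the Jenkins home.
- name: Clone the baseline Jenkins configuration
  git:
    repo: https://github.com/blissnd/jenkins_home
    dest: /var/jenkins_home   # assumed location of JENKINS_HOME
```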

You do not have to clone this repository manually! This happens automatically during the Ansible playbook runs; the information and URL above are given purely for reference.

One point of potential confusion that is worth highlighting at this stage is the somewhat complex configuration of the SSH keys used by the Jenkins master to communicate with and bootstrap the Jenkins slaves. Jenkins has a quirk regarding storage of SSH keys.

Although the SSH slave configuration GUI appears to allow the use of an SSH private key file, and even presents the option to point to an existing .ssh directory, this does not appear to work as intended, at least not in the Jenkins Docker image used here. Most users will likely paste the SSH key ASCII text directly, which works fine and avoids the issue during manual deployments. That option is unfortunately not available in the automated Ansible process, which requires a programmatic or scripted solution via the command-line interface or REST API.

After some investigation, it turned out that Jenkins encrypts the SSH key, possibly with its own master key, and then stores that result inside the credentials.xml file in the Jenkins home directory. Therefore, for automated deployment of the Jenkins Docker slaves, a method had to be found (1) to use the command-line interface to get the Jenkins token for use in API calls and (2) to pass in a Groovy script to encrypt the private SSH key before storing it in credentials.xml .
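For step (2), a hedged sketch: Jenkins exposes its secret-handling classes to Groovy, so a snippet along the following lines (the key path is a placeholder, not taken from the article's playbooks) can produce the same encrypted form that Jenkins itself writes into credentials.xml:

```groovy
// Sketch only: run inside Jenkins (script console, or passed in via the CLI).
// Encrypts a private key with the Jenkins master key, yielding the
// ciphertext format used in credentials.xml.
import hudson.util.Secret

def keyText = new File('/var/jenkins_home/.ssh/id_rsa').text  // placeholder path
println(Secret.fromString(keyText).getEncryptedValue())
```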

Putting It All Together

Because of the many hard-to-anticipate variables (timeouts, potential SSH problems, and so on), you might have to run

ansible-playbook main_deployment.yml

several times before it runs successfully through to the end. Normally, running the same Ansible deployment again will resolve such problems and allow it to move on.
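If you prefer not to rerun by hand, the retry can be scripted. The wrapper below is a minimal sketch (the retry count of five is an arbitrary choice):

```shell
#!/bin/sh
# Minimal retry wrapper: rerun the given command until it exits 0,
# up to a maximum number of attempts.
retry() {
  max=$1; shift
  n=1
  until "$@"; do
    if [ "$n" -ge "$max" ]; then
      echo "giving up after $n attempts" >&2
      return 1
    fi
    echo "attempt $n failed; retrying" >&2
    n=$((n + 1))
  done
}
```

Invoked as `retry 5 ansible-playbook main_deployment.yml`, it reruns the playbook until it succeeds or the attempts are exhausted.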

Note: Do not rerun just because you see errors in red; some can be safely ignored. If the Ansible run continues and reaches the end, everything should be running fine, even if red errors were seen along the way.

At the end of the final (successful) run, you should see a result as in Figure 5. Now point your host browser to http://192.168.2.10:8080/ , which should be available both from outside the Docker container and outside the VM (i.e., from the host machine).

Figure 5: End of Ansible results after a successful run.

You should see the Jenkins login screen, where you can log in with username jenkins and password jenkins . Clicking on Build Executor Status on the left should show the master and three slave machines up and running (Figure 6).

Figure 6: Status of the master and slaves.

The red warnings about zero available swap space can be ignored if you are using SSD storage. With conventional hard drives, swap space can simply be added to each of the VMs if desired, which will make the swap space errors disappear from the page.
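If you do want the warnings gone, a small swap file on each VM is enough. A typical sequence looks like this (a sketch; the 1GB size is arbitrary, and the commands must be run as root inside each VM):

```shell
fallocate -l 1G /swapfile   # reserve 1GB (use dd on filesystems without fallocate)
chmod 600 /swapfile         # swap files must not be readable by other users
mkswap /swapfile            # format the file as swap
swapon /swapfile            # enable it immediately
# To keep the swap file across reboots, append to /etc/fstab:
#   /swapfile none swap sw 0 0
```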

A default test job is included on the main Jenkins page as an example. Click on the job (test1) and select Configure on the left. You will see that this job is configured to run on the slave machine named slave2 (Figure 7). As the screenshot shows, the setting that controls where the job runs is Restrict where this project can be run.

Figure 7: Slave configuration inside the job configuration.

Now go back to the main Jenkins page (http://192.168.2.10:8080/ ) and click on the job test1 , then Build Now on the left to see the job running (a new blue circle appears on the left). Click on this topmost blue circle and then on Console Output on the left. You will see the job has run successfully on slave2 (Listing 3).

Listing 3: Console Output
Started by user Mr Charles Jenkins
Building remotely on slave2 in workspace /home/jenkins/workspace/test1
[test1] $ /bin/sh -xe /tmp/jenkins5029989702823914095.sh
+ echo hello
hello
Finished: SUCCESS
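The same build can also be triggered without the GUI. Assuming the default jenkins/jenkins credentials above, something along these lines should work (a sketch; the crumb step is included because CSRF protection is enabled by default in recent Jenkins versions):

```shell
# Fetch a CSRF crumb, then trigger the test1 job over the REST API.
CRUMB=$(curl -s -u jenkins:jenkins \
  'http://192.168.2.10:8080/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)')
curl -s -X POST -u jenkins:jenkins -H "$CRUMB" \
  'http://192.168.2.10:8080/job/test1/build'
```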

Now if you ssh to VM-2 with Vagrant (which must be done from the virtualbox_vagrant_ansible directory),

vagrant ssh VM-2

the following command will show that both VMs are active within the Docker swarm (Figure 8):

docker node ls
Figure 8: Docker swarm node list.

Similarly, the following command shows the four slave replicas within the Docker swarm (Figure 9):

docker service ls
Figure 9: Docker swarm container list.

If you ssh to each virtual machine and do a docker ps on each, you will see the Jenkins slave Docker containers running across all virtual machine nodes active inside the Docker swarm's routing mesh (Figures 10 and 11).

Figure 10: Docker process list on VM-1.
Figure 11: Docker process list on VM-2.

Note that the VM-1 Docker process list includes the master Jenkins container. Other than that, Docker Swarm has distributed the Jenkins slave containers uniformly over the active virtual machine nodes.
