Zuul 3, a modern solution for CI/CD

Continuous Progress


Each Zuul installation should have at least one Executor, the component responsible for preparing and running jobs. Because the definition of the verification process can be scattered over multiple YAML files, the Scheduler has to assemble it and dispatch it through Gearman to an Executor for execution. One Executor can handle a few dozen simultaneous jobs; if you see performance issues, replicating the Executor should be your first consideration.
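The scattered YAML definitions mentioned above typically live in a `.zuul.yaml` file in each project's repository. A minimal sketch might look like the following (the job and playbook names are illustrative, not taken from the article):

```yaml
# .zuul.yaml -- read by the Scheduler, which dispatches the job to an Executor
- job:
    name: unit-tests                 # illustrative job name
    parent: base
    description: Run the project's unit tests.
    run: playbooks/unit-tests.yaml   # Ansible playbook executed on the test node

- project:
    check:
      jobs:
        - unit-tests                 # attach the job to the check pipeline
```

The Executor then runs the referenced Ansible playbook against a node it obtains via Nodepool.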

Zuul is designed to land only stable changes in production, so it sometimes has to merge independent changes speculatively when they are proposed in parallel. Zuul performs a number of Git operations in the course of its work, for which it uses the Merger service; one Merger comes built in with each Executor. However, if the number of merges to be performed grows, you can move Merger functionality onto separate hosts. Mergers, like Executors, can run in any quantity, depending on a project's needs.
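On a standalone Merger host, the relevant part of `zuul.conf` is small. A sketch, with the host name, path, and Git identity values chosen for illustration:

```ini
# /etc/zuul/zuul.conf on a standalone Merger host (values illustrative)
[gearman]
server=scheduler.example.com        # Mergers receive their work through Gearman

[merger]
git_dir=/var/lib/zuul/merger-git    # where speculative merge states are prepared
git_user_name=Zuul Merger           # identity used for the merge commits
git_user_email=zuul@example.com
```

Adding more Merger hosts is then just a matter of repeating this configuration elsewhere and pointing it at the same Gearman server.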


The Web component is a simple dashboard that displays the current status of Zuul's tasks. Here, you find tenants, projects, available jobs, and pipelines, and you can track logs in real time. The dashboard also shows a list of all available nodes and providers defined in Nodepool, a system for managing test node resources. If an additional reporter is present in the pipeline declaration, you can also find a history of all pipeline executions there. Because Zuul Web is read-only, you can run as many instances of this component as you want; in practice, though, a single instance rarely needs replication, regardless of load.
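The nodes and providers the dashboard lists come from Nodepool's own configuration file. A minimal `nodepool.yaml` sketch, with the label, provider, and image names invented for illustration:

```yaml
# nodepool.yaml -- defines the labels and providers shown in the Zuul dashboard
labels:
  - name: ubuntu-focal          # illustrative label a Zuul nodeset can request

providers:
  - name: my-cloud              # illustrative OpenStack provider
    driver: openstack
    cloud: mycloud              # refers to an entry in clouds.yaml
    pools:
      - name: main
        max-servers: 10         # cap on simultaneously provisioned test nodes
        labels:
          - name: ubuntu-focal
            cloud-image: focal  # image defined in the provider's image section
```

Jobs request nodes by label; Nodepool decides which provider and pool actually supplies them.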

Web servers need to be able to connect to the Gearman server (usually the Scheduler host). If SQL reports are generated, a web server also needs to be able to connect to the database to which Zuul reports. If a GitHub connection is configured, web servers need to be reachable by GitHub, so they can receive webhook notifications.
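These connectivity requirements map directly onto `zuul.conf`. A sketch, with host names and the database URI chosen for illustration:

```ini
# /etc/zuul/zuul.conf on a Zuul Web host (values illustrative)
[gearman]
server=scheduler.example.com   # Zuul Web pulls status from the Scheduler's Gearman

[web]
listen_address=0.0.0.0         # must be reachable by GitHub for webhook delivery
port=9000

[connection database]
driver=sql                     # SQL reporter for the pipeline execution history
dburi=mysql+pymysql://zuul:secret@db.example.com/zuul
```

The same `[connection database]` section is what the pipeline's SQL reporter writes to, so the dashboard's build history and the reports come from one place.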


Zookeeper was originally developed at Yahoo to coordinate processes running on big data clusters, storing their status in logfiles kept locally on the Zookeeper servers; these servers communicate with client machines to provide them the information. Zookeeper was created to address the coordination problems that arise when deploying distributed big data applications, and in a Zuul installation it serves as the communication channel between Zuul, Nodepool, and its nodes. Whenever a new node is needed, a request goes to one of the Zookeeper instances; Nodepool, watching Zookeeper, then provisions the host for execution.
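For this request-and-fulfill exchange to work, Zuul and Nodepool must both point at the same Zookeeper cluster. On the Zuul side, that is a single section in `zuul.conf` (host names illustrative):

```ini
# /etc/zuul/zuul.conf -- the cluster Zuul files its node requests with
[zookeeper]
hosts=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
```

Nodepool lists the same servers in its own `nodepool.yaml` (under its `zookeeper-servers` setting), which is how it sees and fulfills the requests Zuul writes into the cluster.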
