Enterprise job scheduling with schedulix

Computing with a Plan

A Practical Example

Many companies generate management reports and back up their databases on a daily basis. The actions this involves, such as aggregating the data required for each report, are always related – at least in terms of time, and often in terms of content. You can imagine creating a PDF from the aggregated data that the management can then access via a link published on the intranet. At the same time, you would want to avoid database activity while you are backing up the database.

To model report aggregation, for example, analyzing the task reveals three independent steps. In the hierarchical modeling concept used by schedulix, these become children of the REPORT master job in parent-child relations. To map this in schedulix, you need to map numeric exit codes to logical states. If you do not define such a mapping yourself, schedulix falls back to the classical Unix interpretation by default (i.e., an exit code of 0 means success, and all other codes are evaluated as failures).
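The idea behind such a mapping can be sketched in a few lines of plain Python – this is illustrative code, not schedulix itself, and the state names and code assignments are invented for the example:

```python
import subprocess

# Hypothetical mapping of numeric exit codes to logical states,
# analogous to an Exit State Mapping in schedulix. The state names
# and code assignments here are illustrative, not schedulix defaults.
EXIT_STATE_MAPPING = {0: "SUCCESS", 1: "WARNING"}

def run_step(cmd):
    """Run one job step and translate its exit code into a logical state."""
    result = subprocess.run(cmd)
    # Any unmapped exit code is treated as FAILURE -- the classical
    # Unix interpretation that also serves as the schedulix default.
    return EXIT_STATE_MAPPING.get(result.returncode, "FAILURE")

run_step(["true"])              # exit code 0 -> "SUCCESS"
run_step(["false"])             # exit code 1 -> "WARNING"
run_step(["sh", "-c", "exit 2"])  # unmapped  -> "FAILURE"
```

With a mapping like this, a step can signal a warning without aborting the whole workflow, which is exactly what a purely binary success/failure interpretation cannot express.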

Working with logical states helps you document the workflow. The modules that help you do this in the GUI are Exit State Definition and Exit State Mappings. Additionally, you need to define the environment in which the processes need to be executed in the Environments module or menu. For example, Server_1 could be responsible for reporting, while the database system runs on Server_2.

Where you can benefit from the GUI is in visualization of dependencies with arrows, and the option to change dependencies or branches in the job hierarchy with point and click (Figure 3). Another thing in favor of the GUI is that it visualizes the workflow's progress and clearly shows you which step is currently being performed, how long it has been running, and the extent to which the previous steps were successful.

Figure 3: Visualization of dependencies.


To understand how schedulix works, it is important to distinguish the definition layer from the execution layer (Figure 4). The definition layer comprises job definitions, which in turn state resource requirements. In schedulix, these requirements do not reference specific resources but named resources, which users then need to define in the resource definitions.

Figure 4: The relationship between the definition and execution layers.

A submit operation turns a job definition into a job (i.e., an instance of the associated job definition). By instantiating a named resource in an execution environment (job server), you create resources. When a job (a submitted job definition) is assigned by the scheduling system for execution in one or multiple execution environments, this creates a link (resource allocation) between jobs and resources (instances of named resources).
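This relationship between the two layers can be illustrated with a small conceptual sketch – plain Python, not schedulix code, with all class and instance names invented for the example:

```python
# Conceptual sketch of the schedulix layers (illustrative only):
# the definition layer names abstract things; the execution layer
# instantiates them and links jobs to resources.
class NamedResource:              # definition layer: an abstract, named resource
    def __init__(self, name):
        self.name = name

class Resource:                   # execution layer: a named resource
    def __init__(self, named, server):  # instantiated on a concrete job server
        self.named, self.server = named, server

class Job:                        # execution layer: instance of a job definition
    def __init__(self, definition):
        self.definition, self.allocations = definition, []
    def allocate(self, resources):
        # The scheduler links the job only to resources whose named
        # resource matches one of the definition's requirements.
        self.allocations = [r for r in resources
                            if r.named in self.definition.requires]

class JobDefinition:              # definition layer: requires named resources
    def __init__(self, name, requires):
        self.name, self.requires = name, requires
    def submit(self):
        return Job(self)          # a submit turns the definition into a job

db = NamedResource("DATABASE")
job = JobDefinition("BACKUP", [db]).submit()
job.allocate([Resource(db, "Server_2"),
              Resource(NamedResource("CPU"), "Server_1")])
# job.allocations now holds only the DATABASE resource on Server_2
```

The key point the sketch captures is that requirements and allocations live on different layers: the job definition only ever names what it needs, and the concrete link to a resource arises at execution time.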


Traditional scheduling systems are typically oriented toward a defined daily workflow. Because the job scheduling system needs to recompute this daily workflow after every interruption, the schedulix vendor considers this mode of working inefficient. Instead, schedulix relies on a dynamic architecture that continuously recomputes the consequences of the current boundary conditions.

This approach gives schedulix more options for adapting the system behavior to changing requirements and for using existing resources efficiently. The vendor, IndependIT, is quite obviously working on a solution for the somewhat outmoded design of the web GUI.

Schedulix and its commercial sibling BICsuite require thorough and thus time-consuming familiarization with their approach and modes of operation – something they have in common with all other enterprise scheduling solutions. Small to medium-sized enterprises that have put themselves under pressure with scripts and DIY process controls will need to evaluate for themselves whether the achievable benefits offset the overhead of introducing schedulix (open source, but only for pure Linux environments) or BICsuite (multiplatform, but at a cost). The economic logic will depend on whether or not your company has budgeted for the expense of job scheduling and monitoring.
