Lead Image © Kheng Ho Toh, 123rf.com

Optimization and standardization of PowerShell scripts

Beautiful Code

Article from ADMIN 48/2018
When PowerShell one-liners become full-blown scripts that have an effect throughout the enterprise, IT managers need to review their software development strategies. We look at PowerShell best practices for script design, notation, error handling, and documentation.

The use of scripts in the work environment has changed considerably over the past decade. Initially, scripts handled batch processing with rudimentary control structures, calling functions and executables in response to events and evaluating their return values. The functional scope of the scripting languages themselves was therefore strongly focused on processing character strings.

The languages of the last century (e.g., Perl, Awk, and Bash shell scripting) are excellent tools for analyzing logfiles or the results of a command with regular expressions. PowerShell, on the other hand, focuses far more on the interfaces of server services, systems, and processes, with no need to detour through return values.

Another change in scripting relates to the relevance and design of a script app: Before PowerShell, scripts were typically developed by administrators to support their own work. Because they regarded these applications as something fairly personal, the applied standards suffered: In fact, there weren't any. The usual principles back then were:

  • Quick and dirty: Only the function is important.
  • Documentation is superfluous: After all, I wrote the script.

The lack of documentation can have negative consequences for the author, though, leading to cost-intensive delays in migration projects three years down the road if the admin no longer understands the code or its purpose.

Basic Principles for Business-Critical Scripts

The significance of PowerShell is best described as "enterprise scripting." On many Microsoft servers, PowerShell scripts are the only way to ensure comprehensive management – Exchange and Azure Active Directory being prime examples. The script thus gains business-critical relevance. When you create a script, you need to be aware of how its functionality is maintained in the server architecture when faced with staff changes, restructuring, and version changes.

The central principles are therefore maintainability, modularity (outsourcing of components), reusability, and detailed documentation. The following points should be the focus of script creation:

  • Standardization of the inner and outer structure of a script
  • Modularization through outsourcing of components
  • Naming conventions for variables and functions
  • Exception handling
  • Definition of uniform exit codes
  • Templates for scripts and functions
  • Standardized documentation of the code
  • Rules for optimal flow control

Additionally, it is worth talking to your "scripting officer" to ensure compliance with corporate policy. Creating company-wide script repositories also helps prevent redundancy during development.

Building Stable Script Frameworks

A uniform, relative path structure allows an application to be ported to other systems. Absolute paths should be avoided because adapting them means unnecessary overhead. Creating a subfolder structure as shown in Figure 1 has proven successful: Below the home folder for all script apps are the individual applications (e.g., MyApp1 and MyApp2). Each application folder contains only the main processing file, which uses the same name as the application folder (e.g., MyApp1.ps1). The application folder can be determined dynamically from within the PowerShell script:

$StrScriptFolder = ($MyInvocation.MyCommand).Path | Split-Path -Parent

Figure 1: Recommended subfolder structure for PowerShell scripts.

The relative structure can then be represented easily in the code:

$StrOutputFolder = $StrScriptFolder + "\output"; [...]
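Alternatively, Join-Path builds these relative paths a little more robustly than string concatenation. The following is a minimal sketch; the folder names simply follow the structure from Figure 1:

$StrScriptFolder      = ($MyInvocation.MyCommand).Path | Split-Path -Parent
$StrOutputFolder      = Join-Path $StrScriptFolder "output"
$StrErrorLogFolder    = Join-Path $StrScriptFolder "errorlogs"
$StrFunctionLogFolder = Join-Path $StrScriptFolder "functionlogs"
$StrInputFolder       = Join-Path $StrScriptFolder "input"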

The subfolders are assigned to the main script components Logging, Libraries, Reports, and External Control. Each script should be traceable: If critical errors occur during processing, they should be written to an errorlogs subfolder. For later analysis, I recommend saving to a CSV file with unambiguous column names: Good choices are the date and time of the error, the processing step that caused it, an error level such as error, warning, or info, and, optionally, the line in the source code. To standardize your error logs, it makes sense to use a dedicated function (as opposed to ad hoc Add-Content calls), as sketched below.
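A minimal sketch of such a function might look like the following; the function name Write-ErrorLog, its parameters, and the file name are illustrative assumptions rather than a fixed standard:

function Write-ErrorLog
{
    param(
        [string]$Task,      # processing step that caused the entry
        [ValidateSet("error", "warning", "info")]
        [string]$Level = "error",
        [string]$Message,
        [int]$Line = 0      # optional line number in the source code
    )
    # $StrErrorLogFolder is the errorlogs subfolder built earlier
    $LogFile = Join-Path $StrErrorLogFolder "errorlog.csv"
    [PSCustomObject]@{
        DateTime = Get-Date -Format "yyyy-MM-dd HH:mm:ss"
        Task     = $Task
        Level    = $Level
        Message  = $Message
        Line     = $Line
    } | Export-Csv -Path $LogFile -Append -NoTypeInformation
}

A call such as Write-ErrorLog -Task "CheckPaths" -Level warning -Message "Output folder missing" then produces one unambiguous CSV row per event.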

In addition to errors or unexpected return values, you should always log script actions for creating, deleting, moving, and renaming objects. To distinguish these logs from the error log, they are stored in the functionlogs subfolder. When a script creates reports, the output folder is the storage location. This also corresponds to the structure given by comment-based help, which is explained in the Documentation section.

Control information (e.g., which objects should be monitored in which domain and how to monitor them) should not reside within the source code. For one thing, retrospective editing is difficult because the information has to be found in the programming logic; for another, transferring data maintenance to specialist personnel without programming skills becomes difficult. The principle of maintainability is thus violated. The right place for control information is the input folder.
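The script then only reads this data at runtime. A hedged sketch, assuming a hypothetical control file monitoring.csv with Domain and ObjectName columns in the input folder:

$ControlFile = Join-Path $StrInputFolder "monitoring.csv"
$ControlData = Import-Csv -Path $ControlFile
foreach ($Entry in $ControlData)
{
    # data maintenance stays in the CSV file; the logic stays in the script
    Write-Verbose ("Monitoring {0} in domain {1}" -f $Entry.ObjectName, $Entry.Domain)
}

Specialist personnel can now extend the monitoring scope by editing the CSV file, without ever touching the programming logic.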

In addition to data, script fragments and constants can also be swapped out. A separate folder is recommended for these "scriptlets" with a view to reusability. In the history of software development, inc, short for "include," has established itself as the typical folder name for these components.
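Such scriptlets are typically loaded by dot-sourcing. A minimal sketch, assuming a hypothetical constants file in the inc folder:

$StrIncFolder = Join-Path $StrScriptFolder "inc"
# Dot-sourcing executes the scriptlet in the current scope, so its
# functions and constants become available to the main script.
. (Join-Path $StrIncFolder "constants.ps1")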

Format Source Code Cleanly

A clear internal structure greatly simplifies troubleshooting and error elimination. Here, too, uniform specifications should be available, as Figure 2 shows: The region keyword combines areas of the source code into a logical unit; regions are ignored by the interpreter during processing. Regions can be nested (i.e., arranged along a parent-child axis). Besides the basic regions init, process, and clear described in the figure, a test region is recommended, in which you check whether required filesystem paths or external libraries exist. Further regions can be formed, for example, from content-related units within the main sections; a skeleton with the basic regions follows Figure 2.

Figure 2: A clear-cut structure of the script facilitates troubleshooting.
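A skeleton with these regions might look like the following; the test region uses the hypothetical Write-ErrorLog function sketched earlier:

#region init
$StrScriptFolder = ($MyInvocation.MyCommand).Path | Split-Path -Parent
#endregion init

#region test
if (-not (Test-Path (Join-Path $StrScriptFolder "inc")))
{
    Write-ErrorLog -Task "Init" -Level error -Message "inc folder missing"
    exit 1
}
#endregion test

#region process
# main processing logic goes here
#endregion process

#region clear
# release resources and remove temporary files
#endregion clear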

Readable code also includes the delimitation of statement blocks. Wherever foreach, if, and similar constructs appear in nested form, a standardized approach to indentation becomes important (e.g., two to four spaces or a tab stop). Some editors, such as Visual Studio Code, provide support for formatting the source code. Although the position of the opening and closing curly brackets is controversial among developers, placing the brackets on separate lines is a good idea (Figure 3).

Figure 3: Code clarity is improved by giving brackets their own lines.
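A small fragment illustrates both conventions, consistent indentation and brackets on their own lines (Invoke-Processing stands in for any real cmdlet or function):

foreach ($Item in $Items)
{
    if ($Item.Enabled)
    {
        # nested blocks are indented by four spaces
        Invoke-Processing -InputObject $Item
    }
}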
