Julia: A New Language For Technical Computing

Can this new language deliver on bold claims of fast, easy, and parallel?

In the early days of the personal computer, many people built and bought early desktop systems simply to explore computing on their own. Of course, many of them had access to mainframe systems or even minicomputers, but there was something appealing about having a computer physically next to you for your own private use. As the sole user and owner, early users controlled everything, including the reset switch. Total ownership allowed early pioneers to tinker with hardware and software without concern for other users.

Some would argue that a whole new industry was launched from this “tinkering.” The relatively low cost of early desktop computing allowed anyone who was curious to explore and adapt early PCs to their needs. Initially, programming tools were rare, and many early users found themselves writing assembly language programs or even toggling in machine code instructions. It was not long until Microsoft Basic was available and became one of the first high-level languages used by the early PC crowd. Languages like C and Fortran that were previously only available on larger systems soon followed. The PC revolution created a new class of “developer” – someone who had specific domain experience and a programmable PC at their disposal. Countless applications seemed to spring up overnight. Some applications went on to become huge commercial successes, and others found a niche in their specific application area.

Applying these lessons to HPC, you might ask, “How do I tinker with HPC?” The answer is far from simple. In terms of hardware, a few PCs, an Ethernet switch, and MPI get you a small cluster, or a video card and CUDA get you some GPU hardware. As in the PC revolution, a low cost of entry now invites the DIY crowd to learn and play with HPC methods, but the real question is: What software can a domain specialist use to tinker with HPC?

Many of the core HPC programming tools are too low level for most domain specialists. Learning Fortran, C/C++, MPI, CUDA, or OpenCL is a tall order. These tools tend to be “close to the hardware” and are difficult to master without a significant time investment. To get closer to their problem domain, many technical computing users prefer languages like Python, R, and MATLAB. These higher level tools move the programming environment closer to the user’s problem and are often much easier to use than the more traditional low-level tools.

High-level programming tools, however, often come with a “speed for convenience” trade-off. Whereas performance languages like C or Fortran are statically compiled, most “convenient” languages rely on slower interpretation or dynamic compilation, which allows real-time interaction and tinkering with code sections. Moreover, the higher level languages are more expressive and often require fewer lines of code to create a program.

Another issue facing all languages is parallel computing. The advent of multicore has forced the issue because the typical desktop now has at least four cores. Additionally, the introduction of multicore servers, HPC clusters, and GP-GPU computing has fragmented many traditional low-level programming models. High-level languages that try to hide these details from the user have had varying levels of success, but for the most part, parallel computation remains an afterthought.

The lack of a good high-level “tinker language” for HPC has been an issue for quite a while; that is, how can a domain expert (e.g., a biologist) quickly and easily express a problem in such a way that they can use modern HPC hardware as easily as a desktop PC? What is needed is an “HPC Basic” – a language that gets users started quickly without the need to understand the details of the underlying machine architecture. (To be fair, some would suggest there should never have been Basic in the first place!)

Julia Is Not Bashful

Recently, the new language Julia has seen a lot of discussion as a tool for technical computing. The authors explain their justification for the language as follows:

We want a language that’s open source, with a liberal license. We want the speed of C with the dynamism of Ruby. We want a language that’s homoiconic (i.e., has the same representation of code and data), with true macros like Lisp, but with obvious, familiar mathematical notation like MATLAB. We want something as usable for general programming as Python, as easy for statistics as R, as natural for string processing as Perl, as powerful for linear algebra as MATLAB, as good at gluing programs together as the shell. Something that is dirt simple to learn, yet keeps the most serious hackers happy. We want it interactive and we want it compiled. … We want to write simple scalar loops that compile down to tight machine code using just the registers on a single CPU. We want to write A*B and launch a thousand computations on a thousand machines, calculating a vast matrix product together.

The Julia site has quite a bit more information about this bold plan, but the above paragraph sounds like a dream come true for many HPC users. In particular, some of the issues Julia addresses have been holes in the HPC landscape for years and seem almost impossible to fill until you look at the micro-benchmarks in Table 1.

                 Julia       Python    MATLAB      Octave      R         JavaScript
                 v3f670da0   v2.7.1    vR2011a     v3.4        v2.14.2   v8 3.6.6.11
  fib                 1.97     31.47   1,336.37    2,383.80     225.23      1.55
  parse_int           1.44     16.50     815.19    6,454.50     337.52      2.17
  quicksort           1.49     55.84     132.71    3,127.50     713.77      4.11
  mandel              5.55     31.15      65.44      824.68     156.68      5.67
  pi_sum              0.74     18.03       1.08      328.33     164.69      0.75
  rand_mat_stat       3.37     39.34      11.64       54.54      22.07      8.12
  rand_mat_mul        1.00      1.18       0.70        1.65       8.64     41.79

Table 1: Benchmark Times Relative to C++ (Smaller is Better) (from the Julia website). Tests were run on a MacBook Pro with a 2.53GHz Intel Core 2 Duo CPU and 8GB of 1,066MHz DDR3 RAM

Considering the “P” in HPC is for Performance, the results in the above table should invite further investigation into Julia. One of the big assumptions about many high-level languages has been the loss of efficiency when compared with C or Fortran. The table above indicates that this does not need to be the case. Even getting close to the speeds of traditional compiled languages would be a welcome breakthrough in high-level HPC tools. In addition to speed, many nice features of Julia should appeal to the domain experts who use HPC. The following short list highlights some of these. (Consult the Julia page for a full set of features.)

  • Free and open source (MIT licensed)
  • Syntax similar to MATLAB (see the short sketch after this list)
  • Designed for parallelism and distributed computation (multicore and cluster)
  • C functions called directly (no wrappers or special APIs needed)
  • Powerful shell-like capabilities for managing other processes
  • Lisp-like macros and other meta-programming facilities
  • User-defined types are as fast and compact as built-ins
  • LLVM-based, just-in-time (JIT) compiler that allows Julia to approach and often match the performance of C/C++
  • An extensive mathematical function library (written in Julia)
  • Integrated mature, best-of-breed C and Fortran libraries for linear algebra, random number generation, FFTs, and string processing
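
To give a flavor of the MATLAB-like syntax and fast scalar loops mentioned in the list above, here is a rough sketch of what simple Julia code looks like. The fib function mirrors the naive recursive Fibonacci used in the fib micro-benchmark; minor syntax details may vary between Julia versions.

  # One-line function definition: naive recursive Fibonacci,
  # similar in spirit to the fib micro-benchmark above
  fib(n) = n < 2 ? n : fib(n - 1) + fib(n - 2)

  # MATLAB-like block syntax: a simple scalar loop
  function sumsq(v)
      s = 0.0
      for x in v
          s += x * x
      end
      return s
  end

  A = rand(1000, 1000)      # random matrices, much as in MATLAB
  B = rand(1000, 1000)
  C = A * B                 # matrix multiply handled by a tuned BLAS
  println(fib(20), " ", sumsq(rand(10)))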

One capability all high-level languages need is the ability to “glue” together existing libraries from other sources. Too much good code is available to ignore or rewrite. Through the use of the LLVM compiler, Julia can call into existing shared libraries built with GCC or Clang without any special glue code or wrapper compilation – even from the interactive prompt. The result is a high-performance, low-overhead method that lets Julia leverage existing libraries.
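
As a rough illustration (assuming a Unix-like system where the standard C library is already loaded), a C function can be called directly from the interactive prompt with a single ccall; the library names and C types involved will vary by platform and Julia version.

  # Call the C library's getpid() directly; no wrapper or glue code needed
  pid = ccall(:getpid, Cint, ())

  # Call getenv() and convert the returned C string into a Julia string
  path = unsafe_string(ccall(:getenv, Cstring, (Cstring,), "PATH"))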

Another important HPC feature of Julia is its native parallel computing model, which is based on two primitives: remote references and remote calls. Julia uses message passing behind the scenes but does not require the user to manage the environment explicitly, as MPI does. Communication in Julia is generally “one-sided,” meaning the programmer needs to manage only one process explicitly in a two-process operation. Julia also has support for distributed arrays.
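
A minimal sketch of the remote call/remote reference model, using the names from current Julia (where this functionality lives in the Distributed standard library; the exact API has evolved since the version benchmarked above), might look like this:

  using Distributed
  addprocs(2)                       # start two local worker processes

  # remotecall returns a remote reference immediately; the work runs on worker 2
  r = remotecall(rand, 2, 1000, 1000)

  # @spawnat runs an expression on a chosen worker, here reusing the data behind r
  s = @spawnat 2 sum(fetch(r))

  # fetch blocks until the remote result is available and returns it
  println(fetch(s))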

Finding Julia

Although Julia holds much promise for technical computing, it is still a young language that is undergoing changes and improvements, so it is probably not ready for heavy production use at this point. Meanwhile, the Julia community is growing rapidly, and it is possible to tinker with Julia on almost any desktop machine. Source and binary packages are available; consult the Julia download and build instructions page for more information. In the next column, I’ll present examples of Julia programs along with an installation example.

Finally, I do not mean to diminish Julia in any way by labeling it a “tinker language.” Indeed, many “first” languages become lifetime tools for their users, and early programs or prototypes grow and morph into larger, production-level projects. A nice quality of Julia is that there is almost no barrier to starting to code right away. Those familiar with MATLAB will find it particularly easy to get started. Also, you do not need to use advanced features, like parallel computing, from the beginning. The nice thing about “tinkering” is that you can try simple things first, test ideas, and end up with a working prototype in very little time. That your prototype runs almost as fast as “real code” is a welcome benefit.

Aside: Check Out the CDP

In a previous article, I mentioned a new initiative called the Cluster Documentation Project (CDP) that is designed to document and professionally publish high-performance computing cluster information. If you have any interest in contributing, financially or otherwise, please visit the CDP information page. Thanks!