We talk with Michael Wong and Matthijs van Waveren about the latest developments surrounding the OpenMP ARB consortium.

What's Ahead for OpenMP?

Left to right: Barbara Chapman, chairperson of the OpenMP users' group; Michael Wong, CEO of OpenMP ARB Corp.; and Matthijs van Waveren, Marketing Coordinator of OpenMP ARB Corp.

The OpenMP Architecture Review Board (ARB) is a non-profit organization that supports and maintains the OpenMP API specification for parallel programming. The board recently announced some personnel changes with the arrival of new CEO Michael Wong and Marketing Coordinator Matthijs van Waveren. These latest appointments were a good reason to check in on the latest developments and the path ahead for OpenMP. ADMIN news editor Amber Ankerholz caught up with OpenMP’s Michael Wong and Matthijs van Waveren.

Amber Ankerholz: For readers who may not be familiar with OpenMP, can you provide a quick overview, including the OpenMP API?

Matthijs van Waveren: OpenMP (Open Multi-Processing) is an API that supports multiplatform shared-memory multiprocessing programming in C, C++, and Fortran on most processor architectures and operating systems, including Linux, Unix, AIX, Solaris, Mac OS X, and Microsoft Windows platforms. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior.

OpenMP is the de facto standard for programming on shared-memory systems. You can find everything you need to know about it on our website. OpenMP uses a portable, scalable model that gives programmers a simple and flexible interface for developing parallel applications for platforms ranging from standard desktop computers to supercomputers.

OpenMP is already being used in many supercomputing applications for fluid dynamics simulations and in the oil and gas, financial, and biotechnology spaces.

AA: Can you tell us a little about your background? How did you get involved with the consortium?

MvW: I got involved in the consortium in 2000 when, as a Fujitsu employee, I proposed to my management that Fujitsu join the OpenMP ARB. They accepted, and I have been representing Fujitsu in the OpenMP ARB ever since. My activities in the consortium have included spending some time in the C/C++ subgroup and assisting in creating the link between Fortran 2003 and OpenMP. This year, the Board of Directors appointed me to the role of Marketing Coordinator of the OpenMP ARB consortium. I also recorded a video interview with insideHPC, which is now available on YouTube.

Michael Wong: I am IBM’s senior technical lead on the C++ compiler team. I became involved in OpenMP in 2005 when they needed a C++ expert in the group to grow the C++ portion. This was based on my years of activity as the IBM and Canadian representative to the ISO C++ Standards Committee. I spent that summer, shortly after my son was born, reading every book I could find on OpenMP, as well as a number of other popular parallel programming languages. Since then, I have been active in making OpenMP more supportive of object-oriented programming by adding various features in 3.0 and 3.1. Subsequently, I was asked to lead the Error Model subgroup, tasked with designing an Error Model for OpenMP that would allow it to move beyond traditional high-performance computing (HPC), into non-HPC areas where programs need to detect and handle errors.

AA: Would you describe your current role and responsibilities?

MvW: My current role is Marketing Coordinator at the OpenMP ARB. This means writing press releases and handling contacts with the press. For example, we have been quite busy with press contacts at the Supercomputing 2011 (SC11) conference and exhibition.

MW: I guess I am the head honcho now? Not really. What it really means: I am still learning as I go. It does mean that I have a great bunch of people, men and women from all over the world, each near the top of their profession – an expert in some area helping to evolve the specification – and I am just the guy who herds all the cats, buys the beer, and makes sure the journey is at least as interesting as the end. One part of the job that I do like is leading the group toward a future vision for OpenMP and connecting with other company executives to share support for each other's products.

AA: Recently, three new members joined OpenMP ARB. What do these new members mean for the consortium?

MvW: You really did your research. OpenMP has in fact been growing for the last two to three years, adding seven new members in that time. The latest three are NVIDIA, the Texas Advanced Computing Center (TACC), and Oak Ridge National Laboratory. The new members that joined in the previous years were CAPS Enterprise, Argonne National Lab, Los Alamos National Lab, and Texas Instruments.

These new members help push the OpenMP ARB in new directions. OpenMP is already the de facto standard in the shared-memory space, but because NVIDIA is in the accelerator space, OpenMP gets pushed into the accelerator space. Texas Instruments, which joined the OpenMP ARB earlier, has likewise pushed OpenMP into the embedded systems space.

This is in addition to the original members of AMD, Cray, Fujitsu, HP, IBM, Intel, Microsoft, NEC, SGI, Oracle/Sun, STMicroelectronics/PGI, ASCI, cOMPunity, EPCC, Lawrence Livermore National Lab, NASA, and RWTH Aachen. In total, we have 22 members now.

AA: What are the goals of the consortium going forward? Both short-term and long-term.

MvW: We just released OpenMP 3.1, which is mostly a bug fix release with a few enhancements. Compiler support is being released rapidly by various vendors.

Short term, we are driving to release 4.0, tentatively next year, with content that will probably include accelerator extensions, error handling, an improved tasking model, and user-defined reductions. We can say that with some confidence because these are items we have already been working on for the last 2-3 years but did not want to release yet; we were busy with bug fixes and needed time to polish these work items:

  • Error model: Letting the user stop all threads/tasks once a condition has been reached.
  • Affinity: Control over the placement of threads on cores and their proximity to data.
  • Accelerators: Extensions for running OpenMP code on systems with accelerators.
  • Tasks: Better support for irregular parallelism.
  • Reductions: Applying reductions to user-defined types, not just built-in types.

No matter what, we are committed to maintaining our leadership in the HPC space, with innovative features that support the supercomputing community.

Long term, we have been pushing OpenMP into the embedded and accelerator spaces, where developers badly need a high-level programming model that fits multiple languages. Many see OpenMP as an ideal candidate because we are supported not just by one vendor but by all the major vendors, and because OpenMP is a high-level model that sits on top of not one but three high-level base languages (C, C++, and Fortran).

Time and time again, we have seen that the ability to support these three base languages is key to people's decision to choose OpenMP over other parallel programming models that only support one language. After all, most software shops use more than one language to do their work. We will continue that push and look for other potential areas to expand into, like graphics, games, and consumer electronics. Basically, anywhere multicore chips with shared-memory architectures are showing up, we can be there to help make the programming easier.

AA: What are the biggest challenges facing OpenMP and the API?

MvW: Parallel programming models are evolving so fast now that we need to make sure OpenMP can deliver what the users need to make programming multicores easy. The hard part is figuring out what that configuration will be in the future and how to fit that within the simple model of OpenMP where there are very few directives and APIs.

Right now, we are focused on getting OpenMP accepted as a de facto standard in the accelerator and embedded systems spaces. We are already accepted in the shared-memory space but want to expand to other spaces as well. Texas Instruments and NVIDIA use our API, which shows that other companies in these spaces can use it, too. In fact, a few of our member companies will shortly be releasing products to support accelerators based on the interim high-level language designed in the OpenMP accelerator subgroup.

AA: As CEO, how do you think OpenMP can best address these challenges? Can you give us specifics?

MW: OpenMP will address these challenges by delivering specifications that support the accelerator and embedded system spaces, while looking to add companies that can contribute their expertise to enhancing the specification to support their specific technology. OpenMP itself might need to change to accommodate multiple specifications that allow for accelerators, SIMD, and even transactional memory.

MvW: Look for exciting changes to our website as we add further info on who uses OpenMP and how, along with blogs and FAQs. With that, we will try to knock down some of the myths people hold about OpenMP when comparing it with various other models, such as MPI, Cilk, or AMP. Somehow, we are the favorite comparison for all other programming models.

That makes us feel good in a way, as everyone feels like we are the one to beat and measures their performance, or ease of use, against us.

We have plans to expand our trade show participation. We have been steady supercomputing exhibitors annually, and recently we have started going to the Multicore Expo/Embedded Systems Conference. We are looking at other conferences, exhibitions, and shows that are a good fit with our growth strategy.

AA: What can our readers do to get involved?

MvW: Look for us at this year’s Multicore Expo/ESC show in San Jose and next year’s Supercomputing Conference (SC12) in Salt Lake City. We will be at the International Supercomputing Conference in Hamburg, Germany, at the RWTH Aachen booth.

Readers can get involved by petitioning their companies to become members of the OpenMP ARB. We are targeting companies like Freescale and Samsung in the embedded systems and heterogeneous computing spaces. Additionally, we are looking for new member companies that can add their expertise to enhance how OpenMP can better support graphics, games, and consumer electronic devices.

Readers can also come to our website and fill out our survey on who they are and how they use OpenMP.

AA: Is there anything you’d like to mention that I’ve not asked?

MvW: We feel that the greatest strengths of OpenMP are its simplicity, its scalability, its incremental approach to adding parallelism to existing code, its wide support by every major compiler vendor and hardware maker, and its unique ability among parallel programming models to support three high-level languages.

These are strengths that we will not give up, and they will form the foundation of our drive to help users work more easily in the multicore world. We love the competition that is heating up in multicore programming because, honestly, language designers learn a lot from each other, and this can only be good for the user.

AA: Thank you both so much for your time and participation.