Waf and OpenCL check
Although I am quite used to CMake and the infamous Autotools, I wanted to try
out the Waf build system for a smallish side project that I am investigating
with a fellow student of mine. This project has a limited number of
dependencies which could also be integrated in a simple Makefile, but it also
needs to detect the include path and the libraries of any OpenCL installation.
Unfortunately, neither NVIDIA’s nor AMD’s OpenCL distribution is installed in a
standards-compliant way. Although the NVIDIA installation procedure suggests
installing the CUDA toolkit into /usr/local, it does so by creating a new root
folder /usr/local/cuda with bin, lib, include and a bunch of non-UNIX
directories right beneath it. But I don’t want to complain; let’s head right
into how to solve this.
Download the latest Waf distribution file and create an empty wscript file in
your source directory. This file is used to configure – check if required
and optional dependencies are met – and build your project. Because it contains
real Python code, you can configure your project in any imaginable way.
The configuration and build steps are mapped to the configure() and build()
functions. To compile C source files, we need to tell the system which
translation facility to use:
def configure(conf):
    conf.load('compiler_c')
    conf.env.append_unique('CFLAGS', ['-g', '-std=c99', '-O3', '-Wall', '-Werror'])

def build(bld):
    bld.program(source='foo.c', target='foo')
As you can see, additional CFLAGS are appended through the env object’s
append_x methods. For some reason, we must also load the compiler in a
preceding options step, which tells the build system to export the compiler’s
command line options:
def options(opt):
    opt.load('compiler_c')
Most Linux distributions ship pkg-config files for the majority of libraries.
Fortunately, Waf is able to call the pkg-config binary out-of-the-box with the
check_cfg function.
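To see what check_cfg effectively stores under the uselib_store name, it helps to look at how such a flag line is split into buckets. The following is a simplified standalone sketch of that parsing step – not Waf’s actual implementation, and the function name parse_pkg_config_output is my own:

```python
def parse_pkg_config_output(line):
    """Split a pkg-config '--cflags --libs' output line into the kind
    of buckets a build system stores per dependency (simplified sketch)."""
    result = {'INCLUDES': [], 'LIBPATH': [], 'LIB': [], 'DEFINES': []}
    for token in line.split():
        if token.startswith('-I'):
            result['INCLUDES'].append(token[2:])
        elif token.startswith('-L'):
            result['LIBPATH'].append(token[2:])
        elif token.startswith('-l'):
            result['LIB'].append(token[2:])
        elif token.startswith('-D'):
            result['DEFINES'].append(token[2:])
    return result

flags = parse_pkg_config_output(
    '-I/usr/include/glib-2.0 -D_REENTRANT -L/usr/lib -lglib-2.0')
print(flags['LIB'])
```

Real pkg-config output can contain more exotic flags, but for typical libraries this covers the interesting pieces.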
However, the NVIDIA installer copies the header files to non-standard locations
and does not provide any .pc files. In most cases, the user will just hit the
enter key when prompted by the installer where to put the files, thus copying
them to /usr/local/cuda. Others, like me, try to keep non-standard things in
/opt or place the distribution in their home directory and set the suggested
environment variables. To cover these cases, we can compute a list of existing
paths:
def guess_cl_include_path():
    import os
    OPENCL_INC_PATHS = [
        '/usr/local/cuda/include',
        '/opt/cuda/include'
    ]
    try:
        OPENCL_INC_PATHS.append(os.environ['CUDA_INC_PATH'])
    except KeyError:
        pass
    # A list comprehension rather than filter(), so the result is
    # indexable under Python 3 as well.
    return [d for d in OPENCL_INC_PATHS if os.path.exists(d)]
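The probing logic is easy to exercise outside of Waf. In this sketch the candidate list is passed in as a parameter – an adaptation for testing, not how the wscript calls it – so we can feed it fake directories:

```python
import os
import tempfile

def guess_cl_include_path(candidates):
    # Same existence filter as in the wscript, but with the
    # candidate list injectable so it can be tested in isolation.
    return [d for d in candidates if os.path.exists(d)]

# Simulate one existing and one missing candidate directory.
with tempfile.TemporaryDirectory() as tmp:
    existing = os.path.join(tmp, 'cuda', 'include')
    os.makedirs(existing)
    missing = os.path.join(tmp, 'no-such-dir')
    print(guess_cl_include_path([existing, missing]) == [existing])
```

Only the directories that actually exist survive, and the configure step then simply picks the first hit.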
We plug this little function into the configure function, tell the user what’s
going on with start_msg() and abort with fatal() in case we cannot find
anything. Last but not least, we add a check with check_cc() for libOpenCL.so,
which should be installed in one of the standard library paths. The final
configure step looks like this:
def configure(conf):
    conf.load('compiler_c')
    conf.env.append_unique('CFLAGS', ['-g', '-std=c99', '-O3', '-Wall', '-Werror'])

    # use pkg-config
    conf.check_cfg(package='glib-2.0', args='--cflags --libs', uselib_store='GLIB2')

    conf.start_msg('Checking for OpenCL include path')
    incs = guess_cl_include_path()

    if incs:
        conf.env.OPENCL_INC_PATH = incs[0]
        conf.end_msg('found')
    else:
        conf.fatal('OpenCL include path not found')

    conf.check_cc(lib='OpenCL', uselib_store='CL')
Configuration information is stored across builds using the env structure and
the uselib_store keyword. When building the binary, we refer to these variables
and we are good to go:
def build(bld):
    bld.program(source='foo.c',
                target='foo',
                use=['GLIB2', 'CL'],
                includes=bld.env.OPENCL_INC_PATH)
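To make the connection between the stored variables and the compiler invocation explicit, here is a rough sketch of how such env entries end up as command line flags. This is an illustration only – assemble_command and the plain dict are my stand-ins for Waf’s real task generation machinery:

```python
def assemble_command(env):
    # Illustrative only: map stored env entries to gcc-style flags,
    # roughly the shape of the command the build step ends up running.
    cmd = ['gcc'] + env.get('CFLAGS', [])
    cmd += ['-I' + inc for inc in env.get('INCLUDES', [])]
    cmd += ['foo.c', '-o', 'foo']
    cmd += ['-l' + lib for lib in env.get('LIB', [])]
    return cmd

env = {
    'CFLAGS': ['-g', '-std=c99', '-O3', '-Wall', '-Werror'],
    'INCLUDES': ['/usr/local/cuda/include'],
    'LIB': ['glib-2.0', 'OpenCL'],
}
print(' '.join(assemble_command(env)))
```

The use=['GLIB2', 'CL'] keyword is what pulls the flags recorded under those uselib_store names back in, while includes contributes the -I path we guessed during configuration.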