Bloerg

pkg-config is a freedesktop standard and implementation that helps developers find the correct compile and link flags for their dependencies. Despite its very low version number, it is the de facto method for determining said flags on the Linux desktop. An oft-forgotten feature of pkg-config is the ability of a package to expose arbitrary key-value metadata. This allows dependent applications to find data paths, lets plugins know where to install themselves, and helps locate tool paths.
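To see this in action on the command line, any variable defined in a package's .pc file can be queried with --variable. The foo.pc file and its datadir variable below are made up purely for demonstration:

```shell
# Write a minimal .pc file with a custom "datadir" variable.
mkdir -p /tmp/pcdemo
cat > /tmp/pcdemo/foo.pc <<'EOF'
prefix=/usr
datadir=${prefix}/share/foo

Name: foo
Description: Demo package
Version: 1.0
EOF

# pkg-config expands variable references, printing /usr/share/foo.
PKG_CONFIG_PATH=/tmp/pcdemo pkg-config --variable=datadir foo
```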

Although CMake has supported pkg-config for quite some time with its FindPkgConfig module, it only queries the compile and link information. If you want the value of a variable, you are out of luck. But fear not! Just stick this into a CMake module:

find_package(PkgConfig REQUIRED)

# Query pkg-config for variable _name of package _pkg and cache the
# result as <PKG>_<NAME>, upper-cased with dashes turned into underscores.
function(pkg_check_variable _pkg _name)
    string(TOUPPER ${_pkg} _pkg_upper)
    string(TOUPPER ${_name} _name_upper)
    string(REPLACE "-" "_" _pkg_upper ${_pkg_upper})
    string(REPLACE "-" "_" _name_upper ${_name_upper})
    set(_output_name "${_pkg_upper}_${_name_upper}")

    execute_process(COMMAND ${PKG_CONFIG_EXECUTABLE} --variable=${_name} ${_pkg}
                    OUTPUT_VARIABLE _pkg_result
                    OUTPUT_STRIP_TRAILING_WHITESPACE)

    set("${_output_name}" "${_pkg_result}" CACHE STRING "pkg-config variable ${_name} of ${_pkg}")
endfunction()

and you can easily query information like this

pkg_check_modules(GLIB glib-2.0)
pkg_check_variable(glib-2.0 glib-genmarshal)
message("Path: ${GLIB_2.0_GLIB_GENMARSHAL}")

I found the time to update my static comment script and will have the time to review comments again for the foreseeable future. Thus spam bots: ready, steady, go!

In this day and age, cloud storage is a convenient way to access data from any location. However, for reasons of privacy and the risk of data theft, using third-party products always leaves a sour taste in my mouth. To host my own centralized and encrypted storage for small amounts of highly confidential data, I set up a system combining SSHFS and EncFS.

SSHFS is a FUSE file system that mounts a remote file system securely through an SSH tunnel. As a nice side effect, no additional client setup is required once the keys are distributed accordingly. EncFS is another FUSE file system that transparently decrypts an encrypted data source and exposes it as a mounted directory. In the rest of this post, I will explain how I use both systems together to read from and write to an encrypted remote file system on my VPS.

Before mounting the file systems, we need two empty mount points: one for the SSHFS-mounted encrypted directory (let’s call the path ENCBOX_PATH) and one for the decrypted data (let’s call that BOX_PATH). You should also make sure that the remote directory actually exists. If that’s the case, you mount the remote directory with

sshfs foo@bar.com:/path/to/box $ENCBOX_PATH -o uid=$(id -u) -o gid=$(id -g)

I pass the uid and gid options to map my local user to the remote one; otherwise I would not be able to access the data. Once $ENCBOX_PATH is available, we can mount the decrypted view with

encfs $ENCBOX_PATH $BOX_PATH

To avoid typing in the password every time, we have to pass it to EncFS programmatically. You can either get it from somewhere and pass it through stdin using the --stdinpass option, or specify an external program that prints the password on stdout. I use the latter together with my keyring-query script, a small Python script that uses the system’s keyring to store and retrieve arbitrary secrets. Thus, before trying to mount the encrypted file system, I first store the password using

keyring-query --service=encbox --user=foo

The same command line is then used for the encfs command

encfs --extpass="keyring-query --service=encbox --user=foo" $ENCBOX_PATH $BOX_PATH

To avoid leaving the box open indefinitely, I also set EncFS’ --idle option to five minutes, which unmounts the file system automatically after five minutes of inactivity.

Because all these steps are quite elaborate, I bundled them together in a small shell script. Besides the basic mounting of the file systems, the script also takes care of checking error conditions and unmounts the file systems if they are already mounted.
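A minimal sketch of such a script might look like the following, assuming the keyring-query helper from above; the remote address and local paths are made-up examples:

```shell
#!/bin/sh
# Sketch of a mount helper for the SSHFS + EncFS combo.
# REMOTE, ENCBOX_PATH and BOX_PATH are example values.

REMOTE="foo@bar.com:/path/to/box"
ENCBOX_PATH="$HOME/.encbox"     # raw, encrypted files via SSHFS
BOX_PATH="$HOME/box"            # decrypted view via EncFS

mounted() {
    # True if the given path is currently a mount point.
    mountpoint -q "$1"
}

open_box() {
    mkdir -p "$ENCBOX_PATH" "$BOX_PATH"

    if ! mounted "$ENCBOX_PATH"; then
        sshfs "$REMOTE" "$ENCBOX_PATH" -o uid=$(id -u) -o gid=$(id -g)
    fi

    if ! mounted "$BOX_PATH"; then
        # --idle unmounts automatically after five idle minutes.
        encfs --idle=5 --extpass="keyring-query --service=encbox --user=foo" \
            "$ENCBOX_PATH" "$BOX_PATH"
    fi
}

close_box() {
    # Unmount in reverse order with fusermount, the FUSE way.
    if mounted "$BOX_PATH"; then fusermount -u "$BOX_PATH"; fi
    if mounted "$ENCBOX_PATH"; then fusermount -u "$ENCBOX_PATH"; fi
}

# Toggle: close the box if it is open, open it otherwise, e.g.
# if mounted "$BOX_PATH"; then close_box; else open_box; fi
```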

The outlined solution is a huge relief for keeping confidential data that I don’t want to share with Google or Dropbox in a central location. However, this comes at a price: although the bandwidth is not too bad (around 1.8 MB/s on a fast line), writing interactively (e.g. editing with Vim) becomes a pain due to the high latencies involved.

Compiling a LaTeX/BibTeX document is a tedious process that does not integrate very well with the make idiom. Until now, I used rubber to compile my documents. However, several shortcomings make rubber only a good rather than a perfect solution. Most notably, it has been basically unmaintained for years, with the current release dating back to 2006. This wouldn’t be a big issue if it weren’t for annoyances such as the in-source specification of the LaTeX compiler1 and the inability to specify additional flags (e.g. -shell-escape).

Thanks to Austin Clements and his spiritual successor to rubber, latexrun, the days of frustration and despair are over. The name might not be as sexy, but the feature list and improvements are certainly impressive. Most importantly, latexrun emits errors and warnings in a format that makes Vim’s quickfix window happy, and I can finally specify commands and arguments willy-nilly from outside my document.

(Un)fortunately, latexrun likes to put auxiliary files into a separate build directory called latex.out. While that works for most packages, it breaks with the source highlighting package minted. As a temporary workaround, one can pass the -O . option to use the current working directory as the build directory. But apart from that issue, I am very happy with this little program.
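Since latexrun tracks its dependencies itself, it also slots nicely into a plain Makefile. A sketch, where the file name, the XeLaTeX command and the flags are examples (check latexrun --help for the exact option spellings):

```makefile
# Makefile sketch; thesis.tex is an example file name.
.PHONY: all clean

all:
	latexrun --latex-cmd=xelatex --latex-args='-shell-escape' -O . thesis.tex

clean:
	latexrun -O . --clean-all
```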

  1. Such as XeLaTeX which I use almost exclusively these days.