Canon EOS RP under Linux

Canon shipped their newest and cheapest full frame camera, the Canon EOS RP, at the end of February. In their infinite wisdom, they decided that they had to develop a new CR3 raw format for it, as well as for the previously released EOS R and M50. Of course, no one has been able to reverse-engineer the format entirely so far, so it will take a good while until these cameras are supported out of the box by the major open source raw programs. The last resort is running the Adobe DNG Converter in a Windows VM or under Wine to convert the proprietary CR3 files into Adobe's (intermediate) DNG format. After a good month, Adobe released a new version that supports the RP … but it stopped working under Wine 🤦. So, whoever stumbles upon

Unhandled exception: unimplemented function api-ms-win-core-winrt-error-l1-.GetRestrictedErrorInfo called in 64-bit code

should open winecfg, go to the Libraries tab, add an override for api-ms-win-core-winrt-error-l1-1-0 and set it to “disabled”. The DNG Converter will complain, but it will work nevertheless (and much faster than in a Windows 10 VM!).
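
For a one-off run you can also skip winecfg and set the override through Wine's WINEDLLOVERRIDES environment variable. A minimal sketch, with an example install path and going from memory that "=d" marks a DLL as disabled:

# Disable the stub DLL for this invocation only; the converter path is just an
# example and depends on where the installer put it in your Wine prefix.
WINEDLLOVERRIDES="api-ms-win-core-winrt-error-l1-1-0=d" \
    wine 'C:\Program Files\Adobe\Adobe DNG Converter\Adobe DNG Converter.exe'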


Time tracking with Ledger

Whoever reads this blog knows that I use Ledger-likes to track my finances. Some of you may also know that a commodity is just an arbitrary name for any kind of unit, not necessarily a currency. And a minority of you may also know that the original Ledger program has special support for tracking time. This post explains how I use Ledger and a few Bash aliases to track different activities during a normal work day.

In a “regular” Ledger file you will find transactions that describe the flow of a commodity from one (or more) accounts to another at a certain time. The ledger program, however, also supports timelog entries. They look similar but use a special syntax to mark the start and end of a “transaction”:

i 2019/03/02 00:30:20 Entertainment:Netflix
o 2019/03/02 00:35:20

The i line punches in to the given account and the o line punches out again; the time in between is booked on that account. Suppose you have such a timelog in a file time.ledger, then ledger -f time.ledger bal would give you something like this:

               9.07h Work
               32.3m    Mail
                6.9m    Admin
               8.40h    Development
               1.69h Entertainment:Netflix
               14.2m Drinking
               11.9m Meetings
--------------------
               11.19h

Of course, you can customize the date range and hierarchy depth. Let’s alias that to

alias wasted='ledger -f ${TIMELOG} bal -b $(date -dlast-monday +%m/%d) --depth 2'

All good. Now, adding new entries by “punching in” lines like the ones above by hand is cumbersome; it’s something a simple alias can do just as well. I have defined something like

alias clock-in='echo i $(date +"%Y/%m/%d %H:%M:%S") >> ${TIMELOG}'
alias clock-out='echo o $(date +"%Y/%m/%d %H:%M:%S") >> ${TIMELOG}'

where TIMELOG points to the time.ledger file. Once you type

$ clock-in Work:Mail

an appropriate timelog entry is appended: the redirection is parsed out of the command first, so the account name you pass ends up at the end of the echoed line. To check what’s currently going on, this alias might help:

alias clock-status='[[ $(tail -1 ${TIMELOG} | cut -c 1) == "i" ]] && { echo "Clocked IN to $(tail -1 ${TIMELOG} | cut -d " " -f 4)"; wasted; } || { echo "Clocked OUT"; wasted;}' 

That all looks nice and dandy until you realize you don’t remember the exact name of the activity you want to book your current time on. If you have ever used a CLI program, you have probably hit Tab more than twice. Wait no more, just define


function _clock_in ()
{
    local cur
    # Let bash-completion give us the word under the cursor, keeping ":" in it.
    _get_comp_words_by_ref -n : cur

    # Collect every account name used so far, i.e. the fourth field of all "i" lines.
    local words="$(cut -d ' ' -s -f 4 ${TIMELOG} | sed '/^$/d' | sort | uniq)"
    COMPREPLY=($(compgen -W "${words}" -- ${cur}))
    __ltrim_colon_completions "${cur}"
}

complete -F _clock_in clock-in

and you are all set to tab-complete account names while clocking in. You could write more elaborate logic in a proper scripting language, but that’s left as an exercise for the reader.
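
One caveat: _get_comp_words_by_ref and __ltrim_colon_completions are not bash builtins but helpers from the bash-completion project, so that has to be sourced before the function is used. The path below is where my distribution ships it; yours may differ:

# Load the bash-completion helper functions if they are not loaded already.
[[ -r /usr/share/bash-completion/bash_completion ]] && \
    . /usr/share/bash-completion/bash_completion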


meson and Google Test

I wrote about meson, the awesome build system, before. For C-based projects with many small test executables it provides nice infrastructure; many C++ projects, however, use Google Test or Catch and a single binary that runs the entire test suite. This is all nice and dandy with Google Test as long as you compile and link the test executable straight from the unit test source files. If, however, you build intermediate static libraries for organizational reasons, you will quickly notice that Google Test won’t run anything at all: nothing in main references the test cases inside the archive, so the linker never pulls in their auto-registration symbols unless you pass something like --whole-archive. Luckily, meson gained the link_whole parameter in version 0.46, so instead of declaring your static test library as

test_lib = static_library('testlib',
  sources: test_sources,
  dependencies: [gtest_dep] + build_deps,
)

test_dep = declare_dependency(
  link_with: test_lib,
  dependencies: other_deps,
)

test_binary = executable('testfoo',
  sources: ['main.cpp'],
  dependencies: [test_dep],
)

you would change test_dep to

test_dep = declare_dependency(
  link_whole: test_lib,
  dependencies: other_deps,
)

and run your tests as usual.
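
In case “as usual” needs spelling out, a sketch of one way to run the suite, with an arbitrary build directory name:

# Configure and build, then run the test binary. The path assumes the executable
# target is defined in the top-level meson.build; alternatively, register it with
# test() and use "meson test -C build".
meson build
ninja -C build
./build/testfoo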


Moving on to beancount

For the past three years I have been recording my finances with tools from the ledger family. While I had no major issues with them, there were a few minor annoyances, most notably that recording capital gains in hledger is hard to do without a virtual posting, and the lack of a nice visual representation of my finances. I knew about beancount but was always a bit sceptical about its data format, which is not really compatible with the other ledger tools. The deciding factors to finally take the plunge were the fava web interface and the comprehensive inventory system, which makes recording capital gains a breeze.
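
If you have not seen fava yet: it is a separate Python package on top of beancount, and pointing it at a ledger file is enough to get the web interface. The file name below is just a placeholder:

# Install fava and serve the web interface (on localhost:5000 by default,
# if I remember correctly).
pip install fava
fava main.beancount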

Importing Deutsche Bank data

Moving the existing ledger data over to the beancount format was pretty straightforward using the ledger2beancount tool.
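A rough sketch of the conversion, with made-up file names and going from memory that ledger2beancount prints the converted journal on stdout:

# Convert the old ledger journal and let beancount validate the result.
ledger2beancount finances.ledger > main.beancount
bean-check main.beancount

Unfortunately, I had to rewrite the import tooling from scratch because beancount does not provide a command similar to hledger csv. On the other hand, it was relatively simple to come up with these little bits of Python: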

import os
import re
import codecs
import csv
import datetime
from beancount.core import number, data, amount
from beancount.ingest import importer

class Target(object):
    def __init__(self, account, payee=None, narration=None):
        self.account = account
        self.payee = payee
        self.narration = narration

class DeutscheBankImporter(importer.ImporterProtocol):
    def __init__(self, account, default, mapping):
        self.account = account
        self.default = default
        self.mapping = mapping

    def identify(self, fname):
        return re.match(r"Kontoumsaetze_\d+_\d+_\d+_\d+.csv",
            os.path.basename(fname.name))

    def file_account(self, fname):
        return self.account

    def extract(self, fname):
        fp = codecs.open(fname.name, 'r', 'iso-8859-1')
        lines = fp.readlines()

        # drop top and bottom stuff
        lines = lines[5:]
        lines = lines[:-1]
        entries = []

        def fix_decimals(s):
            return s.replace('.', '').replace(',', '.')

        for index, row in enumerate(csv.reader(lines, delimiter=';')):
            meta = data.new_metadata(fname.name, index)
            date = datetime.datetime.strptime(row[0], '%d.%m.%Y').date()
            desc = row[4]
            payee = row[3]
            credit = fix_decimals(row[15]) if row[15] != '' else None
            debit = fix_decimals(row[16]) if row[16] != '' else None
            currency = row[17]
            account = self.default
            num = number.D(credit if credit else debit)
            units = amount.Amount(num, currency)

            for p, t in self.mapping.items():
                if p in desc:
                    account = t.account

                    if t.narration:
                        desc = t.narration

                    if t.payee:
                        payee = t.payee

            frm = data.Posting(self.account, units, None, None, None, None)
            to = data.Posting(account, -units, None, None, None, None)
            txn = data.Transaction(meta, date, "*", payee, desc,
                    data.EMPTY_SET, data.EMPTY_SET, [frm, to])

            entries.append(txn)

        return entries

that you would plug into your import configuration like this:

mappings = {
    'Salary':
        Target('Assets:Income', 'Foo Company'),
    'Walmart':
        Target('Expenses:Food:Groceries'),
}
CONFIG = [
    DeutscheBankImporter('Assets:Checking', 'Expenses:ReplaceMe', mappings)
]
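
Such a configuration is then passed to beancount's bean-extract together with the downloaded CSV export. The file names here are made up, but the CSV name matches the identify() pattern above:

# Extract transactions to a scratch file, review them, then append to the main ledger.
bean-extract config.py Kontoumsaetze_123_456789_01_2019.csv > incoming.beancount
cat incoming.beancount >> main.beancount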

Yes, that’s my answer to this statement from the official documentation:

My standard answer is that while it would be fun to have [automatic categorization], if you have a text editor with account name completion configured properly, it’s a breeze to do this manually and you don’t really need it.

On to the next years …