Matthias Beyer published two tips which according to him “are very critical when talking about speed when using vim”. I beg to differ and want to give my reasons.

Yes, leader mappings are important, but not to gain speed: they increase the available mapping space by one additional dimension. For example, I map :Make<CR> to F6, whereas he maps it to <Leader>m. The speed gain, in my opinion: zero. Moreover, there is some truth to the claim that “leader maps are pretty lame”.
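For reference, the F6 mapping mentioned above is a one-liner (a sketch; :Make here is assumed to come from a plugin such as vim-dispatch, so adapt it to your setup):

```vim
" Run :Make on F6 from normal mode.
nnoremap <F6> :Make<CR>
```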

Matthias’ next tip was: “don’t use the arrow keys!”. I agree, but replacing them with hjkl exclusively won’t cut it. You gain a little by staying on the home row, but at the end of the day you are still dragging the cursor around one character or line at a time. By far my biggest navigation speedups come from moving by words with b, e and w, by paragraphs with { and }, and by half-screens with Ctrl+u and Ctrl+d. However, navigation only makes up a tiny fraction of a typical Vim day: I gain a lot more by using Vim’s grammar to its full extent.
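To illustrate what I mean by grammar: operators compose with motions and text objects, so every command you learn multiplies with all the others. A few built-in examples:

```vim
daw       " delete a word, including the surrounding whitespace
ci(       " change the text inside the nearest pair of parentheses
y}        " yank from the cursor to the end of the paragraph
>ap       " indent the paragraph around the cursor
d/foo<CR> " delete everything up to the next occurrence of 'foo'
```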

So here is my Vim tip of the day: learn it as well as you can.

Vim’s builtin tag integration is an incredibly easy way to jump from a source location to the definition of a function or class using Ctrl+]1 and Ctrl+t. To generate the tags file, I used to map the following command to <Leader>gt:

:!ctags -R -f .tags --sort=yes --exclude=build --exclude=_build

As you can probably imagine, such a command line will never cover all corner cases of files that need to be excluded or included. However, Git already knows which files I’d like to ignore, so I now just feed the list of already versioned files to ctags, which I map like this:

:!git ls-tree -r --name-only $(git rev-parse --abbrev-ref HEAD) | ctags -f .tags --sort=yes -L -
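As a mapping, this could look like the following sketch; note that inside a mapping the pipe has to be written as <Bar>:

```vim
" Rebuild .tags from the files Git tracks on the current branch.
nnoremap <silent> <Leader>gt :!git ls-tree -r --name-only $(git rev-parse --abbrev-ref HEAD) <Bar> ctags -f .tags --sort=yes -L -<CR>
```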

To open the folds at the jump target and close them again when jumping back, I use the following functions and mappings:

function! TagJumpForward()
    execute "tag " . expand("<cword>")
    try | foldopen! | catch | endtry
endfunction

function! TagJumpBack()
    try | foldclose! | catch | endtry
    execute "pop"
endfunction

nnoremap <silent> <C-i> :call TagJumpForward()<CR>
nnoremap <silent> <C-t> :call TagJumpBack()<CR>

Finally, if you use CtrlP for buffer and file navigation, you should enable its tag support and map the launcher, for example to Ctrl+B:

let g:ctrlp_extensions = ['tag']
nnoremap <C-b> :CtrlPTag<CR>

That reduces navigations like <C-p>foo_file_c<CR>/bar_func n n n to <C-b>bar_func.

  1. Ctrl+] is pretty hard to reach on a German keyboard layout, so I mapped that to Ctrl+i.

A lot has been written about Keybase, a public directory of public encryption keys associated with usernames and additional “verification proofs”. Yesterday, I was admitted to the circle of alpha testers and could grab my user name. Despite some initial problems, the website itself works just fine. Using the keybase client turned out to be a dead end, though: it requires a fairly recent NodeJS and npm, neither of which is packaged for my long-term choice of Linux distribution. Thus, right now, the service offered to me is quite limited, because I am also not willing to upload my client-encrypted key.

I have four invites left, so if you are interested in trying the service, just drop me a line. And yes, I have had a Twitter account for five years now, but this is, and probably will be, the last occasion on which I use it.

To analyze the run-time behaviour of an application, a common technique is to record traces of code execution by inserting statements like these:

start_trace ("foo");
do_foo ();
end_trace ("foo");

This is an effective way to analyze concurrent applications, which are usually difficult to reason about. If your IDE1 does not support it, visualizing this kind of data can be tricky. Fortunately, the Chrome browser exposes its internal trace viewer via a generic JSON format. All you have to do is generate the appropriate data and load it in the about:tracing page:
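The format itself is simple: a JSON object with a traceEvents list, where each event carries a name, a category cat, a phase ph ('B' for begin, 'E' for end), a microsecond timestamp ts, and process and thread ids. A minimal sketch (the do_foo name and the ids are made up for illustration):

```python
import json

# Two "duration" events describing one slice named do_foo:
# 'B' marks where it begins, 'E' where it ends; ts is in microseconds.
events = [
    dict(name='do_foo', cat='f', ph='B', ts=10, pid=1, tid=1),
    dict(name='do_foo', cat='f', ph='E', ts=250, pid=1, tid=1),
]

with open('trace.json', 'w') as fp:
    json.dump(dict(traceEvents=events), fp)
```

Loading that file in about:tracing draws a single 240 microsecond slice in the row of thread 1.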


Here is a very simple way to trace the execution of Python code without littering it with such statements:

import os
import time
import threading
import functools
import json

class Manager(object):

    _START_CLOCK = time.time()

    def __init__(self):
        self._events = []
        self._pid = os.getpid()

    def _new_event(self, func, event):
        tid = threading.current_thread().ident
        timestamp = (time.time() - self._START_CLOCK) * 1000 * 1000
        self._events.append(dict(name=func.__name__, ts=timestamp,
                                 cat='f', ph=event,
                                 tid=tid, pid=self._pid))

    def trace(self, func):
        @functools.wraps(func)
        def record(*args, **kwargs):
            self._new_event(func, 'B')
            result = func(*args, **kwargs)
            self._new_event(func, 'E')
            return result

        return record

    def __del__(self):
        with open('trace.json', 'w') as fp:
            json.dump(dict(traceEvents=self._events), fp)

The Manager records an event for every call to a function that is decorated with its trace decorator, nothing fancy here. The distinction between the thread id tid and the process id pid stems from Chrome’s multi-process architecture, but you can of course use these fields in any way you like. Just remember that events with the same thread id are laid out in the same row and therefore need correctly ordered time stamps. Note that I also didn’t take special care with the category field, i.e. I just set it to “f”. In the Manager’s destructor2 I dump the events in the correct format.

The following test program demonstrates how to use the Manager and was used to produce the top image:

import random

m = Manager()

@m.trace
def foo(t):
    print 'going to sleep for {} seconds'.format(t)
    time.sleep(t)

threads = []

for i in range(25):
    thread = threading.Thread(target=foo, args=(random.random() * 0.25,))
    thread.start()
    threads.append(thread)
    time.sleep(random.random() * 0.025)

for thread in threads:
    thread.join()

So, stop wasting your time guessing the run-time behaviour of your application and measure it!

  1. As far as I know, Eclipse has a mode to show thread execution.

  2. This pattern is typically frowned upon by seasoned Python hackers, but once in a while it can be of good use.