Category Archives: technical things

Bokeh: disable touch interaction (disable drag, zoom, pan)

Bokeh is quite cool. I was looking for a way to disable touch interaction with a plot, though. It was a little tedious to find a solution on the web. I eventually found it, buried in the docs. Hence this quick post.

Say you have created a figure object with

fig = bokeh.plotting.figure(...)

Then you can disable the individual touch controls with e.g.

fig.toolbar.active_drag = None
fig.toolbar.active_scroll = None
fig.toolbar.active_tap = None

Very easy to do, you just need someone to tell you :-).

(Tested with Bokeh 2.0.0)
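Putting the snippets above together, here is a minimal self-contained sketch (it assumes a Bokeh 2.x installation; the data and the output filename are arbitrary choices for this example):

```python
# Minimal sketch: a scatter plot with drag, scroll-zoom and tap
# interaction disabled. Assumes Bokeh 2.x is installed.
from bokeh.plotting import figure, output_file, save

fig = figure(title="No touch interaction")
fig.circle([1, 2, 3], [4, 5, 6], size=10)

# Deactivate the individual interactive tools.
fig.toolbar.active_drag = None
fig.toolbar.active_scroll = None
fig.toolbar.active_tap = None

output_file("no_touch_plot.html")  # arbitrary filename for this demo
save(fig)
```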

Bulma: sticky footer (flexbox solution)

Bulma is nice. I was looking for a way to get a sticky footer, though. Like many others, I was a little surprised that it’s not built-in.

It’s of course doable. I have created a demo / minimal working example based on the solution proposed here.

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <title>Footer at the bottom / sticky footer</title>
    <!-- Bulma stylesheet -->
    <link
      rel="stylesheet"
      href="https://cdn.jsdelivr.net/npm/bulma@0.8.0/css/bulma.min.css"
    />
    <style type="text/css" media="screen">
      body {
        display: flex;
        min-height: 100vh;
        flex-direction: column;
      }
      #wrapper {
        flex: 1;
      }
    </style>
  </head>
  <body>
    <div id="wrapper">
      <section class="hero is-medium is-primary is-bold">
        <div class="hero-body">
          <div class="container">
            <h1 class="title">
              The footer is at the bottom. Seriously. 🩠
            </h1>
            <h2 class="subtitle">
              By <a href="">Jan-Philip Gehrcke</a>
            </h2>
          </div>
        </div>
      </section>
    </div>
    <footer class="footer">
      <div class="content has-text-centered">
        I am down here.
      </div>
    </footer>
  </body>
</html>

I posted this in the GH thread, too.

Covid-19: HTTP API for German case numbers

Landing page:

The Robert Koch-Institut is certainly a cool organization, but I doubt they understand the role of (HTTP) APIs for data exchange. I believe that government institutions still vastly underestimate the power of collaboration on data.

Who would have believed that, during a pandemic in 2020, we would communicate current numerical data such as case counts via PDF documents or complex websites that can only be scraped with brittle tooling and headless browsers?

I closely monitored the situation for days, asked people, asked organizations. Nothing.

Now I have built an HTTP API, providing the currently confirmed case numbers of Covid-19 infections in Germany:

The primary concerns are:

  • convenience (easy to consume for you in your tooling!)
  • interface stability
  • data credibility
  • availability
$ curl 2> /dev/null | jq
{
  "current_totals": {
    "cases": 9348,
    "deaths": 25,
    "recovered": 72,
    "tested": "unknown"
  },
  "meta": {
    "contact": "Dr. Jan-Philip Gehrcke,",
    "source": " (aggregates data from individual ministries of health in Germany)",
    "time_source_last_consulted_iso8601": "2020-03-18T00:11:24+00:00",
    "time_source_last_updated_iso8601": "2020-03-17T21:22:00+01:00"
  }
}
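To sketch how a client might consume this response shape in code (the parsing helper and the example payload below are hypothetical, modeled on the structure shown above):

```python
import json

def parse_current_totals(payload: str) -> dict:
    """Extract the current_totals object from an API response body."""
    doc = json.loads(payload)
    return doc["current_totals"]

# Example payload mimicking the response structure shown above:
example_payload = """
{
  "current_totals": {"cases": 9348, "deaths": 25,
                     "recovered": 72, "tested": "unknown"},
  "meta": {}
}
"""
totals = parse_current_totals(example_payload)
```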

This is served by Google App Engine in Europe. The code can be found here:

I plan to

  • add time series data
  • add more localized data for individual states (BundeslĂ€nder)
  • enhance caching

Feel free to use this. Feedback welcome.

Huge shoutout to for doing the work of aggregating the numbers published by individual ministries of health.

For historical data, for all intents and purposes, as of today I recommend consuming . For getting the current state, use the data exposed via the HTTP API described above.

For now, I am sure that the current case count as provided by is the best in terms of credibility and freshness. The actual underlying data sources are all official: these are the individual ministries of health.

The individual ministries usually publish their numbers once or twice per day, at varying times. The journalists from try to incorporate these data points as quickly as possible right after publication, also during the afternoon and evening. In contrast, the Robert Koch-Institut (RKI) may incorporate a specific update from a specific health ministry only after 1-2 days.

The RKI also doesn’t do what I call an atomic sum: instead, it appears to sum numbers published by different health ministries at vastly different times. The RKI tries to find one number per day, and that number is not determined in the evening (after all data has come in from the individual states), but seemingly at some unfortunate mid-day point in time at which some individual ministries of health have just delivered a fresh update for the day, while others haven’t yet. Non-atomic.
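A toy numerical illustration of the difference (all numbers invented):

```python
# Toy model (invented numbers): each state reports a cumulative case
# count at certain hours of the day. An "atomic" daily sum snapshots
# all states at the same late hour; a non-atomic sum taken mid-day
# mixes fresh updates from some states with stale values from others.
reports = {
    "state_A": [(9, 100), (20, 150)],  # (hour of report, cumulative cases)
    "state_B": [(10, 200)],
}

def latest_as_of(history, hour):
    """Most recent cumulative value reported at or before `hour`."""
    values = [cases for h, cases in history if h <= hour]
    return values[-1] if values else 0

# Non-atomic: summed at noon, missing state_A's evening update.
midday_sum = sum(latest_as_of(h, 12) for h in reports.values())   # 300

# Atomic: summed in the evening, after all updates for the day.
evening_sum = sum(latest_as_of(h, 23) for h in reports.values())  # 350
```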

This explains why, for example, the RKI’s official number for March 17 was ~7000 confirmed cases, whereas already reported ~9300 at the same time (the biggest contributor being that the last update from Nordrhein-Westfalen on March 17 did not make it into the RKI’s sum for March 17).


Update: an official statement by the RKI about the delays in data processing (translated from German):

In Germany, the roughly 400 GesundheitsĂ€mter (local health authorities) transmit pseudonymized data on confirmed COVID-19 cases electronically to the federal states at least once a day (more frequently in the current situation), on the basis of the Infektionsschutzgesetz (Infection Protection Act). The states, in turn, transmit the COVID-19 case data electronically to the RKI. Since 2020-03-18, daily reporting has been based on the state of the data as of 00:00.

A certain amount of time passes between a case becoming known locally, the report to the health authority, entry of the data into the software, transmission to the responsible state authority, and from there to the RKI. According to the provisions of the Infektionsschutzgesetz, this may take two to three working days. In the current situation, transmission happens considerably faster than in routine operation because data are processed more quickly. That some cases are recorded electronically by the health authorities with a slight delay is also due to the fact that the authorities must first investigate the individual cases and their contacts and must prioritize infection-control measures, which already puts a heavy strain on their resources. Likewise, the data are validated at the RKI so that reliable data can be published. Within this process, too, minor delays can occur.


SETI@home hibernation


On March 31, the volunteer computing part of SETI@home will stop distributing work and will go into hibernation.


That is emotional for me. I just posted this comment on HN, and decided to quickly turn it into a small blog post for me to properly archive this memory. Something to look back on, 20 years from now.

Back then, around 2002, I was quite young. We were about five boys from my small hometown in Germany who got into overclocking, crunching for our SETI team, the “Bücki crunching connection”.

I just tried to find an old screenshot from back in the day, and wow I found one, from 2002:

So funny, it’s all so anonymous. But it is all there: ICQ, mIRC, an icon to launch Quake III. Gazillions of bookmarks about gaming. And some SETI crunching stats. In Internet Explorer.

Seemingly, we were actually crunching under one shared account for the team.

You might have done the same, but I am still sharing this because this has influenced me a lot:

I bought an AMD Duron, some “Arctic Silver II” heat paste. I took a lead pencil to connect some dots on the CPU to unlock the multiplier freely, got a freaking heat sink, and overclocked the hell out of the Duron. I needed to hide this from my parents, but of course the plan was to crunch 24/7.

Looks like our team (“SETI Team”) was actually among the top 200 of all SETI teams. Wow, yeah there were some serious people in the team, like “Butcho”, ranking in the top 1000 of individuals. No idea who that guy was and where he got the compute resources from. That’s the romantic part of that Internet era.

I found another screenshot, the file is called “duri@fsb133.jpg”. Looks like I knew what I was doing:

Another hilarious screenshot, also showing my ICQ contact list from that time. I still know these people by their nicknames, but you don’t. Ha:

GitHub: Y? Y!

I am not the type of person who remembers shortcuts/hotkeys. Only a few valuable ones stick: those that pass the test of time.

Before I share a GitHub link pointing to code, I press y. This might be one of the most important hotkeys of my day-to-day work. And I would love to see more of you people out there doing that, too!

It makes the URL in the location bar point to the specific revision of the code you are looking at, as opposed to the head of the current branch (which often is master).
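In URL terms, pressing y replaces the branch name in the path with the commit SHA of the revision currently shown (the repository and SHA below are made up for illustration):

```python
# Hypothetical URLs illustrating the effect of pressing "y" on GitHub:
branch_url = ("https://github.com/example/repo/"
              "blob/master/src/main.py#L10")
permalink = ("https://github.com/example/repo/"
             "blob/0a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b/"
             "src/main.py#L10")

def is_permalink(url: str) -> bool:
    """Heuristic: the ref segment after /blob/ is a full 40-char hex SHA."""
    ref = url.split("/blob/", 1)[1].split("/", 1)[0]
    return len(ref) == 40 and all(c in "0123456789abcdef" for c in ref)
```

The branch URL resolves against whatever `master` points to at click time; the permalink can never drift, because the SHA pins the exact revision.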

You have two options:

  1. Select the line of code, press y, copy the URL, share it (for example in a GitHub comment). In the future, that URL will either stop working or point to exactly the file, line or code section you wanted to refer to. One or the other. No room for misleading moving-target effects (malicious intent excluded, but even that is hopefully close to impossible). It is quite likely that even after years the comment, prose, article or chat log in which you used that URL will still be perfectly meaningful. How cool is that?
  2. Select the line of code, copy the URL, share it. Lucky you, you just saved yourself a keystroke (did not press y). But the URL you just shared quite likely points to a moving target. In a busy file in a busy project, it might only take a day until it no longer refers to quite the right thing. On longer time scales it gets worse: once you put such a URL into a public GitHub comment, email, forum post or chat, your content might first start to look subtly wrong, and later very wrong. Somewhere on the spectrum in between, not all future readers will realize what actually happened. That can be quite misleading.

Choose for yourself :-).

Given the role GitHub has come to play in how we do software engineering these days, I think it is pretty important to spread this. Tell your peeps!

Reference: Getting permanent links to files in the GitHub docs.