Category Archives: Linux

Process Google Sheets data in Python with Pandas (example: running distance over time)

Every now and then I do a little bit of data mangling for personal use, using tools that I came to appreciate mainly during professional work.

In this blog post I would like to share an example of simple yet powerful end-to-end data processing using Google Sheets, Python, and Pandas (and I have to note: my first blog post about data analysis with Pandas is already more than six years old; time flies).

Raw data in a spreadsheet: a date and a distance for every run

I have a chaotic spreadsheet in Google Sheets where I keep track of my runs. It is easy to edit from my smartphone, and it is synchronized across devices.

This is a screenshot of a part of that spreadsheet (hiding some irrelevant columns, showing only a small fraction of the rows):

Each row in the date column refers to a specific day using the text format YYYY-MM-DD. Some rows in the km column contain numerical values. Each such value means that on the corresponding day I went for a run of the distance given by the value, in kilometers. A missing value in the km column means that on that day I did not do a run (or forgot to keep track of it). In the screenshot it looks like the days in the date column are represented without gaps, but that is not important.

Now you know the kind of data about my runs that I have been entering manually for about half a year.
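To make this concrete, here is what an excerpt of the two relevant columns might look like when exported as CSV (the values are made up; the real spreadsheet has more columns):

date,km
2019-07-10,3.2
2019-07-11,4.5
2019-07-12,
2019-07-13,5.0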

Analysis goal

My goal today was to do a tiny bit of data analysis — for myself — and to then write this blog post, for you :-).

In terms of data analysis, my goal was to look into the evolution of my running performance over time. I wanted to start by looking at “running distance per week”. Specifically, my goal was to perform a rolling window analysis with a window width wider than a week (to focus on changes on longer time scales more than on fast fluctuations), and to plot the outcome.

So I built a small tool using Python and Pandas which

  • automatically fetches the current dataset from Google Sheets (as a CSV document; see the sketch after this list)
  • processes and prepares the data (removing dirt, filling gaps, …)
  • performs statistical analyses
  • creates a plot (and writes a PNG graphics file)
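The fetching step is plain HTTP. Here is a minimal sketch of the idea, assuming the sheet is link-shared and using Google Sheets' standard CSV export endpoint (the details in runni.py may differ):

import io
import os

import pandas as pd
import requests

# The sheet ID/key, read from the environment (see usage below).
sheet_key = os.environ["RUNNI_GSHEET_KEY"]

# Standard Google Sheets CSV export endpoint (requires link sharing).
url = f"https://docs.google.com/spreadsheets/d/{sheet_key}/export?format=csv"

resp = requests.get(url)
resp.raise_for_status()

# Parse the CSV document into a DataFrame for further processing.
df = pd.read_csv(io.StringIO(resp.text))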

Results first

Here is some code: https://github.com/jgehrcke/runni/blob/master/runni.py

Here is how I use it, and how you can use it, too:

# Get the code.
$ git clone https://github.com/jgehrcke/runni && cd runni
 
# Enable link sharing to the Google Sheet (so that anyone
# with the link can access the sheet). Get the corresponding
# ID/key from the URL, and set it as an environment variable
# (it's sensitive data).
$ export RUNNI_GSHEET_KEY='[snip]'
 
# Dependencies: Python 3, pandas, matplotlib, requests
 
# Run the analysis program.
$ python runni.py
...
200112-19:57:39.060 INFO: Writing PNG figure to 2020-01-12_running-distance-per-week-over-time.png

The resulting plot:
 

For each day in the data interval, a small gray data point explicitly shows the distance I ran on that very day (on most days this is 0 km). In the majority of cases, a non-zero gray data point corresponds to a single run (the only run on the corresponding day), but more generally it is the sum of the distances of all runs on that day.

The thick black line is the “distance per week”, derived from a rolling window analysis with a window width of 14 days.
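For example, with the window width of 14 days: if the 14 daily distance values within a window sum to 40 km, that window position yields 40 km / (14 days / 7 days per week) = 20 km per week.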

In-Pandas data processing in more detail (the non-trivial part)

The following code block (from here) with its code comments shows the core of the in-Pandas data processing, and it is the main reason why I am writing this blog post: I think this is the non-trivial part. In previous projects (goeffel, bouncer-log-analysis, dcos-dev-prod-analysis) I actually put a bit of thought into how to do a meaningful rolling window analysis with Pandas, and here I am simply re-using what I learned before. But some of it is still non-trivial and deserves an explanation.

The code comments are meant to provide a helpful level of explanation. From the comments, it should at least be obvious that several decisions need to be made before the data in the spreadsheet can be analyzed meaningfully in a rolling/sliding window analysis.

# Keep only those rows that have a value set in the `km` column. That is
# the criterion for having made a run on the corresponding day.
df = df[df.km.notnull()]
 
# Parse text in `date` column into a pd.Series of `datetime64` type values
# and make this series be the new index of the data frame.
df.index = pd.to_datetime(df["date"])
 
# Sort data frame by index (sort from past to future).
df = df.sort_index()
 
# Turn the data frame into a `pd.Series` object, representing the distance
# run over time. Every row / data point in this series represents a run:
# the index field is the date (day) of the run and the value is the
# distance of the run.
km_per_run = df["km"]
 
# There may have been more than one run per day. In these cases, sum up the
# distances and have a single row represent all runs of the day.
 
# Example: two runs on 07-11:
# 2019-07-10    3.2
# 2019-07-11    4.5
# 2019-07-11    5.4
# 2019-07-17    4.5
 
# Group events per day and sum up the run distance:
km_per_run = km_per_run.groupby(km_per_run.index).sum()
 
# Outcome for the example above:
# 2019-07-10    3.2
# 2019-07-11    9.9
# 2019-07-17    4.5
 
# The time series index is expected to have gaps: days on which no run was
# recorded. Up-sample the time index to fill these gaps, with 1 day
# resolution. Fill the missing values with zeros. This is not strictly
# necessary for the subsequent analysis but makes the series easier to
# reason about, and makes the rolling window analysis a little simpler: it
# will contain one data point per day, precisely, within the represented
# time interval.
#
# Before:
#   In [28]: len(km_per_run)
#   Out[28]: 75
#
#   In[27]: km_per_run.head()
#   Out[27]:
#   2019-05-27    2.7
#   2019-06-06    2.9
#   2019-06-11    4.6
#   ...
#
# After:
#   In [30]: len(km_per_run)
#   Out[30]: 229
#
#   In [31]: km_per_run.head()
#   Out[31]:
#   2019-05-27    2.7
#   2019-05-28    0.0
#   2019-05-29    0.0
#   2019-05-30    0.0
#   ...
#
km_per_run = km_per_run.asfreq("1D", fill_value=0)
 
# Should be >= 7 to be meaningful.
window_width_days = opts.window_width_days
window = km_per_run.rolling(window="%sD" % window_width_days)
 
# For each window position get the sum of distances. For normalization,
# divide this by the window width (in days) to get values of the unit
# km/day -- and then convert to the new desired unit of km/week with an
# additional factor of 7.
km_per_week = window.sum() / (window_width_days / 7.0)
 
# During the rolling window analysis the value derived from the current
# window position is assigned to the right window boundary (i.e. to the
# newest timestamp in the window). For presentation it is more convenient
# and intuitive to have it assigned to the temporal center of the time
# window. Invoking `rolling(..., center=True)` however yields
# `NotImplementedError: center is not implemented for datetimelike and
# offset based windows`. As a workaround, shift the data by half the window
# size to 'the left': shift the timestamp index by a constant / offset.
offset = pd.DateOffset(days=window_width_days / 2.0)
km_per_week.index = km_per_week.index - offset
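For completeness, the final plotting step is then conceptually as simple as the following sketch (the actual styling and annotation logic in runni.py is more involved):

import matplotlib.pyplot as plt

# Gray dots: distance per day. Black line: rolling window result.
ax = km_per_run.plot(style=".", color="gray", label="km per day")
km_per_week.plot(ax=ax, color="black", linewidth=2, label="km per week (rolling)")
ax.set_ylabel("distance [km]")
ax.legend()
plt.savefig("running-distance-per-week-over-time.png", dpi=150)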

Closing remarks

  • What is shown above is, I think, a well-confined, simple example of real-world data analysis. I like the architecture of maintaining raw data in Google Sheets and then consuming it via HTTP for analysis with proper tooling (the data analysis and plotting options within Google Sheets are very limited, heck). With that example, I hope I can inspire some of you out there to do similar things. There are endless possibilities: in my spreadsheet, I have other columns such as the run duration, … :-).
  • I will keep adding data points to my spreadsheet after about every run and will keep re-generating the graph more or less regularly for my entertainment.
  • The meaning and impact of the window width in the rolling window analysis are critical. I have not explained that above. I think one of the best ways to grasp it is to visually play with it — that’s what the --window-width-days argument can help with.
  • Again, you can find the code for inspiration here: https://github.com/jgehrcke/runni

Setting and using variables in a Makefile: pitfalls

A debug session. I ran into a problem in CI where accessing /var/run/docker.sock from within a container failed with a permission error. I had this specific part working before. So I did a bit of bisecting, added debug output, and found the critical difference. In this case I got a permission error (EACCES):

uid=2000(buildkite-agent) gid=0(root) groups=0(root)

In this case I did not:

uid=2000(buildkite-agent) gid=1001 groups=1001

The difference is in the unix group membership of the unix user uid=2000(buildkite-agent) within the specific container: if the user is a member of gid=1001 then access is allowed; if the user is a member of gid=0 then access is denied.

I found that in the erroneous case I was passing this command line argument to docker run ...:

-u 2000:

Nothing after the colon. This is where the group ID (gid) belongs. docker run ... did not error out. That is no good. This input was treated the same as -u 2000 or -u 2000:0.

But why was there no gid after the colon when I have this in my Makefile?

-u $(shell id -u):${DOCKER_GID_HOST}

Because when this line was run, DOCKER_GID_HOST (a Make variable, not an environment variable) was actually not set.

A shell program would catch this case (variable used, but not set) when run with the -o nounset option, and error out. Make does not have this kind of protection: there is no general protection against using variables that are not set, as far as I know (GNU Make's --warn-undefined-variables flag comes closest, but it merely warns instead of erroring out).

Okay, but why was the variable DOCKER_GID_HOST not set when I have

DOCKER_GID_HOST := $(shell getent group docker | awk -F: '{print $$3}')

right before executing docker run? Well, because this is a Makefile, where you cannot set a variable in one line of a recipe and then use it in the next one (a pattern that we rely on in almost every other programming environment).

The lines of a recipe are executed in independent shells. The next line's state is very much decoupled from the previous line's state; that's how Makefiles work.
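To make the failure mode concrete, here is a minimal sketch (the show-gid-* targets are hypothetical, echo stands in for the actual docker run invocation, and recipe lines must be indented with a tab):

# Broken: the two recipe lines run in two independent shells. The shell
# variable set in the first line is gone when the second line runs, so
# the output ends with a bare colon, just like the -u 2000: case above.
show-gid-broken:
	DOCKER_GID_HOST="$$(getent group docker | awk -F: '{print $$3}')"
	echo "-u $$(id -u):$${DOCKER_GID_HOST}"

# Fix 1: assignment and use in one logical line, i.e. in a single shell.
show-gid-oneshell:
	DOCKER_GID_HOST="$$(getent group docker | awk -F: '{print $$3}')" && \
	echo "-u $$(id -u):$${DOCKER_GID_HOST}"

# Fix 2: evaluate once, at Makefile parse time, into a Make variable.
DOCKER_GID_HOST := $(shell getent group docker | awk -F: '{print $$3}')
show-gid-makevar:
	echo "-u $(shell id -u):$(DOCKER_GID_HOST)"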

This is probably the most important thing to know about Make, and one of the most common mistakes, and I certainly knew this before, and I certainly made this same mistake before. And I made it again, like probably every single time that I had to do things with Make.

Makefiles are flexible and great, and sometimes they make you waste a great deal of time compared to other development environments, man.

There are Stack Overflow threads on the matter, with goodies like workarounds, best practices, and generally helpful discussion.

Command line: extract all zip files into individual directories

I have been professionally using Linux desktop environments for the last 10 years. They all have a number of deficiencies that get in the way of productivity. The lack of a decent GUI archive extraction helper, integrated with the graphical file manager, is just one tiny aspect.

On a fresh Windows system, one of the first applications I typically install is 7zip. It adds convenient entries to the context menu of archive files. For example, it allows for selecting multiple archive files at once, offering a 7-Zip → Extract To "*\" entry in the context menu (I found an example screenshot here). That will extract each selected archive file into an individual sub-directory (with the sub-directory's name being the base name of the archive file, without the file extension). Quite useful!

I have looked a couple of times over the years, but I never found a similar thing for a modern Gnome desktop environment. Let me know if you know of a reliable solution that is well-integrated with one of the popular graphical file managers such as Thunar.

The same can of course be achieved from the terminal. What follows is the one-liner I have been using for the past couple of years. I usually look it up from my shell command history:

find -name '*.zip' -exec sh -c 'unzip -d "${1%.*}" "$1"' _ {} \;

This extracts all zip files in the current directory into individual sub-directories.

If you do not want to extract all zip files in the current directory but only a selection thereof then adjust the command (well, this is where a GUI-based solution would actually be quite handy, no?).
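For example, to only extract the zip files matching a certain name pattern (the photos-* names below are made up), you can narrow the find expression or loop over an explicit selection. In both variants, ${1%.*} / ${f%.*} is the archive path with its file extension stripped, and the _ merely fills $0 of the inline sh script:

find . -maxdepth 1 -name 'photos-*.zip' -exec sh -c 'unzip -d "${1%.*}" "$1"' _ {} \;

# Or, with a plain shell loop over an explicit selection:
for f in photos-2019.zip photos-2020.zip; do
    unzip -d "${f%.*}" "$f"
done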

Running an eBPF program may require lifting the kernel lockdown

Update Sep 28: discussion on Hacker News
Update Sep 30: kernel lockdown merged into mainline kernel

A couple of days ago I wanted to try out the hot eBPF things using the BPF Compiler Collection (BCC) on my Fedora 30 desktop system, with Linux kernel 5.2.15. I could not load eBPF programs into the kernel: strace revealed that the bpf() system call failed with EPERM:

bpf(BPF_PROG_LOAD, {prog_type=[...]}, 112) = -1 EPERM (Operation not permitted)

So, a lack of privileges. Why? I tried …

  • running natively as root instead of in a sudo environment.
  • disabling SELinux completely (instead of running in permissive mode).
  • following setuid-related hints.
  • building BCC from source to make it more likely that it correctly integrates with my system.
  • consulting BCC maintainers via GitHub.

No obvious solution, still EPERM.

I jumped on a few more discussions on GitHub and got a hint from GitHub user deg00 (thank you, anonymous person with no GitHub activity and a picture of a snail!). She wrote: “For Fedora 30, the problem is not selinux but kernel-lockdown”.

I did not know what kernel lockdown is, but I wondered how to disable it. I found the following resources useful:

Temporarily disabling kernel lockdown solved the problem

In the resources linked above, we find that there is a so-called sysrq mechanism that can influence kernel behavior. When /proc/sys/kernel/sysrq is set to 1, the mechanism is configured with the widest set of privileges, including the privilege to lift the kernel lockdown. Writing an x into /proc/sysrq-trigger then actually uses the sysrq mechanism to lift the kernel lockdown.

That indeed worked for me. The following snippet shows the original symptom, despite running as root:

[root@jpx1carb jp]# python3 /usr/share/bcc/examples/hello_world.py 
bpf: Failed to load program: Operation not permitted
 
Traceback (most recent call last):
  File "/usr/share/bcc/examples/hello_world.py", line 12, in 
    BPF(text='int kprobe__sys_clone(void *ctx) { bpf_trace_printk("Hello, World!\\n"); return 0; }').trace_print()
  File "/usr/lib/python3.7/site-packages/bcc/__init__.py", line 344, in __init__
    self._trace_autoload()
  File "/usr/lib/python3.7/site-packages/bcc/__init__.py", line 1090, in _trace_autoload
    fn = self.load_func(func_name, BPF.KPROBE)
  File "/usr/lib/python3.7/site-packages/bcc/__init__.py", line 380, in load_func
    raise Exception("Need super-user privileges to run")
Exception: Need super-user privileges to run

The last error message “Need super-user privileges to run” is misleading. The “Operation not permitted” error further above corresponds to the EPERM shown in the strace output above.

This lifts the kernel lockdown via the sysrq mechanism, as discussed:

[root@jpx1carb jp]# echo 1 > /proc/sys/kernel/sysrq
[root@jpx1carb jp]# echo x > /proc/sysrq-trigger

Now BCC’s hello world example runs fine:

[root@jpx1carb jp]# python3 /usr/share/bcc/examples/hello_world.py 
b'     gnome-shell-3215  [005] .... 58317.922716: 0: Hello, World!'
b'   Socket Thread-26509 [001] .... 58322.093849: 0: Hello, World!'
b'     gnome-shell-3215  [003] .... 58322.923562: 0: Hello, World!'
[...]

Cool, stuff works.

What the heck just happened? I did not understand a thing and correspondingly started to read a bit about these new shiny topics.

What is the “kernel lockdown”?

Most importantly, the concept of the “kernel lockdown” seems to still be evolving.

The mission statement behind the kernel lockdown is hard to put into words without stepping onto anyone’s toes. This is how RedHat worded the goal in 2017:

The kernel lockdown feature is designed to prevent both direct and indirect access to a running kernel image, attempting to protect against unauthorised modification of the kernel image and to prevent access to security and cryptographic data located in kernel memory, whilst still permitting driver modules to be loaded.

However, that goal was and seems to still be subject to a technical as well as a political debate in the Linux ecosystem: in 2018, Zack Brown from Linux Journal published a well-researched and quite entertaining article summarizing the heated discussion about the initial set of lockdown patches. If you would like to understand what kernel lockdown is (or tries to be) then that article is worth reading. A quote from the article’s last few paragraphs:

This type of discussion is unusual for kernel development, but not for this particular type of patch. The attempts to slip code into the kernel that will enable a vendor to lock users out of controlling their own systems always tend to involve the two sides completely talking past each other. Linus and Andy were unable to get Matthew to address the issue the way they wanted, and Matthew was unable to convince Linus and Andy that his technical explanations were genuine and legitimate.

Also, Jonathan Corbet’s LWN article titled Kernel lockdown in 4.17? from April 2018 is worth a read.

And how do I know if my kernel is locked down? dmesg!

Here’s some dmesg output from my system. It is quite revealing, almost prose:

[    0.000000] Kernel is locked down from EFI secure boot; see man kernel_lockdown.7
[...]
[    2.198433] Lockdown: systemd: BPF is restricted; see man kernel_lockdown.7
[...]
[58310.913828] Lifting lockdown

First, as you can see, the kernel told me exactly that it is “locked down” (even providing the reason: because EFI secure boot is enabled on my system).

Secondly, it was kind enough to say that this affects (e)BPF things! Maybe I should read the kernel messages more often :-).

Thirdly, after quite a bit of system uptime, the “Lifting lockdown” was emitted in response to, well, me lifting the lockdown with the above-mentioned sysrq mechanism.

That is, if you wonder if and how this affects your system, try doing a dmesg | grep lockdown!

The kernel acting “differently depending on some really esoteric detail in how it was booted”…?

When I approached the BCC maintainers about the EPERM error on Fedora 30 they first responded with (basically) “it’s working for me”. Someone actually booted a VM with a fresh Fedora 30 install. And they were unable to reproduce. How can that be? The difference was whether secure boot was enabled or not: it was for my actual desktop machine, but not for their VM setup. That is quite a lesson learned, and maybe an important take-home message from this blog post.

This annoying debugging experience was predicted by Linus Torvalds. A quote from one of his initial reviews of the kernel lockdown patches (source, April 2018):

I do not want my kernel to act differently depending on some really esoteric detail in how it was booted. That is fundamentally wrong. […] Is that really so hard to understand? […] Look at it this way: maybe lockdown breaks some application because that app does something odd. I get a report of that happening, and it so happens that the reporter is running the same distro I am, so I try it with his exact kernel configuration, and it works for me. […] It is *entirely* non-obvious that the reporter happened to run a distro kernel that had secure boot enabled, and I obviously do not.

Well, he was right.

Which kernel versions have the lockdown feature built-in?

Lockdown has not yet landed in the mainline kernel (but see the update below). My Fedora 30 with kernel 5.2.15 is affected (with a specific variant of the lockdown patches, not necessarily the final thing!) because RedHat has chosen to build the lockdown patches into recent Fedora kernels, to try them out in the wild.

Will it land in the mainline kernel? When? And how will it behave, exactly? Just a couple of days ago Phoronix published an interesting article, titled Kernel Lockdown Feature Will Try To Land For Linux 5.4. Quote:

After going through 40+ rounds of revisions and review, the Linux kernel “LOCKDOWN” feature might finally make it into the Linux 5.4 mainline kernel.

While not yet acted upon by Linus Torvalds with the Linux 5.4 merge window not opening until next week, James Morris has submitted a pull request introducing the kernel lockdown mode for Linux 5.4.

The kernel lockdown support was previously rejected from mainline but since then it’s been separated from the EFI Secure Boot code as well as being implemented as a Linux security module (LSM) to address some of the earlier concerns over the code. There’s also been other improvements to the design of this module.

Various organizations seem to be pushing hard for this feature to land. It is taking long, but convergence around the details seems to be taking place.

What is the relationship between kernel lockdown and (e)BPF?

I think it is quite fair to ask: does it make sense that all-things-BPF are affected by the kernel lockdown feature? What does lockdown even have to do with eBPF in the first place?

I should say that I am not super qualified to talk about this because I have only researched the topic for about a day now. But I find it highly interesting that

  • these questions seemingly have been under active debate since the first lockdown patch proposals
  • these questions seem to still be actively debated!

Andy Lutomirski reviewed in 2018:

“bpf: Restrict kernel image access functions when the kernel is locked
down”: This patch just sucks in general. At the very least, it should
only apply to […] But you should probably just force all eBPF
users through the unprivileged path when locked down instead, since eBPF
is really quite useful even with the stricter verification mode.

This shows that there was some pretty fundamental debate about the relationship between eBPF and kernel lockdown from the start.

I believe that the following quote shows how eBPF can, in general, conflict with the goal(s) of kernel lockdown (from the commit message of a 2019 version of a lockdown patch fragment):

From: David Howells <dhowells@redhat.com>

There are some bpf functions can be used to read kernel memory:
bpf_probe_read, bpf_probe_write_user and bpf_trace_printk.  These allow
private keys in kernel memory (e.g. the hibernation image signing key) to
be read by an eBPF program and kernel memory to be altered without
restriction. Disable them if the kernel has been locked down in
confidentiality mode.

Suggested-by: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Matthew Garrett <mjg59@google.com>
cc: netdev@vger.kernel.org
cc: Chun-Yi Lee <jlee@suse.com>
cc: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>

This commit message rather convincingly justifies that something needs to be done about eBPF when the kernel is locked down (so that the goals of the lockdown do not get undermined!). However, it is not entirely clear what exactly should be done: how exactly eBPF is supposed to be affected, and how its inner workings and aspects are to be confined when the kernel is locked down. What follows is a reply from Andy Lutomirski to the above commit message: “:) This is yet another reason to get the new improved bpf_probe_user_read stuff landed!”

And indeed, only last month (August 2019) Andy published a work-in-progress patch set titled bpf: A bit of progress toward unprivileged use.

What I learned is that with the current Fedora 30 and its 5.2.x kernel I neither see the “final” lockdown feature nor the “final” relationship between lockdown and eBPF. This is very much work in progress, worse than “cutting edge”: what works today might break tomorrow, with the next kernel update :-)!

By the way, I started to look into eBPF for https://github.com/jgehrcke/goeffel, a tool for measuring the resource utilization of a specific process over time.

Update Sep 30: lockdown just landed in the mainline kernel, wow! Quote from the commit message, clarifying important topics (such as that lockdown will not be tightly coupled to secure boot):

This is the latest iteration of the kernel lockdown patchset, from
  Matthew Garrett, David Howells and others.
 
  From the original description:
 
    This patchset introduces an optional kernel lockdown feature,
    intended to strengthen the boundary between UID 0 and the kernel.
    When enabled, various pieces of kernel functionality are restricted.
    Applications that rely on low-level access to either hardware or the
    kernel may cease working as a result - therefore this should not be
    enabled without appropriate evaluation beforehand.
 
    The majority of mainstream distributions have been carrying variants
    of this patchset for many years now, so there's value in providing a
    doesn't meet every distribution requirement, but gets us much closer
    to not requiring external patches.
 
  There are two major changes since this was last proposed for mainline:
 
   - Separating lockdown from EFI secure boot. Background discussion is
     covered here: https://lwn.net/Articles/751061/
 
   -  Implementation as an LSM, with a default stackable lockdown LSM
      module. This allows the lockdown feature to be policy-driven,
      rather than encoding an implicit policy within the mechanism.

gipc 1.0.0 release

More than three years after the 0.6.0 release I have published gipc version 1.0.0 today.

Release highlights

This release focuses on reliability and platform compatibility. It brings along a number of changes relevant for running it on Windows and macOS, as well as for running it under PyPy.

Both gevent 1.2 and 1.3 are now officially supported. On Linux, gipc now officially supports CPython 2.7, 3.4, 3.5, 3.6, PyPy2.7, and PyPy3. On Windows, gipc officially supports gevent 1.3 on CPython 2.7, 3.4, 3.5, 3.6, and 3.7. Support for gevent 1.1 and CPython 3.3 has been dropped.

The API did not change. In view of the stability of the API over the recent years I thought it was time to officially declare it as such, and to follow the semantic versioning spec’s point 5: “Version 1.0.0 defines the public API” :-).

For this release most of the work went into

  • fixing a small number of platform-dependent bugs (this one was interesting, and this one was pretty insightful and ugly).
  • setting up a continuous integration (CI) pipeline for Linux and macOS (on Travis CI) as well as for Windows (via AppVeyor).
  • re-writing and re-styling significant parts of the documentation: the new docs are online and can be found at https://gehrcke.de/gipc (for comparison: old docs).
  • moving the repository from Bitbucket to GitHub (I also migrated issues using this well-engineered helper).
  • making tests more stable.
  • running the example programs as part of CI, on all supported platforms (required a number of consolidations).

Acknowledgements

I would like to thank the following people who have helped with this release, be it by submitting bug reports, by asking great questions, with testing, or with a bit of code: Heungsub Lee, James Addison, Akhil Acharya, Oliver Margetts.

Changelog for this release

For the record, the complete changelog for this release copied from CHANGELOG.rst:

New platform support:

  • Add support for PyPy on Linux. Thanks to Oliver Margetts and to Heungsub Lee for patches.

Fixes:

  • Fix a bug that made gipc crash when passing “raw” pipe handles between processes on Windows (see issue #63).
  • Fix can't pickle gevent._semaphore.Semaphore error on Windows.
  • Fix ModuleNotFoundError in test_wsgi_scenario.
  • Fix signal handling in example infinite_send_to_child.py.
  • Work around segmentation fault after fork on Mac OS X (affected test_wsgi_scenario and example program wsgimultiprocessing.py).

Test / continuous integration changes:

  • Fix a rare instability in test_exitcode_previous_to_join.
  • Make test_time_sync more stable.
  • Run the example programs as part of CI (run all on Linux and Mac, run most on Windows).
  • Linux main test matrix (all combinations are covered):
    • gevent dimension: gevent 1.2.x, gevent 1.3.x.
    • Python implementation dimension: CPython 2.7, 3.4, 3.5, 3.6, PyPy2.7, PyPy3.
  • Also test on Linux: CPython 3.7, pyenv-based PyPy3 and PyPy2.7 (all with gevent 1.3.x only).
  • Mac OS X tests (all with gevent 1.3.x):
    • pyenv Python builds: CPython 2.7, 3.6, PyPy3
    • system CPython
  • On Windows, test with gevent 1.3.x and CPython 2.7, 3.4, 3.5, 3.6, 3.7.

Potentially breaking changes:

  • gevent 1.1 is not tested anymore.
  • CPython 3.3 is not tested anymore.