Category Archives: Linux

Command line: extract all zip files into individual directories

I have been professionally using Linux desktop environments for the last 10 years. They all have a number of deficiencies that get in the way of productivity. The lack of a decent GUI archive extraction helper, integrated with the graphical file manager, is just one tiny aspect.

On a fresh Windows system one of the first applications I typically install is 7zip. It adds convenient entries to the context menu of archive files. For example, it allows for selecting multiple archive files at once and then offers a 7-Zip → Extract To "*\" entry in the context menu. That extracts each selected archive file into an individual sub-directory (with the sub-directory’s name being the base name of the archive file, without the file extension). Quite useful!

I have looked a couple of times over the years, but I never found a similar thing for a modern Gnome desktop environment. Let me know if you know of a reliable solution that is well-integrated with one of the popular graphical file managers such as Thunar.

The same can of course be achieved from the terminal. What follows is the one-liner I have been using for the past couple of years. I usually look it up from my shell command history:

find -name '*.zip' -exec sh -c 'unzip -d "${1%.*}" "$1"' _ {} \;

This extracts each zip file found in the current directory (and, since find descends recursively, in its sub-directories) into an individual directory named after the archive.
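The ${1%.*} part is standard shell parameter expansion: it strips the shortest suffix matching .* (i.e. the file extension), which is how the target directory ends up carrying the archive’s base name. A quick illustration (the file name is just a placeholder):

$ f="notes.zip"; echo "${f%.*}"
notes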

If you do not want to extract all zip files but only a selection thereof, then adjust the command accordingly, for example along the lines of the small loop sketched below (well, this is where a GUI-based solution would actually be quite handy, no?).
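A minimal sketch, assuming bash and that the archives of interest are listed explicitly (the file names here are placeholders):

# Extract only the listed archives, each into a sub-directory named after it:
for f in notes.zip photos.zip; do
    unzip -d "${f%.*}" "$f"
done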

Running an eBPF program may require lifting the kernel lockdown

Update Sep 28: discussion on Hacker News
Update Sep 30: kernel lockdown merged into mainline kernel

A couple of days ago I wanted to try out the hot eBPF things using the BPF Compiler Collection (BCC) on my Fedora 30 desktop system, with Linux kernel 5.2.15. I could not load eBPF programs into the kernel: strace revealed that the bpf() system call failed with EPERM:

bpf(BPF_PROG_LOAD, {prog_type=[...]}, 112) = -1 EPERM (Operation not permitted)
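An strace invocation along the following lines surfaces that failing system call (a sketch; the exact command line is an assumption and not part of the session shown here):

# Trace only the bpf() system call while running BCC's hello world example:
strace -f -e trace=bpf python3 /usr/share/bcc/examples/hello_world.py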

So, a lack of privileges. Why? I tried …

  • running natively as root instead of in a sudo environment.
  • disabling SELinux completely (instead of running in permissive mode).
  • following setuid-related hints.
  • building BCC from source to make it more likely that it correctly integrates with my system.
  • consulting BCC maintainers via GitHub.

No obvious solution, still EPERM.

I jumped on a few more discussions on GitHub and got a hint from GitHub user deg00 (thank you, anonymous person with no GitHub activity and a picture of a snail!). She wrote: “For Fedora 30, the problem is not selinux but kernel-lockdown”.

I did not know what the kernel lockdown is, but I wondered how to disable it. A couple of resources turned out to be useful; they describe the kernel’s sysrq mechanism.

Temporarily disabling kernel lockdown solved the problem

The sysrq mechanism can be used to influence kernel behavior at runtime. When /proc/sys/kernel/sysrq is set to 1, the full set of sysrq functions is enabled, including the one that lifts the kernel lockdown. Writing an x to /proc/sysrq-trigger then actually invokes that function and lifts the lockdown.
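To see how sysrq is currently configured before changing anything:

# Show the current sysrq setting (1 means all sysrq functions are enabled,
# 0 means disabled, other values are a bitmask of allowed functions):
cat /proc/sys/kernel/sysrq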

Lifting the lockdown this way indeed worked for me. The following snippet shows the original symptom, despite running as root:

[root@jpx1carb jp]# python3 /usr/share/bcc/examples/hello_world.py 
bpf: Failed to load program: Operation not permitted
 
Traceback (most recent call last):
  File "/usr/share/bcc/examples/hello_world.py", line 12, in 
    BPF(text='int kprobe__sys_clone(void *ctx) { bpf_trace_printk("Hello, World!\\n"); return 0; }').trace_print()
  File "/usr/lib/python3.7/site-packages/bcc/__init__.py", line 344, in __init__
    self._trace_autoload()
  File "/usr/lib/python3.7/site-packages/bcc/__init__.py", line 1090, in _trace_autoload
    fn = self.load_func(func_name, BPF.KPROBE)
  File "/usr/lib/python3.7/site-packages/bcc/__init__.py", line 380, in load_func
    raise Exception("Need super-user privileges to run")
Exception: Need super-user privileges to run

The last error message “Need super-user privileges to run” is misleading. The “Operation not permitted” error further up in the traceback corresponds to the EPERM shown in the strace output at the beginning of this post.

This lifts the kernel lockdown via the sysrq mechanism, as discussed:

[root@jpx1carb jp]# echo 1 > /proc/sys/kernel/sysrq
[root@jpx1carb jp]# echo x > /proc/sysrq-trigger

Now BCC’s hello world example runs fine:

[root@jpx1carb jp]# python3 /usr/share/bcc/examples/hello_world.py 
b'     gnome-shell-3215  [005] .... 58317.922716: 0: Hello, World!'
b'   Socket Thread-26509 [001] .... 58322.093849: 0: Hello, World!'
b'     gnome-shell-3215  [003] .... 58322.923562: 0: Hello, World!'
[...]

Cool, stuff works.

What the heck just happened? I did not understand a thing, so I started to read a bit about these shiny new topics.

What is the “kernel lockdown”?

Most importantly, the concept of the “kernel lockdown” still seems to be evolving.

The mission statement behind the kernel lockdown is hard to put into words without stepping on anyone’s toes. This is how RedHat worded the goal in 2017:

The kernel lockdown feature is designed to prevent both direct and indirect access to a running kernel image, attempting to protect against unauthorised modification of the kernel image and to prevent access to security and cryptographic data located in kernel memory, whilst still permitting driver modules to be loaded.

However, that goal was and seems to still be subject to a technical as well as a political debate in the Linux ecosystem: in 2018, Zack Brown from Linux Journal published a well-researched and quite entertaining article summarizing the heated discussion about the initial set of lockdown patches. If you would like to try to understand what kernel lockdown is (or tries to be), then that article is worth reading. A quote from the article’s last few paragraphs:

This type of discussion is unusual for kernel development, but not for this particular type of patch. The attempts to slip code into the kernel that will enable a vendor to lock users out of controlling their own systems always tend to involve the two sides completely talking past each other. Linus and Andy were unable to get Matthew to address the issue the way they wanted, and Matthew was unable to convince Linus and Andy that his technical explanations were genuine and legitimate.

Also, Jonathan Corbet’s LWN article titled Kernel lockdown in 4.17? from April 2018 is worth a read.

And how do I know if my kernel is locked down? dmesg!

Here’s some dmesg output from my system. It is quite revealing, almost prose:

[    0.000000] Kernel is locked down from EFI secure boot; see man kernel_lockdown.7
[...]
[    2.198433] Lockdown: systemd: BPF is restricted; see man kernel_lockdown.7
[...]
[58310.913828] Lifting lockdown

First, as you can see, the kernel told me exactly that it is “locked down” (even providing the reason: because EFI secure boot is enabled on my system).

Secondly, it was kind enough to say that this affects (e)BPF things! Maybe I should read the kernel messages more often :-).

Thirdly, after quite a bit of system uptime, the “Lifting lockdown” was emitted in response to, well, me lifting the lockdown with the above-mentioned sysrq mechanism.

That is, if you wonder whether and how this affects your system, try a quick dmesg | grep lockdown!
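On kernels that ship lockdown as an LSM (the variant that later landed in mainline with Linux 5.4), the current lockdown mode is additionally exposed via securityfs; note that this file does not necessarily exist on older, distribution-patched kernels such as the Fedora 30 kernel discussed here:

# Case-insensitive search for lockdown-related kernel log messages:
dmesg | grep -i lockdown

# On lockdown-as-LSM kernels: show the active mode (the entry in square
# brackets is the current one, e.g. [none] integrity confidentiality):
cat /sys/kernel/security/lockdown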

The kernel acting “differently depending on some really esoteric detail in how it was booted”…?

When I approached the BCC maintainers about the EPERM error on Fedora 30, they first responded with (basically) “it’s working for me”. Someone actually booted a VM with a fresh Fedora 30 install and was unable to reproduce the problem. How can that be? The difference was whether secure boot was enabled or not: it was for my actual desktop machine, but not for their VM setup. That is quite a lesson learned, and maybe an important take-home message from this blog post.

This annoying debugging experience was predicted by Linus Torvalds. A quote from one of his initial reviews of the kernel lockdown patches (source, April 2018):

I do not want my kernel to act differently depending on some really esoteric detail in how it was booted. That is fundamentally wrong. […] Is that really so hard to understand? […] Look at it this way: maybe lockdown breaks some application because that app does something odd. I get a report of that happening, and it so happens that the reporter is running the same distro I am, so I try it with his exact kernel configuration, and it works for me. […] It is *entirely* non-obvious that the reporter happened to run a distro kernel that had secure boot enabled, and I obviously do not.

Well, he was right.

Which kernel versions have the lockdown feature built-in?

Lockdown has not yet landed in the mainline kernel. My Fedora 30 with kernel 5.2.15 is affected (by a specific variant of the lockdown patches, not necessarily the final thing!) because RedHat has chosen to build the lockdown patches into recent Fedora kernels, to try the feature out in the wild.

Will it land in the mainline kernel? When? And how will it behave, exactly? Just a couple of days ago Phoronix published an interesting article, titled Kernel Lockdown Feature Will Try To Land For Linux 5.4. Quote:

After going through 40+ rounds of revisions and review, the Linux kernel “LOCKDOWN” feature might finally make it into the Linux 5.4 mainline kernel.

While not yet acted upon by Linus Torvalds with the Linux 5.4 merge window not opening until next week, James Morris has submitted a pull request introducing the kernel lockdown mode for Linux 5.4.

The kernel lockdown support was previously rejected from mainline but since then it’s been separated from the EFI Secure Boot code as well as being implemented as a Linux security module (LSM) to address some of the earlier concerns over the code. There’s also been other improvements to the design of this module.

Various organizations seem to be pushing hard for this feature to land. It is taking a long time, but convergence on the details seems to be happening.

What is the relationship between kernel lockdown and (e)BPF?

I think it is quite fair to ask: does it make sense that all-things-BPF are affected by the kernel lockdown feature? What does lockdown even have to do with eBPF in the first place?

I should say that I am not super qualified to talk about this because I have only researched this topic for about a day now. But I find it highly interesting that

  • these questions seemingly have been under active debate since the first lockdown patch proposals
  • these questions seem to still be actively debated!

Andy Lutomirski reviewed in 2018:

“bpf: Restrict kernel image access functions when the kernel is locked
down”: This patch just sucks in general. At the very least, it should
only apply to […] But you should probably just force all eBPF
users through the unprivileged path when locked down instead, since eBPF
is really quite useful even with the stricter verification mode.

This shows that there was some pretty fundamental debate about the relationship between eBPF and kernel lockdown from the start.

I believe that the following quote shows how eBPF can, in general, conflict with the goal(s) of kernel lockdown (the commit message of a 2019 version of one of the lockdown patches):

From: David Howells <dhowells@redhat.com>

There are some bpf functions can be used to read kernel memory:
bpf_probe_read, bpf_probe_write_user and bpf_trace_printk.  These allow
private keys in kernel memory (e.g. the hibernation image signing key) to
be read by an eBPF program and kernel memory to be altered without
restriction. Disable them if the kernel has been locked down in
confidentiality mode.

Suggested-by: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Matthew Garrett <mjg59@google.com>
cc: netdev@vger.kernel.org
cc: Chun-Yi Lee <jlee@suse.com>
cc: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>

This commit message rather convincingly justifies that something needs to be done about eBPF when the kernel is locked down (so that the goals of the lockdown do not get undermined!). However, it is not entirely clear what exactly should be done, i.e. how exactly eBPF and its inner workings are supposed to be confined when the kernel is locked down. What follows is a reply from Andy Lutomirski to the above commit message: “:) This is yet another reason to get the new improved bpf_probe_user_read stuff landed!”

And indeed, only last month (August 2019) Andy published a work-in-progress patch set titled bpf: A bit of progress toward unprivileged use.

What I learned is that with the current Fedora 30 and its 5.2.x kernel I see neither the “final” lockdown feature nor the “final” relationship between lockdown and eBPF. This is very much work in progress, worse than “cutting edge”: what works today might break tomorrow, with the next kernel update :-)!

By the way, I started to look into eBPF for https://github.com/jgehrcke/goeffel, a tool for measuring the resource utilization of a specific process over time.

Update Sep 30: lockdown just landed in the mainline kernel, wow! A quote from the commit message, clarifying important aspects (such as that lockdown will not be tightly coupled to secure boot):

This is the latest iteration of the kernel lockdown patchset, from
  Matthew Garrett, David Howells and others.
 
  From the original description:
 
    This patchset introduces an optional kernel lockdown feature,
    intended to strengthen the boundary between UID 0 and the kernel.
    When enabled, various pieces of kernel functionality are restricted.
    Applications that rely on low-level access to either hardware or the
    kernel may cease working as a result - therefore this should not be
    enabled without appropriate evaluation beforehand.
 
    The majority of mainstream distributions have been carrying variants
    of this patchset for many years now, so there's value in providing a
    doesn't meet every distribution requirement, but gets us much closer
    to not requiring external patches.
 
  There are two major changes since this was last proposed for mainline:
 
   - Separating lockdown from EFI secure boot. Background discussion is
     covered here: https://lwn.net/Articles/751061/
 
   -  Implementation as an LSM, with a default stackable lockdown LSM
      module. This allows the lockdown feature to be policy-driven,
      rather than encoding an implicit policy within the mechanism.

gipc 1.0.0 release

More than three years after the 0.6.0 release I have published gipc version 1.0.0 today.


Release highlights

This release focuses on reliability and platform compatibility. It brings along a number of changes relevant for running it on Windows and macOS, as well as for running it under PyPy.

Both gevent 1.2 and 1.3 are now officially supported. On Linux, gipc now officially supports CPython 2.7, 3.4, 3.5, 3.6, PyPy2.7, and PyPy3. On Windows, gipc officially supports gevent 1.3 on CPython 2.7, 3.4, 3.5, 3.6, and 3.7. Support for gevent 1.1 and CPython 3.3 has been dropped.

The API did not change. In view of the stability of the API over recent years I thought it was time to officially declare it as such, and to follow the semantic versioning spec’s point 5: “Version 1.0.0 defines the public API” :-).

For this release most of the work went into

  • fixing a small number of platform-dependent bugs (this one was interesting, and this one was pretty insightful and ugly).
  • setting up a continuous integration (CI) pipeline for Linux and macOS (on Travis CI) as well as for Windows (via AppVeyor).
  • re-writing and re-styling significant parts of the documentation: the new docs are online and can be found at https://gehrcke.de/gipc (for comparison: old docs).
  • moving the repository from Bitbucket to GitHub (I also migrated issues using this well-engineered helper).
  • making tests more stable.
  • running the example programs as part of CI, on all supported platforms (required a number of consolidations).

Acknowledgements

I would like to thank the following people who have helped with this release, be it by submitting bug reports, by asking great questions, with testing, or with a bit of code: Heungsub Lee, James Addison, Akhil Acharya, Oliver Margetts.

Changelog for this release

For the record, the complete changelog for this release copied from CHANGELOG.rst:

New platform support:

  • Add support for PyPy on Linux. Thanks to Oliver Margetts and to Heungsub Lee for patches.

Fixes:

  • Fix a bug as of which gipc crashed when passing “raw” pipe handles between processes on Windows (see issue #63).
  • Fix can't pickle gevent._semaphore.Semaphore error on Windows.
  • Fix ModuleNotFoundError in test_wsgi_scenario.
  • Fix signal handling in example infinite_send_to_child.py.
  • Work around segmentation fault after fork on Mac OS X (affected test_wsgi_scenario and example program wsgimultiprocessing.py).

Test / continuous integration changes:

  • Fix a rare instability in test_exitcode_previous_to_join.
  • Make test_time_sync more stable.
  • Run the example programs as part of CI (run all on Linux and Mac, run most on Windows).
  • Linux main test matrix (all combinations are covered):
    • gevent dimension: gevent 1.2.x, gevent 1.3.x.
    • Python implementation dimension: CPython 2.7, 3.4, 3.5, 3.6, PyPy2.7, PyPy3.
  • Also test on Linux: CPython 3.7, pyenv-based PyPy3 and PyPy2.7 (all with gevent 1.3.x only).
  • Mac OS X tests (all with gevent 1.3.x):
    • pyenv Python builds: CPython 2.7, 3.6, PyPy3
    • system CPython
  • On Windows, test with gevent 1.3.x and CPython 2.7, 3.4, 3.5, 3.6, 3.7.

Potentially breaking changes:

  • gevent 1.1 is not tested anymore.
  • CPython 3.3 is not tested anymore.

Atomically switch a directory tree served by nginx

This post briefly demonstrates how to atomically switch a directory tree of static files served by nginx.

Consider the following minimal nginx config file:

$ cat conf/nginx.conf 
events {
    use epoll;
}
 
http {
    server {
        listen 0.0.0.0:80;
        location / {
            root /static/current ;
        }
    }
}

The goal is to replace the directory /static/current atomically while nginx is running.

This snippet shows the directory layout that I started out with:

$ tree
.
├── conf
│   └── nginx.conf
└── static
    ├── version-1
    │   └── hello.html
    └── version-2
        └── hello.html

conf/nginx.conf is shown above. The static directory contains two sub trees, and the goal is to switch from version-1 to version-2.

For this demonstration I have started a containerized nginx from its official Docker image:

$ docker run -v $(realpath static):/static:ro -v $(realpath conf):/etc/nginx:ro -p 127.0.0.1:8088:80 nginx nginx -g 'daemon off;'

This mounts the ./static directory as well as the nginx configuration file into the container, and exposes nginx listening on port 8088 on the local network interface of the host machine.

Then, in the ./static directory one can choose the directory tree served by nginx by setting a symbolic link, and one can subsequently switch the directory tree atomically, as follows:

1) No symbolic link is set yet — leading to a 404 HTTP response (the path /static/current does not exist in the container from nginx’ point of view):

$ curl http://localhost:8088/hello.html
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.15.6</center>
</body>
</html>

2) Set the current symlink to serve version-1:

$ cd static
$ ln -s version-1 current && curl http://localhost:8088/hello.html
hello 1

3) Prepare a new symlink for version-2 (but don’t switch yet):

$ ln -s version-2 newcurrent

4) Atomically switch to serving version-2:

$ mv -fT newcurrent current && curl http://localhost:8088/hello.html
hello 2

In step (4) it is essential to use mv -fT, which changes the symlink with a single rename() system call. ln -sfn would also appear to work, but it uses two system calls under the hood and therefore leaves a brief time window during which opening files can fail because the path is invalid.
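If you want to convince yourself of that difference, tracing the file-related system calls of both tools is a quick way to do it (a sketch; the exact system call names, e.g. rename() vs. renameat2(), depend on the coreutils and kernel versions in use, and you may need to re-create the newcurrent symlink if you already performed step 4):

# ln -sfn typically shows a failed symlinkat() followed by unlinkat() + symlinkat():
strace -e trace=%file ln -sfn version-2 current 2>&1 | grep -E 'symlink|unlink'

# mv -fT switches the link with a single rename()-family call:
strace -e trace=%file mv -fT newcurrent current 2>&1 | grep rename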

Final directory layout including the symlink current (currently pointing to version-2):

$ tree
.
├── conf
│   └── nginx.conf
└── static
    ├── current -> version-2
    ├── version-1
    │   └── hello.html
    └── version-2
        └── hello.html

Kudos to https://rcrowley.org/2010/01/06/things-unix-can-do-atomically.html for being a great reference.

Download article as PDF file from Elsevier’s ScienceDirect via command line (curl)

When not in the office, we often cannot directly access scientific literature, because access control is usually based on IP addresses. However, we usually have SSH access to the university network. Logged in to a machine in the university network, we should, in theory, be able to access a certain article. Most of the time it is the PDF file that we are interested in, and not the “web page” corresponding to an article. So, can’t we just $ curl http://whatever.com/article.pdf to get that file? Usually this does not work, because access to journal articles typically happens through rather complex web sites, such as Elsevier’s ScienceDirect:

ScienceDirect is a leading full-text scientific database offering journal articles and book chapters from nearly 2,500 journals and 26,000 books.

Such web sites add a considerable amount of complexity to the technical task of downloading a file. The problem usually starts with obtaining the direct URL to the PDF file. Also, HTTP redirection and cookies are usually involved. Often, the only solution people see is to set up a VPN, use a fully fledged browser through that VPN, and let the browser deal with the complexity.

However, I prefer to get back to the basics and always strive to somehow find a direct URL to the PDF file to then download it via curl or wget.

This is my solution for Elsevier’s ScienceDirect:

Say, for instance, you wish to download the PDF version of this article: http://www.sciencedirect.com/science/article/pii/S0169433215012131

Then all you need is that URL and the following commands executed on a common Linux system:

export SDURL="http://www.sciencedirect.com/science/article/pii/S0169433215012131"
curl -Lc cookiejar "${SDURL}" | grep pdfurl | perl -pe 's|.* pdfurl=\"(.*?)\".*|\1|' > pdfurl
curl -Lc cookiejar "$(cat pdfurl)" > article.pdf

The method first parses the HTML source code of the main page corresponding to the article and extracts a URL to the PDF file. At the same time, it also stores the HTTP cookie(s) set by the web server when accessing that page. These cookies are then re-used when accessing the PDF file directly. This has reproducibly worked for me.

If it does not work for you, I recommend having a look into the file pdfurl to see whether that part of the process has led to a meaningful result. Obviously, the second step can only succeed after a proper URL to the PDF file has been obtained.
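A couple of quick sanity checks, assuming the snippet above produced the two files pdfurl and article.pdf:

# The extracted URL should be a single, plausible-looking link to a PDF file:
cat pdfurl

# A successful download should be detected as a PDF document
# (a real PDF file starts with the magic bytes "%PDF-"):
file article.pdf
head -c 5 article.pdf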

This snippet should not be treated as a black box. Please execute it in an empty directory. Also note that this snippet only works subject to the condition that ScienceDirect keeps functioning the way it does right now (which most likely is the case for the next couple of months or years).

Don’t hesitate to get back to me if you have any questions!