Category Archives: Technology

NodeJS http.ClientRequest “finish” event: has the request body been flushed out?

The popular got HTTP client from the NodeJS ecosystem uses the http-timer package, which itself uses the “finish” event on an http.ClientRequest to determine the point in time when the HTTP request body has been written to the remote end (has been “uploaded”). Code.

This point in time is then used as the reference for measuring the time it takes for the HTTP server on the other end to generate a response: often called “time to first byte” (TTFB), this client-side metric measures the duration between sending out the last byte of the request and receiving the first byte of the response. TTFB is often used as a server performance metric, indicating how long the server took to process the request.

My suspicion was that the TTFB numbers I saw in my scenario were significantly off.

I then ran a quick verification experiment in which I sent an HTTP request with an approximately 10 MB body to an HTTP server under my control. I confirmed that the TCP upload takes roughly 30 seconds (reproducibly, through my slowish but stable Internet connection), and that the HTTP server sends a response immediately once it has consumed the request body. For all intents and purposes of this quick sanity check, the actual request upload duration is therefore ~30 seconds, and the actual TTFB is practically zero.

What did http-timer measure? In one attempt, the “finish” event on the http.ClientRequest fired after about 17 seconds, resulting in an alleged TTFB of about 13 seconds. Repetitions yielded 20 s / 10 s, 15 s / 15 s, and more samples in the same ballpark. That is, the method seems to significantly underestimate the request upload duration (it places the point in time when the HTTP request body has been written to the remote end too early), resulting in an overestimated TTFB (many seconds instead of ~zero).

I suspected that the presence of some big buffer(s) architecturally skews the numbers. So I did this verification experiment with small TCP write buffers on my host operating system:

$ echo 'net.ipv4.tcp_wmem = 4096 16384 65536' >> /etc/sysctl.conf
$ echo 'net.core.wmem_max=65536' >> /etc/sysctl.conf
 
$ sysctl -p
 
$ cat /proc/sys/net/core/wmem_max
65536
 
$ cat /proc/sys/net/ipv4/tcp_wmem
4096    16384   65536
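
(For reference: the three tcp_wmem values are the minimum, default, and maximum per-socket TCP send buffer sizes in bytes, and net.core.wmem_max caps the send buffer size that applications can request.)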

From here I figured that most probably there is some kind of request buffering going on within the got/NodeJS system, whose internals are largely unknown to me.

At least, I believe that a big buffer between the client code and the host’s TCP stack would make “determining the point in time when the HTTP request body has been written to the remote end” an ill-posed problem, explaining the discrepancy between the actual timings and the measured timings.
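
To make the buffering argument concrete, here is a minimal sketch in Python (deliberately independent of the NodeJS stack; the endpoint and payload are placeholders) demonstrating that a client-side “all bytes written” signal only means that the kernel has accepted the data, not that the remote end has received it:

import socket
import time

# ~10 MB dummy payload, mirroring the experiment above.
payload = b"x" * (10 * 1024 * 1024)

# Placeholder endpoint: a TCP server under your control that consumes
# incoming data.
s = socket.create_connection(("myserver.example.org", 8080))

t0 = time.monotonic()
# sendall() returns once the kernel has accepted the final chunk into its
# send buffer -- not once the peer has acknowledged or consumed it.
s.sendall(payload)
t1 = time.monotonic()

print(f"sendall() returned after {t1 - t0:.2f} s")
s.close()

Any “finish”-style event built on top of such writes inherits this ambiguity: with large send buffers, the measured “upload done” time can be seconds earlier than the actual one.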

In NodeJS, an http.ClientRequest is a Writable stream. It has the concept of a highWaterMark (which I understand to be, roughly, a buffer capacity), which by default seems to be set to 16 kB and can be configured upon stream construction.

In the NodeJS standard library I found that the highWaterMark is not explicitly set in the routine which largely implements http.ClientRequest construction. Is it set somewhere else? It does not seem to be set in the got code base either. grep revealed that _http_outgoing.js is a place worth looking at:

_http_outgoing.js:65:const HIGH_WATER_MARK = getDefaultHighWaterMark();

All I found here is that this simply sets the default (16 kB). I fiddled with the code a bit to be really sure, and found that an http.ClientRequest object in the context of got indeed uses the default of 16 kB (16384 bytes) for writableHighWaterMark.

From here I am pretty clueless. I am reasonably confident that my quick verification experiment has shown a significant deviation between reality and what is measured, but as of today, with limited knowledge about NodeJS’s inner workings, I cannot explain this deviation. Can you? Is the TCP stack of my Linux system tricking me? Is there some additional buffering going on in the inner workings of libuv (the event loop underlying NodeJS)? Let me know!

An interesting thing I noticed is that similar timing measurement work happens in request, another established HTTP client. Their timing measurement code, however, does not even attempt to determine the point in time when the HTTP request body has been written to the remote end. The timing measurement code was introduced with request/pull/2452, and there is some lovely discussion about the details. This mildly suggests that the problem is indeed ill-posed in NodeJS (and I’d still love to understand why!).

NPM: retry package download upon “429 Too Many Requests”

Sometimes the NPM package registry is a bit loaded, and individual GET HTTP requests emitted by NPM are then answered with 429 (Too Many Requests) HTTP responses.

It seems that by default NPM might actually retry once or twice, but it then errors out quickly with an error message like this:

npm ERR! 429 Too Many Requests - GET https://registry.npmjs.org/argparse/-/argparse-1.0.10.tgz

Especially in CI systems you want NPM to retry more often, with an appropriate back-off, to increase the chance of self-healing transient struggles.

To achieve that you can tune four retry-related configuration parameters documented here. For starters, I now simply set fetch-retries upon installing dependencies:

npm install . --fetch-retries 10
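
The other three parameters are fetch-retry-factor, fetch-retry-mintimeout, and fetch-retry-maxtimeout. As far as I can tell, they map to the corresponding settings of the retry package that NPM uses under the hood: the delay between attempts starts at fetch-retry-mintimeout milliseconds, grows exponentially by fetch-retry-factor, and is capped at fetch-retry-maxtimeout milliseconds.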

How to run minikube on Fedora (30)

Today I ran minikube on my notebook running Fedora 30. What follows is a list of steps to do that.

First, install KVM/libvirt and peripheral tooling in one go:

sudo dnf install @virtualization
sudo systemctl start libvirtd
sudo systemctl enable libvirtd
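
(As far as I know, the @virtualization package group pulls in qemu-kvm, libvirt, virt-install, and related packages.)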

Do a bit of verification:

lsmod | grep kvm
kvm_intel             303104  0
kvm                   782336  1 kvm_intel
irqbypass              16384  1 kvm

Set up the most recent minikube release:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo mkdir -p /usr/local/bin/
sudo install minikube /usr/local/bin/
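
(The install command copies the binary and sets executable permissions in one go.)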

Start minikube using the kvm2 driver:

$ minikube start --vm-driver=kvm2
😄  minikube v1.7.1 on Fedora 30
✨  Using the kvm2 driver based on user configuration
💾  Downloading driver docker-machine-driver-kvm2:
    > docker-machine-driver-kvm2.sha256: 65 B / 65 B [-------] 100.00% ? p/s 0s
    > docker-machine-driver-kvm2: 13.82 MiB / 13.82 MiB  100.00% 1.19 MiB p/s 1
💿  Downloading VM boot image ...
    > minikube-v1.7.0.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s
    > minikube-v1.7.0.iso: 166.68 MiB / 166.68 MiB [-] 100.00% 9.21 MiB p/s 19s
🔥  Creating kvm2 VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.17.2 on Docker '19.03.5' ...
💾  Downloading kubeadm v1.17.2
💾  Downloading kubelet v1.17.2
💾  Downloading kubectl v1.17.2
🚜  Pulling images ...
🚀  Launching Kubernetes ... 
🌟  Enabling addons: default-storageclass, storage-provisioner
⌛  Waiting for cluster to come online ...
🏄  Done! kubectl is now configured to use "minikube"
💡  For best results, install kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/
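
From here, minikube status is a quick way to confirm that the cluster is up; once kubectl is installed, kubectl get nodes should list the minikube node.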

Time lapse video of raw photos (ffmpeg)

I sometimes get back home with tens of gigabytes of image data distributed across thousands of photo files. This is when the actual work starts: sorting, labeling, selecting, agonizing over finding the best shots, and developing them.

This time I also wanted to make a “time lapse” video of all the undeveloped photos I took. A nice little short film of all the scenes I tried to capture. A tribute to all those photos that would otherwise end up in the photo graveyard: in the archive, never to be looked at again.

For converting a series of JPEG image files into a properly encoded video the open-source program of choice is ffmpeg. Building up an appropriate ffmpeg command took a while. Here is the command I used:

ffmpeg.exe -f image2 -framerate 8 -i resized_renamed_2560/resized_%05d.jpg \
  -filter:v "scale='min(2560,iw)':'min(1440,ih)':force_original_aspect_ratio=decrease,pad=2560:1440:(ow-iw)/2:(oh-ih)/2" \
  -c:v libx265 -crf 28 -preset medium video-2560-h265-medium.mp4

The video filter syntax (the part after -filter:v) instructs ffmpeg to output a video with 2560 pixels width and 1440 pixels height, retaining the aspect ratio of the input image files. That last bit is important because a large set of photos usually contains two orientations: most photos are shot in landscape orientation, some in portrait orientation. Putting both into the same video means that one of the two needs special treatment. The video has the same aspect ratio as my landscape photos, so those are not treated specially. Portrait photos, though, are handled by the above command so that their original aspect ratio is retained: the longest edge is made to fit, the image is centered, and the empty space at its left and right is filled with black padding. Kudos to this and that.
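
To make that concrete with made-up numbers: a 4000×6000 pixel portrait photo is scaled down by force_original_aspect_ratio=decrease to 960×1440 (the height is the limiting dimension), and the pad step then centers it horizontally at x offset (2560-960)/2 = 800, leaving 800 pixels of black on each side.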

The -framerate 8 is an arbitrary choice; it means that I want 8 of my input images per second of output video.
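
With the roughly 2500 input images mentioned below, this yields about 2500 / 8 ≈ 312 seconds, i.e. a bit over five minutes of video.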

I chose the H265 settings based on reference examples here.

I ran ffmpeg on Windows (which is the platform I process my photos on, mainly with Lightroom).

Here is how I generated the JPEG files used as input for ffmpeg in the first place:

  • I first used a proprietary Sony program (Imaging Edge) for batch-converting my ~2500 raw (ARW) photos to JPEG files, with full resolution. I would have loved to use an open-source program for that. However, that is not recommendable. The goal here is to get reasonably good-looking JPEG images from the camera’s raw files despite not manually developing them. Automatically converting a camera’s raw image to a JPEG image already implies that the result looks crappy to a certain degree (contrasts, white balance, colors — all of that will just be flat, mediocre, a middle ground). In the spectrum between a really crappy outcome and a not-even-so-bad outcome the proprietary software simply does the best job.
  • I then used IrfanView for batch-processing the JPEG files from the previous step into JPEG files easily digestible by ffmpeg. Specifically, I down-scaled (longest side being 2560 pixels long, towards the desired video dimensions) and renamed (zero-filled, consecutive 5 digit counter in the file name). Corresponding to the resized_%05d.jpg in the ffmpeg command line shown above, the files were named like this:
    resized_00001.jpg
    resized_00002.jpg
    resized_00003.jpg
    resized_00004.jpg
    ...
    resized_02583.jpg

Process Google Sheets data in Python with Pandas (example: running distance over time)

Every now and then I do a little bit of data mangling for personal use using tools that I got to appreciate, mainly during professional work.

In this blog post I would like to share an example of simple yet powerful end-to-end data processing using Google Sheets, Python, and Pandas (and I notice: my first blog post about data analysis with Pandas is already more than six years old; time flies).

Raw data in a spreadsheet: a date and a distance for every run

I have a chaotic spreadsheet in Google Sheets where I keep track of my runs. It is easy to edit from the smartphone, and synchronized across devices.

This is a screenshot of a part of that spreadsheet (hiding some irrelevant columns, showing only a small fraction of the rows):

Each row in the date column refers to a specific day using the text format YYYY-MM-DD. Some rows in the km column contain numerical values. Each of these means that I have made a run of the distance given by the value, in kilometers, on the corresponding day. Missing values in the km column mean that on those days I did not do a run (or forgot to keep track of it). In the screenshot, it looks like days in the date column are represented without gaps, but that is not important.
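
For illustration, the two relevant columns look roughly like this (values made up):

date          km
2019-11-02    5.2
2019-11-03
2019-11-04    6.4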

Now you know the kind of data about my runs that I have been entering manually for about half a year.

Analysis goal

My goal today was to do a tiny bit of data analysis — for myself — and to then write this blog post, for you :-).

In terms of data analysis, my goal was to look into the evolution of my running performance over time. I wanted to start by looking at “running distance per week”. Specifically, my goal was to perform a rolling window analysis with a window width wider than a week (to focus on changes on longer time scales more than on fast fluctuations), and to plot the outcome.

So I built a small tool using Python and Pandas which

  • automatically fetches the current dataset from Google Sheets (as a CSV document; a sketch of this step is shown further below)
  • processes and prepares the data (removing dirt, filling gaps, …)
  • performs statistical analyses
  • creates a plot (and writes a PNG graphics file)

Results first

Here is some code: https://github.com/jgehrcke/runni/blob/master/runni.py

Here is how I use it, and how you can use it, too:

# Get the code.
$ git clone https://github.com/jgehrcke/runni && cd runni
 
# Enable link sharing to the Google Sheet (so that anyone
# with the link can access the sheet). Get the corresponding
# ID/key from the URL, and set it as an environment variable
# (it's sensitive data).
$ export RUNNI_GSHEET_KEY='[snip]'
 
# Dependencies: Python 3, pandas, matplotlib, requests
 
# Run the analysis program.
$ python runni.py
...
200112-19:57:39.060 INFO: Writing PNG figure to 2020-01-12_running-distance-per-week-over-time.png

The resulting plot is the PNG file referenced in the log output above. What it shows:

For each day in the data interval, a small gray data point explicitly shows the distance I ran on that day (on most days this is 0 km). In the majority of cases, a non-zero gray data point corresponds to a single run (the only run on that day), but more generally it is the sum of the distances of all runs on that day.

The thick black line is the “distance per week”, derived from a rolling window analysis with a window width of 14 days.

In-Pandas data processing in more detail (the non-trivial part)

The following code block (from here), with its code comments, shows the core of the in-Pandas data processing, and it is the main reason why I wrote this blog post: I think this is the non-trivial part. In previous projects (goeffel, bouncer-log-analysis, dcos-dev-prod-analysis) I put a bit of thought into how to do a meaningful rolling window analysis with Pandas, and here I am simply re-using what I learned before. But some of it is still non-trivial and deserves an explanation.

The code comments are supposed to provide a hopefully helpful level of explanation. From the comments, it should at least be obvious that several decisions need to be made before the data in the spreadsheet can be analyzed in a meaningful way in a rolling/sliding window analysis.
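
One preliminary: before this block runs, the CSV document must have been fetched and parsed into the DataFrame df, roughly like in this minimal sketch (assuming Google’s standard CSV export endpoint; the actual code in runni.py may differ in details):

import io
import os

import pandas as pd
import requests

# The sheet ID/key is sensitive data, hence read from the environment.
key = os.environ["RUNNI_GSHEET_KEY"]
url = f"https://docs.google.com/spreadsheets/d/{key}/export?format=csv"

resp = requests.get(url)
resp.raise_for_status()

# Parse the CSV document into a DataFrame with `date` and `km` columns.
df = pd.read_csv(io.StringIO(resp.text))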

# Keep only those rows that have a value set in the `km` column. That is
# the criterion for having made a run on the corresponding day.
df = df[df.km.notnull()]
 
# Parse text in `date` column into a pd.Series of `datetime64` type values
# and make this series be the new index of the data frame.
df.index = pd.to_datetime(df["date"])
 
# Sort data frame by index (sort from past to future).
df = df.sort_index()
 
# Turn the data frame into a `pd.Series` object, representing the distance
# ran over time. Every row / data point in this series represents a run:
# the index field is the date (day) of the run and the value is the
# distance of the run.
km_per_run = df["km"]
 
# There may have been more than one run per day. In these cases, sum up the
# distances and have a single row represent all runs of the day.
 
# Example: two runs on 07-11:
# 2019-07-10    3.2
# 2019-07-11    4.5
# 2019-07-11    5.4
# 2019-07-17    4.5
 
# Group events per day and sum up the run distance:
km_per_run = km_per_run.groupby(km_per_run.index).sum()
 
# Outcome for the example above:
# 2019-07-10    3.2
# 2019-07-11    9.9
# 2019-07-17    4.5
 
# The time series index is expected to have gaps: days on which no run was
# recorded. Up-sample the time index to fill these gaps, with 1 day
# resolution. Fill the missing values with zeros. This is not strictly
# necessary for the subsequent analysis but makes the series easier to
# reason about, and makes the rolling window analysis a little simpler: it
# will contain one data point per day, precisely, within the represented
# time interval.
#
# Before:
#   In [28]: len(km_per_run)
#   Out[28]: 75
#
#   In[27]: km_per_run.head()
#   Out[27]:
#   2019-05-27    2.7
#   2019-06-06    2.9
#   2019-06-11    4.6
#   ...
#
# After:
#   In [30]: len(km_per_run)
#   Out[30]: 229
#
#   In [31]: km_per_run.head()
#   Out[31]:
#   2019-05-27    2.7
#   2019-05-28    0.0
#   2019-05-29    0.0
#   2019-05-30    0.0
#   ...
#
km_per_run = km_per_run.asfreq("1D", fill_value=0)
 
# Should be >= 7 to be meaningful.
window_width_days = opts.window_width_days
window = km_per_run.rolling(window="%sD" % window_width_days)
 
# For each window position get the sum of distances. For normalization,
# divide this by the window width (in days) to get values of the unit
# km/day -- and then convert to the new desired unit of km/week with an
# additional factor of 7.
km_per_week = window.sum() / (window_width_days / 7.0)
 
# During the rolling window analysis the value derived from the current
# window position is assigned to the right window boundary (i.e. to the
# newest timestamp in the window). For presentation it is more convenient
# and intuitive to have it assigned to the temporal center of the time
# window. Invoking `rolling(..., center=True)` however yields
# `NotImplementedError: center is not implemented for datetimelike and
# offset based windows`. As a workaround, shift the data by half the window
# size to 'the left': shift the timestamp index by a constant / offset.
offset = pd.DateOffset(days=window_width_days / 2.0)
km_per_week.index = km_per_week.index - offset
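
For completeness, a minimal plotting sketch (assuming matplotlib; the actual plotting code in runni.py is more elaborate):

import matplotlib.pyplot as plt

# Gray dots: distance per day. Black line: rolling-window result.
ax = km_per_run.plot(style=".", color="gray")
km_per_week.plot(ax=ax, color="black", linewidth=2)
ax.set_ylabel("km")
plt.savefig("running-distance-per-week-over-time.png", dpi=150)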

Closing remarks

  • What is shown above is, I think, a well-confined, simple example of real-world data analysis. I like the architecture of maintaining raw data in Google Sheets and then consuming it via HTTP for analysis with proper tooling (the data analysis and plotting options within Google Sheets are rather limited). With this example, I hope I can inspire some of you out there to do similar things. There are endless possibilities: in my spreadsheet, I have other columns, such as the run duration, … :-).
  • I will keep adding data points to my spreadsheet after about every run and will keep re-generating the graph more or less regularly for my entertainment.
  • The meaning and impact of the window width in the rolling window analysis are critical, and I have not explained them above. I think one of the best ways to grasp it is to play with it visually; that is what the --window-width-days argument can help with.
  • Again, you can find the code for inspiration here: https://github.com/jgehrcke/runni