Monthly Archives: February 2020

NPM: retry package download upon “429 Too Many Requests”

Sometimes the NPM package registry is under heavy load, and individual HTTP GET requests emitted by NPM are answered with 429 (Too Many Requests) responses.

By default, NPM seems to retry once or twice, but then it quickly errors out with a message like this:

npm ERR! 429 Too Many Requests - GET https://registry.npmjs.org/argparse/-/argparse-1.0.10.tgz

Especially in CI systems you want NPM to retry more often, with an appropriate back-off, to increase the chance that transient hiccups heal themselves.

To achieve that, you can tune the four retry-related configuration parameters documented here. For starters, I now simply set fetch-retries when installing dependencies:

npm install . --fetch-retries 10
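If you want to tune all four parameters, for example in a CI job, you can also persist them with npm config set (which stores them in your .npmrc). The values below are merely an illustration, not the settings used in this post:

npm config set fetch-retries 10
npm config set fetch-retry-factor 10
npm config set fetch-retry-mintimeout 20000
npm config set fetch-retry-maxtimeout 120000

The timeouts are in milliseconds; together with the factor they control the exponential back-off between attempts.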

How to run minikube on Fedora (30)

Today I ran minikube on my notebook running Fedora 30. What follows is a list of steps to do that.

First, install KVM/libvirt and the tooling around it in one go:

sudo dnf install @virtualization
sudo systemctl start libvirtd
sudo systemctl enable libvirtd

Do a bit of verification:

lsmod | grep kvm
kvm_intel             303104  0
kvm                   782336  1 kvm_intel
irqbypass              16384  1 kvm
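
Optionally, libvirt ships a small self-validation tool that performs a more thorough check (this step is an addition of mine and may require the libvirt-client package):

virt-host-validate qemu

Among other things, it verifies hardware virtualization support and access to /dev/kvm.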

Set up the most recent minikube release:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo mkdir -p /usr/local/bin/
sudo install minikube /usr/local/bin/
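
As a quick sanity check that the binary ended up on the PATH (not part of the original walkthrough; the reported version will of course differ over time):

minikube version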

Start minikube using the kvm2 driver:

$ minikube start --vm-driver=kvm2
😄  minikube v1.7.1 on Fedora 30
✨  Using the kvm2 driver based on user configuration
💾  Downloading driver docker-machine-driver-kvm2:
    > docker-machine-driver-kvm2.sha256: 65 B / 65 B [-------] 100.00% ? p/s 0s
    > docker-machine-driver-kvm2: 13.82 MiB / 13.82 MiB  100.00% 1.19 MiB p/s 1
💿  Downloading VM boot image ...
    > minikube-v1.7.0.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s
    > minikube-v1.7.0.iso: 166.68 MiB / 166.68 MiB [-] 100.00% 9.21 MiB p/s 19s
🔥  Creating kvm2 VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.17.2 on Docker '19.03.5' ...
💾  Downloading kubeadm v1.17.2
💾  Downloading kubelet v1.17.2
💾  Downloading kubectl v1.17.2
🚜  Pulling images ...
🚀  Launching Kubernetes ...
🌟  Enabling addons: default-storageclass, storage-provisioner
⌛  Waiting for cluster to come online ...
🏄  Done! kubectl is now configured to use "minikube"
💡  For best results, install kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/
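
To follow that hint, one option is to download a kubectl binary that matches the cluster's Kubernetes v1.17.2 and verify that the node is up. This is a sketch of mine, not part of the original session:

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.2/bin/linux/amd64/kubectl
sudo install kubectl /usr/local/bin/
kubectl get nodes

The last command should list a single node named minikube in Ready state.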

Time lapse video of raw photos (ffmpeg)

I sometimes get back home with tens of gigabytes of image data distributed across thousands of photo files. This is when the actual work starts: sorting, labeling, selecting, agonizing over finding the best shots, and developing them.

This time I also wanted to make a “time lapse” video of all the undeveloped photos I took. A nice little short film of all the scenes I tried to capture. A tribute to all those photos that would otherwise end up in the photo graveyard: in the archive, never to be looked at again.

For converting a series of JPEG image files into a properly encoded video, the open-source program of choice is ffmpeg. Building up an appropriate ffmpeg command took a while. Here is the command I used:

ffmpeg.exe -f image2 -framerate 8 -i resized_renamed_2560/resized_%05d.jpg \
  -filter:v "scale='min(2560,iw)':'min(1440,ih)':force_original_aspect_ratio=decrease,pad=2560:1440:(ow-iw)/2:(oh-ih)/2" \
  -c:v libx265 -crf 28 -preset medium video-2560-h265-medium.mp4

The video filter syntax (the part after -filter:v) instructs ffmpeg to output a video with a width of 2560 pixels and a height of 1440 pixels, retaining the aspect ratio of the input image files. That last bit is important because in a large set of photos we usually find two different aspect ratios: most photos are shot in landscape orientation, but some in portrait orientation. Putting both into the same video means that one of the two types needs to be treated in a special way. The video is going to have the same aspect ratio as my landscape photos, so they are not treated in a special way. For portrait photos, though, the above command retains their original aspect ratio by making the long edge fit the frame height, placing the image at the center of the video, and filling the empty space to its left and right with black padding. Kudos to this and that.
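To make that concrete with a small calculation of mine (assuming a hypothetical 2:3 portrait photo that was pre-resized to 1707x2560 pixels): the scale filter shrinks it to about 960x1440 so that it fits the 1440 pixel frame height, and the pad filter then adds roughly 800 pixels of black on each side to fill the 2560x1440 frame.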

The -framerate 8 is an arbitrary choice: it means that I want 8 of my input images per second of output video.
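As a quick plausibility check (assuming the counter in the file names shown further below runs consecutively from 00001 to 02583): 2583 frames at 8 frames per second yield about 323 seconds, so the resulting video is a bit over five minutes long.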

I chose the H.265 encoder settings based on reference examples here.

I ran ffmpeg on Windows (which is the platform I process my photos on, mainly with Lightroom).

Here is how I generated the JPEG files used as input for ffmpeg in the first place:

  • I first used a proprietary Sony program (Imaging Edge) for batch-converting my ~2500 raw (ARW) photos to JPEG files at full resolution. I would have loved to use an open-source program for that, but I cannot recommend it here. The goal is to get reasonably good-looking JPEG images from the camera's raw files despite not manually developing them. Automatically converting a camera's raw image to a JPEG already implies that the result looks somewhat crappy (contrast, white balance, colors: all of that will just be flat, mediocre, a middle ground). On the spectrum between a really crappy outcome and a not-even-so-bad outcome, the proprietary software simply does the best job.
  • I then used IrfanView to batch-process the JPEG files from the previous step into JPEG files easily digestible by ffmpeg. Specifically, I down-scaled them (longest side being 2560 pixels long, towards the desired video dimensions) and renamed them (zero-filled, consecutive 5-digit counter in the file name); a rough shell-based alternative for this step is sketched after this list. Corresponding to the resized_%05d.jpg in the ffmpeg command line shown above, the files were named like this:
    resized_00001.jpg
    resized_00002.jpg
    resized_00003.jpg
    resized_00004.jpg
    ...
    resized_02583.jpg
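
For reference, a shell-based alternative to the IrfanView batch step could look like the sketch below. It assumes ImageMagick is installed and that the full-resolution JPEG files live in a directory called full_resolution (a hypothetical name); it is not what was actually used for this video:

# Resize so that the longest side is at most 2560 px ("2560x2560>" only
# shrinks images that are larger than that) and rename to a zero-padded
# counter, matching the resized_%05d.jpg pattern used by ffmpeg above.
mkdir -p resized_renamed_2560
i=1
for f in full_resolution/*.jpg; do
    convert "$f" -resize '2560x2560>' \
        "$(printf 'resized_renamed_2560/resized_%05d.jpg' "$i")"
    i=$((i + 1))
done

The glob is expanded in lexical order, so as long as the input file names sort chronologically, the original shooting order is preserved in the video.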