Nginx-Ingress Reverse Proxy in Front of Object Storage for Inflexible WebGL and Stubborn CORS

Rationale: in a Unity app generating WebGL with remote-asset loading, where we have very little control over the generated code, I’m limited to one solution to comply with CORS: hosting the assets on the same domain the app runs on.
In this context it indeed seems impossible to correctly set the crossorigin attribute in the generated WebGL code. If you think otherwise, please teach me in the comments.

TLDR; It works fine of course, but don’t waste your time deploying your own reverse proxy if you want to keep the benefits of a CDN and if performance is key in your deployment.

This wouldn’t be an issue if the current provider, DigitalOcean, allowed me to add a custom hostname to its Object Storage offering (Spaces). It actually allows it, but only for domains managed by DigitalOcean itself. This is nonsense to anyone able to add a DNS entry (anyone with two fingers and a tongue?) and a no-go for us, as our DNS is hosted by CloudFlare and acts as good protection for the infrastructure.

CloudFlare is a reverse proxy (and more) and hides our origin servers from the wild wild web. I would have expected this provider to let me proxy any FQDN to our origin server, for instance to forward requests to, and return content from, the Object Storage. This is unfortunately not available in the free tier; it seems to be available in the business plan under the name CNAME Setup, but I don’t agree with the price for such a “simple” feature. We are already paying with our data, and it seems that competitors offer this feature in their free tiers.

I excluded the idea of “writing” a reverse proxy in JavaScript and using CloudFlare Workers because… well, it’s nonsense shitty tech. The net is not a trash can and needs no more bullshit solutions.

Our services run in a Kubernetes cluster, so if I tolerate the performance trade-off of running a reverse proxy on my own rented infrastructure, this solution is “free”, relatively clean, and allows us to move forward with our project.

There are drawbacks. Many. For anticipated heads-ups and warnings, see the end of this post.

With the Service and Ingress settings below, requests sent to OBJECT-STORAGE-RP.ORG-DOMAIN.TLD (any FQDN) will return content from OBJECT-STORAGE-HOSTNAME (Object Storage on DigitalOcean Spaces, Amazon S3, Google Cloud Storage, etc.).

apiVersion: v1
kind: Service
metadata:
  name: OBJECT-STORAGE-RP
  namespace: APP-NAMESPACES
spec:
  type: ExternalName
  externalName: OBJECT-STORAGE-HOSTNAME
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: OBJECT-STORAGE-RP-ingress
  namespace: APP-NAMESPACES
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: ORG-SSL-ISSUER
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/upstream-vhost: "OBJECT-STORAGE-HOSTNAME"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_ssl_name OBJECT-STORAGE-HOSTNAME;
      proxy_ssl_server_name on;
spec:
  tls:
    - hosts:
        - OBJECT-STORAGE-RP.ORG-DOMAIN.TLD
      secretName: OBJECT-STORAGE-RP-tls
  rules:
    - host: OBJECT-STORAGE-RP.ORG-DOMAIN.TLD
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: OBJECT-STORAGE-RP
                port:
                  number: 443

Where variables are:

  • OBJECT-STORAGE-RP = a name to identify this reverse proxy
  • OBJECT-STORAGE-HOSTNAME = the source host where assets are hosted (Spaces in my case)
  • ORG-SSL-ISSUER = when using SSL, might be letsencrypt (certbot); this is cluster-dependent
  • APP-NAMESPACES = a meaningful or random namespace
  • OBJECT-STORAGE-RP.ORG-DOMAIN.TLD = the reverse proxy FQDN, which might completely differ from OBJECT-STORAGE-RP, or not

On one hand there are notable performance trade-offs: most Object Storage providers offer a CDN option to speed up asset access in clients’ browsers, and this feature is no longer a viable option.
If you use signed access, it won’t be possible to add a caching CDN in front of this reverse proxy. The instance can still be hidden behind CloudFlare, for instance, but with no performance improvement, as caching won’t work.

Doing so, I’m wasting resources, because my queries go through many gateways (proxies) and get encrypted/decrypted for nothing more than renaming. And I’m paying (a bit) for it.

On the other hand there are advantages. Doing so also blurs the lines, which is a security improvement (and even a business, if you look at CloudFront and other cloud behemoths).

It’s working, settings are flexible and this means that we can continue to work without CORS issues.

However this is not a solution in itself. The right way would be either of the following:

  • being able to set up our own hostname (FQDN) directly for the bucket at our Object Storage provider.
  • setting up the cloud reverse proxy (CloudFlare in our case) to point directly at the right origin server.


Remove a VPN/Network from the UniFi Controller Using the Command Line

After migrating a site from a self-hosted network controller to a new UniFi Cloud Key, I found myself in an annoying position: unable to remove an old VTI VPN (from the previous configuration). The UI just didn’t offer this option like it should, and actually does for other networks and VPNs. Searching the wild wild web didn’t help either, so I had to get creative.

But first, let’s roll back in time a bit to better explain the issue: right after importing the site configuration, I had two sites configured, “default” and “SITE2”. My newly imported “SITE2” site wasn’t the default one, and this was an issue. I had to change it manually using this CLI technique, because the UI doesn’t allow it.

So, based on the above-mentioned technique, I managed to remove an old network from the settings where the UI wasn’t competent.

SSH to your Cloud Key/Docker container/server, wherever the UniFi Network Controller is hosted. Then start the MongoDB CLI with mongo --port 27117.

Switch to the Network Controller database with use ace, then get the list of networks with db.networkconf.find(). You should get something like this:

{ "_id" : ObjectId("HEX1"), "attr_no_delete" : true, "attr_hidden_id" : "WAN", "wan_networkgroup" : "WAN", "site_id" : "345634563465", "purpose" : "wan", "name" : "Default (WAN1)", "wan_type" : "pppoe", "wan_ip" : "", "wan_username" : "LOGIN", "wan_type_v6" : "disabled", "x_wan_password" : "MAYBEMAYBE", "wan_provider_capabilities" : { "download_kilobits_per_second" : 250000, "upload_kilobits_per_second" : 40000 }, "report_wan_event" : false, "wan_load_balance_type" : "failover-only", "wan_load_balance_weight" : 50, "wan_vlan_enabled" : false, "wan_vlan" : "", "wan_egress_qos" : "", "wan_smartq_enabled" : true, "mac_override_enabled" : false, "wan_dhcp_options" : [ ], "wan_ip_aliases" : [ ], "wan_dns_preference" : "auto", "setting_preference" : "manual", "wan_smartq_up_rate" : 40000, "wan_smartq_down_rate" : 250000 }
{ "_id" : ObjectId("HEX2"), "purpose" : "guest", "networkgroup" : "LAN", "dhcpd_enabled" : true, "dhcpd_leasetime" : 86400, "dhcpd_dns_enabled" : false, "dhcpd_gateway_enabled" : false, "dhcpd_time_offset_enabled" : false, "ipv6_interface_type" : "none", "ipv6_pd_start" : "::2", "ipv6_pd_stop" : "::7d1", "gateway_type" : "default", "nat_outbound_ip_addresses" : [ ], "name" : "Guests", "vlan" : "2", "ip_subnet" : "", "dhcpd_start" : "", "dhcpd_stop" : "", "dhcpguard_enabled" : true, "dhcpd_ip_1" : "", "enabled" : true, "is_nat" : true, "dhcp_relay_enabled" : false, "vlan_enabled" : true, "site_id" : "123453", "lte_lan_enabled" : false, "setting_preference" : "manual", "mdns_enabled" : false, "auto_scale_enabled" : false, "upnp_lan_enabled" : false }
{ "_id" : ObjectId("HEX3"), "attr_hidden_id" : "WAN_LTE_FAILOVER", "wan_networkgroup" : "WAN_LTE_FAILOVER", "purpose" : "wan", "name" : "LTE Failover WAN", "site_id" : "3563465", "wan_type" : "static", "report_wan_event" : true, "wan_load_balance_type" : "failover-only", "wan_ip" : "IPADDRESS", "wan_gateway" : "IPADDR", "wan_netmask" : "", "enabled" : true, "ip_subnet" : "", "wan_dns_preference" : "auto", "setting_preference" : "auto" }
{ "_id" : ObjectId("HEX4"), "enabled" : true, "purpose" : "remote-user-vpn", "ip_subnet" : "", "l2tp_interface" : "wan", "l2tp_local_wan_ip" : "any", "vpn_type" : "l2tp-server", "x_ipsec_pre_shared_key" : "SECRET", "setting_preference" : "auto", "site_id" : "12345", "name" : "VPN Server", "l2tp_allow_weak_ciphers" : false, "require_mschapv2" : false, "dhcpd_dns_enabled" : false, "radiusprofile_id" : "1234" }

Find the network that you cannot remove from the UI and type db.networkconf.deleteOne({ _id: ObjectId("HEX3") }), where HEX3 is your network ID. Exit the CLI and check in the UI that the network has indeed been removed.

Your default network should now be the imported one.

A nice nix-shell for Odoo

Since I recently switched to NixOS and I’m working with Odoo for a project, I had to update my Debianesque habits and embrace what NixOS probably has best to offer: an ultra-customized and optimized development environment using nix-shell.

This comes at the cost of extra reading and code, but the result offers great flexibility. Now, with 2 simple files, I can load the whole environment and its dependencies just by cd‘ing into my project’s folder.

Beforehand, one needs to set up Direnv as described in the nix-shell documentation.
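Of the 2 files mentioned above, the first is the .envrc read by Direnv; assuming the standard nix-shell integration, a single line is enough:

```shell
# .envrc -- tell Direnv to load the shell.nix found in this folder
use nix
```

Run direnv allow once in the project folder to authorize it.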

Then, the following shell.nix file will suffice to load the whole environment when entering its folder:

{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  name = "odoo-env";
  buildInputs = with pkgs; [ python3 xclip openldap cyrus_sasl postgresql ];
  src = null;
  shellHook = ''
    # Allow the use of wheels.
    SOURCE_DATE_EPOCH=$(date +%s)
    VENV=.venv

    if test ! -d $VENV; then
      python -m venv $VENV
      source $VENV/bin/activate
      pip install -r requirements.txt
    fi
    source $VENV/bin/activate

    export PYTHONPATH=`pwd`/$VENV/${pkgs.python.sitePackages}/:$PYTHONPATH
    # export LD_LIBRARY_PATH=${with pkgs; lib.makeLibraryPath [ libGL xorg.libX11 xorg.libXext xorg.libXrender mtdev ]}
  '';
}

Just a note about Odoo’s dependencies on NixOS: I haven’t been able to properly install pyldap 2.4.28, which is the version required by Odoo 12. Instead I installed version 3.0.0, which seems to do just fine with Odoo 12 as well. To do so, I updated the requirements.txt file and changed this line

pyldap==2.4.28; sys_platform != 'win32'

with the appropriate version

pyldap==3; sys_platform != 'win32'

rsnapshot on Qnap with Firmware 4.x

I find that rsync is still the best solution for planning backups of remote hosts (and in general), and rsnapshot is its best companion. This combo enables incremental backups on any sort of device running a decent *nix OS.

The story of Qnap and community packages is quite long. In a nutshell, the installation of rsnapshot on a Qnap NAS with a recent firmware (4.x) depends on Entware. In my case, a Qnap TS-269L with a 4.x firmware.

Entware is the latest in a line of package managers for a variety of NAS devices. It installs the opkg command, which in turn makes it possible to install rsnapshot. The “Entware App” can be downloaded here and installed from the App Center of the Qnap UI. The small icon at the top right of the App Center lets you install the downloaded *.qpkg file.

manual install icon

While loading the file you might see a warning about alternative sources. The process then continues in a terminal, using ssh.

Once logged in as admin, the following command will suffice to install rsnapshot and its dependencies.

$ opkg install rsnapshot

Settings for rsnapshot can then be found in /opt/etc/rsnapshot.conf.
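For orientation, here is a minimal sketch of what that file can contain. The retention names match the cron entries below, while snapshot_root, the retention counts and the remote host are assumptions for illustration. Note that rsnapshot requires tabs between fields, not spaces:

```shell
# /opt/etc/rsnapshot.conf (sketch) -- fields MUST be separated by tabs
snapshot_root	/share/Backups/rsnapshot/
retain	hourly	24
retain	daily	7
retain	weekly	4
retain	monthly	6
# one backup line per source to save; trailing slashes matter to rsync
backup	root@remote-host:/etc/	remote-host/
```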

Then it’s just a matter of adding rsnapshot to cron; my setup looks like this:

$ crontab -e
5 * * * * /opt/bin/rsnapshot -c /opt/etc/rsnapshot.conf hourly
0 2 * * * /opt/bin/rsnapshot -c /opt/etc/rsnapshot.conf daily
30 3 1 * * /opt/bin/rsnapshot -c /opt/etc/rsnapshot.conf monthly
30 4 * * 6 /opt/bin/rsnapshot -c /opt/etc/rsnapshot.conf weekly

Enjoy a decent backup solution! The next step will be to monitor it…

Thinkpad Carbon X1 1st-3rd Gen and Ubuntu 18.04

My dear friend Dirk just bought a second-hand Carbon X1 first generation, just like mine (3448 series). I thought he might need some quick help getting going with Ubuntu 18.04 on this machine. Note that the content of this tutorial works for the 2nd and 3rd generations (20BS series) as well.

UPDATE (2019-08-18): I just bought a second-hand T460s with a bit more RAM than the Carbon X1 and followed this tutorial.

I had 3 complaints after installing a brand new Ubuntu 18.04 LTS on an encrypted disk:

  • no hibernate feature
  • no fingerprint scanner feature
  • a too sensitive trackpad

Hopefully all these problems can be solved, to give this laptop the OS it deserves.


This chapter is the trickiest part of this tutorial, therefore it comes first. Keep in mind that we are going to change partition sizes; it’s best to do it right after Linux has been installed, when all you can lose is your time, not your data. If your laptop has a running system with sensitive data, please do a backup first.

In order to enable hibernation, we’ll need to fix the swap partition size so the RAM content can fit in it when the machine goes into deep sleep. Then we’ll need to set up systemd so it suspends when the lid is closed, and hibernates after a (defined) while if not resumed in the meantime. And to close this chapter, we’ll set up some policies to enable the “hibernate” button in the system menu.


If you didn’t partition the SSD yourself during the installation, chances are high that your swap partition is smaller than the amount of RAM in your machine. And if you succeeded in installing Ubuntu 18.04 with custom partitioning and an encrypted file-system, then please leave a note in the comments to explain how. I tried many different approaches without success: either the install failed or the machine didn’t boot properly. So let’s start with the assumption that you installed Ubuntu with an encrypted file-system, by letting the installer partition your disk.

You’ll need to boot from the installation disk/USB drive again, because we shouldn’t change the partitions of a running system. Open a Terminal window and decrypt the encrypted partition of your installed system:

ubuntu@ubuntu:~$ sudo cryptsetup luksOpen /dev/sda3 crypt1
Enter passphrase for /dev/sda3

In this example, ensure that /dev/sda3 is your encrypted volume. If you are not sure, check with fdisk -l which one is the big partition on your hard drive; it is most likely the last one. On an NVMe drive the name might be very different.

Once the volume is decrypted, scan the LVM volume groups and activate them with the following commands:

root@ubuntu:~# vgscan --mknode
Reading volume groups from cache.
Found volume group "ubuntu-vg" using metadata type lvm2
root@ubuntu:~# vgchange -ay
2 logical volume(s) in volume group "ubuntu-vg" now active

You’ll then be able to check that the volume group has been activated correctly with:

root@ubuntu:~# pvs
PV VG Fmt Attr PSize PFree
/dev/mapper/crypt1 ubuntu-vg lvm2 a-- 237.25g 48.00m

and list logical-volumes with:

root@ubuntu:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root ubuntu-vg -wi-a----- 236.25g
swap_1 ubuntu-vg -wi-a----- 976.00m

It is recommended to scan the file-system prior to modifications:

root@ubuntu:~# e2fsck -f /dev/mapper/ubuntu--vg-root

Now you can resize your root partition to make room for the swap partition. There is a whole science to calculating how big your swap should be if you want to hibernate, or you can follow the rule of thumb: RAM amount + 4 GB. If the swap is exactly the size of your RAM this might work as well; a bit extra is recommended if your machine actually swaps (rarely on modern machines), but this depends on your usage. If you sometimes start Docker or VirtualBox, follow the rule of thumb…
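As a sanity check, the rule of thumb can be computed on the running machine. A small sketch, assuming a Linux /proc/meminfo; the integer math rounds down, so round up yourself when in doubt:

```shell
# Suggested swap size for hibernation: RAM + 4 GB (rule of thumb)
ram_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
ram_gb=$(( ram_kb / 1024 / 1024 ))
swap_gb=$(( ram_gb + 4 ))
echo "RAM: ${ram_gb} GB -> suggested swap: ${swap_gb} GB"
```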

So in my case the volume group is about 237 GB; I subtracted 12 GB for the swap, which leaves 225 GB for the root (system) partition:

root@ubuntu:~# resize2fs -p /dev/mapper/ubuntu--vg-root 225g
resize2fs 1.44.1 (24-Mar-2018)
Resizing the filesystem on /dev/mapper/ubuntu--vg-root to 58982400 (4k) blocks.
Begin pass 2 (max = 2947)
Begin pass 3 (max = 1880)
Begin pass 4 (max = 24242)
The filesystem on /dev/mapper/ubuntu--vg-root is now 58982400 (4k) blocks long.

To be sure, just check the file-system again with:

root@ubuntu:~# e2fsck -f /dev/mapper/ubuntu--vg-root
e2fsck 1.44.1 (24-Mar-2018)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/ubuntu--vg-root: 177123/14745600 files (0.1% non-contiguous), 2426029/58982400 blocks

You can now safely reduce the size of the logical volume with:

root@ubuntu:~# lvreduce -L 225G -r /dev/ubuntu-vg/root 
fsck from util-linux 2.31.1
/dev/mapper/ubuntu--vg-root: clean, 177123/14745600 files, 2426029/58982400 blocks
resize2fs 1.44.1 (24-Mar-2018)
The filesystem is already 58982400 (4k) blocks long. Nothing to do!
Size of logical volume ubuntu-vg/root changed from 236.25 GiB (60481 extents) to 225.00 GiB (57600 extents).
Logical volume ubuntu-vg/root successfully resized.

You can remove the old swap volume:

root@ubuntu:~# lvremove /dev/ubuntu-vg/swap_1 
Do you really want to remove and DISCARD active logical volume ubuntu-vg/swap_1? [y/n]: y
Logical volume "swap_1" successfully removed

…and create a new, bigger one. The following command says “use the remaining space”; note that the volume name is kept the same.

root@ubuntu:~# lvcreate -l 100%FREE -n swap_1 ubuntu-vg 
Logical volume "swap_1" created.

Check what’s just been done by listing the logical volumes:

root@ubuntu:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root ubuntu-vg -wi-a----- 225.00g
swap_1 ubuntu-vg -wi-a----- 12.25g

Create the swap file-system:

root@ubuntu:~# mkswap -L swap_1 /dev/ubuntu-vg/swap_1 
Setting up swapspace version 1, size = 12.3 GiB (13157527552 bytes)
LABEL=swap_1, UUID=4a10e0f5-44d7-4f57-ae23-de172958e7f1

And adapt the fstab if required. If you gave the logical volume the same name it already had, you can skip this step: Ubuntu 18.04 uses LV names instead of UUIDs.

root@ubuntu:~# mount /dev/ubuntu-vg/root /mnt/
root@ubuntu:~# vi /mnt/etc/fstab
root@ubuntu:~# umount /mnt

Write last changes to the disk:

root@ubuntu:~# pvchange -x n /dev/mapper/crypt1
Physical volume "/dev/mapper/crypt1" changed
1 physical volume changed / 0 physical volumes not changed

…and deactivate open volume-groups:

root@ubuntu:~# vgchange -an
0 logical volume(s) in volume group "ubuntu-vg" now active
root@ubuntu:~# cryptsetup luksClose crypt1

You can now restart your machine and boot your installed operating system. You will be prompted for the password at boot to decrypt the LUKS volume. Everything should work as before, but this time with a bigger swap partition.

Lastly, you’ll have to change a line in /etc/default/grub to tell your kernel that the content of your RAM may be sitting in your swap when booting. If so, it should “resume” from hibernation.
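The line in question is GRUB_CMDLINE_LINUX_DEFAULT, which gets a resume= parameter. A sketch, reusing the UUID printed by mkswap above; yours will differ, and so may the other options on the line:

```shell
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash resume=UUID=4a10e0f5-44d7-4f57-ae23-de172958e7f1"
```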


Where the value of UUID is the string returned by mkswap when you made the swap. If you can’t find it back in your terminal, just run:

lsblk -o NAME,UUID

Once the GRUB defaults are updated, you will need to regenerate the actual GRUB config files with the command

sudo update-grub

Reboot your machine and test hibernation. If it works as it should, pat yourself on the back: congratulations, the hardest work is done.

Configure hardware events and enable “hibernate” in the system actions (top right menu)

I wanted my laptop to suspend when I close the lid, and to hibernate after a defined time if the suspend isn’t resumed (namely: the lid is kept closed).

To enable that, edit the systemd-logind configuration file:

sudo vim /etc/systemd/logind.conf

You’ll be prompted for the root password. If you don’t know how to use vim, just grow up and learn.
Vi won’t be removed from major distributions anytime soon.

Well, uncomment and edit the HandleHibernateKey and HandleLidSwitch variables:
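Assuming the suspend-then-hibernate behaviour described above, which Ubuntu 18.04’s systemd should support, the relevant lines end up looking like this (a sketch; keep your other settings as they are):

```shell
# /etc/systemd/logind.conf (excerpt)
HandleHibernateKey=hibernate
HandleLidSwitch=suspend-then-hibernate
```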


The timeout to switch from suspend to hibernate can be set in /etc/systemd/sleep.conf. The file might not exist yet.

sudo vim /etc/systemd/sleep.conf

I found 15 minutes a good delay for an old 1st-generation machine with a battery holding about 2 hours. For a 3rd generation with a much better battery (and a more economic architecture), I switched it to one hour.
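The setting in question should be HibernateDelaySec; with the 15-minute delay mentioned above, the file would contain:

```shell
# /etc/systemd/sleep.conf
[Sleep]
HibernateDelaySec=900
```

Use 3600 for the one-hour variant.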

When you are done, restart systemd-logind service with the following command:

sudo systemctl restart systemd-logind.service

If you need to debug the service, and possibly find a mistake in your config file, just check the logs:

sudo journalctl -u systemd-logind.service

Next, to enable the “hibernate” button in the system menu, create the /etc/polkit-1/localauthority/50-local.d/com.ubuntu.enable-hibernate.pkla file.

sudo vim /etc/polkit-1/localauthority/50-local.d/com.ubuntu.enable-hibernate.pkla

with the following content

[Re-enable hibernate by default in upower]
Identity=unix-user:*
Action=org.freedesktop.upower.hibernate
ResultActive=yes

[Re-enable hibernate by default in logind]
Identity=unix-user:*
Action=org.freedesktop.login1.hibernate;org.freedesktop.login1.handle-hibernate-key;org.freedesktop.login1;org.freedesktop.login1.hibernate-multiple-sessions;org.freedesktop.login1.hibernate-ignore-inhibit
ResultActive=yes

When your computer is in hibernation and you push the power button, you’ll be asked for a password, just like during a normal boot. That’s right: the content of your RAM has been saved to the hard drive and the machine completely turned off. To resume, the volume needs to be decrypted again, and for that your password is required.

That’s it for the hibernation.


If you find that the trackpad is too sensitive, install the synaptics driver (on Ubuntu 18.04 the package should be xserver-xorg-input-synaptics):

sudo apt install xserver-xorg-input-synaptics

and copy the following content into /etc/X11/Xsession.d/56_synaptic_fix:

export `xinput list | grep -i touchpad | awk '{ print $6 }'`
xinput --set-prop "$id" "Synaptics Noise Cancellation" 20 20
xinput --set-prop "$id" "Synaptics Finger" 35 45 250
xinput --set-prop "$id" "Synaptics Scrolling Distance" 180 180

Ensure that the file is owned by root with permissions 644. Restart your session and you’ll have a usable trackpad.

Fingerprint scanner

You should probably not use the fingerprint scanner if you have strong security expectations. Since the filesystem is encrypted, one needs a password to start the system. However, if the system is just in power-save mode and you wake it up, you’ll probably see a login prompt. This is the main case in which one could attack the fingerprint reader and try to gain access to the system.

Just install the PAM package to enable fingerprint authentication on Ubuntu:

sudo apt install libpam-fprintd

Then run the following command to teach your system to use the fingerprint as an authentication system:

sudo pam-auth-update

…and select the fingerprint option.

You’ll find a new “Fingerprint login” option in users settings to register one finger.

Keep in mind that fingerprint authentication is not the safest thing ever. Your finger isn’t a password; it’s an image, and the computer will try to “guess” whether it’s you, with potential mistakes. You will be able to log in with this technique, but not to unlock your password keychain. For the latter you’ll always need a real password.