Category Archives: Linux

Dumping all remote mysql queries to a server

For a project recently we needed to see which users were accessing which databases on a remote mysql server. After a little bit of reading and experimenting I came up with this command – hopefully it will be useful for someone in the future.
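A sketch of the sort of tshark invocation this describes (the interface name, port and exact field list are assumptions you may need to adjust for your setup):

    tshark -i eth0 -f "dst port 3306" \
        -d tcp.port==3306,mysql \
        -T fields -e ip.src -e mysql.user -e mysql.schema -e mysql.query \
        -Y "mysql.query or mysql.user"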

This uses the excellent wireshark tool (via its tshark command-line interface) to capture traffic. The -f flag specifies a capture filter so that only inbound traffic to the server is inspected (we don’t care about the responses).

The line beginning -T specifies the fields we wish to dump (in a tab-separated fashion), and -d ensures that all traffic on that port is decoded as mysql. The final -Y flag ensures that only packets containing queries or logins are dumped, rather than TCP overhead, placeholder binding and so on.

Note that user information is only given at the beginning of a session. Schema (database name) information may be on that same line, or you may have to scan the queries for a USE <database> style command, as there are two different ways of setting the current database that will be accessed.

Recovering from unmountable btrfs filesystem issues

Here are some notes on how I recovered most of the data after my btrfs disk got horribly corrupted by bad memory. Fortunately I had upgraded the disk 6 months ago, so I was able to start from the image left behind on the old disk, copied over using the excellent btrfs-clone tool.

After that I could restore most of my files to the last backup (a month or two back) and git repositories from the main server. But I still had a number of documents and other bits that I needed to recover.

The first thing, prior to formatting the disk (I don’t have another spare fast SSD lying around), was to take a backup of the entire btrfs disk. However the image was quite a bit larger than the space I had free on another disk, so I stored it in a squashfs image, which reduced the size by 50%.
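One way to do this (device and file names below are placeholders) is to use mksquashfs’s pseudo-file support to stream the raw device straight into a compressed image:

    mkdir empty
    mksquashfs empty disk-backup.squashfs \
        -p 'broken-btrfs.img f 444 root root dd if=/dev/sdX2 bs=4M'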

After that I tested that it was mountable:
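(Names below are placeholders; the point is simply to check that the squashfs mounts and that the image inside it is readable.)

    mount -o loop,ro disk-backup.squashfs /mnt/squash
    ls -lh /mnt/squash/broken-btrfs.img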

I then erased the corrupted disk and cloned the old btrfs disk onto it.

I then started using the btrfs restore tool to try to recover the data. First you need to list the tree roots; usually the highest number will be the latest snapshot, and it may have consistent data:
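(The device name below is a placeholder for the broken disk or the loop-mounted backup image.)

    btrfs restore -l /dev/sdX2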

Then you can get a listing of the files under that root, and whether they may be recoverable, using the -v -D flags (-v means list files, -D means don’t actually try to restore any data). For example:
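(The -t value below is a placeholder bytenr taken from the root listing above.)

    btrfs restore -t 1234567890 -v -D /dev/sdX2 /tmp/ignored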

If that looks good then you can run the command with a few extra flags to try to get the files back as much as possible:
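(The flags here restore extended attributes, metadata and symlinks and keep going past errors; device and destination are placeholders.)

    btrfs restore -t 1234567890 -x -m -S -i -v /dev/sdX2 /mnt/recovered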

This can take a while but it seems to work well on smaller files. Unfortunately some virtual machine images (60gb or so each) didn’t recover because they had got corrupted in the middle.

If you want to recover only a particular point under the tree you can use the --path-regex parameter to specify this, however writing the regexps is very difficult. Here is a short bit of code which will generate the path regex correctly:
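(This is a small shell sketch of such a generator; the function name is my own. For example, path_regex /home/user/docs prints ^/(|home(|/user(|/docs(|/.*))))$ which is the nested form btrfs restore expects.)

    # Build a --path-regex matching the given path and everything below it
    path_regex() {
        local regex='^/(|' close=')' part
        local -a parts
        IFS='/' read -ra parts <<< "${1#/}"
        for part in "${parts[@]}"; do
            regex+="$part(|/"
            close+=")"
        done
        printf '%s.*%s$\n' "$regex" "$close"
    }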

You can then restore just those files like:
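(Using the helper above; the tree bytenr, path and device are placeholders.)

    btrfs restore -t 1234567890 -x -m -i -v \
        --path-regex "$(path_regex /home/user/docs)" /dev/sdX2 /mnt/recovered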

Diagnosing faulty memory in Linux…

For the past year I’ve had very occasional Chrome crashes (segfaults in the rendering process) and the occasional bit of btrfs corruption. As it was always easily repairable with btrfs check --repair I never thought much about it, although I suspected it might be an issue with the memory. I ran memtest86 overnight one time but it didn’t show up any issues. There were never any read or SMART issues logged for the disk either, and the corruption happened on another disk within the machine as well.

Recently though I was seeing btrfs corruption on a weekly basis, especially after upgrading to Ubuntu 18.04 (from Ubuntu 16.04). I thought it might be a kernel issue so I tried one of the latest kernels. It seemed to happen especially when I was doing something quite filesystem-intensive, for example browsing some cache-heavy pages while running a VM with a long build process going on.

Then, earlier in the week, the drive got corrupted again, much more seriously. After spending some time on fixes, running btrfs check --repair a few times, it suddenly started deleting a load of inodes. On force-rebooting the machine I discovered that the disk was unmountable, although I was later able to recover quite a lot of key data with btrfs restore, as documented in this post.

memtest86 was still not showing any issues, so my next thought was that, assuming the hard disk was not at fault, the problem might only appear when the memory was under a lot of contention (memtest86 was only able to run on a single core on my box). I booted a minimal version of Linux and ran a multi-process test over a large amount (not all) of the memory:
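(A sketch using the memtester tool, assuming it is installed; one process per core.)

    # 8 parallel memtester processes, 1400MB each, 10 loops each
    for i in $(seq 8); do
        memtester 1400M 10 > memtester.$i.log &
    done
    wait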

where 8 is the number of processors/threads, 1400 is the amount of free memory on the system (in MB) divided by that number (in my case I was testing 16GB of memory), and 10 is the number of runs. It took about 45 minutes to run once over the 16GB, or about 25 minutes to run over 8GB (each of the individual SODIMMs in my laptop).

Within about 10 minutes it started showing issues on one of the chips. I’ve done a bit of research since and seen that if a memory chip is going to fail, it usually does so within the first 6 months of use. However this is a Kingston chip that has been in my laptop since I bought it 2 or 3 years back. I added another 8GB Samsung chip a year ago, and it seemed to be after that that the issues started; however, that chip tests out as fine. Perhaps adding the extra chip broke something, or perhaps the old one just wore out or overheated somehow…

Whatsapp upgraded, crashes on start

Somehow today my wife’s phone had managed to upgrade to a new version of WhatsApp. When she opened it, it just said that the application had crashed. This also started happening recently with ‘Google Play Services’ and some other apps on her phone.

(As an aside, this is why I turn off auto-update where at all possible because you never know when something will break)

However after much research and debugging I learnt that the problem is not so much with WhatsApp itself as with the CyanogenMod (custom ROM) that we use on our phones, and it will happen increasingly often. Fortunately there is a relatively easy way to fix this – skip to the bottom of this article if you want to just fix the issue.

The technical root cause is documented on the Google issue tracker and is caused by a change in the way apps are built when they are upgraded to the Gradle 3 build chain. It seems to be fixed in the latest versions of the Google build tools, so hopefully in the next 6 months this problem will go away, but for the moment it will only increase as teams upgrade their Android build chains. Basically, from my quick scan of the bug ticket, the problem is that the implementation of some low-level part of reading an apk package in CyanogenMod and many other derived custom ROMs is slightly faulty. That code path is not normally used, but the new aapt2 build process creates some outputs that trigger the condition in libandroidfw, which then causes the apps not to load.

This means that we just need to patch the library and it fixes the problem:

Download fix for cyanogenmod 12.1.

Download fix for cyanogenmod 13 (untested)

To install this fix you can put it onto your SD card and install via TWRP or whichever recovery you use. Alternatively you can do it by hand, if you have rooted your phone, by connecting to your phone’s shell with adb shell and setting up the following:
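(Roughly the following, assuming a rooted phone; the idea is just to get a root shell and remount /system read-write so the library can be replaced.)

    su
    mount -o remount,rw /system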

Then run the following from your computer to update (after having extracted the zip file):
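(Something along these lines, assuming the extracted zip contains the patched library under system/lib/ – the exact paths are assumptions.)

    adb push system/lib/libandroidfw.so /sdcard/libandroidfw.so
    adb shell su -c 'cp /sdcard/libandroidfw.so /system/lib/libandroidfw.so && chmod 644 /system/lib/libandroidfw.so'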

Then reboot your phone and it should all work again.

Transparently serving WebP images from Apache

I’ve recently been working on a website where we are creating a tool to customize a product. We have various renders from the designers with lots of transparency, and we combine these together on the frontend to produce the customized render. Because we need transparency we can’t use the JPEG format, so we use PNG; however, as this is lossless the image sizes tend to be very big. Fortunately the WebP format can compress transparent images, including compressing the transparency layer (though this is not enabled by default). Running the WebP converter with light compression over the PNG assets for this project produced a set of WebPs which were in total only 25% of the size of the PNG assets and still high quality. This means much faster loading for the site, especially when displaying multiple renders of the customized product with its 5-10 layers per render.

However, WebP support is only available in about 70% of the browsers today. Rather than trying to test for it on the client side, it would be great to just keep the browser-side code the same but serve different assets depending on whether the browser supports it or not.

I found a good start for Apache support for transparently serving WebPs on GitHub; however, there were a few bugs in the script. Here is the final version that I used – you need to put it under a <VirtualHost> section.
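(A sketch of the mod_rewrite approach; module availability and paths depend on your setup, and the .webp files are expected to sit next to the originals with .webp appended to the full filename.)

    <IfModule mod_rewrite.c>
        RewriteEngine On
        # Browser advertises WebP support and a .webp sibling exists on disk
        RewriteCond %{HTTP_ACCEPT} image/webp
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI}.webp -f
        RewriteRule ^(.+)\.(png|jpe?g|gif)$ $1.$2.webp [T=image/webp,E=accept:1]
    </IfModule>
    <IfModule mod_headers.c>
        # Vary on Accept so caches don't serve WebP to browsers that can't use it
        Header append Vary Accept env=REDIRECT_accept
    </IfModule>
    AddType image/webp .webp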

And here is a script to convert all png, jpg or gif files under your image directories to WebP format in such a way that they will be automatically served by the code above.
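(A sketch of such a script; the images/ directory and quality settings are assumptions.)

    #!/bin/bash
    # Create/update .webp siblings for png/jpg/gif assets, only when the source is newer
    find images/ -type f \( -name '*.png' -o -name '*.jpg' -o -name '*.gif' \) |
    while read -r src; do
        webp="$src.webp"
        if [ "$src" -nt "$webp" ]; then
            convert "$src" -define webp:alpha-compression=1 -define webp:alpha-quality=50 -quality 90 "$webp"
        fi
    done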

Note the -nt comparison that only updates files if the source has changed. You could add this script to git post-checkout and post-merge hooks to automatically keep your WebP assets in sync with the images in the code (and add a .gitignore entry for *.webp – no need to keep 2 copies of each resource in the repository).

Important note: if you’re using an older version of ImageMagick, such as on Ubuntu 14.04 (ImageMagick 6.7.7), it doesn’t pass the alpha compression arguments through correctly, so if you have a lot of transparency you won’t see much in the way of compression happening. Switch the convert line to be something like the below; however, you’ll need to remove the gif support, as converting gifs requires the gif2webp command:
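(Roughly as follows, using cwebp directly; the quality values are assumptions.)

    cwebp -q 90 -alpha_q 50 "$src" -o "$webp"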

Also note that this causes some issues when you have, for example, a jpg and a png with the same base name but different contents (I found a few in the old code I inherited). You can find the base name of any of these clashes using the following command:
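(Something along these lines; it strips the extension from each image path and prints any base name that appears more than once.)

    find images/ -type f \( -name '*.png' -o -name '*.jpg' -o -name '*.gif' \) |
        sed 's/\.[^.]*$//' | sort | uniq -d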