Recovering from unmountable btrfs filesystem issues

Here are some notes on how I recovered most of the data after my btrfs disk got horribly corrupted by bad memory. Fortunately I had upgraded the disk 6 months ago, so I was able to start from the image left behind on the old disk, copied over using the excellent btrfs-clone tool.

From that I could restore most of my files from the last backup (a month or two old), and my git repositories from the main server. But I still had a number of documents and other bits that I needed to recover.

The first thing to do prior to formatting the disk (I don’t have another spare fast SSD lying around) was to take a backup of the entire btrfs partition. However, it was quite a bit larger than the spare space I had on any other disk, so I stored it in a squashfs image, which reduced the size by 50%.

mkdir empty-dir
mksquashfs empty-dir squash.img -p 'sdb3_backup.img f 444 root root dd if=/dev/sdb3 bs=4M'

After that I tested that it was mountable:

mount -o loop squash.img /mnt/tmp
btrfs restore -l /mnt/tmp/sdb3_backup.img

Then I erased the corrupted disk and cloned the old btrfs disk back onto it.

I then started using the btrfs restore tool to try to recover the data. First you need to list the roots; usually the highest number will be the latest snapshot, and it may have consistent data:

btrfs restore -l /mnt/tmp/sdb3_backup.img

Then you can get a listing of the files under that root, and whether they may be recoverable, using the -v -D flags (-v means list files, -D means don’t actually try to restore any data). For example:

btrfs restore -r 290 -v -D sdb3_backup.img /laptop/restore/

If that looks good, you can run the command with a few extra flags to recover as much as possible:

btrfs restore -r 290 -x -m -i sdb3_backup.img /laptop/restore/

This can take a while, but it seems to work well on smaller files. Unfortunately some virtual machine images (60GB or so each) didn’t recover because they had been corrupted in the middle.

If you want to recover only a particular point under the tree, you can use the --path-regex parameter to specify it; however, writing these regexes by hand is very difficult. Here is a short bit of code which will generate the path regex correctly:

perl -E 'for(@ARGV){ $p = () = m!/!g; s!/!(|/!g; $_.= "(|/.*))" . ")" x $p; say "--path-regex '\''^/(|$_\$'\''" }' 'mark/data/documents'

You can then restore just those files like:

btrfs restore -x -m -i  -r 290 --path-regex  '^/(|mark(|/data(|/documents(|/.*))))$' sdb3_backup.img /laptop/restore/

Diagnosing faulty memory in Linux…

For the past year I’d had very occasional Chrome crashes (segfaults in the rendering process) and the occasional bit of btrfs corruption. As it was always easily repairable with btrfs check --repair I never thought much about it, although I suspected it might be an issue with the memory. I ran memtest86 overnight one time but it didn’t show any issues. There were never any read or SMART errors logged on the disk either, and the corruption happened on another disk within the machine as well.

Recently though I was seeing btrfs corruption on a weekly basis, especially after upgrading to Ubuntu 18.04 (from Ubuntu 16.04). I thought it might be a kernel issue, so I installed one of the latest kernels. It seemed to happen especially when I was doing something quite filesystem-intensive, for example browsing some cache-heavy pages while running a VM with a long build process going on.

Then, earlier in the week, the hard drive got corrupted again, much more seriously. After spending some time fixing it and running `btrfs check --repair` a few times, it suddenly started deleting a load of inodes. After force-rebooting the machine I discovered that the disk was unmountable, although later I was able to recover quite a lot of key data with btrfs restore, as documented in this post.

memtest86 was still not showing any issues, so my first thought was that, assuming the hard disks were not at fault, it might be something that only showed up when the memory had a lot of contention (memtest86 was only able to run on a single core on my box). I booted a minimal version of Linux and ran a multi-process test over a large amount (though not all) of the memory:

parallel sh -c 'memtester 1400 10; echo EXIT: $?' -- `seq 8`

where 8 is the number of processors/threads and 1400 is the amount of free memory on the system in MB divided by that number (in my case I was testing 16GB of memory); 10 is the number of runs. It took about 45 minutes to run once over the 16GB, or about 25 minutes over 8GB (each of the individual SO-DIMMs in my laptop).
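That sizing arithmetic can be scripted rather than worked out by hand. This is just a sketch: the 200MB headroom figure is my own arbitrary safety margin, and the awk column assumes a procps `free` that prints an “available” column:

```shell
#!/bin/sh
# Sketch: compute the per-process memtester allocation.
# Assumes procps "free" with an "available" column (7th on the Mem: line);
# the 200MB headroom is an assumption to keep the OOM killer away.
CORES=$(nproc)
AVAIL_MB=$(free -m | awk '/^Mem:/ {print $7}')
PER_PROC=$(( (AVAIL_MB - 200) / CORES ))
echo "Test ${PER_PROC}MB in each of ${CORES} processes:"
echo "parallel sh -c \"memtester ${PER_PROC} 10; echo EXIT: \$?\" -- \$(seq ${CORES})"
```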

Within about 10 minutes it started showing issues on one of the chips. I’ve done a bit of research since, and seen that if a memory chip is going to fail it will usually do so within the first 6 months of use. However, this is a Kingston chip that has been in my laptop since I bought it 2 or 3 years ago. I added another 8GB Samsung chip a year ago, and it seemed to be after that that the issues started; however, that chip tests fine. Perhaps adding another chip broke something, or perhaps it just wore out or overheated somehow…

ESP8266 minimal setup

I’m sure there are many notes out there, but I often get confused about the minimal setup required to run an ESP8266. You actually only need 4 pins connected:

Connect GND to 0v, VCC and EN to +3.3v.

Then connect GPIO15 via a 2-10k (I usually use 3k3) resistor to GND to specify boot from flash.

And you’re good to go.

Obviously in order to do the initial flash of the device you need to connect the TX/RX and also connect GPIO0 to GND.

WhatsApp upgraded, crashes on start

Somehow today my wife’s phone had managed to upgrade to a new version of WhatsApp. When she opened it, it just said that the application had crashed. This also started happening recently with ‘Google Play Services’ and some other apps on her phone.

(As an aside, this is why I turn off auto-update wherever possible: you never know when something will break.)

However, after much research and debugging I learnt that the problem is not so much with WhatsApp itself as with the CyanogenMod (custom ROM) that we use on our phones, and it will happen increasingly often. Fortunately there is a relatively easy way to fix this; skip to the bottom of this article if you just want the fix.

The technical root cause is documented on the Google issue tracker and is caused by a change in the way apps are built when they are upgraded to the Gradle 3 build chain. It seems to be fixed in the latest versions of the Google build tools, so hopefully in the next 6 months this problem will go away, but for the moment it will only increase as teams upgrade their Android build chains. From my quick scanning of the bug ticket, the problem is that the implementation of some low-level part of reading an APK package on CyanogenMod (and many other derived custom ROMs) is slightly faulty. That code path is not normally used, but the new aapt2 build process creates some outputs that trigger the condition in libandroidfw, which then causes the apps to not load.

This means that we just need to patch the library and it fixes the problem:

Download fix for cyanogenmod 12.1.

Download fix for cyanogenmod 13 (untested)

To install this fix you can put it onto your SD card and install it via TWRP or whichever custom recovery you use. Alternatively, if you have rooted your phone, you can do it by hand by connecting to your phone’s shell with adb shell and running the following:

# Set the system partition read-write
mount -o remount,rw /system

# Create backup copy of the old library in case anything goes wrong
cp /system/lib/libandroidfw.so /system/lib/libandroidfw.so.bkup

Then run the following from your computer to update (after having extracted the zip file):

adb push system/lib/libandroidfw.so /system/lib/libandroidfw.so

Then reboot your phone and it should all work again.

Talk from my dad’s funeral

A bit of a different topic for the post today, but as several people have asked, here is the text of the talk I gave at my dad’s funeral last week. If you prefer, you can find the audio of the talk on the page with other audio recordings from the funeral.


Looking through some photos of my dad Charles as a child, many people have said how similar we looked; I guess you’d say “a chip off the old block”. In some respects this is true: we were both interested in computers and worked in IT, and we both loved to travel. In other respects we were very different: I have never liked wine, and for the past 8 years we’ve not had a car; I’m not sure either of those was ever true of Charles! However, in the area that was most precious to my dad, that of following and loving Jesus, he raised all of his children to follow his good example. So I wanted to share a few words about what my dad taught me about Jesus, and in particular the hope that it gave him as he lived and as he was dying.


I remember my first experience of death. I must have been about 4 or 5, and our family had been given a friend’s pet gerbil to look after while they went on holiday. As my mother Joyce didn’t really like animals, this was the first pet we had ever had in the house and, as it turns out, also the last! The first morning I went downstairs to look at it through the cage bars, but I found it on its back, not moving. I went and asked my mum and she said it had died. I remember we made a small marker and buried it in the back garden; I don’t know what my parents said to our friends when they got home though!

I didn’t really understand death, but I started to be both fascinated and frightened by the idea. Shortly after this incident Charles had his wisdom teeth come through. I remember him lying in bed for a few days in pain and asking my mum “Is daddy dying?” and being scared and fearful of this.


A few years later, when I was about 8 years old, my great-grandfather died. I remember beforehand having nightmares about him dying and being very scared. On several occasions I ran downstairs crying to my dad. He couldn’t explain much, and certainly couldn’t reassure me that my great-granddad would not die or that I would not die, but he read these words of Jesus to me from the gospel of John (14:1):

“Do not let your hearts be troubled. You believe in God; believe also in me. 2 My Father’s house has many rooms; if that were not so, would I have told you that I am going there to prepare a place for you? 3 And if I go and prepare a place for you, I will come back and take you to be with me that you also may be where I am. … 6 I am the way and the truth and the life. No one comes to the Father except through me”

As he read those words and prayed for me to know this comfort, I felt the peace of the Holy Spirit come on me even though I couldn’t have described it like this at the time.


For several weeks before Charles died we feared this would happen, and again, following my dad’s example, I turned to the Bible for comfort, reassurance and guidance, thinking about how Charles would have understood these verses. Paul writes to the Corinthians:

1 Cor 15:20 … Christ has indeed been raised from the dead, the firstfruits of those who have fallen asleep. 21 For since death came through a man, the resurrection of the dead comes also through a man. 22 For as in Adam all die, so in Christ all will be made alive. 23 But each in turn: Christ, the firstfruits; then, when he comes, those who belong to him.

In the story told by the Bible, God created humans in a world without death, without pain, without sorrow, and with the presence and glory of God in every area of it. But because of our natural inclination to disobedience and sin, decay and death, both physical and spiritual, came into the world. This is the first creation, perfected by God but ruined by humans, symbolized and summarized in this passage by our forefather Adam.

But God could not bear to see his good creation continually ruined by people, and sought out people who wanted to follow Him: not those who thought they were perfect, but ordinary fallen people like you and me and Charles, through whom He could display His Glory and Grace. Over thousands of years, through the people of Israel, he showed that no-one could achieve this on their own merit; rather, everyone was in need of a saviour. And throughout this time he promised that one day a new creation, a new beginning, a new dawn would be ushered in by the True King of the world. And in time Jesus Christ, Son of God and Son of Man, came into the world and redeemed humanity, dying, rising from the dead and ascending into heaven. Just as decay and death entered the world through our forefather Adam, so true renewal and life for all who wish it entered through Jesus.

Paul says here that Jesus was the first example of new creation; one day God will finally return to recreate and restore this beautiful artwork of a world to its intended state. Charles didn’t believe that we would all sit in heaven strumming harps; he had a sure and certain knowledge that at the end of time God would return and usher in the new creation with no more pollution, no more ill health, no more grief or pain and no more death.


24 Then the end will come, when he hands over the kingdom to God the Father after he has destroyed all dominion, authority and power. 25 For he must reign until he has put all his enemies under his feet. 26 The last enemy to be destroyed is death.

As Jesus died on the cross he dealt the death blow to Satan and evil by turning their own weapons against them. Satan and evil are still in the world until the new creation; however, they know they are defeated and their time is limited. Their end will soon come.

35 But someone will ask, “How are the dead raised? With what kind of body will they come?” 36 How foolish! What you sow does not come to life unless it dies. 37 When you sow, you do not plant the body that will be, but just a seed, perhaps of wheat or of something else. 38 But God gives it a body as he has determined, and to each kind of seed he gives its own body.

Here Paul explains what the resurrection will be like, and why physical death is necessary. It’s like the difference between an apple pip and an apple tree. No-one looks at an apple pip and says “if I plant this I’ll get a giant apple pip” they say “if I plant an apple pip I’ll get an apple tree which will make lovely apples”. An apple tree, an apple pip and an apple itself all have the same essence – they are all the same thing – but how they look is totally different from each other. And Paul continues:

42 So will it be with the resurrection of the dead. The body that is sown is perishable, it is raised imperishable; 43 it is sown in dishonor, it is raised in glory; it is sown in weakness, it is raised in power; 44 it is sown a natural body, it is raised a spiritual body.

How can someone come to this more amazing form if they have not been planted, if they have not died and been buried? Charles understood that physical death is a necessary passage, like planting a grain of wheat, an apple pip or some other seed; or perhaps like a caterpillar going into a chrysalis and then emerging as a butterfly, leaving behind its shell to rot and decay as it has no more need of it.

In the resurrection Charles will have the same essence of how we knew him in his life, but in a more perfected and glorious form. In Charles’ final days, when he was unable to say more than a few words at a time, unable to move from his hospital bed, we were so upset and sad because we remembered a Charles at the peak of his life. But in the resurrection, even the Charles we knew at the peak of his life will seem like those final days compared to the awesome, amazing, completed, glorified and purified Charles that we shall see then. As my brother read earlier:

50 I declare to you, brothers and sisters, that flesh and blood cannot inherit the kingdom of God, nor does the perishable inherit the imperishable. 51 Listen, I tell you a mystery: We will not all sleep, but we will all be changed— 52 in a flash, in the twinkling of an eye, at the last trumpet. For the trumpet will sound, the dead will be raised imperishable, and we will be changed. 53 For the perishable must clothe itself with the imperishable, and the mortal with immortality. 54 When the perishable has been clothed with the imperishable, and the mortal with immortality, then the saying that is written will come true: “Death has been swallowed up in victory.”
55 “Where, O death, is your victory?
Where, O death, is your sting?”
56 The sting of death is sin, and the power of sin is the law. 57 But thanks be to God! He gives us the victory through our Lord Jesus Christ.

One day the end will come, one day the new creation will come, one day death will be rendered dead and Jesus will be shown to be victorious, the true King over all creation. Today, as believers in Jesus, we can look forward to that time and spit in the face of death: yes, it hurts now, but this is part of a greater process of renewal of this world and of the defeat of evil. Charles realised this; he wasn’t afraid of death, saddened by the thought but not worried or scared by it, because he knew it is a necessary step in the great picture of God’s plan for the universe. He knew that death was not a permanent fixture here.

58 Therefore, my dear brothers and sisters, stand firm. Let nothing move you. Always give yourselves fully to the work of the Lord, because you know that your labor in the Lord is not in vain.

This is what Charles believed: standing firm, letting nothing move him, he gave himself fully to the work of the Lord, because he knew that this was the one thing that lasts. This didn’t mean he shied away from the things of the world, work, enjoyment, relationships; but rather in all that he did, whether it was starting his own company and looking after and mentoring his employees, enjoying a good game of cricket, or hosting people at his house and cooking for them, he did it with the aim of pleasing God and as a labour of love to Jesus, living life to the full with Him.

He did not fear death, knowing as he read to me all those years ago that Jesus had said:

“Do not let your hearts be troubled. You believe in God; believe also in me. 2 My Father’s house has many rooms; if that were not so, would I have told you that I am going there to prepare a place for you? 3 And if I go and prepare a place for you, I will come back and take you to be with me that you also may be where I am.”

Charles is in this place now, resting from his labours and enjoying the presence of his king and redeemer, Jesus. And one day all who follow Jesus will meet him in the new creation, with a renewed body, and realise that what we saw in this life was but a shadow of the true Charles, and get to enjoy an eternity together with him. We come today to celebrate his life, to say goodbye to this pip, this chrysalis of a body in the ground, and to assure each other that one day we will see him again in the fullness of life in the glorious new creation.

Styled cross-platform number input (in Angular but applicable to any HTML/CSS app)

Native elements in HTML 5 such as the number input are great, but unfortunately our designers often want them to render the same across different browsers and operating systems. For example, on Linux/Chrome the number input has an up/down spinner on the right-hand side which is always visible; on Mac the spinner is only visible on mouse-over; on Firefox it is rendered differently; and on mobile the spinner is usually not there at all. In this case my designer wanted a number input with up/down buttons always available. This was for an order-quantity input, so it should accept integers from 1 upwards. With Bootstrap you can easily add buttons to the left or right of an input; however, I couldn’t see an easy way in ‘native’ Bootstrap to have two buttons stacked as one of the addons, so I created my own HTML/LESS. This is for Angular 1, Bootstrap 3 and FontAwesome, but it should be very easy to adapt to different platforms.

Here’s the HTML:

<div class="number-input-group">
    <input class="noscroll" type="number" min=1 max=100 step=1 ng-model="extra.quantity"/>
    <div class="buttons">
        <div ng-click="extra.quantity = extra.quantity < 100 ? extra.quantity + 1 : extra.quantity"><i class="fa fa-caret-up"></i></div>
        <div ng-click="extra.quantity = extra.quantity > 1 ? extra.quantity - 1 : extra.quantity"><i class="fa fa-caret-down"></i></div>
    </div>
</div>

And the LESS:

.number-input-group {
    display: table; 
    width: 100%;    // fill container - remove if you want it as effectively an inline-block
    position: relative;
    border-collapse: separate;
    border-spacing: 0px;    
                                
    > input[type=number] {  
        display: table-cell;
        width: 100%; // keep biggest
        -moz-appearance:textfield;
        border-right: none;         
        &::-webkit-inner-spin-button, &::-webkit-outer-spin-button {
            -webkit-appearance: none;   
            margin: 0;                  
        }                           
    }                           
    > .buttons {            
        display: table-cell;
        width: 1%;      // shrink to smallest size
        vertical-align: top;
        border: @input-transparent-border-width solid @bespoke-light-black;
        //color: @bespoke-light-black;
        > div { 
            @number-input-group-arrow-box-size: (@input-padding-top * 2 + @input-line-height - @input-transparent-border-width - 1) / 2;
            line-height: @number-input-group-arrow-box-size * 0.8;  // make a bit smaller because ff and chrome mobile add a few px for some reason
            font-size: @number-input-group-arrow-box-size * 1;
            padding: 0 7px;
                            
            &:hover {
                background-color: lighten(@background-color, 15%);
            }

            &:last-child {
                border-top: @input-transparent-border-width solid @bespoke-light-black;
            }
        }
    }
}

Transparently serving WebP images from Apache

I’ve recently been working on a website where we are creating a tool to customize a product. We have various renders from the designers with lots of transparency, which we combine on the frontend to produce the customized render. Because we need transparency we can’t use the JPEG format, so we use PNG; however, as PNG is lossless the image sizes tend to be very big. Fortunately the WebP format can compress transparent images, including the transparency layer (though this is not enabled by default). Running the WebP converter with light compression over the PNG assets for this project produced a set of WebP files totalling only 25% of the size of the PNG assets, while still high quality. This means much faster loading for the site, especially when displaying multiple renders of the customized product at 5-10 layers per render.

However, WebP support is only available in about 70% of browsers today. Rather than trying to test for it on the client side, it would be great to keep the browser-side code the same but serve different assets depending on whether the browser supports it.

I found a good start for Apache support for transparently serving WebP on GitHub; however, there were a few bugs in the script. Here is the final version that I used. You need to put it inside a <VirtualHost> section.

AddType image/webp .webp
<IfModule mod_rewrite.c>
      RewriteEngine On
      # Does browser support WebP? 
      RewriteCond %{HTTP_ACCEPT} \bimage/webp\b

      # Capture image name
      RewriteCond %{REQUEST_URI}  (.*)(\.(jpe?g|png|gif))$

      # If not every jpg/png/gif has a matching .webp
      # file, keep the next line enabled so apache first
      # checks that the .webp file exists; if all your
      # images have one, you can comment it out to skip
      # the extra disk check
      RewriteCond %{DOCUMENT_ROOT}%1.webp -f

      # Route to WebP image 
      RewriteRule .* %1.webp [L,T=image/webp]
</IfModule>

And here is a script to convert all png, jpg or gif files under your image directories to WebP format in such a way that they will be automatically served by the code above.

#!/bin/bash
# Convert all images to WebP
IMAGE_PATHS="assets/ imgs/"
for SRC in $(find $IMAGE_PATHS -name "*.png" -o -name "*.jpg" -o -name "*.jpeg" -o -name "*.gif"); do
    WEBP="${SRC%.*}.webp"
    if [ "$SRC" -nt "$WEBP" ]; then
        echo "Converting to $WEBP"
        convert "$SRC" -define webp:alpha-compression=1 -define webp:auto-filter=true -define webp:alpha-quality=90 -quality 95 "$WEBP"
        
    fi
done

Note the -nt comparison, which only updates files if the source has changed. You could add this script to git post-checkout and post-merge hooks to automatically keep your WebP assets in sync with the images in the code (and add a .gitignore entry for *.webp – no need to keep two copies of each resource in the repository).
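As a sketch of that hook setup, assuming you saved the conversion script above as scripts/convert-webp.sh (a hypothetical path), the hook file could be as simple as:

```shell
#!/bin/bash
# .git/hooks/post-checkout -- copy or symlink the same file to
# .git/hooks/post-merge, and make both executable (chmod +x).
# Regenerates any out-of-date WebP assets after a checkout or merge;
# scripts/convert-webp.sh is a hypothetical path for the script above.
./scripts/convert-webp.sh
```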

Important note: if you’re using an older version of ImageMagick, such as on Ubuntu 14.04 (ImageMagick 6.7.7), it doesn’t pass the alpha compression arguments through correctly, so if you have a lot of transparency you won’t see much compression happening. Switch the convert line to something like the below; however, you then need to remove the GIF support, as converting GIFs requires the separate gif2webp command:

cwebp -quiet "$SRC" -metadata none -alpha_q 80 -q 90 -o "$WEBP"

Also note that this approach causes some issues when you have, for example, a jpg and a png with the same base name but different contents (I found a few in the old code I inherited). You can find the base names of any such clashes using the following command:

find $IMAGE_PATHS -name "*.png" -o -name "*.jpg" -o -name "*.jpeg" -o -name "*.gif" | perl -pe 's,\.[^.]+$,\n,' | sort |uniq -d

Using wildcards in ssh configuration to create per-client setups

In my role as a Linux consultant, I work with a number of different companies. They all use ssh for remote access, and many require going through a gateway/bastion server to access the rest of the network. I want to keep these clients as separate and secure as possible, so I always create a new SSH key for each client. Most clients have large numbers of machines on their network, and rather than cutting and pasting a lot of different configurations together you can use wildcards in your ~/.ssh/config file.

However this is not entirely straightforward, as SSH configuration requires the most general settings to be at the bottom of the file (the first value found for an option wins). So here’s a typical setup I might use for an imaginary client called abc:

# Long list of server names & IPs
host abc-server1
hostname 10.1.2.3

host abc-server2
hostname 10.2.3.4
...

# Gateway box through which all SSH connections need routing
host abc-gateway
hostname gateway.example.org

# Generic rule to access any box on ABC's network. Eg ssh abc-ip-10.2.3.4 is the same as ssh abc-server2.
# You could also use hostnames like ssh abc-ip-foo.local assuming these resolve from the abc-gateway box.
host abc-ip-*
ProxyCommand ssh abc-gateway -W $(echo %h | sed 's/^abc-ip-//'):22

# Proxy all ssh connections via the gateway machine
host !abc-gateway !abc-ip-* abc-*
ProxyCommand ssh abc-gateway -W %h:22

# Settings for all abc machines - my username & private key
host abc-*
user mark.zealey
IdentityFile ~/.ssh/abc-corp
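The hostname rewriting inside the abc-ip-* ProxyCommand can be checked in isolation: ssh substitutes the full host alias for %h, and sed strips the prefix to leave the real target address.

```shell
#!/bin/sh
# What the ProxyCommand's $(echo %h | sed ...) expands to for
# "ssh abc-ip-10.2.3.4" (a hypothetical address)
HOST=abc-ip-10.2.3.4
TARGET=$(echo "$HOST" | sed 's/^abc-ip-//')
echo "$TARGET"    # prints 10.2.3.4
```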

Using Letsencrypt with Wowza Media Server

As part of a work project, I needed to set up Wowza Media Server to do video streaming. As the webapp (which I wrote using the excellent Ionic 3 framework) runs under https, it won’t accept video traffic coming from non-encrypted sources. Wowza has some pricey solutions for automatically installing SSL certificates for you, and you can also purchase certificates, but these days I don’t see why everyone doesn’t just use the free and easily automated letsencrypt system. Unfortunately, letsencrypt doesn’t particularly easily support servers running on other ports, although it does have hooks to stop/start services that may already be listening on port 443 (SSL). I happen to be using a RedHat/CentOS distro, although I’m pretty sure the same instructions will work on Ubuntu and other distros.

Firstly, download the wowza-letsencrypt-converter Java program, which converts letsencrypt certificates to the Java keystore format that Wowza can use. Install the prebuilt jar under /usr/bin.

Now, create a directory called ssl under the Wowza conf directory, and in it create a file called jksmap.txt (so, for example, the full path is /usr/local/WowzaStreamingEngine/conf/ssl/jksmap.txt) listing all the domains the Wowza server will listen on, like:

video-1.example.org={"keyStorePath":"/usr/local/WowzaStreamingEngine/conf/ssl/video-1.example.org.jks", "keyStorePassword":"secret", "keyStoreType":"JKS"}

‘secret’ is not actually a placeholder; it’s the password that the wowza-letsencrypt-converter program sets up automatically so keep it as it is.

Configure SSL on the Wowza server by editing the VHost.xml configuration file (find out more about this process in the Wowza documentation). Find the 443/SSL section, which is commented out by default, and change the following sections:

<HostPort>
        <Name>Default SSL Streaming</Name>
        <Type>Streaming</Type>
        <ProcessorCount>${com.wowza.wms.TuningAuto}</ProcessorCount>
        <IpAddress>*</IpAddress>
        <Port>443</Port>
        <HTTPIdent2Response></HTTPIdent2Response>
        <SSLConfig>
                <KeyStorePath>foo</KeyStorePath>
                <KeyStorePassword></KeyStorePassword>
                <KeyStoreType>JKS</KeyStoreType>
                <DomainToKeyStoreMapPath>${com.wowza.wms.context.VHostConfigHome}/conf/ssl/jksmap.txt</DomainToKeyStoreMapPath>
                <SSLProtocol>TLS</SSLProtocol>
                <Algorithm>SunX509</Algorithm>
                <CipherSuites></CipherSuites>
                <Protocols></Protocols>
        </SSLConfig>
        ...

Note the <KeyStorePath>foo</KeyStorePath> line – the value foo is ignored when using jksmap.txt, however if this is empty the server refuses to start or crashes.

Next, install letsencrypt using the instructions on the certbot website.

Once you’ve done all this, run the following command to temporarily stop the server, fetch the certificate, convert it and start the server again:

certbot certonly --standalone \
    -d video-1.example.org \
    --register-unsafely-without-email \
    --pre-hook 'systemctl stop WowzaStreamingEngine' \
    --post-hook '/usr/local/WowzaStreamingEngine/java/bin/java -jar /usr/bin/wowza-letsencrypt-converter-0.1.jar /usr/local/WowzaStreamingEngine/conf/ssl/ /etc/letsencrypt/live/; systemctl start WowzaStreamingEngine'

Then, to ensure that the certificate remains valid, set up a cron entry to run this command daily; it will automatically renew the cert when it gets close to its default 3-month expiry. Simply create /etc/cron.d/wowza-cert-renewal with the following content:

0 5 * * * root /usr/bin/certbot renew --standalone --pre-hook 'systemctl stop WowzaStreamingEngine' --post-hook '/usr/local/WowzaStreamingEngine/java/bin/java -jar /usr/bin/wowza-letsencrypt-converter-0.1.jar /usr/local/WowzaStreamingEngine/conf/ssl/ /etc/letsencrypt/live/; systemctl start WowzaStreamingEngine'

Easily set up a secure FTP server with vsftpd and letsencrypt

I recently had to set up an FTP server for some designers to upload their work (unfortunately they couldn’t use SFTP, otherwise it would have been much simpler!). I’d not had to set up vsftpd for a while, and when I last did I didn’t worry much about encryption. So here are some notes on how to set up vsftpd with letsencrypt on Ubuntu 14.04 / 16.04 so that only a specific user or two are permitted access.

First, install vsftpd:

apt install -y vsftpd

Next, you need to make sure you have installed letsencrypt. If not, you can do so using the instructions here – fortunately letsencrypt installation has got a lot easier since my last blog post about letsencrypt almost 2 years ago.

I’m assuming you are running this on the same server as the website, and that you want FTP on the same domain or a similar subdomain (eg FTP access directly to example.org, or via something like ftp.example.org). If not, you can do a manual install of the certificate, but then you will need to redo it every 3 months.

Assuming you’re running the site on apache get the certificate like:

certbot --apache -d example.org,www.example.org

You should now have the necessary certificates in the /etc/letsencrypt/live/example.org/ folder, and your site should be accessible nicely via https.

Now, create a user for FTP using the useradd command. If you want a user that only has FTP access to the server, but not a regular shell account, you can modify the PAM configuration file /etc/pam.d/vsftpd and comment out the following line:

# Not required to be allowed normal login to box
#auth   required        pam_shells.so

This lets you keep nologin as the shell so the user cannot login normally but can log in via vsftpd’s PAM layer.
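As a sketch, creating such an FTP-only user might look like this (designer1 is a hypothetical username; run as root):

```shell
# Create the FTP-only user with a nologin shell; the PAM change above
# still allows FTP authentication but blocks interactive logins.
# "designer1" is a hypothetical username -- substitute your own.
useradd -m -s /usr/sbin/nologin designer1
passwd designer1
```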

Now open up /etc/vsftpd.conf and set the following options:

pam_service_name=vsftpd

# Paths to your letsencrypt files
rsa_cert_file=/etc/letsencrypt/live/example.org/fullchain.pem
rsa_private_key_file=/etc/letsencrypt/live/example.org/privkey.pem
ssl_enable=YES
allow_anon_ssl=NO

# Options to force all communications over SSL - why would you want to
# allow clear these days? Comment them out if you don't want to force
# SSL though
force_local_data_ssl=YES
force_local_logins_ssl=YES
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO

require_ssl_reuse=NO
ssl_ciphers=HIGH

Because we’re running behind a firewall, we want to specify which port range to open up for passive-mode data connections (as well as port 21 for FTP itself, of course):

pasv_min_port=40000
pasv_max_port=41000

If you want to make it even more secure by only allowing users listed in /etc/vsftpd.userlist to be able to log in, add some usernames in that file and then add the following to the /etc/vsftpd.conf configuration file:

userlist_enable=YES
userlist_file=/etc/vsftpd.userlist
userlist_deny=NO

You can test using the excellent lftp command:

lftp -u user,pass -e 'set ftp:ssl-force true' example.org/

If the cert is giving errors or is self-signed, you can do the following to connect ignoring them:

lftp -u user,pass -e 'set ssl:verify-certificate false; set ftp:ssl-force true' example.org/