Transparently serving WebP images from Apache

I’ve recently been working on a website where we are creating a tool to customize a product. We have various renders from the designers with lots of transparency, which we combine on the frontend to produce the customized render. Because we need transparency we can’t use the JPEG format, so we use PNG; however, as PNG is lossless the image sizes tend to be very big. Fortunately the WebP format can compress transparent images, including the transparency layer (though this is not enabled by default). Running the WebP converter with light compression over the PNG assets for this project produced a set of WebP files totalling only 25% of the size of the PNG assets, while remaining high quality. This means much faster loading for the site, especially when displaying multiple renders of the customized product at 5-10 layers per render.

However, WebP support is only available in about 70% of browsers today. Rather than testing for it on the client side, it would be great to keep the browser-side code the same and have the server transparently serve different assets depending on whether the browser supports WebP or not.

I found a good starting point for transparently serving WebP from Apache on GitHub; however, there were a few bugs in the script. Here is the final version that I used – you need to put it under a <VirtualHost> section.

AddType image/webp .webp
<IfModule mod_rewrite.c>
      # Does browser support WebP? 
      RewriteCond %{HTTP_ACCEPT} \bimage/webp\b

      # Capture image name
      RewriteCond %{REQUEST_URI}  (.*)(\.(jpe?g|png|gif))$

      # If not all of your jpg/png/gif images are available as
      # WebP, keep the next line enabled so Apache first checks
      # that a .webp file actually exists. If you know every
      # image has a WebP version, you can comment it out to
      # save a disk lookup on each request.
      RewriteCond %{DOCUMENT_ROOT}%1.webp -f

      # Route to WebP image 
      RewriteRule .* %1.webp [L,T=image/webp]
</IfModule>
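You can quickly check the rewrite is working with curl by requesting a PNG with and without image/webp in the Accept header (example.org and the image path here are placeholders for your own site):

# Browser advertising WebP support: should return Content-Type: image/webp
curl -s -I -H 'Accept: image/webp' https://example.org/imgs/render.png | grep -i content-type

# No WebP in the Accept header: should return the original image/png
curl -s -I https://example.org/imgs/render.png | grep -i content-type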

And here is a script to convert all png, jpg or gif files under your image directories to WebP format in such a way that they will be automatically served by the code above.

#!/bin/bash
# Convert all images under these paths to WebP.
# NB: IMAGE_PATHS is deliberately left unquoted below so each path is
# passed to find separately; -print0 / read -d '' keeps filenames
# containing spaces intact.
IMAGE_PATHS="assets/ imgs/"
find $IMAGE_PATHS \( -name "*.png" -o -name "*.jpg" -o -name "*.jpeg" -o -name "*.gif" \) -print0 |
while IFS= read -r -d '' SRC; do
    WEBP="${SRC%.*}.webp"
    # -nt: only reconvert when the source is newer than the existing WebP
    if [ "$SRC" -nt "$WEBP" ]; then
        echo "Converting to $WEBP"
        convert "$SRC" -define webp:alpha-compression=1 -define webp:auto-filter=true -define webp:alpha-quality=90 -quality 95 "$WEBP"
    fi
done

Note the -nt comparison, which only converts files whose source has changed since the last conversion. You could call this script from git post-checkout and post-merge hooks to automatically keep your WebP assets in sync with the images in the code (and add a .gitignore entry for *.webp – there’s no need to keep two copies of each resource in the repository).
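A minimal hook sketch – this assumes you’ve saved the conversion script as scripts/build-webp.sh in your repository (the name and location are just examples):

#!/bin/bash
# Save as both .git/hooks/post-checkout and .git/hooks/post-merge, then chmod +x
cd "$(git rev-parse --show-toplevel)" && ./scripts/build-webp.sh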

Important note: if you’re using an older version of ImageMagick, such as the one on Ubuntu 14.04 (ImageMagick 6.7.7), it doesn’t pass the alpha compression arguments through correctly, so if you have a lot of transparency you won’t see much compression happening. In that case switch the convert line to something like the below. However, you then need to remove the gif support from the script, as cwebp doesn’t handle gifs – converting those requires the separate gif2webp command:

cwebp -quiet "$SRC" -metadata none -alpha_q 80 -q 90 -o "$WEBP"

Also note that this approach causes issues when you have, for example, a jpg and a png with the same base name but different contents (I found a few in the old code I inherited) – both would map to the same .webp file. You can find the base name of any such clashes using the following command:

find $IMAGE_PATHS \( -name "*.png" -o -name "*.jpg" -o -name "*.jpeg" -o -name "*.gif" \) | perl -pe 's,\.[^.]+$,\n,' | sort | uniq -d

Using wildcards in ssh configuration to create per-client setups

In my role as a Linux consultant, I work with a number of different companies. They all use ssh for remote access, and many require going through a gateway/bastion server first in order to access the rest of the network. I want to keep these clients as separate and as secure as possible, so I always create a new SSH key for each client. Most clients have large numbers of machines on their network, and rather than cutting and pasting lots of near-identical configuration together you can use wildcards in your ~/.ssh/config file.
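Creating the per-client key is a one-liner – for example (abc-corp matching the IdentityFile name used in the config below):

ssh-keygen -t ed25519 -f ~/.ssh/abc-corp -C 'mark@abc-corp'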

However this is not amazingly easy, as ssh uses the first value it finds for each option – which means the most general settings have to go at the bottom of the file. So here’s a typical setup I might use for an imaginary client called abc:

# Long list of server names & IPs
Host abc-server1
HostName 10.1.2.3

Host abc-server2
HostName 10.2.3.4
...

# Gateway box through which all SSH connections need routing
Host abc-gateway
HostName gateway.example.org

# Generic rule to access any box on ABC's network, eg ssh abc-ip-10.2.3.4 is the same as ssh abc-server2.
# You could also use hostnames like ssh abc-ip-foo.local, assuming these resolve from the abc-gateway box.
Host abc-ip-*
ProxyCommand ssh abc-gateway -W $(echo %h | sed 's/^abc-ip-//'):22

# Proxy all other abc ssh connections via the gateway machine
Host !abc-gateway !abc-ip-* abc-*
ProxyCommand ssh abc-gateway -W %h:22

# Settings for all abc machines - my username & private key
Host abc-*
User mark.zealey
IdentityFile ~/.ssh/abc-corp

Using Letsencrypt with Wowza Media Server

As part of a work project, I needed to set up Wowza Media Server to do video streaming. As the webapp (which I wrote using the excellent Ionic 3 framework) runs under https, browsers won’t accept video traffic coming from non-encrypted sources. Wowza has some pricey solutions for automatically installing SSL certificates for you, and you can also purchase certificates yourself, but these days I don’t see why everyone doesn’t just use the free and easily automated letsencrypt system. Unfortunately, letsencrypt doesn’t make it particularly easy to obtain certificates for servers running on other ports, although it does have hooks to stop/start services that may already be listening on port 443 (SSL). I happen to be using a RedHat/CentOS distro, although I’m pretty sure the exact same instructions will work on Ubuntu and other distros.

Firstly, you need to download the wowza-letsencrypt-converter Java program, which converts letsencrypt certificates to the Java keystore format that Wowza can use. Install the prebuilt jar under /usr/bin.
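For reference, running the converter by hand looks like this (the same invocation is used in the certbot hooks further down):

/usr/local/WowzaStreamingEngine/java/bin/java -jar /usr/bin/wowza-letsencrypt-converter-0.1.jar \
    /usr/local/WowzaStreamingEngine/conf/ssl/ /etc/letsencrypt/live/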

Now, create a directory under the Wowza conf directory called ssl, and create a file in it called jksmap.txt (so, for example, the full path is /usr/local/WowzaStreamingEngine/conf/ssl/jksmap.txt) listing all the domains the Wowza server will be listening on, like:

video-1.example.org={"keyStorePath":"/usr/local/WowzaStreamingEngine/conf/ssl/video-1.example.org.jks", "keyStorePassword":"secret", "keyStoreType":"JKS"}

‘secret’ is not actually a placeholder; it’s the password that the wowza-letsencrypt-converter program sets on the keystore automatically, so keep it as it is.

Configure SSL on the Wowza server by editing the VHost.xml configuration file (you can find out more about this process in the Wowza documentation). Find the 443/SSL section, which is commented out by default, and change the following sections:

<HostPort>
        <Name>Default SSL Streaming</Name>
        <Type>Streaming</Type>
        <ProcessorCount>${com.wowza.wms.TuningAuto}</ProcessorCount>
        <IpAddress>*</IpAddress>
        <Port>443</Port>
        <HTTPIdent2Response></HTTPIdent2Response>
        <SSLConfig>
                <KeyStorePath>foo</KeyStorePath>
                <KeyStorePassword></KeyStorePassword>
                <KeyStoreType>JKS</KeyStoreType>
                <DomainToKeyStoreMapPath>${com.wowza.wms.context.VHostConfigHome}/conf/ssl/jksmap.txt</DomainToKeyStoreMapPath>
                <SSLProtocol>TLS</SSLProtocol>
                <Algorithm>SunX509</Algorithm>
                <CipherSuites></CipherSuites>
                <Protocols></Protocols>
        </SSLConfig>
        ...

Note the <KeyStorePath>foo</KeyStorePath> line – the value foo is ignored when jksmap.txt is in use, but if the element is left empty the server refuses to start or crashes.

Next, install letsencrypt using the instructions on the certbot website.

Once you’ve done all this, run the following command to temporarily stop the server, fetch the certificate, convert it and start the server again:

certbot certonly --standalone \
    -d video-1.example.org \
    --register-unsafely-without-email \
    --pre-hook 'systemctl stop WowzaStreamingEngine' \
    --post-hook '/usr/local/WowzaStreamingEngine/java/bin/java -jar /usr/bin/wowza-letsencrypt-converter-0.1.jar /usr/local/WowzaStreamingEngine/conf/ssl/ /etc/letsencrypt/live/; systemctl start WowzaStreamingEngine'

Then, in order to ensure the certificate stays valid, you need to set up a cron entry to run this command daily; it will automatically renew the cert when it gets close to its 3-month expiry. Simply create /etc/cron.d/wowza-cert-renewal with the following content:

0 5 * * * root /usr/bin/certbot renew --standalone --pre-hook 'systemctl stop WowzaStreamingEngine' --post-hook '/usr/local/WowzaStreamingEngine/java/bin/java -jar /usr/bin/wowza-letsencrypt-converter-0.1.jar /usr/local/WowzaStreamingEngine/conf/ssl/ /etc/letsencrypt/live/; systemctl start WowzaStreamingEngine'

Easily set up a secure FTP server with vsftpd and letsencrypt

I recently had to set up an FTP server for some designers to upload their work (unfortunately they couldn’t use SFTP, otherwise it would have been much simpler!). I’ve not had to set up vsftpd for a while, and when I last did I wasn’t much worried about encryption. So here are some notes on how to set up vsftpd with letsencrypt on Ubuntu 14.04 / 16.04 so that only a specific user or two are permitted access.

First, install vsftpd:

apt install -y vsftpd

Next, you need to make sure you have letsencrypt installed. If not, you can do so using the instructions here – fortunately letsencrypt installation has become a lot easier since my last blog post about letsencrypt almost 2 years ago.

I’m assuming you are running this on the same server as the website, and that you want FTP on the same domain or a similar subdomain as the website (eg FTP access directly to example.org, or via something like ftp.example.org). If not, you can do a manual install of the certificate, but then you will need to redo it every 3 months.
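For that standalone case, something like the following would fetch a certificate for a dedicated FTP hostname (ftp.example.org is a placeholder; it also needs port 80 free while it runs):

certbot certonly --standalone -d ftp.example.org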

Assuming you’re running the site on apache get the certificate like:

certbot --apache -d example.org,www.example.org

You should now have the necessary certificates in the /etc/letsencrypt/live/example.org/ folder, and your site should be accessible nicely via https.

Now, create a user for FTP using the useradd command. If you want a user that only has FTP access rather than a regular shell account, you can modify the PAM configuration file /etc/pam.d/vsftpd and comment out the following line:

# Not required to be allowed normal login to box
#auth   required        pam_shells.so

This lets you keep nologin as the shell, so the user cannot log in normally but can still log in via vsftpd’s PAM layer.
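For example (the username and home directory here are placeholders; adjust to taste):

useradd -m -d /home/designer -s /usr/sbin/nologin designer
passwd designer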

Now open up /etc/vsftpd.conf and set the following options:

pam_service_name=vsftpd

# Paths to your letsencrypt files
rsa_cert_file=/etc/letsencrypt/live/example.org/fullchain.pem
rsa_private_key_file=/etc/letsencrypt/live/example.org/privkey.pem
ssl_enable=YES
allow_anon_ssl=NO

# Options to force all communications over SSL - why would you want
# to allow cleartext these days? Comment these out if you don't
# want to force SSL though
force_local_data_ssl=YES
force_local_logins_ssl=YES
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO

require_ssl_reuse=NO
ssl_ciphers=HIGH

Because we’re running behind a firewall, we want to pin down which port range to open up for the passive-mode data connections (as well as port 21 for FTP itself, of course):

pasv_min_port=40000
pasv_max_port=41000
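If you happen to be using ufw as the firewall, opening those ports might look like this (a sketch – adapt it to whatever firewall you actually run):

ufw allow 21/tcp
ufw allow 40000:41000/tcp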

If you want to make it even more secure by only allowing users listed in /etc/vsftpd.userlist to log in, add some usernames to that file and then add the following to the /etc/vsftpd.conf configuration file:

userlist_enable=YES
userlist_file=/etc/vsftpd.userlist
userlist_deny=NO
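Once the configuration is done, restart vsftpd to apply it (assuming a systemd-based install):

systemctl restart vsftpd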

You can test using the excellent lftp command:

lftp -u user,pass -e 'set ftp:ssl-force true' example.org/

If the cert is giving errors or is self-signed, you can connect while ignoring certificate problems as follows:

lftp -u user,pass -e 'set ssl:verify-certificate false; set ftp:ssl-force true' example.org/

Fixing Ubuntu massive internal microphone distortion

Update: it is still an issue in Ubuntu 18.04 and the same fix applies, unfortunately.

A while ago I upgraded from Ubuntu 14.10 to 16.04. Afterwards, my laptop’s internal microphone became massively distorted, to the point that people on the other end of Skype or Hangouts calls couldn’t understand me at all.

Looking in the ALSA settings I noticed that the “Internal Mic Boost” was constantly being set to 100%, and when I dropped it down to 0% everything was fine again. On my laptop at least it seems to be coupled with the “Mic Boost” setting, which boosts both but with less distortion – ie the “Internal Mic Boost” is a boost applied on top of the “Mic Boost”, which is obviously a problem.

I couldn’t find much detail about how to configure this properly, so after some hacking around I came up with the following solution. Go through every file in /usr/share/pulseaudio/alsa-mixer/paths and look for the section “[Element Internal Mic Boost]”, if present. Under that section you should see a setting like “volume = merge“; change it to “volume = off“. For me the relevant files were analog-input-internal-mic.conf and analog-input-internal-mic-always.conf.
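If you’d rather script the edit, here’s a one-liner sketch (assuming the stock file layout described above):

# Flip "volume = merge" to "volume = off" within the Internal Mic Boost sections
sudo sed -i '/\[Element Internal Mic Boost\]/,/^\[/ s/^volume = merge/volume = off/' \
    /usr/share/pulseaudio/alsa-mixer/paths/analog-input-internal-mic*.conf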

To prevent it being changed later when ALSA is updated, you can run:

chattr +i /usr/share/pulseaudio/alsa-mixer/paths
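To undo the lock later, eg before intentionally editing the files again:

chattr -i /usr/share/pulseaudio/alsa-mixer/paths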

I’d love to hear if there is a simpler way to work around this issue, but it works for me at least!

Awesome Angular 4 form validator routine

One of the things I like most about Angular is the ability to make even complex forms relatively simple. In Angular 1 I had a library of form helpers that I wrote. The basic idea was: initially show a blank form; when the user fills in (dirties) an entry, validate that entry; and when the user clicks the submit button, validate all items on the form, submitting if there are no errors and showing any errors otherwise. This sounds simple enough, but in reality it was a few hundred lines of code to do correctly.

Angular 2+ (Angular 4 on Ionic 3 in this case) makes life a lot easier, especially once you’ve come to terms with the FormBuilder framework. To force whole-form validation (including any subforms), simply create a module called form-tools.ts looking like:

// Force a form to show any invalid entries and return if form is valid or not. Recurse through any subforms.
export const validateForm = (form) => {
    Object.keys( form.controls ).forEach( control_name => {
        let control = form.controls[control_name];

        control.markAsTouched();
        if( 'controls' in control )
            validateForm( control );
    });

    return form.valid;
};

Then you can easily use it from another class as follows:

import { FormBuilder } from '@angular/forms';
import { validateForm } from '../../form-tools';

class ...Page {
    constructor(private fb : FormBuilder ) {
        this.userForm = this.fb.group({
            // form elements...
        });
    }

    submit_form() {
        if( !validateForm(this.userForm) )
            return;
        // send the form request to the server with this.userForm.value
    }
}

Fan/Light remote control teardown and documentation

I recently bought a few ceiling fan controllers from AliExpress. I’m guessing there are a lot more devices out there with a similar electrical design, so perhaps my notes on tearing these down will be useful to someone, as I didn’t find this information elsewhere on the internet. I took them apart to replace the wireless section with an ESP8266 for WiFi remote control, and thought I’d document the design of these devices a bit.

Firstly, the main circuits run at 12v – the relays are triggered by that voltage, and the remote control uses a small 12v battery too. However the radio is only rated for 5v, so there must be a 5v circuit as well, probably obtained via resistors from the 12v circuit. The basic block layout is that the radio receiver connects to an unmarked chip (in some models this also connects to an AT24C02N EEPROM, which presumably holds the last state of the relays). The unmarked chip has 5 output tracks running to a ULN2003A, which boosts the signal to 12v to trigger the relays. 4 tracks go to the 4 relays: one for the light, and 3 for fan control, which uses live, 3uF and 2.5uF (2uF in some models) capacitors to regulate the speed. The 5th track goes to a buzzer, which sounds when the unit is powered on or receives a command from the remote control. Unfortunately I can’t remember the exact order in which the fan relays were triggered, as I wrote it down but lost it; it was something along the lines of:

high: live
medium: 3uF + 2.5uF
low: 3uF

The radio receiver in the ceiling fan is based on a SYN470R chip, and the transmitter uses an EV1527 to encode the data sent to the fan. It sends 24 bits – a 20-bit id and a 4-bit command code. Here are the hex codes for the commands, as discovered by my patches to allow this device to work with a C.H.I.P board:

Command              Hex code
low                  8
medium               2
high                 6
stop                 c
toggle light on/off  5
timer 1h             4
timer 2h             1
timer 4h             3
timer 8h             a

Uploading videos to YouTube with perl

As part of a recent perl project I needed to generate a load of videos (using the excellent MLT framework) and upload them to YouTube. As the bulk of the project was already in perl, I needed to write a YouTube upload module for it. Here’s a rough overview of how to authenticate and upload a video (chunked, rather than loading it all into memory at once) with perl. Note as well that it was quite a struggle to get long-term auth tokens from the Google services – one slight mistake with a parameter and they only give temporary grants which last for an hour or two rather than indefinitely.

package Youtube::Upload;
use Moo;
use feature 'say';    # Moo turns on strict/warnings but not say()

use LWP::Authen::OAuth2;
use Path::Tiny 'path';
use URI::QueryParam;
use JSON::XS qw< encode_json decode_json >;
use HTTP::Message;

# API described at https://developers.google.com/youtube/v3/docs/videos/update

has auth_id => is => 'ro', required => 1;
has auth_secret => is => 'ro', required => 1;
has redirect_uri => is => 'ro', required => 1;

# If you haven't used this for a while, blank these and re-run and you'll
# probably need to do some auth against google.
has auth_code => is => 'ro';
has auth_token => is => 'ro';

has auth => is => 'lazy', builder => \&_google_auth;

sub upload {
    my ($self, $details, $youtube_code, $video_file) = @_;

    die "No id to update, but also nothing to upload" if !$youtube_code && !$video_file;

    my %body = %$details;

    # Allow all embedding
    $body{status}{embeddable} = JSON::XS::true;

    my $magic_split = 'BlA123H123BLAH'; # A unique string...

    my $is_new = !defined $youtube_code;
    my ($content, %headers, $uri);
    if( !$is_new ) {
        $body{id} = $youtube_code;

        $content = encode_json(\%body);
        $headers{'Content-Type'} = 'application/json';
        $uri = URI->new( 'https://www.googleapis.com/youtube/v3/videos' );
    } else {
        my $msg = HTTP::Message->new([
            'Content-Type' => 'multipart/related',
        ]);

        $msg->add_part(
            HTTP::Message->new([
                'Content-Type' => 'application/json',
            ], encode_json(\%body) )
        );

        my $video_msg = 
            HTTP::Message->new(
                [
                    'Content-Type' => 'video/*',
                ],
                $magic_split,
            );
        $msg->add_part( $video_msg );
            
        $content = $msg->as_string;
        (my $head, $content) = split /\r?\n\r?\n/, $content, 2;
        my ($k, $v) = split /:\s*/, $head, 2;
        $headers{$k} = $v;
        $uri = URI->new( 'https://www.googleapis.com/upload/youtube/v3/videos' );
    }

    delete $body{id};
    $uri->query_form_hash({ 
        part => join(',', keys %body), 
    });

    my $res;
    if( $is_new ) {
        my @content = split /\Q$magic_split/, $content;
        die "Magic split failed" if @content != 2;

        my $content_fh = path($video_file)->openr;
        my $request = HTTP::Request->new( 'POST', $uri, HTTP::Headers->new( %headers ), sub {
            #warn "chunk uploaded";
            return '' if !@content;

            if( @content > 1 ) {
                return shift @content;
            } else {
                my $read = read $content_fh, my $data, 1_000_000;
                if( !$read ) {
                    return shift @content;
                }
                return $data;
            }
        } );
        $res = $self->auth->request( $request );
    } else {
        $res = $self->auth->put( $uri,
            %headers,
            Content => $content
        );
    }

    my $cont = $res->decoded_content;
    my $ret;
    if( !$res->is_success ) {
        if($res->code != 403) {   # not our video
            die "Response not success: $cont for " . ( $youtube_code || '' );
        }
    } else {
        $ret = decode_json $cont;
    }

    return ref($ret) ? $ret : {};
}

sub _google_auth {
    my ($self) = @_;

    my $auth = LWP::Authen::OAuth2->new(
        service_provider => 'Google',
        redirect_uri => $self->redirect_uri,
        client_type => "web server",

        client_id      => $self->auth_id,
        client_secret  => $self->auth_secret,

        save_tokens => sub {
            say "Save token string: $_[0]" if !$self->auth_token;
        },

        token_string => $self->auth_token,
    );

    # For debug:
    #$auth->user_agent->add_handler("request_send",  sub { shift->dump(maxlength => 10_000); return });
    #$auth->user_agent->add_handler("response_done", sub { shift->dump(maxlength => 10_000); return });

    if( !$self->auth_code ) {
        say $auth->authorization_url(
            scope=> 'https://www.googleapis.com/auth/youtube https://www.googleapis.com/auth/youtube.upload',
            # Need these two to get a refresh token
            approval_prompt => 'force',
            access_type => 'offline',
        );
        exit;
    }

    $auth->request_tokens( code => $self->auth_code ) if !$self->auth_token;

    return $auth;
}

As per this bug report, you need to patch LWP::Authen::OAuth2::AccessToken::Bearer to enable the chunked uploads to work, otherwise it throws an error.

The auth_id and auth_secret parameters are given by the Google API console when you sign up for YouTube API access, and the redirect_uri should be wherever the web app would redirect to after displaying the Google oauth permission grant screen. The first time you run the script, you’ll need to save the auth_code / auth_token values and pass them in on future calls. Even though it should grant a new auth token each time the script is run, you still need to present the original one from the look of things.
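To tie it together, calling the module looks roughly like this (all the credential values and video details are of course placeholders):

use Youtube::Upload;

my $yt = Youtube::Upload->new(
    auth_id      => 'my-client-id.apps.googleusercontent.com',
    auth_secret  => 'my-client-secret',
    redirect_uri => 'https://example.org/oauth-callback',
    # Omit these two on the first run: the module prints an auth URL to
    # visit, then a token string to save and pass in on subsequent runs.
    auth_code    => 'code-from-google-redirect',
    auth_token   => 'saved-token-string',
);

# Create a new video; pass an existing YouTube id as the 2nd argument
# to update its metadata instead of uploading.
my $ret = $yt->upload(
    {
        snippet => { title => 'Product render', description => 'Generated with MLT' },
        status  => { privacyStatus => 'private' },
    },
    undef,
    'render-output.mp4',
);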

Element scrolling within Angular (1) pages

Back to Angular 1 for today’s post, as I’ve been doing some work on an older project. As we all know, in the old days of the web you could scroll to items within the page by using an <a> element with a hash reference, like:

<a href="#thing-to-scroll-to">scroll here</a>

<h2 id="thing-to-scroll-to">My title</h2>

With the advent of single-page sites with multiple ajax-loaded pages under them, however, the hash section of the URL is increasingly unavailable for this purpose. On Angular 1 with angular-route it becomes impossible, as the router itself uses the hash.

So what if for example you want to have some references at the top of a page within an Angular application which will scroll the user down to certain sections below, just like the hash references used to do? There are a number of suggestions on the internet for this but they are not very reusable, so I decided to create a simple directive that does this:

app.directive('clickScrollTo', function () {
    return {
        restrict: 'A',
        scope: {
            clickScrollTo: '@'
        },
        link: function (scope, el, attr) {
            el.click(function(event) {
                var main_page = $('.main-page');
                event.preventDefault();
                main_page
                    .stop()
                    .animate(
                        { scrollTop: main_page.scrollTop() + $(scope.clickScrollTo).offset().top },
                        500,
                        'linear'
                    );
            });
        }
    }
});

This is slightly complicated by the fact that I am using an element with class="main-page" with overflow: auto set to contain the scrollable section of the app. If you don’t have a layout like this just replace the $('.main-page') part with $('body').

Then you can easily create elements like:

<span class="a" click-scroll-to="#more-benefits">click here</span>
...
<div id="more-benefits">...</div>

Ionic 3 – Allowing back button action to be specified on a page-by-page basis

Ionic 3 is great, as I’ve said in previous posts, however there are a few issues with its out-of-the-box handling of the back button on Android. Normally it just goes back to the previous page, which is usually what you want; however, it’s not what you want when a popup or modal is open on the page, and in other cases you may want to capture the back event and do something else with it on a page-by-page basis. Fortunately it’s pretty straightforward to override this, as I discovered.

Firstly, we want to override the back button action for the whole app. Open up app/app.component.ts and add the following to the constructor:

import { App, Platform } from 'ionic-angular';
...
  constructor( ..., private app: App, private platform : Platform ) {
    platform.registerBackButtonAction(() => {
        let nav = app.getActiveNavs()[0];
        let activeView = nav.getActive();

        if(activeView != null){
          if(nav.canGoBack())
            nav.pop();
          else if (typeof activeView.instance.backButtonAction === 'function')
            activeView.instance.backButtonAction();
          else
            nav.parent.select(0); // goes to the first tab
        }
      });
  }

This basically defaults to going back if that is possible; if not, and it is a tab view, it takes you to the first tab. However, if the active page defines a backButtonAction() function, the handler delegates to that instead.

So for example in a modal class you can add something like:

import { ViewController } from 'ionic-angular';
...
    constructor( private viewCtrl : ViewController ) {}

    backButtonAction() {
        this.viewCtrl.dismiss();
    }

which will dismiss the modal and go back to the page that called it, rather than the default action of simply going back a page.