All posts by Mark

I'm a full-stack Linux consultant from the UK specializing in high-performance systems, DNS and databases. I have also written, and led teams producing, a number of web/mobile apps. I'm fluent in English and Turkish.

Convert emf files to png/jpg format on Linux

For a project recently I was sent some Excel files with images embedded in them. Not fun. Then I discovered that these were in some random Windows format, emf or wmf (depending on whether I exported as .xlsx or .ods from LibreOffice), which I think was just wrapping a jpg/png file in some vector/clipart format. Fortunately there’s a great script called unoconv that uses bindings into libreoffice/openoffice to render pretty much anything; however, it doesn’t seem possible to change the page size/resolution. If you use the PDF output, though, you get the image simply embedded in the PDF, and can then use the pdfimages command to extract the original images from there. Finally, some of these had different white borders, so I cropped them and converted to png. Full commands below:

rm -fr out; mkdir out;
for i in xl/media/image*.emf; do
  unoconv -f pdf -o t.pdf "$i";
  pdfimages t.pdf out;
  convert out-000.ppm -trim out/$(basename "$i").png;
done
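
Note that the xl/media/ path in the loop comes from unpacking the spreadsheet itself: an .xlsx file is just a zip archive, so something like the following (the spreadsheet filename here is only an example) extracts the embedded images first:

unzip -o workbook.xlsx 'xl/media/*'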

Drupal: Importing commerce products with Feeds Import 3

My CMS of choice for non-bespoke projects is Drupal: even though it’s written in PHP, it seems a lot more secure, stable and extensible than most CMSes out there. Recently I’ve been working on an ecommerce site using Drupal Commerce, which is a bit tricky to learn but very flexible and well integrated with Drupal. Today I needed to import a product list into the new system from an existing platform. Fortunately with Drupal’s Feeds Import module this is pretty straightforward (after reading the documentation about how to process multiple taxonomies etc). However it seems to have recently had an upgrade, and version 3 is incompatible with version 2 (there’s a Commerce adaptor for v2).

I couldn’t find any code showing how to integrate this latest version of Feeds Import with Drupal Commerce to import the prices of the products (which are linked to standard nodes using a Product Reference field). So I created an input filter of my own to do this; see the code below. Note the custom cover_type field, and also the setting of the extended data attribute on the price field.

class CommerceImportFilter {
    public static function add_product( $field ) {
        $cp = commerce_product_new('product');
        $cp->title = 'softcover';
        $cp->field_cover_type = array(LANGUAGE_NONE => array( 0 => array(
            'value' => 'soft'
        )));
        $cp->commerce_price = array(LANGUAGE_NONE => array( 0 => array(
          'amount' => $field * 100,
          'currency_code' => 'TRY',
          'data' => array( 'include_tax' => 'kitap_kdv' ),
        )));
        commerce_product_save($cp);
        return $cp->product_id;
    }
}
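
For context, the product_id returned above ends up in the node's Product Reference field. Outside of the Feeds pipeline, making that link by hand would look roughly like this (a sketch only; field_product is an assumed field name):

// Hypothetical sketch: link the created product to a node via its
// Product Reference field (assumed here to be called field_product)
$price = 25; // price taken from the imported row, in TRY
$node->field_product = array(LANGUAGE_NONE => array( 0 => array(
    'product_id' => CommerceImportFilter::add_product( $price ),
)));
node_save($node);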

Solved: Problems with connecting ath9k to 802.11n network

So, I was at a friend's house and tried to connect my Qualcomm Atheros AR9285 Wireless Network Adapter (ath9k driver on Linux) to their wireless network (D-Link DIR-615). It would connect and then disconnect 10 seconds later without ever properly establishing a connection. Output as below:

[ 6350.957601] wlan0: authenticate with XXX
[ 6350.971542] wlan0: send auth to XXX (try 1/3)
[ 6350.973230] wlan0: authenticated
[ 6350.976927] wlan0: associate with XXX (try 1/3)
[ 6350.980936] wlan0: RX AssocResp from XXX (capab=0xc31 status=0 aid=3)
[ 6350.981006] wlan0: associated
[ 6350.981376] cfg80211: Calling CRDA for country: GB
[ 6350.984168] ath: EEPROM regdomain: 0x833a
[ 6350.984172] ath: EEPROM indicates we should expect a country code
[ 6350.984174] ath: doing EEPROM country->regdmn map search
[ 6350.984175] ath: country maps to regdmn code: 0x37
[ 6350.984177] ath: Country alpha2 being used: GB
[ 6350.984178] ath: Regpair used: 0x37
[ 6350.984179] ath: regdomain 0x833a dynamically updated by country IE
[ 6350.984207] cfg80211: Regulatory domain changed to country: GB
[ 6350.984209] cfg80211:  DFS Master region: unset
[ 6350.984210] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp), (dfs_cac_time)
[ 6350.984213] cfg80211:   (2402000 KHz - 2482000 KHz @ 40000 KHz), (N/A, 2000 mBm), (N/A)
[ 6350.984215] cfg80211:   (5170000 KHz - 5250000 KHz @ 40000 KHz), (N/A, 2000 mBm), (N/A)
[ 6350.984217] cfg80211:   (5250000 KHz - 5330000 KHz @ 40000 KHz), (N/A, 2000 mBm), (0 s)
[ 6350.984218] cfg80211:   (5490000 KHz - 5710000 KHz @ 40000 KHz), (N/A, 2700 mBm), (0 s)
[ 6350.984220] cfg80211:   (57240000 KHz - 65880000 KHz @ 2160000 KHz), (N/A, 4000 mBm), (N/A)
[ 6360.987225] wlan0: deauthenticating from XXX by local choice (Reason: 3=DEAUTH_LEAVING)

Not very nice. I browsed around on the internet but couldn’t find anything obvious; eventually, by looking at the different options the ath9k kernel driver accepts, I found the ath9k_hw_btcoex_disable option, which seems to do the trick.

echo "options ath9k nohwcrypt=1 ath9k_hw_btcoex_disable" | sudo tee /etc/modprobe.d/ath9k.conf
sudo rmmod ath9k ath9k_common ath9k_hw
sudo modprobe -v ath9k

and it all works again.
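
If you want to check that the options actually took effect after reloading, the module parameters are exposed under sysfs (a quick sanity check; exact parameter names depend on the driver version):

cat /sys/module/ath9k/parameters/nohwcrypt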

Background Slideshow with AngularJS and Bootstrap

As part of a project we wanted the front page to have a nice rotating background for the jumbotron. There are a number of carousel components and scripts that can easily be found online, but most of them use the img tag and/or require an absolutely-positioned root div, which means the jumbotron won’t automatically resize to its content. I wanted a jumbotron that would resize to the content and also provide a nice seamless transition between the images. So, I sat down and rolled my own.

Firstly you need to set up the jumbotron styles (LESS/SCSS, given the nesting):

.jumbotron-slideshow {
    position: relative;
    background-color: transparent;  // replace the standard bootstrap background color

    .slideshow {
        background-size: cover;
        background-repeat: no-repeat;
        background-position: 50% 50%;
        position: absolute;
        top: 0;
        bottom: 0;
        left: 0;
        right: 0;
        
        /* Layer the images so that the visible one is below all the others,
         * but the previously active one fades out to reveal the visible one
         * below */
        transition: opacity 1s;
        opacity: 0;
        
        &.visible {
            transition: none;
            opacity: 1;
            z-index: -1;
        }
    }   
}       

And then the HTML:

<div class="jumbotron jumbotron-slideshow">
    <div ng-bg-slideshow="[ 'images/bg1.jpg', 'images/bg2.jpg', ... ]" interval=5000></div>

    ... content that you want ...
</div>

Create the angular template to generate the image divs:

<div ng-repeat="img in images"
     class="slideshow"
     ng-class="{ visible: active_image == $index }"
     ng-style="{ 'background-image': 'url(' + img + ')' }">
</div>

And finally the Angular component:

app.directive("ngBgSlideshow", function($interval) {
    return {
        restrict: 'A',
        scope: {
            ngBgSlideshow: '&',
            interval: '=',
        },
        templateUrl: 'views/components/slideshow.html',
        link: function( scope, elem, attrs ) {
            scope.$watch( 'ngBgSlideshow', function(val) {
                scope.images = val();
                scope.active_image = 0;
            });

            var change = $interval(function() {
                scope.active_image++;
                if( scope.active_image >= scope.images.length )
                    scope.active_image = 0;
            }, scope.interval || 1000 );
        
            scope.$on('$destroy', function() {
                $interval.cancel( change );
            });
        }
    };  
});         

Note: If you want to be able to programmatically change the interval, you’ll need to add a watch that recreates the interval timer when the interval attribute changes.
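
A minimal sketch of what that watch might look like, placed inside the same link function (it simply cancels and restarts the timer created above whenever the bound value changes):

// Restart the slideshow timer whenever the bound interval value changes
scope.$watch( 'interval', function(val) {
    $interval.cancel( change );
    change = $interval(function() {
        scope.active_image++;
        if( scope.active_image >= scope.images.length )
            scope.active_image = 0;
    }, val || 1000 );
});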

Multi-line commands with comments in bash

As part of the last post I initially used a bash script to generate the commands to output the individual videos. As usual, when I finally got fed up with the limitations and syntax issues of bash I switched to a proper programming language, perl. However, this time I learnt a neat trick for doing multi-line commands in bash with comments embedded, using bash's array feature. A multi-line command typically looks like:

        melt \
            color:black \
                out=$audiolen \
            ...

However, what if you want to add comments into the command? You can’t: a # comment either hides the trailing backslash or leaves the backslash stranded mid-line, so the line continuation breaks either way.

To solve this, create an array instead (comments are allowed between the elements of an array assignment):

    cmd=(
        # Take black background track for same number of seconds as the MP3, then add 10 seconds of another image
        melt
            color:black
                out=$audiolen
        ...
    )

and then use the following magic to execute it:

"${cmd[@]}"

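As a complete trivial example of the mechanics, using a command everyone has installed:

cmd=(
    # Long listing
    ls -l
    # ...including hidden files
    -a
)
"${cmd[@]}"
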
Using this you can also conditionally add in extra statements if you’re using a pipeline-type program such as imagemagick (convert) or melt (see the conditional sketch after the next snippet):

    cmd+=(
        # Output to the file
        -consumer avformat
            target="$target"
            mlt_profile="hdv_720_25p"
            f=mpeg acodec=mp2 ab=96k vcodec=mpeg2video vb=1000k
    )
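
So, for example, a conditional append might look something like this (just a sketch; the $target test is illustrative):

if [ -n "$target" ]; then
    cmd+=(
        # Only render to a file when a target path was given
        -consumer avformat
            target="$target"
    )
fi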

Automatically creating videos from pictures, music and subtitles

So, for one of my projects we have a number of albums and individual songs which we want to upload to YouTube, as many people use it to listen to music these days. We also want to create a separate collection of videos that show the song words (think hard-burning subtitles into the video). Obviously you can do this in video editing software, but it would be nice to be able to tweak all the videos afterwards without having to redo much work.

Initially I tried using avconv/mencoder to generate videos based on the pictures using the following code: generate the picture/music as a video, apply the subtitles, and then finally apply the audio again without re-encoding it.

    avconv -loop 1 -y \
            -i bgimg.jpg \
            -i "$mp3" \
            -shortest \
            -c:v libx264 -tune stillimage -pix_fmt yuv420p \
            -c:a mp3 \
            "$t"

    # Apply subtitles
    mencoder -utf8 -ovc lavc -oac copy -o "$out" "$t" -sub "$sub"

    # Add in end track and overlay with mp3
    mencoder -audiofile "$mp3" -idx -ovc lavc -oac copy -o "final.avi" "$out" "$append"

Whilst this kind of works, it has a number of downsides, the big ones being 1) it isn’t flexible enough to, for example, add another picture/slide at the end, and 2) it re-encodes the video/audio a number of times.

Then I remembered that the great kdenlive video editing software is actually just a frontend to the brilliant mlt framework. This is basically a library plus commandline programs to do all sorts of video mixing with live or rendered output.

Using the melt commandline program you can test and generate tracks without having to worry about the XML format that it typically uses for the more advanced options. The final commands:

melt color:black out=5614 \
  t.jpg out=250 \
  -track \
    cdimage.jpg out=5614 \
  -transition composite geometry=0,0:100%x70% halign=1 \
  -consumer xml:basic.mlt

melt basic.mlt \
  -filter watermark:subtitles.mpl \
    composite.valign=b composite.halign=c producer.align=centre \
  -audio-track audio.mp3

If you want to do the video output you can add the following onto the last command:

-consumer avformat \
  target=out.mpg \
  mlt_profile=hdv_720_25p f=mpeg acodec=mp2 ab=96k vcodec=mpeg2video vb=1000k

Let’s go through this a line at a time:

melt color:black out=5614

Generate a black background lasting 5614 frames

  t.jpg out=250

Followed by t.jpg for 250 frames

  -track
    cdimage.jpg out=5614

Generate a new track, which is the CD image, for the same number of frames as the black track

  -transition composite geometry=0,0:100%x70% halign=1

Mix the two tracks so that the second one (i.e. the CD image) occupies 70% of the screen height and is centred horizontally at the top.

  -consumer xml:basic.mlt

Output to an XML file (to apply subtitles to the whole composition we need this intermediate stage)

melt basic.mlt

Start with the mixed video sequence defined in the xml file (which is just instructions, not a staged render)

  -filter watermark:subtitles.mpl
    composite.valign=b composite.halign=c producer.align=centre

Apply the watermark filter with a subtitle MPL file, aligned to the bottom centre (it will auto-scale extra-wide lines to fit the width of the video). An MPL file looks like this:

1=blah
10=
15=foo
20=

where the first part is the frame number and the second part is the text to display from that frame onwards (an empty value clears the text). New lines are demarcated with a tilde (~) character. Here is a simple perl script to convert an srt-format subtitle file into this mpl format:

#!/usr/bin/perl
use strict;
use warnings;
use Path::Tiny 'path';

my ($fps, $in) = @ARGV or die;
$in = (path $in)->slurp;
$in =~ s/\r//g;
my @parts = split /\n\n/, $in;
for my $part (@parts) {
    #print "$part\n\n";
    $part =~ s/^ \D* \d+ \n
        ([\d:,]+) \s --> \s ([\d:,]+) \n
        //x;
    my ($start, $end) = ($1, $2);
    for( $start, $end ) {
        my ($h,$m,$s,$part_s) = split /[:.,]/;
        $_ = int( ( ( $h * 60 + $m ) * 60 + $s + $part_s / 1000 ) * $fps );
    }
    $part =~ s/\n/~/g;
    print "$start=$part\n",
        "$end=\n";

}
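
Usage is along these lines (the script name is whatever you saved it as; the first argument is the frame rate, e.g. 25 to match the hdv_720_25p profile used here):

./srt-to-mpl.pl 25 subtitles.srt > subtitles.mpl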

Back to the melt commandline:

  -audio-track audio.mp3

Overlay the audio track

For the non-test output commandline parts:

-consumer avformat target=out.mpg

Output using libav

  mlt_profile=hdv_720_25p f=mpeg acodec=mp2 ab=96k vcodec=mpeg2video vb=1000k

Set the profile to 25fps 720p HD video using MPEG, with the audio bitrate at 96kbps and the video bitrate at 1000kbps

Easy ticks and crosses using FontAwesome

I spent a few minutes knocking up a comparison table for a project today and wanted an easy way to show ticks and crosses. After a bit of experimenting I found that the excellent FontAwesome project makes this quite easy:

<span class="fa-stack">
    <i class="fa fa-circle fa-stack-2x"></i>
    <i class="fa fa-times fa-stack-1x fa-inverse"></i>
</span>
<span class="fa-stack">
    <i class="fa fa-circle fa-stack-2x"></i>
    <i class="fa fa-check fa-stack-1x fa-inverse"></i>
</span>
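
To make them read as a cross and a tick at a glance you’ll probably also want to colour the circles, something like this (class names and colours are just an example, added to the fa-stack spans):

/* example colours only - add e.g. class="fa-stack cross" / "fa-stack tick" to the spans */
.cross .fa-circle { color: #d9534f; }
.tick .fa-circle  { color: #5cb85c; }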

Per-component loading spinner for AngularJS

One of the first things that people want to do with AngularJS is to have a loading spinner on their page, to prevent the unseemly appearance of a page with no content because you’re still waiting on an ajax (XHR) request. There are quite a lot of spinner plugins available, or you can relatively easily roll your own.

However, most of these are whole-page, i.e. if any in-flight request is happening, the whole page appears blocked to the user. This can be quite annoying and give the impression that your site is pretty slow. What other sites heavily dependent on ajax (e.g. Facebook and LinkedIn) typically do is have each individual block/component on the page display its own loading graphic, so that perhaps your friends list is marked as loading while your news feed has already loaded.

Fortunately with AngularJS’s awesome scope, factory and component design it’s very easy to bolt this on to an existing app in just a few minutes. Let’s look at some code.

Firstly (as you should be doing already), you need to have your ajax requests going through a single point in your code, such as the skeletal factory below. I’d typically do something like this:

angularApp.factory('api', function( $http, $timeout, $rootScope ) {
    var fns = {};
    var req = function( path, args, opts ) {
        // fns.get_url() turns an API path into a full URL (definition omitted here)
        var promise = $http.post( fns.get_url(path), args );

        return promise.then(function(res) {
            return res.data;
        });
    };

    // Two calls - nonblocked, which doesn't show the spinner, and req, which does
    fns.nonblocked = req;
    fns.req = req;
    return fns;
});

Then we extend this so that the req function can have a scope passed in; that scope gets a variable called infly_http_request containing the number of outstanding ajax requests under it. We add this into the api service (note the $timeout and $rootScope dependencies injected into the factory above), replacing the req function with a wrapper that tracks the requests:

    ...
    function setup_spinner( scope ) { 
        if( scope.hasOwnProperty('infly_http_request') )
            return;

        scope.infly_http_request = 0;
        
        var cur_timeout;
        scope.stop_blocked_request = function( ) { 
            if( cur_timeout )
                $timeout.cancel(cur_timeout);
                
            scope.infly_http_request--;
     
            if( scope.infly_http_request < 0 ) 
                scope.infly_http_request = 0;
        };  
        scope.start_blocked_request = function( ) {
            if( cur_timeout )
                $timeout.cancel(cur_timeout);

            cur_timeout = $timeout(function() {
                scope.stop_blocked_request( );
                // XXX raise error
            }, 10000);

            scope.infly_http_request++;
        };
    }
    fns.req = function( path, args, opts ) {
        if( !opts )
            opts = {};

        var scope = opts.scope || $rootScope;
        setup_spinner( scope );

        scope.start_blocked_request();
        return req( path, args, opts )
            ['finally'](function() {
                scope.stop_blocked_request( );
            });
     };

Basically, if a scope option is passed in, the spinner is scoped to that block; otherwise it falls back to $rootScope, so you can still do a whole-page lock.

Finally, here’s a quick directive that gives you a nice and easy spinner using FontAwesome:

// XXX has to be a subdirective to an ngController - can't be on the same level as it.
window.angularApp.directive('showSpinner', function() {
    return {
        transclude: true,
        template: '<div><ng-transclude ng-show="infly_http_request == 0"></ng-transclude><div ng-hide="infly_http_request == 0" class="subspinner-container"><i class="fa fa-cog fa-spin"></i></div></div>',
    }
});

And the LESS (CSS) to go with it:

@subspinner-size: 3em;
.subspinner-container {
    text-align: center;
    .fa-spin {
        font-size: @subspinner-size;
    }
}

You can then write your Angular component and HTML as:

angularApp.controller('Product.List', function( $scope, api ) {
    api.req( '/api/path', { data... }, { scope: $scope } )
        .then(...)
});

<div ng-controller="Product.List">
  <div show-spinner>
    ...
  </div>
</div>

Anything within the show-spinner container under the controller whose scope was passed in the req() call will be replaced by a spinner while the request is in flight. If you don’t pass a scope, you can have something in the main body of your page to show a whole-page spinner, like:

<div ng-if="infly_http_request" class="spinner-container">
    <div id="spinner">
        <i class="fa fa-cog fa-spin"></i>
    </div>
</div>

@spinner-size: 5em;
.spinner-container {
    position: fixed;
    top:0;
    left:0;
    right:0;
    bottom:0;
    z-index:10000;
    background-color:gray;
    background-color:rgba(70,70,70,0.2);
    #spinner {
        position: absolute;
        font-size: @spinner-size;
    
        margin-left: -0.5em;
        margin-top: -0.5em;

        z-index: 20000;
        left: 50%;
        top: 50%;
    }
}

rsync with remote filenames containing spaces, from bash

Something that always annoys me with rsync is that, because it executes a remote shell, any special characters (such as spaces) in the remote path name require double-escaping (once for the local shell, once for the remote one). For example,

rsync -av 'my holiday photos/' server:'my holiday photos/'

creates a remote folder called ‘my’ and puts the directory into that. The solution is to do something like:

rsync -av 'my holiday photos/' server:'my\ holiday\ photos/'

But how do you do this when you’re scripting it from the shell, e.g. iterating over directories? One way would be to use a command such as $(sed …) to handle the escaping, but you can also do it purely in the shell by nesting two different types of quote. For example, today I had to do:

for i in */; do
    rsync -av "$i/img/" server:"backup/'$i'/"
done
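
If you’d rather generate the escaping than nest quote types, bash’s printf %q can do it for you; a sketch of the same loop:

for i in */; do
    rsync -av "$i/img/" server:"backup/$(printf '%q' "${i%/}")/"
done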

Stop Grunt minifying libraries all the time

Recently I’ve been playing around with using a proper build system for my latest Angular project. I chose to start with grunt, which seems very powerful if quite difficult to set up (mostly because yeoman did most of the initial config for me). However I find it very strange that by default the grunt-usemin plugin tries to minify and then concat all of the libraries, even ones such as jquery or angular. This is not very efficient (there are already .min.js files distributed with them), and it probably can’t minify them as well as their own build processes do anyway. So, I did a bit of research into how this could be avoided and came up with the following.

Firstly install the grunt-usemin-uglifynew module:

npm install --save-dev grunt-usemin-uglifynew

Then, change your Gruntfile to look like this:

  var uglifyNew = require('grunt-usemin-uglifynew');
  grunt.initConfig({
    ...
    useminPrepare: {
    ...
      options: {
        flow: {
          html: {
            steps: {
              js_min: [uglifyNew, 'concat'],
              js: ['concat', 'uglifyjs'],
    ...
    usemin: {
      ...
      options: {
        blockReplacements: {
            // copy of js block replacement fn from fileprocessor.js
            js_min: function (block) {
              var defer = block.defer ? 'defer ' : '';
              var async = block.async ? 'async ' : '';
              return '<script ' + defer + async + 'src="' + block.dest + '"><\/script>';
            }
        },

and your html file(s) to look like:

    <!-- build:js_min(.) scripts/vendor.js -->
    <!-- bower:js -->
    <script src="bower_components/jquery/dist/jquery.js"></script>
    <script src="bower_components/angular/angular.js"></script>
    <script src="bower_components/angular-route/angular-route.js"></script>
    <!-- endbower -->
    <!-- endbuild -->

        <!-- build:js({.tmp,app}) scripts/scripts.js -->
        <script src="scripts/app.js"></script>
        <script src="scripts/controllers/main.js"></script>
        <script src="scripts/controllers/about.js"></script>
        <!-- endbuild -->

Basically this splits the js processing into two flows: js (the normal one) remains unchanged and continues to concat all the files in the block and then minify them, while js_min just tries to find each library's existing .min.js file and concats those into a single file. (I wish there was an easy way to avoid having to concat them at all; it should really just copy the .min.js files to the build directory and update the links to point to them there.)