
Diagnosing faulty memory in Linux…

For the past year I’ve had very occasional Chrome crashes (segfaults in the rendering process) and the occasional bit of btrfs corruption. As it was always easily repairable with `btrfs check --repair` I never thought much about it, although I suspected it might be an issue with the memory. I ran memtest86 overnight one time but it didn’t show up any issues. There were never any read or SMART errors logged on the disk either, and the corruption happened on another disk within the machine as well.

Recently though I was seeing btrfs corruption on a weekly basis, especially after upgrading from Ubuntu 16.04 to 18.04. I thought it might be a kernel issue, so I installed one of the latest kernels. It seemed to happen especially when I was doing something filesystem-intensive, for example browsing cache-heavy pages while running a VM with a long build process going on.

Then, earlier in the week, the hard drive got corrupted again, much more seriously. After spending some time fixing it and running `btrfs check --repair` a few times, it suddenly started deleting a load of inodes. After force-rebooting the machine I discovered that the disk was unmountable, although later I was able to recover quite a lot of key data with btrfs restore, as documented in this post.

memtest86 was still not showing any issues, so my first thought was that, assuming the hard disk was not at fault, the problem might only appear when the memory was under a lot of contention (memtest86 was only able to run on a single core on my box). I booted a minimal version of Linux and ran a multi-process test over a large amount (though not all) of the memory:

apt -y install memtester
# One memtester process per CPU core; xargs passes the per-process size in MB
# as $0, and "10" is the number of test cycles to run
seq $(nproc) | xargs -P1000 -n 1 bash -c \
    'memtester $0 10; E=$?; [[ $E != 0 ]] && { echo "FAILURE: EXIT status: $E"; exit 255; }' \
    "$((($(grep MemAvailable /proc/meminfo | awk '{print $2}') / 1024 - 100) / $(nproc)))"

Then check for FAILURE in the log messages; errors will likely also show in dmesg, and may only show there if you have ECC RAM.

This will run one process per CPU, aiming to consume pretty much all of your available memory. 10 is the number of test cycles to run through. In my case 8 cores and 16GB of memory worked out at about 1400MB per memtester process. It took about 45 minutes to run once over the 16GB, or about 25 minutes to run over 8GB (each of the individual SODIMMs in my laptop).
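The 1400MB figure is just the arithmetic embedded in that one-liner; pulled out as a standalone sketch (Linux-only, since it reads /proc/meminfo):

```shell
# (available memory in MB, minus ~100MB headroom) divided across all CPUs
avail_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
per_proc_mb=$(( (avail_kb / 1024 - 100) / $(nproc) ))
echo "each memtester process gets ${per_proc_mb}MB"
```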

Within about 10 minutes it started showing issues on one of the chips. I’ve done a bit of research since and seen that if a memory chip is going to fail, it will usually do so within the first 6 months of use. However, this is a Kingston chip that has been in my laptop since I bought it 2 or 3 years back. I added another 8GB Samsung chip a year ago, and it seemed to be after that that the issues started; that chip, however, tests fine. Perhaps adding another chip in broke something, or perhaps it just wore out or overheated somehow…

Automounting swap on local SSDs on Amazon EC2

Many instances on EC2 (AWS) now have local SSDs attached. The excellent Ubuntu 14.04 image boots brilliantly on these and automatically formats and mounts any of the local SSD storage. However, when the instance shuts down, reboots or gets migrated these SSDs go away, so you still need to use persistent EBS storage for most operations. If you want to enable swap on the box, add the following to /etc/rc.local – it will create a 2GB swap file on the local SSD at each boot and enable it:
dd if=/dev/zero of=/mnt/swapfile bs=1M count=2048
chmod 600 /mnt/swapfile
mkswap /mnt/swapfile
swapon /mnt/swapfile
I’ve not yet figured out what the process is that formats/mounts these local disks on boot-up; it may well be easier to add this to that.
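Putting the above together, here’s a minimal /etc/rc.local sketch; it assumes the local SSD is auto-mounted at /mnt (as on the stock Ubuntu images) and guards against re-creating an existing swap file:

```shell
#!/bin/sh
# Runs as root at the end of boot; local SSD storage assumed at /mnt
if [ ! -f /mnt/swapfile ]; then
    dd if=/dev/zero of=/mnt/swapfile bs=1M count=2048
    chmod 600 /mnt/swapfile
    mkswap /mnt/swapfile
fi
swapon /mnt/swapfile
exit 0
```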

Facebook Graph API Page post changes

So about a month back it looks like Facebook changed their Graph API to prevent posting links to pages using the method we had always used, which was simply a POST to /<page_id>/feed with my access token and message and link parameters. Posting just a message was still working fine, but when I tried to add a link in I was just getting access denied.

After spending an hour or two bashing my head against the wall I discovered that you first have to fetch a list of all your pages with your user access token, then from that figure out the page’s special access token, and only then can you post.

So the resulting (somewhat messy) Perl code is like:

use WWW::Mechanize;
use JSON qw(decode_json);

my $FB_GRAPH_BASE = 'https://graph.facebook.com';
my $m = WWW::Mechanize->new;
my $res = $m->get( "$FB_GRAPH_BASE/me/accounts?access_token=$token" );
my $d = decode_json( $res->decoded_content )->{data};
my $page_token = (grep { $_->{id} eq $PAGE_ID } @$d)[0]->{access_token};

$res = $m->post( "$FB_GRAPH_BASE/$PAGE_ID/feed", {
    access_token => $page_token,
    message      => $msg,
    link         => $url,
});

Extracting old weird format audio files

So, I had a friend who has a load of recordings from about 10 years ago which were made on a weird dictaphone. The files had the extension .FC4, which according to the internet is a legacy Amiga audio format with no remaining support. Great.

First thing was to run file on it:

$ file t.FC4 
t.FC4: data

Great. Let’s see if we can do a better job looking at a hex dump (with xxd):

0000000: 4649 4c45 0103 0101 0333 0fff ffff ffff  FILE.....3......
0000010: ffff ffff ffff ffff ffff ffff ffff ffff  ................
0000020: aa10 1f40 01ff ffff ffff ffff ffff ffff  ...@............
0000030: ffff ffff ffff ffff ffff ffff ffff ffff  ................
0000040: ffff ffff ffff ffff ffff ffff ffff ffff  ................
0000050: 4d49 2d53 4334 ffff ffff ffff ffff ffff  MI-SC4..........
0000060: 4456 522d 3030 37ff ffff ffff ffff ffff  DVR-007.........
0000070: 4130 322d 3033 3031 3031 3033 3531 3135  A02-030101035115
0000080: ffff ffff ffff ffff ffff ffff ffff ffff  ................
0000090: ffff ffff ffff ffff ffff ffff ffff ffff  ................
00000a0: ffff ffff ffff ffff ffff ffff ffff ffff  ................
00000b0: ffff ffff ffff ffff ffff ffff ffff ffff  ................
00000c0: ffff ffff ffff ffff ffff ffff ffff ffff  ................
00000d0: ffff ffff ffff ffff ffff ffff ffff ffff  ................
00000e0: ffff ffff ffff ffff ffff ffff ffff ffff  ................
00000f0: 4643 34ff ffff ffff ffff ffff ffff ffff  FC4.............
0000100: 5249 4646 501f 0000 5741 5645 666d 7420  RIFFP...WAVEfmt 
0000110: 1400 0000 5003 0100 401f 0000 2910 0000  ....P...@...)...
0000120: 1e00 0000 0200 3a00 6461 7461 0000 0000  ......:.data....
0000130: 00fe ffff feff fffe ffff feff ffef 55ff  ..............U.
0000140: feff feff feff feff feff effe feff efef  ................
0000150: efef efef efef efef efef efef 55ef efef  ............U...
0000160: effe feff efef feff effe ffff ffff ffff  ................
0000170: ffff ffff ffff ffff ffff 55ff ffff ffff  ..........U.....
0000180: ffff ffff ffff ffff ffff ffff ffff ffff  ................
0000190: ffff ffff ffff ffff 55ff ffff ffff ffff  ........U.......
00001a0: ffff ffff ffff ffff ffff ffff ffff ffff  ................
00001b0: ffff ffff ffff 55ff ffff ffff ffff ffff  ......U.........
00001c0: ffff ffff ffff ffff ffff ffff ffff ffff  ................
00001d0: ffff ffff 55ff ffff ffff ffff ffff ffff  ....U...........
00001e0: ffff ffff ffff ffff ffff ffff ffff ffff  ................
00001f0: ffff 55ff ffff ffff ffff ffff ffff ffff  ..U.............
0000200: ffff ffff ffff ffff ffff ffff ffff ffff  ................
0000210: 55ff ffff ffff ffff ffff ffff ffff ffff  U...............
00006e0: ffff ffff fd88 3fe1 e1e1 e1ef d21e 1e1e  ......?.........
00006f0: 1fff ffff c821 e1e1 bb32 f2d1 55f1 e1e1  .....!...2..U...
0000700: dbac f61f ffff ac15 fe2e 1e1d 8f2f 1e1e  ............./..
0000710: dc2d 4fe1 d9d4 f3ef ed31 55b2 e219 b4fb  .-O......1U.....

So, it looks like at offset 0x100 (256) we have something that is a RIFF/WAV file; the stuff that shows as U is probably a chunk-size block or some such. Given the blocks of data afterwards it could probably be 16-bit single channel at a guess. Perhaps something can read it if we cut the initial header off and re-save:

$ xxd -s -256 -r t out.wav
$ file out.wav 
out.wav: RIFF (little-endian) data, WAVE audio, mono 8000 Hz
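As an aside, the same 256-byte header strip can be done with tail -c (shown here against a dummy file standing in for the real recording, since the FC4 sample isn’t to hand):

```shell
# Build a stand-in file: 256 bytes of junk header followed by a RIFF payload
head -c 256 /dev/zero > t.FC4
printf 'RIFFdata-here' >> t.FC4

# tail -c +N outputs from byte N onwards, so +257 drops the first 256 bytes
tail -c +257 t.FC4 > out.wav
head -c 4 out.wav    # the stripped file now starts with "RIFF"
```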

Ah-ha looks like file has a clue now. Let’s try to play it:

$ mplayer out.wav
Requested audio codec family [sc4] (afm=acm) not available.
Enable it at compilation.
Cannot find codec for audio format 0x350.

D’oh. Opening it as a raw file in Audacity shows pretty much white noise (whereas you’d have expected something vaguely like speech, with blips in every so often, if it was any sort of valid PCM or WAV-type encoding).

After searching around for a long time I discovered this post, which talked about a very similar-looking header and especially WAV encoding 0x350. This linked to an mplayer plugin with an acm and inf file; however, the Ubuntu version of mplayer doesn’t support w32codecs. I tried installing it in several different ways in a Windows 7 VM but couldn’t get it to work.

I then tried compiling mplayer from source only to be greeted with:

cc -MMD -MP -Wundef -Wall -Wno-switch -Wno-parentheses -Wpointer-arith -Wredundant-decls -Werror=format-security -Wstrict-prototypes -Wmissing-prototypes -Wdisabled-optimization -Wno-pointer-sign -Wdeclaration-after-statement -std=gnu99 -Werror-implicit-function-declaration -D_POSIX_C_SOURCE=200112 -D_XOPEN_SOURCE=600 -D_ISOC99_SOURCE -I. -Iffmpeg -O4 -march=native -mtune=native -pipe -ffast-math -fomit-frame-pointer -fno-tree-vectorize -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE  -fpie -DPIC -D_REENTRANT  -I/usr/include/freetype2 -DZLIB_CONST -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -c -o loader/wrapper.o loader/wrapper.S
loader/wrapper.S: Assembler messages:
loader/wrapper.S:31: Error: `pusha' is not supported in 64-bit mode
loader/wrapper.S:34: Error: operand type mismatch for `push'
loader/wrapper.S:38: Error: operand type mismatch for `push'
loader/wrapper.S:40: Error: operand type mismatch for `push'
loader/wrapper.S:45: Error: operand type mismatch for `push'
loader/wrapper.S:46: Error: operand type mismatch for `push'

D’oh. Rather than mess around with trying a 32-bit compile or hacking the assembly, I remembered I had a 10-year-old laptop lying around with a very old 32-bit install of Gentoo. Power it up, install the codec files and it plays them!

I then tried to extract a proper PCM WAV file from the FC4 file using mencoder, but mencoder doesn’t support audio-only output. I also tried the -dumpstream option in mplayer, but that just dumps the still-encoded audio. Finally I came across the -ao pcm option, which writes out a nice plain WAV file that I can encode into MP3 or any other format.

Migrating Drupal to new server breaks login

So I just cloned a Drupal site onto a new server. It all worked fine, but then I couldn’t log in. I found this post, which said you might need to change the $cookie_domain variable, but that didn’t make any difference. Finally I found the root cause of the problem – mod_rewrite wasn’t active, so even though the user/login page was displaying (it was returning status 404, which redirects through to index.php and hence into Drupal), it wasn’t accepting POST requests.

a2enmod rewrite
service apache2 restart

Job done.

Replacing glyphicons with font-awesome in bootstrap

So, I wanted to use the much wider range of icons available in Font Awesome compared to the Glyphicons in Bootstrap. As most of the Glyphicons are in the Font Awesome set, and as I’m already compiling Bootstrap straight from LESS, it didn’t seem worth keeping the Glyphicons in there. However, because I’m using Angular Bootstrap, there were already a number of glyphicon classes embedded in its templates that I didn’t want to have to remember to change whenever I updated.

Anyway, to replace them, first download the LESS for Bootstrap and Font Awesome, then open up bootstrap/less/bootstrap.less, comment out the

@import "glyphicons.less";
line and add the following import:

@import "../../font-awesome/less/font-awesome.less";

You then need to edit font-awesome/less/variables.less and change @fa-css-prefix: to glyphicon rather than fa. Recompile and just include the general output in your HTML; there’s no need to include Font Awesome separately any more. Then you have a drop-in replacement with many more icons available. Anything you can do with Font Awesome can also be done with the Bootstrap-style classes; you just have to remember to use glyphicon-* rather than fa-* in any CSS. So far I’ve only noticed that the glyphicon-log-out and glyphicon-floppy-disk classes need to be changed to their Font Awesome equivalents.
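For reference, the variables.less change is just a one-line prefix swap (the default prefix shown is from the Font Awesome 4.x LESS sources; treat this as a sketch for whatever version you have):

```less
// font-awesome/less/variables.less
@fa-css-prefix: glyphicon; // was: fa
```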

Running processing (and updating the commit) straight after a commit in git

In one project I have a set of templates that I want built into a single file for quicker download. Rather than having to run a command manually after each commit, I’d rather this was done at commit time and the result added to the commit. I spent a while figuring out how to do this, but basically you need to create an executable file .git/hooks/post-commit (in every repository – hooks don’t get pushed/pulled, and remember to chmod +x it) containing the following:


#!/bin/sh
# Build templates as you wish eg "perl bin/"

git diff --quiet compiled_file_name  # Did we have any change?
if [ $? != "0" ]; then # Yes - redo previous commit to include the rebuilt file
    git commit -C HEAD --amend --no-verify compiled_file_name
fi

Stripping out elements when sending data using AngularJS $http

I’m increasingly using AngularJS for frontend stuff, to shift as much as possible into the browser. Basically it receives some JSON, processes it and then sends it back to be stored in the database. However, to reduce the number of round-trips to the server you often want to include additional data with the response. For example, if you have a table of foods and you want to build a list of them in Angular, one way would be to get the list of IDs and then fetch them either one-by-one or in bulk; however this is obviously not good for responsiveness or for the backend server. So you’d typically send an array of full rows from the server, but when saving again you effectively only need the IDs, as the rest of the data is already on the server. When you are dealing with big lists this can be rather annoying. Here’s an easy way to strip out keys in AngularJS before sending to the server:

$http.post('/api/save', data_object, {
  transformRequest: function(obj) {
    function toJsonReplacer(key, value) { // This taken from angular's function of the same name
      var val = value;
      if (typeof key === 'string' && key.charAt(0) === '$' && key.charAt(1) === '$') {
        val = undefined;
      }
      // These are the custom lines we add in to strip out certain keys - could use a regex too
      if (typeof key === 'string' && ( key == 'nutrients' || key == 'portions' ) )
        return undefined;
      return val;
    }
    return JSON.stringify(obj, toJsonReplacer);
  }
});

This overrides the default $http.defaults.transformRequest, which does basically the same thing (using Angular’s toJson function). It would be nice if it were possible to just use toJson but specify a custom replacer function for the JSON transformation.

Importing and prepending subversion history to a git repo

So, when I converted some repos from Subversion to git a few years ago I just threw away the history (I think the git-svn tool wasn’t working, or I was in a hurry, or something). Anyway, today I was reminded of this and thought I’d back up all my svn repos into git and, where possible, prepend the history to the repositories. Based on this Stack Overflow post and some experimenting I did the following:

# First, import the old svn history into a git repo
git svn clone --preserve-empty-dirs file://path/to/svn-repo/project/trunk/
# Then, in the repo with the new history (assuming the imported svn history
# has been fetched in as a branch called old-history):
INITIAL_SHA1=$(git rev-list --reverse master | head -1)
# the last commit of the old history branch
oldhead=$(git rev-parse --verify old-history)
# the initial commit of the current branch
newinit=$(git rev-list master | tail -n 1)
# create a fake commit based on $newinit, but with a parent
# (note: at this point, $oldhead must be a full commit ID)
newfake=$(git cat-file commit "$newinit" \
  | sed "/^tree [0-9a-f]\+\$/aparent $oldhead" \
  | git hash-object -t commit -w --stdin)

# replace the initial commit with the fake one
git replace -f "$newinit" "$newfake"

git push origin 'refs/replace/*'
git filter-branch --tag-name-filter cat -- --all
git replace -d $INITIAL_SHA1

git push