Category Archives: PostgreSQL

Running lots of postgres commands in parallel

Postgres is great; however, one limitation is that you can only run one command at a time in a psql shell. When you are doing administrative work across multiple tables though – for example (re)creating indexes or vacuuming – and you have a nice powerful box, you can run many of these commands in parallel for an easy speedup. Here’s an easy way to do that.

Firstly, create a text file with one command per line, for example:
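
The exact commands will vary; the table names here are just placeholders:

    VACUUM ANALYZE table_one;
    VACUUM ANALYZE table_two;
    REINDEX TABLE table_three;
    REINDEX TABLE table_four;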

Then, ensure that you have your .pgpass file set up correctly so that you can just run psql [database] [user] without being prompted for a password.
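
The format of ~/.pgpass is one entry per line as hostname:port:database:username:password, and the file must be readable only by you (chmod 600). For example, assuming a database called mydb and a user called myuser:

    localhost:5432:mydb:myuser:secretpassword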

Finally, run the following command:
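
This uses GNU xargs; commands.txt, mydb and myuser below stand in for the file from above and your own database and user:

    # run up to 20 psql sessions at once, one per line of commands.txt
    xargs -d '\n' -P 20 -I{} psql -d mydb -U myuser -c "{}" < commands.txt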

-P 20 specifies the number of jobs to run in parallel, so change this to whatever your server can cope with.

Finding the amount of space wasted in postgres tables

Because of the way that postgres handles transaction isolation (i.e. using MVCC), when you modify or delete a row in a table the old row version is just marked as deleted, and the space is freed for reuse at a later point by (auto)vacuum. However, unless you use the heavy-weight VACUUM FULL command (which takes an exclusive lock on the table and totally rewrites it, causing anything trying to access it to block until the command has finished), the space is never given back to the operating system. Normally this is not a problem – if you have a heavily used table with 20mb of data in it, it probably has 5-10mb of overhead from dead rows, reclaimable free space and so on, which is acceptable. However, there are a few situations where it is useful to know exactly what the overhead is:

  1. Sometimes, if your table changes very quickly, is large, and your disks or autovacuum parameters are unable to keep up, it can end up growing massive. For example, we had a table that contained 3Gb of data but was taking up 45Gb because autovacuum couldn’t keep up with the frequency of changes in the table.
  2. If you are using table partitioning to store historic data then, to make the best use of space, you want to see whether a VACUUM FULL would be worth running or not. For example, if you have a table recording data collected each day, some days may be mostly inserts and so not need vacuuming; other days may have had a number of changes made and so contain quite a lot of free space that can be reclaimed. Additionally, VACUUM FULL compacts the data in the table and rebuilds its indexes, making them more performant.

In the first case, the output of a query like the one below (sorry, I can’t remember where I originally found it) should show you that there are a very large number of dead tuples waiting to be reclaimed (i.e. turned into free space) in the table.
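
A sketch using the built-in pg_stat_user_tables statistics view:

    -- tables with the most dead (not yet reclaimed) tuples first
    SELECT relname,
           n_live_tup,
           n_dead_tup,
           last_autovacuum
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC;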

However, if your disks were struggling at one point but you have since tweaked autovacuum so that it reclaims the dead tuples correctly (as in case 1 above), your table could now be 90% free space – and there is no easy way to find this out from the built-in statistics views.

Fortunately, there is an excellent extension called pgstattuple which lets you find out how much space within a table file has been reclaimed but not yet released to the operating system. The following query lists all tables which are over 100Mb in size, have more than 10Mb of free space, and are more than 20% free space (you can tweak these numbers – I just picked them for our platform, where a typical table is 1Gb+).
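
A sketch of such a query – it assumes the extension has already been installed with CREATE EXTENSION pgstattuple, and uses its pgstattuple_approx() function:

    SELECT t.relname,
           pg_size_pretty(a.table_len)              AS table_size,
           pg_size_pretty(a.approx_free_space)      AS free_space,
           round(a.approx_free_percent::numeric, 1) AS free_percent
    FROM pg_stat_user_tables t,
         LATERAL pgstattuple_approx(t.relid) a
    WHERE a.table_len > 100 * 1024 * 1024         -- over 100Mb in size
      AND a.approx_free_space > 10 * 1024 * 1024  -- more than 10Mb free
      AND a.approx_free_percent > 20              -- more than 20% free
    ORDER BY a.approx_free_space DESC;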

This only uses an approximate count; even so, it can be a bit slow (it just took 10 minutes here) on a system with many tables and heavy IO. You can use it to find the tables that would benefit most from having VACUUM FULL run on them.

row_to_json() equivalent for jsonb types

With the advent of the awesome jsonb data type in Postgres 9.4, pretty much all of the json-related functions have both a json_* and a jsonb_* variant. All of them, that is, except row_to_json(). I spent a bit of time but couldn’t find an answer online; in the end, after a bit of experimentation, I discovered that – in Postgres 9.5 at least – the to_jsonb() function does exactly the same thing.
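
A quick sketch, with my_table standing in for any of your own tables:

    -- jsonb equivalent of row_to_json() (Postgres 9.5+)
    SELECT to_jsonb(t) FROM my_table t;

    -- gives the same structure as the older json version
    SELECT row_to_json(t) FROM my_table t;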

How we improved database performance 1000% with just one command

In my current contract we have a nice database server with 4 datacentre-quality SSDs arranged in RAID 5, a battery-backed (Dell PERC / MegaRAID) controller, and very poor disk performance. As I was doing a dump of the database (with parallel processes, obviously), iostat was showing 100% disk utilization at 200-300 iops and 40-60mb/sec – really not what it should be. After poking the RAID card a lot with the MegaCli tool, I discovered that by changing the Linux IO scheduler to noop or deadline we suddenly got great performance – 50% disk utilization at 1300 iops and 200mb/sec read. So, an issue with the CFQ scheduler then, as everywhere online seems to suggest? In fact not – it turns out the root of the problem was that, because of the RAID controller, the kernel had not detected that the underlying disks were SSDs rather than HDDs. Just by tweaking that one setting, CFQ gives the same performance as the noop or deadline schedulers (in fact better, for our postgres workload). So, here’s the sort of command we used to get a 1000% improvement in database performance.
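
A minimal sketch, assuming the array is exposed to the kernel as sda and that the setting in question is the block device’s rotational flag:

    # as root: tell the kernel this device is an SSD (non-rotational)
    echo 0 > /sys/block/sda/queue/rotational

The device name will differ per system, and the flag resets on reboot, so you would want to persist it with a udev rule or a boot script.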

Getting value/count grouping in a single row with postgres

In one of my contracts we’re doing some very complex data storage and querying using jsonb in postgres. One common display in the system we’re replacing is basically something like:
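
That is, one hard-coded count column per expected value (the table and column names here are purely illustrative):

    SELECT device_id,
           count(CASE WHEN value::int = 0 THEN 1 END) AS value_0,
           count(CASE WHEN value::int = 1 THEN 1 END) AS value_1,
           count(CASE WHEN value::int = 2 THEN 1 END) AS value_2,
           -- ...and so on, one column for every value we expect to see...
           count(CASE WHEN value::int = 10 THEN 1 END) AS value_10
    FROM readings
    GROUP BY device_id;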

Which is horrible, inefficient, and doesn’t let you know if anything occurred outside the expected range. As we’re already doing a lot with JSON, I was wondering if, rather than creating new columns, we could return these counts in a JSON format. It turns out that with postgres this is really easy – here’s how:

First, we want to create a function that, given the existing aggregated value and a new key to increment, returns an updated jsonb object.
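
A sketch of such a function – jsonb_count_inc is just my name for it, and it relies on the jsonb || operator and jsonb_build_object() from Postgres 9.5+:

    -- takes the aggregate state so far plus a key, and returns the state
    -- with that key's count incremented (starting from 0 if it is new)
    CREATE OR REPLACE FUNCTION jsonb_count_inc(state jsonb, key text)
    RETURNS jsonb
    LANGUAGE sql
    STRICT
    AS $$
        SELECT state || jsonb_build_object(key, COALESCE((state ->> key)::int, 0) + 1);
    $$;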

Then, you define the aggregate:
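
Using the function above as the state transition function (the aggregate name is again my own):

    CREATE AGGREGATE jsonb_count_agg(text) (
        SFUNC    = jsonb_count_inc,  -- the function defined above
        STYPE    = jsonb,            -- the aggregate state is a jsonb object
        INITCOND = '{}'              -- start with an empty object
    );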

You may have noticed the STRICT in the function definition (which means that NULL arguments are not allowed): the aggregate we created takes account of this and will never pass NULL values into the function – they are simply skipped – meaning we don’t need to special-case them.

If you then run the query, first casting the floating-point value to an integer and then turning that into a text datatype for the JSON key:
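
For example – readings, sensor_id and value are placeholder names, and jsonb_count_agg is the aggregate defined above:

    SELECT sensor_id,
           jsonb_count_agg((value::int)::text) AS value_counts
    FROM readings
    GROUP BY sensor_id;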

You get a JSON result back with the counts:
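
Something of this shape, with one key per integer value and the number of times it occurred as the count (the numbers here are purely illustrative):

     sensor_id |       value_counts
    -----------+---------------------------
             1 | {"0": 2, "1": 5, "2": 1}
             2 | {"3": 4, "4": 2}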

Obviously, your first port of call when faced with something like this is ‘can I group by the value?’, but in this case we want each row to be based on something different, so we needed this sub-aggregation of the values.

Happy days