Working on a project where we want to store searchable MAC addresses in a PostgreSQL table (searchable using LIKE, which rules out the built-in Postgres macaddr type), I decided to create a unique index, both to allow only one entry per modem in the table and to speed up searches. However, we also want to ensure that we don't get upper- and lower-case versions of the same MAC inserted. PostgreSQL to the rescue – you can create a functional index on the table:
create unique index on modem (( lower(mac) ));
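As a sketch of how this plays out (assuming a minimal modem table with a text mac column; the sample address is illustrative):

```sql
-- Hypothetical minimal table for illustration
create table modem (mac text not null);
create unique index on modem (( lower(mac) ));

insert into modem (mac) values ('0013a920f7b2');  -- succeeds
insert into modem (mac) values ('0013A920F7B2');  -- fails: the index
-- compares lower(mac), so this is a duplicate of the row above
```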
Basically this means that the index stores the lowercased version of each value, so inserting two MACs that differ only in case violates the unique constraint. It should also mean that something like:
select 1 from modem where mac = ?
should be indexed and hence work really quickly.
However, while profiling the application I noticed the select statements getting slower and slower the more rows we put into the table – the classic sign that a column is not properly indexed. Looking at the Postgres manual I remembered that a functional index only speeds up queries that apply the same function to the column; for example, something like the below would use the index:
select 1 from modem where lower(mac) = lower(?)
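A quick way to see the difference (a sketch, assuming the modem table above and some test rows) is to compare the query plans:

```sql
-- Plain comparison: the expression index is on lower(mac), not mac,
-- so the planner cannot use it and falls back to a sequential scan.
explain select 1 from modem where mac = '0013a920f7b2';

-- Same function on the column: the planner matches the indexed
-- expression lower(mac) and can use an index scan instead.
explain select 1 from modem where lower(mac) = lower('0013A920F7B2');
```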
Functionally this was the same as what I was doing, but I didn't want the hassle of remembering to apply that function everywhere we wanted to search by mac. In the end I decided to drop the functional index and just use a standard unique index. I then added a CHECK constraint which also enforces that the value is lowercase hex:
alter table modem add constraint ensure_mac_correct CHECK (mac ~ '^[0-9a-f]{12}$');
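With the constraint in place, the application just has to hand Postgres a lowercase, separator-free string. As a sketch of doing that on the application side (in Python; normalise and the sample addresses are illustrative, not part of the original code):

```python
import re

# Same pattern as the CHECK constraint: exactly 12 lowercase hex digits
MAC_RE = re.compile(r'^[0-9a-f]{12}$')

def normalise(mac: str) -> str:
    """Lowercase and strip common separators so input satisfies the constraint."""
    cleaned = mac.lower().replace(':', '').replace('-', '')
    if not MAC_RE.match(cleaned):
        raise ValueError(f'not a valid MAC address: {mac!r}')
    return cleaned

print(normalise('00:13:A9:20:F7:B2'))  # -> 0013a920f7b2
```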
Did I ever mention how I love postgres? Take that MySQL!