Google and other search engines are so good at predictive search these days that users (i.e. me) get very frustrated by poor search results on websites and input boxes. Something I try to provide across my apps is a decent auto-complete search interface, yet so many websites are very poor at this: either matching only the first part of the string, or matching any part of the substring. They often don’t handle differences in case properly either, and they almost never cope well with accent marks or word order (e.g. a search for beyoglu usually won’t return results like Beyoğlu). So, here follows a simple suggestion and code design pattern for how to implement this properly in PostgreSQL (it also works in MySQL, although the regex matching syntax is slightly different). You can then have great instant typeahead functionality, for example using the excellent AngularJS Bootstrap Typeahead input. I’ve implemented this in Perl/DBIC, but it’s a pattern that can easily be applied to any language such as Ruby/Rails or NodeJS.
Whilst there are a number of search options out there that can plug into existing databases, such as ElasticSearch, Sphinx or MySQL/PostgreSQL full-text search, these are often fiddly to set up and are intended more for natural full text than for the simple phrases or keywords I generally aim for. The method below is pretty quick and easy to set up and gives you full control over rebuilds, stemming, common-word removal and so on, which is especially important for multi-lingual sites. It also lets you easily switch between database servers without having to totally redo your search functionality.
Step 1: Add Column to Database Tables
Firstly, for any table you wish to search, create a searchdata column, probably a varchar sized to the maximum length of the combined data you’ll want to search (e.g. article title, author etc.). For example:
alter table article add searchdata varchar(255) not null default '';
Step 2: Create Search Query Normalization Code
Then in your code create two routines to normalize any search text. Here is a (Perl) example from my code:
package OradanOraya::Search;
use strict;
use warnings;
use utf8;

use Text::Unidecode 'unidecode';

sub to_search_no_strip {
    my ($self, $txt) = @_;
    $txt = lc unidecode($txt);   # transliterate accents to ASCII and lowercase
    $txt =~ s/[^a-z0-9 ]/ /g;    # replace non-alphanumeric chars with spaces
    $txt =~ s/\s+/ /g;           # collapse newlines and space runs
    $txt =~ s/^\s+|\s+$//g;      # trim leading and trailing whitespace
    return $txt;
}

sub to_search {
    my ($self, $txt) = @_;
    $txt = $self->to_search_no_strip($txt);

    # common words to strip out
    $txt =~ s/\b(?: bolum | hastane | doktor | doctor | doc | dr )\S*//xg;

    return $txt;
}

1;
The first function is purely for normalizing the search terms: it first strips accents using the excellent Text::Unidecode module, then replaces any non-alphanumeric characters with spaces, collapses runs of whitespace into single spaces and trims any whitespace from the beginning and end of the text. The second function does the same but also removes any common words you don’t want indexed.
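To make the normalization concrete, here’s a quick sanity check of the two routines; the inputs and expected outputs are my own illustration:

use utf8;
use OradanOraya::Search;

# Accents are transliterated and case is folded:
print OradanOraya::Search->to_search_no_strip('Beyoğlu, İstanbul!');
# => 'beyoglu istanbul'

# Common words (plus any suffix on them) are stripped as well:
print OradanOraya::Search->to_search('Dr. Ahmet, Beyoğlu Hastanesi');
# => 'ahmet beyoglu' (give or take leftover spaces, which the regex matching tolerates)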
Step 3: Set Columns to Auto Update in Your ORM
In your ORM base class (you are using an object-relational mapper rather than plain SQL, right?) create some functions to handle auto-population of this field when rows get updated by your code. For Perl’s DBIx::Class users, here’s the code to inject into your DBIC Result base class. The first function, _get_searchdata, is the key one: it takes a specified list of columns, normalizes their contents and returns the new searchdata value. The other functions handle manual refreshing of the search data in a row, and automatically updating it on update and create respectively:
sub _get_searchdata {
    my ($self) = @_;

    return OradanOraya::Search->to_search(
        join ' ', map { $self->$_ || '' } $self->searchdata_columns
    );
}

sub refresh_searchdata {
    my ($self) = @_;
    $self->update({ searchdata => $self->_get_searchdata });
}

sub set_column {
    my $self = shift;
    my $ret = $self->next::method(@_);

    if( $self->can('searchdata') ) {
        # Note that we call the super-class set_column method rather than
        # ourselves, otherwise we'd have an infinite loop
        $self->next::method( 'searchdata', $self->_get_searchdata );
    }
    return $ret;
}

sub insert {
    my $self = shift;
    if( $self->can('searchdata') ) {
        $self->searchdata( $self->_get_searchdata );
    }
    return $self->next::method(@_);
}
In any of your tables where you have added a searchdata column, create a method that simply returns the columns you want included in the search data:
sub searchdata_columns { qw< title name > }
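With those hooks in place the column maintains itself whenever you create or update rows through the ORM. For instance (a small illustration, assuming a hypothetical $schema connection and the Article result class above):

my $article = $schema->resultset('Article')->create({
    title => 'Beyoğlu',
    name  => 'Dr. Ahmet',
});

# The insert hook has already populated the normalized column:
print $article->searchdata;   # e.g. 'beyoglu ahmet' ('dr' stripped as a common word)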
Step 4: Search Queries and Ordering
Whenever a row is added or updated you’ll now have the normalized search text in place (see the final step for a script to populate it for existing data). To do nice searches you can now execute the following SQL (for MySQL, replace ~ with the REGEXP operator):
select * from article where searchdata ~ 'foo'
This will match the text anywhere. If you only want to match at the beginning of words, you can use PostgreSQL’s zero-width start-of-word operator \m (in normal regexp language this is roughly equivalent to \b, although \b matches at both the beginning and end of a word):
select * from article where searchdata ~ '\mfoo'
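On the MySQL side the word-boundary syntax differs by version; as far as I’m aware, the pre-8.0 regex engine uses the [[:<:]] start-of-word marker, while the ICU engine in 8.0+ accepts \b instead:

-- MySQL before 8.0
select * from article where searchdata REGEXP '[[:<:]]foo';

-- MySQL 8.0+ (ICU regex engine)
select * from article where searchdata REGEXP '\\bfoo';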
If you want to order the results so that beginning-of-string matches come first and the rest follow alphabetically, you can do something like this (note the !~, as false orders before true):
SELECT * FROM article WHERE searchdata ~ '\mfoo' ORDER BY searchdata !~ '^foo', searchdata
Well, that’s a job well done! You can look at using some sort of index in the database to speed this up, but to be honest, for tables with fewer than 10k rows that’s probably not worthwhile. For PostgreSQL you’ll want to look at the trigram indexes provided by the pg_trgm extension, which can accelerate regex matches like these; I don’t believe MySQL is able to index these sorts of searches.
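If your tables do grow to that point, a minimal sketch of the index setup (assuming PostgreSQL 9.3 or later with the pg_trgm extension available) would be:

create extension if not exists pg_trgm;
create index article_searchdata_trgm_idx on article using gin (searchdata gin_trgm_ops);

The planner can then use the trigram index to satisfy the ~ queries shown above.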
The DBIC code for this last one:
my $search_str = quotemeta( OradanOraya::Search->to_search( $p->{search} ) );

$dbic->resultset('Article')->search({
    searchdata => { '~' => '\m' . $search_str },
}, {
    order_by => [
        \[ 'searchdata !~ ?', [ str => '^' . $search_str ] ],
        'searchdata',
    ],
});
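To feed a typeahead widget you then just need an endpoint that returns the first few matches as JSON. Here’s a framework-agnostic sketch; the typeahead_results helper, the rows limit of 10 and the id/title fields returned are all my own illustration:

use JSON::PP 'encode_json';

sub typeahead_results {
    my ($dbic, $query) = @_;
    my $search_str = quotemeta( OradanOraya::Search->to_search($query) );

    my @articles = $dbic->resultset('Article')->search({
        searchdata => { '~' => '\m' . $search_str },
    }, {
        order_by => [
            \[ 'searchdata !~ ?', [ str => '^' . $search_str ] ],
            'searchdata',
        ],
        rows => 10,    # a typeahead only needs the first few suggestions
    })->all;

    # Assumes the Article result class has id and title columns
    return encode_json([ map { { id => $_->id, title => $_->title } } @articles ]);
}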
Extra Step: Create a Reindex Script
You’ll also want to write some code that finds any tables with a searchdata column and updates them, for the initial population. Here’s the Perl/DBIC solution for this again, but it should be simple enough with any ORM (note the transaction commit every 100 updates, which significantly improves performance on all database servers):
$|++;    # autoflush so the progress dots appear immediately
my $count = 1;

$dbic->txn_begin;
for my $table ($dbic->sources) {
    my $rs = $dbic->resultset($table);
    next unless $rs->result_source->has_column('searchdata');

    while( my $d = $rs->next ) {
        # Commit in batches of 100 rather than per row
        if( $count++ % 100 == 0 ) {
            $dbic->txn_commit;
            $dbic->txn_begin;
            print ".";
        }
        $d->refresh_searchdata;
    }
}
$dbic->txn_commit;
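For completeness, the loop needs a connected schema object to work with; a minimal wrapper, assuming a hypothetical MyApp::Schema class and Postgres connection details, might look like:

#!/usr/bin/env perl
use strict;
use warnings;

use MyApp::Schema;    # hypothetical DBIC schema class

my $dbic = MyApp::Schema->connect('dbi:Pg:dbname=myapp', 'user', 'password');

# ... run the reindex loop above ...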