Tuning Perl for a Postgres-based search engine

I have a database of just over 11000 jobs and I need to run an indexer against it for the search engine to work. Just recently this has been getting slower and slower due to other things going on with the server, so I decided to have a look at it tonight. The following is what I did and what I found:
Preliminaries.
All regexes are pre-compiled prior to comparison using something like:
$dict{qr/\b\Q$keyword\E\b/} = $keyword;
Table name == rows
key_word_search == 51641
rss_jobs == 179 (last nine hours' worth)
Total checks == 9243739 (i.e. 51641 keywords x 179 jobs)
Methods:
0. The normal indexing run with no tuning applied. This is what has been
running for the last few months.
1. For each job entry we are indexing, first check whether the job_id
and keyword pair is already in the index. If it is, go to the next record
(a rough sketch of this check follows the list).
2. Use Perl's "index" function to pre-process each record and only attempt a
full regex if the keyword appears somewhere in one of the 3 RSS entries.
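Method 1 is essentially an existence check before any matching is attempted. Roughly, it amounts to something like the following sketch (DBI is assumed, and the connection details, table and column names are made up, since the real schema is not shown here):

use strict;
use warnings;
use DBI;

# Hypothetical connection and table/column names, for illustration only.
my $dbh = DBI->connect('dbi:Pg:dbname=jobs', 'user', 'pass', { RaiseError => 1 });

my $seen = $dbh->prepare(
    'SELECT 1 FROM job_keyword_index WHERE job_id = ? AND keyword = ?'
);

# True if this (job_id, keyword) pair has already been indexed.
sub already_indexed {
    my ($job_id, $keyword) = @_;
    $seen->execute($job_id, $keyword);
    my ($hit) = $seen->fetchrow_array;
    $seen->finish;
    return defined $hit;
}
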
Results.
I was not going to try to second-guess the result, but I had a feeling
that M2 would be quicker. What surprised me is just how much
quicker. I imagine each method would see an improvement if more RAM were
given to Postgres, especially M1 and M0, but I doubt either of them would
catch M2.
Also, the trigger that inserts the job already carries out quite a
few checks to ensure the entry does not exist, so M1 duplicates work that is
being done anyway, and I am not about to relax the database checks/integrity to satisfy performance. Performance can normally be
cured by some other means, as can be seen here.
Outer == no. of total operations applied.
In == no. left after filtering by the method.
MATCH == no. we matched and will enter into the database.
The In figure reflects the filter I have put in before trying a full
regex match; the original indexer had no filter.
Method 0:
Outer == 9239317 In == 9239317 MATCH == 3009
real 8m23.868s
user 8m9.510s
sys 0m0.720s
Method 2:
Outer == 9239317 In == 14546 MATCH == 3009
real 1m30.897s
user 1m25.840s
sys 0m0.520s
As you can see, using Perl's built-in "index" function I have
managed to narrow the actual operations considerably. We are relying on
the speed of index compared to a full regex match to gain the
speed here. I imagine it is implemented almost literally as C's
char *strstr(const char *s1, const char *s2);
or something similar.
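To make the shape of the M2 filter concrete, here is a rough sketch of what it boils down to (the sample data and field names are made up; the real indexer pulls the keywords and RSS entries from Postgres):

use strict;
use warnings;

# Made-up sample data standing in for the database contents.
my @keywords = ('perl', 'postgres', 'linux');
my @jobs = (
    { id => 1, title => 'Perl developer wanted',
      description => 'mod_perl and postgres experience a plus',
      link => 'http://example.com/jobs/1' },
);

# Pre-compile the regexes once, as in the preliminaries above.
my %dict;
$dict{ qr/\b\Q$_\E\b/ } = $_ for @keywords;

for my $job (@jobs) {
    my $text = join ' ', @{$job}{qw(title description link)};
    while ( my ($re, $keyword) = each %dict ) {
        # Cheap substring test first; only fall back to the full regex on a hit.
        next if index($text, $keyword) < 0;
        if ( $text =~ /$re/ ) {
            print "job $job->{id} matched '$keyword'\n";
        }
    }
}
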
Method 1:
Outer == 105084 In == 99293 MATCH == 23
real 2m9.680s
user 0m16.840s
sys 0m5.090s
We can see here that this method is a lot slower. I actually stopped
this one early: it had only completed just over 1% of the total
operations required and had already taken 2 minutes. This was always going
to be slow due to the amount of IO required, i.e. up to 9 million calls to the
database, and a binary lookup on an index of just over 800k rows is not
going to be that fast at the best of times.
As an exercise, and to satisfy my own curiosity, I tried putting M1 first
and using M2 after it to see what would happen, and the following was the
result.
Outer == 9239317 In == 14540 MATCH == 3009
real 1m42.974s
user 1m22.980s
sys 0m1.430s
We can see from this that calling out to the database is adding
overhead to the process.
Conclusion:
When we are carrying out heavy, regex-intensive operations it pays to
pre-filter using Perl's built-in "index" rather than relying on the
speed of the regex itself.

Writing Linux Device Drivers

I lost my internet connection at the weekend and was at a bit of a loss as to what I could do so I decided to take a pop at writing a simple module for the Linux kernel. I have a copy of
Beginning Linux Programming
ISBN: 1861002971
Authors: Richard Stones and Neil Matthew
Ed: 2nd
so I turned to the back of it and started my foray into the Kernel. Now you need to remember that I am not a C programmer by trade and turning to the back of this book was a keen reminder of just how rusty my C is getting, not that it was ever rust free.
Luckily for me I have another book that is considered the C bible, i.e. K&R, and it deserves its reputation: it is a classic and I would recommend that any programmer, regardless of language choice, have a flick through it. Whenever I was looking at some odd construct that those pointy hats had invented, a flick through K&R soon sorted it out.
Anyway, back to the kernel. I was quick to discover that writing a module for the 2.6 kernel is not quite as straightforward as copying from the book and trying to understand what was going on. Things have been changing and I was getting all sorts of weird (to me at least) and wonderful errors when trying to compile the module.
I eventually started to have a read of the recent modules in the source for 2.6.5, which I am running on this box. I also have the source for a 2.4 kernel on here, so I opened 2 character drivers and compared notes between them. This is where I started to notice things that had changed. I made the changes I thought were necessary and I managed to get most of the "Hello World" module compiling, but I was still getting errors.
I had a hunt around and found a reference to some new build procedures for 2.6.5, so off I went in search of kbuild documentation and found more things that had changed in the kernel, namely the build procedure. This part was actually harder than the C I had been struggling with.
After much swearing (I hate Makefiles and adding some more sugar is a pain in the ass) I managed to get the module compiling and I was on my way.
After a day's work I now had a module that, on load, would say
“Hello World”
and on removal
“Goodbye World”
Time well spent or not? I haven't decided yet. I wonder how often changes like this take place in the kernel and how much porting takes place because of it.
Where to go from here? I asked a few friends who know more about this stuff than I do and got mixed advice about continuing. Some of them think the kernel is a mess because the driver API, among other things, is always changing. I cannot comment because my knowledge of the Linux kernel is limited to spelling it, and I sometimes get that wrong.
I did get some useful pointers though. The following is the best book I have found so far for someone like me who is just starting out in the kernel.
Linux Device Drivers, 2nd Edition
It is written for the 2.4 kernel but has a wealth of information that is still valid today. I have started porting the scull drivers from it to the 2.6 kernel I am running and it is proving very interesting. I printed off chapters 2 and 3 yesterday and have almost finished them (40 mins from Luton to London on the train each way helps). So far it seems to be moving along at a fair old pace; I am just hoping I can keep up.
I could have done with the following at the weekend. This tells me what I needed to know about moving from 2.4 to 2.6. I can see myself using this a lot in the next few weeks.
Driver Porting

PhpMyLibrary

I had a look at Koha as an open source library system we might use at work and I promised I was going to look at phpmylibrary the next day. Well I didn’t have time but I did manage to look at it just recently and here is what I found.
First off, it installed very easily, which was nice. We got it up and running, though not without problems, i.e. we had to turn globals on, which under PHP is normally considered a no-no. This rang alarm bells in my head but I continued on.
The next thing I noticed was the code. It might be because I am used to Perl, but the code just looked messy. That is no reason to judge it, though, so I had a look at the main feature, i.e. loading and understanding MARC21.
I could have saved myself a lot of time if I had noticed that they only support USMARC. I left a message on one of their mailing lists asking about the possibility of using MARC21 but heard nothing, which was another bad sign, i.e. from what I can tell it's not a very active project.
The next thing I will be looking at is CDS/ISIS, which is a suite of tools written by the United Nations Educational, Scientific and Cultural Organization (UNESCO).

Swoogle

It would appear that Swoogle is not being very polite to web servers. It seems to hit me 2 to 4 times a second. I am probably going to ban it from UKlug because it's just not very nice to hammer someone's server that hard. At least Google is useful, i.e. people find my site via Google, and it still manages to be polite about it. You would think that people doing research would try to be a bit more polite about what they are doing.
I have sent a couple of emails to the technical contacts and the people running Swoogle.
If I don't get a reply I will ban their entire subnet, because they appear to be spidering from different IP addresses:
130.85.95.109
130.85.95.23

Do not cast your pearls in front of swine

I heard a great saying today.
“do not cast your pearls in front of swine”
This is so true. The context I heard it in was in reference to people who will not accept Open Source as an alternative to proprietary systems, i.e. you try to convince someone that there is a free tool that will do the job, but they insist on spending money on a closed system because free stuff can't possibly be as good, or maybe that's just what they know and they don't want to change.
Don't bother with them; let them spend their money, and spend your time on someone who deserves and appreciates it. Unfortunately some people are like horses and need their blinkers in order to work, otherwise they get spooked.
For those interested in where the saying comes from, it's the Bible. The original King James Version says:
“Give not that which is holy unto the dogs, neither cast ye your pearls before swine, lest they trample them under their feet, and turn again and rend you.”
(Matt. 7:6).

Unique IP address parser

Dean sent me the following to parse the logs and see how many unique IP addresses I was getting on a monthly basis.
grep 'Nov/2004' uk*.log | awk '{ print $1 }' | sort | uniq | wc
I wrote the following in Perl, which does much the same thing, but I think I prefer Dean's:
perl -ne '/^(.*?)\s/; $a{$1}++;} END{for (keys %a){$c++;print "$_ == $a{$_}\n"} print "$c\n";' ukl*ss.log
maybe we could
perl -ane '$a{$F[0]}++};END{for (keys %a){$c++;print "$_ == $a{$_}\n"} print "$c\n";' ukl*ss.log
or maybe even
perl -ane '$a{$F[0]}++} END{for(keys %a){$c++;} print "$c\n";' ukl*ss.log
Or we could just
perl -ane '$a{$F[0]}++;END{print keys(%a)."\n";}' ukl*ss.log
Bollocks to this. I am sure there is some cleverer one-liner in Perl to do this, but I hardly ever use them so I will leave it to the reader to beat it 😉

Who’s searching on what

I noticed that someone had searched for gimpy on my blog today, and I was wondering what terms people are finding my site with, so I ran the following over my logs:
perl -ne '/.*google.*?&q=(.*?)(&|").*$/; print "$1\n" if $1;' *.log | uniq
I am sure there is a shorter and better way to do it but this was more than enough to have a quick look.
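One small improvement would be to sort before uniq (on its own, uniq only collapses adjacent duplicates), decode the URL-encoded terms and add counts; a rough variant along those lines:

perl -ne 'if (/google.*?[?&]q=([^&"]+)/) { my $t = $1; $t =~ tr/+/ /; $t =~ s/%([0-9a-fA-F]{2})/chr hex $1/ge; print "$t\n" }' *.log | sort | uniq -c | sort -rn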

Movable Type SpamAssassin Plugin

I have just finished the beta release of MT-SpamAssassin and so far so good. I have removed MT-Blacklist and everything is fine. I have not built the Bayesian database up completely yet since I don’t have that many comments. If you want to try it you can download it here.
MT-SpamAssassin Download
Please leave me some decent comments so I can seed the database 😉

Tools for manipulating Images

Occasionally at work we need to do some simple task that involves converting images or finding their sizes, etc. The problem with "occasional" tasks is that you can never remember the way you did it the last time.
What size is that jpeg, gif or png?
How can I resize that image?
You can't be bothered firing up gimp or some other tool, so what can you do...

@debian:$ identify truman.gif
truman.gif GIF 258x333 258x333+0+0 PseudoClass 32c 24kb 0.000u 0:01

That was easy, wasn't it? What if we need much more info than this? Well, that's much harder; we need to do the following:

@debian:$ identify -verbose truman.gif

The hard part is the extra typing. I will leave it to the reader to try that one (there is too much output to show here).
What about those times when you just wish one of your images was half the size? Well, here comes another great tool to the rescue:

@debian:$ convert -sample 50%x50% truman.jpg truman_half.jpg

For those who are after a bit more info on these handy little tools, head over to IBM developerWorks for more information.
Even the article above only scratches the surface of what convert can do.
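The same library can also be driven from Perl via the Image::Magick (PerlMagick) module, which is handy when the image work is part of a larger script. A minimal sketch, assuming the module is installed and truman.gif is in the current directory:

use strict;
use warnings;
use Image::Magick;

my $img = Image::Magick->new;
my $err = $img->Read('truman.gif');
warn "$err" if "$err";

# Pull the dimensions out, much like identify does.
my ($w, $h) = $img->Get('width', 'height');
print "truman.gif is ${w}x${h}\n";

# Shrink to half size and write a copy, much like the convert example above.
$err = $img->Resize(geometry => '50%');
warn "$err" if "$err";
$err = $img->Write('truman_half.gif');
warn "$err" if "$err";
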

SpamAssassin Plugin for Movable Type

I asked on the Movable Type support forum if anyone would be interested in a plugin that uses SpamAssassin. There were no replies to the post, so it looks like comment spam is either no longer such an issue in the blogging world, or maybe it has already been done and I have not found the link. Perhaps I posted it to the wrong forum 😉 I would have thought there would be some interest in it, but I was mistaken.
I wrote the plugin on Saturday and it is almost finished, except for a pretty GUI. The Bayesian filtering is also working; I tested it by scripting a few thousand spam entries into it to see whether it would start spotting them, and it did.
Thanks to the pluggable nature of Movable Type, the plugin sits quite unobtrusively within it. I was after a much simpler solution than MT-Blacklist, without the separate GUI and management facilities, etc., and I think I can achieve this.
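For anyone curious, the heart of it is little more than handing the comment to Mail::SpamAssassin and asking for a verdict. A stripped-down sketch of that part, assuming a reasonably recent Mail::SpamAssassin (the field names are illustrative, and the MT plugin hooks and GUI are left out):

use strict;
use warnings;
use Mail::SpamAssassin;

my $sa = Mail::SpamAssassin->new();

# Wrap the comment up as a minimal mail message so SpamAssassin can score it.
sub comment_is_spam {
    my (%comment) = @_;
    my $text = "From: $comment{email}\n"
             . "Subject: blog comment\n\n"
             . "$comment{body}\n";
    my $mail    = $sa->parse($text);
    my $status  = $sa->check($mail);
    my $verdict = $status->is_spam();
    $status->finish();
    return $verdict;
}

# e.g. comment_is_spam(email => 'someone@example.com', body => 'Nice post!');
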
I intend to keep working at it and eventually use it on this blog so if you would like to try it contact me.