I am currently investigating Movable Type. It seems to be the danglies when it comes to doing blogs, but using it would involve finding the time to install it, and at the moment time is a bit of a luxury for me.
Should I add a comment feature? 28 Dec 03
I suppose I should add a tool where people can leave comments, but that might be asking for trouble, and it is a little more involved than writing my own RSS generator. It's still fairly straightforward if you have some experience with a database and some free time, something I seem to get in fits and starts. I have a tool that allows people to comment on recruitment agencies, but it is fairly basic and, compared to some of the forums around today, it's not even in the same league.
Some RSS 27 Dec 03
I have been playing with RSS for a few days, and since most of the RSS I have seen has come from blogs, I decided to upgrade my plain old XHTML diary into a whizzy, RSS-compliant, new-fangled jobby. I have no reason for doing this other than possible self-promotion via my massively increased site traffic... NOT.
I can hear people scream "use X or Y, do not write your own". What would be the fun in using someone else's RSS generator? I had a look at some of the more noteworthy blogs and noticed that there is an awful lot of commented-out text in the source of their pages. This seems a bit ignorant to me because I am paying for bandwidth and every bit counts ;-). I know that's a lame excuse, but I could not help it, nor could I think of a better one. To cut a long story short, I used a very crude method to do it.
Using a couple of extra "span" tags I was able to come up with some compliant RSS from my blog. The joy of Perl.
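For the curious, the markup the script keys on looks roughly like this. The blogtitle and blogtext class names are taken straight from the script below; the rest of the layout is only an illustration of the shape of an entry, not my exact pages:

<span class="blogtitle"> <a href="/cgi-bin/blog/december.html">Some RSS 27 Dec 03</a> </span>
<span class="blogtext">The body of the entry, which ends up as the item description.</span>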
The Script I used
The following script is quite rough around the edges but it gets the job done. If you have any questions about the Perl, or about why I just had to write my own, feel free to ask.
#!/usr/bin/perl
use strict;
use warnings;
use HTML::Parser;
use URI::URL;
use XML::RSS;
use LWP::Simple;
my $base     = "/hjackson";
my $base_url = "http://www.hjackson.org";

# Map of HTML diary pages to the RSS files they should be written to.
my $PAGES = {
    "$base_url/cgi-bin/blog/december.html"  => 'htdocs/blog/december.xml',
    "$base_url/cgi-bin/blog/november.html"  => 'htdocs/blog/november.xml',
    "$base_url/cgi-bin/blog/october.html"   => 'htdocs/blog/october.xml',
    "$base_url/cgi-bin/blog/september.html" => 'htdocs/blog/september.xml',
};

# Simple state machine: 0 = not seen, 1 = inside the element, 2 = text captured.
my $STATE = { 'intext'  => 0,
              'intitle' => 0,
              'inlink'  => 0,
              'inspan'  => 0, };

# The item currently being built from the page.
my $RSS = { 'link'        => "",
            'title'       => "",
            'description' => "", };
sub start_tag {
    my ($self, $tag_name, $attr) = @_;
    if ( lc($tag_name) eq 'span' ) {
        # A span with class "blogtitle" marks the start of an entry title.
        if ( lc($attr->{class}) eq 'blogtitle' ) {
            $STATE->{intitle} = 1;
        }
        # A span with class "blogtext" marks the start of an entry body.
        if ( lc($attr->{class}) eq 'blogtext' ) {
            $STATE->{intext} = 1;
        }
    }
    # The anchor inside the title span carries the link for the item.
    if ( lc($tag_name) eq 'a' and $STATE->{intitle} eq '2' ) {
        $STATE->{'inlink'} = 1;
        $RSS->{'link'}     = $attr->{href};
    }
}
sub text {
    my ($self, $text) = @_;
    # First text seen inside the title span.
    if ( $STATE->{intitle} eq 1 ) {
        $RSS->{title}     = $text;
        $STATE->{intitle} = 2;
    }
    # Text inside the anchor is the real title, so overwrite it.
    if ( $STATE->{intitle} eq 2 and $STATE->{inlink} eq 1 ) {
        $RSS->{title}    = $text;
        $STATE->{inlink} = 2;
    }
    # Text inside the body span becomes the item description.
    if ( $STATE->{intext} eq 1 ) {
        $RSS->{description} = $text;
        $STATE->{intext}    = 2;
    }
    # Once title, link and body have all been captured, emit the item.
    if ( $STATE->{intitle} eq '2' and $STATE->{intext} eq '2' and $STATE->{inlink} eq '2' ) {
        create_rss();
    }
}
sub end_tag {
    my ($self, $tag_name, $attr) = @_;
    # Nothing needs to happen on closing tags; the state machine is driven
    # entirely by the start and text handlers above.
}
my $rss;
sub create_rss {
    $rss->add_item(
        title       => $RSS->{title},
        'link'      => $RSS->{link},
        description => $RSS->{description},
    );
    # Reset the item and the state machine for the next entry.
    $RSS->{'title'}       = "";
    $RSS->{'link'}        = "";
    $RSS->{'description'} = "";
    $STATE->{intext}  = 0;
    $STATE->{intitle} = 0;
    $STATE->{inlink}  = 0;
}
my ($html_page, $xml_page);
while ( ($html_page, $xml_page) = each %{ $PAGES } ) {
my $content = get($html_page);
    #print "$html_page \n$content\n";
    $rss = XML::RSS->new( version => '1.0' );
    $rss->channel(
        title       => "Harry Jacksons Blog",
        'link'      => "http://www.hjackson.org",
        description => "Just my Blog",
        dc => {
            date      => '2000-08-23T07:00+00:00',
            subject   => "Harrys Blog",
            creator   => 'harry@hjackson.org',
            publisher => 'harry@hjackson.org',
            rights    => 'Copyright 2003, Harry Jackson',
            language  => 'en-us',
        },
        syn => {
            updatePeriod    => "hourly",
            updateFrequency => "1",
            updateBase      => "1901-01-01T00:00+00:00",
        },
    );
    my @tags = ('span', 'a');
    my $p = HTML::Parser->new( api_version => 3 );
    $p->report_tags(@tags);
    $p->handler( start => \&start_tag, "self,tagname,attr" );
    $p->handler( text  => \&text,      "self,text" );
    $p->handler( end   => \&end_tag,   "self,tagname,attr" );
    $p->parse($content) || die $!;

    open( FILE, ">$base/$xml_page" )
        or die "Cannot open file $!\n";
    print FILE $rss->as_string;
    close(FILE);
}
RSS Job Database 23 Dec 03
I have been playing with RSS for a few days and have now got an RSS job database. I spent ages trying to find RSS feeds for jobs and so far have not found very many. The database can be found here. An example URL which can be used to search and create RSS feeds from the database is as follows:
http://www.uklug.co.uk/cgi-bin/getjobs.rss?K=perl%20london&M=2&L=100&D=2592000&C=10&npo=1
This link creates an RSS version 1.0 feed based on a search of the database. You can see from the URL that we are searching for the terms "perl" and "london". For more information on how to use the database please see the help page.
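If you want to pull one of these feeds apart yourself, the following is a minimal sketch using the same modules as the blog script above. The URL is the example one and the item fields follow the structure XML::RSS produces when parsing a feed:

#!/usr/bin/perl
# Minimal sketch: fetch a feed from the job database and print each item.
use strict;
use warnings;
use LWP::Simple;
use XML::RSS;

my $url = 'http://www.uklug.co.uk/cgi-bin/getjobs.rss?K=perl%20london&M=2&L=100&D=2592000&C=10&npo=1';
my $content = get($url) or die "Could not fetch $url\n";

my $rss = XML::RSS->new();
$rss->parse($content);

# Each parsed item is a hash with title, link and description keys.
foreach my $item ( @{ $rss->{items} } ) {
    print "$item->{title}\n$item->{link}\n\n";
}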
Another Google Find
I found a reference to my brother, Lee Jackson, while searching for information on our family name. There is not really much there, just that he won a darts match. It's weird when you just happen to come across something like that.
Spidering the Internet 07 Dec 03
I have started to document what I have been doing to construct the spiders. It is not really a tutorial; it's more about what I did and how I did it. I doubt it is even close to how it should be done, but I am enjoying doing it and I get to research some interesting areas of information retrieval and processing along the way, so what the hell.
Finishing the Robots 03 Dec 03
I have been quite busy lately dipping my toes in various waters, hence the lack of entries recently. I have actually finished with the robots for the short term and have now moved on to the search engine part of the project.
I am enjoying building the search engine because I get to work with C++ again, which is another language I enjoy using. I like it because I feel as close to the hardware as I am when using C, but I have various high-level tools at hand when I need them. I picked C++ over C because it has the STL, which I have used before. I imagine that most commercial search engines are using either C or C++ for the same reasons.
Off to Wales 20 Nov 03
I have left the robots gathering links and pages for the last few days and the results are as follows. I am off for a couple of weeks, one of which will be spent in a cottage in Wales, which should be nice, so there will be little or no action here for quite a while.
62.0 Million links found
11.9 Million unique links found
Weeding the database 12 Nov 03
You will see that the database has been reduced in size quite a bit. I have been running out of space, so I decided to do some weeding. What I have done is fix all the URLs that had a fragment part. URLs come in the following format:
scheme://host/path?query#fragment
The fragment part of the URL is not really required by us because it only indicates a position within a document. That level of granularity is of no use to us; we are only interested in the document itself. I wrote a simple Perl script in conjunction with a Postgres function to weed these out. During the process I deleted all links that were found by following the original URLs with fragments, which is what has led to the reduction in total links found. If you have a look at the latest robot code you will see that I now cater for the fragment part and strip it off before requesting the document.
55.0 Million links found
11.9 Million unique links found
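The stripping itself only takes a couple of lines with the URI module. This is an illustrative sketch rather than the exact code in the robot, and the example URL is made up:

#!/usr/bin/perl
# Sketch: drop the fragment from a URL before requesting the document.
use strict;
use warnings;
use URI;

my $raw = 'http://www.example.com/page.html#section-3';  # hypothetical link pulled from a page
my $uri = URI->new($raw);
$uri->fragment(undef);                                    # remove the #fragment part
print $uri->as_string, "\n";                              # http://www.example.com/page.html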
Re-writing the spiders 08 Nov 03
I have been very busy lately re-writing the spiders for the search engine. I have decided to write up what I did to build the spider in the vain hope that someone may find it useful one day. I digressed several times and had some fun writing a recursive one, but I eventually settled on an iterative robot that uses Postgres to store the links, partly because I already had a database with several million links in it. Please see the link above for more details. I have also managed to download a few thousand documents for the search engine, hence the increase in links found, which was caused by parsing the documents I had gathered while experimenting with the new robots.
85.0 Million links found
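The write-up linked above has the real details; as a rough flavour of the iterative approach, the main loop looks something like the sketch below. The table and column names are invented for the example, and duplicate handling and politeness delays are left out:

#!/usr/bin/perl
# Rough sketch of an iterative robot: pull an unvisited link from Postgres,
# fetch the page, extract its links and queue them for later visits.
# The links table and its columns are made up for illustration.
use strict;
use warnings;
use DBI;
use LWP::Simple;
use HTML::LinkExtor;
use URI;

my $dbh = DBI->connect( 'dbi:Pg:dbname=spider', 'user', 'password',
                        { RaiseError => 1, AutoCommit => 1 } );

while ( my ($id, $url) = $dbh->selectrow_array(
            'SELECT id, url FROM links WHERE visited = false LIMIT 1') ) {
    # Mark the link as visited before fetching so we never request it twice.
    $dbh->do('UPDATE links SET visited = true WHERE id = ?', undef, $id);
    my $content = get($url) or next;

    # Passing the page URL as a base makes extracted links absolute.
    my $extor = HTML::LinkExtor->new(undef, $url);
    $extor->parse($content);
    for my $link ( $extor->links ) {
        my ($tag, %attr) = @$link;
        next unless $tag eq 'a' and $attr{href};
        my $uri = URI->new( $attr{href} );
        $uri->fragment(undef);    # strip the fragment, as described above
        $dbh->do('INSERT INTO links (url, visited) VALUES (?, false)',
                 undef, $uri->as_string);
    }
}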