Blog Spam Project

I was approached tonight by Henry Stern with respect to registering my interest in a project to help curb blog Spam. Apparently it has been noted that I wrote a SpamAssassin plugin for Movable Type. Wonders never cease.
I think a project would be a good thing; what comes out of it remains to be seen. There are a good few blogging tools out there and I doubt getting them all pulling together would be feasible, although that is not really a necessary requirement for starting one.

PostgreSQL oom-killer

I had never seen the oom-killer before. I had heard stories from battle-hardened veterans about their tussles with the beast, but those stories were all just myths to me until today, when the beastie raised its head in my logs:
Dec 20 18:08:50 debian kernel: oom-killer: gfp_mask=0x1d2
Dec 20 18:08:51 debian kernel: DMA per-cpu:
Dec 20 18:08:51 debian kernel: cpu 0 hot: low 2, high 6, batch 1
Dec 20 18:08:51 debian kernel: cpu 0 cold: low 0, high 2, batch 1
Dec 20 18:08:51 debian kernel: Normal per-cpu:
Dec 20 18:08:51 debian kernel: cpu 0 hot: low 32, high 96, batch 16
Dec 20 18:08:51 debian kernel: cpu 0 cold: low 0, high 32, batch 16
Dec 20 18:08:51 debian kernel: HighMem per-cpu:
Dec 20 18:08:51 debian kernel: cpu 0 hot: low 14, high 42, batch 7
Dec 20 18:08:51 debian kernel: cpu 0 cold: low 0, high 14, batch 7
Dec 20 18:08:51 debian kernel:
Dec 20 18:08:51 debian kernel: Free pages: 1040kB (112kB HighMem)
Dec 20 18:08:51 debian kernel: Active:253987 inactive:249 dirty:0 writeback:5 unstable:0 free:260 slab:2326 mapped:254013 pagetables:680
Dec 20 18:08:52 debian kernel: DMA free:16kB min:16kB low:32kB high:48kB active:12296kB inactive:0kB present:16384kB
Dec 20 18:08:52 debian kernel: protections[]: 0 0 0
Dec 20 18:08:52 debian kernel: Normal free:912kB min:936kB low:1872kB high:2808kB active:873984kB inactive:996kB present:901120kB
Dec 20 18:08:52 debian kernel: protections[]: 0 0 0
Dec 20 18:08:52 debian kernel: HighMem free:112kB min:128kB low:256kB high:384kB active:129668kB inactive:0kB present:131008kB
Dec 20 18:08:52 debian kernel: protections[]: 0 0 0
Dec 20 18:08:52 debian kernel: DMA: 0*4kB 0*8kB 1*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 16kB
Dec 20 18:08:52 debian kernel: Normal: 0*4kB 0*8kB 1*16kB 2*32kB 1*64kB 0*128kB 1*256kB 1*512kB 0*1024kB 0*2048kB 0*4096kB = 912kB
Dec 20 18:08:52 debian kernel: HighMem: 0*4kB 0*8kB 1*16kB 1*32kB 1*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 112kB
Dec 20 18:08:52 debian kernel: Swap cache: add 848298, delete 845230, find 151472/187493, race 0+3
Dec 20 18:08:52 debian kernel: Out of Memory: Killed process 6332 (postmaster).
It would appear I was being a bit greedy with Postgres.
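The cure is to rein Postgres in a bit. As a sketch of where to start, assuming 7.4-era parameter names in postgresql.conf (the right values depend entirely on how much RAM the box can actually spare):
shared_buffers = 4096   # in 8kB pages, so roughly 32MB of shared memory
sort_mem = 4096         # kB per sort, and several sorts can run at once
A restart of the postmaster is needed for shared_buffers to take effect.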

PostgreSQL Database Recovery

If you have found this page then you probably have a serious problem with a corrupt Postgres database. If, like me, you found that your database seems to be missing, then you have come to the correct place. I am not saying this will cure your problem or even come close, but it might give you a few ideas on where to look next.
If you are going to follow this article I strongly suggest you read it ALL before doing anything. I have written it up as I did it, and I went down some blind alleys before I recovered the data. If you start doing what I did as you read, you might waste a lot of time or fsck something up, so read it first and decide what you need to do. In some situations a single simple command will see you sorted; others may have to do things a bit differently.
I would also highly recommend getting onto the Postgres mailing lists and asking some pertinent questions. If you do post questions, give as much detail as possible, e.g. version numbers and full debug-level log output. There are people on there who have done this a lot more than I have, and I am sure they have seen nastier cases than the one I got.
This article does not cover how to fix file system errors; see fsck or e2fsck for that if you have them. You might also want to investigate setting
zero_damaged_pages = true
in your postgresql.conf file if you are expecting corruption in your files. Ask on the PostgreSQL mailing lists about this before doing it though.
My problems started as follows:
postgres@debian:~$ psql links
Welcome to psql 7.4.3, the PostgreSQL interactive terminal.
Type: \copyright for distribution terms
\h for help with SQL commands
\? for help on internal slash commands
\g or terminate with semicolon to execute query
\q to quit
links=# \dt
no relations found
links=# select relname from pg_class where relname not like 'pg_%';

None of my tables were present in the output. This is where I had a sudden urge to go to the toilet.
I don't often use the links database, but every now and then I start a set of spiders that use it to traverse the internet. I have been doing this for about 2 years and the database is huge, at least for a home machine.
I only had a backup from several months ago (lesson to be learned here) and I really didn't want to lose the data I had collected over the last few months.
Hunting around the internet, I noticed that a lot of people have had corrupt Postgres databases and managed to recover them with varying degrees of success. Most of these corruptions seemed to be hardware related, some down to bodged upgrades. Mine could have been one of 2 things:
1. A glitch on the 160Gb SATA drive the database is stored on. This happened the other night.
2. Recent Debian upgrade to the database.
At this point figuring out what went wrong was less important than getting the data back, so I decided not to go on a witch hunt and to cure it instead.
This was the point where I asked myself what was more important:
1. Recover as much of the data as possible.
2. Data Integrity.
For me the choice was quite simple. I wanted the data, and I needed to be able to retrieve the vast majority of it; otherwise I might as well just use the most recent dump, which would guarantee the integrity of the database but would set me back a few months.
First thing I did was stop the postgres server:
/etc/init.d/postgres stop
I then took a full copy of the "data" directory; this is the one you may have used the "initdb" command to set up.
Once the backup has been made, make sure nothing happens to the original directory. Don't touch it at all, because we may need it later. All subsequent actions use the copy of the database, not the original.
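Something along these lines does it; the destination path here is illustrative, and the -a flag matters because it preserves ownership, permissions, and symlinks:
cp -a /var/lib/postgres/data /var/lib/postgres/data_copy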
At this point it might be an idea to do a little data gathering. I needed to know which table was the largest, so:
ls -la /var/lib/postgres/data/base/17142/
This lists all the files in the directory. I was pretty sure the biggest table was going to be either the "child_links" or the "home_page" table, and in my case it was easy to see which files were the largest. I also turned on full logging on all the Postgres databases by editing the postgresql.conf file. Be aware that any further cluster created with "initdb" gets its own config files in its data directory and these will need to be edited too; I suggest copying a single common one over them. Another thing you will need to know is the last transaction ID, or at least as close a value as possible. When you start or stop the Postgres database it writes an entry to the log file, and this contains a TID. I used grep to find mine, i.e.
grep "next transaction ID" /var/log/syslog
This produced a list of TIDs. (I log to syslog; you might not. Check postgresql.conf to find out.)
The next thing I did was create a new database away from both the copy and the original. I did this using the initdb command as follows:
initdb /var/lib/postgres/recovery_database
This creates a skeleton database data directory ready for action. Make sure no postmaster instances are running, then start the new database as follows:
/path/to/postmaster -D /var/lib/postgres/recovery_database
This database as it stands is not really much use so:
createdb links
I then fished out the create table script for my database (you may not need this) and created the empty tables with it. I then had all my original tables with no data in them. The next thing I did was:
links=# select relfilenode from pg_class where relname = 'child_links';
 relfilenode
-------------
       17160
(1 row)
This gave me the name of the file on disk where the table data was. I stopped the database and then:
cp /copy/base/17142/172345 /path/recovery_database/base/17142/17160
cp /copy/base/17142/172345.1 /path/recovery_database/base/17142/17160.1
cp /copy/base/17142/172345.2 /path/recovery_database/base/17142/17160.2
I know I could have just soft-linked them, but I was being cautious: if I take copies, the originals are safe from harm if I make a cock-up.
Restart the recovery database using the following (please read the man page before using pg_resetxlog). I used 90812030 here because it was the largest transaction ID I could get from the logs:
pg_resetxlog -x 90812030 /var/lib/postgres/recovery_database
/path/to/postmaster -D /var/lib/postgres/recovery_database
I then used
postgres@debian:~$ psql links
Welcome to psql 7.4.3, the PostgreSQL interactive terminal.
Type: \copyright for distribution terms
\h for help with SQL commands
\? for help on internal slash commands
\g or terminate with semicolon to execute query
\q to quit
links=# select count(*) from child_links;
count
———-
16341924
(1 row)
I immediately recognized that this count was wrong; 16 million rows looked more like the home_page table to me. I decided to try:
links=# select * from child_links;
……..ERROR………
This threw an error straight away, which was a good indication that although it would count rows etc., the structure was completely different. I then ran:
links=# select relfilenode from pg_class where relname = 'home_page';
 relfilenode
-------------
       17152
(1 row)
I stopped the database again, and now that I had the filename of the recovery database's home_page table I was able to soft-link the files I copied earlier to it as follows:
cd /var/lib/postgres/recovery_database/base/17142
ln -s 17160 17152
ln -s 17160.1 17152.1
ln -s 17160.2 17152.2
This was a bodge to save me the time of copying the files back over. If I corrupt these files the copies are safe anyway, so a soft link was a quick way to see if this would work.
I restarted the recovery database:
/path/to/postmaster -D /var/lib/postgres/recovery_database
I then used
postgres@debian:~$ psql links
Welcome to psql 7.4.3, the PostgreSQL interactive terminal.
Type: \copyright for distribution terms
\h for help with SQL commands
\? for help on internal slash commands
\g or terminate with semicolon to execute query
\q to quit
links=# select count(*) from home_page;
count
———-
16341924
(1 row)
links=# select * from home_page limit 10;

This returned 10 rows of data, which is all well and good.
links=# select * from home_page;

At first this seemed to be doing something, but it eventually failed with an error similar to the following:
FATAL: XX000: xlog flush request 29/5BEF8A58 is not satisfied --- flushed only to 0/62000050
This meant I had to work out what had caused it. By Googling I found this discussed on one of the postgres mailing lists.
From what I could tell I needed to use pg_resetxlog again, but this time I had to give it some info about the WAL settings. Following the instructions in the man page, I stopped the database and issued:
:~$ pg_resetxlog -x 90812030 -l 0x58,0x25 /var/lib/postgres/recovery_database
I then restarted the server and tried again:
postgres@debian:~$ psql links
Welcome to psql 7.4.3, the PostgreSQL interactive terminal.
Type: \copyright for distribution terms
\h for help with SQL commands
\? for help on internal slash commands
\g or terminate with semicolon to execute query
\q to quit
links=# select * from home_page;

This time the full select worked, so I used pg_dump to dump the table out to a backup file. The prospect of doing the above for each table made me cringe, so I decided to risk trying something on the original database. I do not recommend taking this shortcut if you are really worried about your data; I have the luxury that mine is neither critical nor particularly important. If you value your data, don't do this to the original repository, do it somewhere else first.
I shut down the recovery instance and then ran:
:~$ pg_resetxlog -x 90812030 -l 0x58,0x25 /var/lib/postgres/data
against the original database. I restarted the database using
/etc/init.d/postgresql start
I logged into the database and was now able to see all the tables. I ran a few select statements and everything looked fine. I then logged out and ran:
pg_dump links | gzip > links_dump.gz
This dumped the entire database out.
I then created another directory for a new database as follows.
initdb /var/lib/postgres/new_database
Remember to edit the new config files, i.e. postgresql.conf. You might want a higher setting for checkpoint_segments while restoring; I used 15:
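checkpoint_segments = 15   # default is 3; fewer checkpoint pauses during a bulk load
With the new cluster started and the links database created again with createdb, I then ran: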
cat links_dump.gz | gunzip | psql links
This finished with the following errors.
ERROR: there is no unique constraint matching given keys for referenced table “home_page”
ERROR: insert or update on table “rev_index” violates foreign key constraint “lexicon_id_fk”
DETAIL: Key (lexicon_id)=(22342) is not present in table “lexicon”.
ERROR: there is no unique constraint matching given keys for referenced table “home_page”
There are basically some broken constraints. I also had errors indicating that a few of the unique indexes and primary keys could not be created. This did not concern me too much since it is easily remedied. Everything after this point is simple administration: dropping and creating tables, fixing the broken constraints, and checking the integrity of the data, which is simple enough.
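As an example of the kind of fix involved, the lexicon_id_fk failure above can be handled by deleting the orphan rows and re-adding the constraint. The table and column names come straight from the error messages, but the referenced column is my assumption:
delete from rev_index
where lexicon_id not in (select lexicon_id from lexicon);
alter table rev_index add constraint lexicon_id_fk
foreign key (lexicon_id) references lexicon (lexicon_id);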
That was it for me. A fully recovered PostgreSQL database.

Kernel 2.6.5 and 2.6.9 fun

I upgraded from kernel 2.6.5 to kernel 2.6.9 because I was getting DMA errors when ripping CDs to disk. I was also getting major errors with the SATA disk when copying the CDs to my mp3 player, so I bit the bullet and decided to try an upgrade.
During boot I came across what I thought was some kind of bug. When I rebooted the kernel, fsck complained about a bad file system, with no indication as to which device, just the error message.
I logged back in and my SATA disk was gone. It had not been mounted during boot, which was a bit of a bummer. I had a look at the dmesg output and lo and behold it is now a SCSI device, so my three old mount points are now invalid. I am using the VIA controller, i.e.
CONFIG_SCSI_SATA_VIA=y
in my config. I find these changes very disconcerting, but then I am not a kernel hacker. I wonder if there is an easy way to see changes like this without having to read through tons of changelogs. It's even worse when you are jumping several versions.
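Fixing the mounts comes down to pointing fstab at the new device names. A sketch of the kind of edit involved; the device names and mount point here are illustrative, not my actual entries:
# /dev/hde1  /mnt/sata  ext3  defaults  0  2    <- old IDE-style name
/dev/sda1    /mnt/sata  ext3  defaults  0  2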
Kernel upgrades, like life, are just full of little surprises.

kernel: cdrom: dropping to single frame dma

This error manifests itself when using grip to encode some CDs to Ogg. When a CD is scratched it takes a long time to rip to disk, so I would normally set these aside and do them later. When I tried to abort the ripping, grip failed with an application error, and when I checked the logs I saw:
kernel: cdrom: dropping to single frame dma
Ripping after this point fails at around 90% of each track. I have hunted high and low for a solution and so far I have not found one. I am using a SCSI CD burner:
'YAMAHA ' 'CRW2100S ' '1.0H' Removable CD-ROM
For the time being, one solution I saw mentioned was to reload the kernel module for the CD drive. This is not easy for me because mine is compiled into the kernel. I decided to download 2.6.8 and see if that works instead. I compiled my SCSI card driver as a module just in case anything went wrong.
Something weird did happen. When I booted into the new kernel I was unable to mount the H320 USB device, which is a bit of a bummer. Worse than that, XFree86 started using between 60 and 90 percent CPU. Something is definitely not right with the new kernel. I recompiled the kernel with the SCSI driver built in on the off chance that it had caused this, but when I booted back in everything was fine for a few minutes and then X went mad again. I was still unable to see the USB device, and I was unable to mount my SATA drive, which is where the CD collection is stored, so I am switching back to the 2.6.5 kernel and will just reboot when it happens.
For the lowdown on the problem have a look at the following thread

iRiver H320 on Linux

I just bought two of these and decided to get them working on Linux. This is a very rough guide to getting one running; it is not a guide on how to compile a kernel. For Debian I wrote a page on Compiling a kernel for Debian that you could use as a guide, but for other systems see the Kernel Rebuild Guide.
First off, these are USB Mass Storage devices, so you need USB enabled properly in your kernel. The appropriate options I had to add to my kernel config file are as follows.

# USB support
#
CONFIG_USB=y
CONFIG_USB_DEBUG=y
# Miscellaneous USB options
#
CONFIG_USB_DEVICEFS=y
# USB Host Controller Drivers
#
CONFIG_USB_EHCI_HCD=m
CONFIG_USB_OHCI_HCD=m
CONFIG_USB_UHCI_HCD=m
# USB Device Class drivers
#
CONFIG_USB_BLUETOOTH_TTY=m
CONFIG_USB_ACM=m
CONFIG_USB_PRINTER=m
CONFIG_USB_STORAGE=y
CONFIG_USB_STORAGE_DEBUG=y
CONFIG_USB_STORAGE_DPCM=y
CONFIG_USB_STORAGE_JUMPSHOT=y
# USB Human Interface Devices (HID)
CONFIG_USB_HID=m
CONFIG_USB_HIDINPUT=y

For those that don't know what the kernel config file is: it is the file used to configure the kernel 😉 When I recompiled my kernel I used
make menuconfig
which edits the config file before you compile and install the kernel. After running "make menuconfig", go to the device drivers section; near the bottom you should see the USB support option. Select it, then select the devices you have on your machine.
To see what devices are on your machine you need to enable them in your BIOS and then you can use
lspci -v | grep HCI
to have a look at what USB controller your motherboard or PCI card is using. Mine was running a VIA controller.
To get the USB device to appear when you plug it in you need to have the hotplug scripts installed. On Debian this is a simple
apt-get install hotplug
and that's it sorted. I also added the following to my fstab file so that I can browse the device:
/dev/sda1 /mnt/usb vfat defaults,auto,user,sync 0 0
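The mount point has to exist before the entry is any use; after that, mounting is just:
mkdir -p /mnt/usb
mount /mnt/usb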
That was it. I now have the iRiver H320 on my machine and it looks like a 20GB hard drive. Now to get my CD collection converted to Oggs.

Compiling a single module for the 2.6 Kernel

This is relatively straightforward. I recently installed a new network card to play around with, and to see if I can make head or tail of the driver details, so I need to make sure I have the driver for the card.
I installed a NetGear FA311 (I had a couple of spares). The driver for this card is the natsemi driver. To see if you have the source, try the following:
]$ locate natsemi
/usr/src/kernel-source-2.6.5/drivers/net/natsemi.c
/usr/src/kernel-source-2.6.5/include/config/natsemi.h
There is no need to be the root user for any of this until you actually install the driver; I will tell you when 😉
Copy both these files to a directory of your choice. Then, in the same directory create a Makefile with the following text:
obj-m := natsemi.o

KDIR := /lib/modules/$(shell uname -r)/build
PWD := $(shell pwd)

default:
	$(MAKE) -C $(KDIR) SUBDIRS=$(PWD) modules
Save it and then execute the following command:
]$ make
Some text should whizz past detailing what it is doing. In the directory in which you ran make there should now be several new files:
natsemi.ko
natsemi.mod.c
natsemi.mod.o
natsemi.o
The one you are interested in is “natsemi.ko”. As the root user change to the directory containing the “natsemi.ko” file and run
]$ insmod natsemi.ko
If all goes well there should be no messages. To see if it loaded and to satisfy your curiosity try
]$ lsmod
natsemi 18976 0
tulip 36640 0
crc32 3840 2 natsemi,tulip
af_packet 12552 4
The above is what I have on mine.
To see if the card works (on Debian), edit your
/etc/network/interfaces
file and add the following. Note that I already have a card installed using eth0, so I have chosen eth1 for this card:
iface eth1 inet static
    address 192.168.1.10
    netmask 255.255.255.0
Then issue the commands:
]$ ifup eth1
]$ ping 192.168.1.10
and you should now have the card working.
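If you also want the interface brought up automatically at boot, add an auto line above the stanza:
auto eth1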

Xterm vs Eterm

I have been using Eterm for some time now because Enlightenment is my normal choice on Linux and I have never really needed anything else. However, I noticed that on my machine at work I was getting some odd behavior when using ALT-TAB to switch between terminals, so I decided to try xterm instead, and I have to say I am very impressed with it.
It involves a little more work to set up, but then most good things do. So far I have experienced no odd behavior and I think I might adopt xterm as my default terminal; it just seems more mature and competent than Eterm.
These are the settings I started with in .Xdefaults:
xterm*Background: black
xterm*Foreground: grey
xterm*VT100*geometry: 140x28+1+1
xterm*font: 9x15
xterm*scrollBar: False
xterm*JumpScroll: on
xterm*saveLines: 4096
To load them, use:
shell]$ xrdb .Xdefaults
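Note that plain xrdb replaces your whole resource database; if you already have other resources loaded, merging is safer:
shell]$ xrdb -merge .Xdefaults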

Tuning Perl for a Postgres-based search engine

I have a database of just over 11000 jobs and I need to run an indexer against it for the search engine to work. Just recently this has been getting slower and slower due to other things going on with the server, so I decided to have a look at it tonight. The following is what I did and what I found.
Preliminaries.
All regexes are pre-compiled prior to comparison using something like:
$dict{qr/\b\Q$keyword\E\b/} = $keyword;

Table             Rows
key_word_search   51641
rss_jobs          179 (last nine hours' worth)

Total checks == 9243739 (approx)
Methods:
0. Normal indexing with no tuning applied. This is what has been running for the last few months.
1. Use Perl's "index" function to pre-process, and only attempt a full regex match if the string appears somewhere in one of the 3 RSS entries.
2. For each job entry we are indexing, first check whether the job_id and keyword are already in the index. If they are, go to the next record.
Results.
I was not going to try and second-guess the results, but I had a feeling that M1 would be quicker. What I was surprised at is just how much quicker. I imagine each method would see an improvement if more RAM were given to Postgres, especially M0 and M2, but I doubt either of them would catch M1.
Also, the trigger that inserts the job already carries out quite a few checks to ensure the entry does not exist, so M2 duplicates that work somewhat anyway, and I am not about to relax the database checks/integrity to satisfy performance. Performance can normally be cured by some other method, as can be seen here.
Outer == total number of operations applied.
In == number left after filtering by the method.
MATCH == number we matched and will enter into the database.
The "In" filter is the check each method runs before attempting a full regex match. The original indexer had no filter.
Method 0:
Outer == 9239317 In == 9239317 MATCH == 3009
real 8m23.868s
user 8m9.510s
sys 0m0.720s
Method 1:
Outer == 9239317 In == 14546 MATCH == 3009
real 1m30.897s
user 1m25.840s
sys 0m0.520s
As you can see, using Perl's built-in "index" function I have managed to narrow the actual operations down considerably. We are relying on index being much cheaper than an actual regex match to gain the speed. I imagine it is almost literally implemented with C's
char *strstr(const char *s1, const char *s2);
or something similar.
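For the curious, this is roughly the shape of the M1 pre-filter. It is a minimal sketch, not the real indexer: the sample data and the record_match() helper are made up, and I key the hash by keyword here rather than by the compiled regex as in the snippet above, which keeps the index() test obvious:
#!/usr/bin/perl
use strict;
use warnings;

my @keywords = ('perl', 'postgres', 'linux');
my @entries  = (
    'Senior Perl developer wanted',
    'DBA role, Postgres experience a plus',
);

# One pre-compiled word-boundary regex per keyword, keyed by the raw
# keyword so the cheap substring test can run first.
my %dict = map { $_ => qr/\b\Q$_\E\b/i } @keywords;

for my $entry (@entries) {
    my $lc = lc $entry;    # index() is case-sensitive, so normalise
    while ( my ( $keyword, $re ) = each %dict ) {
        # Cheap filter: skip the regex unless the bare string is present.
        next if index( $lc, lc $keyword ) < 0;
        record_match( $entry, $keyword ) if $entry =~ $re;
    }
}

sub record_match {
    my ( $entry, $keyword ) = @_;
    print "matched '$keyword' in '$entry'\n";
}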
Method 2:
Outer == 105084 In == 99293 MATCH == 23
real 2m9.680s
user 0m16.840s
sys 0m5.090s
We can see here that this method is a lot slower. I actually stopped this one early: it had only completed just over 1% of the total operations required and had already taken 2 minutes. It was always going to be slow due to the amount of IO required, i.e. 9 million possible calls to the database, and a binary lookup on an index of just over 800k entries is not going to be that fast at the best of times.
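For reference, the existence check behind M2 was essentially a query of this shape per (job, keyword) pair; key_word_search is the real table name, but the column names are my guess:
select 1 from key_word_search
where job_id = ? and keyword = ?
limit 1;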
As an exercise, and to satisfy my own curiosity, I tried putting M1 first and running M2 after it to see what would happen; the following was the result:
Outer == 9239317 In == 14540 MATCH == 3009
real 1m42.974s
user 1m22.980s
sys 0m1.430s
We can see from this that calling out to the database is adding
overhead to the process.
Conclusion:
When running regex-intensive operations it pays to pre-process with Perl's built-in "index" rather than relying on the speed of the regex itself.

Writing Linux Device Drivers

I lost my internet connection at the weekend and was at a bit of a loss as to what I could do, so I decided to take a pop at writing a simple module for the Linux kernel. I have a copy of
Beginning Linux Programming
ISBN: 1861002971
Authors: Richard Stones and Neil Matthew
Ed: 2nd
so I turned to the back of it and started my foray into the kernel. Now, you need to remember that I am not a C programmer by trade, and turning to the back of this book was a keen reminder of just how rusty my C is getting, not that it was ever rust-free.
Luckily I have another book that is considered the C bible, i.e. K&R, and it deserves its reputation. It is a classic and I would recommend any programmer, regardless of language choice, to have a flick through it. When I was looking at some odd construct that those pointy hats had invented, a flick through K&R soon sorted it out.
Anyway, back to the kernel. I was quick to discover that writing a module for the 2.6 kernel is not quite as straightforward as copying from the book and trying to understand what is going on. Things have been changing and I was getting all sorts of weird (at least to me) and wonderful errors when trying to compile the module.
I eventually started to read the recent modules in the source for 2.6.5, which I am running on this box. I also have the source for a 2.4 kernel on here, so I opened 2 character drivers and compared notes between them. This is where I started to notice things that had changed. I made the changes I thought were necessary and I managed to get most of the "Hello World" module compiling, but I was still getting errors.
I had a hunt around and I found a reference to some new build procedures for 2.6.5 so off I went in search of kbuild documentation and found some more stuff that had changed in the kernel. Namely the build procedure. This part was actually harder than the C that I had been struggling with.
After much swearing (I hate Makefiles and adding some more sugar is a pain in the ass) I managed to get the module compiling and I was on my way.
After a day's work I now had a module that, on load, would say
“Hello World”
and on removal
“Goodbye World”
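For anyone wanting the same starting point, this is a minimal sketch of such a module in 2.6 style. It is not my exact code, just the standard shape, and it builds with a Makefile like the natsemi one above, using obj-m := hello.o:
/* hello.c: minimal 2.6-style "Hello World" module */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

MODULE_LICENSE("GPL");

static int __init hello_init(void)
{
        printk(KERN_INFO "Hello World\n");
        return 0;        /* 0 means the module loaded cleanly */
}

static void __exit hello_exit(void)
{
        printk(KERN_INFO "Goodbye World\n");
}

module_init(hello_init);
module_exit(hello_exit);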
Time well spent or not? I haven't decided yet. I wonder how often changes like this take place in the kernel and how much porting takes place because of them.
Where to go from here? I asked a few friends who know more about this stuff than I do and I got mixed advice about continuing. Some of them think the kernel is a mess because they are always changing the driver API, among other things. I cannot comment because my knowledge of the Linux kernel is limited to spelling it, and I sometimes get that wrong.
I did get some useful pointers though. The following is the best book I have found so far for someone like me who is just starting out in the kernel.
Linux Device Drivers, 2nd Edition
It is written for the 2.4 kernel but has a wealth of information that is still valid today. I have started porting the scull drivers from it to the 2.6 kernel I am running, and it is proving very interesting. I printed off chapters 2 and 3 yesterday and have almost finished them (40 mins from Luton to London on the train each way helps). So far it seems to be moving along at a fair old pace; I am just hoping I can keep up.
I could have done with the following at the weekend. This tells me what I needed to know about moving from 2.4 to 2.6. I can see myself using this a lot in the next few weeks.
Driver Porting