PostgreSQL Database Recovery

If you have found this page then you probably have a serious problem with a corrupt Postgres database. If, like me, you found that your database seems to be missing, then you have come to the correct place. I am not saying this will cure your problem, or even come close, but it might give you a few ideas on where to look next.
If you are going to follow this article I strongly suggest you read it ALL before starting to do anything. I have written it as I did it, and I went down some blind alleys before I recovered the data. If you start doing what I have done as you read it you might waste a lot of time or fsck something up, so read it first and decide what you need to do. In some situations it might be a single simple command that needs to be run and you will be sorted; others may have to do things a bit differently.
I would also highly recommend getting onto the Postgres mailing lists and asking some pertinent questions. If you do post questions, make sure you give as much detail as possible, e.g. version numbers and full debug-level log output. There are people on there who have done this a lot more than I have, and I am sure they have seen nastier cases than the one I've got.
This article does not cover how to fix file system errors; see fsck or e2fsck for that if you have them. You might also want to investigate setting
zero_damaged_pages = true
in your postgresql.conf file if you are expecting corruption in your files. Ask on the postgresql mailing lists about this before doing it, though.
My problems started as follows:
postgres@debian:~$ psql links
Welcome to psql 7.4.3, the PostgreSQL interactive terminal.
Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help on internal slash commands
       \g or terminate with semicolon to execute query
       \q to quit
links=# \dt
No relations found.
links=# select relname from pg_class where relname not like 'pg_%';

None of my tables were present in the output. This is where I had a sudden urge to go to the toilet.
I don't often use the links database, but every now and then I start a set of spiders that use it to traverse the internet. I have been doing this for about 2 years, and the database is huge, at least for a home machine.
I only have a backup from several months ago (lesson to be learned here) and I really didn't want to lose the data I had collected over the last few months.
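As an aside, regular dumps are cheap insurance against exactly this. A minimal sketch of a nightly cron job (the /backup path and the schedule are just examples, and note that % must be escaped inside a crontab):

# crontab -e as the postgres user: dump the links database at 02:00 every night
0 2 * * * pg_dump links | gzip > /backup/links_$(date +\%Y\%m\%d).sql.gz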
On hunting around the internet I noticed that a lot of people have had corrupt postgres databases and managed to recover them with varying degrees of success. Most of these corruptions seemed to be hardware related, and some were down to bodged upgrades. Mine could have been one of two things:
1. A glitch on the 160Gb SATA drive the database is stored on. This happened the other night.
2. Recent Debian upgrade to the database.
At this point, figuring out what went wrong was less important than getting the data back, so I decided not to go on a witch hunt and to concentrate on curing it instead.
This was the point where I asked myself which was more important:
1. Recover as much of the data as possible.
2. Data Integrity.
For me the choice was quite simple. I wanted the data, and I needed to be able to retrieve the vast majority of it; otherwise I might as well just use the most recent dump, which would guarantee the integrity of the database but would set me back a few months.
The first thing I did was stop the postgres server:
/etc/init.d/postgresql stop
I then took a full copy of the "data" directory; this is the one you originally set up with the "initdb" command.
Once the backup has been made, make sure that nothing happens to the original directory; don't do anything to it at all, because we may need it later. All subsequent actions will use the copy of the database, not the original.
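A minimal sketch of that copy, assuming the cluster lives in /var/lib/postgres/data as in my case (cp -a preserves ownership and permissions, which postgres is fussy about):

cp -a /var/lib/postgres/data /var/lib/postgres/data_copy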
At this point it might be an idea to do a little data gathering. For me, I needed to know which table was the largest. Doing
ls -la /var/lib/postgres/data/base/17142/
will list all the files in the directory. I was pretty sure that the biggest table was going to be either the "child_links" or the "home_page" table, and in my case it was easy to see which files were the largest.

I also turned on full logging on all the postgres databases by editing the postgresql.conf file. Be aware that any further cluster created with "initdb" gets its own config files in its data directory, and these will need to be edited too; I suggest copying a single common one over them.

Another thing you will need to know is the last transaction ID, or at least as close a value as possible. When you start or stop the postgres database it writes an entry to the log file, and this contains a TID. I used grep to find mine, e.g.
grep "next transaction ID" /var/log/syslog
This produced a list of TIDs. (I log to syslog; you might not. Check postgresql.conf to find out.)
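For reference, the logging settings live in postgresql.conf; a sketch using 7.4-era parameter names (these have been renamed in later versions, so check the documentation for yours):

syslog = 2                  # send server output to syslog only
log_min_messages = debug2   # crank up the verbosity
log_statement = true        # log every SQL statement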
The next thing I did was create a new database away from both the copy and the original databases. I did this using the initdb command as follows:
initdb /var/lib/postgres/recovery_database
This creates a skeleton database data directory ready for action. Make sure no postmaster instances are running. I then started my new database as follows:
/path/to/postmaster -D /var/lib/postgres/recovery_database
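If there is any chance of the original server coming back up on the default port, the recovery instance can be kept out of its way by running it on another port; a sketch assuming 5433 is free (every later psql, createdb and pg_dump against this instance would then need -p 5433 too):

/path/to/postmaster -p 5433 -D /var/lib/postgres/recovery_database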
This database as it stands is not really much use, so:
createdb links
I then fished out the create table script for my database (you may not need this) and created the empty tables using it. I then had all my original tables with no data in them.
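If you have a DDL script saved, loading it into the new database is just a matter of (create_tables.sql being whatever your script happens to be called):

psql links -f create_tables.sql

Purely for illustration, a hypothetical fragment of such a script; your real schema will differ:

CREATE TABLE home_page (
    id  integer PRIMARY KEY,
    url text NOT NULL
);

CREATE TABLE child_links (
    home_page_id integer REFERENCES home_page (id),
    link         text
);

With the empty tables in place, the next thing I did was find out where each table lives on disk: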
links=# select relfilenode from pg_class where relname = 'child_links';
 relfilenode
-------------
       17160
(1 row)
This gave me the name of the file on disk where the table data was. (Postgres splits tables larger than 1GB into 1GB segment files named after the relfilenode, hence the .1 and .2 files below.) I stopped the database and then:
cp /copy/base/17142/172345 /path/recovery_database/base/17142/17160
cp /copy/base/17142/172345.1 /path/recovery_database/base/17142/17160.1
cp /copy/base/17142/172345.2 /path/recovery_database/base/17142/17160.2
I know I could have just soft-linked them, but I was being cautious. If I take copies, the originals are safe from harm if I make a cock-up.
Restart the recovery database again using the following (please read the man page before using pg_resetxlog). I used 90812030 here because it was the largest transaction ID I could find in the logs.
pg_resetxlog -x 90812030 /var/lib/postgres/recovery_database
/path/to/postmaster -D /var/lib/postgres/recovery_database
I then used
postgres@debian:~$ psql links
links=# select count(*) from child_links;
  count
----------
 16341924
(1 row)
I immediately recognized that this count was wrong; 16 million rows looked more like the home_page table to me. I decided to try
links=# select * from child_links;
……..ERROR………
This threw an error straight away, which was a good indication that although it could count rows, the underlying data did not match the table's structure at all. I then ran
links=# select relfilenode from pg_class where relname = 'home_page';
 relfilenode
-------------
       17152
(1 row)
I stopped the database again, and now that I had the filename of the recovery database's home_page table I was able to soft-link the files I copied earlier to it as follows:
cd /var/lib/postgres/recovery_database/base/17142
ln -s 17160 17152
ln -s 17160.1 17152.1
ln -s 17160.2 17152.2
This was a bodge to save me the time of copying the files over from the copy again. If I corrupt these files the real copies are safe anyway, so a soft link is a quick way to see if this would work.
I restarted the recovery database:
/path/to/postmaster -D /var/lib/postgres/recovery_database
I then used
postgres@debian:~$ psql links
links=# select count(*) from home_page;
  count
----------
 16341924
(1 row)
links=# select * from home_page limit 10;

This returned 10 rows of data, which is all well and good.
links=# select * from home_page;

This at first seemed to be doing something, but it eventually failed with an error similar to the following:
FATAL: XX000: xlog flush request 29/5BEF8A58 is not satisfied --- flushed only to 0/62000050
This meant I had to start looking for what would have caused it. By Googling I managed to find some info on this in a thread on one of the postgres mailing lists.
From what I could tell I needed to use pg_resetxlog again, but this time I had to give it some info about the WAL position. Following the instructions in the man page, I stopped the database and issued:
:~$ pg_resetxlog -x 90812030 -l 0x58,0x25 /var/lib/postgres/recovery_database
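The man page explains how to pick the value for -l: it needs to be beyond the largest WAL segment file name currently in the cluster's pg_xlog directory. A hypothetical illustration (in 7.4 the file name is 16 hex digits, the first 8 being the log file ID and the last 8 the segment number):

ls /var/lib/postgres/data_copy/pg_xlog/
# if the largest file were 0000005800000025, that would correspond to -l 0x58,0x25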
I then restarted the server and
postgres@debian:~$ psql links
links=# select * from home_page;

This time the query ran to completion. I then used pg_dump to dump this table out to a backup file. The prospect of doing the above for each table made me cringe, so I decided to risk trying something on the original database. I do not recommend taking this shortcut if you are really worried about your data; I have the luxury in my case that the data is neither critical nor particularly important. If you value your data, don't do this to the original repository; do it somewhere else first.
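For reference, dumping a single table looks like this (-t restricts pg_dump to the named table):

pg_dump -t home_page links | gzip > home_page_dump.gz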
I shut down the recovery instance. I then ran
:~$ pg_resetxlog -x 90812030 -l 0x58,0x25 /var/lib/postgres/data
against the original database. I restarted the database using
/etc/init.d/postgresql start
I logged into the database and was now able to see all the tables. I ran a few select statements and everything looked fine. I then logged out and ran
pg_dump links | gzip > links_dump.gz
This dumped the entire database out.
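Before relying on that dump it is worth a quick sanity check, e.g.:

gzip -t links_dump.gz          # verify the archive is not truncated or corrupt
zcat links_dump.gz | head      # eyeball the start of the SQL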
I then created another directory for a new database as follows.
initdb /var/lib/postgres/new_database
Remember to edit the new config files, i.e. postgresql.conf. You might want a higher setting for checkpoint_segments; I used 15. I then started the new cluster, created an empty links database, and ran:
cat links_dump.gz | gunzip | psql links
This finished with the following errors.
ERROR: there is no unique constraint matching given keys for referenced table “home_page”
ERROR: insert or update on table “rev_index” violates foreign key constraint “lexicon_id_fk”
DETAIL: Key (lexicon_id)=(22342) is not present in table “lexicon”.
ERROR: there is no unique constraint matching given keys for referenced table “home_page”
So basically some constraints were broken. I also had errors indicating that a few of the unique indexes and primary keys could not be created. This did not concern me too much, since it is easily remedied. Everything after this point is simple administration, i.e. dropping and recreating the odd table or index and fixing the broken constraints. I also had to check the integrity of the data, which is simple enough.
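As an illustration of that clean-up, here is a sketch for the lexicon_id_fk error above. The id column on lexicon is my assumption, so adapt the column names to your schema:

-- count (then remove) rev_index rows whose lexicon_id no longer exists in lexicon
SELECT count(*) FROM rev_index
WHERE NOT EXISTS (SELECT 1 FROM lexicon l WHERE l.id = rev_index.lexicon_id);

DELETE FROM rev_index
WHERE NOT EXISTS (SELECT 1 FROM lexicon l WHERE l.id = rev_index.lexicon_id);

-- then put the constraint back
ALTER TABLE rev_index ADD CONSTRAINT lexicon_id_fk
FOREIGN KEY (lexicon_id) REFERENCES lexicon (id);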
That was it for me: a fully recovered PostgreSQL database.
