Creating a new Postgres database 08 Oct 03

After the amount of moving around that I did yesterday, it would be hard to believe that nothing went wrong. Lots of things did go wrong, but I will not bore you with them; they are trivial compared to what I did today.
I started moving the files under “/usr/local/pgsql/data/base/links_database/” around to make room in various places and managed to corrupt one of them. You can imagine my panic. I quickly put everything back where it was, started Postgres, and checked whether the file I had corrupted was an index or a table. If it had been an index I could have just dropped it and recreated it, but it was the links_found table. However, because I make regular backups of everything, this gave me the chance to test them out.
Just recently I made some comments that I was going to use gzip, rather than bzip2, for my backups, because of the amount of time bzip2 takes to do anything. Trust my luck: the gzip backup I had taken recently was corrupted, giving all sorts of end-of-file errors. I need to investigate this, because I cannot afford the time it takes to use bzip2 for my backups.
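For the curious, the backup itself is nothing fancy, just pg_dump piped through the compressor, along these lines (the timestamp format matches the file name in the restore below):
]# pg_dump links | bzip2 > /links_database_`date +%d_%m_%y_%H:%M:%S`.sql.bz2
In hindsight, bzip2 -t (or gzip -t for the gzip ones) would have told me the archive was corrupt long before I actually needed it.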
I used an old backup file from a few days ago. This is how I recreated the database, for anyone who is interested.
]# psql template1
template1=# drop database links;
template1=# \q
]# createdb links
]# cat /links_database_01_10_03_15\:12\:57.sql.bz2 | bunzip2 | psql links
It was a very old backup, from 7 days ago, but the robots had not been running for most of that time, so there was little data loss. The whole operation took a couple of hours. I also tried something a bit dangerous: when the database was halfway through re-creating itself, I checked which data files had been created and had already reached the 1 GB segment limit. I then moved the oldest of these to another file system and created a soft link to it, as shown below. I know there is probably an easier way to do it than this, but because the database is bigger than any of my file systems I needed a quick and dirty method to free file space and avoid running out of room. If anyone knows a better way to do this I would like to know what it is; you know how to contact me.
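For the record, the move-and-symlink trick goes something like this. The segment name is illustrative (71509890.1 would be the second 1 GB chunk of the links_found table) and /mnt/spare stands in for whichever file system has room:
]# cd /usr/local/pgsql/data/base/71509876
]# mv 71509890.1 /mnt/spare/71509890.1      # move a full 1 GB segment elsewhere
]# ln -s /mnt/spare/71509890.1 71509890.1   # leave a soft link in its place
Picking the oldest of the full segments is presumably the least risky choice, since Postgres should only be appending to the newest one.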
Just when I thought everything was OK, I got the following error.
links=# select now (), count(*) from links_found;
ERROR: cannot read block 477160 of links_found: Input/output error
links=# select relname, relfilenode, relpages from pg_class order by relpages desc;
            relname             | relfilenode | relpages
--------------------------------+-------------+----------
 links_found                    |    71509890 |   456987
 links_found_pkey               |   112850939 |   418056
 lf_found_url_idx               |   112850933 |   397954
 home_page                      |    71509882 |    90268
 home_page_pkey                 |   112850935 |    77280
 home_page_url_key              |   112850937 |    74141
 hp_url_id_index                |   112850934 |    11990
 pg_proc_proname_args_nsp_index |       16640 |      125
 pg_proc                        |        1255 |       58
 pg_depend                      |       16598 |       20
I knew this could mean a problem on my file system; I was having visions of one of my disks now being completely screwed. Using the relfilenode above, I tracked the table down to its files on disk and worked out which file system they were on.
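For anyone wanting to do the same, mapping a relfilenode back to its files goes roughly like this (the suffixes after the first file just number the 1 GB segments):
]# ls -l /usr/local/pgsql/data/base/71509876/71509890*
]# df /usr/local/pgsql/data/base/71509876/71509890.3
df run against a segment reports the device holding it, /dev/sda5 in my case, so: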
[root@harry 71509876]# /etc/init.d/postgres stop
Stopping PostgreSQL: ok
[root@harry 71509876]# cd
[root@harry root]# umount /dev/sda5
[root@harry root]# e2fsck -c /dev/sda5
e2fsck 1.26 (3-Feb-2002)
Checking for bad blocks (read-only test): done
Pass 1: Checking inodes, blocks, and sizes
Duplicate blocks found... invoking duplicate block passes.
Pass 1B: Rescan for duplicate/bad blocks
Duplicate/bad block(s) in inode 766: 206483
Pass 1C: Scan directories for inodes with dup blocks.
Pass 1D: Reconciling duplicate blocks
(There are 1 inodes containing duplicate/bad blocks.)
File /data/base/71509876/71509890.3 (inode #766, mod time Wed Oct 8 13:16:55 2003)
has 1 duplicate block(s), shared with 1 file(s):
[root@harry 71509876]#
We can see straight away that the problem is with my links_found table again. To fix this I ran e2fsck with the “-f” option and chose the defaults when asked questions. I ran it a second time to make sure that the defaults were not causing any trouble, and the database is now back in business.
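For completeness, the repair and restart sequence looks roughly like this (a reconstruction rather than a paste of the actual session):
[root@harry root]# e2fsck -f /dev/sda5      # accept the suggested fixes at each prompt
[root@harry root]# e2fsck -f /dev/sda5      # the second run should come up clean
[root@harry root]# mount /dev/sda5
[root@harry root]# /etc/init.d/postgres start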
40 Million links found
6 Million unique links found
