csn, you knew I'd answer this, didn't you? ;)
Can PostgreSQL "corrupt" tables? Yes and no. There have been bugs in the past that would leave a table "wedged": one tuple in the table would do something like kill any backend trying to access it, or you couldn't vacuum it anymore. I was actually bitten by an obscure vacuum bug myself sometime last summer.
What happened was that IF you vacuumed as anyone but the postgres superuser, you could get a table wedged to the point where every vacuum after that would fail.
Of course, a simple dump / restore of the table fixed that.
Vacuum, by the way, is how PostgreSQL reclaims space lost to the operation of its MVCC locking mechanism. MVCC is explained in the documentation, but basically what it means is that any tuple / row can have more than one version in existence at once, and which one you see is determined by when your transaction started. Since the system can never be sure, at write time, which old versions are still visible to somebody and which have expired, it doesn't delete the old ones right away.
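To make that concrete, here's a sketch of how an UPDATE under MVCC leaves the old row version behind. The table and values are made up for illustration; the second SELECT's behavior assumes a serializable transaction, where your snapshot is pinned to transaction start:

```sql
-- Hypothetical table just for illustration.
CREATE TABLE accounts (id int, balance numeric);
INSERT INTO accounts VALUES (1, 100);

-- Session A:
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT balance FROM accounts WHERE id = 1;   -- sees 100

-- Session B meanwhile runs and commits:
--   UPDATE accounts SET balance = 50 WHERE id = 1;
-- The update writes a NEW row version; the old one (balance = 100)
-- stays on disk because session A's snapshot can still see it.

SELECT balance FROM accounts WHERE id = 1;   -- A still sees 100
COMMIT;
-- Now the old version is dead (invisible to everyone), but the
-- space it occupies isn't reclaimed until vacuum comes along.
```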
That's vacuum's job. Now, back in the day (i.e. a year or so ago 😉) vacuum worked by locking the whole table, and you had to wait for it to finish. Nowadays the lock is only held for a split second while the vacuum process grabs the requisite info from the table; after that the table is vacuumed with the same level of concurrency enjoyed by other backend operations like selects or inserts. Further, there's now a non-full vacuum which simply marks the dead tuple space as empty and available without recompacting the table. This is useful on heavily updated tables, as you can just run a vacuum in the background continuously and PostgreSQL will reuse the space as it's freed.
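In practice the two flavors look like this from psql (the table name is a placeholder; the FULL form is the old table-locking behavior):

```sql
-- Plain (lazy) vacuum: marks dead tuple space as reusable without
-- rewriting the table; runs alongside normal selects and inserts.
VACUUM accounts;

-- Full vacuum: recompacts the table to give space back to the OS,
-- at the cost of heavier locking. Save it for off-hours.
VACUUM FULL accounts;

-- Either can also refresh the planner's statistics while it's at it:
VACUUM ANALYZE accounts;
```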
Anyway, there IS a program something like myisamchk, but it is not included in the general distro for two reasons. One is that it's a far more dangerous tool than myisamchk, as it was meant to be driven by the hackers working on the backend code and whatnot. The other reason it's not included is that table corruption, in production systems on stable hardware, is virtually unknown, and on the rare occasion where it does happen, you're almost always better off either restoring from backup or doing a "select * into newtable from oldtable" kinda thing than trying to "fix" your database.
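That rebuild trick is just this (all names are placeholders): it rewrites every surviving row into a fresh table, leaving any damaged pages behind.

```sql
-- Copy all readable rows into a brand-new table.
SELECT * INTO newtable FROM oldtable;

-- Sanity-check the copy, then swap it in.
SELECT count(*) FROM newtable;
ALTER TABLE oldtable RENAME TO oldtable_broken;
ALTER TABLE newtable RENAME TO oldtable;

-- Note: SELECT INTO copies only the data, so you'll need to
-- recreate indexes, constraints, and triggers afterwards.
```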
About once every two months someone shows up on the postgresql hackers or general list asking why they're getting sig 11s, dying backends, and corrupted indexes. EVERY TIME we've seen this in the last year or so, even when the person asking adamantly maintained that their hardware was perfect, memtest86 has found bad memory.
My personal experience with Postgresql has been this:
Version 6.5.3: You could crash the whole thing with the right kind of query (unconstrained joins and stuff like that). This would take down the whole database, which meant you had to restart it. No data already in the database was lost, although data in flight at the time might have disappeared. Pretty slow (OK, it was downright POKEY back then). It already had the MVCC mechanism in it, and could handle parallel load quite well.
Version 7.0.x: The first version I used that didn't feel slow all the time. A great deal of improvement in the query planner and whatnot. In this version, no matter how ugly the query, I could only crash the backend running it, not the "postmaster", which is the daemon in charge of things. About 10 times faster than 6.5.3 had been.
Version 7.1.x: Introduced write-ahead logging, which gave PostgreSQL the last letter in ACID, the D for durability. Write-ahead logging ensures that every transaction is either wholly committed or wholly rolled back, even should you lose all power during 1000 transactions all occurring at the same time. Views could now contain unions. About twice as fast as 7.0.
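The all-or-nothing behavior WAL protects looks like this at the SQL level (table and values made up; picture a classic funds transfer):

```sql
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;

-- If the power dies anywhere before the COMMIT record hits the log,
-- crash recovery replays the WAL and neither update survives; once
-- the COMMIT is on disk, both do. You never see half a transfer.
```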
Version 7.2.x: MAJOR rewrite of the query planner's statistical analysis engine. Queries that used to run in minutes drop to seconds. Performance was about 10 times better than 7.1.
Version 7.3.x: Introduced schemas, which let you have multiple namespaces in the same database (i.e. george.table.field and scott.table.field in the same database), and even more performance tweaking made this version about 2 to 3 times faster.
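The schema feature in action, using the george/scott names from above (the table and column are invented for the example):

```sql
-- Two namespaces in one database, each with its own "notes" table.
CREATE SCHEMA george;
CREATE SCHEMA scott;

CREATE TABLE george.notes (field text);
CREATE TABLE scott.notes  (field text);

-- Fully qualified names keep them apart:
SELECT field FROM george.notes;
SELECT field FROM scott.notes;

-- search_path controls which one an unqualified name resolves to:
SET search_path TO scott;
SELECT field FROM notes;   -- this is scott.notes
```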