It depends on your OS, but on my RedHat 7.1 box (dual 400 MHz, 650 MB RAM, EIDE disks) I managed to load 10.8 million rows of 307 bytes each. With a datafile of over 3 GB, though, it took a flipping 67 seconds to fetch one single row at offset 4 million.
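That lookup was essentially an offset-based LIMIT, something like this (the table name is made up, but the shape of the query is what matters, since MySQL has to scan past the first 4 million rows to find the one you asked for):

    SELECT * FROM bigtable LIMIT 4000000, 1;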
Curiously, I decided to add an index on a char(1) column, but that just started to create an index file of over a gigabyte, so I stopped it after about 20 minutes :-)
MySQL has added InnoDB, which works with tablespaces. All tables of type InnoDB are created in that shared tablespace, and you can add chunks of space (data files) to it. These chunks can live on several hard disks, thus breaking the max filesize barrier.
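In my.cnf that looks something like the following (paths and sizes are just an example; innodb_data_home_dir is left empty so the absolute paths in the file list are used as-is):

    [mysqld]
    # spread the shared InnoDB tablespace over two disks;
    # the last file autoextends when it fills up
    innodb_data_home_dir =
    innodb_data_file_path = /disk1/ibdata1:2000M;/disk2/ibdata2:2000M:autoextend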
PostgreSQL chops up the data itself (table files are split into 1 GB segments). It also lets you create different databases in different locations (on different disks), but as far as I know it does not let you spread the data for one database across more than one hard disk.
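Putting a database on another disk looks roughly like this in the 7.x series (paths and names are made up; the alternate location normally has to be an environment variable known to the server process):

    $ export PGDATA2=/disk2/pgdata          # set in the server's environment
    $ initlocation PGDATA2
    $ psql template1 -c "CREATE DATABASE bigdb WITH LOCATION = 'PGDATA2'"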
But like I said at the top, by the time you reach gigabyte table sizes, your database performance is very Micros*ft.