PostgreSQL has transactions, stored procedures, triggers, rules, subselects, and functional indexes, and is BSD licensed. It has nearly complete SQL92/SQL99 compliance and is built to handle hundreds of live connections updating at the same time. It requires a bit more maintenance (cron jobs to vacuum and analyze are a good idea), and it will likely take a little more time to learn, as it is larger and more complex. If you are a "database" type of person, you will likely prefer it.
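To give a concrete idea of that routine maintenance, a nightly crontab entry along these lines keeps tables compacted and the planner's statistics fresh. This is a sketch, not a recommendation: the path, schedule, and user are assumptions you'd adapt to your own installation.

```shell
# Hypothetical crontab entry for the postgres user: run vacuumdb nightly at 3:00.
# --all hits every database; --analyze refreshes the optimizer's statistics.
0 3 * * * /usr/bin/vacuumdb --all --analyze --quiet
```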
MySQL is fast, especially with a small number of connections. It has bare-minimum support for transactions and features like subselects, and it is not as fast when you use them (i.e. simple queries run faster than in PostgreSQL, complex ones run slower). In my opinion, its adherence to the SQL standard is sloppy.
An example is the way it handles numerics. SQL92 states that if you multiply two numerics with s1 and s2 digits of accuracy to the right of the decimal point, the product should have s1+s2 digits of accuracy. MySQL doesn't do this; the product gets only max(s1, s2) digits of accuracy:
MySQL:
mysql> create table test (n1 numeric(8,2), n2 numeric(8,3));
Query OK, 0 rows affected (0.10 sec)
mysql> insert into test values (12.12,13.145);
Query OK, 1 row affected (0.00 sec)
mysql> select n1*n2 from test;
+---------+
| n1*n2   |
+---------+
| 159.317 |
+---------+
1 row in set (0.00 sec)
PostgreSQL:
dbname=# create table test (n1 numeric(8,2), n2 numeric(8,3));
CREATE TABLE
dbname=# insert into test values (12.12,13.145);
INSERT 1017506 1
dbname=# select n1*n2 from test;
 ?column?
-----------
 159.31740
(1 row)
This kind of lack of attention to detail is why MySQL is not likely to see use in financial institutions, but it is a great choice for simple web-based work where losing a few digits of precision means nothing.
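For what it's worth, you can check the expected product outside either database. Python's decimal module follows the same scale rule SQL92 specifies (the scale of a product is the sum of the operands' scales), so this quick sketch reproduces the PostgreSQL answer:

```python
# Exact decimal arithmetic: a scale-2 value times a scale-3 value
# yields a scale 2+3 = 5 result, matching SQL92 (and PostgreSQL).
from decimal import Decimal

n1 = Decimal("12.12")   # stored as numeric(8,2) above
n2 = Decimal("13.145")  # stored as numeric(8,3) above
print(n1 * n2)          # 159.31740 -- five digits after the point
# MySQL's 159.317 keeps only max(2, 3) = 3 digits, silently dropping the rest.
```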