PostgreSQL is probably following MySQL's lead here in order to make it easier for users to switch. By contrast, most vendors deviate from the standard only in the 'relatively small things' -- MySQL does not.
Think outside the MySQL box and it is fairly obvious why MySQL is 'loose' with conversions. Since you cannot commit or roll back transactions (for the sake of typing I am going to omit InnoDB tables; they did not exist when this convention was established), a single failure in a bulk insert (INSERT INTO ... SELECT), update, etc. would leave your database in an inconsistent state. So they took the position that anything that would otherwise cause an error (out of bounds, wrong type, etc.) would be silently rounded, truncated, and so on.
As an example: UPDATE table SET price = price * 2
It is not inconceivable that a price will go out of bounds. On any other DBMS the statement would fail the bounds check, roll back, and report an error (not necessarily in that order). Your database is exactly the same as it was before.
If MySQL took the above approach (minus the rollback) you would be left with an inconsistent database -- who knows which rows were updated and which weren't. Instead, MySQL silently sets the value to the maximum of the column type and continues on. Is a 127 an out-of-bounds round-down or a real value? You can't tell! In some ways I think this is worse, because it does not even raise a warning letting you know which rows broke.
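To make the clamping behavior concrete, here is a minimal Python sketch (not MySQL itself) of what a non-strict engine does with that UPDATE: out-of-range results are silently pinned to the column type's maximum instead of raising an error. The signed TINYINT range of -128..127 is used as the example column type.

```python
# Illustrative only: simulate non-strict clamping for a signed TINYINT column.
TINYINT_MIN, TINYINT_MAX = -128, 127

def clamp_tinyint(value):
    """Silently clamp a value into TINYINT range instead of erroring."""
    return max(TINYINT_MIN, min(TINYINT_MAX, value))

# UPDATE table SET price = price * 2, applied row by row:
prices = [10, 60, 100]
doubled = [clamp_tinyint(p * 2) for p in prices]
print(doubled)  # 100 * 2 = 200 is silently clamped to 127
```

After the update, the 127 in the result is indistinguishable from a legitimate price of 127, which is exactly the ambiguity described above.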
Case in point: the users who use auto-increment on small datatypes and keep wondering why they have 100 products with the same ID.
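A hypothetical sketch of that failure mode: once the counter reaches the column maximum (127 for a signed TINYINT), a non-strict engine clamps every subsequent generated ID to that same maximum rather than raising an out-of-range error, so "unique" IDs start repeating.

```python
# Illustrative only: an auto-increment counter saturating at TINYINT max.
TINYINT_MAX = 127

def next_auto_increment(last_id):
    # Non-strict behavior: clamp instead of raising an out-of-range error.
    return min(last_id + 1, TINYINT_MAX)

ids = []
last = 125
for _ in range(5):
    last = next_auto_increment(last)
    ids.append(last)
print(ids)  # [126, 127, 127, 127, 127] -- duplicate "unique" IDs
```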
The short of it is that there is no logical or compelling reason to deviate from the standard in this manner. All it does is reinforce bad coding habits and make your code extraordinarily non-portable.