As long as the column in your ORDER BY is indexed, yes. Otherwise what's really going to cost you is the sort you force when you stick an ORDER BY on the end.
The LIMIT 10 or LIMIT 20 keeps the database from having to do much work at all.
You want to keep the result set as small as possible and transfer as little data as possible, and indexing plus limiting does exactly that.
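To make that concrete, here's the shape of the thing (the table and column names, orders and created_at, are made up for illustration):

  -- index the column you sort on
  CREATE INDEX orders_created_idx ON orders (created_at);

  -- the index already delivers rows in ORDER BY order, so the
  -- LIMIT lets the scan stop after reading only 10 rows
  SELECT * FROM orders ORDER BY created_at LIMIT 10;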
Here's some info:
I built a 100,000-row table with one word (text) column and one int column.
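If you want to reproduce it, something along these lines will do on a modern PostgreSQL (this is a sketch, not the exact script I used; the UNIQUE constraint is what gives the index its test_id_key name, and test2 is the same data minus the index):

  CREATE TABLE test (word text, id int UNIQUE);
  INSERT INTO test (word, id)
      SELECT 'word' || i, i FROM generate_series(1, 100000) AS i;
  ANALYZE test;

  -- same data, no index
  CREATE TABLE test2 AS SELECT * FROM test;
  ANALYZE test2;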
Here's the output from a quick test:
postgres=# explain select * from test order by id limit 10;
NOTICE:  QUERY PLAN:

Limit  (cost=0.00..0.59 rows=10 width=28)
  ->  Index Scan using test_id_key on test  (cost=0.00..59.00 rows=1000 width=28)
Note the two numbers in each cost estimate: the first is the startup cost, the second the total cost. The Limit node tops out at 0.59 because it only has to pull the first 10 rows out of the index scan, whose full cost would be 59.00 if it ran to completion. This query runs in under a second.
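(On any reasonably recent PostgreSQL you can also ask for actual runtimes instead of just the planner's estimates:

  EXPLAIN ANALYZE SELECT * FROM test ORDER BY id LIMIT 10;

The output above only shows estimated costs.)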
Without an index, we get:
postgres=# explain select * from test2 order by id limit 10;
NOTICE:  QUERY PLAN:

Limit  (cost=11057.75..11057.75 rows=10 width=16)
  ->  Sort  (cost=11057.75..11057.75 rows=100000 width=16)
        ->  Seq Scan on test2  (cost=0.00..1658.00 rows=100000 width=16)
Now the sort is what hurts: the seq scan has to read all 100,000 rows, and the sort has to finish completely before the Limit can return even its first row, which is why the Limit's startup cost equals the Sort's total cost. This query takes about 8 seconds on my machine.
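So if you're stuck with the slow plan, the fix is just to index the column in the ORDER BY (the index name here is whatever you like):

  CREATE INDEX test2_id_idx ON test2 (id);
  ANALYZE test2;

Re-run the EXPLAIN after that and you should get the cheap index-scan-plus-limit plan from the first example.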