That really depends. For instance, in PostgreSQL, the planner decides whether or not to use an index based on estimated costs. If you're retrieving 0.0001% of the table, an index is almost always a good idea. If you're retrieving 50% of it, an index just gets in the way.
Let's say your average record size uses 120 bytes on disk.
OK, "Your average record size uses 120 bytes on disk"
Sorry, too much Police Squad...
Anyway, let's also say that your database uses a block size of 8k. In that case, a single block can hold 68 records. This means that accessing just 1.5 percent of your data at random will hit every single block. Since sequential scans are faster than random scans, usually by a factor of about 2, any query that hits more than ~0.7% of that table would be better served by a sequential scan.
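Here's that arithmetic worked out as a quick sketch (a back-of-envelope model using the numbers above, not the planner's actual cost formula, which involves tunables like `seq_page_cost` and `random_page_cost`):

```python
# Back-of-envelope estimate of when a seq scan beats an index scan.
# All numbers are the illustrative ones from the text, not real planner math.
BLOCK_SIZE = 8192       # PostgreSQL's default block size (8k)
RECORD_SIZE = 120       # assumed average record size on disk
SEQ_SPEEDUP = 2         # sequential reads assumed ~2x faster than random

records_per_block = BLOCK_SIZE // RECORD_SIZE        # 68 records fit per block

# Picking rows at random, once you select about one row per block,
# you end up touching every block anyway:
all_blocks_fraction = 1 / records_per_block          # ~1.5% of rows

# Since sequential I/O is ~2x cheaper, the break-even selectivity
# is roughly half of that:
break_even = all_blocks_fraction / SEQ_SPEEDUP       # ~0.7% of rows

print(f"{records_per_block} records per block")
print(f"random access hits every block at ~{all_blocks_fraction:.1%} selectivity")
print(f"seq scan wins above ~{break_even:.1%} selectivity")
```

So under these assumptions, an index only pays off for queries that are more selective than roughly 0.7% of the table.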
Also, indexes serve other purposes, like enforcing uniqueness and primary keys.