If it's unsorted, you should be able to file_get_contents, then explode on PHP_EOL to get one line per array element.
After that, you can sort it with a divide and conquer algorithm, and then your searches run in O(log n). Personally I'd overwrite the original file with the sorted data (actually: create a new file with this data, unlink the old one, rename the new one. Just remember rename isn't necessarily an atomic operation on all systems).
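A minimal sketch of the above, assuming one record per line (the `id|name` format and the sample names here are made up; adjust to whatever your file actually contains):

```php
<?php
// Sample data stands in for your real file.
file_put_contents('data.txt', implode(PHP_EOL, ['3|Jane Roe', '1|John Doe', '2|Ann Smith']));

$lines = explode(PHP_EOL, trim(file_get_contents('data.txt')));
sort($lines); // PHP's built-in sort

// Swap the sorted copy in: write a new file, unlink the old one, rename.
// (rename() may not be atomic on every filesystem.)
file_put_contents('data.txt.tmp', implode(PHP_EOL, $lines));
unlink('data.txt');
rename('data.txt.tmp', 'data.txt');

// O(log n) binary search over the sorted array; returns the index or -1.
function binarySearch(array $lines, string $needle): int {
    $lo = 0;
    $hi = count($lines) - 1;
    while ($lo <= $hi) {
        $mid = intdiv($lo + $hi, 2);
        $cmp = strcmp($lines[$mid], $needle);
        if ($cmp === 0) {
            return $mid;
        }
        if ($cmp < 0) {
            $lo = $mid + 1;
        } else {
            $hi = $mid - 1;
        }
    }
    return -1; // not found
}
```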
Still, unless you know where each line in the file starts, you will find it more problematic to create a "more or less binary" search directly targeting the file. Reading the whole file into an array each time you search it, even if the search itself runs in O(log n), seems like a waste of resources.
One option is to create an index file to keep track of file position indices for each id.
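One way that index file could look, as a rough sketch (again assuming `id|name` lines; the filenames are arbitrary). It records the byte offset where each line starts, so a later lookup is one fseek instead of re-reading everything:

```php
<?php
// Sample data stands in for your real file.
file_put_contents('data.txt', "1|John Doe\n2|Ann Smith\n3|Jane Roe\n");

// Build an id => byte-offset map by noting ftell() before each line.
$index = [];
$fh = fopen('data.txt', 'rb');
while (($pos = ftell($fh)) !== false && ($line = fgets($fh)) !== false) {
    [$id] = explode('|', $line, 2);
    $index[$id] = $pos;
}
file_put_contents('data.idx', serialize($index)); // persist the index

// Later: load the index and jump straight to one record.
$index = unserialize(file_get_contents('data.idx'));
fseek($fh, $index['2']);
$record = rtrim(fgets($fh));
fclose($fh);
```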
But why is this data in a file instead of in a database table? That technology was created specifically with indexing and minimizing hard drive block accesses in mind.
Also, if you create this file yourself, and use it to present links to people, why search the file at all?
<a href="somepage.php?id=15">John Doe</a>
Just use the id as a GET (or POST) variable:
"SELECT whatever from yourtable WHERE id = " . intval($_GET['id'])
Should this for some reason not be an option, create an index in your table on the name fields and use the name in the WHERE clause.
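Sketch of that name-based lookup with a prepared statement, which also spares you the manual escaping. An in-memory SQLite database stands in for your real one here, and the table/column names are just carried over from the snippet above:

```php
<?php
// Stand-in database; in reality you'd connect to your own DSN.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE yourtable (id INTEGER PRIMARY KEY, name TEXT)');
$pdo->exec('CREATE INDEX idx_name ON yourtable (name)'); // index the name field
$pdo->exec("INSERT INTO yourtable (id, name) VALUES (15, 'John Doe')");

// Parameterized query: the driver handles quoting, the index keeps it fast.
$stmt = $pdo->prepare('SELECT id FROM yourtable WHERE name = :name');
$stmt->execute([':name' => 'John Doe']); // in practice: $_GET['name']
$id = $stmt->fetchColumn();
```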
And what does this have to do with different browsers? Are you actually saying that you want to provide a search function to find a specific id on the page: as in, for the user to find it on the page, not related to finding anything on the server? In that case, there are built-in search functions in all browsers for starters. Besides, if you do want to provide it yourself, going over 8000 lines of text is quick for the browser anyway and will put no stress on your server, so why worry about it? Your users won't...