One thing I notice is that your regexes differ: the first won't match any line that ends with a "\n" (which is probably every line). Note also that Perl's '$' does match just before a trailing newline, and PHP's PCRE '$' behaves the same way unless the D modifier is given; what neither language does is strip the newline when reading a line — Perl's readline keeps it, and so does PHP's fgets().
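To pin down the '$' behaviour (using '123456' as a stand-in pattern): PCRE treats '$' the way Perl does, matching just before a final newline unless the D ("dollar endonly") modifier is set:

```php
<?php
// By default PCRE's '$' matches just before a final "\n", like Perl's.
var_dump(preg_match('/123456$/', "123456\n"));  // int(1)
// The D modifier anchors '$' at the absolute end of the subject string.
var_dump(preg_match('/123456$/D', "123456\n")); // int(0)
```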
You also seem to assume in the Perl code that you're reading a text file that uses the system's newline convention, while in PHP you read the file as binary and handle text line endings manually.
I'm not sure about the stream_set_write_buffer() call; do you have multiple processes writing to the same output file?
Another thing: you read the input as binary and then fiddle with the line endings by hand; are you unsure whether your data uses Windows or Unix line endings? Your Perl code assumes they match the system's convention (you're not adjusting $/). Reading the input as text and using \b to match the end of the alphanumeric string, instead of "\r?$" (or "$"), would sidestep the issue entirely.
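A quick sketch of the \b approach, with the same '123456' → 'azerty' substitution; it works unchanged for both line-ending styles:

```php
<?php
// \b matches the word boundary after the digits whether the line ends in
// "\n" or "\r\n", so there's no need to strip "\r" by hand first.
foreach (["foo 123456\n", "foo 123456\r\n"] as $line) {
    echo preg_replace('/123456\b/', 'azerty', $line); // prints "foo azerty" both times
}
```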
I don't have Perl installed, so I can't do a side-by-side comparison on my machine. However, your PHP code runs in 227s and a command-line sed substitution takes 80s.
On the other hand, I tried having PHP do the substitution one 32 MB block at a time, which took about 4.5 s to do the job:
$handle = fopen("2Gb.txt", "rb");
$handle2 = fopen("2Gb-2.txt", "wb");
stream_set_write_buffer($handle2, 32000);
while (!feof($handle)) {
    $chunk = fread($handle, 32 * 1024 * 1024);
    // Read to the end of the current line, to avoid a false positive where
    // the chunk ends with '123456' that is not at the end of a line
    $chunk .= fgets($handle);
    fwrite($handle2, preg_replace('/123456\b/', 'azerty', $chunk));
}
fclose($handle);
fclose($handle2);
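To show the fgets() top-up doing its job at a small scale, here's the same loop with a made-up 64-byte chunk size over a throwaway temp file (file names and sizes are invented for the demo):

```php
<?php
// A tiny chunk size forces many iterations, so chunk boundaries regularly
// fall mid-line and the fgets() top-up actually gets exercised.
$in  = tempnam(sys_get_temp_dir(), 'chunk_in');
$out = tempnam(sys_get_temp_dir(), 'chunk_out');
file_put_contents($in, str_repeat("foo 123456\nbar 1234567\n", 20));

$r = fopen($in, 'rb');
$w = fopen($out, 'wb');
while (!feof($r)) {
    $chunk = fread($r, 64);  // tiny chunk instead of 32 MB
    $chunk .= fgets($r);     // finish the current line before substituting
    fwrite($w, preg_replace('/123456\b/', 'azerty', $chunk));
}
fclose($r);
fclose($w);

// Every '123456' at a word boundary was replaced; '1234567' was left alone.
echo substr_count(file_get_contents($out), 'azerty'), "\n"; // prints 20
```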
So I reckon it's the underlying I/O that's the bottleneck here (by the way, these tests were run on an SSD, so disk seeks between the read and write positions weren't an issue).