Try this:
Load the page with fopen(), i.e.
$page_url = "http://www.page.com";
$page = fopen($page_url, "r");
then read each line of the file until there are none left -
while (!feof($page))
{
    $aline = fgets($page, 255);
    // grab every HREF="..." on this line, one match at a time
    while (eregi("HREF=\"[^\"]*\"", $aline, $match))
    {
        print($match[0]);
        print("<br>\n");
        // cleanup now: escape the "?" so it isn't treated as a regex
        // metacharacter, then strip this match from the line so the
        // loop moves on to the next one
        $replace = ereg_replace("\\?", "\\?", $match[0]);
        $aline = ereg_replace($replace, "", $aline);
    }
}
fclose($page);
+++++++++++++++++++++++++++++++++++++++++
code has ended
I haven't tested it, but this should work. It uses ereg() regular-expression searches, a tool familiar to anyone who has used grep on Unix. Linux Lives!!
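If your PHP build has the preg_* (Perl-compatible regex) functions, here is an alternative sketch, also untested, that grabs every href in one pass and writes the URLs to a separate file as Gordon asked. The file names input.html and urls.txt are just placeholders for your own files.
$html = implode("", file("input.html"));   // read the whole HTML file into one string
// collect the value inside every href="..." attribute, case-insensitively
preg_match_all("/href=\"([^\"]*)\"/i", $html, $matches);
// $matches[1] holds only the captured URLs, without the href="" wrapper
$out = fopen("urls.txt", "w");
foreach ($matches[1] as $url)
{
    fputs($out, $url . "\n");   // one URL per line
}
fclose($out);
print(count($matches[1]) . " URLs saved to urls.txt<br>\n");
+++++++++++++++++++++++++++++++++++++++++
code has ended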
Gordon wrote:
Hey!
I have a big HTML file including a lot of URLs, and I would like to get all those URLs out of this HTML file to save them in a separate file.
How can I solve this problem?
Thanx,
Nashville