Here are the two bits of code I'm testing with.
This is an XML parser that reads a WoW Armory character page and pulls out bits of character information. The drawback is that if the site is down or overloaded, the request never times out properly. I tried fsockopen, but that doesn't help: the connection itself opens fine, the server is just badly bogged down and slow to respond. So I tried a second implementation with cURL, shown after this one.
(file_get_contents) test:
<?php
error_reporting(0);
require_once('./global.php');

$locale = "US";
$char['realm'] = "Scarlet+Crusade";
$char['name'] = "Angello";

function parse_wowarmory($url)
{
    // Spoof a browser user agent; ini_set('user_agent') applies to PHP's
    // stream functions, so file_get_contents() below will send it.
    ini_set('user_agent', 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)');
    $xml = file_get_contents($url);

    // Parse the raw XML with the forum's vB_XML_Parser class
    require_once(DIR . '/includes/class_xml.php');
    $xmlobj = new vB_XML_Parser($xml);
    return $xmlobj->parse();
}

if ($locale == 'EU')
{
    $armory = "armory.wow-europe.com";
}
else
{
    $armory = "armory.worldofwarcraft.com";
}

$char['sheet'] = parse_wowarmory("http://" . $armory . "/character-sheet.xml?r=" . $char['realm'] . "&n=" . $char['name']);

echo '<pre>';
print_r($char['sheet']);
echo '</pre>';
?>
Here's the second attempt, using cURL.
(cURL) test:
<?php
error_reporting(0);
set_time_limit(0);
require_once('./global.php');

$locale = "US";
$char['realm'] = "Scarlet+Crusade";
$char['name'] = "Luttiano";

function parse_wowarmory($url)
{
    ini_set('user_agent', 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)');

    $ch = curl_init();
    $timeout = 30;
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);         // return the body rather than echoing it
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);  // time limit for establishing the connection
    $xml = curl_exec($ch);
    curl_close($ch); // free the handle before returning

    require_once(DIR . '/includes/class_xml.php');
    $xmlobj = new vB_XML_Parser($xml);
    return $xmlobj->parse();
}

if ($locale == 'EU')
{
    $armory = "armory.wow-europe.com";
}
else
{
    $armory = "armory.worldofwarcraft.com";
}

$char['sheet'] = parse_wowarmory("http://" . $armory . "/character-sheet.xml?r=" . $char['realm'] . "&n=" . $char['name']);

echo '<pre>';
print_r($char['sheet']);
echo '</pre>';
?>
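One thing I already suspect about this version: CURLOPT_CONNECTTIMEOUT only seems to cover the connect phase, so for the "connects fine but responds slowly" case I probably also want a cap on the whole transfer, something like:

curl_setopt($ch, CURLOPT_TIMEOUT, $timeout); // limit the entire transfer, not just the connect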
The issue I have with the cURL test is that the parser just returns an empty array.
So the first test for file_get_contents produces:
Array
(
[globalSearch] => 1
[lang] => en_us
[requestUrl] => /character-sheet.xml
[characterInfo] => Array
etc. etc.
)
.. but it has no way to time out if the site hangs or goes down.
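One idea I've been kicking around for that: passing a stream context with a timeout into file_get_contents. A rough sketch of what I mean (untested against the Armory, and I'm not sure the timeout fires when the server accepts the connection but stalls on the body):

// Hypothetical replacement for the fetch inside parse_wowarmory():
// the http context's 'timeout' option caps the stream read, and
// 'user_agent' replaces the ini_set() call.
$context = stream_context_create(array(
    'http' => array(
        'timeout'    => 30, // seconds
        'user_agent' => 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)',
    ),
));
$xml = file_get_contents($url, false, $context);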
The second test, with cURL, produces:
Array
()
.. which contains nothing.
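To figure out where it's dying, I suppose the first step is to dump cURL's error state right after curl_exec (with error_reporting turned back on). Something like this inside parse_wowarmory:

$xml = curl_exec($ch);
if ($xml === false)
{
    // curl_errno()/curl_error() report why the transfer itself failed
    echo 'cURL error (' . curl_errno($ch) . '): ' . curl_error($ch);
}
else
{
    // curl_getinfo() at least shows what HTTP status came back
    $info = curl_getinfo($ch);
    echo 'HTTP status: ' . $info['http_code'];
}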
Any idea what I'm doing wrong, or what I can do to make this better?
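One guess I keep coming back to: does ini_set('user_agent', ...) even affect cURL, or only PHP's stream functions? If it's stream-only, the Armory may be seeing no user agent at all from the cURL version, and I'd need to set it explicitly, e.g.:

curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)');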