Hi Guys,

After reading the post about dumb-ass subject lines (which I'm sure I've done in the past) I thought I'd make an effort this time.

Anyway I need some help with the ereg_replace function. I am trying to strip the subdomain (www or whatever) so I just get domain.co.uk.

I have tried:

  ereg_replace("[>].", "", $HTTP_HOST);

but that only strips the dot and the one letter before it - not exactly what I wanted.
I've tried other variations of the above (apart from the correct one, apparently), but they all display the full domain unchanged. The line above is the only one that actually alters it.

Please help.
Thanks

    That doesn't seem to work either. Using the example in the documentation I used:

    $path_parts = pathinfo($HTTP_HOST);
    
    echo $path_parts["dirname"] . "\n";
    echo $path_parts["basename"] . "\n";
    echo $path_parts["extension"] . "\n";
    

    which returned

    .
    www.domain.co.uk
    uk
    

      whoops, should be [man]parse_url[/man]

      hope you got that in the related links.

        is it just me!!!
        this ain't working either

        Using the code

        print_r(parse_url("http://".$HTTP_HOST));

        returns

        Array ( [scheme] => http [host] => www.domain.co.uk )

        As the [host] is exactly the same as $HTTP_HOST, I still need to strip the subdomain.

          preg_replace('!^[^.]*\\.!','', $_SERVER['HTTP_HOST']);

          Of course, killing off everything up to the first '.' can also be done with [man]strcspn[/man] and substr.
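
          For what it's worth, here is a minimal sketch of that strcspn()/substr() variant, with preg_replace shown for comparison (the $host value is just an example standing in for $_SERVER['HTTP_HOST']):

          $host = 'www.domain.co.uk';

          // strcspn() gives the length of the leading label (everything before the first '.')
          $len = strcspn($host, '.');

          // Skip that label plus the dot itself
          echo substr($host, $len + 1);               // domain.co.uk
          echo preg_replace('!^[^.]*\.!', '', $host); // domain.co.uk as well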

            Hmm...

            Just wondering: what would happen if one called these scripts when there is no subdomain included? A lot of sites also support plain http://domain.com, assuming you are then on the www domain...

            Just a thought to consider..

            Is there really a way to tell whether a subdomain is used:

            www.shell.nl
            shell.nl
            shell.co.uk
            shell.com
            www.shell.com

            all seem to work, and bring me to a shell website..

            J.

              Good point, leatherback; I just went for the 'string processing' interpretation of the problem, and neglected to mention that. One is going to have to look at the TLD, and if it's a national one (.nl, .uk, .de, .au), then look under that - keeping in mind that .uk and .au further subdivide their name space into things like .co and .ac (in the UK) or .com and .edu (in Australia), while others (the Netherlands and Germany) don't.

              Once you've got that out of the way, the last piece would be the domain, while all the bits below it (there might be more than one) are parts of the subdomain. And again, as you've pointed out, there might not be any bits left, and the server would be assuming a default subdomain.

              It should be noted that the W3C recommends not assuming a URL's domain name actually means anything beyond the claim that the thing is a domain name (at least for http). All the detail below that level is supposed to be treated as "opaque", without further interpretation.
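
              To make that concrete, here is a rough sketch of the approach described above, assuming a tiny, hand-picked list of country-code TLDs that register names at the second level (in practice one would consult a maintained list such as the Public Suffix List):

              // Sketch only: $secondLevel is deliberately incomplete.
              function registrable_domain($host)
              {
                  $secondLevel = array('co.uk', 'ac.uk', 'org.uk', 'com.au', 'edu.au');

                  $parts = explode('.', $host);
                  $last2 = implode('.', array_slice($parts, -2));  // e.g. "co.uk"
                  $keep  = in_array($last2, $secondLevel) ? 3 : 2; // domain + suffix labels

                  return implode('.', array_slice($parts, -$keep));
              }

              echo registrable_domain('www.shell.co.uk'); // shell.co.uk
              echo registrable_domain('shell.nl');        // shell.nl (no subdomain - still fine)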

                5 years later

                Example..

                Works unless the domain name itself is three letters or fewer - the loop only stops at a label longer than three characters, so something like www.bbc.co.uk comes back unchanged.

                $ua = $_SERVER['SERVER_NAME'];
                $ua = explode('.', $ua);   // split into labels
                $ua = array_reverse($ua);  // start from the TLD
                $url = array();
                $except = array("name","info","coop","aero","travel","museum","asia","mobi","jobs"); // gTLDs longer than three letters
                foreach ($ua as $a) {
                    $url[] = $a;
                    // stop once we hit what looks like the registered domain label
                    if (strlen($a) > 3 && !in_array($a, $except)) break;
                }
                $url = array_reverse($url);
                $url = implode('.', $url); // e.g. "domain.co.uk"

                  Note that these days a TLD can be pretty much anything.
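
                  For example (host names purely illustrative; the function below just wraps the loop from the post above), a long new-style gTLD trips the length check on the very first label:

                  function strip_by_length($host)
                  {
                      $except = array("name","info","coop","aero","travel","museum","asia","mobi","jobs");
                      $url = array();
                      foreach (array_reverse(explode('.', $host)) as $a) {
                          $url[] = $a;
                          if (strlen($a) > 3 && !in_array($a, $except)) break;
                      }
                      return implode('.', array_reverse($url));
                  }

                  echo strip_by_length('www.example.co.uk');       // example.co.uk
                  echo strip_by_length('www.example.photography'); // photography - everything else is lost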
