Hey,

I've been trying to make my sites search-engine friendly ... even though search engines are unsociable bastar... anyway.

Let's say I have 2 pages.
Does it make any difference if I put both pages in one file (and call them using GET),
or should I create 2 separate files?

The reason I'm asking is because I've seen a few sites that create 10 PHP files when they could easily have put them all in one file, and I'm just taking a wild guess that it has something to do with search engine inclusion ... either that or they haven't figured out the glory of GET and IF, simple and effective.
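
For what it's worth, here's a rough sketch of the GET-and-IF approach I mean (the file and parameter names are just made up for the example):

<?php
// index.php - one file, two "pages", chosen by a GET parameter
$page = isset($_GET['page']) ? $_GET['page'] : 'home';

if ($page === 'about') {
    echo '<h1>About</h1><p>About-page content goes here.</p>';
} else {
    echo '<h1>Home</h1><p>Home-page content goes here.</p>';
}
?>

So index.php and index.php?page=about behave like two separate pages, even though there's only one file.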

    Maybe they've discovered the joys of mod_rewrite 🙂

      Nah, I doubt there's any URL manipulation involved, or as some people call it: "Voodoo".

        Sigh ... I know for certain it's not URL manipulation. I have the source code for one of the sites.

        Each file has about 10-15 lines of code in it (they just include another file).
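
        Roughly like this, I mean (I've made the names up, but that's the gist of each stub):

        <?php
        // article5.php - the whole file: set a couple of variables, then hand off
        $article_id = 5;
        $page_title = 'Article five';
        include 'template.php'; // template.php does all the real work
        ?>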

        Thanks, I'll go check the articles.

          2 years later

          It probably was mod_rewrite, heh heh.

            drawmack wrote:

            Are these urls things like:

            http://www.example.com/articles/1
            http://www.example.com/articles/2
            http://www.example.com/articles/3
            http://www.example.com/articles/4
            http://www.example.com/articles/5
            etc....

            If so then it's most likely a mod_rewrite rule that uses PHP or Perl or something so that a script acts like a directory. See the articles on this site that deal with being search-engine friendly for details.

            The big question of course will be ... when are search engines going to take these URLs as one page too!? I mean, there really is no technical reason WHY the ?... component of a URL could not be used in the listings. So why do they accept rewritten URLs as independent pages!?
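
            For anyone wondering, the kind of rule drawmack describes would look roughly like this (assuming Apache with mod_rewrite enabled; "articles.php" and "id" are just placeholder names):

            # .htaccess (sketch only)
            RewriteEngine On
            RewriteRule ^articles/([0-9]+)$ articles.php?id=$1 [L]

            With that, a request for /articles/5 is quietly handed to articles.php with $_GET['id'] = 5, even though no file called "5" exists anywhere.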

              Well, different URLs are typically different resources; it wouldn't make sense for the search engine to index article?date=20040910 because article?date=20031001 had some relevant keywords (ditto for article/2004/09/10 and article/2003/10/01).

              As to why a whole bunch of files is used instead of URL rewriting ... it's certainly nothing to do with search-engine optimisation because a search engine wouldn't care even if it knew.

              I was going to make a facetious comment, but seeing as it was missed first time around (despite being flagged).... 🙂

                leatherback wrote:

                The big question of course will be ... when are search engines going to take these URLs as one page too!? I mean, there really is no technical reason WHY the ?... component of a URL could not be used in the listings. So why do they accept rewritten URLs as independent pages!?

                Because of the way search engines work today, they only care about what content is at a given URL. You and I might call certain URLs different pages, but to a search index one URL equals one set of content; a similar but different URL returns different content. It's only humans that distinguish "pages".

                  veridicus wrote:

                  Because of the way search engines work today, they only care about what content is at a given URL. You and I might call certain URLs different pages, but to a search index one URL equals one set of content; a similar but different URL returns different content. It's only humans that distinguish "pages".

                  Exactly my point; as the search engine cares about content, it should not care whether a variable is passed as part of a URL which will be rewritten, or through a query string at the end of it. As long as the page provides valid links to the other content sections, it should not make a difference, as long as the content is X% different.
                  However, the fact of the matter is that search engines do NOT like to follow keyword-based pages. As such, one can imagine that mod_rewrite fake URLs will be ignored in the future too.

                    leatherback wrote:

                    or through a query string at the end of it.

                    The implication of the query string being that the user is supplying a query; URLs that contain query strings are supposed to be dynamically generated by the client. If Google were to identify a URL that is supposed to have a query string attached, there would be absolutely no sense in it generating and requesting every possible query string just to see what, if anything, comes back.

                    There is absolutely no reason to distinguish between static URLs that resolve to distinct files in the filesystem (URLs are not filenames) and URLs that get rewritten server-side into something that contains a query string; how the server decides to match resources to locators is entirely the server's business (do you know how the W3C resolves its URLs? Should you care?).

                      All I was..

                      OH forget it. Seems like you guys do not want to get my point. :glare:

                        Try to be more blunt, then. Maybe that will work 😉

                          Weedpacket wrote:

                          Try to be more blunt, then. Maybe that will work 😉

                          Hm.. yeah. I tried that. Somehow that doesn't work either. 😃
                          I'll just sit here and see the world of search-engine optimisation come to an end <evil grin>hahahahaha</evil grin>
