Yes, it has been debated before, but there's always room for more discussion on this.
There's no simple cut-and-dried answer. I'd say that a poorly coded PHP app is going to be slower than a well-coded ASP/VBS or CFM application.
Further, depending on what you want to do, a combination of two or more languages may be the best way to develop your site for scalability.
There are a few key points to keep in mind about scalability.
1: Can I scale by assigning different tasks to different machines? An example of this would be using one box running Linux/thttpd/PHP to serve GIFs, JPEGs, and other static content, while using another Linux/Apache/PHP box to build database-derived pages, and a third machine to run your database.
2: Can I scale by clustering multiple boxes together doing one task? This often works well with Apache reverse proxying multiple middle-tier servers, which hit one or more fast database servers in back.
3: What is the limit of my scalability solution?
4: What is the cost, per page per second, of my scalability solution?
5: Real scalability means a system that stays well behaved when operating under overload conditions.
6: Don't believe anybody's benchmarks until you've duplicated the results yourself on your own machines.
Number 1 is the first step to scalability, and it's often overlooked. If you've got your web server, database, and everything else sitting on one box, it's going to hit a performance wall pretty quickly. Doubly so if you haven't optimized that machine for performance.
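To make point 1 concrete, here's a minimal sketch of what the page-building box might look like; the hostnames, credentials, and table are made-up placeholders. The idea is simply that images are referenced by URL from the static box and the database is reached by hostname rather than localhost, so each tier can be moved or upgraded on its own.

<?php
// Sketch only -- hostnames, credentials, and the query are hypothetical.
// Static content (GIFs, JPEGs) lives on a separate lightweight server;
// pages built here just reference it by URL instead of serving it.
define('STATIC_HOST', 'http://static.example.com');

// The database runs on its own box, so connect by hostname rather than
// localhost; the web tier and the DB tier can then scale separately.
$db = mysqli_connect('db.example.com', 'webuser', 'secret', 'site')
    or die('DB connection failed: ' . mysqli_connect_error());

$result = mysqli_query($db, 'SELECT id, title FROM articles LIMIT 10');
while ($row = mysqli_fetch_assoc($result)) {
    // Markup comes from this box, images from the static box.
    echo '<p><img src="' . STATIC_HOST . '/icons/doc.gif" alt="">'
       . htmlspecialchars($row['title']) . "</p>\n";
}
mysqli_close($db);
?>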
Once you've segregated your tasks, you need to figure out which machine is the bottleneck. Just test the system under medium load, say 10 or 20 simultaneous users, for response time. Then downgrade one set of boxes at a time (e.g. underclock the CPU, remove memory, etc.) and see the result.
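If you don't have a load-testing tool handy, even something quick and dirty like this gives you a ballpark number to compare before and after you downgrade a box. The URL and concurrency level are placeholders, and it assumes PHP's curl extension is installed.

<?php
// Rough load-test sketch: fire $simos simultaneous requests at one URL
// and report wall-clock time and approximate pages per second.
$url   = 'http://www.example.com/test-page.php';   // placeholder
$simos = 20;                                       // simultaneous requests

$mh      = curl_multi_init();
$handles = array();
for ($i = 0; $i < $simos; $i++) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}

$start   = microtime(true);
$running = null;
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh);          // wait for activity instead of spinning
} while ($running > 0);
$elapsed = microtime(true) - $start;

foreach ($handles as $ch) {
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);

printf("%d simultaneous requests in %.2fs (about %.1f pages/sec)\n",
       $simos, $elapsed, $simos / $elapsed);
?>

Run it against each configuration and watch how the numbers move as you pull memory or CPU from one tier at a time.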
Number 2 is a method that is supported natively both by Red Hat's clustering software (and a few others') and by ColdFusion. I do believe that IIS 5.5 does something similar, but I haven't tested it so can't speak for how well it clusters.
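If you're rolling it yourself with Apache instead, the reverse-proxy half of point 2 can be sketched with the mod_proxy / mod_proxy_balancer directives found in later Apache releases (2.2 and up); the backend hostnames below are made up.

# Front-end box: balance requests across the middle-tier app servers.
<Proxy balancer://middletier>
    BalancerMember http://app1.internal:8080
    BalancerMember http://app2.internal:8080
</Proxy>

ProxyPass        / balancer://middletier/
ProxyPassReverse / balancer://middletier/

Each BalancerMember is one middle-tier box; the database servers sit behind those and never talk to the proxy at all.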
Number 3 is something you can only determine through testing, and often the licenses required to test for maximum scalability make it impossible to be sure where that limit is.
Number 4 is simple: figure out how much it costs you to add a machine, licenses and hardware and all, then figure out how many pages per second it adds to the system, and that's your cost-per-page-per-second delta. Note that if you need to increase database performance to keep your site up to speed, and you're running a database like Oracle that charges by the megahertz times the number of CPUs, the cost of scaling can be REAL high.
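The arithmetic itself is back-of-the-envelope stuff; every number below is made up for illustration, so plug in your own hardware, license, and benchmark figures.

<?php
// Cost per page/second added by one new middle-tier box (all placeholders).
$hardware_cost = 4000;   // the box itself
$license_cost  = 1500;   // OS / app-server licenses for it
$pages_per_sec = 35;     // measured throughput it adds under load

$cost_per_pps = ($hardware_cost + $license_cost) / $pages_per_sec;
printf("Cost per added page/second: \$%.2f\n", $cost_per_pps);
// (4000 + 1500) / 35 is roughly $157 per page/second of extra capacity.
?>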
Number 5 is the reason most e-commerce systems run on Unix and/or mainframes and not NT/Win2k: poor behaviour under overload. Unix generally deteriorates more slowly than Win2k. I've seen apps on ASP and CF on Win2k that were 2 to 3 times faster under no load than Unix, but under the load of hundreds of users, the Unix boxes were still serving pages at about the same rate, while the Win2k boxes were for all intents and purposes non-responsive (2-minute-or-more response times were the best they could do).
Lastly, number 6. It doesn't matter what we tell you: until you've built one of each and tested it against a simulated version of what you'll be doing, you don't really know which will work better, faster, and longer.