Originally posted by jabba_29
Also depends on what standard you are wanting to write your code to; all attribute values, e.g. height, width, href, etc., should be "quoted" to be XHTML or HTML (4?) valid.
And such code should render faster because it's easier to parse. The two bytes you'd save by leaving the quotes off won't make much of a difference either way, since they're unlikely to result in fewer IP packets being transferred (just eliminating gratuitous whitespace padding from the HTML would save a lot more). Some more improvement can be achieved by generating valid (X)HTML and attaching an appropriate DOCTYPE imprimatur, 'cos then the browser has a chance to know what to expect (attaching the wrong imprimatur can of course make things worse, like breaking any promise can). More ideas (some of which I disagree with) can be found by Googling for "Extreme HTML Optimization".
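As a minimal sketch of that last point in PHP (the charset and the DOCTYPE flavour here are just examples; use whatever you actually validate against):

    <?php
    // Tell the browser exactly what is coming: a correct Content-Type header
    // plus a DOCTYPE that matches the markup you really emit.
    header('Content-Type: text/html; charset=utf-8');
    echo '<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" '
       . '"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">' . "\n";
    ?>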
One significant speedup that a lot of sites I've seen could do with is more intelligent use of SQL. If your DBMS is up to it, suitable views, triggers, and/or stored procedures can take a remarkable amount of workload off the web server and move it to the database server: the traffic between the database and the script then consists only of information that is actually needed. Even without those facilities, just thinking about the SQL can do a lot of good (e.g., be very wary of "SELECT * ...").
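A rough sketch of that idea using PDO (the table and column names are made up for illustration):

    <?php
    // Hypothetical example: fetch only the columns you need and let the
    // database do the filtering/aggregation instead of "SELECT * ...".
    $db = new PDO('mysql:host=localhost;dbname=example', 'user', 'pass');

    // Wasteful: drags every column of every order across the wire,
    // only for the script to throw most of it away.
    // $rows = $db->query('SELECT * FROM orders')->fetchAll();

    // Better: the result set contains exactly what the page will display.
    $stmt = $db->prepare(
        'SELECT customer_id, SUM(total) AS spent
           FROM orders
          WHERE placed_at >= :since
          GROUP BY customer_id'
    );
    $stmt->execute(array(':since' => '2006-01-01'));
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
    ?>

A view or stored procedure takes the same idea further: the script just calls it and gets back the finished result set.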
Then you can consider one of the PHP optimisers/precompilers out there (Zend's product line, APC, etc.).
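An opcode cache like APC needs no code changes at all, but APC also exposes a small user cache you can lean on for expensive bits. A hedged sketch (build_menu() and the key/TTL are made up):

    <?php
    // Only touch the user cache if the APC extension is actually loaded.
    if (function_exists('apc_fetch')) {
        $menu = apc_fetch('site_menu');
        if ($menu === false) {
            $menu = build_menu();               // hypothetical expensive call
            apc_store('site_menu', $menu, 300); // keep it for five minutes
        }
    } else {
        $menu = build_menu();
    }
    ?>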
And a faster machine never hurts, either 🙂