Well, here's a really silly mistake.
For 48 hours I thought my server was under a DoS (Denial of Service) attack. In a way it was, but it was our own server attacking itself.
My server was configured for 300 max child processes, which is a little too high. Saturday afternoon, all activity on the server suddenly came to a screeching halt. I noticed that the server was out of memory and there were 300 Apache processes running. I figured it was a runaway glitch and rebooted.
Later on it happened again. I rebooted the server again and changed my passwords, just in case someone was trying to hack in. I checked the processes, cron, at, and the command history of my admin users. Nothing in the slightest to indicate a hack, or even a hack attempt.
This morning, I went to the server and tried to view a web page. Sure enough, it took 5 minutes before the first page showed up. Instead of rebooting the server, I checked the running processes with 'top -bn 1' and saved the output to a file.
I then did '/etc/rc.d/init.d/httpd restart' to kill all the Apache children and restart. The server instantly snapped back into shape. I ran 'top -bn 1' again and emailed the before and after output to myself to compare the differences.
Within 10 minutes, however, my server was fully loaded with 300 httpd processes again. What could this be? Because I host several Christian-related websites, I suddenly started thinking this had something to do with the air raids in the Middle East :-) I went to several of my favorite Christian websites, the first of which was Gospelcom.net. It just so happened that Gospelcom was doing routine maintenance (which I did not know at the time) and their web site wasn't working.
Of course I thought the worst.
I went into my httpd.conf file and changed the max child processes setting (MaxClients) to 50 so that the server's other services would operate like normal. I then began examining the logs, and I tried to call people I knew to find out if their websites were working. I soon figured out it was just my site.
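For reference, here's a sketch of the relevant httpd.conf lines for Apache 1.3. Only the MaxClients value of 50 comes from my situation above; the other directives shown are stock defaults included for context:

```apache
# httpd.conf -- Apache 1.3 process limits (sketch; only MaxClients
# reflects the change described above, the rest are stock defaults)
MinSpareServers   5
MaxSpareServers  10
StartServers      5

# Hard cap on simultaneous child processes. Dropping this from 300
# to 50 keeps one runaway site from exhausting the whole box.
MaxClients       50
```

With the cap at 50, the runaway site could still jam its own request queue, but the box kept enough free memory to serve everything else.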
What could it be? Well, finally, I took the time to CAREFULLY look at the access logs. I found something happening that was a little mysterious:
YYY.followers.net 64.77.36.XXX - - [08/Oct/2001:08:18:44 -0700] "GET /web/adodb/adodb.inc.php HTTP/1.0" 200 - "-" "PHP/4.0.6"
It just so happens that 64.77.36.XXX is my own IP address. So something on my own server was attacking the server; in fact, the "PHP/4.0.6" user agent in the log showed the requests were coming from PHP itself. I thought it was funny that it kept trying to get this adodb.inc.php file again and again. It didn't have anything important in it, and if anyone wanted it that bad, they should just go to http://php.weblogs.com and get it themselves.
Then I looked at the path. Hmmmm... That's almost the same path I would use if I were trying to include the file in a PHP script.
I looked in the last file uploaded before the "attacks" started on Saturday. Sure enough, the line that should have read:
require("/web/adodb/adodb.inc.php");
instead read:
require("http://YYY.followers.net/web/adodb/adodb.inc.php");
Simple little mistake. 48 hours and 2-3 gallons of sweat later, it's kind of funny.
I'm not entirely sure why it spiraled the way it did, but the basic mechanism makes sense: given a URL, PHP's require() fetches the file over HTTP instead of reading it from disk, so every page view made my server open a connection back to itself and tie up more Apache children. (Worse, requesting a .php file over HTTP returns its executed output, not its source, so the include wouldn't even have worked as intended.) Apparently the only way to stop it was to kill Apache. It must have just happened that the developer who uploaded the file was checking the application occasionally to see if the server was working yet. Each time the script was run, it would start another round of "attacks" on the server.
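One cheap way to catch this class of mistake early is to refuse any include path that looks like a URL before handing it to require(). This is just a sketch of the idea; the is_url() helper and safe_require() wrapper are my own illustration, not something from our actual application:

```php
<?php
// Hypothetical helper (illustration only): detect paths that carry a
// URL scheme such as http:// or ftp://, which require() would fetch
// over the network instead of reading from the local filesystem.
function is_url($path) {
    return preg_match('#^[a-z][a-z0-9+.-]*://#i', $path) === 1;
}

// Hypothetical wrapper: fail loudly instead of letting the server
// quietly make HTTP requests back to itself.
function safe_require($path) {
    if (is_url($path)) {
        die("Refusing to include over a URL: $path\n");
    }
    require($path);
}

// The buggy line would have been caught on the first page view:
//   safe_require("http://YYY.followers.net/web/adodb/adodb.inc.php"); // dies
//   safe_require("/web/adodb/adodb.inc.php");                         // OK
?>
```

Failing fast on the first request would have turned a 48-hour mystery into a one-line error message.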
Well, I hope that this experience saves you some time. Good luck.
Matt Nuzum