NogDog;11055629 wrote:Yeah, part of me is leaning toward a DB table and cron job just for the retry ability along with hopefully abating worries of zombie processes and such, but it just feels ugly. 🙂
Then it's a choice between two uglies:
you have a database queue of messages to be delivered (not ugly at all IMHO)
you have long-lived, independent, zombie processes that may retry over and over forever and never die, until they eventually use up all your server memory and the server crashes.
While the likelihood of the latter problem might be pretty remote in practice, it can happen. You should avoid situations where you fork off processes that never die. Whether you have a db or not, you'll probably have to work some logic into the forked processes so they can evaluate their own progress, know when to quit, and clean up after themselves. I'd also point out that if you build these processes to email you on failure, you're inviting a billion emails the day something really breaks.
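Something along these lines, for instance (a minimal sketch assuming the pcntl extension; send_message() here is a stand-in for your actual delivery code):

[code]
<?php
// Fork a worker that cannot live forever: it caps its retries
// and exits no matter what. Requires the pcntl extension.

const MAX_ATTEMPTS = 5;

function send_message(string $msg): bool
{
    // ...talk to the remote gateway here, return true on success...
    return false; // placeholder
}

$message = 'example payload';

$pid = pcntl_fork();
if ($pid === -1) {
    exit("fork failed\n");
} elseif ($pid === 0) {
    // Child: retry with backoff, but always terminate eventually.
    for ($attempt = 1; $attempt <= MAX_ATTEMPTS; $attempt++) {
        if (send_message($message)) {
            exit(0); // delivered: clean up and die
        }
        sleep($attempt * 60); // simple linear backoff
    }
    error_log('giving up after ' . MAX_ATTEMPTS . ' attempts');
    exit(1); // out of attempts: die anyway, never linger
}

pcntl_wait($status); // parent reaps the child so it doesn't go zombie
[/code]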
It's been my experience that multi-process setups are tricky, and it helps to have their logging and status updates written to a db so you can slice and dice them. Every process should identify itself when logging (see [man]posix_getpid[/man]); otherwise it can be hard to tell which log entries belong to which process, which in turn makes it hard to identify what caused an error.
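For example, something like this (a rough sketch; the process_log table and its columns are invented for illustration):

[code]
<?php
// Write log entries tagged with the current process ID so you can
// slice the log per-process later. Falls back to getmypid() if the
// posix extension isn't loaded.

function log_event(PDO $db, string $message): void
{
    $pid = function_exists('posix_getpid') ? posix_getpid() : getmypid();
    $stmt = $db->prepare(
        'INSERT INTO process_log (pid, message, logged_at) VALUES (?, ?, NOW())'
    ); // NOW() assumes MySQL; adjust for your DB
    $stmt->execute([$pid, $message]);
}

// e.g. log_event($db, 'attempt 3 failed: gateway timeout');
[/code]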
To me, the ugly bit gets introduced the minute you start talking to some remote API or gateway. These remote systems will always go flaky at some point, even if only temporarily.
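Which is exactly where the cron-plus-table approach earns its keep: when the gateway flakes out, the message just sits in the queue and gets retried on the next cron pass. A rough sketch, assuming a made-up message_queue table with id/payload/attempts/status columns:

[code]
<?php
// Cron-driven retry pass: pick up pending messages, try to send,
// and either mark them sent or bump the attempt counter.
// Table name, columns, and credentials here are placeholders.

function send_message(string $payload): bool
{
    // ...talk to the remote API/gateway here...
    return false; // placeholder
}

$db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$pending = $db->query(
    "SELECT id, payload FROM message_queue
     WHERE status = 'pending' AND attempts < 5"
);

foreach ($pending as $row) {
    if (send_message($row['payload'])) {
        $db->prepare("UPDATE message_queue SET status = 'sent' WHERE id = ?")
           ->execute([$row['id']]);
    } else {
        $db->prepare("UPDATE message_queue SET attempts = attempts + 1 WHERE id = ?")
           ->execute([$row['id']]);
    }
}
[/code]

Messages that hit the attempt cap simply stop being selected, so a dead gateway can't snowball into runaway retries (or a billion emails).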