Obviously this will need to be done via cron or some other scheduled task (maybe a permanently running task which wakes up when there's work to do).
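For what it's worth, a single crontab entry is enough to kick it off (the path and schedule below are just placeholders, not anything from your setup):

    # Illustrative: run the feed checker once a day at 03:00
    0 3 * * * /usr/bin/php /path/to/check_feeds.php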
If you need to check 200k feeds on a daily or weekly basis, the job which does it will certainly need some form of parallelisation.
This should in practice not be particularly difficult. I'd recommend that you choose how many processes you want to use (perhaps at runtime), then go through the database assigning a task ID to each of the feeds.
Then fork N child processes (perhaps you could do this from the shell rather than from within PHP) and have each one process the feeds with its task ID.
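Roughly along these lines, as a sketch only; the table/column names and process_feed() are made up for illustration, and it assumes the pcntl extension (CLI only):

    <?php
    // Rough sketch: fork one worker per task ID and let each work its slice.
    $numWorkers = 4;

    for ($taskId = 0; $taskId < $numWorkers; $taskId++) {
        $pid = pcntl_fork();
        if ($pid === -1) {
            exit("fork failed\n");
        }
        if ($pid === 0) {
            // Child: open its own DB connection *after* forking --
            // sharing the parent's handle across processes causes grief.
            $db = new PDO('mysql:host=localhost;dbname=feeds', 'user', 'pass');
            $stmt = $db->prepare('SELECT id, url FROM feeds WHERE task_id = ?');
            $stmt->execute([$taskId]);
            foreach ($stmt as $feed) {
                process_feed($feed); // your fetch/parse routine
            }
            exit(0);
        }
    }

    // Parent: reap all the children before exiting.
    while (pcntl_waitpid(0, $status) > 0) {
        // nothing to do here, just waiting
    }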
Note that if you're fetching these over HTTP, you may want to use a custom HTTP implementation so that you can tune the timeouts etc. very carefully. I did this when I wrote a spider.
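If you'd rather not roll the HTTP layer entirely by hand, cURL at least exposes the relevant knobs; something like this (the timeout values and user agent are only examples):

    // Fetch one feed with explicit connect and total timeouts.
    function fetch_feed($url) {
        $ch = curl_init($url);
        curl_setopt_array($ch, [
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_CONNECTTIMEOUT => 10,   // seconds to establish the connection
            CURLOPT_TIMEOUT        => 30,   // overall cap on the whole request
            CURLOPT_FOLLOWLOCATION => true,
            CURLOPT_MAXREDIRS      => 3,
            CURLOPT_USERAGENT      => 'MyFeedFetcher/1.0',
        ]);
        $body = curl_exec($ch);
        $err  = curl_errno($ch);
        curl_close($ch);
        return $err === 0 ? $body : false;
    }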
Clearly proper error handling must be done at every level: if a feed seems unavailable you can schedule it for another attempt later, and if a feed returns a badly formed document or an HTTP error you need some method of retrying, recording persistent failures, and potentially removing bad feeds from the list.
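A sketch of the bookkeeping side, assuming you add failure_count, next_attempt and active columns to the feeds table (all names illustrative):

    // Record a failed fetch and push the next attempt back an hour.
    function record_failure($db, $feedId, $maxFailures = 5) {
        $db->prepare('UPDATE feeds
                         SET failure_count = failure_count + 1,
                             next_attempt  = DATE_ADD(NOW(), INTERVAL 1 HOUR)
                       WHERE id = ?')->execute([$feedId]);

        // Disable feeds that keep failing rather than retrying forever.
        $db->prepare('UPDATE feeds SET active = 0
                       WHERE id = ? AND failure_count >= ?')
           ->execute([$feedId, $maxFailures]);
    }

    // A successful fetch clears the failure counter.
    function record_success($db, $feedId) {
        $db->prepare('UPDATE feeds SET failure_count = 0 WHERE id = ?')
           ->execute([$feedId]);
    }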
Presumably all these feeds you're adding will be validated for correctness before they're flagged as "valid".
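A minimal "does it fetch and parse as XML" check is probably enough at that stage; this reuses the fetch_feed() sketch above, and you could of course validate more strictly against the RSS/Atom formats:

    // Quick sanity check before marking a feed as valid.
    function feed_looks_valid($url) {
        $body = fetch_feed($url);
        if ($body === false) {
            return false;
        }
        libxml_use_internal_errors(true);   // don't spray warnings on bad XML
        return simplexml_load_string($body) !== false;
    }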
It's complicated, but there's nothing that isn't technically feasible in there.
Mark