Tim's article is the one I linked to in my post; note that he was specifically talking about off-site search engines, such as Google, and that the article is almost 7 years old. I'll leave the homework to you, but I believe I've read (and dagon's post implies) that this isn't a problem any longer for the vast majority of "spiders".
The Zoom documentation is simply stating that their spider can only "walk the web" --- i.e., it's not going to connect to your DB server and run queries (and I wouldn't expect or want it to). Which is to say, you need a link somewhere on your site to every page you want the spider to find. It doesn't indicate that any particular type of URI construction is "bad", only that the spider can't guess what the URIs are without links. So, if every story is linked in a "tree" that originates on your home page, you should be OK.
Now, if every story isn't linked --- say only the most recent ones appear on the site at any given time --- it wouldn't necessarily be trivial, but for an average-size site it's well within reason to hack together a long "site map" page that's really just a list of links. For example, if you're storing stories with an integer ID in a database and calling them from "page.php?id=n", your script would select the IDs of all published stories (or some other criteria --- all stories, all stories marked "searchable", all stories that are "fresh enough", etc.) and print them out as a long list of links. Then, as long as this "site map" page was linked from another page on the site (the main page, for example), the Zoom bot would "spider" it and index those stories.
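A rough sketch of what such a "site map" script might look like --- the table and column names (stories, id, title, published) and the SQLite-in-memory connection are just placeholders for illustration; you'd point PDO at your real database instead:

```php
<?php
// Hypothetical "site map" generator: select the IDs of all published
// stories and print them as a plain list of links a spider can follow.
$db = new PDO('sqlite::memory:');  // swap in your real DSN here

// Demo data so this sketch runs standalone --- not needed on a real site.
$db->exec("CREATE TABLE stories (id INTEGER PRIMARY KEY, title TEXT, published INTEGER)");
$db->exec("INSERT INTO stories VALUES (1, 'First story', 1), (2, 'Unpublished draft', 0), (3, 'Third story', 1)");

// The actual work: one link per published story, pointing at page.php?id=n.
$rows = $db->query("SELECT id, title FROM stories WHERE published = 1 ORDER BY id");
foreach ($rows as $row) {
    printf("<a href=\"page.php?id=%d\">%s</a><br>\n",
           $row['id'], htmlspecialchars($row['title']));
}
```

Swap the WHERE clause for whatever criteria you like (all stories, "searchable" only, "fresh enough" only), and link the resulting page from your main page so the bot can reach it.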
Ditto Google, Yahoo, AltaVista, etc. --- or that's my best guess, anyway. ;-)