There is no limit per se on the size of an array - I've run arrays over 100 MB in size without issues. However, since you're working in a shared environment, you'll likely run into issues with memory usage...
PHP caps memory usage at a predefined value - the memory_limit setting in php.ini, which on a shared host is set for you. The default is 8 MB, which means that your code and data cannot occupy more than that amount of memory at once or your script barfs.
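If you're not sure what your host allows, you can check from inside a script. A quick sketch (memory_get_usage() may not be compiled into every PHP build, so treat that part as optional):
// the cap your host has configured (php.ini's memory_limit directive)
echo ini_get('memory_limit'), "\n";
// how much the script is using right now, if your build supports it
if (function_exists('memory_get_usage'))
    echo memory_get_usage(), " bytes in use\n";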
This is where algorithms come into play... what you want to do can most certainly be done. But you have to process your information in chunks (typically, line by line) and write immediately to disk to keep memory usage to an absolute minimum.
For example, you could reduce memory usage by eliminating empty elements before you even consider putting them into an array. Something like this:
$fp = fopen('outputfile', 'w');
$sql = "select * from bignastytable where (some condition)";
$res = pg_exec($dbconn, $sql);
$i = 0;
while ($row = @pg_fetch_array($res, $i++, PGSQL_ASSOC))
{
    // this weeds out rows where importantfield is empty
    if ($row['importantfield'])
    {
        fwrite($fp, $row['importantfield'] . "\n");
    }
}
fclose($fp);
Now, without ever using more memory than a single row requires, we've weeded out all rows where importantfield is empty. Of course, you could perform this trick handily by modifying your SQL statement instead, but I think it proves the point.
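If you do take the SQL route, the filter might look something like this (assuming importantfield is a text column; adjust for your schema):
$sql = "select importantfield from bignastytable
        where (some condition)
          and importantfield is not null
          and importantfield <> ''";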
Another issue to consider is what counts as a "transaction". 90,000 transactions could mean 90,000 database connections, or 90,000 SQL transactions, or 90,000 statements.
They are all closely related, but aren't the same.
You can use a single connection as many times as you want in a single web page hit to execute as many SQL statements as you like. You are already familiar with an SQL statement; see the example above.
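For instance, one connection opened at the top of the script can serve every statement that follows (a rough sketch; the connection parameters are made up):
$dbconn = pg_connect("dbname=mydb user=myuser");
$res1 = pg_exec($dbconn, "select count(*) from bignastytable");
$res2 = pg_exec($dbconn, "select * from someothertable");
// ...and so on, as many statements as you like over the one connection
pg_close($dbconn);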
But the last option, an SQL transaction, is a feature of any ACID-compliant database (such as PostgreSQL, of which I am fond, and whose feature set MySQL is gradually approaching) wherein you can wrap any number of SQL statements into a single atomic unit of work.
Consider:
insert into tablea (f1, f2, f3) values ($x, $y, $z);
insert into tableb (x1, x2, x3) values ($a, $b, $c);
insert into tabler (k1, k2, k3) values ($k, $l, $m);
What if all these inserts belong to a single action (updating a user database, for example), but an error doesn't strike until the insert into tabler, and you need tabler or the user database is left invalid?
What then? SQL transactions to the rescue!
begin transaction;
insert into tablea (f1, f2, f3) values ($x, $y, $z);
insert into tableb (x1, x2, x3) values ($a, $b, $c);
insert into tabler (k1, k2, k3) values ($k, $l, $m);
rollback;
When you call 'rollback', everything done since 'begin transaction' is undone. Completely and cleanly. If you want to keep everything in the transaction, you execute "commit;" instead of "rollback;".
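From PHP it might look roughly like this (a sketch, assuming the $dbconn connection from earlier and relying on pg_exec() returning false when a statement fails):
pg_exec($dbconn, "begin");
$ok = pg_exec($dbconn, "insert into tablea (f1, f2, f3) values ($x, $y, $z)")
   && pg_exec($dbconn, "insert into tableb (x1, x2, x3) values ($a, $b, $c)")
   && pg_exec($dbconn, "insert into tabler (k1, k2, k3) values ($k, $l, $m)");
if ($ok)
    pg_exec($dbconn, "commit");   // keep everything
else
    pg_exec($dbconn, "rollback"); // undo everything since "begin"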
Depending on how your service provider interprets a "transaction", you may find that all 102,000 inserts fit into a single "transaction", leaving you with 89,999 more to go.
Lastly, you're doing this to generate a CSV. Why not generate the CSV off-site and then upload it periodically via a cron job or some similarly automatic method?
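A rough sketch of such an off-site script, tying the pieces together (fputcsv() needs a reasonably recent PHP, and the table, fields, and path are made up for illustration):
$dbconn = pg_connect("dbname=mydb user=myuser");
$fp = fopen('/tmp/export.csv', 'w');
$res = pg_exec($dbconn, "select f1, f2, f3 from bignastytable where (some condition)");
$i = 0;
while ($row = @pg_fetch_array($res, $i++, PGSQL_ASSOC))
{
    // one row at a time, straight to disk - memory use stays flat
    fputcsv($fp, $row);
}
fclose($fp);
// then a cron job can upload /tmp/export.csv with scp, ftp, or whatever fits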