Not sure if this has been resolved or not, but I shall put my 2 pence worth in, in case it helps someone else.
I had this problem recently, where I had to import 1.5 million tab-delimited rows from a file into a Postgres table once a day.
I looked at looping through the file row by row, and very quickly looked away again because of the strain it put on the server and the time the process took.
What I eventually did was use COPY to import the data into a temporary table, then manipulate the data once it was inside Postgres. It's not ideal, but in my case the entire import process (dropping/recreating indexes, importing the data, setting up the primary key sequence, etc.) rarely exceeds 10 minutes. Something along these lines (table names, columns and the file path below are just placeholders, not my actual setup):
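```sql
-- Rough sketch of the approach; adjust names, types and the path to suit.

-- Staging table that mirrors the layout of the incoming file.
CREATE TEMP TABLE staging_import (
    col_a   text,
    col_b   text,
    col_c   integer
);

-- Bulk-load the tab-delimited file in one pass.
-- (COPY ... FROM reads a file on the server; use psql's \copy instead
-- if the file lives on the client machine.)
COPY staging_import FROM '/path/to/daily_export.tsv'
    WITH (FORMAT text, DELIMITER E'\t');

-- Drop indexes on the target table before the big insert, then
-- clean up / transform the staged rows and move them across.
INSERT INTO target_table (col_a, col_b, col_c)
SELECT trim(col_a), col_b, col_c
FROM   staging_import
WHERE  col_c IS NOT NULL;

-- Recreate indexes afterwards; rebuilding once is much cheaper than
-- maintaining them for every inserted row.
CREATE INDEX idx_target_col_a ON target_table (col_a);
```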
Ten minutes sounds like a lot, but the main reason I used this method was that it puts almost no strain on the system, and I found it to be around 500-600% quicker than looping through the file and processing each row.