Here are some ideas that might help:
Would it be possible to eliminate the duplicate inserts altogether by creating a unique index and then ignoring/suppressing the "duplicate insert" errors coming from the server?
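To sketch that idea, here's a minimal example using Python's sqlite3 module (table and column names are made up for illustration): a unique index makes the database reject duplicate rows, and the application simply catches and suppresses the resulting error.

```python
import sqlite3

# Hypothetical table/column names for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TestTable (FieldX TEXT)")
# The unique index makes the server reject duplicate inserts outright.
conn.execute("CREATE UNIQUE INDEX ux_fieldx ON TestTable (FieldX)")

inserted, skipped = 0, 0
for value in ["a", "b", "a", "c", "b"]:
    try:
        conn.execute("INSERT INTO TestTable (FieldX) VALUES (?)", (value,))
        inserted += 1
    except sqlite3.IntegrityError:
        # Suppress the "duplicate insert" error instead of failing the batch.
        skipped += 1

print(inserted, skipped)  # 3 rows kept, 2 duplicates rejected
```

The same pattern applies to any server that raises a constraint-violation error on duplicate keys; only the exception type changes.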
Could you add an AutoIncrement field to the table to create a unique key and then delete records based on that key? The following works in SQL Server (but only because it supports subqueries); RecordId is the AutoIncrement column:
DELETE FROM TestTable
WHERE RecordId NOT IN
    (SELECT MIN(RecordId)
     FROM TestTable
     GROUP BY FieldX)
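As a quick sanity check of that query, here's the same pattern run through SQLite from Python (SQLite also supports subqueries; TestTable, RecordId, and FieldX mirror the names above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE TestTable "
    "(RecordId INTEGER PRIMARY KEY AUTOINCREMENT, FieldX TEXT)")
conn.executemany("INSERT INTO TestTable (FieldX) VALUES (?)",
                 [("a",), ("b",), ("a",), ("c",), ("a",)])

# Keep only the lowest RecordId in each FieldX group; delete the rest.
conn.execute("""
    DELETE FROM TestTable
    WHERE RecordId NOT IN (SELECT MIN(RecordId)
                           FROM TestTable
                           GROUP BY FieldX)
""")

rows = conn.execute(
    "SELECT RecordId, FieldX FROM TestTable ORDER BY RecordId").fetchall()
print(rows)  # [(1, 'a'), (2, 'b'), (4, 'c')]
```

Note that GROUP BY should list every field that defines a "duplicate" - if two columns together identify a duplicate, group by both.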
Could you add an AutoIncrement field as a primary key and then open an update cursor on the table:
Select File, RecordId from TestTable order by File
tempfile = ""
while not EOF
    if tempfile <> File then
        tempfile = File
    else
        delete record
    fetch next
In this scenario, each time the field value changes, you save it to a temp variable; every subsequent fetch with the same value is a duplicate, so you delete it. (FYI - delete cursors only work on tables that have a primary key.)
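The cursor loop above can be approximated in Python like so (again with illustrative table/column names): walk the rows ordered by the duplicated field, remember the previous value, and delete every row that repeats it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TestTable (RecordId INTEGER PRIMARY KEY, File TEXT)")
conn.executemany("INSERT INTO TestTable (RecordId, File) VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "a"), (4, "a"), (5, "c")])

prev = None
to_delete = []
# Ordering by File groups the duplicates together, like the cursor's ORDER BY.
for record_id, file_value in conn.execute(
        "SELECT RecordId, File FROM TestTable ORDER BY File, RecordId"):
    if file_value != prev:
        prev = file_value            # first row of a new group: keep it
    else:
        to_delete.append(record_id)  # same value as the last fetch: duplicate

conn.executemany("DELETE FROM TestTable WHERE RecordId = ?",
                 [(rid,) for rid in to_delete])
print(sorted(to_delete))  # [3, 4]
```

Collecting the ids first and deleting afterwards avoids modifying the result set while iterating over it, which is the same reason real update cursors require a primary key to locate the row being deleted.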
I'm sure there are other solutions, and some of these might not work for your particular situation, but maybe this will give you some ideas...