Let's assume that each ad has an index associated with it. So Frank's House of Penguins would be ad 1, Joe's Lawn Gnome Emporium would be 2, Le Chapeau Rouge Restaurant 3, and so on, up to, say, 100. And let's say that Frank and Joe (of their respective eponymous businesses) had paid to make their ads appear three times as often as the others. You would simply add them again, so that ads 101 and 102 would also be Frank's House of Penguins, and 103 and 104 would also be Joe's Lawn Gnome Emporium. Then, instead of generating a random int 0 < n <= 100, you would do 0 < n <= 104, and then each of the ads you wanted would have the extra weight because they have more indices associated with them.
You could do this without much database duplication, too. You might have a table with the ad key, the ad itself, and a weight. From that you'd build a hash associating each possible random value with the key of the ad displayed for it. So, for the database proposed above, the hash would associate ad #1 with random values 1, 101, and 102 (although automatic generation would likely yield sequential values instead). Searching this hash (and generating it in the first place) would be faster than querying the database each time for the key based on the weights, and would likely perform better for databases with large numbers of ads.
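The hash-based variant could be sketched like this (the table rows and names here are illustrative, not a real schema):

```python
import random

# Hypothetical (key, ad, weight) rows, standing in for the table
# of ad key, ad, and weight described above.
rows = [(1, "Frank's House of Penguins", 3),
        (2, "Joe's Lawn Gnome Emporium", 3),
        (3, "Le Chapeau Rouge Restaurant", 1)]

# Build the hash once: each ad key claims as many random values
# as its weight. Generation is sequential, so ad #1 gets 1-3 here
# rather than scattered values.
value_to_key = {}
next_value = 0
for key, _ad, weight in rows:
    for _ in range(weight):
        next_value += 1
        value_to_key[next_value] = key

total = next_value  # 7 slots for the rows above

def pick_key():
    # One hash lookup per request instead of a weighted DB query.
    n = random.randrange(1, total + 1)   # 1 <= n <= total
    return value_to_key[n]
```

Each key's share of the hash is proportional to its weight, so selection stays weighted while each request costs only a single dictionary lookup.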