I have a script that finds images in certain folders and displays them. It also sets each image as the background of a table; in the foreground there is just a transparent GIF stretched to the same size, to stop people from saving the image.

The table and the transparent image are both given the dimensions of the actual image using the getimagesize() function.
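A minimal sketch of that setup, assuming a local image path and a 1x1 transparent clear.gif (both names are just examples, not the actual script):

```php
<?php
// Wrap one image in the table trick: the real file is the table
// background, and a stretched transparent GIF sits on top of it.
function image_table($image, $width, $height)
{
    return "<table background=\"$image\" width=\"$width\" height=\"$height\">"
         . "<tr><td><img src=\"clear.gif\" width=\"$width\" height=\"$height\" alt=\"\"></td></tr>"
         . "</table>";
}

// Hypothetical path; getimagesize() reads the dimensions from disk.
$path = 'media/ff9/images/6.jpg';
if ($info = @getimagesize($path)) {
    echo image_table($path, $info[0], $info[1]);
}
```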

Now my site also has hotlink protection enabled, so unless an image is viewed from my web page, it just gives a forbidden error.

My script won't work with the hotlinking feature; I get this error:

Warning: getimagesize(http://www.ffextreme.com/media/ff9/images/6.jpg): failed to open stream: HTTP request failed! HTTP/1.1 403 Forbidden in /home/ffextre/public_html/images.php on line 133

Is there a way around this? Perhaps a way of getting the image dimensions without using that function, or something that won't interfere with the hotlink feature?

    Well, getimagesize() can get info on remote files via HTTP and some other protocols, but it can do it for local files too. Why not use a relative path? Then it won't have the problem, because it will open the image from the filesystem.

    Use something like
    $info = getimagesize("media/ff9/images/6.jpg");
    instead of
    $info = getimagesize("http://www.ffextreme.com/media/ff9/images/6.jpg");

    Just make sure you get the local path right.

      As you are attempting to do all of this to block people from snaffling your images, please be aware that this is a bit of a Holy Grail, and I don't think you're there yet.

      I haven't seen the foreground/background table idea used before, and it appeals to me. You might want to check for cross-browser compatibility. I haven't messed about with table background images for quite a few years, but when I did, I noticed that different browsers treated them very differently. This may all be resolved by now; I'd test thoroughly anyway.

      Second, though, is a more serious issue: browser caching. If your visitor has this enabled, then your image will automatically be stored on their hard drive. They'll just need to search for and find it.

      Ooh... just had a thought!

      You could get around this by slicing the image so that your table is a 10x10 grid of 100 sections, each of which contains a piece of the overall image. This would increase the processing load on your side in order to slice up the image, though you could get around that by pre-slicing it and serving up the pre-prepared fragments. It would increase bandwidth usage, as the client would need to raise a hundred requests instead of one to get the image. It would also increase client-side processing needs in order to piece it all back together.

      The advantage would be that your visitor would have to trawl through their cache, find the 100 pieces and stitch them together by hand. Not impossible, certainly automatable with a custom client, but an extra hurdle all the same.
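      For what it's worth, the pre-slicing could be done once, offline, with the GD extension. A rough sketch, where the source path, output directory, tile naming, and JPEG format are all assumptions:

```php
<?php
// Pre-slice a JPEG into a $grid x $grid set of tiles (requires GD).
function slice_image($src, $dir, $grid = 10)
{
    $img = imagecreatefromjpeg($src);
    $w = (int) (imagesx($img) / $grid);
    $h = (int) (imagesy($img) / $grid);

    for ($row = 0; $row < $grid; $row++) {
        for ($col = 0; $col < $grid; $col++) {
            $tile = imagecreatetruecolor($w, $h);
            // Copy one cell of the grid into its own tile image.
            imagecopy($tile, $img, 0, 0, $col * $w, $row * $h, $w, $h);
            imagejpeg($tile, "$dir/tile_{$row}_{$col}.jpg");
            imagedestroy($tile);
        }
    }
    imagedestroy($img);
}
```

      The page would then emit 100 `<td>` cells referencing tile_0_0.jpg through tile_9_9.jpg in order.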

      There's probably a very elegant Flash solution to this: you create a simple Flash movie which just contains the image. I've never worked with Flash, but I'm sure that's easy.

      Just remember though, whatever you do to display the image, it will end up on the visitor's screen and a simple screengrab will make it theirs forever.

        Yeah, so you know, it's pretty much impossible to prevent a web user from getting an image off your site. You can make it difficult for the average user, but never for someone who knows what they are doing. Most of the time you just take a screenshot and, voilà, the image is theirs with the stroke of a key.

          I once thought about using Flash or Java to layer an effect over the image. This effect would cause random areas of the image to be distorted - not a major distortion, and not a major portion of the image - maybe a slight blurring of 1-2% of it. The effect would be dynamic, so that every few milliseconds a different 1-2% of the image would be distorted.

          With any luck, this would be barely noticeable to the visitor. How noticeable would depend on how closely the visitor needs to study the image. But any screen grab would contain a static distortion, which might render it unusable.

          I never did anything to implement this, so I have no idea how workable it would turn out to be in practice.

            I'm not trying to make it impossible for people to get the images; I'm just making it a pain in the ass so some guy that's making a website doesn't mass-save all our images... if he wanted to, it'd take him like a day.

            Most people aren't smart enough to check the cache, and if they are... oh well, lol.

            Also, not letting people get the direct address for the image (unless they check the source, which most people don't) makes them less likely to hotlink.

            I decided to just disable the hotlink protection for image files (it's still enabled for MP3s and the like; can't have bandwidth drain, heh).

            drew010 wrote:

            Well, getimagesize() can get info on remote files via HTTP and some other protocols, but it can do it for local files too. Why not use a relative path? Then it won't have the problem, because it will open the image from the filesystem.

            use something like
            $info = getimagesize("media/ff9/images/6.jpg");
            instead of
            $info = getimagesize("http://www.ffextreme.com/media/ff9/images/6.jpg");

            just make sure you get the local path right.

            Ah yes, tried that - it works now... I tried it a while ago and got a lot of errors; I must not have had the path right. Thanks.

              Mistah Roth wrote:

              I'm not trying to make it impossible for people to get the images; I'm just making it a pain in the ass so some guy that's making a website doesn't mass-save all our images... if he wanted to, it'd take him like a day.

              Betcha it wouldn't.

              Method 1.
              Write a PHP program to request the first page which contains an image, grab the image, and then request the next page containing an image - repeat until all pages are processed. The "grab the image" logic would involve searching the page with regular expressions to find the required image URL. That image could then be requested directly and stored. Even if the URL links to a gatekeeper program which decides whether or not to display the image, the gatekeeper could be fooled - the script can spoof the referrer, browser type, etc. A couple of hours to write the script, minutes to run it. It could even be scheduled to run every day as a cron job, ensuring the image grabber stays completely up to date. Total time needed: 3-4 hours.
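              To illustrate how little code that takes (not to encourage it) - the regex, headers, and function names below are my own assumptions, just a sketch:

```php
<?php
// Pull JPEG URLs out of a fetched page with a regular expression.
function extract_image_urls($html)
{
    preg_match_all('/<img[^>]+src="([^"]+\.jpe?g)"/i', $html, $m);
    return $m[1];
}

// Fetch a page while spoofing the headers a gatekeeper script checks.
function fetch_page($url)
{
    $ctx = stream_context_create(array('http' => array(
        'header' => "Referer: $url\r\nUser-Agent: Mozilla/5.0\r\n",
    )));
    return file_get_contents($url, false, $ctx);
}
```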

              Method 2.
              Use a macro recorder to fire up a browser, request the first page which contains an image, dump a screenshot of the complete browser window to disk, request the next page containing an image - repeat until all pages are processed. The screen dumps could then be fed through a batch processor to crop them so that only the actual image, and none of the page "furniture", remains. Total time needed: probably in the region of 4-5 hours. And again, it could all be cronned.

              I don't mean to be discouraging; I just want to prepare you for the fact that any image protection scheme can be foiled easily and repeatedly, without a lot of effort, so long as the image to be protected is somehow displayed onscreen in all its glory. If you're doing this site for a client, their expectations need to be set: anything you do is at best a speed bump, not an anti-theft device.

                How about this:
                You append an ID to the image URL consisting of the current date on your server, e.g. date("YDmHms") (or something like that). You do this as the image is loaded, and then in your code you ensure the code in the image URL matches your system clock. This would reduce the chance of someone linking to it, because their system clock would have to be exactly the same as yours 😉 and the link would have to be updated every time they loaded the page, too 😃

                just an idea..
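                A more tolerant variant of that idea, sketched below: rather than requiring an exact clock match, hash the date into a short-lived token that the image script verifies. The secret and the 60-second window are assumptions:

```php
<?php
// Hypothetical shared secret - the page generator and the image
// script must both know it.
define('TOKEN_SECRET', 'change-me');

function image_token($time)
{
    // Bucket the time so the token stays valid for up to a minute.
    return md5(TOKEN_SECRET . floor($time / 60));
}

function token_valid($token, $now)
{
    // Accept the current and previous bucket to allow a little skew.
    return $token === image_token($now) || $token === image_token($now - 60);
}
```

                The page would emit `<img src="image.php?id=6&t=<token>">`, and image.php would serve the file only when token_valid() passes.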

                  chrislive wrote:

                  How about this:
                  You append an ID to the image URL consisting of the current date on your server, e.g. date("YDmHms") (or something like that). You do this as the image is loaded, and then in your code you ensure the code in the image URL matches your system clock. This would reduce the chance of someone linking to it, because their system clock would have to be exactly the same as yours 😉 and the link would have to be updated every time they loaded the page, too 😃

                  just an idea..

                  That might hoodwink anyone who is deep-linking directly to the images, but it wouldn't be a major challenge to work out the time offset once and generate the links dynamically after that. You could make this more complex by munging the fact that you're using the date, so that it becomes harder to crack. Any solution like this will cost you in processing power, though, and may not scale terribly well.

                  It still wouldn't stop someone from generating local copies of the images, though, which I think was the original intention.
