Hi all, how's everyone doing?

I'm thinking of developing a little something that will make it easier for me to update the PHP scripts (or any other files) for my application.

I basically have a WAMP-based application that's deployed to several different clients.

Every now and then, I fix a bug in the software and test it on my local dev server. When all is good, I FTP into each client's server one by one and copy the new files over (well, basically the whole application directory, so I'm sure not to miss anything).

Obviously this has become quite a hassle, and I realized I need a more automated update system.

I'm looking at a few strategies and would love to hear what my fellow PHP ninjas think.. 😛

STRATEGY 1:
- Set up an FTP server online which hosts the latest (and greatest) project files/directory.
- Install an FTP sync tool on each client server and have it run periodically.
---> I'm not too keen on this because it adds a dependency on an FTP client and all that, and I don't feel secure leaving FTP passwords on client machines.

STRATEGY 2:
- Set up an HTTP server which hosts the latest (and greatest) project files/directory.
- Program my application to send a list of the files currently in its directory, along with each file's size and modification date, to a "check_update.php" script on my server.
- check_update.php then compares that list with its own list and responds with a list of any files that don't match (these are the files to be updated).
- The client application then contacts another script, "do_updates.php", and POSTs the list of files it is requesting. That script fopen()s each file in the list, puts the contents in an array, and sends them all back in its reply.

---> I know this is a bit of a security issue: my PHP scripts and other files would be exposed to piracy if someone figured out this link and the formats I'm using, BUT I will have the following measures in place:
-----> 1) I have an API key for EACH client residing in an encrypted "key.ini", and it needs to match what's securely recorded on my server.
-----> 2) I have a mechanism in place where "do_updates.php" will reject any update that fetches 50% or more of the files on the server (for massive updates I don't mind FTPing into my clients' machines).
-----> 3) I'm actually using the latest ionCube encoding with licensing, so it won't really matter much if someone does get hold of the complete project files/folder.
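To make the file-list exchange concrete, here is a rough sketch of how the client-side manifest could be built. Everything here is an assumption for illustration: the pipe-separated layout, the manifest filename, and the endpoint URL are placeholders, `find -printf` is GNU-specific, and on the actual WAMP clients the equivalent would be written in PHP.

```shell
# Build a "relative-path|size|mtime" manifest of every file in the app
# directory (GNU find; the field layout is a placeholder, not a standard).
build_manifest() {
  find "$1" -type f -printf '%P|%s|%T@\n' | sort
}

# Example: manifest of the current directory.
build_manifest . > manifest.txt
# The client would then POST it to the update server, e.g.:
# curl --data-binary @manifest.txt https://update.example.com/check_update.php
```

check_update.php would then diff this list against its own copy of the latest release and reply with the mismatched paths.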

STRATEGY 3:
- Perhaps there's already an existing solution for this kind of thing? lol

Interested to hear your thoughts guys...

    I recommend doing it with git, with a local repo on each server, including the development server. When you're done with dev/test, you push to GitHub. Then you can either SSH to each client and manually fetch from GitHub, or you could set up a script to handle those updates for you. Then all you need is a deployment script which sends a request to each client, letting its update script know it's time to fetch updates from git.

    In order to get private repos on GitHub you'd have to pay for an account (< $10 a month). The alternative would be to run your own private git server.

      Thanks, man. I'll look into this. I'm still a bit wary because this adds a dependency (Git) or cost (if with a paid account), and each client would be another security concern for me, since the GitHub user/pass would be stored on said client.

      What do you think about my STRATEGY 2?
      For me this is the best scenario, since I don't need any other dependencies and all I have to take care of and secure is my own "updating backend". But it will take some code work, so before I jump on it I'm waiting to hear, I guess, validation.. or someone saying this is stupid. lol

        I'm not sure of all the details of your setup, but my first thought is that it's disappointing that you are using WAMP for your development. Both OS X and *nix have the rsync command, which could make your life a lot easier. I also have a vague recollection of some tool for OS X where you could type in a command just once and execute it on dozens of servers at once; I think it was Apple Remote Desktop or something. Both of these tools would take care of the file comparisons for you, so you wouldn't have to write the code you describe in strategy 2.

        I think you should be careful of having your primary application publish information about its source files as this invites abuse from malicious visitors.

        You might want to construct your web app such that it is built with a script that can be run to make it download an update. Then you could make this update available in the form of a patch on your central web server (the one that hosts the master code). Alternatively, you could construct your web app such that your centralized server could post an update payload to each of the many running clients, and each client would understand how to interpret the payload and apply the patch. It is obviously important that the central server authenticate itself to apply the patch. You also might want each installation of your application to have a distinct password for these updates, to make sure that if one gets compromised, the others are not.

          sneakyimp;11044063 wrote:

          I'm not sure of all the details of your setup, but my first thought is that it's disappointing that you are using WAMP for your development. Both OS X and *nix have the rsync command, which could make your life a lot easier. I also have a vague recollection of some tool for OS X where you could type in a command just once and execute it on dozens of servers at once; I think it was Apple Remote Desktop or something. Both of these tools would take care of the file comparisons for you, so you wouldn't have to write the code you describe in strategy 2.

          I think you should be careful of having your primary application publish information about its source files as this invites abuse from malicious visitors.

          You might want to construct your web app such that it is built with a script that can be run to make it download an update. Then you could make this update available in the form of a patch on your central web server (the one that hosts the master code). Alternatively, you could construct your web app such that your centralized server could post an update payload to each of the many running clients, and each client would understand how to interpret the payload and apply the patch. It is obviously important that the central server authenticate itself to apply the patch. You also might want each installation of your application to have a distinct password for these updates, to make sure that if one gets compromised, the others are not.

          Hi SneakyImp

          About WAMP: there's nothing wrong with it. I've been on WAMP, XAMPP, and other Windows-based "LAMP" packages for almost 10 years and have deployed systems in heavy-use situations that worked flawlessly. I stuck with WAMP because it's really easy to manage (being on a Windows machine). I know some *nix, but I guess not well enough, and the technicians who help support my software are also more comfortable working on Windows (it's harder to find and hire technicians with *nix knowledge, and they're more expensive). Add to that the tons of software available on Windows that helps my application in one way or another (such as biometrics SDKs, etc.). I could go on listing reasons why I use WAMP. Just defending WAMP a little bit there, hehe.

          You make a good point with rsync; I've not really explored it much. I'd like to, but if it's a *nix-only solution, then perhaps for some of my other projects, but not for this one in particular.

          You might want to construct your web app such that it is built with a script that can be run to make it download an update. Then you could make this update available in the form of a patch on your central web server (the one that hosts the master code). Alternatively, you could construct your web app such that your centralized server could post an update payload to each of the many running clients, and each client would understand how to interpret the payload and apply the patch. It is obviously important that the central server authenticate itself to apply the patch. You also might want each installation of your application to have a distinct password for these updates, to make sure that if one gets compromised, the others are not.

          Yep, covered in strategy 2. Although I hadn't thought about the CENTRAL SERVER managing updates for each client; that may come in handy, since some of these clients have customer-specific customizations. Thanks for that.

            Tea_J;11044079 wrote:

            About WAMP - there's nothing wrong with wamp. I've been on WAMP, XAMP, and other windows based "LAMP" packages...Just defending WAMP there a little bit hehe

            Yes, I wasn't meaning to disparage PHP-on-Windows. I merely meant to suggest that *nix systems have some really handy command-line tools, widely available, that make this sort of thing easier, whereas Windows, to my knowledge, does not.

            Tea_J;11044079 wrote:

            Yep, covered in strategy 2. Although I hadn't thought about the CENTRAL SERVER managing updates for each client; that may come in handy, since some of these clients have customer-specific customizations. Thanks for that.

            It's not uncommon for such systems (my wireless router, for instance) to offer a page where one can upload new firmware or an update of some sort. You could easily construct a page, included in each of your distributed installations, that authenticates a user and allows one to POST a zip file containing a system update, possibly including DB dumps, replacement PHP scripts, etc.

              Alrighty, thanks very much for the vote of confidence in strategy 2. It helps me move forward with peace of mind.

              🙂 🙂

                7 days later

                Implementing strategy 2 in terms of git is still a lot easier than building your own solution from scratch, with no additional costs.

                Each client will need a git client, and so does your own server containing the latest code. You can then update a client in three easy steps.

                1. Get the SHA-1 of the client's current commit (on the client's server):

                  $ git log -1 | head -1
                  commit [client-current-commit-sha1]
                  

                  (The pipe may not work on Windows, and head might not exist there either, but you could simply use git log -1 on its own; it's no more than a few lines of output.)
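                  A simpler alternative that avoids both problems: git rev-parse prints just the commit id, with nothing to parse, and behaves the same in cmd.exe. (Demo in a throwaway repo; the file name and commit message are made up.)

```shell
# Throwaway repo just for the demo; any existing repo works the same.
repo=$(mktemp -d) && cd "$repo" && git init -q .
git config user.name "Tea_J" && git config user.email "tea@example.com"
echo "v1" > app.php && git add app.php && git commit -qm "initial"

# Print the full 40-hex SHA-1 of the current commit; no pipe, no head(1).
git rev-parse HEAD
```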

                2. Create the patch file (on your server):

                  $ git format-patch [client-current-commit-sha1]..HEAD --stdout > /path/to/docroot/file.patch
                  
                3. Get the patch to the client's server and apply it:

                  $ curl http://your-server.example.com/file.patch | git am
                  

                You might set it up so that you have a script which sends a GET request to https://client.example.com/latest-commit, which returns the output of git log -1.
                Extract the commit SHA-1 (40 hex digits) from the first line of the response.
                Execute git format-patch.
                Send the patch by POST request to client.example.com/apply-patch, which then runs git am.
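                That flow could be sketched like this (the endpoint URLs are placeholders; only the SHA extraction and the format-patch call are concrete):

```shell
# Pull the 40-hex commit id out of the first line of a "git log -1" reply.
extract_sha() {
  sed -n '1s/.*\([0-9a-f]\{40\}\).*/\1/p'
}

# Deploy flow (illustrative endpoints; defined here but not executed).
deploy() {
  sha=$(curl -fsS "https://client.example.com/latest-commit" | extract_sha)
  git format-patch "$sha"..HEAD --stdout > update.patch
  curl -fsS -X POST --data-binary @update.patch \
      "https://client.example.com/apply-patch"
}
```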

                Alternatively, you could keep a list of IP addresses that are allowed to fetch patches and have the clients' servers auto-update by sending a GET request with their current SHA-1, receiving the patch in the response, and applying it.

                Using PsTools for remote execution (on Windows) may also be an alternative. I have no idea how secure that would be, but it should allow you to initiate execution of the git tools remotely.

                  This looks very helpful. I don't understand the second command and haven't had any luck getting it to work on a repo of mine. E.g.:

                  git format-patch 90c7a00d9e285c70a68e4e82e24e1fe8343233b8 ..HEAD --stdout // NO OUTPUT!

                  Also, am I correct that "git am" applies a patch to a repo? Can you suggest where I might learn more about the use of this command? I've tried git am --help and am not sure I understand what it actually does. In particular, this doesn't help me understand what your command is doing:

                  DESCRIPTION
                         Splits mail messages in a mailbox into commit log message, authorship information and patches, and applies them to the current branch.
                  
                    sneakyimp;11044165 wrote:

                    haven't had any luck getting it to work on a repo for me.

                    3233b8 ..HEAD

                    That's because of the leading whitespace before the double dots.
                    The two commits specify a range here, so the double dots must connect them directly:

                     git format-patch 90c7a00d9e285c70a68e4e82e24e1fe8343233b8..HEAD --stdout
                    sneakyimp;11044165 wrote:

                    Also, am I correct that "git am" applies a patch to a repo?

                    Yes. It goes together with format-patch, just like apply goes together with patch files created by git diff.

                    sneakyimp;11044165 wrote:

                    Can you suggest where I might learn more about the use of this command? I've tried git am --help and am not sure I understand what it actually does. In particular, this doesn't help me understand what your command is doing:

                    DESCRIPTION
                           Splits mail messages in a mailbox into commit log message, authorship information and patches, and applies them to the current branch.
                    

                    Things will become clear once you remove the extra whitespace and get output from format-patch. format-patch creates patches and formats them as mail messages, with one commit per message/file. Using --stdout > outfile, you can combine them all into one file. When you git am such a file, each patch is applied in succession, including committing the changes; a patch made with git diff (and applied with git apply) would only affect the working tree.

                    Manpage: https://www.kernel.org/pub/software/scm/git/docs/git-am.html (which I guess contains the same as --help)
                    Some info about patch workflow: http://rypress.com/tutorials/git/patch-workflows
                    Other than that, you can usually find more resources when searching: stackoverflow questions, blog entries, examples etc.

                    Perhaps I should point out that this patch workflow is reversed from how it usually works (and was intended to work) when used to implement "strategy 2" above. Usually, you send patches to the "main" repo. Patches are then applied in the main repo, and anyone who wants to continue working fetches from that repo again. This is done so that commit histories remain the same everywhere; if different people applied commits in different orders, things wouldn't look the same everywhere. Moreover, commit checksums are partly based on commit-time information, such as the committer and commit date.

                    In order to use patches to update clients' repos as per "strategy 2", the patches now flow from the main repo to the others, instead of from the others to main. I still believe it is better to distribute updates by having the client repos fetch from the main repo... but if strategy 2 is chosen as outlined above, it is still simpler to implement with patch files. You just have to make sure commits are made "by the same user" (use git config user.name and git config user.email to set each client repo to whatever you use locally). When you run git am, also supply --committer-date-is-author-date to retain each commit's original timestamp.
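                    Putting those pieces together, here's a self-contained demo of the reversed flow; the repo layout, file name, and commits are made up for illustration:

```shell
set -e
work=$(mktemp -d) && cd "$work"

# "main": the repo on your server holding the latest code.
git init -q main && cd main
git config user.name "Tea_J" && git config user.email "tea@example.com"
echo "v1" > app.php && git add app.php && git commit -qm "initial release"

# "client": a deployed installation, configured with the same identity.
cd .. && git clone -q main client
cd client && git config user.name "Tea_J" && git config user.email "tea@example.com"
base=$(git rev-parse HEAD)    # the client's current commit

# A fix lands in main; build one patch file covering everything since $base.
cd ../main
echo "v2" > app.php && git commit -qam "bug fix"
git format-patch "$base"..HEAD --stdout > ../fix.patch

# Apply it on the client, keeping the original commit timestamps.
cd ../client
git am -q --committer-date-is-author-date < ../fix.patch
cat app.php    # the client now has the fix
```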

                      Thanks for the clarification, although I'm still a bit confused. I'll try to do some reading. In the meantime, I'm wondering why this command returns no output:

                      git format-patch 90c7a00d9e285c70a68e4e82e24e1fe8343233b8..HEAD --stdout
                        6 days later