Basic question, and I'm new to this stuff. I have three VPS servers, with several websites on each of two of them. The third server I want to use as a "hub" to administer the various websites from one spot, so I need PHP to move files between two servers. I want to run an admin script on HubServer.mydomain.com that can move files from above the webroot to above the webroot on another server. Possible?

HubServer.mydomain.com has:
/home/uid1/
    THE_FILE.php
    IMAGE1.gif
    public_html/bla

SitesServer.mydomain.com has:
/home/uid1/
    THE_FILE.php
    image2.gif
    public_html/bla

    If more security is needed, you can do SFTP via the SSH2 PHP extension. (If the servers are talking to each other just over a local LAN, it might not be as much of a concern, as hopefully no one has a packet sniffer there. 😉 )

      5 days later

      GIT (or some similar versioning system) via SSH ...

        I've used the SSH2 extension for this very problem. The basic idea is that your hub server has a private key that it can use to authenticate as some user on the other servers, and it logs in using ssh -- just like you might connect yourself using a terminal program. You can then use [man]ssh2_scp_send[/man] to send a file from the hub server to the other server. Note that the credentials you use to authenticate via ssh2 will be connected to some user, and that user must have permission to write to the destination on the other server.
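        The key-based login described above has to be set up once: the hub keeps the private key, and the matching public half is appended to the deploy user's ~/.ssh/authorized_keys on each target server. A quick sketch of generating such a pair (the file name and key comment here are placeholders, not anything prescribed by the thread):

```shell
set -e
keydir=$(mktemp -d)

# generate an unencrypted ed25519 keypair for the hub's deploy user;
# the empty -N passphrase is what lets a script use the key
# non-interactively, so the private file must be kept well-protected
ssh-keygen -q -t ed25519 -N "" -C "hub-deploy" -f "$keydir/hub_deploy_key"

# hub_deploy_key stays on the hub; hub_deploy_key.pub is what gets
# appended to ~/.ssh/authorized_keys on each server the hub may log in to
keytype=$(cut -d' ' -f1 "$keydir/hub_deploy_key.pub")
echo "$keytype"

rm -r "$keydir"
```

        From PHP, the private file would then typically be handed to ssh2_auth_pubkey_file() before calling ssh2_scp_send().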

          dalecosp;11053001 wrote:

          GIT (or some similar versioning system) via SSH ...

          GIT can be an amazing tool if you intend to try and update a complex project on numerous servers. If your various vps servers have git working directories, you can easily update the contents of these working directories with a single git pull command. This gets tricky, though, if users are making changes to the servers. In that case, git pull might not work because some files have been changed locally. If you ALWAYS deploy code to your vps servers via git pull, however, it can be simple and effective.
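          To make the single-git-pull deploy concrete, here is a small self-contained simulation using throwaway temp directories in place of real servers (every path and name below is made up for the demo):

```shell
set -e
work=$(mktemp -d)

# a bare repo standing in for the central "hub" repository
git init -q --bare -b main "$work/hub.git"

# a developer checkout: commit a change and push it to the hub
git init -q -b main "$work/dev"
cd "$work/dev"
echo '<?php echo "v1";' > index.php
git add index.php
git -c user.email=dev@example.com -c user.name=dev commit -qm "deploy v1"
git remote add hub "$work/hub.git"
git push -q hub main

# a "vps" working directory: after the initial clone, every later
# deploy is just one git pull (which fails safely if files were
# changed locally on the vps, as noted above)
git clone -q "$work/hub.git" "$work/vps"
cd "$work/vps"
git pull -q origin main
deployed=$(cat index.php)
echo "$deployed"

cd / && rm -rf "$work"
```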

            sneakyimp;11053017 wrote:

            GIT can be an amazing tool if you intend to try and update a complex project on numerous servers. If your various vps servers have git working directories, you can easily update the contents of these working directories with a single git pull command. This gets tricky, though, if users are making changes to the servers. In that case, git pull might not work because some files have been changed locally. If you ALWAYS deploy code to your vps servers via git pull, however, it can be simple and effective.

            I've become a believer ... for those projects that have the correct architecture & supervision (central repository at the office, multiple production boxes "out there", appropriate test/approval environment, etc.).

            How it works here:

            # git add somefile.php
            # git commit -m "Fixed bug #123, flamatron speling bug, DaleCosp 1/5/16."
            # ezupdate

            Servers downstream pull the updates, it's reversible if the tester was asleep at the wheel, and with our custom "ezupdate" we even have post-commit hooks that synchronize the databases.... 🙂

              dalecosp;11053031 wrote:

              I've become a believer ... for those projects that have the correct architecture & supervision (central repository at the office, multiple production boxes "out there", appropriate test/approval environment, etc.).

              How it works here:

              # git add somefile.php
              # git commit -m "Fixed bug #123, flamatron speling bug, DaleCosp 1/5/16."
              # ezupdate

              Servers downstream pull the updates, it's reversible if the tester was asleep at the wheel, and with our custom "ezupdate" we even have post-commit hooks that synchronize the databases.... 🙂

              Similarly, I go into hipchat and type "/build <target_environment> <name_of_branch>" -- like "/build sandbox/qaTesting master" -- and it just does it. When it finishes, it comments back into hipchat that the build was completed (or why it failed). It works much like yours: it does a git pull, then compiles the JavaScript and CSS, and verifies the DB schema and some data. I can also type "/status <target_environment>" and get some nifty output that looks like:

              Current Branch: master
              Git Hash: 152e090771a6dc5f636f8871228fb69f9fdedc70
              Build Time: 2016-01-02 06:58:38

              Hipchat integrations are my new favorite thing, so anything we want to automate I build a hook into hipchat for.

                Derokorian;11053037 wrote:

                Hipchat integrations are my new favorite thing, so anything we want to automate I build a hook into hipchat for.

                Heh, our "hipchat" still involves vocal cords and openings between cubicles ... :p

                  New Year's resolution: Bring my coding skills into the modern era. You guys and your deployment scripts are impressive.

                    7 days later
                    dalecosp;11053031 wrote:

                    ...we even have post-commit hooks that synchronize the databases.... 🙂

                    Can you elaborate on how this works?

                    At my work there is serious resistance to Git by management. I am trying my best to persuade people, but it's an uphill battle. Luckily a coworker and I are working on a special project where we have (near) free rein so we are using it. Part of the resistance is that 99.99% of our projects require a database (being WordPress and all) and while keeping code synchronized is pretty easy, doing the same for databases is beyond my knowledge of the whole process. I am hoping if I can overcome that in some way, my push for Git will actually become a reality. The good news is it was mentioned in casual conversation recently and my boss seemed to agree it was "down the road", which so far is a win to me.

                      Bonesnap wrote:

                      At my work there is serious resistance to Git by management. I am trying my best to persuade people, but it's an uphill battle. (...) Part of the resistance is that 99.99% of our projects require a database (being WordPress and all) and while keeping code synchronized is pretty easy, doing the same for databases is beyond my knowledge of the whole process. I am hoping if I can overcome that in some way, my push for Git will actually become a reality.

                      So, is the resistance towards version control in general, or distributed version control, or Git in particular? I have been using distributed version control since that time in my first year of university when the Internet connection in the area I was doing project work failed for quite a while, but I stuck with Bazaar until it became exceedingly clear that Bazaar simply could not compete with Git in its adoption, whereas Git overcame some of its weaknesses vis-a-vis Bazaar. However, I can understand if management is wary of distributed version control, or is reluctant to switch to another version control system, but I cannot understand how management would say no to version control in general, even if there is no solution in place for dealing with (the versioning of) database migration.

                      As I may have mentioned elsewhere, I am actually more into Python these days, especially with the Django web framework. South, the most popular database migration tool for Django, was eventually merged into Django itself. Basically, in Django, model classes are written, then Django introspects them and generates SQL statements that are run to create the tables and indices. When a significant change to a model class is made, South (now Django) would be invoked to examine the change, then generate a Python script containing SQL statements that are then run to modify the database (or do data migrations instead of modifying the database), with dependencies on previous migrations specified so that multiple migrations can be applied in the correct order. Consequently, these Python scripts can be placed under version control, hence the migrations can be easily applied by other developers, or run with a post-commit hook, or run with a continuous integration server, etc. Just like code, it is possible for there to be conflicts, in which case they have to be resolved manually.

                      Of course, there is the other approach: the framework might have a tool that generates model classes by inspecting the database schema. I imagine that you could use a similar approach, except that you would write the PHP scripts (to be under version control) that contain the SQL statements to be run. The catch perhaps is that if the framework's tool is applied automatically, you could end up with spurious conflicts in model class files if you place them under version control, but if you are doing manual changes to some parts of these files, you may need them under version control.

                        laserlight;11053141 wrote:

                        So, is the resistance towards version control in general, or distributed version control, or Git in particular? I have been using distributed version control since that time in my first year of university when the Internet connection in the area I was doing project work failed for quite a while, but I stuck with Bazaar until it became exceedingly clear that Bazaar simply could not compete with Git in its adoption, whereas Git overcame some of its weaknesses vis-a-vis Bazaar. However, I can understand if management is wary of distributed version control, or is reluctant to switch to another version control system, but I cannot understand how management would say no to version control in general, even if there is no solution in place for dealing with (the versioning of) database migration.

                        Pretty much any change to our workflow is met with resistance, so it's not exclusive to Git, but the idea of using any kind of VCS scares management -- mainly because it's misunderstood, or just not understood at all. "What about X? What happens when Y occurs?" and so on, whenever things come up that could change how we work now. No one at my work uses Git (aside from myself and, to a lesser extent, the coworker who is with me on our project). I bet some guys here don't even know what it is.

                        With that said, we did use SVN way back when I first started, but it wasn't used properly. In fact, I am not sure why it was used at all. We stopped using it around 4 years ago. In any case, SVN and Git are very different.

                        laserlight;11053141 wrote:

                        As I may have mentioned elsewhere, I am actually more into Python these days, especially with the Django web framework. South, the most popular database migration tool for Django, was eventually merged into Django itself. Basically, in Django, model classes are written, then Django introspects them and generates SQL statements that are run to create the tables and indices. When a significant change to a model class is made, South (now Django) would be invoked to examine the change, then generate a Python script containing SQL statements that are then run to modify the database (or do data migrations instead of modifying the database), with dependencies on previous migrations specified so that multiple migrations can be applied in the correct order. Consequently, these Python scripts can be placed under version control, hence the migrations can be easily applied by other developers, or run with a post-commit hook, or run with a continuous integration server, etc. Just like code, it is possible for there to be conflicts, in which case they have to be resolved manually.

                        Of course, there is the other approach: the framework might have a tool that generates model classes by inspecting the database schema. I imagine that you could use a similar approach, except that you would write the PHP scripts (to be under version control) that contain the SQL statements to be run. The catch perhaps is that if the framework's tool is applied automatically, you could end up with spurious conflicts in model class files if you place them under version control, but if you are doing manual changes to some parts of these files, you may need them under version control.

                        I guess the issue isn't just recreating the database structure, but also its data.

                        Since WordPress is used, pretty much everything is stored in the database. If employee A is working on the project, he may install a plugin, which has its own files, but also makes several (maybe even hundreds of) entries in the database. Sharing the code with employee B via Git is relatively easy, but employee B may have installed a different plugin, and made his own changes to the database (added his own pages, posts, etc.). The challenge is to merge not just the code (which is easy) but also the database.

                        Management likes the current setup because, since the code and database are in one place, everything is exactly as the last person left it. Employee A can leave the project with plugin X installed and his work done; employee B can come on board, do his plugins and entries, and then leave; and employee A can come back, find it all there, and pick up where he left off in no time at all.

                        I probably have to do research on workflows with Git and WordPress and see how the developer community deals with it, and then come up with some kind of document explaining everything.

                          Bonesnap wrote:

                          I guess the issue isn't just recreating the database structure, but also its data.

                          Hence I noted "or do data migrations instead of modifying the database".

                            My approach is fairly low-tech. "ezupdate" is a shell script; the logic goes as follows:

                            #using SSH, instruct production servers to pull HEAD from the central repo
                            /usr/bin/ssh me@production_server /usr/bin/pull_updates
                            
                            #mysqldump affected MySQL tables to a local file (mysqldump outputs an SQL script)
                            /usr/local/bin/mysqldump mydb sometable > /tmp/sometablebody.sql
                            
                            #use echo and cat to add the "use" statement to the top of said files/scripts
                            /bin/echo "use mydb;" > /tmp/sometable.sql
                            /bin/cat /tmp/sometablebody.sql >> /tmp/sometable.sql
                            
                            #SCP the files/scripts to the production servers
                            /usr/bin/scp /tmp/sometable.sql me@productionserver:/tmp/sometable.sql
                            
                            #using SSH, instruct production servers to run the SQL files/scripts
                            /usr/bin/ssh me@productionserver "/usr/local/bin/mysql -u me -pfoo < /tmp/sometable.sql"
                            
                            #remove the files/scripts to save disk space
                            /bin/rm /tmp/sometable.sql
                            /usr/bin/ssh me@productionserver "/bin/rm /tmp/sometable.sql"

                            pull_updates on the production boxes is simple:

                            /usr/local/bin/git pull hub HEAD
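                            The dump-and-prepend steps in ezupdate can be sanity-checked locally without any database; this stand-in fakes the mysqldump output file (the table and database names are the same placeholders as above):

```shell
set -e
tmp=$(mktemp -d)

# stand-in for the mysqldump step (for real it would be:
#   mysqldump mydb sometable > "$tmp/sometablebody.sql")
printf 'CREATE TABLE sometable (id INT);\n' > "$tmp/sometablebody.sql"

# prepend the "use" statement exactly as ezupdate does
echo "use mydb;" > "$tmp/sometable.sql"
cat "$tmp/sometablebody.sql" >> "$tmp/sometable.sql"

# the resulting script now selects the right database before its DDL runs
firstline=$(head -n 1 "$tmp/sometable.sql")
echo "$firstline"

rm -r "$tmp"
```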