WARNING: VERY LONG POST. GRAB A DRINK BEFORE GETTING COMFORTABLE, OR CLICK THE BACK BUTTON OUT OF FEAR.

Starting Monday I will be on a two-week vacation. Other than a handful of days I don't really plan on doing much (more of a "staycation"). In that time, though, I'd like to resurrect a very old project and make my third attempt at creating a game (I tried twice in the past). It's more or less text based, but will live in the browser and have buttons, checkboxes, etc. My skill level and skillset are much different than when I made my first two attempts (third time's the charm, right?). I am also a lot more prepared: before beginning I mapped out the pages and menu structure, did some basic wireframe sketches, and figured out the colour scheme. I also have the project broken out into 9 phases or milestones (originally I had 3).

Anyway, below are some questions, as I'd like to do this "right" and maximize my motivation and thus the likelihood that the project will actually be completed (I'm well aware it will not be completed in the two weeks - I am just using that time to get started :p).

1a. Early branching in Git
What is sort of the accepted workflow or best practice approach in the infancy of a project regarding branches and commits? In the few projects I have worked on with Git I created the repository and the .gitignore file and that was my first commit (the message is always "Initial commit."). I really like the master/development/feature workflow as outlined here. But at what point should branches like development be created? In other words, how far into your project should you go before you stop working directly on your master branch? I have always been under the impression that you should rarely or never work on the master branch and to always consider it production code. This leads into the second question:
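To make sure I understand the branching model, I think the commands would look something like this in a throwaway repo (branch and file names are purely illustrative):

```shell
# Throwaway repo; git >= 2.28 for `init -b`.
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b master .
git config user.email you@example.com
git config user.name "You"
echo 'vendor/' > .gitignore
git add .gitignore
git commit -qm "Initial commit."

# Cut the long-lived development branch right away, while master is tiny.
git checkout -qb development

# Feature work happens on short-lived branches cut from development.
git checkout -qb feature/login
echo '<?php // login stub' > login.php
git add login.php
git commit -qm "Add login page stub."

# Merge back with an explicit merge commit, then delete the feature branch.
git checkout -q development
git merge -q --no-ff -m "Merge branch 'feature/login'" feature/login
git branch -d feature/login
```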

1b. "Commit early, commit often"
I have heard that mantra, too. That sounds like a solid plan, but is there such a thing as going overboard? I haven't really worked out how often to be making commits. I just sort of do them when I complete a reasonable "task" or logical piece of code. Is that too often? Is it considered best practice to commit larger, but more logical chunks, or to commit smaller chunks and use tagging/versioning to "group" larger, logical chunks? Here is a commit log for a "for-fun" little project I made for an old video game. Is that too often, not enough, or somewhere in between (also, are my commit messages acceptable in their brevity and description)? I find myself making a commit maybe every 5 to 10 minutes. Refactor a couple of JavaScript functions, commit. Update some HTML markup, commit. Change some CSS properties, commit. Good or bad behaviour?

1c. Versioning
I assume those who use Git use some sort of scheme to version their software. I'm personally partial to the MAJOR.MINOR.PATCH scheme. I also have some sort of obsession with versions; I'm constantly checking for updates for every facet of software I have from Windows to my browser to WordPress to FileZilla. Anyway, do you usually start projects as v0.1.0? To me that's the logical starting point, but I'd love to hear others' perspective.
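In git terms I assume that just means tagging, e.g. in a scratch repo:

```shell
# Scratch repo just to show annotated tags (git >= 2.28 for `init -b`).
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b master .
git config user.email you@example.com
git config user.name "You"
git commit -qm "Initial commit." --allow-empty

# Annotated tags are the usual way to pin MAJOR.MINOR.PATCH to a commit.
git tag -a v0.1.0 -m "First playable development build"
git describe --tags   # prints: v0.1.0
```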

1d. Committing: Fast forward or no?
I've looked this up, but to be honest it escapes me. I don't really deal with it directly anymore since I just commit via PhpStorm now and let it do whatever it is that it does. But is there a "best practices" approach to this? According to the Git branching model linked above, it's better to NOT fast forward (--no-ff). Is there any way to find out what PhpStorm does by default (and, if so, any way to change it)?
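From what I can tell, the difference looks like this in a scratch repo (so at least I understand what PhpStorm might be doing under the hood):

```shell
# Scratch repo (git >= 2.28 for `init -b`).
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b master .
git config user.email you@example.com
git config user.name "You"
git commit -qm "Initial commit." --allow-empty

git checkout -qb feature
echo 'work' > feature.txt
git add feature.txt
git commit -qm "Feature work."

git checkout -q master
# Without --no-ff this merge would simply move master's pointer forward
# (a fast forward); --no-ff records an explicit merge commit instead.
git merge -q --no-ff -m "Merge branch 'feature'" feature
git log --oneline --graph
```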

1e. Committing: Should I track compiled files or no?
I seem to remember that this was considered poor practice. In the past I ran into issues with my .less files auto-compiling to .css when I switched branches, because the background software I was using detected the changes. This isn't really an issue for me anymore, however, since I use PhpStorm and file watchers (I have NPM installed with Less and UglifyJS), and they aren't triggered when I switch between branches. Regardless, is it considered good or poor practice to track these files (or does it even matter)?
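If I do stop tracking them, I assume my .gitignore would grow entries along these lines (paths illustrative for my Less/UglifyJS setup):

```
# Generated output -- adjust to wherever your file watchers write
css/*.css
js/*.min.js
node_modules/
```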

1f. Merging: Should Branches be Deleted?
Aside from the master and development branches, should feature branches and others (hotfixes, etc.) be deleted once merged? I assume so but just wanted to hear others' opinions.

1g. Dealing With Merge Conflicts - Recommended Software
PhpStorm allows you to set an external merge/diff tool for when you have conflicts. Does anyone have any recommendations? I looked online but found a lot of mixed opinions. I'm on Windows 7 x64. I guess ideally I'd want 3 panes: The code in the current branch, the code in the branch that I am merging from, and the "would be" result of my merging. Apparently some have four panes, but I think I can deal with only three.

2a. Exception Handling
I totally know what exception handling is and I have used it way, way back in college when I was doing .NET and Java. I am a little embarrassed to say that I have pretty much never used it in PHP. I figured I should get in the habit as it's a very useful tool in any developer's toolbox. I guess it always just seemed like PHP was able to handle any issue with an if/else block. Anyway, I am a little unclear on a thing or two. Namely I'm not sure exactly how aggressive I should be when using a try/catch block. I am going to attempt to make my code very modular (more on this later) so what happens if you have a function with a try/catch block that calls another function with a try/catch block and the second one throws the exception? My understanding is it "bubbles" up to the initial caller.

Furthermore, my understanding of exceptions is a little hazy with respect to how they terminate (or don't terminate) program flow. I have always been under the impression that an uncaught exception kills the script immediately, not unlike a call to die() or an exit statement. But my understanding of exceptions in PHP is that if you catch the exception, the program doesn't exit and instead does whatever you tell it to in the catch block. Is my understanding at least a little correct? I guess I just have to work out what to do in those catch blocks.

I have been doing research in this area online and checking out some videos on YouTube on how to move from a plate of if/else spaghetti code to utilizing exception handling. I don't have a solid grasp on it quite yet, but I think I'll get there. Like everything though, experience will be the best teacher and I'll just have to dive in.

2b. Is it worth it to create an error class?
I admittedly got this idea from WordPress. They have a WP_Error class that is returned from certain functions if they fail. It allows multiple error messages and error codes. They also have a function to check if the return of said functions is a WP_Error object. Good or bad idea? Should this even be considered, or can this sort of thing be easily handled via intelligent exception handling?
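From skimming how WordPress uses it, I imagine a stripped-down version of the idea would look something like this (a hypothetical sketch, not the real WP_Error):

```php
<?php
// Hypothetical minimal error-bag class, loosely in the spirit of WP_Error.
class Error_Bag
{
    private $errors = array();

    public function add($code, $message)
    {
        $this->errors[$code][] = $message;
    }

    public function has_errors()
    {
        return !empty($this->errors);
    }

    public function get_messages($code)
    {
        return isset($this->errors[$code]) ? $this->errors[$code] : array();
    }
}

// Illustrative caller: return the bag on failure, the real result otherwise.
function create_user($name)
{
    $bag = new Error_Bag();
    if ($name === '') {
        $bag->add('empty_name', 'A user name is required.');
    }
    return $bag->has_errors() ? $bag : strtolower($name);
}
```

The caller then has to check `instanceof Error_Bag` on every result, which I gather is exactly the overhead the exception approach avoids.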

3. Should all functions be part of a class?

I am trying to go for more of an OOP approach than I have in past projects. I want my functions to be precise, concise, and reusable/modular. I know there will be logical groupings of functions (user functions, authentication functions, etc.) which can easily be organized into classes, but there will probably also be miscellaneous functions that don't really fit anywhere. I assume it's totally okay to have some random functions in, say, a helper.php or misc.php file? Or would it be just as "correct" to have a Helper or Miscellaneous class and put them in there? I want to take advantage of the spl_autoload_register function as much as possible and minimize my "includes" as much as possible, too.
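My current understanding of the autoloading part, as a self-contained sketch (the temp-dir business is just to make the example runnable; in the real project it would be a classes/ folder):

```php
<?php
// Write a throwaway class file to a temp directory so the example is
// self-contained; in a real project this would be your classes/ folder.
$classDir = sys_get_temp_dir() . '/game_classes_' . getmypid();
@mkdir($classDir);
file_put_contents(
    $classDir . '/Player.php',
    '<?php class Player { public function hp() { return 100; } }'
);

// Map "new Foo" to classes/Foo.php on first use; no manual includes needed.
spl_autoload_register(function ($class) use ($classDir) {
    $file = $classDir . '/' . $class . '.php';
    if (is_file($file)) {
        require $file;
    }
});

$player = new Player(); // triggers the autoloader
```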

3a. Static vs. Non-Static
Does it really matter? I personally prefer the -> syntax to the :: syntax but PhpStorm helps out so in the end it doesn't really matter. Any technical reason to choose one over the other, aside from that extra line of having to instantiate an object with a non-static approach?

4. Should I create a DAL?
Is it really necessary to do this? To be honest I'd like to omit as many (or all) queries from my classes as I can, but maybe I haven't wrapped my head around this as much as I thought. Would the DAL have generic functions like insert(), update(), etc., or would they be more specific like insert_user(), update_user(), etc.? If the latter, then wouldn't I more or less be duplicating code? In the sense that there will probably be an insert_user() or create_user() function in the User class, which would do some stuff and then eventually call the insert_user() function in the DAL.

Perhaps I am confusing the purpose of the DAL. As I understand it, its purpose is to make sure my application is not married to a specific database, and nothing else; it should not be to facilitate operations in my application. Or can its purpose be both? Or should there be another layer in there somewhere that does facilitate operations in my application (what I mean is, for example, the User class doesn't write any queries itself and instead passes the action off to another class)?
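Here's how I currently picture the layering, as a sketch with an in-memory stand-in for MySQL (all names made up): the DAL owns the generic storage plumbing, and User translates domain rules into DAL calls, so no queries live in User itself.

```php
<?php
// In-memory stand-in for a real PDO/MySQL-backed DAL.
class InMemoryDal
{
    private $tables = array();

    public function insert($table, array $row)
    {
        $this->tables[$table][] = $row;
        return count($this->tables[$table]); // stand-in for an auto-increment id
    }

    public function find($table, $column, $value)
    {
        $rows = isset($this->tables[$table]) ? $this->tables[$table] : array();
        foreach ($rows as $row) {
            if ($row[$column] === $value) {
                return $row;
            }
        }
        return null;
    }
}

class User
{
    private $dal;

    public function __construct(InMemoryDal $dal)
    {
        $this->dal = $dal;
    }

    // Domain logic (validation etc.) lives here; the query plumbing does not.
    public function create($name)
    {
        if ($name === '') {
            throw new InvalidArgumentException('Name required');
        }
        return $this->dal->insert('users', array('name' => $name));
    }

    public function find_by_name($name)
    {
        return $this->dal->find('users', 'name', $name);
    }
}
```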

I think that's all my questions. Whew. If I think of anything else I'll be sure to post. If you have read this whole post and made it this far, then thank you (and give yourself a pat on the back)!

I'll update this thread with progress and probably questions. Also until at least v1.0.0 I'll make the repository public; I'd love to hear feedback on code or just any advice in general.

Thank you so much for reading! 🙂

    I have moved this to General Help as it is relevant to PHP, though it has a more general scope including things like version control and development methodology.

    Bonesnap wrote:

    1e. Committing: Should I track compiled files or no?

    I think that it depends. In some cases the compiled files might be legitimately edited by maintainers (who took over your project years down the road) without access to the tools that generated them. So I would say yes to generated CSS files, maybe for image files, but no to executable programs and script bytecode.

    Bonesnap wrote:

    1f. Merging: Should Branches be Deleted?

    Yes.

    Bonesnap wrote:

    1g. Dealing With Merge Conflicts - Recommended Software

    I use Meld, but then most of my work is on Linux these days. I believe it can work on Windows, but when I was more active programming on Windows I used Beyond Compare.

    Bonesnap wrote:

    2a. Exception Handling
    (...)
    what happens if you have a function with a try/catch block that calls another function with a try/catch block and the second one throws the exception? My understanding is it "bubbles" up to the initial caller.

    No, if the exception is caught in the called function, it will not propagate to the caller. For example, run:

    <?php
    
    function foo() {
        try {
            throw new Exception('Test Exception');
        } catch (Exception $e) {
            echo '#1: ' . $e->getMessage();
        }
    }
    
    function bar() {
        try {
            foo();
        } catch (Exception $e) {
            echo '#2: ' . $e->getMessage();
        }
    }
    
    bar();

    You will find that the output is "#1: Test Exception", not "#2: Test Exception".

    Bonesnap wrote:

    Furthermore, my understanding of exceptions is a little hazy with respect to how they terminate (or don't terminate) program flow. I have always been under the impression that an uncaught exception kills the script immediately, not unlike a call to die() or an exit statement. But my understanding of exceptions in PHP is that if you catch the exception, the program doesn't exit and instead does whatever you tell it to in the catch block. Is my understanding at least a little correct?

    Yes. In some cases though, you have some code that you really want to run even if an exception is thrown, in which case you would put it in a finally block. If you want the exception to propagate, you might even have a finally block without a catch block.
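    A small sketch of that finally behaviour (PHP 5.5 and later):

```php
<?php
// The finally block runs whether or not the exception is caught here.
function read_config()
{
    $log = array();
    try {
        $log[] = 'open';
        throw new Exception('missing file');
    } catch (Exception $e) {
        $log[] = 'caught: ' . $e->getMessage();
    } finally {
        $log[] = 'close'; // cleanup runs in every case
    }
    return $log;
}
```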

    Bonesnap wrote:

    Namely I'm not sure exactly how aggressive I should be when using a try/catch block. (...) I guess I just have to work out what to do in those catch blocks.

    You should handle an exception where you have enough information to do so, otherwise you should allow it to propagate (perhaps logging it).
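    For example (function names are made up; the point is that only the top level knows a sensible fallback):

```php
<?php
// load_save_file() cannot know what recovery means, so it lets the
// exception propagate; the top-level caller has enough information.
function load_save_file($path)
{
    if (!is_file($path)) {
        throw new RuntimeException("Save file not found: $path");
    }
    return file_get_contents($path);
}

function start_game($path)
{
    try {
        return load_save_file($path);
    } catch (RuntimeException $e) {
        // Here we know a sensible fallback: start a fresh game.
        return 'new-game';
    }
}
```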

    Bonesnap wrote:

    3. Should all functions be part of a class?

    Before the introduction of namespaces, there would have been the argument that all functions should be part of classes for a poor man's namespacing reason. Now, I suggest that you consider if the function needs to be part of the core interface of a class, or if it can extend the interface within the same namespace as the class.
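    In code, the distinction might look like this (names illustrative):

```php
<?php
namespace Game;

// Core interface: behaviour that genuinely needs the object's internals.
class User
{
    public $name;

    public function __construct($name)
    {
        $this->name = $name;
    }
}

// Extending the interface without growing the class: a free function
// that lives in the same Game namespace as User.
function display_name(User $user)
{
    return ucfirst($user->name);
}
```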

    I suggest reading this interview of Bjarne Stroustrup, the designer and original implementer of C++, concerning Designing Simple Interfaces. Granted, the context is C++, which has function overloading based on the number and type of function arguments along with argument dependent lookup, but the ideas apply to PHP too.

    Bonesnap wrote:

    3a. Static vs. Non-Static
    Does it really matter? I personally prefer the -> syntax to the :: syntax but PhpStorm helps out so in the end it doesn't really matter. Any technical reason to choose one over the other, aside from that extra line of having to instantiate an object with a non-static approach?

    If it does not matter, then perhaps you should be going for non-member functions or static methods because it probably does not make sense to have an object. If it does matter, then you probably will need some non-static methods.

      1a. Early branching in Git
      People use git all kinds of different ways. I believe a default git repo starts with a master branch, so my tendency has been to treat master as the bleeding edge of development and to create other branches precisely when I need them. Reasons can vary:
      * Code works but you want to refactor it? Make a branch that represents the working code and give it a version-flavoured name like "php-5.5". Continue refactoring and development in the master branch. You can also commit bug fixes to the other branch if you want to.
      * Code ready for release? Some bugs fixed? Cut and tag a release branch.
      * etc.

      As for general git workflow, rather than init'ing a repo locally, I like to init a bare repo in the place from which it will be hosted -- usually a machine reachable via some domain name from wherever I might be developing. Then, on my workstation, I do a git-clone of that remote (and empty) bare repo. I put my source code in there, do a git-add, git-commit, and git-push. I've found that this eliminates the need to specify which origin/branch you might have to commit to because your workstation's working directory was cloned from the original, bare repo.

      1b. "Commit early, commit often"
      I commit all the time. My primary rule of thumb is to ask: "Have I done some work which should be backed up? Would I be sad if my hard drive crashed and it were gone?" I generally try not to commit totally broken code, but some coding tasks are just too big to get fully working in a reasonable amount of time. I tend not to worry about the size of my repo at all, which may be bad, but PHP source tends to be tiny and git is very space-efficient in my experience. This question can get more complicated when you are working with other people. If two of you edit the same file, things can get complicated. If you have a commit hook set up that auto-publishes code to some dev server, you should never commit code that breaks stuff for others. You should also avoid taking a long time to make changes to a file that is commonly edited by others -- if you do, you may find yourself having trouble pushing your changes because someone else changed the file in the meantime.

      1c. Versioning
      Yes, starting with 0.1 is exactly what I do. Until you reach 1.0, I probably wouldn't worry about publishing any patches. The zero in your major version says it all: "this software is in beta, use at your own risk."

      1d. Committing: Fast forward or no?
      I can't say I fully understand this either. I use git from the linux command line and if I try to push to the repo any files that have also been changed by someone else, then git complains and won't let me push, saying something about 'fast forward.' While there may be some way to force a push, I've not yet bothered to find it and generally try to avoid this situation. When it does happen, I stash my local changes, pull from the repo, and then try to resolve any differences manually. I tend to think that's how it should be done. If I'm missing some better way here, I'd love for someone to enlighten me.

      1e. Committing: Should I track compiled files or no?
      Nope. At least not for stuff that is platform-specific like C and C++. A Windows executable and a Linux executable are totally different; 32-bit and 64-bit executables are totally different. GitHub has a whole bunch of example .gitignore files which you may find useful.

      2a. Exception Handling
      I wouldn't claim to have fully realized the utility of throwing exceptions, but I have slowly found myself throwing one any time something happens in code that should not happen. Uncaught exceptions do terminate your code (try it!) and they are better than errors because you can easily echo them or write them to a file (thanks to their __toString method), and you get the entire call stack and line numbers, which makes it a lot easier to see what went wrong in your code. IF/THEN blocks get messy. Also, IF/THEN blocks do not alter the fact that a function can only return one value. If you are willing to check the result of every single function that might encounter a problem to make sure it's valid data, then I'd be willing to bet you are going to nearly double the amount of coding you do. If a function cannot perform its duty, you can throw an exception which sends a clear message ("something went wrong") rather than blithely returning some value and forcing the calling scope to minutely examine it and decide if it's valid data or not. You can further increase the flexibility of your exception-throwing by defining nuanced, custom exception types which can be caught at different levels of code. Your catch statements can filter which exceptions you want to catch and which ones bubble up to higher levels of code:

      try {
          throw new ToddlerException("Rover licked my lollipop");
      } catch (ToddlerException $e) {
          goo_or_poop($e);
      } catch (ParentException $e) {
          use_credit_card($e);
      } catch (PoliceException $e) {
          call_911($e);
      }


      2b. Is it worth it to create an error class?
      I haven't seen WP error class in action, but I have seen that PEAR seems to do things this way. Seems like a good way to wrestle with the question of how to determine whether a function result is an error or not, but sounds like it comes with overhead to me -- do all of your functions return objects now? Or are you always using [man]instanceof[/man] on every function result to make sure it's not an error object? This seems unnecessary to me if you are throwing exceptions. I don't even define custom exception classes.

      3. Should all functions be part of a class?
      I have tended this way in recent years -- it helps keep the global namespace clean and clear and your likelihood of name collisions is greatly reduced. It also helps group functionality.

      3a. Static vs. Non-Static
      If I'm not mistaken, this is a memory consideration. My rule: if the function doesn't operate on properties of the object then I make it static. That way, I can use it elsewhere without having to instantiate the object. It also allows a class to sort of broadcast its own functions and make them available elsewhere in your code as a way to canonicalise things.
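      In code, my rule looks something like this (the class is made up):

```php
<?php
class Page
{
    private $title;

    public function __construct($title)
    {
        $this->title = $title;
    }

    // Operates on instance state, so it cannot be static.
    public function rename($title)
    {
        $this->title = $title;
    }

    public function title()
    {
        return $this->title;
    }

    // Touches no instance state, so it is static and usable anywhere
    // without instantiating a Page.
    public static function slugify($text)
    {
        return strtolower(str_replace(' ', '-', trim($text)));
    }
}
```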

      4. Should I create a DAL?
      I recently created a DB abstraction base class for CodeIgniter (which has a nice QueryBuilder object) and this has been an unmitigated blessing so far. I was a bit worried about using this in a complex project at first, and it seemed like a chore to get it working, but in the vast majority of cases it eliminates the need to write any SQL at all. It escapes inputs automatically. I have another script that can examine a database full of tables and generate a class for each table in the DB, along with javadoc comments, so that my IDE and its autocomplete features make inserting and fetching records trivially easy:

      $record = DB_my_table::fetch_by_id($db, 42); // returns an instance of DB_my_table which can be used to update the record in the db
      

      If I still need some fancy JOIN query, I can either extend my DB_my_table class (which would result in a DB_my_table_x) or utilize CodeIgniter's QueryBuilder.

        Just my two cents here ... and keep in mind I'm a troll, an eternal noob, and my background isn't CS:

        1a. Early branching in Git

        Not unless you have working software. What's the point of having a branch that can't be used? Software is written for the user ;-)

        However, once you have something that can be called "workable", I can certainly see branching as useful.

        1b. "Commit early, commit often"

        One of the major advantages of versioning software lies in its ability to revert changes. This could be a plus in your development environment, particularly if you program in such a manner that you expect to need to revert something. For example, do you see yourself writing something that's kind of experimental, pushing it out for testing, and throwing it away if it turns out to be awful/unworkable/not the best idea you ever had ...

        2b. Is it worth it to create an error class?

        I'd say it depends on the project's overall size/scope. I certainly have one big project that I wish was a little better at, and more organized about, custom logging. I occasionally have to do "hackish" things when a new bug is discovered in that one.

        4. Should I create a DAL?

        Again, depends on the target. Is there any possibility this will be installed on multiple systems, or by persons other than yourself? Is it DB-intensive, such that you might want to try, say, PostgreSQL as opposed to MySQL? What about lifecycle? If Oracle were to shut down MySQL in a couple of years (God forbid), and somehow they could remove it from the wild (heh, I seriously doubt that could ever happen), how much work would you want to do to make your game work again?

        Probably dumb answers ... best of luck on your project. And, really ... you should do a LITTLE something besides coding on your PTO, eh? 😉 🙂

          sneakyimp wrote:

          1d. Committing: Fast forward or no?
          I can't say I fully understand this either. I use git from the linux command line and if I try to push to the repo any files that have also been changed by someone else, then git complains and won't let me push, saying something about 'fast forward.'

          The note about fast forwards is basically an elaboration on why the push failed. Bonesnap's question is more about whether, when you merge, the commits from the merged branch should simply be appended to the existing commits, or whether there should be an actual merge commit. Personally, I started off liking the actual merge commit because that was Bazaar's default style, but I had a senior developer insist that we do rebases instead, which is similar to fast forwarding except that the commits are replayed over a diverged branch, whereas fast forwarding is only possible where there is no divergence. Consequently, I have adopted that style out of habit, but I am not sure which really is better.

          sneakyimp wrote:

          While there may be some way to force a push, I've not yet bothered to find it and generally try to avoid this situation.

          If you force the push (e.g., git push origin +master), you will discard the work that others have done, which is a Bad Thing.

          sneakyimp wrote:

          When it does happen, I stash my local changes, pull from the repo, and then try to resolve any differences manually. I tend to think that's how it should be done. If I'm missing some better way here, I'd love for someone to enlighten me.

          I tend to do that too, though by this time you would have made the commit, so actually you would have to reset before you stash. Alternatively, you could do a merge, but that could make the history look a little more complicated (read: ugly) than necessary in such a case.

          If you are working in a local feature branch, you could rebase onto master (or whatever the target branch is), i.e., replay your commits on top of the new commits, resolving conflicts if necessary. After that, you can do a fast forward merge to master (or whatever the target branch is), then push to origin.
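          In a scratch repo, that flow looks like:

```shell
# Scratch repo demonstrating rebase-then-fast-forward (git >= 2.28 for `init -b`).
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b master .
git config user.email you@example.com
git config user.name "You"
git commit -qm "Initial commit." --allow-empty

# The feature branch does some work...
git checkout -qb feature
echo 'feature work' > feature.txt
git add feature.txt
git commit -qm "Feature work."

# ...while master diverges in the meantime.
git checkout -q master
echo 'other work' > other.txt
git add other.txt
git commit -qm "Someone else's work."

# Replay the feature commit on top of master, then fast forward master.
git checkout -q feature
git rebase -q master
git checkout -q master
git merge -q --ff-only feature
git log --oneline
```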

          sneakyimp wrote:

          3. Should all functions be part of a class?
          I have tended this way in recent years -- it helps keep the global namespace clean and clear and your likelihood of name collisions is greatly reduced.

          You should use namespaces instead.

            Thank you guys for taking the time to read my enormous post and replying with your insightful answers. I very much appreciate it.

            laserlight;11049457 wrote:

            I have moved this to General Help as it is relevant to PHP, though it has a more general scope including things like version control and development methodology.

            I didn't even think about putting it in this section. Definitely makes sense. Thank you.

            laserlight;11049457 wrote:

            I think that it depends. In some cases the compiled files might be legitimately edited by maintainers (who took over your project years down the road) without access to the tools that generated them. So I would say yes to generated CSS files, maybe for image files, but no to executable programs and script bytecode.

            I will be the only person working on this, so I don't really have to worry about others working on my code.

            laserlight;11049457 wrote:

            I use Meld, but then most of my work is on Linux these days. I believe it can work on Windows, but when I was more active programming on Windows I used Beyond Compare.

            Cool, thanks! I remember seeing Meld when I was looking into it before. I'll check them out.

            laserlight;11049457 wrote:

            Yes. In some cases though, you have some code that you really want to run even if an exception is thrown, in which case you would put it in a finally block. If you want the exception to propagate, you might even have a finally block without a catch block.

            laserlight;11049457 wrote:

            You should handle an exception where you have enough information to do so, otherwise you should allow it to propagate (perhaps logging it).

            Sounds like good advice, thank you!

            laserlight;11049457 wrote:

            Before the introduction of namespaces, there would have been the argument that all functions should be part of classes for a poor man's namespacing reason. Now, I suggest that you consider if the function needs to be part of the core interface of a class, or if it can extend the interface within the same namespace as the class.

            When you say "...or if it can extend the interface within the same namespace as the class", do you mean as a stand-alone function, or as part of another class that extends the original class? I think you mean the former, but just wanted to make sure. Using namespaces to group code into logical chunks makes sense, and to be honest I hadn't really considered that. Very good tip, thank you!

            laserlight;11049457 wrote:

            I suggest reading this interview of Bjarne Stroustrup, the designer and original implementer of C++, concerning Designing Simple Interfaces. Granted, the context is C++, which has function overloading based on the number and type of function arguments along with argument dependent lookup, but the ideas apply to PHP too.

            I think most of this was over my head, but the last paragraph made a lot of sense to me. Great discussion!

            laserlight;11049457 wrote:

            If it does not matter, then perhaps you should be going for non-member functions or static methods because it probably does not make sense to have an object. If it does matter, then you probably will need some non-static methods.

            I've always liked objects probably because that's how I "grew up" on programming through Java and .NET (do those languages have static methods?). Only recently have I really been using them. The :: operator feels weird to use compared to the -> operator. Static methods or stand-alone functions might be the best in this instance, though. I'll have to see how I feel when I get down to it.

            dalecosp;11049467 wrote:

            Not unless you have working software. What's the point of having a branch that can't be used? Software is written for the user ;-)

            However, once you have something that can be called "workable", I can certainly see branching as useful.

            Good point, makes sense. Having a development branch won't do much good if the master branch is just an index.php page.

            dalecosp;11049467 wrote:

            One of the major advantages of the use of versioning software lies in its ability to revert changes. This could be a plus in your development environment, particularly if you program in such a manner that you expect to need revert something? For example, do you see yourself writing something that's kind of experimental, pushing it out for testing, and throwing it away if it turns out to be awful/unworkable/not the best idea you ever had ...

            I see myself writing perfect code that works 100% of the time, every time. 😉

            I see your point, though I'd be doing anything experimental in a separate branch so I don't have to revert anything; I can just delete the branch. However, being able to say "I want to go back in time to this point in my code" and start there again is a powerful ability.

            dalecosp;11049467 wrote:

            I'd say it depends on the project's overall size/scope. I certainly have one big project that I wish was a little better at, and more organized about, custom logging. I occasionally have to do "hackish" things when a new bug is discovered in that one.

            I could use exception handling to log stuff. I may create the error class anyway. It doesn't have to be elaborate - just an easy way to return multiple errors and be able to get info about them.

            dalecosp;11049467 wrote:

            Again, depends on the target. Is there any possibility this will be installed on multiple systems, or by persons other than yourself? Is it DB-intensive, such that you might want to try, say, PostGres as opposed to My? What about lifecycle? If Oracle were to shut down MySQL in a couple years, (God forbid), and somehow they could remove it from the wild (heh, I seriously doubt that could ever happen), how much work would you want to do to make your game work again?

            No, this will only be worked on by me and it will only work with MySQL. If Oracle was somehow able to remove MySQL from existence then my game would be way down on my list of priorities :p

            dalecosp;11049467 wrote:

            Probably dumb answers ... best of luck on your project. And, really ... you should do a LITTLE something besides coding on your PTO, eh? 😉 🙂

            All answers are valuable to me. Thank you! And I'll be going to my best friend's wedding next weekend so that counts as doing something else, right? 🙂

              sneakyimp;11049463 wrote:

              As for general git workflow, rather than init'ing a repo locally, I like to init a bare repo in the place from which it will be hosted -- usually a machine reachable via some domain name from wherever I might be developing. Then, on my workstation, I do a git-clone of that remote (and empty) bare repo. I put my source code in there, do a git-add, git-commit, and git-push. I've found that this eliminates the need to specify which origin/branch you might have to commit to because your workstation's working directory was cloned from the original, bare repo.

              I have created the bare repository in Bitbucket and will be cloning from there, though setting up a remote is pretty easy in PhpStorm.

              sneakyimp;11049463 wrote:

              I commit all the time. My primary rule of thumb is to ask "have I done some work that should be backed up? Would I be sad if my hard drive crashed and it were gone?" I generally try not to commit totally broken code, but some coding tasks are just too big to get fully working in a reasonable amount of time. I tend not to worry about the size of my repo at all, which may be bad, but PHP source tends to be tiny and git is very space-efficient in my experience. This question can get more complicated when you are working with other people. If two of you edit the same file, things can get complicated. If you have a commit hook set up that auto-publishes code to some dev server, you should never commit code that breaks stuff for others. You should also avoid taking a long time to make changes to some file that is commonly edited by others -- if you do, you may find yourself having trouble pushing your changes because someone else changed the file in the meantime.

              Thank you for the insight. I'll be the only one working on it so I don't foresee any issues with conflicts with others. I have also heard the rule to only commit working code. I do try to commit to that (har har).

              sneakyimp;11049463 wrote:

              Yes starting with 0.1 is exactly what I do. Until you reach 1.0, I probably wouldn't worry about publishing any patches. The zero in your major version says it all: "this software is in beta, use at your own risk."

              I guess my confusion is at what point do I bump to 0.2?

              sneakyimp;11049463 wrote:

              I can't say I fully understand this either. I use git from the linux command line and if I try to push to the repo any files that have also been changed by someone else, then git complains and won't let me push, saying something about 'fast forward.' While there may be some way to force a push, I've not yet bothered to find it and generally try to avoid this situation. When it does happen, I stash my local changes, pull from the repo, and then try to resolve any differences manually. I tend to think that's how it should be done. If I'm missing some better way here, I'd love for someone to enlighten me.

              I think I'll just stick with whatever PhpStorm does. Since I don't completely understand this well enough to make an informed decision, I'll defer judgement to the PhpStorm developers.

              sneakyimp;11049463 wrote:

              Nope. At least not for stuff that is platform-specific like C and C++. A windows executable and a linux executable are totally different. 32-bit and 64-bit executables are totally different. Github has a whole bunch of example .gitignore files which you may find useful.

              I only mean my .css files and my .min.js files, which are compiled/minified as I work. If they are not to be tracked, how do the compiled files make it from say, the development branch to the master branch? Even if the files already existed in the master branch, none of the changes will be merged because the files are not being tracked.

              sneakyimp;11049463 wrote:

              Also, IF/THEN blocks do not alter the fact that a function can only return one value. If you are willing to check the result of every single function that might encounter a problem to make sure it's valid data, then I'd be willing to bet you are going to nearly double up the amount of coding you do.

              Interesting. If I am interpreting this correctly, are you saying that functions should only be able to return a single data type and if at any point during the execution of said function that data type cannot be returned, throw an exception? So no more "@return int|bool Returns the total amount on success or false on failure" remarks in the PHP Docs?
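              If I've got that right, a function would end up looking something like this sketch (names invented for illustration):

```php
<?php
// Sketch: the docblock promises exactly one return type; failure is
// signalled by throwing instead of returning false.

/**
 * Sum a list of amounts.
 *
 * @param int[] $amounts
 * @return int The total amount.
 * @throws InvalidArgumentException If the list is empty.
 */
function totalAmount(array $amounts)
{
    if (empty($amounts)) {
        throw new InvalidArgumentException('No amounts to total.');
    }
    return array_sum($amounts);
}

echo totalAmount([10, 20, 12]); // 42
```

              So the "@return int|bool" style goes away entirely: callers either get an int or an exception.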

              sneakyimp;11049463 wrote:

              You can further increase the flexibility of your exception-throwing by defining nuanced, custom exception types which can be caught at different levels of code. Your catch statements can filter which exceptions you want to catch and which ones bubble up to higher levels of code

              How can an exception bubble up to higher levels of code? I thought if it was unhandled it terminated the script? Or am I misunderstanding something?

              sneakyimp;11049463 wrote:

              I haven't seen WP error class in action, but I have seen that PEAR seems to do things this way. Seems like a good way to wrestle with the question of how to determine whether a function result is an error or not, but sounds like it comes with overhead to me -- do all of your functions return objects now? Or are you always using [man]instanceof[/man] on every function result to make sure it's not an error object? This seems unnecessary to me if you are throwing exceptions. I don't even define custom exception classes.

              The WP_Error class has a few methods in it and a couple of attributes/variables. It is generally only used in core functions in WordPress, especially ones affecting the database or the overall application. For example, if you execute wp_insert_user() it will either return the new user's ID or a WP_Error. On the other hand, if you're trying to retrieve some meta data of a post with get_post_meta(), it will either return the value, or null if it doesn't exist. So it's only used when deemed necessary. They have a function called is_wp_error(), defined as:

              /**
               * Check whether variable is a WordPress Error.
               *
               * Returns true if $thing is an object of the WP_Error class.
               *
               * @since 2.1.0
               *
               * @param mixed $thing Check if unknown variable is a WP_Error object.
               * @return bool True, if WP_Error. False, if not WP_Error.
               */
              function is_wp_error( $thing ) {
              	return ( $thing instanceof WP_Error );
              }
              
              sneakyimp;11049463 wrote:

              I have tended this way in recent years -- it helps keep the global namespace clean and clear and your likelihood of name collisions is greatly reduced. It also helps group functionality.

              If I need to do this I think I'll use custom namespaces.
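              Something along these lines, I gather (Game\Util is a made-up namespace name):

```php
<?php
// Sketch of a custom namespace; the names are illustrative only.
namespace Game\Util {
    function slugify($title)
    {
        // Lowercase, spaces to dashes: "My New Game" -> "my-new-game"
        return strtolower(str_replace(' ', '-', trim($title)));
    }
}

namespace {
    // Global code calls it via the fully qualified name.
    echo \Game\Util\slugify('My New Game'); // my-new-game
}
```

              In practice each namespace would live in its own file with the simpler `namespace Game\Util;` declaration at the top.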

              sneakyimp;11049463 wrote:

              If I'm not mistaken, this is a memory consideration. My rule: if the function doesn't operate on properties of the object then I make it static. That way, I can use it elsewhere without having to instantiate the object. It also allows a class to sort of broadcast its own functions and make them available elsewhere in your code as a way to canonicalise things.

              Sounds like a neat rule. Just hope down the line though the needs of the method doesn't change :p
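                As I understand the rule, it plays out like this sketch (made-up class):

```php
<?php
// Sketch of the rule: roll() reads no instance properties,
// so it can be static and used without instantiating Dice.
class Dice
{
    public static function roll($sides)
    {
        return mt_rand(1, $sides);
    }
}

$result = Dice::roll(6); // no "new Dice()" needed anywhere
```

              The method is usable from anywhere in the codebase without creating an object first.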

              sneakyimp;11049463 wrote:

              I recently created a DB abstraction base class for CodeIgniter (which has a nice QueryBuilder object) and this has been an unmitigated blessing so far. I was a bit worried about using this in a complex project at first, and it seemed like a chore to get it working, but in the vast majority of cases it eliminates the need to write any SQL at all. It escapes inputs automatically. I have another script that can examine a database full of tables and generate a class for each table in the DB along with javadoc comments, so that my IDE and its autocomplete features make inserting and fetching records trivially easy:

              $record = DB_my_table::fetch_by_id($db, 42); // returns an instance of DB_my_table which can be used to update the record in the db
              

              If I still need some fancy JOIN query, I can either extend my DB_my_table class (which would result in DB_my_table_x) or utilize CodeIgniter's QueryBuilder.

              I'm not using any framework for this project. I think I'll have a lot more fun that way, and learn a lot more. Plus, to stay motivated, I'd rather learn a framework on something smaller, not on something I've already failed at twice in the past. I only bring that up because if I do create a DAL it won't be on top of, or with the assistance of, something else already written. I'm still undecided about it. I like the idea of keeping SQL queries out of my "top level" functions and classes, but having to essentially write two of each method doesn't sit well with me. I should put this at the top of my list of things to read up on.

                Bonesnap wrote:

                When you say "...or if it can extend the interface within the same namespace as the class", do you mean as a stand-alone function, or as part of another class that extends the route class? I think you mean the former, but just wanted to make sure.

                The former.

                Bonesnap wrote:

                I've always liked objects probably because that's how I "grew up" on programming through Java and .NET (do those languages have static methods?).

                Java has static methods; .NET is a framework.

                Bonesnap wrote:

                The :: operator feels weird to use compared to the -> operator.

                If it really weirds you out, you could still create an object and call static methods using the -> operator, though most other people would think that is weird :p
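                For example (a trivial sketch):

```php
<?php
// Sketch: PHP lets you call a static method through an instance.
class Greeter
{
    public static function hello()
    {
        return 'Hello!';
    }
}

echo Greeter::hello(); // Hello! (the conventional call)

$g = new Greeter();
echo $g->hello();      // Hello! (also legal, just unusual)
```

                Note this works for static methods but not static properties; `$g->someStaticProperty` would not resolve the same way.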

                Bonesnap wrote:

                How can an exception bubble up to higher levels of code? I thought if it was unhandled it terminated the script? Or am I misunderstanding something?

                By "unhandled", it means that there is no catch block that matches the exception's class. But this catch block can be in the caller, or in the caller's caller, etc, e.g.,

                <?php
                
                function foo() {
                    echo 'In foo, before exception is thrown.<br/>';
                    throw new Exception('Hello world!');
                    echo 'In foo, after exception is thrown.<br/>';
                }
                
                function bar() {
                    echo 'In bar, before foo is called.<br/>';
                    foo();
                    echo 'In bar, after foo is called.<br/>';
                }
                
                function baz() {
                    try {
                        echo 'In baz, before bar is called.<br/>';
                        bar();
                        echo 'In baz, after bar is called.<br/>';
                    } catch (Exception $e) {
                        echo $e->getMessage() . '<br/>';
                    }
                }
                
                echo 'Before baz is called.<br/>';
                baz();
                echo 'After baz is called.<br/>';
                  Bonesnap;11049551 wrote:

                  I guess my confusion is at what point do I bump to 0.2?

                  I would say bump it up whenever you've completed some significant bug fix or refactoring or if you've added new features. The value of having version names, after all, is to identify/distinguish slightly (or greatly) different versions of the project. If a user has problem X, you ask what version they are using. Then you can tell them what version they need to get to solve the problem.

                  Bonesnap;11049551 wrote:

                  I think I'll just stick with whatever PhpStorm does. Since I don't completely understand this well enough to make an informed decision, I'll defer judgement to the PhpStorm developers.

                  I can certainly understand the desire to not fret about it -- and you are probably never going to encounter any 'fast forward' complaints if you are the only person working on the project, but it's been my experience that not knowing what's going on underneath can eventually lead to great distress and confusion at inopportune moments. GIT is a truly awesome tool. It is well-designed, powerful, and sophisticated. Learning more about it will probably be rewarding at some point. I've found that what little I know is an excellent way to impress my clients and my code pimp guy who gets me work.

                  Bonesnap;11049551 wrote:

                  I only mean my .css files and my .min.js files, which are compiled/minified as I work. If they are not to be tracked, how do the compiled files make it from say, the development branch to the master branch? Even if the files already existed in the master branch, none of the changes will be merged because the files are not being tracked.

                  OK I for some reason thought we were talking executables or byte code like C/C++/Java. I have been using git to deploy my PHP projects to the server and it saves me lots of time. I would definitely track any CSS or JS that is part of your project. The only files I usually do not track are ones that my IDE creates (.project, .settings, etc.) and I also add certain files to the .gitignore list if they are configuration files that differ between workstation/dev/production servers. You don't want your local version of config.php to get committed to the repo and overwrite the config.php on the server or vice versa. I also believe that you shouldn't be committing your secret credentials to github or bitbucket or anything like that. Lots of stories about sites getting compromised because people expose their api keys or encryption keys or passwords. Make sure you don't put any sensitive source code in your repo unless you are hosting it securely yourself.

                  Bonesnap;11049551 wrote:

                  Interesting. If I am interpreting this correctly, are you saying that functions should only be able to return a single data type and if at any point during the execution of said function that data type cannot be returned, throw an exception? So no more "@return int|bool Returns the total amount on success or false on failure" remarks in the PHP Docs?

                  I personally find it confusing when a function might return a variety of different objects and tend to think it's a bad idea. While it's not a big deal to check a return value to see if it is_null or to check if it actually returned FALSE instead of an int, I start to think it's a pain in the ass when you execute a function and always have to check its result with is_wp_error or whatever. Throwing an exception from a function is slightly easier in that you just throw it and the flow of code execution jumps all the way back up through all of your function calls to the first try/catch block it encounters. You can deploy as many try/catch blocks as you like, or you can not deploy any and the thrown exception will cause your script to halt. You can also define different custom types of exceptions and then write try/catch blocks that only catch certain types while ignoring others. How is this better? If you call 20 functions, you don't have to check if each function result is_wp_error; you can just wrap all 20 functions in a try/catch block and if a single one has enough trouble that throwing an exception is required, you can catch the exception somewhere else and either deal with it or log it or just apologize to the user.
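                  For instance (the function and exception names here are made up for illustration):

```php
<?php
// Sketch: custom exception types let one try/catch guard many calls.
class ValidationException extends Exception {}
class DatabaseException extends Exception {}

function loadPlayer($id)
{
    if ($id <= 0) {
        throw new ValidationException('Invalid player id.');
    }
    return ['id' => $id, 'name' => 'Bonesnap'];
}

function saveScore($playerId, $score)
{
    // Pretend the INSERT query failed:
    throw new DatabaseException('Could not save score.');
}

try {
    $player = loadPlayer(7);
    saveScore($player['id'], 9001);
} catch (ValidationException $e) {
    echo 'Bad input: ' . $e->getMessage();
} catch (DatabaseException $e) {
    echo 'DB problem: ' . $e->getMessage();
}
// prints: DB problem: Could not save score.
```

                  No per-call result checking; whichever call fails first jumps straight to the matching catch block.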

                  Another good reason that functions should only return one type of result IMHO is that you can write Javadoc-style comments for the function and this will result in your IDE offering auto-complete functionality when you are dealing with the results of your function call. I try very hard to write good Javadoc comments for my OOP properties and methods and have found that the autocomplete functionality provided by my IDE dramatically improves my development productivity because I don't have to keep looking back at the class definition to remember my property and method names. It really cuts down on typos and mistakes and debugging time.

                  Exceptions also come with a description of the call stack that led to the error. I'm not sure a WP_ERROR class offers this type of functionality. Basically, when you catch an exception, you can simply echo it and what comes out is a list of all the function calls that led to the exception. Here's an example exception I had just written to a log file (possibly sensitive details redacted)

                  exception 'Exception' with message 'A payment profile already exists for these parameters: A duplicate customer payment profile already exists.' in /var/www/site/application/models/authorizenet_utility.php:1852
                  Stack trace:
                  #0 /var/www/site/application/controllers/Authorizenet.php(231): authorizenet_utility->create_payment_profile('12345678', '123456', Array)
                  #1 [internal function]: Authorizenet->create_billing_agreement_process()
                  #2 /var/www/site/system/core/CodeIgniter.php(508): call_user_func_array(Array, Array)
                  #3 /var/www/site/html/index.php(291): require_once('/var/www/site...')
                  #4 {main}
                  

                  Note how it gives the filenames, line numbers, function calls, and parameter values. This makes it a LOT easier to understand what went wrong. I wonder if the WP_Error class offers this detail?

                  Bonesnap;11049551 wrote:

                  How can an exception bubble up to higher levels of code? I thought if it was unhandled it terminated the script? Or am I misunderstanding something?

                  I believe laserlight answered this question. Let us know if you are still confused.

                  Bonesnap;11049551 wrote:

                  If I need to do this I think I'll use custom namespaces.

                  Yes I need to start using namespaces too. I've looked into them but haven't yet tried working with them in my projects. They are conspicuously absent in CodeIgniter.

                  Bonesnap;11049551 wrote:

                  Sounds like a neat rule. Just hope down the line though the needs of the method doesn't change :p

                  Should such a method ever need to refer to the properties of its class object, you can easily find any static invocations of this method, distinct from the object invocations, by searching your code for ClassName::MethodName. It's usually pretty easy to decide if a function is a good candidate for a static method or whether it must refer to the object itself. Here's another thing: if it needs to refer to the object itself, you can write a different method that uses the original static method but supplies any properties/methods needed, leaving most of your existing code that refers to the static method untouched. It won't work in every case, but it definitely solves things most of the time.
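                  A sketch of that refactoring (the class is made up):

```php
<?php
// Sketch: keep the original static method, add a thin instance
// wrapper once object state becomes necessary.
class Price
{
    private $taxRate;

    public function __construct($taxRate)
    {
        $this->taxRate = $taxRate;
    }

    // Original static helper: a pure calculation, no object state.
    public static function withTax($amount, $rate)
    {
        return $amount * (1 + $rate);
    }

    // Later addition: supplies the object's own property, so all
    // existing Price::withTax() call sites stay untouched.
    public function total($amount)
    {
        return self::withTax($amount, $this->taxRate);
    }
}

echo Price::withTax(100, 0.25); // 125
$p = new Price(0.25);
echo $p->total(100);            // 125
```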

                  Bonesnap;11049551 wrote:

                  I'm not using any framework for this project. I think I'll have a lot more fun that way, and learn a lot more. Plus, to stay motivated, I'd rather learn a framework on something smaller, not on something I've already failed at twice in the past. I only bring that up because if I do create a DAL it won't be on top of, or with the assistance of, something else already written. I'm still undecided about it. I like the idea of keeping SQL queries out of my "top level" functions and classes, but having to essentially write two of each method doesn't sit well with me. I should put this at the top of my list of things to read up on.

                  A brave and educational thing to do, certainly -- and having your own code (i.e., your own framework) is pretty satisfying because you understand it well and it just has the stuff you need and nothing more. I like CodeIgniter because of the way it organizes controller classes that map very intuitively onto the urls they serve. It also has the handy Query Builder thing I mentioned before AND it has a Mailer class that can send your mail via SMTP. It's also got built-in session handling features which are nice. I've also written a PHP script to auto-deploy a customized CodeIgniter 3 project when I need to build a new website by contacting github and drawing some customized files from my local git repo. It used to take me hours to get a basic CI3 project set up for a particular domain. Thanks to my special deployment script, I am prompted for some basic information and it rolls everything out in about 5 seconds. Filenames are customized for the project, basic settings are changed; it's like an instant website.

                    laserlight;11049573 wrote:

                    Java has static methods; .NET is a framework.

                    Oops, I knew that. Slip of the finger!

                    laserlight;11049573 wrote:

                    If it really weirds you out, you could still create an object and call static methods using the -> operator, though most other people would think that is weird :p

                    I... never really thought about that. I probably won't do that, but it's good to know.

                    laserlight;11049573 wrote:

                    By "unhandled", it means that there is no catch block that matches the exception's class. But this catch block can be in the caller, or in the caller's caller, etc, e.g.,

                    I think I misread the initial explanation. An exception will bubble up as far as possible until it hits an appropriate catch block, which, if my understanding is correct, is why it's good practice to include a generic catch block at the end to catch anything that isn't matched by a more specific custom exception (if that's the desired behaviour).
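                    In code, the ordering I mean looks something like this sketch (SaveException is a made-up custom exception):

```php
<?php
// Sketch: specific catch blocks first, generic Exception last.
class SaveException extends Exception {}

try {
    // Something unanticipated happens:
    throw new RuntimeException('Unanticipated failure.');
} catch (SaveException $e) {
    echo 'Save failed: ' . $e->getMessage();
} catch (Exception $e) {
    // Catch-all: anything not matched above lands here instead of
    // terminating the script as an uncaught exception.
    echo 'Unexpected: ' . $e->getMessage();
}
// prints: Unexpected: Unanticipated failure.
```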

                    sneakyimp;11049597 wrote:

                    I would say bump it up whenever you've completed some significant bug fix or refactoring or if you've added new features. The value of having version names, after all, is to identify/distinguish slightly (or greatly) different versions of the project. If a user has problem X, you ask what version they are using. Then you can tell them what version they need to get to solve the problem.

                    Makes sense!

                    sneakyimp;11049597 wrote:

                    I can certainly understand the desire to not fret about it -- and you are probably never going to encounter any 'fast forward' complaints if you are the only person working on the project, but it's been my experience that not knowing what's going on underneath can eventually lead to great distress and confusion at inopportune moments. GIT is a truly awesome tool. It is well-designed, powerful, and sophisticated. Learning more about it will probably be rewarding at some point. I've found that what little I know is an excellent way to impress my clients and my code pimp guy who gets me work.

                    This project will probably challenge my knowledge of Git. I have no doubts about that. I am still unsure/uncomfortable with "rebasing", so there's that.

                    sneakyimp;11049597 wrote:

                    OK I for some reason thought we were talking executables or byte code like C/C++/Java. I have been using git to deploy my PHP projects to the server and it saves me lots of time. I would definitely track any CSS or JS that is part of your project. The only files I usually do not track are ones that my IDE creates (.project, .settings, etc.) and I also add certain files to the .gitignore list if they are configuration files that differ between workstation/dev/production servers. You don't want your local version of config.php to get committed to the repo and overwrite the config.php on the server or vice versa. I also believe that you shouldn't be committing your secret credentials to github or bitbucket or anything like that. Lots of stories about sites getting compromised because people expose their api keys or encryption keys or passwords. Make sure you don't put any sensitive source code in your repo unless you are hosting it securely yourself.

                    I don't mean to hijack this reply, but you have touched on a couple things that have been in the back of my mind.

                    1) Using Git to deploy your projects. Right now I just develop on my computer in my Git repository, which is also my local webserver. But obviously when the time comes I am going to be copying the master branch to something that is meant to be viewed publicly. Is there some place where I can read up on a 101 on how to automate something like that? I know Git has hooks or whatever, but not much beyond that. Bitbucket also has some new webhooks, but I haven't read into them yet.

                    1a) Basically right now my development environment is my live environment since I am just developing on my computer. I should probably get out of that practice, shouldn't I? Maybe at least create another folder where the master branch is automagically uploaded/copied to? Know of any good webcasts where people explain how they set up their development and production environments, their nuances, etc.? I always like to know how other people approach stuff.

                    2) I would be including some config info in a config file including database credentials, but the host is localhost and the user was specifically created for (and only has access to) my project's database. Those stories kinda make me laugh, though. Sounds like this scenario kinda falls into the same territory as above.
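                    Regarding point 1, from the little reading I've done, the hook approach seems to boil down to something like this throwaway sketch (all paths are made up; it builds disposable repos in a temp directory so the whole thing can actually be run end to end):

```shell
#!/bin/sh
# Sketch of git-based deployment: a bare repo's post-receive hook
# checks the pushed master branch out into a deploy directory.
set -e
BASE=$(mktemp -d)

# 1. A bare "hosted" repo, as on Bitbucket or your own server.
git init --bare -q "$BASE/site.git"

# 2. The deploy hook: on every push, force-checkout master into
#    the deploy directory (heredoc expands $BASE now, on purpose).
mkdir -p "$BASE/deploy"
cat > "$BASE/site.git/hooks/post-receive" <<EOF
#!/bin/sh
GIT_WORK_TREE=$BASE/deploy git --git-dir=$BASE/site.git checkout -f master
EOF
chmod +x "$BASE/site.git/hooks/post-receive"

# 3. Simulate a developer clone, commit, and push.
git clone -q "$BASE/site.git" "$BASE/work"
cd "$BASE/work"
git checkout -q -b master
echo "<?php echo 'hi';" > index.php
git add index.php
git -c user.email=dev@example.com -c user.name=dev commit -qm "Initial commit."
git push -q origin master

# The hook has now copied index.php into the deploy directory.
ls "$BASE/deploy"
```

                    On a real server the deploy directory would be something like a staging web root, and the hook lives in the bare repo that everyone pushes to.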

                    sneakyimp;11049597 wrote:

                    I personally find it confusing when a function might return a variety of different objects and tend to think it's a bad idea. While it's not a big deal to check a return value to see if it is_null or to check if it actually returned FALSE instead of an int, I start to think it's a pain in the ass when you execute a function and always have to check its result with is_wp_error or whatever. Throwing an exception from a function is slightly easier in that you just throw it and the flow of code execution jumps all the way back up through all of your function calls to the first try/catch block it encounters. You can deploy as many try/catch blocks as you like, or you can not deploy any and the thrown exception will cause your script to halt. You can also define different custom types of exceptions and then write try/catch blocks that only catch certain types while ignoring others. How is this better? If you call 20 functions, you don't have to check if each function result is_wp_error; you can just wrap all 20 functions in a try/catch block and if a single one has enough trouble that throwing an exception is required, you can catch the exception somewhere else and either deal with it or log it or just apologize to the user.

                    I have never really thought of this approach. I guess I have just been so used to the "return what is desired, or false/null otherwise" approach. And given PHP's easy type casting/juggling on the fly, it's very easy to test for if($whateverIWant) do what you want else do something else. I like this approach though. This function returns blah. If it doesn't, then there was a problem, throw an exception. I like it!

                    sneakyimp;11049597 wrote:

                    Another good reason that functions should only return one type of result IMHO is that you can write Javadoc-style comments for the function and this will result in your IDE offering auto-complete functionality when you are dealing with the results of your function call. I try very hard to write good Javadoc comments for my OOP properties and methods and have found that the autocomplete functionality provided by my IDE dramatically improves my development productivity because I don't have to keep looking back at the class definition to remember my property and method names. It really cuts down on typos and mistakes and debugging time.

                    Heh no worries about PHPDocs, I have been doing that for a while. 🙂

                    sneakyimp;11049597 wrote:

                    Exceptions also come with a description of the call stack that led to the error. I'm not sure a WP_ERROR class offers this type of functionality. Basically, when you catch an exception, you can simply echo it and what comes out is a list of all the function calls that led to the exception. Here's an example exception I had just written to a log file (possibly sensitive details redacted)

                    ...
                    

                    Note how it gives the filenames, line numbers, function calls, and parameter values. This makes it a LOT easier to understand what went wrong. I wonder if the WP_Error class offers this detail?

                    Everything a growing boy needs! I don't think the WP_Error class has that. I'll probably skip such a thing and just go with logging the exception. Do you use any specific/special logs for your exceptions or just put everything into the standard PHP error log?
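                    I was imagining something as simple as this sketch (the log path is made up; a real one would live outside the web root):

```php
<?php
// Sketch: append exception details to a dedicated log file instead
// of the default PHP error log. The path here is illustrative.
$logFile = sys_get_temp_dir() . '/my-game-exceptions.log';

try {
    throw new RuntimeException('Payment profile already exists.');
} catch (Exception $e) {
    // message_type 3 tells error_log() to append to the given file.
    $entry = date('c') . ' ' . $e->getMessage() . "\n"
           . $e->getTraceAsString() . "\n";
    error_log($entry, 3, $logFile);
}
```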

                    sneakyimp;11049597 wrote:

                    I believe laserlight answered this question. let us know if you are still confused.

                    I am good now!

                    sneakyimp;11049597 wrote:

                    Yes I need to start using namespaces too. I've looked into them but haven't yet tried working with them in my projects. They are conspicuously absent in CodeIgniter.

                    I've read up on them a bit and watched some videos on Youtube. I get them, but will probably have some growing pains implementing them into my project. But oh well, that's all part of the experience 🙂

                    Thanks again everyone for taking the time to reply. Totally appreciate it! 🙂

                      Bonesnap;11049699 wrote:

                      I am still unsure/uncomfortable with "rebasing", so there's that.

                      We are in the same boat. I've been meaning to review laserlight's kindly guidance when I have a moment. Kinda fighting burnout.

                      Bonesnap;11049699 wrote:

                      1) Using Git to deploy your projects. Right now I just develop on my computer in my Git repository, which is also my local webserver. But obviously when the time comes I am going to be copying the master branch to something that is meant to be viewed publicly. Is there some place where I can read up on a 101 on how to automate something like that? I know Git has hooks or whatever, but not much beyond that. Bitbucket also has some new webhooks, but I haven't read into them yet.

                      I would suggest being more careful about your terms with git. When you say you develop "in your git repository" I think you really mean that you are working in a "git working directory." The repository is stored in the folder called .git (and possibly with remotes elsewhere).

                      Although I've heard this discouraged, I've had some great luck using git to deploy my code by simply cloning a git repo into a working directory in the /var/www folder on my production web server. NOTE that the root of my git working directory is not itself the web root. You do not want your .git folder to be in your web root. This is all to say that I set up my production server like so:

                      # this server only serves one site so the only web root is /var/www/html 
                      cd /var/www
                      git clone http://example.com/some-repo ./
                      

                      Note that the contents of /var/www will look something like this:

                      $ ls -al
                      total 42820
                      drwxr-xr-x  9 sneakyimp sneakyimp 4096 Jul 24 17:16 .
                      drwxr-xr-x 41 root      root      4096 Jul 17 16:08 ..
                      drwxrwxr-x 15 sneakyimp sneakyimp 4096 Jan 29 15:29 application
                      drwxrwxr-x  8 sneakyimp sneakyimp 4096 Jul 28 15:26 .git
                      -rw-rw-r--  1 sneakyimp sneakyimp  795 May 27 21:58 .gitignore
                      drwxrwxr-x  6 sneakyimp sneakyimp 4096 Jun 24 16:40 html
                      -rw-rw-r--  1 sneakyimp sneakyimp   70 Feb  9 12:07 README.md
                      drwxrwxr--  2 sneakyimp sneakyimp 4096 Mar 13 14:26 shell-scripts
                      drwxrwxr-x  8 sneakyimp sneakyimp 4096 Dec 28  2014 system
                      drwxrwsr-x  5 www-data  sneakyimp 4096 Jun 12 16:44 logs
                      

                      The html directory is my web root so all these other files and folders are outside the web root which means some better security for certain types of files (e.g., logs). You can also see that the .git folder is in here as well as .gitignore. This means they should be fairly safe from curious hackers or sniffing types because they are not directly accessible via HTTP.

                      Because I am not tracking sensitive credential files (such as application/config/database.php) in the git repo, I need to slip these in. There are only a few so it's usually not a big deal.
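                      The setup is simple enough that it's worth sketching. Here's a quick, safe-to-run demonstration under a throwaway /tmp path (the config path matches the CodeIgniter layout above; the /tmp repo is purely illustrative):

```shell
# Quick sketch: ignore the credential file and confirm git treats it as ignored.
# The /tmp path is for illustration only; the config path matches CodeIgniter.
set -e
rm -rf /tmp/ignore-demo
mkdir -p /tmp/ignore-demo/application/config && cd /tmp/ignore-demo
git init -q
git config user.email demo@example.com && git config user.name demo
echo 'application/config/database.php' > .gitignore
echo '<?php // secret credentials' > application/config/database.php
git add .gitignore && git commit -qm 'Ignore credential files.'
# shows "!!" (ignored) rather than "??" (untracked):
git status --porcelain --ignored -- application/config/database.php
```

                      Once the file is ignored, you copy it to the server once (scp or whatever you like) and git never touches it again on either end.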

                      When someone has pushed changes to the repo from their workstation, I can just update the production code like so:

                      ssh sneakyimp@example.com
                      cd /var/www
                      git pull
                      

                      In certain cases I'll need to first assume the identity of some other user so the permissions work out but this is the basic idea.

                      I've heard some old pros say that creating a deployment script (like a linux shell script) is the way to deploy things. I'm not especially good at writing bash scripts, but have had some great luck with scripts that set file permissions to make sure logs are writable by apache, that other files are readable/writable by all my devs, etc. I believe a shell script working in conjunction with a svn or git repo is how the pros would do it, but my approach is making me pretty happy. Saves all kinds of time compared to manually uploading files.
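                      For what it's worth, the core of such a script is pretty small. Here's a rough sketch that runs entirely under /tmp with a throwaway repo so it's safe to try; on a real box SITE_DIR would be the production working directory and WEB_GROUP would be www-data (both are stand-ins here):

```shell
# Sketch of a git-based deploy plus the permission fix-ups described above.
# All /tmp fixture paths are for illustration only.
set -e
WORK=/tmp/deploy-sketch; rm -rf "$WORK"; mkdir -p "$WORK"
WEB_GROUP=$(id -gn)   # stand-in for www-data in this demo

# fixture: a bare "origin" repo, a dev clone, and a production working copy
git init -q --bare "$WORK/origin.git"
git clone -q "$WORK/origin.git" "$WORK/dev" 2>/dev/null
cd "$WORK/dev"
git config user.email dev@example.com && git config user.name dev
mkdir logs && touch logs/.keep index.php
git add -A && git commit -qm 'Initial commit.'
git push -q origin HEAD
git clone -q "$WORK/origin.git" "$WORK/site" 2>/dev/null

# the steps a deploy script would run on the production box
cd "$WORK/site"
git pull -q                  # bring the working copy up to HEAD
chgrp -R "$WEB_GROUP" logs   # hand the logs dir to the web server's group
chmod -R g+rws logs          # keep log files group-writable
echo "Deployed $(git rev-parse --short HEAD)"
```

                      The pull/chgrp/chmod lines at the end are the whole "deploy script"; everything before them just fakes up the repos so the sketch can run anywhere.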

                      rsync is also a cool tool but it might overwrite config files with your local variants.

                      Bonesnap;11049699 wrote:

                      1a) Basically right now my development environment is my live environment since I am just developing on my computer. I should probably get out of that practice, shouldn't I? Maybe at least create another folder where the master branch is automagically uploaded/copied to? Know of any good webcasts where people explain how they setup their development and production environments, their nuances, etc. I always like to know how other people approach stuff.

                      If you are using git for source management, the only reason I can see for some other server is if you are collaborating with someone or if you need your stuff hosted on some publicly-accessible domain (like if you want to test it using your phone). I would recommend periodically backing up your git repo if you have only one copy of it. Also keep in mind that some of your important files (database config in my case) may not be in your git repo. I feel perfectly comfortable developing on my workstation until I need to take a site live somewhere. My workstation is Ubuntu and its behavior is essentially identical to any server I typically deploy on. I had struggled with WAMP and MAMP before and am very happy using Ubuntu.

                      Bonesnap;11049699 wrote:

                      2) I would be including some config info in a config file including database credentials, but the host is localhost and the user was specifically created for (and only has access to) my project's database. Those stories kinda make me laugh, though. Sounds like this scenario kinda falls into the same territory as above.

                      Not sure what you mean here, but it's been my experience that websites typically have a variety of credentials: database, email, payment gateway, cloud API, etc. I've yet to find a way to specify these that doesn't have them living in some PHP file. I'm sure it might be possible but have yet to see any functional usage of any technique like this.

                      Bonesnap;11049699 wrote:

                      I have never really thought of this approach. I guess I have just been so used to the "return what is desired, or false/null otherwise" approach. And given PHP's easy type casting/juggling on the fly, it's very easy to test for if($whateverIWant) do what you want else do something else. I like this approach though. This function returns blah. If it doesn't, then there was a problem, throw an exception. I like it!

                      The cool thing about exceptions is that you don't necessarily need to check the results of a function call. If you call a function and it throws an exception due to a problem (e.g., "user not found") then code execution is going to go from right after that function call to the next catch block, skipping every single line of code in between. In my experience, the catch blocks tend to be where you say to the user "uh oh, there was a problem."

                      Bonesnap;11049699 wrote:

                      Everything a growing boy needs! I don't think the WP_Error class has that. I'll probably skip such a thing and just go with logging the exception. Do you use any specific/special logs for your exceptions or just put everything into the standard PHP error log?

                      I almost never use the standard PHP error log. It's been my experience that on a busy site, this log gets ALL KINDS of stuff written to it and reading it is sort of like drinking from a fire hose. I created my own Log class which I can invoke with a path to a log file, and it'll take care of rotating the logs when they reach a certain size. I can create all kinds of these logs, each for some particular type of functionality. When there's a problem, I can look at the specific log file for the issue involved.

                      I've been talking with a client lately about creating some kind of database log table. The idea being that we can write sensitive information related to the error in our database, and just show some nonsense unique id to the user. If a user reports trouble, someone at the help desk can look at an admin control panel from which they can look up that unique id and see the sensitive information without it being revealed to the user. The reason we'd use a database is because the average help desk employee might not be fluent with the command line or grep or tail or other commands that can examine text logs on the server.

                        Oh and as far as github (or bitbucket) hooks, there's a lot of debate about whether one should use them to automatically deploy code to a server. I did manage to set this up but it's tricky because the github hook will contact apache and either apache needs permission to fetch the changed files from the repo or you need some kosher way to trigger a script with some other user's permissions to deploy the files. I had a thread about doing that here:
                        http://board.phpbuilder.com/showthread.php?10393219-Deploying-from-github-Managing-permissions-on-a-linux-server-for-a-web-project

                        I don't recall if I described the entire solution, but it was pretty involved.

                          sneakyimp;11049709 wrote:

                          Oh and as far as github (or bitbucket) hooks, there's a lot of debate about whether one should use them to automatically deploy code to a server. I did manage to set this up but it's tricky because the github hook will contact apache and either apache needs permission to fetch the changed files from the repo or you need some kosher way to trigger a script with some other user's permissions to deploy the files. I had a thread about doing that here:
                          http://board.phpbuilder.com/showthread.php?10393219-Deploying-from-github-Managing-permissions-on-a-linux-server-for-a-web-project

                          I don't recall if I described the entire solution, but it was pretty involved.

                          I'd think the usual reason for NOT doing something like this is that you might not want production boxes running HEAD code.

                          I've got a solution in use @work ... I should see if I can find the docs. Basically, one devel box, .git in the working dir, a repo dir elsewhere, and two production boxes "out there". Here's all I have to do to push the prod boxes to HEAD:

                          $ git add changed_file.php
                          $ git commit -m "Fixed Bug #1234 --- improper frobschnitzer output in Chrome."
                          $ my_update
                          

                          my_update is just an alias (or it might be a small script) that calls a git command on the remote boxes. They all have key-based SSH set up.

                          The system does use a post commit hook to sync from the workdir to the local repo. I'll see if I can find it if you're interested.
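                          The hook half is tiny, for what it's worth. Here's a rough sketch of that sync using throwaway /tmp paths standing in for the devel box and the repo dir; my_update would then just be something like: ssh prodbox 'cd /var/www && git pull' for each box (key-based SSH assumed):

```shell
# Sketch of the post-commit sync: every commit in the working dir gets
# pushed to a separate repo automatically. /tmp paths are illustrative.
set -e
WORK=/tmp/hook-sketch; rm -rf "$WORK"; mkdir -p "$WORK"
git init -q --bare "$WORK/repo.git"
git clone -q "$WORK/repo.git" "$WORK/devel" 2>/dev/null
cd "$WORK/devel"
git config user.email dev@example.com && git config user.name dev

# the post-commit hook: push to the repo after every commit
cat > .git/hooks/post-commit <<'EOF'
#!/bin/sh
git push -q origin HEAD
EOF
chmod +x .git/hooks/post-commit

echo 'fix' > changed_file.php
git add changed_file.php
git commit -qm "Fixed Bug #1234 --- improper frobschnitzer output in Chrome."
# the bare repo received the commit with no explicit push:
git -C "$WORK/repo.git" log --oneline
```

                          The production boxes never see the hook at all; they only ever run a plain git pull.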

                            sneakyimp wrote:

                            Although I've heard this discouraged, I've had some great luck using git to deploy my code by simply cloning a git repo into a working directory in the /var/www folder on my production web server.

                             Bazaar has an upload plugin that allows you to push only the working tree to the server, along with a file indicating the revision so as to do a subsequent incremental upload. This was developed precisely because a web developer wanted to use Bazaar to deploy website code to servers that did not have Bazaar installed. I have not seen something similar for git, though.

                            sneakyimp wrote:

                            I've heard some old pros say that creating a deployment script (like a linux shell script) is the way to deploy things. I'm not especially good at writing bash scripts, but have had some great luck with scripts that set file permissions to make sure logs are writable by apache, that other files are readable/writable by all my devs, etc. I believe a shell script working in conjunction with a svn or git repo is how the pros would do it, but my approach is making me pretty happy. Saves all kinds of time compared to manually uploading files.

                            I use Fabric, but then a significant amount of my work is with Python.

                            sneakyimp wrote:

                            it's been my experience that websites typically have a variety of credentials: database, email, payment gateway, cloud API, etc. I've yet to find a way to specify these that doesn't have them living in some PHP file

                             Well, you could theoretically use a .ini file outside the document root with parse_ini_file(), but I do not think anyone does it because a configuration file that is also a PHP script is just easier and more efficient.

                            sneakyimp wrote:

                             The cool thing about exceptions is that you don't necessarily need to check the results of a function call. If you call a function and it throws an exception due to a problem (e.g., "user not found") then code execution is going to go from right after that function call to the next catch block, skipping every single line of code in between. In my experience, the catch blocks tend to be where you say to the user "uh oh, there was a problem."

                            Yeah, that is what I mean by "handle an exception where you have enough information to do so, otherwise (...) allow it to propagate". That said, exceptions should be used to report errors.

                            Sutter and Alexandrescu's C++ Coding Standards Item 70 (part) wrote:

                            An error is any failure that prevents a function from succeeding. There are three kinds:

                            • Violation of, or failure to achieve, a precondition: The function detects a violation of one of its own preconditions (e.g., a parameter or state restriction), or encounters a condition that prevents it from meeting a precondition of another essential function that must be called.

                            • Failure to achieve a postcondition: The function encounters a condition that prevents it from establishing one of its own postconditions. If a function has a return value, producing a valid return value object is a postcondition.

                            • Failure to reestablish an invariant: The function encounters a condition that prevents it from reestablishing an invariant that it is responsible for maintaining. This is a special kind of postcondition that applies particularly to member functions; an essential postcondition of every nonprivate member function is that it must reestablish its class' invariants.

                             Consider a function that searches for an item in a sequence. You could say that the postcondition is that the function returns a valid object or sequence index, in which case a missing item is an error according to the above, so you might throw an exception. Yet, that would imply that a precondition is that the item must be in the sequence, and if this is not reasonable, I would suggest that the proposed postcondition is unreasonable, i.e., it would be more reasonable to amend the postcondition to allow, say, returning null or false if the item cannot be found.

                              sneakyimp;11049707 wrote:

                              I would suggest being more careful about your terms with git. When you say you develop "in your git repository" I think you really mean that you are working in a "git working directory." The repository is stored in the folder called .git (and possibly with remotes elsewhere).

                              Heh, yeah, that's what I meant.

                              sneakyimp;11049707 wrote:

                              NOTE that the root of my git working directory is not itself the web root. You do not want your .git folder to be in your web root.

                              Does this differ in your development environment? On my PC I have my web folder which is essentially my public_html folder. Inside that I have all sorts of different projects and things I have worked on/am working on. Each one is in its own folder. Some of these, such as the project I am working on now, are Git working directories (eh, now I know the terms 😉) and contain a Git repository. Is this not the correct approach? How would you handle multiple Git repositories?

                              I agree about keeping the Git repository outside the public_html folder, but how would one handle that for when moving live? I would think you would abstain completely from moving the .git folder to the live environment. I have several filters set up in FileZilla that hides certain files such as .git, .idea (PhpStorm), and some other config files that previous applications I have used created (like Koala and Prepros). So when I upload the folder to the live environment the Git repository isn't copied. Is this sufficient?

                              sneakyimp;11049707 wrote:

                              Because I am not tracking sensitive credential files (such as application/config/database.php) in the git repo, I need to slip these in. There are only a few so it's usually not a big deal.

                              So it's expected not to track the config file and to upload it manually? What about values that are different? The database credentials are probably not going to be the same. So how does one handle that between the development environment and the live environment? Or do you basically change it once, upload it, and not really worry about it again because the file isn't being tracked anyway and the values probably aren't going to change either locally or when deployed?

                              sneakyimp;11049707 wrote:

                              I've heard some old pros say that creating a deployment script (like a linux shell script) is the way to deploy things. I'm not especially good at writing bash scripts, but have had some great luck with scripts that set file permissions to make sure logs are writable by apache, that other files are readable/writable by all my devs, etc. I believe a shell script working in conjunction with a svn or git repo is how the pros would do it, but my approach is making me pretty happy. Saves all kinds of time compared to manually uploading files.

                              rsync is also a cool tool but it might overwrite config files with your local variants.

                              I'm on Windows so I don't think I can do the shell script, but even if I could I'd probably skip that for this project. I'm just kinda interested in how people do deployments. I have heard of other tools but haven't devoted the time to look into them (not enough time in a day, amirite?).

                              sneakyimp;11049707 wrote:

                              If you are using git for source management, the only reason I can see for some other server is if you are collaborating with someone or if you need your stuff hosted on some publicly-accessible domain (like if you want to test it using your phone). I would recommend periodically backing up your git repo if you have only one copy of it. Also keep in mind that some of your important files (database config in my case) may not be in your git repo. I feel perfectly comfortable developing on my workstation until I need to take a site live somewhere. My workstation is Ubuntu and its behavior is essentially identical to any server I typically deploy on. I had struggled with WAMP and MAMP before and am very happy using Ubuntu.

                              The web folder on my PC is web accessible (behind a custom port that I chose), though I wouldn't be pointing anyone towards it. When I eventually want people to see the game I am making I will have a proper domain and hosting for it.

                              I regularly push my commits to Bitbucket so my repository is always backed up. The config file is currently in the repository, but sounds like that shouldn't happen. Maybe when I get closer to actual deployment I'll remove it.

                              sneakyimp;11049707 wrote:

                              Not sure what you mean here, but it's been my experience that websites typically have a variety of credentials: database, email, payment gateway, cloud API, etc. I've yet to find a way to specify these that doesn't have them living in some PHP file. I'm sure it might be possible but have yet to see any functional usage of any technique like this.

                              I just meant that if someone were to pull my config.php file they would get my database credentials but the host is "localhost" and the user and password only has access to the database on my local PC. In other words, I'm not really worried about it. Also the repository won't be public. I'm only making it public until about v1.0.0 so anyone here who wants to take a look can, but once I actually take this game public I'll be making the repository private (but keeping the wiki and issue tracker public).

                               I'll be making the repository public soon since now I actually have something worth viewing in it :p

                              sneakyimp;11049707 wrote:

                               The cool thing about exceptions is that you don't necessarily need to check the results of a function call. If you call a function and it throws an exception due to a problem (e.g., "user not found") then code execution is going to go from right after that function call to the next catch block, skipping every single line of code in between. In my experience, the catch blocks tend to be where you say to the user "uh oh, there was a problem."

                              Yeah I like this approach. I'll have some code for you guys to take a look at in a follow up post for any feedback. I've started writing some classes and exceptions, including my database class.

                              sneakyimp;11049707 wrote:

                               I almost never use the standard PHP error log. It's been my experience that on a busy site, this log gets ALL KINDS of stuff written to it and reading it is sort of like drinking from a fire hose. I created my own Log class which I can invoke with a path to a log file, and it'll take care of rotating the logs when they reach a certain size. I can create all kinds of these logs, each for some particular type of functionality. When there's a problem, I can look at the specific log file for the issue involved.

                              Interesting. I think I'll be adopting this approach or something very similar. I like the idea of segregating the logs into different files based on their functionality or component of the project, and using the regular error log as a type of "catch all" or just a generic error log.

                              sneakyimp;11049707 wrote:

                               I've been talking with a client lately about creating some kind of database log table. The idea being that we can write sensitive information related to the error in our database, and just show some nonsense unique id to the user. If a user reports trouble, someone at the help desk can look at an admin control panel from which they can look up that unique id and see the sensitive information without it being revealed to the user. The reason we'd use a database is because the average help desk employee might not be fluent with the command line or grep or tail or other commands that can examine text logs on the server.

                              I like this idea, too. Always nice to have a well-formed UI even if you are an admin, though in our cases we can probably get through a log file pretty quickly.

                              sneakyimp;11049709 wrote:

                              Oh and as far as github (or bitbucket) hooks, there's a lot of debate about whether one should use them to automatically deploy code to a server. I did manage to set this up but it's tricky because the github hook will contact apache and either apache needs permission to fetch the changed files from the repo or you need some kosher way to trigger a script with some other user's permissions to deploy the files. I had a thread about doing that here:
                              http://board.phpbuilder.com/showthread.php?10393219-Deploying-from-github-Managing-permissions-on-a-linux-server-for-a-web-project

                              I don't recall if I described the entire solution, but it was pretty involved.

                              Yeah that seems to be quite involved. I'll probably skip for now :p

                                Bonesnap;11049743 wrote:

                                Does this differ in your development environment? On my PC I have my web folder which is essentially my public_html folder. Inside that I have all sorts of different projects and things I have worked on/am working on. Each one is in its own folder. Some of these, such as the project I am working on now, are Git working directories (eh, now I know the terms 😉) and contain a Git repository. Is this not the correct approach? How would you handle multiple Git repositories?

                                On my workstation, I have a lot of website folders in my /var/www folder. Each of these folders is not a web root but contains a subdirectory (either "public" or "html") that is the webroot for that particular project. For each website I also create a) an apache config that defines the webroot for that website and b) an entry in my hosts file (/etc/hosts on ubuntu) which points some abbreviated version of the actual domain to my local machine. The apache config for one of my sites looks like this:

                                <VirtualHost *:80>
                                    # this domain is not a real domain, it's just some abbreviation or acronym I make up
                                    # I'll add the domain to my hosts file so any requests from my workstation will just be pointed to localhost
                                    ServerName abc.com
                                    ServerAlias www.abc.com

                                    # this is the webroot on my workstation
                                    DocumentRoot /var/www/abc/html

                                    # this doesn't really matter at all
                                    ServerAdmin sneakyimp@example.com

                                    UseCanonicalName Off

                                    # this separates out the errors for this domain from the other websites I run off my workstation
                                    ErrorLog /var/www/abc/logs/error.log
                                </VirtualHost>
                                

                                This is the entry in my hosts file:

                                # routes all requests for abc.com to localhost
                                127.0.0.1     abc.com
                                127.0.0.1     www.abc.com
                                

                                So basically instead of one public_html directory for my entire computer, I have one for each website that I work on. Things can get a little confusing when I try to set up https or something, but I can usually figure it out. I basically just edit my source code on my local filesystem and keep a browser open to abc.com and any changes I make take effect instantly without any uploading or anything. And rather than developing in a subdirectory, where I have to be careful to specify relative urls or establish a base url or something like that, I can develop exactly as I would on the server. My workstation and live apache confs tend to be identical.

                                Bonesnap;11049743 wrote:

                                I agree about keeping the Git repository outside the public_html folder, but how would one handle that for when moving live? I would think you would abstain completely from moving the .git folder to the live environment. I have several filters set up in FileZilla that hides certain files such as .git, .idea (PhpStorm), and some other config files that previous applications I have used created (like Koala and Prepros). So when I upload the folder to the live environment the Git repository isn't copied. Is this sufficient?

                                In my case above, the .git directory is in /var/www/abc/.git -- i.e., outside the web root both on my workstation and on the production server. When I want to pull the changes from the repo onto the production server, it's basically this

                                ssh sneakyimp@server.com
                                cd /var/www/abc
                                sudo su abc_user # assume the identity of abc_user; sometimes necessary if I'm collaborating with other users (some permissions trickery like chmod g+rws is often needed too)
                                git pull
                                exit
                                exit
                                

                                This is possible because both my workstation's git working directory and my production server's git working directory are pushing/pulling to/from the same repo. The genius of it is that I usually don't need to worry about which files have changed -- git takes care of all that for me. Also, production tends to match exactly the production branch in the repo. If a file gets corrupted or the server crashes, I can just use git to restore the files (except for config.php, database.php, etc. which I must go re-create or whatever). This to me is waaaaay easier than any FTP or SFTP client or even the rsync command. FTP tends to copy ALL the files whether they have changed or not. You may have a smarter FTP client or something, but uploading every single file is just not feasible on large websites and keeping track of which files have changed is just a waste of time when you have something like .git at your disposal, at least IMHO.

                                If I were you, I would put .idea and other project settings into the .gitignore file so they are not tracked. Otherwise, you will eventually have trouble when you are working with other devs -- they will have their own .idea folder for their workstation which may conflict with yours.

                                Bonesnap;11049743 wrote:

                                So it's expected not to track the config file and to upload it manually? What about values that are different? The database credentials are probably not going to be the same. So how does one handle that between the development environment and the live environment? Or do you basically change it once, upload it, and not really worry about it again because the file isn't being tracked anyway and the values probably aren't going to change either locally or when deployed?

                                Yep, I don't want anything in the git repo that might differ between my workstation and the dev/production servers, so I usually add those files to the .gitignore file. This means I must manually copy them over to the dev/production server and then change the values to whatever they need to be there. I do this because:
                                * I don't want to overwrite workstation/dev/production config with production/dev/workstation settings that don't work on the other machine
                                * I don't want to put any sensitive credentials on GitHub or Bitbucket; as I mentioned before, this is a thing and a serious problem for some folks
                                * it allows me to keep any settings or passwords that I use on my workstation private from other developers I may work with
                                And yes, once it's uploaded, I usually can forget about it. For some files, I may keep a copy of the production config on my workstation as config-production.php. If I can be sure that my repo is securely and privately hosted (e.g., I'm hosting it myself) then I'll actually go ahead and check config-production.php and config-dev.php into the repo.

                                Bonesnap;11049743 wrote:

                                I'm on Windows so I don't think I can do the shell script, but even if I could I'd probably skip that for this project. I'm just kinda interested in how people do deployments. I have heard of other tools but haven't devoted the time to look into them (not enough time in a day, amirite?).

                                You can script on Windows too. The PuTTY project has an scp executable you can use to transfer files, and you can script it from a Windows batch file. I hear you about time in the day. Never enough.

                                Bonesnap;11049743 wrote:

                                I regularly push my commits to Bitbucket so my repository is always backed up. The config file is currently in the repository, but sounds like that shouldn't happen. Maybe when I get closer to actual deployment I'll remove it.

                                If there are sensitive credentials in that file, you should probably change them -- not in the repo but actually change your passwords and stuff. Bitbucket might seem reliable and reputable, but security holes do happen even to them.

                                Bonesnap;11049743 wrote:

                                I just meant that if someone were to pull my config.php file they would get my database credentials, but the host is "localhost" and the user and password only have access to the database on my local PC. In other words, I'm not really worried about it. Also, the repository won't stay public. I'm only keeping it public until about v1.0.0 so anyone here who wants to take a look can, but once I actually take this game public I'll be making the repository private (but keeping the wiki and issue tracker public).

                                Yeah this doesn't sound very dangerous but I'm kind of OCD about passwords and security.

                                Bonesnap;11049743 wrote:

                                Interesting. I think I'll be adopting this approach or something very similar. I like the idea of segregating the logs into different files based on their functionality or component of the project, and using the regular error log as a type of "catch all" or just a generic error log.

                                It is helpful to have task-specific logs, but then having a bunch of different logs can be its own problem. Remember that you must prevent logs from growing too large. If you let a site run for years, they can eventually consume the whole hard drive.

                                Bonesnap;11049743 wrote:

                                I like this idea, too. Always nice to have a well-formed UI even if you are an admin, though in our cases we can probably get through a log file pretty quickly.

                                Yeah DB log makes purging old log entries easier, you can also categorize log entries, search, sort, etc. DBs are awesome.
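                                As a rough sketch of the purge idea (the `logs` table and `created_at` column are made-up names for illustration, and $conn is assumed to be an open mysqli connection), trimming old entries becomes a single statement once the log lives in the database:

```php
<?php
// Sketch: purge log entries older than 90 days from a hypothetical `logs` table.
// Assumes $conn is an already-open mysqli connection and the table
// has a DATETIME column named `created_at`.
$stmt = mysqli_prepare(
    $conn,
    "DELETE FROM logs WHERE created_at < NOW() - INTERVAL 90 DAY"
);
mysqli_stmt_execute($stmt);

// Report how many rows were trimmed, e.g. for a cron job's output.
printf("Purged %d old log entries\n", mysqli_stmt_affected_rows($stmt));
mysqli_stmt_close($stmt);
```

                                Run from a scheduled task, something like this keeps the table from growing without bound, which is much harder to do cleanly with a pile of flat log files.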

                                  So I created my database class and a couple other classes. I decided to go with a separate class to handle all the database queries. I'm not sure how I feel about this so I'd love to hear some feedback.

                                  Right now I only have one query (just a simple select query). I didn't want to get too far into it if I was heading in the wrong direction.

                                  So basically there will be three levels:

                                  Level 1 - The page/script itself
                                  I have my register.php file. On this page is a registration form. Part of the registration is retrieving a list of countries and listing them in a <select> element so the user can select one. Near the top of the page I make the following call:

                                  $countries = Region::get_countries_with_airports();
                                  

                                  And further down the page is where I print out the <option> elements.

                                  The class Region handles things regarding countries, continents, regions, states/provinces, etc. That's its purpose.

                                  Level 2 - The "main" class/function
                                  What I mean by main class or function is the class or function that's starting this chain. In this case it's Region. Inside Region I have a static method called get_countries_with_airports. Below is its code (comments omitted for brevity):

                                  public static function get_countries_with_airports()
                                  {
                                      $result    = Database::get_countries_with_airports();
                                      $countries = array();

                                      if($result)
                                      {
                                          while($country = mysqli_fetch_assoc($result))
                                          {
                                              $countries[$country['country_name']] = $country['country_id'];
                                          }
                                      }

                                      return $countries;
                                  }
                                  

                                  So remember I wanted to keep SQL queries out of my "main" classes and functions. This is how I accomplished that. This method now calls my Database class, which has a method of the same name. This method also returns an array of values rather than a result, resource, or mysqli_result object. I find this makes it more usable.

                                  Level 3 - The Database class
                                  This is the final level. Believe it or not I was toying with the idea of creating another level where I hooked into an actual DAL, but decided against it.

                                  So here we are in the Database class with another static method called get_countries_with_airports. Here is where the query is written (comments omitted for brevity):

                                  public static function get_countries_with_airports()
                                  {
                                      try
                                      {
                                          $conn = mysqli_connect(DB_HOST, DB_USER, DB_PASSWORD, DB_NAME);

                                          $query = "  SELECT
                                                          country_name, countries.country_id
                                                      FROM
                                                          countries
                                                      RIGHT JOIN
                                                          airports ON countries.country_id = airports.country_id
                                                      GROUP BY
                                                          country_name
                                                      ORDER BY
                                                          country_name";

                                          $result = mysqli_query($conn, $query);

                                          mysqli_close($conn);

                                          return $result;
                                      }
                                      catch(Exception $e)
                                      {
                                          var_dump($e);
                                      }
                                  }
                                  

                                  So right now I am not doing much with the exception, just dumping it. I will eventually log it and probably do something else with it.
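                                  One caveat worth noting here: in mysqli's default reporting mode, mysqli_connect() and mysqli_query() signal failure by returning false rather than by throwing, so the catch block above may never actually fire. You can opt in to exceptions explicitly (a sketch; the SELECT 1 query is just a placeholder):

```php
<?php
// Ask mysqli to throw mysqli_sql_exception on errors instead of returning false.
// Without this, a try/catch around mysqli_* calls is effectively dead code.
// (In PHP 8.1+ this strict mode became the default.)
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);

try {
    $conn   = mysqli_connect(DB_HOST, DB_USER, DB_PASSWORD, DB_NAME);
    $result = mysqli_query($conn, "SELECT 1"); // placeholder query
} catch (mysqli_sql_exception $e) {
    // Connection or query failures now actually land here,
    // where they can be logged instead of var_dump'd.
    error_log($e->getMessage());
}
```

                                  With that one mysqli_report() call near the top of the bootstrap, the existing try/catch structure starts doing real work.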

                                  Feedback
                                  I would love to hear some feedback on this approach and structure. An immediate disadvantage I can see is that my Database class will eventually become very, very large, since every query in the project will live there as a method. I am also essentially writing each function twice. On the plus side, I have removed all queries from my "main" classes and functions, and some Level 2 methods can reuse Level 3 methods. One thing I am not sure of, though, is at which level I should start writing try/catch blocks, and whether I should have them on all levels.

                                  Oh, and I'll be posting a link to the public repository soon!

                                  Thanks for reading! 🙂

                                    It sounds prudent to try and remove any SQL or database manipulations from your 'main' classes and functions. This sounds a bit to me like MVC practice and I've found it to be very helpful.

                                    It's a bit disappointing that
                                    a) you still have mysqli_fetch_assoc in your Region class. This means your Region class is using mysqli, and you can't change the DB engine used by your project without great difficulty later.
                                    b) Your Database class isn't so much about databases, really, as you'll have a method for everything under the sun that you might ever want to grab from the DB. This file is going to be a nightmare later and, if you were working with others on a project, I'd be willing to bet you'd run into a lot of merge conflicts where two different people had edited this file.
                                    c) Your Database's static methods are each going to have a mysqli_connect statement in them. I think that this is probably a bad use of a static method. I would imagine that a database connection is the kind of thing that should be wrapped in an object of some kind, with the connection established in a constructor.

                                    I'd suggest you look into using PDO. It works with MySQL but if you are careful, you can structure your code such that it's easy to swap out MySQL for PostgreSQL or something.
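                                    For what it's worth, the same countries-with-airports lookup in PDO might look roughly like this (a sketch; it reuses the DB_* constants from the config file discussed above):

```php
<?php
// Sketch of the same lookup using PDO instead of mysqli.
// The DSN pieces (DB_HOST, DB_NAME, etc.) are the same constants
// that already live in config.php.
$pdo = new PDO(
    'mysql:host=' . DB_HOST . ';dbname=' . DB_NAME . ';charset=utf8mb4',
    DB_USER,
    DB_PASSWORD,
    array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION) // throw on errors
);

$stmt = $pdo->query(
    'SELECT country_name, countries.country_id
     FROM countries
     RIGHT JOIN airports ON countries.country_id = airports.country_id
     GROUP BY country_name
     ORDER BY country_name'
);

// FETCH_KEY_PAIR maps the first selected column to the second,
// producing the same array($country_name => $country_id) shape
// the Region method builds by hand with mysqli_fetch_assoc.
$countries = $stmt->fetchAll(PDO::FETCH_KEY_PAIR);
```

                                    Notice the fetch loop disappears entirely, and because PDO throws exceptions in ERRMODE_EXCEPTION, the try/catch question gets simpler too.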

                                      sneakyimp;11049747 wrote:

                                      On my workstation, I have a lot of website folders in my /var/www folder. Each of these folders is not a web root but contains a subdirectory (either "public" or "html") that is the webroot for that particular project. For each website I also create a) an apache config that defines the webroot for that website and b) an entry in my hosts file (/etc/hosts on ubuntu) which points some abbreviated version of the actual domain to my local machine. The apache config for one of my sites looks like this:

                                      <VirtualHost *:80>
                                          # this domain is not a real domain, it's just some abbreviation or acronym I make up
                                          # I'll add the domain to my hosts file so any requests from my workstation will just be pointed to localhost
                                          ServerName abc.com
                                          ServerAlias www.abc.com

                                          # this is the webroot on my workstation
                                          DocumentRoot /var/www/abc/html

                                          # this doesn't really matter at all
                                          ServerAdmin sneakyimp@example.com

                                          UseCanonicalName Off

                                          # this separates out the errors for this domain from the other websites I run off my workstation
                                          ErrorLog /var/www/abc/logs/error.log
                                      </VirtualHost>
                                      

                                      This is the entry in my hosts file:

                                      # routes all requests for abc.com to localhost
                                      127.0.0.1     abc.com
                                      127.0.0.1     www.abc.com
                                      

                                      So basically instead of one public_html directory for my entire computer, I have one for each website that I work on. Things can get a little confusing when I try to set up https or something, but I can usually figure it out. I basically just edit my source code on my local filesystem and keep a browser open to abc.com and any changes I make take effect instantly without any uploading or anything. And rather than developing in a subdirectory, where I have to be careful to specify relative urls or establish a base url or something like that, I can develop exactly as I would on the server. My workstation and live apache confs tend to be identical.

                                      I think this will change the way I develop sites, and I'll be bringing this up to my boss when I get back. Just a quick question: where do you put the custom .conf file, and how does Apache know to read it?

                                      sneakyimp;11049747 wrote:

                                      In my case above, the .git directory is in /var/www/abc/.git -- i.e., outside the web root both on my workstation and on the production server. When I want to pull the changes from the repo onto the production server, it's basically this

                                      ssh sneakyimp@server.com
                                      cd /var/www/abc
                                      sudo su abc_user # assume identity of abc_user this is necessary sometimes if I'm collaborating with other users...some permissions trickery often necessary too like chmod g+rws
                                      git pull
                                      exit
                                      exit
                                      

                                      This is possible because both my workstation's git working directory and my production server's git working directory are pushing/pulling to/from the same repo. The genius of it is that I usually don't need to worry about which files have changed -- git takes care of all that for me. Also, production tends to match exactly the production branch in the repo. If a file gets corrupted or the server crashes, I can just use git to restore the files (except for config.php, database.php, etc. which I must go re-create or whatever). This to me is waaaaay easier than any FTP or SFTP client or even the rsync command. FTP tends to copy ALL the files whether they have changed or not. You may have a smarter FTP client or something, but uploading every single file is just not feasible on large websites and keeping track of which files have changed is just a waste of time when you have something like .git at your disposal, at least IMHO.

                                      If I were you, I would put .idea and other project settings into the .gitignore file so they are not tracked. Otherwise, you will eventually have trouble when you are working with other devs -- they will have their own .idea folder for their workstation which may conflict with yours.

                                      One day I'll move my workflow to be something more like this. It's on my to-do list. And I have an entry in my .gitignore file for the .idea folder; it was the first thing I did 😃

                                      sneakyimp;11049747 wrote:

                                      Yep, I don't want anything in the .git repo that might differ between my workstation and the dev/production servers, so I usually add them to the .gitignore file. This means I must manually copy them over to the dev/production server and then change the values to whatever they need to be there. I do this because:
                                      * I don't want to overwrite workstation/dev/production config with production/dev/workstation settings that don't work on the other machine
                                      * I don't want to put any sensitive credentials on GitHub or Bitbucket; as I mentioned before, this is a thing and a serious problem for some folks
                                      * it allows me to keep any settings or passwords that I use on my workstation private from other developers I may work with
                                      And yes, once it's uploaded, I usually can forget about it. For some files, I may keep a copy of the production config on my workstation as config-production.php. If I can be sure that my repo is securely and privately hosted (e.g., I'm hosting it myself) then I'll actually go ahead and check config-production.php and config-dev.php into the repo.

                                      Sounds good!

                                      sneakyimp;11049747 wrote:

                                      You can script on windows too. the PuTTY project has a scp executable you can use to transfer files. You can script it from a windows batch file. I hear you about time in the day. Never enough.

                                      Interesting. On my to-do list as well. One day...

                                      sneakyimp;11049747 wrote:

                                      If there are sensitive credentials in that file, you should probably change them -- not in the repo but actually change your passwords and stuff. Bitbucket might seem reliable and reputable, but security holes do happen even to them.

                                      Yeah this doesn't sound very dangerous but I'm kind of OCD about passwords and security.

                                      Normally I try to be OCD about passwords and security, too, but in this case the credentials are literally worthless. Also the user I created for the database has to connect via localhost and the port 3306 is blocked in my router. I'll figure something out though.

                                      sneakyimp;11049747 wrote:

                                      It is helpful to have task-specific logs, but then having a bunch of different logs can be its own problem. Remember that you must prevent logs from growing too large. If you let a site run for years, they can eventually consume the whole hard drive.

                                      That is a good point. If I can reach that sort of milestone with my game then I'd be more than happy to revisit how all the logs work.

                                      sneakyimp;11049747 wrote:

                                      DBs are awesome.

                                      That they are!

                                        Thank you for the feedback! This is exactly what I was looking for! 🙂

                                        sneakyimp;11049751 wrote:

                                        a) you still have mysqli_fetch_assoc in your Region class. This means your Region class is using mysqli, and you can't change the DB engine used by your project without great difficulty later.

                                        Honestly I'm not worried about this. I understand the importance of making your projects "database agnostic" (just watched a video presentation on this exact subject last night) but I have to be realistic with myself, and I know the likelihood of me changing the database engine to something else is very, very far down on the list of things likely to happen. I wouldn't even know where to begin with anything other than MySQL and MSSQL, and I'm NOT using MSSQL lol.

                                        sneakyimp;11049751 wrote:

                                        b) Your Database class isn't so much about databases, really, as you'll have a method for everything under the sun that you might ever want to grab from the DB. This file is going to be a nightmare later and, if you were working with others on a project, I'd be willing to bet you'd run into a lot of merge conflicts where two different people had edited this file.

                                        Yes, this was a concern I mentioned earlier about the size of this class. But I'm not sure how else to handle this. Sounds like a "have your cake and eat it, too" scenario. Either I move all the queries out of my main classes and into something else, which is going to create additional overhead since I'll be writing additional class(es) and functions, or I keep queries in my main classes and leave it at that. The only other option I can think of is this:

                                        Create a "sister" class for each main class. So for example there would be a Region_Data class that only handles queries for the Region class. It won't be inherited (though I guess it could be - maybe someone could point out advantages/disadvantages of doing that). These classes would have a constructor that takes the database credentials. Then in my Region class when I am calling the get_countries_with_airports() method I could have something like:

                                        public static function get_countries_with_airports()
                                        {
                                            $data = new Region_Data(DB_HOST, DB_USER, DB_PASSWORD, DB_NAME);
                                            $result = $data->get_countries_with_airports();
                                            // and so on
                                        }
                                        

                                        This approach satisfies a few things:

                                        1) It removes queries from the main classes and functions.
                                        2) It avoids the soon-to-be gigantic Database class and instead (like the main classes themselves) groups functions that are related with each other.
                                        3) I could probably develop an interface for these classes that implements generic CRUD functionality. Any specialized functionality would only exist in the specific class. Some classes would be large while others would be small.

                                        Disadvantages:

                                        1) I'm essentially doubling the number of classes in my project (though with spl_autoload_register is this really an issue?)
                                        2) I'm sure something else that I cannot think of at the moment

                                        Also I will be the only one working on this. I'm not concerned about others.
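                                        For what it's worth, a minimal sketch of that hypothetical Region_Data class might look like this, with the connection made once in the constructor instead of inside every method:

```php
<?php
// Sketch of the proposed "sister" class: one mysqli connection per
// instance, created in the constructor, reused by every query method.
class Region_Data
{
    private $conn;

    public function __construct($host, $user, $password, $database)
    {
        // Connect once; every method on this instance shares $this->conn.
        $this->conn = mysqli_connect($host, $user, $password, $database);
    }

    public function get_countries_with_airports()
    {
        $query = "SELECT country_name, countries.country_id
                  FROM countries
                  RIGHT JOIN airports ON countries.country_id = airports.country_id
                  GROUP BY country_name
                  ORDER BY country_name";

        return mysqli_query($this->conn, $query);
    }

    public function __destruct()
    {
        // Close the connection when the instance goes away.
        if ($this->conn) {
            mysqli_close($this->conn);
        }
    }
}
```

                                        Each main class then owns its own small data class, which keeps the per-class query lists manageable and avoids the one-giant-Database-class problem.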

                                        sneakyimp;11049751 wrote:

                                        c) Your Database's static methods are each going to have a mysqli_connect statement in them. I think that this is probably a bad use of a static method. I would imagine that a database connection is the kind of thing that should be wrapped in an object of some kind, with the connection established in a constructor.

                                        My solution above would solve this but you're right; that is something that should be in a constructor or stored in the class in private members or something. Definitely an oversight on my part. And then I wouldn't have to constantly do a try/catch block every time I want to do something. Very good catch, thank you.

                                        sneakyimp;11049751 wrote:

                                        I'd suggest you look into using PDO. It works with MySQL but if you are careful, you can structure your code such that it's easy to swap out MySQL for PostgreSQL or something.

                                        I'm not against PDO but I have decided I'd rather learn it in another, smaller project. I've seen examples of its code and it doesn't look overly complicated, but I think for now I'm going to stick with MySQLi.

                                        Also, as promised, here is the link to the repository: https://bitbucket.org/Bonesnap/airline-planet