Bonesnap;11051013 wrote:

Ideally I'd like the above message to come out as something like "The email address you have entered is already in our system. Please enter a different one", or something to that effect; basically something that a layperson would understand. Also, of course, I don't want to expose any column names to the outside.

My guess is that a "duplicate entry 'foo' for key 'bar'" would apply to any situation where you are prevented from entering a record due to a unique index. Would you also want to map those other messages or just this one? Seems to me you could easily recognize this particular error with a regex and extract the email address from it.

<?php
// Recognize MySQL's duplicate-entry message for the user_email index
// and swap it for something a layperson will understand.
$error = "Duplicate entry 'email@host.com' for key 'user_email'";
$pattern = '/Duplicate entry \'(.*)\' for key \'user_email\'/';
$matches = null;
if (preg_match($pattern, $error, $matches)) {
    // $matches[1] holds the offending email address if you want to echo it back
    $user_message = "The email address you have entered is already in our system. Please enter a different one";
} else {
    $user_message = $error;
}
echo $user_message . PHP_EOL;

    Because I'm too lazy to write new example code, here is literally how I handle it:

                try {
                    $success = $statement->Execute();
                } catch (DataException $e) {
                    if ($e->getCode() == self::UNIQUE_CONSTRAINT_VIOLATION) {
                        // Message looks like: Duplicate entry 'value' for key 'index_name'.
                        // Trim the quotes, skip the leading "Duplicate entry '" (17 chars),
                        // then split on "' for key '" to get the value and the index name.
                        list($sVal, $sCol) = explode("' for key '", substr(trim($e->getMessage(), "'"), 17));
                        $oRet->AddError("Unable to add user, $sVal is already used for $sCol", $e);
                    } else {
                        $oRet->AddError('Unable to query database', $e);
                    }
                }
    

    The only downside to this is that after the user fixes, say, the username, the email might still fail the unique constraint, meaning the second attempt might still not save. I have not found a way around this problem just yet.

      sneakyimp;11051037 wrote:

      Would you also want to map those other messages or just this one?

      Hmm, that's a good question. I actually only realized the mysqli_stmt object stored multiple errors at the last minute (of making my post). I did a quick look through my tables and there are no tables that have more than one UNIQUE constraint (excluding primary/foreign keys), but that of course doesn't mean there won't be in the future. So I suppose I would have to make this somewhat dynamic, since at the very least the column name would be different. Would it be considered poor design to keep a "master list" of the key names and, once I've parsed the error, check the list for the appropriate message? Right now it's only about a handful (like 4 or 5), but it's possible that it could grow in the future.

      I figured writing some kind of regular expression or something to parse it would be the way to go; I was just curious how others handle this sort of situation. Traditionally I've always checked the database for duplicates before attempting to insert, but I read (I believe it was on here somewhere) that it would be best to let the database handle that and to deal with the error appropriately.

      Derokorian;11051039 wrote:

      Because I'm too lazy to write new example code, here is literally how I handle it:

                  try {
                      $success = $statement->Execute();
                  } catch (DataException $e) {
                      if ($e->getCode() == self::UNIQUE_CONSTRAINT_VIOLATION) {
                          // Message looks like: Duplicate entry 'value' for key 'index_name'.
                          // Trim the quotes, skip the leading "Duplicate entry '" (17 chars),
                          // then split on "' for key '" to get the value and the index name.
                          list($sVal, $sCol) = explode("' for key '", substr(trim($e->getMessage(), "'"), 17));
                          $oRet->AddError("Unable to add user, $sVal is already used for $sCol", $e);
                      } else {
                          $oRet->AddError('Unable to query database', $e);
                      }
                  }
      

      The only downside to this is that after the user fixes, say, the username, the email might still fail the unique constraint, meaning the second attempt might still not save. I have not found a way around this problem just yet.

      How is it possible that it could fail on a subsequent attempt? Isn't each attempt "oblivious" to any other attempt? In my case the email address is the username (I didn't want to deal with usernames).

        Bonesnap;11051049 wrote:

        Hmm, that's a good question...did a quick look through my tables and there are no tables that have more than one UNIQUE constraint (excluding primary/foreign keys), but that of course doesn't mean there won't be in the future. So I suppose I would have to make this somewhat dynamic, since at the very least the column name would be different.

        Shouldn't be too hard to alter my pattern for other unique indexes. Just replace the index name and error message. Not hard to imagine some kind of array structure and loop that would map index names onto suitable user-friendly messages.
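
        Something along these lines, perhaps (a quick sketch; the index names and wording are just examples):

        <?php
        // Map unique index names onto user-friendly messages.
        $friendly_messages = array(
            'user_email' => 'The email address you have entered is already in our system. Please enter a different one.',
            'user_name'  => 'That username is already taken. Please choose another.',
        );

        $pattern = "/Duplicate entry '.*' for key '(.+)'/";
        if (preg_match($pattern, $error, $matches) && isset($friendly_messages[$matches[1]])) {
            $user_message = $friendly_messages[$matches[1]];
        } else {
            // Fall back to something generic rather than exposing column/index names.
            $user_message = 'Unable to save your information. Please try again.';
        }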

        Bonesnap;11051049 wrote:

        Would it be considered poor design to keep a "master list" of the key names and, once I've parsed the error, check the list for the appropriate message? Right now it's only about a handful (like 4 or 5), but it's possible that it could grow in the future.

        If your index names change, this array or "master list" would also have to change. So that may complicate code upkeep. Do you really need such a mapping or do you really want to go to this degree of code abstraction? Seems to me that you'd only be dealing with each table/index in some very specific context. Not much point IMHO in building some kind of global/centralized code or data structures -- unless of course you are deploying your code in multiple languages and need different translations for different languages.

        Bonesnap;11051049 wrote:

        I figured writing some kind of regular expression or something to parse it would be the way to go; I was just curious how others handle this sort of situation. Traditionally I've always checked the database for duplicates before attempting to insert, but I read (I believe it was on here somewhere) that it would be best to let the database handle that and to deal with the error appropriately.

        I wouldn't necessarily say that regex is required. I'd be more inclined to check the error code (a number) first because it's more highly structured data. On the other hand, the error code probably doesn't tell you which index caused the problem.

        As for checking for duplicates first, you now understand the tradeoffs. If you decide to restore your check-first approach, I would recommend using START TRANSACTION and COMMIT or whatever -- people like to hit refresh on the page when your server gets busy.
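
        For example, a check-first insert wrapped in a transaction might look roughly like this (a sketch only, assuming mysqli with mysqlnd and an InnoDB table; the table/column names are made up, and the unique index remains the real safety net):

        <?php
        $mysqli->begin_transaction();
        // FOR UPDATE locks the matching index entry (or the gap where it would go)
        // so a concurrent insert has to wait until we commit or roll back.
        $stmt = $mysqli->prepare('SELECT id FROM users WHERE email = ? FOR UPDATE');
        $stmt->bind_param('s', $email);
        $stmt->execute();
        if ($stmt->get_result()->num_rows > 0) {
            $mysqli->rollback();
            $user_message = 'The email address you have entered is already in our system.';
        } else {
            $ins = $mysqli->prepare('INSERT INTO users (email) VALUES (?)');
            $ins->bind_param('s', $email);
            $ins->execute();
            $mysqli->commit();
        }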

          sneakyimp;11051051 wrote:

          Shouldn't be too hard to alter my pattern for other unique indexes. Just replace the index name and error message. Not hard to imagine some kind of array structure and loop that would map index names onto suitable user-friendly messages.

          Yes, that is what I imagined by a "master list" - at some point in the code, somewhere, there will have to be a list of the keys to check against. And with that I can come back with the appropriate error message.

          sneakyimp;11051051 wrote:

          If your index names change, this array or "master list" would also have to change. So that may complicate code upkeep. Do you really need such a mapping or do you really want to go to this degree of code abstraction? Seems to me that you'd only be dealing with each table/index in some very specific context. Not much point IMHO in building some kind of global/centralized code or data structures -- unless of course you are deploying your code in multiple languages and need different translations for different languages.

          I can't imagine my index names changing, but I can see the list of them increasing. I am just looking for a simple, clean way to take the error message and change it into something user friendly. I may only be dealing with the table/index in a specific context, but the execute() method doesn't know what that is. I think if I just created a function that dealt with this it would take care of it relatively easily. The language thing was something I envisioned, but as something so far down the road that it wouldn't impact what I am doing now.

          sneakyimp;11051051 wrote:

          I wouldn't necessarily say that regex is required. I'd be more inclined to check the error code (a number) first because it's more highly structured data. On the other hand, the error code probably doesn't tell you which index caused the problem.

          I would definitely check the error code first before attempting to use the regex.
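
          With mysqli, that might look something like this (a sketch; 1062 is MySQL's ER_DUP_ENTRY code, and the lookup function is hypothetical):

          <?php
          // After $stmt->execute() has failed:
          if ($stmt->errno === 1062) {
              // Definitely a duplicate; now the regex only has to identify which index.
              if (preg_match("/Duplicate entry '.*' for key '(.+)'/", $stmt->error, $m)) {
                  $user_message = friendly_message_for($m[1]); // hypothetical mapping function
              }
          } else {
              $user_message = 'Unable to save your information. Please try again.';
          }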

          sneakyimp;11051051 wrote:

          As for checking for duplicates first, you now understand the tradeoffs. If you decide to restore your check-first approach, I would recommend using START TRANSACTION and COMMIT or whatever -- people like to hit refresh on the page when your server gets busy.

          I still like this approach better as it reduces the number of hits to the database. Let's face it, most of the time people are going to enter a unique value, so checking the database on every attempt just in case doesn't seem like the best approach.

            Bonesnap;11051049 wrote:

            How is it possible that it could fail on a subsequent attempt? Isn't each attempt "oblivious" to any other attempt? In my case the email address is the username (I didn't want to deal with usernames).

            I have a table with 3 unique constraints on it, so it's possible someone includes non-unique values for all 3 fields. In my case, they're username, email and integrated_app_id, so someone could give me a username, email AND integrated ID that I've already seen, but PDO (or MySQL?) only tells me about one. So after they change, say, the username, it's possible I then have to tell them the email is in use. For this reason I both check ahead of time (with as-you-type AJAX requests) and at insertion (with constraints on the table).

              Got the duplicate value thing sorted out. I'm using transactions and I have a function that "translates" the error messages into something more user friendly.

              Question: regarding authentication I have a class that generates a token to be used on my forms when submitting them. I have made the executive decision that all my forms are going to be submitted via AJAX as I believe it improves the user experience overall. When submitting a form the first thing that is checked is if the token is valid. Originally I destroyed the token upon submission but of course if the submission fails due to bad input, the user can still try to submit again; however, since the token is destroyed, it will never submit. So what I am wondering is... should I always destroy the old token and resend a new one, or just not destroy the token until the form is successfully submitted? I am thinking the former is the best approach but would love to hear your guys' opinions.

                Bonesnap;11051149 wrote:

                Got the duplicate value thing sorted out. I'm using transactions and I have a function that "translates" the error messages into something more user friendly.

                Question: regarding authentication I have a class that generates a token to be used on my forms when submitting them. I have made the executive decision that all my forms are going to be submitted via AJAX as I believe it improves the user experience overall. When submitting a form the first thing that is checked is if the token is valid. Originally I destroyed the token upon submission but of course if the submission fails due to bad input, the user can still try to submit again; however, since the token is destroyed, it will never submit. So what I am wondering is... should I always destroy the old token and resend a new one, or just not destroy the token until the form is successfully submitted? I am thinking the former is the best approach but would love to hear your guys' opinions.

                I may not be much help here, but this token you refer to sounds a bit like a session id. No token, no session. Is there some difference? You may find discussion about when/how often to refresh/destroy a session id to be helpful.

                  sneakyimp;11051157 wrote:

                  I may not be much help here, but this token you refer to sounds a bit like a session id. No token, no session. Is there some difference? You may find discussion about when/how often to refresh/destroy a session id to be helpful.

                  I was thinking it's more of a "Don't process the same submission multiple times" token. In which case, the former is the best option - every time you check a token, you generate a new one.

                    Derokorian;11051163 wrote:

                    I was thinking it's more of a "Don't process the same submission multiple times" token. In which case, the former is the best option - every time you check a token, you generate a new one.

                    I'm assuming you mean a POST operation?

                    I've always tried to do the page redirect after handling the post info so users don't just hit refresh -- but now I'm just realizing you are doing AJAX.

                    I'm guessing you are trying to avoid a situation where someone gets all click-happy on the submit button? Sort of hard to think of a solution without more detail. Are you storing the token on the server in order to compare? Or is this a JavaScript var?

                      sneakyimp;11051167 wrote:

                      I'm assuming you mean a POST operation?

                      I've always tried to do the page redirect after handling the post info so users don't just hit refresh -- but now I'm just realizing you are doing AJAX.

                      I'm guessing you are trying to avoid a situation where someone gets all click-happy on the submit button?

                      Yes, this is what I assumed, will wait for Bonesnap to confirm 🙂

                      sneakyimp;11051167 wrote:

                      Sort of hard to think of a solution without more detail. Are you storing the token on the server in order to compare? Or is this a JavaScript var?

                      If it is like I'm thinking, then you would store it in both - the client and the server's session. I personally use an array of valid tokens, and remove a token once it's been used. I do this because people like to have 10 tabs open to the site, and I don't want to invalidate the form in tab 1 because they went to a form in tab 7. But even if you send a refresh/location header AFTER processing a save, it's still possible to process the same payload more than once without using a token like this. I could click submit 10 times, as long as I click before my browser processes the response from the last click. Try it sometime 🙂
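
                      In sketch form, something like this (simplified from what I actually do; names are illustrative):

                      <?php
                      // Issue a token - one per rendered form, so each tab gets its own.
                      function issue_token() {
                          $token = base64_encode(openssl_random_pseudo_bytes(32));
                          $_SESSION['form_tokens'][$token] = time();
                          return $token;
                      }

                      // Consume a token - valid exactly once.
                      // (You might also prune stale entries by their timestamp.)
                      function consume_token($token) {
                          if (isset($_SESSION['form_tokens'][$token])) {
                              unset($_SESSION['form_tokens'][$token]);
                              return true;
                          }
                          return false;
                      }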

                      Also, as someone into securing everything - I'm surprised you don't use a form token, even with a redirect. Think of the security implications - now you have both a one-time-use token in the form and a cookie identifying the session to validate that the form is from the trusted user. Without this, I can intercept a post request from user_a and make as many post requests as I want. But if you have a one-time-use token, it doesn't matter if I got your token on the way to the server, because I can't use it again!

                        I am using the token to verify the form submission came from the correct place. My understanding is this prevents (or at least mitigates) CSRF attacks and just hardens the application overall. The token is stored in a $_SESSION variable which is then checked server-side when the form is submitted. You can see my Authentication class here. The verify_token() method has a quick comment regarding this.

                        Is openssl_random_pseudo_bytes() sufficient enough? I gleaned most of this from a video online but it fit my basic knowledge of the subject. Supposedly you should base64_encode() it so it doesn't mess up your HTML markup (it's not done for security purposes).

                        Basically the form loads and a token is generated and stored in the session as well as in a hidden input field of the form. When it's submitted, it's verified server-side. If there's no match then the form processing dies immediately and an error message is returned to the user. Since my forms are AJAX, no new token is being generated. Originally this caused issues since verify_token() unsets the session variable regardless of the outcome. At the moment I have commented that out and was going to return to it (AKA, now).

                        From the sounds of it I should be returning a new token regardless of the outcome of verify_token(), which is what I thought to do.
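
                        In other words, something like this on the server side (a simplified sketch of what I mean, not my actual class):

                        <?php
                        // On page load: generate a token and store it in the session
                        // (it also goes into the form's hidden input).
                        $_SESSION['csrf_token'] = base64_encode(openssl_random_pseudo_bytes(32));

                        // On AJAX submit: verify, then rotate the token regardless of outcome.
                        $valid = isset($_POST['token'], $_SESSION['csrf_token'])
                              && hash_equals($_SESSION['csrf_token'], $_POST['token']); // hash_equals() needs PHP 5.6+
                        $_SESSION['csrf_token'] = base64_encode(openssl_random_pseudo_bytes(32));
                        // ...then return the new token in the JSON response so the form can update its hidden field.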

                          Bonesnap;11051191 wrote:

                          My understanding is this prevents (or at least mitigates) CSRF attacks and just hardens the application overall.

                          I imagine it certainly would help.

                          Bonesnap;11051191 wrote:

                          Is openssl_random_pseudo_bytes() sufficient enough? I gleaned most of this from a video online but it fit my basic knowledge of the subject. Supposedly you should base64_encode() it so it doesn't mess up your HTML markup (it's not done for security purposes).

                          'Sufficient enough for what?' would be my question. Are you protecting yourself from the CIA? Or your little sister? I would say that function sounds pretty strong from a cryptographic perspective and will suffice for such tokens. If, on the other hand, you are needing to protect state secrets from another well-funded state, you should charge a lot of money for something more considered. You might consider, for example, that cryptographic attacks often depend on the attacker's ability to detect patterns in one's random number generators and therefore it might not be such a good idea to directly expose the output of this function to snoopers. You might charge your clients money to read a lot of fancy, hard-to-understand research papers or amazing stuff from Bruce Schneier.

                          I would imagine the token would be most effective if it changes every time. Is there any chance this might make your web app really persnickety? What happens if someone hits the back button? Or refresh? Do your web server session and the JavaScript get out of sync over the token? If so, will it usually right itself? I seem to recall having a bit of difficulty with a Q&A captcha along these lines. Nothing terrible really; the form could just be annoying if your submission failed because you had to enter a different captcha.

                            sneakyimp;11051261 wrote:

                            'Sufficient enough for what?' would be my question. Are you protecting yourself from the CIA? Or your little sister? I would say that function sounds pretty strong from a cryptographic perspective and will suffice for such tokens. If, on the other hand, you are needing to protect state secrets from another well-funded state, you should charge a lot of money for something more considered. You might consider, for example, that cryptographic attacks often depend on the attacker's ability to detect patterns in one's random number generators and therefore it might not be such a good idea to directly expose the output of this function to snoopers. You might charge your clients money to read a lot of fancy, hard-to-understand research papers or amazing stuff from Bruce Schneier.

                            Indeed an open-ended question. I just meant reasonable. I'm sure if I said I was using md5(time()) to generate my tokens I would be ripped apart :p I am confident in the tokens that are generated; what I've read online about the function is positive about its use. You just have to make sure the extension is installed, which I am checking for.

                            sneakyimp;11051261 wrote:

                            I would imagine the token would be most effective if it changes every time. Is there any chance this might make your web app really persnickety? What happens if someone hits the back button? Or refresh? Do your web server session and the JavaScript get out of sync over the token? If so, will it usually right itself? I seem to recall having a bit of difficulty with a Q&A captcha along these lines. Nothing terrible really; the form could just be annoying if your submission failed because you had to enter a different captcha.

                            I ended up verifying, regenerating, and returning a new token every time there is an AJAX call. The only time I did not return the new token was on a successful registration, though the user could just refresh the page and register again (with different info of course).

                            I haven't tested the back button but my forms are only one page. I should check the back button followed by the forward button, though. When the page loads a new token is generated and stored, so that covers refresh. If there is an error like a mismatch I regenerate and return a new token, so it should correct itself.

                            I thought about captcha but decided against using it. I hate them. Users hate them. New users are sent a confirmation email to activate their account. Eventually I'll have a cron job running once a month to prune any accounts that haven't been activated in the last 30 days. If I find there are issues with bots/spam accounts I'll take action, but until then I have made the registration as simple as possible.

                              Bonesnap;11051271 wrote:

                              I thought about captcha but decided against using it. I hate them. Users hate them. New users are sent a confirmation email to activate their account. Eventually I'll have a cron job running once a month to prune any accounts that haven't been activated in the last 30 days. If I find there are issues with bots/spam accounts I'll take action, but until then I have made the registration as simple as possible.

                              I kinda felt the same way until one of my sites passed the 2-million user mark and most of the accounts were spam accounts. I ended up implementing a Q&A captcha, i.e. you ask a question that requires language understanding and reasoning and for which the answer is well defined. Even just introducing a few questions seems to have foiled both the bots and foreign-language spammers -- at least for now.

                                sigh

                                Lesson learned.

                                The last thing I was working on was implementing PHPMailer so I could send the activation email. Adding it into my project and sending out emails was pretty easy.

                                While doing this I created a mail.php file that was being included via the config.php file so I wouldn't have to set "global" settings like the SMTP server, password, etc. every time I wanted to send out an email. Unfortunately I created the file via PhpStorm which usually displays a little dialog window asking if you want to add the file to Git. The dialog has one of those "Remember my choice" checkboxes that I checked a while ago because 99% of the time when I create a file I'm going to want to track it.

                                So I added the file but made the mistake of not adding it to my .gitignore file first. Normally PhpStorm colours new (and tracked) files green, new (but untracked) files red, and ignored files grey. The mail.php file was grey. Except that it was still technically being tracked, and I committed and pushed the file to my repository, which is public. With my (development) email credentials in it. I changed them immediately so it's fine, but damn, if I hadn't noticed that would have sucked. I mean there's nothing in the email account; it's simply an account I use to play around with when working on projects, but it's annoying.

                                Apparently I had to run the following command to stop the tracking:

                                git rm --cached mail.php

                                And then commit it. Which I did, and then pushed that commit, though it's not yet reflected in my repository.

                                Annoying, but lesson learned.

                                  THE FILE MAY NOT BE GONE. It's in the repo's history. If you changed your password and stuff you should be OK -- but just to be paranoid, I'd make sure I never used that password again for anything else either.
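
                                  If you ever need the file truly gone from history, you have to rewrite it and force-push -- something along these lines (destructive; anyone with a clone has to re-fetch afterwards):

                                  git filter-branch --index-filter 'git rm --cached --ignore-unmatched mail.php' -- --all
                                  git push --force --all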

                                  One of the rather thorny issues I've encountered in numerous different ways is the management of sensitive credentials, and of files that differ between my workstation, dev, and production. Some examples of these credentials:
                                  * database credentials
                                  * email credentials
                                  * cloud credentials
                                  * payment gateway credentials (yikes!)
                                  * file paths
                                  * domain names
                                  * .htaccess files (in theory these can be made such that they are not domain-specific, but in practice it can be tricky if your .htaccess file is elaborate)

                                  Almost all of these are very sensitive data. I originally tried to put them all in one place. E.g., a config.php file or in a folder named 'secret' or 'credentials' or something. This makes it easier to avoid the problem you just had because all you have to do is make sure the one file/folder is in the .gitignore file and you should be safe. In practice, though, these machine-specific or sensitive bits of data tend to be kinda scattered all over one's project. You have to be vigilant with your .gitignore file.

                                  For non-sensitive data that differs by machine (workstation, dev, testing, production) I find that it's usually a good idea to put the original file in your .gitignore file so that it doesn't get overwritten when you deploy source from git. E.g., the db credentials file in codeigniter is at

                                  {project_root}/application/config/database.php

                                  I usually add that file to the .gitignore file and then we have several variants that might be in the git repo (IF THE GIT REPO IS PRIVATE, NOT IF IT'S ON GITHUB):

                                  {project_root}/application/config/database-DEV.php
                                  {project_root}/application/config/database-PROD.php
                                  {project_root}/application/config/database-TEST.php
                                  

                                  And then I might also have a {project_root}/application/config/database-SNEAKYIMP.php file on my workstation (also in the .gitignore file maybe).
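
                                  If you'd rather not copy the right variant into place by hand during deploy, a bootstrap snippet along these lines could pick it automatically (illustrative only; APPPATH is CodeIgniter's application-path constant, and APP_ENV is an environment variable you'd define yourself):

                                  <?php
                                  // Load the per-environment variant if present, else the default (gitignored) file.
                                  $env = strtoupper(getenv('APP_ENV') ?: 'DEV'); // DEV, PROD, TEST
                                  $variant = APPPATH . 'config/database-' . $env . '.php';
                                  if (is_file($variant)) {
                                      require $variant;
                                  } else {
                                      require APPPATH . 'config/database.php';
                                  }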

                                  Anyways, I have always found it to be a nuisance both managing all these conflicting files and also keeping sensitive ones out of github repos. Would love to hear any creative thoughts on managing this problem.

                                    The saga continued! Something happened and the file was still being tracked, and I pushed up the file with my new credentials in it. It was like that for about an hour and a half. I only noticed when I came back to do some more work. Very frustrating. I redid the command, but this time did the commit and push via the command line instead of PhpStorm. I also completely closed PhpStorm and reopened it, and everything is working correctly now. The password has of course been changed. These passwords were randomly generated so I am not worried about their reuse.

                                    sneakyimp;11051301 wrote:

                                    For non-sensitive data that differs by machine (workstation, dev, testing, production) I find that it's usually a good idea to put the original file in your .gitignore file so that it doesn't get overwritten when you deploy source from git.

                                    I usually add that file to the .gitignore file and then we have several variants that might be in the git repo (IF THE GIT REPO IS PRIVATE, NOT IF IT'S ON GITHUB):

                                    And then I might also have a {project_root}/application/config/database-SNEAKYIMP.php file on my workstation (also in the .gitignore file maybe).

                                    That's a smart idea. I should have done that with the config.php file since it has the URL in it and will be a pain in the ass when (if) I eventually get my project onto something public. Also the tip about the "secret" folder is a good idea, too.

                                    Not sure on tips about tracking/not tracking these files. These are the sort of things I wish people would cover in their Git tutorials/videos/whatever. Everything is about how to "get started" but typically ends there. Workflows, branching techniques, commit message tips, tracking files, dealing with merge conflicts, etc. would all be useful info. At this point I know how to use Git, but this extra stuff would be a huge improvement. Sometimes these items are touched on, but only superficially.

                                      Bonesnap;11051313 wrote:

                                      The saga continued! Something happened and the file was still being tracked, and I pushed up the file with my new credentials in it. It was like that for about an hour and a half. I only noticed when I came back to do some more work. Very frustrating. I redid the command, but this time did the commit and push via the command line instead of PhpStorm. I also completely closed PhpStorm and reopened it, and everything is working correctly now.

                                      That sounds frustrating to me. I'd file a bug report with PhpStorm. Sounds like the .gitignore file is being cached in the git-guts.

                                      Bonesnap;11051313 wrote:

                                      That's a smart idea. I should have done that with the config.php file since it has the URL in it and will be a pain in the ass when (if) I eventually get my project onto something public. Also the tip about the "secret" folder is a good idea, too.

                                      The secret folder is not always feasible if you are using someone else's framework. There's also still the challenge of making sure that the various credential files are similar to each other. For example, what if you add a new constant definition to config.php on your workstation? It can become a manual task to make sure all of the various versions of these files contain all the necessary definitions.

                                      Bonesnap;11051313 wrote:

                                      These are the sort of things I wish people would cover in their Git tutorials/videos/whatever. Everything is about how to "get started" but typically ends there. Workflows, branching techniques, commit message tips, tracking files, dealing with merge conflicts, etc. would all be useful info. At this point I know how to use Git, but this extra stuff would be a huge improvement. Sometimes these items are touched on, but only superficially.

                                      I agree. Version control is creepy at first, whether it's CVS, SVN, or GIT. I bought a book on git (which I seem to have misplaced) that does an admirable job of explaining the branching concept, but it's 400+ pages -- and hardly riveting reading. GIT is pretty amazing though. Waaaaaay better than CVS or SVN or SourceSafe IMHO.

                                        Obviously it's my own opinion, but I think I've handled config well in my framework, and basically it works like this... The framework has a config folder with any configuration values required by the framework itself (e.g. the model needs to know the DB engine, host, etc.), but it may not hold any real values (i.e. a config value may be null, with just the key defined). Next, the application has a configuration folder; these files are loaded after the framework's configuration files so the application can overwrite values in the FW config. E.g. a specific site always uses a database named XYZ, so that can go in the application-level config, whereby it overwrites the db name from the FW config. Finally, at the root level is a config folder. This folder contains only one file in my repo: a .gitignore that ignores everything but itself. This is where private credentials and install-specific items go (such as the db host, for example).

                                        Dero\Core\Config is the class from my FW, and as you can see it loads in the order described and merges/overwrites. So let's look at a simple example, basic website configuration:

                                        First is a config in the framework, for example website.json; this is what the FW needs to set up the website in the files I've created. Just some general configuration settings:

                                        {
                                          "image_url": "/images/",
                                          "script_url": "/scripts/",
                                          "style_url": "/styles/",
                                          "site_url": "/",
                                          "template": {
                                            "extensions": ["phtml","php", "tpl", "html"],
                                            "paths": ["dero/view"]
                                          }
                                        }

                                        Next is a config file for the application; for example, my website has this website.json, which simply defines another view path and the URL:

                                        {
                                          "site_url": "http://derokorian.com",
                                          "template": {
                                            "paths": ["app/view"]
                                          }
                                        }

                                        Finally, there is the ignored config. There is never anything in these files that isn't defined in the application config or the framework's config; this way, when I add a new configuration item, anyone that pulls the update can see the additionally defined fields for use. For example, here is my local website.json, which just changes to my local domain:

                                        {
                                          "site_url": "http://local.derokorian.com"
                                        }
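
                                        Conceptually the merge is just each layer decoded and laid over the previous one, roughly like this (a simplified sketch, not the actual Dero\Core\Config code; whether nested lists append or get replaced is an implementation detail):

                                        <?php
                                        // Layer the configs: framework, then application, then install-specific.
                                        // Later layers win; missing files are simply skipped.
                                        $config = array();
                                        foreach (array('dero/config', 'app/config', 'config') as $dir) {
                                            $file = $dir . '/website.json';
                                            if (is_file($file)) {
                                                $config = array_replace_recursive($config, json_decode(file_get_contents($file), true));
                                            }
                                        }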

                                        This is a rather trivial example, because nothing here is worth protecting. But let's look at my Flickr bar - it's a controller for my application that just displays some pictures from the service. For that I have this flickr.json in the application config:

                                        {
                                          "api_key": null,
                                          "secret": null,
                                          "favorite_bar": {
                                            "photoset": "72157629096003262",
                                            "userid": "57781201@N05"
                                          }
                                        }

                                        So here we can see api_key and secret are null. I'm simply providing the definition that these must exist, but I don't want them checked into any type of version control. photoset and userid are publicly available just by looking on flickr.com, so there's no reason they can't be checked in.

                                        So now the problem becomes: how do I get the correct configuration onto a server? Well, I generally SCP it from local testing or a testing environment... BUT I have recently started a new project to manage these configurations. It's basically just a repo with a cron that installs on a server, along with a folder structure defining the configuration for each project per hostname. It's a privately hosted repo (on a box in my living room, actually) so I'm not worried about it being in a repository. The purpose of the cron is to pull any new configuration I push to the repo. This is just a convenience project, and it allows me to set up new hosts easily by copying a folder from a similar server and making the necessary changes. Then I just install the cron and call it a day.