I realize it may be gauche to post SO links here, but their Developer Survey is usually pretty interesting. I'd love to hear anyone's thoughts on the 2018 survey.

    < 7% female.

    Still.

    And I feel bad that only 14% of our engineering staff is female.

    🙁

      sneakyimp;11065654 wrote:

      I realize it may be gauche to post SO links here, but their Developer Survey is usually pretty interesting. I'd love to hear anyone's thoughts on the 2018 survey.

      My main thought was that I was too busy yesterday (and today, for that matter) to give it a good read. I did answer it, IIRC.

        Yeah I was shocked at how male -- and YOUNG -- the demographic is. I was also struck by the presence of scads of funny languages and frameworks I've never heard of before. Wondering a bit if I should modernize my skills. Definitely thinking I should look into AI.

          sneakyimp;11065668 wrote:

          Yeah I was shocked at how male -- and YOUNG -- the demographic is. I was also struck by the presence of scads of funny languages and frameworks I've never heard of before. Wondering a bit if I should modernize my skills. Definitely thinking I should look into AI.

          Whatever "AI" is ... ;-) :-D

          I've joined Data Science Central ... and believe it or not, after reading over there I think I've already written an AI application that's in use here.

          It's not as smart as me, although it doesn't take lunch breaks or sleep ... (well, it does [man]sleep[/man] sometimes, but not for too long, we hope ...).

          As for young ... hmm. I'm the youngest in my shop, and I was born the month after the Beatles played their last concert at Shea Stadium.

          So we're not ALL young (although no one else here can code their way out of a paper bag, so maybe the real developers are infants by comparison ...)

            Also, with any such survey, if it isn't conducted with good data-collection practices, numbers like respondent age might be misleading for sociological reasons: younger/newer developers may be (a) more likely to use SO in the first place and/or (b) more likely to respond to a survey.

            Or maybe the burn-out rate is high?

            Or they move on to management and stop having to write code, and therefore stop needing SO?

            Or they become so proficient that they don't need SO? 😉

              NogDog wrote:

              Also, with any such survey, if it isn't conducted with good data-collection practices, numbers like respondent age might be misleading for sociological reasons: younger/newer developers may be (a) more likely to use SO in the first place and/or (b) more likely to respond to a survey.

              The survey does suffer from a severe selection bias.
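
              To make that concrete, here's a toy sketch in Python of the skew NogDog describes. Every number in it is invented purely for illustration: a made-up "true" age mix and made-up per-band response rates, chosen only to show the direction of the effect.

[code]
# Toy illustration of survey selection bias (all numbers are invented).
# Even if the "true" developer population skews older, a survey whose
# younger users are more likely to respond will look much younger.

population = {            # hypothetical true share of developers per age band
    "<25":   0.20,
    "25-34": 0.35,
    "35-44": 0.25,
    "45+":   0.20,
}
response_rate = {         # hypothetical chance that someone in each band answers
    "<25":   0.30,
    "25-34": 0.20,
    "35-44": 0.10,
    "45+":   0.05,
}

# Respondents per band = true share * probability of responding
respondents = {band: share * response_rate[band] for band, share in population.items()}
total = sum(respondents.values())

for band, share in population.items():
    surveyed = respondents[band] / total
    print(f"{band:>5}: true {share:4.0%} -> surveyed {surveyed:4.0%}")
[/code]

              With those made-up numbers, developers 35 and over are 45% of the "real" population but only about 21% of respondents, which is the kind of gap that could make the survey's demographic look as young as it does.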

              dalecosp wrote:

              Whatever "AI" is ...

              According to Douglas Hofstadter, the principal problems of AI are: What is "A"? And: What is I?

                Weedpacket;11065671 wrote:

                According to Douglas Hofstadter, the principal problems of AI are: What is "A"? And: What is I?

                I do believe I can rightly apprehend the confusion of ideas which might lead to such questions.

                  Weedpacket;11065671 wrote:

                  The survey does suffer from a severe selection bias.

                  According to Douglas Hofstadter, the principal problems of AI are: What is "A"? And: What is I?

                  sneakyimp;11065676 wrote:

                  I do believe I can rightly apprehend the confusion of ideas which might lead to such questions.

                  The first one ('A') might be harder in theory; the second is harder in practice. Our EzLink(tm) system categorizes products correctly a bit over 90% of the time, at least on copiers, printers, etc. (Hardware), the last time we checked it.

                  However, throw parts for those, or toner cartridges, or document feed trays into the mix, and the score slips considerably. This became a real problem when we went from about 10K products to well over 100K ... if we upload 1,000 copiers and maybe 87 of them are marked incorrectly as printers, that's possible to fix with human intervention and a little time (30 minutes or less? Depends on the error, really).

                  If we upload 50,000 parts and 5,000 of those are categorized incorrectly, that's perhaps half a day or more of work, assuming whoever does it doesn't go blind/crazy trying to fix it all. And the question for me is, with my limited time available, do I fix the errors that are now showing in public on the WWW site, or do I fix the system so it gets smarter about those errors, and risk introducing bugs and reversing the success rate? I'd love to be able to do both, but most days only one or the other gets attention, and depending on the other projects in the pipe, maybe NEITHER.

                  Increase the system's failure rate to 20-25% and you've just shot a "man day", which probably isn't as costly in Joplin, MO as in NYC, but is still something we can't afford.
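
                  Back-of-the-envelope, the cleanup load scales linearly with both catalog size and error rate. Here's a rough Python sketch of that; the per-item fix time is a number I'm inventing for illustration (it's not an EzLink measurement), and bulk recategorization tools would shrink it a lot.

[code]
# Rough sketch: how a classifier's error rate turns into manual-cleanup time.
# SECONDS_PER_FIX is an invented assumption, not a measured figure; bulk
# recategorization tools can make the real number much smaller.

SECONDS_PER_FIX = 20  # hypothetical time for a human to recategorize one item


def cleanup_hours(catalog_size: int, error_rate: float,
                  seconds_per_fix: int = SECONDS_PER_FIX) -> float:
    """Estimate hours of human review needed to fix misclassified items."""
    errors = catalog_size * error_rate
    return errors * seconds_per_fix / 3600


scenarios = [
    (1_000, 0.087),   # ~87 of 1,000 copiers flagged as printers
    (50_000, 0.10),   # 5,000 of 50,000 parts miscategorized
    (50_000, 0.25),   # the failure rate creeps up to 25%
]
for size, rate in scenarios:
    print(f"{size:>6,} items @ {rate:5.1%} error: "
          f"~{size * rate:,.0f} fixes, ~{cleanup_hours(size, rate):.1f} hours")
[/code]

                  At a hypothetical 20 seconds per fix, the 87 miscategorized copiers line up with the "30 minutes or less" figure, while the 5,000 bad parts only stay anywhere near half a day if most of the fixes can be batched, which is exactly why the jump from roughly 10K products to over 100K hurts.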

                  And that's just money. What if we're talking about IBM's Watson and cancer diagnostics? I wouldn't want to be the guy they tell, "well, it's just that the AI has a 4% failure rate, and you're one of those cases."

                  So the level of intelligence/accuracy is really the difficult thing about AI.

                  Of course that became deadly obvious just the other day. I wonder if Uber has an accuracy metric? ;-)
