The magic of true OOP...
I've been thinking about the true meaning of OOP lately, and this thread from devshed caught my attention:
So true OOP is mostly about OOD, while fake OOP (sometimes referred to as OBP, object-based programming) can have every single line using objects while the code as a whole remains procedural. It seems confusing at first, but I think I am starting to get a taste of it. It looks to me that the object way of thinking is the key difference between a true OO programmer and a procedural programmer.
That being said, what do you think is the best way to avoid writing fake OOP code that is really procedural code in disguise? Using objects and methods to encapsulate procedural code sure ain't gonna work; that's fake OOP/OBP. Is this where architecture design/frameworks come into play?
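To make the distinction above concrete, here is a minimal Python sketch (the class names, prices, and discount rule are all invented for illustration) of "OBP" code that uses objects as mere data bags while the logic stays procedural, next to a version where the object owns its own behaviour:

```python
# Hypothetical example: names and the discount rule are invented for illustration.

# "OBP" style: the class is just a data bag; the logic lives in a procedural function.
class Order:
    def __init__(self, items):
        self.items = items  # list of (name, price) tuples

def process_order(order):
    total = 0.0
    for name, price in order.items:
        total += price
    if total > 100:
        total *= 0.9  # 10% bulk discount
    return total

# "OO" style: the behaviour moves into the object that owns the data.
class DiscountedOrder:
    BULK_THRESHOLD = 100
    BULK_RATE = 0.9

    def __init__(self, items):
        self._items = items

    def total(self):
        subtotal = sum(price for _, price in self._items)
        if subtotal > self.BULK_THRESHOLD:
            subtotal *= self.BULK_RATE
        return subtotal

items = [("widget", 60.0), ("gadget", 50.0)]
# Both compute the same discounted total; only the shape of the code differs.
assert abs(process_order(Order(items)) - DiscountedOrder(items).total()) < 1e-9
```

Both versions do the same work; the difference is where responsibility lives, which is exactly the "object way of thinking" question.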
Yes, that is what I would say is the idea here (though I disagree with the "fake" label). I remember that you were quite obstinate about thinking otherwise.
Originally Posted by Lord Yggdrasill
There's no need to do that; there isn't really "fake OOP". Remember, a programming paradigm is a tool, or perhaps a set of tools. There is nothing sacred about OOP and OOD, so if in some case having object structures with procedural programming works, do it. You are just using another tool that does the job.
Originally Posted by Lord Yggdrasill
Oh really? I know I am nowhere near your skill level, but I am definitely willing to improve myself as much as I can, especially when it comes down to object orientation. I definitely don't remember being obstinate, but if I was then I apologize. XD
It's easy to become so enthralled with something new, or something you're learning, that you focus on that thing to the exclusion of others. Perhaps you've done that a tad, and that's what laserlight was referring to?
Recently, someone posted a link here in which I read something like, "Programming is the application of technology to solve problems".
In other words, we get caught up in the code (and that's normal --- we're technicians), but we need to think outside the box to create solutions to real-world problems with the best tools we can create or obtain for the job ... regardless of personal preferences or peer pressure.
Short of rigorous testing, it's sometimes hard to make a convincing argument for or against a certain technology and/or approach to problem solving. I will say that learning to think "in objects" has broadened the scope of my understanding of my work ... and that, in itself, is a Good Thing(tm). Still, there comes a point at which it seems overkill to write an OO class for one task. If you can *know* that you'll be able to "generify" the code to apply to a future endeavor ... that's another story, of course.
I think it's reasonable to generalize that if you don't do OOA/D* first, the resulting OOP may not be as fully objectified as it could be, and may start leaning toward procedural coding in a class-driven wrapper**. How thorough that analysis/design should be, and how much effort should be spent on it, is just another factor in the whole cost/schedule/benefit decision-making process. There's no single solution that is correct for all situations, but the important thing is that you are learning the options and which questions to ask -- choosing optimal answers is as much a result of experience (and guessing) as anything else, I think.
* Depending on the given situation (e.g. size, scope, and complexity of the project -- not to mention budget), this might be a formal process with lots of time spent on it, or simply taking 5 minutes with a pencil and sheet of paper to figure out what objects you want to define.
** In some situations, that may be perfectly acceptable.
A procedural language is good for describing a procedure (quite common in a scripting environment: "Do this then this then this...").
Originally Posted by NogDog
Object-oriented languages are good for describing problem/application domains ("There's this thing and this and this...") - if you've got the time and budget to do the analysis needed to create such a domain description. If there's a particular problem immediately to hand, it's a higher priority to solve that problem than to solve every other problem as well. They're implemented procedurally.
Declarative languages are good for specifying actual problem instances ("This is what I want done: do it for me."). They're typically implemented these days in an object-oriented fashion, with the object graph being used to model the syntax and semantics of the declarative language.
In the context of DSLs, the last category is starting to become better recognised and adopted - in no small part because they do require somewhat more computing power to implement (just as object-oriented programming requires more than procedural programming) and so had to wait for interpreters and compilers to become fast enough (and/or machine code/bytecode/low-level source caches to become large enough). (Imperative DSLs do still exist for scripting purposes. But if your application is going to be scriptable, you will eventually want the scripting language to be computation-universal - so you might as well save yourself some trouble and just embed something like Python or Lua or some other well-designed language of the genre right from the start.)
But that kind of covers any significant application: the bit that keeps coding from being coding for its own sake is the job of constructing a language so that users can communicate their wants to the application, and the application can communicate the results back to the user. Declarative DSLs are formal (i.e., machine-readable) instantiations of things that previously existed in piles of requirement specifications, job tickets, wireframes, and back-of-the-envelope scribbles. (That's another thing about declarative languages: they're much more amenable to being processed in different ways for different purposes - diagramming, document generation, code generation, execution...)
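A minimal sketch of that last point (the "job spec" format and both processors are invented here, not any real DSL): one declarative description, consumed by two entirely different processors:

```python
# A tiny declarative "job spec" -- pure data, saying *what*, not *how*.
job = {
    "name": "monthly-report",
    "steps": [
        {"op": "filter", "field": "amount", "min": 100},
        {"op": "sum", "field": "amount"},
    ],
}

# Processor 1: execute the spec against some records.
def run(spec, records):
    rows = records
    for step in spec["steps"]:
        if step["op"] == "filter":
            rows = [r for r in rows if r[step["field"]] >= step["min"]]
        elif step["op"] == "sum":
            return sum(r[step["field"]] for r in rows)
    return rows

# Processor 2: generate human-readable documentation from the same spec.
def describe(spec):
    lines = ["Job: " + spec["name"]]
    for i, step in enumerate(spec["steps"], 1):
        lines.append("  %d. %s on %s" % (i, step["op"], step["field"]))
    return "\n".join(lines)

records = [{"amount": 50}, {"amount": 150}, {"amount": 200}]
assert run(job, records) == 350          # execution
print(describe(job))                     # documentation from the same data
```

The spec never changes; only the interpretation does - which is exactly why declarative descriptions lend themselves to diagramming, doc generation, and execution alike.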