Examples! Yes!
The most visible example of OOP really flying is in graphical user interfaces. Each widget is an object, and many are made up of several others: a Window may have a Titlebar, a MenuBar (with Menus that have MenuItems), and Scrollbars (which have a track, a slider, and a couple of buttons)... Underneath there'd be a generic "Button" object, from which more specialised buttons would be derived (close buttons, scrollbar buttons, and whatever else you fancy). The principle is that all the code that makes any sort of button gets written once, and that becomes the Button class. All the code that distinguishes a close button from an ordinary button is written once, and that becomes the CloseButton class. Then, once you've got the classes, you can use them to crank out CloseButtons galore.
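A minimal sketch of that idea in Javascript — the names `Button`, `CloseButton`, `press`, and `onPress` are invented for illustration, not taken from any real toolkit:

```javascript
// Everything common to all buttons is written once, here.
class Button {
  constructor(label) {
    this.label = label;
    this.pressed = false;
  }
  press() {
    this.pressed = true;
    this.onPress();
  }
  onPress() {} // an ordinary button does nothing special when pressed
}

// Everything that distinguishes a close button is written once, here.
class CloseButton extends Button {
  constructor(window) {
    super("×");
    this.window = window;
  }
  onPress() { // overridden behaviour: close the owning window
    this.window.open = false;
  }
}

// Crank out a CloseButton and press it.
const win = { open: true };
const close = new CloseButton(win);
close.press();
console.log(win.open); // false — the window was closed
```

`CloseButton` inherits all the press-handling machinery from `Button` and only supplies the one thing that makes it different.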
Ultimately (and ideally from the paradigm's point of view), everything is an object. Not only is a Button an object, but its appearance as well - if you make it a separate entity from the button itself you can then have Themes.
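To make that concrete, here's a hedged sketch of the theme idea — `Theme` and `paint` are made-up names, standing in for however a real toolkit separates appearance from behaviour:

```javascript
// The appearance lives in its own object...
class Theme {
  constructor(background, textColour) {
    this.background = background;
    this.textColour = textColour;
  }
}

// ...so the button itself never needs to know which theme is in use.
class ThemedButton {
  constructor(label, theme) {
    this.label = label;
    this.theme = theme; // a separate, swappable object
  }
  paint() {
    return `[${this.label}] in ${this.theme.textColour} on ${this.theme.background}`;
  }
}

const light = new Theme("white", "black");
const dark = new Theme("black", "white");

const ok = new ThemedButton("OK", light);
console.log(ok.paint()); // [OK] in black on white

ok.theme = dark; // re-theme the same button without touching its code
console.log(ok.paint()); // [OK] in white on black
```

Swapping the theme object restyles the button; none of the button's own code has to change.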
Another example is the Javascript DOM. An HTML (or XML) document is an object. Its header element is an object; ditto its title element, ditto every other element. This is the DOM representation of an HTML document. Javascript has an interface for manipulating this representation: change it in a browser that's displaying the document, and the page is redrawn to reflect the changes. Every single aspect of the page can be rewritten ad lib. An entire page could be built from scratch, or totally replaced, just by messing around with its DOM representation. No doubt ActionScript gives just as much control over Flash movies.
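Since the real DOM only exists inside a browser, this sketch models the idea with a toy stand-in `Element` class (an assumption for illustration — in a browser you'd use `document.createElement`, `getElementById`, and friends):

```javascript
// A toy stand-in for DOM nodes; the real ones come from the browser.
class Element {
  constructor(tag, text = "") {
    this.tag = tag;
    this.text = text;
    this.children = [];
  }
  appendChild(child) {
    this.children.push(child);
    return child;
  }
}

// Build "a page" entirely out of objects, the way the DOM represents one.
const doc = new Element("html");
const head = doc.appendChild(new Element("head"));
const title = head.appendChild(new Element("title", "Old title"));
const body = doc.appendChild(new Element("body"));
body.appendChild(new Element("p", "Hello"));

// Every element is an object you can reach and rewrite ad lib;
// in a browser, the page would be redrawn to match.
title.text = "New title";
body.children[0].text = "Goodbye";

console.log(title.text); // New title
```

The document is just a tree of objects, and rewriting the page is just rewriting those objects.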
Firefox's entire front end is written in this way: a dialect of XML (XUL) to describe the layout, CSS to describe its appearance, and Javascript to define its functionality. I shudder to think of the work involved in achieving such a division of labour if you couldn't have a distinct "thing" you could work with in isolation - one you could uniquely identify at any time, attach behaviour to, give an appearance to, and then position.
Of course, in the end, it's all ones and zeros. The OO paradigm could be used in literally any programming language (and a good thing too, otherwise it couldn't be translated into machine code). It's just that some languages make it easier than others: from those where using it makes for the simplest and easiest-to-understand programs, all the way to those where it's easier to use it to write a compiler to implement a more OO-friendly language.