If you put all your include files into one directory and use robots.txt to disallow that folder, well-behaved crawlers won't fetch or index anything in it.
Keep in mind that a crawler sees no more than a regular user sees (minus the images; it works from a text-only view of the site). PHP includes are resolved on the server before the response goes out, so the only hint of PHP a crawler ever gets is the .php extension in the URL.
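As a minimal sketch (the file names header.php and index.php here are just placeholders): the include is resolved server-side, and the client, crawler or human, only ever receives the assembled HTML.

<?php // header.php -- a shared fragment pulled into other pages ?>
<h1>My Site</h1>

<?php // index.php -- the page a crawler actually requests ?>
<html>
<body>
<?php include 'header.php'; // merged on the server; the client never sees this line ?>
<p>Page content here.</p>
</body>
</html>

A request for index.php returns plain HTML with the heading already in place; header.php only shows up in a crawler's view if it is requested directly, which is exactly what the robots.txt rules below discourage.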
And I don't think Google will refuse to index your site just because some of its content is static. At worst it re-crawls an unchanging page less often; the rest of your pages are still crawled normally. If crawlers really dropped pages the way you're expecting, 99% of the web would never be indexed, since most pages rarely change. Take blogs: the front page might change once a week or so, so a crawler visiting daily would usually find nothing new, and it just backs off its visits rather than dropping the page.
In short, robots.txt will do what you want. You can even disallow crawling of specific files, so you don't have to move your pages into a folder. Something like:
User-agent: *
Disallow: /include.php
Disallow: /include2.php
Disallow: /inc/
Disallow: /img/
Disallow: /images/
Disallow: /js/
Disallow: /css/
for example.
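One caveat: robots.txt is only a request, so a misbehaving bot (or a curious user) can still fetch include.php directly. A common extra safeguard, sketched here with a hypothetical IN_SITE constant, is to have each include file refuse to run unless it was pulled in by a real page:

<?php
// index.php -- the entry point defines a marker before including anything
define('IN_SITE', true);
include 'include.php';
?>

<?php
// include.php -- refuse direct requests; run only when included
if (!defined('IN_SITE')) {
    header('HTTP/1.0 403 Forbidden');
    exit;
}
// ...shared markup or logic continues here...
?>

With that in place, a direct hit on include.php returns a 403 instead of the fragment, whether or not the visitor honors robots.txt.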