The ToU reads:
"You agree that you will not use any robot, spider, scraper or other automated means to access the Site for any purpose without our express written permission."
I wonder what "automated means" covers. Sure, I understand you don't want millions of people screen-scraping your site, for performance reasons. But having to jump through three hoops to get the information you want (presumably to discourage 'automated means') actually makes those automatic scripts more attractive, not less. It's a bit like DRM, really.
And where does "automated means" end? Is a browser that automatically downloads the images in an HTML page "automated"? Is a browser itself not "automated"? Does it stop being automated when a user initiates the action? So if I invoke the script myself, it's not automated? If you offer information over HTTP, does it matter which tool accesses it, as long as that tool is a good citizen (as in: leaving a reasonable interval between requests)?
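For what it's worth, "being a good citizen" is trivial to express in code. A sketch of a polite fetcher that spaces out its requests (the class name and the 2-second default are my own invention, not anything the site prescribes):

```python
import time

class PoliteFetcher:
    """Toy rate limiter: enforces a minimum interval between requests.

    Purely illustrative; the 2-second default is an assumption,
    not a sanctioned number.
    """

    def __init__(self, min_interval=2.0):
        self.min_interval = min_interval
        self._last_request = 0.0

    def wait_turn(self):
        # Sleep just long enough so that consecutive requests are at
        # least min_interval seconds apart.
        now = time.monotonic()
        elapsed = now - self._last_request
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last_request = time.monotonic()
```

Call `wait_turn()` before every HTTP request and you are, by any reasonable definition, a better citizen than plenty of humans clicking refresh.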
For the time being, I will keep on using the tool. I might make a small script that processes Pocket Queries (the 'sanctioned' way to get information out of gc.com) and breaks them up into the individual geocaches, but that is for the future.
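That future script would not be much work either. A Pocket Query arrives as a GPX file, which is just XML with one waypoint per geocache, so splitting it up is a matter of walking the <wpt> elements. A minimal sketch (the sample GPX below is hand-made for illustration, not a real Pocket Query, which carries extra groundspeak extensions):

```python
import xml.etree.ElementTree as ET

# Tiny hand-made stand-in for a Pocket Query GPX file.
SAMPLE_GPX = """<?xml version="1.0"?>
<gpx xmlns="http://www.topografix.com/GPX/1/0">
  <wpt lat="52.09" lon="5.12"><name>GCAAAA</name></wpt>
  <wpt lat="52.37" lon="4.89"><name>GCBBBB</name></wpt>
</gpx>"""

NS = {"gpx": "http://www.topografix.com/GPX/1/0"}

def split_pocket_query(gpx_text):
    """Return one (name, lat, lon) tuple per geocache in the query."""
    root = ET.fromstring(gpx_text)
    caches = []
    for wpt in root.findall("gpx:wpt", NS):
        name = wpt.find("gpx:name", NS).text
        caches.append((name, float(wpt.get("lat")), float(wpt.get("lon"))))
    return caches
```

From there, writing each cache out to its own file is a one-liner per waypoint.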
It reminds me of the time we got an angry call from a website owner. His site was broken (it generated links to nowhere in a never-ending recursive loop) and so our spider blew through his monthly bandwidth allowance in a single night. Whose problem is that? Whose fault is that? Who needs to repair their software? And if you don't want to be crawled, why don't you have a robots.txt file?
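And honoring robots.txt is not exactly rocket science for the crawler side either. Python ships a parser for it in the standard library; a sketch with a made-up robots.txt (the paths and user agent are illustrative):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content; paths are made up for illustration.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

print(parser.can_fetch("MySpider", "http://example.com/private/page.html"))  # False
print(parser.can_fetch("MySpider", "http://example.com/public/page.html"))   # True
```

A site owner who writes two lines like that never gets crawled by any well-behaved spider, ours included. No angry phone call required.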