Fail the first: it doesn't appear to support receiving compressed feeds. This is somewhat forgivable, as decompressing the feed does require a little more processing, but it saves massively on bandwidth. Enabling it on one site I admin reduced the bandwidth needed for most textual content (webpages, stylesheets, scripts, and so on) by over 50%.
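For the curious, here's a minimal sketch (in Python, my choice of language, not anything Amazon actually uses) of what supporting compression involves on the client side: advertise gzip in the Accept-Encoding request header, then decompress the body if the server says it used it.

```python
import gzip

# A compression-aware client sends this with its request.
REQUEST_HEADERS = {"Accept-Encoding": "gzip"}

def decode_body(body, content_encoding):
    """Decompress the response body if the server gzipped it.

    `body` is the raw response bytes; `content_encoding` is the value
    of the Content-Encoding response header (or None if absent).
    """
    if content_encoding == "gzip":
        return gzip.decompress(body)
    return body
```

That's the whole trick: one request header out, one decompress call in, and the bytes on the wire drop by half or more for text.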
Fail the second: it apparently has no concept of caching. HTTP provides many ways to specify how a page should be cached and when it must be reloaded, but the simplest is for the server to send a date (Last-Modified) and a unique identifier (an ETag) for the current version of the page. The client can then make a conditional request, saying that it only wants a new copy of the page if it's changed since the last date/version it saw. Supporting this on your site is essential if you don't want RSS readers to eat your bandwidth, but it does rather rely on the client also doing so. Clients that don't understand caching are the exception.
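Conditional requests really are simple enough that there's little excuse for skipping them. A sketch of the client side: echo the server's validators back as If-Modified-Since and If-None-Match, and treat a 304 Not Modified as "reuse your cached copy".

```python
def conditional_headers(cached):
    """Build the validators for a conditional GET.

    `cached` is whatever we saved from the previous response: the
    Last-Modified date and/or the ETag (both optional).
    """
    headers = {}
    if cached.get("last_modified"):
        headers["If-Modified-Since"] = cached["last_modified"]
    if cached.get("etag"):
        headers["If-None-Match"] = cached["etag"]
    return headers

# If the server replies 304 Not Modified, no body is sent at all --
# the feed hasn't changed, so the cached copy is still good.
```

A client that does this costs the server a few hundred bytes per unchanged poll instead of the whole feed.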
Fail the third: it polls this particular feed every 30 minutes, even though the blog is only updated once a week. Now, polling every half hour is not unreasonable, but combined with the first and second fails it means Amazon is responsible for about 70MB of data transfer a month. That's a fair bit when the cap is only 400MB. It'd be nice if there were some way to control the polling interval.
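To be fair, RSS does define a hint for this: the optional `<ttl>` element, which states how many minutes a client may cache the feed before re-fetching. Something like the following (a made-up fragment, not this site's actual feed) would declare a weekly schedule, though of course it only helps if the client bothers to read it:

```xml
<channel>
  <title>Example feed</title>
  <!-- Clients may cache this feed for 10080 minutes (one week). -->
  <ttl>10080</ttl>
</channel>
```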
Fail the fourth: it doesn't actually obey the 301 response code. A 301 means that the requested page has moved permanently, and the client should update any bookmarks or stored URLs accordingly. It does at least follow the redirect, so it's not a complete fail.
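Getting this right is barely a line of logic. A sketch, assuming the client keeps some store of feed URLs:

```python
def next_feed_url(stored_url, status, location):
    """Decide which URL to poll next time, given a redirect response.

    On a 301 (moved permanently) the stored URL itself is updated, so
    future polls go straight to the new location. A temporary redirect
    (302, 307) is followed for this request, but the stored URL stays
    as it was.
    """
    if status == 301 and location:
        return location
    return stored_url
```

Follow the redirect either way; the only difference a 301 makes is that you remember the new address.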
Fail the fifth: their client identifies itself as "RPT-HTTPClient/0.3". Well-behaved bots identify themselves as what they really are, and provide some sort of contact address. The trouble is that while I could apply the banhammer to this bot, it's likely doing something useful. I have a sneaking suspicion that it's what Amazon are using to pull the feed for the Kindle version of the site.
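For contrast, a well-behaved bot's identification looks something like this (the bot name and contact URL here are entirely made up):

```python
# A descriptive User-Agent: what the bot is, its version, and a contact
# address so site admins can reach the operator. All values hypothetical.
USER_AGENT = "ExampleFeedFetcher/1.0 (+https://example.com/bot-info)"
HEADERS = {"User-Agent": USER_AGENT}
```

With that, an admin seeing odd traffic in the logs knows exactly who to contact, rather than having to guess from a library's default string.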