First, there's this:
A straightforward idea is to track the certificates you see over time and generate a prominent warning if you see something anomalous. This is available as a fully functioning Firefox extension, Certificate Patrol, and it should be built into every browser. The approach resembles pet names, but is closer to the way SSH works. Like pet names, it will warn you when you visit a site whose certificate has changed. Unlike pet names, it says nothing when you visit a new site. There's a trade-off there. Either is a big improvement on the current state of affairs, though I suspect pet names could lead to a better overall user interface, because pet names can be integrated with the browser's bookmarks.
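The core of this trust-on-first-use idea is small. Here's a minimal sketch in Python; the class and method names are my own illustration, not Certificate Patrol's actual code, and a real implementation would persist the store to disk and take the certificate bytes from the TLS handshake:

```python
import hashlib

class CertStore:
    """Trust-on-first-use certificate tracking, in the spirit of
    SSH's known_hosts file and the Certificate Patrol extension.
    (Illustrative sketch; names are hypothetical.)"""

    def __init__(self):
        # hostname -> SHA-256 fingerprint of the DER-encoded certificate
        self.seen = {}

    @staticmethod
    def fingerprint(der_bytes):
        return hashlib.sha256(der_bytes).hexdigest()

    def check(self, host, der_bytes):
        """Return 'new' on a first visit, 'match' if the certificate
        is unchanged, and 'MISMATCH' if it differs from the recorded
        one -- the anomaly that deserves a prominent warning."""
        fp = self.fingerprint(der_bytes)
        if host not in self.seen:
            self.seen[host] = fp   # trust on first use: record silently
            return "new"
        return "match" if self.seen[host] == fp else "MISMATCH"
```

In a real browser the DER bytes would come from the connection itself (in Python, something like `ssl.SSLSocket.getpeercert(binary_form=True)`), and the "new" case is exactly where this scheme stays silent while pet names would speak up.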
Second, there's this more speculative request:
In addition to your own first-hand observations, why not leverage other resources on the network to make their own observations? For example, while Google is crawling the web, it could easily save the SSL/TLS certificates it sees, and browsers could query them through a real-time API much like Google SafeBrowsing.
The Y property would give us this effect. What if, when Google returned a search result, it told you not only the URLs for the hits but also the certificates for those pages? You could then be attacked only if the attacker fools both you and every Google web crawler.
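The check the browser would run is simple to state: accept the connection only when the certificate matches both your own history and what the independent observer reported. A minimal sketch, assuming a hypothetical API that hands back the crawler's fingerprint alongside the search result:

```python
def cross_check(observed_fp, local_fp, crawler_fp):
    """Combine two independent observations of a site's certificate.

    observed_fp: fingerprint of the cert presented on this connection
    local_fp:    fingerprint from our own past visits (None on a first visit)
    crawler_fp:  fingerprint reported by the crawler (hypothetical API)

    An attacker must defeat BOTH checks to go unnoticed.
    """
    local_ok = local_fp is None or local_fp == observed_fp
    crawler_ok = crawler_fp == observed_fp
    return local_ok and crawler_ok
```

Note the asymmetry: on a first visit the local history has nothing to say, so the crawler's observation carries the whole load; on later visits an attacker has to fool both observers at once.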
Let me add one thing. If web protocols used these two tricks, how important would certificate authorities be? These two decentralized techniques strike me as so much more effective that certificate authorities look like a waste of time. If you already know a site is the same one you've visited a dozen times, and you already know it's the same site Google thinks it is, why do you care what some Iranian agency thinks of it?