We all know that as security tools evolve, so do the troublemakers that make their existence necessary. There's an interesting security report on this evolution, describing new attacks so stealthy and sophisticated that existing models of protection no longer apply. These attacks bank on the fact that you only need to infect a system once, and that if the first attempt fails, trying again is likely pointless. On a first visit to a malicious site, the bad code the attacker wants run is served. On repeat visits, the user or machine is presented with clean code, leaving no trace of the malicious page behind for security tools to find. Apparently this can even extend to web crawlers and other automated search mechanisms:

Moreover, evasive attacks can identify the IP addresses of crawlers used by URL filtering, reputation services and search engines, replying to these engines with legitimate content and increasing the chances of mistakenly being classified by them as a legitimate category.
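The cloaking behavior described above can be sketched roughly as follows. This is a minimal illustration of the idea, not code from the report; the crawler IP list, page contents, and class name are all assumptions made up for the example:

```python
# Hypothetical sketch of "one-shot" cloaking: serve the payload only to a
# first-time, non-crawler visitor, and clean content to everyone else.

# Illustrative placeholder values; a real attacker would maintain actual
# crawler IP ranges and a real exploit page.
KNOWN_CRAWLER_IPS = {"66.249.66.1"}
MALICIOUS_PAGE = "<html><!-- exploit payload would go here --></html>"
CLEAN_PAGE = "<html><p>Nothing to see here.</p></html>"

class CloakingServer:
    def __init__(self):
        # Every IP the server has already targeted once.
        self.seen_ips = set()

    def respond(self, client_ip: str) -> str:
        # Crawlers and repeat visitors always get the clean page, so
        # scanners and reputation services find no trace of the attack.
        if client_ip in KNOWN_CRAWLER_IPS or client_ip in self.seen_ips:
            return CLEAN_PAGE
        # First visit from an unknown, non-crawler IP: serve the payload once.
        self.seen_ips.add(client_ip)
        return MALICIOUS_PAGE
```

Note how the server must remember every visitor it has ever served the payload to, which is exactly the extra bookkeeping burden on the attacker's side.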
This sort of threat would obviously require a change in how security software works, demanding more real-time intervention, and it would make specific threats harder to identify. Of course, it would also mean more work for a server hosting malicious content, which has to keep track of every previous visitor for a particular exploit. Interesting stuff.