2014 in review + 2015 preview

When 2014 started, we wrote about key metrics we’ve been collecting for the last few years to give our community a foundational understanding of the trends we observe in the Web malware space, how key constituencies (like website owners, security companies, and hosting providers) are responding, and where StopBadware is making a difference. As we move into 2015, we want to share some of the questions we’ve been asking ourselves to take stock of our programs and their impact, and to identify significant challenges and focus areas for the coming year.

How many bad URLs are being detected?

At the time of writing, more than 1.6 million badware URLs are blacklisted by the three companies that provide StopBadware with blacklist data (Google, ThreatTrack Security, and NSFocus). Here’s how the rate of newly blacklisted URLs and the total aggregated blacklist volume trended over the second half of 2014:

[Chart: newly blacklisted URLs and total aggregated blacklist volume, second half of 2014]

That latest big spike in newly reported URLs is a result of the SoakSoak malware campaign. Keep in mind that the up-and-to-the-right trajectory of this graph doesn’t necessarily mean things are getting worse on the Web. The ever-increasing number of URLs on the Internet and significantly better detection mechanisms are also responsible for the rise in blacklist volumes.
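For readers curious about how the two curves in the chart above are derived from raw provider feeds, here’s a minimal sketch in Python. The feed format, deduplication rule, and weekly bucketing are simplified stand-ins for illustration only, not the actual formats our data providers use or the way we actually process the data.

```python
from collections import defaultdict
from datetime import date

# Hypothetical merged feed: (url, provider, date_first_blacklisted) tuples.
# Real provider feeds differ in format; this is purely illustrative.
feed = [
    ("http://example-one.test/bad.php", "ProviderA", date(2014, 7, 3)),
    ("http://example-one.test/bad.php", "ProviderB", date(2014, 7, 9)),
    ("http://example-two.test/inject.js", "ProviderC", date(2014, 12, 15)),
]

# Deduplicate across providers: a URL counts once, at its earliest listing date.
first_seen = {}
for url, _provider, listed_on in feed:
    if url not in first_seen or listed_on < first_seen[url]:
        first_seen[url] = listed_on

# Bucket newly blacklisted URLs by ISO week, then accumulate a running total.
new_per_week = defaultdict(int)
for listed_on in first_seen.values():
    year, week, _ = listed_on.isocalendar()
    new_per_week[(year, week)] += 1

total = 0
for week in sorted(new_per_week):
    total += new_per_week[week]
    print(f"{week}: {new_per_week[week]} newly blacklisted, {total} total")
```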

Are more or fewer users visiting malicious sites in spite of malware warnings? When do they ignore warnings?

Last year, we noted that 2013 clickthrough rates from Firefox malware warnings had fallen by more than 50%. We’re happy to report that the trend continued in 2014: The number of users who ignored malware warnings and insisted that infected sites were “not attack sites” fell by an additional 28.6%.

Unfortunately, there’s a grim caveat to what overall appears to be an encouraging batch of numbers: Warning clickthrough rates spike during malvertising campaigns. Unsurprisingly, the biggest spikes in clickthrough traffic occur when malware distributors infect ad networks used by high-traffic sites. This often means that related sites are affected by the same malvertising campaign: we’ve seen malicious ads from the same infected network hit multiple webcomic sites, similar anime forums, top torrent sites, and big technology news and commentary platforms, all of which have many dedicated users and are considered highly trustworthy.

How many website owners are requesting reviews? How many review requests are for sites that have been effectively cleaned up?

2013 was a record year for our review process, with just shy of 40,000 reviews requested. This year wasn’t quite as frenetic. We received 29,000 review requests in 2014, which, while lower than last year, was still more than we saw in 2012.

To evaluate website owners’ responses to hacks more completely, we also consider how many review requests StopBadware has to test manually—that is, how many haven’t been cleaned up successfully and require staff examination. We get a lot of these when malware is difficult to clean up, or when site owners don’t believe anything is actually wrong with their sites. Last year, we noted that even though our overall review request volume had increased significantly, the number of reviews our team had to test manually fell. This year, that trend continued; manual tests were down 27.7% from 2013. Since 2012, the number of review requests we have to test manually has fallen by 78%.
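Put differently, the two published figures imply the size of the intervening 2012-to-2013 drop. Here’s the back-of-the-envelope arithmetic, assuming the rounded percentages compound exactly (which they may not):

```python
# Published figures: manual tests fell 27.7% from 2013 to 2014,
# and 78% from 2012 to 2014. Assuming the rounded figures compound exactly,
# the implied 2012 -> 2013 decline is:
drop_2014 = 0.277        # 2013 -> 2014
drop_cumulative = 0.78   # 2012 -> 2014

remaining_2014 = 1 - drop_cumulative                # fraction of 2012 volume left in 2014
remaining_2013 = remaining_2014 / (1 - drop_2014)   # fraction of 2012 volume left in 2013

print(f"Implied 2012 -> 2013 decline: {1 - remaining_2013:.1%}")  # ~69.6%
```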

How many of the URLs StopBadware tested were infected? What classifications of badware were present on infected sites?

A few notes on this data: Whether a test is marked clean or bad has as much to do with the individual doing the testing as with the status of the site being tested. Newer testers may have trouble finding sneakier types of malware, for instance. In addition, bad guys are rather adept at making sure our research team doesn’t see many exploits, even though we see plenty of drive-by download code on the pages we test. Finally, because our sample size of manual tests is smaller this year, the overall breakdown of badware types is more susceptible to fluctuation.

Faced with sophisticated infection techniques, are site owners able to effectively clean up? How long does it take? Do they stay clean?

The dwindling number of manual tests indicates website owners are getting better at cleaning up their sites. It’s important to note that when review requests don’t require manual testing from StopBadware, the URLs in question are typically removed from Google’s (or another provider’s) blacklist within roughly 24 hours. So the majority of webmasters who requested reviews this year saw their sites de-listed in under a day (this includes the folks affected by the massive SoakSoak campaign that hit in mid-December 2014), and certainly within two.

Better cleanup is good. But how many sites are staying clean—i.e., getting rid of malicious backdoors, updating website software, and so on? To answer this, our research team turned to the data in the StopBadware Data Sharing Program.

Our researchers looked at rates of recompromise, particularly for sites that are running popular website software (e.g., WordPress).
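For a rough sense of what such a computation looks like, here’s a simplified sketch. The event format, field names, and 30-day window are placeholders chosen for illustration, not the Data Sharing Program’s actual schema or methodology.

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical per-site history: software label, cleanup date, and the date
# of the next blacklisting event (None if the site stayed clean).
cleanups = [
    ("WordPress", date(2014, 6, 1),  date(2014, 6, 20)),
    ("WordPress", date(2014, 7, 10), None),
    ("Joomla",    date(2014, 8, 5),  date(2014, 11, 2)),
]

WINDOW = timedelta(days=30)  # illustrative recompromise window

recompromised = defaultdict(int)
total = defaultdict(int)
for software, cleaned_on, relisted_on in cleanups:
    total[software] += 1
    if relisted_on is not None and relisted_on - cleaned_on <= WINDOW:
        recompromised[software] += 1

for software in total:
    rate = recompromised[software] / total[software]
    print(f"{software}: {rate:.0%} recompromised within {WINDOW.days} days")
```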

Key findings:

How many of the sites StopBadware tests are maliciously registered sites instead of hacked legitimate sites?

Back in July, we detailed our plans to develop a classifier that will automate differentiation of hacked legitimate sites from sites registered for malicious purposes. From the beginning of Q2 until the end of 2014, StopBadware sorted infected websites into categories based on legitimacy and each site’s place in the malware chain.

This data helped us build an initial version of our experimental classifier. We’re honing the first iteration with assistance from researchers at TU Delft; output should be available on our blog around the end of Q1 2015.
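For the curious, here’s a heavily simplified sketch of the kind of model involved. The features, labels, and toy training data below are invented for this example; they don’t reflect the classifier’s real feature set, training data, or the methodology behind the TU Delft collaboration.

```python
from sklearn.linear_model import LogisticRegression

# Each row describes a site with illustrative features:
# [domain_age_years, malware_at_site_root (0/1), has_original_content (0/1)]
X = [
    [8.2,  0, 1],   # long-lived domain, malware buried deep, real content -> likely hacked
    [6.0,  0, 1],
    [0.1,  1, 0],   # weeks-old domain, malware at the root, no real content -> likely malicious
    [0.02, 1, 0],
    [0.3,  1, 0],
    [5.5,  0, 1],
]
y = [0, 0, 1, 1, 1, 0]  # 1 = registered for malicious purposes, 0 = hacked legitimate site

model = LogisticRegression().fit(X, y)

# Classify a new, hypothetical site.
candidate = [[0.05, 1, 0]]
label = model.predict(candidate)[0]
print("maliciously registered" if label else "hacked legitimate site")
```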

What do we anticipate as our defining challenges for 2015 and beyond?

Continuing to advance information sharing efforts. In addition to data from industry pioneers ESET, Fortinet, Internet Identity, and Sophos, this year we began incorporating reports from a number of vetted independent security researchers into our shared data.

As our data sharing program moves into its second full year, we anticipate putting fewer resources into simply growing it and more into maximizing the richness and utility of the data. We plan to formalize several strategic partnerships that add detail to existing data, and to continue pursuing ambitious research projects.

Malicious advertising. StopBadware’s years of experience have taught us that when popular sites are compromised (especially more than one popular site of the same type), malvertising is by far the most likely culprit. But backing up this suspicion with evidentiary support is difficult: Malicious advertisements are highly targeted, our IPs are often blocked by bad guys, website admins and tech teams have a tough time isolating individual bad ads, advertising networks don’t like to admit they were abused by malware distributors...the list goes on. Most of the time, the only way we can verify that malvertising was responsible for compromising a big site is for the site itself to publicly acknowledge that it had issues with an ad network. This is beginning to happen a bit more frequently, in large part because consumers demand acknowledgment and accountability from companies they trust.

We hope to see even more transparency about the problem of malicious advertising in 2015—not only from the owners and users of compromised sites, but from advertisers and ad platforms themselves. We expect that we’ll continue to wrestle with key issues in this space, including (but not limited to) effective reporting mechanisms, transparent mitigation and escalation processes, scalable prevention techniques, and research that contributes to the development of useful reputation metrics and changes in business practices.

Malware notification. Malware notification is a messy, multi-faceted beast. There’s a lot of bad stuff on the Internet, it’s not always clear who’s in a position to address it (or where legal and regulatory boundaries lie), and even when it is clear, it’s often costly for intermediaries to take action. The net effect is that bad stuff stays up longer than it should, does more damage than we’d like, and breeds weariness and cynicism among industry stakeholders who should be working more collaboratively to protect users. StopBadware believes that increasing trust in this process will result in lower costs for Web infrastructure providers and better security awareness among consumers.

We’ve been interested in the logistics and the outcomes of malware notification since we came into being as an independent nonprofit [1] [2] [3]. We gather large amounts of data on malware URLs, and we’ve been active in the cleanup process since our infancy at Harvard’s Berkman Center. Yet, largely because of our size and limited resources, our forays into firsthand notification and outcome tracking have been small in scale. In 2015, we’ll take on several major projects aimed at increasing our notification capabilities: building a comprehensive database of abuse contacts, laying the groundwork to track abuse reports submitted to them, and continuing research on how reporter reputation affects notification response.
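As a rough sketch of the shape of that first project, here’s a minimal example using SQLite. The schema, fields, and sample rows are illustrative placeholders for the kind of structure involved, not a finished design.

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")  # an on-disk database in practice
conn.executescript("""
CREATE TABLE abuse_contacts (
    provider TEXT PRIMARY KEY,   -- host, registrar, etc.
    email    TEXT NOT NULL,
    notes    TEXT
);
CREATE TABLE abuse_reports (
    id          INTEGER PRIMARY KEY,
    provider    TEXT REFERENCES abuse_contacts(provider),
    url         TEXT NOT NULL,
    reported_at TEXT NOT NULL,   -- ISO 8601 timestamp
    resolved_at TEXT             -- NULL until the badware is gone
);
""")

# Hypothetical rows for illustration only.
conn.execute("INSERT INTO abuse_contacts VALUES (?, ?, ?)",
             ("example-host", "abuse@example-host.test", "responds quickly"))
conn.execute("INSERT INTO abuse_reports (provider, url, reported_at) VALUES (?, ?, ?)",
             ("example-host", "http://victim.test/bad.php",
              datetime.now(timezone.utc).isoformat()))
conn.commit()

# Outcome tracking: which reports are still open, per provider?
for provider, open_reports in conn.execute(
        "SELECT provider, COUNT(*) FROM abuse_reports "
        "WHERE resolved_at IS NULL GROUP BY provider"):
    print(provider, "open reports:", open_reports)
```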

Tracking the life cycle of compromise. Our holy grail is a comprehensive picture of the life cycle of a malware URL (from multiple perspectives) from registration to weaponization to cleanup or takedown. What makes hacked sites and servers vulnerable? Which mechanisms are leveraged throughout the abuse cycle? Are certain types of service providers (hosts, registrars, free subdomain providers, dynamic DNS services, etc.) particularly susceptible to abuse? Which factors might allow us to predict hacks, or malicious registrations, well before they happen?

We’re conducting research to answer these questions; ultimately, the best way to test our hypotheses and formulate viable solutions is for us to be able to act as a central command post, with visibility into and control over the life cycle of malicious URLs. Our primary projects and research targets for 2015 are a substantive step toward realizing a vision of StopBadware as a centralized resource for malware tracking and cleanup.

__________________________________________________________________________

As always, we thank our partners and our community for their financial and tactical support for StopBadware’s work this past year. Our research, programs, and special projects depend entirely upon contributions from organizations and individuals who believe in the need for an independent nonprofit dedicated to improving security outcomes for Internet users. We invite you to learn more about working with us as we move forward!