Screaming Frog checklist
Feature / Explanation / Does Screaming Frog support this feature?
Basic SEO reports
List of indexable/non-indexable pages
It's necessary to view a list of indexable and non-indexable pages to make sure there are no mistakes. Perhaps some URLs that were intended to be indexable are not?
Yes
Missing title tags
Meta titles are an important part of SEO audits. A crawler should show you a list of pages with missing title tags.
Yes. Go to "Page titles" -> "Missing"
Filtering URLs by status code (3xx, 4xx, 5xx)
When you perform an SEO audit, it's necessary to filter URLs by status code. How many URLs are not found (404)? How many URLs are redirected (301)?
Yes, visit the "Response codes" section
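If you ever need to reproduce this kind of check outside a crawler, a minimal Python sketch can group a URL list by status code. This is only an illustration, not part of Screaming Frog; it assumes the third-party "requests" library and a hypothetical urls.txt file with one URL per line.

```python
# Group a list of URLs by HTTP status code (3xx, 4xx, 5xx).
from collections import defaultdict
import requests

by_status = defaultdict(list)

with open("urls.txt") as f:                      # hypothetical export, one URL per line
    for url in (line.strip() for line in f if line.strip()):
        try:
            # Don't follow redirects, so 301/302 responses stay visible.
            response = requests.head(url, allow_redirects=False, timeout=10)
            by_status[response.status_code].append(url)
        except requests.RequestException:
            by_status["error"].append(url)

for status, urls in sorted(by_status.items(), key=lambda item: str(item[0])):
    print(f"{status}: {len(urls)} URLs")
```

Using HEAD requests without following redirects keeps 3xx responses visible instead of collapsing them into their final targets.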
List of Hx tags
"Google looks at the Hx headers to understand the structure of the text on a page better." - John Mueller
Yes, you can see information related to H1 and H2 tags. Go to either the "H1" or the "H2" section
View internal nofollow links
It's nice to see the list of internal nofollow links to make sure there aren't any mistakes.
You can export a list of internal links and then filter them by "nofollow"
External links list (outbound external)
A crawler should allow you to analyze both internal and external outbound links.
Yes. Additionally, for every URL you can see the number of outlinks (internal and external)
Link rel="next" (to indicate a pagination series)
When you perform an SEO audit, you should analyze whether pagination series are implemented properly.
Yes, go to "Directives" -> "Rel/Prev"
Hreflang tags
Hreflang tags are the foundation of international SEO, so a crawler should recognize them and help you spot hreflang-related issues.
Yes. Go to "Overview" -> "Hreflang"
Canonical tags
Every SEO crawler should inform you about canonical tags to let you spot indexing issues.
Yes. Go to "Overview" -> "Directives". You can choose the following reports here: "canonical", "canonical self referencing", "canonicalized", "non canonical".
Information about crawl depth - number of clicks from the homepage
Additional information about crawl depth can give you an overview of the structure of your website. If an important page isn't accessible within a few clicks from the homepage, it may indicate poor website structure.
Yes. See the "Crawl depth" column
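Crawl depth is simply the breadth-first distance from the homepage over the internal link graph. A toy sketch (the link graph below is made up; a real crawler builds it while crawling):

```python
# Compute crawl depth (clicks from the homepage) with a breadth-first search.
from collections import deque

links = {                                   # illustrative internal link graph
    "/": ["/blog", "/products"],
    "/blog": ["/blog/post-1"],
    "/products": ["/products/widget"],
    "/blog/post-1": ["/products/widget"],
    "/products/widget": [],
}

depth = {"/": 0}
queue = deque(["/"])
while queue:
    page = queue.popleft()
    for target in links.get(page, []):
        if target not in depth:             # first time this page is reached
            depth[target] = depth[page] + 1
            queue.append(target)

for page, clicks in sorted(depth.items(), key=lambda item: item[1]):
    print(f"{clicks} clicks: {page}")
```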
Content analysis
List of empty / thin pages
A large number of thin pages can negatively affect your SEO efforts. A crawler should report them.
Yes (you can sort crawled pages by the word count or text to code ratio)
Duplicate content recognition
A crawler should give you at least basic information on duplicates across your website.
Yes. Screaming Frog: "you can find exact duplicate pages (via an md5 duplicate content check) under 'URI > Duplicates', as well as duplicated titles, description, headers, or use custom search to identify templated sections of text"
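The md5 check mentioned in the quote boils down to hashing each response body and grouping URLs that share the same hash. A rough sketch, assuming a hypothetical folder of saved HTML responses:

```python
# Find exact duplicate documents by hashing their contents with MD5.
import hashlib
from collections import defaultdict
from pathlib import Path

pages_by_hash = defaultdict(list)

for path in Path("crawled_pages").glob("*.html"):   # hypothetical dump of crawled HTML
    digest = hashlib.md5(path.read_bytes()).hexdigest()
    pages_by_hash[digest].append(path.name)

for digest, pages in pages_by_hash.items():
    if len(pages) > 1:
        print("Exact duplicates:", ", ".join(pages))
```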
Convenience
A detailed report for a given URL
It's a must-have! If you do a crawl of a website, you may want to see internal links pointing to a particular URL, to see headers, canonical tags, etc.
Yes
Advanced URL filtering for reporting - using regular expressions and modifiers like "contains," "starts with," "ends with"
I can't imagine my SEO life without a feature like this. It's common that I need to see only URLs that end with ".html" or those which contain a product ID. A crawler must allow for such filtering.
Screaming Frog's internal search supports regular expressions, but you can't combine rules. If you want to perform advanced filtering, you should export the data and do it on your own
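Once the data is exported, combining several rules takes only a few lines of Python. The file name and the two patterns below are examples; the script assumes the first column of the export contains the URL.

```python
# Filter an exported URL list with combined regular-expression rules.
import csv
import re

ends_with_html = re.compile(r"\.html$")
contains_product_id = re.compile(r"/product/\d+")

with open("internal_all.csv", newline="") as f:      # e.g. a crawler export
    urls = [row[0] for row in csv.reader(f) if row and row[0].startswith("http")]

filtered = [u for u in urls if ends_with_html.search(u) or contains_product_id.search(u)]
print(f"{len(filtered)} of {len(urls)} URLs match")
```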
Adding additional columns to a report
This is also a very important feature of crawlers. I simply can't live without it. When I view a single report, I want to add additional columns to get the most out of the data. Fortunately, most crawlers allow this.
No, but the reports already contain multiple columns. The 'Internal' tab combines all elements, like status code, meta robots, title, h1, h2 tags, meta description, canonicals, word count, etc.
Page categorizing
Some crawlers offer the possibility to categorize crawled pages (e.g. blog, product pages, etc.) and see reports dedicated to specific categories of pages.
No
Filtering URLs by type (HTML, CSS, JS, PDF, etc.)
Crawlers visit resources of various types (HTML, PDF, JPG), but usually you only want to review HTML files. A crawler should support this.
Yes. Open the "Internal" tab and click on "filter"
Basic statistics about website structure - i.e. depth stats
Yes. Go to the "Site structure" tab
Overview - the list of all the issues listed on a single dashboard
It's a positive if a crawler lists all the detected issues on a single dashboard. Of course, it will not do the job for you, but it can make SEO audits easier and more efficient.
No, but you can see a lot of interesting data in the "Overview" section
Comparing to the previous crawl
When you work on a website for a long time, it’s important to compare the crawls that were done before and after the changes.
No
Crawl settings
List mode - crawl just the listed URLs (helpful for a website migration)
Sometimes you want to perform a quick audit of a specified set of URLs without crawling the whole website.
Yes
Changing the user agent
Sometimes it's necessary to change the user agent. For example, even when a website blocks Ahrefs, you may still need to perform a crawl. Also, more and more websites detect Googlebot by user agent and serve it a pre-rendered version instead of the full JavaScript version.
Yes
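Outside the crawler, you can spot-check whether a site serves different content to Googlebot by comparing responses fetched with different user agents. A minimal sketch, assuming the third-party "requests" library and an example URL:

```python
# Compare the response served to a default client with the one served to Googlebot.
import requests

GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

default = requests.get("https://example.com/", timeout=10)
as_googlebot = requests.get("https://example.com/",
                            headers={"User-Agent": GOOGLEBOT_UA}, timeout=10)

print("Default UA response length:  ", len(default.text))
print("Googlebot UA response length:", len(as_googlebot.text))
```

A large difference in response size is a hint that the server pre-renders or otherwise tailors content for Googlebot.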
Crawl speed adjusting
You should be able to set a crawl speed, e.g. 1-3 URLs per second if a website can't handle the load, while you may want to crawl much faster if a website is healthy.
Yes, and you can adjust it while crawling
Can I limit crawling? Crawl depth, max number of URLs
Many websites have millions of URLs. Sometimes it's good to limit the crawl depth or to specify a max number of URLs to crawl.
Yes
Analyzing a domain protected by an htaccess login (helpful for analyzing staging websites)
This is a helpful feature if you want to crawl a staging website.
Yes. It supports basic and digest authentication, as well as web forms authentication, where it can log in to anything you can with your browser (intranets, WordPress etc) using its Chrome browser - https://www.screamingfrog.co.uk/crawling-password-protected-websites/.
Can I exclude particular subdomains, include only specific directories?
Yes
Universal crawl -> crawl + list mode + sitemap
Yes. You can combine a regular crawl with an XML Sitemap and a list of URLs. You can also include URLs from GA/GSC.
Maintenance
Crawl scheduling
It's handy to be able to schedule a crawl and set monthly/weekly crawls.
Yes. There's an inbuilt scheduler, which allows you to crawl at chosen intervals, save crawls and export all data and reports to a specific location.
Indicating the crawling progress
If you deal with big websites, you should be able to see the current status of a crawl. Will you have to wait a few hours or a few weeks until a 1M+ URL crawl finishes?
Yes. Screaming Frog provides real-time crawling and reporting during the crawl.
Robots.txt monitoring
Accidental changes in robots.txt can prevent Google from reading and indexing your content. It's beneficial if a crawler detects changes in robots.txt and informs you.
No
Crawl data retention
It's good if a crawler can store results for a long period of time.
Forever (as long as you have an active licence)
Notifications - crawl finished
A crawler should inform you when a crawl is done (desktop notification / email).
No
Advanced SEO reports
List of pages with fewer than x incoming links
If there are no internal links pointing to a page, it may suggest to Google that the page is probably irrelevant. It's crucial to spot orphan URLs.
No, but you can sort by the number of 'inlinks' / 'unique inlinks', or export the results to a CSV/Excel file and then filter it
Comparison of URLs found in sitemaps and in the crawl
Sitemaps should contain all the valuable URLs. If some pages are not included in a sitemap, it can cause issues with crawling and indexing by Google. If a URL appears in a sitemap but isn't accessible through the crawl, it may signal to Google that the page is not relevant.
Yes. This can be viewed under the 'Sitemaps' tab and filters.
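Conceptually, this comparison is a set difference between the <loc> entries in the sitemap and the URLs discovered during the crawl. A sketch with a placeholder sitemap location and a hypothetical file of crawled URLs:

```python
# Compare sitemap URLs against crawled URLs in both directions.
import xml.etree.ElementTree as ET
from urllib.request import urlopen

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

with urlopen("https://example.com/sitemap.xml") as response:
    tree = ET.parse(response)
sitemap_urls = {loc.text.strip() for loc in tree.findall(".//sm:loc", NS)}

with open("crawled_urls.txt") as f:                 # one crawled URL per line
    crawled_urls = {line.strip() for line in f if line.strip()}

print("In sitemap but not crawled:", sorted(sitemap_urls - crawled_urls))
print("Crawled but missing from sitemap:", sorted(crawled_urls - sitemap_urls))
```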
Internal PageRank value
Although no PageRank calculation can reflect Google's link graph, it's still a really important feature. Imagine you want to see the most important URLs based on links. Then you should sort URLs not only by simple metrics like the number of inlinks, but also by internal PageRank. You think Google doesn't use PageRank anymore? http://www.seobythesea.com/2018/04/pagerank-updated/
Yes.
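Internal PageRank is usually approximated with a few rounds of power iteration over the internal link graph. A toy sketch; the damping factor, iteration count, and tiny graph are illustrative, not how Screaming Frog computes its own score:

```python
# Toy internal PageRank via power iteration over a small link graph.
DAMPING = 0.85
ITERATIONS = 50

links = {                                   # illustrative internal link graph
    "/": ["/blog", "/products"],
    "/blog": ["/", "/products"],
    "/products": ["/"],
}

pages = list(links)
rank = {page: 1 / len(pages) for page in pages}

for _ in range(ITERATIONS):
    new_rank = {page: (1 - DAMPING) / len(pages) for page in pages}
    for page, outlinks in links.items():
        if not outlinks:                    # skip dangling pages in this toy version
            continue
        share = DAMPING * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

for page, value in sorted(rank.items(), key=lambda item: -item[1]):
    print(f"{value:.3f}  {page}")
```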
Mobile audit
With mobile-first indexing, it's necessary to perform a content parity audit between the mobile and desktop versions of your website.
You can use a mobile user-agent to crawl a mobile website and gather data for further analysis.
Additional SEO reports
Malformed URLs (https://https://, https://example.com/tag/something/tag/tag/tag, or https://www.example.com/first_part of URL)
Yes, you can see URLs with uppercase characters, underscores, non-ASCII characters, parameters, and URLs over 115 characters. http://take.ms/wmy8d
List of URLs with parameters
Yes. Go to "URL" -> "Parameters"
Mixed content (some pages / resources are served via HTTP, some via HTTPS)
Yes. Go to "Overview" -> "Protocol" and here you can see resources served via HTTP and HTTPS. There's also an "Insecure Content" report under "Reports".
Redirect chains report
Nobody likes redirect chains: not users, not search engines. A crawler should report any redirect chains to let you decide if they're worth fixing.
Yes, there is a special report called "Redirect chains". Click on "Reports" -> "Redirect chains"
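To see what a chain looks like for a single URL, you can follow it and print every hop. A minimal sketch using the third-party "requests" library, with an example URL:

```python
# Print every hop in a URL's redirect chain.
import requests

def print_redirect_chain(url):
    response = requests.get(url, allow_redirects=True, timeout=10)
    for hop in response.history + [response]:
        print(hop.status_code, hop.url)
    if len(response.history) > 1:
        print(f"Chain of {len(response.history)} redirects - consider linking "
              f"directly to {response.url}")

print_redirect_chain("http://example.com/")
```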
Website speed statistics
Performance is becoming more and more important both for users and SEO, so crawlers should present reports related to performance.
Yes, you can see statistics related to response time, compression usage, and file sizes.
List of URLs blocked by robots.txt
It happens that a webmaster mistakenly prevents Google from crawling a particular set of pages. As an SEO, you should review the list of URLs blocked by robots.txt to make sure there are no mistakes.
Yes. Go to "Response codes" -> "Blocked by robots.txt"
Schema.org detection
No, but you can use custom extraction: https://www.chrisains.com/seo/how-to-extract-schema-mark-up-with-screaming-frog/
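If you would rather detect structured data yourself, the basic idea is to find <script type="application/ld+json"> blocks and parse them. An illustrative standard-library sketch with a placeholder URL:

```python
# Detect schema.org JSON-LD blocks on a page and print their @type values.
import json
from html.parser import HTMLParser
from urllib.request import urlopen

class JsonLdParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_json_ld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self.in_json_ld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_json_ld = False

    def handle_data(self, data):
        if self.in_json_ld and data.strip():
            self.blocks.append(data)

html = urlopen("https://example.com/").read().decode("utf-8", errors="replace")
parser = JsonLdParser()
parser.feed(html)

for block in parser.blocks:
    try:
        schema = json.loads(block)
    except json.JSONDecodeError:
        print("Malformed JSON-LD block")
        continue
    items = schema if isinstance(schema, list) else [schema]
    print("Types:", [item.get("@type") for item in items if isinstance(item, dict)])
```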
Export, sharing
Exporting to Excel / CSV
Sometimes a crawler alone is not enough and you need to export the data and edit it in Excel or other tools.
Yes, you can perform a full export
Exporting to PDF
No
Custom reports / dashboards
No
Sharing individual reports
Imagine that you want to share a report related to 404s with your developers. Does the crawler support it?
No
Granting access to a crawl for another person
It's pretty common that two or more people work on the same SEO audit. Thanks to report sharing, you can work simultaneously.
No, but you can send a Screaming Frog crawl file to your colleagues so they can open it on their computers (provided they have an active licence for Screaming Frog)
Miscellaneous
Explanations of the issues
If you are new to SEO, you will appreciate the explanations of issues that many crawlers provide.
No
Custom extraction
A crawler should let you perform a custom extraction to enrich your crawl. For instance, while auditing an e-commerce website, you should be able to scrape information about product availability and price.
Yes, you can use up to 10 custom extractions. You can select elements by regex, XPath, or CSSPath
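As an illustration of this kind of extraction done outside the crawler, the sketch below scrapes a price and an availability label with XPath. It requires the third-party "lxml" package; the URL, class names, and XPath expressions refer to a hypothetical shop.

```python
# Scrape product price and availability from a page using XPath.
import lxml.html
from urllib.request import urlopen

html = urlopen("https://shop.example.com/product/123").read()
tree = lxml.html.fromstring(html)

price = tree.xpath("string(//span[@class='price'])").strip()
availability = tree.xpath("string(//div[@id='availability'])").strip()
print("Price:", price or "not found")
print("Availability:", availability or "not found")
```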
Can the crawler detect the unique part of a page - the part that is not part of the template?
It's valuable if a crawler lets you analyse only the unique part of a page (excluding navigation links, sidebars, and the footer).
No
Ability to use the crawler's API
No
Supported operating systems
Windows, Linux, Mac
Integration
Integration with Google Analytics
Yes. Screaming Frog: "You can connect to GA and select segments, dates, individual metrics (user, session, goals, ecommerce, site speed, Adwords data), dimensions and match data against URLs in a crawl. You can also discover orphan URLs that are in GA, but not found in a site crawl."
Integration with Google Search Console
Yes. Screaming Frog: "you can connect to GSC and select dates and dimensions (device type, countries, filter for queries) and match search analytics impressions, clicks, CTR and position against URLs in a crawl. You can also discover orphan URLs that are in search analytics, but not found in a site crawl."
Integration with server logs
No. However, you can use another tool provided by Screaming Frog, the SF Log File Analyser, and import any crawl data (from any tool, including the Screaming Frog crawler) to compare and analyse it against the log data.
Integration with other tools
Majestic, Ahrefs, Mozscape
JavaScript rendering
JavaScript is more and more popular. If your website depends heavily on JavaScript, it's a good idea to use a crawler that supports JS.
Yes. Screaming Frog uses Headless Chrome. Nice feature: you can see screenshots of rendered pages and view the stored rendered HTML from the crawl.
Why should users use your crawler?
"The SEO Spider is a desktop crawler that's a bit different to the competition as it provides true real-time crawling and reporting throughout the crawl to quickly diagnose and audit SEO issues. It's built for SEO professionals that demand fast reliable data to make informed decisions and discover common SEO issues. It's super flexible and able to scale to crawling millions of URLs with database storage.

It’s also highly customisable and has so many use cases, from finding broken links, creating XML Sitemaps, performing an SEO site audit or content audit, to far more advanced uses, such as uncovering JavaScript rendering problems, scraping data from websites or auditing redirects in a website migration for example.

The SEO Spider has also been developed for 8 years with so much support from the SEO community and evolved alongside the industry and changing practices. So many of the features have come directly from community feedback, and there's plenty more to come with rapid, continued development of the tool."
Free account - try
You can download a demo version that allows you to crawl up to 500 URLs. However, the configuration menu is disabled in the demo version