|Feature||Explanation||Does Ryte support this feature?|
|Basic SEO reports|
|List of indexable/non-indexable pages|
It's necessary to view a list of indexable / non-indexable pages to make sure there are no mistakes. Perhaps some URLs that were intended to be indexable are not?
|Yes, go to "Indexability" -> "What is indexable". Then click either on "indexable" or "non indexable"|
|Missing title tags|
Meta titles are an important part of SEO audits. A crawler should show you a list of pages with missing title tags.
|Yes. Go to "Content" -> "Title" -> "Length", then filter by "not set".|
|Filtering URLs by status code (3xx, 4xx, 5xx)|
When you perform an SEO audit, it's necessary to filter URLs by status code. How many URLs are not found (404)? How many URLs are redirected (301)?
|Yes. Go to "Indexability" -> "Status codes". For every report, you can view and filter the status code of each URL.|
|List of Hx tags|
“Google looks at the Hx headers to understand the structure of the text on a page better.” - John Mueller
|Yes. Go to "Content" -> "Headlines". You can view reports related to H1-H4 headlines. |
|View internal nofollow links|
It's useful to review the list of internal nofollow links to make sure there aren't any mistakes.
|Yes, go to "Website success" -> "Links" -> "Internal nofollow"|
|External links list (outbound external)|
A crawler should allow you to analyze both internal and external outbound links.
Go to "Links" -> "Link targets" -> "Different host".
Or "Indexability" -> "Outbound content" to see it grouped by target url
|Link rel="next" (to indicate a pagination series)|
When you perform an SEO audit, you should analyze if the pagination series are implemented properly.
|Yes. Go to Indexability -> Pagination|
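As a reminder of what this report checks, a paginated series is typically annotated in the `<head>` of each page. The snippet below is a made-up example for page 2 of a hypothetical category listing:

```html
<!-- On page 2 of a paginated series (hypothetical URLs): -->
<link rel="prev" href="https://example.com/category?page=1">
<link rel="next" href="https://example.com/category?page=3">
```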
|Hreflang tags|
Hreflang tags are the foundations of international SEO, so a crawler should recognize them to let you spot hreflang-related issues.
|Yes. Go to "Website success" -> "Multilingual settings"|
|Canonical tags||Every SEO crawler should inform you about canonical tags to let you spot indexing issues.||Yes. Go to "indexability" -> "Canonicals"|
|Information about crawl depth - number of clicks from a homepage|
Additional information about crawl depth can give you an overview of the structure of your website. If an important page isn’t accessible within a few clicks from a homepage, it may indicate poor website structure.
|Yes, go to "Links" -> "Click Path"|
|List of empty / thin pages|
A large number of thin pages can negatively affect your SEO efforts. A crawler should report them.
|Yes, go to Content -> Word statistics -> Unique Word Count|
|Duplicate content recognition|
A crawler should give you at least basic information on duplicates across your website.
|Yes. "Ryte enables you to identify and compare duplicate pages, near duplicate pages as well as the total amount of occurrences on your whole website. Hereby Ryte compares all content, as well as all titles, meta descriptions, and h1 tags.|
With "Content Success", we also offer you a nifty tool to improve your content's quality and get ideas how to write engaging, unique content easily."
|A detailed report for given URL|
It's a must-have! If you crawl a website, you may want to see the internal links pointing to a particular URL, its headers, canonical tags, etc.
|Advanced URL filtering for reporting - using regular expressions and modifiers like "contains", "starts with", "ends with"|
I can't imagine my SEO life without a feature like this. It’s common that I need to see only URLs that end with “.html” or those which contain a product ID. A crawler must allow for such filtering.
|Yes + you can combine rules by OR/AND|
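For illustration, the two filters mentioned above ("ends with .html" and "contains a product ID") can be sketched in Python like this; the URLs and the `item-<digits>` pattern are made up for the example:

```python
import re

urls = [
    "https://example.com/shoes/item-123.html",
    "https://example.com/blog/how-to",
    "https://example.com/item-456.html",
    "https://example.com/cart",
]

# Modifier-style filter: URL "ends with" .html
html_pages = [u for u in urls if u.endswith(".html")]

# Regular expression: URL contains a product ID such as "item-<digits>"
product_pages = [u for u in urls if re.search(r"item-\d+", u)]
```

Combining such rules with AND/OR, as Ryte allows, amounts to intersecting or uniting the matching URL sets.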
|Adding additional columns to a report|
This is also a very important feature of crawlers. I simply can't live without it. When I view a single report, I want to add additional columns to get the most out of the data. Fortunately, most crawlers allow this.
Some crawlers offer the possibility to categorize crawled pages (e.g. blog, product pages) and see reports dedicated to specific categories of pages.
|Filtering URLs by type (HTML, CSS, JS, PDF, etc.)|
Crawlers visit resources of various types (HTML, PDF, JPG), but usually you want to review only HTML files. A crawler should support this.
|Basic statistics about website structure - e.g. depth stats||Yes|
|Overview - the list of all the issues listed on a single dashboard|
It's a positive if a crawler lists all the detected issues on a single dashboard. Of course, it will not do the job for you, but it can make SEO audits easier and more efficient.
|Comparing to the previous crawl|
When you work on a website for a long time, it’s important to compare the crawls that were done before and after the changes.
|List mode - crawl just the listed URLs (helpful for a website migration)|
Sometimes you want to perform a quick audit of a specified set of URLs without crawling the whole website.
|No, but you can use a lifehack: create a custom sitemap and set Ryte to crawl it.|
|Changing the user agent|
Sometimes it's necessary to change the user agent. For example, even when a website blocks Ahrefs, you may still need to perform a crawl. Also, more and more websites detect Googlebot by its user agent and serve it a pre-rendered version instead of the fully JavaScript-powered one.
|Crawl speed adjusting|
You should be able to set the crawl speed, e.g. 1-3 URLs per second if a website can't handle the load, while you may want to crawl much faster if a website is healthy.
|Yes. You can adjust the crawl speed as well as set a custom delay.|
|Can I limit crawling? Crawl depth, max number of URLs|
Many websites have millions of URLs. Sometimes it's good to limit the crawl depth or specify a max number of URLs allowed to be crawled.
|Yes, you can limit the number of URLs to be crawled|
|Analyzing a domain protected by an htaccess login|
This is a helpful feature if you want to crawl a staging website.
|Can I exclude particular subdomains, include only specific directories?||Yes. While setting up a crawl, go to the "Advanced analysis" section and click on "What to analyze", or use the virtual robots.txt.|
|Universal crawl -> crawl + list mode + sitemap||No|
|Scheduling crawls|
It's handy to be able to schedule a crawl and set up monthly/weekly crawls.
|Indicating the crawling progress|
If you deal with big websites, you should be able to see the current status of a crawl. Will you wait a few hours, or weeks, until a crawl of 1M+ URLs finishes?
|Yes. Go to settings and you will see how many URLs were crawled so far. Ryte is planning to introduce a dedicated dashboard for it|
|Detecting changes in robots.txt|
Accidental changes in robots.txt can prevent Google from reading and indexing your content. It's beneficial if a crawler detects changes in robots.txt and informs you.
|Crawl data retention|
It’s good if a crawler can store results for a long period of time.
|Forever (as long as you have active licence)|
|Notifications - crawl finished|
A crawler should inform you when a crawl is done (desktop notification / email).
|Advanced SEO reports|
|List of pages with less than x links incoming|
If there are no internal links pointing to a page, it may signal to Google that the page is irrelevant. It's crucial to spot orphan URLs.
|Yes, go to Links -> Pages without incoming links. You can see a chart with the link distribution. http://take.ms/66uDe|
|Comparison of URLs found in sitemaps and in the crawl||Sitemaps should contain all the valuable URLs. If some pages are not included in a sitemap, it can cause issues with crawling and indexing by Google.
If a URL appears in a sitemap but isn't accessible through the crawl, it may signal to Google that the page is not relevant.|
|Yes, go to Sitemaps -> Included in Sitemaps and click on "Not included"|
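Conceptually, this comparison is just a set difference between the URLs listed in sitemaps and the URLs discovered by the crawl. A minimal sketch with made-up URLs:

```python
# Made-up URL sets; in practice these come from parsed sitemaps and crawl results
sitemap_urls = {"https://example.com/", "https://example.com/a", "https://example.com/b"}
crawled_urls = {"https://example.com/", "https://example.com/a", "https://example.com/c"}

not_in_sitemap = crawled_urls - sitemap_urls  # crawlable, but missing from sitemaps
not_in_crawl = sitemap_urls - crawled_urls    # listed, but unreachable via internal links
```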
|Internal Page Rank value||Although any PageRank calculations can’t reflect Google’s link graph, it’s still a really important feature. Imagine you want to see the most important URLs based on links. Then you should sort URLs not only by simple metrics like the number of inlinks, but also by internal PageRank. You think Google doesn’t use PageRank anymore? http://www.seobythesea.com/2018/04/pagerank-updated/||Yes|
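For the curious, here is a minimal sketch of how an internal PageRank can be computed over a crawl's link graph: a plain power iteration, not Ryte's actual implementation, with dangling pages and other refinements ignored for brevity:

```python
def internal_pagerank(links, damping=0.85, iterations=50):
    """Plain power-iteration PageRank over {page: [link targets]}.
    Dangling pages and other real-world refinements are ignored for brevity."""
    pages = set(links) | {t for targets in links.values() for t in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, targets in links.items():
            if targets:
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new[t] += share
        rank = new
    return rank

# Made-up graph: "/b" earns two inlinks, "/a" only one
rank = internal_pagerank({"/": ["/a", "/b"], "/a": ["/b"], "/b": ["/"]})
```

Sorting pages by this score surfaces the URLs your internal linking actually emphasizes, which a raw inlink count can miss.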
|Mobile crawling|
In mobile-first indexing, it's necessary to perform a content parity audit between the mobile and desktop versions of your website.
|Yes, you can crawl with a mobile user agent to crawl the mobile website & gather data for further analysis.|
|Additional SEO reports|
|Malformed URLs (https://https://, https://example.com/tag/someting/tag/tag/tag, or https://www.example.com/first_part of URL)||Yes. Go to URL Structure -> Filenames. Also, review the Structure -> `folder count` section. Alternatively, you can try the following filters: "URL contains space", "URL contains https://https".|
|List of URLs with parameters||Yes. Go to Website Success -> URL structure and click on `Get Parameters`|
|Mixed content (some pages / resources are served via HTTPS, some via HTTP)||Yes. http://take.ms/lgtAT|
|Redirect chains report|
Nobody likes redirect chains - neither users nor search engines. A crawler should report any redirect chains to let you decide if they're worth fixing.
Go to "Indexability" -> "Redirects" -> "Status Codes"
Then, filter for Redirects to see chained Redirects. For more than one hop, use the Inspector to jump from one Redirect to the next.
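Conceptually, a redirect chain is found by following a source -> target redirect map until it stops. A minimal sketch with hypothetical URLs (not Ryte's implementation):

```python
def redirect_chain(redirects, url, max_hops=10):
    """Follow a {source: target} redirect map and return the full chain;
    a result longer than two entries means a chained redirect."""
    chain = [url]
    while url in redirects and len(chain) <= max_hops:
        url = redirects[url]
        chain.append(url)
        if chain.count(url) > 1:  # redirect loop - stop
            break
    return chain

# Made-up example: http -> https -> new location (two hops)
redirects = {
    "http://example.com/old": "https://example.com/old",
    "https://example.com/old": "https://example.com/new",
}
```

Ideally the first URL would be updated to point straight at the final target, removing the intermediate hop.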
|Website speed statistics|
Performance is becoming more and more important both for users and SEO. So crawlers should present reports related to performance.
Go to the "Performance section"
|List of URLs blocked by robots.txt|
It happens that a webmaster mistakenly prevents Google from crawling a particular set of pages. As an SEO, you should review the list of URLs blocked by robots.txt to make sure there are no mistakes.
|Yes (go to Indexability -> Robots.txt and tick "Disallowed"). To get the most out of the data, you can add an additional column: "internal dofollow links counter".|
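Incidentally, you can cross-check such a report yourself: Python's standard library can evaluate robots.txt rules. A minimal sketch with a hypothetical robots.txt:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content; in practice you'd fetch the live file
parser = RobotFileParser()
parser.parse([
    "User-agent: *",
    "Disallow: /private/",
])

urls = ["https://example.com/", "https://example.com/private/report"]
blocked = [u for u in urls if not parser.can_fetch("*", u)]
```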
|Schema.org detection||No, but you can use custom extraction|
|Exporting to excel / CSV|
Sometimes a crawler has no power here, and you need to export the data and edit it in Excel or other tools.
|Yes, currently up to 100k rows, but unlimited through API calls. Ryte: "Exporting will be extended massively in the upcoming weeks."|
|Exporting to PDF||Yes|
|Custom reports / dashboards||No, but you can use their API and build your own tools|
|Sharing individual reports|
Imagine that you want to share a report about 404s with your developers. Does the crawler support it?
|Yes, also as a shared URL for non-logged-in users; you can also define a specific timeframe for how long the report will be available.|
|Granting access to a crawl for another person|
It's pretty common that two or more people work on the same SEO audit. Thanks to report sharing, you can work simultaneously.
|Explanation on the issues|
If you are new to SEO, you will appreciate the explanation of the issues that many crawlers provide.
|No, but you can find explanations of the issues in their documentation: https://support.ryte.com/hc/en-us/sections/202059903-SEO-Aspects|
|Custom extraction|
A crawler should let you perform a custom extraction to enrich your crawl. For instance, while auditing an e-commerce website, you should be able to scrape information about product availability and price.
|Yes. You can select elements by regular expressions, XPath, and CSS. To do so, go to the "Custom Snippets" section in the crawl settings.
Nice feature: you can test custom extractions before setting them, and you can filter any report by the extracted data.|
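To illustrate what a custom extraction does, here is a minimal sketch that pulls a price and a stock status out of a product page with regular expressions. The HTML and patterns are made up; in Ryte you would configure this via Custom Snippets, typically with XPath or CSS selectors instead:

```python
import re

# Made-up product page snippet; a crawler would let you target these
# elements with XPath or CSS selectors instead of hand-written patterns
html = '<div class="price">19.99</div><span class="stock">in stock</span>'

price = re.search(r'class="price">([\d.]+)<', html).group(1)
availability = re.search(r'class="stock">([^<]+)<', html).group(1)
print(price, availability)  # → 19.99 in stock
```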
|Can the crawler detect the unique part - the part that is not part of the template?||It's valuable if a crawler lets you analyze only the unique part of a page (excluding navigation links, sidebars, and the footer).||No. Ryte: "it's definitely part of the roadmap"|
|Ability to use the crawler's API||Yes.|
|Supported operating systems||All - it's web-based application|
|Integration with Google Analytics||Yes. Ryte: "Adding Google Analytics data enriches the Ryte crawl data tremendously.|
You can easily add Google Analytics data points like the number of unique users, page impressions, dwell time, bounce rates and even goal conversions in a separate column in every one of our reports. You can also access live data in real time with just one click to measure immediate effects."|
|Integration with Google Search Console||Yes. Ryte: "With the help of Google Search Console search analytics data, you can optimize your search performance based on 100% real Google data. Ryte provides you with up to 5x more data than you can manually get out of Search Console and Ryte will also save this data for you indefinitely. Besides our dedicated tool "Search Success" targeted towards monitoring, analyzing and optimizing your search performance, you can also access live data in real time within "Website Success" with just one click for any crawled URL."|
|Integration with server logs||No|
|Integration with other tools||No|
|Not yet, but coming soon.|
|Why should users use Ryte?||"With over 1 Billion crawled URLs each month, Ryte is a state-of-the-art crawling solution trusted by some of the best SEOs and Online-Marketeers in the world. Besides being used by numerous well-known brands and large agencies, Ryte tries to make this quite complex topic accessible to everyone. Ryte is understandable for Rookies but loved by Experts.|
Ryte is an innovative strategic software suite which allows you to continuously monitor, analyze and optimize not just your website but also your contents as well as your search performance with the help of 100% real Google data.
Ryte monitors vital elements of your website's success. You get the best possible data to make informed decisions and ultimately also hands-on advice to correct errors and make your website the best it can be. Ryte's products are designed to deliver on these action points, making your life easier and more successful.
Everything you need in one platform to make sure your digital business is growing.
Don't just do it, do it Ryte ;) "
|Free account - trial||Yes.|
You can test the full Ryte suite for 30 days for free. Smaller websites can sign up for the free Ryte account, which allows them to analyze up to 100 URLs per month, free for a lifetime: https://en.ryte.com/product-insights/the-ryte-free-account-in-detail