Original text (24th January)
This page is explicitly “noindex,follow”.
Here are some links to pages that (at the time of writing) are not and have never been linked to anywhere else:
In about a month I’ll remove that robots.txt rule, and see if Google crawls the previously blocked page. If it does, that means this page is still being treated as “noindex,follow”. If it doesn’t, and that remains the case for a reasonable period of time, that indicates this page is being treated as “noindex,nofollow”.
I’m using robots.txt to do this because it means I don’t have to update this page to change what it (follow) links to – so if the mechanism described by John Mueller refreshes when the page is updated, that won’t invalidate my methodology.
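For anyone wanting to replicate the setup, it looks something like the following. The paths here are illustrative placeholders, not the actual URLs used in the test: a robots meta tag on this page, plus a robots.txt rule blocking Google from crawling one of the link targets.

```
# robots.txt – blocks crawling of one linked page (path is hypothetical)
User-agent: *
Disallow: /blocked-test-page/
```

```
<!-- In the <head> of this page -->
<meta name="robots" content="noindex,follow">
```

The key point is that removing the `Disallow` line later changes which links Google can follow from this page without editing the page itself.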
Here’s a link to the Screaming Frog output as it currently stands.
Update (29th January)
Here’s the site: search result on January 29th (I forgot to check sooner):
This is behaving as expected: the two pages linked to from here with normal links have both been found, but not the one I’ve linked to with a “nofollow” attribute.
In addition, I decided to add this link, in case I need a second robots.txt-blocked URL to play with later:
A bit of a rant about some of my biggest pet peeves in interpreting analytics, rank-tracking, or ranking-factor study data.
Sometimes, you can’t have any more pie. Perhaps the pie is infinitesimally small. Or perhaps the rest is already taken. Or perhaps you already have the entire pie. In these cases, to progress, you must make the pie itself larger.
Information architecture is a broad topic, which arguably includes almost everything that we traditionally call technical SEO, and a lot of UX. In this post, I’m going to focus more narrowly on quickly identifying simple changes to a site’s internal navigation that can boost the performance of your key landing pages. At its most basic, this is a process you could execute in half an hour.
Back in Google’s early days, people navigated the web using links, and this made PageRank an excellent proxy for popularity and authority. The web is moving away from primarily link-based surfing, and Google no longer needs a proxy — so what, in 2017, is the point in links?
(Spoilers: We’re not done with them yet…)
Following on from some of my recent content and research around the importance of brand awareness for SEO, the next question should be how we can measure it with the same level of accuracy that we’ve become used to for other digital marketing KPIs. This post suggests a variety of ways to get started.
Which is the better predictor of rankings – branded search volume, or Domain Authority?
A deep dive into one of the pieces of research that went into my SearchLove San Diego presentation.
If you don’t have a physical presence, there are some situations where you can still rank for local queries. Learn whether you should, how to identify queries to compete for, and recommendations on how to optimize for them.
Statistical forecasting is a powerful tool that’s been used at Distilled for a while, both by consultants when analysing client data and by our in-house monitoring tool that alerts us to problems with client sites. In this post, I’m publicly launching a free forecasting tool that I spoke about last week at BrightonSEO, and explaining how to make best use of it.