A look at some ranking behaviour I’ve seen in the wild, and my working theory to explain it. I’m using “ranking factors” to describe the static metrics we’re all familiar with, and thinking about how dynamic feedback from SERPs might be influencing rankings where there’s enough data to do so.
Something I’ve been thinking about for a while – what does internal linking best practice look like in a world with mobile-first indexing and (it turns out) “noindex,follow” URLs not being re-crawled? It’s easy enough for small sites, but this ought to be a major concern for any site with a 4+ figure page count.
While I was in Seattle for MozCon, I had the chance to record a couple of videos for Whiteboard Friday – this was actually the one that I got out in a rush at the end, so hopefully I’ll be able to share the first one, too, in the fullness of time!
In the meantime, this is the 5-minute version of my MozCon talk, covering the meat of the analysis involved.
This is my presentation from MozCon 2018. The topic is local pages for national or international businesses – primarily how to figure out whether you need them, and some things not to do if you do need them.
Original text (24th January)
This page is explicitly “noindex,follow”.
Here are some links to pages that (at the time of writing) are not and have never been linked to anywhere else:
- Here’s a link to a page that is blocked in robots.txt.
- Here’s a nofollowed link.
- Here’s a perfectly normal link.
In about a month I’ll remove that robots.txt rule, and see if Google crawls the previously blocked page. If it does, that means that this page is still being treated as “noindex,follow”. If it doesn’t, and that remains the case for a reasonable period of time, that indicates that this page is being treated as “noindex,nofollow”.
I’m using robots.txt to do this because it means I don’t have to update this page to change what it (follow) links to – so if the mechanic described by John Mueller resets when the page is updated, that won’t invalidate my methodology.
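The check I’ll be running – “is this URL blocked by robots.txt or not?” – can be reproduced locally with Python’s standard-library `urllib.robotparser`. This is a minimal sketch; the paths and rules here are invented for illustration, since the actual blocked URL isn’t shown in this post:

```python
from urllib import robotparser

# Hypothetical robots.txt rules standing in for the real ones
rules = [
    "User-agent: *",
    "Disallow: /hidden-test-page",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

# Blocked while the Disallow rule is in place
print(rp.can_fetch("*", "https://example.com/hidden-test-page"))  # False

# A normal page is unaffected
print(rp.can_fetch("*", "https://example.com/a-normal-page"))  # True
```

Removing the `Disallow` line (as I plan to do in about a month) flips the first check to `True` – at which point Google is free to crawl the page, if it still has the link.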
Here’s a link to the Screaming Frog output as it currently stands.
Update (29th January)
Here’s the site: search result on January 29th (I forgot to check sooner):
This is behaving as expected: two of the three pages linked to from here have been found, but not the one that I linked to with a “nofollow” attribute.
In addition, I decided to add this link, in case I need a second robots.txt-blocked URL to play with later:
- Here’s a second link to a page that is blocked in robots.txt (just in case).
A bit of a rant about some of my biggest pet peeves in interpreting analytics, rank tracking or ranking factor study data.
Sometimes, you can’t have any more pie. Perhaps the pie is infinitesimally small. Or perhaps the rest is already taken. Or perhaps you already have the entire pie. In these cases, to progress, you must make the pie itself larger.
Information architecture is a broad topic, which arguably includes almost everything that we traditionally call technical SEO, and a lot of UX. In this post, I’m going to focus more narrowly on quickly identifying simple changes to a site’s internal navigation that can boost the performance of your key landing pages. At its most basic, this is a process you could execute in half an hour.
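The half-hour process described above essentially boils down to comparing internal inlink counts against a list of priority landing pages. Here’s a minimal sketch of that comparison – the URLs and link pairs are invented for illustration; in practice you’d load the data from a crawl export (e.g. Screaming Frog’s “All Inlinks”):

```python
from collections import Counter

# Hypothetical crawl export: (source, target) internal link pairs
links = [
    ("/", "/category/widgets"),
    ("/", "/blog"),
    ("/category/widgets", "/product/blue-widget"),
    ("/blog", "/blog/post-1"),
    ("/blog/post-1", "/product/blue-widget"),
]

# Count how many internal links point at each page
inlinks = Counter(target for _, target in links)

# Hypothetical list of key landing pages you want to rank
key_pages = ["/product/blue-widget", "/category/widgets", "/landing/sale"]

for page in key_pages:
    count = inlinks.get(page, 0)
    if count < 2:
        print(f"{page}: only {count} internal link(s) - candidate for more navigation links")
```

Pages that matter commercially but sit near the bottom of the inlink count are the quick wins: adding them to site-wide or category-level navigation is usually the simplest change with the largest effect.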
Back in Google’s early days, people navigated the web using links, and this made PageRank an excellent proxy for popularity and authority. The web is moving away from primarily link-based surfing, and Google no longer needs a proxy — so what, in 2017, is the point in links?
(Spoilers: We’re not done with them yet…)
Following on from some of my recent content and research around the importance of brand awareness for SEO, the next question should be how we can measure it with the same level of accuracy that we’ve become used to for other digital marketing KPIs. This post suggests a variety of ways to get started.