This is my presentation from MozCon 2018, on local pages for national or international businesses – primarily how to work out whether you need them, and some pitfalls to avoid if you do.
I’d long noticed that, on sites with multiple analytics setups, reported traffic levels could differ even between unfiltered views. In this post, I dig into the patterns in that data.
Original text (24th January)
This page is explicitly “noindex,follow”.
Here are some links to pages that (at the time of writing) are not and have never been linked to anywhere else:
- Here’s a link to a page that is blocked in robots.txt.
- Here’s a nofollowed link.
- Here’s a perfectly normal link.
In about a month I’ll remove that robots.txt rule, and see if Google crawls the previously blocked page. If it does, that means that this page is still being treated as “noindex,follow”. If it doesn’t, and that remains the case for a reasonable period of time, that indicates that this page is being treated as “noindex,nofollow”.
I’m using robots.txt to do this because it means I don’t have to update this page to change what it (follow) links to – so if the mechanic John Mueller described resets when the page is updated, that won’t invalidate my methodology.
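The methodology hinges on robots.txt blocking behaving predictably before and after the rule is removed. As a minimal sketch – using Python’s standard-library parser and placeholder URLs and rules, not the actual ones used in this experiment – you can confirm whether a given Disallow rule blocks a crawler from a page:

```python
# Hypothetical sketch: verifying which test URLs a robots.txt rule blocks.
# The domain, paths, and rule below are illustrative placeholders.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: *
Disallow: /noindex-test/blocked-page
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# While the rule is in place, the blocked page is not fetchable,
# but the normally linked page is.
print(parser.can_fetch("Googlebot", "https://example.com/noindex-test/blocked-page"))  # False
print(parser.can_fetch("Googlebot", "https://example.com/noindex-test/normal-page"))   # True
```

Re-running the same check after deleting the Disallow line would show both URLs as fetchable – the state the experiment moves to when the rule is removed.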
Here’s a link to the Screaming Frog output as it currently stands.
Update (29th January)
Here’s the site: search result on January 29th (I forgot to check sooner):
This is behaving as expected: the two followed pages linked from here have both been found, but not the one linked with a “nofollow” attribute.
In addition, I decided to add this link, in case I need a second robots.txt-blocked URL to play with later:
- Here’s a second link to a page that is blocked in robots.txt (just in case).
A bit of a rant about some of my biggest pet peeves in interpreting analytics, rank tracking, and ranking factor study data.
Sometimes, you can’t have any more pie. Perhaps the pie is infinitesimally small. Or perhaps the rest is already taken. Or perhaps you already have the entire pie. In these cases, to progress, you must make the pie itself larger.
Information architecture is a broad topic, which arguably includes almost everything that we traditionally call technical SEO, and a lot of UX. In this post, I’m going to focus more narrowly on quickly identifying simple changes to a site’s internal navigation that can boost the performance of your key landing pages. At its most basic, this is a process you could execute in half an hour.
Back in Google’s early days, people navigated the web using links, and this made PageRank an excellent proxy for popularity and authority. The web is moving away from primarily link-based surfing, and Google no longer needs a proxy — so what, in 2017, is the point in links?
(Spoilers: We’re not done with them yet…)
Following on from some of my recent content and research around the importance of brand awareness for SEO, the next question should be how we can measure it with the same level of accuracy that we’ve become used to for other digital marketing KPIs. This post suggests a variety of ways to get started.
Which is the better predictor of rankings – branded search volume, or Domain Authority?
A deep dive into one of the pieces of research that went into my SearchLove San Diego presentation.