Interviewing Google’s John Mueller at SearchLove: Domain Authority metrics, sub-domains vs. sub-folders, and more


I have previously written and spoken about how to interpret Google’s official statements, technical documentation, engineers’ writing, patent applications, acquisitions, and more (see: From the Horse’s Mouth and the associated video as well as posts like “what do dolphins eat?”). When I got the chance to interview John Mueller from Google at our SearchLove London 2018 conference, I knew that there would be many things that he couldn’t divulge, but there were a couple of key areas in which I thought we had seen unnecessary confusion, and where I thought that I might be able to get John to shed some light. [DistilledU subscribers can check out the videos of the rest of the talks here – we’re still working on permission to share the interview with John].

Mueller is Webmaster Trends Analyst at Google, and these days he is one of the most visible spokespeople for Google. He is a primary source of technical search information in particular, and is one of the few figures at Google who will answer questions about (some!) ranking factors, algorithm updates and crawling / indexing processes.

We had previously seen numerous occasions where Google spokespeople had talked about how metrics like Moz’s Domain Authority (DA) were proprietary external metrics that Google did not use as ranking factors (this, in response to many blog posts and other articles that conflated Moz’s DA metric with the general concept of measuring some kind of authority for a domain). I felt that there was an opportunity to gain some clarity.

I expect that practically everyone around the industry has seen at least some of the long-running back-and-forth between webmasters and Googlers on the question of sub-domains vs sub-folders (see for example this YouTube video from Google and this discussion of it). I really wanted to get to the bottom of this, because to me it represented a relatively clear-cut example of Google saying something that was different to what real-world experiments were showing.

I decided to set it up by coming from this angle: by acknowledging that we can totally believe that there isn’t an algorithmic “switch” at Google that classifies things as sub-domains and ranks them deliberately lower, but that we do regularly see real-world case studies showing uplifts from moving content from sub-domains into sub-folders, and so asking John to think about why we might see that happen. He said [emphasis mine]:

Sometimes that includes sub-domains, sometimes that doesn’t include specific sub-domains. So, that’s probably where that is coming from where in that specific situation we say, “Well, for this site, it doesn’t include that sub-domain, because it looks like that sub-domain is actually something separate.” So, if you fold those together then it might be a different model in the end, whereas for lots of other sites, we might say, “Well, there are lots of sub-domains here, so therefore all of these sub-domains are part of the main website and maybe we should treat them all as the same thing.”

And in that case, if you move things around within that site, essentially from a sub-domain to a sub-directory, you’re not gonna see a lot of changes. So, that’s probably where a lot of these differences are coming from. And in the long run, if you have a sub-domain that we see as a part of your website, then that’s kind of the same thing as a sub-directory.
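
To make that tangible, here is a minimal sketch – using entirely hypothetical blog.example.com URLs – of the kind of check worth running after folding a sub-domain into a sub-folder: confirming that every old sub-domain URL permanently redirects to its new sub-folder equivalent, so that Google can treat the two as one site.

```python
# Hypothetical sketch: verify that old sub-domain URLs 301-redirect to their
# new sub-folder equivalents after a migration. Swap in your own mapping.
import requests

URL_MAPPING = {
    "https://blog.example.com/post-1": "https://www.example.com/blog/post-1",
    "https://blog.example.com/post-2": "https://www.example.com/blog/post-2",
}

def check_redirect(old_url: str, expected_url: str) -> bool:
    """Follow redirects from old_url and confirm a permanent first hop that
    ends at expected_url."""
    response = requests.get(old_url, allow_redirects=True, timeout=10)
    permanent_first_hop = bool(response.history) and response.history[0].status_code in (301, 308)
    return permanent_first_hop and response.url == expected_url

if __name__ == "__main__":
    for old, new in URL_MAPPING.items():
        print(f"{'OK' if check_redirect(old, new) else 'CHECK'}: {old} -> {new}")
```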

Another area that I was personally curious about going into our conversation was how John’s role fits into the broader Google teams, how he works with his colleagues, and what is happening behind the scenes when we learn new things directly from them. Although I don’t feel like we got major revelations out of this line of questioning, it was nonetheless interesting:

I assumed that after a year, it [would be] like okay, we have answered all of your questions. It’s like we’re done. But there are always new things that come up, and for a lot of that we go to the engineering teams to kind of discuss these issues. Sometimes we talk through with them with the press team as well if there are any sensitivities around there, how to frame it, what kind of things to talk about there.

For understandable reasons, there is a general reluctance among engineers to put their heads above the parapet and be publicly visible talking about how things work in their world. We did dive into one particularly confusing area that turned out to be illuminating – I made the point to John that we would love to get more direct access to engineers to answer these kinds of edge cases.

Concrete example: the case of noindex pages becoming nofollow

What I found more interesting than the revelation itself was what it exposed about the thought process within Google. What it boiled down to was that the folk who knew how this worked – the engineers who’d built it – had a curse of knowledge. They knew that there was no way a page that was dropped permanently from the index could continue to have its links in the link graph, but they’d never thought to tell John (or the outside world) because it had never occurred to them that those on the outside hadn’t realised it worked this way [emphasis mine]:

it’s been like this for a really long time, and it’s something where, I don’t know, in the last year or two we basically went to the team and was like, “This doesn’t really make sense. When people say noindex, we drop it out of our index eventually, and then if it’s dropped out of our index, there’s canonical, so the links are kind of gone. Have we been recommending something that doesn’t make any sense for a while?” And they’re like, “Yeah, of course.”
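
As a practical aside, this behaviour is easy to audit for yourself. Below is a minimal sketch (the URLs are placeholders) that flags pages carrying a noindex directive in either the robots meta tag or the X-Robots-Tag header – the pages whose outgoing links Google will, per John’s explanation, eventually stop using once they drop out of the index.

```python
# Hypothetical sketch: flag URLs that are noindexed via the robots meta tag
# or the X-Robots-Tag HTTP header. The URL list is illustrative only.
import re
import requests

URLS = [
    "https://www.example.com/",
    "https://www.example.com/tag/archive/",
]

# Simple pattern for <meta name="robots" content="...noindex...">
NOINDEX_META = re.compile(
    r'<meta[^>]+name=["\']robots["\'][^>]+content=["\'][^"\']*noindex',
    re.IGNORECASE,
)

def is_noindexed(url: str) -> bool:
    response = requests.get(url, timeout=10)
    header = response.headers.get("X-Robots-Tag", "")
    return "noindex" in header.lower() or bool(NOINDEX_META.search(response.text))

if __name__ == "__main__":
    for url in URLS:
        if is_noindexed(url):
            print(f"noindex (links likely to drop out of the link graph): {url}")
```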

More interesting quotes from the discussion

Googlers don’t necessarily know what you need to do differently in order to perform better, and especially in the case of algorithm updates, their thinking that “search results are better now than they were before” doesn’t translate easily into “sites that have lost visibility in this update can do XYZ to improve from here”. My reading of this situation is that there is ongoing value in the work SEOs do to interpret algorithm changes and the longer-running directional themes in Google’s changes, to guide webmasters’ roadmaps:

the clearer we can separate the different parts of a website and treat them in different ways, I think that really helps us. So, a really common situation is also anything around safe search, adult content type situation where you have maybe you start off with a website that has a mix of different kinds of content, and for us, from a safe search point of view, we might say, “Well, this whole website should be filtered by safe search.”

Whereas if you split that off, and you make a clearer section that this is actually the adult content, and this is kind of the general content, then that’s a lot easier for our algorithms to say, “Okay, we’ll focus on this part for safe search, and the rest is just a general web search.”
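
As an illustration of the “clearer separation” John is describing, here is a small, purely hypothetical sketch that checks whether a set of flagged URLs all sit under one dedicated section of the site (the /adult/ prefix and the URLs are made up), rather than being scattered through the general content.

```python
# Hypothetical sketch: confirm that flagged content lives under one dedicated
# section of the site, so that filtering can target that section alone.
from urllib.parse import urlparse

ADULT_SECTION_PREFIX = "/adult/"  # made-up section prefix

flagged_urls = [
    "https://www.example.com/adult/gallery-1",
    "https://www.example.com/articles/mixed-in-page",  # not under the section
]

misplaced = [
    url for url in flagged_urls
    if not urlparse(url).path.startswith(ADULT_SECTION_PREFIX)
]

for url in misplaced:
    print(f"Flagged content outside the dedicated section: {url}")
```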

John can “kinda see where [rank tracking] makes sense”

I wanted to see if I could draw John into acknowledging why marketers and webmasters might want or need rank tracking – my argument being that it’s the only way of getting certain kinds of competitive insight (since you only get Search Console for your own domains) and also that it’s the only way of understanding the impact of algorithm updates on your own site and on your competitive landscape.

I think in general, I feel the SEO industry has come a really long way over the last, I don’t know, five, ten years, in that there’s more and more focus on actual technical issues, there’s a lot of understanding out there of how websites work, how search works, and I think that’s an awesome direction to go. So, kind of the voodoo magic that I mentioned before, that’s something that I think has dropped significantly over time.

As for what I learned from the experience itself: firstly, I learned that I enjoy it, so I do hope to do more of this kind of thing in the future. In particular, I found it a lot more fun than chairing a panel. In my personal experience, chairing a panel (which I’ve done more of in the past) requires a ton of mental energy spent making sure that people are speaking for the right amount of time, that you’re moving them onto the next topic at the right moment, that everyone is getting to say their piece, that you’re getting actually interesting content, etc. In a 1:1 interview, it’s simple: you want the subject talking as much as possible, and you can focus on one person’s words and whether they are interesting enough to your audience.

In my preparation, I thought hard about how to make sure my questions were short but open, and that they were self-contained enough to be comprehensible to John and the audience, and allow John to answer them well. I think I did a reasonable job but can definitely continue practicing to get my questions shorter. Looking at the transcript, I did too much of the talking. Having said that, my preparation was valuable. It was worth it to have understood John’s background and history first, to have gathered my thoughts, and to have given him enough information about my main lines of questioning to enable him to have gone looking for information he might not have had at his fingertips. I think I got that balance roughly right: enabling him to prep a reasonable amount while keeping a couple of specific questions for on the day.

I also need to get more agile at asking follow-ups and continuation questions – this is hard because you have to think on your feet – but I think I did it reasonably well in areas where I’d deliberately prepped to do it. This was mainly in the more controversial areas where I knew what John’s initial line might be, but also knew what I ultimately wanted to get out of it or dive deeper into. I found it harder where I hadn’t expected it – the moments when I realised I hadn’t quite got 100% of what I was looking for. It’s surprisingly hard to parse everything that’s just been said and figure out on the fly whether it’s interesting, new, and complete.

I don’t know if I’d have been able to get more out of him even if I’d pushed, but looking back at the conversation, I think I gave up too quickly, and gave John too much of an “out” when I was asking about their internal toolset. He said it was “kind of like Search Console” and I put words in his mouth by saying “but better”. I should have dug deeper and asked for some specific information they can see about our sites that we can’t see in Search Console.

John can “kinda see where [rank tracking] makes sense”

I promised above to get a bit deeper into our rank tracking discussion. I made the point that “there are situations where this is valuable to us, we feel. So, yes we get Search Console data for our own websites, but we don’t get it for competitors, and it’s different. It doesn’t give us the full breadth of what’s going on in a SERP, that you might get from some other tools.”

John’s response focused on how some of these tools gather their data:

They do things like they use proxies on mobile phones. It’s like you download an app, it’s a free app for your phone, and in the background it’s running Google queries, and sending the results back to them. So, all of these kind of sneaky things where in my point of view, it’s almost like borderline malware, where they’re trying to take users’ computers and run queries on them.

Ultimately, John acknowledged that “maybe there are ways that [Google] can give you more information on what we think is happening”, but I felt like I could have done a better job of pushing for the need for this kind of data on competitive activity, and on the market as a whole (especially when there is a Google update). It’s perhaps unsurprising that I couldn’t dig deeper than the official line here, nor could I have expected to get a new product update about a whole new kind of competitive insight data, but I remain a bit unsatisfied with Google’s perspective. I feel that tools which aggregate the shifts in the SERPs when Google changes its algorithm, and tools which let us understand the SERPs where our sites appear, are both valuable – and that Google is fixated on the ToS without acknowledging the ways in which this data is needed.
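
For what it’s worth, the kind of aggregation I have in mind doesn’t need to be sophisticated. Here is a toy sketch, using made-up keyword positions, of how rank tracking data can be summarised into day-over-day movement to surface the volatility that often accompanies an algorithm update.

```python
# Toy sketch with made-up data: summarise day-over-day rank movement across
# tracked keywords to flag unusual volatility in a SERP landscape.
before = {"blue widgets": 3, "buy widgets online": 7, "widget reviews": 12}
after = {"blue widgets": 9, "buy widgets online": 6, "widget reviews": 25}

shifts = {kw: after[kw] - before[kw] for kw in before if kw in after}
average_movement = sum(abs(delta) for delta in shifts.values()) / len(shifts)

# Positive delta means the keyword moved down the rankings (a worse position).
for keyword, delta in sorted(shifts.items(), key=lambda item: -abs(item[1])):
    direction = "down" if delta > 0 else "up"
    print(f"{keyword}: moved {direction} {abs(delta)} positions")

print(f"Average absolute movement: {average_movement:.1f} positions")
```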

Are there really strong advocates for publishers inside Google?

the engineering teams, [are] not blindly focused on just Google users who are doing searches. They understand that there’s always this interaction with the community. People are making content, putting it online with the hope that Google sees it as relevant and sends people there. This kind of cycle needs to be in place and you can’t just say “we’re improving search results here and we don’t really care about the people who are creating the content”. That doesn’t work. That’s something that the engineering teams really care about.

I would have liked to have pushed a little harder on the changing “deal” for webmasters as I do think that some of the innovations that result in fewer clicks through to websites are fundamentally changing that. In the early days, there was an implicit deal that Google could copy and cache webmasters’ copyrighted content in return for driving traffic to them, and that this was a socially good deal. It even got tested in court [Wikipedia is the best link I’ve found for that].