We’ve all heard about mobile-first design, but it can be hard to internalize. For many of us, our first instinct is to create a traditional desktop website and then work on making it mobile-friendly. As you’ll learn in this episode, a true mobile-first approach is more important than it’s ever been — and will only keep growing in importance.
In this conversation, my guest and I are going to take a deep dive and geek out about technical SEO. It’s hard to imagine anyone better suited for such a conversation than Dawn Anderson, who is an international SEO and digital marketing consultant for brands and startups alike. She and I speak at many of the same conferences, so I already know she’s a fantastic expert and the perfect guest for the show.
In this Episode
- [01:09] – Dawn starts off by discussing crawl budget, what it is, and why we care.
- [03:29] – Crawl budget feeds into things like page speed, Stephan points out. Dawn then expands on the idea.
- [05:46] – Dawn talks about the key areas in which you can emphasize importance in huge websites.
- [07:07] – Google pretty much ignores priority on XML Sitemaps because people lie, Dawn explains. Another problem is that they tend to get the “last updated” field wrong.
- [11:05] – Stephan steps in to clarify what Dawn has been saying by using the metaphor of a tree with sickly branches.
- [12:11] – Dawn discusses a canonical error that she sees frequently involving migrating sites and updating site maps. Stephan then offers his own related insight and advice.
- [16:25] – Dawn thinks that more education about search console parameters would be really useful for those in the search engine optimization field.
- [18:20] – What are some other technical SEO issues that a lot of people get wrong?
- [20:48] – For listeners who aren’t as familiar with technical SEO, Stephan clarifies and explains what Dawn has been talking about in simpler terms.
- [22:58] – Dawn offers an example involving an ecommerce site to illustrate what she and Stephan have been talking about.
- [26:15] – The older the site, the longer the process can take for all but the largest and most important sites, Dawn explains.
- [28:34] – What is hreflang, and why should we care? Dawn and Stephan discuss the challenges inherent in having different sites for Canada, Australia, England, and the United States, for example.
- [32:51] – These challenges apply in foreign languages too, Stephan points out, using the example of France and Quebec.
- [35:08] – Stephan and Dawn talk about the time that SEO takes.
- [36:56] – Dawn talks about a very recent Google release involving mobile-first indexing. Stephan then clarifies why it’s important to think about mobile first as your strategy, and desktop second.
- [40:01] – Dawn was recently watching a webinar, which she describes and discusses.
- [44:12] – On a desktop site, tabbed content is still in the HTML but not displayed by default. On mobile devices, Google isn’t discounting it in the same way.
- [45:25] – What is AMP? And do all Marketing Speak listeners need to implement it on their sites?
- [47:47] – Dawn explains that there have been issues with AMP’s proprietary code, but overall it increases performance and conversion rates.
- [49:26] – Speeding up a slow-loading website is the carrot, but there are sticks as well, Dawn points out.
- [51:36] – Does Dawn have a parting tidbit of advice for technical SEO? She recommends focusing completely on mobile and making sure it’s a fast experience.
Transcript
It’s been ages since we geeked out on technical SEO. Just kidding, it’s only been a few weeks, but it feels like ages, so we’re going to geek out about technical SEO again. In this episode, number 118, our guest is Dawn Anderson. Dawn is an international SEO and digital marketing consultant for brands and startups, and a lecturer and trainer in digital marketing, marketing analytics, and search marketing. She speaks at many of the same conferences that I do, including BrightonSEO and Pubcon. It’s a great pleasure to have her here today, and I think you’re going to enjoy this episode. Without any further ado, on with the show. Dawn, it’s great to have you on the show.
Thank you for having me. I’m really pleased that you asked me. Looking forward to it.
Yeah. Let’s talk about some geekier SEO topics. I know you’ve spoken about crawl budget before. Let’s start there.
Okay. What do you want to know in particular? Is there anything in particular?
Let’s start with the basics because a lot of our listeners are not super technical about SEO and they might not have even heard that term before. What is it and why do we care?
I think the first thing that’s really, really important to stress is that crawl budget as such is a term made up by SEOs. It’s not used internally by search engines, and effectively it’s two things in one. There are two parts to it. The first part is based upon crawling politeness, and that’s based on what they call host load. In other words, what is the capability of the server to cope with simultaneous crawling requests? What can this server handle? That’s crawling politeness. Obviously the main rule is don’t hurt the website when you’re crawling it. That takes care of about half. The other half is based on priorities. If you’ve got hundreds of thousands of pages that are really, really low importance, they don’t change very often, they’re maybe not the best quality in the world, they don’t trigger a lot of impressions, they’re not frequently clicked on by visitors in search, you could probably say they’re not actually that important to spend crawling resources on. That’s crawl frequency: how important is it to come back and visit this page often? If you’re a mom-and-pop blog with a page that never changes, or changes once a decade, and it doesn’t get a lot of visitors, it’s obviously not as important to visit very, very frequently as maybe the homepage of CNN. That’s quite a good example. It’s those two things. Host load, what can you handle, and the other half is how important is it to come back and how frequently does this page change, because obviously it’s important to keep the search results fresh and up to date, if that makes sense.
Crawl budget feeds into things like page speed, because if your host load is not very good, let’s say you’re on a lower-priced…
If you’re on shared hosting, yeah, with lots of other websites. Yeah, it does. Because my understanding, based on looking at logs, at papers, and obviously a lot of not just patents but a lot of information retrieval papers if you like, is that there is this notion of scheduling. Obviously, once a pattern is built about how often a page should be visited, based on crawl samples and getting a feel for the quality, a feel for the change frequency, etc., and past crawling behavior, these things get put into a bucket of importance, and obviously the schedule gets built so things run like clockwork. Googlebot comes out with what’s like a shopping list of things to go out and visit on a particular date and time or whatever, or whenever things come to the top of the queue. If, when Googlebot visits, your site is really slow and it’s not able to get around, or maybe you’re getting lots of errors, etc., Googlebot will pull back and maybe won’t get around everything that was on the shopping list on that occasion. Actually, things like page speed are really important because the more your site can handle, the better, and the quicker Googlebot gets through the list. Maybe next time, if the list is still big, Googlebot will come and say, “Right, we can do a bit more on that site because obviously it has a lot of changes. We couldn’t always get around it all before; now we can, it’s gone faster, etc.” Working on speed on bigger sites is really, really important. Plus it’s obviously a so much better user experience overall.
Right. So you advise clients to optimize their crawl budget so that they’re on a faster server and able to handle more hits from spiders like Googlebot, and also to better prioritize so that the most important pages are presented to Googlebot higher up in the list of what gets crawled and how often, right?
Yeah. I find, when people have huge, huge websites, there are some key areas in which you can emphasize importance, through things like XML sitemaps. Maybe you’ll build a good structure, a scaffolding, that’s almost below the surface. Even using things like HTML sitemaps. You don’t necessarily have to have them indexed, but they’re really for navigation once people are on the site, or maybe even for search engines to traverse and for robots to understand the structure and the importance, etc., and to signpost. And obviously the use of internal links, where you link between site sections that are maybe related, and play down the pages that are not important so that they have fewer internal links, etc. That seems to be really, really quite powerful when you have a big site. It’s like presenting more clues for the important pages and fewer clues for the less important pages as such.
XML sitemaps have this priority field so that you can specify how important a page is from a crawling standpoint, not from a PageRank standpoint.
Google have said that’s pretty much ignored by them. John has said it quite a few times, and the reason is people lie, because obviously everybody thinks their pages are important, so they always go priority 1.0 because they want Google to come back to that page all the time. Also, people tend to get “last updated” wrong. They try to trick Googlebot, maybe sometimes inadvertently, and they’ll put “updated daily”. They don’t tend to use the server response to say when it was last updated; they’ll just put “updated daily” into their XML sitemaps. Obviously, if it’s pretty evident that those pages are not updated daily, because search engines can presumably tell by comparing this visit’s version of the URL with the previous visit’s as to whether it actually did change or not, then you could assume that once they realize the whole XML sitemap is inaccurate, they’re potentially not going to trust it as much. Whereas if they realize that the last modified dates in this XML sitemap are actually usually fairly accurate, then it’s quite a good signal that they should take note of it. But definitely, Google have said that priority is something they pretty much ignore in XML sitemaps. I think the thing is, when John was interviewed by Eric a couple of years ago now, he said, “If you include something in an XML sitemap in the first place, it’s considered more important than if you don’t include it. But if you include everything, then nothing is important.” In this case, it’s worth making the extra effort, rather than spitting out an auto-generated XML sitemap, to stress the importance of these URLs. If you use contextual internal links, links within paragraphs with related terminology around the anchors, into different pages, that’s quite a good signal of importance as well, and it’s also very useful for the visitor. With XML sitemaps, the key is to keep an eye on what you include in there and make sure it’s important stuff. I tend to structure them really well. I use categorized XML sitemaps wherever possible, just because when you add them in Google Search Console, it’s always much easier to understand what’s being indexed from a category versus what’s not being indexed, and to drill down and identify issues there.
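To make the sitemap fields concrete, here is a minimal sketch of a single categorized sitemap entry; example.com, the URL, and the date are placeholders, not from the episode. As discussed above, an accurate lastmod is the useful signal, while priority is largely ignored.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Hypothetical entry from a category-specific sitemap -->
  <url>
    <loc>https://www.example.com/handbags/</loc>
    <!-- Only set lastmod when the page genuinely changed; inaccurate dates erode trust in the whole sitemap -->
    <lastmod>2017-10-26</lastmod>
    <!-- Google has said it largely ignores priority, so don't rely on it -->
    <priority>0.8</priority>
  </url>
</urlset>
```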
Right. You could even have separate setups inside of Google Search Console to track things differently.
That’s really powerful. On a big site, yeah, you need it. You can see a lot more clearly where the issues are, maybe which pages are not getting indexed. What you can’t see is how many pages are being crawled from a section. I’ve asked about this a few times. I asked John on Twitter and he said they didn’t really feel that there was a good enough use case for it. But for me, that would be really useful, because if we could see in Search Console which sections of a site were being crawled, rather than just at the root level that we have to look at now, it would help us to identify areas which Googlebot doesn’t consider that important. Obviously, we have to bear in mind that it doesn’t necessarily mean you’re going to rank higher if your pages get crawled more, but Google have also said that, generally speaking, if your crawl is going up, it’s a sign of a better quality site, and if it’s going down, if your site is getting bigger and the crawl is getting smaller, then maybe it’s a sign that those pages are, generally speaking, not really considered that important. Does that make sense?
Yeah. Essentially it’s like this: if you have a tree and the tree is your website and the branches are your webpages, to have a lot of sickly branches to your tree makes your tree overall look sickly. If you have a lot of unimportant pages that are thin content or duplicate content or have other quality issues to them, your tree, your website overall looks sickly. That’s not good, that’s not going to bode well for your rankings in Google.
Overall, yeah. If you’ve got a huge website and you’re not really getting a lot of crawl, it’s probably not a great sign.
Right. Your XML sitemap plays a role in helping to identify what is important, what the healthy branches of your tree are, because it is a canonicalization signal. If you include everything, as you said, then Google doesn’t give your XML sitemap file as much weight, it doesn’t take it as seriously. You want to include only canonical URLs in your XML sitemaps, nothing that’s duplicate.
No, absolutely. A question that was asked today on Twitter by John again was, “What’s one of your favorite canonical errors that you see?” What I see often is a lot of people go to migrate a site from HTTP to HTTPS and then forget to update their XML sitemaps, which still have HTTP hanging around for their whole site. Also, a lot of their internal links point back to HTTP. We have to remember as well that with canonicalization, you have to tick a lot of boxes for it to be considered a valid, strong clue, and internal links to the wrong page invalidate it quite strongly. A lot of people make a lot of mistakes with canonicalization.
In particular with the migration from HTTP to HTTPS, there are a lot of details, and missing just one can really mess up your migration process. I just had a client migrate last week. I had them set up a separate XML sitemap for HTTPS and a separate sitemap for HTTP, which they maintained, so they didn’t replace it when they migrated. I made sure they kept that alive so that Googlebot could discover the redirects much more quickly. What they forgot to do, or made a mistake in, was that the robots.txt file for HTTPS still referenced the HTTP URL for the sitemap file; they didn’t change that one. The other one, the robots.txt on the HTTP site, which was separate, they did not 301. They needed to reference the correct sitemap in the correct robots.txt file, so we made that change.
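As an illustration of the robots.txt detail described here, a minimal sketch, with example.com as a placeholder domain: after an HTTPS migration, the robots.txt served on the HTTPS host should point at the HTTPS sitemap URL rather than the old HTTP one.

```text
# robots.txt served at https://www.example.com/robots.txt (hypothetical domain)
User-agent: *
Disallow:

# Reference the post-migration HTTPS sitemap here, not the legacy HTTP sitemap
Sitemap: https://www.example.com/sitemap_index.xml
```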
So many boxes to tick. It’s also things like where you’ve got a huge parameter-driven site, and a lot of people don’t bother to set up the parameters on the new site when they switch it across. Even on things like old URLs that are still being crawled and may be years old, sometimes those parameters are still listed in Search Console. But when people switch across to HTTPS and build the new profile, they need to set up the parameters for the new site in Search Console as well, so that it doesn’t all end up getting messed up as soon as Googlebot starts to go into old places as such.
I think a lot of people don’t use that URL parameters tool inside of Search Console. That’s another thing too: it’s not just that that tool exists inside of Google Search Console; there’s another version of it that Microsoft has inside of Bing Webmaster Tools. If you leave it to chance, or to Google, to figure out whether a parameter in your URL is superfluous or essential, a lot of times Google gets it wrong, and you can easily see that to be the case. Let’s say you’re using tracking parameters like utm_source and utm_medium, those sorts of Google Analytics tracking parameters. It’s amazing that you can do a site: search of your site combined with inurl:utm_medium or inurl:utm_source and find URLs in the Google search results that have that tracking information. Google should know better, because it’s its own tool and it’s a superfluous tracking parameter from its own tool, and yet they still index those duplicate URLs, duplicate content.
They’re often there in Search Console. As soon as we look in Search Console on bigger sites, you can so often find those tracking parameters listed in there. I think part of the issue as well is people are a little bit afraid of URL parameters, because it has got this big warning in there that says this is an advanced feature and you could end up cutting off a lot of your traffic or impacting your visibility in search engines really negatively if you get it wrong. I think maybe more education around Search Console parameters would be really useful for SEOs generally out there, because the really powerful sites, for me, are becoming more and more data-driven, parameters are in place, and a lot of them are working off standard CMSs with settings that search engines know, if you like, but there are still quite a lot of custom-built things out there. I think also people don’t necessarily realize the difference between passive and active parameters. Things like the UTM stuff, which is passive and just for tracking, and the active parameters that actually change the content output. The concern amongst a lot of people is that they’ll end up blocking a whole section. It’s really easy to do that, unfortunately. If you block handbags and you really meant to just block purses, then you’ve blocked everything.
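To illustrate the passive-versus-active distinction with a hedged sketch (the URLs and the color parameter are hypothetical): a tracking parameter doesn’t change the content, so the tagged URL can simply canonicalize to the clean version, whereas a parameter that changes what is output needs its own handling.

```html
<!-- Passive (tracking) parameter: the content is identical, so the tagged URL
     https://www.example.com/purses/?utm_source=newsletter&utm_medium=email
     should carry a canonical pointing at the clean URL -->
<link rel="canonical" href="https://www.example.com/purses/">

<!-- Active parameter: something like ?color=red actually changes the content output,
     so it needs its own canonical decision or parameter settings in Search Console -->
```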
Yeah.
Really need to produce something a bit more useful in terms of information on parameters handling generally.
Yeah. I saw that weakness, that lack of information and good training on it. When I created my SEO audit course, my online course, I included a huge section on every single tool inside of Search Console, how to use it and when, and all the different gotchas and potential pitfalls as well.
So many things that could go wrong.
Yes. robots.txt with the parameter handling, with sitemaps, etc. Let’s actually talk about some more technical SEO issues that a lot of people get wrong. What would be some of the others that we should discuss?
There’s a huge amount of conversation going on now about JavaScript; that seems to have really pushed to the front. I think a lot of people don’t get their heads around the fact that Googlebot doesn’t perform actions, for instance in JavaScript. If you’ve got a URL that’s only triggered onclick, Googlebot doesn’t click, so it’s never going to see those URLs. In those kinds of instances, what we tend to do is make sure there are other paths into those pages that you’re trying to get Googlebot or humans to access via onclick. This is again an area where you can build really, really good navigational sections within a site to make sure that everywhere you want to be accessible is accessible. At the same time, I think a lot of people don’t restrict Googlebot from things like filters in ecommerce, and they’ll end up with Googlebot going through everything; they’ll maybe try to use Search Console to do the restrictions. But one of the good ways to do it is to use things like onclick, because you know that Googlebot is not going to do the onclick or any action. That’s quite a big one. I see quite a lot of issues with headers as well; some people still get things like soft 404s wrong. Obviously canonicalization is just a massive issue. When I did my masters, I did quite a bit of surveying of people’s overall knowledge of canonicalization. Even though the sample was not huge, it was all amongst SEOs, and a huge amount didn’t realize, for instance, that rel=prev and rel=next is a completely different relationship to canonicalization. You see a lot of people canonicalize from a paginated series to the first page of the series and not realize that actually the only time you really should do that is when it’s a view-all page, because you can’t have content that is, say, on page four of a series included or taken into consideration when you canonicalize from that page to the first page. That’s a big issue. That’s a huge issue, actually. I see that a lot.
Let’s explain that for those who aren’t really into technical SEO as much. Basically, what’s happening is, in a pagination series, let’s say pages 1 through 20 of handbags. You’re an ecommerce site that’s selling a whole bunch of different handbags. There are 20 pagination pages, each with 20 thumbnails, product names, and links for more info. Rel=prev and rel=next help Google understand what the pagination series is; you’re very explicit: this is my pagination series, here is the previous page, here’s the next page, and so forth, which is a best practice, you should be doing it. What you’re saying is that sometimes people will paginate and set up rel=prev and rel=next fine, but what they screw up is the canonical tag, the canonical link element more technically. Correct?
Yeah.
They point it to page one of the pagination series instead of to a view-all page, which would be okay, or to a self-reference.
Canonicalize. Yeah, exactly. That seems fairly frequent.
This is not just about pagination; it’s about any time you canonicalize to a page that’s not the same, or where your page isn’t a piece of that larger page, because canonicalization is supposed to be about saying, “Hey Google, this is the canonical, definitive source URL for this piece of content, and it doesn’t have a tracking parameter; that canonical version doesn’t have all these passive parameters, or what I call superfluous parameters.” If you canonicalize to something completely different, then that’s confusing to Google and it’s also not what you’re supposed to be doing. You’re supposed to tell Google, “Hey, this is the canonical for this piece of content.” If the content isn’t even there, or it’s swapped out, page one instead of page four, now you’ve messed things up.
Yeah, exactly. A good example would be, for instance, you’re on an ecommerce site. It’s ordered by default from high price to low price, but all the products are there, and it’s that series of handbags across 20 pages. Then you canonicalize to page 1, but page 1 only shows 20 products, the highest-priced products. Then everything lower priced, beyond those first 20, is not taken into consideration in the content at all. You’re missing out on a lot there. The only other option really is to self-reference; the search engine should then realize this is the first page and the most important page of the series, but because each page in the series is canonicalized to itself, everything is included. The other option is a view-all page, which means everything from the whole series is on that one page, from the highest-priced to the lowest-priced handbag. The rule is basically that the canonical target either has to be duplicative of, or a superset of, what you’re canonicalizing from. The content must be included. If you canonicalize from a page about red shoes to shoes, but there’s no mention of red shoes within that shoe category at all, that’s what I would imagine would be invalid or would kind of be ignored really, certainly because canonicalization, as we know, is only a hint. You’re probably never going to rank for red shoes with that page. Does that make sense? I see there’s quite a big gotcha on Magento sites, in that you’ll see people canonicalize to the category from a subcategory, and then the only reference to the subcategory is in a link from the category page back into the subcategory. You’re in this endless loop where you’re passing relevance from the category down to the subcategory, but the category doesn’t include the content that you’re canonicalizing from. That’s quite a big, big problem that you need to look at if you’re running Magento sites in particular.
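Here is a minimal sketch of the pagination markup being described, for a hypothetical page 2 of a 20-page handbags series (the URLs are placeholders): the prev/next links declare the series, and the canonical either self-references or points to a genuine view-all page, never to page 1 on its own.

```html
<!-- In the <head> of https://www.example.com/handbags/?page=2 (hypothetical URL) -->
<link rel="prev" href="https://www.example.com/handbags/?page=1">
<link rel="next" href="https://www.example.com/handbags/?page=3">

<!-- Option 1: self-referencing canonical -->
<link rel="canonical" href="https://www.example.com/handbags/?page=2">

<!-- Option 2 (instead of the above), if a genuine view-all page contains every product: -->
<!-- <link rel="canonical" href="https://www.example.com/handbags/view-all/"> -->

<!-- Canonicalizing page 2 to page 1 would drop the products on pages 2 to 20 from consideration -->
```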
Yeah. This is a really important point for listeners, what you just said: it’s only a hint. A canonical link element is only a hint; it’s not obeyed by Google 100% of the time. If you rely on it, you have to check and make sure that it’s actually being obeyed. There are so many cases where it should be obeyed, it’s been set up correctly, there are superfluous parameters, passive parameters, in the URL, and you don’t want that version of the URL in the search results, and yet there it is. You have to come up with other, alternative means sometimes. For example, you might need to use a 301 redirect to get rid of duplicates that are on other subdomains, or the non-www version of your site should always have a redirect in place and not just rely on the canonical tag.
I think the thing as well is, the older the site, the longer it can take for things like canonicals, unless it’s a very important site which gets crawled many times a minute, something like that. Because all the signals, because they are just hints, take quite a while to consolidate together. For instance, some of the less important pages may very rarely get visited to pick up on that strengthening of canonicalization to a superset above them. It just takes quite a while really, especially when you start to throw hreflang into the mix, which is also, as we know, a form of canonicalization in itself. That’s another big area where people seem to make a mess of things, when they almost invalidate, or they do invalidate, the canonicals when they have hreflang going somewhere else. That’s another mess. It just all takes time.
Let’s talk about hreflang, because this is important for sites that are trying to target other languages and other countries besides their main one, and there are lots of gotchas there. They could screw things up if they don’t know what they’re doing. What’s hreflang and why should we care? Let’s start there.
Hreflang is another tag that goes in the head of the page and basically says to search engines that this is the canonical version of a page for this territory and this language. If you have two pages, one French, one English, you have to have two tags that cross-reference each other; otherwise, you’ll end up with invalid hreflang. It has to be reciprocal across the different nationalities that you’re presenting content to. On your English page, you’d stress via your hreflang tag that this is an English page and it’s for en-GB (English, Great Britain), en-US (English, US), or en-AU (English, Australia), and then obviously you’d have one for your French site as well which indicates that it’s for France and it’s in the French language. It’s important because it ensures that the right page ends up in the right search for the right country. I know that Google’s implementing something now whereby only French sites should show up in France, etc., so results are going to be localized based on the language and so forth.
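A hedged sketch of reciprocal hreflang annotations for a hypothetical site with British, US, and French versions (example.com and the paths are placeholders): the same set of tags has to appear in the head of every page in the set, otherwise the annotations are treated as invalid.

```html
<!-- Placed in the <head> of each of these pages (hypothetical example.com URLs) -->
<link rel="alternate" hreflang="en-gb" href="https://www.example.com/uk/">
<link rel="alternate" hreflang="en-us" href="https://www.example.com/us/">
<link rel="alternate" hreflang="fr-fr" href="https://www.example.com/fr/">
<!-- Optional fallback for users whose language or region isn't listed -->
<link rel="alternate" hreflang="x-default" href="https://www.example.com/">
```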
Yup. That’s really, really powerful to be able to master international SEO and then rank in various country versions of Google. Because if you don’t understand that, it’s tough to compete with those local sites that are in the native language, in the native dialect, and using the colloquialisms.
I think what happens as well, if sites are not careful and don’t do it right, is that some of the biggest problems tend to be where people have sites that are English and Australian and targeting the US as well. Three English sites, for instance, or even a Canadian one too, throw that in there. Because Google only wants to show one URL that has a particular type of information, the canonical URL fingerprint if you like, the best version of this content, you’ll end up with the wrong sites ranking in the wrong Google search if you’re not careful. That can be quite a challenge. I think it’s more straightforward if one page is in French, for it to show up only in French search, and if you only have one English site, for it to show up in English search. When you start to throw in different sites which are all English-speaking but are for different territories, that’s when it can get really messy, and you end up with sites stealing traffic off each other from the same group as such.
Right. It ends up looking like duplicate content, not because Google can’t understand that there’s a country version of your site for Australia, another country version for New Zealand, another for the UK, and another for the US; it’s that you screwed up the configuration, you didn’t do the hreflang tags right, you didn’t do the canonical tags right, and thus you confused Google.
I think what also happens with those signals, because it’s only a hint, is there are so many things that can go wrong with it, like, for instance, more links pointing to the English site than the Australian one, which almost counters it. I think it’s key to use, as you said, localized language. If you’re doing a US site and you’re going to write the word “color”, make sure it doesn’t have a “u” in it, because those sorts of signals stress that this is actually Americanized English. You wouldn’t spell the word canonicalization without a “z” there, whereas on the GB site you would use the “s”. Again, little signals that this is a British English site versus the Americanized English version. Things like places as well that are clearly mapped and tied to the US versus the UK, that for me is another signal: this is a UK site versus the US one. Everything is just incremental gains with this thing, isn’t it?
Yeah. It also applies in foreign languages too. If you’re trying to target France and Canada, French speaking Canada, the localization matters there too. For example, in France, they’ll refer to blueberries as myrtilles and in Canada, Quebec, they’ll talk about bleuets instead of myrtilles. If you’re using the right local term in the France version of your French site and the French Canadian version of your site, then that helps Google figure out your targeting.
Massively, massively.
And local links matter too. If you have a lot more links pointing to your French Canadian sites from other Quebec sites, that helps Google figure out that oh, this is targeting Quebec or French speaking Canadians.
Absolutely. Yeah, for sure. It all makes a difference. As you say, it’s just incremental gains, small signals, but for me, it just builds up over time. Obviously, we have the challenge that sometimes, when you’ve got people six weeks in wanting to know answers, it’s very difficult to explain that these things take time, that actually you have to be patient, that Googlebot will rarely just update everything quickly because we want it updated quickly. I think in a way that really is a very big challenge for SEO, because with PPC obviously, and don’t get me wrong, I’m a fan of PPC, I think everything needs to work together, PPC is very easy to switch on and off, and with a bit of optimization you can immediately get the results that you’re after. With SEO, it’s a long game and things don’t always go quite as you hoped, especially when you’re dealing with big dev teams that have sprints they’re waiting to push, etc., and sometimes they do it wrong. And then you’re back at square one a little bit. It’s kind of three steps forward, two and a half back, two steps forward, one step back. That’s the thing. I think the key is consistency and just being prepared to trust SEOs. Which again, is another subject in itself, isn’t it?
Yes. When you say that SEO takes time, this is not just SEOs’ opinion; this is from the search engines themselves. Google engineers, Maile Ohye for example, who’s no longer with Google, but when she was with Google, she put out a Google Webmaster Central video explaining that SEO takes time, that it takes four months to a year.
I think she said about six months to a year for you to start to see results, or even to see things starting to go in the right direction. For me, you have to be consistent with the signals, just keep being consistent. I was reading some stuff in some information retrieval books, and it seemed to imply that everything is based around comparing this last crawl with the previous one, and with past crawls. It’s like rolling averages going in the right direction or rolling averages going in the wrong direction. You can imagine those just take a while to point to positive things.
A little bit ago, when we were talking about hreflang tags, you said it’s important to bidirectionally link.
Reciprocally link, yeah.
That reminded me about mobile sites. When you have a separate mobile site, a common mistake is to not bidirectionally or reciprocally link between the mobile site and the desktop site. If you have separate URLs for these, they should bidirectionally link to each other: the desktop site linking via a rel alternate tag to the mobile site, and the mobile site linking to the desktop site through a rel canonical.
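For the separate-mobile-URL setup described here, a minimal sketch with hypothetical URLs (www.example.com and m.example.com are placeholders): the desktop page declares its mobile alternate, and the mobile page canonicalizes back to the desktop page.

```html
<!-- On the desktop page, https://www.example.com/page/ (hypothetical) -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="https://m.example.com/page/">

<!-- On the mobile page, https://m.example.com/page/ -->
<link rel="canonical" href="https://www.example.com/page/">
```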
Yeah. We just had something out today from Google; they released something today, didn’t they? That was on preparing for the mobile-first index. It’s really near. It’s really important that people fully read up on exactly what that entails and check things like server log files to see whether they’re already starting to be added to the mobile-first index. John said that on Twitter, I think it was last week, that you should be able to see from your server log files whether you’ve been included, because it will be the smartphone bot rather than the desktop Googlebot that’s accessing your URLs now. More sites will start to move over, so the percentage is going to increase there. Yeah, definitely, it’s very much about ensuring that all the signals are tied up between desktop and mobile, and obviously, for me, the simplest solution is just to get a responsive website.
Yeah. That’s something that Google engineers have said before, that if you have a responsive website, it’s by default mobile-first ready. Mobile-first indexing is not going to be a problem for you if you have a responsive website.
Yeah, absolutely. But when we look at some of the responsive sites out there, some of the experiences on these sites that are supposed to be mobile are really just not very good on mobile, so many of them. They tick the box, they pass the mobile-friendly test, but their navigation is poor, it’s not built for mobile users that are using their thumbs. A lot of developers are not really thinking “let’s develop for mobile first”; they’re still doing it the other way around. I think going forward, the key is just to build the site for mobile, for the smallest screens, by default, and then work it backward so that desktop also works. It’s obviously just easier to use websites on desktop than it is on mobile, but people interact with them differently. The scrolling, the thumbs, even different groups of users. Millennials onward have practically had a phone cradled in their hand, and they use their thumbs to do things. Older generations still tend to be very one-finger-ish, really.
Right. When you say it’s important to develop for mobile first, for those listeners who are not really that familiar with that concept, with Google’s announcement that they’re switching to mobile first indexing, that makes it critical that you are not just mobile friendly, but thinking about mobile first as your strategy and the desktop second because that’s where all the users are. Google searchers are more on mobile devices than they are on desktop and if you want to get all that traffic, you need to optimize first for mobile users and then secondly for desktop users.
I think we’re pretty much all agreed amongst the SEO community that UX is becoming so important now. Just last week, I was watching a webinar, a Webmaster Hangout, around the jobs schema. It was Maria, who is on Google’s Webmaster team, and one of her colleagues, talking people through using structured data and schema for the jobs listings in Google. Somebody asked a question about whether they understood when people had a bad experience on a website. She said, clearly, we don’t understand what happens after people leave search and go onto that website, but if we pick up signals from elsewhere about the quality of that site, we will take that into consideration. Which I thought was pretty interesting, obviously. It’s all the signals; it might be pogo-sticking back and forth. Not so much bounce, because obviously we know that bounce is a funny one people throw around: “Oh, you shouldn’t really have a high bounce rate.” But as we know, on informational pages, people sometimes will just read the page, get all the information they need and then leave. The point is, when you go back to search and start looking for other things that are the same, going onto other search results from the same query, that gives a signal that maybe you haven’t found what you needed. The key is that with mobile, it’s going to be more and more critical to ensure that when you do get visits, you give people a good experience, so that none of those signals that Google might be picking up from elsewhere about poor quality or a bad result, in that back-and-forth feedback, actually happen, because you’ve given them a good experience. Does that make sense?
Yeah, yeah. Google isn’t spying on your bounce rates because that would require them going into your Google Analytics.
Or being on your site.
They’re not doing that. They are adamant that they’re not spying on your Google Analytics data and using it against you in search, and I believe them. If they’re not looking at bounce rate, what they are looking at, as you said earlier, is pogo-sticking and dwell time. If people don’t spend much time on your website after clicking your listing and they bounce back to the search results and then do another search, or they click on other results in that same page of search results, that sends a signal to Google that you didn’t provide the answer, you didn’t solve that user’s problem. Those are user engagement signals that Google takes into account. Having a good user experience, solving the user’s problems, not making it difficult for them to extract the information because of the UX, that’s all important and feeds into your site’s SEO.
That’s harder on mobile. That’s the point, it’s harder. You have limited space to actually meet the informational need. There’s actually a really good mobile design guide out from Google, and it covers things like making sure that you understand the top tasks that your users want to complete, and that those are right at the very top of the main content. Things like that: make sure the navigation is filled with top tasks. Make it so that all the pages are accessible, so that’s where XML sitemaps come in, and HTML sitemaps, for both humans and robots. Obviously we have this issue with things like carousels and tabbed content, which Google have said they won’t weigh down on mobile as they currently do with desktop, because they’re aware that actually, from a UX perspective, people do want to have those drop-downs, they do want to have the concertina experience and the fly-out experiences, etc., because it’s just easier. But actually it’s harder from a dev perspective to build all those things, because they aren’t necessarily out of the box as such.
Right. On a desktop site, tabbed content, stuff that’s behind a tab or that you have to take some action to see, is still in the HTML, it’s still available, it’s just not displayed by default. That gets partially discounted by Google. But what you’re saying is that on mobile devices, Google’s not discounting it in the same way.
They know it’s involved in the UX.
Yeah. Because there’s such a small screen and you’ve got to figure out ways to be more efficient with that minimum amount of screen real estate you have to work with.
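A small sketch of the kind of tabbed content being discussed, with hypothetical markup: the copy behind the inactive tab is hidden by default but still present in the HTML source that Google fetches, which is the distinction that matters for how it gets weighted on desktop versus mobile.

```html
<!-- Hypothetical tabbed product content: both panels exist in the HTML source -->
<div class="tabs">
  <button data-tab="description">Description</button>
  <button data-tab="specs">Specifications</button>
</div>
<div id="description">Full product description, visible by default.</div>
<!-- Hidden until its tab is tapped, but still in the source Google fetches -->
<div id="specs" hidden>Materials, dimensions, and care instructions.</div>
```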
Absolutely. It’s more of a challenge, and I think it goes so much further than just ticking that mobile-friendly box. And also speed: fast is the only speed now, particularly on mobile devices. I think going into this next year, that’s just going to continue; we keep adding on more and more stuff, so speed needs to be a massive priority.
Let’s talk about AMP (Accelerated Mobile Pages). Do all of our listeners need to implement AMP on their websites? First of all, what is AMP and why should our listeners care?
AMP is, as you say, short for Accelerated Mobile Pages. It’s basically a collaboration between Google and, I think, about 36 other major tech organizations, a collaborative piece of work across CMS companies, analytics companies, publishers, and Google, etc., to solve the problem of mobile experiences being very, very slow whilst the pages are loading. Because of things like JavaScript, image slots, and different image sizes on the small screen, the rendering of the page, the loading of the whole page, is very much slowed down whilst the browser is waiting for everything to arrive. When people have a bad experience, they’ll start to use things like ad blockers. I know the search engines have said that’s not the issue, but from a commercial perspective, from a publisher’s perspective, when people are having a bad experience and they start blocking ads because they take that long to load, that’s obviously not going to help them financially. So they’ve come up with this solution, which is to provide something called Accelerated Mobile Pages: a much faster, alternative version of a webpage which is hosted on caches, little memory snapshots of the pages, and they get served locally to the user. Instead of somebody calling a webpage and it having to travel across oceans from a server on another continent or whatever, or from servers hundreds or thousands of miles away, it actually gets served from a cache, a copy which is local to them. It’s much, much faster. That’s it really; it’s really just small cached copies of web pages on what they call the AMP version. The end of the webpage URL will say [inaudible 00:47:20]/amp, and that gets served, currently, from Google’s AMP caches I think, but apparently anybody could build caches if they have the resources to develop them.
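A hedged sketch of how the regular page and its AMP version reference each other, with hypothetical URLs; a full AMP document also requires the AMP runtime script and boilerplate, which are omitted here for brevity.

```html
<!-- On the regular page, https://www.example.com/article/ (hypothetical) -->
<link rel="amphtml" href="https://www.example.com/article/amp/">

<!-- On the AMP version, https://www.example.com/article/amp/ -->
<link rel="canonical" href="https://www.example.com/article/">
```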
Right. Your AMP version of your webpage will be hosted by Google and it will be at a Google URL. Some publishers, some website owners are not real thrilled with that concept.
Yeah, I know. There have been a lot of issues with the whole proprietary-code type thing, it always being proprietary, etc., but at the same time, I’ve seen quite a lot of big case studies that illustrate that overall this has massively increased performance and massively increased conversion rates. I think I read somewhere that going forward, the Google URL situation will be changed, so actually then it won’t be the case. I know Google have said many, many times, I think it was [Paul 00:48:22] at the last major AMP conference, it was a round table discussion, and he was saying, “I know it looks like this is all just Google controlled, but actually it’s because we’re looking for people to come on board and join the dev team who are not from Google, and getting those people to volunteer is not that straightforward.” Maybe over time it will evolve and be less of a Google project as such and more of the collaborative project that it’s intended to be. I know that it doesn’t seem to be going away; I think more and more people are starting to accept it and embrace it. Some SEOs that I know personally were very, very anti-AMP to start with, but actually now they’re starting to implement it and seeing some really, really impressive case studies.
This is especially important if you have a slow-loading website; implementing AMP could be a solution for you to speed up your site.
That’s the carrot, but then there are sticks as well. For instance, if you’re not an AMP page, you’re not a candidate to appear in the top stories carousel. You can’t, literally, that’s it; it has to be an AMP page. Things like soft news, such as articles or guides, etc., are potentially candidates to appear in there, and if you’ve got something like a travel guide and you’re not AMP, you just can’t be in there, that’s it. So that’s the stick. The carrot is, obviously, you don’t necessarily get a ranking priority, but you’re faster, etc. The jury is still out on it, but I think the conclusion the jury is coming to amongst the SEO community, from what I see, is that resentment towards AMP is dropping and there’s more and more acceptance of it, especially as they’re starting to add more features and working hard towards making it so the pages don’t look quite so ugly, not quite so bare-bones. Obviously the WordPress plugin people, there’s a couple I can think of at least, I know there’s Yoast and the Yoast Glue plugin, and I think there’s another one as well called AMP for WordPress, that’s pretty good, that’s come a long way, and they’re doing a lot to make it much more attractive so that you have the same look and feel. Actually, there’s navigation included, which is important. For instance, if somebody lands on an AMP page, you want to make sure you have navigation that allows people to actually visit the rest of your website. There’s a lot happening to actually make that better.
Do you want to leave our listeners with one parting overall tidbit or bit of advice for technical SEO?
Yes. I would just say make sure that you focus completely and utterly on mobile and make sure that it’s a fast experience. Mobile, fast; mobile, fast; mobile first, fast. Off the top of my head, literally, just make sure that you’re ready for that, because I think there are going to be an awful lot of sites that think they’re mobile first but they’re not mobile first.
Yup.
And then when the indexing comes, it’s going to be a shock.
Yes, yes. Great advice. Thank you so much, thank you listeners. We’ll catch you on the next episode of Marketing Speak. This is your host Stephan Spencer signing off.
Important Links:
- Search Engine Land – Dawn Anderson
- Twitter – Dawn Anderson
- LinkedIn – Dawn Anderson
- Facebook – Dawn Anderson
- BrightonSEO
- Pubcon
- Crawl budget
Your Checklist of Actions to Take
☑ Check the capabilities of my server based on crawling politeness and make sure that it is able to handle simultaneous crawling requests.
☑ Don’t expect Google to spend crawling resources on all my web pages equally. Deprioritize pages that don’t get many impressions or visitors.
☑ Structure my XML sitemaps to emphasize which pages are the most important for Googlebot to crawl, rather than relying on the priority field, which Google largely ignores.
☑ Keep my XML sitemap details, such as last-modified dates, accurate so that Google will treat my sitemap as trustworthy.
☑ Set up separate, categorized XML sitemaps in Google Search Console. This will help me identify indexing issues in specific sections of my site.
☑ Transfer my website from HTTP to HTTPS. Update my XML sitemaps, robots.txt, internal links, and parameter settings after making this switch.
☑ Educate myself on every single tool inside Google Search Console to get better at gathering important data and spotting issues.
☑ Use a canonical link element (rel="canonical") on similar pages to tell Google which URL is authoritative.
☑ Don’t forget to use hreflang for websites that target two or more languages or countries. This ensures that the correct version appears to users in each territory.
☑ Be consistent in my SEO and ensure that all my links are working properly so that I don’t jeopardize my rankings. Audit at least once a month to keep my site healthy.
About Dawn Anderson
Dawn Anderson is an international SEO and digital marketing consultant for brands and startups alike. She’s also a lecturer and trainer in digital marketing and search marketing.