Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Hide sitelinks from Google search results
-
Does anyone have any recommendations on how you can tell Google (hopefully via a URL) not to index a particular page of a website? I have tried hiding certain sitemaps through Yoast SEO (which has worked to a degree), but certain WordPress features expose links without them actually being part of a "sitemap", so those links are harder to hide.
I'm having an issue with one of my websites - the sitelinks that Google is showing are nowhere near the most popular pages, and I know you can no longer ask Google to demote certain sitelinks through Search Console.
Any suggestions are greatly appreciated! Thanks!
-
Yes, I tried the old Search Console option before I posted here, but sadly it just redirects you back to the new version. However, I hadn't even thought about the redirect option, and since the website is built on WordPress, that should be easy enough to set up.
Thanks so much!
-
Ah. So, then I might try one of the following:
- My preferred approach would be to set up a 301 redirect from that URL to a valid new URL (see the sketch after this list). That way, you make the best use of the traffic coming from the sitelink for however long it remains there. After a while, I suspect Google will either update the sitelink title and description with those of the redirect target, or eventually drop that sitelink in favor of another page.
- If you can't do the above (maybe you are not able to set up redirects from the old URL), then I might use the old version of Search Console to request removal of the old URL (Google Index > Remove URLs). If the URL really does return a proper 404 response code, this should work; a removal request doesn't do the job on its own if the URL still returns a valid response code, but a 404 plus a removal should get rid of it. That said, you are then rolling the dice on whatever Google decides to promote as a replacement sitelink, so I would prefer the first approach if I thought I could make good use of the traffic coming from that link.
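Here is a minimal sketch of the first option as it might look on a WordPress site, for example dropped into a small mu-plugin. The /old-page.php and /new-page/ paths are hypothetical placeholders - swap in the stale sitelink URL and whichever page on the rebuilt site best replaces it. A redirect plugin or a one-line rule in .htaccess would do the same job if you'd rather not touch code.

```php
<?php
/*
 * Minimal sketch of option 1, assuming a WordPress site (for example as a
 * small mu-plugin). The /old-page.php path and the /new-page/ destination are
 * hypothetical placeholders - swap in the stale URL Google shows as a
 * sitelink and the page on the rebuilt site that best replaces it.
 */
add_action( 'template_redirect', function () {
	$old_path = '/old-page.php';          // stale URL from the old static site (placeholder)
	$new_url  = home_url( '/new-page/' ); // best matching page on the new site (placeholder)

	$request = isset( $_SERVER['REQUEST_URI'] ) ? (string) $_SERVER['REQUEST_URI'] : '';
	$request = strtok( $request, '?' );   // ignore any query string

	if ( untrailingslashit( $request ) === untrailingslashit( $old_path ) ) {
		wp_safe_redirect( $new_url, 301 ); // permanent redirect so Google can update the sitelink
		exit;
	}
} );
```

Since the redirect is permanent (301), Google should eventually associate that sitelink with the new URL rather than the 404.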
-
Hi There,
Thanks so much for your reply. The trick with this is that the page showing as a sitelink is not even part of any of the website's sitemaps. We rebuilt the website for the client about 3 months ago - we went from a static website to WordPress - and for some unknown reason Google is still remembering a .php link from the old website. That file is nowhere on our FTP, so if you click the link it returns a 404 error.
The other disadvantage is that the old website was never SEO'd and never had proper page titles, so users mistake that sitelink for the new website's link, land on a 404 page, and assume the website isn't working.
Have I explained a bit better? Does that change your suggestion? Thanks!
-
For an HTML page, you would include the following line in the HEAD section of the page: <meta name="robots" content="noindex">
But from your question, I am unclear whether you are maybe trying to noindex the sitemap itself. If that is the case - if you want to tell Google not to index an XML file (rather than an HTML page) - in theory you could send an X-Robots-Tag: noindex HTTP header with the sitemap file (Google how to do that for your server setup). But there is probably no need to do that.
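For completeness, a rough sketch of that header idea on a WordPress site might look like the snippet below. Whether it actually applies to a plugin-generated sitemap depends on how that plugin serves the file, so treat it as illustrative only - the filename pattern is an assumption.

```php
<?php
/*
 * Rough, illustrative sketch only: send an X-Robots-Tag: noindex header when
 * an XML sitemap is requested, so crawlers don't index the sitemap file
 * itself. The filename pattern below is an assumption - adjust it to match
 * however your sitemap is actually named and served.
 */
add_action( 'send_headers', function () {
	$request = isset( $_SERVER['REQUEST_URI'] ) ? (string) $_SERVER['REQUEST_URI'] : '';
	$path    = strtok( $request, '?' ); // drop any query string

	if ( preg_match( '#sitemap[^/]*\.xml$#i', $path ) ) {
		header( 'X-Robots-Tag: noindex', true );
	}
} );
```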
Related Questions
-
Dynamic Canonical Tag for Search Results Filtering Page
Hi everyone, I run a website in the travel industry where most users land on a location page (e.g. domain.com/product/location) before performing a search by selecting dates and times. This then takes them to a pre-filtered dynamic search results page with options for their selected location on a separate URL (e.g. /book/results). The /book/results page can only be accessed on our website by performing a search, and URLs with search parameters from this page have never been indexed in the past.
We work with some large partners who use our booking engine and who have recently started linking to these pre-filtered search results pages. This is not being done on a large scale, and at present we only have a couple of hundred of these search results pages indexed. I could easily add a noindex or self-referencing canonical tag to the /book/results page to remove them; however, it's been suggested that adding a dynamic canonical tag to our pre-filtered results pages pointing to the location page (based on the location information in the query string) could be beneficial for the SEO of our location pages. This makes sense, as the partner websites that link to our /book/results page are very high authority and any way that this authority could be passed to our location pages (which are our most important in terms of rankings) sounds good. However, I have a couple of concerns:
- Is using a dynamic canonical tag in this way considered spammy / manipulative?
- Whilst all the content that appears on the pre-filtered /book/results page is present on the static location page where the search initiates (and which the canonical tag would point to), it is presented differently and there is a lot more content on the static location page that isn't present on the /book/results page. Is this likely to see the canonical tag being ignored / link equity not being passed as hoped, and are there greater risks that I should be worried about?
I can't find many examples of other sites where this has been implemented, but the closest would probably be booking.com. https://www.booking.com/searchresults.it.html?label=gen173nr-1FCAEoggI46AdIM1gEaFCIAQGYARS4ARfIAQzYAQHoAQH4AQuIAgGoAgO4ArajrpcGwAIB0gIkYmUxYjNlZWMtYWQzMi00NWJmLTk5NTItNzY1MzljZTVhOTk02AIG4AIB&sid=d4030ebf4f04bb7ddcb2b04d1bade521&dest_id=-2601889&dest_type=city& Canonical points to https://www.booking.com/city/gb/london.it.html In our scenario, however, there is a greater difference between the content on the two pages (and booking.com has a load of search results pages indexed, which is not what we're looking for). Would be great to get any feedback on this before I rule it out. Thanks!
Technical SEO | | GAnalytics1 -
Google image search filter tabs and how to rank on them
I have noticed that Google image search has added suggestion tabs (e.g. design, nature... when searching "background") at the top of the image search results.
Are there specific meta tags I can add to my images so that they will show up on each tab?
Do those filters just show content based on image keywords, or something else?
Technical SEO | | Mike555 -
301 Redirects, Sitemaps and Indexing - How to hide redirected URLs from search engines?
We have several pages on our site, like this one, http://www.spectralink.com/solutions, which redirect to a deeper page, http://www.spectralink.com/solutions/work-smarter-not-harder. Both URLs are listed in the sitemap and both pages are being indexed. Should we remove those redirecting pages from the sitemap? Should we prevent the redirecting URL from being indexed? If so, what's the best way to do that?
Technical SEO | | HeroDesignStudio0 -
Abnormally high internal link reported in Google Search Console not matching Moz reports
If I'm looking at our internal link counts and structure in Google Search Console, some pages are listed as having over a thousand internal links within our site. I've read that having too many internal links on a page dilutes the PageRank that page can pass, because the value is divided amongst the pages it links out to. Likewise, I've heard that having too many internal links is just bad for SEO in general. Is that true?
The problem I'm facing is determining how Google is "discovering" these internal links. If I take one single page reported with, say, 1,350 links and just look at the code, it may only have 80 or 90 actual links. Moz confirms this as well. So why would Google Search Console report differently? Should I be concerned about this?
Technical SEO | | Closetstogo0 -
How to avoid instead suggestion from Google search results ?
Hi, When I search for "Zotey" in Google, the following message is displayed: "Showing results for zotye - Search instead for zotey". Can anyone let me know how to get rid of this conflict asap? Regards, Sivakumar.
Technical SEO | | segistics -
Why are Google search results different if you are logged into Google or not?
I get different results when I'm logged into the Google account associated with my website than when I'm not, even though the country is the same in both cases. So how can I rely on the Google results I'm seeing? For instance, my site is on page 1 - with the improvements I made based on SEOmoz - if I'm logged in, yet I'm not in the first 25 pages if I'm not logged in.
Technical SEO | | Romana0 -
NoIndex/NoFollow pages showing up when doing a Google search using "Site:" parameter
We recently launched a beta version of our new website on a subdomain of our existing site. The existing site is www.fonts.com, with the beta living at new.fonts.com. We do not want Google to crawl the new site until it's out of beta, so we have added the following on all pages: <meta name="robots" content="noindex, nofollow" />
However, one of our team members noticed that Google is displaying results from new.fonts.com when doing a "site:new.fonts.com" search (see attached screenshot). Is it possible that Google is indexing the content despite the noindex, nofollow tags? We have double-checked the syntax and it seems correct, except for the trailing "/". I know Google still crawls noindexed pages; however, the fact that they're showing up in search results using the site: search syntax is unsettling. Any thoughts would be appreciated!
Technical SEO | | ChrisRoberts-MTI0 -
UK website ranking higher in Google.com than Google.co.uk
Hi, I have a UK website which was formerly ranked 1st in Google.co.uk and Google.com for my keyword phrase and has recently slipped to 6th in Google.co.uk, but it is higher, in position 4, in Google.com. I have conducted a little research and can't say for certain, but I wonder if it is possible that too many of my backlinks are US-based and therefore Google thinks my website is also US-based. I checked Google Webmaster Tools and we are geo-targeted to the UK. Our server is also UK-based. Does anyone have an opinion on this? Thanks
Technical SEO | | tdsnet0