If you see a message in Google Search Console that a page has been “Crawled – currently not indexed,” it means that Google’s crawler (Googlebot) has successfully fetched the page, but Google has chosen not to include it in its index yet. Here are a few steps you can take to try to fix this issue:
- Check for crawl errors: The old Crawl Errors report has been retired; use the Page indexing report and the URL Inspection tool in Search Console to see whether Googlebot hit any errors on the page. If so, fix the errors and submit the page for re-indexing. (A quick reachability check you can script yourself is sketched after this list.)
- Check for noindex directives: Check the source code of the page for a “noindex” meta tag in the head section, and also check the HTTP response for an X-Robots-Tag header; both tell search engines not to index the page. If you find one, remove it and submit the page for re-indexing (see the second sketch after this list).
- Check for a robots.txt block: Check the robots.txt file to see if the page is blocked from being crawled. Strictly speaking, a “Crawled” status means Googlebot did fetch the page, but it is still worth confirming the rules are what you expect (see the robots.txt sketch after this list). If the page is blocked, remove the rule and submit the page for re-indexing.
- Check for canonicalization issues: Make sure the same content is not reachable at multiple URLs (for example, with and without “www,” or with tracking parameters) unless a rel="canonical" link points to the preferred version; if Google selects a different URL as canonical, this one may be left out of the index.
- Check for content duplication: Make sure the page’s content is substantially original and not duplicated from other pages on your site or copied from other sites. (The canonical/duplication sketch after this list shows one way to compare URL variants.)
- Request indexing via the URL Inspection tool: the old Fetch as Google feature has been replaced by the URL Inspection tool; inspect the page there and click “Request Indexing.” (A sketch that reads the same inspection data through the API follows the list.)
- If none of the above solutions works, be aware that the URL removal tool in Search Console only removes pages from Google’s index, so it will not help here. The remaining levers are improving the page’s content quality, adding internal links that point to it, and then requesting indexing again; sometimes Google simply takes time to index a page it considers low priority.
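The snippets below are minimal, hedged sketches of the checks above, written in Python; all URLs are placeholders (example.com) that you would replace with your own page. First, a basic reachability check using the requests library, which reports the final URL after any redirects and the HTTP status code, roughly approximating what a crawler sees:

```python
import requests

# Placeholder URL: replace with the page flagged in Search Console.
URL = "https://example.com/some-page"

# Fetch the page roughly the way a crawler would, sending Googlebot's
# user-agent string so any server-side bot handling is exercised too.
resp = requests.get(
    URL,
    headers={
        "User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; "
                      "+http://www.google.com/bot.html)"
    },
    timeout=10,
    allow_redirects=True,
)

print("Final URL:", resp.url)             # reveals redirect chains
print("Status code:", resp.status_code)   # anything other than 200 needs a look
```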
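Next, a sketch of the noindex check. It looks for both delivery mechanisms of the directive: the X-Robots-Tag response header and robots/googlebot meta tags in the HTML (parsed here with BeautifulSoup, an assumed third-party dependency):

```python
import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

URL = "https://example.com/some-page"  # placeholder

resp = requests.get(URL, timeout=10)

# noindex can arrive via an HTTP header...
header = resp.headers.get("X-Robots-Tag", "")
if "noindex" in header.lower():
    print("Blocked by X-Robots-Tag header:", header)

# ...or via a meta tag in the document head.
soup = BeautifulSoup(resp.text, "html.parser")
for tag in soup.find_all("meta", attrs={"name": ["robots", "googlebot"]}):
    content = (tag.get("content") or "").lower()
    if "noindex" in content:
        print("Blocked by meta tag:", tag)
```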
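For the robots.txt check, Python’s standard-library urllib.robotparser can fetch the live file and answer whether a given user-agent is allowed to crawl a given URL:

```python
from urllib.robotparser import RobotFileParser

SITE = "https://example.com"   # placeholder site
PAGE = SITE + "/some-page"     # placeholder page

parser = RobotFileParser(SITE + "/robots.txt")
parser.read()  # fetches and parses the live robots.txt

# can_fetch answers: may this user-agent crawl this URL?
print("Googlebot allowed:", parser.can_fetch("Googlebot", PAGE))
```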
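For canonicalization and duplication, one rough approach is to fetch the URL variants you suspect serve the same content, read each page’s declared rel="canonical" target, and fingerprint the visible text so near-identical pages stand out. The variant list here is purely illustrative:

```python
import hashlib

import requests
from bs4 import BeautifulSoup


def canonical_of(url):
    """Return the rel="canonical" target declared by the page, if any."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    link = soup.find("link", rel="canonical")
    return link.get("href") if link else None


def text_fingerprint(url):
    """Hash the visible text so near-identical pages are easy to spot."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    text = " ".join(soup.get_text().split())
    return hashlib.sha256(text.encode()).hexdigest()


# Placeholder URL variants that might all serve the same content.
variants = [
    "https://example.com/some-page",
    "https://www.example.com/some-page",
    "https://example.com/some-page?ref=nav",
]

# Matching fingerprints with differing (or missing) canonicals
# is the pattern that tends to cause indexing trouble.
for url in variants:
    print(url, "->", canonical_of(url), text_fingerprint(url)[:12])
```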
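Finally, if you want to read the same index status the URL Inspection tool shows, the Search Console API exposes a urlInspection.index.inspect method. The sketch below assumes google-api-python-client and google-auth are installed, and that a service account (the key file name is a placeholder) has been added as a user on the verified property. Note the API only reports status; “Request Indexing” itself remains a manual action in the Search Console UI.

```python
# pip install google-api-python-client google-auth
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Placeholder key file for a service account granted access
# to the verified Search Console property.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=creds)

result = service.urlInspection().index().inspect(body={
    "inspectionUrl": "https://example.com/some-page",  # placeholder page
    "siteUrl": "https://example.com/",                 # the verified property
}).execute()

status = result["inspectionResult"]["indexStatusResult"]
print("Verdict:", status.get("verdict"))         # e.g. PASS / NEUTRAL
print("Coverage:", status.get("coverageState"))  # e.g. "Crawled - currently not indexed"
```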