Do you really want to index the cart, checkout, my-account, login, logout, search-filter, and long-URL pages? These are the pages usually reported under this status in Google Search Console. In this post I will explain how you can fix "Excluded by noindex tag".
"Excluded by noindex tag" means that the reported pages are blocked from being indexed by Google (or any other search engine). Such blocking is normally intentional and is done through a <meta name="robots" content="noindex"> tag in the <head> section of the page, or through an X-Robots-Tag HTTP response header. (A robots.txt Disallow rule, by contrast, blocks crawling rather than indexing, and those pages appear under the separate "Blocked by robots.txt" status.)
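For reference, the noindex directive is usually added in one of these two ways (illustrative snippets):

```html
<!-- Option 1: a robots meta tag inside the page's <head> section -->
<meta name="robots" content="noindex">

<!-- Option 2 (useful for non-HTML files such as PDFs): an HTTP response
     header sent by the server instead of a tag, e.g.
     X-Robots-Tag: noindex -->
```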
In general, you should look through the noindex report to filter out the important pages. If any main (canonical) URL is listed in the report, you should fix it as soon as possible.
Before we start, let's look at example cases in which you should fix the issue or leave it as it is.
Fix it if it is an unnatural link created by the web developer to achieve some functionality. Such links get reported in Google Search Console and are normally referenced from within the website. You can set up nofollow on them, because these links exist only to provide functionality, not content.
https://example.com/wp-json/ssa/v1/embed-inner?integration&type&types&edit&view&ssa_locale=en_US&sid=c817cc61959e39ae0a90ee6b15f04871d556c0d7&availability_start_date&availability_end_date&accent_color&background&padding&font&booking_url=https://3m3creations.com/&booking_post_id=11&booking_title=home&_wpnonce=4dd5bb670c
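A nofollow hint on such a functional link looks like this (the href and anchor text below are placeholders for illustration, not taken from the URL above):

```html
<a href="/wp-json/ssa/v1/embed-inner?view=booking" rel="nofollow">Open booking widget</a>
```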
Leave it as it is if it is a search page, because a search page accepts arbitrary input and can quickly generate an unbounded number of URLs; spammers can also enter any word into it and use the resulting pages to spam your website:
- https://example.com/?s={search_term_string}
- https://example.com/search/{search_term_string}/
- https://www.example.com/search/search_term_string/
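If your internal search URLs are not already blocked, rules like the following in robots.txt keep crawlers away from them (a sketch; adjust the Disallow paths to match your own search URL pattern):

```
User-agent: *
Disallow: /?s=
Disallow: /search/
```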
Once you understand why these pages are reported, and if you decide you do want them to be crawled and indexed, you can remove the blocking line or update the content of the <meta name="robots"> tag. In most cases, though, this report contains URLs that are simply not important.
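To double-check whether a given page still carries a noindex directive after your change, you can scan its HTML for a robots meta tag. A minimal sketch using only the Python standard library (the sample HTML string is hypothetical):

```python
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collects the content of every <meta name="robots"> tag in a page."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if (a.get("name") or "").lower() == "robots":
                self.directives.append(a.get("content") or "")


def has_noindex(html: str) -> bool:
    """Return True if any robots meta tag in the HTML contains 'noindex'."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in d.lower() for d in parser.directives)


# Hypothetical sample page carrying a noindex directive
page = '<html><head><meta name="robots" content="noindex, follow"></head><body></body></html>'
print(has_noindex(page))  # True
```

Remember that this only checks the meta tag; a page can also be noindexed through the X-Robots-Tag response header, which you would need to inspect separately.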