It is not admitted very often, but it is true: some of the top SEO professionals are web developers who decided to make a slight change in their career path. That background matters, as several of the thought leaders in this field come from web development.
I am not going to name any of the web developers now working in SEO, simply because I might forget someone; those of you in the industry probably already have a few in mind.
The biggest issue for web developers is not a lack of technical background; we can take that for granted. Their biggest weakness, and at the same time their biggest opportunity (in SWOT terms), is understanding why and how SEO is used.
A web developer who can handle the technical side of SEO and also place it in a business context can be in high demand in the SEO job market.
Without further ado, we are going to briefly examine a few areas of SEO that are associated with web development. This post does not cover the basics; instead, it concisely covers the tasks that relate to web development. If you are a web developer, you might well want to read it.
Google Webmaster Tools
Several web development tasks are associated with Google (or Bing) Webmaster Tools, including:
• Website Verification (only in the event we cannot verify it through Google Analytics).
• Implementation of a sitemap.xml. Why does it matter?
- It speeds up the indexation process, which is why the points below should concern us.
- Frequency and priorities: web developers need to know when and why to assign a different priority to each page.
- Multilingual sites and the XML sitemap: the use of hreflang annotations and geotargeting does matter.
- Implementation of structured data using the Data Highlighter or the Structured Data Mark-up Helper.
• Fixing Crawl Errors → 301 Redirections
- Crawl errors can be a signal of an abandoned site, so developers should fix them immediately.
- The optimal way to fix this problem is a 301 (permanent) redirection, as PageRank will continue to flow.
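To make the sitemap points above concrete, here is a minimal sketch of a sitemap.xml for a hypothetical bilingual site (example.com and the paths are placeholders), showing `<changefreq>`/`<priority>` values and hreflang annotations for the language variants:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>http://www.example.com/</loc>
    <changefreq>daily</changefreq>
    <priority>1.0</priority>
    <!-- hreflang: point spiders to each language/region variant of this page -->
    <xhtml:link rel="alternate" hreflang="en" href="http://www.example.com/"/>
    <xhtml:link rel="alternate" hreflang="el" href="http://www.example.com/el/"/>
  </url>
  <url>
    <loc>http://www.example.com/blog/</loc>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```

The priority values here are illustrative: the homepage gets the highest value, while deeper pages get progressively lower ones.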
Use of structured data
• Why should structured data matter for web developers?
- Data and information will make more sense to search engine spiders and by extension to users.
- Increased level of visibility to search engines.
- It’s more likely that the CTR will be improved.
- Better CTR, better visibility and data that make more sense all make an increased ROI more likely.
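As an illustrative sketch of the kind of mark-up involved, here is a minimal structured data snippet using schema.org's Article type in JSON-LD (the headline, author name, date and URL are all placeholders); similar mark-up can also be generated with the Structured Data Mark-up Helper:

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Article",
  "headline": "SEO Tasks for Web Developers",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "datePublished": "2014-05-01",
  "image": "http://www.example.com/images/cover.jpg"
}
</script>
```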
Basic HTML elements
The proper use of meta titles, meta descriptions and heading tags can have an important impact on onsite SEO, so the fundamental rules of SEO should be followed to keep onsite SEO at a good level. Knowing when and why to use the rel="nofollow" attribute when a site links to external sites is also important, as it determines whether any PageRank will flow to those sites (if they are credible).
The proper use of URLs also helps in the process of onsite optimisation, and it should therefore be seen as a supportive but essential element of optimisation.
Pro tip 1: Every time a new URL is added to the site, the XML sitemap should be updated.
Pro tip 2: Rather than waiting until search engine spiders crawl the page with the new (or updated) URL, why not use Fetch as Google from Google Webmaster Tools? After submitting the URL, it may take from a few minutes to a few hours for it to be indexed.
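Putting the basic elements together, a well-optimised page might look like the following sketch (all names and values are placeholders):

```html
<head>
  <!-- Meta title: unique per page, roughly 50-60 characters -->
  <title>Blue Widgets | Example Store</title>
  <!-- Meta description: a concise summary, often shown in the search snippet -->
  <meta name="description" content="Hand-made blue widgets, shipped worldwide.">
</head>
<body>
  <!-- One h1 per page, with h2/h3 tags for the sub-structure -->
  <h1>Blue Widgets</h1>
  <!-- rel="nofollow" stops PageRank from flowing to the external site -->
  <a href="http://external-site.example" rel="nofollow">External link</a>
</body>
```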
Proper use of robots.txt and meta robots
If we want to control the behaviour of search engine spiders, exclude certain directories or images, protect the private areas of a site (e.g. wp-admin) from being crawled, or disallow any malicious bots, then web developers should strongly consider the use of a robots.txt file.
Another great way of handling certain pages is the use of meta robots: e.g. a noindex value prevents search engine spiders from indexing a given page, while nofollow tells them not to follow its links.
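As a sketch of both mechanisms, a robots.txt for a hypothetical WordPress site might exclude the admin area site-wide (the paths below are placeholders):

```text
# robots.txt -- placed at the site root
User-agent: *
Disallow: /wp-admin/
Disallow: /private-images/

# Point spiders to the XML sitemap
Sitemap: http://www.example.com/sitemap.xml
```

while an individual page can be handled with a meta robots tag in its head:

```html
<!-- Keep this page out of the index and don't follow its links -->
<meta name="robots" content="noindex, nofollow">
```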
Duplicate Content Issues/Canonicalisation
Most developers have probably heard the term duplicate content. It arises when a page uses the same, or almost the same, content as other pages. Search engine robots can perceive this as a signal of low quality, so it is necessary to consider the various options for resolving the issue (e.g. canonicalisation, a 301 redirection, or the use of noindex/nofollow), but it is really crucial to weigh the pros and cons before doing so.
There are also cases of a duplicate homepage, for example when the same homepage is reachable under several URL variants (with and without www, or with a trailing index file). Such cases can again be resolved via canonicalisation or 301 redirections, with the latter being the optimal solution.
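As a sketch of the two fixes, the canonical tag goes in the head of each duplicate variant, while the 301 redirection is set up at server level (the Apache rewrite below assumes an .htaccess file on a site that prefers the www host; example.com is a placeholder):

```html
<!-- On every duplicate homepage variant, point to the preferred URL -->
<link rel="canonical" href="http://www.example.com/">
```

```apache
# .htaccess: 301-redirect the non-www host to the www version
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```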
Extra tip: For those of you who are developers, you can use this table, which depicts the issues and solutions regarding canonicalisation, the use of meta robots (noindex and nofollow), the rel="nofollow" attribute, and 301/302 redirections.
When handling a large e-commerce site or a news site, pagination can seem challenging for an SEO professional or a web developer, but once you break down the options and weigh their pros and cons, it is not that confusing. When coming across pagination issues, think of the potential solutions:
• View-All page and rel="canonical"
• Remove your paginated content from the index
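For the first option, each paginated page declares the View-All page as its canonical version; for the second, the paginated pages carry a noindex. A sketch, with placeholder URLs:

```html
<!-- Option 1: on /category?page=2, point to the View-All version -->
<link rel="canonical" href="http://www.example.com/category/view-all">

<!-- Option 2: keep paginated pages out of the index, but let spiders follow their links -->
<meta name="robots" content="noindex, follow">
```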
In a nutshell
Everything mentioned above is just an indicative (by no means exhaustive) list of SEO actions that are associated with web development. Some of the issues we did not get into include the use of AJAX, noscript tags, image optimisation, link depth and navigation architecture. These topics will be discussed in a future post, as they deserve their own space.
If you have questions or would like to contribute on web development and SEO, please feel free to do so in the comments section.