Intro to SEO: Six things marketers should know


Although there is no formal definition of SEO, I’m particularly fond of this one from Search Engine Journal:

“SEO is the process of optimizing a website – as well as all the content on that website – so it will appear in prominent positions in the organic results of search engines. SEO requires an understanding of how search engines work, what people search for, and why and how people search. Successful SEO makes a site appealing to users and search engines. It is a combination of technical and marketing.”

Search engine optimisation is usually broken down into several aspects or disciplines, each focusing on a particular area or activity of a website. In what follows, I’ll go through each of these main disciplines, offering a brief explanation of what they cover and why they are important. Bear in mind, though, that there is significant crossover between the disciplines. Ultimately, every aspect of SEO is pulling in the same direction - towards improving the organic performance of a website - and every competent SEO strategy should incorporate elements from each of these areas.


Technical SEO

“Technical SEO” is the umbrella term for any optimisation activity that does not involve content. The main objective of technical SEO is to ensure that a website is visible to search engines for crawling and indexing. This sounds simple enough on paper, but there are hundreds of ranking signals (factors that impact a website’s performance on search engines) to optimise from a technical perspective. The main areas of focus tend to be as follows:


URL Structure & Internal Linking

In SEO we are always trying to balance the requirements of search engines with the intent of users. As search engines (Google especially) have become more sophisticated over the years, this balance has become easier to strike - increasingly, the requirements for ranking highly in SERPs (Search Engine Results Pages) line up with the requirements of an actual user.

Logical, human-readable URLs (web addresses) are an example of a practice that helps both search engines and users to understand your website better. They improve user experience by giving a clear indication of the destination page, while also enabling Google to build an understanding of a site’s content and structure. It does this through processes such as Latent Semantic Indexing and, more recently, RankBrain, an AI-based algorithm that forms part of the search engine. Even though URLs are only a minor ranking factor when search engines determine a page’s relevance to a query, including a keyword can help improve site visibility (though the URL must still reflect the page’s content and topical focus).
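
As a purely illustrative example (example.com is a placeholder domain), compare a human-readable URL with a machine-generated one:

  Readable:    https://www.example.com/mens-shoes/running-trainers/
  Unreadable:  https://www.example.com/index.php?cat=47&prod=1093&ref=nav

Both a user and a search engine can infer the topic of the first URL before the page even loads; the second gives away nothing about the page’s content.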

Effective internal linking across a site is also essential for search engines and human users alike. Search engine spiders rely on internal links to find all of the pages on a site. Pages that aren’t reachable via internal links (“orphaned pages”) won’t appear in the SERPs: if a spider cannot find a page it cannot index it, and an unindexed page cannot rank.

Ideally, there should be as few internal links (clicks) as possible between the homepage and every other page on the site. An uninterrupted chain of links enables link equity to flow through a site more effectively, increasing every page’s potential to rank. The internal linking structure should be supported by a logical URL structure, as mentioned above, to give Google stronger semantic signals and to benefit user experience through breadcrumbs (markers that show where the current page sits in the site’s overall structure).
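
As a simple illustration (the page names and paths are hypothetical), a breadcrumb trail is just a short chain of internal links that mirrors the site and URL structure:

  <nav class="breadcrumb">
    <a href="/">Home</a> &gt;
    <a href="/mens-shoes/">Men's Shoes</a> &gt;
    <a href="/mens-shoes/running-trainers/">Running Trainers</a>
  </nav>

Each link passes equity back up to its parent category while showing the user exactly where they are on the site.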


Crawl Efficiency & Duplication

Google has finite resources, so it must decide how much of a site to crawl and how often to crawl it. This has become more of a focus for SEO on large sites in recent years. In most cases, you only want Google to index and surface pages which have a specific organic function (that is, pages which you specifically want users to find via search engines), such as product pages, service pages and articles.

In order to help Google find the pages you want indexed (and equally to prevent it from crawling pages you don’t want it to find), we can take advantage of both the robots.txt protocol and XML sitemaps.

The robots.txt protocol is a standard used by websites to communicate with web crawlers, informing them which areas of the site should not be processed or scanned. Although a robots.txt file’s directives are only ‘advisory’, all modern search engines will respect the file (which should be placed at the root of a domain). Generally, the types of site sections or pages you would want to block are payment or transaction pages; internal search results pages, which can cause index bloat because they are dynamic; and, in certain cases, static asset folders - but not CSS or JS files, since Google changed its crawling guidelines around this in 2015 and now needs them to render pages properly.
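
A minimal robots.txt sketch along these lines might look like the following (the paths are hypothetical and would need to match your own site’s structure):

  # Applies to all crawlers
  User-agent: *
  # Block crawling of transactional and internal search result pages
  Disallow: /checkout/
  Disallow: /basket/
  Disallow: /search/

  # Point crawlers at the XML sitemap (discussed below)
  Sitemap: https://www.example.com/sitemap.xml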

XML sitemaps help search engines discover the content you want indexed without relying solely on your internal linking structure. A sitemap should be referenced in your robots.txt file to ensure discovery and can also be placed at the root of your domain. There is a 50,000 URL limit per sitemap, but effectively no limit to the number of sitemaps you can use. Many sites break their pages up into a set of specific sitemaps - by country, site category or product type, or for static assets like images and video, to name a few.
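
A stripped-down sitemap, again using a placeholder domain and date, looks like this:

  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
      <loc>https://www.example.com/mens-shoes/running-trainers/</loc>
      <lastmod>2018-05-01</lastmod>
    </url>
    <!-- one <url> entry per page you want indexed, up to 50,000 per file -->
  </urlset>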

In the eyes of a search engine, every unique URL is its own page to be crawled and indexed. As more dynamic capabilities have been introduced to the web to improve things such as user experience, product discovery and tracking accuracy, the potential for duplicate pages has increased significantly. Google and other search engines introduced the canonical URL tag in 2009 to deal with this issue directly. Canonical tags are used to signal to search engines that they should only index one version of any number of identical (or very similar) pages - the tag is placed in the page’s head and indicates which URL you want indexed as the original page.

Canonical tags can be very useful when dealing with parameter URLs (any URL containing a question mark followed by key-value pairs) or faceted navigation (which is common on eCommerce sites): without a canonical, the various versions of a page can compete with each other in the SERPs and dilute the rankings of the correct page. It is now also recommended that all pages canonicalise to themselves (a self-referencing canonical) as standard, to help reinforce which version should be in the index.
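
For instance, a filtered or sorted version of a category page (the URLs below are hypothetical) would carry a canonical tag in its head pointing back at the clean version:

  <!-- On https://www.example.com/mens-shoes/?sort=price-asc -->
  <link rel="canonical" href="https://www.example.com/mens-shoes/" />

The clean page itself would carry the same tag pointing at its own URL - the self-referencing canonical mentioned above.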


Site Speed Optimisation

Ever since Google announced in 2010 that page speed was a ranking factor, SEOs have been looking to take advantage of any performance gains that can be implemented. Slow-loading pages do not just ruin the user experience; they also waste crawl budget, reducing crawl efficiency and the number of pages Google will see on your site. They have been shown to produce higher bounce rates and lower conversion rates across both desktop and mobile, which is one of the reasons Google announced that from mid-2018 page speed will also be a ranking factor in mobile SERPs. Some things that can be done to increase page speed include the following (a server configuration sketch follows the list):

  • Enable compression of your CSS, HTML & JS
  • Minify your code by removing spaces, comments and extraneous characters
  • Remove JavaScript that blocks the rendering of the page
  • Improve server response times by reducing bottlenecks
  • Make use of a content delivery network (CDN)
  • Optimise your images through compression and the use of CSS sprites
  • Leverage browser caching, which improves perceived load times for returning users
  • Reduce redirect chains - ideally a redirect should involve only a single hop
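
As a rough illustration of the compression and browser-caching points above, here is how they might be switched on in an nginx server block (directives and values are indicative only; Apache and other servers have their own equivalents):

  # Compress text-based responses before sending them
  gzip on;
  gzip_types text/css application/javascript application/json image/svg+xml;

  # Tell browsers they can cache static assets for 30 days
  location ~* \.(css|js|png|jpg|svg|woff2)$ {
      expires 30d;
      add_header Cache-Control "public";
  }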


Schema Markup & Structured Data

Emerging from a collaboration between the major search engines (Google, Bing, Yahoo & Yandex), Schema is a semantic vocabulary of tags that can be added to a page’s HTML to improve the way search engines read and present your page in the SERPs. These tags can enhance the rich snippets displayed under a page’s title and provide another opportunity to give the search engines more information about the page or business.

Facebook also uses the Open Graph protocol, another type of markup that enables it to parse out information such as the page title, description and image and add the page to its open graph when it is shared.
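
Open Graph tags sit in the page head as meta elements; a minimal set (all values here are placeholders) looks like this:

  <meta property="og:title" content="Page title as it should appear when shared" />
  <meta property="og:description" content="A short summary of the page" />
  <meta property="og:image" content="https://www.example.com/images/share-image.jpg" />
  <meta property="og:url" content="https://www.example.com/example-page/" />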

Typical uses for schema tags would be to provide information about:

  • An organisation
  • A brand
  • A place
  • An event
  • A product
  • Page elements such as an article, nav and other div elements
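
Google’s preferred format for adding this markup is JSON-LD, placed in a script element in the page head. A minimal organisation example (all values are placeholders) might look like this:

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Company",
    "url": "https://www.example.com/",
    "logo": "https://www.example.com/images/logo.png",
    "sameAs": [
      "https://www.facebook.com/examplecompany",
      "https://twitter.com/examplecompany"
    ]
  }
  </script>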


Although there has been no conclusive proof that schemas are a ranking signal, they are worth implementing in order to provide search engines with as much information as possible about your company and website content. It is also expected that search engines like Google will become more and more reliant on schemas to gather semantic data about pages. This is driven not only by the ever-increasing expense of crawling and rendering websites (which is slow and resource-intensive), but also by the need for machine-ingestible formats that can be served to voice assistants and similar devices. I touched on this in my previous post.


On-page / Content

This part of the SEO puzzle seeks to improve page performance by optimising on-page elements. This involves aspects of UX, copywriting and topic/keyword research.


Keyword Research & Targeting

Keywords are the fundamental building blocks of your content. In order to rank for your desired keyword terms, it is essential to be strategic in your approach to keyword selection and targeting - otherwise a lot of effort can be wasted.

It is best practice to choose thematic keywords. These are keywords that relate directly to a particular subject and follow a common theme.

This is useful when building content silos or areas of expertise on a site: the keywords always remain relevant to the topic, while allowing a level of flexibility in the approach to the content. Structuring content this way lets users move from top-level pages right down to very detailed or granular information, and it is also how Google looks to understand a website through its content structure.
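
As a hypothetical sketch of a content silo built around a single theme, with each level targeting progressively more specific, related keywords:

  /running/                           <- top-level hub page ("running")
  /running/shoes/                     <- category page targeting "running shoes"
  /running/shoes/trail/               <- sub-category targeting "trail running shoes"
  /running/shoes/trail/buying-guide/  <- granular article answering detailed queries

Internal links up and down the silo tie the theme together for both users and search engines.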


Off Page

This aspect of SEO is concerned primarily with inbound links from other websites (known as backlinks). Natural links from authoritative and relevant websites act as an independent ‘vote of confidence’, helping search engines to trust your website.

Other aspects of off-page SEO include implicit signals from other areas of the web. This covers online PR, social media, and brand awareness activity both online and offline. Various methodologies or strategies can be employed to improve these signals.

Backlink Profile

This is the full set of inbound links pointing at a website and their originating sources. The composition of this profile is incredibly important and, if balanced correctly, works as a strong foundation for any site. The mix of backlinks should come from a variety of sources, such as news sites, educational sites and other businesses.

Earning Backlinks

Good content that satisfies searcher intent will attract links naturally, but this can be infrequent and should not be relied upon as the sole means of increasing a website’s backlinks. This is why content marketing has become such a large component of digital marketing in recent years: companies need to drive inbound links through campaign activity that builds up their backlink profile.


Where to go next

SEO is a fascinating discipline, requiring an ever-increasing blend of marketing skill and technical acumen to gain visibility. Getting to grips with the foundational elements above is a great place to start. If you’re looking to understand search further, the various Google algorithm updates are a great next stop. It’s also worth reading Google’s own take on the subject.
