The SEO Checklist

Written by Bahaa Zidan

Search engines like Google, Ecosia, or Brave use bots to crawl the internet looking for new pages to index or old pages to update. These bots, or web crawlers, do three things:

1. Discover URLs by following links and reading sitemaps.
2. Fetch and render the pages they find.
3. Hand the page content to the search engine's index so it can be ranked and served in results.

Inside your <head> tags

Here’s the content for the head tag of this page:

<!-- Global Metadata -->
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width,initial-scale=1" />
<!-- Fav Icon -->
<link rel="icon" type="image/svg+xml" href="/favicon.svg" />
<!-- A shout-out to the static site generator I'm using: Astro -->
<meta name="generator" content={Astro.generator} />
 
<!-- Canonical URL -->
<link rel="canonical" href={canonicalURL} />
 
<!-- Primary Meta Tags -->
<title>{title}</title>
<meta name="title" content={title} />
<meta name="description" content={description} />
 
<!-- Open Graph / Facebook -->
<meta property="og:type" content="website" />
<meta property="og:url" content={Astro.url} />
<meta property="og:title" content={title} />
<meta property="og:description" content={description} />
<meta property="og:image" content={new URL(image, Astro.url)} />
 
<!-- Twitter -->
<meta property="twitter:card" content="summary_large_image" />
<meta property="twitter:url" content={Astro.url} />
<meta property="twitter:title" content={title} />
<meta property="twitter:description" content={description} />
<meta property="twitter:image" content={new URL(image, Astro.url)} />

Sitemap

To ensure web crawlers index every page on your website regardless of its navigation structure, you need a sitemap.xml file. This file lists every page on your site and describes each one. The sitemaps protocol documentation explains the structure of the XML file if you want to write it yourself. In practice, your static site generator or web framework will have a way to generate a sitemap.xml file for you. I used Astro to build this blog, and there's a first-party package called @astrojs/sitemap.
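
If you do want to write one by hand, a minimal sitemap.xml looks like this (the date is a placeholder):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://gebna.gg/</loc>
    <lastmod>2025-01-01</lastmod>
  </url>
</urlset>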

Note that the file doesn't need to be called sitemap.xml. @astrojs/sitemap, for instance, generates a sitemap-index.xml that links out to the actual sitemap files.
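
Wiring the integration up takes a couple of lines in the Astro config. A minimal sketch, using this blog's URL as the site value:

// astro.config.mjs
import { defineConfig } from "astro/config";
import sitemap from "@astrojs/sitemap";

export default defineConfig({
  // `site` must be set; the integration uses it to build absolute URLs
  site: "https://gebna.gg",
  integrations: [sitemap()],
});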

Once you've generated a sitemap file, you need a robots.txt file at the root of your website. This file does two things:

1. Tells crawlers which parts of your site they are (or aren't) allowed to crawl.
2. Points them to your sitemap.

Example robots.txt content:

User-agent: *
Allow: /
 
Sitemap: https://gebna.gg/sitemap-index.xml

This allows all user agents (every crawler) to access every page and tells them where to find the sitemap.
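
With Astro, serving this file is just a matter of dropping it in the public/ directory; anything there is copied to the site root as-is:

public/
  favicon.svg    → https://gebna.gg/favicon.svg
  robots.txt     → https://gebna.gg/robots.txt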

Important Tools

Once you've got the basics figured out, you need to pay attention to two tools:

1. Google Search Console, to monitor how Google crawls and indexes your site and to submit your sitemap.
2. Bing Webmaster Tools, which does the same for Bing and the engines that build on its index.

The work is never done

Beyond the checklist lies a never-ending journey: building infrastructure that is fast and reliable, staying on top of your metrics, and, most importantly, creating content that is good enough to be shared and linked across the web.