Static site hosting on Amazon S3 with SSL, http/2, and Amazon's Cloudfront CDN
I’ve been using this hosting setup for three years now to keep my blog fast, cheap, secure, and SEO-friendly, all on commodity cloud resources. This is the HOWTO.
I have to admit to being impressed with sites that load with preternatural quickness, and a bit judgey about technical influencers whose sites take forever to load. Sure, devs are not designers and backend people are generally not great at frontend optimization, but I still find it the littlest bit inexcusable. Add slow loading to not using https, broken links/images, or a site that isn’t mobile responsive, and well… I think that’s borderline sloppy.
For this setup, I use Jekyll, mostly because it’s been trouble-free and because its plugin system, asset pipeline, maturity, and the fact it uses ruby make my life easier. You can use any static site generator really, and I imagine I’ll be looking at Gatsby and Hugo for the 2019 Q3 version of the site, but you can use virtually the same setup on Amazon. I host this for mere pennies on S3 on AWS (~$1 USD a month), make it super fast with http/2, secure with https/ssl, use Route 53 for DNS, and use Amazon’s Cloudfront CDN for fast, cached, global delivery of content. You can get a similar effect with free Firebase Hosting on Google, which I keep meaning to try so I can compare, but as far as I know it gets pricey very quickly once you go above a certain ceiling of views.
The Setup
This HOWTO assumes you are comfortable on the command line, capable of text editing yaml and markdown files, can use git for version control, and can open (or have) an Amazon Web Services account, since we’re using S3, Route 53 for DNS, and the Cloudfront CDN.
The site currently uses jekyll 3.8.5 on ruby 2.6.2p47 with a small set of plugins that make life easier. This has been a very stable config on every combination and upgrade of Jekyll and ruby I’ve used it on to date.
I rolled a custom theme using the excellent and small Bulma css framework (though you can use this setup with any jekyll theme). Importantly, I’m using jekyll-assets 3.0.12 to handle optimizing css and javascript as well as fingerprinting images for cache-busting. You could throw an automatic image optimization step in here as well, but I personally use few images and generally use ImageOptim to smoosh things down before they go into posts, so automated image optimization felt like overkill; perhaps something for a future iteration.
If you’ve already got an existing jekyll setup, you should be able to add in some of the plugins and statically host the site on Amazon with all the benefits mentioned. I’ve even thrown in a rake file so that a simple rake deploy is as easy as a git push for getting your site generated, uploaded to Amazon S3, your CDN cache invalidated, and everything kept up to date.
To be blunt, unless you absolutely need a database and a server, always default to static sites. Static sites and a little bit of Amazon’s serverless Lambda framework (e.g. a serverless contact form) give you almost all the benefits of a dynamic site without the maintenance or security overheads. Bottom line: go static unless you can’t.
There are effectively three parts to setting this system up despite all the moving pieces: Jekyll itself, s3_website (which makes much of the magic happen), and the Amazon quartet of S3 (for storing and serving the site), Route 53 (for DNS), a security certificate, and Cloudfront (which gives you the extra speedy Content Delivery Network). Much of the setup magic is handled for you via the excellent s3_website gem (you can do everything it does manually, but quite frankly, why would you do that?).
Jekyll
If you don’t already have jekyll on your system, find a theme you like or want to modify and do the gem install jekyll that will get it onto your local dev machine (you, of course, need ruby installed; if you’re on a Mac, I highly recommend making sure you’re updated to the latest via homebrew and an easy brew install ruby rather than defaulting to the system ruby).
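On a Mac, the whole dance is roughly the following (a sketch, not gospel; you may need to put Homebrew’s ruby ahead of the system one in your PATH):
# Install a current ruby via Homebrew rather than relying on the system ruby
brew install ruby
# Then install jekyll (and bundler, which we use below) into that ruby
gem install jekyll bundler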
Starting new, go for a jekyll new awesomesauce to get your new site up and running. cd into the awesomesauce directory, git init ., do an initial commit, and jekyll serve in the terminal to make sure it spins up like it’s supposed to. Go visit http://127.0.0.1:4000 in your browser to see what you have wrought. Ctrl-C to kill the server and then use your favourite text editor to modify the default Gemfile to look like this:
source "https://rubygems.org"
ruby RUBY_VERSION
gem "jekyll", "3.8.5"
# This is the default theme for new Jekyll sites. You may change this to anything you like.
# Or remove it if you have rolled your own theme already.
gem "minima", "~> 2.0"
group :jekyll_plugins do
gem "jekyll-assets"
gem "jekyll-feed"
gem "jekyll-sitemap"
gem "jekyll-seo-tag"
end
# For jekyll-assets
gem 'sass'
gem 'uglifier'
Do your bundle install to get everything sorted and a proper Gemfile.lock in place. Git commit your changes.
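For completeness, that round trip looks something like this (the serve step is just a sanity check that the new plugins load cleanly):
bundle install
bundle exec jekyll serve    # confirm the site still builds with the plugins in place (Ctrl-C when happy)
git add Gemfile Gemfile.lock
git commit -m "Add asset pipeline and SEO plugins"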
The key plugin for me here is actually jekyll-assets, which takes care of optimizing your css and javascript for you and, being from ruby land, leans on sprockets to do the heavy lifting. This does mean you need to use its special magic {% css %} and {% js %} tags in your templates, and you need to keep your assets in the magic folder _assets in the top-level directory, but it takes care of all the asset-pipeline-y things that the likes of Hugo don’t do without grunt or other build tools, so it is definitely worth it. Asset pipelining (combining into one file, compressing, fingerprinting, optimizing etc.) is a major pain, and having something that does it for you makes your site much faster just by having it.
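For reference, this is the sort of layout jekyll-assets expects (the subfolder and file names here are my assumptions of a typical setup, not something the gem mandates):
mkdir -p _assets/css _assets/js _assets/images
# _assets/css/main.scss   -> pulled into your layout via the css tag mentioned above
# _assets/js/app.js       -> pulled in via the js tag
# images in _assets/images get fingerprinted filenames for cache-busting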
jekyll-feed, jekyll-sitemap, and jekyll-seo-tag are all included to optimize your site for search and ranking. I include them here for completeness and because they save you from doing some things manually in the code while giving you optimized site searchiness.
s3_website
This feels almost like a cheat code in terms of sorting your setup, but I’ve been using the spectacular s3_website gem from Lauri Lehmijoki to take care of much of the heavy lifting of getting an S3 site working on http/2, fronted by Cloudfront, and with a nice deploy and cache invalidation mechanism.
Getting the gem is a simple gem install s3_website. Do it.
s3_website has only a few commands you need to worry about; the important one is the one that gets everything set up.
In the root directory of your jekyll install (which, as you’ll recall, is awesomesauce), issue an s3_website cfg create. This will generate a nice yaml configuration file for you called s3_website.yml. Make sure you have this file in your .gitignore (especially if you are not using environment variables for your Amazon credentials, though environment variables are the much better idea). There are a bunch of options: which Amazon region to use, whether you want to use Cloudfront, and whether to take advantage of reduced redundancy storage (the answer is yes, since you’re just deploying static files you’re generating, everything is version controlled, and you will be using Cloudfront).
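If you go the environment variable route (do), keeping the yaml out of git and exporting the two variables the config below reads looks like this (the key values are placeholders):
# Keep the config (and any credentials in it) out of version control
echo "s3_website.yml" >> .gitignore
# Export the credentials the ERB tags in s3_website.yml read at deploy time
export S3_ID=AKIAXXXXXXXXXXXXXXXX
export S3_SECRET=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx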
Here’s how mine is set up, though it should be pretty simple to figure out how to alter this for your own setup and tastes. Note: I’ve set mine up in the Irish Amazon region eu-west-1.
s3_id: <%= ENV['S3_ID'] %>
s3_secret: <%= ENV['S3_SECRET'] %>
s3_bucket: blog.awesomesauce.com
cloudfront_distribution_id: <your-distribution-id>

index_document: index.html
error_document: 404.html

max_age:
  "assets/*": 6000
  "*": 300

gzip:
  - .html
  - .css
  - .js
  - .ico
  - .xml
  - .asc
  - .pdf
  - .md

# See http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region for valid endpoints
s3_endpoint: eu-west-1
s3_reduced_redundancy: true

cloudfront_distribution_config:
  default_cache_behavior:
    min_ttl: <%= 60 * 60 * 24 %>
  http_version: http2
  aliases:
    quantity: 1
    items:
      - blog.awesomesauce.com

cloudfront_invalidate_root: true
cloudfront_wildcard_invalidation: true
As you can see, s3_website sets up almost everything for you (including the Cloudfront distribution, which I personally appreciate, though I did it manually myself the first time).
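With the yaml in place, one command asks s3_website to create and configure the bucket and apply the settings above (if memory serves, there is also an --autocreate-cloudfront-dist flag for the very first run if you want it to create the Cloudfront distribution for you too):
# Creates the bucket if needed and applies the website + Cloudfront configuration
s3_website cfg apply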
S3 configuration
When you set this up, s3_website will create an S3 bucket named blog.awesomesauce.com.
Go to the S3 console in your web interface, and click on the second tab called “Properties”. One of the cards is called “Static website hosting”. Click the radio button that says “Use this bucket to host a website”. Leave the default index document and error document as index.html and 404.html. Note this will also mean setting your bucket permissions policy to “Public” (so don’t put anything in this bucket you aren’t comfortable with the world seeing).
You’ll also be given an “endpoint”, which is where your website lives on the big wide world web. In the example case above, this would be http://blog.awesomesauce.com.s3-website-eu-west-1.amazonaws.com (a mix of the unique bucket name you’ve given yourself and the s3 website endpoint for your region, in this case EU West).
From there, your site is basically ready. You could literally upload an index.html file to the bucket, open your browser to http://blog.awesomesauce.com.s3-website-eu-west-1.amazonaws.com, and you’d see the page.
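A quick way to sanity-check the endpoint without leaving the terminal (hostname as per the example above):
curl -I http://blog.awesomesauce.com.s3-website-eu-west-1.amazonaws.com
# Expect an HTTP/1.1 200 OK (or a 403/404 until the bucket has content and is public)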
Now let’s make it so you’ve got a nice, friendly domain URL the world can find, SSL and http/2 enabled, and caching through the CDN for even greater speed.
Route 53 Setup
Personally, I do not use Route 53 for all my DNS needs. I am a huge fan of dnsimple and have been using it for years to manage my core personal and business domains. So, assuming you have not already doubled down on Route 53 (in which case this section is extraneous for you), I’ll show you how to get Route 53 working alongside your current provider.
The key trick here is that, with your primary DNS provider, you are going to create nameserver (NS) records which point at Amazon’s nameservers once you create the Route 53 records. This allows you to delegate to Amazon as the authoritative lookup controlling where your site actually is. You can think of it this way: your authoritative DNS hands off to Amazon’s nameservers as the ones that have the final say on where exactly the jekyll site is on the internet.
With that, let’s roll up our sleeves a bit. While you can do all this stuff on the command line via Amazon’s tools, for the initial walkthrough it’s probably better to do this via the web interface (though considering how complex that has become, that’s debatable).
In the Amazon console, pull down the Services menu and, under the “Networking and Content Delivery” heading, pick Route 53, which is the Amazon DNS service (alternatively, you can simply use the Route 53 home shortcut).
Under the Dashboard, under the DNS Management heading, click on “Hosted Zones” (if you’re not hugely up on DNS, a hosted zone tells Route 53 how to respond to DNS queries for a domain such as awesomesauce.com).
Once you are in “Hosted Zones” you should see a button marked “Create Hosted Zone” at the top left and a list of your hosted zones. Hit that button and it will give you a domain name box, which is where you type in your awesomesauce.com. Make sure the Type remains “Public Hosted Zone” (as you are going to publicly expose the URL). Now hit the “Create” button at the bottom.
Amazon kindly takes you to the “Record Set” screen here, but behind the scenes it has created a unique identifier for your zone (generally a long alphanumeric string starting with “Z”) and has also kindly created four NS (nameserver) records, which tell the internet where to look when it is trying to find out about the domain “awesomesauce.com”, and an SOA (“Start of Authority”) record.
The first thing you need to do is hit the “Create Record Set” button. This should bring up a sidebar on the right of the screen. In the Name field, type in the subdomain you used for your S3 bucket (so, say, “blog.awesomesauce.com”). Leave the Type as an “A - IPv4” record, which should be the default.
The meat of how this works with Cloudfront is aliasing. Click the Alias radio button to move it from No to Yes. This should bring up an “Alias Target” field. Go through the options available in that field and, if you’ve done things correctly with the s3_website cfg step above, you should see under the “– Cloudfront distributions –” menu item a target which is the domain of the distribution you set up earlier (something like d6bf0rop76d6uf.cloudfront.net. as an example). If it’s not in there, take the distribution domain that was created earlier and paste it in. Set the “Routing Policy” to “Simple” (and for the last option, “Evaluate Target Health”, leave it as No, as Route 53 cannot evaluate the health of a Cloudfront distribution).
If all has gone well, you’ve now set up Amazon Route 53 to point at the Cloudfront Distribution you created with s3_website, backed by your S3 bucket. Now you just need to point your main DNS to Amazon as canonical for this domain (as well as create any URL aliases) and deploy the blog to S3.
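For the record, the same alias record can be created from the command line; a rough sketch of what that looks like (the hosted zone id placeholder and the Cloudfront domain are the example ones from above, while Z2FDTNDATAQYW2 is the fixed hosted zone id AWS uses for all Cloudfront alias targets):
aws route53 change-resource-record-sets \
  --hosted-zone-id ZXXXXXXXXXXXXX \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "blog.awesomesauce.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d6bf0rop76d6uf.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'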
Request an SSL certificate
All sites should run over https and ssl these days. To get https working via SSL (and http/2 enabled for your site, as it only runs over ssl), as well as to allow https access to the content through Cloudfront, you are going to need to request a security certificate. Amazon makes this painless through AWS Certificate Manager, and if you’ve already registered the site through Route 53, it’s nearly a one-click operation, though there are a tiny few gotchas so we’ll just do the walkthrough (one to watch: certificates used with Cloudfront still need to be requested in the US East (N. Virginia) region, us-east-1, regardless of where your bucket lives).
Navigate in the AWS console to the Certificate Manager under the “Security, Identity, and Compliance” menu item, or you can get to it directly here.
If you’ve never used ACM before, you’ll get the “Getting Started” screen; click on the “Get Started” button below the “Provision certificates” option on the left. This will start walking you through a provisioning wizard. Leave the default “Request a public certificate” radio button selected on the first screen and click “Request a certificate”.
This takes you to the meat screen. Under domain name, you are going to use a wildcard so that everything under your domain is secured. Type in (from our example) *.awesomesauce.com. Click Next. You will then be asked to choose a validation method; since you’ve already set up DNS on this domain, the easiest way to do this is to choose the DNS option.
Amazon will then ask you to create a CNAME record in your DNS (with a Name of the form _8de4d85d673f84819d7c5afa23721938.awesomesauce.com. and a Value like _4fe259023c3b014e9df7fa04e3395c08.ltfvzjuylp.acm-validations.aws.). There is a super handy button immediately below this to “Create record in Route 53”. If you click on that, the record will be created in your Route 53 DNS and should validate within 30 minutes.
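While you wait, you can check that the validation record is visible from the outside (the record name here is the example one above):
dig +short CNAME _8de4d85d673f84819d7c5afa23721938.awesomesauce.com
# Should echo back the acm-validations.aws. value once the record has propagated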
Voila. You are done with SSL and security certs.
Pointing your main domain to the Route 53 DNS
We’re almost done. Now let’s set up your main DNS to point at Route 53 as the nameserver for this domain (you can skip this step if you use Route 53 as your main DNS).
So, you will need to create four NS (nameserver) records for blog.awesomesauce.com and point each at one of the four unique nameservers Amazon listed in the hosted zone when you configured Route 53 above (hint: they’ll have names like “ns-3.awsdns-00.com.” etc.). Create these, wait a little bit, and your blog.awesomesauce.com URL should now be pointing at the Amazon Cloudfront distribution.
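dig is your friend for confirming the delegation took (hostnames from the running example):
dig +short NS blog.awesomesauce.com
# Should list the four ns-*.awsdns-* servers from your hosted zone
dig +short blog.awesomesauce.com
# Once things have propagated, this should resolve to Cloudfront edge IP addresses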
If you’re like me, you may want this site to be the main home for your material and have a few aliases pointing at this canonical URL. The easiest way to handle this is to create extra DNS “URL records” which point at blog.awesomesauce.com. In this case, you can have the naked domain awesomesauce.com, www.awesomesauce.com, and, say, me.awesomesauce.com all pointing at blog.awesomesauce.com, and all those URLs will now resolve to the Cloudfront distro you’ve created and aliased via Route 53 on Amazon.
Deploying
That’s it. You are now set up. You have configured your S3 buckets, they have a content delivery network to cache pages, Route 53 DNS can now route requests for those pages to Cloudfront, and your main DNS knows to delegate requests to Amazon to handle.
Now you just have to put some content into those buckets. s3_website makes this easy.
There is a small gotcha though. Perhaps the only irritating thing I find about using Amazon S3 and Jekyll together is that there is no stripping of the .html extension from the files that Jekyll creates. If you want really nice clean permalinks in the form of https://blog.awesomesauce.com/this-is-your-extensionless-link, it is actually a bit of work to get this sorted. Jekyll’s normal way to handle this on a regular server is to use an apache .htaccess file to rewrite the URLs. Amazon’s S3 static site hosting, though, does not support using an .htaccess file to handle this.
The way I sorted this was by writing a rake script (rake is ruby’s task runner) to strip all the .html extensions from the generated site, then upload those files to Amazon via the s3_website push command. The rake task is below; it not only strips and deploys for you, but also pings search services to let them know they should re-index your site as it has newer content (note: when stripping the .html from files, you need to exclude index.html and 404.html since Amazon treats those files specially in S3 static hosting; note also that the notify tasks below ping my own domain’s feed and sitemap, so swap in yours).
require 'active_support/inflector'
require 'date'
require 'rspec/core/rake_task'
require 'fileutils'

desc "Deploys to S3 via s3_website"
task :deploy do
  system "jekyll clean && jekyll build"
  Dir.glob('./_site/*.html').each do |f|
    unless File.basename(f) == "index.html" || File.basename(f) == "404.html"
      FileUtils.mv f, "#{File.dirname(f)}/#{File.basename(f,'.*')}"
    end
  end
  puts ".html stripped from pretty URLs and ready for upload."
  system("s3_website push")
  Rake::Task["notify"].invoke
end

# Usage: rake notify
task :notify => ["notify:pingomatic", "notify:google", "notify:bing"]

desc "Notify various services that the site has been updated"
namespace :notify do
  desc "Notify Ping-O-Matic"
  task :pingomatic do
    begin
      require 'xmlrpc/client'
      puts "* Notifying Ping-O-Matic that the site has updated"
      XMLRPC::Client.new('rpc.pingomatic.com', '/').call('weblogUpdates.extendedPing', 'daryl.wakatara.com', '//daryl.wakatara.com', '//daryl.wakatara.com/feed.xml')
    rescue LoadError
      puts "! Could not ping ping-o-matic, because XMLRPC::Client could not be found."
    end
  end

  desc "Notify Google of updated sitemap"
  task :google do
    begin
      require 'net/http'
      require 'uri'
      puts "* Notifying Google that the site has updated"
      Net::HTTP.get('www.google.com', '/webmasters/tools/ping?sitemap=' + URI.escape('//daryl.wakatara.com/sitemap.xml'))
    rescue LoadError
      puts "! Could not ping Google about our sitemap, because Net::HTTP or URI could not be found."
    end
  end

  desc "Notify Bing of updated sitemap"
  task :bing do
    begin
      require 'net/http'
      require 'uri'
      puts '* Notifying Bing that the site has updated'
      Net::HTTP.get('www.bing.com', '/webmaster/ping.aspx?siteMap=' + URI.escape('//daryl.wakatara.com/sitemap.xml'))
    rescue LoadError
      puts "! Could not ping Bing about our sitemap, because Net::HTTP or URI could not be found."
    end
  end
end
Place the rake script in the root of your Jekyll directory and simply rake deploy. The script will do a clean build of your Jekyll site, strip the .html from all but index.html and 404.html, and then upload it to your S3 bucket. On future deploys, it will also invalidate the Cloudfront cache for older content that has changed.
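After the first deploy, a quick header check shows whether http/2, gzip, and the CDN cache are doing their thing (assuming your curl was built with http/2 support):
curl -sI https://blog.awesomesauce.com | head -20
# Look for "HTTP/2 200", "content-encoding: gzip" on compressed types,
# and "x-cache: Hit from cloudfront" on a second request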
Anyhow, that’s it! It takes some setup, but the result is a superfast, modern, securely hosted, CDN-backed site that will definitely impress performance snobs like myself. Happy hosting!