
2016-10-18
Some Features I Like About Gitlab

What is Gitlab

An alternative to Github with many more features and an increasingly better offering

Free Private Git Repositories

One of Github’s price points is the limited number of private repositories it offers. You want more, you pay more. Then Bitbucket came along and offered free private repositories, but limited the number of people you could share them with, which made for some weird accounting at the end of the day. Gitlab has changed all that: it allows an unlimited number of private repositories (🎉)

Free Private Docker Image Repository

That’s right, FREE. PRIVATE. DOCKER. REPO. Here again, Docker Hub gives you just one free private repo. At Gitlab you get an image repository per project, and it’s not that hard to set up at all. I toyed around with Amazon’s Docker Image Repository, but it was a huge pain.

To get started with the Gitlab Image Repository, you can set it up like this:

> docker login registry.gitlab.com
> docker build -t registry.gitlab.com/USERNAME/PROJECT .
> docker push registry.gitlab.com/USERNAME/PROJECT

Free Test Runners

I will admit, figuring out how to configure these was a bit of a challenge, but once I got it rolling it’s nice to have my tests run automatically. Get this: I can have it automatically run my contract tests. It will set up a test Mongo database for me, run npm install, then npm test, and BANG! The tests are run and it emails me the results. I still have to figure out how to deploy automatically; for now I have a script that does that for me.
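For reference, here is roughly what that kind of setup looks like in a .gitlab-ci.yml (a minimal sketch; the Node image tag, Mongo service version, and MONGO_URL variable are illustrative assumptions, not my exact config):

image: node:6

services:
  - mongo:3.2

variables:
  # Hypothetical connection string; `mongo` is the hostname GitLab CI gives the service above
  MONGO_URL: "mongodb://mongo:27017/test"

test:
  script:
    - npm install
    - npm test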

Conclusion

Gitlab is growing by leaps and bounds. The interface could use a little TLC since it’s a little cluttered in some areas, but overall it’s a great product. From what I can tell, they have a model that fits almost any situation: free online, paid online, host your own community edition, host your own enterprise edition. It looks like they have you covered.

Read More

2016-10-03
Create a local HTTPS proxy server

In recent months I’ve been working to add Apple Pay for Web to a major clothing retailer. One of the requirements for Apple Pay for Web is that the connection must be over HTTPS. Most of the time when I’m developing locally, I do not use HTTPS. Local, meaning the application code is running on my laptop. In most cases, HTTPS is just run in staging and production environments and not handled directly by your app code.

So this posed a problem. I didn’t want to develop directly on a server that provides an HTTPS connection; that’s a pain. Also, if you’re not familiar with HTTPS servers, you need a valid certificate and more. This solution requires a few steps to get working, but once it does, it works nicely.

TL;DR

  1. Create a fake SSL certificate (e.g. www.fake-example.com)
  2. Create a docker container that uses Nginx to proxy localhost:443 => 172.16.123.1:3000
  3. Override your local /etc/hosts file
  4. Start your app on localhost:3000
  5. Hit your app in the browser at https://www.fake-example.com

Creating a fake SSL certificate

It’s most likely not a good idea to use your production SSL certificate and key when doing local development, so you’ll want to create a fake version.

It’s entirely possible to purchase a legitimate SSL certificate and use it, but that’s not always practical, and it’s free to make your own.

Update this code and run it on a Mac or Linux machine

sudo openssl req -x509 -sha256 -newkey rsa:2048 -keyout cert.key -out cert.pem -days 1024 -nodes -subj '/CN=www.fake-example.com'

You now have files called cert.key and cert.pem for a certificate that is valid for the next 1024 days (just under three years).

Caveat: Your browser will not like this certificate, but since you trust it, you can tell the browser to accept it and stop complaining.

Save those files; we’ll use them in just a minute.

Create a docker container

Now we want to create a docker container that will use our certificate and proxy all requests to our app running on port 3000. We will use Docker Compose to configure the container.

Create a directory

First, you’ll want to create a directory where you can hold the docker compose yaml file, the nginx configuration, and the ssl certificates. Mine looks like this:

.
├── docker-compose.yml
├── proxy.conf
└── ssl
    ├── cert.key
    └── cert.pem

The Docker Compose file

The docker-compose.yml file is really pretty short. Here it is.

version: '2'

services:
  nginx:
    image: nginx:stable-alpine
    volumes:
      - ./proxy.conf:/etc/nginx/conf.d/default.conf
      - ./ssl:/etc/nginx/ssl
    ports:
      - 443:443
    extra_hosts:
      - "dockerhost:${LOCAL_IP}"

As you can see:

  • We pull down the nginx image, using the small Alpine Linux version.
  • Then we mount some volumes inside the container: the proxy.conf file and the ssl folder that holds our certs.
  • Next, the configuration tells the container to listen on port 443 and forward internally to port 443.
  • What is the extra_hosts configuration?
    • Glad you asked; let’s cover that in the next section.

The localhost alias

The problem we encounter here is allowing the container to contact the host machine. Since Nginx is running inside the Docker container and your app is running outside of the docker container, locally on port 3000, the container needs to know how to reach the host machine.

This is not as easy as it would appear.

There are several solutions to this. On a Mac, the best way I’ve found is to create an alias to localhost. Simply running this command will create the alias:

sudo ifconfig lo0 alias 172.16.123.1

On Windows, I’m not sure, sorry.

Back to extra_hosts

Notice that I have a reference to ${LOCAL_IP}? Well, I have an environment variable that is just LOCAL_IP=172.16.123.1. In the docker-compose.yml file we assign that IP to the hostname dockerhost.

DANGER! However tempting, do NOT name your environment variable DOCKER_HOST; you’ll break your docker setup. The reader can investigate why or ping me for the answer.

Docker Compose files can utilize environment variables, nifty. Moving on.
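For example (just a sketch; the variable could also live in a .env file if your Compose version supports it), export it in the same shell before you bring the proxy up:

# The alias IP we created above, exposed to docker-compose as LOCAL_IP
export LOCAL_IP=172.16.123.1
docker-compose up -d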

The Nginx configuration

Now let’s take a look at our proxy.conf file.

server {
    listen 443 ssl;
    server_name localhost;

    ssl on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_certificate ssl/cert.pem;
    ssl_certificate_key ssl/cert.key;

    location / {
        proxy_pass http://dockerhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-HTTPS 'True';
    }
}

In here you’ll notice a few items of interest:

  • We specify our cert and key as living in the ssl/* folder we mounted in our docker-compose.yml file
  • We proxy all traffic to dockerhost:3000, which will resolve to localhost:3000 on your machine.
  • The rest is just standard configuration for setting up Nginx as a proxy.

Start your docker container

Inside your folder that has the docker-compose.yml file, all you need to do is run docker-compose up -d. That will start your HTTPS proxy and put it in the background.

Check if it started successfully by running docker-compose ps

       Name                   Command            State              Ports
------------------------------------------------------------------------------------
ssltesting_nginx_1    nginx -g daemon off;    Up       0.0.0.0:443->443/tcp, 80/tcp

Override your local /etc/hosts file

Now we need to tell our computer that www.fake-example.com is not something it should ask DNS about; it can get its answer right from /etc/hosts. Just so you understand, when your browser needs to resolve a domain name to an address, it will first look in /etc/hosts, then ask DNS.

So we just need to add this to our /etc/hosts file.

127.0.0.1   www.fake-example.com

This is pretty easy. If you are going to be flipping this on and off, I would suggest an app like Gas Mask or HostBuddy.

Start your application on port 3000

This is something you’ll have to figure out yourself.
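If you don’t have an app handy and just want something for the proxy to talk to, a throwaway Node server will do (purely an illustrative sketch, not part of the original setup):

// server.js: a bare-bones app listening on port 3000
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from behind the Nginx HTTPS proxy\n');
}).listen(3000, () => console.log('Listening on http://localhost:3000'));

Run it with node server.js.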

Hit your application in the browser

Now it’s time to open the application in the browser and test the whole thing. Go ahead and open up https://www.fake-example.com in the browser. If we’ve done everything right, we should see a warning

What the?!

[Screenshot: Chrome certificate warning]

This is a good thing. Chrome doesn’t trust our self-signed certificate, so we just tell Chrome to trust it by clicking Advanced > Proceed to www.fake-example.com (unsafe)

If you get a 502 Bad Gateway it means that Nginx cannot reach your app on port 3000. Most likely because there is a problem with your container talking to your host. Remember I said it’s not as easy as it seems :(
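If that happens, a quick way to see which hop is failing (hypothetical debugging commands, not part of the original setup) is to curl each layer; the -k flag tells curl to accept the self-signed certificate:

curl -i http://localhost:3000           # is the app itself answering?
curl -ik https://www.fake-example.com   # can you get through the Nginx proxy?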

Conclusion

So now you have a local HTTPS server that imitates a production one. It’s now possible to do any type of development locally in your app that might require an HTTPS connection, like Apple Pay for the Web.

If your company needs help implementing Apple Pay for the Web, please contact me. I contract out my services and can have your site accepting Apple Pay payments in short order.

Read More

2016-08-26
Reach Docker Host From Container

This is the best way I’ve found to allow a container to contact the host machine.

sudo ifconfig lo0 alias 172.16.123.1

Now you can use the IP 172.16.123.1 to contact your local host machine. Might want to store that in an environment variable.
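For example (an illustrative sketch), export it and hand it to containers as a host alias, much like the extra_hosts trick in the HTTPS proxy post above:

export LOCAL_IP=172.16.123.1
# --add-host writes a "dockerhost" entry into the container's /etc/hosts
docker run --rm --add-host "dockerhost:${LOCAL_IP}" alpine cat /etc/hosts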

Note: I had written a much longer, more in-depth post, but a few unfortunate keystrokes in Vim obliterated much of it…

Read More

2016-08-21
A Handy Publish Script for Static Sites Using CloudFront

#!/bin/bash

CURR_COMMIT=$(git rev-parse HEAD);
CURR_VERSION=$(node -e "console.log(require('./package.json').version);");
VER_HASH=$(git rev-list -n 1 v$CURR_VERSION);

# Don't want to redo version bump
if [ $CURR_COMMIT == $VER_HASH ]
then
  echo 'Already up to date'
  exit
fi

npm version patch;

NEW_VERSION=$(node -e "console.log(require('./package.json').version);");

echo $NEW_VERSION;

git push origin head;

npm run buildprod

aws s3 sync ./public s3://www.example.com --size-only --delete;

# Invalidate cache
aws cloudfront create-invalidation \
  --distribution-id YOUR_DISTRIBUTION_ID \
  --paths "/*";

What Does It Do?

In a nutshell: this script will bump the current version of the project, build your static site, upload it to an AWS S3 bucket, then tell Cloudfront to invalidate all the files in the specified Cloudfront distribution.

What You’ll Need

To run this you’ll need a few things in place

  1. You need the AWS CLI. You can install it with Homebrew (brew install awscli), npm (npm install -g aws-cli), or just from their site https://aws.amazon.com/cli/
  2. After you have the AWS CLI installed, you need to configure it. This means putting a ~/.aws/credentials file in place with your creds (a minimal example follows this list). Read More
  3. You will need git installed.
  4. For this script we use npm.
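The credentials file is a small INI-style file along these lines (the key values below are placeholders, not real credentials):

[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx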

Let’s Walk Through Each Part

CURR_COMMIT=$(git rev-parse HEAD);
CURR_VERSION=$(node -e "console.log(require('./package.json').version);");
VER_HASH=$(git rev-list -n 1 v$CURR_VERSION);

Here we are gathering a few details. We want to know the latest commit hash. Then we pull the version from the package.json file and use that to look up the commit hash of the corresponding tag. When we use npm version patch, it creates a semver tag like v1.3.1.

# Don't want to redo version bump
if [ $CURR_COMMIT == $VER_HASH ]
then
  echo 'Already up to date'
  exit
fi

Here we check whether the tag hash is different from the current HEAD hash. Basically, we don’t want to create another commit if nothing new has been committed. You need to actually change code and make a new commit before this script will continue past this point.

npm version patch;

NEW_VERSION=$(node -e "console.log(require('./package.json').version);");

echo $NEW_VERSION;

git push origin head;

Here we tell NPM to bump the patch version up one. It will change the version in the package.json file, make a commit, then create a git tag with that version. After we bump the version, we retrieve it from package.json like we did earlier and echo the new version out. Then we push this new commit up to our git server (github, gitlab, bitbucket, etc.); depending on your git config you may also need git push --follow-tags so the new tag goes up too.

npm run buildprod

aws s3 sync ./public s3://www.example.com --size-only --delete;

Here we build our static site files. In this case I’m using npm to fire off a webpack configuration to build my files. Then we use the AWS CLI to sync our files to S3. We tell it to delete any files on the server that do not exist locally and to use only file size as the comparison indicator.

# Invalidate cache
aws cloudfront create-invalidation \
  --distribution-id YOUR_DISTRIBUTION_ID \
  --paths "/*";

After we’ve finished uploading our files to S3, we tell Cloudfront to invalidate all of them. Since this is a wildcard, we only get dinged for one cache invalidation from Cloudfront rather than the hundreds or thousands of files on our site (that could get expensive, since Cloudfront starts charging for invalidations past 1,000 per month).

Summary

This script makes it really easy for me to just fire and forget, and it has a little built-in safety check so I’m not bumping the version with no new code to show for it. I’ve gotten in the habit of creating a publish.sh for many of my projects, since I’m not always sure how each one is deployed if I only deploy occasionally. Happy shipping.

Read More

2016-04-26
A Better Hack to Get Docker and VPN to Play Nice

If you’ve found this article, then you’ve banged your head against the problem of being on a restrictive VPN and using Docker at the same time. The culprit is usually Cisco AnyConnect or Junos Pulse.

The Problem

You use Docker for development. For various reasons you need to connect to a VPN, but as soon as you do, Docker stops working. There are many solutions out there, some work, others do not. The bottom line is there is no elegant solution and this solution here is not elegant, but it will work. What’s happening? Yeah, when you connect, AnyConnect blunders in, overwrites all your computer’s routes to send them through the VPN tunnel. Luckily, it doesn’t route localhost (127.0.0.1) to the tunnel. This is our backdoor to hack ourselves in.

The Setup

My current setup involves using Docker Machine to create a Parallels VM. I’m on a Mac; Windows/Linux YMMV. VirtualBox should work just fine; VMWare, can’t really say. And some really restrictive VPN that doesn’t allow split traffic, like Cisco AnyConnect or Junos Pulse.

The Hack

You’ll want to set up your Docker Machine first and get your env set up with eval $(docker-machine env). Once you have your docker machine up, you’ll want to set up a Port Forwarding rule in Parallels. Go to Preferences > Networking, then add a new rule like this:

[Screenshot: Parallels Port Forwarding Rule ("default" is the name of my VM)]

You need to start a container that will forward the HTTP port for docker to localhost. Just run this command

$(docker run sequenceiq/socat)

You can find out more about what this is doing at https://hub.docker.com/r/sequenceiq/socat/

Now on the command line, you need to update your ENVIRONMENT VARIABLES to use this new localhost incantation. We’ll be changing the DOCKER_HOST and DOCKER_TLS_VERIFY. We set DOCKER_HOST to your localhost version. Then we need to disable TLS verification with DOCKER_TLS_VERIFY.

export DOCKER_HOST=tcp://127.0.0.1:2375 && \
export DOCKER_TLS_VERIFY="" && \
export DOCKER_CERT_PATH=""

Now you can connect to your restrictive VPN and still run docker ps.

Caveats

  1. You should have your VM up and running and have Docker-Machine env set in your terminal
  2. Any ports you want to connect to on your docker will need to be port forwarded with Parallels.

Notes

  1. This will all be obsolete when Docker for Mac is released to the general public. Can’t wait? Sign up for their private beta
Read More

2016-04-11
A Hack to Get Docker Working While on VPN

See the improved version

If you’ve found this article, then you’ve banged your head against the problem of being on a restrictive VPN and using Docker at the same time. The culprit is usually Cisco AnyConnect or Junos Pulse.

The Problem

You use Docker for development. For various reasons you need to connect to a VPN, but as soon as you do, Docker stops working. There are many solutions out there, some work, others do not. The bottom line is there is no elegant solution and this solution here is not elegant, but it will work. What’s happening? Yeah, when you connect, AnyConnect blunders in, overwrites all your computer’s routes to send them through the VPN tunnel. Luckily, it doesn’t route localhost (127.0.0.1) to the tunnel. This is our backdoor to hack ourselves in.

The Setup

My current setup involves using Docker Machine to create a Parallels VM. I’m on a Mac; Windows/Linux YMMV. VirtualBox should work just fine; VMWare, can’t really say. And some really restrictive VPN that doesn’t allow split traffic, like Cisco AnyConnect or Junos Pulse.

The Hack

You’ll want to set up your Docker Machine first and get your env set up with eval $(docker-machine env). Once you have your docker machine up, you’ll want to set up a Port Forwarding rule in Parallels. Go to Preferences > Networking, then add a new rule like this:

[Screenshot: Parallels Port Forwarding Rule ("default" is the name of my VM)]

Now on the command line, you need to update your ENVIRONMENT VARIABLES to use this new localhost incantation. We’ll be changing the DOCKER_HOST and DOCKER_TLS_VERIFY. We set DOCKER_HOST to your localhost version. Then we need to disable TLS verification with DOCKER_TLS_VERIFY.

export DOCKER_HOST=tcp://127.0.0.1:2376
export DOCKER_TLS_VERIFY=""

Now you can connect to your restrictive VPN and still run docker --tlsverify=false ps.


This is not an elegant solution, but it will work until I figure out something more robust.

Caveats

  1. You should have your VM up and running and have Docker-Machine env set in your terminal
  2. You’ll get numerous warnings from docker-compose, annoying, but they are just warnings.
  3. You have to include --tlsverify=false with every Docker command e.g. docker --tlsverify=false ps

Notes

  1. Please keep in mind, companies implement restrictive VPNs because it would be easy for a hacked or maliciously configured computer to give outsiders access to the VPN. Forcing all traffic through the VPN makes that kind of hole much harder to exploit.
  2. I’ve tried going the route of re-adding routes (pun intended) to the Mac’s routing table to redirect the IP the Parallels VM is on back to the Parallels interface, but didn’t get anywhere with that.
  3. A better solution would be to include 127.0.0.1 in the SSL cert that Docker Machine creates for the VM; then you wouldn’t have issues when connecting via 127.0.0.1
Read More

2016-03-31
Preventing unwanted git email leaking

Maybe you work on different git projects, some for business and some at home for personal use. In the past, as a convenience, I set my ~/.gitconfig to include my name and email like the following.

[user]
    name = "Shane A. Stillwell"
    email = "shanestillwell+spam@gmail.com"

This is exactly what git wants you to do when it detects you haven’t set your name or email

*** Please tell me who you are.

Run

  git config --global user.email "you@example.com"
  git config --global user.name "Your Name"

to set your account's default identity.
Omit --global to set the identity only in this repository.

So what’s the big deal?

Now when you are working on a project for work, or for different clients, it will use the global name / email. But shoot, you didn’t mean to make that commit with your personal email; it was supposed to be your work email. Bummer, it’s now a permanent part of the git log.

The solution

Very simple. Open up your ~/.gitconfig and change your email to none. (I assume your name doesn’t change between projects)

[user]
    name = "Shane A. Stillwell"
    email = "(none)"

Now in each project, before you can commit, you’ll be prompted like above; just remember NOT to use the --global flag, e.g. git config user.email "shanestillwell+spam@gmail.com". Now each git repo will have a correctly set email address and you’re less likely to leak personal emails into business projects.

Read More

2016-02-14
Using free SSL and Cloudfront for an Angular React site

This is the holy grail, the culmination of knowing there is something out there better, stronger, faster. In this blog post, I’ll outline how to set up Amazon S3, Cloudfront, and Route53 to serve your static React, Angular, or any site over HTTPS. Get ready, it’s about to get real.

What am I even talking about?

The new en vogue way to build sites is to use React or Angular on the front end. You compile the site into a few files, like index.html, scripts.js, and styles.css, using Webpack. Then you take those files and upload them to some basic web server (no need for Node.js, Rails, or another dynamic scripting language server). I’ve used https://surge.sh/, which is really easy and customizable, but you’ll need to pay $13 a month for a custom SSL site. For hobby sites, that adds up.

Since Google will start pointing out non-HTTPS sites, it’s probably a good idea to get all your sites secure, even the static ones. Most places charge for SSL, but Amazon is offering free HTTPS certs for use with its services. This means you can use free certs for Cloudfront and Elastic Load Balancers.

Cloudfront just serves up files in your Amazon S3 bucket. The last piece is using Route53 to point your domain to Cloudfront, but that’s not really required.

                    +----------+      +------------+      +--------+
                    |  Route   |      |            |      |        |
www.example.com +-->|    53    +----->| Cloudfront +----->|   S3   |
                    |          |      |            |      |        |
                    |          |      |            |      |        |
                    +----------+      +------------+      +--------+

Setting up the site on Amazon S3

Get Amazon S3 to act as our web server

The first step involves setting up an Amazon S3 bucket to hold our files. It’s a good idea to name the bucket after your site, e.g. www.example.com. You’ll need to set the properties for the bucket.

Permissions :: Bucket Policy
You’ll need to set these custom bucket policy permissions. Basically, anyone can get any object, and anyone can list the bucket contents.

{
    "Version": "2012-10-17",
    "Id": "Policy1455417919361",
    "Statement": [
        {
            "Sid": "Stmt1455417914563",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::www.example.com"
        },
        {
            "Sid": "Stmt1455417914543",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::www.example.com/*"
        }
    ]
}

Static Website Hosting
Select Enable Website Hosting, with Index Document: index.html. Do the same for Error Document: index.html. This is required because we are using HTML5 history, and a user may try to fetch https://www.example.com/app/users; since that file doesn’t really exist, we still want to serve up our React/Angular app. Just to be clear, our app URLs will NOT be using hashbang URLs.

Now would be a good time to upload your static app files to your new bucket. Here is an example I used for testing; notice how I am adding a ?v=2 version query string. This will allow our Cloudfront cache to differentiate between versions. It’s beyond the scope of this post, but Webpack can easily version bundles and update your HTML file.

index.html

<html>
  <head>
    <link rel="stylesheet" href="/style.css?v=2" type="text/css" media="screen" title="no title" charset="utf-8">
  </head>
  <body>
    <h1>Example Site</h1>
  </body>
</html>

style.css

h1 {
  color: red;
}

Get a free SSL cert

Other places charge as much as $20/month for cert support

This one is not that hard. Go to the Certificate Manager and set up an SSL cert for your domain, e.g. example.com & www.example.com. You’ll need to be able to respond to a select number of email aliases, so you’ll need to have access to the domain and the email server for the domain.
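If you prefer the command line, the AWS CLI can make the same request (a sketch; double-check the options against your CLI version). Note that certificates used with Cloudfront need to live in the us-east-1 region:

aws acm request-certificate \
  --region us-east-1 \
  --domain-name example.com \
  --subject-alternative-names www.example.com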

Cache with Cloudfront

Use Cloudfront as a sort of loadbalancer/SSL frontend to S3

Now we come to the part of the show where it gets intense. Cloudfront has a number of options, most of the defaults are pretty sane, but let’s list out what we need to do. First we’re going to create a new web distribution.

  • Origin Domain Name: Your bucket, you can select it from the list
  • Viewer Protocol Policy: Redirect HTTP to HTTPS
  • Forward Query Strings: Yes
  • Price Class: I choose US / Europe, you choose whatever
  • Alternate Domain Names: www.example.com
  • SSL Certificate: Custom SSL Cert (choose your newly created cert)
  • Default Root Object: index.html
  • Click Create Distribution

Wait, we’re not done yet. We need to tell Cloudfront that when we get an error, it should serve up the index.html file. Click on your distribution, then click on the Error Pages tab. Now it’s time to click Create Custom Error Response. From here you’ll select

  • HTTP Error Code: 404
  • Customize Error Response: Yes
  • Response Page Path: /index.html
  • HTTP Response Code: 200

That will serve up the index.html file whenever Cloudfront can’t find the file.

Route53 points to Cloudfront (optional)

Route traffic for www.example.com to Cloudfront

Now you can create a record in your domain for your Cloudfront endpoint. Go to your Hosted Zones in Route53. From there, Create Record Set. The Name: www, Alias: Yes, Alias Target: your Cloudfront URL. Save it and you’re done. Now after a little bit, when your Cloudfront distribution is done processing, you should be able to hit your site in a browser and see it served up.

Updating your site

You need to invalidate the cached index.html

If you update your CSS file, you’ll need to bump the ?v=2 to ?v=3 (or have Webpack just build your site and do it for you). You can then upload your files to your bucket (please automate this). Then you’ll need to invalidate, at the very least, the index.html file on Cloudfront. Go to your Cloudfront distribution, select the Invalidations tab, and Create Invalidation. You can do every file with *, or just put in /index.html to clear the index.html file. After a few minutes, the new version of index.html will start to be served up by Cloudfront.

You’ve just created a free, SSL protected, web server that is blazing fast, inexpensive, and supports cutting edge ReactJS, Angular, and even Ember web apps. Go forth and obtain knowledge and wisdom. Please leave a comment or contact me on Twitter.

Read More

2015-12-22
Using Amazon Route53/S3 to redirect your root domain

The Backstory

You may find yourself needing to redirect all traffic from your root domain example.com, otherwise known as the apex, to your real hostname www.example.com. Maybe you didn’t know this, but you cannot use a CNAME on your apex domain. This will bite you in the butt when you try to use your root domain example.com with Heroku’s SSL (HTTPS) service. Heroku will give you a hostname and tell you to create a CNAME to that hostname. However, this is not strictly possible. Some registrars can get around this by essentially providing you an HTTP redirect, but this is a hack. In short, don’t use your apex domain, e.g. example.com, even though you see all the cool kids on the block doing it.

Just Do It!

Create an Amazon S3 Bucket

First you are going to create a bucket in Amazon S3. You can name it after your domain, but the name is immaterial. The important part is going to be selecting Static Website Hosting. In that section, you’re going to select Enable Website Hosting. The Index Document is also immaterial; you can name it foo.html.

The important part is the Redirection Rules. You’ll use this.

<RoutingRules>
  <RoutingRule>
    <Redirect>
      <Protocol>REPLACEME with either HTTP or HTTPS</Protocol>
      <HostName>REPLACEME with www.example.com</HostName>
      <HttpRedirectCode>301</HttpRedirectCode>
    </Redirect>
  </RoutingRule>
</RoutingRules>

You won’t forget to actually replace the protocol and hostname sections, will you?

Create a Route53 Alias

Now, walk over to Amazon Route53. I’m totally assuming that you’re using Route53 for your DNS records, otherwise, this whole article is pretty much pointless. Click on your root (apex) record. Now we want to select Type: A IPv4 Address, then you’ll select Alias: Yes. For the Alias Target you will get a dropdown, you’ll see one of them listed as S3 Website Endpoints, that’s the S3 bucket you just set up. Select that.

Now Save Record Set and you’re done.

Recap

You created an S3 bucket that acts like a webserver, anything that hits this S3 bucket will be immediately redirected to your desired hostname. You configured your domain example.com to hit your S3 bucket.

Caveat

  • This will not work for HTTPS => HTTPS, only HTTP => HTTP(S). So https://example.com will not work; it just times out. This might be a deal killer for some, in which case you might want to stick with using an A record for your apex domain and setting up a real server.
Read More

2015-10-14
Leveraging Reflux (flux) with React like a pro


My latest project for a well known clothing retailer has involved writing a React based web app. It has a staggering amount of data that needs to update different parts of the app. To accomplish this, we are using React (of course) and Reflux.

So you’ve heard React is the new hotness that all the cool developers are adding to their resume. It’s supposed to cure all your front-end woes. But in fact, you’ll find yourself with some new hand wringing problems to get you down. Don’t Fret. There is a better way.

Flux is not so much a piece of software as a way to keep data synchronized in your front-end app. What am I talking about? Say a person enters some data into a form and presses Send; we need to keep that data in sync in other parts of the app. Reflux is just a re-invention of the flux concept with some powerful features and simplified tools.

A simplified diagram of data flow

╔═════════╗       ╔════════╗       ╔═════════════════╗
║ Actions ║──────>║ Stores ║──────>║ View Components ║
╚═════════╝       ╚════════╝       ╚═════════════════╝
     ^                                       │
     └───────────────────────────────────────┘

The major players

  • The React component (View Component)
  • The Reflux Actions file
  • The Reflux Store file

Think of this one way data flow like this. The component says “Hey, the person has updated their name in an input field”, so it calls the action that is defined in the Action file. The Action takes care of sending out a broadcast with the data the person entered. The Store is listening to the Actions file and gets called, it updates itself and then triggers an update. The component is listening to the Store and updates itself because it has new state values.

A simple web form

FirstNameHeader.js (For displaying the name entered into the input)

import React from 'react';
import Reflux from 'reflux';
// We import the PersonStore so we can get its value and get updated when it's updated
import PersonStore from './PersonStore.js';

export default React.createClass({
  mixins: [
    /* The value of the PersonStore is set in the local state as 'person' */
    Reflux.connect(PersonStore, 'person')
  ],
  render() {
    return (
      <h1>{this.state.person.first_name}</h1>
    );
  }
});

FirstNameInput.js

import React from 'react';
import Actions from './Actions.js';

export default React.createClass({
  render() {
    return (
      <input type="text" name="first_name" defaultValue="" onChange={this._onUpdate} />
    );
  },
  _onUpdate(event) {
    let name = event.target.name;
    let value = event.target.value;
    // `updatePerson` is defined in Actions.js, and PersonStore.js has an `onUpdatePerson` method that matches it
    Actions.updatePerson(name, value);
  }
});

Actions.js

import Reflux from 'reflux';

// This file is small, but essential
export default Reflux.createActions([
  'updatePerson'
]);

PersonStore.js The data store for person data

import Reflux from 'reflux';
import Actions from './Actions.js';

let person = {
  first_name: ''
};

export default Reflux.createStore({
  // `listenables` is the magic that matches up the Actions to the listening methods below
  listenables: Actions,
  getInitialState() {
    // Provides the initial value to components (used by Reflux.connect)
    return person;
  },

  // A `magic` method that matches up to the `updatePerson` Action
  // This is called in our `FirstNameInput.js` file
  onUpdatePerson(name, value) {
    person[name] = value;

    // Needed for any component listening to these updates
    this.trigger(person);
  }
});

So what does this lame web app do? Why so complex?

Glad you asked. When a person enters their name in the input, the change is reflected in the header component. But wait, you could add another component that would also be able to get those updates too. Later we’ll see how you can combine multiple Actions and Stores to create powerful workflows for your data.

I admit, the documentation for Reflux is lacking and not very easy to follow, but it’s a very powerful and elegant framework for keeping state across components. We’ve found it essential to building a very complex React web app.

Disclaimers

  • I’m using ES6 in my examples
  • Reflux has several different ways to skin a cat; this is one of the simpler, more magical ways
  • We’ll cover async actions in a different post
Read More