Lover of all things beautiful

2016-08-26
Reach Docker Host From Container

This is the best way I've found to set things up so a container can contact the host machine.

sudo ifconfig lo0 alias 172.16.123.1

Now you can use the IP 172.16.123.1 to contact your local host machine. You might want to store it in an environment variable.
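For example, here's a minimal sketch of handing that IP to a container as an environment variable; the alpine image and port 3000 are just examples, and it assumes something on the host is actually listening there:

# Hand the alias IP to a container; single quotes so $HOST_IP expands inside the container, not on the host
docker run --rm -e HOST_IP=172.16.123.1 alpine \
  sh -c 'wget -qO- "http://$HOST_IP:3000/"'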

Note: I had written a much longer, more in-depth post, but a few unfortunate keystrokes in Vim obliterated much of it…


2016-08-21
A Handy Publish Script for Static Sites Using CloudFront

#!/bin/bash

CURR_COMMIT=$(git rev-parse HEAD);
CURR_VERSION=$(node -e "console.log(require('./package.json').version);");
VER_HASH=$(git rev-list -n 1 v$CURR_VERSION);

# Don't want to redo the version bump
if [ "$CURR_COMMIT" == "$VER_HASH" ]
then
  echo 'Already up to date'
  exit
fi

npm version patch;

NEW_VERSION=$(node -e "console.log(require('./package.json').version);");

echo $NEW_VERSION;

# --follow-tags makes sure the tag npm version just created gets pushed too
git push origin HEAD --follow-tags;

npm run buildprod

aws s3 sync ./public s3://www.example.com --size-only --delete;

# Invalidate cache
aws cloudfront create-invalidation \
  --distribution-id YOUR_DISTRIBUTION_ID \
  --paths "/*";

What Does It Do?

In a nutshell, this script bumps the current version of the project, builds your static site, uploads it to an AWS S3 bucket, then tells Cloudfront to invalidate all the files in the specified Cloudfront distribution.

What You’ll Need

To run this you'll need a few things in place:

  1. You need the AWS CLI. You can install it with Brew (brew install awscli), npm (npm install -g aws-cli), or just from their site https://aws.amazon.com/cli/
  2. After you have the AWS CLI installed, you need to configure it. This requires putting a ~/.aws/credentials file in place with your creds (a quick sketch follows this list).
  3. You will need git installed.
  4. For this script we use npm.
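For reference, here's a minimal sketch of getting the CLI set up from the terminal; `aws configure` writes the ~/.aws/credentials file for you:

# Install the AWS CLI (Homebrew shown) and configure your credentials
brew install awscli
aws configure   # prompts for Access Key ID, Secret Access Key, default region, and output format

# Sanity check that the CLI can authenticate
aws sts get-caller-identity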

Let’s Walk Through Each Part

CURR_COMMIT=$(git rev-parse HEAD);
CURR_VERSION=$(node -e "console.log(require('./package.json').version);");
VER_HASH=$(git rev-list -n 1 v$CURR_VERSION);

Here we are gathering a few details. We want to know the latest commit hash. Then we pull the version from the package.json file and use that to look up the commit hash of the tag. When we use npm version patch, it creates a semver tag like v1.3.1.

# Don't want to redo the version bump
if [ "$CURR_COMMIT" == "$VER_HASH" ]
then
  echo 'Already up to date'
  exit
fi

Here we check whether the tag hash is different from the current HEAD hash. Basically, we don't want to create another commit if nothing new has been committed. You need to actually change code and make a new commit before this script will continue past this point.

npm version patch;

NEW_VERSION=$(node -e "console.log(require('./package.json').version);");

echo $NEW_VERSION;

# --follow-tags makes sure the tag npm version just created gets pushed too
git push origin HEAD --follow-tags;

Here we tell npm to bump the patch version up one. It will change the version in the package.json file, make a commit, then create a git tag with that version. After we bump the version, we retrieve the version from package.json like we did earlier and echo the new version out. Then we push this new commit and tag up to our git server (GitHub, GitLab, Bitbucket, etc.).

npm run buildprod

aws s3 sync ./public s3://www.example.com --size-only --delete;

Here we build our static site files. In this case I'm using npm to fire off a webpack configuration to build my files. Then we use the aws CLI to sync our files to S3. We basically tell it to delete any files on the server that are not present locally and to use only size as a comparison indicator.

# Invalidate cache
aws cloudfront create-invalidation \
  --distribution-id YOUR_DISTRIBUTION_ID \
  --paths "/*";

After we're done uploading our files to S3, we tell Cloudfront to invalidate all our files. Since this is a wildcard, we only get dinged for one cache invalidation from Cloudfront rather than the hundreds or thousands of files that are on our site (that could get expensive, since Cloudfront starts charging for invalidations past 1,000/month).

Summary

This script makes it really easy for me to just fire and forget, and it has a little built-in safety check so I'm not just bumping the version with no new code to show for it. I've gotten in the habit of creating a publish.sh for many of my projects, since I'm not always sure how each is deployed if I only deploy occasionally. Happy Shipping.


2016-04-26
A Better Hack to Get Docker and VPN to Play Nice

If you’ve found this article, then you’ve banged your head against the problem of being on a restrictive VPN and using Docker at the same time. The culprit is usually Cisco AnyConnect or Junos Pulse.

The Problem

You use Docker for development. For various reasons you need to connect to a VPN, but as soon as you do, Docker stops working. There are many solutions out there; some work, others do not. The bottom line is there is no elegant solution, and this solution here is not elegant either, but it will work. What's happening? When you connect, AnyConnect blunders in and overwrites all your computer's routes to send them through the VPN tunnel. Luckily, it doesn't route localhost (127.0.0.1) to the tunnel. This is our backdoor to hack ourselves in.

The Setup

My current setup involves using Docker Machine to create a Parallels VM. I'm on a Mac; Windows/Linux YMMV. VirtualBox should work just fine; VMware, I can't really say. The VPN is a really restrictive one that doesn't allow split tunneling, like Cisco AnyConnect or Junos Pulse.
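If you're curious what that machine setup looks like, here's a rough sketch; it assumes the Docker Machine Parallels driver plugin is installed, and "default" is just the machine name:

# Create a Parallels-backed VM named "default" (needs the docker-machine-parallels driver plugin)
docker-machine create --driver parallels default

# Point the current shell's Docker environment variables at that machine
eval $(docker-machine env default)

# Sanity check
docker ps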

The Hack

You'll want to set up your Docker Machine first and get your env set up with eval $(docker-machine env). Once you have your docker machine up, you'll want to set up a Port Forwarding rule in Parallels. Go to Preferences > Networking. Then you'll want to add a new rule like this:

[Screenshot: the Port Forwarding Rule in Parallels; "default" is the name of my VM]

You need to start a container that will forward the Docker HTTP port to localhost. Just run this command:

$(docker run sequenceiq/socat)

You can find out more on what it is doing at https://hub.docker.com/r/sequenceiq/socat/

Now on the command line, you need to update your environment variables to use this new localhost incantation. We'll be changing DOCKER_HOST, DOCKER_TLS_VERIFY, and DOCKER_CERT_PATH. We set DOCKER_HOST to the localhost version, then disable TLS verification with DOCKER_TLS_VERIFY.

export DOCKER_HOST=tcp://127.0.0.1:2375 && \
export DOCKER_TLS_VERIFY="" && \
export DOCKER_CERT_PATH=""

Now you can connect to your restrictive VPN and still run docker ps.

Caveats

  1. You should have your VM up and running and have Docker-Machine env set in your terminal
  2. Any ports you want to reach on your Docker containers will need to be port forwarded with Parallels.

Notes

  1. This will all be obsolete when Docker for Mac is released to the general public. Can't wait? Sign up for their private beta.

2016-04-11
A Hack to Get Docker Working While on VPN

See the improved version

If you’ve found this article, then you’ve banged your head against the problem of being on a restrictive VPN and using Docker at the same time. The culprit is usually Cisco AnyConnect or Junos Pulse.

The Problem

You use Docker for development. For various reasons you need to connect to a VPN, but as soon as you do, Docker stops working. There are many solutions out there; some work, others do not. The bottom line is there is no elegant solution, and this solution here is not elegant either, but it will work. What's happening? When you connect, AnyConnect blunders in and overwrites all your computer's routes to send them through the VPN tunnel. Luckily, it doesn't route localhost (127.0.0.1) to the tunnel. This is our backdoor to hack ourselves in.

The Setup

My current setup involves using Docker Machine to create a Parallels VM. I'm on a Mac; Windows/Linux YMMV. VirtualBox should work just fine; VMware, I can't really say. The VPN is a really restrictive one that doesn't allow split tunneling, like Cisco AnyConnect or Junos Pulse.

The Hack

You'll want to set up your Docker Machine first and get your env set up with eval $(docker-machine env). Once you have your docker machine up, you'll want to set up a Port Forwarding rule in Parallels. Go to Preferences > Networking. Then you'll want to add a new rule like this:

[Screenshot: the Port Forwarding Rule in Parallels; "default" is the name of my VM]

Now on the command line, you need to update your environment variables to use this new localhost incantation. We'll be changing DOCKER_HOST and DOCKER_TLS_VERIFY. We set DOCKER_HOST to the localhost version, then disable TLS verification with DOCKER_TLS_VERIFY.

export DOCKER_HOST=tcp://127.0.0.1:2376
export DOCKER_TLS_VERIFY=""

Now you can connect to your restrictive VPN and run docker --tlsverify=false ps instead of plain docker ps.

This is not an elegant solution, but it will work until I figure out something more robust.

Caveats

  1. You should have your VM up and running and have Docker-Machine env set in your terminal
  2. You'll get numerous warnings from docker-compose; annoying, but they are just warnings.
  3. You have to include --tlsverify=false with every Docker command e.g. docker --tlsverify=false ps

Notes

  1. Please keep in mind, companies implement restrictive VPNs because a hacked or maliciously configured computer could easily expose the VPN to the outside world. Forcing all traffic through the VPN makes that security hole much harder to exploit.
  2. I've tried going the route of readding the routes (pun intended) to the Mac's routing table to redirect the IP that the Parallels VM is on back to the Parallels interface, but didn't get anywhere with that.
  3. A better solution would be to include 127.0.0.1 in the SSL cert that Docker Machine creates for the VM; then you wouldn't have issues when connecting via 127.0.0.1.

2016-03-31
Preventing unwanted git email leaking

Maybe you work on different git projects: some for business, some at home for personal projects. In the past, as a convenience, I set my ~/.gitconfig to include my name and email, like the following.

[user]
name = "Shane A. Stillwell"
email = "shanestillwell+spam@gmail.com"

This is exactly what git wants you to do when it detects you haven't set your name or email:

*** Please tell me who you are.

Run

git config --global user.email "you@example.com"
git config --global user.name "Your Name"

to set your account's default identity.
Omit --global to set the identity only in this repository.

So what’s the big deal?

Now when you are working on a project for work, or for different clients, git will use the global name/email. But shoot, you didn't mean to make that commit with your personal email; it was supposed to be your work email. Bummer, it's now a permanent part of the git log.

The solution

Very simple. Open up your ~/.gitconfig and change your email to none. (I assume your name doesn’t change between projects)

[user]
name = "Shane A. Stillwell"
email = "(none)"

Now in each project, before you can commit you'll be prompted like above. Just remember NOT to use the --global flag, e.g. git config user.email "shanestillwell+spam@gmail.com". Now each git repo will have a correctly set email address, and you're less likely to leak personal emails into business projects.
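In other words, the per-repository setup is just this (the addresses here are placeholders):

# Inside a work repo: set the identity for THIS repository only (note: no --global)
git config user.email "shane@work.example.com"
git config user.name "Shane A. Stillwell"

# Verify which identity the repo will actually use
git config user.email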


2016-02-14
Using free SSL and Cloudfront for an Angular React site

This is the holy grail, the culmination of knowing there is something out there better, stronger, faster. In this blog post, I’ll outline how to set up Amazon S3, Cloudfront, and Route53 to serve your static React, Angular, or any site over HTTPS. Get ready, it’s about to get real.

What am I even talking about?

The new en vogue way to build sites is to use React or Angular on the front end. Compile the site into a few files, like index.html, scripts.js, and styles.css, using Webpack. Then you take those files and upload them to some basic web server (no need for Node.js, Rails, or another dynamic scripting language server). I've used https://surge.sh/, which is really easy and customizable, but you'll need to pay $13 a month for a custom SSL site. For hobby sites, that starts to add up.

Since Google will start pointing out non-HTTPS sites, it's probably a good idea to get all your sites secure, even the static ones. Most places charge for SSL, but Amazon is offering free HTTPS certs for its services. This means you can use free certs for Cloudfront and Elastic Load Balancers.

Cloudfront just serves up files in your Amazon S3 bucket. The last piece is using Route53 to point your domain to Cloudfront, but that’s not really required.

                  +----------+     +------------+     +--------+
                  |  Route   |     |            |     |        |
www.example.com+-->    53    +-----> Cloudfront +-----> S3     |
                  |          |     |            |     |        |
                  +----------+     +------------+     +--------+

Setting up the site on Amazon S3

Get Amazon S3 to act as our web server

The first step involves setting up an Amazon S3 bucket to hold our files. It’s a good idea to name the bucket after your site, e.g. www.example.com. You’ll need to set the properties for the bucket.

Permissions :: Bucket Policy
You’ll need to set these custom bucket policy permissions. Basically, anyone can get any object, and anyone can list the bucket contents.

{
  "Version": "2012-10-17",
  "Id": "Policy1455417919361",
  "Statement": [
    {
      "Sid": "Stmt1455417914563",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::www.example.com"
    },
    {
      "Sid": "Stmt1455417914543",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.example.com/*"
    }
  ]
}

Static Website Hosting
Select Enable Website Hosting, with Index Document: index.html. Do the same for Error Document: index.html. This is required because we are using HTML5 history and a user may try to fetch https://www.example.com/app/users; since that file doesn't really exist, we still want to serve up our React/Angular app. Just to be clear, our app URLs will NOT be using hashbang URLs.
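If you'd rather script this than click through the console, the same setting can be applied with the AWS CLI; the bucket name here is the example one from above:

# Enable static website hosting on the bucket, using index.html for both the index and error documents
aws s3 website s3://www.example.com/ \
  --index-document index.html \
  --error-document index.html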

Now would be a good time to upload your static app files to your new bucket. Here is an example I used for testing; notice how I am appending a ?v=2 version. This will allow our Cloudfront cache to differentiate between versions. It's beyond the scope of this post, but Webpack can easily version bundles and update your HTML file.

index.html

<html>
  <head>
    <link rel="stylesheet" href="/style.css?v=2" type="text/css" media="screen" title="no title" charset="utf-8">
  </head>
  <body>
    <h1>Example Site</h1>
  </body>
</html>

style.css

h1 {
  color: red;
}

Get a free SSL cert

Other places charge as much as $20/month for cert support

This one is not that hard. Go to the Certificate Manager and set up an SSL cert for your domain, e.g. example.com & www.example.com. You'll need to be able to respond to a select number of email aliases, so you'll need to have access to the domain and the email server for the domain.
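If you prefer the CLI, requesting the cert looks roughly like this (assuming a reasonably recent CLI; the domains are placeholders, and certs used with Cloudfront must live in us-east-1):

# Request a cert covering the apex and the www hostname, validated over email
aws acm request-certificate \
  --region us-east-1 \
  --domain-name example.com \
  --subject-alternative-names www.example.com \
  --validation-method EMAIL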

Cache with Cloudfront

Use Cloudfront as a sort of loadbalancer/SSL frontend to S3

Now we come to the part of the show where it gets intense. Cloudfront has a number of options, most of the defaults are pretty sane, but let’s list out what we need to do. First we’re going to create a new web distribution.

  • Origin Domain Name: Your bucket, you can select it from the list
  • Viewer Protocol Policy: Redirect HTTP to HTTPS
  • Forward Query Strings: Yes
  • Price Class: I choose US / Europe, you choose whatever
  • Alternate Domain Names: www.example.com
  • SSL Certificate: Custom SSL Cert (choose your newly created cert)
  • Default Root Object: index.html
  • Click Create Distribution

Wait, we're not done yet. We need to tell Cloudfront that when we get an error, we need to serve up the index.html file. Click on your distribution, then click on the Error Pages tab. Now it's time to click Create Custom Error Response. From here you'll select:

  • HTTP Error Code: 404
  • Customize Error Response: Yes
  • Response Page Path: /index.html
  • HTTP Response Code: 200

That will serve up the index.html file whenever Cloudfront can’t find the file.

Route53 points to Cloudfront (optional)

Route traffic for www.example.com to Cloudfront

Now you can create a record in your domain for your Cloudfront endpoint. Go to your Hosted Zones in Route53. From there, Create Record Set. Name: www, Alias: Yes, Alias Target: your Cloudfront URL. Save it and you're done. After a little bit, when your Cloudfront distribution is done processing, you should be able to hit your site in a browser and see it served up.

Updating your site

You need to invalidate the cached index.html

If you update your CSS file, you'll need to bump the ?v=2 to ?v=3 (or have Webpack just build your site and do it for you). You can then upload your files to your bucket (please automate this). Then you'll need to invalidate, at the very least, the index.html file on Cloudfront. Go to your Cloudfront distribution, select the Invalidations tab, and Create Invalidation. You can do every file with *, or just put in /index.html to clear the index.html file. After a few minutes, the new version of index.html will start to be served up by Cloudfront.
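The invalidation can also be done from the CLI instead of the console (the distribution ID below is a placeholder):

# Invalidate just the cached index.html on your distribution
aws cloudfront create-invalidation \
  --distribution-id YOUR_DISTRIBUTION_ID \
  --paths "/index.html"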

You've just created a free, SSL-protected web server that is blazing fast, inexpensive, and supports cutting-edge ReactJS, Angular, and even Ember web apps. Go forth and obtain knowledge and wisdom. Please leave a comment or contact me on Twitter.


2015-12-22
Using Amazon Route53/S3 to redirect your root domain

The Backstory

You may find yourself needing to redirect all traffic from your root domain example.com, otherwise known as the apex, to your real hostname www.example.com. Maybe you didn't know this, but you cannot use a CNAME on your apex domain. This will bite you in the butt when you try to use your root domain example.com with Heroku's SSL (HTTPS) service. Heroku will give you a hostname and tell you to create a CNAME to that hostname. However, this is not strictly possible. Some registrars can get around this by essentially providing you an HTTP redirect, but this is a hack. In short, don't use your apex domain, e.g. example.com, even though you see all the cool kids on the block doing it.

Just Do It!

Create an Amazon S3 Bucket

First you are going to create a bucket in Amazon S3. You can name it after your domain, but the name is immaterial. The important part is going to be selecting Static Website Hosting. In that section, you're going to select Enable Website Hosting. The Index Document is also immaterial; you can name it foo.html.

The important part is the Redirection Rules. You'll use this:

<RoutingRules>
  <RoutingRule>
    <Redirect>
      <Protocol>REPLACEME with either http or https</Protocol>
      <HostName>REPLACEME with www.example.com</HostName>
      <HttpRedirectCode>301</HttpRedirectCode>
    </Redirect>
  </RoutingRule>
</RoutingRules>

You won't forget to actually replace the protocol and hostname sections, will you?
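As a rough sketch, you could also push the same website configuration with the CLI instead of the console; the bucket name and hostname below are placeholders:

# Apply the index document and the redirect routing rule to the bucket
aws s3api put-bucket-website --bucket example.com --website-configuration '{
  "IndexDocument": { "Suffix": "foo.html" },
  "RoutingRules": [
    { "Redirect": { "Protocol": "https", "HostName": "www.example.com", "HttpRedirectCode": "301" } }
  ]
}'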

Create a Route53 Alias

Now, walk over to Amazon Route53. I'm totally assuming that you're using Route53 for your DNS records; otherwise, this whole article is pretty much pointless. Click on your root (apex) record. Now we want to select Type: A IPv4 Address, then select Alias: Yes. For the Alias Target you will get a dropdown; you'll see one of them listed as S3 Website Endpoints. That's the S3 bucket you just set up. Select that.

Now Save Record Set and you’re done.

Recap

You created an S3 bucket that acts like a web server; anything that hits this S3 bucket will be immediately redirected to your desired hostname. You configured your domain example.com to hit your S3 bucket.

Caveat

  • This will not work for HTTPS => HTTPS, only HTTP => HTTP(S). So https://example.com will not work; it just times out. This might be a deal killer for some, in which case you might want to stick with using an A record for your apex domain by setting up a real server.

2015-10-14
Leveraging Reflux (flux) with React like a pro


My latest project, for a well-known clothing retailer, has involved writing a React-based web app. It has a staggering amount of data that needs to update different parts of the app. To accomplish this, we are using React (of course) and Reflux.

So you've heard React is the new hotness that all the cool developers are adding to their resumes. It's supposed to cure all your front-end woes. But in fact, you'll find yourself with some new hand-wringing problems to get you down. Don't fret. There is a better way.

Flux is not so much a piece of software as a way to keep data synchronized in your front-end app. What am I talking about? Say a person enters some data into a form and presses Send; we need to keep that data in sync in other parts of the app. Reflux is just a re-invention of the flux concept with some powerful features and simplified tools.

A simplified diagram of data flow

╔═════════╗       ╔════════╗       ╔═════════════════╗
║ Actions ║──────>║ Stores ║──────>║ View Components ║
╚═════════╝       ╚════════╝       ╚═════════════════╝
     ^                                      │
     └──────────────────────────────────────┘

The major players

  • The React component (View Component)
  • The Reflux Actions file
  • The Reflux Store file

Think of this one-way data flow like this. The component says "Hey, the person has updated their name in an input field," so it calls the action that is defined in the Actions file. The Action takes care of sending out a broadcast with the data the person entered. The Store is listening to the Actions file and gets called; it updates itself and then triggers an update. The component is listening to the Store and updates itself because it has new state values.

A simple web form

FirstNameHeader.js (For displaying the name entered into the input)

import React from 'react';
import Reflux from 'reflux';
// We import the PersonStore so we can read its value and be updated when it changes
import PersonStore from './PersonStore.js';

export default React.createClass({
  mixins: [
    /* The value of the PersonStore is set in the local state as 'person' */
    Reflux.connect(PersonStore, 'person')
  ],
  render() {
    return (
      <h1>{this.state.person.first_name}</h1>
    );
  }
});

FirstNameInput.js

import React from 'react';
import Actions from './Actions.js';

export default React.createClass({
  render() {
    return (
      <input type="text" name="first_name" defaultValue="" onChange={this._onUpdate} />
    );
  },
  _onUpdate(event) {
    let name = event.target.name;
    let value = event.target.value;
    // `updatePerson` is defined in Actions.js, and PersonStore.js has an `onUpdatePerson` method that matches it
    Actions.updatePerson(name, value);
  }
});

Actions.js

import Reflux from 'reflux';

// This file is small, but essential
export default Reflux.createActions([
  'updatePerson'
]);

PersonStore.js The data store for person data

import Reflux from 'reflux';
import Actions from './Actions.js';

let person = {
  first_name: ''
};

export default Reflux.createStore({
  // This is magic that matches up the Actions to the listening methods below
  listenables: Actions,
  getInitialState() {
    // Need to provide the initial value to components
    return person;
  },

  // A `magic` method that matches up to the `updatePerson` Action
  // This is called in our `FirstNameInput.js` file
  onUpdatePerson(name, value) {
    person[name] = value;

    // Needed for any component listening to these updates
    this.trigger(person);
  }
});

So what does this lame web app do? Why so complex?

Glad you asked. When a person enters their name in the input, the change is reflected in the header component. But wait, you could add another component that would also be able to get those updates too. Later we’ll see how you can combine multiple Actions and Stores to create powerful workflows for your data.

I admit, the documentation for Reflux is lacking and not very easy to follow, but it’s a very powerful and elegant framework for keeping state across components. We’ve found it essential to building a very complex React web app.

Disclaimers

  • I’m using ES6 in my examples
  • Reflux has several different ways to skin a cat; this is one of the simpler, more magical ways
  • We’ll cover async actions in a different post

2015-06-10
The Life of a Vagabond Software Engineer

The Life of a Vagabond Software Engineer, Part 1.

Constant change requires reliable tools

Maybe you're like me: you love to change things up, find new places, explore the horizons. This could be the far-flung corners of the earth, or just a part of town you've never been to. I'm a die-hard explorer, but I'm also a software engineer. I need to be connected, I need power, and I need an environment that works. In this post, we'll explore the tools. In subsequent posts, we'll explore other factors, such as finding the right places to work, along with hints and helpful tips.

Minimalism

I gravitate towards minimalism (although in practice it may not always appear that way). I take only what I need, and items should usually serve dual purposes. The items you choose should be of high quality, durable, and fixable if need be.

Go Light, Go Right.

Technical Accouterments

So let’s get right to it. I need reliable tools. My personal choice for computers is a MacBook Pro 13”. I’ve been using a Mac for almost 10 years now and have had little to no trouble. I usually buy a new one every two years (having a backup is also crucial in my line of work). I need to protect my Mac while it’s in my pack so I use an Aerostich Padded Laptop Computer Sleeve.

For the technical needs I have a small Granite Gear Ripstop Stuffsack. In this bag you'll find:

Stuff Bag

  • Etymotic HF5 Earphones (great for noise free listening)
  • Simple lens cleaner (I wear glasses)
  • MacBook Power cord in a bag (having cords in their own containers/bags really helps reduce tangle)
  • MacBook Power Extension rolled up with velcro ties
  • iPhone wall charger
  • iPhone Earphones in an old makeup remover case
  • Granite Gear Stuff Sack
  • iPhone cord
  • Microfiber cloth
  • Moleskine Classic Notebook Squared 5x8.25 (not in stuff sack, but inner pocket of the Deuter Backpack)
  • Zebra Ball point pen

I can grab the bag in my pack and easily have what I need to power up the Mac or iPhone, clean my glasses, take notes, jump on a conference call, and more.

Ready, Set, Eat!

Cup and Spork

For the more culinary, eco-friendly needs, I have a simple GSI Cup and a Snowpeak Titanium Spork. Notice the lanyard tied around the spork; it really helps for hanging it up and for finding it a lot easier if you happen to drop it (e.g. in the snow). Rant I hate styrofoam cups and plastic forks and spoons. Think of the overflowing landfills that have billions of plastic spoons that were merely used to stir coffee for a few seconds and then thrown away forever. EndRant

The Girl Scout cup is sort of a running joke. My wife picked it up at a garage sale as part of a kit for our son. He didn’t want the branded cup, so I took it. Oh, and by the way I’m a Scoutmaster in the Boy Scouts, no scout ever wants to borrow my cup :D

Survival & Emergency

While out of doors in this wonderful world, you will run into situations. It may be as simple as needing a bandaid, or more life-threatening, involving emergency medical care and survival. In such cases there are a few items that can make all the difference.

Survival Items

  • Paracord rope
  • Batteries
  • Altoid Mini Container
    • Super glue
    • Cotton ball
    • Dental floss
    • Tweezers
    • Sewing needles
    • Mini compass (I was lost in the woods once, no fun)
    • Safety pin
  • Bandaids & Alligator clips
  • Petzl eLite headlamp
  • Real compass (yeah, being lost is no fun)
  • Lighters (forget the matches)
  • Ripstop bag to carry all these items
  • Leatherman Skeletool CX (not pictured, on my person)

The Rucksack

Lastly, you need a way to carry all those tools. I've used a Duluth Pack in the past and it works well, but it doesn't have the back support I'm looking for in a pack. My mainstay is a Deuter Trans Alpine 30 AC (I have an older model than the one in the link). The Deuter backpack has a waist belt to take the weight off your shoulders. Plenty of pockets (but not too many). It has an integrated rain cover if I get caught out in the weather (can't have that expensive Mac getting wet). It's just the right size if I want to repurpose it for a long day hike with the scouts.

In Short

So that's most of my kit. I didn't mention the Kermit Kamping Chair that I use if I know I'm going to be sitting in the woods or a park somewhere, or the smaller, more personal items (such as flushable wet wipes for those times, you know). Stay tuned for the next installment, where we'll discuss the ins and outs of finding good Wifi and other great things about working remote, working from anywhere.


2014-12-11
NG-Form To The Rescue

A project I'm currently working on (for a very popular apparel company) is employing AngularJS for its site. I've created some directives to handle collecting addresses for billing and shipping: reusable directives with their own isolate scope. To add more awesome to the mix, I'm using ngMessages to show various error messages with the form.

This all works great, but the problem was showing the error messages correctly when there are two forms on the page.

For example, a form collecting two addresses might look like this with my directives.

<form name="myForm">

  <div my-address-form-directive="billing"></div>
  <div my-address-form-directive="shipping"></div>

</form>

One of my directives might have template code that looks like this to show the error messages. This would show the errors when the input for city had errors.

<span class="text-danger" data-ng-messages="myForm.city.$error" data-ng-show="myForm.city.$dirty">
  <span data-ng-message="required">: this field is required</span>
  <span data-ng-message="maxlength">: Please enter only 200 characters</span>
</span>
<input name="city" ng-required="true" ng-maxlength="200" ng-model="city" />

But do you see how the myForm form name is sort of hard-coded?

How to get the name of the form?

The individual directives do not know how to get the name of the form, since it's outside the code of the directive. Hard-coding would be a bad idea. And even if you can get the name, if you had two my-address-form-directive elements on the page, the errors would show on each of the forms (even though they only applied to one).

One way to get the form is to require it in the directive and then assign it to the scope of the directive, like this:

function myDirective() {
  return {
    restrict: 'AE',
    require: '^form', // The ^ means look at the parent elements
    link: function(scope, ele, attrs, form) {
      scope.parentForm = form;
    }
  };
}

This would allow you to grab the parent form and then use that in your template like this (notice we changed myForm to parentForm to match what we’ve assigned to the scope).

<span class="text-danger" data-ng-messages="parentForm.city.$error" data-ng-show="parentForm.city.$dirty">

Double Trouble

Now you’ve solved the problem of knowing what name the form has, but what about having two directives under the same form like my top example? The error messages would show on both directives when one of them tripped the error. Not what we want.

Most problems you encounter in your everyday coding have already been solved. Find the solution and your job is that much easier

ngForm To The Rescue

The answer is depressingly simple. Just add an ng-form="formName" around your directive template, like this:

<div ng-form="myForm">
  <span class="text-danger" data-ng-messages="myForm.city.$error" data-ng-show="myForm.city.$dirty">
    <span data-ng-message="required">: this field is required</span>
    <span data-ng-message="maxlength">: Please enter only 200 characters</span>
  </span>
  <input name="city" ng-required="true" ng-maxlength="200" ng-model="city" />
</div>

You can actually get rid of the require: '^form' code from the directive and its matching scope.parentForm = form. It's no longer needed; ngForm kills two birds with one stone.

Happy Bird Hunting.
