Teaching vs. Coaching

Teaching is about sharing your experience; coaching is about getting people to live / feel / have their own experience.

Let me explain myself.
A few days ago, I was running a handover session for one of our clients as I was about to leave.
There was one person, very smart, full of energy and good intentions, whom I kept bringing back on track because he was giving the room too much detail on what I was trying to pass down.

That’s when I discovered my leading style.

Teaching is telling your kid not to put their hand in the fire. Coaching is calling on their imagination to think about what might happen if they did put their hand in the fire.

But let me take you through what I think is the difference between teaching and coaching. What’s coming ahead is a very opinionated list of statements, and I’m only quoting myself.

Sharing knowledge is about giving people all your secrets

I think this is a big fallacy.

In the agile Scrum world, in a cross-functional team, I often hear people say: “everybody should be able to do everything”.
I think this is wrong.

Although I believe that you could do anything if you put enough effort into it, not everybody would be a plumber!
We’re all shaped differently, we all have different experiences, we all have different motivations.

I think what makes more sense is: “as a team, we need to know enough that if one person leaves, we can keep the lights on and keep going forward” - or even - “we all need to know enough about each topic to be able to call bullshit, without having to be experts”.

Each person on the team should be a coach in their area, but shouldn’t try to teach every single thing they know to everyone.

A coach should be asking questions

Yes and no. There’s a limit to the questions you should be asking.

There’s no point asking questions of somebody who has no foundation in what you’re trying to get them to see.
Sometimes, to avoid frustrating your audience, it might be a good strategy to give a hint, then ask questions that push the scenario further and open their imagination.

In a teaching scenario, you would just be passing down information and hoping for the best.
Students will ask you questions, but only about things they connected with.

In a coaching scenario, you should be the one asking most questions.
By asking questions, you’re inviting the other person to simulate the situation for themselves and effectively live the experience.

I’m a coach, not a teacher

Again … to me, teaching and coaching are two sides of the same coin.

A good amount of teaching, with the right amount of coaching, will make your audience connect with you, because you will make them feel the situation.
Brain simulation gets you almost there. Most, if not all, of the experience we have is brain simulation, as your brain only interprets the world.

Conclusion

Share too much detail and you will lose people’s attention; share too little and you will lose it just the same.
The best way to know where the audience is at is to ask strategic questions to gauge it.

I cannot teach you what the right amount of teaching / coaching is; I can only prompt you to have your own experience.

The best way for people to learn is by experience.
Experience is built on the skills you already have. Trying to give too much information might open too big a gap for people, and they will disconnect.

People don’t build new skills, they only expand the skills they already have.

My secret sauce is: if you and your audience are having fun, you’ve cracked the code.

Changing S3 object content type through the AWS CLI

So you’ve uploaded a file to S3 and want to change its content-type manually?

A good example would be a static website where you store a JSON file called info, containing information about your app (the version, etc.), which you upload together with the frontend code using the aws s3 sync command.
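
For illustration, that sync step might look something like this (./dist is a made-up build folder, and the bucket name is a placeholder):

~ $ aws s3 sync ./dist s3://<bucket>/ --acl public-read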

When you do that, S3 doesn’t automatically know the Content-Type of the info file, so it won’t set it to application/json. So if you were to check that file by accessing its URL, instead of the content just being displayed in the browser, it will get downloaded.

Sure, you could manually go into the S3 console and set the Content-Type to application/json, but in a CI / CD environment you want this to happen automatically, maybe by using the AWS CLI.

The only way to do this via the command line is to copy the file.
So you might try this:

~ $ aws s3 cp --content-type 'application/json' --acl public-read s3://<bucket>/info s3://<bucket>/info
copy failed: s3://<bucket>/info to s3://<bucket>/info An error occurred (InvalidRequest) when calling the CopyObject operation: This copy request is illegal because it is trying to copy an object to itself without changing the object's metadata, storage class, website redirect location or encryption attributes.

Um… what?? Error? But I am changing the object metadata!!

Well … the AWS CLI for S3 has the flag --content-type but also has a flag called --metadata … but --metadata doesn’t allow you to change the content-type.

Instead you can force S3 to think you’re changing the metadata with the flag --metadata-directive REPLACE.

So try this:

~ $ aws s3 cp --content-type 'application/json' --acl public-read s3://<bucket>/info s3://<bucket>/info --metadata-directive REPLACE
copy: s3://<bucket>/info to s3://<bucket>/info

Yes!!! It worked!
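
If you want to double-check, a head-object call should now report application/json as the ContentType (bucket placeholder as above):

~ $ aws s3api head-object --bucket <bucket> --key info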

Go enjoy your S3 deployment automation now!

307 Redirect when accessing S3 through Cloudfront?

So you’re trying to configure Cloudfront on top of an S3 bucket where you host your static website?
And you’re getting a 307 redirect to the S3 bucket DNS when accessing the DNS you configured in Cloudfront?

laBrute is coming to your rescue … well… you might not like the answer.
But let’s wind back a bit and look at the scenario:

  1. You created an S3 bucket called origin-mysite in the Sydney region, which would have the URL origin-mysite.s3-ap-southeast-2.amazonaws.com
  2. You then created a Cloudfront distribution with that bucket set as the origin
  3. You created a DNS name www.mysite.com that points to CloudFront and set it in Cloudfront as an Alternate Domain Name
  4. You now try to access your site via http://www.mysite.com but get redirected to http://origin-mysite.s3-ap-southeast-2.amazonaws.com (sketched below) … ?!
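
For illustration, this is roughly what a curl against your CloudFront DNS might show while this is happening (headers trimmed, values illustrative):

~ $ curl -I http://www.mysite.com/
HTTP/1.1 307 Temporary Redirect
Location: http://origin-mysite.s3-ap-southeast-2.amazonaws.com/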

That’s because of the distributed nature of S3.

If a request arrives at the wrong Amazon S3 location, Amazon S3 responds with a temporary redirect
that tells the requester to resend the request to a new endpoint.

You can read more about it here.
Basically the fix is … to wait for it!

Note that if you were using the us-east-1 region, you should not hit this specific problem.

Cloudformation Transform::Include Limitations

It’s now been a few days since I started playing with Transform::Include.

There are two limitations I’ve found so far:

If you follow the link for the first one, you will understand what the issue is.

For the second one, please follow my lead!

Let’s say you would like to separate the resources in your template logically into snippets.

For example, you might want to create an S3 bucket and a CloudFront distribution on top of the S3 bucket to be able to use custom SSL, or maybe just for actual caching capabilities.

Doing this in one stack is probably the way to go, as they would be tightly coupled, but your template might grow very quickly: you might have the S3 bucket resource, the SSL certificate resource, the CloudFront resource and possibly the S3 bucket policy as well.

Your first thought might be to separate the S3 bucket + policy into one snippet, and the CloudFront + SSL certificate into another snippet!
Great idea! That’s what I would do.. except, you can’t do it!

Let’s assume you have something similar to:

AWSTemplateFormatVersion: '2010-09-09'
Description: Transform Include example
Parameters:
  ArtifactBucket:
    Type: String
Resources:
  'Fn::Transform':
    Name: 'AWS::Include'
    Parameters:
      Location: !Sub "s3://${ArtifactBucket}/s3.yaml"
  'Fn::Transform':
    Name: 'AWS::Include'
    Parameters:
      Location: !Sub "s3://${ArtifactBucket}/cloudfront.yaml"

What will actually happen is that Cloudformation will only deploy the second Include! The two 'Fn::Transform' keys sit at the same level of the same YAML mapping, and with duplicate keys only the last one survives.
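
In the meantime, the workaround I’d reach for is to merge both snippets into a single file and include that once. This is just a sketch; combined.yaml is a name I made up, and it would simply contain the contents of s3.yaml followed by cloudfront.yaml:

Resources:
  'Fn::Transform':
    Name: 'AWS::Include'
    Parameters:
      # combined.yaml holds both the S3 and the CloudFront resource definitions
      Location: !Sub "s3://${ArtifactBucket}/combined.yaml"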

It would be really awesome if you could use multiple Includes on the same level.. but who knows, that might be coming in the future..
Or even, right now, AWS already has a fix and is pushing it to us-east-1 as you read!

Remember they keep pushing improvements every hour!

Cloudformation Transform::Include - YAML/JSON malformed

Good day y’all,

You’re here because you probably had issues using the Transform::Include feature of Cloudformation.

Note: by the time you get to this blog, AWS might have fixed this bug.

Let me tell you .. it’s awesome!

Most people I’ve worked with have been looking for this for a while! To achieve it, they were using Nested stacks.
But we all know that it’s very different.

I was personally very excited to see that announcement. But really … I only had an opportunity to start using it yesterday.
And it’s awesome!

Now there are a few gotchas … that might get fixed in the future, but for now there are a few issues.

Let’s consider the following cloudformation template:

AWSTemplateFormatVersion: '2010-09-09'
Description: Transform Include example
Parameters:
  ArtifactBucket:
    Type: String
  BucketName:
    Type: String
Resources:
  'Fn::Transform':
    Name: 'AWS::Include'
    Parameters:
      Location: !Sub "s3://${ArtifactBucket}/s3.yaml"

This cloudformation template is basically just going to create an S3 bucket.
The definition of the S3 resource is done in the s3.yaml file, hosted in our artifact bucket.

The s3.yaml would have the following content:

AppBucket:
  Type: "AWS::S3::Bucket"
  Properties:
    BucketName: !Ref BucketName

So far, easy hey!

Note: Before you try to deploy the template, make sure you’ve uploaded the s3.yaml snippet to the right location first.
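
Something along these lines would do it (assuming the artifact bucket already exists and matches the ArtifactBucket parameter):

~ $ aws s3 cp s3.yaml s3://<artifact-bucket>/s3.yaml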

Now, let’s try to deploy that cloudformation template:

~ $ aws cloudformation deploy --stack-name transform-include-example --template-file file://cloudformation.yaml
Waiting for changeset to be created..
Failed to create the changeset: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state Status: FAILED. Reason: Transform AWS::Include failed with: The specified S3 object's content should be valid Yaml/JSON

Wait … what ?! Is that telling me that the s3.yaml file is not valid?? But .. when I lint my yaml file, it says it’s valid!!

Well well… I’m actually not sure if it’s a bug or not (although it smells like one), but basically you cannot use the short notation of the Cloudformation intrinsic functions.

You can’t use !Ref BucketName in this case.

Instead the correct snippet would be:

AppBucket:
  Type: "AWS::S3::Bucket"
  Properties:
    BucketName:
      Ref: BucketName

Try that, and let me know on Twitter!

Cannot copy AMI to another account due to BillingProduct code .. say what?

Note that I’m using Windows AMIs out of … obligation?

tl;dr Because it’s a Windows AMI, you would need to first spin up a new instance from that shared AMI, then create a new AMI in the destination account from that instance.

So like me … you’ve been using Windows Amazon Machine Images.

And you’re looking at copying your freshly built image across to another AWS account, but the copy is failing with:

Images with EC2 BillingProduct codes cannot be copied to another AWS account.

Well .. you’re using Windows, you’re on your own :P

Nah alright, I’ll help you out here.

Basically that message means that you’re trying to copy a Paid AMI from the marketplace or a Windows AMI!

That’s good to know .. but how do you create an AMI for those bad boys then?

Well, you can still launch an instance from that shared AMI, then create an AMI from that EC2 Instance in the destination account (remember, if you’re using Windows, make sure you sysprep the machine).
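
If you want to script that, a rough AWS CLI sketch might look like this (the IDs, the instance type and the AMI name are all placeholders, and the sysprep step is on you):

~ $ aws ec2 run-instances --image-id <shared-ami-id> --instance-type t2.medium --subnet-id <subnet-id>
# ... sysprep the instance if it's Windows, stop it, then:
~ $ aws ec2 create-image --instance-id <instance-id> --name "windows-base-copy"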

Now you might be asking…

Can’t I just share a snapshot of the EBS volume of the origin instance to the other account, copy that snapshot and create an AMI out of it?

Well… yes you can. For Windows AMIs though, you will have a tough time!

You sure can do this, but when you create an image from the snapshot, the AMI platform will show Other Linux rather than Windows!!!

According to Amazon documentation (as of 2017-04-17), you can only create a Windows AMI out of an actual EC2 Instance!

Because I love pushing boundaries, I went further than this, and actually tried to spin up an EC2 Instance from that Other Linux AMI.

The instance booted!!! And although I don’t get the System Log or the Get Windows Password feature, because AWS thinks it’s a Linux machine, I was able to RDP into the instance using the origin instance’s Windows password!

Cherry on the cake: as AWS treats that instance as a Linux instance, billing is done accordingly!

Now.. of course, I do not recommend you doing this, nor do I know what this means for the Windows licensing etc.

So don’t do it at home!

The End.

Error when using Cloudformation package for Lambda functions?

If you’re like me, and you’ve been using Lambda, you would have been delighted when you found out that Cloudformation now had a transform for Lambda.

This post won’t go into much detail on how to implement a deployment using this Transform feature.. instead, it will go through one of the issues I had when using it.

If you’re here, that’s because you’ve probably hit an issue that I had:

'NoneType' object has no attribute 'get'

What does that even mean !? Not a lot indeed ..

So I ran the aws cloudformation package command with the --debug flag, and this line stood out:

[...]
2017-04-16 18:28:12,761 - MainThread - botocore.args - DEBUG - The s3 config key is not a dictionary type, ignoring its value of: None
[...]

But really!? I interpreted this as meaning the S3 bucket I specified to the package command didn’t exist .. but it does..

I’ve been using Cloudformation for a while and I’ve been relying a lot on aws cloudformation create-stack to validate my templates as I deploy them.. I just didn’t see great value in using aws cloudformation validate-template up to now..

Yes, in a CI, it would be nice to separate the validation step from the deployment step, but really … I didn’t see much value, as create-stack or update-stack would run the validation anyway!
I know .. I could have at least linted my YAML but hey! I didn’t!

Well … now I found a good reason for it..

The issue was that my YAML was invalid! Of course!!

Guess what … I’ll be sure to at least lint my YAML now!!!
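
For reference, this is the kind of check I mean (the template file name is just an example, and yamllint is a separate install):

~ $ yamllint cloudformation.yaml
~ $ aws cloudformation validate-template --template-body file://cloudformation.yaml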

Finding Amazon Linux AMI IDs for Cloudformation Mappings

So … this post came about because I use Cloudformation for all my infrastructure in AWS.

When you create an EC2 instance, you need a base AMI, and for all my tests I usually just use the Amazon Linux AMI.

Now, the issue here is that an AMI is region-bound, so the Amazon Linux AMI that Amazon provides has a different ID in each AWS region.. so usually I would use a Mappings configuration in Cloudformation to specify the right Amazon Linux AMI ID according to the region I’m playing in.
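
For reference, the kind of Mappings section I mean looks roughly like this; the AMI IDs below are placeholders, not real ones:

Mappings:
  AmazonLinuxAmi:
    ap-southeast-2:
      AmiId: ami-xxxxxxxx
    us-east-1:
      AmiId: ami-yyyyyyyy
Resources:
  Instance:
    Type: "AWS::EC2::Instance"
    Properties:
      # pick the AMI ID that matches the region the stack is deployed in
      ImageId: !FindInMap [AmazonLinuxAmi, !Ref "AWS::Region", AmiId]
      InstanceType: t2.micro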

Now, the name of the AMI is the same in all regions! Thanks AWS for consistency!
Plus, the AMI name is unique per region!

So let me tell you my evolution on finding the AMIs for each region.

1. Find a cloudformation template that uses Cloudformation Mappings for Amazon Linux AMI

That’s years ago! Don’t judge me :D

So here, I would first go on Google, and try to find a Cloudformation sample that uses Cloudformation Mappings, then copy / paste the Mappings section.

Pros
uhh… None?

Cons
Well, first of all, it’s very manual.. right.

Also, Amazon delivers new AMIs with updates all the time, but the Cloudformation template you found might not have the latest AMI referenced.

I never thought I would need it again so I didn’t bother bookmarking the link … guess what..
I’ve always needed it..

2. Find the AMI IDs from the console

See how I finally decided to use an AWS tool rather than Google to find the information? Winner!!

I would go into the AWS console, find the AMI I want, filter the AMI console on the name, then switch to each region and filter on the same AMI name.. and copy / paste was my best friend. Ever.

Pros

  • I could see if there was a new Amazon Linux AMI.

Cons

  • Long, painful … in a “my life sucks” kind of way.

3. Using AND the console AND the awscli

Then I decided.. let’s automate this to some level, so I wrote a script that would loop through each region and get the image id based on the AMI name.
Here I would first go in the Console, same as before, and get the AMI name.

Disclaimer: For arguments sake, I didn’t include all the regions here, but there’s an easier way later anyway!

After that, I would run the following script:

for region in ap-southeast-2 us-east-1 us-west-2
do
  echo -n "${region}: "
  aws ec2 describe-images --filters Name=name,Values=<ami_name> --query "Images[0].ImageId" --region $region
done 

--query might sound obscure to you..
Well, this is a jmespath query, which you can read more about here.

4. Step into the future. AWSCLI only solution!

Now now… I’m better than this hey!
Well, awscli has a way to list all the regions.. and I finally decided that I’d had enough of my laziness, and automated the whole shizzle:

for region in $(aws ec2 describe-regions --query "Regions[].RegionName" --output text)
do
  echo "${region}: $(aws ec2 describe-images --owners amazon --filters Name=name,Values=amzn-ami-hvm-*s3 --query "reverse(sort_by(Images, &CreationDate))[0].ImageId" --output text --region $region)"
done

This snippet loops through all the regions, and for each one returns the latest instance store-backed Amazon Linux AMI, sorted in a descending fashion on the CreationDate.

JmesPath only sorts lexicographically.. it’s not aware of dates.
But this works because the CreationDate field that AWS returns is in ISO 8601 format, which sorts lexicographically in chronological order.

Of course, you can format the output the way you want, and feel free to modify the filter to match the AMI you want.

There you go! Hopefully, that helps, Use it, Love it, Live it!

yarn self-update not working ?!

Hey y’all,

After a week of using YARN, I have to say I’m still happy with it.
A few things that I still can’t get around / am not fully happy with are yarn self-update, and yarn global add, which doesn’t actually globally add binaries but adds them into the user’s yarn-cache folder.

But let’s focus on the problem at hand: the self-update feature.

If you are using yarn, version < 0.16.x, you might receive the following error when trying to update it:

10:25 $ yarn self-update
yarn self-update v0.15.1
error OAuth2 authentication requires a token or key & secret to be set
[...]

If you do encounter this error, just use npm to upgrade yarn to the latest version; subsequent releases of yarn have a fix for self-update…
Yes, I know .. what an irony.. Anyway:

10:25 $ npm -g install yarn
/usr/local/bin/yarnpkg -> /usr/local/lib/node_modules/yarn/bin/yarn.js
/usr/local/bin/yarn -> /usr/local/lib/node_modules/yarn/bin/yarn.js
10:29 $ yarn version
yarn version v0.16.1

Now you can enjoy the yarn self-update!!!

First glance at Yarn: a new javascript package / dependency manager

Being subscribed to the Javascript Weekly newsletter, I received an email last week about a new node package manager: Yarn.

Built by Facebook, and knowing how good the tools they build are, I decided to give it a go.
To be honest, NPM slowness was really starting to frustrate me..

I won’t cover how to migrate from NPM to Yarn as it’s been covered by the Yarn docs already.

So, I did a quick test this morning and timed it.

Running yarn install vs. npm install for the first time today got me these results:

$ time yarn install
[...]
real 0m32.301s
user 0m15.931s
sys 0m8.744s
~/application/ $ time npm install
[...]
real 0m44.660s
user 0m19.924s
sys 0m8.410s

Quite a significant difference already hey!? More than 12 seconds faster!

Then I went on and ran those commands a second time:

~/application/ $ time yarn install
[...]
real 0m1.468s
user 0m0.589s
sys 0m0.144s
~/application/ $ time npm install
[...]
real 0m6.274s
user 0m4.210s
sys 0m0.663s

Much much faster!

Now, I often use the help; being old (yeah … right …), I do sometimes forget the subcommands that I don’t often use. Here are the timings:

~/application/ $ time yarn help
[...]
real 0m0.991s
user 0m0.424s
sys 0m0.095s
~/application/ $ time npm help
[...]
real 0m2.199s
user 0m0.808s
sys 0m0.178s

Convinced yet?

One caveat is that Yarn (v0.15.1) doesn’t seem to include a search subcommand like NPM does.