Cloudformation, ECR and role circular dependency ...

So … after a few months of abstinence, I’m back into the AWS space.

And while working on uSpot, since I use AWS more than Google Cloud, I decided to move my services from GCE to AWS ECS.
Of course, being an automation engineer, or at least I tend to believe so :D, even my own infrastructure needs to be built properly … with CI / CD, infrastructure-as-code, etc.

Anyway, yesterday I went on to create a CloudFormation template for my ECR repository and the role that can push to it, and bumped into a circular dependency.
If you look at how to create an ECR repository, you would want to specify an IAM user / role that can access that repository.

Now, because I try to be secure-minded, I want to limit the role I’m creating to just that single repo, through its policy.
Here’s roughly how it would look:
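(A minimal sketch; PushRole, MyRepo, the policy name and the exact action list are placeholders of mine, not the original gist.)

{
  "Resources": {
    "PushRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Statement": [ {
            "Effect": "Allow",
            "Principal": { "Service": "ec2.amazonaws.com" },
            "Action": "sts:AssumeRole"
          } ]
        },
        "Policies": [ {
          "PolicyName": "PushToMyRepo",
          "PolicyDocument": {
            "Statement": [ {
              "Effect": "Allow",
              "Action": [ "ecr:PutImage", "ecr:InitiateLayerUpload", "ecr:UploadLayerPart", "ecr:CompleteLayerUpload" ],
              "Resource": { "Fn::GetAtt": [ "MyRepo", "Arn" ] }
            } ]
          }
        } ]
      }
    },
    "MyRepo": {
      "Type": "AWS::ECR::Repository",
      "Properties": {
        "RepositoryPolicyText": {
          "Statement": [ {
            "Effect": "Allow",
            "Principal": { "AWS": { "Fn::GetAtt": [ "PushRole", "Arn" ] } },
            "Action": [ "ecr:PutImage", "ecr:InitiateLayerUpload", "ecr:UploadLayerPart", "ecr:CompleteLayerUpload" ]
          } ]
        }
      }
    }
  }
}

The role’s inline policy needs the repository’s ARN, and the repository’s policy needs the role’s ARN: a loop.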

Looking at that template, it makes sense that we get a circular dependency, as the role and the ECR resources are referencing each other.
Luckily, Cloudformation allows you to create the role policy document separately from the actual role.

That’s it!! That’s the way around it! ECR depends on the role, not the policy!
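Sketched out with the same placeholder names as above, the role’s inline policy moves into its own AWS::IAM::Policy resource:

{
  "Resources": {
    "PushRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Statement": [ {
            "Effect": "Allow",
            "Principal": { "Service": "ec2.amazonaws.com" },
            "Action": "sts:AssumeRole"
          } ]
        }
      }
    },
    "MyRepo": {
      "Type": "AWS::ECR::Repository",
      "Properties": {
        "RepositoryPolicyText": {
          "Statement": [ {
            "Effect": "Allow",
            "Principal": { "AWS": { "Fn::GetAtt": [ "PushRole", "Arn" ] } },
            "Action": [ "ecr:PutImage", "ecr:InitiateLayerUpload", "ecr:UploadLayerPart", "ecr:CompleteLayerUpload" ]
          } ]
        }
      }
    },
    "PushRolePolicy": {
      "Type": "AWS::IAM::Policy",
      "Properties": {
        "PolicyName": "PushToMyRepo",
        "Roles": [ { "Ref": "PushRole" } ],
        "PolicyDocument": {
          "Statement": [ {
            "Effect": "Allow",
            "Action": [ "ecr:PutImage", "ecr:InitiateLayerUpload", "ecr:UploadLayerPart", "ecr:CompleteLayerUpload" ],
            "Resource": { "Fn::GetAtt": [ "MyRepo", "Arn" ] }
          } ]
        }
      }
    }
  }
}

Now ECR depends on the role, the role depends on nothing, and the policy depends on both: no more loop.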

Here’s the working gist.

Enjoy!

Getting the ID of an AMI from its name in each region

Good morning folks,

It’s been years that I’ve been thinking of writing this script, but I never bothered, to be honest :D
But today it annoyed me for the thousandth time, so here I go.

Let’s say you want to create a Mappings section in CloudFormation for the Amazon Linux AMI, to be used by your instances.
You will need to specify the AMI ID for each region in that Mappings section, but it’s not that easy to get the AMI ID in every single region.
You would need to switch regions and check the ID, and you would need to know the name of each region …

So you might go into the AWS Console, search for the AMI using its name as a filter, and change regions to copy / paste the AMI ID of the Amazon Linux image.
Yes… that’s what I did… Sounds awful? Because it is!!!

So here’s a script I’m now going to use every time I need to do this:

AMI_NAME="amzn-ami-hvm-2016.03.1.x86_64-gp2"
for i in $(aws ec2 describe-regions --query "Regions[].RegionName" --output text)
do
echo -n "$i "; AWS_DEFAULT_REGION=$i aws ec2 describe-images --filters "Name=name,Values=${AMI_NAME}" --query "join(',', Images[].ImageId)" --output text
done
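Each output line pairs a region name with the matching AMI ID, ready to paste into your Mappings. As a sketch (the map key and AMI IDs below are made up):

"Mappings": {
  "AmazonLinuxAMI": {
    "us-east-1": { "AMI": "ami-aaaa1111" },
    "eu-west-1": { "AMI": "ami-bbbb2222" }
  }
}

You can then look up the right one per region with { "Fn::FindInMap": [ "AmazonLinuxAMI", { "Ref": "AWS::Region" }, "AMI" ] }.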

Boom! Enjoy fellas!

NodeJS & Mongoose $addToSet duplicates on objects

Hi Fellow Surfers,

Today I’ll talk a bit about Mongoose, a NodeJS module that lets you manage your MongoDB models / schemas.

I’ve been working with NodeJS for the last few months when I started my journey on creating my first iPhone app: uSpot.

NodeJS and MongoDB are both very new to me, so of course I’ve been hitting a lot of blockers due to my inexperience!

One of them is the use of $addToSet operator that MongoDB provides when updating a document.

This operator adds a value to an array field on your document only if that value isn’t already present.

Let’s say you had a shopping list of items: [ “bananas”, “apples” ]. If you were to update the shopping list by adding an item called “bananas” using $addToSet, Mongo would check whether the value already exists, and only add it to the array if it doesn’t.
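With plain strings, that just works. Here’s a minimal sketch, assuming a hypothetical ShopList model whose items field is an array of strings:

// ShopList is a placeholder Mongoose model; "groceries" is a made-up list name.
ShopList.update(
  { "name": "groceries" },
  { "$addToSet": { "items": "bananas" } },
  function (err) {
    // "bananas" is appended only if it was not already in the array
  }
);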

So far, so good! Now, let’s say you want to use a Javascript object instead.

For example, let’s say you have an array like this:

[
  { "name": "bananas", "qty": 2 },
  { "name": "apples", "qty": 4 }
]

If you were to do:

shoplist.update({
  "$addToSet": {
    "items": { "name": "apples", "qty": 4 }
  }
}, function(err, list) { ... });

The resulting array is:

[
  { "name": "bananas", "qty": 2 },
  { "name": "apples", "qty": 4 },
  { "name": "apples", "qty": 4 }
]

Why … wait … what?! Now, that’s unexpected!

Guess why that happens?
It’s because Mongoose, by default, creates a new MongoDB ObjectId (the hidden _id field) every time you pass it a Javascript object to store in a document’s array field. Since the two “apples” subdocuments end up with different _id values, $addToSet sees them as distinct and happily adds the duplicate.

Now how to go around it?
You can tell Mongoose not to create a new ObjectId, by making sure your Mongoose schema looks as follows:

var mongoose = require('mongoose');

var shopListSchema = new mongoose.Schema({
  "name": { type: String },
  "items": [
    {
      "name": { type: String },
      "qty": { type: Number },
      "_id": false
    }
  ]
});

Setting the _id property to false on the subdocument gives you the expected result!

Time to get back to code!

Amazon T2 instances

Amazon Web Services got a new type of instances: T2!

This new class replaces the old T1 and M1s.
You gain more performance (thanks to CPU bursts up to 3.3 GHz) and more memory.
Find more information on the AWS blog.

But there’s a catch!
It’s not that simple to move from the old generation to T2!

You know how Amazon runs on Xen, and Xen provides two types of virtualization: HVM (Hardware-assisted Virtual Machine) and PV (ParaVirtual)…. right.

T1/M1 instances are PV and T2 are HVM. And guess where that is defined?
In the AMI, which may make some of you smile because it’s obvious it would be there (not being sarcastic here).
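By the way, if you want to check which flavour a given AMI uses, the CLI will tell you (the AMI ID below is a placeholder):

aws ec2 describe-images --image-ids ami-12345678 \
  --query "Images[].VirtualizationType" --output text

It prints either hvm or paravirtual.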

You were thinking of migrating to T2?
Think twice and evaluate how much work that is going to be for you.

The good thing is you can do it, but it’s a loooot of work.
You could create a snapshot of the root device of your PV instance and copy the files over to the HVM instance, or use any other solution you might have…

Unless, and hopefully, you made your instance installation reproducible, using CloudFormation, a package that installs all the dependencies, or a config management system (Puppet or Chef). In that case you could just change the type of the instance and its underlying AMI, recreate the instance, and the job would be done in minutes!

That’s where Amazon wins again, if you used the tools it provides you properly!

AWS: CanonicalHostedZoneName was not found for resource... err?!

Hi All,

Happy new year!
First post for 2014, how exciting!!! no… not really …
Let’s get to it!

If you’ve been using CloudFormation for a while, you might have stumbled upon this error before:

Attribute: CanonicalHostedZoneName was not found for resource: ELB

Yeah cool … but what does that mean? Seriously … (<– one of my catch phrases according to my partner’s brother :D)

Well, it means what it says!
The LoadBalancer resource that you defined in the CloudFormation stack does not have this attribute.

You’re probably trying to create an AliasTarget DNS record for your ELB DNS name.

I know what some of you would say:

But but, dude, I looked at AWS docs, and it says to use CanonicalHostedZoneName, so seriously, I’m no idiot, look for yourself: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-route53-aliastarget.html

And I would say that you read correctly, but you did not check out the right doc.
Here’s another link that says:

If you specify internal for the Elastic Load Balancing scheme, use DNSName instead. For an internal scheme, the load balancer doesn’t have a CanonicalHostedZoneName value: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-elb.html

Mwahaha! Gotcha!

You’re creating an internal loadbalancer? Use DNSName instead!
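Here’s a sketch of the alias record, assuming a load balancer resource named ELB and a zone called example.com (both names of my choosing). Note that CanonicalHostedZoneNameID, unlike CanonicalHostedZoneName, is also there for internal load balancers:

"DNSRecord": {
  "Type": "AWS::Route53::RecordSet",
  "Properties": {
    "HostedZoneName": "example.com.",
    "Name": "app.example.com.",
    "Type": "A",
    "AliasTarget": {
      "DNSName": { "Fn::GetAtt": [ "ELB", "DNSName" ] },
      "HostedZoneId": { "Fn::GetAtt": [ "ELB", "CanonicalHostedZoneNameID" ] }
    }
  }
}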

The beauty of it is that you can also use DNSName for a public-facing loadbalancer… so the question is, why do we even have the choice?

Beats me.

AWS: My sad experience with RDS MySQL

Hello folks,

Welcome to my blog … for those who have never been here before.. probably you!
If you’re here, it’s probably because you’re as sad as I am.

I’m going to share my quick RDS experience with you today.

At work, we’re working on getting a new RDS instance ready for a datacenter migration (from a physical datacenter to AWS, woot!).

It’s not happening as easily as I first thought (it never does!).

Anyway, today I crossed paths with the sad man inside me over 3 things you may want to know about:

  • RDS Snapshot restore
  • Cloudformation Fail
  • RDS-as-a-Slave snapshots

RDS Snapshot restore

Do not think of RDS snapshot restore as a normal mysqldump restore!
It’s a snapshot of the RDS volume, just like you would do an EBS volume snapshot.

And you can’t restore an EBS snapshot into the same volume, can you? … No.
You can only create a new volume from an EBS snapshot; that makes sense, right?

Well, it’s the same principle for RDS snapshots! You cannot restore an RDS snapshot into the same instance; you have to create a new one and specify the snapshot that you want to restore.
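With the CLI, that looks something like this (both identifiers are placeholders):

aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier mydb-restored \
  --db-snapshot-identifier mydb-before-migration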

There are a few consequences to know about there. It means that if you have a CloudFormation stack creating your RDS instance, don’t assume an RDS snapshot will be enough to easily restore yesterday’s data (if you ended up deleting the main table, for example).
Quite unfortunate, isn’t it.. You might want to keep a mysqldump running from somewhere too.
Well, you could still delete your current instance and restore into an instance with exactly the same identifier, you might say…

Also, if you created a snapshot from an RDS instance that has IOPS enabled, make sure you restore into an RDS instance that has IOPS enabled, otherwise your CloudFormation stack will fail.
Some of you probably think: “Well, I did it through the AWS Console though”.
Mwahaha, lies I tell ya! Go back into the AWS Console and check your RDS instance; I bet it has exactly the same IOPS setup as the RDS instance from which the snapshot was taken.

CloudFormation fail

Sometimes, for whatever reason, your CloudFormation stack might fail when updating a huge database… and it will end up in the state UPDATE_ROLLBACK_FAILED.

Well … go put your gloves on, get a sandbag and hit it until you bleed, because you’ll need it if that happens on your production database… there’s no way around it! You have to recreate your stack!

Update: Actually there is a way around it, be nice and call AWS support.. and ask for the Cloudformation tech peeps!

RDS-as-a-Slave snapshots

Let’s say now that you created an RDS instance that is a replica of another MySQL instance somewhere, because you might be migrating your application from a datacenter to AWS.

Well, note that if you want to take a snapshot of that RDS instance, you might want to stop the replication (CALL mysql.rds_stop_replication;) and run SHOW SLAVE STATUS \G to record all the master information before taking the snapshot.
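Something like this, from any box that can reach the instance (the endpoint and user are placeholders):

mysql -h mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -u admin -p <<'SQL'
CALL mysql.rds_stop_replication;
SHOW SLAVE STATUS \G
SQL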

Why? I’m glad you asked! Even though that information is saved in the snapshot, as it’s data written to disk, when you restore the snapshot into a new RDS instance, the slave configuration on that instance gets reset!

Yeah … I’ve filed a feature request. Let’s see what happens!

I hope it was informative, and good luck guys!

On the bright side, I still think the AWS techs did a great job on it; they’re still on a big journey, don’t shoot them! We need Cloud providers like them!

/usr/bin/ruby: invalid option -D ... nuuuuhhhh!

Having issues installing your ruby gem?

Well, don’t worry, it’s not your fault!!
But you are doing something wrong … confused?

Let me explain.

You’re probably trying to install a gem via bundle or whatever, and getting this response back:

Gem::Ext::BuildError: ERROR: Failed to build gem native extension.
/usr/bin/ruby extconf.rb
/usr/bin/ruby: invalid option -D (-h will show valid options) (RuntimeError)
extconf failed, exit code 1

Well, relax!! I got that as well, and well … you might have a space in the name of the folder (or a parent folder) where you’re trying to install the gem.

Just remove that space (or replace it with an underscore ‘_’) and magic!
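For example (the paths here are hypothetical):

mv "$HOME/my projects" "$HOME/my_projects"
cd "$HOME/my_projects/app" && bundle install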

Problem solved!

AWS SimpleEmailService and Cloudformation

Note: This document is based on Amazon Linux AMI (CentOS).

Welcome back to my blog!
I’m guessing you’re here because you’re having problems creating, automagically from your CloudFormation stack, the user that your SMTP server will use to authenticate against SES.

Well, you’re at the right place!

You create a user in Cloudformation with something like this:

[...]
"SESUser": {
  "Type": "AWS::IAM::User",
  "Description": "User used to send email through SES",
  "Properties": {
    "Path": "/application/",
    "Policies": [ {
      "PolicyName": "DeveloperSiteIAM",
      "PolicyDocument": { "Statement": [
        { "Effect": "Allow", "Action": "*", "Resource": "*" }
      ] }
    } ]
  }
},
"SESKeys": {
  "Type": "AWS::IAM::AccessKey",
  "Properties": {
    "UserName": { "Ref": "SESUser" }
  }
},
[...]

You refer those in your sasl_passwd as documented by AWS: Postfix Configuration for SES
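Your /etc/postfix/sasl_passwd then contains a line shaped like this (the access key ID below is AWS’s documented example value, and the port and password are illustrative):

[email-smtp.us-east-1.amazonaws.com]:25 AKIAIOSFODNN7EXAMPLE:my-smtp-password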

But when you try to send an email out, you get the following error:

SASL authentication failed; server email-smtp.us-east-1.amazonaws.com[174.129.24.189] said: 535 Authentication Credentials Invalid

Frustrating!!! How could you … not let me pass!?!

Well, relax friends and read again! On that very same page, AWS specifies:

"Your SMTP credentials and your AWS credentials are not the same"

Get it already?

But you can for sure go around that with some scripting!
You can generate a Username / Password to use in your SASL configuration to connect to SES using the keys of that IAM user.

The username is actually the access key id of that user, and the password can be generated like this: SES SMTP Credentials

Err … Java … algorithms … go back to school and learn it!
Ok ok, you can find it here in Ruby (is that any better??).
Well, Ruby is usually installed on Linux by default, and it sure is on the Amazon Linux Image.
So you can use that script there!

And just because it’s Christmas today – yes it IS –
I’m going to give you an example of a cloudformation stack template that uses that script: here.

The files specified in the template are downloaded from S3, so make sure you create a bucket and upload the files under the paths referenced in the template.

That CloudFormation template is also a good example for you to understand cfn-hup configuration and AWS::CloudFormation::Init file templating with Mustache.

I hope this helped solve your issue!


How to get your SES SMTP password from an Amazon secret key

Like a lot of you guys, I’ve been using Amazon Web Services for a bit now.

And now, planning on using SES, I’ve been having issues.
You would think that, following this documentation, it would be easy to set up.

But by default, you would have to create a user via the link that the SES console gives you to create a new IAM user, which will have an SMTP password linked to it.
Problem is, many of you might not want to create a new IAM user that way, or might want to create that IAM user through Cloudformation (see blog post).

Amazon gives you a way to generate that SMTP password from the secret key of an IAM user, but in Java!! Linux does not have Java installed by default, and seriously, what a pain if you have to install Java just for that!

But Linux does, in most cases, have Ruby installed, along with the openssl and base64 libraries!
That’s all we need right here!

So how to generate that SMTP password with ruby? Here you go:

#!/usr/bin/ruby
# Derive an SES SMTP password from an IAM user's secret access key.
require 'openssl'
require 'base64'

secret_key = "Secret Access Key" # put the IAM user's secret access key here
message    = "SendRawEmail"      # fixed message mandated by the SES SMTP scheme
version    = "\x02"              # fixed version byte

# HMAC-SHA256 the message with the secret key, prepend the version byte,
# and Base64-encode the result: that's the SMTP password.
signature = OpenSSL::HMAC.digest(OpenSSL::Digest.new('sha256'), secret_key, message)
smtp_password = Base64.encode64(version + signature)
puts smtp_password

And all done!