Terraform tricks to cope with conditionally created resources

Terraform is a great tool, and we use it extensively to spin up resources in AWS. It’s very easy to get started, the documentation is good, and once you have built yourself a development environment it’s just a matter of changing a few config settings to give yourself staging and production environments too.

It gets trickier when the different environments are not exact copies of each other. For example, in development you might want to create an SNS topic to use for testing, whereas in staging and production you might want to use an external one.

Creating resources in some environments and not others can be controlled using the “count” parameter on a resource. So for an SNS topic that should only exist in development, we can set an sns_count variable to 1 in development and 0 elsewhere, and use it to control whether the topic is created.
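For example (a minimal sketch – the variable name matches the one used below, but the per-environment tfvars file is just illustrative), the variable can default to 0 and be overridden in development:

variable "sns_count" {
  default = 0
}

# dev.tfvars
sns_count = 1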

But suppose we want to use the ARN of the created topic if it exists, or a fixed ARN of an external topic otherwise. Now it becomes even harder. Suppose that we are creating the SNS topic like this:

resource "aws_sns_topic" "sns_topic" {
  name = "our_topic"
  count = "${var.sns_count}"
}

and we have a variable topic_arn that will contain a fixed topic ARN to use if we have not created one. Terraform does not have any general conditional logic, although it does have a simple ternary expression and a coalesce function, which takes the first non-empty string from a list. So you might think that the following string interpolation would work:

"${coalesce(aws_sns_topic.sns_topic.arn, var.topic_arn)}"

Unfortunately it doesn’t, because in the case where we do not create the topic, Terraform complains that there is no arn attribute.

So, to get around this we use this syntax instead:

"${coalesce(join("", aws_sns_topic.sns_topic.*.arn), var.topic_arn)}"

The .*.arn syntax returns a list of the ARNs of all the created topics, which will contain either one ARN or none. We flatten this list to a string with the join function and, finally, it works as we want. Phew!
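The resulting expression can then be used wherever the topic ARN is needed – for example, in an output (a sketch; the output name is arbitrary):

output "topic_arn" {
  value = "${coalesce(join("", aws_sns_topic.sns_topic.*.arn), var.topic_arn)}"
}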

Dev meeting – IntelliJ IDEA XQuery plugin and AWS Lambda

Reece talked about the work that he’s been doing on an improved XQuery plugin for IntelliJ IDEA. This was inspired by some annoyances he was encountering with the existing XQuery plugin – although it’s generally good, it doesn’t have complete support for MarkLogic’s brand of XQuery, and it doesn’t correctly parse all XQuery.

Reece’s implementation covers more areas of the XQuery language, has better error reporting for issues in your XQuery code, can check for file encoding issues, supports Find Usages, and has a number of other features.

Reece is uploading his XQuery plugin to the JetBrains repository this weekend, so it should soon be publicly available.

Joe and Chris talked about the work that we’ve been doing with AWS Lambda. AWS Lambda is what Joe hoped cloud computing would be all along. AWS EC2 is a server in the cloud, but not fundamentally different from having a server on your own network. AWS Lambda, by contrast, is effectively serverless – you upload a piece of code, and it runs without you worrying about the infrastructure at all. That said, the underlying infrastructure can still make a difference to you, in terms of the latency of invoking a function when it hasn’t recently been called.

We are using AWS Lambda as part of a filter for an Amazon SQS queue. We’re testing it using an SBT plugin – SBTMocha. The challenges with testing it are:

  • When uploading the lambda, you don’t need the AWS libraries. However, when you’re testing it locally, you need to have them available. We dealt with this by separating the AWS-specific content from the testable content, and only testing the interesting part that doesn’t depend on AWS (see the sketch after this list).
  • We have a number of environments: dev, qa, production. The configuration for the function needs to vary between each of them. The standard way of dealing with this is to include a “conf” directory as part of the upload, from which the lambda reads its configuration. Our build process in Jenkins constructs an appropriate zip file for the environment.
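As a rough sketch of that separation – assuming the function is written in JavaScript, since we test it with mocha, and with all file and function names here being hypothetical – the Lambda entry point stays a thin wrapper around a pure module:

// filter.js – pure logic with no AWS dependencies, so it can be unit tested locally
exports.shouldForward = function (message) {
  return message != null && message.type === "interesting";
};

// index.js – the Lambda entry point; only this file touches AWS
var filter = require("./filter");
exports.handler = function (event, context) {
  // pull the message out of the incoming event, decide, then forward it using the AWS SDK
  if (filter.shouldForward(event)) {
    // ... send the message on using the AWS SDK ...
  }
  context.succeed();
};

// test/filter-test.js – run by the SBT mocha plugin, with no AWS libraries on the path
var assert = require("assert");
var filter = require("../filter");
describe("filter", function () {
  it("forwards interesting messages", function () {
    assert.ok(filter.shouldForward({ type: "interesting" }));
  });
});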

Using Lambda has been very useful for us – it has removed the need for a server.

Dev meeting – Terraform and CloudFormation

In our dev meeting this week, we discussed Terraform and CloudFormation.

Terraform is a tool for managing infrastructure. We’ve previously talked about Puppet, Ansible, and so on – it’s not the same as them, but is instead the step before those tools. It creates the cloud servers that can then be set up by Puppet or Ansible (although Ansible can, for example, be used to create cloud servers too, and Terraform can be used to install software like MySQL).

Terraform isn’t cloud agnostic – it has providers that are specific to a platform. This means that you can take full advantage of those platforms – but means that it isn’t trivial to switch from one to another. We’ve only used it with AWS, so far.

You write “.tf” files to configure your infrastructure, in a declarative, JSON-esque format. You list things like the servers you want, the groups they’re in, and so on. Terraform will generally work out for itself the order in which these things need to be created. You can apply a prefix to the things it generates, to make it easier to run it repeatedly against the same account.
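A minimal sketch of what such a file looks like (the AMI ID and names are placeholders, and a “prefix” variable is assumed to be declared elsewhere):

resource "aws_security_group" "web" {
  name = "${var.prefix}-web-sg"
}

resource "aws_instance" "web" {
  ami                    = "ami-12345678"   # placeholder AMI
  instance_type          = "t2.micro"
  vpc_security_group_ids = ["${aws_security_group.web.id}"]
}

# Because the instance references the security group, Terraform knows to create the group first.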

You install Terraform locally (typically by downloading a binary – Terraform is written in Go), then point it at the configuration files you have created (a module full of files). You need to make your AWS API key available to it, via a Terraform-specific mechanism (e.g. putting it in a tfvars file). Then you proclaim “Behold! I am the Terraformer!” (at least, Chris does), and execute Terraform. It then writes out a state file, describing what has been created – and this should be checked in to Git. When you next run Terraform, it will check your Terraform state file, and make appropriate changes to your system based on it. It will check the local state against the cloud state, so if you’ve made manual changes to the cloud servers, it will revert them.
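Concretely, that workflow looks something like this (a sketch – the variable names and tfvars file are illustrative, and exact flags vary between Terraform versions):

# production.tfvars – kept out of version control
access_key = "AKIA................"
secret_key = "................"

terraform plan  -var-file=production.tfvars    # show what would change
terraform apply -var-file=production.tfvars    # make the changes; updates terraform.tfstate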

Having created the servers, Terraform will run a “provisioner” to do initial setup the first time. This should then hand over to another configuration system. Out of the box, it only supports Chef – and running scripts. In our case, we’ve needed to install Puppet on the server via a script, and then invoke that newly installed Puppet so that the server can configure itself. The provisioning step should be as minimal as possible, with Puppet doing all the subsequent configuration. Once Puppet is up and running, it will poll the Puppet master to keep itself up to date and configured.
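A sketch of that pattern (the connection details, package name and Puppet master hostname are all placeholders):

resource "aws_instance" "app" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = "${file("deploy_key.pem")}"
  }

  # Keep this step minimal: install Puppet, then let Puppet do everything else
  provisioner "remote-exec" {
    inline = [
      "sudo yum install -y puppet",
      "sudo puppet agent --server puppet.example.com --onetime --no-daemonize"
    ]
  }
}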

We’ve generally had a good experience with Terraform – all of our pain has come from Puppet, not from Terraform.

CloudFormation is similar to Terraform, but is AWS-specific. It has a pretty GUI that shows all the components that will be set up. We have been using it for setting up MarkLogic clusters – it’s easiest to start with the standard MarkLogic cluster CloudFormation template, and edit that. It supports some limited conditional logic in the templates – such as creating different things based on region. You can open a template with the live state of a current stack – which provides an easy way of altering the configuration of that stack. There are example CloudFormation templates on the AWS marketplace. The MarkLogic template that we’re using creates servers using MarkLogic AMIs.
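As an illustration of that conditional support (a fragment made up for illustration, not taken from the MarkLogic template), a template can declare a condition on the region and attach it to a resource:

"Conditions": {
  "InEUWest1": { "Fn::Equals": [ { "Ref": "AWS::Region" }, "eu-west-1" ] }
},
"Resources": {
  "RegionalBucket": {
    "Type": "AWS::S3::Bucket",
    "Condition": "InEUWest1"
  }
}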

We all agreed that using either of these tools was better than managing things via the console – and of the two, we broadly preferred Terraform. We are likely to use Terraform more in future. However, for simple things, we are likely to use Ansible in preference to either – because it can do the creation of AWS servers, as well as doing their configuration, all within one tool.

Silly Scala tricks – the Ternary operator

Scala doesn’t have the ternary operator:

  test ? iftrue : iffalse

preferring to use if:

  if (test) iftrue else iffalse

Dan and I (Inigo) were on a long train journey together and decided to see if we could use Scala’s flexible syntax and support for domain-specific languages to implement our own ternary.

Our best shot is:

// Holds both branches unevaluated (by-name parameters), so only the chosen branch ever runs
class TernaryResults[T](l: => T, r: => T) {
  def left = l
  def right = r
}

// Adds ?= to Boolean; because it ends in '=' it binds more loosely than ⫶ below
implicit class Ternary[T](q: Boolean) {
  def ?=(results: TernaryResults[T]): T = if (q) results.left else results.right
}

// Adds a colon-like ⫶ operator that pairs up the two branches
implicit class TernaryColon[T](left: => T) {
  def ⫶(right: => T) = new TernaryResults(left, right)
}

(1+1 == 2) ?= "Maths works!" ⫶ "Maths is broken"

This is pretty close – it abuses Unicode support to get a “colon-like” operator, since : already has a meaning in Scala, and it needs to use ?= rather than ? because operators ending in = have the lowest precedence, so the right-hand side is grouped together before ?= is applied. Still, it’s recognizably a ternary operator, and it evaluates its branches lazily.

VPN and Git clients

In this week’s developer meeting, we discussed whether we wanted to set up an office VPN, and then the merits of the various Git clients we use.

At present, we use SSH and port forwarding to get access to office machines and services from outside. This is a bit tricky to set up, and requires that forwarding for each internal server is set up independently by each developer. Simon made the case that a full VPN would make connections to the office simpler, and would also make it easier to access those of our external servers where access is restricted by IP address. After some discussion of the various VPN options, we decided that Joe would set up OpenVPN next week.
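For context, the current approach means each developer maintains port forwards along these lines (the host names and ports are purely illustrative):

ssh -L 8080:wiki.internal:80 -L 5432:db.internal:5432 developer@office-gateway.example.com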

After that, we moved on to Git clients. We don’t have a single standard client, so we each described the strengths and weaknesses of the ones we use:

  • TortoiseGit – Windows only. Generally good, fairly simple to use but can do complicated things too. Has good Windows Explorer integration for right-click – the only client that does, and this is very useful.
  • IntelliJ IDEA/Rider integrated client – this is very good for projects for the languages it supports, because it has language specific diffing/merging and pre-commit inspections. It is also convenient that it’s integrated in the IDE. It’s a bit heavyweight to use when quickly checking something in an unfamiliar repo.
  • Gitk+command line – Gitk is a Git tree visualization tool – you use the command line for actual changes. It is good for use on Linux in conjunction with the command line. It can be set up to use alternate diff/merge tools, such as the IDEA differ. A major advantage is that the command line can do everything, and so users of other tools will always have to switch to the command line anyway to do complex tasks.
  • Kraken – Is pretty, starts quickly, and asks for an SSH password only once per invocation. We weren’t sure of the extent to which it could do complicated things, but it has some mechanism for undoing changes. We thought it was not as good as IDEA for Java/Scala projects, but good for other languages. It runs on Windows, Linux, and OS X.
  • SourceTree – Its most distinctive feature is a UI for staging individual hunks and lines within a file – but the app is unstable, doesn’t cope well with separate filesystem changes, and is sluggish when the repository is large. It is okay for browsing history, but has limited searching. Broadly not recommended.
  • GitHub – a simple client, but it can be used for any Git repository, not just GitHub. It’s too limited for a developer to use as their main client, but might be useful for non-developers.

We didn’t come to any firm conclusions about whether any one tool was the best.

Developer meeting – Gitlab

This week we talked about Gitlab, and our migration from using Sourcerepo for Git hosting to using our own self-hosted Gitlab on Amazon EC2.

We’re switching to Gitlab because it has a better UI, has a REST API for creating and managing projects, and has some nice features like protected branches and snippets.

Our migration process was to write some Scala code that used Selenium to scrape our existing list of projects from Sourcerepo, and then used the Gitlab API to create new projects in Gitlab. It also migrated users and transferred issues from our Trac issue management system into the Gitlab issue management.
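The project-creation part boils down to a POST against the Gitlab projects API. A minimal sketch in Scala, using only the JDK’s HttpURLConnection (the /api/v4 path shown is the current one – older Gitlab installs used /api/v3 – and the token handling is simplified):

import java.net.{HttpURLConnection, URL, URLEncoder}
import java.nio.charset.StandardCharsets

object CreateGitlabProject {
  // Creates a project with the given name; returns the HTTP status code (201 on success)
  def create(baseUrl: String, privateToken: String, name: String): Int = {
    val conn = new URL(s"$baseUrl/api/v4/projects").openConnection().asInstanceOf[HttpURLConnection]
    conn.setRequestMethod("POST")
    conn.setRequestProperty("PRIVATE-TOKEN", privateToken)
    conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded")
    conn.setDoOutput(true)
    val body = "name=" + URLEncoder.encode(name, "UTF-8")
    conn.getOutputStream.write(body.getBytes(StandardCharsets.UTF_8))
    conn.getResponseCode
  }
}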

We discussed using the Gitlab “merge request” process in place of our existing Git Flow approach to code review. We didn’t reach any firm conclusions, but we’re likely to discuss this again in a future developer meeting.

We discussed other uses for the API:

  • It potentially allows for better integration with our Jenkins continuous integration server. We would need to write code for this. There is existing Jenkins code that will scan a GitHub account and create a Jenkins build for every project it finds containing a “Jenkinsfile” pipeline configuration – we could use this as a basis. This would mean that we wouldn’t need to configure Jenkins projects explicitly: we could just create a new repo based on our standard project template, and the Jenkins project would be created automatically.
  • In addition to the normal Gitlab backup process, we can use the API to maintain a local copy of all the Git repositories on the server. The API allows us to enumerate all of the repositories and create directories for them, and then a scheduled script can fetch the latest contents of each (see the sketch below).
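A rough sketch of that scheduled job, assuming jq is available (the host name is a placeholder, and pagination is ignored):

#!/bin/bash
# Mirror every repository the access token can see into ./backups
GITLAB="https://gitlab.example.com"
curl -s --header "PRIVATE-TOKEN: $GITLAB_TOKEN" "$GITLAB/api/v4/projects?per_page=100" \
  | jq -r '.[].path_with_namespace' \
  | while read -r repo; do
      if [ -d "backups/${repo}.git" ]; then
        git --git-dir="backups/${repo}.git" remote update
      else
        mkdir -p "backups/$(dirname "$repo")"
        git clone --mirror "git@gitlab.example.com:${repo}.git" "backups/${repo}.git"
      fi
    done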

Ansible and other orchestration tools

In our developer meeting, Richard B talked about orchestration tools, reporting back on his evaluation of Chef, Puppet, Ansible and Salt.

The full details are in the slides that he produced, attached below. The high-level summary is that he thinks that Ansible is most appropriate for our purposes. Ansible playbooks let us set up repeatable scripts for provisioning a server – so the server setup instructions that we currently write can be replaced by an executable and testable script. The main downsides for us are that Amazon Linux doesn’t work well with Ansible, and that we will need to create modules from scratch for some of the software that we use such as MarkLogic.
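For a flavour of what an Ansible playbook looks like (a minimal made-up example, not one of our real playbooks):

---
- hosts: webservers
  become: yes
  tasks:
    - name: Install nginx
      yum:
        name: nginx
        state: present
    - name: Ensure nginx is running and starts on boot
      service:
        name: nginx
        state: started
        enabled: yes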

Richard has set up one of our current projects to work with Ansible, and we will experiment with using Ansible for new projects.

Slides for the Ansible dev talk (PDF)

Developer meeting – streams and Vagrant

Rhys talked about Java and Scala streams. Using Java 8, you can iterate over a tree of files using a Java Stream. With Scala 2.11, the Java stream isn’t compatible with Scala streams; however, all of this will change in Scala 2.12, which brings interoperability between Scala and Java streams. Reece also talked about how Scala 2.12 fully supports Java 8, and requires Java 8. This includes using Java 8 lambdas, so it doesn’t need to create an interface wrapper and a separate class file for every lambda.
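A small illustration of the file-tree case (Scala 2.11 here; rather than using the Java stream directly, we go via its iterator):

import java.nio.file.{Files, Path, Paths}
import scala.collection.JavaConverters._

object ListScalaFiles {
  def main(args: Array[String]): Unit = {
    // Files.walk returns a java.util.stream.Stream[Path]; convert via its iterator
    val walk = Files.walk(Paths.get("."))
    try {
      val scalaFiles: List[Path] = walk.iterator().asScala.filter(_.toString.endsWith(".scala")).toList
      scalaFiles.foreach(println)
    } finally {
      walk.close()
    }
  }
}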

Simon talked about Vagrant – a tool for managing the lifecycle of virtual machines. It uses a “box” as a base image, and then has a “provider” such as AWS, Docker or VirtualBox. A Vagrantfile represents a base box plus a layer of configuration on top of it – to do things like installing everything your virtual machine needs. Vagrant base boxes can be stored anywhere URL-accessible, although there are repositories for them. You can make Vagrant automatically set up port-forwarding to your local machine. It’s important to get the content that doesn’t change into the base box, to improve performance. Packer is a separate tool that produces something more like a deployable production version of your system, rather than being dev-focussed like Vagrant. Vagrant Share allows you to forward connections from elsewhere to your local Vagrant instance. Those people who are currently juggling a number of virtual machines (e.g. via VirtualBox) should definitely look at using Vagrant, and other people should consider it.
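A minimal Vagrantfile looks roughly like this (the box name and forwarded port are illustrative):

# Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"                        # the base box
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.provision "shell", inline: "apt-get update"    # the layer of configuration on top
end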

Developer meeting – mobile development

In our developer meeting, we discussed mobile development. We’re not primarily mobile developers, but modern web applications need to work effectively on mobile devices, and we have done some mobile-specific projects.

We talked about who cared about mobile browsers and responsive design – and came to the conclusion that everyone did. It’s not just people actively using a mobile device, but it’s also things like displaying on projectors at client meetings, docking browsers, and Chromebooks and other very small laptops.

We mostly use a responsive-design CSS framework like Bootstrap, which does a lot for us but not everything. There is a “bootlint” tool that checks whether we are using Bootstrap classes effectively. We can also use Chrome and Firefox to simulate mobile devices. Actually accessing the application from real mobile devices is useful too – but a bit of a pain for C# projects, because you have to configure the embedded IIS to allow access from outside the computer it’s running on, which isn’t the default.

We’ve had to do work in the past to fix the display of items that appear on “hover” – e.g. making them appear on click instead – but this may conflict with existing click behaviour. Right-click is effectively unavailable too. Gestures are available, but not widely used.