Building a CircleCI orb for Elixir projects
This article is directed at anyone who wants to build an orb on CircleCI, a continuous integration platform. I will explain some issues I ran into while creating an orb for Elixir projects, as well as some common behaviour and functionality that you should include in your own orb.
Every time we started a new project, we needed to copy the automation configuration over. If we improved one of the deployment scripts or the CI configuration of a project, that improvement had to be copied to the other projects as well. This is a tedious and error-prone task, and one of our first thoughts when we saw orbs was the possibility of centralizing the core configuration in one place and sharing it across all of our projects.
Please be aware that published CircleCI orbs are public, so any information saved in them is visible to everyone.
Orbs also make it possible to share the configuration with other developers, not only allowing them to benefit from our orb but also to contribute to and improve it.
Elixir is a young language being tried by many newcomers, and one struggle developers face is configuring CI/CD for their projects. Having an orb that has been battle-tested by others lowers this entry barrier.
Building and maintaining a YML file might be easy at the start, but it becomes harder as the file grows. Looking at other orbs on GitHub, we found that splitting the configuration into a folder structure is a good way to keep a YML file maintainable as it grows.
To help us understand the YML configuration defined by the CircleCI team, we installed the CircleCI CLI, which has some nice tools like lint, validate, and pack. The first two commands check that the structure and key mapping of the YML file are correct. The error output can sometimes be a bit cryptic, at least to someone new to YML validation, but in the end, with the configuration reference page at hand, you quickly come to understand the errors and get everything on the right track.
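As a sketch, these CLI tools are typically invoked along the following lines (the file and folder names here are placeholders for your own layout):

```shell
# Validate a packed orb definition (structure and key mapping)
circleci orb validate orb.yml

# Validate a regular project configuration
circleci config validate .circleci/config.yml

# Pack a folder structure of YML fragments into a single orb.yml
circleci config pack src/ > orb.yml
```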
An orb contains the following useful attributes: executors, commands, jobs, and examples. I will briefly explain each one of them below.
Executors define the environment in which a job runs, allowing you to reuse a single executor definition across multiple jobs.
Here we opted to create a default executor with the following two images: circleci/elixir with 1.8.1 as the default tag and circleci/postgres with 11.2-alpine as the default tag, which were the latest versions available at the time. If you don't know what alpine means, it's a smaller image containing only the necessary minimum, as explained here. You can search for other versions in each image's hub.
Providing a good default executor is useful because it lets users avoid more complex configuration when they don't need it; things just work out of the box.
But remember that users can always create their own executor if they need additional software. In this kind of project it is common to use different configurations for different types of projects. This orb was initially designed for projects that use Elixir and PostgreSQL, and that is reflected in how its default executor is defined. But some of our projects use FakeS3 to simulate the responses of an AWS S3 service, and a Docker image for it already exists.
Currently, if you want to use the default executor plus an extra image, you need to create a new executor in your configuration and either pass it to the job as a parameter or use it when creating a custom job.
executors:
  default-with-fakes3:
    docker:
      - image: 'circleci/elixir:1.6.5'
        environment:
          MIX_ENV: test
      - image: 'circleci/postgres:10.4-alpine-postgis-ram'
      - image: 'circleci/fakes3:0.2.4'
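The custom executor can then be passed to a job. A minimal sketch, assuming the job accepts an executor parameter as described later in this article:

```yaml
workflows:
  build:
    jobs:
      - elixir/build-and-test:
          executor: default-with-fakes3
```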
CircleCI has a website where we can suggest new ideas to improve it, and I created one explaining what I think could be a good alternative to be added to their configuration.
A command defines a sequence of steps to be executed in a job, enabling you to reuse a single command definition across multiple jobs.
Commands are the basic tools (a list of actions) with which we can set up a behaviour. For our orb, I knew what the basic behaviours were: build, test, and deploy.
Below, I will describe a list of commands implemented in our Elixir orb, but this might differ from your orb, so use them only as examples.
description: Build Elixir source code
parameters:
  cache-version:
    default: v1
    description: String key to store cache in
    type: string
steps:
  - checkout
  - restore_cache:
      keys:
        - '<< parameters.cache-version >>-mix-cache-{{ .Branch }}-{{ checksum "mix.lock" }}'
        - '<< parameters.cache-version >>-mix-cache-{{ .Branch }}'
        - '<< parameters.cache-version >>-mix-cache'
  - restore_cache:
      keys:
        - '<< parameters.cache-version >>-build-cache-{{ .Branch }}-{{ checksum "mix.lock" }}'
        - '<< parameters.cache-version >>-build-cache-{{ .Branch }}'
        - '<< parameters.cache-version >>-build-cache'
  - run: mix local.hex --force
  - run: mix local.rebar --force
  - run: 'mix do deps.get, compile'
  - save_cache:
      key: '<< parameters.cache-version >>-mix-cache-{{ .Branch }}-{{ checksum "mix.lock" }}'
      paths:
        - deps
  - save_cache:
      key: '<< parameters.cache-version >>-mix-cache-{{ .Branch }}'
      paths:
        - deps
  - save_cache:
      key: '<< parameters.cache-version >>-mix-cache'
      paths:
        - deps
  - save_cache:
      key: '<< parameters.cache-version >>-build-cache-{{ .Branch }}-{{ checksum "mix.lock" }}'
      paths:
        - _build
  - save_cache:
      key: '<< parameters.cache-version >>-build-cache-{{ .Branch }}'
      paths:
        - _build
  - save_cache:
      key: '<< parameters.cache-version >>-build-cache'
      paths:
        - _build
Here we check out the source code, fetch all the dependencies, and compile everything. Before fetching and compiling, we check whether some folders were already cached by a previous build, to save time.
CircleCI offers a cache system to speed up builds. It is based on matching keys and can have several levels of depth. It's a little bit tricky, because the cache is immutable (a given cache key is only written once), so if you cache some invalid folders or files, they will remain there until you change the key (a fixed string, for example from v1 to v2) or associate it with the checksum of the mix.lock file. Using the checksum of mix.lock gives us a good balance between a stale one-time cache and having no cache at all in our build.
So, after this brief introduction to how the cache system works, we needed to choose which folders to cache. I searched for configuration examples from Elixir projects that used CircleCI and found that two folders needed to be cached: deps and _build.
The command dockerize -wait tcp://localhost:5432 -timeout 1m simply waits for the database to be ready before running the tests with the mix test command. This is the simplest version for running tests, but the ones below add extra checks to your code.
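As a sketch, the test steps described above could look like this inside the command definition:

```yaml
steps:
  # Wait for the PostgreSQL container to accept connections, then run the suite
  - run: dockerize -wait tcp://localhost:5432 -timeout 1m
  - run: mix test
```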
Since Elixir 1.6 there is a standard for formatting code: mix format. It takes care of all indentation, brackets, new lines, etc. It's a big improvement for open-source projects, where different people's differing code styles are now made uniform. In this orb, we check that all files are formatted with the command mix format --check-formatted.
Code coverage can be a good indicator (if the tests were implemented well) of how much of the code is tested. A stable coverage between 80% and 100% is good, but keeping 100% coverage on a project that is always changing can make it hard to add or change features, so a minimum of 80% is an acceptable value.
The command mix coveralls will run the tests, report the line coverage for each file, and fail if the overall coverage percentage is below the specified minimum. You can define the minimum coverage in the coveralls.json file. More about this in the excoveralls module repository.
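A minimal coveralls.json sketch setting the 80% threshold mentioned above (see the excoveralls repository for the full set of options):

```json
{
  "coverage_options": {
    "minimum_coverage": 80
  }
}
```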
Credo is a static code analysis tool that focuses on code consistency and teaching. It will warn you when your code isn't as good as it could be. The findings are divided into 5 categories: consistency, design, readability, refactor, and warning. You can enforce a style guide using --strict, but this is not currently supported by our orb.
Dialyzer is a static code analysis tool for Erlang. In Elixir, you can use the Dialyxir module to run it in your project. To reduce its run time, we needed to cache two folders, ~/.mix and _build, so the first run, which builds the PLT file, takes longer than subsequent runs. Dialyzer can be a little cryptic at times, but it is definitely a good tool to integrate into your CI.
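A sketch of how the PLT caching can be wired up, reusing the cache key style of the build command (the key name here is illustrative):

```yaml
- restore_cache:
    keys:
      - v1-plt-cache-{{ checksum "mix.lock" }}
# Builds the PLT on the first run; later runs reuse the cached ~/.mix and _build
- run: mix dialyzer
- save_cache:
    key: v1-plt-cache-{{ checksum "mix.lock" }}
    paths:
      - ~/.mix
      - _build
```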
There are different ways to deploy an Elixir project. We are using edeliver to deploy to an EC2 machine, but it can also be done with distillery to deploy, for example, to a Docker container.
In edeliver, the Erlang releases are built on a remote host that is similar to the production machines. After being built, the release can then be deployed to one or more production machines.
This command supports two parameters, config_file_path and hotupgrade. In the first, you define the path to the edeliver configuration needed to deploy your code successfully. The second defaults to false, but when enabled it allows your code to be deployed without restarting the server. For more information about this, you can watch this talk by Tian Chen.
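Invoking the deploy command might look like the sketch below; the parameter names come from the description above, while the command name and config path are assumptions for illustration:

```yaml
- deploy:
    config_file_path: .deliver/config
    hotupgrade: false
```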
Also, to allow access to the build/deploy machine, you will need to add the SSH key to the CircleCI configuration, which can be done here.
With jobs, we can automate common behaviour using the commands described previously; jobs are then used in the workflows section of the CircleCI configuration. After setting up a couple of jobs to test the previous commands on an Elixir test project, and looking at other orbs, I ended up finding some common behaviour that should be exposed as parameters of these jobs.
Before explaining some of the functionality, it is useful to know which parameter types are accepted: string, boolean, integer, enum, executor, steps, and environment variable name (env_var_name). These types are validated against the current configuration values in the orb YML file.
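For illustration, a job declaring parameters of a few of these types (the parameter names here are hypothetical):

```yaml
parameters:
  cache-version:
    type: string
    default: v1
  run-credo:
    type: boolean
    default: true
  log-level:
    type: enum
    enum: [debug, info, warning]
    default: info
```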
As explained above, the executors allow selecting which environment is used. With this property, we allow the user to pass custom executors as a parameter for the jobs.
parameters:
  # ...
  executor:
    default: default
    description: Executor to be used in this job
    type: executor
executor: << parameters.executor >>
Some jobs might take checkout as a parameter, because it allows an action to be performed either on code that has just been checked out from the repository or on code that was previously checked out by another job. The default value should depend on whether the most common scenario is to check out (true) or not (false).
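A sketch of how such a checkout parameter can be wired into a job (the parameter name is hypothetical):

```yaml
parameters:
  checkout-code:
    type: boolean
    default: true
steps:
  - when:
      condition: << parameters.checkout-code >>
      steps:
        - checkout
```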
Some jobs will need data from each other, and the way to do that is to allow files from one job to be persisted for another. Jobs that produce output for other jobs to use should have a persist-to-workspace parameter (with a default of false).
First, you set up the job parameter as follows:
persist-to-workspace:
  default: true
  description: Should this job persist files to a workspace? Defaults to true
  type: boolean
The YML code should be something along these lines at the end of the job:
# ...
- when:
    condition: << parameters.persist-to-workspace >>
    steps:
      - persist_to_workspace:
          root: ~/
          paths:
            - project
            - .mix
            - .hex
            - .ssh
Here you indicate which folders should be persisted for the next job in the workflow.
Similar to the persist-to-workspace parameter, some jobs need to act on data generated by previous jobs. To do this, you attach the previously saved workspace (from a job with persist-to-workspace set to true) using attach-workspace in the current job.
First, you define this as a parameter of the job:
attach-workspace:
  default: false
  description: >
    Boolean for whether or not to attach to an existing workspace. Default is false.
  type: boolean
Then, in the job steps, you place the actual attach_workspace command right at the beginning of the job, before any other action occurs.
- when:
    condition: << parameters.attach-workspace >>
    steps:
      - attach_workspace:
          at: ~/
You can define other job parameters that match some of the command parameters. If you use caching, it might be a good idea to set up a cache_name parameter so users can invalidate the cache state. It's also a good idea to give parameters sensible defaults with the most common values, so that when calling a job the user doesn't need to write out every parameter they don't need.
It is good practice to provide some common and not-so-common usage examples of the orb, so people can understand the scenarios in which your orb can be applied.
In the orb I’ve developed, I’ve provided 3 examples:
Minimal code to build and test
Build, test and deploy
Build, test with FakeS3 (an external image)
In the first example, I show the minimal code necessary to run a job with my orb. I could have removed the parameters from build-and-test, but I wanted to show users some of the options they have.
version: 2.1
orbs:
  elixir: coletiv/elixir@0.1.0
workflows:
  elixir-build-test-minimal:
    jobs:
      - elixir/build-and-test:
          check-format: true
          coveralls: true
          credo: true
          dialyzer: true
In the second example, I show a more common scenario where you also want to deploy the project, using edeliver. Here I split the dev and prod environments: in dev (associated with the develop branch), the tests run before the deploy; in prod (the master branch), we only perform the deployment task, because we ensure that no commit reaches master without first being tested on the develop branch.
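A sketch of what such a workflow could look like, assuming the orb exposes build-and-test and deploy jobs as described above (branch names follow the dev/prod split):

```yaml
workflows:
  elixir-build-test-deploy:
    jobs:
      # develop: run the tests, then deploy to dev
      - elixir/build-and-test:
          filters:
            branches:
              only: develop
      - elixir/deploy:
          name: deploy-dev
          requires:
            - elixir/build-and-test
          filters:
            branches:
              only: develop
      # master: deploy only, commits were already tested on develop
      - elixir/deploy:
          name: deploy-prod
          filters:
            branches:
              only: master
```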
The third example is an edge case. Because of the way executors work, as explained previously, this example demonstrates how to introduce a new image or even create your own job.
FakeS3 lets you simulate AWS S3 service responses while the tests are running. First of all, the following command should be run before executing the test command.
fakes3 -r $HOME/.s3bucket -p 4567 &
This is integrated into a new job in the config file, which then executes this orb's coveralls command.
test:
  executor: default
  steps:
    - attach_workspace:
        at: ~/
    - run: fakes3 -r $HOME/.s3bucket -p 4567 &
    - elixir/coveralls
Like every other software project, this orb needed its validations and deployment automated. Looking at other orbs, these 4 tasks are the minimum necessary: lint, pack, publish, and increment.
Pack, as the name suggests, compiles the folders and files of the different keys into one YML file.
Publish deploys a development version of the orb, normally used for testing.
Increment only runs when merging into the master branch; it deploys our orb directly to the CircleCI Orb Registry, incrementing its version.
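The publish and increment steps map onto CircleCI CLI commands along these lines (the orb name is taken from the earlier example; the dev label is a placeholder):

```shell
# Publish a development version of the orb, for testing
circleci orb publish orb.yml coletiv/elixir@dev:testing

# Promote to a new semantic version (patch bump) on merge to master
circleci orb publish increment orb.yml coletiv/elixir patch
```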
We also created a mini-project in Elixir to test this orb and added it to the orb's CI, verifying that the project compiles and its tests pass after the orb configuration is validated. This is useful because it ensures that a project can run with the modifications we have submitted.
Further down the road, as I was finalising this orb, I found this blog post from Rose, which shows how to integrate with git tags, as well as the CircleCI orb-tools that help with the CI integration.
In the end, I ended up with a fully automated deploy triggered by just a commit and a pull request, with even the orb version generated automatically based on the changes made to the YML. A really neat solution.
This was a fun learning experience, and by creating this package-manager-like system around CI/CD, the CircleCI team once again gains an advantage over other CI/CD tools.
If you use Elixir, we encourage you to give our orb a try; it is available here. If you want to contribute and improve it further, take a look at our Orb GitHub repository.
If you have already built an orb, or had some issues building one, please comment below. I would love to hear about your experience.