Engineering

02 May, 2017

Phoenix with image upload to S3 in an API: Implementation and testing

Phoenix image upload - We needed to implement a way to upload user images to the S3 cloud in an API service we built using Elixir and Phoenix.

David Magalhães

Software Engineer


Here it is, another small article about implementing a new piece of functionality in the API service we are building with Elixir and Phoenix.

Implementation

We needed a way to upload user images to the cloud (S3 in this case). After some research I found this article, which explains the basic implementation very well. Arc is a very good dependency for handling image uploads, either to local storage or to S3.

Store and fetch

After installing all the dependencies mentioned in the article, you can have a look at a simple controller. In this case I only specify the index and create actions, but all the other actions (update, for example) are similar.

defmodule MyApp.Web.AvatarController do
  ...

  def index(conn, _params) do
    # We use Guardian to fetch user information
    current_user = current_resource(conn)

    if current_user != nil do
      image_url = Avatar.url({"image.jpg", current_user})

      conn
      |> Phoenix.Controller.redirect(external: image_url)
    else
      send_resp(conn, :unauthorized, "")
    end
  end

  def create(conn, %{"avatar" => avatar}) do
    current_user = current_resource(conn)

    if avatar != nil do
      case Avatar.store({avatar, current_user}) do
        {:ok, _file_name} ->
          send_resp(conn, :ok, "")

        _ ->
          send_resp(conn, :service_unavailable, "")
      end
    else
      send_resp(conn, :bad_request, "")
    end
  end
end

In this example, we start each action by fetching the user object. Here we use Guardian, but you could also load it from the database using Ecto. In our Avatar module, generated by Arc with mix arc.g avatar, we decided to incorporate the user ID into the file path.

defmodule MyApp.Avatar do
  ...

  # We use this so other users can't check other user profile images
  def filename(version, {_file, scope}) do
    :crypto.hash(:sha256, "a_very_long_string_#{scope.id}_#{version}")
    |> Base.encode16()
    |> String.downcase()
  end

  # Override the storage directory:
  def storage_dir(_version, {_file, scope}) do
    "upload/user/avatars/#{scope.id}"
  end

  ...
end

Note that #{scope.id} uses the user.id field; be sure you have it, or change it to the field you want to use.
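As a quick sanity check, here is roughly what the hashing pipeline in filename/2 produces. This is a standalone sketch; the salt, user ID, and version are made-up example values, not the ones used in production:

```elixir
# Standalone sketch of the filename hashing used above.
# The salt, user id and version are illustrative values.
salt = "a_very_long_string"
user_id = 42
version = :original

hashed =
  :crypto.hash(:sha256, "#{salt}_#{user_id}_#{version}")
  |> Base.encode16()
  |> String.downcase()

IO.puts(hashed)
# A 64-character lowercase hex string; the same inputs always produce
# the same name, so the URL is stable but hard to guess.
```

Because SHA-256 is deterministic, re-uploading for the same user and version overwrites the same object, while other users cannot enumerate the names.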

After that, we check whether the request has the avatar parameter; if not, we return a bad request HTTP status. If the parameter is present, we try to store it on S3 and check whether it was stored successfully by pattern matching on {:ok, file_name}. In this case we could match {:ok, _}, since we won't use the file name.

Integration with Ecto

To use Arc with Ecto, you also need to add the arc_ecto dependency. To add the field to the user table schema, start by creating a new migration that inserts the new field.

# new_migration.exs
defmodule MyApp.Repo.Migrations.NewMigration do
  use Ecto.Migration

  def change do
    alter table(:user) do
      add :avatar, :string
    end
  end
end

After that, you need to add a new field to your user model and use cast_attachments to validate the image upload and store the necessary information in the database.

# user.ex
defmodule MyApp.User do
  use MyApp.Web, :model
  use Arc.Ecto.Schema

  schema "user" do
    ...
    field :avatar, MyApp.Avatar.Type
  end

  def changeset(user, params \\ %{}) do
    user
    |> Ecto.Changeset.cast(params, ...)
    |> cast_attachments(params, [:avatar])
  end
end

Be aware that when inserting a new user, user.id isn't available until the record has been inserted into the database. You can generate a UUID (for example, from the current time plus a random number) to associate with the filename, or you can insert the record first and then run an update with the image only.
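One way to sidestep the missing user.id is to name the file after a value you generate yourself. A minimal sketch of the time-plus-randomness scheme mentioned above (the module and function names are made up for illustration):

```elixir
# Sketch of generating a unique file name before the record exists,
# combining the current time with random bytes.
defmodule UploadName do
  def generate(extension) do
    time = System.system_time(:millisecond)

    random =
      :crypto.strong_rand_bytes(8)
      |> Base.encode16()
      |> String.downcase()

    "#{time}_#{random}#{extension}"
  end
end

name = UploadName.generate(".jpg")
IO.puts(name)
```

You would then store this generated name on the record, instead of deriving the path from user.id in storage_dir/2.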

To expose the image URL in the JSON structure, I created a simple helper that renders the URL correctly.

defmodule MyApp.Web.UserView do
  ...

  def render("show.json", %{user: user}) do
    %{
      "id": user.id,
      "username": user.username,
      "avatar": render_image_url(user)
    }
  end

  def render_image_url(user) do
    if user.avatar != nil do
      Avatar.url({user.avatar.file_name, user}, :original)
    else
      nil
    end
  end
end

To check that it's working, you can use Postman to send a POST request with form-data selected in the body, attaching an image file to upload.
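If you prefer the command line over Postman, an equivalent multipart request can be sent with curl. The host, port, route, and token below are assumptions for a local dev server; adjust them to your setup:

```shell
# Hypothetical local endpoint; adjust host, port, route and token.
curl -X POST http://localhost:4000/api/avatar \
  -H "Authorization: Bearer <your_token>" \
  -F "avatar=@/path/to/image.jpg"
```

The -F flag makes curl send multipart/form-data, which Plug parses into the %Plug.Upload{} struct the controller expects in the "avatar" parameter.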

Testing

Writing tests

In this example, I am going to write a simple upload test and check whether it succeeds.

defmodule MyApp.Web.AvatarControllerTest do
  ...

  test "Uploading test", %{user: user} do
    upload = %Plug.Upload{path: "test/assets/user_avatar.jpg", filename: "user_avatar.jpg"}
    post_params = %{"avatar" => upload}

    conn =
      build_conn()
      |> post(avatar_path(build_conn(), :create), post_params)

    # In this case we send in the response the location URL of the image
    assert List.first(get_resp_header(conn, "location")) == Avatar.url({user.id, user})
    assert conn.status == 302
  end
end

In this test, we select an image from our test assets and upload it with a POST request. We then check the response for a 302 (redirect) status, because we return the final URL on S3 storage.

In this second test, I'm going to show a test that uses Ecto (and ExMachina to build the model).

defmodule MyApp.Web.UserTest do
  ...

  test "Image Upload with Ecto" do
    {:ok, avatar_struct} = MyApp.Avatar.Type.load("x.jpg?1234567")
    user = build(:user, avatar: avatar_struct)

    # Be sure the image is available in the test folder
    upload = %Plug.Upload{path: "test/assets/avatar_user_1.jpg", filename: "avatar_user_1.jpg"}

    # POST parameters
    post_params = %{"avatar" => upload}

    conn =
      build_conn()
      |> post(user_path(build_conn(), :create), post_params)

    assert json_response(conn, 200) == render_json("show.json", user: user)
  end

  defp render_json(template, assigns) do
    assigns = Map.new(assigns)

    MyApp.Web.UserView.render(template, assigns)
    |> Poison.encode!()
    |> Poison.decode!()
  end
end

Using Fake S3

In order to test the behaviour of the image upload to AWS S3, we decided to use Fake S3, a fake AWS S3 API that replies the same way as the real one, so we can test at will without incurring extra expenses and without an internet connection.

The README file is pretty straightforward, and only two commands are needed to install it and get it running on your machine. After that, you can check the code samples for the different languages; in this case we are interested in Elixir. You can set the endpoint in your config/test.exs and check that the port matches the one used by the fake server.
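For example, if you use ExAws as the S3 client, the test configuration could point at the local fake server like this. This is a sketch under that assumption; adapt the keys to whatever S3 client your project actually uses:

```elixir
# config/test.exs -- point the S3 client at the local Fake S3 server.
# Assumes the ex_aws client; the bucket name is illustrative.
use Mix.Config

config :ex_aws, :s3,
  scheme: "http://",
  host: "localhost",
  port: 4567

config :arc,
  bucket: "test-bucket"
```

Keeping this only in config/test.exs means the dev and prod environments still talk to the real S3 endpoint.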

After that, make sure the Fake S3 server is running (fakes3 -r ~/.s3bucket -p 4567) before you run the tests. Then run mix test and check that everything works as expected.

Running in Travis

To add this to your continuous integration pipeline, you can add the following lines to before_script, for example:

before_script:
  - gem install fakes3
  - fakes3 -r $HOME/.s3bucket -p 4567 &

To avoid leaving the service hanging, we kill it after the tests have finished running.

after_script:
  - kill $(pgrep -f fakes3)

Elixir

Software Development

Phoenix

Amazon S3

Travis CI
