Category: Tutorials
Published: June 28, 2023
In this post, we delve into deploying Azure Container Apps with Dapr using Terraform and the Azure CLI, discuss strategies for managing CI/CD for container workloads and updates, and provide an overview of the Dapr SDKs. Our journey is exemplified by two demo applications: an endpoint application that handles HTTP requests and an accompanying 'worker' application that processes messages from the endpoint, illustrating the practical application of these technologies.
For the purpose of this post, two demo applications have been developed to serve as the workload for the container app setup. As its name suggests, the endpoint application is an HTTP endpoint for receiving requests. The requests are "processed" and sent as messages to a queue through an output binding to Azure Service Bus. In the other application, the worker reads from that same queue and "processes" each message, finally using an output binding to put it into an Azure Storage blob.
To deploy the infrastructure and the demo applications, the following is required:
- An Azure account
- Software: Git, Terraform, the Azure CLI, and Docker
Download the demo project with the help of Git and create a file named terraform.tfvars in the directory containing the Terraform definitions:
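A minimal sketch of these steps — the repository URL and directory names are placeholders, not the actual project values:

```shell
# Clone the demo project (URL is a placeholder — substitute the real repository)
git clone https://github.com/<account>/<demo-project>.git
cd <demo-project>/<terraform-directory>

# Create the variable file next to the Terraform definitions
touch terraform.tfvars
```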
Add the values to the following variables:
Note: The variables messaging_dapr_scopes and output_dapr_scopes will be explained further on in this post.
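A sketch of what terraform.tfvars could look like, using the scope values chosen later in this post; any further variables (resource names, location, and so on) depend on the project's variable definitions:

```hcl
# terraform.tfvars — scope values as used later in this post
messaging_dapr_scopes = ["endpoint", "worker"]
output_dapr_scopes    = ["worker"]
```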
Provision the environment:
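The standard Terraform workflow applies here — run this from the directory containing the Terraform definitions:

```shell
terraform init    # download providers and initialize the working directory
terraform apply   # review the plan, then confirm to provision the environment
```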
This will set up all resources and role assignments needed to deploy the container apps, which will be provisioned separately from the Terraform definition. I have chosen this approach because I want Terraform to handle only the provisioning of the environment and related resources. This way, the application version is kept out of the Terraform state, and deploying a new version then only requires a call to az containerapp update (more on this later).
For this post, the file resources.container.app.tf is worth a closer look. It contains the resources that are related to the container app environment:
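A rough sketch of the environment resource — the resource names and referenced resources are illustrative, not the project's exact code:

```hcl
resource "azurerm_container_app_environment" "this" {
  name                       = "cae-demo"
  location                   = azurerm_resource_group.this.location
  resource_group_name        = azurerm_resource_group.this.name
  log_analytics_workspace_id = azurerm_log_analytics_workspace.this.id
}
```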
It's the azurerm_container_app_environment_dapr_component resources that are of notable interest here. These resources set up the necessary Dapr components for the application.
The first one, named queue, contains settings for a component that uses Azure Service Bus queues. Note the scopes field, which uses the variable var.messaging_dapr_scopes. There is a similar additional component for pub/sub setups; more on that can be found in the README of the project. For now, focus on the one named queue.
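The queue component could look something like this — the component type, queue name, and secret wiring are illustrative assumptions, not the project's exact definition:

```hcl
resource "azurerm_container_app_environment_dapr_component" "queue" {
  name                         = "queue"
  container_app_environment_id = azurerm_container_app_environment.this.id
  component_type               = "bindings.azure.servicebusqueues"
  version                      = "v1"
  scopes                       = var.messaging_dapr_scopes

  # Connection string stored as a component secret
  secret {
    name  = "sb-connection-string"
    value = azurerm_servicebus_namespace.this.default_primary_connection_string
  }

  metadata {
    name        = "connectionString"
    secret_name = "sb-connection-string"
  }

  metadata {
    name  = "queueName"
    value = "messages"
  }
}
```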
When I filled out the variables, I set it to a list: ["endpoint", "worker"]. These are application IDs we are using further on when setting up the container apps and are the containers that will be integrated together with the component.
The second one, named output, contains settings for a component that uses bindings for blobs in Azure Storage.
The scopes field uses the variable output_dapr_scopes, which we set to a list: ["worker"].
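Analogously, a sketch of the output component — the component type and metadata entries are illustrative assumptions:

```hcl
resource "azurerm_container_app_environment_dapr_component" "output" {
  name                         = "output"
  container_app_environment_id = azurerm_container_app_environment.this.id
  component_type               = "bindings.azure.blobstorage"
  version                      = "v1"
  scopes                       = var.output_dapr_scopes

  # Storage account key stored as a component secret
  secret {
    name  = "storage-account-key"
    value = azurerm_storage_account.this.primary_access_key
  }

  metadata {
    name  = "accountName"
    value = azurerm_storage_account.this.name
  }

  metadata {
    name        = "accountKey"
    secret_name = "storage-account-key"
  }

  metadata {
    name  = "containerName"
    value = azurerm_storage_container.this.name
  }
}
```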
The name fields are essential to keep track of since they are used by our application code when setting up the clients from the Dapr SDK.
The next phase of setting up the project is to build the application and container images and deploy them with container apps into our container app environment.
Run the build script provided in the project (from the project root):
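The script name here is an assumption — check the project root for the actual file:

```shell
./build.sh
```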
This will build the binaries and create Docker images for them. Next, the images need to be tagged and pushed to the Azure Container Registry created in the infrastructure provisioning step:
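The tag-and-push steps look roughly like this; the registry and image names are placeholders for the values created by Terraform and the build script:

```shell
# Log in to the Azure Container Registry created by Terraform
az acr login --name <registry-name>

# Tag the locally built images for the registry and push them
docker tag endpoint:latest <registry-name>.azurecr.io/endpoint:latest
docker tag worker:latest   <registry-name>.azurecr.io/worker:latest

docker push <registry-name>.azurecr.io/endpoint:latest
docker push <registry-name>.azurecr.io/worker:latest
```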
And with that, the container apps can finally be deployed. This is done with the help of Azure CLI and a script. From the project root:
Note: The endpoint is "protected" by an API key that we set as an environment variable on the endpoint container app. This is not fit for production, only for demo purposes. Save the result of uuid=$(uuidgen) for later use.
The script deploys a container app for each application and sets them up with the correct settings for the Dapr integration.
Observe the following flags for the endpoint deployment:
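An illustrative az containerapp create call for the endpoint — the resource names, port, and replica counts are assumptions, but the Dapr and ingress flags show the shape of the integration:

```shell
uuid=$(uuidgen)   # demo API key — save it for later use

az containerapp create \
  --name endpoint \
  --resource-group <resource-group> \
  --environment <container-app-environment> \
  --image <registry-name>.azurecr.io/endpoint:latest \
  --ingress external \
  --target-port 8080 \
  --env-vars API_KEY="$uuid" \
  --enable-dapr \
  --dapr-app-id endpoint \
  --dapr-app-port 8080 \
  --min-replicas 0 \
  --max-replicas 5
```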
The flags for the worker are much the same, differing in --dapr-app-id, --dapr-app-port, and the scale rules:
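An illustrative call for the worker, with a KEDA Service Bus scale rule; the rule name, metadata values, and secret wiring are assumptions:

```shell
az containerapp create \
  --name worker \
  --resource-group <resource-group> \
  --environment <container-app-environment> \
  --image <registry-name>.azurecr.io/worker:latest \
  --enable-dapr \
  --dapr-app-id worker \
  --dapr-app-port 8080 \
  --min-replicas 0 \
  --max-replicas 5 \
  --secrets sb-connection="<service-bus-connection-string>" \
  --scale-rule-name queue-messages \
  --scale-rule-type azure-servicebus \
  --scale-rule-metadata "queueName=messages" "messageCount=5" \
  --scale-rule-auth "connection=sb-connection"
```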
The scaling rules set on the container apps allow them to scale down to 0, minimizing the cost of running them. The worker gets scaled up when a message is sent to the queue we configured for it.
Custom scaling rules can be created based on any ScaledObject-based KEDA scaler.
This concludes the setup of the infrastructure and deployment of the application.
If you liked this tutorial, I have good news! You can find the entire project on GitHub, complete with all the Terraform configurations and scripts required to set everything up. It should provide a solid starting point for your own exploration and experimentation.
Stay tuned for part 2, where the code parts are discussed in more detail. That's where the real goodies come in!