feat: Include Environment Variables in Task Definitions with JSON Fragments #73
base: master
Conversation
Thanks for this contribution, @rojaswestall! In addition to the feedback below, can you also provide some evidence of testing this change? (e.g., a successful action run: you can run your own code by running `npm package`, pushing the generated `/dist` to your feature branch, and specifying `yourfork@branch` in the `action.yml`.)
```javascript
containerDef.image = imageURI;

// Check for imageURI
if (imageURI) {
```
Since neither input is strictly required now, can we add some debug statements to indicate which course(s) of action the workflow is taking, and throw an error if neither is provided? That way if I wanted to see what was happening I could see:
(image specified) -------> "inserting container image [IMAGE]"
(merge file specified) ---> "merging task definition fragment [FILE_NAME]"
(nothing specified) -----> ERROR: "No image or merge file specified. Please specify one or both to use this action."
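A hedged sketch of what that suggested branching could look like. This is illustrative only, not the PR's actual code: the `applyInputs` name is hypothetical, and `console.debug` stands in for the `@actions/core` logger.

```javascript
// Illustrative sketch: fail fast when neither input is given,
// and log which branch runs otherwise.
function applyInputs(containerDef, imageURI, mergeFileName) {
  if (!imageURI && !mergeFileName) {
    throw new Error(
      'No image or merge file specified. Please specify one or both to use this action.'
    );
  }
  if (imageURI) {
    console.debug(`inserting container image ${imageURI}`);
    containerDef.image = imageURI;
  }
  if (mergeFileName) {
    console.debug(`merging task definition fragment ${mergeFileName}`);
    // merge logic would run here
  }
  return containerDef;
}
```

Either branch (or both) can run in a single invocation; only the all-empty case aborts the action.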
```javascript
// Merge the merge file
if (!Array.isArray(mergeContents.containerDefinitions)) {
  throw new Error('Invalid merge fragment definition: containerDefinitions section is not present or is not an array');
```
I would prefer we not require the presence of `containerDefinitions`, since there are top-level fields which can plausibly differ between environments (task-level memory/CPU, tags, task role, etc.). Is it possible to apply merge functionality across the entire task def vs. just this field?
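As a hedged illustration of merging across the whole task definition, here is a hand-rolled stand-in for what lodash's `mergeWith` with a customizer could achieve. The `deepMerge` name and the by-`name` matching of array entries are assumptions, not the PR's implementation.

```javascript
// Illustrative deep merge over an entire task definition: objects merge
// recursively, arrays of named objects (e.g. containerDefinitions or
// environment) are matched by "name", and scalars from the fragment win.
function deepMerge(base, fragment) {
  if (Array.isArray(base) && Array.isArray(fragment)) {
    const merged = fragment.map((item) => {
      const match = item && item.name
        ? base.find((b) => b && b.name === item.name)
        : undefined;
      return match ? deepMerge(match, item) : item;
    });
    // Keep named base entries the fragment does not mention.
    const leftovers = base.filter(
      (b) => b && b.name && !fragment.some((f) => f && f.name === b.name)
    );
    return merged.concat(leftovers);
  }
  const bothObjects =
    base && fragment &&
    typeof base === 'object' && typeof fragment === 'object' &&
    !Array.isArray(base) && !Array.isArray(fragment);
  if (bothObjects) {
    const out = { ...base };
    for (const key of Object.keys(fragment)) {
      out[key] = key in base ? deepMerge(base[key], fragment[key]) : fragment[key];
    }
    return out;
  }
  return fragment; // scalar: fragment overrides base
}
```

With this shape, a fragment can override top-level fields such as `cpu`, `memory`, or `taskRoleArn` without any special-casing of `containerDefinitions`.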
Hey folks, do you plan to merge this soon? :D

@rojaswestall do you have any comment/response to the PR feedback? I would like to work towards getting this approved if you're still interested.

We need this, merge please. Thanks.
I don't think you need this to solve the problem. If you want to have different environment variables per deployment environment, you can simply load a different task-definition.json. Here's what I did:

For those who use SSM, the following is not a problem. It actually works well with that trick above ☝️.

```json
{
  "name": "adapter-service",
  "image": "<IMAGE>",
  "environment": [
    {
      "name": "NODE_ENV",
      "value": "<ENVIRONMENT>"
    }
  ]
}
```

There are probably different ways to do this, but I think we can simply search for values with:

```yaml
- name: Fill in the new image ID in the Amazon ECS task definition
  id: task-def
  uses: aws-actions/amazon-ecs-render-task-definition@v1
  with:
    task-definition: deploy/task-definition.${{ env.ENVIRONMENT }}.json
    container-name: container-name
    image: container-image
  env:
    ENVIRONMENT: ${{ env.ENVIRONMENT }}
```

What do you think?
@rntdrts Having 1 file per environment is not scalable IMHO. I implemented a similar solution, seeding variables into a template:

```json
{
  "family": "my-service-__ENVIRONMENT__",
  "networkMode": "awsvpc",
  "executionRoleArn": "arn:aws:iam::764292098539:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "my-service",
      "image": "",
      "portMappings": [
        {
          "hostPort": 5000,
          "protocol": "tcp",
          "containerPort": 5000
        }
      ],
      "essential": true,
      "environment": [
        {
          "name": "NODE_ENV",
          "value": "production"
        },
        ...
      ],
      "secrets": [
        {
          "valueFrom": "__ENVIRONMENT__.MY_SERVICE_API_KEY",
          "name": "API_KEY"
        },
        ...
      ]
    }
  ],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "cpu": "256",
  "memory": "512"
}
```

workflow yml:

```yaml
- name: Sed "my-service" task definition with staging
  run: sed -i "s/__ENVIRONMENT__/staging/g" .github/workflows/task-definition.my-service.template.json
- name: Render "my-service" task definition
  id: rendered-my-service-task-definition
  uses: aws-actions/amazon-ecs-render-task-definition@v1
  with:
    task-definition: .github/workflows/task-definition.my-service.template.json
    container-name: my-service
    image: ${{ env.DTR_REGISTRY }}/my-service:${{ env.GIT_COMMIT }}
- name: Deploy "my-service" to staging
  uses: aws-actions/amazon-ecs-deploy-task-definition@v1
  with:
    task-definition: ${{ steps.rendered-my-service-task-definition.outputs.task-definition }}
    service: my-service-staging
    cluster: my-cluster
```

I get SSM populated beforehand. The goal here is to use the same task definition across all the environments (could be DEV, QA, PPD, INT, PROD, etc.) and write the code once.
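The sed templating step above can be mimicked in Node as a minimal sketch; the `renderTemplate` helper and the inline template content here are hypothetical stand-ins for the real template file.

```javascript
// Minimal stand-in for `sed -i "s/__ENVIRONMENT__/staging/g" template.json`:
// replace every __ENVIRONMENT__ placeholder in the template text.
function renderTemplate(templateText, environment) {
  return templateText.split('__ENVIRONMENT__').join(environment);
}

// Hypothetical template content (a trimmed-down task definition).
const template = JSON.stringify({
  family: 'my-service-__ENVIRONMENT__',
  containerDefinitions: [
    {
      name: 'my-service',
      secrets: [{ name: 'API_KEY', valueFrom: '__ENVIRONMENT__.MY_SERVICE_API_KEY' }],
    },
  ],
});

const rendered = JSON.parse(renderTemplate(template, 'staging'));
// rendered.family is now "my-service-staging"
```

Like the sed approach, this is pure text substitution, so it works anywhere a placeholder appears, including inside `secrets` paths.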
Maybe I didn't understand, but I don't think that what was done here solves your problem (at least by looking at the code). What is done here is a way to merge JSON files with each other. You will end up having to create similar files for each deploying environment (…).
Yes and no, because your staging.json and prod.json would only contain the dynamic parts, like the following:

task-definition.json

```json
{
  "networkMode": "awsvpc",
  "executionRoleArn": "arn:aws:iam::764292098539:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "my-service",
      "image": "",
      "portMappings": [
        {
          "hostPort": 5000,
          "protocol": "tcp",
          "containerPort": 5000
        }
      ],
      "essential": true,
      "environment": [
        {
          "name": "NODE_ENV",
          "value": "production"
        },
        ...
      ]
    }
  ],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "cpu": "256",
  "memory": "512"
}
```

staging.json

```json
{
  "family": "my-service-staging",
  "containerDefinitions": [
    {
      "name": "my-service",
      "secrets": [
        {
          "valueFrom": "staging.MY_SERVICE_API_KEY",
          "name": "API_KEY"
        },
        ...
      ]
    }
  ]
}
```

prod.json

```json
{
  "family": "my-service-prod",
  "containerDefinitions": [
    {
      "name": "my-service",
      "secrets": [
        {
          "valueFrom": "production.MY_SERVICE_API_KEY",
          "name": "API_KEY"
        },
        ...
      ]
    }
  ]
}
```

It might be better to have a template file and replace values with sed, but if, for example, you want to use a different logging driver for production, you have to duplicate the code; that's where merging JSON files comes into action.
Hi. Any idea when it can be merged? =)

Hey folks. Any idea when it is going to be merged? It has been a while now.

Really looking forward to this!

Waiting for this PR to get merged.
Hi @rojaswestall, thank you so much for your contribution. Apologies for the delay. We will be working on reviewing pull requests on the repositories. In the meantime, please ensure that the below steps, if not already done, are taken care of in your PR:
Issue #20

Description of changes:

- Merge a JSON fragment of a task definition with a task definition using lodash's `mergeWith`.
- `containerDefinitions` and `name` within each container definition are required in the fragment.
- Merges the `merge` input with the task definition.
- Updated `actions.yml` inputs (made `image` optional and added `merge`).

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
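Based on that description, usage might look roughly like the following workflow step. This is a hypothetical sketch: the `merge` input name is taken from the description above, but its exact value format (file path vs. inline JSON) is not confirmed against the final `action.yml`.

```yaml
- name: Render task definition with environment fragment
  uses: aws-actions/amazon-ecs-render-task-definition@v1
  with:
    task-definition: task-definition.json
    container-name: my-service
    # image becomes optional when a merge fragment is supplied
    image: my-registry/my-service:latest
    # hypothetical new input from this PR: JSON fragment to merge in
    merge: fragments/staging.json
```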