revisit how espressif/idf-rust containers are built #247

Open · Vollbrecht opened this issue Nov 5, 2023 · 12 comments

@Vollbrecht commented Nov 5, 2023

Motivation

Currently we are building all published containers with this file. We get containers for the following configuration matrix:

  • mcu target: all
    • xtensa-toolchain: latest
    • xtensa-toolchain: some hardcoded version
  • mcu target: specific for each esp variant
    • xtensa-toolchain: latest
    • xtensa-toolchain: some hardcoded version

MCU target all

I think the all target is fine as is. The image should be used as a good base image for CI: it only needs to be pulled once per run and can be cached across all target runs. (I think this is currently underutilized in all CIs.) Without moving the topic too much, just this small note: installing the espup toolchain with an action for every CI run makes no sense, as it wastes 1-2 minutes per action for every target. Instead we should use this base image and test against it.

Testing whether one commit of a library runs on the toolchain is orthogonal to the question "can the toolchain currently be built?". We should have one job that builds the nightly toolchain every day, and every library CI should use this base image for all its runs instead of building the complete toolchain from scratch!
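
For illustration, a minimal sketch of what a downstream library CI job could look like when it reuses the prebuilt all image instead of installing the toolchain via an espup action on every run (the all_latest tag and the build target shown here are assumptions, not a final scheme):

```yaml
# Hypothetical downstream job: reuse the prebuilt "all" image instead of running espup.
jobs:
  check:
    runs-on: ubuntu-latest
    container: espressif/idf-rust:all_latest   # assumed tag, adjust to whatever gets published
    steps:
      - uses: actions/checkout@v4
      - name: Build against the preinstalled toolchains
        run: cargo build --target xtensa-esp32-espidf
```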

MCU target specific

This image variant has its uses where the user only wants to build a project for one specific target. The use case is, for example, the vscode devcontainer used in esp-template and esp-idf-template. Since each generated template is only used for one specific target, the underlying container should also reflect that. In my opinion these containers should be as fast and as lightweight as possible, making them nice to use.
Observation: riscv targets on std and no_std don't need espup at all. I am sure of it for the std variants, so I want to know from you whether that's also true for the no_std variants. This would free up a good chunk of space on all riscv images.
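
If that is the case, a riscv image could be provisioned with plain rustup and no espup at all. A minimal sketch of the no_std path, assuming the esp32c3 bare-metal triple riscv32imc-unknown-none-elf (the std path would still need the tier-3 espidf target and -Zbuild-std, which is worth double-checking):

```yaml
# Sketch: provision a RISC-V no_std toolchain with plain rustup, no espup involved.
steps:
  - run: |
      rustup toolchain install nightly --component rust-src
      rustup target add riscv32imc-unknown-none-elf --toolchain nightly
```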

Independent of containers, what is currently not clear to me: the crosstool-ng riscv toolchain that is included in espup ships 4 different risc-v targets:
1. rv32i_zicsr_zifencei
2. rv32imac_zicsr_zifencei
3. rv32imafc_zicsr_zifencei
4. rv32imc_zicsr_zifencei

In my understanding we only need numbers 2 and 4, correct? The others would only be huge bloat.

EDIT: I made a separate issue for this on espup. And since it seems confirmed that we don't need this for no_std and std, it is not an issue going forward for optimizing the containers.

Proposal

  • Create a new CI in this repo or another one that creates a daily all target image that can be consumed by all CIs to build against (see the workflow sketch below). This will also future-proof the CI against a growing project overall, and we all want that :)
  • The target-specific variants don't need daily building, as they are our baseline for direct consumption by the end user.
  • Slim down the riscv targets, then use them directly instead of building them from scratch in the templates.
  • Test out the potential github speedup of container fetches if the all target image were hosted within ghcr instead of docker-hub.
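
A rough sketch of what the daily all-image workflow from the first bullet could look like (cron time, registry path, image name, and tag are placeholders; docker-hub vs ghcr is exactly the thing to benchmark):

```yaml
# Sketch of a daily "all" image build pushed to ghcr (names and tags are placeholders).
name: daily-all-image
on:
  schedule:
    - cron: "0 3 * * *"   # once a day
  workflow_dispatch:
permissions:
  contents: read
  packages: write
jobs:
  build-all:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ghcr.io/esp-rs/idf-rust:all-latest   # placeholder tag schema
```
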
@Vollbrecht commented Nov 6, 2023

If we are removing espup from the riscv targets, there is the open question of how to tag these containers, since we are not using the xtensa-toolchain anymore. The logical thing would be to use the specific rust nightly version as a base? Though this is a bit awkward, since it will look like the xtensa-based containers have a non-overlapping version scheme compared to the riscv variants?

@Vollbrecht

One other big advantage of the proposal of using already-built containers for direct consumption is that people don't need the GITHUB_ACCESS token anymore, since they don't trigger the espup install. This is a win for me.

@SergioGasquez

If we are removing espup from the riscv targets.

A few notes regarding this. Using espup for RISC-V targets isn't that bad. It won't install Xtensa Rust, only the nightly toolchain with the proper target and components; the extra weight would come from GCC and Clang (for std we could use the --std flag, which skips the GCC installation). Not using espup for RISC-V is definitely better, but it would introduce a lot of extra code; that's why I didn't go for this approach in the Dockerfile that generates the tags.
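
For reference, a hedged sketch of the espup-based std path described above (the --std flag is taken from this comment; the exact invocation may differ between espup versions):

```yaml
steps:
  # std RISC-V via espup, skipping the GCC installation with the --std flag mentioned above
  # (flag name taken from this comment; exact syntax is an assumption)
  - run: espup install --targets esp32c3 --std
```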

there is the open question of how to tag these containers, since we are not using the xtensa-toolchain anymore. The logical thing would be to use the specific rust nightly version as a base? Though this is a bit awkward, since it will look like the xtensa-based containers have a non-overlapping version scheme compared to the riscv variants?

This is already in a "bad" state; for example, esp32c3_1.73.0.1 doesn't mean anything, as it will be using the nightly of the day it was built (which will probably be 1.75.0) and not 1.73.0.1.

One other big advantage of the proposal of using already-built containers for direct consumption is that people don't need the GITHUB_ACCESS token anymore, since they don't trigger the espup install. This is a win for me.

That's definitely an improvement. I am no expert in CI, so my only concern is whether it will be faster, but it seems like you clearly think so, so let's give it a shot!

@Vollbrecht commented Nov 6, 2023

So my current goals would be:

  • Implementing a test CI workflow that builds a daily all target image for consumption in all downstream CI projects. We should test here whether it makes a difference to upload this to docker-hub or ghcr when it is consumed downstream.

  • Rewrite the underlying DOCKERFILE to directly use a nightly rust container.

  • Write a CI workflow that builds specific minimal versions for all esp targets with the above new DOCKERFILE (they don't need daily building); a rough sketch follows below.

  • Update the esp-idf-template / esp-template .devcontainer to point directly to these new minimal containers (and link to the original DOCKERFILE for people who still want to build everything themselves).

  • Update the generated github ci yaml in the templates to point to these new containers.

  • Think about a correct tag schema for the containers.
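
A rough sketch of the per-target workflow mentioned above (the MCU list, the MCU build-arg, and the tag schema are placeholders, not the actual Dockerfile interface):

```yaml
# Sketch: build one minimal image per MCU target (names and tags are placeholders).
jobs:
  per-target:
    strategy:
      matrix:
        mcu: [esp32, esp32s2, esp32s3, esp32c3, esp32c6, esp32h2]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # registry login omitted for brevity
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          build-args: |
            MCU=${{ matrix.mcu }}
          tags: espressif/idf-rust:${{ matrix.mcu }}_latest
```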

@SergioGasquez does this look good to you, or are you against something here? Obviously we really need to test whether what I propose here creates benefits, but I am pretty sure it will.

@SergioGasquez

Implementing a test CI-Workflow that builds a daily all target image, for consumption in all downstream CI projects. We should test here if it makes a difference uploading this to docker-hub or ghcr, when consumed downstream.

In some repos, since we use a matrix with the target, we could use the docker tag that only contains the target, if I am not mistaken. If this is true, RISC-V targets should be rebuilt daily too.

Rewrite the underlying DOCKERFILE to directly use a nightly rust container.

For Xtensa targets, this would add unnecessary stuff, right?

think about a correct tag schema for the containers

Maybe <target>_<rust_version>?

  • esp32s2-1.73.0.1
  • esp32c3-nightly-2023-11-01

@Vollbrecht

Implementing a test CI-Workflow that builds a daily all target image, for consumption in all downstream CI projects. We should test here if it makes a difference uploading this to docker-hub or ghcr, when consumed downstream.

In some repos, since we use a matrix with the target, we could use the docker tag that only contains the target, if I am not mistaken. If this is true, RISC V targets should be rebuilt daily too.

For caching purposes I suspect that it's faster to have one base image for all jobs in a matrix (even if they start in parallel). That's why my idea is for all CIs to rely on one image and, on top of that, do the stuff we normally do. But we should test whether this holds true.

Rewrite the underlying DOCKERFILE to directly use a nightly rust container.

For Xtensa targets, this would add unnecessary stuff, right?

Currently we always install the xtensa toolchain on top of a normal rust installation (but without purging the platform's stable part). This bit would just be replaced with the nightly one. Though yeah, we should maybe look into it; I will check it out.

think about a correct tag schema for the containers

Maybe <target>_<rust_version>?

* `esp32s2-1.73.0.1`

* `esp32c3-nightly-2023-11-01`

The question is what we do with the all image. all-nightly-2023-11-01-1.73.0.1 would mention first the riscv toolchain version and then the xtensa version. This could still be a bit confusing; people would not realize that the first part is the riscv version and the second the xtensa one.

@SergioGasquez commented Nov 6, 2023

I see a few options:

  • Xtensa_1.73.0.1-RISCV_nightly-2023-11-01
  • all-2023-11-01
    • Xtensa and nightly version can be inferred from the date the container was built
  • esp-rust-nightly or similar

@SergioGasquez

Note for future me: if we end up changing anything, we should update https://esp-rs.github.io/book/installation/using-containers.html

@Vollbrecht commented Nov 6, 2023

I see a few options:

* `Xtensa_1.73.0.1-RISCV_nightly-2023-11-01`

* `all-2023-11-01`
  
  * Xtensa and nightly version can be inferred from the date the container was built

* `esp-rust-nightly` or similar
  
  * Skip adding any version; it's taken for granted that we use the latest nightly and the latest Xtensa Rust
  * Rust actually uses this naming for nightly: https://github.com/rust-lang/docker-rust-nightly/pkgs/container/rust

I am fine with option 3, though this would imply that we always build the all container only with the latest xtensa-rust and not with any other version; I think currently we can also publish new versions for older xtensa-rust versions?

Regarding the stable installation fragments, I think I was wrong here, because currently we install it with --default-toolchain none -y --profile minimal. This should be clean enough. So yeah, either the xtensa container uses a separate base dockerfile for clarity, or we intermingle them in one and, based on ARGs, use the nightly base image or our current approach. I think we should have two Dockerfiles.

The xtensa variants could reuse the all Dockerfile, and we would just have one for riscv-specific builds.

@SergioGasquez

I am fine with option 3, though this would imply that we always build the all container only with the latest xtensa-rust and not with any other version; I think currently we can also publish new versions for older xtensa-rust versions?

Yes, we could have an input parameter for the Xtensa Rust version; we could keep that in case the latest Xtensa version is buggy.

It would also be good if we could go back to a specific date, so it's not just overwritten.

I think we should have two Dockerfiles.

I am fine with two Dockerfiles; it would make things simpler. And we would build the all image from the Xtensa one: basically, the Xtensa image would really be Xtensa and RISC-V, and the RISC-V one would be a subset.

@Vollbrecht

small update:

@SergioGasquez

Wow! Thanks for the updates, that's a great improvement!!
