
E2E testsuite refactor: test runner architecture #114

Open · viccuad opened this issue Aug 5, 2024 · 1 comment
Comments

viccuad (Member) commented Aug 5, 2024

User stories

As a QE engineer, I want the CI to run the e2e tests against RCs and GAs.
As a QE engineer, I want to add tests to the e2e testsuite with no impact on the development cycle.
As a Kubewarden maintainer, I want to run e2e tests locally against any combination of Kubewarden components (helm charts, deps, container images, policies) while developing new features.
As a Kubewarden maintainer, I want to run the e2e tests on helm-charts PRs for the code under test in those PRs. This may include select PRs on other repos (e.g. bumping versions for a release on kwctl, policy-server, controller, which helps for patch releases).
As a Kubewarden consumer, I want to run smoke e2e tests that ship with GA releases to verify Kubewarden installations.

E2E testsuite

  1. Must run locally, and on GitHub Actions.
  2. Must be cluster-provider agnostic (Minikube, Fleet, EKS, ...).
  3. Must have a separate phase where all the components of the system under test are gathered prior to testing (helm chart tgz files or folders, container image versions, policy versions). This makes it easy to bump or swap any of those components before testing starts (see the sketch after this list).
  4. It must be possible to cancel and restart each testcase during a testsuite run, for iterative development.
  5. Testcases must be separated into destructive and non-destructive. If needed, tests will instantiate their own policy-server and policies that only target specific test namespaces; they can do this programmatically by creating a namespace on each run.
  6. Non-destructive testcases must be able to run in any order. If a testcase fails, the suite should clean up and proceed with the next one.
  7. Testcases must be able to run in any order.
  8. Testcases should be stackable where possible: the values.yaml configuration for a testcase must be minimal, and it should be possible to merge values.yaml files from several testcases to deploy and test a specific configuration (e.g. opentelemetry + audit-scanner, or opentelemetry + airgap).
  9. Cover at least the following testcases:
    • non-destructive:
      • smoke tests: policy pending to active, policy rejection and logging in monitor mode, monitor to protect, etc. Shipped as part of the Helm chart for Kubewarden consumers.
      • audit-scanner
      • opentelemetry (monitoring & tracing integration)
      • airgap: almost free if point 3 is in place.
      • policy-reporter
    • destructive:
      • upgrade tests: if the testsuite has point 3, then upgrading from and to any version is already possible. See "Add upgrade tests to end-to-end tests" kubewarden-controller#134.
      • load tests: deploy a specific workload on the cluster.
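
As a rough illustration of point 3 only, here is a minimal Python sketch of a "gather components" phase. All names, paths, versions and registry URLs are placeholders for this issue, not the actual artifacts or helpers used by the suite:

```python
# gather.py -- hypothetical sketch of a "gather components" phase (point 3).
# Every name, path and version below is illustrative, not the real suite.
from dataclasses import dataclass, field


@dataclass
class SystemUnderTest:
    """All components the e2e suite will exercise, resolved before any test runs."""
    # Helm charts: either a local folder/tgz or a "repo/chart:version" reference.
    charts: dict = field(default_factory=lambda: {
        "kubewarden-crds": "kubewarden/kubewarden-crds:1.5.0",
        "kubewarden-controller": "./charts/kubewarden-controller",  # local checkout
        "kubewarden-defaults": "kubewarden/kubewarden-defaults:2.0.0",
    })
    # Container images that override what the charts ship by default.
    images: dict = field(default_factory=lambda: {
        "policy-server": "ghcr.io/kubewarden/policy-server:latest",
    })
    # Policies used by the testcases.
    policies: dict = field(default_factory=lambda: {
        "pod-privileged": "registry://ghcr.io/kubewarden/policies/pod-privileged:v0.3.2",
    })

    def override(self, kind: str, name: str, ref: str) -> None:
        """Swap a single component (chart, image, policy) before the tests start."""
        getattr(self, kind)[name] = ref


# Example: test a controller chart from a PR checkout against an RC policy-server image.
sut = SystemUnderTest()
sut.override("images", "policy-server", "ghcr.io/kubewarden/policy-server:v1.17.0-rc1")
```

Because every component is resolved up front, bumping a chart for an RC, pointing at a PR checkout, or swapping in a mirrored image for airgap testing is a one-line change before the run starts.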

Acceptance criteria

  • Come up with a simple test runner that covers points 1, 2, 3 and 4 (a rough sketch follows below).
  • Migrate current testcases to new test runner.

See #22 for projects that might be useful.
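
For discussion, a rough Python sketch of what such a runner could look like; the provider names, env assumptions, testcase list and `./tests/<name>.sh` layout are all hypothetical, and a real implementation would more likely wrap an existing framework (BATS, Go test, pytest, ...):

```python
# runner.py -- hypothetical sketch of a minimal test runner (points 1-4).
# Names, providers and testcase paths are illustrative assumptions.
import argparse
import subprocess
import sys


def provision_cluster(provider: str) -> None:
    """Point 2: cluster-provider agnostic; only the provisioning step differs."""
    if provider == "minikube":
        subprocess.run(["minikube", "start"], check=True)
    elif provider == "k3d":
        subprocess.run(["k3d", "cluster", "create", "kubewarden-e2e"], check=True)
    else:
        # EKS, Fleet, or an already-provisioned cluster: assume KUBECONFIG is set.
        pass


def run_testcase(name: str) -> bool:
    """Run one testcase in its own namespace so it can be restarted in isolation (point 5)."""
    ns = f"e2e-{name}"
    subprocess.run(["kubectl", "create", "namespace", ns], check=False)
    try:
        # Placeholder: invoke the actual testcase (e.g. a BATS file or pytest module).
        result = subprocess.run(["./tests/" + name + ".sh", ns])
        return result.returncode == 0
    finally:
        subprocess.run(["kubectl", "delete", "namespace", ns, "--wait=false"], check=False)


def main() -> int:
    # Point 1: a plain CLI entry point runs the same way locally and on GitHub Actions.
    parser = argparse.ArgumentParser(description="Kubewarden e2e runner sketch")
    parser.add_argument("--provider", default="minikube")
    parser.add_argument("--only", nargs="*", help="run/restart only these testcases (point 4)")
    args = parser.parse_args()

    provision_cluster(args.provider)
    testcases = args.only or ["smoke", "audit-scanner", "opentelemetry"]

    failed = []
    for name in testcases:
        try:
            if not run_testcase(name):
                failed.append(name)  # point 6: clean up and continue with the next one
        except KeyboardInterrupt:
            print(f"cancelled during {name}; re-run with --only {name}")  # point 4
            return 130
    return 1 if failed else 0


if __name__ == "__main__":
    sys.exit(main())
```

Point 3 would slot in between provisioning and the test loop by passing a resolved SystemUnderTest (as in the earlier sketch) to each testcase.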

@flavio flavio added this to the 1.17 milestone Aug 9, 2024
@kravciak kravciak self-assigned this Aug 16, 2024
@kravciak kravciak removed this from the 1.17 milestone Aug 16, 2024
kravciak (Contributor) commented Oct 9, 2024

@viccuad is it ok to close this? I think most of this is covered by the "helmer" PRs I merged yesterday.

  • there are no destructive tests; each testcase is able to (and should) revert to the initial state
  • helm charts can be installed from a local directory
  • we can provide custom YAML files with the _ARGS parameters
