
Parameterized pytest tests run in parallel when executed through the editor #21529

Closed
red8888 opened this issue Jun 28, 2023 · 17 comments
Labels
area-testing triage-needed Needs assignment to the proper sub-team

Comments

@red8888

red8888 commented Jun 28, 2023

Type: Bug

Behaviour

Expected vs. Actual

Expected: the Python extension runs parameterized pytest tests sequentially (the default pytest behavior) both in the Test Explorer AND in the editor.

Actual: when run from the editor, the cases run in parallel. I uncovered this because I have a parameterized test that must run sequentially (it creates and cleans up files). When run in the editor I can see it fail due to a race condition resulting from the tests running in parallel. It never fails when run through the Test Explorer or the pytest command line.

There is an SO post for this same issue with no responses: https://stackoverflow.com/questions/72204026/running-parametrize-pytests-in-vs-code-sometimes-runs-in-parallel-is-there-a-se

Steps to reproduce:

  1. Run a parameterized pytest in the Test Explorer and note that the cases run sequentially.
  2. Run the same parameterized pytest in the editor by clicking the green arrow in the gutter next to the test and see the cases run in parallel (a minimal test that makes the overlap visible is sketched below).
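
A minimal sketch of the kind of test involved (hypothetical; the report does not include the actual test code). If two cases overlap, the assertion fails and the printed timestamps show the overlap; run sequentially, it always passes:

```python
# Hypothetical reproduction sketch, not the reporter's actual test: each case
# "claims" a shared file, so overlapping executions fail the existence check,
# and the printed timestamps show whether the cases ran concurrently.
import os
import time
from datetime import datetime

import pytest

MARKER = "shared_resource.tmp"  # shared on-disk state, so cases must not overlap


@pytest.mark.parametrize("case", [1, 2, 3])
def test_must_run_sequentially(case):
    print(f"start case {case}: {datetime.now()}")
    assert not os.path.exists(MARKER), "another case is still running"
    with open(MARKER, "w") as handle:
        handle.write(str(case))
    time.sleep(2)  # long enough to make any overlap visible
    os.remove(MARKER)
    print(f"end case {case}: {datetime.now()}")
```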

Diagnostic data

  • Python version (& distribution if applicable, e.g. Anaconda): 3.10.11
  • Type of virtual environment used (e.g. conda, venv, virtualenv, etc.): Global
  • Value of the python.languageServer setting: Default
Output for Python in the Output panel (View → Output, change the drop-down in the upper-right of the Output panel to Python)

XXX

User Settings


languageServer: "Pylance"

linting
• flake8Args: "<placeholder>"
• flake8Path: "<placeholder>"
• mypyPath: "<placeholder>"
• pydocstyleArgs: "<placeholder>"
• pylintPath: "<placeholder>"

formatting
• autopep8Path: "<placeholder>"
• provider: "black"
• blackPath: "<placeholder>"
• yapfPath: "<placeholder>"

testing
• pytestArgs: "<placeholder>"
• pytestEnabled: true

Extension version: 2022.18.2
VS Code version: Code 1.73.1 (Universal) (6261075646f055b99068d3688932416f2346dd3b, 2022-11-09T02:08:38.961Z)
OS version: Darwin arm64 22.4.0
Modes:
Sandboxed: No
Remote OS version: Linux x64 5.15.49-linuxkit

System Info
Item Value
CPUs Apple M1 Max (10 x 24)
GPU Status 2d_canvas: enabled
canvas_oop_rasterization: disabled_off
direct_rendering_display_compositor: disabled_off_ok
gpu_compositing: enabled
metal: disabled_off
multiple_raster_threads: enabled_on
opengl: enabled_on
rasterization: enabled
raw_draw: disabled_off_ok
skia_renderer: enabled_on
video_decode: enabled
video_encode: enabled
vulkan: disabled_off
webgl: enabled
webgl2: enabled
webgpu: disabled_off
Load (avg) 9, 10, 10
Memory (System) 32.00GB (0.09GB free)
Process Argv --crash-reporter-id 428f9b51-ebfb-4ac8-948e-55de8b1457ae
Screen Reader no
VM 0%
Item Value
Remote Dev Container: Python 3
OS Linux x64 5.15.49-linuxkit
CPUs unknown (4 x 0)
Memory (System) 7.77GB (3.58GB free)
VM 0%

@github-actions github-actions bot added the triage-needed Needs assignment to the proper sub-team label Jun 28, 2023
@eleanorjboyd
Member

Hello! I will investigate this now, but in the meantime, if you would like to check whether it can still be repro'd on the new rewrite architecture we are releasing, you can do so by adding this setting to your user settings.json: "python.experiments.optInto": ["pythonTestAdapter"]. It might work correctly there, but I have not yet tested it.

Thanks; otherwise I'll reach out when I investigate further.
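
For reference, a minimal settings.json sketch of the opt-in described above (only the one key from the comment; merge it into any existing settings):

```json
{
  "python.experiments.optInto": ["pythonTestAdapter"]
}
```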

@github-actions github-actions bot added the info-needed Issue requires more information from poster label Jun 28, 2023
@FRidh

FRidh commented Jul 12, 2023

This is a tricky issue. The parameterized tests I have consume quite a lot of memory. Having 50+ of them run simultaneously completely hangs the environment.

@eleanorjboyd
Member

@FRidh Could you elaborate more? Are you also expecting a sequential test run, but finding that your tests run in parallel?

@FRidh

FRidh commented Jul 12, 2023

Exactly. Right-clicking on the test in the code viewer and then choosing Run Test runs all of its cases in parallel. Running in parallel should be explicit (and definitely not the default).

@github-actions github-actions bot removed the info-needed Issue requires more information from poster label Jul 12, 2023
@eleanorjboyd
Member

Able to repro, thank you! Will get a fix in.

@eleanorjboyd
Member

@FRidh, how do you know the tests are running in parallel? On further investigation I have noticed that the parameterized tests are run in a different (possibly random) order when started from the gutter but the tests are still executed one at a time.

@github-actions github-actions bot added the info-needed Issue requires more information from poster label Jul 12, 2023
@FRidh

FRidh commented Jul 12, 2023

@FRidh, how do you know the tests are running in parallel? On further investigation I have noticed that the parameterized tests are run in a different (possibly random) order when started from the gutter but the tests are still executed one at a time.

I don't know for sure, but CPU and memory usage are massive in the cases where I encountered the issue. The test uses quite a lot of memory and can spike in CPU usage. When I run it using Run Test I encounter this issue; if I run a case individually using the Test Explorer it is OK, and when I invoke a case specifically using pytest it is OK as well.

@github-actions github-actions bot removed the info-needed Issue requires more information from poster label Jul 12, 2023
@eleanorjboyd
Member

@red8888, could you explain a bit more about your use case? By design, tests should ideally be independent of each other, and pytest tests do not actually have a default ordering or any notion of their order (that behavior can be added through an additional plugin). The gutter could call run on the parametrized test as a whole instead of on each individual test ID (thus maintaining an order), but this would require a larger effort, since the design change would affect all testing in VS Code. Therefore I cannot say whether that design change would even be approved; it would require larger community/extension involvement.
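
Where the shared state is just files on disk, one way to make parameterized cases independent of execution order (however the extension schedules them) is pytest's built-in tmp_path fixture; a minimal sketch:

```python
# Sketch (not from the extension or the report): tmp_path gives every test
# invocation its own temporary directory, so parameterized cases that create
# and clean up files no longer depend on running one at a time or in order.
import pytest


@pytest.mark.parametrize("case", [1, 2, 3])
def test_order_independent(case, tmp_path):
    work_file = tmp_path / f"case_{case}.txt"  # unique per invocation
    work_file.write_text(str(case))
    assert work_file.read_text() == str(case)
```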

@GCaw

GCaw commented Jul 12, 2023

@eleanorjboyd thank you for investigating this. I created the original Stack Overflow question referenced in this bug report. There is example code in the Stack Overflow post, with output, that demonstrates that the tests are running in parallel.

@github-actions github-actions bot added the info-needed Issue requires more information from poster label Jul 12, 2023
@eleanorjboyd
Member

@GCaw, can you try this on our new testing rewrite? You can do so by setting "python.experiments.optInto": ["pythonTestAdapter"]. I just tried it with that setting and got:


test_parallel.py start test1: 2023-07-12 11:11:35.754555
end test1: 2023-07-12 11:11:37.759613
.start test2: 2023-07-12 11:11:37.764993
end test2: 2023-07-12 11:11:39.769859
.start test3: 2023-07-12 11:11:39.775971
end test3: 2023-07-12 11:11:41.781062

@eleanorjboyd
Member

@FRidh, if you set your Python log level to trace and your VS Code log level to trace, you should see a line in your Python output that looks something like this: 2023-07-12 11:18:05.054 [info] Running pytests with arguments: /Users/..../pythonFiles/vscode_pytest/run_pytest_script.py --rootdir /Users/eleanorboyd/testingFiles/from_users/error_skipped_tests -s. Take that command, remove the path ending in run_pytest_script.py, and replace it with python -m pytest, so it looks something like this: python -m pytest --rootdir /Users/eleanorboyd/testingFiles/from_users/error_skipped_tests -s. Run this from your terminal and see how the CPU usage compares. This is the exact command we are running behind the scenes.

@GCaw

GCaw commented Jul 12, 2023

@eleanorjboyd running my example tests with the pythonTestAdapter experiment, I see the same behaviour as you: all the parameterized tests run as one test session, resulting in sequential running of the parameterized tests. This would seem to resolve the issue for me.

@github-actions github-actions bot removed the info-needed Issue requires more information from poster label Jul 12, 2023
@eleanorjboyd
Member

@GCaw that is great to hear! Thank you for hopping on the issue to provide that example test case!

@MetRonnie

...can you try this on our new testing rewrite? You can do so by setting: "python.experiments.optInto": ["pythonTestAdapter"],

Have confirmed this only spawns 1 pytest process for me in both the test explorer and with the editor gutter button 👍

@FRidh

FRidh commented Jul 25, 2023

Same. Issue is resolved for me as well with this setting.

@github-actions github-actions bot removed the info-needed Issue requires more information from poster label Jul 25, 2023
@jcdevil

jcdevil commented Oct 12, 2023

Same. Issue is resolved for me as well with this setting 🥳 Thx !

@eleanorjboyd
Member

Great thanks everyone!

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 12, 2023