Tunnel closes too fast #10
Maybe the tunnel is torn down after one usage? Just saw this on the remote side from sshd:
I found that if I removed the redirectStd commands, which redirect stdout/stderr of the child process back to the provider, the child process outlives the provider process. I think this behaviour is intended by the module, but for some reason it does not work.
@Blefish based on what you mentioned, plus the
Pretty sure it's talking about this panic.
If that's the case, the parent's stderr must not be captured in the logs; otherwise we'd see the panic. It's also plausible for the child to die without anything in the tf logs, since the parent died first. Thoughts on this?
@thecadams thanks for putting up the fork! I've managed to get it working when executing Terraform locally, but unfortunately Terraform Cloud with remote execution does not work. Terraform Cloud shows the same behaviour you describe even with your fork installed: the ssh tunnel stops 2 or 3 seconds after it's started.
You can try release v0.2.3
Hi @AndrewChubatiuk,
Thanks for this module, hoping to make it work over here!
Looks like the tunnel is closed from the Terraform side, about 1-3 seconds after being opened.
Logs: https://gist.github.com/thecadams/e3dc630cadadc9018946fef98aea26ca
Of particular interest in the tf log is this line:
I have a config similar to this:
The rc_prometheus module manages 1 Grafana folder and several dashboards in that folder.

Unfortunately, despite the grafana provider getting the correct host and port, I get connection refused, as it seems the connection shuts down too fast. I also tried using time_sleep resources and provisioners in various places, but nothing worked.

Expected Behavior
There should be a way to control when the tunnel closes.
Actual Behavior
Tunnel closes within 1-3 seconds, causing connection refused errors in the module.

Steps to Reproduce
Something like the config above should repro this.
Important Factoids
Looks like recent changes in this fork removed the "close connection" provider; maybe that should be reinstated to support this use case?
You'll also notice entries in the logs like this, which are unrelated; they appear because I moved the ssh tunnel out of the module since the previous apply:
References