
[Bug]: Unable to connect to HostAccessPorts on container startup #2811

Open
hathvi opened this issue Oct 3, 2024 · 1 comment
Labels
bug An issue with the library

Comments

hathvi commented Oct 3, 2024

Testcontainers version

0.33.0

Using the latest Testcontainers version?

Yes

Host OS

Linux

Host arch

x86_64

Go version

1.23.1

Docker version

Client: Docker Engine - Community
Version: 27.3.1
API version: 1.47
Go version: go1.22.7
Git commit: ce12230
Built: Fri Sep 20 11:41:00 2024
OS/Arch: linux/amd64
Context: default

Server: Docker Engine - Community
Engine:
Version: 27.3.1
API version: 1.47 (minimum version 1.24)
Go version: go1.22.7
Git commit: 41ca978
Built: Fri Sep 20 11:41:00 2024
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.7.22
GitCommit: 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
runc:
Version: 1.1.14
GitCommit: v1.1.14-0-g2c9f560
docker-init:
Version: 0.19.0
GitCommit: de40ad0

Docker info

Client: Docker Engine - Community
Version: 27.3.1
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.17.1
Path: /usr/libexec/docker/cli-plugins/docker-buildx
compose: Docker Compose (Docker Inc.)
Version: v2.29.7
Path: /usr/libexec/docker/cli-plugins/docker-compose

Server:
Containers: 14
Running: 9
Paused: 0
Stopped: 5
Images: 34
Server Version: 27.3.1
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Using metacopy: false
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
runc version: v1.1.14-0-g2c9f560
init version: de40ad0
Security Options:
apparmor
seccomp
Profile: builtin
cgroupns
Kernel Version: 6.8.0-40-generic
Operating System: Ubuntu 22.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 32
Total Memory: 62.52GiB
Name: jhome
ID: 43fdd48e-011e-40da-aff1-b76bc378d203
Docker Root Dir: /var/lib/docker
Debug Mode: false
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

What happened?

I'm attempting to use testcontainers-go to test my Caddy configuration as a gateway to my API server, but I'm running into problems with how testcontainers-go exposes host ports, and I believe this behavior is a bug.

Setup

In my tests, I've set up an httptest.Server to act as my API server, listening on a random port on the host. I then start Caddy in a testcontainer and expose the API server's port to the container via HostAccessPorts. My Caddy configuration defines the API server as an upstream with an active health check, which Caddy performs on startup.

caddyfile_test.go
package caddy_test

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"net/http/httptest"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

const caddyFileContent = `
listen :80

reverse_proxy /api/* {
	to {$API_SERVER}

	health_uri /health
	health_status 200
	health_interval 10s
}
`

func TestCaddyfile(t *testing.T) {
	ctx := context.Background()

	apiServerListener, err := net.Listen("tcp", "0.0.0.0:0")
	assert.NoError(t, err)

	apiServerPort := apiServerListener.Addr().(*net.TCPAddr).Port
	apiServer := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello, World!")
	}))
	apiServer.Listener.Close()
	apiServer.Listener = apiServerListener
	apiServer.Start()

	caddyContainer, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: testcontainers.ContainerRequest{
			Image:        "caddy:2.8.4",
			ExposedPorts: []string{"80/tcp"},
			WaitingFor:   wait.ForLog("server running"),
			Env: map[string]string{
				"API_SERVER": fmt.Sprintf("http://%s:%d", testcontainers.HostInternal, apiServerPort),
			},
			Files: []testcontainers.ContainerFile{
				{
					Reader:            bytes.NewReader([]byte(caddyFileContent)),
					ContainerFilePath: "/etc/caddy/Caddyfile",
				},
			},
			HostAccessPorts: []int{apiServerPort},
		},
		Started: true,
	})
	require.NoError(t, err)
	defer caddyContainer.Terminate(ctx)

	caddyURL, err := caddyContainer.PortEndpoint(ctx, "80/tcp", "http")
	require.NoError(t, err)

	resp, err := http.Get(caddyURL + "/api/test")
	require.NoError(t, err)
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	require.NoError(t, err)

	assert.Equal(t, http.StatusOK, resp.StatusCode)
	assert.Equal(t, "Hello, World!\n", string(body))

	lr, err := caddyContainer.Logs(ctx)
	assert.NoError(t, err)
	lb, err := io.ReadAll(lr)
	assert.NoError(t, err)
	fmt.Printf("== Caddy Logs ==\n%s================\n\n", string(lb))
}
Test Output
== Caddy Logs ==
{"level":"info","ts":1727952070.1965187,"msg":"using config from file","file":"/etc/caddy/Caddyfile"}
{"level":"info","ts":1727952070.1969736,"msg":"adapted config to JSON","adapter":"caddyfile"}
{"level":"warn","ts":1727952070.1969776,"msg":"Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":1}
{"level":"info","ts":1727952070.1972885,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
{"level":"warn","ts":1727952070.1973321,"logger":"http.auto_https","msg":"server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server","server_name":"srv1","http_port":80}
{"level":"info","ts":1727952070.1973393,"logger":"http.auto_https","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
{"level":"info","ts":1727952070.1973433,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
{"level":"info","ts":1727952070.1973994,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc000736a80"}
{"level":"info","ts":1727952070.1974878,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
{"level":"info","ts":1727952070.197532,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 7168 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details."}
{"level":"info","ts":1727952070.1975832,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
+{"level":"info","ts":1727952070.1976013,"logger":"http.log","msg":"server running","name":"srv1","protocols":["h1","h2","h3"]}
{"level":"info","ts":1727952070.1976032,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["listen"]}
-{"level":"info","ts":1727952070.1976056,"logger":"http.handlers.reverse_proxy.health_checker.active","msg":"HTTP request failed","host":"host.testcontainers.internal:43017","error":"Get \"http://host.testcontainers.internal:43017/health\": dial tcp 172.17.0.3:43017: connect: connection refused"}
-{"level":"info","ts":1727952070.1976073,"logger":"http.handlers.reverse_proxy.health_checker.active","msg":"HTTP request failed","host":"host.testcontainers.internal:43017","error":"Get \"http://host.testcontainers.internal:43017/health\": dial tcp 172.17.0.3:43017: connect: connection refused"}
{"level":"info","ts":1727952070.1978004,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1727952070.1978037,"msg":"serving initial configuration"}
{"level":"info","ts":1727952070.197835,"logger":"tls.obtain","msg":"acquiring lock","identifier":"listen"}
{"level":"info","ts":1727952070.1985145,"logger":"tls.obtain","msg":"lock acquired","identifier":"listen"}
{"level":"info","ts":1727952070.1985347,"logger":"tls.obtain","msg":"obtaining certificate","identifier":"listen"}
{"level":"info","ts":1727952070.1985307,"logger":"tls","msg":"cleaning storage unit","storage":"FileStorage:/data/caddy"}
{"level":"info","ts":1727952070.1986136,"logger":"tls","msg":"finished cleaning storage units"}
-{"level":"error","ts":1727952070.3384068,"logger":"http.log.error","msg":"no upstreams available","request":{"remote_ip":"172.17.0.1","remote_port":"54434","client_ip":"172.17.0.1","proto":"HTTP/1.1","method":"GET","host":"localhost:33500","uri":"/api/test","headers":{"User-Agent":["Go-http-client/1.1"],"Accept-Encoding":["gzip"]}},"duration":0.000040091,"status":503,"err_id":"qe8hu1acn","err_trace":"reverseproxy.(*Handler).proxyLoopIteration (reverseproxy.go:486)"}

================

--- FAIL: TestCaddyfile (1.10s)
    caddyfile_test.go:76: 
        	Error Trace:	/home/justin/workspace/test/caddyfile_test.go:76
        	Error:      	Not equal: 
        	            	expected: 200
        	            	actual  : 503
        	Test:       	TestCaddyfile
    caddyfile_test.go:77: 
        	Error Trace:	/home/justin/workspace/test/caddyfile_test.go:77
        	Error:      	Not equal: 
        	            	expected: "Hello, World!\n"
        	            	actual  : ""
        	            	
        	            	Diff:
        	            	--- Expected
        	            	+++ Actual
        	            	@@ -1,2 +1 @@
        	            	-Hello, World!
        	            	 
        	Test:       	TestCaddyfile
FAIL
FAIL	github.com/hathvi/test	1.189s
FAIL

Problem

My problem with this setup is that Caddy logs a "connection refused" error for the health check even though the testcontainer is reported as ready. When I make a request to the Caddy server after startup, I receive an HTTP 503 because the API server wasn't reachable at startup, even though it's running and accepting connections on the host. Caddy continues to return 503 until the next health check marks the upstream healthy.

My Analysis

I can see that HostAccessPorts uses a separate container running an SSH server, then registers a PostReadies lifecycle hook on the Caddy container to set up the forwarding through the SSH container. The forwarding is done by firing off a goroutine that connects to the SSH container with remote port forwarding: it listens on each of the HostAccessPorts ports inside the SSH container and tunnels those connections back to the host.
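
For reference, the mechanism is the standard SSH remote-forwarding pattern: listen on the port inside the SSH container, and only dial back to the host once something connects. A minimal sketch of my understanding, using golang.org/x/crypto/ssh (my simplification, not the library's actual code; forwardHostPort is a hypothetical name):

// Sketch of the remote-forwarding mechanism; a simplification, not
// port_forwarding.go's actual code.
package sketch

import (
	"fmt"
	"io"
	"net"

	"golang.org/x/crypto/ssh"
)

// forwardHostPort (hypothetical) asks sshd in the sidecar container to
// listen on the given port; connections to it are delivered back over
// the SSH connection and dialed through to the listener on the host.
func forwardHostPort(client *ssh.Client, port int) error {
	remote, err := client.Listen("tcp", fmt.Sprintf("0.0.0.0:%d", port))
	if err != nil {
		return err
	}
	go func() {
		for {
			conn, err := remote.Accept()
			if err != nil {
				return // tunnel closed
			}
			// The dial back to the host happens only when a client inside
			// the container connects to the remote port.
			local, err := net.Dial("tcp", fmt.Sprintf("localhost:%d", port))
			if err != nil {
				conn.Close()
				continue
			}
			go func() {
				defer conn.Close()
				defer local.Close()
				go io.Copy(local, conn)
				io.Copy(conn, local)
			}()
		}
	}()
	return nil
}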

PostReadies seems far too late to set up the forwarding. I'm using HostAccessPorts so that my testcontainer can talk to a server on the host, so for the container to be considered ready I'd expect to be able to talk to that server before any of my testing begins. Logically, I'd expect to be able to use a wait strategy that depends on that connection being established, as sketched below.
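
For example, I'd expect to be able to swap the WaitingFor line in my test above for something like the following (a sketch using the existing wait.ForAll and wait.ForHTTP strategies, with net/http also imported). As implemented today this can't succeed, because the tunnel is only created in PostReadies, after all wait strategies have run:

WaitingFor: wait.ForAll(
	wait.ForLog("server running"),
	// Hypothetical: this should only pass once Caddy can reach the API
	// server on the host through the tunnel; today it would block until
	// timeout because the tunnel doesn't exist yet.
	wait.ForHTTP("/api/test").
		WithPort("80/tcp").
		WithStatusCodeMatcher(func(status int) bool {
			return status == http.StatusOK
		}),
),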

Test fix

I created a fork and updated exposeHostPorts to register the lifecycle hook on PreCreates instead of PostReadies. This ensures the host port is reachable through the SSH container from all lifecycle hooks and from the container's own command.

In theory this shouldn't break anything, even if someone sets up the host-side listener in a later lifecycle hook, since connections back to the host port are only established once something connects to the remote port.

testcontainers-go.patch
diff --git a/port_forwarding.go b/port_forwarding.go
index 88f14f2d..ad17fb10 100644
--- a/port_forwarding.go
+++ b/port_forwarding.go
@@ -150,8 +150,8 @@ func exposeHostPorts(ctx context.Context, req *ContainerRequest, ports ...int) (
        // after the container is ready, create the SSH tunnel
        // for each exposed port from the host.
        sshdConnectHook = ContainerLifecycleHooks{
-               PostReadies: []ContainerHook{
-                       func(ctx context.Context, c Container) error {
+               PreCreates: []ContainerRequestHook{
+                       func(ctx context.Context, req ContainerRequest) error {
                                return sshdContainer.exposeHostPort(ctx, req.HostAccessPorts...)
                        },
                },

Relevant log output

No response

Additional information

No response

hathvi commented Oct 3, 2024

I probably should have just created a PR for this so we could discuss it further there. Let me know if you'd like me to do so and I'll find some time.
