Support additional TCP listeners on NLBs #438

Open
jhuntwork opened this issue Oct 5, 2021 · 13 comments

@jhuntwork
Contributor

I have a use case where an NLB needs three listeners: ports 80, 443, and 22. The picture below shows what I have set up manually and would like to achieve with kube-ingress-aws-controller.

[diagram "WithSkipper": an NLB whose HTTP/HTTPS listeners forward to an ALB, plus a TCP listener for SSH forwarding to a NodePort]

The ALB in the picture is already deployed and managed by the ingress controller. I would like to automatically provision and manage the NLB similarly, but it needs the third pass-through TCP listener and target group.

I am willing to provide the additional features through a PR, but I have a few questions:

  • What is the preferred mechanism to define additional listeners and target groups? Should it just be an additional annotation on an Ingress?
  • Might there ever be a use case for more than 3 listeners and should we account for that?
  • As per the above diagram, what would be the correct way to instruct kube-ingress-aws-controller to configure the NLB to route the HTTP and HTTPS listeners to another controller-managed ALB that contains our cert and redirect rule, and to route the SSH pass-through listener to a NodePort?
@jhuntwork
Contributor Author

Through some initial investigations, I was trying to see if I could remove the ALB above as well, and just use an NLB. The issue is that Skipper only listens on one port, and we need a cheap/easy place to do HTTP redirects, which is what the ALB provides.

Currently it seems there is no support for creating a TG that points to another, arbitrary endpoint. I'd have to figure out whether we can find the listening endpoints of an ALB and set up the TG to point at them.
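(For context: AWS does support ALB-type target groups for NLBs, which covers exactly this kind of forwarding. A minimal CLI sketch, assuming that feature; the name, VPC ID, and ARNs below are placeholders:)

# Sketch only: an "alb"-type target group lets an NLB forward TCP to an ALB.
# The VPC ID, target group name, and ARNs here are placeholders.
aws elbv2 create-target-group \
    --name forward-to-alb \
    --protocol TCP --port 443 \
    --target-type alb \
    --vpc-id vpc-0123456789abcdef0

aws elbv2 register-targets \
    --target-group-arn "$TG_ARN" \
    --targets Id="$ALB_ARN"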

@szuecs
Member

szuecs commented Oct 5, 2021

@jhuntwork the recent Skipper version allows you to specify a redirect listener.
Please also see zalando/skipper#1694; we aim to fix it ASAP.

@jhuntwork
Contributor Author

Awesome, thanks!

@szuecs
Member

szuecs commented Oct 5, 2021

Our current implementation is based on an internal container that starts 2 skipper processes:

  1. skipper redirect, listening on port 9998 (https://github.com/zalando-incubator/kubernetes-on-aws/blob/dev/cluster/manifests/skipper/deployment.yaml#L56-L59 and https://github.com/zalando-incubator/kubernetes-on-aws/blob/dev/cluster/manifests/skipper/deployment.yaml#L85-L86)
  2. skipper-ingress, listening on port 9999 and serving everything else

Basically we have a Docker container that starts run.sh, which contains this:

if [ -n "$HTTP_REDIRECT" ]
then
	(skipper -address=:9998 \
		-support-listener='' -metrics-listener='' \
		-inline-routes='redirect: * -> redirectTo(308, "https:") -> <shunt>; health: Path("/healthz") -> status(204) -> <shunt>;' \
		-access-log-disabled) &
fi

# skipper with all args, skipper will be pid 1, because we replace sh and skipper handles shutdown
exec "$@"

I hope it makes sense and you can easily adapt it.
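(A hypothetical invocation of run.sh; the entrypoint wiring and the flags shown are illustrative assumptions, not the production setup:)

# Hypothetical: run.sh wraps the real skipper command line, so the
# redirect side-process starts first and `exec "$@"` makes skipper PID 1.
HTTP_REDIRECT=1 ./run.sh skipper \
    -address=:9999 \
    -kubernetes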

@jhuntwork
Contributor Author

Yeah, looks straightforward, thanks. That run.sh file isn't included in your public container, right?

@szuecs
Member

szuecs commented Oct 6, 2021

No, run.sh is custom.

@AlexanderYastrebov
Member

AlexanderYastrebov commented Oct 6, 2021

> cheap/easy place to do HTTP redirects, which is what the ALB provides

ALB for HTTP redirects is not cheap IMO :)

Please also note that an NLB (or rather the existing NLB+Skipper setup) will not support HTTP/2 with TLS offloading on the NLB. We have a draft, zalando/skipper#1868, to support h2c in Skipper that may improve on this.

@szuecs
Member

szuecs commented Oct 6, 2021

@AlexanderYastrebov the interesting question is about the 3rd TG with a 3rd listener to support, in this case, SSH access for git.

@AlexanderYastrebov
Member

> What is the preferred mechanism to define additional listeners and target groups

Two target groups for an NLB is quite a new feature (#435), currently configured with startup flags.

@AlexanderYastrebov
Member

Maybe the best option would be to manage the NLB outside of the controller, e.g. via a separate CloudFormation stack.
We might consider, though, improving the controller stack to export its target groups: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/walkthrough-crossstackref.html
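(A small sketch of what consuming such an export could look like; the export name is a made-up placeholder:)

# Hypothetical: if the controller stack exported the target group ARN,
# other stacks could import it; list-exports shows what is available.
aws cloudformation list-exports \
    --query "Exports[?Name=='ingress-stack-TargetGroupARN'].Value"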

@jhuntwork
Contributor Author

> Maybe the best option would be to manage the NLB outside of the controller, e.g. via a separate CloudFormation stack.

We can definitely do that; we already have to for some other resources. But it would be nice if we could get the AWS resources we need just by defining a Kubernetes resource. I think the only thing missing at this point is support for a third listener/TG.

@jhuntwork
Contributor Author

I started working on an implementation for this a while ago, but got sidetracked with other things. I need to look at it again, but before I do I just want to ask: are there any recent changes or design thoughts that would impact this potential feature?

@szuecs
Member

szuecs commented Mar 21, 2022

@jhuntwork I think this project is normally low-traffic, and there have been no significant changes.

jhuntwork added a commit to jhuntwork/kube-ingress-aws-controller that referenced this issue Jul 26, 2022
This implements two new annotations in an attempt to cover the use
case described in zalando-incubator#438.

The `zalando.org/aws-nlb-extra-listeners` annotation accepts a JSON
string that describes a list of extra listen/target ports to add to an
NLB. These will be routed to pods matching a specific label in the same
namespace as the ingress. As such, this depends on the AWS CNI mode
feature.

The `zalando.org/aws-nlb-cascade-http-to-alb` annotation allows the NLB to
"cascade" HTTP(S) traffic to another managed ALB. This was added to
solve two issues while testing in a live cluster:

1. When using Skipper, if it is set to accept only HTTPS traffic, it
   will reject requests from an NLB that offloads TLS and passes on
   plain HTTP, which is the current default configuration for NLBs.
   Some clusters or use cases may require end-to-end encryption.
2. When using SSL offloading and testing with GitLab (the use case
   described in the original issue), GitLab became confused about the
   actual protocol and produced incorrect redirect links, which broke
   authentication. This could likely be remedied in the application
   itself; however, the issue is not present when using the ALB and
   redirects are handled there. Since the ALB is already present,
   routing the traffic directly to it is a simple, reasonable choice.
Signed-off-by: Jeremy Huntwork <[email protected]>

jhuntwork added further commits to jhuntwork/kube-ingress-aws-controller that referenced this issue on Jul 26, Aug 17, Aug 19, and Sep 17, 2022, all with the same message.
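(A hypothetical example of the proposed annotations; the JSON field names and values are assumptions based on the description above, not a confirmed schema:)

# Field names (protocol/listenport/targetport/podlabel) and the ingress
# name are assumptions for illustration; see the PR for the actual schema.
kubectl annotate ingress gitlab \
    'zalando.org/aws-nlb-extra-listeners=[{"protocol":"TCP","listenport":22,"targetport":2222,"podlabel":"app=gitlab-shell"}]' \
    'zalando.org/aws-nlb-cascade-http-to-alb=true'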