I am quite confused about the roles of Ingress and Load Balancer in Kubernetes.

As far as I understand, an Ingress is used to map incoming traffic from the internet to services running in the cluster.

The role of a load balancer is to forward traffic to the hosts. In that respect, how does an Ingress differ from a load balancer? Also, what is the concept of a load balancer inside Kubernetes compared to Amazon ELB and ALB?


Current answer

There are 4 ways to allow pods in your cluster to receive external traffic:

1. A pod using hostNetwork: true (allows one pod per node to listen directly on ports of the host node; Minikube, bare metal, and Raspberry Pi setups sometimes go this route, which lets the host node listen on ports 80/443 without a load balancer or advanced cloud load balancer configuration; it also bypasses Kubernetes Services, which can be useful for avoiding SNAT / achieving an effect similar to externalTrafficPolicy: Local in scenarios where that isn't supported, such as on AWS)
2. A NodePort Service
3. A LoadBalancer Service (which builds on NodePort Service)
4. An Ingress Controller + Ingress objects (which build upon the above)
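As a hedged sketch of option 1, a pod spec using hostNetwork: true might look like the following (the pod name and the nginx image are illustrative assumptions, not part of the original answer):

```yaml
# Illustrative sketch: a pod that shares the host's network namespace,
# so the container can listen directly on the node's ports 80/443.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-host          # hypothetical name
spec:
  hostNetwork: true         # bypasses Services; the pod uses the node's IP
  containers:
    - name: nginx
      image: nginx:1.25     # any web server listening on port 80 works here
```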

Say you have 10 websites hosted in your cluster and you want to expose them all to external traffic.

- If you use the LoadBalancer Service type, you will spawn 10 HA cloud load balancers (each one costs money).
- If you use an Ingress Controller, you will spawn 1 HA cloud load balancer (which saves money), and it will point to an Ingress Controller running in your cluster.

An Ingress Controller is:

A Service of type LoadBalancer backed by a pod deployment running in your cluster. Each pod does two things:

- Acts as a Layer 7 load balancer running inside your cluster. (Nginx is a popular choice.)
- Dynamically configures itself based on the Ingress objects in your cluster. (Ingress objects can be thought of as declarative configuration snippets for a Layer 7 load balancer.)

The L7 LB / Ingress Controller inside the cluster load balances and reverse proxies traffic to ClusterIP Services inside the cluster; it can also terminate HTTPS if you have a Kubernetes Secret of type TLS certificate and an Ingress object that references it.

Other answers

Short version:

In Kubernetes, object definitions define the desired state, while controllers watch object definitions in order to achieve that state.

Ingress:

- The "Ingress" object, which does very little on its own but defines L7 load-balancing rules
- The "Ingress Controller", which watches the state of the Ingress objects and creates an L7 LB configuration based on the rules defined in them

LoadBalancer:

"Service"类型为"LoadBalancer"的对象,允许将服务附加到LoadBalancer “负载均衡器控制器”,根据服务对象中定义的规则创建负载均衡器

Ingress

Ingress object:

A Kubernetes object that does nothing on its own, because an Ingress Controller is not included by default. An Ingress object just describes a way to route Layer 7 traffic into your cluster by specifying things like the request path, request domain, and target Kubernetes service, whereas adding a Service object may actually create a service, because a service controller is included in Kubernetes by default.
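For illustration, a minimal Ingress object might look like this (the host, path, and service name are assumptions for the sketch):

```yaml
# Sketch of an Ingress object: purely declarative routing rules,
# inert until an Ingress Controller is running to act on them.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress       # hypothetical name
spec:
  rules:
    - host: example.com       # request domain to match
      http:
        paths:
          - path: /
            pathType: Prefix  # request path to match
            backend:
              service:
                name: web     # target Kubernetes Service (assumed)
                port:
                  number: 80
```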

Ingress Controller:

A Kubernetes Deployment/DaemonSet + Service that:

  1. Listens on specific ports (usually 80 and 443) for web traffic
  2. Watches for the creation, modification, or deletion of Ingress Resources
  3. Creates internal L7 routing rules based on desired state indicated by Ingress Objects

For example, the Nginx Ingress Controller could:

- Use a Service listening on ports 80 and 443 for incoming traffic
- Watch for the creation of Ingress objects and translate the desired state into new server {} sections that are dynamically placed into nginx.conf
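Conceptually, an Ingress rule that routes example.com to a service might be translated into a server {} section like this (a simplified sketch, not the literal output of the Nginx Ingress Controller; the service name is assumed):

```nginx
# Simplified sketch of what the controller could render into nginx.conf;
# the real generated configuration is far more detailed.
server {
    listen 80;
    server_name example.com;   # from the Ingress rule's host field
    location / {
        # forward to the backing Service (hypothetical name/namespace)
        proxy_pass http://web.default.svc.cluster.local:80;
    }
}
```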

LoadBalancer

Load Balancer Controller:

A load balancer controller may be configured on platforms such as AWS and GKE, and provides a way to assign external IPs by creating external load balancers. This functionality is used by:

- Deploying a load balancer controller (if one isn't already deployed)
- Setting the Service type to "LoadBalancer"
- Setting the appropriate annotations on the Service to configure the load balancer

Service type:

When the Service type is set to LoadBalancer and a cloud-provided load balancer controller is present, the service is exposed externally using the cloud provider's load balancer. The NodePort and ClusterIP services, to which the external load balancer routes, are created automatically, assigning the service an external IP and/or DNS.
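As a sketch, a Service of type LoadBalancer could be declared like this (the name, selector label, and ports are illustrative):

```yaml
# Sketch: exposing a set of pods via the cloud provider's load balancer.
apiVersion: v1
kind: Service
metadata:
  name: web-lb                # hypothetical name
spec:
  type: LoadBalancer          # triggers the cloud load balancer controller
  selector:
    app: web                  # pods this service fronts (assumed label)
  ports:
    - port: 80                # external port on the load balancer
      targetPort: 8080        # container port (assumed)
```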

Relation:

The Ingress Controller's Service is often configured with type LoadBalancer so that HTTP and HTTPS requests can be proxied / routed to specific internal services through an external IP.

However, a LoadBalancer is not strictly needed for this, because by using hostNetwork or hostPort you can technically bind a port on the host to a service (allowing you to reach it via the host's external ip:port). Although this isn't officially recommended, as it uses up ports on the actual node.
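The hostPort alternative mentioned above can be sketched as follows (the pod name and image are assumptions); note that it occupies the port on whichever node the pod lands on:

```yaml
# Sketch: binding a container port directly to a port on the host node.
apiVersion: v1
kind: Pod
metadata:
  name: web-hostport          # hypothetical name
spec:
  containers:
    - name: web
      image: nginx:1.25       # illustrative image
      ports:
        - containerPort: 80
          hostPort: 80        # reachable at <node external IP>:80
```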

References

https://kubernetes.io/docs/concepts/configuration/overview/#services

https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/

https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#external-load-balancer-providers

https://kubernetes.io/docs/concepts/services-networking/ingress/

https://kubernetes.io/docs/concepts/architecture/cloud-controller/

https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/

| Feature | Ingress | Load Balancer |
| --- | --- | --- |
| Protocol | HTTP level (network layer 7) | Network layer 4 |
| Additional features | Cookie-based session affinity, Ingress rules, resource backends, path types | Only balances the load |
| Dependency | An Ingress controller needs to be running. Different Kubernetes environments use different implementations of the controller, but several don't provide a default controller at all. | No dependency; built-in support in K8s |
| YAML manifest | There is a separate API for it: `apiVersion: networking.k8s.io/v1` | A Service with `type: LoadBalancer` |
| How it works | The client connects to one of the pods through the Ingress controller. The client first performs a DNS lookup of example.com, and the DNS server (or the local operating system) returns the IP of the Ingress controller. The client then sends an HTTP request to the Ingress controller and specifies example.com in the Host header. From that header, the controller determines which service the client is trying to access, looks up the pod IPs through the Endpoints object associated with the service, and forwards the client's request to one of the pods. | The load balancer redirects traffic to the node port across all the nodes. Clients connect to the service through the load balancer's IP. |

I highly recommend reading NodePort vs LoadBalancer vs Ingress? Knowledge++

A pod has its own IP:PORT, but it is dynamic in nature and changes if the pod is deleted or redeployed.

A Service is assigned a ClusterIP or NodePort (a port on the VMs where the service resource is created) and can be mapped to a set of pods or another backend [see: headless services].

- To reach the right pod from within the cluster, use the ClusterIP
- A NodePort can be used to reach pods from outside the cluster

LoadBalancer [External/Internal]: provided by a cloud provider and points to a ClusterIP or NodePort. You can access the service through the LB's IP.

LB ~> SERVICE (ClusterIP or NodePort) ~> POD
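The middle link of that chain, a NodePort Service, can be sketched like this (the name, label, and port numbers are assumptions):

```yaml
# Sketch: a NodePort Service opens the same port on every node
# and forwards traffic to the selected pods behind its ClusterIP.
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport          # hypothetical name
spec:
  type: NodePort
  selector:
    app: web                  # assumed pod label
  ports:
    - port: 80                # ClusterIP port inside the cluster
      targetPort: 8080        # container port (assumed)
      nodePort: 30080         # reachable at <any node IP>:30080
```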

An Ingress resource is the entry point to the cluster. A load balancer can listen for ingress rules and route traffic to a specific service. [See this example]

LB (Ingress-managed) ~> SERVICE (ClusterIP or NodePort) ~> POD

I found this very interesting article that explains the differences between NodePort, LoadBalancer, and Ingress.

From the article:

LoadBalancer:

A LoadBalancer service is the standard way to expose a service to the internet. On GKE, this will spin up a Network Load Balancer that will give you a single IP address that will forward all traffic to your service. If you want to directly expose a service, this is the default method. All traffic on the port you specify will be forwarded to the service. There is no filtering, no routing, etc. This means you can send almost any kind of traffic to it, like HTTP, TCP, UDP, Websockets, gRPC, or whatever. The big downside is that each service you expose with a LoadBalancer will get its own IP address, and you have to pay for a LoadBalancer per exposed service, which can get expensive!

Ingress:

Ingress is actually NOT a type of service. Instead, it sits in front of multiple services and acts as a “smart router” or entrypoint into your cluster. You can do a lot of different things with an Ingress, and there are many types of Ingress controllers that have different capabilities. The default GKE ingress controller will spin up an HTTP(S) Load Balancer for you. This will let you do both path-based and subdomain-based routing to backend services. For example, you can send everything on foo.yourdomain.example to the foo service, and everything under the yourdomain.example/bar/ path to the bar service. Ingress is probably the most powerful way to expose your services, but can also be the most complicated. There are many types of Ingress controllers, from the Google Cloud Load Balancer, Nginx, Contour, Istio, and more. There are also plugins for Ingress controllers, like cert-manager, that can automatically provision SSL certificates for your services. Ingress is the most useful if you want to expose multiple services under the same IP address, and these services all use the same L7 protocol (typically HTTP). You only pay for one load balancer if you are using the native GCP integration, and because Ingress is “smart” you can get a lot of features out of the box (like SSL, Auth, Routing, etc.).
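The subdomain- and path-based routing described in the quote could be sketched as a single Ingress like this (the service names foo and bar come from the example; everything else is assumed):

```yaml
# Sketch: one Ingress doing both subdomain- and path-based routing.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: routing-example       # hypothetical name
spec:
  rules:
    - host: foo.yourdomain.example
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: foo     # the "foo service" from the example
                port:
                  number: 80
    - host: yourdomain.example
      http:
        paths:
          - path: /bar
            pathType: Prefix
            backend:
              service:
                name: bar     # the "bar service" from the example
                port:
                  number: 80
```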

TL;DR

Ingress sits between the public network (the Internet) and the Kubernetes services that publicly expose our API's implementation. Ingress is capable of providing load balancing, SSL termination, and name-based virtual hosting. That last capability allows multiple APIs or applications to be exposed securely from a single domain name.

Let's start with a practical use case: you have multiple APIs backed by service implementation packages (ASIPs) to deploy under one single domain name. As you are a cutting-edge developer, you implemented a microservices architecture that requires separate deployments for each ASIP so they can be upgraded or scaled individually. Of course, these ASIPs are encapsulated in individual Docker containers and available to Kubernetes from a container repository.

Let's say now that you want to deploy this on Google's GKE K8s. To implement sustained availability, each ASIP instance (replica) is deployed on different nodes (VMs), where each VM has its own cloud-internal IP address. Each ASIP deployment is configured in an aptly named "deployment.yaml" file, where you declaratively specify, among other things, the number of replicas of the given ASIP that K8s should deploy.
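A hedged sketch of such a "deployment.yaml" for one ASIP (all names, labels, the image, and the replica count are illustrative assumptions):

```yaml
# Sketch: one ASIP deployed with multiple replicas for availability.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: asip-foo              # hypothetical ASIP name
spec:
  replicas: 3                 # number of ASIP instances (assumed)
  selector:
    matchLabels:
      app: asip-foo
  template:
    metadata:
      labels:
        app: asip-foo
    spec:
      containers:
        - name: asip-foo
          image: registry.example.com/asip-foo:1.0  # assumed image
          ports:
            - containerPort: 8080
```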

The next step is to expose the API to the outside world and funnel requests to one of the deployed ASIP instances. Since we have many replicas of the same ASIP running on different nodes, we need something that will distribute the requests among those replicas. To resolve this, we can create and apply a "service.yaml" file that will configure a K8s service (KServ) that will be externally exposed and accessible through an IP address. This KServ will take charge of distributing the API's requests among its configured ASIPs. Note that a KServ will be automatically reconfigured by the K8s master when an ASIP's node fails and is restarted. Internal IP addresses are never reused in such a case, and the KServ must be advised of the new ASIP's deployment location.
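A minimal sketch of the "service.yaml" described above (the name, label, and ports are assumptions):

```yaml
# Sketch: a KServ distributing requests among the ASIP replicas.
apiVersion: v1
kind: Service
metadata:
  name: asip-foo-svc          # hypothetical name
spec:
  selector:
    app: asip-foo             # matches the ASIP deployment's pods (assumed)
  ports:
    - port: 80
      targetPort: 8080        # ASIP container port (assumed)
```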

But we have other API service packages that should be exposed on the same domain name. Spinning up a new KServ would create a new external IP address, and we won't be able to expose it on the same domain name. This is where Ingress comes in.

Ingress sits between the Internet and all the KServices we expose to the outside world. Ingress is capable of providing load balancing, SSL termination, and name-based virtual hosting. The latter capability is able to route an incoming request to the right service by analyzing its URL. Of course, Ingress must be configured and applied with an... "ingress.yaml" file that will specify the rewrites and the routes required to send requests to the right KServ.
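One way that "ingress.yaml" could look, routing URL paths to different KServs (the host, paths, and service names are illustrative assumptions):

```yaml
# Sketch: one Ingress exposing several KServs under a single domain.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apis                   # hypothetical name
spec:
  rules:
    - host: api.example.com    # assumed shared domain
      http:
        paths:
          - path: /foo
            pathType: Prefix
            backend:
              service:
                name: asip-foo-svc   # assumed KServ name
                port:
                  number: 80
          - path: /bar
            pathType: Prefix
            backend:
              service:
                name: asip-bar-svc   # assumed KServ name
                port:
                  number: 80
```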

Internet -> Ingress -> K8s Services -> Replicas

So, with the right Ingress, KServices, and ASIP configurations, we can securely expose many APIs using the same domain name.