I am quite confused about the roles of Ingress and Load Balancer in Kubernetes.

As far as I understand, Ingress is used to map incoming traffic from the internet to the services running in the cluster.

The role of a load balancer is to forward traffic to a host. In that regard, how does an Ingress differ from a load balancer? Also, what is the concept of a load balancer inside Kubernetes compared to Amazon ELB and ALB?


Current answer

I found a very interesting article that explains the differences between NodePort, LoadBalancer and Ingress.

From the article:

LoadBalancer:

A LoadBalancer service is the standard way to expose a service to the internet. On GKE, this will spin up a Network Load Balancer that will give you a single IP address that will forward all traffic to your service. If you want to directly expose a service, this is the default method. All traffic on the port you specify will be forwarded to the service. There is no filtering, no routing, etc. This means you can send almost any kind of traffic to it, like HTTP, TCP, UDP, Websockets, gRPC, or whatever. The big downside is that each service you expose with a LoadBalancer will get its own IP address, and you have to pay for a LoadBalancer per exposed service, which can get expensive!
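For concreteness, here is a minimal sketch of such a Service of type LoadBalancer; the service name, label and ports are hypothetical and assume a matching Deployment already exists:

```yaml
# Hypothetical Service of type LoadBalancer: the cloud provider allocates one
# external IP and forwards all traffic on port 80 to the matching pods.
apiVersion: v1
kind: Service
metadata:
  name: foo-service          # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: foo                 # assumes pods labeled app=foo exist
  ports:
    - port: 80               # port exposed on the external load balancer
      targetPort: 8080       # port the pods actually listen on
```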

Ingress:

Ingress is actually NOT a type of service. Instead, it sits in front of multiple services and acts as a “smart router” or entrypoint into your cluster. You can do a lot of different things with an Ingress, and there are many types of Ingress controllers that have different capabilities. The default GKE ingress controller will spin up an HTTP(S) Load Balancer for you. This will let you do both path-based and subdomain-based routing to backend services. For example, you can send everything on foo.yourdomain.example to the foo service, and everything under the yourdomain.example/bar/ path to the bar service. Ingress is probably the most powerful way to expose your services, but can also be the most complicated. There are many types of Ingress controllers, from the Google Cloud Load Balancer, Nginx, Contour, Istio, and more. There are also plugins for Ingress controllers, like cert-manager, that can automatically provision SSL certificates for your services. Ingress is the most useful if you want to expose multiple services under the same IP address, and these services all use the same L7 protocol (typically HTTP). You only pay for one load balancer if you are using the native GCP integration, and because Ingress is “smart” you can get a lot of features out of the box (like SSL, Auth, Routing, etc).
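As a rough sketch of the routing described in the quote (the hosts and service names come from the quoted example and are illustrative only), an Ingress resource might look like this:

```yaml
# Hypothetical Ingress: subdomain-based routing to "foo", path-based routing to "bar".
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: foo.yourdomain.example   # everything on this host goes to the foo service
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: foo
                port:
                  number: 80
    - host: yourdomain.example
      http:
        paths:
          - path: /bar               # everything under /bar goes to the bar service
            pathType: Prefix
            backend:
              service:
                name: bar
                port:
                  number: 80
```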

Other answers

TL;DR

Ingress sits between the public network (the internet) and the Kubernetes services that publicly expose our API's implementation. Ingress is able to provide load balancing, SSL termination and name-based virtual hosting. Ingress capabilities allow securely exposing multiple APIs or applications from a single domain name.

Let's start with a practical use case: you have multiple APIs backed by service implementation packages (ASIP) to deploy under one single domain name. Being a cutting-edge developer, you implemented a micro-services architecture that requires a separate deployment for each ASIP so that they can be upgraded or scaled individually. Of course, these ASIPs are encapsulated in individual Docker containers and made available to Kubernetes from a container repository.

Let's say you now want to deploy this on Google's GKE K8s. To implement sustained availability, each ASIP instance (replica) is deployed on different nodes (VMs), where each VM has its own cloud-internal IP address. Each ASIP deployment is configured in an aptly named 'deployment' file, in which you declaratively specify, among other things, the number of replicas of the given ASIP that K8s should deploy.
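A minimal sketch of what such a deployment file could look like (the names, image and replica count are hypothetical):

```yaml
# Hypothetical Deployment for one ASIP, asking K8s to keep 3 replicas running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo-asip
spec:
  replicas: 3                 # number of ASIP instances K8s should keep running
  selector:
    matchLabels:
      app: foo-asip
  template:
    metadata:
      labels:
        app: foo-asip
    spec:
      containers:
        - name: foo-asip
          image: registry.example.com/foo-asip:1.0   # hypothetical image from your container repository
          ports:
            - containerPort: 8080
```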

The next step is to expose the API to the outside world and funnel requests to one of the deployed ASIP instances. Since we have many replicas of the same ASIP running on different nodes, we need something that will distribute requests among those replicas. To resolve this, we can create and apply a "service.yaml" file that will configure a K8s service (KServ) that will be externally exposed and accessible through an IP address. This KServ will take charge of distributing the API's requests among its configured ASIPs. Note that a KServ will be automatically reconfigured by the K8s master when an ASIP's node fails and is restarted. Internal IP addresses are never reused in such cases, and the KServ must be advised of the new ASIP's deployment location.
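A sketch of what such a "service.yaml" could contain, assuming the hypothetical Deployment above (pods labeled app: foo-asip):

```yaml
# Hypothetical KServ: one stable address that load-balances requests across the ASIP replicas.
apiVersion: v1
kind: Service
metadata:
  name: foo-kserv
spec:
  type: LoadBalancer     # externally reachable IP; a ClusterIP/NodePort service is typically used when fronted by an Ingress
  selector:
    app: foo-asip        # matches the pods created by the Deployment above
  ports:
    - port: 80
      targetPort: 8080
```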

But we have other API service packages that should be exposed on the same domain name. Spinning up a new KServ would create a new external IP address, and we would not be able to expose it on the same domain name. This is where Ingress comes in.

Ingress sits between the internet and all the KServices we expose to the outside world. Ingress is able to provide load balancing, SSL termination and name-based virtual hosting. The latter capability is able to route an incoming request to the right service by analyzing its URL. Of course, Ingress must be configured and applied with an... 'ingress' configuration file that specifies the rewrites and the routes required to send a request to the right KServ.

Internet -> Ingress -> K8s Services -> Replicas
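A sketch of the kind of ingress configuration described above (hostnames and service names are illustrative; the rewrite annotation shown assumes the NGINX ingress controller):

```yaml
# Hypothetical ingress file: one domain, two KServices, paths rewritten before forwarding.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /   # only meaningful with the NGINX ingress controller
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /foo
            pathType: Prefix
            backend:
              service:
                name: foo-kserv
                port:
                  number: 80
          - path: /bar
            pathType: Prefix
            backend:
              service:
                name: bar-kserv
                port:
                  number: 80
```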

So, with the right Ingress, KServices and ASIPs configuration, we can securely expose many APIs using the same domain name.

Ingress routes traffic internally to multiple services based on L7 rules (headers, URI, etc.).

A load balancer routes external traffic to a single service.

Simply put, a load balancer distributes requests among multiple backend services (of the same type), whereas an ingress is more like an API gateway (reverse proxy) that routes a request to a specific backend service based on, for example, the URL.

To understand this, I think it is best to ask "why do we need an Ingress, and why is a load balancer not enough?"

I found the best answer in the book Kubernetes in Action.

Understanding why Ingresses are needed: one important reason is that each LoadBalancer service requires its own load balancer with its own public IP address, whereas an Ingress only requires one, even when providing access to dozens of services. When a client sends an HTTP request to the Ingress, the host and path in the request determine which service the request is forwarded to.