Question 1: I'm reading the documentation and I'm a little confused by the wording. It says:

ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.

NodePort: Exposes the service on each Node's IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You'll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.

LoadBalancer: Exposes the service externally using a cloud provider's load balancer. NodePort and ClusterIP services, to which the external load balancer will route, are automatically created.

Does the NodePort service type still use the ClusterIP, just on a different port that is open to external clients? So in that case, is <NodeIP>:<NodePort> the same as <ClusterIP>:<NodePort>?

Or is the NodeIP actually the IP found when you run kubectl get nodes, rather than the virtual IP used for the ClusterIP service type?

Question 2 - Also, regarding the diagram linked below:

Is there any particular reason the Client is placed inside the Node? I assumed it would need to be inside the cluster in the case of the ClusterIP service type?

If the same diagram were drawn for NodePort, would it be valid to draw the client completely outside both the Node and the cluster? Or am I missing the point entirely?


Current answer

ClusterIP: Services are reachable by pods/services in the cluster. If I make a service called myservice in the default namespace of type: ClusterIP, then the following predictable static DNS address for the service will be created:

myservice.default.svc.cluster.local (or just myservice.default, or, for pods in the default namespace, simply "myservice")

And that DNS name can only be resolved by pods and services inside the cluster.
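As a rough sketch, such a ClusterIP service could be declared as below; only the name myservice and the default namespace come from the example above, the selector and ports are hypothetical:

apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: default
spec:
  type: ClusterIP            # also the default if type is omitted
  selector:
    app: myapp               # hypothetical label on the backing pods
  ports:
    - port: 80               # pods in the cluster reach myservice.default.svc.cluster.local:80
      targetPort: 8080       # traffic is forwarded to this container port on the pods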

NodePort: Services are reachable by clients on the same LAN / clients who can ping the K8s host nodes (and by pods/services in the cluster). (Note: for security, your K8s host nodes should be on a private subnet, so clients on the internet won't be able to reach this service.) If I make a service called mynodeportservice in the mynamespace namespace of type: NodePort on a 3-node Kubernetes cluster, then a Service of type: ClusterIP will be created and it'll be reachable by clients inside the cluster at the following predictable static DNS address:

mynodeportservice.mynamespace.svc.cluster.local (or just mynodeportservice.mynamespace)

For each port that mynodeportservice listens on, a nodePort in the range 30000 - 32767 will be chosen at random, so that external clients outside the cluster can hit the ClusterIP service that exists inside the cluster. Let's say our 3 K8s host nodes have IPs 10.10.10.1, 10.10.10.2, and 10.10.10.3, the Kubernetes service is listening on port 80, and the nodePort picked at random was 31852. A client outside the cluster could then visit 10.10.10.1:31852, 10.10.10.2:31852, or 10.10.10.3:31852 (since the nodePort is listened for by every Kubernetes host node), and kube-proxy will forward the request to mynodeportservice's port 80.
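A sketch of what mynodeportservice from that scenario might look like. The selector and targetPort are assumptions; nodePort is normally chosen at random from 30000 - 32767 and is pinned to 31852 here only to match the example:

apiVersion: v1
kind: Service
metadata:
  name: mynodeportservice
  namespace: mynamespace
spec:
  type: NodePort
  selector:
    app: myapp               # hypothetical pod label
  ports:
    - port: 80               # the ClusterIP part: reachable inside the cluster on port 80
      targetPort: 80         # container port on the backing pods (assumed here)
      nodePort: 31852        # external clients hit 10.10.10.1:31852, 10.10.10.2:31852, or 10.10.10.3:31852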

LoadBalancer: Services are reachable by everyone connected to the internet.* (A common architecture is that the L4 LB is publicly accessible on the internet, by putting it in a DMZ or giving it both a private and a public IP, while the K8s host nodes are on a private subnet.) (Note: this is the only service type that doesn't work in 100% of Kubernetes implementations, e.g. bare-metal Kubernetes; it works when Kubernetes has cloud-provider integrations.) If you make mylbservice, then an L4 LB VM will be spawned (and a ClusterIP service and a NodePort service will be implicitly spawned as well). This time our nodePort is 30222. The idea is that the L4 LB will have a public IP of 1.2.3.4 and will load balance and forward traffic to the 3 K8s host nodes, which have private IP addresses (10.10.10.1:30222, 10.10.10.2:30222, 10.10.10.3:30222), and then kube-proxy will forward it to the service of type ClusterIP that exists inside the cluster.
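A sketch of mylbservice under the same assumptions (hypothetical selector and targetPort; nodePort pinned to 30222 to match the example; the public IP 1.2.3.4 is assigned by the cloud provider rather than declared in the manifest):

apiVersion: v1
kind: Service
metadata:
  name: mylbservice
spec:
  type: LoadBalancer
  selector:
    app: myapp               # hypothetical pod label
  ports:
    - port: 80               # ClusterIP port inside the cluster
      targetPort: 80         # container port on the pods (assumed)
      nodePort: 30222        # the L4 LB forwards to 10.10.10.x:30222 on each node
# The cloud provider provisions the L4 LB and attaches the public IP (1.2.3.4 in the example).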


You also asked:
Does the NodePort service type still use the ClusterIP? Yes.*
Or is the NodeIP actually the IP found when you run kubectl get nodes? Also yes.*
Let's draw a parallel between the fundamentals: a container is inside a pod, a pod is inside a replica set, and a replica set is inside a deployment. Similarly: a ClusterIP service is part of a NodePort service, and a NodePort service is part of a LoadBalancer service.


In the diagram you showed, the Client would be a pod inside the cluster.

Other answers

clusterIP: an IP reachable within the cluster (across the nodes inside the cluster).

nodeA : pod1 => clusterIP1, pod2 => clusterIP2
nodeB : pod3 => clusterIP3.

pod3 can talk to pod1 via their clusterIP network.

nodeport: to make pods accessible from outside the cluster via nodeIP:nodeport, it creates/keeps a clusterIP above as its clusterIP network.

nodeA => nodeIPA : nodeportX
nodeB => nodeIPB : nodeportX

You can access the service on pod1 via nodeIPA:nodeportX or nodeIPB:nodeportX. Either one works, because kube-proxy (installed on each node) will receive your request and distribute it across the nodes [redirect it, in iptables terms] using the clusterIP network.

loadbalancer

Basically, it just puts an LB in front, so that inbound traffic is distributed to nodeIPA:nodeportX and nodeIPB:nodeportX, and then it continues with flow 2 above.

ClusterIP exposes the following:

spec.clusterIp:spec.ports[*].port

You can only access this service while inside the cluster. It is accessible from its spec.clusterIp port. If spec.ports[*].targetPort is set, it will route from the port to the targetPort. The CLUSTER-IP you get when calling kubectl get services is the IP assigned to this service within the cluster.
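A minimal sketch of that port-to-targetPort routing; the name, selector, and port numbers here are all hypothetical:

apiVersion: v1
kind: Service
metadata:
  name: example-clusterip     # hypothetical name
spec:
  type: ClusterIP
  selector:
    app: example              # hypothetical pod label
  ports:
    - port: 80                # spec.ports[*].port: clients inside the cluster use <CLUSTER-IP>:80
      targetPort: 8080        # spec.ports[*].targetPort: the request is forwarded to the pod's port 8080

kubectl get services would then show this service together with its assigned CLUSTER-IP.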

NodePort exposes the following:

<NodeIP>:spec.ports[*].nodePort
spec.clusterIp:spec.ports[*].port

If you access this service on a nodePort from the node's external IP, it will route the request to spec.clusterIp:spec.ports[*].port, which in turn routes it to your spec.ports[*].targetPort, if set. The service can also be accessed in the same way as a ClusterIP service.

Your NodeIPs are the external IP addresses of the nodes. You cannot access your service from spec.clusterIp:spec.ports[*].nodePort.
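A sketch of the reachable and unreachable combinations; the name, selector, and port numbers are hypothetical, and nodePort 30080 is chosen only for illustration:

apiVersion: v1
kind: Service
metadata:
  name: example-nodeport      # hypothetical name
spec:
  type: NodePort
  selector:
    app: example              # hypothetical pod label
  ports:
    - port: 80                # reachable inside the cluster as <CLUSTER-IP>:80
      targetPort: 8080        # final destination on the pod
      nodePort: 30080         # reachable from outside as <NodeIP>:30080
# Not reachable: <CLUSTER-IP>:30080 - the nodePort is only bound on the nodes' IPs.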

LoadBalancer exposes the following:

spec.loadBalancerIp:spec.ports[*].port
<NodeIP>:spec.ports[*].nodePort
spec.clusterIp:spec.ports[*].port

You can access this service from your load balancer's IP address, which routes your request to a nodePort, which in turn routes the request to the clusterIP port. You can access this service as you would a NodePort or a ClusterIP service as well.
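A sketch of those three layers in one manifest; the name, selector, ports, and the address 203.0.113.10 are hypothetical, and spec.loadBalancerIP is only honored by some cloud providers:

apiVersion: v1
kind: Service
metadata:
  name: example-lb            # hypothetical name
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10  # request flow: 203.0.113.10:80 -> <NodeIP>:30090 -> <CLUSTER-IP>:80 -> pod:8080
  selector:
    app: example              # hypothetical pod label
  ports:
    - port: 80                # spec.ports[*].port
      targetPort: 8080        # port on the backing pods
      nodePort: 30090         # spec.ports[*].nodePort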

Below is the answer to Question 2 about the diagram, since it still doesn't seem to have been answered directly:

Is there any particular reason the Client is placed inside the Node? I assumed it would need to be inside the cluster in the case of the ClusterIP service type?

In the diagram, the Client is placed inside the Node to highlight the fact that ClusterIP is only accessible on a machine that has a running kube-proxy daemon. Kube-proxy is responsible for configuring iptables according to the data provided by the apiserver (which is also visible in the diagram). So if you create a virtual machine, put it into the network where the Nodes of your cluster are, and properly configure networking on that machine so that individual cluster pods are accessible from there, even then ClusterIP services will not be accessible from that VM unless the VM has its iptables configured properly (which doesn't happen without kube-proxy running on that VM).

If the same diagram were drawn for NodePort, would it be valid to draw the client completely outside both the Node and the cluster, or am I missing the point entirely?

It is valid to draw the client outside the Node and the cluster, because a NodePort is reachable by any machine that can reach the cluster nodes and the corresponding port, including machines outside the cluster.

To clarify, at a simpler level, for anyone looking for the difference between the three: you can expose your service with minimal exposure using ClusterIp (within the k8s cluster only), with NodePort (reachable from outside the k8s cluster but within the nodes' network), or with LoadBalancer (to the external world, or whatever you defined in your LB).

ClusterIp exposure < NodePort exposure < LoadBalancer exposure

ClusterIp: exposes the service within the k8s cluster, via ip/name:port
NodePort: exposes the service outside the k8s cluster, via the internal-network VMs' ip/name:port
LoadBalancer: exposes the service to the external world, or whatever you defined in your LB.
