Question 1: I'm reading the documentation and I'm a little confused by the wording. It says:
ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType
NodePort: Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the service externally using a cloud provider’s load balancer. NodePort and ClusterIP services, to which the external load balancer will route, are automatically created.
Does the NodePort service type still use the ClusterIP, just on a different port that is open to external clients? So in that case, is <NodeIP>:<NodePort> the same as <ClusterIP>:<NodePort>?
Or is the NodeIP actually the IP you find when you run kubectl get nodes, rather than the virtual IP used for the ClusterIP service type?
Question 2: Also, for the diagram in the link below:
Is there any particular reason why the client is drawn inside the node? I assumed it would need to be inside the cluster in the case of a ClusterIP service type?
If the same diagram were drawn for NodePort, would it be valid to draw the client completely outside both the node and the cluster? Or am I missing the point entirely?
Assume you created an Ubuntu VM on your local machine, with IP address 192.168.1.104.
You log in to the VM and install Kubernetes. Then you create a pod running the nginx image.
1. If you want to access this nginx pod from within your VM, you create a ClusterIP service bound to that pod, for example:
$ kubectl expose deployment nginxapp --name=nginxclusterip --port=80 --target-port=8080
Then, in a browser, you open the IP address of nginxclusterip on port 80, like this:
http://10.152.183.2:80
2. If you want to access this nginx pod from your host machine, you need to expose your deployment with a NodePort service, for example:
$ kubectl expose deployment nginxapp --name=nginxnodeport --port=80 --target-port=8080 --type=NodePort
Now you can access nginx from your host machine like this:
http://192.168.1.104:31865/
On my dashboard, they show up as:
The diagram below shows the basic relationship.
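For reference, the two kubectl expose commands above correspond roughly to Service manifests like the following (a minimal sketch; the selector app: nginxapp is an assumption about how the nginxapp deployment labels its pods):

apiVersion: v1
kind: Service
metadata:
  name: nginxclusterip
spec:
  type: ClusterIP            # the default; reachable only from inside the cluster
  selector:
    app: nginxapp            # assumed pod label of the nginxapp deployment
  ports:
  - port: 80                 # the port on the service's cluster IP
    targetPort: 8080         # the container port traffic is forwarded to
---
apiVersion: v1
kind: Service
metadata:
  name: nginxnodeport
spec:
  type: NodePort             # additionally opens a static port on every node
  selector:
    app: nginxapp
  ports:
  - port: 80
    targetPort: 8080         # a nodePort (e.g. 31865) is auto-assigned unless set explicitly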
A ClusterIP service exposes the following:
spec.clusterIP:spec.ports[*].port
You can only access this service while inside the cluster. It is reachable at its spec.clusterIP on the given port. If spec.ports[*].targetPort is set, traffic is routed from the port to that targetPort. The CLUSTER-IP you get when calling kubectl get services is the IP assigned to this service inside the cluster.
A NodePort service exposes the following:
<NodeIP>:spec.ports[*].nodePort
spec.clusterIP:spec.ports[*].port
If you access this service on a nodePort from the node's external IP, it routes the request to spec.clusterIP:spec.ports[*].port, which in turn routes it to your spec.ports[*].targetPort, if set. The service can also be accessed in the same way as a ClusterIP service.
Your NodeIP is the external IP address of the node. You cannot access the service at spec.clusterIP:spec.ports[*].nodePort.
A LoadBalancer service exposes the following:
spec.loadBalancerIP:spec.ports[*].port
<NodeIP>:spec.ports[*].nodePort
spec.clusterIP:spec.ports[*].port
You can access this service from your load balancer's IP address, which routes your request to a nodePort, which in turn routes it to the clusterIP port. You can also access this service as you would a NodePort or ClusterIP service.
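To make the field names concrete, here is a sketch of a single LoadBalancer-type Service that carries all of the values listed above (the name, selector, and addresses are made up for illustration; clusterIP is normally assigned by Kubernetes, and loadBalancerIP is only honored by some cloud providers):

apiVersion: v1
kind: Service
metadata:
  name: my-service              # hypothetical name
spec:
  type: LoadBalancer
  clusterIP: 10.152.183.2       # spec.clusterIP (usually left for Kubernetes to assign)
  loadBalancerIP: 203.0.113.7   # spec.loadBalancerIP (support depends on the cloud provider)
  selector:
    app: my-app                 # hypothetical pod label
  ports:
  - port: 80                    # spec.ports[*].port
    targetPort: 8080            # spec.ports[*].targetPort
    nodePort: 31865             # spec.ports[*].nodePort (30000-32767)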
clusterIP: an IP accessible inside the cluster (across the nodes within the cluster).
nodeA : pod1 => clusterIP1, pod2 => clusterIP2
nodeB : pod3 => clusterIP3.
pod3 can talk to pod1 via their clusterIP network.
nodePort: to make a pod accessible from outside the cluster through nodeIP:nodePort, it creates/keeps the clusterIP above as its clusterIP network.
nodeA => nodeIPA : nodeportX
nodeB => nodeIPB : nodeportX
You can access the service on pod1 via either nodeIPA:nodeportX or nodeIPB:nodeportX. Either way works, because kube-proxy (installed on every node) receives your request and distributes it across the nodes [redirects it, in iptables terms] using the clusterIP network.
LoadBalancer
Basically, it just puts an LB in front, so that inbound traffic is distributed to nodeIPA:nodeportX and nodeIPB:nodeportX, and then continues with flow 2 above.
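As a quick way to see that behavior, assume two nodes at 192.168.1.104 and 192.168.1.105 (the second address is hypothetical) and the nodePort 31865 from the earlier example; either request should reach the same service, no matter which node actually runs the pod:

$ curl http://192.168.1.104:31865/   # handled by kube-proxy on node A
$ curl http://192.168.1.105:31865/   # handled by kube-proxy on node B; routed to the same pods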
Summary:
There are five types of Services:
ClusterIP (default): Internal clients send requests to a stable internal IP address.
NodePort: Clients send requests to the IP address of a node on one or more nodePort values that are specified by the Service.
LoadBalancer: Clients send requests to the IP address of a network load balancer.
ExternalName: Internal clients use the DNS name of a Service as an alias for an external DNS name.
Headless: You can use a headless service when you want a Pod grouping, but don't need a stable IP address.
The NodePort type is an extension of the ClusterIP type. So a Service of type NodePort has a cluster IP address.
The LoadBalancer type is an extension of the NodePort type. So a Service of type LoadBalancer has a cluster IP address and one or more nodePort values.
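The headless type does not get its own section in the details below, so here is a minimal sketch: setting clusterIP: None makes the Service headless, and the cluster DNS then returns the individual Pod IPs instead of one stable Service IP (the name and selector are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: my-headless-service   # hypothetical name
spec:
  clusterIP: None             # this is what makes the Service headless
  selector:
    app: my-app               # hypothetical pod label
  ports:
  - port: 80
    targetPort: 8080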
Illustrated with images:
Details
ClusterIP
ClusterIP is the default and most common service type.
Kubernetes will assign a cluster-internal IP address to ClusterIP service. This makes the service only reachable within the cluster.
You cannot make requests to service (pods) from outside the cluster.
You can optionally set cluster IP in the service definition file.
Use Cases
Inter-service communication within the cluster. For example, communication between the front-end and back-end components of your app.
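For example, assuming the back-end is exposed through a ClusterIP Service named backend-service in the default namespace (both names are hypothetical), the front-end would normally use the in-cluster DNS name rather than the raw cluster IP:

# from any pod in the same namespace
$ curl http://backend-service:80/
# or with the fully qualified in-cluster DNS name
$ curl http://backend-service.default.svc.cluster.local:80/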
NodePort
NodePort service is an extension of ClusterIP service. A ClusterIP Service, to which the NodePort Service routes, is automatically created.
It exposes the service outside of the cluster by adding a cluster-wide port on top of ClusterIP.
NodePort exposes the service on each Node’s IP at a static port (the NodePort). Each node proxies that port into your Service. So, external traffic has access to a fixed port on each Node. It means any request to your cluster on that port gets forwarded to the service.
You can contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
Node port must be in the range of 30000–32767. Manually allocating a port to the service is optional. If it is undefined, Kubernetes will automatically assign one.
If you are going to choose node port explicitly, ensure that the port was not already used by another service.
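As a sketch, pinning the port explicitly only requires adding nodePort to the port entry of the Service spec (30080 here is an arbitrary example that must not collide with another Service):

apiVersion: v1
kind: Service
metadata:
  name: nginxnodeport
spec:
  type: NodePort
  selector:
    app: nginxapp             # assumed pod label, as in the earlier example
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080           # chosen manually; omit this field to let Kubernetes assign one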
Use Cases
When you want to enable external connectivity to your service.
Using a NodePort gives you the freedom to set up your own load balancing solution, to configure environments that are not fully supported by Kubernetes, or even to expose one or more nodes’ IPs directly.
Prefer to place a load balancer in front of your nodes, so that the failure of a single node does not make the service unreachable.
LoadBalancer
LoadBalancer service is an extension of NodePort service. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
It integrates NodePort with cloud-based load balancers.
It exposes the Service externally using a cloud provider’s load balancer.
Each cloud provider (AWS, Azure, GCP, etc) has its own native load balancer implementation. The cloud provider will create a load balancer, which then automatically routes requests to your Kubernetes Service.
Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.
The actual creation of the load balancer happens asynchronously.
Every time you want to expose a service to the outside world, you have to create a new LoadBalancer and get an IP address.
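A minimal sketch of such a Service (name and selector are assumptions) looks like this; because provisioning is asynchronous, kubectl get service shows the EXTERNAL-IP as <pending> until the cloud load balancer is ready:

apiVersion: v1
kind: Service
metadata:
  name: my-lb-service          # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: my-app                # hypothetical pod label
  ports:
  - port: 80
    targetPort: 8080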
Use Cases
When you are using a cloud provider to host your Kubernetes cluster.
ExternalName
Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service.
You specify these Services with the spec.externalName parameter.
It maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value.
No proxying of any kind is established.
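A minimal sketch of such a Service, using the hostname from the example above (the Service name is hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: my-external-db               # hypothetical name
spec:
  type: ExternalName
  externalName: foo.bar.example.com  # lookups of my-external-db return a CNAME to this host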
Use Cases
This is commonly used to create a service within Kubernetes to represent an external datastore like a database that runs externally to Kubernetes.
You can use that ExternalName service (as a local service) when Pods from one namespace talk to a service in another namespace.