kubernetes - GCP Container Engine with no public IP VMs
So I created a cluster containing 4 machines using the command:
gcloud container clusters create "[cluster-name]" \
  --machine-type "n1-standard-1" \
  --image-type "cos" \
  --disk-size "100" \
  --num-nodes "4"
and I can see that it creates 4 VM instances inside Compute Engine. I set up deployments pointing to one or more entries in Container Registry, plus a single service exposing a public IP.
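A minimal sketch of that kind of setup, assuming a hypothetical image at gcr.io/[project-id]/my-app (the image and resource names here are placeholders, not the ones from my actual cluster):

# create a deployment from an image in Container Registry
kubectl create deployment my-app --image=gcr.io/[project-id]/my-app

# expose it through a single service with a public IP (an external network load balancer)
kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080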
All of this works well, but it bothers me that the 4 VM instances it created each have a public IP. Please correct me if I'm wrong, but my understanding of what happens behind the scenes is (the commands after this list can be used to verify each step):
1. A container cluster is created.
2. VM instances are created based on #1.
3. An instance group is created, with the VM instances from #2 as members.
4. (Since I have one service exposing a public IP) a network load balancer is created, pointing to the instance group from #3 or to the VM instances from #2.
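A rough way to inspect each of those pieces with the gcloud CLI (the resource names are whatever GKE generated for the cluster, so only the list commands are shown; this assumes the default project and zone are already configured):

# the node VMs from #2, including their external IPs
gcloud compute instances list

# the instance group from #3
gcloud compute instance-groups list

# the network load balancer from #4: its forwarding rule and the target pool it points at
gcloud compute forwarding-rules list
gcloud compute target-pools list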
Looking at this, I don't think I need a public IP on each of the VM instances the cluster created.
I have been reading the documentation, and although I think I might have missed something, I can't seem to find a configuration argument that would allow me to achieve this.
Currently, GKE VMs do have public IP addresses, but firewall rules are set up to block unauthorized network connections. Service or Ingress resources are still accessed through the load balancer's public IP address.
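A quick sketch of how to check both of those claims (the "gke-" name filter is an assumption based on the usual prefix GKE uses for the rules it creates):

# firewall rules protecting the cluster's nodes
gcloud compute firewall-rules list --filter="name~^gke-"

# the EXTERNAL-IP column shows the load balancer's public IP for the service
kubectl get service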
As of this writing, there is no way to prevent cluster nodes from getting public IP addresses.