kubernetes - GCP Container Engine with no public IP VMs


So I created a cluster containing 4 machines using this command:

    gcloud container clusters create "[cluster-name]" \
      --machine-type "n1-standard-1" \
      --image-type "cos" \
      --disk-size "100" \
      --num-nodes "4"

and I can see that it creates 4 VM instances in Compute Engine. I then set up deployments pointing to one or more entries in Container Registry, and a single service exposing a public IP.
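Roughly, that kind of setup looks like the following sketch (the my-app name, image path, and ports here are placeholders, not my exact values):

    # create a deployment from an image in Container Registry
    # (the image path and name below are placeholders)
    kubectl create deployment my-app --image=gcr.io/[project-id]/my-app:latest

    # expose it with a Service of type LoadBalancer, which gets a public IP
    kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080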

All of this works well. What bothers me is that the 4 VM instances were created with public IPs. Please correct me if I'm wrong, but my understanding of what happens behind the scenes is:

  1. A container cluster is created
  2. VM instances are created based on #1
  3. An instance group is created, with the VM instances from #2 as its members
  4. (since I have one service exposing a public IP) a network load balancer is created pointing at the instance group from #3, i.e. the VM instances from #2 (the commands after this list are one way to check these resources)
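If that understanding is right, the auto-created resources should be visible with commands along these lines (a sketch; the exact names GKE generates will differ):

    # instance group backing the cluster's node pool
    gcloud compute instance-groups list

    # target pool and forwarding rule created for the LoadBalancer service
    gcloud compute target-pools list
    gcloud compute forwarding-rules list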

Looking at this, I don't think each of the VM instances created for the cluster needs a public IP.
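The per-node external IPs themselves are easy to confirm, for example with something like this (gke- is just the default prefix GKE uses for node names):

    # each node shows up with an EXTERNAL_IP of its own
    gcloud compute instances list --filter="name ~ gke-"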

I have been reading the documentation, and although I might have missed something, I can't seem to find a configuration argument that would allow me to achieve this.

Currently, GKE VMs do have public IP addresses, but they have firewall rules set up to block unauthorized network connections. Service or Ingress resources are still accessed through the load balancer's public IP address.
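If you want to see what is actually reachable, something like the following should work (a rough sketch; gke- is the default prefix on the generated firewall rules):

    # firewall rules GKE sets up for the cluster's nodes
    gcloud compute firewall-rules list --filter="name ~ gke-"

    # the public IP clients actually use lives on the Service / Ingress
    kubectl get service
    kubectl get ingress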

As of this writing, there is no way to prevent cluster nodes from getting public IP addresses.

