Set up a Kubernetes cluster from scratch
- All hosts need to run Ubuntu 16.04
- All hosts need to be able to communicate with each other
- High-availability etcd Cluster
- PKI Tools
- CNI
- Weave
- Cloud-platform independent
- The API master can reach all other hosts
./extras/install_cfssl.sh
./extras/download_kubernetes_binaries.sh
A sample inventory file.
[all:vars]
kube_version=v1.9.1                  # Keep this pinned to v1.9.1
kube_cluster_name=$YOURCLUSTERNAME   # Whatever name you want to give your cluster
kube_public_address=$IP_OF_MASTER    # The IP address of the master
kube_master=$IP_OF_MASTER            # The IP address of the master
kube_cluster_dns=$IP_OF_DNS_SERVICES # The IP address where your DNS service will run (most likely 10.32.0.10)

[kube-master]
# The IP address of the master

[kube_nodes]
# Multiple IP addresses where kubelets will run.
# Each entry has to be in the following format:
# ip hostname=hostname internal_ip=internal_ip external_ip=external_ip

[kube_apiservers]
# Multiple IP addresses where you want your API servers to run

[kube_scheduler]
# The IP address where you want the scheduler to run

[kube_controller_manager]
# The IP address where you want the controller manager to run

[etcd]
# Multiple IP addresses where you want the etcd servers to run.
# Each entry has to be in this format:
# 10.0.0.1 etcd_name=yourname
Save this as inventory.ini in the root directory of the repository.
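Filled in, a minimal inventory might look like the following. Every IP address, hostname, and the cluster name below is a made-up placeholder; substitute your own values:

```shell
# Write a filled-in sample inventory. All IPs and hostnames here are hypothetical.
cat > inventory.ini <<'EOF'
[all:vars]
kube_version=v1.9.1
kube_cluster_name=demo
kube_public_address=10.0.0.10
kube_master=10.0.0.10
kube_cluster_dns=10.32.0.10

[kube-master]
10.0.0.10

[kube_nodes]
10.0.0.21 hostname=node-1 internal_ip=10.0.0.21 external_ip=203.0.113.21
10.0.0.22 hostname=node-2 internal_ip=10.0.0.22 external_ip=203.0.113.22

[kube_apiservers]
10.0.0.10

[kube_scheduler]
10.0.0.10

[kube_controller_manager]
10.0.0.10

[etcd]
10.0.0.31 etcd_name=etcd-0
10.0.0.32 etcd_name=etcd-1
10.0.0.33 etcd_name=etcd-2
EOF
```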
Because we're using certificate-based authentication, we have to set up a proper PKI. Run:
python setup.py
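The PKI step should leave a CA at ./files/certificates/ca.pem (the path the kubectl commands later in this guide use). To see what inspecting a healthy CA looks like, here we generate a throwaway self-signed CA and examine it; run the same x509 command against ./files/certificates/ca.pem after setup.py completes:

```shell
# Generate a disposable self-signed CA purely for illustration.
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo-ca' \
    -keyout /tmp/demo-ca-key.pem -out /tmp/demo-ca.pem -days 1

# Print the certificate's subject and validity window.
openssl x509 -in /tmp/demo-ca.pem -noout -subject -dates
```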
With the steps above completed, it's time to bootstrap the cluster:
ansible-playbook -i inventory.ini bootstrap.yml
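If the playbook fails early with connection errors, a quick reachability check against the inventory is:

```shell
# Verify Ansible can SSH to every host in the inventory before debugging
# further; each reachable host answers the ping module with "pong".
ansible all -i inventory.ini -m ping
```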
kubectl config set-cluster $CLUSTER_NAME \
--certificate-authority=./files/certificates/ca.pem \
--embed-certs=true \
--server=https://${IP_OF_MASTER_API_SERVER}:6443
kubectl config set-credentials admin \
--client-certificate=./files/certificates/admin.pem \
--client-key=./files/certificates/admin-key.pem
kubectl config set-context $CLUSTER_NAME \
--cluster=$CLUSTER_NAME \
--user=admin
Make sure you're using the right context:
kubectl config use-context $CLUSTER_NAME
If you run kubectl get componentstatuses you should see the following output:
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
kubectl apply -f ./extras/kube-dns.yaml
To make sure pods can talk to each other, you have to run Weave on all your nodes:
./extras/add_weave.sh
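Once the script finishes, you can confirm a Weave pod is running on every node. The name=weave-net label below is what the standard Weave Net DaemonSet manifest applies; adjust it if your manifest differs:

```shell
# One weave-net pod should be Running per node.
kubectl get pods -n kube-system -l name=weave-net -o wide
```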
kubectl apply -f ./extras/kubernetes-dashboard.yml
kubectl proxy
Now go to localhost:8001/ui to manage your Kubernetes cluster via the UI!
Tested on:
- Virtual Machines
- DigitalOcean