Assumptions
- 5 VMs running Debian 11
- 3 nodes with the control-plane+master roles, 2 nodes with the worker role
- internalNetworkCIDRs (change it!) should be your VMs' subnet; in my case all nodes are in 192.0.2.128/27
- m-0 192.0.2.130, m-1 192.0.2.131, m-2 192.0.2.132, w-3 192.0.2.133, w-4 192.0.2.134 are the IPs of the VMs
- all VMs can be accessed by SSH key
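Before starting, it is worth confirming that key-based access actually works everywhere. A minimal sketch, assuming the IPs listed above; each node should print its hostname without a password prompt:

for ip in 192.0.2.130 192.0.2.131 192.0.2.132 192.0.2.133 192.0.2.134; do
  # BatchMode makes ssh fail instead of prompting if key authentication is broken
  ssh -o BatchMode=yes "root@${ip}" hostname
done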
dhctl phase
Create a config.yml file as described at https://deckhouse.io/gs/bm/step4.html:
# general cluster parameters (ClusterConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: ClusterConfiguration
# type of the infrastructure: bare metal (Static) or Cloud (Cloud)
clusterType: Static
# address space of the cluster's Pods
podSubnetCIDR: 10.111.0.0/16
# address space of the cluster's services
serviceSubnetCIDR: 10.222.0.0/16
# Kubernetes version to install
kubernetesVersion: "1.23"
# cluster domain (used for local routing)
clusterDomain: "cluster.local"
---
# section for bootstrapping the Deckhouse cluster (InitConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: InitConfiguration
# Deckhouse parameters
deckhouse:
  # the release channel in use
  releaseChannel: Stable
  configOverrides:
    global:
      modules:
        # template that will be used for system apps domains within the cluster
        # e.g., Grafana for %s.deckhouse-1.example.com will be available as grafana.deckhouse-1.example.com
        publicDomainTemplate: "%s.deckhouse-1.example.com"
    # enable cni-flannel module
    cniFlannelEnabled: true
    # cni-flannel module settings
    cniFlannel:
      # flannel backend, available values are VXLAN (if your servers have L3 connectivity) and HostGW (for L2 networks)
      # you might consider changing this
      podNetworkMode: VXLAN
---
# section with the parameters of the bare metal cluster (StaticClusterConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: StaticClusterConfiguration
# list of internal cluster networks (e.g., '10.0.4.0/24'), which is
# used for linking Kubernetes components (kube-apiserver, kubelet etc.)
internalNetworkCIDRs:
- 192.0.2.128/27
Start the Deckhouse installer container:
docker run --pull=always -it -v "$PWD/config.yml:/config.yml" -v "$HOME/.ssh/:/tmp/.ssh/" registry.deckhouse.io/deckhouse/ce/install:stable bash
Inside the container, initiate the first master m-0:
dhctl bootstrap --ssh-user=root --ssh-host=192.0.2.130 --ssh-agent-private-keys=/tmp/.ssh/id_ed25519 --config=/config.yml
Wait for the installation process to finish.
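If you want to see from another terminal when the control plane comes up, a rough probe (assuming nc is available on your workstation) is to wait for the Kubernetes API port on m-0:

# keep probing port 6443 on m-0 until kube-apiserver starts answering
until nc -z 192.0.2.130 6443; do sleep 10; done && echo "kube-apiserver is up"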
First master node phase
Log in to the m-0 node: ssh root@192.0.2.130. Then run kubectl get nodes:
NAME   STATUS   ROLES                  AGE   VERSION
m-0    Ready    control-plane,master   7m    v1.23.9
Wait 7-10 minutes until the node is fully initialized.
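One way to judge whether Deckhouse has finished converging (the namespace and label below follow the usual Deckhouse conventions, so treat this as a sketch):

# the Deckhouse operator itself lives in the d8-system namespace
kubectl -n d8-system get pods -l app=deckhouse
# list anything that is not Running yet (Completed Jobs will also show up here)
kubectl get pods -A --field-selector=status.phase!=Running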
Remove the default taint from the master node group so that workloads can be scheduled on the master nodes:
kubectl patch nodegroup master --type json -p '[{"op": "remove", "path": "/spec/nodeTemplate/taints"}]'
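To confirm the change took effect, check the node itself; once the node group is reconciled the Taints line should read <none>:

kubectl describe node m-0 | grep -i taints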
Create an ingress-nginx-controller.yml file:
# section containing the parameters of nginx ingress controller
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
  name: nginx
spec:
  # the name of the Ingress class to use with the Ingress nginx controller
  ingressClass: nginx
  # Ingress version to use (use version 1.1 with Kubernetes 1.23+)
  controllerVersion: "1.1"
  # the way traffic goes to cluster from the outer network
  inlet: HostPort
  hostPort:
    httpPort: 80
    httpsPort: 443
  # describes on which nodes the component will be located
  # you might consider changing this
  nodeSelector:
    node-role.kubernetes.io/control-plane: ""
  tolerations:
  - operator: Exists
Apply it: kubectl create -f ingress-nginx-controller.yml
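To see the controller start (d8-ingress-nginx is the usual namespace for Deckhouse ingress controllers, so verify it in your cluster) and then probe it over the HostPort inlet on a master node:

kubectl -n d8-ingress-nginx get pods
# a 404 from nginx means the controller is answering; no Ingress objects exist yet
curl -sk -o /dev/null -w '%{http_code}\n' https://192.0.2.130/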
Create a user.yaml file:
# section containing the parameters of the cluster access rule
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: ClusterAuthorizationRule
metadata:
  name: admin
spec:
  # Kubernetes RBAC accounts list
  subjects:
  - kind: User
    name: admin@deckhouse-1.example.com
  # pre-defined access template
  accessLevel: SuperAdmin
  # allow user to do kubectl port-forward
  portForwarding: true
---
# section containing the parameters of the static user
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: User
metadata:
  name: admin
spec:
  # user e-mail
  email: admin@deckhouse-1.example.com
  # this is a hash of the password fffffPAAASWORDffff, generated now
  # generate your own or use it at your own risk (for testing purposes)
  # echo "fffffPAAASWORDffff" | htpasswd -BinC 10 "" | cut -d: -f2
  # you might consider changing this
  password: '$2a$10$aaaaaaaaaaaaaaaaaaaaaaaa'
Apply it: kubectl create -f user.yaml
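A quick way to confirm both objects exist, without having to remember the CRD names, is to read them back from the same manifest:

kubectl get -f user.yaml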
Next, enable the node-manager module. Create a node-manager.yaml file:
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: node-manager
spec:
  enabled: true
Apply it: kubectl apply -f node-manager.yaml
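Check that the module configuration was accepted (the resource name follows from the manifest above):

kubectl get moduleconfig node-manager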
Check the node groups that are already configured: kubectl get nodegroup
NAME     TYPE     READY   NODES   UPTODATE   INSTANCES   DESIRED   MIN   MAX   STANDBY   STATUS   AGE
master   Static   1       1       1                                                               15m
Now create a worker node group with a nodegroup-worker.yaml file:
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker
spec:
  nodeType: Static
Apply it: kubectl apply -f nodegroup-worker.yaml
Check kubectl get nodegroup again:
NAME     TYPE     READY   NODES   UPTODATE   INSTANCES   DESIRED   MIN   MAX   STANDBY   STATUS   AGE
master   Static   1       1       1                                                               16m
worker   Static   0       0       0                                                               1m
Additional master nodes
While on the m-0 node, run:
kubectl -n d8-cloud-instance-manager get secret manual-bootstrap-for-master -o json | jq '.data."bootstrap.sh"' -r | base64 -d
Save the output to a file named bootstrap.sh and transfer it to the nodes m-1 and m-2.
Log in to m-1 as root and run chmod 755 bootstrap.sh && ./bootstrap.sh. Repeat that for m-2.
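If you saved bootstrap.sh on your workstation instead (for example by running the kubectl command above over ssh), the same step can be scripted; this sketch assumes the one SSH key works for every node:

for ip in 192.0.2.131 192.0.2.132; do
  # copy the generated script to the node and execute it there
  scp bootstrap.sh "root@${ip}:/root/" && ssh "root@${ip}" 'bash /root/bootstrap.sh'
done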
Worker nodes
While on the m-0 node, run:
kubectl -n d8-cloud-instance-manager get secret manual-bootstrap-for-worker -o json | jq '.data."bootstrap.sh"' -r | base64 -d
Save the output to a file named bootstrap-worker.sh and transfer it to the nodes w-3 and w-4.
Log in to w-3 as root and run chmod 755 bootstrap-worker.sh && ./bootstrap-worker.sh. Repeat that for w-4.
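While the scripts run, you can watch the new nodes register and become Ready from m-0:

kubectl get nodes -w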
Final result
On m-0, run kubectl get nodes:
NAME   STATUS   ROLES                  AGE    VERSION
m-0    Ready    control-plane,master   171m   v1.23.9
m-1    Ready    control-plane,master   101m   v1.23.9
m-2    Ready    control-plane,master   92m    v1.23.9
w-3    Ready    worker                 66m    v1.23.9
w-4    Ready    worker                 65m    v1.23.9
The cluster now has three control-plane nodes and two worker nodes.
While on your workstation, copy the kubeconfig from m-0: scp root@192.0.2.130:/root/.kube/config ~/deckhouse-k8s-config
Replace the API server address in it with the m-0 address: sed -i 's/127.0.0.1:6445/192.0.2.130:6443/g' ~/deckhouse-k8s-config
Check the cluster: KUBECONFIG=~/deckhouse-k8s-config kubectl get nodes
NAME   STATUS   ROLES                  AGE    VERSION
m-0    Ready    control-plane,master   3h     v1.23.9
m-1    Ready    control-plane,master   109m   v1.23.9
m-2    Ready    control-plane,master   100m   v1.23.9
w-3    Ready    worker                 74m    v1.23.9
w-4    Ready    worker                 73m    v1.23.9
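As a final sanity check before putting workloads on the cluster, list anything that is not Running (Completed Jobs will also appear and can be ignored):

KUBECONFIG=~/deckhouse-k8s-config kubectl get pods -A | grep -vE 'Running|Completed'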