Kubernetes RBAC and Enterprise PKS

This blog covers Kubernetes RBAC (Role-Based Access Control) with Enterprise PKS, using an OIDC connection to AD/LDAP. It demonstrates obtaining credentials as a PKS admin and setting up various ClusterRoles, ClusterRoleBindings, Roles, and RoleBindings. Then, as a Kubernetes consumer, it demonstrates obtaining credentials (i.e., a kubeconfig) and exercising the capabilities granted to that user. See the Enterprise PKS documentation on configuring LDAP as an OIDC provider.

Assumptions

1) Pivotal Container Service (PKS) installed and operational
2) PKS configured to use AD/LDAP as its OIDC provider
3) An admin user account setup and mapping is completed between the AD group and UAA
4) A PKS Kubernetes cluster created

Kubeconfig

The kubeconfig file, simply put, is used to access your Kubernetes cluster. It contains the location of, and credentials to, the cluster. There are two ways to obtain a kubeconfig file using PKS. Let's call one the “PKS Operator persona” and the other the “Kubernetes Consumer persona”.
The PKS Operator persona manages the Kubernetes cluster itself: creation, deletion, resizing, granting permissions, and so on. The Kubernetes Consumer persona uses the Kubernetes cluster: creating, deleting, and configuring Kubernetes resources such as deployments and services. Both personas use the pks and kubectl command-line tools to accomplish their tasks.

PKS Operator Persona

As a PKS Operator, obtain the kubeconfig file by executing pks login followed by pks get-credentials. An interesting item to note: when get-credentials runs, a Kubernetes ClusterRoleBinding resource is auto-magically created with a roleRef to the cluster-admin ClusterRole. This is what gives the PKS Operator access to manage the Kubernetes side of the administration. To view this ClusterRoleBinding, run ‘kubectl get clusterrolebindings’ and look for a binding whose name is prefixed with pks:oidc: and contains the user name.
$ pks login -a <FQDN-TO-PKS-API-SERVER> -u <PKS-ADMIN-OR-MANAGER> -k  
$ pks get-credentials <A-K8S-CLUSTER>  

Kubernetes Consumer Persona

As a Kubernetes Consumer, obtain the kubeconfig file by executing the pks get-kubeconfig command. Without RBAC resources in place for the user or group, the Kubernetes Consumer can obtain a kubeconfig but is forbidden from doing anything on the cluster itself. For example, ‘kubectl get pods’ will return a forbidden error. The PKS Operator will have to create RBAC resources for the user or group.
$ pks get-kubeconfig -a <FQDN-TO-PKS-API-SERVER> -u <K8S-USER> -k   <A-K8S-CLUSTER>  
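Before any RBAC resources exist, the consumer can confirm the lack of access with kubectl's built-in authorization check, kubectl auth can-i, which asks the API server whether the current user may perform a given action. A quick sketch (both checks should come back with no at this point, and --list will show almost nothing granted):
$ kubectl auth can-i get pods  
no  
$ kubectl auth can-i --list  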

Defining RBAC Policies

When creating RBAC policies, Kubernetes provides several resources: ClusterRole, ClusterRoleBinding, Role, and RoleBinding. ClusterRoles and Roles define the ‘what’, while ClusterRoleBindings and RoleBindings define the ‘who’. To see an example of a ClusterRoleBinding and its ClusterRole, take a look at the cluster-admin binding created for the PKS Operator.
$ kubectl describe clusterrolebinding pks:oidc:<PKS-OPERATOR>-cluster-admin  
Name:         pks:oidc:<PKS-OPERATOR>-cluster-admin  
Labels:       generated=true  
Annotations:  <none>  
Role:  
  Kind:  ClusterRole  
  Name:  cluster-admin  
Subjects:  
  Kind  Name          Namespace  
  ----  ----          ---------  
  User  oidc:<PKS-OPERATOR>  default  
$ kubectl describe clusterrole cluster-admin  
Name:         cluster-admin  
Labels:       kubernetes.io/bootstrapping=rbac-defaults  
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true  
PolicyRule:  
  Resources  Non-Resource URLs  Resource Names  Verbs  
  ---------  -----------------  --------------  -----  
  *.*        []                 []              [*]  
             [*]                []              [*]  

The What of RBAC

As mentioned above, ClusterRoles and Roles define the ‘what’: the resources to which a consumer will have access. For example, a role could grant access to get, list, and watch pods. The items on which actions can be performed are defined in the resources section, and the actions themselves are defined in the verbs section. ClusterRoles are used at the cluster scope and Roles at the namespace scope; other than that, they are pretty much the same.
Kubernetes provides some default ClusterRoles to help when getting started: admin, edit, and view. Let's take a look at these below for some interesting examples. We will revisit them in The How of RBAC section later.
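To make the ‘what’ concrete, here is a minimal custom Role sketch that grants read-only access to pods; the name pod-reader and the namespace test are illustrative, not part of the PKS setup:
apiVersion: rbac.authorization.k8s.io/v1  
kind: Role  
metadata:  
  name: pod-reader  
  namespace: test  
rules:  
- apiGroups: [""]          # "" is the core API group, where pods live  
  resources: ["pods"]  
  verbs: ["get", "list", "watch"]  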

Admin ClusterRole

$ kubectl describe clusterrole admin  
Name:         admin  
Labels:       kubernetes.io/bootstrapping=rbac-defaults  
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true  
PolicyRule:  
  Resources                                       Non-Resource URLs  Resource Names  Verbs  
  ---------                                       -----------------  --------------  -----  
  rolebindings.rbac.authorization.k8s.io          []                 []              [create delete deletecollection get list patch update watch]  
  roles.rbac.authorization.k8s.io                 []                 []              [create delete deletecollection get list patch update watch]  
  configmaps                                      []                 []              [create delete deletecollection patch update get list watch]  
  endpoints                                       []                 []              [create delete deletecollection patch update get list watch]  
  persistentvolumeclaims                          []                 []              [create delete deletecollection patch update get list watch]  
  pods                                            []                 []              [create delete deletecollection patch update get list watch]  
  replicationcontrollers/scale                    []                 []              [create delete deletecollection patch update get list watch]  
  replicationcontrollers                          []                 []              [create delete deletecollection patch update get list watch]  
  services                                        []                 []              [create delete deletecollection patch update get list watch]  
  daemonsets.apps                                 []                 []              [create delete deletecollection patch update get list watch]  
  deployments.apps/scale                          []                 []              [create delete deletecollection patch update get list watch]  
  deployments.apps                                []                 []              [create delete deletecollection patch update get list watch]  
.....  

Edit ClusterRole

$ kubectl describe clusterrole edit  
Name:         edit  
Labels:       kubernetes.io/bootstrapping=rbac-defaults  
              rbac.authorization.k8s.io/aggregate-to-admin=true  
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true  
PolicyRule:  
  Resources                                    Non-Resource URLs  Resource Names  Verbs  
  ---------                                    -----------------  --------------  -----  
  configmaps                                   []                 []              [create delete deletecollection patch update get list watch]  
  endpoints                                    []                 []              [create delete deletecollection patch update get list watch]  
  persistentvolumeclaims                       []                 []              [create delete deletecollection patch update get list watch]  
  pods                                         []                 []              [create delete deletecollection patch update get list watch]  
  replicationcontrollers/scale                 []                 []              [create delete deletecollection patch update get list watch]  
  replicationcontrollers                       []                 []              [create delete deletecollection patch update get list watch]  
  services                                     []                 []              [create delete deletecollection patch update get list watch]  
  daemonsets.apps                              []                 []              [create delete deletecollection patch update get list watch]  
  deployments.apps/scale                       []                 []              [create delete deletecollection patch update get list watch]  
  deployments.apps                             []                 []              [create delete deletecollection patch update get list watch]  
....  

View ClusterRole

$ kubectl describe clusterrole view  
Name:         view  
Labels:       kubernetes.io/bootstrapping=rbac-defaults  
              rbac.authorization.k8s.io/aggregate-to-edit=true  
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true  
PolicyRule:  
  Resources                                    Non-Resource URLs  Resource Names  Verbs  
  ---------                                    -----------------  --------------  -----  
  bindings                                     []                 []              [get list watch]  
  configmaps                                   []                 []              [get list watch]  
  endpoints                                    []                 []              [get list watch]  
  events                                       []                 []              [get list watch]  
  limitranges                                  []                 []              [get list watch]  
  namespaces/status                            []                 []              [get list watch]  
  namespaces                                   []                 []              [get list watch]  
...  
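Notice the aggregate-to-admin and aggregate-to-edit labels in the output above: Kubernetes aggregates the rules of any ClusterRole carrying such a label into the corresponding default ClusterRole. As a sketch, a custom ClusterRole like the following (the stable.example.com group and crontabs resource are illustrative) would have its rules folded into edit, and therefore into admin as well:
apiVersion: rbac.authorization.k8s.io/v1  
kind: ClusterRole  
metadata:  
  name: edit-crontabs  
  labels:  
    rbac.authorization.k8s.io/aggregate-to-edit: "true"  
rules:  
- apiGroups: ["stable.example.com"]  
  resources: ["crontabs"]  
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]  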

The Who of RBAC

ClusterRoleBindings and RoleBindings define ‘who’ has access to the ‘what’ defined in a ClusterRole or Role. The ‘who’ is defined in the subjects section of the binding; a subject can be a User, Group, or ServiceAccount. The ‘what’ is defined in the roleRef section of the binding; the roleRef can reference a ClusterRole or a Role. Let's review the auto-generated ClusterRoleBinding for the PKS Operator again. Note that roleRef is the field name in the YAML; kubectl describe displays it simply as Role.
The two sections to make note of are the Role and Subjects sections. The Role section declares that this binding refers to a ClusterRole named cluster-admin. The Subjects section declares that a User named oidc:<PKS-OPERATOR> is bound to the cluster-admin ClusterRole defined in the Role section.

PKS Operator ClusterRoleBinding

$ kubectl describe clusterrolebinding pks:oidc:<PKS-OPERATOR>-cluster-admin  
Name:         pks:oidc:<PKS-OPERATOR>-cluster-admin  
Labels:       generated=true  
Annotations:  <none>  
Role:  
  Kind:  ClusterRole  
  Name:  cluster-admin  
Subjects:  
  Kind  Name          Namespace  
  ----  ----          ---------  
  User  oidc:<PKS-OPERATOR>  default  
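Expressed as YAML, that binding looks roughly like the following. This is a reconstruction for illustration; the generated binding on your cluster may carry additional metadata:
apiVersion: rbac.authorization.k8s.io/v1  
kind: ClusterRoleBinding  
metadata:  
  name: pks:oidc:<PKS-OPERATOR>-cluster-admin  
  labels:  
    generated: "true"  
roleRef:  
  apiGroup: rbac.authorization.k8s.io  
  kind: ClusterRole  
  name: cluster-admin  
subjects:  
- apiGroup: rbac.authorization.k8s.io  
  kind: User  
  name: oidc:<PKS-OPERATOR>  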

The How of RBAC

Having covered the ‘what’ and the ‘who’, let's move on to the ‘how’. The PKS Operator's job is to create the RBAC roles and bindings required for various environments and various consumers. For example, in a sandbox-type environment there might be ClusterRoleBindings for a development group bound to the admin ClusterRole, whereas in a production environment a developer might have a ClusterRoleBinding, if any access at all, bound to the view ClusterRole or something similar.
Other scenarios might bind a set of consumers to a particular namespace. This is accomplished with Roles and RoleBindings rather than ClusterRoles and ClusterRoleBindings. Most likely, a combination of the cluster-scoped and namespace-scoped approaches will be required.

Cluster-Scoped Sandbox Environment

This is a possible example for a ‘sandbox’ type of Kubernetes cluster. This example binds the oidc:grouptest AD group to the admin ClusterRole. This essentially gives the grouptest members rights to create, read, update, and delete almost everything. It does not grant rights to a few resources such as ClusterRoles and ClusterRoleBindings.
apiVersion: rbac.authorization.k8s.io/v1  
kind: ClusterRoleBinding  
metadata:  
  name: admin-binding  
roleRef:  
  apiGroup: rbac.authorization.k8s.io  
  kind: ClusterRole  
  name: admin  
subjects:  
- apiGroup: rbac.authorization.k8s.io  
  kind: Group  
  name: oidc:grouptest  
Modify the subject name to whatever is appropriate and then save the above YAML in a file called admin-binding.yaml. As the PKS Operator persona, create the binding and then, as a Kubernetes Consumer persona, try various tests as a member of that subject group.
PKS Operator
  • Remember to login and get-credentials as PKS Operator
$ kubectl create -f admin-binding.yaml  
clusterrolebinding.rbac.authorization.k8s.io/admin-binding created  
Kubernetes Consumer
  • remember to get-kubeconfig as a Kubernetes consumer
  • member user is ‘user1’ in demonstration
$ kubectl create deployment test-deployment --image=nginx  
deployment.apps/test-deployment created  
$ kubectl get deployment  
NAME              READY   UP-TO-DATE   AVAILABLE   AGE  
test-deployment   1/1     1            1           94s  
$ kubectl get clusterrolebindings  
Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "oidc:user1" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope  
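As the PKS Operator, you can also verify the binding without switching kubeconfigs by using kubectl's impersonation flags. Assuming user1 is a member of oidc:grouptest, the first check should return yes and the second no, matching the consumer's results above:
$ kubectl auth can-i create deployments --as=oidc:user1 --as-group=oidc:grouptest  
$ kubectl auth can-i list clusterrolebindings --as=oidc:user1 --as-group=oidc:grouptest  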

Cluster-Scoped Production Environment

This is a possible example for an auditor-type role on a ‘production’ Kubernetes cluster. This example binds the oidc:grouptest AD group to the view ClusterRole, giving members of oidc:grouptest read-only rights to the Kubernetes cluster. Most likely a CI/CD pipeline would control the actual create, update, and delete of resources on the cluster, with a separate binding for that pipeline.
apiVersion: rbac.authorization.k8s.io/v1  
kind: ClusterRoleBinding  
metadata:  
  name: view-binding  
roleRef:  
  apiGroup: rbac.authorization.k8s.io  
  kind: ClusterRole  
  name: view  
subjects:  
- apiGroup: rbac.authorization.k8s.io  
  kind: Group  
  name: oidc:grouptest  
Modify the subject name to whatever is appropriate and then save the above YAML in a file called view-binding.yaml. As the PKS Operator persona, create the binding and then, as a Kubernetes Consumer persona, try various tests as a member of that subject group.
PKS Operator
  • Remember to login and get-credentials as PKS Operator
$ kubectl create -f view-binding.yaml  
clusterrolebinding.rbac.authorization.k8s.io/view-binding created  
Kubernetes Consumer
  • remember to get-kubeconfig as a Kubernetes consumer
  • member user is ‘user1’ in demonstration
$ kubectl get deployments
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
test-deployment   1/1     1            1           19m
$ kubectl create deployment test2-deployment --image=nginx  
Error from server (Forbidden): deployments.apps is forbidden: User "oidc:user1" cannot create resource "deployments" in API group "apps" in the namespace "default"  

Namespace-Scoped Environment

Often access needs to be controlled at the namespace scope rather than the cluster scope. This example creates a Role and RoleBinding so that members of the oidc:grouptest AD group can perform deployment modifications only in the namespace test.
apiVersion: rbac.authorization.k8s.io/v1  
kind: Role  
metadata:  
  name: developer-role  
  namespace: test  
rules:  
- apiGroups: ["extensions", "apps"]  
  resources: ["deployments"]  
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]  
---
apiVersion: rbac.authorization.k8s.io/v1  
kind: RoleBinding  
metadata:  
  name: developer-binding  
  namespace: test  
roleRef:  
  apiGroup: rbac.authorization.k8s.io  
  kind: Role  
  name: developer-role  
subjects:  
- apiGroup: rbac.authorization.k8s.io  
  kind: Group  
  name: oidc:grouptest  
Modify the subject name to whatever is appropriate and then save the above YAML in a file called developer-binding.yaml. Whereas the previous examples used the provided admin and view ClusterRoles, this example creates a new Role. The new Role simply grants access to Deployments in the test namespace. The Role and RoleBinding are combined in the same YAML file, delimited by ‘---’. As the PKS Operator persona, create the binding and then, as a Kubernetes Consumer persona, try various tests as a member of that subject group.
PKS Operator
  • Remember to login and get-credentials as PKS Operator
$ kubectl create -f developer-binding.yaml  
role.rbac.authorization.k8s.io/developer-role created  
rolebinding.rbac.authorization.k8s.io/developer-binding created  
Kubernetes Consumer
  • remember to get-kubeconfig as a Kubernetes consumer
  • member user is ‘user1’ in demonstration
$ kubectl create deployment test-deployment --image=nginx -n test  
deployment.apps/test-deployment created  
$ kubectl get deployments -n test  
NAME              READY   UP-TO-DATE   AVAILABLE   AGE  
test-deployment   1/1     1            1           11s  
$ kubectl create deployment default-namespace-deployment --image=nginx  
Error from server (Forbidden): deployments.apps is forbidden: User "oidc:user1" cannot create resource "deployments" in API group "apps" in the namespace "default"  
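To double-check the namespace scoping as the consumer, kubectl auth can-i accepts a namespace flag. Given the Role above, the first command should return yes and the second no:
$ kubectl auth can-i create deployments -n test  
$ kubectl auth can-i create deployments -n default  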

Hybrid-Scoped Environment

This example combines the view cluster-scoped and the namespace-scoped examples above. It allows members of the oidc:grouptest AD group to perform deployment modifications in the test namespace, but only view (i.e., read) access to the rest of the cluster.
apiVersion: rbac.authorization.k8s.io/v1  
kind: Role  
metadata:  
  name: developer-role  
  namespace: test  
rules:  
- apiGroups: ["extensions", "apps"]  
  resources: ["deployments"]  
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]  
---
apiVersion: rbac.authorization.k8s.io/v1  
kind: RoleBinding  
metadata:  
  name: developer-binding  
  namespace: test  
roleRef:  
  apiGroup: rbac.authorization.k8s.io  
  kind: Role  
  name: developer-role  
subjects:  
- apiGroup: rbac.authorization.k8s.io  
  kind: Group  
  name: oidc:grouptest  
Modify the subject name to whatever is appropriate and then save the above YAML in a file called developer-binding.yaml. This is the same Role and RoleBinding as in the namespace-scoped example: the Role grants access to Deployments in the test namespace, and the Role and RoleBinding are combined in one YAML file delimited by ‘---’. The cluster-scoped view binding comes next.
apiVersion: rbac.authorization.k8s.io/v1  
kind: ClusterRoleBinding  
metadata:  
  name: view-binding  
roleRef:  
  apiGroup: rbac.authorization.k8s.io  
  kind: ClusterRole  
  name: view  
subjects:  
- apiGroup: rbac.authorization.k8s.io  
  kind: Group  
  name: oidc:grouptest  
Modify the subject name to whatever is appropriate and then save the above YAML in a file called view-binding.yaml. As the PKS Operator persona, create both bindings and then, as a Kubernetes Consumer persona, try various tests as a member of that subject group.
PKS Operator
  • Remember to login and get-credentials as PKS Operator
$ kubectl create -f developer-binding.yaml  
role.rbac.authorization.k8s.io/developer-role created  
rolebinding.rbac.authorization.k8s.io/developer-binding created  
$ kubectl create -f view-binding.yaml  
clusterrolebinding.rbac.authorization.k8s.io/view-binding created  
Kubernetes Consumer
  • remember to get-kubeconfig as a Kubernetes consumer
  • member user is ‘user1’ in demonstration
$ kubectl create deployment test-deployment --image=nginx -n test  
deployment.apps/test-deployment created  
$ kubectl get deployments -n test  
NAME              READY   UP-TO-DATE   AVAILABLE   AGE  
test-deployment   1/1     1            1           11s  
$ kubectl create deployment default-namespace-deployment --image=nginx  
Error from server (Forbidden): deployments.apps is forbidden: User "oidc:user1" cannot create resource "deployments" in API group "apps" in the namespace "default"  
$ kubectl get deployments  
NAME              READY   UP-TO-DATE   AVAILABLE   AGE  
test-deployment   1/1     1            1           65m  

Summary of Kubernetes RBAC and Enterprise PKS

This blog presented an introduction to Kubernetes RBAC (Role-Based Access Control) in a PKS environment. One of the nice features of PKS is that it provides some auto-generated functionality, but for the most part everything is simply native Kubernetes. The same RBAC concepts that apply in a PKS Kubernetes cluster apply to any native Kubernetes cluster. Happy Kubing…