Creating custom roles in Kubernetes for different requirements

=======================

There are plenty of occasions when the default Kubernetes/OpenShift Roles and ClusterRoles don’t have the specific permissions you need to assign to users. I recently faced such a situation: the development teams wanted to be able to set up their own PersistentVolumes and PersistentVolumeClaims. However, PersistentVolumes are cluster-scoped objects, so namespace-scoped users can’t actually interact with them.

This was a business requirement, so I had to set up a custom ClusterRole that lets users set up their own PersistentVolumes. In this quick write-up, here are a few things we’re going to go through:

  • Roles vs ClusterRoles
  • Breaking down roles/clusterroles YAML file
  • Example yaml that I setup for my use-cases

Understanding Roles vs ClusterRoles

=======================

It’s fairly straightforward: Roles are namespace-scoped objects, while ClusterRoles are cluster-scoped. There are a few things to understand here.

  1. Roles are essentially the set of permissions you want to define. Think of a Role as a “policy” in AWS IAM: it is where you define what permissions you need, for example “view access to volumes” or “edit access to deployments”.
  2. Roles can be attached to a user, a service account and so on. One key thing to be aware of is that the permissions defined in a Role are limited to a specific namespace.
  3. Similarly, ClusterRoles do exactly what Roles do, just at the cluster level. If you set up an “admin” ClusterRole, the user it is bound to becomes a cluster-level admin and can interact with all the namespaces (projects in OpenShift) in the cluster.
  4. Like Roles, ClusterRoles can be attached to users.
  5. To summarize, Roles and ClusterRoles are simply sets of permissions that can be attached to users, service accounts and so on. Based on the permissions defined in the Role, users can execute those tasks while logged into Kubernetes / OpenShift.
  6. Multiple Roles/ClusterRoles can be attached to a single user or a group.
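The “attaching” mentioned above is done with a RoleBinding (or ClusterRoleBinding). As a sketch, assuming a user named “jane” and the “pod-reader” role from the example later in this post (both names are placeholders), a RoleBinding would look like this:

```yaml
# Hypothetical example: bind the "pod-reader" Role to the user "jane"
# in the "default" namespace. The user and role names are placeholders.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default   # the binding only applies within this namespace
subjects:
- kind: User
  name: jane           # case-sensitive user name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role           # references the Role holding the permissions
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

A ClusterRoleBinding works the same way, except it has no namespace field and grants the referenced permissions across the whole cluster.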

Breaking down the yaml file

=======================

Role yaml file

Take a moment and go through the following yaml file.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role  #You're setting up a namespace-scoped role
metadata:
  namespace: default #Which namespace is this role for
  name: pod-reader  #Name of the role
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"] # You want to define permissions for pod
  verbs: ["get", "watch", "list"]

So basically, the idea here is that you name the objects in “resources” and the set of permissions in “verbs”. There’s also “apiGroups”, which changes depending on what type of resource you are setting permissions for. For objects in the core API group (pods, services, persistent volumes and so on), it would be [""].
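Note that [""] only covers the core API group. Resources such as Deployments live in the “apps” group instead, so a rule for them names that group explicitly. A sketch (the role name is a placeholder):

```yaml
# Hypothetical example: edit-style access to Deployments,
# which live in the "apps" API group rather than the core ("") group.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: deployment-editor    # placeholder name
rules:
- apiGroups: ["apps"]        # Deployments are in "apps", not ""
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
```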

So in this example, the role has the [“get”, “watch”, “list”] verbs for the resource called “pods”. It means that once you attach this role to a user, they will only have “view-only” access. They will not be able to set up new pods, modify existing pods or delete existing pods.

Now, you may have a question: “Nishant, what if I want the user to be able to set up new pods as well BUT not be able to delete them?” Well, there are a number of verbs that you can use with the resources of the apiGroups. Here are the most common ones.

  • get
  • list
  • watch
  • create
  • update
  • patch
  • delete

So depending on your requirement, you can add the “create” / “update” permissions to the list of verbs.
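For the exact scenario above (create pods but never delete them), the rule simply leaves out the “delete” verb. A sketch, with a placeholder role name:

```yaml
# Hypothetical example: users can view and create pods, and update
# or patch existing ones, but the "delete" verb is deliberately omitted.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-creator          # placeholder name
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]  # no "delete"
```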

Example yaml that I setup for my use-cases

=======================

In my case, I had to allow certain users to set up their own persistent volumes as well as persistent volume claims, so that they don’t have to depend on our team every time they need one. However, there is no built-in role that allows just this, which is why I had to create a custom RBAC resource.

This is what the yaml file looks like.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: volumes-access
rules:
- apiGroups:
  - ""
  resources:
  - persistentvolumes
  - persistentvolumeclaims
  - persistentvolumeclaims/status
  verbs:
  - create
  - delete
  - get
  - list
  - watch
  - update

So in this case, the rule covers the resources “persistentvolumes” and “persistentvolumeclaims”, plus the “persistentvolumeclaims/status” subresource. The users that this cluster role is attached to will be able to “create, delete, get, list, watch, update” these resources.
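To actually grant these permissions, the ClusterRole still needs to be bound. Since PersistentVolumes are cluster-scoped, a ClusterRoleBinding is required for the PV part to take effect. A sketch, assuming a group named “dev-team” (a placeholder):

```yaml
# Hypothetical example: bind the "volumes-access" ClusterRole
# to a group called "dev-team" (placeholder group name).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: volumes-access-binding
subjects:
- kind: Group
  name: dev-team             # placeholder group name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: volumes-access       # the ClusterRole defined above
  apiGroup: rbac.authorization.k8s.io
```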

I hope this brings some clarity. Feel free to reach out if you have any specific questions that can’t be looked up :smiley:.