Manage Access Policies (ACLs)

A guide on how to manage Omni ACLs.

This guide will show how to give the user support@example.com full access to the staging cluster but limited access to the production cluster.

Without any ACLs, the default behavior is to grant Kubernetes admin-level access to users who have write permissions on the Omni side.

Create an AccessPolicy resource

Create a local file acl.yaml:

metadata:
  namespace: default
  type: AccessPolicies.omni.sidero.dev
  id: access-policy
spec:
  rules:
    - users:
        - support@example.com
      clusters:
        - staging
      role: Operator
      kubernetes:
        impersonate:
          groups:
            - system:masters
    - users:
        - support@example.com
      clusters:
        - production
      role: Reader
      kubernetes:
        impersonate:
          groups:
            - my-app-read-only
  tests:
    - name: support engineer has full access to staging cluster
      user:
        name: support@example.com
      cluster:
        name: staging
      expected:
        role: Operator
        kubernetes:
          impersonate:
            groups:
              - system:masters
    - name: support engineer has read-only access to my-app namespace in production cluster
      user:
        name: support@example.com
      cluster:
        name: production
      expected:
        role: Reader
        kubernetes:
          impersonate:
            groups:
              - my-app-read-only

The tests section describes the expected outcome of the rules for a given user and cluster; Omni evaluates these tests when the policy is applied and rejects the policy if any of them fail.

As an Omni admin, apply this ACL using omnictl:

omnictl apply -f acl.yaml
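
To confirm that the policy is stored, it can be read back by type and ID (this assumes omnictl get works for AccessPolicies as it does for other Omni resources):

omnictl get accesspolicies.omni.sidero.dev access-policy -o yaml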

When users interact with the Omni API or UI, they are assigned the role specified in the ACL.

When users access a Kubernetes cluster through Omni, their requests carry the impersonation groups specified in the ACL.
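
To see which identity and groups a request actually carries, recent versions of kubectl (v1.27 and later) provide an auth whoami subcommand; run it with a kubeconfig obtained through Omni:

kubectl auth whoami

The output lists the authenticated username and the groups attached to the request.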

Kubernetes RBAC can then be used to grant permissions to these groups.

Only users with the Omni role Admin can manage ACLs. Users with the Omni role Operator or above are placed in the Kubernetes group system:masters by default, in addition to the groups from the ACLs.

Create Kubernetes RBAC resources

Locally, create rbac.yaml with a Namespace called my-app, and a Role & RoleBinding that grant read-only access to the my-app-read-only group:

apiVersion: v1
kind: Namespace
metadata:
  name: my-app
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-only
  namespace: my-app
rules:
  - apiGroups: ["", "extensions", "apps", "batch", "autoscaling"]
    resources: ["*"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-only
  namespace: my-app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: read-only
subjects:
  - kind: Group
    name: my-app-read-only
    apiGroup: rbac.authorization.k8s.io

As the cluster admin, apply the manifests to the production Kubernetes cluster:

kubectl apply -f rbac.yaml
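
Before handing access to the user, the cluster admin can dry-run the permissions with kubectl's impersonation flags (the user name is arbitrary here, since only the group membership matters):

kubectl auth can-i list pods -n my-app --as support@example.com --as-group my-app-read-only
kubectl auth can-i list pods -n default --as support@example.com --as-group my-app-read-only

The first command should print yes and the second no.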

Test the access

Try to access the cluster with a kubeconfig generated as the user support@example.com. One way to obtain it is omnictl's kubeconfig subcommand (a sketch; the exact flags may vary between omnictl versions):
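
omnictl kubeconfig --cluster production

Then, using that kubeconfig, try to list pods in the my-app namespace: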

kubectl get pods -n my-app

The user should be able to list pods in the my-app namespace because of the Role and RoleBinding created above.

Try to list pods in another namespace:

kubectl get pods -n default

The user should not be able to list pods in the default namespace.
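
The request should be rejected with a Forbidden error along these lines (the exact wording varies by Kubernetes version):

Error from server (Forbidden): pods is forbidden: User "support@example.com" cannot list resource "pods" in API group "" in the namespace "default"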

If the user support@example.com is assigned the Omni role Operator or above, they will be in the Kubernetes group system:masters in addition to my-app-read-only.

Therefore, they will still be able to list pods in all namespaces.
