So you just deployed your Kubernetes cluster and got the kubeconfig file to interact with it. What if you lose that file, or want to share access only with trusted individuals? In that case, we can protect the kubeconfig file with OIDC authentication.

Authentik is an open-source identity provider that can be integrated with an existing environment to enhance security through various authentication protocols. In this guide, we will see how to integrate Authentik OIDC with Google Kubernetes Engine (GKE) to add an extra layer of security for interacting with the cluster.

Prerequisites

  • Authentik instance
  • GKE cluster

Prepare Authentik for GKE

Log in to your authentik instance with a superuser account.

Create provider

Navigate to Applications > Providers and create an OAuth2/OpenID Provider.

Give the provider a name and set the following:

Name: gke-authentik (you can name it anything)
Authentication flow: default-authentication-flow (Welcome to authentik!)
Authorization flow: default-provider-authorization-implicit-consent (Authorize Application)

Under Protocol settings

Client type: Confidential
Client ID: gke-authentik
Client Secret: (copy it to the clipboard; you will need it later)
Redirect URI: http://localhost:8000

Advanced protocol settings

Refresh Token validity: days=1 (re-login every day)
Scopes: (select the following three options)
authentik default OAuth Mapping: OpenID 'email'
authentik default OAuth Mapping: OpenID 'openid'
authentik default OAuth Mapping: OpenID 'profile'
Include claims in id_token: enabled (make sure to enable this if it isn't already)

Click finish.

Create Application

Now we need to create an application to link with the provider. Navigate to Applications > Applications and create an application for your new provider.

You can name it anything you want. Select the provider from the dropdown menu and click Create.
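Before moving on, you can sanity-check the new application by fetching its OIDC discovery document, a standard endpoint every OIDC provider exposes. This is a hedged example: it assumes your authentik host is example.com and the application slug is gke-authentik; adjust both to your setup.

# Fetch the OpenID Connect discovery document for the application;
# the printed issuer should match the application slug path
curl -s https://example.com/application/o/gke-authentik/.well-known/openid-configuration | jq -r '.issuer'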

Create group and add user to group

Finally, create a new group and add your user to it. Create the group from Directory > Groups. (This guide uses a group named cluster-admin, which we will authorize later with RBAC.)

Add the existing user to that group: Directory > Users > select the user > Groups tab > Add to existing group.

Enable Identity Service for GKE

Let's enable Identity Service if it isn't already enabled. Identity Service is required for OIDC authentication against a GKE cluster. Run the following command against your existing GKE cluster:

gcloud container clusters update CLUSTER_NAME --enable-identity-service
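If you work with multiple clusters or projects, you may need to qualify the command. A sketch, assuming a zonal cluster; ZONE and PROJECT_ID are placeholders:

# Enable Identity Service on a specific zonal cluster
gcloud container clusters update CLUSTER_NAME \
  --enable-identity-service \
  --zone=ZONE \
  --project=PROJECT_ID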

Wait a few minutes. The above command deploys several Kubernetes objects, including a namespace and the OIDC pods.

~ kubectl get pods -n anthos-identity-service
NAME                                 READY   STATUS    RESTARTS      AGE
gke-oidc-envoy-6b69949467-bwfs2      2/2     Running   1 (26h ago)   26h
gke-oidc-envoy-6b69949467-pv88h      2/2     Running   1 (26h ago)   26h
gke-oidc-operator-784c6fcfc4-rqqst   1/1     Running   0             26h
gke-oidc-service-598b68db48-468lm    2/2     Running   0             26h
gke-oidc-service-598b68db48-pnmg5    2/2     Running   0             26h

Those pods are responsible for authenticating you against the GKE cluster.

Also note the external IP of the LoadBalancer-type service.

~ kubectl get svc -n anthos-identity-service
NAME               TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)              AGE
gke-oidc-envoy     LoadBalancer   10.20.11.92   11.22.33.44      443:30195/TCP        26h
gke-oidc-service   ClusterIP      10.20.7.163   <none>           8443/TCP,10250/TCP   26h

Assume that it is 11.22.33.44 and copy it. Kubectl/Kubelogin will send authentication requests to that IP address.
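If you prefer to grab the IP programmatically, here is a small sketch using kubectl's jsonpath output:

# Print the external IP of the gke-oidc-envoy LoadBalancer service
kubectl get svc gke-oidc-envoy -n anthos-identity-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'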

Configure Identity Service for GKE

Our next step is to configure Identity Service by modifying the default ClientConfig resource.

kubectl get clientconfig default -n kube-public -o yaml > client-config.yaml

Open the file and update the spec.authentication section as shown below:

apiVersion: authentication.gke.io/v2alpha1
kind: ClientConfig
metadata:
  name: default
  namespace: kube-public
spec:
  certificateAuthorityData: REDACTED
  identityServiceOptions: {}
  internalServer: ""
  name: CLUSTER_NAME                              ## Cluster name 
  server: https://11.22.33.44:443                 ## OIDC service IP
  authentication:
  - name: authentik
    oidc:
      clientID: gke-authentik                        ## Client ID from the Authentik provider
      clientSecret: REDACTED                         ## Client secret from the Authentik provider
      extraParams: resource=token-groups-claim
      issuerURI: https://example.com/application/o/gke-authentik/
      cloudConsoleRedirectURI: https://console.cloud.google.com/kubernetes/oidc
      kubectlRedirectURI: http://localhost:8000
      scopes: openid, email
      userClaim: email
      groupsClaim: groups
      userPrefix: 'oidc:'
      groupPrefix: 'oidc:'

In the issuerURI field, replace example.com with your authentik server name and gke-authentik with your application slug. Apply the config:

kubectl apply -f client-config.yaml
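To confirm the change landed, you can read back the configured authentication methods from the ClientConfig; assuming the spec shown above, this should print authentik:

# List the names of the configured authentication methods
kubectl get clientconfig default -n kube-public \
  -o jsonpath='{.spec.authentication[*].name}'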

Setup Kubelogin and test OIDC auth

Kubelogin is a kubectl plugin for OIDC authentication. Check out the installation guide and install it on your local machine: https://github.com/int128/kubelogin
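For example, if you use the krew plugin manager, installation is a single command (other methods such as Homebrew are listed in the kubelogin README):

# Install kubelogin as the oidc-login kubectl plugin via krew
kubectl krew install oidc-login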

Next, run the following command, replacing issuer-url, client-id, and client-secret with your values:

kubectl oidc-login setup  \
  --oidc-issuer-url=https://example.com/application/o/gke-authentik/  \
  --oidc-client-id=gke-authentik \
  --oidc-client-secret=REDACTED \
  --oidc-extra-scope=email,profile

It launches a browser window for authentication; kubelogin listens on http://localhost:8000 (our configured redirect URI) for the callback. After authorization, you should see output like the following:

~ kubectl oidc-login setup \
  --oidc-issuer-url=https://example.com/application/o/gke-authentik/ \
  --oidc-client-id=gke-authentik \
  --oidc-client-secret=REDACTED \
  --oidc-extra-scope=email,profile
authentication in progress...

## 2. Verify authentication

You got a token with the following claims:

{
  "iss": "https://example.com/application/o/gke-authentik/",
  "sub": "REDACTED",
  "aud": "gke-authentik",
  "exp": ,
  "iat": ,
  "auth_time": ,
  "acr": "goauthentik.io/providers/oauth2/default",
  "nonce": "",
  "at_hash": "",
  "email": "[email protected]",
  "email_verified": true,
  "name": "Bruce Wayne",
  "given_name": "Bruce Wayne",
  "preferred_username": "batman",
  "nickname": "batman",
  "groups": [
    "authentik Admins",
    "cluster-admin"
  ]
}

The above output verifies that we have successfully configured Kubelogin with external OIDC.
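If you later change the user's groups or scopes and need a fresh token, you can clear kubelogin's token cache; a sketch assuming the default cache location:

# Remove cached ID/refresh tokens so the next kubectl call re-authenticates
rm -rf ~/.kube/cache/oidc-login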

Create ClusterRoleBinding

Now we need to create a ClusterRoleBinding (CRB) to authorize members of the cluster-admin group on the cluster. Create the following manifest and apply it. Make sure the group name carries the oidc: prefix (oidc:cluster-admin) that we configured as groupPrefix earlier.

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: oidc-group-dokan-prod-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin 
subjects:
- kind: Group
  name: oidc:cluster-admin
  apiGroup: rbac.authorization.k8s.io
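Assuming you saved the manifest as oidc-crb.yaml (the filename is arbitrary), apply it:

kubectl apply -f oidc-crb.yaml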

Modify kubeconfig file

Let's replace the cluster API endpoint with the OIDC load balancer IP. Requests must go to the gke-oidc-envoy service to initiate OIDC authentication.

kubectl config view --minify --flatten > oidc-auth.yaml

Open oidc-auth.yaml and replace the IP in clusters[].cluster.server with the IP address of the OIDC load balancer (e.g. 11.22.33.44). Save the file.
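If you prefer to script the edit, here is a sketch using sed; ORIGINAL_API_ENDPOINT is a placeholder for your cluster's current API server IP:

# Point the kubeconfig at the gke-oidc-envoy load balancer
# (on macOS, use: sed -i '' ...)
sed -i 's|server: https://ORIGINAL_API_ENDPOINT|server: https://11.22.33.44|' oidc-auth.yaml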

Set up a new user for OIDC auth

Execute the command below (make sure to replace issuer-url, client-id, and client-secret):

kubectl config --kubeconfig=oidc-auth.yaml set-credentials oidc \
  --exec-api-version=client.authentication.k8s.io/v1beta1 \
  --exec-command=kubectl \
  --exec-arg=oidc-login \
  --exec-arg=get-token \
  --exec-arg=--oidc-issuer-url=https://example.com/application/o/gke-authentik/ \
  --exec-arg=--oidc-client-id=gke-authentik \
  --exec-arg=--oidc-client-secret=REDACTED \
  --exec-arg=--oidc-extra-scope=email \
  --exec-arg=--oidc-extra-scope=profile

This adds a user named oidc to our custom kubeconfig.
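Optionally, set oidc as the default user for the current context so you don't have to pass --user on every call:

# Make the oidc user the default for the current context
kubectl config --kubeconfig=oidc-auth.yaml set-context --current --user=oidc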

Test

If you followed the steps correctly, you should be able to interact with your cluster as the oidc user:

~ kubectl --user=oidc --kubeconfig=oidc-auth.yaml cluster-info 
Kubernetes control plane is running at https://11.22.33.44
GLBCDefaultBackend is running at https://11.22.33.44/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
KubeDNS is running at https://11.22.33.44/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://11.22.33.44/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
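Since the group is bound to the cluster-admin role, an RBAC self-check should also succeed:

# cluster-admin can do anything, so this should print "yes"
kubectl --user=oidc --kubeconfig=oidc-auth.yaml auth can-i '*' '*'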

That’s it. You have protected the kubeconfig file from unauthorized access 😁