Supply Chain Security · 2026
Container Provenance
End-to-end GitOps container provenance pipeline using Sigstore + K8s admission webhooks. Stops npm/registry-style supply-chain attacks via keyless signing - stolen tokens can't publish trusted artifacts without CI OIDC identity.
- Sigstore
- Cosign
- Kubernetes
- ArgoCD
- GitHub Actions
- Kustomize
- Trivy
- Golang
The problem
If a CI pipeline (or a stolen registry token) is the only thing between source and your cluster, then a single compromised credential publishes "trusted" images. The cluster has no way to know - it pulls the image by tag, and the tag points wherever the attacker says. The March 2026 axios npm incident put real numbers on that blast radius.
This is a different class of problem from "the cluster is misconfigured." The cluster is doing exactly what it was told. The trust boundary was upstream, and nothing inside the cluster reaches across it.
The approach
Three guarantees, in order:
- Every image gets signed in CI, with no long-lived private key anywhere.
- The cluster refuses to schedule pods backed by unsigned (or wrong-identity) images.
- Every signing event is auditable in a transparency log I don't run.
Sigstore covers (1) and (3) directly via Cosign + Fulcio + Rekor. Sigstore's Policy Controller handles (2) as a validating admission webhook inside the cluster.
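Guarantees (1) and (3) can also be checked by hand from any machine. A sketch of the Cosign invocation, using this project's repo and workflow path (the digest is a placeholder; in practice it comes from the CI build output):

```shell
# Verify the keyless signature and its Rekor entry for a digest-pinned image.
cosign verify \
  --certificate-oidc-issuer "https://token.actions.githubusercontent.com" \
  --certificate-identity "https://github.com/kubeboiii/sigstore-app/.github/workflows/main.yaml@refs/heads/main" \
  docker.io/kubeboiii/sigstore-app@sha256:<digest>
```

Verification fails closed: a signature from a different workflow file, ref, or repo under the same GitHub issuer does not satisfy the identity check.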
Architecture
┌─────────────┐ OIDC token ┌────────┐ log ┌───────┐
│ GitHub CI │ ─────────────▶ │ Fulcio │ ───────▶ │ Rekor │
└──┬───────┬──┘ └────────┘ └───────┘
│ push │ bump tag ▲
▼ ▼ │ verify
┌──────┐ ┌────────────┐ reconcile ┌──────────────────┴────┐
│ Reg │ │ GitOps │ ────────▶ │ Sigstore Policy │
│istry │ │ repo (k8s) │ │ Controller (admission)│
└──┬───┘ └────────────┘ └───────────────────────┘
└──── pull at admission ─────▶ │ allow / deny
▼
kube-apiserver
The two-repo split is deliberate: the app repo carries source + CI;
the gitops repo carries Kustomize overlays per environment. CI
never runs kubectl - it bumps the image tag in the gitops repo and
ArgoCD reconciles. The dev overlay auto-syncs; prod promotion is a
separate PR a human approves.
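The promotion mechanics above reduce to one line per environment in the gitops repo. A sketch of what a dev overlay might look like, with hypothetical paths and a placeholder digest:

```yaml
# overlays/dev/kustomization.yaml (paths illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: docker.io/kubeboiii/sigstore-app
    digest: sha256:<digest>   # CI rewrites this line; ArgoCD auto-syncs dev
```

Prod promotion is the same one-line change behind a human-approved PR, which is what makes the "same digest in every environment" guarantee hold.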
Implementation highlights
CI signs the image (keyless)
permissions:
  id-token: write   # cosign keyless via GitHub OIDC

- run: cosign sign --yes docker.io/kubeboiii/sigstore-app@${{ steps.build.outputs.digest }}

The id-token permission is the magic. GitHub mints a short-lived OIDC
token, Fulcio issues a 10-minute cert bound to the workflow identity,
Cosign signs the digest (not the tag), and the entry lands in Rekor.
The private key is discarded the moment the run ends.
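Signing the digest rather than the tag matters because a tag is a mutable pointer while a digest is content-addressed. A minimal stand-in demonstration with plain sha256 over file bytes (the real digest is computed over the OCI manifest, but the property is the same):

```shell
# The same tag could point at either payload over time;
# the digests, however, are distinct and tamper-evident.
printf 'manifest-v1' > m1.json
printf 'manifest-v2' > m2.json
d1=$(sha256sum m1.json | cut -d' ' -f1)
d2=$(sha256sum m2.json | cut -d' ' -f1)
echo "sha256:$d1"
echo "sha256:$d2"
```

A signature over the first digest says nothing about the second payload, which is exactly why the CI step signs `@${digest}` rather than a floating tag.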
Cluster enforces the policy
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: require-signed-images
spec:
  mode: enforce
  images:
    - glob: "index.docker.io/kubeboiii/sigstore-app*"
  authorities:
    - name: github-actions
      keyless:
        url: https://fulcio.sigstore.dev
        identities:
          - issuer: https://token.actions.githubusercontent.com
            subject: https://github.com/kubeboiii/sigstore-app/.github/workflows/main.yaml@refs/heads/main

mode: enforce means a non-matching pod is rejected, not just
audited. The keyless identity pins the policy to a specific workflow
file on a specific ref - a different repo signing under the same
issuer won't satisfy it.
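One rollout detail worth noting: Policy Controller also supports a non-blocking mode, which allows staging the policy before flipping it to enforce. A sketch of the relevant change (all other fields unchanged):

```yaml
# Same ClusterImagePolicy, but non-matching pods are admitted with a
# warning instead of denied - useful while backfilling signatures.
spec:
  mode: warn
```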
Results
- Demo: unsigned image → admission rejected with a clear kubectl describe reason; signed image → pod scheduled.
- Audit trail: every signing event landed in the public Rekor log and was retrievable by image digest weeks later.
- Promotion guarantee: the sha256 digest that passed PR CI is the same digest that lands in staging and prod - no rebuilds between environments.
- CI overhead: ~3 seconds added per release. Negligible.
What I learned
The conceptual win: "trust" in supply-chain security isn't a property of an image; it's a property of a chain of custody that can be replayed and verified. Image = artefact, signature = receipt, Rekor = notary. Each part is verifiable on its own, and together they tell you exactly who, what, when.
Sigstore is one layer. It doesn't help with malicious source, a compromised CI runner itself, or a known CVE in legitimately-signed code. Being precise about the boundary - registry integrity, not end-to-end security - is what made the rest of the design fall into place.