Pipeline Structures

A CI/CD pipeline is a series of steps that converts raw source code into a released artifact. But the shape of that pipeline - how stages connect, depend on each other, and scale - varies significantly based on the team’s needs, the repository structure, and the tooling in use.


Structural CI/CD design patterns define how pipeline components are arranged, composed, and connected. Their goal is to contain inherent complexity while letting the system scale as the organization grows.

All structural pipeline designs fall into one of two overarching approaches:

| Approach | Description | Trade-off |
| --- | --- | --- |
| Monolithic | All components of a project live in a single pipeline definition | Simple to reason about; becomes unmanageable as the project grows |
| Polylithic | Clear, enforced separation between different pipeline execution types or concerns | More initial design effort; dramatically easier to scale and maintain |

Regardless of the approach, structural patterns are built on three layers:

  1. Common components layer - The foundational reusable elements: shared templates, configured tools, repository structures, and metadata schemas. This layer defines the vocabulary the rest of the pipeline speaks.
  2. Pipeline layer - The fundamental structure of a project’s CI process: build, test, and deliver stages. This layer also defines how new projects or services can extend the base without duplicating it.
  3. Deployment layer - The specific techniques and patterns used to release software to target environments (rolling, blue-green, canary, etc.).

The same logical pipeline (build → test → deploy) can be shaped in fundamentally different ways:

| Pattern | How it works | Best for |
| --- | --- | --- |
| Basic (Linear) | Stages run sequentially, one after the other. Simple and predictable. | Small projects, single-service repos |
| Fan-Out / Parallel | Independent stages run simultaneously. Reduces total pipeline time. | Tests split by type, multi-platform builds |
| DAG (Directed Acyclic Graph) | Stages run based on declared job dependencies. Maximum flexibility. | Complex multi-stage pipelines with mixed dependencies |
| Merge Request Pipeline | Runs only on proposed changes (open PRs). Does not run on direct commits to main. | Feature branch validation, security gates on changes |
| Merged Results Pipeline | Simulates the result of merging the branch before running tests. Catches integration failures before they land. | Reducing broken main branch frequency |
| Merge Train | Queues multiple merge requests to run sequentially with accumulated changes. Each MR tests on top of the previous one. | High-velocity teams, preventing race conditions between concurrent PRs |
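The DAG pattern, for instance, maps directly onto GitLab CI's `needs` keyword, which lets a job start as soon as its declared dependencies finish rather than waiting for an entire stage. A minimal sketch (job names and `make` targets are illustrative):

```yaml
# Hypothetical .gitlab-ci.yml fragment: `needs` declares job-level
# dependencies, turning the stage sequence into a dependency graph.
stages: [build, test, deploy]

build-api:
  stage: build
  script: make api

build-ui:
  stage: build
  script: make ui

test-api:
  stage: test
  needs: [build-api]   # starts as soon as build-api finishes
  script: make test-api

test-ui:
  stage: test
  needs: [build-ui]    # independent branch of the graph
  script: make test-ui

deploy:
  stage: deploy
  needs: [test-api, test-ui]   # waits only on the jobs it actually uses
  script: make deploy
```

Here `test-ui` can run even if `build-api` is still in progress, which is exactly the flexibility the linear pattern lacks.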

How you structure pipelines per project is one of the highest-leverage design decisions you’ll make, because changing it at scale is extremely expensive.

Monolithic Pipeline

A single pipeline with heavy conditional logic that changes its execution path based on the trigger (e.g., a commit runs a different path than a pull request).

  • ✅ Centralized - everything is in one place
  • ❌ Highly complex internal logic; hard to test individual paths
  • ❌ Failure in one path can mask or block another; metrics are hard to separate
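A sketch of what this conditional branching looks like in a single GitHub Actions workflow (the job name, steps, and `make` targets are illustrative):

```yaml
# Hypothetical monolithic workflow: one definition, with `if:`
# conditions switching the execution path based on the trigger.
on: [push, pull_request]

jobs:
  pipeline:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Unit tests (runs on every trigger)
        run: make test
      - name: Build artifact (only on pushes to main)
        if: github.event_name == 'push' && github.ref == 'refs/heads/main'
        run: make build
      - name: Lint gate (only on pull requests)
        if: github.event_name == 'pull_request'
        run: make lint
```

Every new trigger or concern adds another `if:` branch to the same file, which is how the internal logic becomes hard to test and reason about.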

Multi-Pipeline Pattern

The project relies on multiple purpose-built pipelines, each triggered by a different action:

| Pipeline | Trigger | Purpose |
| --- | --- | --- |
| Delivery pipeline | Merge to main | Core transmission belt - artifact build, quality gates, and environment deployment |
| Pull Request pipeline | PR opened / updated | Code validation, unit tests, coverage checks; blocks or allows the merge |
| Dev environment pipeline | Branch created / deleted | Automatically provisions a dedicated test environment when a branch is created; tears it down when the branch is deleted |
| Custom test pipeline | Scheduled or manual | Dedicated to long-running tests (E2E, performance, stability) that shouldn’t block the PR flow |
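In GitHub Actions terms, each of these would typically be a separate workflow file distinguished mainly by its trigger block. A sketch (file names, branch names, and the cron schedule are illustrative; each fragment would live in its own file):

```yaml
# delivery.yml - the delivery pipeline runs only on merges to main
on:
  push:
    branches: [main]
---
# pull-request.yml - validates proposed changes before merge
on:
  pull_request:
    types: [opened, synchronize]
---
# nightly-tests.yml - long-running tests kept off the PR critical path
on:
  schedule:
    - cron: "0 2 * * *"   # nightly run
  workflow_dispatch: {}    # allow manual triggering as well
```

Because each pipeline has a single, well-defined trigger, there is no conditional routing logic to maintain, and the metrics for each concern stay cleanly separated.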

As a project grows, pipelines must grow with it. There are two directions a pipeline can grow:

Vertical scaling - Adding new steps, decision points, or test types to an existing pipeline. Makes a single pipeline longer and more complex. Rarely requires changes to the dependency model.

Horizontal scaling - Adding entirely new pipelines instead of extending existing ones. Organizations scale horizontally for two reasons:

  • New applications/services: Each new service needs its own delivery pipeline.
  • New processes: A new class of concern (like security scanning or E2E testing) gets its own dedicated pipeline rather than being bolted onto an existing one.

As pipelines multiply, the temptation grows to chain them - triggering one pipeline immediately after another finishes. This should be approached carefully.

Two rules for healthy pipeline autonomy:

  1. Keep the pipeline autonomous. A pipeline should execute its complete logical function from start to finish, independently. If it deploys a microservice, it should handle that deployment end-to-end.
  2. Keep the process autonomous. Don’t split a single logical process across multiple pipelines (e.g., one pipeline to collect code, a second to build, a third to test). Each pipeline must own a complete, meaningful unit of work.

The CI/CD boundary is the one well-defined dependency that is acceptable: CI ends when an artifact is created and stored in the artifact repository. CD begins when that artifact is retrieved for further processing. This boundary can span tools (Jenkins for CI → Argo CD for CD) or be internal to a single platform.
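A sketch of the CI side of that boundary in GitLab CI (the stage name and image tags are illustrative; `$CI_REGISTRY_IMAGE` and `$CI_COMMIT_SHA` are GitLab's predefined variables):

```yaml
# Hypothetical final CI job: the pipeline's responsibility ends once
# the artifact is stored in the registry.
publish:
  stage: deliver
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
# The CD side (e.g., Argo CD) watches a separate Git config repo that
# pins the image tag; updating that tag is what triggers deployment.
```

The artifact repository is the only contract between the two sides, so either side can be replaced or retooled without touching the other.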

Consequences of Complex Dependency Chains at Scale

| Problem | What it looks like |
| --- | --- |
| Lost visibility | Adding one new workflow requires adding multiple new pipelines; the dependency graph becomes impossible to reason about |
| Monitoring fragmentation | Execution metrics are split across different tools; correlating a single delivery workflow across three pipelines requires manual reconciliation |
| Maintenance overhead | Renaming an artifact or changing a trigger breaks every downstream pipeline that depends on it |

Choosing where the pipeline runs is as important as what it runs. There are three primary compute models:

| Model | What it means | Key trade-offs |
| --- | --- | --- |
| SaaS | A dedicated platform managed entirely by a third-party provider (e.g., GitHub Actions hosted runners) | Minimal ops overhead; vendor pricing, concurrent run limits, and throttling must be deeply understood before scaling |
| Hybrid | The platform control plane is SaaS, but execution runners are hosted within the organization’s own infrastructure | Reduces data egress and security concerns; the organization still manages runner security, patching, and scaling |
| On-premises (Self-hosted) | Everything - control plane and runners - is hosted on the organization’s own infrastructure | Maximum control and flexibility; requires the highest investment in skills, infrastructure, and ongoing maintenance |

Large pipeline definitions quickly become unmanageable if written as a single monolithic file. Modularity patterns keep pipelines maintainable as the organization grows.

Manually coding every pipeline from scratch leads to code multiplication, inconsistent logic, and maintenance nightmares (updating one artifact path across 100 independent pipelines is a full-team incident).

| Pattern | How it works | Example |
| --- | --- | --- |
| Reusable workflows / templates | Define a common job group once and reference it from any pipeline | GitHub Actions reusable workflows, GitLab CI include templates |
| Pipeline inheritance | A base pipeline defines shared stages; child pipelines extend or override specific stages without duplicating the rest | A base build-and-test template extended by each service’s delivery pipeline |
| Third-party components | Use pre-built modules from vendor marketplaces | GitHub Marketplace actions, GitLab catalog components |
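The reusable workflow pattern can be sketched with GitHub Actions' `workflow_call` trigger (file paths, the `service` input, and `make` targets are illustrative; the two fragments would be separate files):

```yaml
# .github/workflows/build-test.yml - shared workflow, defined once
on:
  workflow_call:
    inputs:
      service:
        required: true
        type: string

jobs:
  build-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make -C ${{ inputs.service }} build test
---
# Caller pipeline - references the shared definition instead of copying it
on:
  push:
    branches: [main]
jobs:
  api:
    uses: ./.github/workflows/build-test.yml
    with:
      service: api
```

Updating the shared file fixes every caller at once - the opposite of the 100-pipeline artifact-path incident described above.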

Modularity extends beyond pipeline code to the infrastructure and services the pipeline delivers:

  • Infrastructure as Code (IaC): Infrastructure changes are delivered separately from application code. Dedicated tools (Terraform Cloud, Spacelift) manage the IaC pipeline independently, preventing infrastructure and application deployments from blocking each other.
  • GitOps for microservices: Platform teams define Kubernetes configuration templates (load balancing, storage) once using GitOps tools (Argo CD, Flux), and development teams reference them across services without needing to understand the underlying infrastructure.
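The GitOps handoff is typically expressed as a declarative manifest. A sketch of an Argo CD `Application` (the name, repo URL, and paths are illustrative):

```yaml
# Hypothetical Argo CD Application: the controller continuously
# reconciles the cluster against the manifests at this repo path.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payments-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/platform/deploy-configs.git
    targetRevision: main
    path: services/payments
  destination:
    server: https://kubernetes.default.svc
    namespace: payments
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert manual drift back to the Git state
```

The development team only edits the manifests under `services/payments`; the platform-defined templates and the reconciliation machinery stay out of their way.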

| Tool | Type | Key Characteristic |
| --- | --- | --- |
| Jenkins | Self-hosted | Open source. Extensive plugin ecosystem. Supports Groovy-based Jenkinsfile declarative and scripted syntax. |
| GitLab CI/CD | Integrated (GitLab) | Pipelines in .gitlab-ci.yml. Native DAG support, Merge Trains, Auto DevOps. |
| GitHub Actions | Integrated (GitHub) | Event-driven, huge marketplace. Reusable workflows for modularity. Ephemeral runners. |
| Azure DevOps | Cloud (Microsoft) | azure-pipelines.yml. Full lifecycle platform with deep Azure integration. |
| CircleCI | Cloud | YAML pipelines, ephemeral Docker runners, strong parallelism and caching. |
| Argo CD | Kubernetes-native (CD) | GitOps controller - continuously reconciles cluster state with Git. Used for the CD layer. |
| Tekton | Kubernetes-native | Pipelines as Kubernetes CRDs. Default CI/CD engine in Red Hat OpenShift. |
| Travis CI | Cloud | GitHub-integrated. Popular for open source. Simple .travis.yml. |
| Bamboo | Self-hosted (Atlassian) | Bamboo Specs (YAML or Java). Deep Jira + Bitbucket integration. |

Pipelines produce a continuous stream of logs, metrics, and events that need to be captured and acted on:

| Tool | Role |
| --- | --- |
| ELK Stack | Centralized log aggregation and search across pipeline runs |
| Prometheus + Grafana | Metrics collection and dashboarding - pipeline duration, failure rates, queue depth |
| Datadog | All-in-one observability - logs, metrics, traces, CI Visibility (pipeline analytics) |
| New Relic | Application and infrastructure monitoring with CI/CD integration |

Integrating pipeline observability closes the feedback loop: slow stages, flaky tests, and recurring failures become visible and measurable - which is the foundation of improving delivery performance with DORA Metrics.