CI/CD Anti-Patterns & Platform Engineering
Anti-patterns are common but ineffective practices that initially appear legitimate or even beneficial - but consistently produce negative long-term consequences: increased complexity, decreased maintainability, and fragile delivery pipelines. Recognizing them is the first step toward adopting genuine best practices.
The 7 CI/CD Anti-Patterns
1. Lack of Proper Pipeline Modeling
A pipeline that hasn’t been explicitly designed doesn’t have a design - it has an accident. Without clearly mapped stages, it’s impossible to identify bottlenecks, measure lead times, or allocate resources effectively.
Symptoms:
- Critical stages are skipped - builds go directly to production without automated tests, security scans, or staging validation
- Failures are handled after production, not prevented before it
- No one can clearly describe what the pipeline does or where time is spent
Fix: Map every stage explicitly. Measure KPIs - lead time, process time per stage, failure rate. Balance investment between pipeline optimization and feature development.
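A minimal sketch of an explicitly modeled pipeline, using GitHub Actions syntax (the job names, `make` targets, and ordering here are illustrative assumptions, not a prescribed layout):

```yaml
# Illustrative workflow: every stage is named and explicitly ordered,
# so bottlenecks and failure points are visible rather than accidental.
name: delivery-pipeline
on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make build            # produce the artifact once, reuse everywhere

  test:
    needs: build                   # explicit ordering: no hidden dependencies
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test

  security-scan:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make scan             # placeholder for SAST/SCA tooling

  deploy-staging:
    needs: [test, security-scan]   # nothing reaches staging untested
    runs-on: ubuntu-latest
    steps:
      - run: make deploy ENV=staging
```

Because every stage is an explicit job, per-stage timing comes for free from the CI system and feeds the lead-time and process-time KPIs directly.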
2. Poor or Incomplete Test Automation
Ineffective test automation is usually invisible until it fails catastrophically. The pipeline runs, tests pass, and broken code ships.
Symptoms:
- Manual steps in processes that should be automated
- Flaky, non-deterministic tests that teams learn to ignore
- Test suites that haven’t been updated to reflect current behavior
- Overreliance on slow, brittle end-to-end tests with minimal unit test coverage
- No automated rollback if production deployment fails
Fix: Apply the test automation pyramid - a solid foundation of unit tests, a middle layer of integration tests, and a minimal top layer of UI/E2E tests. Every layer must integrate into the pipeline for continuous feedback.
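The pyramid can be expressed directly in pipeline structure. A sketch in GitHub Actions syntax (job names and `make` targets are assumptions for illustration):

```yaml
# Illustrative test-pyramid layout: a broad base of fast unit tests gates
# every push; fewer integration tests sit above it; E2E stays minimal.
jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test-unit          # base layer: many, fast, deterministic

  integration-tests:
    needs: unit-tests                # only run if the cheap layer passes
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test-integration   # middle layer: fewer, slower

  e2e-tests:
    needs: integration-tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test-e2e           # top layer: minimal UI/E2E smoke suite
```

Ordering the layers cheapest-first means most failures are caught in seconds by unit tests, and the slow, brittle E2E suite only runs against code that already passed everything else.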
3. Ignoring Pipeline Failures
Not all pipeline failures warrant the same response - but ignoring failures without deliberate design leads to broken code shipping silently.
The nuance: continueOnError flags and try-catch blocks in pipelines are valid tools for non-critical steps where a single failure shouldn’t halt everything. The problem is when they’re used to hide real problems rather than handle expected partial failures gracefully.
Fix: Design the failure handling explicitly - define which failures are blocking (test failures, security scan failures, artifact signing failures) and which are non-blocking (optional lint warnings, slow integration probes). Make every ignored failure visible in the pipeline output, not silent.
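One way to make the blocking/non-blocking distinction explicit, sketched in GitHub Actions syntax (where the equivalent of a `continueOnError` flag is `continue-on-error`; step names and commands are illustrative):

```yaml
# Illustrative failure policy: blocking steps fail the pipeline outright;
# non-blocking steps may fail, but the failure stays visible in the log.
jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Unit tests (blocking)
        run: make test
      - name: Security scan (blocking)
        run: make scan
      - name: Lint warnings (non-blocking)
        run: make lint
        continue-on-error: true   # expected partial failure - tolerated, never silent
```

The key property: the decision to tolerate a failure lives in the pipeline definition where reviewers can see it, not in an engineer’s habit of ignoring red builds.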
4. Poor Monitoring and Observability
A pipeline without observability is a black box. Failures are discovered by users, not by the team.
Symptoms:
- Slow deployments with no data on where time is being spent
- Incomplete testing handoffs between stages
- Credential mismanagement visible only after a breach
- No build time history - regressions go unnoticed for weeks
Fix: Instrument the pipeline. Track build times, test pass rates, deployment frequency, and failure rates using tools like InfluxDB + Grafana or Apache DevLake. Use OpenTelemetry Collector for distributed tracing across pipeline stages.
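As one concrete instrumentation example, a build job can record its own duration. This sketch assumes an InfluxDB v2 endpoint; the hostname, org, bucket, secret name, and measurement name are all assumptions, not part of the source:

```yaml
# Illustrative instrumentation: timestamp the build start, then write the
# duration to InfluxDB as line protocol so regressions show up in Grafana.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Mark build start
        run: echo "BUILD_START=$(date +%s)" >> "$GITHUB_ENV"
      - run: make build
      - name: Record build duration
        if: always()               # capture failed and slow builds alike
        run: |
          duration=$(( $(date +%s) - BUILD_START ))
          curl -sf -X POST "https://influx.example.com/api/v2/write?org=ci&bucket=pipelines" \
            -H "Authorization: Token ${{ secrets.INFLUX_TOKEN }}" \
            --data-raw "build_duration,repo=${GITHUB_REPOSITORY} seconds=${duration}i"
```

With a history of this measurement in place, a build-time regression becomes a visible trend line rather than something noticed weeks later.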
5. Single Point of Failure (SPOF) in CI/CD Infrastructure
A SPOF is any component whose failure causes a complete delivery halt. SPOFs are not just hardware - they are organizational.
| Type | Example |
|---|---|
| Infrastructure | Single build server, single artifact registry, single database |
| Network | One internet service provider for all outbound connections |
| People | One person who knows how to deploy the critical service |
| Service | One external SaaS tool that all pipelines depend on with no fallback |
Fix: Conduct business impact analyses to identify and rank SPOF risks. Introduce redundancy at every critical layer. Build a team culture where engineers can raise SPOF concerns without friction - silence kills here.
6. Bad Security Pipeline Integration
Treating security as a final gate before release - rather than a continuous property of every pipeline stage - is the most expensive security anti-pattern. By the time a vulnerability is caught at the end of the pipeline, it has already been through code review, merged, built, and tested.
Common exposures: broken access controls, injection flaws, outdated components, missing SBOM generation.
Fix: Adopt shift-left security - embed security at every stage:
| Stage | Security action |
|---|---|
| Design | Threat modeling - identify attack surfaces before code is written |
| Development | Secure coding practices, pre-commit hooks for secrets scanning |
| CI | SAST (SonarQube, Semgrep), SCA (Snyk, Trivy), SBOM generation |
| CD | DAST (OWASP ZAP) against a running staging environment |
| Production | Runtime security monitoring, access controls, anomaly detection |
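The CI row of the table can be sketched as a pipeline job. This is a minimal example assuming Semgrep and Trivy CLIs are available on the runner; the exact flags teams use vary, and the job name and artifact name are illustrative:

```yaml
# Illustrative shift-left CI job: SAST, dependency scanning, and SBOM
# generation run on every push - long before any release gate.
jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: SAST
        run: semgrep scan --error            # non-zero exit on findings
      - name: SCA / dependency scan
        run: trivy fs --exit-code 1 .        # fail the build on known CVEs
      - name: Generate SBOM
        run: trivy fs --format cyclonedx --output sbom.json .
      - uses: actions/upload-artifact@v4
        with:
          name: sbom
          path: sbom.json
```

Because these steps run on every push, a vulnerable dependency is flagged in the pull request that introduces it, not in a pre-release audit.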
7. Static Pipeline Configuration
A static YAML file that never changes is predictable and easy to understand - but it becomes a bottleneck as the project scales. Any change requires manual file edits, and the pipeline can’t adapt to the changing context of a build.
Symptoms: Hard-coded paths, environment names, and tool versions; no ability to conditionally skip stages; changes require touching every pipeline definition across every repository.
Fix: Use dynamic pipeline configurations - parameterized templates, pipeline-as-code patterns with matrix strategies, and centrally managed pipeline modules that individual services can extend rather than copy.
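Two of those techniques side by side, sketched in GitHub Actions syntax (the version list, the central workflow repository path, and the `environment` input are assumptions for illustration):

```yaml
# Illustrative dynamic configuration: a matrix strategy replaces hard-coded
# tool versions, and a reusable workflow replaces copy-pasted pipeline files.
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [18, 20, 22]   # one definition fans out to many contexts
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm test

  deploy:
    # Centrally managed pipeline module that individual services extend
    # rather than copy - one fix propagates to every consumer.
    uses: platform/workflows/.github/workflows/deploy.yml@main
    with:
      environment: staging
```

Adding a new runtime version or changing the deployment procedure now touches one line or one shared module, not every pipeline definition in every repository.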
Additional Anti-Patterns
| Anti-pattern | What it creates |
|---|---|
| Big bang deployments | Large, infrequent releases cause massive integration issues - the longer between releases, the more expensive each one becomes |
| Manual approval gates | Human bottlenecks in automated pipelines destroy the value of continuous delivery and create false confidence that manual review adds meaningful safety |
| Configuration drift | Dev/staging/production environments diverge silently. “Works on my machine” becomes “works in staging” - eliminated by IaC and containerization |
CI/CD Design Pattern Selection Guide
Choosing the right design pattern depends on team size, project complexity, and architecture. The same pattern that accelerates one team can slow another.
| Pattern | Best for | Trade-off |
|---|---|---|
| Pipeline as Code | All teams | Requires familiarity with pipeline DSL (GitLab CI YAML, Jenkinsfile, GitHub Actions) |
| Immutable Infrastructure | High-reliability services | Higher initial cost; requires full automation of infra provisioning |
| Microservices Architecture | Large teams, independent services | Increases inter-service coordination complexity; requires orchestration and service discovery |
| Infrastructure as Code | All cloud deployments | Ongoing maintenance of infra code; requires Terraform/Pulumi/Bicep expertise |
| Blue-Green Deployment | Zero-downtime services | Doubles infrastructure cost; database consistency during cutover is complex |
| Canary Releases | Risk-sensitive rollouts | Requires robust monitoring to detect issues on small traffic percentages before full exposure |
| Automated Testing | All pipelines | Ongoing test suite maintenance; requires investment in test frameworks and data management |
| Continuous Feedback | Mature teams | Requires extensive monitoring integration and discipline to act on feedback |
| GitOps | Cloud-native, Kubernetes | Strict repo access controls required; teams need Git+CI/CD integration knowledge |
| Monorepo | Shared libraries, microservices with shared infra | CI tooling must support path filtering; repo scale requires careful maintenance |
| Polyrepo | Independent teams, isolated services | Cross-repo dependency management is complex; coordination across repos is expensive |
CI/CD and Platform Engineering
CI/CD design patterns and platform engineering are two sides of the same coin - both aim to deliver software faster and more reliably. Where CI/CD patterns define how code is safely delivered, platform engineering ensures the underlying infrastructure is optimized to support that delivery at scale.
How They Work Together
| CI/CD Design Patterns | Platform Engineering |
|---|---|
| Feature toggles, blue-green, canary - managing deployment risk | Internal Developer Platforms (IDPs) providing self-service deployment environments |
| Pipeline templates and module registries - standardizing delivery | Managed infrastructure, shared tooling, certified base images - reducing team cognitive load |
| Automated testing, security gates, quality gates - enforcing standards | Platform-level enforcement: policy as code, access controls, secrets management |
| DORA metrics, observability integration - measuring delivery | Monitoring infrastructure, centralized logging, alerting standards |
Internal Developer Platforms (IDPs) are the concrete output of platform engineering. They expose self-service capabilities - set up a dev environment, deploy an application, check pipeline status - without requiring developers to understand the infrastructure beneath. The result: developers focus on code, not orchestration.
The Cognitive Load Argument
Every minute a developer spends managing infrastructure, debugging environment differences, or navigating undocumented deployment processes is a minute not spent building product. Platform engineering’s primary metric of success is developer cognitive load - how much contextual knowledge a developer must carry just to ship a change. The goal is to minimize it to near-zero for routine operations.
Future Direction
| Trend | What it means |
|---|---|
| Intelligent CI/CD | ML models analyze code change patterns and historical deployment data to automatically tune pipeline configuration - faster builds, smarter test selection, predictive failure detection |
| Dedicated platform teams | Most large engineering organizations already have or are building dedicated platform engineering teams - the tooling, staffing, and budget have shifted from “nice to have” to “table stakes” |
| Deeper integration | Platform engineering and CI/CD converge - the IDP is the pipeline interface. Developers interact with delivery through the platform, not directly with pipeline YAML files |