Managing Monorepos at Scale: Lessons from Meta, AWS, and a 15-Repo Migration

I’ve worked at both ends of the repository spectrum: Meta’s massive monorepo with thousands of engineers, and AWS’s distributed polyrepo architecture with thousands of services. Then I migrated SID Technologies from 15 separate repositories to a single monorepo over one intense weekend.

This post is about what I learned from each approach: when monorepos win, when they don’t, and what the migration actually looked like, with real metrics.

Two Extremes: Meta vs AWS

Meta’s Monorepo

At Meta, virtually everything lived in one Mercurial repository: Facebook, Instagram, WhatsApp web, internal tools, mobile apps, backend services. Thousands of engineers committing thousands of times per day.

What worked:

  • Cross-cutting changes were routine, not projects
  • One engineer could touch iOS, Android, backend, and web in a single diff
  • Refactoring was safe—your IDE could find all usages across the entire codebase
  • No version management for internal code
  • CI caught integration issues before merge

What was hard:

  • Onboarding required downloading gigabytes of code
  • Search was slow without custom tooling
  • Build times required distributed caching (Buck/Bazel)
  • Merge conflicts were frequent in hot files

AWS’s Polyrepo

At AWS, nearly every service lived in its own repository. Thousands of repositories, each with its own CI/CD, deployment pipeline, and ownership.

What worked:

  • Clear ownership boundaries (one team, one repo)
  • Independent deployment cadences
  • Technology diversity (each team chose their stack)
  • Strong security isolation between services
  • Small repositories were fast to clone and search

What was hard:

  • Cross-cutting changes required coordination across dozens of repos
  • Shared libraries meant version management hell
  • Discovery was painful (“which repo owns this feature?”)
  • Duplicate code accumulated across services
  • Integration testing required complex test environments

Neither approach was wrong. They fit their organizational constraints. Meta optimizes for velocity and coordination. AWS optimizes for autonomy and isolation.

What Fast-Moving Companies Share

Before we dive into SID’s migration, let’s look at the pattern:

Google stores billions of lines of code in a single repository (Piper). They built custom tooling—Blaze (open-sourced as Bazel), CitC, Critique—specifically to make this scale. Over 25,000 engineers commit daily.

Microsoft migrated Windows to a single Git repository. They built VFS for Git to handle the scale. The Windows codebase is one of the largest Git repos in existence.

Stripe, Uber, Airbnb—many fast-moving companies converge on monorepos. Not because it’s the only way, but because for their organizational structure (high coordination, shared infrastructure), it reduces friction.

But Netflix, Amazon, and Spotify run successfully on polyrepo architectures. They’ve optimized for autonomy over coordination. Different constraints, different choices.

The Migration: 15 Repos to 1 (A Weekend Project)

At SID, we started with the “best practice” of one repo per service. By the time we had 15 services, the overhead was crushing us.

Friday Evening: The Decision (2 hours)

I spent Friday evening documenting the pain points:

Pain Point 1: Database Models Everywhere

Our User model was defined in 8 different repositories. When we added a field to the users table:

  • Update 8 different type definitions
  • Hope they stay in sync
  • Debug runtime errors when they don’t

Pain Point 2: Security Fix Nightmare

Found a security bug in authentication middleware (its own repo):

  1. Fix the bug and release a new version of the middleware package
  2. Open PRs in 14 consuming repos
  3. Wait for 14 code reviews
  4. Merge and deploy 14 services
  5. Hope no one is still on the old version

For a security fix that should have been 10 minutes.

Pain Point 3: Feature Velocity

The immediate trigger: adding organization-level permissions required coordinated changes across 9 repositories. Two weeks of work. One week coding. One week dependency management.

The breaking point: I couldn’t ship fast enough.

Saturday Morning: Structure Planning (3 hours)

Designed the monorepo structure:

sid-monorepo/
├── services/ # Each old repo becomes a directory
│ ├── authentication/
│ ├── billing/
│ └── ... # 15 services
├── pkg/ # Shared Go packages (consolidated)
├── packages/ # Shared TypeScript (consolidated)
├── db/ # Database models (single source of truth)
└── tools/ # Deployment scripts, CI tools

Key decisions:

  • Services stay independent (separate deploys, separate ownership)
  • Shared code moves to pkg/ and packages/ (no more versioning)
  • Database models in db/ (one definition, imported everywhere)
  • Tools centralized (one Makefile, one CI config)

Saturday Afternoon: The Migration (8 hours)

Step 1: Create the monorepo

mkdir sid-monorepo
cd sid-monorepo
git init

Step 2: Migrate services (one at a time)

For each service:

# Clone the old repo
git clone git@github.com:sid/authentication-service.git temp

# Move contents into services/authentication/
mkdir -p services/authentication
mv temp/* services/authentication/

# Preserve git history (optional, I skipped this for speed)
# git filter-repo --to-subdirectory-filter ...  (see the history-preservation note near the end)

# Commit
git add services/authentication
git commit -m "Migrate authentication service"

I did this for all 15 services. Took about 4 hours with breaks.
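If I were doing it again, I’d script that repetition. A rough sketch of the loop, assuming the old repos follow the <name>-service naming used above (the service list here is illustrative; rsync is used instead of mv so dotfiles come along and the old .git stays behind):

# Illustrative service list; replace with your actual repo names
for svc in authentication billing notifications; do
  git clone "git@github.com:sid/${svc}-service.git" temp

  mkdir -p "services/${svc}"
  # Copy everything except the old repo's .git directory
  rsync -a --exclude='.git' temp/ "services/${svc}/"
  rm -rf temp

  git add "services/${svc}"
  git commit -m "Migrate ${svc} service"
done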

Step 3: Consolidate shared code (4 hours)

This was the hard part. I had:

  • 8 copies of User model (slightly different)
  • 5 copies of auth middleware (different versions)
  • 3 copies of Stripe client (different features)

For each, I:

  • Compared all versions
  • Picked the most complete one
  • Added missing features from others
  • Moved to db/models/ or pkg/
  • Updated all imports

Example for User model:

// Before: 8 different definitions
// services/authentication/models/user.go
// services/billing/models/user.go
// ... 6 more

// After: One definition
// db/models/user.go
type User struct {
    ID              uuid.UUID
    Email           string
    OrganizationID  uuid.UUID  // This field was missing in 3 services
    CreatedAt       time.Time
    // ... all fields from all versions, reconciled
}

Updated imports across all services:

// Before:
import "github.com/sid/authentication-service/models"

// After:
import "github.com/sid/monorepo/db/models"

I fixed the imports with project-wide search and replace, then ran gofmt -w over everything.
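The search and replace itself was nothing fancy. Roughly this, run once per old module path (a sketch; note that sed -i here is GNU sed syntax, and macOS sed wants -i ''):

# Rewrite one old module path to the monorepo path, across all Go files
find services pkg db -name '*.go' -print0 |
  xargs -0 sed -i 's|github.com/sid/authentication-service/models|github.com/sid/monorepo/db/models|g'

# Re-format everything afterwards
gofmt -w .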


Sunday Morning: CI/CD (6 hours)

Before: 15 separate GitHub Actions workflows, each 100-200 lines.

After: One workflow with change detection:

name: CI

on: [push, pull_request]

jobs:
  detect-changes:
    runs-on: ubuntu-latest
    outputs:
      services: ${{ steps.filter.outputs.changes }}
    steps:
      - uses: actions/checkout@v3
      - uses: dorny/paths-filter@v2
        id: filter
        with:
          filters: |
            authentication:
              - 'services/authentication/**'
              - 'pkg/authentication/**'
              - 'db/**'
            billing:
              - 'services/billing/**'
              - 'pkg/stripe/**'
              - 'db/**'
            # ... repeat for all services

  test:
    needs: detect-changes
    if: needs.detect-changes.outputs.services != '[]'
    strategy:
      matrix:
        service: ${{ fromJson(needs.detect-changes.outputs.services) }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Test ${{ matrix.service }}
        run: |
          cd services/${{ matrix.service }}
          go test ./...

This meant:

  • Change services/billing? Test only billing.
  • Change pkg/authentication? Test all services that import it.
  • Change db/models? Test everything (it’s foundational).

Sunday Afternoon: Deployment (4 hours)

Rewrote deployment scripts. Before, each service had deploy.sh:

# services/authentication/deploy.sh (repeated 15 times)
docker build -t gcr.io/sid/authentication:$TAG .
docker push gcr.io/sid/authentication:$TAG
gcloud run deploy authentication --image=gcr.io/sid/authentication:$TAG

After, one deployment tool (this is when I started building Pilum):

# Deploy specific services
./tools/deploy.sh --tag=v1.0.0 --services=authentication,billing

# Deploy everything
./tools/deploy.sh --tag=v1.0.0 --all
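The first version of tools/deploy.sh was little more than the old per-service script wrapped in a loop. A simplified sketch (not Pilum itself; the real flag parsing is reduced to positional arguments here):

#!/usr/bin/env bash
# Simplified sketch of tools/deploy.sh: build, push, and deploy each requested service.
# Usage: ./tools/deploy.sh v1.0.0 authentication billing
set -euo pipefail

TAG="$1"; shift
SERVICES=("$@")

for svc in "${SERVICES[@]}"; do
  docker build -t "gcr.io/sid/${svc}:${TAG}" "services/${svc}"
  docker push "gcr.io/sid/${svc}:${TAG}"
  gcloud run deploy "${svc}" --image="gcr.io/sid/${svc}:${TAG}"
done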

Sunday Evening: Testing (3 hours)

Deployed to staging. Found issues:

  • Circular dependency: Service A imported Service B, Service B imported Service A. Hidden by versioning in polyrepo. Visible immediately in monorepo.
  • Import paths: Missed some imports during migration.
  • Database migrations: Had to reconcile conflicting migrations from different services.

Fixed each issue, re-deployed, verified.

Total time: ~24 hours over a weekend. Could have been faster if I’d planned better. Could have preserved git history if I’d been more careful.

The Results: Measured Impact

We tracked metrics for 3 months before and 3 months after the migration:

Metric | Polyrepo (15 repos) | Monorepo | Change
Time to deploy all services | 45 min (serial) | 8 min (parallel) | -82%
PRs for cross-cutting change | 8-15 PRs | 1 PR | -90%
Time to complete feature spanning 3 services | 10-14 days | 2-3 days | -75%
Developer onboarding time | 2 days | 4 hours | -75%
Lines of CI/CD config | 3,200 lines | 400 lines | -87%
Time spent on dependency management | 4-6 hours/week | 0 hours/week | -100%
Bugs from version skew | 2-3 per month | 0 | -100%

When Monorepo Makes Sense

Based on Meta, AWS, and SID experience, monorepo wins when:

Team Size: 1-50 Engineers

Small teams need to coordinate constantly. The overhead of service boundaries (versioning, PRs, deployments) outweighs the benefits of isolation.

At this scale, everyone knows the codebase anyway. The “cognitive load” argument doesn’t apply.

High Coordination Needs

If your services talk to each other constantly, share data models, and need coordinated deployments—monorepo reduces friction.

Example: At SID, our authentication service is imported by every other service. In polyrepo, updating auth meant 14 PRs. In monorepo, it’s one PR with all affected services updated atomically.

Rapid Iteration

Startups need to pivot quickly. Refactoring should be easy. Changing data models should be safe. Monorepo makes this possible because your IDE can find all usages.

In polyrepo, refactoring is risky. You can’t be sure which services are on which version of shared code.

Shared Infrastructure

If all services deploy to the same cloud, use the same database, share the same infrastructure—polyrepo doesn’t buy you isolation anyway.

The blast radius of an AWS outage is the same whether your code is in one repo or fifteen.

When Polyrepo Makes Sense

Polyrepo wins when:

Genuinely Independent Business Units

If Company A acquires Company B and they operate independently—different customers, different products, different roadmaps—don’t force a monorepo. The coordination overhead isn’t worth it.

Example: Amazon has AWS, Retail, Prime Video, Alexa—genuinely independent businesses with different scaling needs and operational requirements.

Different Security/Compliance Requirements

Your HIPAA-compliant healthcare service needs different access controls, audit logs, and deployment processes than your public marketing website.

Separate repos can enforce these boundaries at the infrastructure level.

Distributed Team Ownership

If teams truly never coordinate, rarely share code, and operate in different time zones with minimal overlap—polyrepo reduces friction.

But be honest: does your 10-person startup really have independent teams? Or do you just have services that should be coordinating?

Open Source Components

It’s difficult to open-source part of a monorepo. If you’re building a library you want to release publicly, a separate repo makes sense.

Example: At SID, our main codebase is a monorepo. But Pilum (our open-source deployment tool) lives in a separate repo because we want external contributors and a different release cadence.

Companies That Prove Both Models Work

Monorepo success stories:

  • Google (25,000+ engineers, billions of lines of code)
  • Meta (thousands of engineers, Facebook/Instagram/WhatsApp)
  • Microsoft (Windows, Office, Azure in increasingly unified repos)
  • Stripe (payments infrastructure, high coordination needs)

Polyrepo success stories:

  • Amazon (AWS is thousands of services, genuinely independent)
  • Netflix (microservices pioneers, strong autonomy culture)
  • Spotify (squads model, team autonomy over coordination)

The pattern: Monorepo for coordination-heavy organizations. Polyrepo for autonomy-heavy organizations. Choose based on your org structure, not ideology.

The Tooling Ecosystem

The “monorepos don’t scale” argument is really a tooling argument. Here’s what exists:

Build/Test Tools

Bazel (Google’s tool, open source)

  • Handles billions of lines of code
  • Language-agnostic, incremental builds
  • Steep learning curve, lots of configuration

Nx (JavaScript ecosystem)

  • Excellent for TypeScript/Node.js monorepos
  • Great DX, good documentation
  • Limited multi-language support

Turborepo (Vercel)

  • Simple, fast for JavaScript
  • Easy to adopt incrementally
  • Build-focused, not deployment

Deployment Tools

For deployment, we built Pilum—an open-source tool for deploying multiple services from a monorepo. But there are alternatives:

Bazel can handle deployment if you invest in writing rules.

Custom scripts work until you have 10+ services.

Platform-specific tools (ko for Go+K8s, Skaffold for K8s) work if you’re single-platform.

Choose based on your stack, not hype.

Why Polyrepo Fails Startups

The polyrepo model—where each service, library, or component lives in its own repository—sounds clean on paper. Separation of concerns. Independent deployability. Microservices nirvana.

In practice, it introduces friction at every turn:

Dependency Hell

When service-A depends on shared-lib, and shared-lib lives in a different repo, you’re managing versions. That’s not inherently bad, but consider what happens when you find a bug in shared-lib:

  1. Fix the bug in shared-lib repo
  2. Cut a new version
  3. Open PRs in every repo that consumes shared-lib
  4. Wait for code review in each repo
  5. Merge and deploy each repo
  6. Hope you didn’t miss any consumers

Now multiply this by every shared component in your system. At a startup, you might have 20-30 services after a year. That’s a lot of PRs for a one-line fix.

Diamond Dependency Problems

It gets worse. What if service-A depends on one version of shared-lib while service-B depends on another? Now service-C, which needs both A and B, has to deal with version conflicts. In a monorepo, this can’t happen—everyone is always on the same version.

Inconsistent Tooling

Every repo accumulates its own CI/CD pipeline, its own linting rules, its own testing patterns. Teams make different choices. One repo uses Jest, another uses Vitest. One has 90% coverage requirements, another has none. The cognitive load of context-switching between repos is real.

Cross-Cutting Changes Are Painful

Want to:

  • Upgrade your Go version?
  • Switch from one library to another?
  • Add structured logging across all services?
  • Update authentication middleware?

In polyrepo land, each of these is a multi-week project. In a monorepo, it’s often a single PR.

The Discovery Problem

Where does the code live? In a monorepo, you grep and find it. In polyrepo, you’re searching across dozens of repositories, hoping the naming conventions are consistent, hoping the README is up to date, hoping someone documented which repo owns what.
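To make that concrete, “grep and find it” really is one command in the monorepo (using the OrganizationID field from the User model example earlier):

# Every usage of the field, across all services, shared packages, and models
git grep -n 'OrganizationID' -- services/ pkg/ db/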

Common Objections

“But Google built Bazel because monorepos don’t scale!”

Yes, and Google has successfully scaled a monorepo to billions of lines of code. The fact that they needed custom tooling doesn’t mean the approach is wrong—it means the default tools have limits at extreme scale.

Most startups will never hit those limits. And if you do, congratulations on your success.

“Microservices need multiple repos!”

Microservices are an architectural pattern (service boundaries, independent deployment). Repository structure is orthogonal.

You can have microservices in a monorepo (SID has 19). You can have a monolith split across multiple repos (don’t do this).

“A bug in shared code takes down everything!”

In monorepo, CI catches it before merge. In polyrepo, you catch it weeks later when services finally upgrade.

The blast radius is actually smaller in monorepo because you find and fix problems atomically.

“Code reviews are harder with a giant repo!”

Code reviews are scoped to files changed, not repo size. A PR touching 3 files is the same whether the repo has 100 files or 100,000 files.

CODEOWNERS ensures the right people review the right code.

“The blast radius is too large!”

This deserves a detailed response.

First, what actually causes production incidents?

In my experience:

  • 40% - Configuration changes (feature flags, environment variables)
  • 30% - Database issues (migrations, missing indexes)
  • 20% - External dependencies (third-party API outages)
  • 10% - Application code bugs that passed CI

That last category—bugs in code—is the only one where blast radius even applies. And it’s the smallest slice.

Second, monorepos don’t mean monolithic deployments.

At SID, we have 19 services in one repo. A bug in services/billing doesn’t affect services/authentication because they deploy independently. The monorepo is a development choice, not a deployment choice.

Third, what’s the blast radius of shared code?

Monorepo approach: Change pkg/middleware, CI runs tests for all affected services, you see failures before merge. Fix them atomically in the same PR.

Polyrepo approach: Change shared-middleware library, publish new version, each service updates at different times over weeks. Some services never update. When they finally do, they discover the bug—weeks later.

The monorepo has a larger immediate blast radius but a smaller total blast radius over time.

Finally, blast radius is a function of testing, not repository structure.

If you’re worried about blast radius, invest in:

  • Comprehensive CI that tests affected services before merge
  • Canary deployments that catch issues before full rollout
  • Feature flags that let you disable code without deploying
  • Automated rollback when health checks fail
  • Good monitoring that surfaces problems quickly

These practices work in monorepo or polyrepo. The repository structure is orthogonal to blast radius management.
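As one concrete example from that list: canary deployments are straightforward on Cloud Run, which is where these services already deploy (per the gcloud run commands above). A hedged sketch of the flow; the image tag is illustrative:

# Deploy a new revision of billing without routing traffic to it
gcloud run deploy billing --image=gcr.io/sid/billing:v1.1.0 --no-traffic --tag=canary

# Shift 10% of traffic to the canary revision
gcloud run services update-traffic billing --to-tags=canary=10

# Promote to 100% once monitoring looks healthy
gcloud run services update-traffic billing --to-latest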

Practical Migration Advice

If you’re considering monorepo:

1. Start Small

Don’t migrate all 50 repos at once. Pick 3-5 related services. Prove the value. Then expand.

2. Plan the Structure

Decide upfront:

  • Where do shared packages live? (pkg/, packages/, libs/)
  • Where do database models live? (db/, shared/db/)
  • How do services deploy? (all together? independently?)

Document this in CONTRIBUTING.md.

3. Invest in CI

Change detection is critical. Your CI must:

  • Detect which services changed
  • Test only affected services (and their dependents)
  • Run in <10 minutes for typical changes

This requires tooling. Budget time for it.
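Before reaching for Bazel or Nx, you can get surprisingly far with git and go list. A sketch under the same assumptions as SID’s layout (a single go.mod at the repo root with module path github.com/sid/monorepo, services under services/, shared Go code under pkg/ and db/):

#!/usr/bin/env bash
# Sketch: run tests only for services affected by the current change.
set -u

CHANGED=$(git diff --name-only origin/main...HEAD)

# Directories of changed shared code, e.g. "pkg/authentication" or "db/models"
SHARED=$(echo "$CHANGED" | grep -E '^(pkg|db)/' | sed -E 's|/[^/]+$||' | sort -u)

for dir in services/*/; do
  name=$(basename "$dir")
  affected=false

  # 1. The service's own files changed
  if echo "$CHANGED" | grep -q "^services/${name}/"; then
    affected=true
  fi

  # 2. A shared package the service imports changed
  if [ "$affected" = false ] && [ -n "$SHARED" ]; then
    deps=$(cd "$dir" && go list -deps ./... 2>/dev/null)
    while read -r shared_dir; do
      if echo "$deps" | grep -q "github.com/sid/monorepo/${shared_dir}"; then
        affected=true
        break
      fi
    done <<< "$SHARED"
  fi

  if [ "$affected" = true ]; then
    echo "Testing ${name}"
    (cd "$dir" && go test ./...)
  fi
done

Changes under db/ deliberately fall through to every service that imports the models, which matches the “db/ is foundational” rule from the CI section.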

4. Use CODEOWNERS

Even in a small team, explicit ownership prevents the “everyone and no one owns this” problem.

/services/billing/ @sid-tech/billing-team
/services/authentication/ @sid-tech/auth-team
/pkg/ @sid-tech/platform-team

5. Measure Before and After

Track:

  • Time to deploy all services
  • PRs needed for cross-cutting changes
  • Features shipped per quarter
  • Time spent on dependency management

If you can’t measure improvement, you can’t justify the migration.

6. Preserve Git History (If You Care)

I didn’t preserve history during SID’s migration (speed over completeness). If you want to, note that git filter-branch --subdirectory-filter extracts a subdirectory as the new root, which is the opposite of what a migration into a subdirectory needs. To move an old repo’s history under a subdirectory of the monorepo, git-filter-repo is the better tool:

# In a clone of the old repo: rewrite its history to live under the target path
git filter-repo --to-subdirectory-filter services/authentication

Then add the rewritten clone as a remote of the monorepo and merge it with git merge --allow-unrelated-histories.

The Monorepo Structure That Works

At SID Technologies, we run a monorepo that follows a pattern I call “Monorepo, Polyservice.” Everything lives in one repository, but services maintain independent deployability and bounded contexts. Here’s the structure:

├── services/           # Independent microservices
│   ├── authentication/ # User auth, OAuth, tokens
│   ├── billing/        # Stripe integration, subscriptions
│   ├── calendar/       # Calendar management
│   ├── kanban/         # Task boards
│   ├── notifications/  # Push, email, SMS
│   ├── organization/   # Team and org management
│   ├── permissions/    # RBAC, access control
│   ├── search_engine/  # Full-text search
│   └── ...            # 19 services total

├── packages/           # Shared TypeScript packages
│   ├── api/           # Generated API clients
│   ├── configs/       # Shared ESLint, TS configs
│   ├── ui/            # Component library
│   └── utils/         # Common utilities

├── apps/               # Client applications
│   ├── web/           # Next.js web app
│   ├── desktop/       # Electron app
│   └── mobile/        # React Native

├── pkg/               # Shared Go packages
│   ├── authentication/# Auth utilities
│   ├── middleware/    # HTTP middleware
│   ├── stripe/        # Billing integration
│   ├── workos/        # SSO/SAML
│   └── ...           # 30+ shared packages

└── db/                # Database schemas, migrations

This structure gives us:

Atomic changes: A feature that touches the API, the web app, and a backend service is a single PR with a single code review.

Shared code without versioning: When we update pkg/authentication, every service gets the change immediately. No version bumps, no dependency updates.

Consistent tooling: One Makefile, one golangci.yaml, one set of pre-commit hooks. Every service follows the same patterns because it’s enforced at the repo level.

Easy refactoring: Renaming a function? Your IDE can find and replace across the entire codebase. Moving code between services? It’s just moving files.

Conway’s Law: Architecture Follows Organization

Here’s the key insight: the right architecture for your system is a function of your organization, not your technology choices.

Conway’s Law states: “Any organization that designs a system will produce a design whose structure is a copy of the organization’s communication structure.”

A 5-person startup doesn’t need 50 microservices. They need to ship fast. When everyone sits in the same room (or Slack channel), the communication overhead of service boundaries is pure waste.

But a 5,000-person company with 50 teams across 10 time zones? Now those service boundaries map to organizational boundaries. Team A owns Service A. They can deploy independently, operate independently, make technology choices independently.

The Evolution Path

0-10 engineers: Monolith in a monorepo. Maybe 2-3 services. Don’t overthink it.

10-50 engineers: Natural service boundaries emerge. Split along team lines. The monorepo keeps coordination costs low.

50-200 engineers: Domain-driven design matters now. Services map to business domains. Still a monorepo, but with strong ownership conventions.

200+ engineers: You might consider polyrepo for genuinely independent business units. But Google has 25,000+ engineers in a monorepo, so don’t assume you’ve hit scale limits.

The mistake is treating the 200+ architecture as the starting point. Premature optimization is the root of all evil, and premature microservices are premature optimization.

Trunk-Based Development

Monorepo enables a workflow that’s painful in polyrepo: trunk-based development.

At SID, we work on short-lived feature branches (1-2 days max) and merge to main frequently:

  • All tests must pass before merge (CI enforces this)
  • Services deploy independently (change billing, deploy only billing)
  • Feature flags over long-lived branches (ship code dark, enable when ready)
  • Breaking changes require API versioning (even for internal APIs)

This works because:

  • CI catches integration issues immediately
  • Shared code changes are atomic (all services update together)
  • No version skew (there’s only one version—the current one)

In polyrepo, trunk-based development is risky. You can’t be sure which services are on which version of shared libraries.

Conclusion: Choose Based on Your Organization

The monorepo vs polyrepo debate isn’t about technology—it’s about organizational structure.

Monorepo optimizes for:

  • Coordination over autonomy
  • Velocity over isolation
  • Shared context over independent teams

Polyrepo optimizes for:

  • Autonomy over coordination
  • Isolation over velocity
  • Independent teams over shared context

For most startups (1-50 engineers, high coordination needs): Monorepo wins. You’ll ship faster, refactor safely, and spend less time on dependency management.

For large organizations (50+ engineers in truly independent units): Polyrepo might make sense. But don’t assume you’re there yet.

The evidence: Google, Meta, and Microsoft run massive monorepos successfully. Amazon, Netflix, and Spotify run polyrepos successfully. Both models work. Choose based on your organization, not ideology.

If you’re considering migration:

  • Start with 3-5 services (prove the value)
  • Invest in CI tooling (change detection is critical)
  • Measure before and after (justify the decision)
  • Use deployment automation (we built Pilum, but any tool works)

The giants of the industry have proven both models scale. The question isn’t “which is better?”—it’s “which fits your organization?”

For SID, the monorepo was the right choice. Your mileage may vary. But if you’re a small team fighting polyrepo overhead, consider the migration. It might be a weekend well spent.

Your first step: Count how many PRs last month touched multiple repositories or required coordinated releases. If it’s more than a handful, you’re paying the coordination tax daily. The monorepo conversation is worth having.
