Git History Rewrite at Scale: Removing 100MB+ Files Safely

Introduction

Large files inside Git repositories are a silent problem. They increase clone times, inflate repository size, and, on platforms like Bitbucket Cloud, can completely block pushes once a file exceeds 100MB.

During a migration exercise, we encountered multiple repositories containing large binary files embedded directly in Git history. Some were intentionally added during testing; others were legacy artifacts. Regardless of origin, the impact was the same: repository growth, push failures, and migration risk.

We needed a scalable, production-safe solution to:

  • Identify files larger than 100MB
  • Preserve those files safely
  • Remove them from Git history
  • Maintain traceability
  • Avoid Git LFS
  • Process multiple repositories in batch

This article explains the approach, implementation, and verification process.

The Problem with Large Files in Git

Git is optimized for source code, not large binaries. When a file larger than 100MB is committed:

  • It becomes part of Git object history.
  • Even if later deleted, the blob remains in history.
  • Every clone downloads that blob.
  • Bitbucket Cloud blocks pushes containing files ≥100MB.
  • Repository size increases permanently unless history is rewritten.

Deleting the file in a new commit is not enough. The blob must be removed from the entire commit graph.
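This is easy to reproduce. The self-contained demo below uses a 1MB file as a stand-in for a 100MB one: it commits the file, deletes it in a follow-up commit, and then shows the blob still present in the object database.

```shell
# Demo: a deleted file's blob survives in Git history.
tmp=$(mktemp -d); cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
head -c 1048576 /dev/zero > big.bin   # 1MB stand-in for a 100MB binary
git add big.bin
git commit -qm "add big file"
git rm -q big.bin
git commit -qm "delete big file"
# Gone from the working tree, but still reachable in the object database:
git rev-list --objects --all | grep big.bin
```

Every clone will still download that blob until history itself is rewritten.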


Requirements

We defined clear technical requirements:

  1. Scan multiple repositories under a parent directory.
  2. Detect files larger than 100MB in:
    • Working directory
    • Full Git history
  3. Generate a detailed CSV audit report.
  4. Back up repositories before modification.
  5. Archive large blobs to S3 before removal.
  6. Rewrite Git history safely.
  7. Force push cleaned repositories.
  8. Verify that no large blobs remain.

Architecture Overview

The cleanup workflow followed this structure:

repository discovery → history scan → CSV audit report → backup → S3 archival → history rewrite → force push → verification

Implementation Strategy

1. Repository Discovery

All repositories were discovered under a specified parent directory by locating .git folders. This allowed batch processing without hardcoding repository names.
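A minimal discovery sketch, assuming the parent directory is passed as an argument (the `find_repos` helper name is illustrative):

```shell
# Discover every Git repository under a parent directory by locating
# .git folders, and print each repository root.
find_repos() {
    parent="$1"
    find "$parent" -type d -name .git -prune 2>/dev/null \
        | while read -r gitdir; do
              # The repository root is the directory containing .git
              dirname "$gitdir"
          done
}
```

`-prune` stops `find` from descending into the `.git` directories themselves, which keeps the scan fast on large trees.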

2. Scanning Git History

To detect large blobs in history, we relied on Git’s object database:

git rev-list --objects --all \
| git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)'

We filtered for blobs larger than 100MB (104857600 bytes). This approach ensures that even deleted historical files are detected.
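The filtering step can be sketched as a small wrapper around the command above; the `list_large_blobs` name and the overridable threshold parameter are illustrative additions:

```shell
# List every blob at or above the threshold anywhere in history,
# largest first. Run from inside the repository being scanned.
list_large_blobs() {
    threshold="${1:-104857600}"   # default: 100MB in bytes
    git rev-list --objects --all \
        | git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' \
        | awk -v t="$threshold" '$1 == "blob" && $3 >= t {print $3, $2, $4}' \
        | sort -rn
}
```

Because `git rev-list --objects --all` walks every reachable object, blobs that were deleted in later commits are still reported.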

3. CSV Report Generation

For traceability, a consolidated CSV report was generated containing:

  • Repository name
  • File path
  • File size (bytes + human-readable)
  • Blob hash
  • Commit hash
  • Target S3 path

This report served as:

  • A dry-run validation artifact
  • An audit record
  • A mapping between Git history and S3 storage
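A sketch of the row format; the `csv_header` and `csv_row` helpers are hypothetical, and the S3 column mirrors the `s3://<bucket>/<repo>/<commit>/<file>` layout used for archival:

```shell
# Emit the audit report header and one row per large blob.
csv_header() {
    echo "repo,path,size_bytes,size_human,blob,commit,s3_path"
}

csv_row() {
    repo="$1"; path="$2"; size="$3"; blob="$4"; commit="$5"; bucket="$6"
    # Human-readable size; fall back to raw bytes if numfmt is unavailable
    human=$(numfmt --to=iec "$size" 2>/dev/null || echo "$size")
    printf '%s,%s,%s,%s,%s,%s,s3://%s/%s/%s/%s\n' \
        "$repo" "$path" "$size" "$human" "$blob" "$commit" \
        "$bucket" "$repo" "$commit" "$(basename "$path")"
}
```

Generating the S3 path in the report, before any upload happens, is what makes the CSV usable as a dry-run artifact.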

4. S3 Archival Before Deletion

Before rewriting history, large blobs were extracted using:

git cat-file -p <blob-hash>

They were uploaded to S3 using a structured path:

s3://<bucket>/<repo-name>/<commit-hash>/<file-name>

This ensured:

  • No data loss
  • Full traceability
  • Commit-level mapping
  • Easy retrieval if required
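The extract-and-upload step might look like the following sketch; the `archive_blob` helper and bucket argument are assumptions, and the upload uses the standard `aws s3 cp` command:

```shell
# Reconstruct a blob's exact content from its hash and archive it to S3
# before the history rewrite removes it. Run from inside the repository.
archive_blob() {
    blob="$1"; repo="$2"; commit="$3"; file="$4"; bucket="$5"
    tmpfile=$(mktemp)
    # cat-file -p writes the blob's raw content to stdout
    git cat-file -p "$blob" > "$tmpfile"
    aws s3 cp "$tmpfile" "s3://$bucket/$repo/$commit/$file"
    rm -f "$tmpfile"
}
```

Extracting by blob hash rather than checking out the commit avoids touching the working tree at all.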

5. Safe History Rewrite

We used git-filter-repo, which is the modern, recommended alternative to git filter-branch.

For each repository:

  • Paths to remove were collected from the CSV report.
  • git-filter-repo --invert-paths was used to remove those paths from all commits.
  • A full backup tarball was created before execution.
  • A confirmation prompt prevented accidental execution.

This resulted in a new, clean commit graph with large blobs removed.
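Putting the backup and rewrite together, a per-repository sketch (the helper name and paths-file convention are assumptions; `--paths-from-file` expects one path per line and should be given an absolute path):

```shell
# Back up a repository as a tarball, then strip the listed paths
# from every commit with git-filter-repo.
cleanup_repo() {
    repo_dir="$1"; paths_file="$2"   # paths_file: one path per line, absolute
    # Full backup tarball before any destructive operation
    tar -czf "${repo_dir%/}.backup.tar.gz" \
        -C "$(dirname "$repo_dir")" "$(basename "$repo_dir")"
    # --invert-paths keeps everything EXCEPT the listed paths
    ( cd "$repo_dir" \
      && git filter-repo --invert-paths --paths-from-file "$paths_file" --force )
}
```

The tarball is the rollback path: if anything about the rewrite looks wrong, the original repository can be restored byte-for-byte.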

6. Force Push and Remote Restoration

Since history was rewritten:

  • All commit hashes changed.
  • A force push was required.
  • Team members were instructed to re-clone repositories.

Remote URLs were preserved or restored automatically to ensure push continuity.
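A sketch of the restore-and-push step. git-filter-repo removes the origin remote by default as a safety measure, so the remote is re-added first; the `restore_and_push` name is illustrative:

```shell
# Restore the origin remote and force-push the rewritten history.
restore_and_push() {
    remote_url="$1"
    # Re-add origin if filter-repo removed it; otherwise just reset the URL
    git remote add origin "$remote_url" 2>/dev/null \
        || git remote set-url origin "$remote_url"
    git push --force origin --all
    git push --force origin --tags
}
```

Both `--all` and `--tags` are needed: a rewrite changes every commit hash, so every branch and tag ref must be overwritten on the remote.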

Verification Method

Cleanup was verified using a direct scan of Git objects:

git rev-list --objects --all \
| git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize)' \
| awk '$1 == "blob" && $3 >= 104857600'

If the command returned no output, it confirmed:

  • No blob ≥100MB remained
  • History rewrite was successful
  • Repository was safe to push and clone

This verification step is critical and should never be skipped.

Results

  • 10 repositories processed
  • ~3GB of large blobs identified
  • All large files archived to S3
  • Git history rewritten safely
  • Force push completed
  • No remaining blobs ≥100MB in any repository
  • Repositories ready for clean migration

Lessons Learned

  1. Deleting files in a commit does not remove them from history.
  2. Always run a dry-run before destructive operations.
  3. Always create a full backup before rewriting history.
  4. Always verify using Git’s object database.
  5. Separate source code storage from binary artifact storage.
  6. Avoid large binary commits unless using Git LFS intentionally.

Best Practices for Production

  • Keep repositories focused on source code.
  • Use external storage (S3, artifact repositories) for large binaries.
  • Automate detection of large files in CI pipelines.
  • Add pre-commit or pre-receive hooks to block oversized files.
  • Regularly audit repository object sizes.
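The hook idea can be sketched as a pre-commit size check; the `check_staged_sizes` helper is illustrative, and the default threshold mirrors the 100MB limit:

```shell
# List any staged files at or above the size limit, reading sizes
# from the index (":path") rather than the working tree.
check_staged_sizes() {
    limit="${1:-104857600}"   # default: 100MB in bytes
    git diff --cached --name-only --diff-filter=AM \
        | while read -r f; do
              size=$(git cat-file -s ":$f" 2>/dev/null || echo 0)
              if [ "$size" -ge "$limit" ]; then
                  echo "$f ($size bytes)"
              fi
          done
}

# In a real .git/hooks/pre-commit, fail the commit if anything is flagged:
#   blocked=$(check_staged_sizes)
#   [ -z "$blocked" ] || { echo "Oversized files:" >&2; echo "$blocked" >&2; exit 1; }
```

A matching pre-receive hook on the server is the stronger guarantee, since client-side hooks can be bypassed with `--no-verify`.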

Conclusion

Rewriting Git history at scale is a high-impact, high-risk operation if not handled properly. However, with a structured approach, proper backups, archival strategy, and verification, it becomes a controlled and repeatable process.

By combining Git object analysis, S3 archival, and git-filter-repo, we successfully removed large files from multiple repositories without data loss and without relying on Git LFS.

This approach provides a scalable blueprint for teams facing similar migration or repository health challenges.
