Updating Mercurial: The Definitive Guide
Matt Mackall
mpm at selenic.com
Wed Aug 24 02:26:56 UTC 2011
On Tue, 2011-08-23 at 22:08 -0400, Greg Ward wrote:
> On Mon, Aug 22, 2011 at 4:46 PM, Na'Tosha Bard <natosha at gmail.com> wrote:
> > Another thought that has occurred to me. Since we are aiming for
> > largefiles' inclusion into Mercurial 2.0 in November, and we are aiming to
> > complete the book for the 2.1 release in March, should we consider a section
> > regarding largefiles in the hgbook?
>
> Sounds like a good idea to me.
>
> > I'd propose that we omit the low-level technical gunk and simply describe
> > largefiles from a high-level, end user perspective, as well as provide clear
> > steps for what should be necessary to get a repo up and running with
> > largefiles.
>
> But I *like* low-level technical gunk! ;-)
>
> Seriously: I don't think we need to explain exactly why revlog doesn't
> work so well with compressed or encrypted data, but it could be useful
> to spend a paragraph explaining why DVCS and large incompressible
> files don't work so well together.
>
> (Hmmm: it would be interesting to do some synthetic tests with putting
> large incompressible files into base Mercurial, git, and bzr and see
> what the impact is. When I was converting our CVS repo at work to hg
> originally, the first couple of attempts produced a 900 MB repo,
> around 25% of which was the history of one single file -- a 30 MB PDF
> that changes fairly frequently. Hence bfiles.)
I expect such an experiment would be pretty dull: X revisions of an
incompressible, undeltable Y MB file will take X*Y MB plus a trivial
amount of metadata overhead.
...which will be just as true of a centralized system. The real issue is
that these files break the central 'keeping a local copy of everything
is fast/cheap' tradeoff.
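
For the curious, here is a minimal sketch of that synthetic test in
Python. It assumes the 'hg' binary is on $PATH; the revision count and
file size are made-up illustrative values, and names like big.bin and
dir_size are mine, not anything shipped with Mercurial:

    #!/usr/bin/env python
    # Commit several revisions of an incompressible file to a fresh
    # Mercurial repo and compare the store size against the X*Y
    # prediction above.
    import os
    import subprocess
    import tempfile

    REVISIONS = 8      # X: number of revisions (illustrative)
    SIZE_MB = 30       # Y: size in MB, echoing Greg's 30 MB PDF

    def dir_size(path):
        # Total size in bytes of everything under path.
        total = 0
        for root, _, files in os.walk(path):
            for name in files:
                total += os.path.getsize(os.path.join(root, name))
        return total

    repo = tempfile.mkdtemp(prefix='hg-bigfile-test-')
    subprocess.check_call(['hg', 'init', repo])

    big = os.path.join(repo, 'big.bin')
    for rev in range(REVISIONS):
        # os.urandom gives incompressible, undeltable content, so
        # revlog can neither zlib-compress it nor store a small delta.
        with open(big, 'wb') as f:
            f.write(os.urandom(SIZE_MB * 1024 * 1024))
        subprocess.check_call(['hg', '-R', repo, 'commit', '-A',
                               '-m', 'rev %d' % rev])

    store = dir_size(os.path.join(repo, '.hg'))
    print('store: %.1f MB (predicted ~%d MB)'
          % (store / 1024.0 / 1024.0, REVISIONS * SIZE_MB))

Point the same loop at git or bzr instead and you should see essentially
the same number, which is the point: the cost is inherent to storing
full history, not to any one tool.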
--
Mathematics is the supreme nostalgia of our time.