how much memory?

Matt Mackall mpm at selenic.com
Wed Nov 9 01:45:34 UTC 2005


On Tue, Nov 08, 2005 at 05:37:10PM -0800, TK Soh wrote:
> > Theoretically, it should be (general overhead) + (size of largest
> > revlog entry * N), where N is a small number.
> > 
> > The biggest memory user should be delta generation, which involves
> > unpacking two versions, breaking them into lines, then generating an
> > output which is potentially twice as large as the input. So I'd expect
> > N to be in the range of 4-6.
> 
> Which means a single commit larger than 300MB or so will break just
> about every server we have. And unless I break the repo down and
> commit it in chunks, there will be problems.

Not a single commit. A single revision of a single file. If you're
committing single files that size, you're likely to run into other
problems.
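
As a rough sketch, here's what that rule of thumb looks like in Python
(the names and the 20MB overhead figure are just illustrative, not
numbers hg actually reports or computes):

    # Back-of-the-envelope peak memory estimate for a commit, assuming
    # peak ~= (general overhead) + (largest file revision * N), N ~ 4-6.
    def estimate_peak_memory(largest_rev_bytes, overhead_bytes=20 * 2**20, n=6):
        return overhead_bytes + largest_rev_bytes * n

    # A 300MB file revision with N=6 needs on the order of 1.8GB:
    print(estimate_peak_memory(300 * 2**20) // 2**20, "MB")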

> Any way to optimise hg/hgweb on memory usage?

Not really. Delta generation assumes that all the data fits in memory.
This is a reasonable assumption for just about everyone, and doing it
any other way would be _immensely_ slower.
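
To see why, here's a toy illustration of the memory pattern, using
Python's difflib as a stand-in for hg's real delta code (this is not
what hg does internally, it just has the same shape):

    import difflib

    # Line-based delta generation holds both full texts, both line
    # lists, and the output in memory at the same time.
    def naive_delta(old_text, new_text):
        old_lines = old_text.splitlines(keepends=True)   # full copy of old
        new_lines = new_text.splitlines(keepends=True)   # full copy of new
        # In the worst case the diff contains every old line (removed)
        # plus every new line (added), so the output can approach the
        # combined size of the two inputs.
        return "".join(difflib.unified_diff(old_lines, new_lines))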

-- 
Mathematics is the supreme nostalgia of our time.


