vcs for hefty video and graphics files
Matt Mackall
mpm at selenic.com
Mon Nov 22 23:28:19 UTC 2010
On Mon, 2010-11-22 at 23:08 +0100, Masklinn wrote:
> On 2010-11-22, at 22:43 , Michael Diamond wrote:
> > On Mon, Nov 22, 2010 at 12:32 PM, Masklinn <masklinn at masklinn.net> wrote:
> >> In any case, do not even attempt to handle such projects with a DVCS.
> >>
> > I don't know if I would go that far. The major benefit of a DVCS is that
> > it stays on your computer, meaning there are fewer (or no) network
> > operations necessary, which would be very valuable on a many gigabyte
> > project. The setup effort for Mercurial is also vastly simpler than SVN or
> > Perforce, so that's another perk for a small user.
> >
> > The cost is of course that your project will take up much more space on your
> > drive. Hard to say how much exactly, but it will be a fair amount. This is
> > no different from a centralized VCS, and only gets worse if you have several
> > versions of the repository floating around, which does not sound like it's
> > the case for this user.
>
> I'll give you the simpler setup, but that's about it.
>
> 1. If you fear network latencies, you can install the svn or perforce repo on the local machine
> 2. DVCS are not targeted towards binary files and that shows: I
> believe some operations in mercurial require (more than) twice the RAM
> of the biggest file in the operation. For A/V works, this means
> gigabytes of RAM for a single operation. hg starts warning when you
> add files bigger than *10MB* to the repo. My point-and-shoot generates
> files bigger than that.
Yes, we warn at 10MB because most of the time, when you're adding a >10MB
file, it's a mistake that you probably don't want to preserve in your
history forever.
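To illustrate the threshold being discussed, here is a hedged sketch of that kind of add-time size check. The 10MB figure matches the behaviour described above, but the function name and warning text are illustrative only, not Mercurial's actual source:

```python
# Illustrative sketch of a large-file warning at "add" time.
# The 10 MB threshold matches the behaviour described in this thread;
# the function and message are hypothetical, not hg's real code.

LARGE_FILE_WARN_BYTES = 10 * 1024 * 1024  # 10 MB

def check_add(path, size_bytes):
    """Return a warning string if the file is large enough to flag, else None."""
    if size_bytes > LARGE_FILE_WARN_BYTES:
        mb = size_bytes / (1024.0 * 1024.0)
        return ("%s: file is %.1f MB; large binaries stay in history forever"
                % (path, mb))
    return None

# A 250 MB camera clip would trip the warning; a 5 MB file would not.
print(check_add("clip.mov", 250 * 1024 * 1024))
```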
> 3. DVCS also tend not to focus much on binary diffing, so the
> repository grows much faster than with a solution that takes those
> files into account and tries to bdiff them.
What are you basing this on?
What generally distinguishes a "binary" diff algorithm from a "traditional"
one like the one we're using in Mercurial is:
a) streaming rather than in-memory
b) O(N) vs O(N^2) worst-case performance
c) worse compression
If we could have (a) and (b) with BETTER compression, we would.
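To make the trade-off concrete, here is a hedged sketch of the rsync-style idea behind many "binary" delta tools (this is not Mercurial's bdiff, and it is simplified to block-aligned matches rather than a real rolling hash): it streams over the new version in O(N) using a block index of the old version, which buys properties (a) and (b), at the cost of coarser deltas, i.e. the worse compression in (c):

```python
# Sketch of an O(N), streaming-friendly binary delta: split the old
# version into fixed-size blocks, index them by hash, then emit either
# block references or literal bytes for the new version. A real tool
# (rsync, xdelta) matches at any offset via a rolling hash; this
# simplified version only matches block-aligned runs, so its deltas
# compress worse -- which is exactly trade-off (c) above.

import hashlib

BLOCK = 4096

def delta(old: bytes, new: bytes):
    # Index every block of the old version by its hash.
    index = {}
    for i in range(0, len(old), BLOCK):
        index.setdefault(hashlib.sha1(old[i:i + BLOCK]).digest(), i)
    # One streaming pass over the new version: copy ops where a whole
    # block matches, literal inserts otherwise.
    ops, j = [], 0
    while j < len(new):
        chunk = new[j:j + BLOCK]
        h = hashlib.sha1(chunk).digest()
        if h in index and old[index[h]:index[h] + len(chunk)] == chunk:
            ops.append(("copy", index[h], len(chunk)))
        else:
            ops.append(("insert", chunk))
        j += BLOCK
    return ops

def apply_delta(old: bytes, ops):
    out = bytearray()
    for op in ops:
        if op[0] == "copy":
            _, off, n = op
            out += old[off:off + n]
        else:
            out += op[1]
    return bytes(out)
```

Each pass touches every byte once (O(N)) and needs only the block index in memory, not both full revisions side by side, which is the streaming property under discussion.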
> 4. big network operations (such as pushing/pulling big binaries) also
> aren't generally a big design concern for DVCS, and are therefore
> usually inefficient.
What are you basing this on?
--
Mathematics is the supreme nostalgia of our time.