Network performance problems when pulling and cloning from HTTP server

Angel Ezquerra ezquerra at gmail.com
Tue Nov 20 16:30:27 UTC 2012


Hi,

one of my users has a repository with plenty of "big files" (on the
order of 50 to 100 MB each). Our server is _not_ using the largefiles
extension (at least not yet); that is, the files are "big" but are not
"largefiles".

The total repository working directory size is about 1.5 GB. The big
files do not change often, if ever.

We use an Apache-based CGI server running on a Windows 2003 machine.

My colleague came to me complaining that clone, incoming and pull
operations are very slow.

To troubleshoot the problem I first measured the throughput he was
getting. I ran "hg clone" and used the Windows Task Manager to monitor
the network usage.
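For reference, the test was just a plain clone over HTTP, along these
lines (the URL and repository names below are placeholders, not our
real ones):

    hg clone http://hgserver/repo bigrepo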

I saw a very brief spike in network usage up to about 25% of our 100
Mbps network bandwidth, which then quickly decayed to about 2.5% and
stayed there for the rest of the clone. At that speed the clone takes
a very long time to finish. I also checked the CPU usage on both the
server and the client during the clone and did not see a problem there
(i.e. none of the cores were anywhere close to being fully used).

This seemed very low, which made me wonder whether we had a network
issue. To test this theory I shared the repository folder on the
Windows network and copied it to the local machine using Windows
Explorer. The Windows Task Manager showed 100% of the network
bandwidth in use, and the copy finished very quickly. I believe this
rules out a problem with the network infrastructure itself.

My first advice was to use the --uncompressed clone method. With it
Mercurial uses around 75% of the network bandwidth, and the usage is
quite flat too (i.e. it mostly stays around 75% and does not drop
until the clone is done). This fixes the problem for cloning, but pull
operations are still pretty bad. Surprisingly, "hg incoming" is as bad
as pull.

In order to test whether this was a problem with the Apache
configuration, I set up a simple Mercurial web server using the
built-in "hg serve" command (which I launched through TortoiseHg). I
repeated my measurement and saw the same performance issue (i.e. the
2.5% network transfer cap). Thus it does not seem to be a problem with
Apache, but with Mercurial itself.
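For anyone who wants to reproduce this, the command line equivalent of
what TortoiseHg runs is roughly the following (the path, host name and
port are placeholders):

    hg serve -R path\to\repo -p 8000

and then, from the client machine:

    hg clone http://serverhost:8000/ bigrepo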

My final test was to create a copy of the repository using the
largefiles extension. This greatly improves the situation. The
performance is not quite as good as with "clone --uncompressed", but
it is quite close (around 60% of the bandwidth) and definitely
acceptable.
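In case the details matter, I did the conversion with the lfconvert
command from the largefiles extension, roughly like this (the 10 MB
size threshold is just the value I happened to pick):

    hg --config extensions.largefiles= lfconvert --size 10 bigrepo bigrepo-lf

and enabled the extension on the clients in their hgrc:

    [extensions]
    largefiles =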

So it seems that I should advise my colleague to use the largefiles
extension. However, I am very surprised that there is a 10x
performance difference between a regular pull and a "largefiles" pull.
I know that in one case Mercurial must do much more work, but the
difference seems excessive (especially since the CPU is not fully
used). Is this to be expected? Is there something we can do to improve
the situation further, other than using largefiles?

Thank you in advance,

Angel
