crew-stable test failure in ubuntu 12.10

Giorgos Keramidas gkeramidas at gmail.com
Sun Feb 3 20:22:32 UTC 2013


On 2013-01-25 18:53, Matt Mackall <mpm at selenic.com> wrote:
> On Sat, 2013-01-26 at 01:15 +0100, Giorgos Keramidas wrote:
> > ===== start of error log =====
> > 
> > --- /home/gkeramidas/hg/mercurial/gker/tests/test-inotify-debuginotify.t
> > +++ /home/gkeramidas/hg/mercurial/gker/tests/test-inotify-debuginotify.t.err
> > @@ -7,7 +7,13 @@
> >  inserve
> >  
> >    $ hg inserve -d --pid-file=hg.pid
> > +  Exception AttributeError: 'fd' in <bound method watcher.__del__ of <hgext.inotify.linux.watcher.watcher object at 0x28e13f8>> ignored
> > +  abort: inotify service not available: Too many open files
> 
> Sounds like you have too many inotify instances.
> 
>        EMFILE The user limit on the total number of inotify instances has been
>        reached.
> 
> https://www.kernel.org/doc/man-pages/online/pages/man2/inotify_init.2.html
> 
>        /proc/sys/fs/inotify/max_user_instances
>               This specifies an upper limit on the number of inotify instances that
>               can be created per real user ID.
> 
> https://www.kernel.org/doc/man-pages/online/pages/man7/inotify.7.html
> 
> $ cat /proc/sys/fs/inotify/max_user_instances 
> 128

Thanks for the pointer.  I've bumped this to 256.  Looking at the inotify
manpage I don't see a way to check how many active inotify instances a user
has, other than walking all processes under /proc and looking for fds
symlinked to 'anon_inode:inotify'.
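
For the record, the bump itself is just a sysctl; the setting is transient
unless it is also added to /etc/sysctl.conf.  A minimal sketch, assuming a
sysctl-capable Linux box:

: # raise the per-user inotify instance limit for the running kernel; add
: # "fs.inotify.max_user_instances = 256" to /etc/sysctl.conf to persist
: $ sudo sysctl -w fs.inotify.max_user_instances=256
: fs.inotify.max_user_instances = 256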

Right now, after a few laptop wakeups, my /proc filesystem shows almost 128
inotify instances already, and I haven't even started Mercurial's test suite
yet:

: 0203 21:12 saturn:/proc$ find . -exec ls -ld {} + 2>/dev/null | fgrep notify | fgrep gkeramidas | nl | tail
:    115	lr-x------ 1 gkeramidas gkeramidas 64 Φεβ   3 21:06 ./3541/task/3541/fd/13 -> anon_inode:inotify
:    116	lr-x------ 1 gkeramidas gkeramidas 64 Φεβ   3 21:06 ./3541/task/3543/fd/12 -> anon_inode:inotify
:    117	lr-x------ 1 gkeramidas gkeramidas 64 Φεβ   3 21:06 ./3541/task/3543/fd/13 -> anon_inode:inotify
:    118	lr-x------ 1 gkeramidas gkeramidas 64 Φεβ   3 21:10 ./3643/fd/7 -> anon_inode:inotify
:    119	lr-x------ 1 gkeramidas gkeramidas 64 Φεβ   3 21:06 ./3643/task/3643/fd/7 -> anon_inode:inotify
:    120	lr-x------ 1 gkeramidas gkeramidas 64 Φεβ   3 21:06 ./3643/task/3644/fd/7 -> anon_inode:inotify
:    121	lr-x------ 1 gkeramidas gkeramidas 64 Φεβ   3 21:06 ./7770/task/7770/fd/8 -> anon_inode:inotify
:    122	lr-x------ 1 gkeramidas gkeramidas 64 Φεβ   3 21:06 ./7770/task/7771/fd/8 -> anon_inode:inotify
:    123	lr-x------ 1 gkeramidas gkeramidas 64 Φεβ   3 21:06 ./7770/task/7774/fd/8 -> anon_inode:inotify
:    124	lr-x------  1 gkeramidas gkeramidas 64 Φεβ   3 21:10 ./7770/fd/8 -> anon_inode:inotify
: 0203 21:12 saturn:/proc$
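
Note that the listing above counts the same descriptor more than once,
because threads that share an fd table show the same descriptors under every
/proc/<pid>/task/<tid>/fd, so the real number of instances is somewhat
lower.  A rough per-user count that skips those duplicates by looking only
at /proc/<pid>/fd (a sketch; dup'ed descriptors would still be counted
twice):

: $ find /proc/[0-9]*/fd -lname anon_inode:inotify -user $USER 2>/dev/null | wc -l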

Other people seem to hit the same problem after a laptop wakes up:
https://github.com/nex3/rb-inotify/issues/23

The biggest consumer of inotify instances so far seems to be Chrome, as this
very quick and dirty crawl of /proc shows:

: 0203 21:19 saturn:/proc$ find . -exec ls -ld {} + 2>/dev/null | fgrep notify | \
:   fgrep gkeramidas | sed -e 's@/fd/.*@@' -e 's@.*\./@@' | grep '^[0-9]' | \
:   awk '{print $1}' | fgrep -v /exe | \
:   while read pp ; do \
:     cat $pp/cmdline | awk '{print $1}' ; \
:   done | \
:   sort | uniq -c | sort -nr
:      72 /opt/google/chrome/chrome
:       7 /usr/lib/gnome-settings-daemon/gnome-settings-daemon
:       6 kdeinit4:
:       5 update-notifier
:       4 xfce4-terminal--geometry=100x62
:       4 /usr/bin/pulseaudio--start--log-target=syslog
:       4 /usr/bin/gnome-screensaver--no-daemon
:       3 zeitgeist-datahub
:       3 /usr/lib/gvfs/gvfs-udisks2-volume-monitor
:       3 /usr/lib/gvfs/gvfsd-trash--spawner:1.7/org/gtk/gvfs/exec_spaw/0
:       3 /usr/bin/signon-ui
:       2 /bin/dbus-daemon--config-file=/etc/at-spi2/accessibility.conf--nofork--print-address3
:       2 //bin/dbus-daemon--fork--print-pid5--print-address7--session
: 0203 21:19 saturn:/proc$
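
For what it is worth, a slightly cleaner sketch of the same crawl: it skips
the per-task duplicates, matches the symlink target exactly instead of
grepping ls output, and turns the NUL-separated /proc/<pid>/cmdline into
spaces so argv[0] does not come out glued to the rest of the arguments as it
does above:

: $ find /proc/[0-9]*/fd -lname anon_inode:inotify -user $USER 2>/dev/null | \
:     sed 's@^/proc/\([0-9]*\)/fd/.*@\1@' | \
:     while read pid ; do \
:       tr '\0' ' ' < /proc/$pid/cmdline | awk '{print $1}' ; \
:     done | \
:     sort | uniq -c | sort -nr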

I'll dig a bit more and see if there's anything we can do about it from our
side, or if this is not something Mercurial should care about at all.

Thanks,
Giorgos


