[PATCH stable v2] worker: avoid potential partial write of pickled data
Manuel Jacob
me at manueljacob.de
Sun May 22 05:28:33 UTC 2022
# HG changeset patch
# User Manuel Jacob <me at manueljacob.de>
# Date 1653184234 -7200
# Sun May 22 03:50:34 2022 +0200
# Branch stable
# Node ID b475b0ea695438f6b2994eba0ddb3189d8b4fd05
# Parent 477b5145e1a02715f846ce017b460858a58e03b1
# EXP-Topic worker-pickle-fix_partial_write
worker: avoid potential partial write of pickled data
Previously, the code wrote the pickled data using os.write(). However,
os.write() may write fewer bytes than it is passed. On my system, triggering
the problem required pickled data larger than 2147479552 bytes.
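For illustration only (not part of this change): a caller that kept using
os.write() directly would need a write-all loop along these lines
(hypothetical helper, not in Mercurial):

    import os

    def write_all(fd, data):
        # os.write() may accept only part of a large buffer; retry with the
        # remainder until everything has been written.
        view = memoryview(data)
        while view:
            written = os.write(fd, view)
            view = view[written:]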
Instead, open a file object and pass it to pickle.dump(). This also has the
advantage that it doesn't buffer the whole pickled data in memory.
Note that the opened file must be buffered, because pickle doesn't support
unbuffered streams: an unbuffered stream's write() method may write fewer
bytes than it is passed (like os.write()), whereas pickle.dump() relies on all
bytes being written (see https://github.com/python/cpython/issues/93050).
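As a rough standalone sketch of the distinction (not part of this change; it
uses a plain pipe rather than Mercurial's worker setup):

    import os
    import pickle

    r, w = os.pipe()
    # os.fdopen(w, 'wb', buffering=0) would return a raw, unbuffered stream
    # whose write() may be partial, which pickle.dump() does not handle.
    # The default (buffered) file object consumes all bytes passed to
    # write(), so pickle.dump() is safe on it.
    with os.fdopen(w, 'wb') as wf:
        pickle.dump({'example': 1}, wf)
        wf.flush()
    print(pickle.loads(os.read(r, 65536)))
    os.close(r)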
A side effect of using a file object and a with statement is that wfd is now
closed explicitly, whereas before it was apparently closed implicitly only at
process exit.
diff --git a/mercurial/worker.py b/mercurial/worker.py
--- a/mercurial/worker.py
+++ b/mercurial/worker.py
@@ -255,8 +255,10 @@
                     os.close(r)
                     os.close(w)
                 os.close(rfd)
-                for result in func(*(staticargs + (pargs,))):
-                    os.write(wfd, util.pickle.dumps(result))
+                with os.fdopen(wfd, 'wb') as wf:
+                    for result in func(*(staticargs + (pargs,))):
+                        util.pickle.dump(result, wf)
+                        wf.flush()
                 return 0

             ret = scmutil.callcatch(ui, workerfunc)