User masquerading - audit trail?
John D. Mitchell
jdmitchell at gmail.com
Wed Jan 14 22:39:54 UTC 2009
On Wednesday 2009.01.14, at 13:40 , Jeremy Lizakowski wrote:
[...]
>> ZERO effort to prevent masquerading: it simply can't be
>> done and we don't want to give any false sense of security.
>
> I think masquerading can be prevented quite easily, and done securely.
Um, er, no it can't. If we could do that we would have an
incorruptible bridge between the meat and the electronic -- i.e., the
holy grail of identity -- and that's just not possible given what we
know at this point in time.
> The GPG extension seems to implement most of this already by allowing
> signatures. However, I'm not familiar with the repo format, so there
> could be implementation details that make it difficult?
>
> If a developer signs a commit, that signature cannot be forged by
> other developers. The name and email address they provide might be
> bogus, but the key for the signature is essentially not forgeable.
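(For concreteness: Mercurial's bundled gpg extension provides exactly these mechanics. A minimal sketch, assuming GnuPG is installed and configured; the key id below is hypothetical:)

```ini
# ~/.hgrc -- enable the bundled gpg extension
[extensions]
gpg =

[gpg]
# key id to sign with (hypothetical)
key = F3C729D1
```

With that in place, `hg sign` records a detached signature for the
current tip in a versioned `.hgsigs` file, and `hg sigcheck REV`
verifies the signatures attached to a given changeset.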
You might want to check out the latest stats on the various forms of
data theft: via stolen/lost devices such as laptops, PDAs/smartphones,
USB devices (thumb drives, iPods, etc.), social engineering, insider
actions, and so on.
> I may know that #F3C729 is a developer who I can trust. I may have
> seen their code before, or I might have met them in person. But if
> #327DB1 submits a patch, and I don't know them, then it's time to
> review the code (or, if it's inside a company, find out how they
> attained access).
"Trust, but verify."
> Yes, keys can be compromised, but then the issue is taken to another
> level, and nothing can be trusted. I'm not worried about that threat
> model, and I assume we can keep our keys reasonably secure.
In a distributed model like Hg's, those are NOT reasonable
assumptions.
> It is possible for a 3rd party to push code yet not be able to
> compromise the developer's ssh keys. For instance, if the repo is
> under the username "hg" on a shared folder, it could be tainted, but
> the 3rd party does not know my password and cannot see my keys. But
> if I pull from there, unaware of the new contributor, my repo is
> tainted. If the submitted code is not executed locally (e.g. it is
> deployed to a test server), they will not be able to compromise the
> keys on my machine via a trojan. So, non-authenticated submissions
> can affect the project without compromising keys.
>
>> a) trust the developers you're working with
>> and/or
>> b) audit every submission
>
> There is also a middle ground (and that's where I currently am). I
> can't audit every submission, or we would be inefficient. I trust my
> developers, but sometimes I have to work with external developers
> hired by the client, and I have no control over who they are or why
> they are there.
> Essentially, my concern is accountability. If the option is enabled,
> each commit could be traced back to a unique key. If I limit access
> to certain keys with hg-ssh, then it is pretty secure. In the case
> of a distributed open-source project, signatures establish
> accountability and unforgeable reputations, while simultaneously
> allowing anonymity if desired.
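(For context: hg-ssh ships in Mercurial's contrib/ directory, and a key is locked down to specific repositories with an authorized_keys entry along these lines; the key material and repo path here are placeholders:)

```
command="hg-ssh ~/repos/project",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA...key... jeremy@example.com
```

Each authorized key then maps to exactly one identity, and pushes over
SSH can reach only the repositories listed on its line.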
So you still have no control over their use of the keys that you give
to them.
I.e., I've worked with and audited a number of outsourced projects and
found a frightening number of scary things. One example is where all
of the outsourced developers used the very same account id for
everything, because they were just barely competent enough to set up
one Windows machine on their end with the ability to communicate
"securely" with the in-house servers. Another client was smart enough
to contractually require their ok before the outsourcer could replace/
transfer/etc. anybody off/on their project (to protect against the
very common bait-n-switch practice), and so what did the outsourcer
do? They still moved the original, qualified guy off, replaced him
with another guy, and just told the new guy to pretend he was the
original guy (and we're talking in emails, checkins, etc.).
Basically, the belief that you can enlarge your fundamental trust
boundary by relying on such signatures without significant risk is a
fallacy. Studying basic epidemiology is, IMHO, a requirement for
anyone in the security field. :-)
Anyways, all of that said, if you're comfortable with implementing the
gpg signing check for your company, commit hooks seem like the right
place to do it.
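A rough sketch of what that could look like on a central "blessed"
repository; `check-sigs` is not a real Mercurial command, just a
placeholder for whatever script wraps `hg sigcheck` over the incoming
changesets:

```ini
# .hg/hgrc of the shared repository
[hooks]
# runs after incoming changesets are added but before the
# transaction commits; a nonzero exit rolls the push back
pretxnchangegroup.sigcheck = /usr/local/bin/check-sigs
```

Using pretxnchangegroup (rather than changegroup) means an unsigned or
badly signed push never becomes visible to other pullers.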
Have fun,
John
More information about the Mercurial mailing list