[SSL Observatory] Number of CAs

Andy Isaacson adi at hexapodia.org
Thu Dec 8 14:34:18 PST 2011


On Thu, Dec 08, 2011 at 01:11:41PM -0500, Patrick Patterson wrote:
> > I'm trying to find out how many distinct security policies and HSMs need
> > to be audited for a user, using the default trust root in browsers, to have
> > confidence that false certificates have not been issued.
> 
> Ok - what is your definition of "false certificates" - is it
> certificates that you feel were not issued correctly?  Is it
> certificates that the CA admits were issued in violation of its
> policies? The two are not the same. 

Certainly there are many overlapping definitions, from "arbitrary certs
issued with no audit trail" a la DigiNotar; to "certs issued bypassing
DV checks but with an audit trail" a la Comodohacker; "certs issued
with faulty DV checks" due to DV implementation weaknesses, BGP
poisoning leading to SMTP MITM, or SMTP sniffing of verification URLs;
"certs requested by an attacker and accidentally approved by ssladmin
at victim.com"; to "certs requested by the victim and properly DVed,
but then compromised by an attacker".  I won't draw a bright line in
that continuum, because I don't think we are far enough into
requirements gathering to know what we (as a larger community) need.

> > To do that, first we (the entire SSL community) will need to
> > enumerate the private keys that have a valid certificate chain to
> > the trust root.  The thousand-plus CA certificates discovered by the
> > EFF SSL Observatory correspond to a strict subset of that set of
> > private keys.
> > 
> > Once we have enumerated the set of private keys (note that I'm not
> > assuming a single entity will have knowledge of the entire set!), we
> > need to enumerate where they are stored.  Hopefully they are all in
> > HSMs with undisturbed audit logs and can be shown to not have
> > exposed key material outside of their audited systems.
> 
> One does not need to "expose key material" for "illegitimate"
> certificates to be issued.

Definitely true (as was the case for both Comodohacker and DigiNotar).

> All that is required is for some human to not fulfill their role
> correctly, and this often results in the CA's keys being used quite
> correctly (i.e. the CA software responding to a request from someone
> it considers legitimate), but the end result may be that a certificate
> was issued contrary to policy.

Yep, true.

> And when you say "HSM" - do you mean a smartcard, or a module that
> wipes its keys if you look at it wrong? Depending on your attack

Again, you seem to be asking for a bright line, when there's clearly a
continuum of solutions and definitions.

> surface and your opponent, there are times when even the latter is not
> sufficient.
> Also, what standard would you like the HSM to follow to
> ensure protection of the private key? I know of at least one HSM
> manufacturer (which is rated to EAL4+) that "backs up" the private key
> by exporting it to a PKCS#12 (albeit with a very long passphrase).
> Would you consider that acceptable? 

I don't think that sounds reasonable, but given an appropriate audit
regime, possibly.
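
For concreteness, here is roughly what such a backup amounts to,
sketched in Python with the third-party cryptography library.  PKCS#8
stands in here for PKCS#12 as the container, and the passphrase is
obviously illustrative; the point is the same either way:

    # A minimal sketch of a passphrase-protected key export; PKCS#8
    # stands in for PKCS#12, and the passphrase is illustrative.
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    backup_blob = key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.BestAvailableEncryption(
            b"a very long passphrase goes here"),
    )
    # From this point on, the key's protection is the passphrase and
    # whatever controls access to backup_blob -- not the HSM.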

> And what about an offline key that was not in an HSM, but was only
> accessible under guard, by 2 or more people, inside of a highly secure
> facility? Making a blanket statement of "must be in an HSM" misses the
> point - how about "kept so that it is only accessible under a known
> set of conditions, by a known set of actors". 

Yes, your final statement is a pretty good summary of the goal.  It
seems to me that an HSM is cheaper than extra armed guards, but some
sort of guard is probably necessary in any case.

> > I'd advocate that CAs take a proactive audit stance towards this
> > private key material.  I believe that CAs should, at the very least,
> > provide browser vendors with an independently audited census of all
> > private keys chained from their in-browser certificates.  The audit
> > should have a publicly disclosed summary containing population
> > counts for categories like "keys maintained in HSM at CA", "keys
> > maintained in HSM at customer premises", "audit logs maintained by
> > CA", "audit logs maintained by customer", et cetera.
> 
> I know of CAs that would be able to provide such details, but where
> you could still find certificates that were issued on the whim of some
> administrator (it is 100% within the policy of that CA to allow this).
> Focussing once again on the integrity of the keys sort of misses the
> point of the PKI. It's the entire infrastructure that you need to look
> at (including the people), and not just the technology. We tend to say
> to clients that PKI is 90% people and process, 5% technology, and 5%
> crypto - without the 90%, you have some very nice technology and
> crypto, but you don't have very much security.

I'd certainly be interested in more details of that "whim" story, of
course!
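
Coming back to the census proposal quoted above: the disclosed summary
need not be elaborate.  A hypothetical shape for it in Python, with
every category name and count invented purely for illustration:

    # Hypothetical census summary; all names and counts are invented.
    census_summary = {
        "keys maintained in HSM at CA": 412,
        "keys maintained in HSM at customer premises": 57,
        "keys outside any HSM": 3,
        "audit logs maintained by CA": 460,
        "audit logs maintained by customer": 12,
    }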

> > Keys which are discovered to be potentially compromised in the
> > process of this audit MUST be publicly disclosed and blacklisted.  I
> > would claim that this must include keys which were protected during
> > their validity period but which lost integrity after certificate
> > expiration.
> 
> Again - what do you mean by "compromised" - do you mean "got out of
> their HSM", or "may have signed a certificate or three that was
> requested by a rogue Registration Authority". Should you revoke the
> entire CA for that, or should you just revoke those certificates and
> punish the Registration Authority?

Depends on the severity of the compromise, of course.  Certificate
revocation (via CRL or OCSP) doesn't defend against network-controlling
MITM attackers: an attacker who can interpose on the connection can
usually also block the revocation check, and most clients fail open.
So I think we need to enhance the procedural aspects of our responses
to such compromised certificates.
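
To make that concrete: a client that soft-fails its revocation check
gives a network-controlling attacker a free pass.  A minimal sketch,
assuming the third-party cryptography and requests Python libraries
(the helper name, timeout, and return strings are illustrative):

    import requests
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509 import ocsp

    def ocsp_status(cert: x509.Certificate, issuer: x509.Certificate,
                    responder_url: str) -> str:
        builder = ocsp.OCSPRequestBuilder().add_certificate(
            cert, issuer, hashes.SHA1())
        der_req = builder.build().public_bytes(serialization.Encoding.DER)
        try:
            resp = requests.post(
                responder_url, data=der_req,
                headers={"Content-Type": "application/ocsp-request"},
                timeout=3)
            resp.raise_for_status()
        except requests.RequestException:
            # Soft fail: an attacker who controls the network simply
            # drops this request, and the typical client proceeds with
            # the connection anyway.
            return "unknown (most clients treat this as good)"
        ocsp_resp = ocsp.load_der_ocsp_response(resp.content)
        if ocsp_resp.response_status != ocsp.OCSPResponseStatus.SUCCESSFUL:
            return "unknown"
        return ocsp_resp.certificate_status.name.lower()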

Specifically, it is *not* sufficient for a CA, once compromised, to
silently issue revocations for the compromised certs via OCSP and CRL.
At a minimum the affected party (victim.com) must be notified.  We can
discuss what that notification should consist of; maybe DV-quality
email is sufficient, or maybe more thorough contact is warranted.

I think the audit results (the number of compromised certificates
issued) should be disclosed, perhaps annually: at a minimum to Mozilla
or another independent party, but preferably released publicly.

> Not to mention that, in the case of the Malaysian government cert that
> was used to sign some malware, it was the USER/Subscriber who didn't
> report their key stolen, and so the CA had no way of putting it on the
> blacklist / CRL. How do you propose to solve the "Users are the
> weakest part of the security chain" problem (which is NOT PKI
> specific)?

Users certainly are a weak link; I've been responsible for my share of
faulty security practices and been party to more than one inadvertent
key disclosure (thankfully never beyond a strictly hobbyist context).
The vast majority of SSL private keys are just one WordPress
vulnerability (plus maybe a local root escalation, if the admin
remembers how to chmod) away from leaking.
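
One cheap check in that direction, as a sketch: is the key file
readable by anyone beyond its owner?  The path is hypothetical and the
check is Unix-specific:

    import os
    import stat

    def key_overexposed(path: str) -> bool:
        # True if group or other can read the private key file.
        mode = os.stat(path).st_mode
        return bool(mode & (stat.S_IRGRP | stat.S_IROTH))

    print(key_overexposed("/etc/ssl/private/server.key"))  # hypothetical path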

There is definite value in gathering statistical information about the
numbers and kinds of accidental key exposures.  But we as a community,
and the CA industry in particular, cannot avoid fixing our own issues
by pointing out that there are other issues elsewhere in the pipeline.

> > To manage the CA pushback from the bad publicity potential in this
> > area, perhaps an independent or community group (CA-B-F, EFF, IETF,
> > Mozilla) could manage a blacklist of such exposed keys without
> > necessarily disclosing what CA signed the certificate.
> 
> How, pray tell, could you do that? A certificate contains the Issuer.
> So unless you have this body only publish a list of hashes of certs,
> then you would certainly be exposing who signed the certs.

Publishing hashes would suffice for blacklisting.
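
A minimal sketch of what that takes, assuming the list is published as
SHA-256 digests of the DER-encoded certificates (the digest shown is a
placeholder):

    import hashlib

    # Hex SHA-256 digests published by the coordinating body; this
    # entry is a placeholder, not a real blacklisted certificate.
    PUBLISHED_BLACKLIST = {
        "3f79bb7b435b05321651daefd374cdc681dc06faa65e374e38337b88ca046dea",
    }

    def cert_fingerprint(der_bytes: bytes) -> str:
        return hashlib.sha256(der_bytes).hexdigest()

    def is_blacklisted(der_bytes: bytes) -> bool:
        return cert_fingerprint(der_bytes) in PUBLISHED_BLACKLIST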

> > I'm sure there is more information that the Relying Community would
> > benefit from.
> 
> We're not a CA, so we don't have an iron in this fire, but I would
> like to see a constructive dialog on how to educate organisations (at
> the user level, it's probably a lost cause) to perform their own trust
> audits, and take an active part in deciding whom they and their users
> trust.

Yes, it's unreasonable to ask individual users to spend time on trust
audits.

It's not unreasonable, though, to provide tools for anyone to perform
such a trust audit, and it's likely that most people will delegate
their trust decisions to a third party (their government, church,
political party, EFF, Masons Lodge, cousin, browser vendor, etc.).
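
As a sketch of how small such a tool could start, Python's standard
library can already enumerate the roots the platform trusts by default
(what get_ca_certs() returns varies by platform and by how the trust
store is loaded):

    import ssl

    ctx = ssl.create_default_context()
    for ca in ctx.get_ca_certs():
        # Each entry is a dict with keys like 'subject', 'issuer',
        # and 'notAfter' for a root loaded into the default context.
        print(ca.get("subject"), ca.get("notAfter"))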

> Simply educating folks that PKI isn't about HSMs or keys, and is about
> binding identities and operating according to a policy is probably
> a start.

I would say "PKI is about *more than* HSMs and keys".  Without the
technological foundation, PKI doesn't work.

> Once that sinks in, then people will start thinking of
> checking the audits of organisations, and only trusting them if they
> both agree with the organisation's policy (hey, a CA can have a
> policy that says it sells certs only to "the bad guys", and as long
> as that's what it does, it'll pass the audit :) and see a valid
> audit from some accredited firm.
> 
> Now, the audit regime, and what it means to be accredited is a topic
> for yet another day :)

Indeed, accreditation is one of those critical areas that I never
dreamed I'd care about. :)

-andy


