This document records my personal opinions related to the creation of the official Mozilla CA Certificate Policy. It is not an official Mozilla document, and is of historical interest only.
Version 0.3, March 29, 2004. Created HTML version of the metapolicy posting to the netscape.public.mozilla.crypto newsgroup, with some changes from the original posting based on feedback in the newsgroup. Version number chosen to match those of the other draft documents.
This is a draft document for public discussion. It reflects the personal opinions of the author, and does not necessarily represent the views of mozilla.org staff and the Mozilla Foundation.
When distributing Mozilla and related software the Mozilla Foundation includes with such software a default set of X.509v3 certificates for various Certification Authorities (CAs). The certificates included by default are marked as being "trusted" for various purposes, so that Mozilla can use them automatically to verify certificates for SSL servers, S/MIME email users, etc., without having to ask Mozilla users for further permission or information.
The Mozilla Foundation decided to create a formal policy regarding how CA certificates are selected for inclusion with Mozilla and related software, and asked me (Frank Hecker) to take the lead in writing such a policy. As part of the process of creating public drafts of a proposed policy, I created and published an informal "metapolicy" to help explain and clarify the philosophy behind the proposed CA certificate policy. I decided it was worth preserving this metapolicy as a separate document, for anyone who might be interested.
Note that in the discussion below the term "Mozilla" in the context of software products refers not only to the traditional Mozilla suite but also to the Firefox browser, Thunderbird email client, and other end user products based on Mozilla code and distributed through mozilla.org.
Rationale: The way in which CA certificate selection was formerly done lacked transparency and was inconsistent with the way other security-relevant decisions are made in the Mozilla project. The Mozilla Foundation is the logical entity to take over formal responsibility for this task, as sponsor of the Mozilla project and distributor of the most common Mozilla-based software.
Rationale: Selection of CA certificates is a security-related issue, and security-related issues in general always seem to be higher-profile (because of the possibility of harm to users and others) and characterized by deeply-held opposing opinions on how to address them (see for example the issue of disclosing security vulnerabilities). In such a case it's useful to have a written policy, and for that policy to be created through public discussion and review, in order to create an approach to the issue that takes into account differing views and can be better justified to all the people with a stake in the policy.
Rationale: The Mozilla Foundation is interested in seeing that the policy promotes, not hampers, achieving the goals of the Mozilla project, including the goal of providing a high-quality secure product suitable for use by end users. Mozilla security developers have an interest in creating secure products and upholding their professional and personal reputations as people doing a good job of that. Other Mozilla developers and others in the Mozilla community share this motive, and in addition have an interest in the policy providing other benefits, e.g., making possible Mozilla-related functions of interest to them (as in the example below of promoting more signing of community-created Mozilla extensions).
CAs have a clear interest in seeing their own CA certificates included in Mozilla, and in having a fair and open process in which they can attempt to achieve that; CAs also have an interest in having high-enough standards applied in that selection to prevent their having to compete with CAs that fall short of those standards. CA customers have an interest in their certificates being accepted by all Mozilla users, without having to persuade each of those users to install a new CA cert and set trust flags appropriately. And of course Mozilla users have an interest in having a high-quality product that adequately protects the security of their information and other assets.
Finally, the wider open source and security communities (which overlap to some extent) have at least some interest in this issue. The open source community has an interest in the Mozilla project not acquiring a reputation as failing to ensure user security, a reputation that might taint other open source projects and open source software in general; the open source community may also be able to benefit from adoption of some or all aspects of the Mozilla project's approach if it turns out to work well in practice. The security community, for its part, has an interest in Mozilla security in general, since Mozilla security failures may lead to security failures in other areas; the security community also has an interest in seeing how different approaches to the "CA problem" might succeed or fail.
Since all of the above groups have a clear and justifiable interest in the way the Mozilla project approaches this problem, the creators and implementors of the policy are accountable in one way or another to all of them. (Of course the exact level of accountability varies widely depending on the group in question.) Inviting all these groups into a public process of policy creation and encouraging them to participate is one of the best ways, if not the best way, to ensure that these groups' interests are properly taken into account.
Rationale: First, the primary risk associated with CA certificate selection is a security risk. The legal risk is secondary, in the sense that it is a consequence of the security risk and not vice versa; therefore the policy should address security risks first and foremost. Second, the people creating and implementing the policy are not in a position to assess legal risks and attempt to mitigate them, given that a) they are not lawyers, and b) even if they were lawyers, they would not necessarily be in a formal attorney/client relationship with all the parties with a stake in this policy.
More specifically: Any legal risk to the Mozilla Foundation as a result of this policy is for the officers and board of the Mozilla Foundation to judge, based on advice from Mozilla Foundation counsel. This policy will be submitted to the Mozilla Foundation for review and approval before its formal adoption, and that's the proper time for them to do any analysis needed and propose any desired changes to the policy.
If Mozilla security developers and other Mozilla developers are concerned about their personal legal liabilities as a result of the policy being implemented then they should consult their own lawyers, and take any measures they feel are necessary to mitigate the risk to themselves. If they participate in Mozilla development as part of their job duties then they should consult their managers and corporate counsel.
Finally, Mozilla users have to judge for themselves the legal risks of using Mozilla and accepting certificates issued by CAs whose certificates are pre-loaded into Mozilla; different users may perceive different risks in different contexts. There is ample public information that they and their lawyers can consult to help judge the legal situation, including Mozilla licensing terms, CA policy documents, relying party agreements, and so on.
Given that problems with other security-relevant features can have consequences that are comparable in nature and severity to those resulting from problems with CA certificates, it doesn't make sense to do risk/benefit analysis for one set of issues in a way that's wildly different than for the other set. In particular, it doesn't make sense to give significantly greater weight to risks over benefits in the case of CA certificates.
Rationale: Pre-loading CA certificates in Mozilla with given trust flag values is simply a special case of defining a default security configuration for users. The exact default configuration chosen is most important for those users who are unlikely to change it — in other words, the "typical" users described above. Those users in turn have a typical set of activities in which they tend to engage, including the activities listed.
Other Mozilla users are not typical in this sense; they fall into at least two groups. First, some individuals have significantly more knowledge of and interest in security-related issues than typical users; such "power users" may weight risks vs. benefits differently than typical users might, and may even weight them differently from one another. (For example, one power user may consider activity A to be much more risky than activity B, while another power user may believe the exact reverse.) Since these users are assumed to be security-knowledgeable we can depend on them to customize the default Mozilla security configuration to their own liking, as opposed to accepting it as a given.
Government and corporate users are even more different from typical users: They often engage in extremely high value transactions (e.g., involving extremely large sums of money or extremely sensitive information), they operate within a significantly different legal framework (e.g., based on specially-negotiated contractual relationships among all parties to transactions), and they have (or at least should have) experts on whom they can rely to mitigate the risks involved in using Mozilla and other software. As with power users as described above, we can depend on corporate and government users (or their IT staff) to customize the default Mozilla security configuration to their own liking, as opposed to accepting it as a given.
As a result we do not need to design the policy to accommodate the special requirements of power users and corporate/government users. However the policy should still take their needs into consideration; in particular, the policy should try to ensure that these users have sufficient information about the default Mozilla security configuration and why it was chosen, so that they can intelligently make their own decisions about how to change it for their own purposes.
Rationale: If, for example, we're considering a CA that is based in a particular country and issues certificates mainly to people and businesses in that country, and if the Mozilla users interacting with those certificate holders are also in that country, then there might be country-specific issues that would tend to make the threat model somewhat different than it would be otherwise. If this is the case then the implementors of the policy can take such factors into account when deciding whether or not to include the CA's certificate.
Rationale: The standard version of Mozilla released by the Mozilla Foundation also happens to serve as the "US-localized" version, and therefore it should take US-specific issues into account if and where that ever makes sense.
If for some reason a decision taken in a US context doesn't make sense for users in other countries, then people doing localized versions of Mozilla for those countries can be given leeway to make their own decisions. Thus, for example, if we include a particular CA's certificate in the standard (US-localized) version of Mozilla, and the people doing the "localized for France" version don't think that this is a good idea (based on the situation in France), then there's no reason in principle why they couldn't remove that CA's certificate from the France-localized version.
There's still the trademark issue, but I don't see why this couldn't be handled consistently with other localization-specific changes. For example, if the Mozilla Foundation allows the creators of the France-localized version to include, say, default links to French search engines, and still use official Mozilla logos, etc., then I don't see why the Mozilla Foundation couldn't also let them make changes to the list of included CA certificates, if there are good reasons for such changes.
Rationale: Risk analysis doesn't make sense in the absence of an agreed-upon threat model, and that threat model should be based on what users are actually doing in practice.
Rationale: The key decisions here have to do with adding or not adding a particular CA's certificate into the default list. The principal potential benefit from adding a new CA certificate is that typical users can more easily take advantage of certificate-based services (e.g., SSL-enabled web applications, signed and encrypted email, etc.) provided by customers of that CA; put another way, those services now have a potentially larger user base than they would otherwise have. To the extent that users use such services in preference to services that are arguably less secure (e.g., non-SSL web applications, or email sent in the clear), that improved security is a benefit that should be taken into consideration as part of this policy.
Note that the benefit may not be experienced by all typical Mozilla users, but only by some of them; nevertheless it is still a benefit and should be taken into account. In this case the benefit to some typical Mozilla users is arguably accompanied by some increased level of risk to all typical users (since we have yet another CA to worry about, and all typical users will have its certificate pre-loaded with trust flags set). But that's not a reason to spurn the potential benefit; it just calls for a reasonable risk/benefit analysis: is the size of the benefit (even if confined to a subset of users) sufficient to outweigh any increased risk to all users?
The first draft of the CA certificate policy called for consideration of the benefits to the Mozilla project, separate from any benefits to Mozilla users. Subsequently I concluded that that would be a mistake: It is the users who are incurring the security risk, and thus any benefits considered have to be those that accrue to the users. Otherwise we could have a situation where the Mozilla project might make a decision based on benefits to the project that were not necessarily shared by the users themselves.
Finally, it should go without saying that our first priority is the interests of users, not the interests of CAs or their customers. We can take the interests of CAs into account in some way, but not in a way that compromises the interests of users.
Rationale: We have to deal with the world as it is, not as it might be or as we might wish it to be. It is a simple fact that the services of interest here (e.g., accessing SSL-enabled web servers, exchanging signed and/or encrypted S/MIME email, and downloading and installing and/or executing signed code) have traditionally depended on a CA-based infrastructure.
We might argue, for example, that the interests of users would be better served (or at least would not be harmed) by moving to a model where Mozilla automatically accepted self-signed certificates (whether from end entities or CAs) without question. However such an approach differs widely from that employed by comparable products that typical Mozilla users might be experienced with, and typical users would not be at all equipped to understand the reasoning behind such an approach or the consequences of adopting it.
Rationale: The Mozilla project depends on volunteer efforts for a large portion of Mozilla development. Where people are in fact paid to do Mozilla development, it is usually to develop features of interest to their employers, and not anything else. Thus even though it might be nice to have new Mozilla features relating to CA certificates we have no guarantee that such features will be developed in a timely manner, or developed at all.
On the other hand we need a policy now, since we are building a backlog of requests from CAs who'd like to have their certificates included, and we need to address those requests one way or the other. Therefore we shouldn't wait for new Mozilla features, but should create and implement the policy in the context of current Mozilla functionality.
Note that this means that for the most part we have to live with the "one size fits all" problem where all pre-loaded CA certificates in Mozilla are treated essentially identically. Although it would be nice to have features like grouping CAs into different categories for purposes of trust, providing CA "branding" for viewing by users, and so on, we do not have the luxury of delaying the policy until such features are available.
Rationale: People's trust in the selection process, and hence in the security of Mozilla itself, will depend on the degree to which the people doing the selection are perceived to be impartial and fair. Introduction of monetary or related considerations would distort the process and lead to the perception that CAs can buy their way into Mozilla at the expense of the security of Mozilla users and others.
Rationale: Since there is already an existing system by which CAs can undergo independent audits, it would be foolish not to take advantage of that system if and when it makes sense to do so. That's especially true if the audit process results in public information beyond a simple "pass/fail" grade.
Rationale: Mandating independent audits imposes certain costs that preclude the possibility of certain benefits to Mozilla users. Also, there are options other than independent audit, most notably doing our own evaluation, particularly when the CA's operations are transparent enough that we have ready access to the relevant information. We should retain the flexibility to forgo requiring independent auditing and substitute our own evaluation, if by doing so we can provide benefits to users while still protecting them adequately against security risks.
To expand on these points: First, although the Mozilla Foundation does not incur direct monetary costs due to mandating independent audits (because the audits are paid for by the CAs), there are indirect costs imposed on Mozilla users and the Mozilla community.
To take but one example, a growing community of independent developers is creating extensions for the Mozilla and Firefox browsers and the Thunderbird email program. These extensions are packaged in the form of so-called "XPI" files, and are designed to be installed by clicking on a link pointing to the extension file. Ideally these files should be digitally signed, with signatures validated prior to installation; Mozilla et al. do in fact support this feature. However in practice people don't sign their extensions. Why not? Maybe it's a hassle to get a developer certificate for object signing, maybe it's the cost, maybe there are other reasons. (Remember that even small costs can be a significant barrier to developers in certain countries, or for that matter developers in certain life circumstances.)
One could imagine someone like the mozdev.org or texturizer.net folks sponsoring a no-cost CA specifically to issue object signing certificates to volunteer extension developers, and it's quite conceivable that they could do a good job of operating such a CA, particularly if they had help from other individuals and non-profit groups with CA expertise. However it's quite doubtful they'd go to the trouble and expense of having a formal independent audit of this CA.
If we then require independent audits as a condition of having a CA certificate included in Mozilla, etc., then we can't include the extension developers' CA certificate, and that means that Mozilla users would have to explicitly download the CA certificate before installing the extensions. Based on experience most people wouldn't do this, so in practice developers still wouldn't sign their extensions, and Mozilla users would still run whatever security risks they run by downloading and installing unsigned code.
Second, it is not clear as a general matter that independent auditing would be inevitably superior to our own evaluation. Auditing in general is in part a response to the fact that the entities being audited (e.g., CAs, public companies, etc.) do not expose to public view all details of their internal operations. The auditor therefore acts as a "stand-in" for the people who have an actual interest in the soundness of the practices of the entity being audited. (In the case of public companies these are investors, in the case of CAs these are certificate holders, certificate users, and others.)
If, on the other hand, we are dealing with an entity (in our case a CA) whose internal operations are sufficiently open to public view, then we have all (or almost all) the information available to an auditor, and in theory can make judgements that are just as informed — and possibly more informed, since we can evaluate according to the criteria specifically of most interest to us. If we further conduct such an evaluation as a public process, and solicit the participation of others who have an interest in the matter and expertise regarding it, then arguably we can achieve in practice as informed a judgement as is possible in theory.
Rationale: If a CA is to be included that has not undergone independent audit, then the Mozilla project owes it to Mozilla users to attempt to perform some level of audit itself. Otherwise we can't properly assess whether including the CA's certificate(s) would lead to a security risk for users.
The project also owes it to other CAs who have gone to the trouble and expense of undergoing audits themselves, and who might perceive it as unfair that a CA could be included without being audited. However this is a secondary consideration, given that the policy should put less weight on benefits to CAs, as noted above; this applies whether those benefits accrue to the CA under consideration (which would benefit from its own certificate being included) or to other CAs (which would benefit from their competitor's certificate being excluded).
Rationale: As noted above, performing our own assessment will work best when we have a fair amount of visibility into a CA's internal operations, and if a CA is unable or unwilling to undergo an independent audit then it is arguably fair to ask it to provide sufficient details to enable us to make our own assessment. Since that assessment will be done in a public manner, the details provided should be publicly available. Similar considerations apply to any additional assessments we might wish to perform of an already-audited CA, over and beyond the audit itself.
The assessment should be based on specific, objective, and verifiable criteria so that it is less dependent on personal whim, and so that multiple people assessing the CA can more easily reach consistent conclusions.
Rationale: The policy is specifically intended to address risk/benefit tradeoffs related to including a CA's certificate; it is not our job to "pass judgement" on CAs in the sense of approving or disapproving of the way they conduct business in general. Some of the factors listed may be considered when deciding whether it would be of benefit to include a CA; for example, it may be of benefit to Mozilla users to include CA certificates for non-profit CAs that offer certificates at lower cost for particular purposes. However the factors should not be used as an excuse to exclude a CA from consideration; for example, just because a CA is small doesn't necessarily mean it can't perform its functions in a manner consistent with ensuring the security of Mozilla users.
Rationale: Decisions related to including CA certificates are simply a special case of decisions, including security-related decisions, that are made in every area of the Mozilla project. It makes sense to apply practices that are already being used in other areas and have worked well over the course of the project's life.
Unlike what is done in handling reports of security vulnerabilities, I see no reason to keep private any phase of the decision process related to including new CA certificates. The only reason non-public discussions can be justified in the case of security vulnerabilities is because there are already users out there who are potentially exposed to the vulnerability and might suffer harm should it become publicly known. This is not the case when deciding whether to include new CA certificates, since no Mozilla user is using the CA certificate yet unless they've included it themselves (in which case they aren't a typical user).
The only time I can see justifying special handling of requests related to CA certificates is when the request is to delete a CA cert or turn off its trust flags, based on some imminent threat to Mozilla users that is not already publicly known. In that case the request should be treated according to the existing mozilla.org policy on handling reports of security vulnerabilities.
Rationale: Most Mozilla users will not have the time or inclination to follow in-depth discussions in bugzilla or the Mozilla public forums. We need to use other communication channels to make them aware of decisions that have been taken that are relevant to their security, channels that they would be more likely to be paying attention to.
Rationale: There is no implied obligation on the part of the Mozilla Foundation to continue including a given CA's certificate in Mozilla. However a CA's certificate should be removed only if there is adequate reason, not just in the form of increased risk but in the form of a major reversal of the risk/benefit tradeoffs relating to the CA.
Note that this implies that for CAs with significant market share (i.e., whose certificates are used by many) the perceived increase in risk must be fairly large in order to offset the benefit to Mozilla users of including that CA's cert.