[HTTPS-Everywhere] HTTPS Everywhere is unscalable (uses central database)

Phil Vandry vandry at TZoNE.ORG
Fri Apr 29 07:00:17 PDT 2011


(By the way, I thought this would be a FAQ but I didn't find it
at http://www.eff.org/https-everywhere/faq )

Please correct me if I misunderstand, but it seems to me that HTTPS
Everywhere uses a centralized database to aggregate and publish the
rules that tell it which sites can be used over HTTPS. This makes it
unscalable and creates a dangerous central point of control.

Using such a centralized database reminds me of the old HOSTS.TXT
file, which contained the name-to-IP-address mappings of every host
on the Internet and was updated and published on a regular basis.
The distributed database that replaced it (DNS) was a *good* idea,
and we should try to emulate DNS, not HOSTS.TXT.

In fact, HTTPS Everywhere could be piggybacked on DNS, couldn't it?
To determine the HTTPS vs. HTTP policy for www.example.org, you query
for a TXT record at _httpspolicy.www.example.org. and parse it to
find out whether you should rewrite URLs from http to https. Like the
results of any other DNS query, this "_httpspolicy" resource record
is cacheable and securable with DNSSEC.
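To make that concrete, here is a rough sketch of what the client-side
lookup might look like. The record name follows the _httpspolicy
convention above; the "rewrite=https" payload format is only an
assumption of mine for illustration (nothing of the sort has been
specified anywhere), and the sketch uses the third-party dnspython
library:

    import dns.resolver

    def https_policy(hostname):
        # Query the hypothetical _httpspolicy TXT record for this host.
        # Returns True if the record says to rewrite to HTTPS, False if
        # a record exists but says otherwise, None if none is published.
        qname = "_httpspolicy." + hostname
        try:
            answers = dns.resolver.resolve(qname, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return None
        for rdata in answers:
            # TXT rdata is a sequence of byte strings; join them back up.
            text = b"".join(rdata.strings).decode("ascii", "replace")
            if "rewrite=https" in text:
                return True
        return False

    print(https_policy("www.example.org"))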

The centralized database could be retained as a legacy measure,
holding rules for websites which have not yet published their
policies themselves.
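Combining the two, the client could check DNS first and consult the
bundled ruleset only when no policy record is found. Roughly (with
central_rules standing in for whatever form the existing ruleset
actually takes, and https_policy() from the sketch above):

    def effective_policy(hostname, central_rules):
        # Prefer a policy the site publishes itself via DNS; the bundled
        # (centralized) ruleset is consulted only as a legacy fallback.
        dns_policy = https_policy(hostname)
        if dns_policy is not None:
            return dns_policy
        return bool(central_rules.get(hostname, False))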

I would be eager to publish an HTTPS Everywhere rule for my own
company's web site, but I'm not so enthusiastic about the centralized
publication method that's currently in place.
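Publishing would then be a one-line addition to the zone, something
like the following (again assuming my illustrative "rewrite=https"
payload, with www.example.com standing in for the real hostname):

    _httpspolicy.www.example.com.  3600  IN  TXT  "rewrite=https"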

Would the project care to define and implement a protocol such as
the one I suggest (or instead enlighten me about why it is not
necessary)?

-Phil
