[HTTPS-Everywhere] persistent user-generated rules

Claudio Moretti flyingstar16 at gmail.com
Thu Jan 16 10:28:26 PST 2014


On Tue, Jan 14, 2014 at 3:10 AM, Drake, Brian <brian at drakefamily.tk> wrote:

> [snip]
>

>
> I guess those are valid points. To address the issue of reading the
> e-mails, I will quote from Yan [1]:
>
> You'd have to make it very very clear to the user that they are disclosing
>> their IP address and what website they were visiting by sending a rule to
>> the server.
>>
>
> To address both reading and writing, I will quote from “How to Deploy
> HTTPS Correctly” [2]:
>
>> HTTPS provides the baseline of safety for web application users, and
>> there is no performance- or cost-based reason to stick with HTTP. Web
>> application providers undermine their business models when, by continuing
>> to use HTTP, they enable a wide range of attackers anywhere on the internet
>> to compromise users' information.
>>
>
> If HTTP is not acceptable in web applications, regardless of what
> information is being transferred through them, then shouldn’t non-secure
> e-mail (which in practice seems to be virtually all e-mail) be equally
> unacceptable?
>

My point is that we're talking about two completely different technologies:

HTTP websites are insecure by nature, but you _visit_ them, so you may be
unaware that they are recording your visit (HTTPS can't deal with this, of
course) or that a third party is sniffing your traffic (and this is where
HTTPS comes in handy).

When you send an email, you are _aware_ that you are disclosing personal
information, at least your email address. If you have something to hide,
you can _decide_ not to send an email, but you can't decide what a
webserver is going to store or whether somebody is watching your traffic.

So you can't compare email and HTTP in terms of privacy: they have two
completely different scopes.


>
>  With the POST idea, you would be using Tor if it’s available, right? On
>>> the server side, you could do some basic validation on the rules when they
>>> are submitted; wouldn’t that make it hard to use this system for spam
>>> distribution or file storage? It would be nice if everyone could write and
>>> everyone could read.
>>>
>>
>> Still, I don't see the point in using Tor: it's not like you're sending
>> out personal information.
>> But it could be done like it's done for the Observatory.
>>
>
> The fact that you’re visiting a particular website is personal information
> (see the quote from Yan above). But attackers could have seen that
> information anyway. So does it matter?
>
> I guess if Tor is available, its use should generally be encouraged, unless
> there’s a good reason to not encourage it (and the only case I know of is
> torrents). Therefore, its use should be encouraged here.


You are correct, and using Tor might actually help in case the servers
are breached. If you're trying to submit a ruleset anonymously (for
whatever reason you might have), it could help.
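
For what it's worth, here's a rough Python sketch of what such an
anonymous submission could look like. The endpoint URL is entirely made
up, and it assumes a local Tor client listening on the default SOCKS
port (9050):

import requests

# socks5h:// makes DNS resolution happen through Tor as well,
# so your resolver doesn't leak which host you're contacting.
# Requires: pip install requests[socks]
TOR_PROXY = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

def submit_ruleset(ruleset_xml):
    """POST a ruleset through Tor; returns the HTTP status code."""
    resp = requests.post(
        "https://rulesets.example.org/submit",  # placeholder URL
        data={"ruleset": ruleset_xml},
        proxies=TOR_PROXY,
        timeout=60,  # Tor circuits can be slow
    )
    return resp.status_code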



>
>
>>  If you read my previous posts, I mentioned that one of the requirements
>> of an automatic submission system would be that the system should be
>> write-only.
>> It would be nice to have public access, but (quoting myself)
>>
>>> It should be "write only", of course, meaning that everybody can write to
>>> it but only a few chosen ruleset reviewers can read from it (otherwise,
>>> you'll find your repository being used as a spam distribution point / file
>>> storage website in a couple hours or less).
>>>
>>
>>  How would you propose we avoid a read-write system being used as a "spam
>> distribution or file storage"?
>>
>
> I guess you can’t guarantee that your repository won’t be used for spam
> distribution or file storage, but hopefully, if you force the submitted
> data to be in the form of a ruleset, then it won’t be worth the effort to
> try to stuff other things in there.
>

Well, probably not for file storage purposes (a filesize limit, perhaps?),
but it could be used for spam pretty easily: just embed something in the
ruleset in a way that respects the ruleset specs and you're done.
Not trivial, but not that difficult either.
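
To make the "force it to be a ruleset" idea concrete, here's a rough
sketch of the kind of server-side check I have in mind. The tag names
follow the HTTPS-Everywhere ruleset format; the size limit and
everything else are just assumptions:

import xml.etree.ElementTree as ET

MAX_SIZE = 16 * 1024  # assumed cap; real ruleset files are tiny
ALLOWED_TAGS = {"target", "rule", "securecookie", "exclusion", "test"}

def looks_like_ruleset(submission):
    """Reject anything that isn't shaped like a ruleset file."""
    if len(submission) > MAX_SIZE:
        return False
    try:
        root = ET.fromstring(submission)
    except ET.ParseError:
        return False
    if root.tag != "ruleset" or "name" not in root.attrib:
        return False
    # Every child element must be one of the known ruleset tags.
    return all(child.tag in ALLOWED_TAGS for child in root)

A check like this makes file storage impractical, but it doesn't stop
the attack above: a spam URL can still sit in a perfectly valid "to"
attribute.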


>
> But if that’s not good enough, can we expand the read access to, say,
> anyone who’s ever made a legitimate submission?
>

That's, IMHO, a good way to do it. File storage is "easily" avoided, and
this would dramatically reduce the risk of spam, because only those who
have access to the list could read a spam message, so there would be no
point in spamming there.


> Finally, I don’t understand what the difference is between this repository
> and the mailing list archives. The mailing list archives are readable by
> everyone, writable by everyone, and can store arbitrary data, yet we don’t
> seem to have a spam problem (maybe there’s a spam protection system in
> place and the admin would disagree with me, but I’m not in a position to
> know that). I think the same is true for the bug tracker.
>
>
I believe there is; it's probably SpamAssassin (just guessing). Also,
remember that you can only send an email to the list if you are subscribed
to it. If you're not subscribed, your message doesn't go through unless
it's approved.


> To try to address the fingerprinting concerns, we could try to check the
>> rulesets against what others see. We could do, for rulesets, what people
>> already do for certificates:
>> - centralised approach: SSL Observatory
>>
>
>>  Cool, but it needs maintenance, and as Yan said, the EFF tech staff is
>> already overloaded.
>>
>
> I’m not expecting the EFF to implement this (at least for now), but
> hopefully someone else out there has the time and the knowledge to do it.
>

The problem here is privacy: how could we guarantee that a third party
respects the EFF's rules on privacy matters? It's a really complicated
topic...


>
>>  - distributed approach: Perspectives [1] – this should have a custom
>>> notary list, of course
>>
>>
>> Also really cool, but rules change faster than SSL certificates, so I'm
>> not really sure how this approach could be implemented effectively.
>> Remember that we are also focusing on speed, so the overhead caused by
>> checking rules against notaries every time you open a website might be a
>> little too much, and it may prove to be far less useful than checking SSL
>> certificates.
>>
>
> I agree, checking every time you access a website makes sense for
> certificates but not for rulesets. I was thinking of doing it every time
> the rulesets are updated, which I’m guessing would be once a day, but I’m
> not familiar with update mechanisms like this.
>
>
For 10,000+ rules it might be difficult. Even if you enable it only for
newly added rules, first-time HTTPS-E users would have to wait a really
long time to get all the rulesets checked the first time, and that might
discourage them from using the extension.

Maybe we could find a way to check hashes instead? Still complicated and
fairly expensive in terms of computational power, but a little better.
Benchmarks could be really useful, or we might leave it disabled by default
and add a warning that says "if you enable this, it's probably going to
kill your internet speed until it's finished, which depending on your ISP
might take anything between 1 hour and 1 decade" :P
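
To sketch what I mean by checking hashes (the notary side here is
imaginary, only the hashing part is real): compute one digest per
ruleset file, fetch the notaries' digests, and only run the expensive
per-rule comparison where the two disagree:

import hashlib

def ruleset_digest(ruleset_xml):
    """Stable fingerprint for one ruleset file (bytes in, hex out)."""
    return hashlib.sha256(ruleset_xml).hexdigest()

def rulesets_to_verify(local, notary):
    """local and notary map ruleset name -> hex digest.

    Only the mismatches need the full, expensive check against
    the notaries."""
    return [name for name, digest in local.items()
            if notary.get(name) != digest]

Hashing ~10,000 small XML files should take well under a second on a
modern machine, so the real cost would be the network round trips, and
those could presumably be batched into a single request per update.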

Cheers,

Claudio

