I think it's really interesting to follow Dave Winer (inventor of RSS) on Twitter - he's very concerned about the current push towards HTTPS.
He's afraid raising the security bar will make the web less open and less accessible. And he's right; adding technical requirements favours the entrenched big players with big budgets.
Dave also fears for the historic web, in the (unlikely?) event that browser vendors actually deprecate HTTP.
I don't agree with everything he says, but the POV has value.
@HerraBRE there's not much pushing in the other direction imo. just Building A Thing is 20x more complicated than it used to be
@aeonofdiscord Yeah.
I'm really conflicted about this, because part of the reason things were easier before is because they were ridiculously insecure - that's not OK in engineering or food or transport... and it shouldn't be in IT either.
But openness and low barriers to entry have great value too, I don't want us to lose those.
@HerraBRE The pressure (as of right now) for HTTPS seems less severe than the pressure to centralize e-mail, but it's still there.
@HerraBRE
I don't think so. Crypto will become streamlined and it will become part of the APIs.
The real problem IMO was governments' resistance to encryption, which only made it so difficult to implement.
E-mail is a very different problem: it assumed there were no hostile agents in the network. It was based on trust. The protocol is completely broken: anyone can put trash in your inbox FOR FREE. Change the protocol so that sending costs money, even a cent, and you'll fix spam forever.
@HerraBRE
As for the web, I see a different trend: people are abandoning adware in favor of community-driven platforms (just look where you're posting). And with things like Hubzilla and, coming up, Plume, we'll have decentralized websites, blogs, zines, all supported by the community.
The Facebook backlash showed us a new future: a decentralized, even peer to peer web. It's no longer webs spun by giant spiders to catch all; it's tunnels built by ants to communicate. This is just the beginning.
@HerraBRE the daft thing is that servers going down is something that's actually allowed for in email. (If the host you're sending to is down, your SMTP server will usually retry over the course of a few days before it finally gives up)
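(That retry behaviour is configurable on most MTAs. A sketch of the relevant Postfix knobs; the values shown are, as far as I know, the stock defaults, but verify against your version's docs:)

```ini
# main.cf - how long Postfix keeps retrying deferred mail before bouncing
maximal_queue_lifetime = 5d      # keep retrying for 5 days, then give up
minimal_backoff_time = 300s      # wait at least 5 minutes between attempts
maximal_backoff_time = 4000s     # back off to roughly an hour between attempts
```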
"Availability" only became an issue when people started using browser-based email instead of local email client software.
@HerraBRE I haven't acted as a closed relay for years -- but I did operate that way for years before. It's also the huge spam load as well, of course.
Heck, even with jumping through all the hoops I can barely get my cronjobs and logs off without running afoul of mr google. He'll suddenly start bouncing them for a few days and then silently go back to accepting my mails. Why??? wtf knows.
Has anyone mentioned Let's Encrypt?
I haven't had much trouble setting it up, and it runs unattended.
It's still hard to protect against DoS attacks, but I've coasted by doing nothing since my site isn't popular enough to take down.
I do fear for email, and I wonder why there isn't something like fediverse for email, but written in something easy like Go or Rust, where a noob admin can set up a sane server in 10 minutes and start accepting email.
Spam may always be hard.
@golf_oil @gemlog The sites where I've recently experienced SSL related downtime were almost all using Let's Encrypt certs.
Add a moving part, you've added another thing that can break. If that part is authenticated crypto, that means your site is down.
Again, I'm not saying people shouldn't do it. But in my experience it DOES make things less reliable.
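One way to take the sting out of that failure mode is to monitor expiry separately from the renewal automation. A minimal sketch in Python; the function name and the example dates are mine, and the input is assumed to be the string `openssl x509 -noout -enddate` prints:

```python
from datetime import datetime, timezone

def days_until_expiry(not_after, now=None):
    """Days until a cert expires, given the date string printed by
    `openssl x509 -noout -enddate` (e.g. "Jun  1 12:00:00 2025 GMT")."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - (now or datetime.now(timezone.utc))).days

# Let's Encrypt certs live 90 days; alert while there's still margin.
remaining = days_until_expiry("Jun 1 12:00:00 2025 GMT",
                              now=datetime(2025, 5, 1, tzinfo=timezone.utc))
print(remaining)  # → 31
```

Wire the result into whatever alerting you already have, so a silently failed renewal gets noticed before the cert actually lapses.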
Re: e-mail, see https://mailinabox.email/. The reason it fails is the Internet rejecting mail as spam. No simple server kit can avoid that.
@maiyannah @xrevan86 Hah, me too. But I'm pretty used to my mail not getting delivered...
@HerraBRE yeah - i've advocated https-all-the-time kind of reflexively for ages, but there's merit to at least some of the qualms that people have.
a _lot_ of the merit arises from the simple fact that https is a garbage design in many important ways and _absolutely terrible_ to work with for just about everyone, from novices with a static blog to full time pros with complex infrastructure.
(and yes, this remains true in the era of letsencrypt.)
@HerraBRE The web is getting a lot more complex as a long-running trend; universal https is one of the smallest parts of that complexity and one of the most defensible imo.
Big players add a lot of non-security related requirements.
HTTPS is actually one of those things that is getting slightly less complex over time, with tooling support increasing for the basic usecase.
Correction: I need to remember not to just credit people with inventing stuff. Dave was one of the pioneers of RSS, not THE inventor. My bad, sorry.
This Chrome security announcement is one I have mixed feelings about:
https://blog.chromium.org/2018/05/evolving-chromes-security-indicators.html
On the one hand, this is the right thing for security and protecting the average web surfer's privacy.
On the other, I can totally see why people are concerned about the future of the legacy clear-text web of HTTP URLs and pages (see thread).
@HerraBRE Yeah, so when my 'secure' blog shows the http inline images, users can get a scary and alarming warning about a page with insecure elements.
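(For that specific mixed-content case there's a one-line mitigation: ask browsers to upgrade the http:// subresources themselves. It only helps if the image host actually answers on HTTPS; a sketch:)

```html
<!-- In the page <head>: browsers that support CSP will fetch http://
     images/scripts over https:// instead of showing mixed content. -->
<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">
```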
@HerraBRE My web hosting doesn't reasonably support cheap SSL. Thankfully I have little reason to deploy it, since I don't give a half a crud how Google ranks my websites.
@HerraBRE MITMing most web traffic is pretty uninteresting, encrypting it is good, sure, but the cost is high, especially since you're now dependent on a centralized list of CAs.
Whoever thought requiring a CA for encrypting traffic was cool should be publicly shamed for all eternity.
@ocdtrekkie You're missing a threat which is actually common in the wild: MITMing to inject crapware.
ISPs do this, this isn't hypothetical.
Also, if you believe people should be able to surf anonymously and want Tor users to have access, consider that it's super easy to spin up a malicious exit node that corrupts traffic.
Securing your sites with TLS protects your visitors from that sort of thing, which makes it worth doing almost no matter what sort of content you provide.
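If you do go this route, an HSTS header stops an on-path attacker from quietly downgrading returning visitors back to plain HTTP. A minimal nginx sketch, for HTTPS responses only:

```nginx
# Browsers remember this and refuse plain-HTTP for the host for a year.
# Start with a short max-age while testing; it's hard to undo once cached.
add_header Strict-Transport-Security "max-age=31536000" always;
```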
@HerraBRE I try to avoid ISPs that inject junk. Arguably, if the ability to inject junk is part of your agreement with them (and one would hope it is factored into the price/value equation), they should be able to in nonsecure contexts.
I'd be happy to jump on the encryption bandwagon, that being said, if CAs weren't involved. They've been proven untrustworthy over and over again. The fact that we have people trying to push a *mandate* that we deal with them is borderline insane.
@HerraBRE (Re: ISPs that inject junk, NetZero was an amazing thing to exist back in the day.)
@ocdtrekkie Again, it's not about you. It's about your users.
People don't know about these terms and they don't know the implications. And they may have no choice, not all areas have competing ISPs.
Anyway, such EULAs are problematic for a bajillion reasons, I'm surprised you'd use them as justification for anything!
You can shrug and say their ISP is not your problem. I tend to err on the side of saying we have a duty of care towards our users, but people can disagree with me on that.
@HerraBRE @ocdtrekkie Also, people on such ISPs (including many national mobile providers, I understand) maybe won't be sophisticated enough to distinguish between what's actually on your site and what the ISP has injected. They'll just see that your site has silly ads or whatever.
I can confirm, my ISP is known for browser hijacking
@ocdtrekkie @HerraBRE you should change your webhosting provider. in times of letsencrypt it's more than bad to not support "easy ssl".
@ninjafoss @HerraBRE You'd be surprised how many web hosts don't. Shared hosting is pretty darn common.
In my case, I haven't found a suitable replacement: I won't buy either domains or hosting from a company that doesn't have 24/7 US-based phone support, which is a rapidly decreasing commodity these days.
@ocdtrekkie @ninjafoss @HerraBRE Shared hosting doesn't preclude SSL.
@edavies @ninjafoss @HerraBRE No, but since you don't have root or often even shell access on shared hosting, you're at the whims of your host's offerings.
You can't run something like a Let's Encrypt certbot, for instance, unless the host sets it up for you.
@ocdtrekkie @ninjafoss @HerraBRE Yes, need to pick your host carefully, which can indeed be difficult.
Had my domain registered with my current hosting provider while my hosting was elsewhere, until the previous hosting provider became untenable, at which point the current one was really the only acceptable option - annoying, as I'd prefer to keep the domain and hosting separate.
But the point: SSL hosting isn't intrinsically difficult or expensive, just a matter of what the market provides.
@edavies @ocdtrekkie @ninjafoss I wouldn't be at all surprised if a bunch of hosting providers were still pretending Let's Encrypt doesn't exist and using SSL as a differentiator for "premium" hosting plans.
That'll slow down adoption at the lower end of the hosting market.
@HerraBRE @edavies @ninjafoss I think in my host's case, they just provide a fairly dated version of cPanel/WHM which isn't yet EOL. I believe newer cPanels support Let's Encrypt out of the box, so presumably once they have to upgrade, it'll support it.
@edavies @ninjafoss @HerraBRE Keeping domain and web hosting separate is an absolute must. Same for email service as well. Three different companies control this aspect of my online presence, which makes losing any one of them fairly recoverable.
@HerraBRE He's been for decentralisation for possibly decades now. However, he remains a liberal capitalist and he's exclusively using Twitter and Medium.
He's right about unnecessary complexity. But you can't make progress without improving the web on things such as #ActivityPub. The discussion shouldn't be whether we want a simpler web (which we do) but whether we want a more decentralised web.
@HerraBRE Making the web more decentralised involves complex work, oversimplifying everything is not a solution. That's how improving things works. We just need to decide which parts of the web we want to leave behind meanwhile we build what's new. We're changing wheels whilst driving on the driveway.
Part of the web some of us want to leave behind is the web that enables the existence of Facebook, Google, Twitter.
@HerraBRE Arming ourselves with true substitutes for those is going to take a lot of work, and adding new technology that is not currently available for everybody, largely because Facebook, Google, and Twitter have suppressed it, or hindered its development.
Twitter used to publish RSS feeds for every stream in the early days.
But they also had something similar to a PubSub API in the early days, and other things we can do so much better than they did.
Oversimplification is never the answer.
@HerraBRE is it that hard though? Most websites are fairly small and don't require a difficult infrastructure which would prevent the use of HTTPS.
It's like one click away on your cPanel (or whatever you have) for most hosting services. And Let's Encrypt works very well.
If the website is big or the infrastructure is huge, well, you already crossed the "small player" line imo...
@alexmercier It doesn't need to be "that hard" to be one little step along a slippery slope.
This particular step is also backwards incompatible. Even if sites upgrade, there's no tool to globally search/replace all the old, insecure http:// links other people have made to your site.
So, either you keep HTTP working so those links keep working, or you break a piece of the historic web.
Some people choose to break the web, and that trend worries others.
@HerraBRE sincere question: if this is a site you still use today and plan to upgrade to HTTPS, why not set up a 301 redirect at the same time? You can't control how people write your address, but you can direct them once they arrive.
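(The redirect itself is only a few lines. An nginx sketch; the server name is a placeholder:)

```nginx
# Answer plain-HTTP requests with a permanent redirect to HTTPS,
# so all the old http:// links out there keep resolving to the same content.
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}
```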
I tend to agree that breaking the old web isn't ideal (nor necessary)
We've actually seen this dynamic before - in e-mail.
Because of spam, the weird technical hoops you had to jump through to run a mail server kept getting more and more insane.
And at the same time, people demanded more and more functionality: fancy web interfaces, high availability.
Fast forward to today: even big universities and governments with dedicated IT departments have given up and pointed their MX records at Gmail.
Is the web headed the same way?