Sysadmin reminder: 11 October is the #DNS root key rollover. Brace yourself and check *today* that the DNS resolver you manage knows both keys, 19036 (the old) and 20326 (the new) https://www.icann.org/en/system/files/files/ksk-rollover-expect-22aug18-en.pdf #DNSSEC
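One way to check: `dig . DNSKEY +multi` against your resolver annotates each root key with its key id, which should include 20326. The key tag itself is just the RFC 4034 Appendix B checksum over the DNSKEY RDATA; a minimal sketch (the sample RDATA bytes below are synthetic, not the real root keys):

```python
def key_tag(rdata: bytes) -> int:
    """RFC 4034 Appendix B key tag over DNSKEY RDATA (non-algorithm-1 case)."""
    acc = 0
    for i, b in enumerate(rdata):
        # even-indexed bytes weigh the high octet, odd-indexed the low octet
        acc += (b << 8) if i % 2 == 0 else b
    acc += (acc >> 16) & 0xFFFF  # fold in the carry
    return acc & 0xFFFF

key_tag(bytes([0, 1, 0, 2]))  # → 3 (toy RDATA, just to show the checksum)
```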
@Shaft @bortzmeyer Yes, there are dissenting voices (Google, VeriSign, etc.), so it may be postponed. Decision by the board in September. http://domainincite.com/23353-icann-faces-critical-choice-as-security-experts-warn-against-key-rollover
@szbalint Bad policy. Fire the sysadmin and the IT manager.
People always talk about security but when it comes to practical solutions, nobody wants to make an effort :-(
@bortzmeyer I am the security officer, and after careful evaluation (reading the standards, knowing the history) we decided it actually increases risk, not decreases it.
We're going to roll out DNS over HTTPS when it's a bit more mature.
@szbalint How does it increase the risk? (I know some people claim it is useless but "increases the risk" is new to me).
@bortzmeyer It trades confidentiality for integrity; it is a massive increase in complexity with no obvious benefit (none that anyone has actually backed up with a threat model); and were people to turn on validation, it would cause disruption and downtime because it's fragile. It's also 90s crypto. See also: https://sockpuppet.org/blog/2015/01/15/against-dnssec/
(I won't comment on https://sockpuppet.org/blog/2015/01/15/against-dnssec/ which is full of bullshit and displays a serious ignorance of #DNSSEC, especially on the crypto side.)
@bortzmeyer I was talking about confidentiality.
That post was written by Thomas Ptacek, who's been reviewing cryptography systems and standards since like 1995. There are very few people in the world more qualified than him to comment on crypto design.
@szbalint @bortzmeyer In technical matters, the argument from authority is the weakest. No one is free from making errors, not even geniuses: Einstein introduced the cosmological constant just because it felt right to him, and then retracted it when observations proved him wrong (and it is still debated now...). Also, everyone has their own agenda to push. While you can trust some people more than others based on various past signals, you should never trust anyone 100%, and always mix viewpoints.
cryptographic hair splitting
90s crypto matters because we really didn't know how to build reliable / secure protocols on top of cryptographic primitives in the decade the internet went mainstream.
There were a lot of things that cryptographers only figured out after the mid-2000s in protocol design.
(People are trying to transition towards PQC btw, but coming up with workable post-quantum primitives is hard. See for example: https://docbox.etsi.org/workshop/2014/201410_CRYPTO/S07_Systems_and_Attacks/S07_Groves_Annex.pdf)
Note that a double-digit percentage of resolvers can only work with RSA atm.
(for comparison, the TLS 1.3 wire format for the unencrypted parts had to stay because 0.x% of clients couldn't handle a change)
If using ECDSA today is practical, it's only because no one actually relies on DNSSEC for anything. It's a half-assed standard looking for a problem.
@szbalint @bortzmeyer So as I said at the beginning, the problem is not with the technology, but with its implementations. Same problem as IPv6. Same for TLS 1.3 today, which in fact had to retain a lot of TLS 1.2, like advertising itself as 1.2 in the ClientHello and moving true version negotiation into a new extension...
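That TLS 1.3 trick is concrete: the ClientHello keeps legacy_version = 0x0303 ("TLS 1.2") on the wire, and the real negotiation moves into the supported_versions extension (type 43, RFC 8446). A sketch of the extension's wire bytes:

```python
import struct

LEGACY_VERSION = 0x0303  # the ClientHello still claims "TLS 1.2" here

def supported_versions_ext(versions):
    """Build the ClientHello supported_versions extension (type 43, RFC 8446)."""
    # body = 1-byte list length, then 2 bytes per offered version
    body = bytes([2 * len(versions)]) + b"".join(struct.pack("!H", v) for v in versions)
    # extension = 2-byte type, 2-byte length, body
    return struct.pack("!HH", 43, len(body)) + body

# Really offering TLS 1.3 (0x0304), with 1.2 (0x0303) as fallback:
supported_versions_ext([0x0304, 0x0303])  # → b'\x00+\x00\x05\x04\x03\x04\x03\x03'
```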
@szbalint @bortzmeyer Different goals. Read https://blog.apnic.net/2018/08/17/sunrise-dns-over-tls-sunset-dnssec/ which in summary says the same as you, and the rebuttal at http://www.circleid.com/posts/20180820_dnssec_and_dns_over_tls/
I almost linked to the first article but didn't. The first article is way too conciliatory towards DNSSEC.
People keep mentioning 1500+ CAs, but in reality it's a solved problem. CAA is mandatory per the CA/B Baseline Requirements. CT is mandatory. Anyone breaking the rules gets distrusted fast, even the largest CAs; Symantec learned that the hard way.
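For scale: a CAA record is a tiny thing. Its RDATA is just a flags byte, a tag, and a value (RFC 6844); a minimal sketch of the wire format, with an illustrative Let's Encrypt issue policy:

```python
def caa_rdata(flags: int, tag: str, value: str) -> bytes:
    """CAA RDATA wire format (RFC 6844): flags, tag length, tag, value."""
    t = tag.encode("ascii")
    return bytes([flags, len(t)]) + t + value.encode("ascii")

# "example.com. CAA 0 issue letsencrypt.org" as RDATA bytes:
caa_rdata(0, "issue", "letsencrypt.org")  # → b'\x00\x05issueletsencrypt.org'
```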
This is more than good enough for most of the world.
DNSSEC (especially DANE) is horrible and not needed.
DNSSEC claims to be end-to-end but it doesn't protect last-mile consumers; validation typically stops at the CPE or somewhere on the ISP edge. It's not end-to-end.
What do you need for something to be end-to-end? Confidentiality. There are too many middleboxes and noncompliant servers to make it work otherwise. This was the lesson with HTTP/2 as well; ethical reasons aside, that's why TLS is effectively mandatory there. Or why TLS 1.3 looks like 1.2 in its headers.
Spoofing DNS is not really a huge problem by itself, it's a larger issue if someone is in a MITM position on a network.
DNSSEC only protects the host->ip mapping. By itself it is insufficient to protect against MITM attacks. It's trying to fix a trust problem at the network level, where experience has shown that the application has to be involved as well somehow.
@szbalint @bortzmeyer Spoofing DNS is not just for MITM; it can simply be a form of DoS. And DNS does not just handle the host->ip mapping, that is an often-repeated shortcut. Applications do have to be involved, which is why newer DNS APIs like getdns give the application more control over resolution and DNSSEC validation. getaddrinfo was never written with the mindset of validating records.
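A minimal illustration of what getaddrinfo cannot express: a validating resolver signals successful validation via the AD bit in the DNS response header (RFC 4035), which an application can only see if it looks at the raw message. A sketch using only the stdlib, with a synthetic response header:

```python
import struct

AD = 0x0020  # "Authenticated Data" bit in the DNS header flags (RFC 4035)

def resolver_validated(response: bytes) -> bool:
    """True if the (trusted!) resolver set the AD bit on this raw DNS response."""
    (flags,) = struct.unpack_from("!H", response, 2)  # flags are bytes 2-3 of the header
    return bool(flags & AD)

# Synthetic 12-byte header with QR, RD, RA and AD set (flags 0x81A0):
hdr = struct.pack("!6H", 0x1234, 0x81A0, 1, 1, 0, 0)
resolver_validated(hdr)  # → True
```

Note this only tells you the resolver validated; the AD bit itself travels unprotected, so it is only meaningful over a secure path to a resolver you trust.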
@szbalint @pmevzek "DNSSEC is not E2E"? It can be: nothing in DNSSEC mandates or forbids how you use this information. The resolver at my home validates, and I trust the LAN is secure, so it is E2E for me. (If I were more paranoid at home, I would do validation on every box; DNSSEC allows that.)
DNSSEC requires active cooperation from ISPs and middleboxes.
Encrypted protocols only require cooperation between the applications that communicate (over an untrusted network).
@szbalint @bortzmeyer PGP does not work in practice because it forces people to ask themselves questions about trust. TLS in browsers works because they are bundled with hundreds of CAs to which users give ultimate trust... without ever knowing it. If browsers shipped with empty trust stores and users were expected to grant or deny trust to each CA, you would get the same kind of problems.
@szbalint @bortzmeyer DNSSEC is end-to-end, you just need to run your own resolver that validates answers. You do not need confidentiality for that problem (you need it for privacy reasons), as you will cryptographically verify everything you get back, against a trust anchor that you downloaded out of band.
@szbalint @bortzmeyer If the webserver IP is hijacked, TLS does not protect anything either. The attacker could have acquired a Let's Encrypt certificate just after the BGP hijack, and all browsers would happily connect and show the shiny lock in the address bar, because the CA would be recognized. In the specific MyEtherWallet case, DNSSEC would have protected users, because there is no way to disable it: resolution would simply have failed. With TLS in browsers, you can click through the security warning.
@szbalint @bortzmeyer It was a BGP hijack of the DNS servers, not of the end webserver. Hence, if the domain had had DNSSEC, the rogue nameservers answering at the same IP as the true ones would not have been able to sign anything with a key matching the DS in the parent zone. Any validating resolver would have caught that and returned SERVFAIL, cutting off the domain completely and not allowing any traffic.
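The mechanics of that protection: the DS record in the parent zone is a digest over the child's owner name plus its DNSKEY RDATA (RFC 4509 for the SHA-256 type), so a hijacker's key cannot produce a matching DS. A sketch with synthetic, hypothetical key bytes:

```python
import hashlib

def ds_digest(owner_name_wire: bytes, dnskey_rdata: bytes) -> str:
    """DS digest type 2 (SHA-256, RFC 4509): hash of owner name + DNSKEY RDATA."""
    return hashlib.sha256(owner_name_wire + dnskey_rdata).hexdigest().upper()

owner = b"\x07example\x03com\x00"  # example.com. in DNS wire format
real = ds_digest(owner, b"\x01\x01\x03\x08" + b"legit-public-key")
rogue = ds_digest(owner, b"\x01\x01\x03\x08" + b"hijacker-public-key")
real != rogue  # the parent's DS only matches the legitimate key
```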
@szbalint @bortzmeyer TLS is not just for the web (so the trust/distrust decisions of 3 players do not extend outside browsers, and they still force people into silly rules like showing self-signed certificates as untrustworthy). CAA may be mandatory for the CA to check... but certainly not for users to publish (and I guess the percentage doing so is very low; you also have the chicken-and-egg problem of getting a certificate at the same time as registering the domain).
@szbalint @bortzmeyer Like any non-trivial technology, DNSSEC has its benefits and drawbacks. The assessment of the risks it mitigates versus the new risks it creates will certainly differ between use cases: a personal blog with 1 visit/day is not the same as bank access to personal accounts. 1/x
@szbalint @bortzmeyer However, the problem with DNS over HTTPS is that we thereby declare that middleboxes have won and that the future is ossification, which is both bad and sad. We are stacking two complex protocols that are not a natural fit, and I am sure a ton of vulnerabilities will result from that. DNS over TLS makes at least a little more sense than DNS over HTTPS on the technical level, even if it removes UDP from the equation, with effects that are certainly not all known yet. 2/x
I don't like it either, but middleboxes (or the untrusted network) really did win. Any new protocol or standard requires confidentiality (or cryptographic opaqueness) to have a realistic chance of deployment.
As for stacking protocols: DNS is starting to die from complexity. Baking DNSSEC on top of it, plus more insane stuff like DANE, is imo worse than taking something already well known like TLS, which is happening anyway, and composing.
@szbalint @pmevzek Isn't it a bit strange to blame the #DNS world (#DNSSEC, #DANE, etc.) for its "complexity" when you advocate #TLS, which is also a winner in that respect: 147 pages for the version supposed to be "trimmed", plus CAA, CT and of course PKIX (name constraints are so badly implemented that Chrome had to do it by hand; see the code limiting the French CA to .fr).
@szbalint @bortzmeyer Cryptographic opaqueness will bite us later, as it will only open the door(!) to more hidden backdoors and heavy filtering with wide negative consequences. Also, when absolutely all protocols are funnelled inside HTTP/2 over TLS, power will be concentrated even more into a handful of players (like people today complaining about email being in the hands of a handful of players dictating the rules).
@szbalint @bortzmeyer Beyond the technical problems, we are also about to create DNS segregationism: it will be (it already is) led by website hosting providers (and hence by web browsers; the latest move by Mozilla in that regard is a very bad signal), because with HTTP/2 you will be able to mix HTTP and DNS traffic in a single HTTP session, which opens the door to all kinds of "niceties" like pushing DNS records to you even before you ask. 3/x
browsers and dns
@szbalint @bortzmeyer We will see more and more islands of DNS service targeting only the users of the associated hosting services, which will create debugging nightmares. Also, DNS over HTTPS (or even TLS) for now only solves access to the resolver; it has no impact on how to reach the authoritative nameservers. 4/x
@szbalint @bortzmeyer Some people seem to dislike DNSSEC also because it mandates a trust anchor, which ultimately depends on ICANN. Some have been burnt by the state of PKIX in the web world and hence reject centralized trust, and some (the same or others) also dislike ICANN as an organization. However, DNS over TLS/HTTPS still needs certificates (in general) and hence trust in something. You have DANE of course to bypass and secure that, but how do you secure DANE without DNSSEC? 5/5