It is likely that plain http access to this website will be blocked before long, with https becoming the preferred system, so doing nothing may no longer be a good option.

https also lets the traffic between here and readers be obfuscated, so that intermediaries will find it difficult to read.

We have a big problem in that https needs an introducer to get the remote end to trust our keys. The common system is a certificate authority that user software trusts. I am not too fond of making a guarantee of the legal sort to keep keys secret, though it is not ruled out and I may go for this. Recently the user agreement of the free CA has softened, now only asking for "reasonable" measures.

Updated: this system is now set to regenerate keypairs every Saturday, refresh the TLSA records, generate fallback stapled self-signed certificates and then try to replace them with letsencrypt ones, all automatically.

Updated again in 2020: cert regeneration is now handled by certbot, after "internal error" style failures provoked by a major incident at letsencrypt.

This needed a change to how the TLSA records get handled, and I thought I had caught all the records; soon after, I got chased to fix the TLSA records on smtp. It is fixed, for now.

Something wrong with the trust model?

https has the idea of certificate authorities, and sets of these are distributed with web browsers and operating systems. This has the effect of making the browser supplier into a hyper certificate authority, deciding for its users exactly what they should trust, in a top-down model.

This vetting is supposedly more rigorous than that run by such CAs, which merely check that the holder of a public key controls a specific domain or, maybe, try to discover whether it is a specific organisation.

Installing itself as a master CA does put the browser vendor in effective control of users' reading lists, possibly ready for discreet seizure by its host country.

How I might like it to be.

Saying that a public key belongs to a domain name invites a technical resolution. DNSSEC with TLSA provides this chain. There are then no new contracts to sign beyond the one we already have with the domain registry.
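As a concrete illustration, a "3 1 1" TLSA record (usage 3 DANE-EE, selector 1 public key, matching type 1 SHA-256) can be computed straight from the certificate; the throwaway self-signed cert below just keeps the sketch self-contained, and the file and domain names are illustrative:

```shell
# Throwaway self-signed certificate, only so the example runs standalone.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=my.example" -keyout key.pem -out cert.pem 2>/dev/null
# SHA-256 over the DER-encoded public key pulled out of the certificate.
HASH=$(openssl x509 -in cert.pem -pubkey -noout \
    | openssl pkey -pubin -outform DER 2>/dev/null \
    | openssl dgst -sha256 | sed 's/.* //')
echo "_443._tcp.my.example. IN TLSA 3 1 1 ${HASH}"
```

The resulting line goes into the zone alongside the address records, where DNSSEC then signs it.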

Traditional CAs may try to fight this, as it will no longer be necessary to sign keys on the basis of domain control, and it looks like they are out of business. They are not, because there remains much use for trust chains independent of the domain system.

The preferable resolution may be a web of trust, where the server's public key gets signed many times, in different certificates, by various trust sources.

If I look at an https connection for a low-risk transaction such as casual web browsing, then the DNSSEC-asserted TLSA alone may be satisfactory.

If it is more important, then the website's certificate should be countersigned in x509 by the regulators of the activity it carries out, so I can tell whether it has consumer protection. The browser would show the national flag of the country of regulation, and a small logo of the regulated activity, in the trust padlock.

Multiple regulated activities or multiple valid countries are less likely, though they had better be handled.

If it is even more important, say it is my friend's website, then an exchange of trust roots via business card can be considered.

DNSSEC distrust issues

Let us go through some of the argumentation regarding DNSSEC.

Weak keys at the root

It is asserted that 1024 bits is too weak and that 2048 or even 4096 bits is wanted.

This issue, if it even is one, can get fixed over time with longer keys.

The distrustful might like to manually collect the KSKs of top-level domains where these are stronger, and trust those directly; maybe a registry could send them out as a QR code on a postcard, or display them on its building.

Also, a big internet company with its own registered autonomous system, exchanging directly with the top-level domain operator, might assess record accuracy directly.


Amplification attacks

This has to do with DNSSEC replies being much bigger than the requests that elicit them, which makes resolvers attractive as traffic amplifiers. A generic resolution would be to pad requests to the expected response size.

Can you hide records?

DNSSEC appears to allow private subdomains: if users want, only the DS records from the delegation point KSK(s) are shared.

The private part of the KSK(s) is then used to sign the private zone ZSK(s), or duplicates of them.

Then information about signed records, and the replacement KSK at key rotation, is regularly propagated into the private system to "prove" the delegation to DNS users in there.

The result is that even if a KSK is successfully attacked to calculate the private value, anything signed with it (like the ZSKs) within a private network stays there, so implementing DNSSEC would likely not reveal additional information outside.

These machines might even be completely disconnected: the signer is just given the KSK private part, the public part generates the DS which goes in the upstream zone, and only when rolling over the KSK does the machine get updates; it never needs to send data outside.

The KSK remains accessible, with its private half given to the hidden server just to sign the internal ZSKs.

This seems to be asked in the context of enterprises who want confidential branches of the DNS.

Even apart from that, there is NSEC3, which hashes record names so that a zone cannot be trivially walked.


On the other side, PKIX, such as via letsencrypt, does not allow hidden domains, as the names now all end up in a massive, searchable, distributed database called Certificate Transparency.

PKIX and DNSSEC are not exclusive of each other; both can get used together. TLSA records can be of the plain public key flavour, and they still work when the public key goes in a certificate.
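One can check that equivalence directly: the selector-1 ("public key") digest is identical whether taken from the bare key or from a certificate wrapping it. A sketch with throwaway names:

```shell
# Generate a throwaway key and a self-signed certificate around it.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=my.example" -keyout key.pem -out cert.pem 2>/dev/null
# SHA-256 over the DER public key, straight from the key file...
FROM_KEY=$(openssl pkey -in key.pem -pubout -outform DER 2>/dev/null \
    | openssl dgst -sha256 | sed 's/.* //')
# ...and the same digest extracted from the certificate.
FROM_CERT=$(openssl x509 -in cert.pem -pubkey -noout \
    | openssl pkey -pubin -outform DER 2>/dev/null \
    | openssl dgst -sha256 | sed 's/.* //')
echo "from key:  ${FROM_KEY}"
echo "from cert: ${FROM_CERT}"
```

The two hex strings match, which is why a plain-public-key TLSA record keeps validating after the same key turns up wrapped in a letsencrypt certificate.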

Although CT appears to have been originally conceived as a CA discipline system, to deter issuance of certificates for purposes such as interception gateways, it is actually really useful in abolishing the centralisation issue…


The one remaining "issue" is with Internet name and number "assignment" in general, rather than with DNSSEC specifically, and DNSSEC has actually helped stimulate ideas around it:

  1. Public key network sockets, meaning the whole public key of an entity is used as the actual network address for contacting that entity.
  2. Use petnames as key aliases rather than a DNS name that is transient.
  3. While we await good decentralised F2F networks, look up the public key in Certificate Transparency to get the remote entity's current DNS name; if they get their name taken away, they sign a new one with the same key, or sign the new key with a chain starting from the old one.
  4. Wiki-style disambiguation to resolve petname clashes.

System Implementation

So many algorithms to choose from

We can dump the list of algorithms presented by ldns-keygen -k -a list:

Label                Index  Commentary
RSAMD5               001    MD5 is very weak against attacks.
RSASHA1              005    SHA-1 is still quite weak.
RSASHA1-NSEC3-SHA1   007    The same as RSASHA1, with a hint that NSEC3 is in use.
RSASHA256            008    SHA-256; NSEC3 is probably assumed from here onwards.
RSASHA512            010    SHA-512, stronger still.
ECC-GOST             012    The first elliptic curve option.
ECDSAP256SHA256      013    Elliptic curve with a stronger hash.
ECDSAP384SHA384      014    Elliptic curve with a still stronger hash.

Rebuild a zone

Herein follow the nearly-actual scripts used to rebuild my certs automatically on Certificate Saturday. It is essential to script it. The first thing is a script to sign an individual zone.

I am using nsd and unbound on Debian; some nsd configuration is needed to tell it to read the .zone.signed files instead.
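For instance, a per-zone fragment of nsd.conf along these lines would do it (the zone name and path here are illustrative, not my exact layout):

```
zone:
	name: "my.example"
	zonefile: "/etc/nsd/zones/my.example.zone.signed"
```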

  #!/bin/bash
  shopt -s nullglob
  Z=$1
  DOMAIN=${Z:0:$(( ${#Z} - 5 ))}
  ZSK=0
  KSK=0
  # count existing keys by their flags field: 256 = ZSK, 257 = KSK
  for N in K${DOMAIN}.+???+?????.key
  do
      F="$(<$N cut -f4 | cut -d" " -f1)"
      if test "${F}" = "256"
      then
          ZSK=$(( $ZSK + 1 ))
      fi
      if test "${F}" = "257"
      then
          KSK=$(( $KSK + 1 ))
      fi
  done
  if test "${ZSK}" -eq 0
  then
      echo creating zsk
      ldns-keygen -a RSASHA1-NSEC3-SHA1 -b 2048 ${DOMAIN}
      #ldns-keygen -a ECDSAP384SHA384 -b 2048 ${DOMAIN}
  fi
  if test "${KSK}" -eq 0
  then
      ldns-keygen -k -a RSASHA1-NSEC3-SHA1 -b 4096 ${DOMAIN}
      #ldns-keygen -k -a ECDSAP384SHA384 -b 4096 ${DOMAIN}
  fi
  KEYS=`for N in K${DOMAIN}.+???+?????.private; do echo ${N:0:$(( ${#N} - 8 ))}; done | sort | uniq`
  # sign with a fresh random NSEC3 salt; this writes $DOMAIN.zone.signed
  ldns-signzone -n -p -s $(hexdump -ve \"%08x\" -n 8 /dev/random) $DOMAIN.zone $KEYS
  ldns-key2ds -n -1 $DOMAIN.zone.signed && ldns-key2ds -n -2 $DOMAIN.zone.signed
It could be run as something like if rebuild my.example.zone; then nsd-control reload; fi (or killall -HUP nsd on versions without nsd-control).
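For the weekly automation, a crontab entry is one hypothetical way to wire it up; the directory and script names here are illustrative, not my exact layout:

```
# Certificate Saturday: re-sign every zone early on Saturday (day-of-week 6)
30 4 * * 6  cd /etc/nsd/zones && for Z in *.zone; do ./rebuild "$Z"; done && nsd-control reload
```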

This is for signing a zone with a ZSK, itself signed by a KSK.

If the keys need changing because they are stale or did not exist before, they are regenerated, though if the KSK is updated the new key must be added to the upstream zone in the form of DS records for the delegation to remain valid.

To aid key rotation, the delegation is treated as valid if any DS record is correct, so the DS records for disused, and especially for weak or compromised, keys should be removed quickly.

Since this step has to be rerun every time there are zonefile changes, it is useful to keep it in a separate file; NSD is then told to reload the zone to make it available.

Templating OpenSSL

I found it useful to make a separate .cnf file for each certificate being maintained, here e.g. _5269._tcp.xmpp.my.example.cnf

  oid_section = new_oids

  [ new_oids ]
  dnssecEmbeddedChain =
  xmppAddr =

  [ req ]
  default_bits = 4096
  distinguished_name = req_distinguished_name
  attributes = req_attributes
  x509_extensions = example_extensions
  req_extensions = example_extensions
  utf8 = yes

  [ req_distinguished_name ]
  commonName =
  commonName_default = "my.example"

  [ req_attributes ]

  [ example_extensions ]
  #dnssecEmbeddedChain = ASN1:FORMAT:HEX,OCT:${ENV::CHAIN}
  dnssecEmbeddedChain = DER:${ENV::CHAIN}
  basicConstraints = CA:FALSE
  #keyUsage = digitalSignature,keyEncipherment
  #extendedKeyUsage = serverAuth
  subjectKeyIdentifier = hash
  subjectAltName = @example_alt_names

  [ example_alt_names ]
  DNS.0 = my.example
  DNS.1 = chat.my.example
  DNS.2 = xmpp.my.example
  otherName.0 = xmppAddr;UTF8:my.example
  otherName.1 = xmppAddr;UTF8:chat.my.example
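To see the templating in action end to end, here is a cut-down, self-contained sketch. The OID assigned to dnssecEmbeddedChain below is made up purely for illustration (xmppAddr's 1.3.6.1.5.5.7.8.5 is the real id-on-xmppAddr), and CHAIN is fed a placeholder DER NULL rather than a real serialized chain:

```shell
# Write a simplified version of the template; '<<EOF' is quoted so the
# shell leaves ${ENV::CHAIN} for openssl to expand.
cat > demo.cnf <<'EOF'
oid_section = new_oids
[ new_oids ]
dnssecEmbeddedChain = 1.3.6.1.4.1.99999.1
xmppAddr = 1.3.6.1.5.5.7.8.5
[ req ]
distinguished_name = req_distinguished_name
x509_extensions = example_extensions
[ req_distinguished_name ]
[ example_extensions ]
dnssecEmbeddedChain = DER:${ENV::CHAIN}
basicConstraints = CA:FALSE
subjectAltName = @example_alt_names
[ example_alt_names ]
DNS.0 = my.example
DNS.1 = chat.my.example
otherName.0 = xmppAddr;UTF8:my.example
EOF
export CHAIN="05:00"   # a DER NULL standing in for the embedded chain
openssl req -new -x509 -days 1 -newkey rsa:2048 -nodes \
    -subj "/CN=my.example" -config demo.cnf \
    -keyout demo.key -out fallback.crt 2>/dev/null
openssl x509 -in fallback.crt -noout -text | grep "DNS:"
```

A real run would set CHAIN from whatever serializes the RRSIG/DNSKEY/DS chain, and keep the full section list from the template above.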