Commit log

When positive wildcard expansion happens, NSEC(3) records are needed
to prove that the expansion was allowed. If the NSEC3 had too many
iterations, we downgrade the answer to insecure status, but we were
also unintentionally dropping the NSEC3 record from the answer.
That was breaking DNSSEC validation of that answer, e.g. when
forwarding to Knot Resolver. The validator needs the NSEC3 record,
either to validate the expansion or to determine that validating it
is too expensive.
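
A minimal sketch of the fixed behavior (the rank enum and names are
illustrative, not the real kresd types):

    #include <stdbool.h>

    enum rank { RANK_BOGUS, RANK_INSECURE, RANK_SECURE };

    /* Downgrade the rank when the NSEC3 is too expensive, but never
     * drop the NSEC3 RRs themselves; a downstream validator needs them
     * to reach the same conclusion. */
    static enum rank wildcard_proof_rank(bool nsec3_present, bool over_limit)
    {
        if (!nsec3_present)
            return RANK_BOGUS;      /* expansion unproven */
        if (over_limit)
            return RANK_INSECURE;   /* too expensive; keep the records */
        return RANK_SECURE;
    }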
This commit makes lots of changes to the C code to appease the
Clang-Tidy linter. Some of the less obvious ones are due to C's weird
semantics around numeric literals.
We also disable a bunch of the checks because they are super-pedantic,
arguably useless, or already covered by our own unwritten coding-style
rules.
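
One classic example of those literal semantics (my illustration, not a
line from this commit): a plain 1 is a signed int, so shifting it into
the top bit is undefined behavior.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* uint32_t bad = 1 << 31;   UB: the literal 1 is a signed int */
        uint32_t good = UINT32_C(1) << 31;  /* unsigned literal: well defined */
        printf("0x%08x\n", (unsigned)good);
        return 0;
    }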
knot_wire_next_label used to return NULL when applied to . (root),
but that's not allowed anymore, and some of our calls relied on the
old behavior.
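
A standalone sketch of the guard that callers now need (simplified to
uncompressed names; this is not the libknot API itself):

    #include <stddef.h>
    #include <stdint.h>

    /* Step to the next label of a wire-format name, returning NULL at
     * the root, i.e. the check we now have to do ourselves before
     * calling knot_wire_next_label(). */
    static const uint8_t *next_label_or_null(const uint8_t *lp)
    {
        if (lp == NULL || lp[0] == 0)
            return NULL;            /* at the root: no next label */
        return lp + lp[0] + 1;      /* skip the length byte + label data */
    }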
knot_wire_next_label isn't allowed to be called with a NULL wire
parameter anymore.
The issue was exposed when working on the rrl-wip branch:
lib/dnssec/nsec.c:19:10: fatal error: resolve.h: No such file or directory
Ideally we would've done that at the same time as increasing NSEC3
strictness, i.e. in 5.7.1 + 6.0.6; otherwise we could run into some
recoverable assertions until the records got removed or expired.
We at least do the bump now.
It was no longer correct after commit cc5051b444130 (KeyTrap).
Improve: don't retry in this case.
Keep the first error in case priorities are equal.
At least with the current KeyTrap topic that should work better,
but blaming a single error is alchemy anyway, at least in some cases.
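
Roughly, the tie-breaking looks like this (a sketch with made-up
types, not the actual selection structures):

    #include <stddef.h>

    struct err { int code; int priority; };

    /* Pick the highest-priority error; the strict `>` keeps the
     * *first* error seen whenever priorities are equal. */
    static struct err pick_error(const struct err *errs, size_t count)
    {
        struct err best = errs[0];
        for (size_t i = 1; i < count; ++i)
            if (errs[i].priority > best.priority)
                best = errs[i];
        return best;
    }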
The value is in the IANA registry, so it's effectively constant anyway.
That's when searching the NSEC3 aggressive cache.
Limit the combination of iterations and salt length, based on the
estimated expense of the computation. Note that the result only
differs for salt lengths > 44, which are rather nonsensical and very
rare:
https://chat.dns-oarc.net/community/pl/h58qx9sjkbgt9dajb7x988p78a
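
The estimate boils down to counting SHA-1 blocks: one 64-byte block
fits the 20-byte digest plus up to 44 bytes of salt, so only longer
salts double the per-iteration cost. A sketch (the limit value is
illustrative, not the shipped one):

    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_HASH_COST 150  /* illustrative limit */

    static bool nsec3_too_expensive(uint16_t iterations, uint8_t salt_len)
    {
        /* ceil((20 + salt_len) / 64): 1 block up to 44 B of salt, 2 above */
        uint32_t blocks_per_iter = (20u + salt_len + 63) / 64;
        /* RFC 5155: the name is hashed once, then re-hashed `iterations` times */
        return (iterations + 1u) * blocks_per_iter > MAX_HASH_COST;
    }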
Also done by BIND9 >= 9.19.19:
https://gitlab.isc.org/isc-projects/bind9/-/merge_requests/8515
The latest real-life measurements show that values above 50 are rare:
https://chat.dns-oarc.net/community/pl/aadp9wwrp7g7ux1b8chbzebmze
I'm adding this as a function, as in knot-resolver 6.x we have
one more place where it is used, and I find this more readable.
This filtering was dropped in 4565cc596680 (v5.3.0).
Now it's reintroduced, but inside the function, as that seems nicer.
Nit: naming and comment were updated to fit the current usage.
As the code is designed so far (probably over its whole history),
in order to detect whether we need to choose a zone cut closer to
the root, we need to do something like this in lib/zonecut.c already,
instead of only during server selection.
I don't think this change can break anything.
Fetching unusable addresses from cache seems pointless,
as selection wouldn't be allowed to use them or try resolving them.
The reserved size in the packet is a messy thing, broken by
https://gitlab.nic.cz/knot/knot-dns/-/commit/ded5fbf01d00a875f141
Fortunately this function is trivial, so we can inline what we need.
It gets complicated by an earlier typo fix, though.
It could happen that this condition didn't get triggered even though
the structures weren't completely cleared. In particular,
the current section could already be past KNOT_ANSWER.
Let's be more conservative here; pkt_recycle() shouldn't be expensive.
I'm not sure why I only ran into this on the new-policy branch,
but it really seems like a bug on master already.
- selection: utilize address_state::broken also when forwarding
- selection: drop fallbacks that don't make sense when forwarding
- iterate: copy EDE codes on DNSSEC SERVFAILs
The debug dumps of packets used UNIX timestamps (in RRSIG validity)
instead of the customary human-readable stamps.
This was an unintentional regression from 0555828e, i.e. present
since v5.4.1. I looked again at all other differences from the
default kdig style, and the only ones are that we don't show the
class and don't do IDN (both seem suitable here).
On the most fundamental issues, like a DNS message failing to parse,
we did not call this. Selection needs such information.
- apply to the first (uncached) answer already
- don't extend over signature validity
Nit: the tests were using a TTL that was too high (RFCs disallow the
"sign bit"). It was working because (manual) cache insertion was
applying bounds, but now the bounds don't get applied anymore, so it
would fail.
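
In sketch form (a hypothetical helper, not the actual cache code):
clamp the TTL below the 31-bit RFC 2181 maximum and below the
remaining signature validity.

    #include <stdint.h>

    #define TTL_MAX ((uint32_t)INT32_MAX)  /* RFC 2181: the "sign bit" must stay zero */

    static uint32_t clamp_ttl(uint32_t ttl, uint32_t now, uint32_t sig_expire)
    {
        if (ttl > TTL_MAX)
            ttl = TTL_MAX;
        uint32_t validity = sig_expire > now ? sig_expire - now : 0;
        return ttl < validity ? ttl : validity;
    }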
Allowing too much seems to have more risk than benefit. For example,
the 2-day TTL on DS records in the .com zone (e.g. the Slack issue
months ago).
When a whole packet is cached (instead of individual RRs),
let's simplify the way the packet's TTL gets computed.
The previous mechanism came from commit 5b383a2bb7,
probably a misunderstanding of:
https://datatracker.ietf.org/doc/html/rfc2308#section-5
Anyway, I see no motivation for it, and this way we should
get rid of some weird cases where we might extend the TTL of some
records; the only exception is records below the cache.min_ttl()
setting (5s default).
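
The simplified computation is then essentially a minimum over the
packet's records, floored by cache.min_ttl(); a sketch:

    #include <stddef.h>
    #include <stdint.h>

    static uint32_t packet_ttl(const uint32_t *rr_ttls, size_t count,
                               uint32_t min_ttl)
    {
        uint32_t ttl = UINT32_MAX;
        for (size_t i = 0; i < count; ++i)
            if (rr_ttls[i] < ttl)
                ttl = rr_ttls[i];
        /* the only remaining case where a record's TTL gets extended */
        return ttl < min_ttl ? min_ttl : ttl;
    }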
We can get stricter here, with only a negligible fraction of
real-life names regressing.
I can't see any motivation for the copying behavior,
and it made caching non-deterministic.
In particular, this avoids unintentional NXDOMAIN on grafted subtrees.
Consequently, users can drop the 'NO_CACHE' flag and get caching.
As the web is set up now, the variant without www doesn't redirect
over https (only over http). So let's switch to the final URL;
the apex is problematic.
It's a mitigation for CVE-2022-40188 and similar DoS attempts.
It's using really trivial approaches, at least for now.
For long arrays we really want to increase their length by a fraction;
otherwise it will cost lots of CPU. Doubling seems customary,
though I could imagine e.g. keeping the +50% growth on the longest
arrays. I finally got sufficiently angry with this piece of code when
debugging
https://forum.turris.cz/t/how-to-debug-a-custom-hosts-file-for-kresd/17449
though in that case it wasn't the main source of inefficiency.
CI: two of the mysterious/bogus warnings around arrays disappeared.
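
The change amounts to geometric growth; a self-contained sketch of
the idea (not the actual generic-array macros):

    #include <stdlib.h>

    /* Double the capacity instead of adding a constant, so n appends
     * cost O(n) amortized in realloc+copy work instead of O(n^2). */
    static void *array_reserve(void *mem, size_t *cap, size_t want, size_t elm)
    {
        if (want <= *cap)
            return mem;
        size_t next = *cap ? *cap : 8;
        while (next < want)
            next *= 2;
        void *nmem = realloc(mem, next * elm);
        if (nmem != NULL)
            *cap = next;
        return nmem;
    }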
Fixes CIDs 355763 and 355764. Also fixes a minor typo.
Our strategy was (and remains) that the in-header QNAME is overwritten
in-place, so most of our code was already (correctly) assuming that
knot_pkt_qname() returns lower-case only. That simplifies this commit.
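
A standalone sketch of the in-place lowering (the QNAME in the
question section is never compressed, so a plain label walk suffices):

    #include <ctype.h>
    #include <stdint.h>

    static void dname_to_lower(uint8_t *name)
    {
        while (*name != 0) {                /* zero length byte = root */
            uint8_t len = *name++;
            for (uint8_t i = 0; i < len; ++i, ++name)
                *name = (uint8_t)tolower(*name);
        }
    }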
I used -Winline (optimizing, gcc 11 or 12) to gather warnings
about cases that were considered too expensive for inlining.
Some of these probably didn't use to happen back when we were
dropping assertions during preprocessing in -DNDEBUG builds.
This commit mainly reduces the size of the compiled binary by
several KiB.
- queue_head_impl(): optionally (un)inline; not big, but it showed up
  in the warnings
- queue_pop_impl(): uninline; too complex to my eyes today
- kr_rand_bytes(): optionally (un)inline
  The inlining potential there comes from calling it with a constant.
- kr_straddr(): uninline. It's never been meant for hot code,
  and this gives us large savings by deduplicating the static array.
- For some I couldn't see a good resolution due to restrictions in C.
C hint: `static inline` is probably well known;
the other inline combination is well explained at:
https://stackoverflow.com/a/6312813/587396
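
For reference, that "other inline combination" as I'd sketch it (the
function here is made up): the header carries a plain `inline`
definition, and exactly one .c file emits the out-of-line copy, so
the compiler may inline each call or not.

    /* util.h: definition visible everywhere, inlining is optional */
    inline int kr_clamp(int v, int lo, int hi)
    {
        return v < lo ? lo : (v > hi ? hi : v);
    }

    /* util.c: emit exactly one external definition for calls that
     * don't get inlined */
    extern inline int kr_clamp(int v, int lo, int hi);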
And that made the "NO6: is KO" line extraneous.
Example in context:
[select][14162.01] => id: '15271' choosing from addresses: 0 v4 + 1 v6; names to resolve: 6 v4 + 5 v6; force_resolve: 0; NO6: IPv6 is OK
[select][14162.01] => id: '15271' choosing: 'ns1.p31.dynect.net.'@'2600:2000:2210::31#00053' with timeout 774 ms zone cut: 'amazon.com.'
[select][14162.01] => id: '15271' updating: 'ns1.p31.dynect.net.'@'2600:2000:2210::31#00053' zone cut: 'amazon.com.' with rtt 316 to srtt: 311 and variance: 89
It was still possible to get into a deadlock here.
https://forum.turris.cz/t/not-connecting-to-applications-like-discord/17111/7
If the A records for an NS fell out of cache but the AAAA records
remained, then with probability 1 − ε (i.e. almost always) we'd
choose an AAAA address even if IPv6 was considered broken.
I looked at *the whole* no6 strategy again, and I do think that
there are no such holes anymore. A few percent of attempts will still
go over IPv6 even if it's considered broken, but that sounds OK-ish.
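
The "few percent" escape hatch could be sketched like this
(illustrative numbers, not the real selection code):

    #include <stdbool.h>
    #include <stdint.h>

    /* While IPv6 looks broken, still route a small fraction of
     * attempts over it, so that recovery is eventually noticed. */
    static bool try_ipv6(bool no6_is_bad, uint8_t rand_byte)
    {
        if (!no6_is_bad)
            return true;
        return rand_byte < 8;  /* 8/256 = ~3% of attempts keep probing */
    }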
https://clangd.llvm.org/design/include-cleaner
Though somehow I keep getting false positives for
"daemon/bindings/impl.h".
It provides more information and the condition is typically
easier to read, too.
https://man7.org/linux/man-pages/man3/posix_fallocate.3.html#ERRORS:
> EOPNOTSUPP
> The filesystem containing the file referred to by fd does not support
> this operation. This error code can be returned by C libraries that
> don't perform the emulation shown in NOTES, such as **musl libc**.
I've encountered this problem on Alpine Linux running inside an LXC
container on Ubuntu with data on ZFS.
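
A sketch of the workaround (an assumed shape, not the exact patch):
treat EOPNOTSUPP as non-fatal and fall back to plain ftruncate().

    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    static int preallocate(int fd, off_t size)
    {
        int err = posix_fallocate(fd, 0, size);  /* returns an errno value */
        if (err == 0)
            return 0;
        if (err == EOPNOTSUPP)                   /* e.g. musl without emulation */
            return ftruncate(fd, size);          /* no preallocation, still usable */
        errno = err;
        return -1;
    }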