| Commit message (Collapse) | Author | Age | Files | Lines |
| |
The detections are mostly academic and useless for our purposes. We have
other static analyzers that better suit our needs.
|
| |
This is the bulk of the CI/CD overhaul.
Most of the changes are to the `.gitlab-ci.yml` file, where the build
images used are replaced with the ones provided by the
`knot-resolver-ci` repository. Some cleanups have also been done.
The commit also adds unit testing with Knot Resolver built against
multiple versions of Knot DNS, including the `master` branch. The
`master` branch image is built nightly in the `knot-resolver-ci` repo.
We have also removed `scan-build`: its detections change frequently and
differ a lot between versions, there are lots of false positives, and
there is no good way to suppress individual detections. Clang-Tidy
covers some of the same issues, and we also have Coverity Scan, which
should be more than enough.
A few config tests were also excluded from the AddressSanitizer runs,
because they produce false positives.
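The Knot DNS test matrix described above could look roughly like this in `.gitlab-ci.yml`; the image paths, job names, and build commands here are illustrative assumptions, not copied from the actual pipeline:

```yaml
# Illustrative sketch only -- registry paths and versions are assumptions.
.unit-test: &unit-test
  stage: test
  script:
    - meson setup build_ci
    - ninja -C build_ci
    - meson test -C build_ci

test:knot-3.2:
  <<: *unit-test
  image: $CI_REGISTRY/knot/knot-resolver-ci/debian12-knot_3.2

test:knot-master:
  <<: *unit-test
  # this image is rebuilt nightly in the knot-resolver-ci repository
  image: $CI_REGISTRY/knot/knot-resolver-ci/debian12-knot_master
```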
|
| |
This fixes the default use case for developers who put their install
prefix somewhere the system's library search path does not cover.
Before this, `kresd` would fail to start after `ninja install` because
it could not find the `libkres.so` library.
The original workaround was to use `meson configure
-Ddefault_library=static`, but firstly, we would like the default
settings to work out of the box, and secondly, we would like developer
builds to be as similar as possible to what most users will encounter.
|
| |
This commit renames `docs:public` to `pages` as required by GitLab CI to
recognize Pages jobs correctly. It also adds the `public` directory into
`artifacts:paths`.
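For context, GitLab only treats a job as a Pages deployment when it is named `pages` and publishes a `public/` directory via artifacts; a minimal sketch (the build step is an illustrative placeholder):

```yaml
pages:
  stage: deploy
  script:
    - ./scripts/build-docs.sh public   # placeholder for the real docs build
  artifacts:
    paths:
      - public
```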
|
| |
This leverages Environments on GitLab to expose different versions of
Knot Resolver docs. The `docs:build` job builds the documentation and
exposes it via job artifacts. Then `docs:develop` (for branches) and
`docs:release` (for tags) take these artifacts and expose them via an
Environment link (an example of this in action may be seen at
[https://gitlab.nic.cz/ostava/knot-resolver/-/environments]).
There is also an optional, manually runnable `docs:public` job, which,
when run, propagates the documentation to the main GitLab Pages of the
project (e.g. [https://knot.pages.nic.cz/knot-resolver]) - this will
probably be mostly used for the latest release, although this setup
pretty much allows us to swap it for whatever version we like at any
time.
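A rough sketch of how such an Environment-backed job might look in `.gitlab-ci.yml`; the job structure, environment name, and URL scheme below are assumptions for illustration, not the actual pipeline code:

```yaml
docs:develop:
  stage: deploy
  needs:
    - job: docs:build
      artifacts: true
  rules:
    - if: $CI_COMMIT_BRANCH
  script:
    - echo "exposing docs for $CI_COMMIT_REF_SLUG"
  environment:
    name: docs/$CI_COMMIT_REF_SLUG
    # one option is to point the Environment at the artifact browser;
    # the exact URL here is an assumption
    url: $CI_JOB_URL/artifacts/file/public/index.html
```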
|
| |
It looks like downloads won't work anymore:
https://gitlab.nic.cz/knot/knot-resolver/-/jobs/890201
https://gitlab.nic.cz/knot/knot-resolver/-/jobs/890312
which is probably because long-term support ended last summer.
|
| |
These packaging tests are dying anyway;
the manager branch reworked them.
So at least the breakages won't be shown in red until then.
https://gitlab.nic.cz/knot/knot-resolver/-/jobs/852665
https://build.opensuse.org/request/show/1050454
Even after this update, obs:leap15 still fails later, in the vagrant step:
https://gitlab.nic.cz/knot/knot-resolver/-/jobs/852799
|
| |
They've been failing for many months, e.g. see
https://gitlab.nic.cz/knot/knot-resolver/-/pipelines/104497
This way it at least won't be confusing by showing red in CI.
|
| |
In that case there's no need to wait for other jobs, either.
|
| |
We're usually not interested in CI on older commits,
and this default will help cancel expensive respdiff jobs.
Also add default runner tags to make jobs less likely
to end up underspecified. For example, each job should choose
one option from each of the docker/lxc and amd64/arm64 pairs.
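The defaults described above could be expressed with GitLab's `default:` section; the tag names come from the text, while the rest is an illustrative sketch:

```yaml
default:
  # let newer pipelines cancel jobs still running on older commits
  interruptible: true
  # fallback runner tags; jobs override these, picking one option
  # from each of the docker/lxc and amd64/arm64 pairs
  tags:
    - docker
    - amd64
```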
|
| |
This reverts commit 15c1353544be, with some modifications.
On LXC we've had issues with
FileExistsError: [Errno 17] File exists: '/tmp/pytest-kresd-portdir'
.. which disappear with this commit. (I don't know how/why.)
|
| |
I don't know how to fix building the image with it.
A few things were tried around different go versions (from -backports).
|
| |
21.10 isn't supported anymore, which is probably why it's failing.
|
| |
We've already done that on OBS side, which is probably why it's failing.
|
| |
For long arrays we really want to grow the capacity by a fraction of
the current size; otherwise appending costs lots of CPU. Doubling seems
customary, though I could imagine e.g. keeping the +50% growth on the
longest arrays.
I finally got sufficiently angry with this piece of code when debugging
https://forum.turris.cz/t/how-to-debug-a-custom-hosts-file-for-kresd/17449
though in that case it wasn't the main source of inefficiency.
CI: two of the mysterious/bogus warnings around arrays disappeared.
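The growth policy described above can be sketched in C; the function name and starting size here are illustrative, not the repository's actual array code:

```c
#include <stddef.h>

/* Geometric capacity growth: doubling keeps the amortized cost of an
 * append O(1), whereas growing by a constant amount makes n appends
 * copy O(n^2) elements in total. */
static size_t next_capacity(size_t have, size_t need)
{
	size_t cap = have ? have : 4;	/* small initial allocation */
	while (cap < need)
		cap *= 2;	/* or e.g. cap += cap / 2 for +50% growth */
	return cap;
}
```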
|
| |
That way we get at least basic testing before 3.2 is made default in CI.
|
| |
Some of our CI jobs use project-specific GitLab runners (e.g. requiring
the `dind` tag). The jobs then fail when someone forks the repository
and opens a merge request. This commit confines those jobs to the
`knot/knot-resolver` repository.
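One way to express such a restriction (a sketch only; the actual commit may use different syntax, e.g. `only:`/`except:`):

```yaml
.dind-job:
  tags:
    - dind
  rules:
    # run only in the upstream repository where the project-specific
    # runners are registered; forks simply skip this job
    - if: $CI_PROJECT_PATH == "knot/knot-resolver"
```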
|
| |
I have no idea why this one appeared just now (in a part we did not
touch), and it does not make sense at all:
../../../lib/utils.c:524:20: warning: Out of bound memory access (accessed memory precedes memory block)
buf[len_need - 1] = 0;
~~~~~~~~~~~~~~~~~~^~~
|
| |
No other job can do it, as we don't have docker images ready for that,
and the usual manual workflow won't be practical for arm64.
We'll need to convert their generation to (manual?) CI schedules.
|
| |
Debian 10 could probably get dropped soon, but not yet.
|
| |
In a few places the tag-set specification for jobs could match
either amd64 or arm64 runners. That non-determinism is bad,
especially when passing platform-specific artifacts around.
This is just a stop-gap measure. Later we'll need to rethink our CI
in terms of the two platforms.
I didn't touch tag-sets with `condor`, as that will probably always be
just a single machine (which coordinates scheduling on others).
|
| |
These are running on a hardware setup which is hard to maintain. In the
near future, ARM64 should be covered by a dedicated runner.
|
| |
Due to missing support on some of the regular runners, let's migrate
these tests to our special LXC runners. This should hopefully make the
results more reliable and stable.
The downside is that we have to keep an additional image (and recipe)
for LXC, since it's slightly different. However, it's probably worth it,
since we'll likely migrate some other tests there in the future (for
better stability).
|
| |
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
|
| |
Today it was often failing due to starting too soon.
Nothing depends on this job, so it's cheap to start its check later.
|
| |
Builds are still checked by the other pkgtest suite. However, OBS
mirrors for CentOS 7 are just problematic. We've already contacted them
once; they fixed the issue but mentioned it will probably come back.
No point in wasting any more time on this test, then.
|
| |
Code wouldn't be leaked; we'd just send the branch name to GH servers.
Still, it's better to skip the step.
|
| |
Some tests (typically those using the network) occasionally fail due to
timeouts, which is probably caused by increased CI load - perhaps
reducing it could make the tests more stable.
|