AWS-LC lacks support for a number of elliptic curve algorithms, so this
adds some conditional macros to avoid registering the related plugin
features. Support for curves ed448 and x448 is completely absent and not
planned for implementation, as they are no longer recommended for use.
While ed25519 is supported by the library, a single API for ASN.1 DER
encoding of its private keys is missing, which prevents its use in
strongSwan. Future work may remove this limitation, but for now
we will disable the functionality.
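A minimal sketch of the approach, assuming AWS-LC's OPENSSL_IS_AWSLC
define; the exact features guarded in the openssl plugin's feature table
may differ:

    static plugin_feature_t f[] = {
        /* ... */
    #ifndef OPENSSL_IS_AWSLC
        /* skip features AWS-LC can't provide (illustrative selection) */
        PLUGIN_PROVIDE(PRIVKEY, KEY_ED448),
        PLUGIN_PROVIDE(PUBKEY, KEY_ED448),
    #endif
        /* ... */
    };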
Closes strongswan/strongswan#2109
AWS-LC is a BoringSSL-based libcrypto implementation. Its SHA_CTX declares
the hash state as an array rather than as individual fields as in upstream
OpenSSL.
Since AWS-LC builds against C99, we can't handle this with anonymous
unions the way BoringSSL does. The workaround I propose is to add
conditional macros around the accessors within openssl_sha1_prf.
After this change,
everything builds successfully with AWS-LC headers.
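A minimal sketch of such a conditional accessor (the macro name is made up
for illustration); upstream OpenSSL exposes the state as h0..h4, AWS-LC as
h[5]:

    #ifdef OPENSSL_IS_AWSLC
    /* AWS-LC: hash state is an array */
    #define SHA1_STATE_H0(ctx) ((ctx)->h[0])
    #else
    /* upstream OpenSSL: hash state is individual fields */
    #define SHA1_STATE_H0(ctx) ((ctx)->h0)
    #endif

    /* the PRF then uses SHA1_STATE_H0(&this->ctx) instead of this->ctx.h0 */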
Closes strongswan/strongswan#2103
This also refactors the BPF handling so it can be shared between the
dhcp and farp plugins. The latter is adapted accordingly.
Closes strongswan/strongswan#2047
Co-authored-by: Tobias Brunner <tobias@strongswan.org>
Different users in the strongSwan code base use the refcount helpers to
allocate incrementing unique values. So far the risk of overflows for
these unsigned 32-bit values has been considered mostly theoretical, as
it requires a long uptime and a lot of activity to hit such an overflow.
At least for the Netlink sequence numbers, this is not only theoretical,
though, and an overflow has been hit on a production setup. Unfortunately,
the consequences are rather unpleasant, as the response with a zero
sequence number can't be matched to the request. This causes the
offending thread to block indefinitely while holding the Netlink mutex.
So add a helper to allocate incrementing unique identifiers that checks
for overflows and never returns 0. Use it for Netlink sequence numbers
and for other potentially affected users, namely those allocating
IKE_SA/CHILD_SA unique identifiers, marks and interface identifiers.
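A minimal sketch of such a helper, modeled on the existing ref_get(); the
actual implementation may differ in details:

    /**
     * Like ref_get(), but skips 0 if the 32-bit counter wraps around,
     * so callers always receive a non-zero unique identifier.
     */
    static inline refcount_t ref_get_nonzero(refcount_t *ref)
    {
        refcount_t value;

        do
        {
            value = ref_get(ref);
        }
        while (value == 0);

        return value;
    }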
Closes strongswan/strongswan#2062
The refcount_t for allocating unique marks and interface IDs may overflow or
hit the special value for unique marks/if_ids, in the worst case not setting
a mark/if_id on CHILD_SAs that should have one.
As (potentially two) marks/if_ids are allocated only for newly created
CHILD_SAs, but not for rekeying, this is not very likely. Still, if a setup
uses aggressive re-authentication and/or re-creates CHILD_SAs every minute,
a gateway with 100'000 tunnels may hit the overflow within a month of uptime.
CHILD_SA unique identifier allocation starts at 1. If the counter overflows,
a unique ID of 0 is assigned to a CHILD_SA, which may have unclear
consequences.
Overflowing the unique ID counter is theoretical for most setups, but a
gateway terminating 100'000 tunnels and rekeying CHILD_SAs every 60s
overflows the counter after a month of uptime. So avoid a 0 unique identifier
by using ref_get_nonzero().
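Illustrative call site (the static counter name is an assumption):

    /* previously: this->unique_id = ref_get(&unique_id);
     * ref_get_nonzero() never returns 0, even after a wrap-around */
    this->unique_id = ref_get_nonzero(&unique_id);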
IKE_SA unique identifier allocation starts at 1. If the counter overflows,
a unique ID of 0 is assigned to an IKE_SA, which may have unclear consequences.
Overflowing the unique ID counter is theoretical for most setups, but a
gateway terminating 100'000 tunnels and rekeying the IKE_SA every 60s
overflows the counter after a month of uptime. So avoid a 0 unique identifier
by using ref_get_nonzero().
A refcount variable is used to allocate sequential unique identifiers for
Netlink sequence numbers, which is subject to overflows. The risk of an overflow
has so far not been considered practical, as it requires 2^32 netlink
requests.
It seems that this issue is not only theoretical. A host with thousands
of tunnels doing aggressive rekeying and/or aggressive status checking
(via vici list-sas) may trigger the overflow after a few weeks of uptime.
The consequences are rather devastating: Once the refcount overflows, a
Netlink request is sent with sequence number 0. This request is answered
by the kernel, but can't be matched to the request, resulting in the error:
"received unknown netlink seq 0, ignored". Without Netlink timeouts, the
thread indefinitely waits for a response while holding the Netlink mutex,
bringing all threads to a halt.
So avoid zero sequence numbers at all costs. Also, start at sequence number
1 instead of the arbitrary 201, so the same range is used at startup and after
an overflow.
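A sketch of the resulting allocation in the Netlink socket code (the member
name is an assumption):

    /* counter starts at 0, so the first allocated sequence number is 1,
     * and 0 is skipped again after the counter wraps around */
    hdr->nlmsg_seq = ref_get_nonzero(&this->seq);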
Also fix the path to the sdkmanager (the old one was removed in the latest
images and the incorrect path caused a weird sudo error) and install
Java 17 as that's necessary for newer versions of the Gradle plugin.
PowerMock isn't maintained anymore and causes issues with newer Java
versions. We only used it to mock static methods, which Mockito now
supports as well. Instead of using the try-with-resources construct,
this uses a @Before and @After method so we don't have to change all the
test methods.
This also references the NDK via ndkVersion and replaces the custom
ndk-build tasks. It also replaces the deprecated compileSdkVersion and
increases it because transitive dependencies of the updated dependencies
require that.
targetSdkVersion is not yet updated because there might be some work
required for Android 14 compatibility.
The gperf version that's already available on the system generates
function declarations with K&R syntax (parameters declared separately),
for which newer compilers produce a warning, as C23 doesn't support that
syntax anymore.
These functions are declared without arguments; passing arguments to them
causes warnings such as the following with newer compilers:
passing arguments to 'return_null' without a prototype is deprecated in all versions of C and is not supported in C2x [-Werror,-Wdeprecated-non-prototype]
We only use them via function pointers, which doesn't trigger any warnings
and hopefully continues to work.
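A small self-contained illustration of why the function-pointer
indirection avoids the warning (names are illustrative):

    #include <stddef.h>

    /* defined without a prototype, like the stubs mentioned above */
    void *return_null() { return NULL; }

    /* the typed function pointer supplies a prototype for the call */
    typedef void *(*getter_t)(void *obj);

    int main(void)
    {
        getter_t get = (getter_t)return_null;

        /* calling return_null(ptr) directly would trigger
         * -Wdeprecated-non-prototype; calling through the pointer does not */
        void *result = get(NULL);
        return result != NULL;
    }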
Newer curl versions (as used on macOS via Homebrew) add attributes like
__attribute__ ((format(printf, a, b)))
to their `curl_*printf*` functions, which fails if we redefine `printf`
as e.g. `builtin_printf` (pulled in via library.h). We could disable
these checks via CURL_NO_FMT_CHECKS, but reordering the headers should
do the trick as well.
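A sketch of the resulting include order in the affected source file:

    /* include curl first so its *printf declarations, carrying format
     * attributes, are parsed before library.h redefines printf et al. */
    #include <curl/curl.h>

    #include <library.h>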
Bison generates code that only increments the yynerrs counter; it's never
read. This causes a warning with newer compilers (in particular clang).
Newer versions of bison mark yynerrs with __attribute__((unused)), but
at least on FreeBSD 14 that's not yet available.
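One way to keep the compiler quiet until that attribute is available
everywhere, as a sketch (not necessarily the exact fix applied here):

    /* pretend to read yynerrs so -Wunused-but-set-variable doesn't fire
     * with bison versions that don't mark it __attribute__((unused)) */
    (void)yynerrs;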
systemd seems to use this, and if we indirectly use libraries provided
by it (which can happen e.g. via getgrnam_r() and nss-systemd), it may be
called on pointers returned by leak detective's malloc(), which don't
point to the original start of the block and therefore cause a
segmentation fault.
Closes strongswan/strongswan#2045
Fixes a regression with handling OCSP error responses and adds a new
option to specify the length of nonces in OCSP requests. Also adds some
other improvements for OCSP handling and fuzzers for OCSP
requests/responses.
Closes strongswan/strongswan#2011