Some variables are only assigned so they can be used in DBG statements and
would otherwise trigger a "set but not used" warning/error if DEBUG_LEVEL
is too low.
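A minimal standalone illustration of the problem, using a simplified stand-in
for the DBG macros (not the actual strongSwan definitions):

```c
#include <stdio.h>

#define DEBUG_LEVEL 1

#if DEBUG_LEVEL >= 2
#define DBG2(fmt, ...) fprintf(stderr, fmt "\n", __VA_ARGS__)
#else
#define DBG2(fmt, ...) do {} while (0)   /* compiled out */
#endif

int main(void)
{
	int attempts = 3;   /* only used in the DBG2() call below */

	DBG2("retrying %d times", attempts);
	/* with DEBUG_LEVEL < 2 the macro expands to nothing, so a compiler
	 * with -Wunused-but-set-variable (part of -Wall on GCC) warns that
	 * 'attempts' is set but not used */
	return 0;
}
```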
When we reset the initiator SPI, we have to migrate the adopted children
again so the correct IKE_SA can later be checked out.
Closes strongswan/strongswan#1663
As long as any `child*` selector is received, only CHILD_SAs will be
terminated or rekeyed. Any passed `ike*` selectors will only be used to
filter the IKE_SAs when looking for matching CHILD_SAs. However, the
previous log messages seemed to indicate that IKE_SAs would also be
terminated/rekeyed.
References strongswan/strongswan#1655
While a status message is sent to systemd (visible e.g. via
`systemctl status`), the version etc. is currently not logged to the
journal, syslog or any log files.
Previously, the logger installed by the controller always announced
LEVEL_PRIVATE(4), which produced completely useless logging calls with
the common clients (vici/stroke) whose default log level is LEVEL_CTRL(1).
This can produce quite some overhead if there are e.g. a lot of concurrent
initiate() calls.
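Conceptually, the change looks roughly like the following sketch (the names
`controller_logger_t` and `requested` are hypothetical, not the actual
controller.c code): the logger announces only the level the client asked for,
so the bus can skip formatting more verbose messages.

```c
typedef enum { LEVEL_CTRL = 1, LEVEL_PRIVATE = 4 } level_t;

typedef struct {
	level_t requested;   /* log level passed by the vici/stroke client */
} controller_logger_t;

/* previously the equivalent of "return LEVEL_PRIVATE;" */
static level_t get_level(controller_logger_t *this)
{
	return this->requested;   /* e.g. LEVEL_CTRL for the default clients */
}
```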
Until we know which IKE_SA is affected by an initiate() or terminate_*()
command, unrelated log messages that don't have any IKE context (i.e.
the passed `ike_sa` is NULL) would previously get logged.
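A guess at the shape of the check (hypothetical names and simplified types,
not the actual controller code):

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct ike_sa ike_sa;   /* simplified stand-in for the real type */

/* only forward a log message to the client once the IKE_SA affected by the
 * command is known and the message actually belongs to it */
static bool should_forward(ike_sa *tracked, ike_sa *msg_ike_sa)
{
	/* previously, messages with msg_ike_sa == NULL were forwarded even
	 * while tracked was still NULL */
	return tracked != NULL && msg_ike_sa == tracked;
}
```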
Previously, exiting the loop could cause the watcher to busy-wait (i.e.
rebuild the array and call poll() repeatedly) until the active callback
was done.
Assume watcher observes two FDs 15 and 22, which are in the list in that
order. FD 15 is signaled and its callback gets triggered. The array of
FDs is rebuilt and does not include 15 anymore. Now FD 22 is ready for
reading. However, when enumerating all registered FDs, the loop was
previously exited upon reaching FD 15 and seeing that it was still active.
FD 22 was never checked, the array was immediately rebuilt and poll() called
again.
If the callback for 15 took longer, this was repeated over and over.
This basically reverts d16d5a245f0b ("watcher: Avoid queueing multiple
watcher callbacks at the same time"), whose goal is quite unclear to me.
If it really wanted to allow only a single callback for all FDs, it didn't
achieve that, as any FD before an active one would still get notified, and
if multiple FDs are ready concurrently, they'd all get triggered too.
Skipping entries with an active callback makes sense, as it avoids a lookup
in the FD array and the subsequent revents checks. But why would we need to
rebuild the array if we see such an entry? Once the callback is done,
the watcher is notified and the array is rebuilt anyway (likewise if any
other FD was ready and jobs got queued).
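A sketch of the notification loop after the change (hypothetical, simplified
stand-ins for watcher.c's internal state; `entry_t`, `notify_ready()` and
`queue_job` are made up for illustration):

```c
#include <stdbool.h>

typedef struct entry_t {
	int fd;
	bool in_callback;      /* a callback job for this entry is running */
	bool ready;            /* poll() reported activity on fd */
	struct entry_t *next;
} entry_t;

static void notify_ready(entry_t *entries, void (*queue_job)(entry_t*))
{
	entry_t *entry;

	for (entry = entries; entry; entry = entry->next)
	{
		if (entry->in_callback)
		{	/* previously the loop was aborted here, so later FDs (like
			 * FD 22 above) were never checked and the array was rebuilt
			 * and poll()ed in a busy loop; now the entry is just skipped */
			continue;
		}
		if (entry->ready)
		{
			queue_job(entry);
		}
	}
}
```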
The list of FDs is recreated quite often (e.g. due to the kernel-netlink
event sockets), and if a logger depends on watcher_t in some way, this
might cause conflicts while the mutex is held.
Since the same FD may be added multiple times (e.g. with separate
callbacks for WATCHER_READ and WATCHER_WRITE), the previous check
might not have found the correct entry. As the entry can't be removed
while in a callback, the pointer comparison is fine.
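For example, reusing the hypothetical `entry_t` from the sketch above (again
simplified, not the actual watcher.c code):

```c
/* mark the entry whose callback just finished as idle again; entries are
 * matched by pointer because the same fd can be registered twice, e.g.
 * once for WATCHER_READ and once for WATCHER_WRITE */
static void callback_done(entry_t *entries, entry_t *done)
{
	entry_t *entry;

	for (entry = entries; entry; entry = entry->next)
	{
		if (entry == done)	/* previously roughly: entry->fd == done->fd */
		{
			entry->in_callback = false;
			break;
		}
	}
}
```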
This could be problematic in case loggers themselves rely on watcher_t in
some way. This particular log message should rarely occur, if at all, but
we still avoid holding the mutex while logging it.
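The pattern is roughly the following (standalone sketch using a pthread mutex
as a stand-in for strongSwan's mutex_t, with made-up names): copy whatever
the message needs, release the mutex, then log.

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static int registered_fds;

static void log_fd_count(void)
{
	int count;

	pthread_mutex_lock(&mutex);
	count = registered_fds;           /* copy what the message needs */
	pthread_mutex_unlock(&mutex);

	/* log only after releasing the mutex, in case a logger ends up
	 * calling back into watcher_t */
	fprintf(stderr, "watching %d fds\n", count);
}
```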
This should prevent a deadlock that could previously be caused when a
control-log event was raised. The deadlock looked something like this:
* Thread A holds the read lock on bus_t and raises the control-log event.
This requires acquiring the connection entry in write mode to queue the
outgoing message. If it is already held by another thread, this blocks
on a condvar.
* Thread B is registering the on_write() callback on the same connection's
stream due to a previous log message. Before this change, the code
acquired the entry in write mode as well, thus, blocking thread A. To
remove/add the stream, the mutex in watcher_t needs to be acquired.
* Thread C is in watcher_t's watch() and holds the mutex while logging on
level 2 or 3. The latter requires the read lock on bus_t, which should
usually be acquirable even if thread A holds it, unless writers are
concurrently waiting on the lock and the implementation blocks new readers
to prevent writer starvation.
* Thread D is removing a logger from the bus (e.g. after an initiate()
call) and is waiting to acquire the write lock on bus_t and is thereby
blocking thread C.
With this change, thread B should not block thread A anymore. So thread D
and thread C should eventually be able to proceed as well.
Thread A could be held up a bit if there is a thread already sending
messages for the same connection, but that should only cause a delay, not a
deadlock, as on_write() and do_write() don't log (or lock) anything while
keeping the entry locked in write mode.
Closes strongswan/strongswan#566
This triggers an error for functions that take chunk_t as variadic
arguments (cat, debug, builders, ASN.1 wrap).
Since we are not using C++, this should be fine as we are only passing
POD types anyway.
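Passing a small POD struct by value through varargs is well-defined in C; a
minimal standalone example, where `chunk_t` and `cat()` are simplified
stand-ins for the real strongSwan type and functions:

```c
#include <stdarg.h>
#include <string.h>

/* stand-in for strongSwan's chunk_t: a POD struct passed by value */
typedef struct {
	unsigned char *ptr;
	size_t len;
} chunk_t;

/* concatenate a given number of chunks into 'out' (simplified sketch) */
static size_t cat(unsigned char *out, int count, ...)
{
	va_list args;
	size_t total = 0;
	int i;

	va_start(args, count);
	for (i = 0; i < count; i++)
	{
		chunk_t c = va_arg(args, chunk_t);   /* POD struct via va_arg */
		memcpy(out + total, c.ptr, c.len);
		total += c.len;
	}
	va_end(args);
	return total;
}
```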
Adds support for CMS-style signatures in PKCS#7 containers, which allows
verifying RSA-PSS and ECDSA signatures.
Ed25519 signatures should be supported when verifying; however, they
currently can't be created. Ed448 signatures are currently not supported.
That's because RFC 8419 has very strict requirements regarding the
hash algorithms used for signed attributes. With Ed25519 only SHA-512 is
allowed (pki currently has an issue with Ed25519 in combination with
SHA-512 due to its associated HASH_IDENTITY) and with Ed448 only SHAKE256
with 512-bit output, which has to be encoded in the algorithmIdentifier
parameters (something we currently don't support at all).
Closes strongswan/strongswan#1615
For the legacy schemes with rsaEncryption, nothing changes, but if an
actual signature scheme is encoded, we use it to find the key and
verify the signature.
The descriptions for the PKCS#7 structure are adapted for CMS.