* add wc_linuxkm_drbg_ctx.n_rngs, and in wc_linuxkm_drbg_init_tfm(), set it to max(4, nr_cpu_ids), to avoid stalling on unicore targets (see the sketch after this list);
* add explanatory comments re architecture to get_drbg() and get_drbg_n();
* add opportunistic cond_resched() to get_drbg_n();
* add runtime asserts in get_drbg(), wc_linuxkm_drbg_seed(), and get_default_drbg_ctx(), checking that we have the right tfm with an allocated DRBG array;
* wc_linuxkm_drbg_startup(): return failure if registering the random_bytes handlers fails;
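
A minimal sketch of the sizing and lookup described above. Only n_rngs, wc_linuxkm_drbg_init_tfm(), get_drbg_n(), and the max(4, nr_cpu_ids) policy come from these notes; the struct layout, helper bodies, and the in_atomic() guard are assumptions:

    #include <linux/kernel.h>
    #include <linux/crypto.h>     /* crypto_tfm_ctx() */
    #include <linux/cpumask.h>    /* nr_cpu_ids */
    #include <linux/preempt.h>    /* in_atomic() */
    #include <linux/sched.h>      /* cond_resched() */

    struct wc_rng_inst;           /* hypothetical per-slot DRBG wrapper */

    struct wc_linuxkm_drbg_ctx {
        unsigned int n_rngs;      /* max(4, nr_cpu_ids): even unicore targets
                                   * get several slots to rotate through, so
                                   * one busy slot can't stall all callers. */
        struct wc_rng_inst *rngs; /* array of n_rngs slots */
    };

    static int wc_linuxkm_drbg_init_tfm(struct crypto_tfm *tfm)
    {
        struct wc_linuxkm_drbg_ctx *ctx = crypto_tfm_ctx(tfm);

        ctx->n_rngs = max(4U, nr_cpu_ids);
        /* ... allocate and seed ctx->rngs[0 .. n_rngs-1] ... */
        return 0;
    }

    /* Try one slot; get_drbg() starts at the current CPU's slot and walks
     * the array until a trylock succeeds. */
    static struct wc_rng_inst *get_drbg_n(struct wc_linuxkm_drbg_ctx *ctx,
                                          unsigned int n)
    {
        /* opportunistic yield on contention, when rescheduling is allowed */
        if (!in_atomic())
            cond_resched();
        /* ... trylock and return &ctx->rngs[n % ctx->n_rngs], or NULL ... */
        return NULL;
    }
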
linuxkm/patches/6.1.73/WOLFSSL_LINUXKM_HAVE_GET_RANDOM_CALLBACKS-6v1v73.patch: fix flub.
* implement interception of _get_random_bytes() and get_random_bytes_user() (implicitly intercepting /dev/random and /dev/urandom), adding the following functions (a kprobe/kretprobe sketch follows this list):
* get_crypto_default_rng()
* get_default_drbg_ctx()
* wc__get_random_bytes()
* wc_get_random_bytes_user()
* wc_extract_crng_user()
* wc_mix_pool_bytes()
* wc_crng_reseed()
* wc_get_random_bytes_by_kprobe()
* wc_get_random_bytes_user_kretprobe_enter()
* wc_get_random_bytes_user_kretprobe_exit()
* add LINUXKM_DRBG_GET_RANDOM_BYTES sections to wc_linuxkm_drbg_startup() and wc_linuxkm_drbg_cleanup()
* add linuxkm/patches/*/WOLFSSL_LINUXKM_HAVE_GET_RANDOM_CALLBACKS-*.patch, initially for versions:
* 5.10.17
* 5.10.236
* 5.15
* 5.17
* 6.1.73
* 6.12
* 6.15
* remove "*.patch" from .gitignore.
* add linuxkm/patches/regen-patches.sh.
* in wc_linuxkm_drbg_ctx_clear(), check lock count before freeing.
* in get_drbg() and put_drbg(), use migrate_disable(), not DISABLE_VECTOR_REGISTERS().
* in wc_linuxkm_drbg_generate(), explicitly DISABLE_VECTOR_REGISTERS() for the crypto_default_rng.
* in wc_linuxkm_drbg_generate(), add DRBG reinitialization code to handle RNG_FAILURE_E (sketched after this list). This handles the situation where a DRBG was instantiated in a vector-ops-allowed context, caching a vectorized SHA256 method, but was later used in a no-vector-ops-allowed context.
* in wc_linuxkm_drbg_seed(), add DISABLE_VECTOR_REGISTERS() wrapper around wc_RNG_DRBG_Reseed() for crypto_default_rng.
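
For the interception entries above, a hedged sketch of the probe registration path. Only the wc_* handler names and the probed kernel symbols come from these notes; treating wc_get_random_bytes_by_kprobe() as a kprobe pre-handler, the kretprobe pairing of the enter/exit handlers, and the wc_register_random_bytes_handlers() wrapper are assumptions, and the handler bodies (how requests are actually routed to the wolfCrypt DRBG) are elided:

    #include <linux/kprobes.h>
    #include <linux/threads.h>    /* NR_CPUS */

    static int wc_get_random_bytes_by_kprobe(struct kprobe *p,
                                             struct pt_regs *regs)
    {
        /* body elided: route the request to the wolfCrypt DRBG */
        return 0;
    }

    static struct kprobe wc_get_random_bytes_kp = {
        .symbol_name = "_get_random_bytes",
        .pre_handler = wc_get_random_bytes_by_kprobe,
    };

    static int wc_get_random_bytes_user_kretprobe_enter(
        struct kretprobe_instance *ri, struct pt_regs *regs)
    {
        /* body elided: stash the destination argument for the exit handler */
        return 0;   /* 0 => run the exit handler on return */
    }

    static int wc_get_random_bytes_user_kretprobe_exit(
        struct kretprobe_instance *ri, struct pt_regs *regs)
    {
        /* body elided: rewrite the user buffer with DRBG output */
        return 0;
    }

    static struct kretprobe wc_get_random_bytes_user_krp = {
        .kp.symbol_name = "get_random_bytes_user",
        .entry_handler  = wc_get_random_bytes_user_kretprobe_enter,
        .handler        = wc_get_random_bytes_user_kretprobe_exit,
        .maxactive      = 2 * NR_CPUS,
    };

    /* Called from wc_linuxkm_drbg_startup(); per the notes above, a failure
     * here is now propagated as a startup failure. */
    static int wc_register_random_bytes_handlers(void)
    {
        int ret = register_kprobe(&wc_get_random_bytes_kp);
        if (ret)
            return ret;
        ret = register_kretprobe(&wc_get_random_bytes_user_krp);
        if (ret)
            unregister_kprobe(&wc_get_random_bytes_kp);
        return ret;
    }
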
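A sketch of the generate-path recovery described in the RNG_FAILURE_E entry above. The wc_rng_inst wrapper, get_drbg()/put_drbg(), and the error mapping are assumptions; only the retry-after-reinstantiation logic and the migrate_disable() note come from the list:

    #include <crypto/internal/rng.h>            /* crypto_rng_ctx() */
    #include <wolfssl/wolfcrypt/random.h>
    #include <wolfssl/wolfcrypt/error-crypt.h>  /* RNG_FAILURE_E */

    struct wc_rng_inst { WC_RNG rng; };         /* hypothetical slot wrapper */
    struct wc_linuxkm_drbg_ctx;
    /* get_drbg()/put_drbg() use migrate_disable()/migrate_enable() internally,
     * per the notes above, rather than DISABLE_VECTOR_REGISTERS(). */
    struct wc_rng_inst *get_drbg(struct wc_linuxkm_drbg_ctx *ctx);
    void put_drbg(struct wc_rng_inst *drbg);

    static int wc_linuxkm_drbg_generate(struct crypto_rng *tfm,
                                        const u8 *src, unsigned int slen,
                                        u8 *dst, unsigned int dlen)
    {
        struct wc_linuxkm_drbg_ctx *ctx = crypto_rng_ctx(tfm);
        struct wc_rng_inst *drbg = get_drbg(ctx);
        int ret;

        (void)src; (void)slen;  /* additional-input handling elided */

        ret = wc_RNG_GenerateBlock(&drbg->rng, dst, dlen);

        if (ret == RNG_FAILURE_E) {
            /* The DRBG may have been instantiated in a vector-ops-allowed
             * context and cached a vectorized SHA-256 method it can't use
             * here; reinstantiate it in the current context and retry. */
            wc_FreeRng(&drbg->rng);
            if (wc_InitRng(&drbg->rng) == 0)
                ret = wc_RNG_GenerateBlock(&drbg->rng, dst, dlen);
        }

        put_drbg(drbg);

        return (ret == 0) ? 0 : -EFAULT;
    }
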
linuxkm/x86_vector_register_glue.c:
* add crash recovery logic to wc_linuxkm_fpu_state_assoc_unlikely()
* in wc_linuxkm_fpu_state_assoc(), when wc_linuxkm_fpu_states is null, don't call wc_linuxkm_fpu_state_assoc_unlikely() if !assume_fpu_began.
* in can_save_vector_registers_x86(), save_vector_registers_x86(), and restore_vector_registers_x86(), check for hard interrupt context first, to return early failure if current->pid is unusable (see the sketch after this list).
* in save_vector_registers_x86(), tweak logic around WC_FPU_INHIBITED_FLAG, adding local_bh_disable()...local_bh_enable() to provide for safe recursion.
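
A sketch of the interrupt-context check and recursion guard described above; the signature, return code, and the WC_FPU_INHIBITED_FLAG bookkeeping are simplified or elided:

    #include <linux/preempt.h>        /* in_nmi(), in_irq() */
    #include <linux/bottom_half.h>    /* local_bh_disable()/local_bh_enable() */
    #include <wolfssl/wolfcrypt/error-crypt.h>

    int save_vector_registers_x86(void)
    {
        /* Hard-IRQ/NMI context first: current->pid can't be used to key the
         * per-task FPU state table there, so fail before touching it.
         * (in_irq() is the pre-5.11 spelling of in_hardirq().) */
        if (in_nmi() || in_irq())
            return BAD_STATE_E;

        /* Keep softirqs off while inspecting and updating
         * WC_FPU_INHIBITED_FLAG, so that a softirq on this CPU that recurses
         * into this code sees a consistent flag state. */
        local_bh_disable();
        /* ... WC_FPU_INHIBITED_FLAG bookkeeping, FPU save, etc. ... */
        local_bh_enable();

        return 0;
    }
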
wolfcrypt/src/random.c: optimization: in Hash_df(), for WOLFSSL_LINUXKM, don't allocate digest[WC_SHA256_DIGEST_SIZE] on the heap; keep it on the stack.
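
The shape of that change inside Hash_df(); the exact preprocessor guard and the heap-path details are assumptions, only the WOLFSSL_LINUXKM stack carve-out comes from the note:

    #if defined(WOLFSSL_SMALL_STACK) && !defined(WOLFSSL_LINUXKM)
        byte* digest = (byte*)XMALLOC(WC_SHA256_DIGEST_SIZE, drbg->heap,
                                      DYNAMIC_TYPE_DIGEST);
        if (digest == NULL)
            return DRBG_FAILURE;
    #else
        /* 32 bytes is cheap even on a kernel stack, and avoids a heap
         * round-trip in the DRBG hot path. */
        byte digest[WC_SHA256_DIGEST_SIZE];
    #endif
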
wolfssl/wolfcrypt/types.h: add WOLFSSL_NO_ASM no-op definitions for DISABLE_VECTOR_REGISTERS() and REENABLE_VECTOR_REGISTERS().
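
A sketch of those fallback definitions; the exact expansions are assumptions, following the existing do-nothing macro style in types.h:

    #ifdef WOLFSSL_NO_ASM
        #ifndef DISABLE_VECTOR_REGISTERS
            #define DISABLE_VECTOR_REGISTERS() WC_DO_NOTHING
        #endif
        #ifndef REENABLE_VECTOR_REGISTERS
            #define REENABLE_VECTOR_REGISTERS() WC_DO_NOTHING
        #endif
    #endif
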
configure.ac:
* move initial detection of --enable-linuxkm and --enable-linuxkm-defaults earlier, so that HMAC_COPY_DEFAULT picks it up.
* add ENABLED_ENTROPY_MEMUSE_DEFAULT, and enable it by default when ENABLED_LINUXKM_DEFAULTS is set.
* update linuxkm-lkcapi-register help message.
linuxkm/linuxkm_wc_port.h:
* add my_kallsyms_lookup_name().
* add preempt_count, _raw_spin_lock_irqsave, _raw_spin_trylock, _raw_spin_unlock_irqrestore, and _cond_resched to wolfssl_linuxkm_pie_redirect_table, and add a spin_unlock_irqrestore() macro to mask the native inline.
* move the linuxkm mutex wrappers from wolfcrypt/src/wc_port.c to linuxkm_wc_port.h, make them inlines, and add a new default spinlock-based implementation, with the old mutex-based method now gated on WOLFSSL_LINUXKM_USE_MUTEXES (see the sketch after this list).
* change malloc() and realloc() wrappers from GFP_KERNEL to GFP_ATOMIC.
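
A sketch of the new default spinlock-backed wrappers as static inlines; the wolfSSL_Mutex layout (lock plus saved IRQ flags) and the use of the irqsave variants are assumptions suggested by the redirect-table additions above:

    #include <linux/spinlock.h>
    #include <linux/mutex.h>

    #ifdef WOLFSSL_LINUXKM_USE_MUTEXES
        typedef struct mutex wolfSSL_Mutex;    /* previous behavior */
    #else
        typedef struct {
            spinlock_t lock;
            unsigned long irq_flags;           /* written only by the holder */
        } wolfSSL_Mutex;
    #endif

    static inline int wc_InitMutex(wolfSSL_Mutex* m)
    {
    #ifdef WOLFSSL_LINUXKM_USE_MUTEXES
        mutex_init(m);
    #else
        spin_lock_init(&m->lock);
    #endif
        return 0;
    }

    static inline int wc_LockMutex(wolfSSL_Mutex* m)
    {
    #ifdef WOLFSSL_LINUXKM_USE_MUTEXES
        mutex_lock(m);
    #else
        /* IRQ-save variant, so the lock is safe to take from any context --
         * which is also why the malloc()/realloc() wrappers now need
         * GFP_ATOMIC. */
        spin_lock_irqsave(&m->lock, m->irq_flags);
    #endif
        return 0;
    }

    static inline int wc_UnLockMutex(wolfSSL_Mutex* m)
    {
    #ifdef WOLFSSL_LINUXKM_USE_MUTEXES
        mutex_unlock(m);
    #else
        spin_unlock_irqrestore(&m->lock, m->irq_flags);
    #endif
        return 0;
    }
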
linuxkm/lkcapi_glue.c: make misc.h/misc.c inclusion unconditional, and trim now-redundant inclusions out of lkcapi_dh_glue.c and lkcapi_ecdh_glue.c.