| In the Linux kernel, the following vulnerability has been resolved:
interconnect: debugfs: initialize src_node and dst_node to empty strings
The debugfs_create_str() API assumes that the string pointer is either NULL
or points to valid kmalloc() memory. Leaving the pointers uninitialized can
therefore lead to invalid memory accesses when the debugfs files are read or
written.
Initialize src_node and dst_node to empty strings before creating the
debugfs entries to guarantee that reads and writes are safe. |
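A minimal C sketch of the fix described in the entry above, assuming a
simplified debugfs setup helper (the function and variable names are
illustrative, not the exact driver code):

#include <linux/debugfs.h>
#include <linux/slab.h>
#include <linux/string.h>

static char *src_node;
static char *dst_node;

static int icc_debugfs_client_init(struct dentry *parent)
{
        /* Hand debugfs_create_str() valid kmalloc()ed empty strings rather
         * than uninitialized pointers, so reads and writes are always safe. */
        src_node = kstrdup("", GFP_KERNEL);
        dst_node = kstrdup("", GFP_KERNEL);
        if (!src_node || !dst_node)
                return -ENOMEM;

        debugfs_create_str("src_node", 0600, parent, &src_node);
        debugfs_create_str("dst_node", 0600, parent, &dst_node);
        return 0;
}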
| In the Linux kernel, the following vulnerability has been resolved:
libceph: reset sparse-read state in osd_fault()
When a fault occurs, the connection is abandoned, reestablished, and any
pending operations are retried. The OSD client tracks the progress of a
sparse-read reply using a separate state machine, largely independent of
the messenger's state.
If a connection is lost mid-payload or the sparse-read state machine
returns an error, the sparse-read state is not reset. The OSD client
will then interpret the beginning of a new reply as the continuation of
the old one. If this makes the sparse-read machinery enter a failure
state, it may never recover, producing loops like:
libceph: [0] got 0 extents
libceph: data len 142248331 != extent len 0
libceph: osd0 (1)...:6801 socket error on read
libceph: data len 142248331 != extent len 0
libceph: osd0 (1)...:6801 socket error on read
Therefore, reset the sparse-read state in osd_fault(), ensuring retries
start from a clean state. |
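A hedged sketch of the idea in the entry above: clear the per-OSD sparse-read
state whenever the connection faults, so a retried reply is parsed from
scratch. The structure and field names below are illustrative, not the actual
libceph definitions:

/* Illustrative reset helper; real libceph keeps this state machine in the
 * OSD's sparse-read context. */
static void osd_reset_sparse_read(struct ceph_sparse_read *sr)
{
        sr->sr_state = 0;       /* back to "expect a new reply header" */
        sr->sr_index = 0;
        sr->sr_count = 0;
        sr->sr_datalen = 0;
}

static void osd_fault(struct ceph_connection *con)
{
        struct ceph_osd *osd = con->private;

        osd_reset_sparse_read(&osd->o_sparse_read);     /* new: clean slate */
        /* ... existing fault handling: close the session, requeue and
         * resend the pending requests ... */
}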
| In the Linux kernel, the following vulnerability has been resolved:
nvmet: fix race in nvmet_bio_done() leading to NULL pointer dereference
There is a race condition in nvmet_bio_done() that can cause a NULL
pointer dereference in blk_cgroup_bio_start():
1. nvmet_bio_done() is called when a bio completes
2. nvmet_req_complete() is called, which invokes req->ops->queue_response(req)
3. The queue_response callback can re-queue and re-submit the same request
4. The re-submission reuses the same inline_bio from nvmet_req
5. Meanwhile, nvmet_req_bio_put() (called after nvmet_req_complete)
invokes bio_uninit() for inline_bio, which sets bio->bi_blkg to NULL
6. The re-submitted bio enters submit_bio_noacct_nocheck()
7. blk_cgroup_bio_start() dereferences bio->bi_blkg, causing a crash:
BUG: kernel NULL pointer dereference, address: 0000000000000028
#PF: supervisor read access in kernel mode
RIP: 0010:blk_cgroup_bio_start+0x10/0xd0
Call Trace:
submit_bio_noacct_nocheck+0x44/0x250
nvmet_bdev_execute_rw+0x254/0x370 [nvmet]
process_one_work+0x193/0x3c0
worker_thread+0x281/0x3a0
Fix this by reordering nvmet_bio_done() to call nvmet_req_bio_put()
BEFORE nvmet_req_complete(). This ensures the bio is cleaned up before
the request can be re-submitted, preventing the race condition. |
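A condensed sketch of the reordering described above, assuming the existing
nvmet helpers; the status is computed before the bio is put, since the bio
must not be touched afterwards:

void nvmet_bio_done(struct bio *bio)
{
        struct nvmet_req *req = bio->bi_private;
        u16 status = blk_to_nvme_status(req, bio->bi_status);

        /* Clean up the (possibly inline) bio first ... */
        nvmet_req_bio_put(req, bio);
        /* ... and only then complete the request, which may re-queue and
         * re-submit it, reusing the same inline_bio. */
        nvmet_req_complete(req, status);
}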
| In the Linux kernel, the following vulnerability has been resolved:
net: fix segmentation of forwarding fraglist GRO
This patch enhances GSO segment handling by properly checking
the SKB_GSO_DODGY flag for frag_list GSO packets, addressing
low throughput issues observed when a station accesses IPv4
servers via hotspots with an IPv6-only upstream interface.
Specifically, it fixes a bug in GSO segmentation when forwarding
GRO packets containing a frag_list. The function skb_segment_list
cannot correctly process GRO skbs that have been converted by XLAT,
since XLAT only translates the header of the head skb. Consequently,
skbs in the frag_list may remain untranslated, resulting in protocol
inconsistencies and reduced throughput.
To address this, the patch explicitly sets the SKB_GSO_DODGY flag
for GSO packets in XLAT's IPv4/IPv6 protocol translation helpers
(bpf_skb_proto_4_to_6 and bpf_skb_proto_6_to_4). This marks GSO
packets as potentially modified after protocol translation. As a
result, GSO segmentation will avoid using skb_segment_list and
instead fall back to skb_segment for packets with the SKB_GSO_DODGY
flag. This ensures that only safe and fully translated frag_list
packets are processed by skb_segment_list, resolving protocol
inconsistencies and improving throughput when forwarding GRO packets
converted by XLAT. |
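A condensed C sketch of the marking step, assuming the general shape of the
XLAT helper (only the GSO-related lines are shown):

static int bpf_skb_proto_4_to_6(struct sk_buff *skb)
{
        /* ... header expansion and IPv4 -> IPv6 rewrite of the head skb ... */

        if (skb_is_gso(skb)) {
                struct skb_shared_info *shinfo = skb_shinfo(skb);

                /* The frag_list members were not translated, so mark the
                 * packet as modified; GSO segmentation will then use
                 * skb_segment() instead of skb_segment_list(). */
                shinfo->gso_type |= SKB_GSO_DODGY;

                /* ... existing gso_type fixups (e.g. TCPv4 -> TCPv6) ... */
        }
        return 0;
}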
| In the Linux kernel, the following vulnerability has been resolved:
gpio: virtuser: fix UAF in configfs release path
The gpio-virtuser configfs release path uses guard(mutex) to protect
the device structure. However, the device is freed before the guard
cleanup runs, causing mutex_unlock() to operate on freed memory.
Specifically, gpio_virtuser_device_config_group_release() destroys
the mutex and frees the device while still inside the guard(mutex)
scope. When the function returns, the guard cleanup invokes
mutex_unlock(&dev->lock), resulting in a slab use-after-free.
Limit the mutex lifetime by using a scoped_guard() only around the
activation check, so that the lock is released before mutex_destroy()
and kfree() are called. |
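A hedged sketch of the narrowed lock scope described above, with the
activation-check helpers shown only schematically (names are illustrative):

static void
gpio_virtuser_device_config_group_release(struct config_item *item)
{
        struct gpio_virtuser_device *dev = to_gpio_virtuser_device(item);

        /* Hold dev->lock only for the activation check; it must be released
         * before the mutex is destroyed and the device is freed. */
        scoped_guard(mutex, &dev->lock) {
                if (gpio_virtuser_device_is_live(dev))
                        gpio_virtuser_device_deactivate(dev);
        }

        mutex_destroy(&dev->lock);
        kfree(dev);
}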
| In the Linux kernel, the following vulnerability has been resolved:
rocker: fix memory leak in rocker_world_port_post_fini()
In rocker_world_port_pre_init(), rocker_port->wpriv is allocated with
kzalloc(wops->port_priv_size, GFP_KERNEL). However, in
rocker_world_port_post_fini(), the memory is only freed when
wops->port_post_fini callback is set:
if (!wops->port_post_fini)
return;
wops->port_post_fini(rocker_port);
kfree(rocker_port->wpriv);
Since rocker_ofdpa_ops does not implement the port_post_fini callback
(it is NULL), the wpriv memory allocated for each port is never freed
when ports are removed. This leads to a memory leak of
sizeof(struct ofdpa_port) bytes per port on every device removal.
Fix this by always calling kfree(rocker_port->wpriv) regardless of
whether the port_post_fini callback exists. |
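A short sketch of the corrected teardown, keeping the optional callback but
freeing wpriv unconditionally (the surrounding structure is simplified):

static void rocker_world_port_post_fini(struct rocker_port *rocker_port)
{
        const struct rocker_world_ops *wops = rocker_port->rocker->wops;

        if (wops->port_post_fini)
                wops->port_post_fini(rocker_port);

        /* Always release the per-port private data allocated in
         * rocker_world_port_pre_init(), even when the world ops (such as
         * rocker_ofdpa_ops) provide no port_post_fini callback. */
        kfree(rocker_port->wpriv);
        rocker_port->wpriv = NULL;
}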
| In the Linux kernel, the following vulnerability has been resolved:
ipv6: annotate data-race in ndisc_router_discovery()
syzbot found that ndisc_router_discovery() could read and write
in6_dev->ra_mtu without holding a lock [1]
This looks fine; IFLA_INET6_RA_MTU is best effort.
Add READ_ONCE()/WRITE_ONCE() to document the race.
Note that we might also reject illegal MTU values
(mtu < IPV6_MIN_MTU || mtu > skb->dev->mtu) in a future patch.
[1]
BUG: KCSAN: data-race in ndisc_router_discovery / ndisc_router_discovery
read to 0xffff888119809c20 of 4 bytes by task 25817 on cpu 1:
ndisc_router_discovery+0x151d/0x1c90 net/ipv6/ndisc.c:1558
ndisc_rcv+0x2ad/0x3d0 net/ipv6/ndisc.c:1841
icmpv6_rcv+0xe5a/0x12f0 net/ipv6/icmp.c:989
ip6_protocol_deliver_rcu+0xb2a/0x10d0 net/ipv6/ip6_input.c:438
ip6_input_finish+0xf0/0x1d0 net/ipv6/ip6_input.c:489
NF_HOOK include/linux/netfilter.h:318 [inline]
ip6_input+0x5e/0x140 net/ipv6/ip6_input.c:500
ip6_mc_input+0x27c/0x470 net/ipv6/ip6_input.c:590
dst_input include/net/dst.h:474 [inline]
ip6_rcv_finish+0x336/0x340 net/ipv6/ip6_input.c:79
...
write to 0xffff888119809c20 of 4 bytes by task 25816 on cpu 0:
ndisc_router_discovery+0x155a/0x1c90 net/ipv6/ndisc.c:1559
ndisc_rcv+0x2ad/0x3d0 net/ipv6/ndisc.c:1841
icmpv6_rcv+0xe5a/0x12f0 net/ipv6/icmp.c:989
ip6_protocol_deliver_rcu+0xb2a/0x10d0 net/ipv6/ip6_input.c:438
ip6_input_finish+0xf0/0x1d0 net/ipv6/ip6_input.c:489
NF_HOOK include/linux/netfilter.h:318 [inline]
ip6_input+0x5e/0x140 net/ipv6/ip6_input.c:500
ip6_mc_input+0x27c/0x470 net/ipv6/ip6_input.c:590
dst_input include/net/dst.h:474 [inline]
ip6_rcv_finish+0x336/0x340 net/ipv6/ip6_input.c:79
...
value changed: 0x00000000 -> 0xe5400659 |
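The annotation itself is tiny; a sketch of the affected lines in
ndisc_router_discovery(), with the surrounding MTU-option parsing elided:

        /* Lockless, best-effort update of the advertised RA MTU; annotate
         * both sides so KCSAN knows the race is intentional. */
        if (READ_ONCE(in6_dev->ra_mtu) != mtu)
                WRITE_ONCE(in6_dev->ra_mtu, mtu);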
| In the Linux kernel, the following vulnerability has been resolved:
drm: Do not allow userspace to trigger kernel warnings in drm_gem_change_handle_ioctl()
Since GEM bo handles are u32 in the uapi and the internal implementation
uses idr_alloc() which uses int ranges, passing a new handle larger than
INT_MAX trivially triggers a kernel warning:
idr_alloc():
...
if (WARN_ON_ONCE(start < 0))
return -EINVAL;
...
Fix it by rejecting new handles above INT_MAX and at the same time make
the end limit calculation more obvious by moving it into the int domain. |
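A hedged sketch of the added check, using an illustrative ioctl-argument
field name (the exact uapi struct is not shown in the entry):

        /* drm_gem_change_handle_ioctl() (sketch): the handle is a u32 in the
         * uapi, but the IDR code works on int ranges, so values above
         * INT_MAX would trigger WARN_ON_ONCE(start < 0) in idr_alloc().
         * Reject them up front. */
        if (args->new_handle > INT_MAX)
                return -EINVAL;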
| In the Linux kernel, the following vulnerability has been resolved:
netfs: Fix early read unlock of page with EOF in middle
The read result collection for buffered reads seems to run ahead of the
completion of subrequests under some circumstances, as can be seen in the
following log snippet:
9p_client_res: client 18446612686390831168 response P9_TREAD tag 0 err 0
...
netfs_sreq: R=00001b55[1] DOWN TERM f=192 s=0 5fb2/5fb2 s=5 e=0
...
netfs_collect_folio: R=00001b55 ix=00004 r=4000-5000 t=4000/5fb2
netfs_folio: i=157f3 ix=00004-00004 read-done
netfs_folio: i=157f3 ix=00004-00004 read-unlock
netfs_collect_folio: R=00001b55 ix=00005 r=5000-5fb2 t=5000/5fb2
netfs_folio: i=157f3 ix=00005-00005 read-done
netfs_folio: i=157f3 ix=00005-00005 read-unlock
...
netfs_collect_stream: R=00001b55[0:] cto=5fb2 frn=ffffffff
netfs_collect_state: R=00001b55 col=5fb2 cln=6000 n=c
netfs_collect_stream: R=00001b55[0:] cto=5fb2 frn=ffffffff
netfs_collect_state: R=00001b55 col=5fb2 cln=6000 n=8
...
netfs_sreq: R=00001b55[2] ZERO SUBMT f=000 s=5fb2 0/4e s=0 e=0
netfs_sreq: R=00001b55[2] ZERO TERM f=102 s=5fb2 4e/4e s=5 e=0
The 'cto=5fb2' indicates the file position we've collected results to
so far - but we still have 0x4e more bytes to go - so we shouldn't have
collected folio ix=00005 yet. The 'ZERO' subreq that clears the tail
happens after we unlock the folio, allowing the application to see the
uncleared tail through mmap.
The problem is that netfs_read_unlock_folios() will unlock a folio in which
the amount of read results collected reaches the EOF position - but the ZERO
subreq lies beyond that point and so completes afterwards.
Fix this by changing the end check to always be the end of the folio and
never the end of the file.
In the future, I should look at clearing to the end of the folio here rather
than adding a ZERO subreq to do this. On the other hand, the ZERO subreq can
run in parallel with an async READ subreq. Further, the ZERO subreq may still
be necessary to, say, handle extents in a ceph file that don't have any
backing store and are thus implicitly all zeros.
This can be reproduced by creating a file, the size of which doesn't align
to a page boundary, e.g. 24998 (0x5fb2) bytes and then doing something
like:
xfs_io -c "mmap -r 0 0x6000" -c "madvise -d 0 0x6000" \
-c "mread -v 0 0x6000" /xfstest.test/x
The last 0x4e bytes should all be 00, but if the tail hasn't been cleared
yet, you may see rubbish there. This can be reproduced with kafs by
modifying the kernel to disable the call to netfs_read_subreq_progress()
and to stop afs_issue_read() from doing the async call for NETFS_READAHEAD.
Reproduction can be made easier by inserting an mdelay(100) in
netfs_issue_read() for the ZERO-subreq case.
AFS and CIFS are normally unlikely to show this as they dispatch READ ops
asynchronously, which allows the ZERO-subreq to finish first. 9P's READ op is
completely synchronous, so the ZERO-subreq will always happen after. It isn't
seen all the time, though, because the collection may be done in a worker
thread. |
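A hypothetical sketch of the changed unlock condition in
netfs_read_unlock_folios(); the helper's real body differs, but the point is
that the threshold becomes the end of the folio rather than the (earlier) EOF
position:

        /* Only unlock once collection has reached the end of this folio, so
         * a trailing ZERO subreq cannot be observed through mmap before it
         * has cleared the tail. */
        loff_t fend = folio_pos(folio) + folio_size(folio);

        /* before (roughly): fend = min_t(loff_t, fend, i_size_read(inode)); */
        if (rreq->collected_to < fend)
                break;
        folio_unlock(folio);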
| In the Linux kernel, the following vulnerability has been resolved:
mISDN: annotate data-race around dev->work
dev->work can be read locklessly in mISDN_read()
and mISDN_poll(). Add READ_ONCE()/WRITE_ONCE() annotations.
BUG: KCSAN: data-race in mISDN_ioctl / mISDN_read
write to 0xffff88812d848280 of 4 bytes by task 10864 on cpu 1:
misdn_add_timer drivers/isdn/mISDN/timerdev.c:175 [inline]
mISDN_ioctl+0x2fb/0x550 drivers/isdn/mISDN/timerdev.c:233
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:597 [inline]
__se_sys_ioctl+0xce/0x140 fs/ioctl.c:583
__x64_sys_ioctl+0x43/0x50 fs/ioctl.c:583
x64_sys_call+0x14b0/0x3000 arch/x86/include/generated/asm/syscalls_64.h:17
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xd8/0x2c0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
read to 0xffff88812d848280 of 4 bytes by task 10857 on cpu 0:
mISDN_read+0x1f2/0x470 drivers/isdn/mISDN/timerdev.c:112
do_loop_readv_writev fs/read_write.c:847 [inline]
vfs_readv+0x3fb/0x690 fs/read_write.c:1020
do_readv+0xe7/0x210 fs/read_write.c:1080
__do_sys_readv fs/read_write.c:1165 [inline]
__se_sys_readv fs/read_write.c:1162 [inline]
__x64_sys_readv+0x45/0x50 fs/read_write.c:1162
x64_sys_call+0x2831/0x3000 arch/x86/include/generated/asm/syscalls_64.h:20
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xd8/0x2c0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
value changed: 0x00000000 -> 0x00000001 |
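The annotations are mechanical; a sketch of both sides, with the surrounding
timerdev code simplified:

        /* Writer, e.g. misdn_add_timer() via mISDN_ioctl(): */
        WRITE_ONCE(dev->work, 1);
        wake_up_interruptible(&dev->wait);

        /* Lockless reader, e.g. in mISDN_poll(): */
        if (READ_ONCE(dev->work))
                mask |= EPOLLIN | EPOLLRDNORM;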
| In the Linux kernel, the following vulnerability has been resolved:
drm/bridge: synopsys: dw-dp: fix error paths of dw_dp_bind
Fix several issues in dw_dp_bind() error handling:
1. Missing return after drm_bridge_attach() failure - the function
continued execution instead of returning an error.
2. Resource leak: drm_dp_aux_register() is not a devm function, so
drm_dp_aux_unregister() must be called on all error paths after
aux registration succeeds. This affects errors from:
- drm_bridge_attach()
- phy_init()
- devm_add_action_or_reset()
- platform_get_irq()
- devm_request_threaded_irq()
3. Bug fix: platform_get_irq() returns the IRQ number or a negative
error code, but the error path was returning ERR_PTR(ret) instead
of ERR_PTR(dp->irq).
Use a goto label for cleanup to ensure consistent error handling. |
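A condensed sketch of the reworked error handling; the function signature and
most of the body are simplified, only the cleanup structure matters here:

struct dw_dp *dw_dp_bind(struct device *dev, struct drm_encoder *encoder)
{
        struct dw_dp *dp;
        int ret;

        /* ... allocate dp, set up the bridge, register the AUX channel ... */
        ret = drm_dp_aux_register(&dp->aux);
        if (ret)
                return ERR_PTR(ret);

        ret = drm_bridge_attach(encoder, &dp->bridge, NULL, 0);
        if (ret)
                goto err_unregister_aux;        /* was: missing return */

        dp->irq = platform_get_irq(to_platform_device(dev), 0);
        if (dp->irq < 0) {
                ret = dp->irq;          /* was: ERR_PTR(ret) with the wrong value */
                goto err_unregister_aux;
        }

        /* ... phy_init(), devm_add_action_or_reset(),
         * devm_request_threaded_irq(): all jump to err_unregister_aux ... */

        return dp;

err_unregister_aux:
        drm_dp_aux_unregister(&dp->aux);        /* not devm-managed */
        return ERR_PTR(ret);
}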
| In the Linux kernel, the following vulnerability has been resolved:
slab: fix kmalloc_nolock() context check for PREEMPT_RT
On PREEMPT_RT kernels, local_lock becomes a sleeping lock. The current
check in kmalloc_nolock() only verifies we're not in NMI or hard IRQ
context, but misses the case where preemption is disabled.
When a BPF program runs from a tracepoint with preemption disabled
(preempt_count > 0), kmalloc_nolock() proceeds to call
local_lock_irqsave() which attempts to acquire a sleeping lock,
triggering:
BUG: sleeping function called from invalid context
in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 6128
preempt_count: 2, expected: 0
Fix this by checking !preemptible() on PREEMPT_RT, which directly
expresses the constraint that we cannot take a sleeping lock when
preemption is disabled. This encompasses the previous checks for NMI
and hard IRQ contexts while also catching cases where preemption is
disabled. |
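The added context check is small; a sketch of the relevant lines in
kmalloc_nolock(), with the surrounding allocation logic elided:

        /* On PREEMPT_RT the local_lock taken later is a sleeping lock, so we
         * must be preemptible; !preemptible() also covers NMI and hard-IRQ
         * contexts, which were the only cases checked before. */
        if (IS_ENABLED(CONFIG_PREEMPT_RT) && !preemptible())
                return NULL;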
| In the Linux kernel, the following vulnerability has been resolved:
drm/xe/nvm: Fix double-free on aux add failure
After a successful auxiliary_device_init(), aux_dev->dev.release
(xe_nvm_release_dev()) is responsible for the kfree(nvm). When
auxiliary_device_add() fails, the driver calls auxiliary_device_uninit(),
which calls put_device(), so the .release callback is triggered and frees
the memory associated with the auxiliary_device.
Move the kfree(nvm) into the auxiliary_device_init() failure path and remove
the err goto path to fix the error below.
"
[ 13.232905] ==================================================================
[ 13.232911] BUG: KASAN: double-free in xe_nvm_init+0x751/0xf10 [xe]
[ 13.233112] Free of addr ffff888120635000 by task systemd-udevd/273
[ 13.233120] CPU: 8 UID: 0 PID: 273 Comm: systemd-udevd Not tainted 6.19.0-rc2-lgci-xe-kernel+ #225 PREEMPT(voluntary)
...
[ 13.233125] Call Trace:
[ 13.233126] <TASK>
[ 13.233127] dump_stack_lvl+0x7f/0xc0
[ 13.233132] print_report+0xce/0x610
[ 13.233136] ? kasan_complete_mode_report_info+0x5d/0x1e0
[ 13.233139] ? xe_nvm_init+0x751/0xf10 [xe]
...
"
v2: drop err goto path. (Alexander)
(cherry picked from commit a3187c0c2bbd947ffff97f90d077ac88f9c2a215) |
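A condensed sketch of the corrected flow in xe_nvm_init(); the setup around
these calls is elided:

        ret = auxiliary_device_init(aux_dev);
        if (ret) {
                /* The .release callback is not active yet, so we still own
                 * the allocation here. */
                kfree(nvm);
                return ret;
        }

        ret = auxiliary_device_add(aux_dev);
        if (ret) {
                /* auxiliary_device_uninit() -> put_device() invokes
                 * xe_nvm_release_dev(), which already does the kfree(nvm);
                 * freeing it again here was the KASAN double-free. */
                auxiliary_device_uninit(aux_dev);
                return ret;
        }

        return 0;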
| In the Linux kernel, the following vulnerability has been resolved:
l2tp: avoid one data-race in l2tp_tunnel_del_work()
We should read sk->sk_socket only when dealing with kernel sockets.
syzbot reported the following data-race:
BUG: KCSAN: data-race in l2tp_tunnel_del_work / sk_common_release
write to 0xffff88811c182b20 of 8 bytes by task 5365 on cpu 0:
sk_set_socket include/net/sock.h:2092 [inline]
sock_orphan include/net/sock.h:2118 [inline]
sk_common_release+0xae/0x230 net/core/sock.c:4003
udp_lib_close+0x15/0x20 include/net/udp.h:325
inet_release+0xce/0xf0 net/ipv4/af_inet.c:437
__sock_release net/socket.c:662 [inline]
sock_close+0x6b/0x150 net/socket.c:1455
__fput+0x29b/0x650 fs/file_table.c:468
____fput+0x1c/0x30 fs/file_table.c:496
task_work_run+0x131/0x1a0 kernel/task_work.c:233
resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
__exit_to_user_mode_loop kernel/entry/common.c:44 [inline]
exit_to_user_mode_loop+0x1fe/0x740 kernel/entry/common.c:75
__exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
syscall_exit_to_user_mode_work include/linux/entry-common.h:159 [inline]
syscall_exit_to_user_mode include/linux/entry-common.h:194 [inline]
do_syscall_64+0x1e1/0x2b0 arch/x86/entry/syscall_64.c:100
entry_SYSCALL_64_after_hwframe+0x77/0x7f
read to 0xffff88811c182b20 of 8 bytes by task 827 on cpu 1:
l2tp_tunnel_del_work+0x2f/0x1a0 net/l2tp/l2tp_core.c:1418
process_one_work kernel/workqueue.c:3257 [inline]
process_scheduled_works+0x4ce/0x9d0 kernel/workqueue.c:3340
worker_thread+0x582/0x770 kernel/workqueue.c:3421
kthread+0x489/0x510 kernel/kthread.c:463
ret_from_fork+0x149/0x290 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
value changed: 0xffff88811b818000 -> 0x0000000000000000 |
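A rough sketch of the idea in l2tp_tunnel_del_work(), with the surrounding
teardown simplified: sk->sk_socket is only dereferenced on the kernel-socket
path (tunnel->fd < 0), since for user-space sockets it races with
sock_orphan():

static void l2tp_tunnel_del_work(struct work_struct *work)
{
        struct l2tp_tunnel *tunnel = container_of(work, struct l2tp_tunnel,
                                                  del_work);
        struct sock *sk = tunnel->sock;

        l2tp_tunnel_closeall(tunnel);

        if (tunnel->fd < 0) {
                /* Kernel socket: we own it, so sk->sk_socket is stable. */
                struct socket *sock = sk->sk_socket;

                if (sock) {
                        kernel_sock_shutdown(sock, SHUT_RDWR);
                        sock_release(sock);
                }
        }

        /* ... drop the tunnel reference ... */
}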
| In the Linux kernel, the following vulnerability has been resolved:
perf: Fix refcount warning on event->mmap_count increment
When calling refcount_inc(&event->mmap_count) inside perf_mmap_rb(), the
following warning is triggered:
refcount_t: addition on 0; use-after-free.
WARNING: lib/refcount.c:25
PoC:
struct perf_event_attr attr = {0};
int fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
mmap(NULL, 0x3000, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
int victim = syscall(__NR_perf_event_open, &attr, 0, -1, fd,
PERF_FLAG_FD_OUTPUT);
mmap(NULL, 0x3000, PROT_READ | PROT_WRITE, MAP_SHARED, victim, 0);
This occurs when creating a group member event with the flag
PERF_FLAG_FD_OUTPUT. The group leader must be mmap-ed first; mmap-ing the
member event then triggers the warning.
Since the event has copied the output_event in perf_event_set_output(),
event->rb is set. As a result, perf_mmap_rb() calls
refcount_inc(&event->mmap_count) while event->mmap_count is 0.
Disallow the case where event->mmap_count is 0. This also prevents two
events from updating the same user_page. |
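A hedged sketch of the check in perf_mmap_rb(); the exact surrounding logic
differs, but the point is that an inherited ring buffer with mmap_count == 0
is no longer treated as an existing mapping:

        if (event->rb) {
                /* event->rb may have been copied from the output event by
                 * perf_event_set_output() without this event ever being
                 * mmap-ed; refuse to piggy-back on a zero mmap_count instead
                 * of letting refcount_inc() warn. */
                if (!refcount_inc_not_zero(&event->mmap_count))
                        return -EINVAL;
        }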
| In the Linux kernel, the following vulnerability has been resolved:
io_uring/io-wq: check IO_WQ_BIT_EXIT inside work run loop
Currently this is checked before running the pending work. Normally this
is quite fine, as work items either end up blocking (which will create a
new worker for other items), or they complete fairly quickly. But syzbot
reports an issue where io-wq takes seemingly forever to exit, and with a
bit of debugging, this turns out to be because it queues a bunch of big
(2GB - 4096b) reads with a /dev/msr* file. Since this file type doesn't
support ->read_iter(), loop_rw_iter() ends up handling them. Each read
returns 16MB of data, which takes 20 (!!) seconds. With a bunch of
these pending, processing the whole chain can take a long time. Easily
longer than the syzbot uninterruptible sleep timeout of 140 seconds.
This then triggers a complaint off the io-wq exit path:
INFO: task syz.4.135:6326 blocked for more than 143 seconds.
Not tainted syzkaller #0
Blocked by coredump.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.135 state:D stack:26824 pid:6326 tgid:6324 ppid:5957 task_flags:0x400548 flags:0x00080000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5256 [inline]
__schedule+0x1139/0x6150 kernel/sched/core.c:6863
__schedule_loop kernel/sched/core.c:6945 [inline]
schedule+0xe7/0x3a0 kernel/sched/core.c:6960
schedule_timeout+0x257/0x290 kernel/time/sleep_timeout.c:75
do_wait_for_common kernel/sched/completion.c:100 [inline]
__wait_for_common+0x2fc/0x4e0 kernel/sched/completion.c:121
io_wq_exit_workers io_uring/io-wq.c:1328 [inline]
io_wq_put_and_exit+0x271/0x8a0 io_uring/io-wq.c:1356
io_uring_clean_tctx+0x10d/0x190 io_uring/tctx.c:203
io_uring_cancel_generic+0x69c/0x9a0 io_uring/cancel.c:651
io_uring_files_cancel include/linux/io_uring.h:19 [inline]
do_exit+0x2ce/0x2bd0 kernel/exit.c:911
do_group_exit+0xd3/0x2a0 kernel/exit.c:1112
get_signal+0x2671/0x26d0 kernel/signal.c:3034
arch_do_signal_or_restart+0x8f/0x7e0 arch/x86/kernel/signal.c:337
__exit_to_user_mode_loop kernel/entry/common.c:41 [inline]
exit_to_user_mode_loop+0x8c/0x540 kernel/entry/common.c:75
__exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
syscall_exit_to_user_mode_work include/linux/entry-common.h:159 [inline]
syscall_exit_to_user_mode include/linux/entry-common.h:194 [inline]
do_syscall_64+0x4ee/0xf80 arch/x86/entry/syscall_64.c:100
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fa02738f749
RSP: 002b:00007fa0281ae0e8 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
RAX: fffffffffffffe00 RBX: 00007fa0275e6098 RCX: 00007fa02738f749
RDX: 0000000000000000 RSI: 0000000000000080 RDI: 00007fa0275e6098
RBP: 00007fa0275e6090 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fa0275e6128 R14: 00007fff14e4fcb0 R15: 00007fff14e4fd98
There's really nothing wrong here, outside of processing these reads
will take a LONG time. However, we can speed up the exit by checking the
IO_WQ_BIT_EXIT inside the io_worker_handle_work() loop, as syzbot will
exit the ring after queueing up all of these reads. Then once the first
item is processed, io-wq will simply cancel the rest. That should avoid
syzbot running into this complaint again. |
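A condensed sketch of the loop change in io_worker_handle_work(); the helper
names are taken from io-wq but the loop body is heavily simplified:

        do {
                struct io_wq_work *work;

                /* New: if the io-wq is exiting, stop executing queued work;
                 * the remaining items will be cancelled instead of each
                 * running to completion on the exit path. */
                if (test_bit(IO_WQ_BIT_EXIT, &wq->state))
                        break;

                work = io_get_next_work(acct, worker);
                if (!work)
                        break;

                /* ... assign and run the work item as before ... */
        } while (1);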
| In the Linux kernel, the following vulnerability has been resolved:
mm/damon/sysfs-scheme: cleanup access_pattern subdirs on scheme dir setup failure
When setup of a DAMOS scheme DAMON sysfs directory fails after the
access_pattern/ directory has been set up, the subdirectories of
access_pattern/ are not cleaned up. As a result, the DAMON sysfs interface is
nearly broken until the system reboots, and the memory for the unremoved
directories is leaked.
Clean up the directories on such failures. |
| In the Linux kernel, the following vulnerability has been resolved:
sctp: move SCTP_CMD_ASSOC_SHKEY right after SCTP_CMD_PEER_INIT
A null-ptr-deref was reported in the SCTP transmit path when SCTP-AUTH key
initialization fails:
==================================================================
KASAN: null-ptr-deref in range [0x0000000000000018-0x000000000000001f]
CPU: 0 PID: 16 Comm: ksoftirqd/0 Tainted: G W 6.6.0 #2
RIP: 0010:sctp_packet_bundle_auth net/sctp/output.c:264 [inline]
RIP: 0010:sctp_packet_append_chunk+0xb36/0x1260 net/sctp/output.c:401
Call Trace:
sctp_packet_transmit_chunk+0x31/0x250 net/sctp/output.c:189
sctp_outq_flush_data+0xa29/0x26d0 net/sctp/outqueue.c:1111
sctp_outq_flush+0xc80/0x1240 net/sctp/outqueue.c:1217
sctp_cmd_interpreter.isra.0+0x19a5/0x62c0 net/sctp/sm_sideeffect.c:1787
sctp_side_effects net/sctp/sm_sideeffect.c:1198 [inline]
sctp_do_sm+0x1a3/0x670 net/sctp/sm_sideeffect.c:1169
sctp_assoc_bh_rcv+0x33e/0x640 net/sctp/associola.c:1052
sctp_inq_push+0x1dd/0x280 net/sctp/inqueue.c:88
sctp_rcv+0x11ae/0x3100 net/sctp/input.c:243
sctp6_rcv+0x3d/0x60 net/sctp/ipv6.c:1127
The issue is triggered when sctp_auth_asoc_init_active_key() fails in
sctp_sf_do_5_1C_ack() while processing an INIT_ACK. In this case, the
command sequence is currently:
- SCTP_CMD_PEER_INIT
- SCTP_CMD_TIMER_STOP (T1_INIT)
- SCTP_CMD_TIMER_START (T1_COOKIE)
- SCTP_CMD_NEW_STATE (COOKIE_ECHOED)
- SCTP_CMD_ASSOC_SHKEY
- SCTP_CMD_GEN_COOKIE_ECHO
If SCTP_CMD_ASSOC_SHKEY fails, asoc->shkey remains NULL, while
asoc->peer.auth_capable and asoc->peer.peer_chunks have already been set by
SCTP_CMD_PEER_INIT. This allows a DATA chunk with auth = 1 and shkey = NULL
to be queued by sctp_datamsg_from_user().
Since command interpretation stops on failure, no COOKIE_ECHO should have been
sent via SCTP_CMD_GEN_COOKIE_ECHO. However, the T1_COOKIE timer has already
been started, and it may enqueue a COOKIE_ECHO into the outqueue later. As
a result, the DATA chunk can be transmitted together with the COOKIE_ECHO
in sctp_outq_flush_data(), leading to the observed issue.
Similar to the other places where it calls sctp_auth_asoc_init_active_key()
right after sctp_process_init(), this patch moves the SCTP_CMD_ASSOC_SHKEY
immediately after SCTP_CMD_PEER_INIT, before stopping T1_INIT and starting
T1_COOKIE. This ensures that if shared key generation fails, authenticated
DATA cannot be sent. It also allows the T1_INIT timer to retransmit INIT,
giving the client another chance to process INIT_ACK and retry key setup. |
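A sketch of the reordered command sequence in sctp_sf_do_5_1C_ack(), using the
existing sctp_add_cmd_sf() helpers (later commands elided):

        sctp_add_cmd_sf(commands, SCTP_CMD_PEER_INIT,
                        SCTP_PEER_INIT(initchunk));
        /* Moved up: if shared-key generation fails, command interpretation
         * stops here, before T1_INIT is stopped or T1_COOKIE is started. */
        sctp_add_cmd_sf(commands, SCTP_CMD_ASSOC_SHKEY, SCTP_NULL());
        sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_STOP,
                        SCTP_TO(SCTP_EVENT_TIMEOUT_T1_INIT));
        sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_START,
                        SCTP_TO(SCTP_EVENT_TIMEOUT_T1_COOKIE));
        /* ... SCTP_CMD_NEW_STATE(COOKIE_ECHOED) and SCTP_CMD_GEN_COOKIE_ECHO
         * follow as before ... */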
| In the Linux kernel, the following vulnerability has been resolved:
of: unittest: Fix memory leak in unittest_data_add()
In unittest_data_add(), if of_resolve_phandles() fails, the allocated
unittest_data is not freed, leading to a memory leak.
Fix this by using scope-based cleanup helper __free(kfree) for automatic
resource cleanup. This ensures unittest_data is automatically freed when
it goes out of scope in error paths.
For the success path, use retain_and_null_ptr() to transfer ownership
of the memory to the device tree and prevent double freeing. |
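A hedged sketch of the cleanup pattern described above; the DTB symbol names
and some steps are illustrative, but __free(kfree) and retain_and_null_ptr()
are the mechanisms named in the entry:

static int __init unittest_data_add(void)
{
        /* Automatically kfree()d on every early return below.
         * (__dtbo_* symbol names illustrative.) */
        void *unittest_data __free(kfree) =
                kmemdup(__dtbo_testcases_begin,
                        __dtbo_testcases_end - __dtbo_testcases_begin,
                        GFP_KERNEL);
        struct device_node *unittest_data_node = NULL;

        if (!unittest_data)
                return -ENOMEM;

        of_fdt_unflatten_tree((unsigned long *)unittest_data, NULL,
                              &unittest_data_node);
        if (!unittest_data_node)
                return -ENODATA;

        if (of_resolve_phandles(unittest_data_node))
                return -EINVAL;         /* previously leaked unittest_data */

        /* ... attach unittest_data_node to the live tree ... */

        /* The blob now backs live device-tree nodes: keep it and null the
         * cleanup pointer so __free(kfree) does not free it. */
        retain_and_null_ptr(unittest_data);
        return 0;
}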
| In the Linux kernel, the following vulnerability has been resolved:
mm/damon/sysfs: cleanup attrs subdirs on context dir setup failure
When setup of a context DAMON sysfs directory fails after the attrs/
directory has been set up, the subdirectories of attrs/ are not cleaned up.
As a result, the DAMON sysfs interface is nearly broken until the system
reboots, and the memory for the unremoved directories is leaked.
Clean up the directories on such failures. |