author     Zhongqiu Duan <dzq.aishenghu0@gmail.com>   2025-01-02 18:14:26 +0100
committer  Jakub Kicinski <kuba@kernel.org>           2025-01-04 17:17:23 +0100
commit     3479c7549fb1dfa7a1db4efb7347c7b8ef50de4b (patch)
tree       5f268bc6a923286792ba785c3f4e151c76122fc9 /include
parent     net: 802: LLC+SNAP OID:PID lookup on start of skb data (diff)
tcp/dccp: allow a connection when sk_max_ack_backlog is zero
If the backlog of listen() is set to zero, sk_acceptq_is_full() allows
one connection to be made, but inet_csk_reqsk_queue_is_full() does not.
When net.ipv4.tcp_syncookies is zero, inet_csk_reqsk_queue_is_full()
causes an immediate drop before the sk_acceptq_is_full() check in
tcp_conn_request(), so no connection can be made at all.
This patch keeps the behavior consistent with 64a146513f8f ("[NET]: Revert
incorrect accept queue backlog changes.").
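For illustration only, the sketch below is a minimal userspace model (not kernel code) of the two queue-fullness checks as they stood before this patch; the struct and helper names are simplified assumptions that merely mirror the kernel functions named above:

```c
#include <stdbool.h>
#include <stdio.h>

/* Simplified model of the relevant socket fields (assumption, not the kernel struct). */
struct sock_model {
	int sk_ack_backlog;     /* current accept queue length          */
	int sk_max_ack_backlog; /* backlog value passed to listen()     */
	int reqsk_queue_len;    /* pending, not-yet-accepted requests   */
};

/* Mirrors sk_acceptq_is_full(): full only when length exceeds the backlog. */
static bool acceptq_is_full(const struct sock_model *sk)
{
	return sk->sk_ack_backlog > sk->sk_max_ack_backlog;
}

/* Pre-patch inet_csk_reqsk_queue_is_full(): ">=" makes a backlog of 0 unusable. */
static bool reqsk_queue_is_full_old(const struct sock_model *sk)
{
	return sk->reqsk_queue_len >= sk->sk_max_ack_backlog;
}

/* Post-patch check: ">" matches the accept-queue semantics. */
static bool reqsk_queue_is_full_new(const struct sock_model *sk)
{
	return sk->reqsk_queue_len > sk->sk_max_ack_backlog;
}

int main(void)
{
	struct sock_model sk = { .sk_ack_backlog = 0,
				 .sk_max_ack_backlog = 0, /* listen(fd, 0) */
				 .reqsk_queue_len = 0 };

	printf("accept queue full:    %d\n", acceptq_is_full(&sk));        /* 0: one conn allowed */
	printf("req queue full (old): %d\n", reqsk_queue_is_full_old(&sk)); /* 1: SYN dropped     */
	printf("req queue full (new): %d\n", reqsk_queue_is_full_new(&sk)); /* 0: SYN accepted    */
	return 0;
}
```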
Link: https://lore.kernel.org/netdev/20250102080258.53858-1-kuniyu@amazon.com/
Fixes: ef547f2ac16b ("tcp: remove max_qlen_log")
Signed-off-by: Zhongqiu Duan <dzq.aishenghu0@gmail.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Reviewed-by: Jason Xing <kerneljasonxing@gmail.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Link: https://patch.msgid.link/20250102171426.915276-1-dzq.aishenghu0@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Diffstat (limited to 'include')
-rw-r--r--  include/net/inet_connection_sock.h | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
index 3c82fad904d4..c7f42844c79a 100644
--- a/include/net/inet_connection_sock.h
+++ b/include/net/inet_connection_sock.h
@@ -282,7 +282,7 @@ static inline int inet_csk_reqsk_queue_len(const struct sock *sk)
 
 static inline int inet_csk_reqsk_queue_is_full(const struct sock *sk)
 {
-	return inet_csk_reqsk_queue_len(sk) >= READ_ONCE(sk->sk_max_ack_backlog);
+	return inet_csk_reqsk_queue_len(sk) > READ_ONCE(sk->sk_max_ack_backlog);
 }
 
 bool inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req);
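As a usage-level illustration (an assumption about how one might exercise this path, not part of the patch or any kernel selftest), the loopback check below opens a listener with a backlog of zero and verifies that a single connection still completes. On a pre-patch kernel with net.ipv4.tcp_syncookies=0, the connect() eventually times out because the SYN is dropped:

```c
/* Quick loopback check: listen(fd, 0) should still admit one connection. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int lfd = socket(AF_INET, SOCK_STREAM, 0);
	int cfd = socket(AF_INET, SOCK_STREAM, 0);
	struct sockaddr_in addr;
	socklen_t alen = sizeof(addr);

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	addr.sin_port = 0;                    /* let the kernel pick a port */

	if (bind(lfd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
	    listen(lfd, 0) < 0 ||             /* backlog of zero */
	    getsockname(lfd, (struct sockaddr *)&addr, &alen) < 0) {
		perror("setup");
		return 1;
	}

	/* The kernel completes the handshake before accept() is called,
	 * so a blocking connect() from the same process is enough here. */
	if (connect(cfd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("connect");            /* expected to time out pre-patch with syncookies off */
		return 1;
	}

	int afd = accept(lfd, NULL, NULL);
	if (afd < 0) {
		perror("accept");
		return 1;
	}
	puts("one connection accepted with backlog 0");

	close(afd);
	close(cfd);
	close(lfd);
	return 0;
}
```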