| author | Eric Dumazet <edumazet@google.com> | 2025-11-21 08:32:47 +0000 |
|---|---|---|
| committer | Paolo Abeni <pabeni@redhat.com> | 2025-11-25 16:10:32 +0100 |
| commit | 2773cb0b3120eb5c4b66d949eb99853d5bae1221 | |
| tree | d69f093e7f09af1f83c6524e9437c5b92df56e6c /net/sched/sch_taprio.c | |
| parent | f9e00e51e391d08de31ca98d9f8609a1bceec2d2 | |
net_sched: use qdisc_skb_cb(skb)->pkt_segs in bstats_update()
Avoid up to two cache line misses in qdisc dequeue() to fetch
skb_shinfo(skb)->gso_segs/gso_size while the qdisc spinlock is held.
This gives a 5% improvement in a TX-intensive workload.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Link: https://patch.msgid.link/20251121083256.674562-6-edumazet@google.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
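
The idea behind the series is that the enqueue path already knows (or has just computed) the packet's segment count, so it can cache that count in the per-skb qdisc control block next to pkt_len; the stats update on the dequeue side then reads only the control block, instead of chasing skb_shinfo() under the qdisc lock. The following is a minimal, self-contained sketch of that pattern; all names (fake_skb, fake_qdisc_cb, fake_bstats_update, fake_enqueue_prepare) are made up for illustration, and the real qdisc_skb_cb()/bstats_update() helpers differ in detail.

```c
/* Hypothetical sketch: cache the segment count in the per-skb
 * control block at enqueue time, so the dequeue-time stats update
 * never touches the (cold) shared info under the qdisc lock.
 * None of these names are the kernel's own.
 */
#include <stdio.h>

struct fake_shinfo {            /* stands in for skb_shared_info */
	unsigned short gso_segs;
};

struct fake_qdisc_cb {          /* stands in for qdisc_skb_cb */
	unsigned int pkt_len;
	unsigned int pkt_segs;  /* cached segment count */
};

struct fake_skb {
	unsigned int len;
	struct fake_shinfo shinfo;   /* normally in a separate cache line */
	struct fake_qdisc_cb cb;     /* hot: travels with the skb */
};

struct fake_bstats {
	unsigned long long bytes;
	unsigned long packets;
};

/* Enqueue side: touch shinfo once, cache the result in the cb. */
static void fake_enqueue_prepare(struct fake_skb *skb)
{
	skb->cb.pkt_len = skb->len;
	skb->cb.pkt_segs = skb->shinfo.gso_segs ? skb->shinfo.gso_segs : 1;
}

/* Dequeue side: the stats update reads only the cb, not shinfo. */
static void fake_bstats_update(struct fake_bstats *b,
			       const struct fake_skb *skb)
{
	b->bytes += skb->cb.pkt_len;
	b->packets += skb->cb.pkt_segs;
}

int main(void)
{
	struct fake_bstats stats = { 0, 0 };
	struct fake_skb gso_skb = { .len = 64000,
				    .shinfo = { .gso_segs = 45 } };

	fake_enqueue_prepare(&gso_skb);
	fake_bstats_update(&stats, &gso_skb);

	printf("bytes=%llu packets=%lu\n", stats.bytes, stats.packets);
	return 0;
}
```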
Diffstat (limited to 'net/sched/sch_taprio.c')
| -rw-r--r-- | net/sched/sch_taprio.c | 1 |
1 file changed, 1 insertion, 0 deletions
```diff
diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
index 39b735386996..300d577b3286 100644
--- a/net/sched/sch_taprio.c
+++ b/net/sched/sch_taprio.c
@@ -595,6 +595,7 @@ static int taprio_enqueue_segmented(struct sk_buff *skb, struct Qdisc *sch,
 	skb_list_walk_safe(segs, segs, nskb) {
 		skb_mark_not_on_list(segs);
 		qdisc_skb_cb(segs)->pkt_len = segs->len;
+		qdisc_skb_cb(segs)->pkt_segs = 1;
 		slen += segs->len;
 
 		/* FIXME: we should be segmenting to a smaller size
```
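In this taprio hunk, each skb produced by software segmentation carries exactly one packet, so the cached count is simply set to 1; together with pkt_len, the control block then holds everything the later stats update needs, with no further look at skb_shinfo().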
