net: Use k_fifo instead of k_work in RX and TX processing

A k_work handler is not allowed to manipulate the k_work item that
invoked it. This makes it hard to clean up a net_pkt, because the
k_work is embedded in it. Because of this, use a k_fifo instead
between the RX thread and the network driver, and between the
application and the TX thread.
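
As a minimal sketch of the pattern (hypothetical names, not the
actual net stack code): the producer queues the packet on a k_fifo
and the consumer thread pops it, so the packet no longer has to carry
a live k_work item and can be freed as soon as processing finishes.
Note that k_fifo requires the first word of a queued item to be
reserved for kernel use.

#include <zephyr.h>

/* Hypothetical packet type; net_pkt reserves space similarly. */
struct my_pkt {
	void *fifo_reserved;  /* first word reserved for k_fifo use */
	/* ... payload ... */
};

K_FIFO_DEFINE(tx_fifo);

/* Producer: hand a packet to the TX thread. */
static void queue_pkt(struct my_pkt *pkt)
{
	k_fifo_put(&tx_fifo, pkt);
}

/* Consumer: the TX thread pops packets; nothing inside the packet
 * is still in use by the kernel afterwards, so it can be freed
 * right after processing.
 */
static void tx_thread(void)
{
	while (1) {
		struct my_pkt *pkt = k_fifo_get(&tx_fifo, K_FOREVER);

		/* process and free pkt here */
	}
}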

An echo-server/client run with IPv4 and UDP gave the following
results:

Using k_work
------------
TX traffic class statistics:
TC  Priority    Sent pkts   bytes     time
[0] BK (1)      21922       5543071   103 us  [0->41->26->34=101 us]
[1] BE (0)      0           0         -
RX traffic class statistics:
TC  Priority    Recv pkts   bytes     time
[0] BK (0)      0           0         -
[1] BE (0)      21925       6039151   97 us   [0->21->16->37->20=94 us]

Using k_fifo
------------
TX traffic class statistics:
TC  Priority    Sent pkts   bytes     time
[0] BK (1)      15079       3811118   94 us   [0->36->23->32=91 us]
[1] BE (0)      0           0         -
RX traffic class statistics:
TC  Priority    Recv pkts   bytes     time
[0] BK (1)      0           0         -
[1] BE (0)      15073       4150947   79 us   [0->17->12->32->14=75 us]

So using k_fifo gives about 10% better performance with the same
workload.

Fixes #34690

Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
commit bcdc762609
Author:    Jukka Rissanen
Date:      2021-04-29 15:58:54 +03:00
Committer: Kumar Gala
6 changed files with 96 additions and 54 deletions


@@ -316,12 +316,9 @@ static bool net_if_tx(struct net_if *iface, struct net_pkt *pkt)
 	return true;
 }
 
-static void process_tx_packet(struct k_work *work)
+void net_process_tx_packet(struct net_pkt *pkt)
 {
 	struct net_if *iface;
-	struct net_pkt *pkt;
-
-	pkt = CONTAINER_OF(work, struct net_pkt, work);
 
 	net_pkt_set_tx_stats_tick(pkt, k_cycle_get_32());
@@ -355,8 +352,6 @@ void net_if_queue_tx(struct net_if *iface, struct net_pkt *pkt)
 		return;
 	}
 
-	k_work_init(net_pkt_work(pkt), process_tx_packet);
-
 #if NET_TC_TX_COUNT > 1
 	NET_DBG("TC %d with prio %d pkt %p", tc, prio, pkt);
 #endif
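
Under the new scheme the TX thread pops packets straight off the
fifo and hands them to net_process_tx_packet(). A sketch of that
consumer loop (the loop and fifo parameter are hypothetical; only
net_process_tx_packet() comes from the diff above):

static void tx_thread_loop(struct k_fifo *fifo)
{
	while (1) {
		/* Block until the application queues a packet. */
		struct net_pkt *pkt = k_fifo_get(fifo, K_FOREVER);

		/* The packet is passed directly; no CONTAINER_OF()
		 * lookup from an embedded k_work item is needed.
		 */
		net_process_tx_packet(pkt);
	}
}

On the producer side, net_if_queue_tx() would correspondingly replace
the removed k_work_init() with a k_fifo_put() of the packet (again a
sketch of the pattern, not the exact change).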