During a dump, this attribute is essential: it lets userspace know
which interface each context is linked to.
Fixes: 459aa660eb1d ("gtp: add initial driver for datapath of GPRS Tunneling Protocol (GTP-U)")
Signed-off-by: Nicolas Dichtel <nicolas.dichtel(a)6wind.com>
Tested-by: Gabriel Ganne <gabriel.ganne(a)6wind.com>
---
I'm targeting net because I think this is a bug fix: the dump result cannot
be used if there is more than one gtp interface on the system.
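As an illustration, a hypothetical two-device setup using the libgtpnl
tools (device names and addresses are made up):
# create two gtp devices and add one pdp context on each
./gtp-link add gtp0 &
./gtp-link add gtp1 &
./gtp-tunnel add gtp0 v1 100 200 172.99.0.2 10.0.0.2
./gtp-tunnel add gtp1 v1 300 400 172.99.0.3 10.0.0.3
# without GTPA_LINK in the dump, this listing has no way to tell
# which context belongs to gtp0 and which to gtp1
./gtp-tunnel list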
drivers/net/gtp.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
index 21640a035d7d..8e47d0112e5d 100644
--- a/drivers/net/gtp.c
+++ b/drivers/net/gtp.c
@@ -1179,6 +1179,7 @@ static int gtp_genl_fill_info(struct sk_buff *skb, u32 snd_portid, u32 snd_seq,
goto nlmsg_failure;
if (nla_put_u32(skb, GTPA_VERSION, pctx->gtp_version) ||
+ nla_put_u32(skb, GTPA_LINK, pctx->dev->ifindex) ||
nla_put_be32(skb, GTPA_PEER_ADDRESS, pctx->peer_addr_ip4.s_addr) ||
nla_put_be32(skb, GTPA_MS_ADDRESS, pctx->ms_addr_ip4.s_addr))
goto nla_put_failure;
--
2.26.2
Hi All,
This is my first post. I have a similar problem to the thread "Network
is unreachable error for GTP interface".
I followed all the instructions, installed all the needed dependencies,
and upgraded my kernel to 4.9.0-6 as stated in
https://osmocom.org/projects/openggsn/wiki/Kernel_GTP
Unfortunately, my packets get no GTP T-PDU encapsulation.
## Tunnel listing is OK
root@routeurA:/home/bob/libgtpnl/tools# ./gtp-tunnel list
version 1 tei 200/100 ms_addr 172.23.10.163 sgsn_addr 10.11.12.14
## I have upgraded my kernel to 4.9.0-6 as stated in
https://osmocom.org/projects/openggsn/wiki/Kernel_GTP
At the time of writing of that wiki (2018-04-26), the following
distributions were listed as having GTP kernel support:
Debian
Debian 9 "stretch" (kernel 4.9.0-6)
root@routeurA:/home/bob/libgtpnl/tools# uname -r
4.9.0-6-amd64
## GTP module
root@routeurA:/home/bob/libgtpnl/tools# lsmod | grep gtp
gtp 28672 0
udp_tunnel 16384 1 gtp
## ping remote ms_addr is not ok
root@routeurA:/home/bob/libgtpnl/tools# ping 172.23.10.163
PING 172.23.10.163 (172.23.10.163) 56(84) bytes of data.
^C
--- 172.23.10.163 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
## remove the route using "gtpa" device
root@routeurA:/home/bob/libgtpnl/tools# ip route del 172.23.10.163/32 dev gtpa
## add new route using normal interface
root@routeurA:/home/bob/libgtpnl/tools# ip route add 172.23.10.163/32
via 10.11.12.14
## ping is OK
root@routeurA:/home/bob/libgtpnl/tools# ping 172.23.10.163
PING 172.23.10.163 (172.23.10.163) 56(84) bytes of data.
64 bytes from 172.23.10.163: icmp_seq=1 ttl=64 time=0.592 ms
64 bytes from 172.23.10.163: icmp_seq=2 ttl=64 time=0.713 ms
## remove again the route
root@routeurA:/home/bob/libgtpnl/tools# ip route del 172.23.10.163/32
## switch it to "gtpa" device
root@routeurA:/home/bob/libgtpnl/tools# ip route add 172.23.10.163/32 dev gtpa
root@routeurA:/home/bob/libgtpnl/tools# ping 172.23.10.163
PING 172.23.10.163 (172.23.10.163) 56(84) bytes of data.
^C
## tcpdump shows plain ICMP between the two ms_addrs, no encapsulation at all
Am I missing something somewhere?
FYI, I'm not using openggsn or ergw; I have developed my own small
userspace GTP-C, which is ready, but I'm stuck on the GTP-U side.
Thanks in advance,
Best Regards,
Hi all,
I am running an open-source 5G core, but I cannot get my UPF up and running, as the GTP links and tunnels I am trying to create fail with "Operation not permitted".
I'll recompile and run it through GDB, but I would really like to hear whether you have seen this before on Ubuntu 18.04 (kernel > 5).
Any help would be appreciated
Donal
There is an urgent need to migrate our most important public
infrastructure to a new server, and I will be doing that on
*Sunday, July 19 2020*, starting about 9am CEST.
The migration involves redmine (main osmocom.org website), jenkins, gerrit,
git, and cgit.
In theory, the migration should be quick. I would expect (significantly)
less than one hour of downtime. However, we all know Murphy's law.
Services not affected are mail (including mailman lists), ftp, dns. So in case
of doubt, we can still use mailing lists to communicate.
In case anyone urgently needs osmocom source code on Sunday morning
during the downtime: There are public mirrors available on github.
Regards,
Harald
--
- Harald Welte <laforge(a)osmocom.org> http://laforge.gnumonks.org/
============================================================================
"Privacy in residential applications is a desirable marketing option."
(ETSI EN 300 175-7 Ch. A6)
Hi
I'm trying to build a setup using the GTP module and the libgtpnl tools
on CentOS 7, but I haven't been successful yet. Worse, I don't know how
to debug the problem. I have also stopped the firewall and iptables.
How can I debug or solve this? I would be very glad if you could help.
dmesg and the system messages show nothing. Why is the GTP interface
(link) unreachable?
Thanks in advance
- Volkan
$ modinfo gtp
filename:
/lib/modules/5.7.7-1.el7.elrepo.x86_64/kernel/drivers/net/gtp.ko
alias: net-pf-16-proto-16-family-gtp
alias: rtnl-link-gtp
description: Interface driver for GTP encapsulated traffic
author: Harald Welte <hwelte(a)sysmocom.de>
license: GPL
srcversion: 191407DA5399304D93D62C7
depends: udp_tunnel
retpoline: Y
intree: Y
name: gtp
vermagic: 5.7.7-1.el7.elrepo.x86_64 SMP mod_unload modversions
$ modinfo udp_tunnel
filename:
/lib/modules/5.7.7-1.el7.elrepo.x86_64/kernel/net/ipv4/udp_tunnel.ko
license: GPL
srcversion: 0A315BA6124B0664F4D23FB
depends:
retpoline: Y
intree: Y
name: udp_tunnel
vermagic: 5.7.7-1.el7.elrepo.x86_64 SMP mod_unload modversions
$ ip addr add 172.0.0.1/24 dev enp9s0
$ ip addr add 172.99.0.1/32 dev lo
$ ./gtp-link add gtp1
WARNING: attaching dummy socket descriptors. Keep this process running
for testing purposes.
$ ./gtp-tunnel add gtp1 v1 200 100 172.99.0.2 172.0.0.2
$ ip route add 172.99.0.2/32 dev gtp1
$ ./gtp-tunnel list
version 1 tei 200/100 ms_addr 172.99.0.2 sgsn_addr 172.0.0.2
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet 172.99.0.1/32 scope global lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
7: enp9s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state
UP group default qlen 1000
link/ether 08:35:71:ab:54:5f brd ff:ff:ff:ff:ff:ff
inet 172.0.0.1/24 scope global enp9s0
valid_lft forever preferred_lft forever
8: gtp1: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 0 qdisc noqueue
state UNKNOWN group default qlen 1000
link/none
$ ip route
default via 192.168.1.1 dev enp2s0 proto static metric 100
172.0.0.0/24 dev enp9s0 proto kernel scope link src 172.0.0.1
172.99.0.2 dev gtp1 scope link
$ ping 172.99.0.2
PING 172.99.0.2 (172.99.0.2) 56(84) bytes of data.
ping: sendmsg: Network is unreachable
ping: sendmsg: Network is unreachable
ping: sendmsg: Network is unreachable
Hello
I tried to follow the steps in the basic testing page below, but it is
not up to date:
https://osmocom.org/projects/linux-kernel-gtp-u/wiki/Basic_Testing
To make the setup work, I run the GGSN on the host using
osmo-ggsn/doc/examples/osmo-ggsn.cfg. I changed "gtp bind-ip" to
172.31.1.1, "ip prefix dynamic" to 192.168.71.0/24, and "ip ifconfig" to
192.168.71.0/24, and then run the emulated SGSN inside the sgsn
namespace (ip netns exec sgsn sgsnemu -d -r 172.31.1.1 -l 172.31.1.2
--defaultroute --createif). In this case the GGSN can create the PDP
context successfully, but no GTP tunnel appears. How can I get an
up-to-date version of this document, or how can I run the basic
testing steps?
Thanks in advance.
- Volkan
Hi, Osmocom-SGSN,
I have downloaded the latest master branch sources for the TTCN3 tests.
I followed the instructions for compilation and dependencies.
In the file regen_makefile.sh, if the line
../regen-makefile.sh SGSN_Tests.ttcn $FILES
is replaced with
../regen-makefile.sh $FILES
then this warning is not present:
File `SGSN_Tests.ttcn' was given more than once for the Makefile.
Also, after running "make compile" (the same happens with "make sgsn"
from the upper directory), I got this output:
....
GSM_RR_Types.ttcn:404.2-408.2: In type definition `SecondPartAssign':
GSM_RR_Types.ttcn:409.18-20: error: in variant attribute, at or before
token `CSN': syntax error, unexpected XIdentifier, expecting $end
GSM_RR_Types.ttcn:510.2-512.2: In type definition `IaRestOctLL':
GSM_RR_Types.ttcn:513.42-44: error: in variant attribute, at or before
token `CSN': syntax error, unexpected XIdentifier, expecting $end
GSM_RR_Types.ttcn:588.2-594.2: In type definition `IaRestOctets':
GSM_RR_Types.ttcn:595.23-25: error: in variant attribute, at or before
token `CSN': syntax error, unexpected XIdentifier, expecting $end
........
Notify: Errors found in the input modules. Code will not be generated.
make: *** [Makefile:206: compile] Error 1
The last time, in April, I played with TTCN3 for the SGSN and everything
was fine. Can you help me with this?
Hi, Osmo Packet Core guys,
I am working from home nowadays, testing commercial packet core equipment
(Ericsson).
I have already heard and read about the use of TTCN3 in all your famous
products, but never gave it a try.
I am interested in deploying tests similar to yours for OsmoSGSN against
production systems, due to a lack of test radio equipment at home and
also to get some sort of automation.
I ran your TTCN3 tests for OsmoSGSN and loved them at first sight.
You did an amazing job and opened up countless possibilities. Thanks for
that.
I know you have C code support in the libosmocore library for 3G/2G auth.
Can we use a SIM card reader in TTCN3 as an option to evaluate
authentication? Did you ever try that?
Best regards,
Mirko K.
On Sun, Jan 05, 2020 at 06:36:07PM +0100, Christophe JAILLET wrote:
> 'gtp_encap_disable_sock(sk)' handles the case where sk is NULL, so there
> is no need to test it before calling the function.
>
> This saves a few lines of code.
>
> Signed-off-by: Christophe JAILLET <christophe.jaillet(a)wanadoo.fr>
Reviewed-by: Simon Horman <simon.horman(a)netronome.com>
This patchset fixes several bugs in the GTP module.
1. Do not allow adding duplicate TID and ms_addr pdp context.
In the current code, duplicate TID and ms_addr pdp contexts could be
added, so the RX and TX paths could end up finding the wrong pdp context.
2. Fix wrong condition in ->dumpit() callback.
The ->dumpit() callback is called again if the dump does not fit into one
packet. So, before returning, it saves the last position and later
restarts from that position; the TID value is used to find it. The GTP
module allows adding a pdp context with a zero TID, but the ->dumpit()
callback ignores a zero TID. So the dump would not work correctly if it
does not fit into one packet.
3. Fix use-after-free in ipv4_pdp_find().
The RX and TX paths always use gtp->tid_hash and gtp->addr_hash, but
these hashtables could be freed while packets are still being processed,
so a use-after-free could occur.
4. Fix panic because of zero-size hashtable.
The GTP hashtable size can be set from userspace. If hashsize is set
to 0, the hashtable will not work and a panic will occur.
Taehee Yoo (4):
gtp: do not allow adding duplicate tid and ms_addr pdp context
gtp: fix wrong condition in gtp_genl_dump_pdp()
gtp: fix an use-after-free in ipv4_pdp_find()
gtp: avoid zero size hashtable
drivers/net/gtp.c | 109 +++++++++++++++++++++++++++-------------------
1 file changed, 63 insertions(+), 46 deletions(-)
--
2.17.1
The default GTP hashtable size is 1024, and userspace can request a
specific size with IFLA_GTP_PDP_HASHSIZE. If a hashtable size of 0 is
passed from userspace, the hashtable will not work and a panic will occur.
Fixes: 459aa660eb1d ("gtp: add initial driver for datapath of GPRS Tunneling Protocol (GTP-U)")
Signed-off-by: Taehee Yoo <ap420073(a)gmail.com>
---
drivers/net/gtp.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
index 5450b1099c6d..e5b7d6d2286e 100644
--- a/drivers/net/gtp.c
+++ b/drivers/net/gtp.c
@@ -667,10 +667,13 @@ static int gtp_newlink(struct net *src_net, struct net_device *dev,
if (err < 0)
return err;
- if (!data[IFLA_GTP_PDP_HASHSIZE])
+ if (!data[IFLA_GTP_PDP_HASHSIZE]) {
hashsize = 1024;
- else
+ } else {
hashsize = nla_get_u32(data[IFLA_GTP_PDP_HASHSIZE]);
+ if (!hashsize)
+ hashsize = 1024;
+ }
err = gtp_hashtable_new(gtp, hashsize);
if (err < 0)
--
2.17.1
Dear fellow Osmocom developers,
I would like to invite all developers and contributors to Osmocom [sub]projects
to register for OsmoDevCon 2020 (held on April 24th-27th, 2020 in Berlin).
For details known so far, please check
http://osmocom.org/projects/osmo-dev-con/wiki/OsmoDevCon2020
Please enter your name at
https://osmocom.org/projects/osmo-dev-con/wiki/OsmoDevCon2020#Requested
in case you would like to attend. Registering early allows proper
planning. Thanks!
Looking forward to meeting old and new Osmocom developers in April 2020.
Regards,
Harald
--
- Harald Welte <laforge(a)osmocom.org> http://laforge.gnumonks.org/
============================================================================
"Privacy in residential applications is a desirable marketing option."
(ETSI EN 300 175-7 Ch. A6)
Firstly, great thanks to Fırat for the reply; it certainly put me on a
different investigation path. And apologies for not replying sooner: I
wanted to make sure it was the correct path before I replied back to the
group with the findings and the associated solution.
If the GTP-U connection goes to the P-GW with a single IP on each side
(src/dst), and UDP flow hashing hasn't been enabled on the network card
of the host running gtp.ko, all the associated network traffic will be
received on a single queue on the network card, which is then serviced by
a single ksoftirqd thread. At some point the system will be receiving
more traffic than that one thread can service, and ksoftirqd will burn at
100%. That means all your traffic is bound to a single network queue,
bound to a single IRQ thread, limiting your overall throughput no matter
how big your network pipe is.
This is because the network card hashes each packet via
SRC_IP:SRC_PORT:DEST_IP:DEST_PORT:PROTO onto a single queue; with one
GTP-U flow, that 5-tuple never changes.
# take note of the discussions about the udp4 rx-flow-hash setting in ethtool:
https://home.regit.org/tag/performance/
https://www.joyent.com/blog/virtualizing-nics
https://www.serializing.me/2015/04/25/rxtx-buffers-rss-others-on-boot/
You can check whether your card supports adjustable parameters with
"ethtool -k DEV | egrep -v fixed". As Fırat alludes to (below), UDP flow
hashing should be supported.
If you enable UDP flow hashing, it will spread the flows over multiple
queues. The default number of queues on the network card can vary
depending on your hardware, firmware, driver, and any additional
associated kernel parameters.
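As a concrete sketch (the device name eth0 and the field selection are
assumptions; check what your driver supports first):
# show which fields the card currently hashes on for UDP over IPv4
ethtool -n eth0 rx-flow-hash udp4
# hash UDP/IPv4 on src/dst IP and src/dst port
# (s = src IP, d = dst IP, f = src port, n = dst port)
ethtool -N eth0 rx-flow-hash udp4 sdfn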
I would recommend having the latest firmware for your network card, and
the latest kernel driver for it if possible.
Alas, the network cards in my hardware didn't support UDP flow hashing;
they had Intel Flow Director, which wasn't granular enough and only
worked with TCP. To work around this limitation, using multiple source
IPs in different namespaces with the same GTP UDP port numbers resolved
the problem. If you send GTP-U to a single destination from multiple
sources (say 6 IPs) via 6 different kernel namespaces, you spread the
load over 6 queues, which is better than nothing on a limited-feature
network card. Time to upgrade the 10G network card....
This took the system from 100% ksoftirqd on a single CPU at about
1 Gbit/s of throughput to around 7-8 Gbit/s at 90% ksoftirqd spread over
multiple CPUs... There is still massive room for improvement.
Here are some things to investigate or consider for performance, with
which I had varying levels of success... my ramblings follow.
On the Linux host, assuming your traffic is now spread across multiple
queues (see above), or at least spread as well as it can be:
Kernel sysctl tweaking is always of benefit if you're using an
out-of-the-box kernel config: for example UDP buffers, queue sizes,
paging and virtual memory settings. There is an application called
"tuned" that provides adjustable profiles for the kernel sysctls; the
profile that suited my testing best was "throughput-performance".
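For instance, a minimal sketch (the buffer values are assumptions to be
sized for your own traffic, not recommendations):
# switch tuned to the throughput-oriented profile
tuned-adm profile throughput-performance
# enlarge socket buffers and the per-CPU input backlog
sysctl -w net.core.rmem_max=26214400
sysctl -w net.core.wmem_max=26214400
sysctl -w net.core.netdev_max_backlog=5000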
If you're looking for straight performance, disable audit processing
such as auditd.
Question the use of SELinux (enforcing/permissive or disabled); it can
make a difference to performance if you are doing load testing... of
course it's a security consideration.
If you don't need ipfilters/firewalling: in my case I could increase
throughput by a third by disabling it (flushing the filter tables and
unloading the modules), and blacklisting the modules so they don't get
loaded at boot. Note you can also stop modules from being loaded with
kernel.modules_disabled=1, but be careful if you're also messing with
initramfs rebuilds, because you don't get any modules at all once you set
that parameter; I learnt that the hard way :)
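A sketch of that cleanup (module names vary by distro and ruleset; treat
these as examples, and be sure you really don't need the firewall):
# flush all netfilter rules
iptables -F
iptables -t nat -F
iptables -t mangle -F
# unload the filtering modules (dependants first)
modprobe -r iptable_nat iptable_mangle iptable_filter ip_tables
# keep them from being loaded again at boot
echo 'blacklist ip_tables' >> /etc/modprobe.d/blacklist-netfilter.conf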
Investigate smp_affinity and affinity_hint, along with irqbalance using
--hintpolicy=exact. Understand which IRQs service the network cards and
how many queues you have; /proc/interrupts will guide you
(egrep 'CPU|rx|tx' /proc/interrupts). Understand the smp_affinity masks:
"for ((irq=START_IRQ; irq<=END_IRQ; irq++)); do cat
/proc/irq/$irq/smp_affinity; done | sort -u". You can adjust which queue
goes to which CPU to manually balance the queues if you so desire. A
brilliant document on IRQ debugging:
https://events.static.linuxfound.org/sites/events/files/slides/LinuxConJapa…
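A small sketch of pinning one queue's IRQ (the IRQ number and CPU mask
are assumptions; read the real ones from /proc/interrupts on your box):
# find the IRQs of the NIC queues
egrep 'CPU|eth0' /proc/interrupts
# stop irqbalance from overriding manual pinning
systemctl stop irqbalance    # or run: irqbalance --hintpolicy=exact
# pin IRQ 62 (say, eth0-rx-3) to CPU 3 (bitmask 0x8)
echo 8 > /proc/irq/62/smp_affinity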
You can monitor what calls are being executed on the CPUs using
https://github.com/brendangregg/FlameGraph. I found this most useful for
understanding that ipfilter was eating a significant amount of CPU
cycles, and what other calls were eating cycles inside ksoftirqd.
Investigate additional memory management using numactl (and the numad
daemon). Remember that if you are using virtualisation, you might want to
pin guests to specific sockets, along with NUMA pinning on the VM host.
Also look at reserved memory allocation in the VM host for the guest;
this will make your guest perform better.
Enable sysstat (sar) if you haven't already, as it will aid your
investigation (sar -u ALL -P ALL 1). This will show which soft IRQs are
eating the most CPU and which CPU they are bound to, which also
translates directly to the network queue the traffic is coming in on,
i.e. network card queue 6 talks to CPU 6, which talks to IRQ 6, and so
on. Using FlameGraph will help you understand which calls are chewing
the CPU.
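A minimal sketch of turning that on (package/service names as on
Fedora/CentOS-style systems; adjust for your distro):
# install and start the sysstat collector
yum install -y sysstat
systemctl enable sysstat && systemctl start sysstat
# per-CPU utilisation, including %soft (softirq time), once per second
sar -u ALL -P ALL 1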
If you're using virtualisation, the number of queues that vmxnet (VMware
in this example) presents to the guest by default might be less than the
number of network card queues the VM host sees, so watch out for that.
You can adjust the number of queues given to the guest via parameters of
the VMware network driver. Investigate VMDq / NetQueue to increase the
number of hardware queues available from the VM host to the guest.
Depending on which guest driver you're using (vmxnet3 or others), some
drivers don't support NAPI (see further down). The relevant module
parameters:
VMDQ: array of int
    Number of Virtual Machine Device Queues: 0/1 = disable, 2-16 enable
    (default = 8)
RSS: array of int
    Number of Receive-Side Scaling Descriptor Queues, default 1 = number
    of CPUs
MQ: array of int
    Disable or enable Multiple Queues, default 1
Node: array of int
    set the starting node to allocate memory on, default -1
IntMode: array of int
    Change Interrupt Mode (0=Legacy, 1=MSI, 2=MSI-X), default 2
InterruptType: array of int
    Change Interrupt Mode (0=Legacy, 1=MSI, 2=MSI-X), default IntMode
    (deprecated)
Make sure your virtual switch (VMware), if used, has pass-through
(DirectPath I/O) enabled. The NIC teaming policy should be validated
against your requirements; for example, the policy "route based on IP
hash" can be of benefit.
Check that the network card supports MSI-X and that the Linux driver
supports NAPI (most should these days, but you never know). Also check
that your VM host driver supports NAPI; if not, get a NAPI-capable KVM
driver or VMware driver (VIB update).
Upgrade your kernel to a later 4.x release, and even consider using a
later Linux distro; I tried Fedora 29. I also compiled the latest Osmocom
from source, with compile options such as -O3 optimisation.
"bmon -b" was a good tool understand throughput loads, along with loading
through qdisc/fq_dodel mq's.... Understand qdisc via ip link or ifconfig (
http://tldp.org/HOWTO/Traffic-Control-HOWTO/components.html), adjusting the
queues has some traction, but if unsure leave as default.
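For example (device name assumed), per-queue statistics can be read
straight from tc:
# show the qdisc tree and per-queue packet/drop counters
tc -s qdisc show dev eth0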
TSO/UFO/GSO/LRO/GRO: understand your network card with respect to these.
Enabling them can improve performance if you haven't already (or,
adversely, disabling them can, since sometimes offloading doesn't
actually help). You can get your card's options using ethtool.
TCP Segmentation Offload (TSO)
Uses the TCP protocol to send large packets. Uses the NIC to
handle segmentation, and then adds the TCP, IP and data link layer
protocol headers to each segment.
UDP Fragmentation Offload (UFO)
Uses the UDP protocol to send large packets. Uses the NIC to
handle IP fragmentation into MTU sized packets for large UDP
datagrams.
Generic Segmentation Offload (GSO)
Uses the TCP or UDP protocol to send large packets. If the NIC
cannot handle segmentation/fragmentation, GSO performs the same
operations, bypassing the NIC hardware. This is achieved by delaying
segmentation until as late as possible, for example, when the packet
is processed by the device driver.
Large Receive Offload (LRO)
Uses the TCP protocol. All incoming packets are re-segmented as
they are received, reducing the number of segments the system has to
process. They can be merged either in the driver or using the NIC. A
problem with LRO is that it tends to resegment all incoming packets,
often ignoring differences in headers and other information which can
cause errors. It is generally not possible to use LRO when IP
forwarding is enabled. LRO in combination with IP forwarding can lead
to checksum errors. Forwarding is enabled if
/proc/sys/net/ipv4/ip_forward is set to 1.
Generic Receive Offload (GRO)
Uses either the TCP or UDP protocols. GRO is more rigorous than
LRO when resegmenting packets. For example it checks the MAC headers
of each packet, which must match, only a limited number of TCP or IP
headers can be different, and the TCP timestamps must match.
Resegmenting can be handled by either the NIC or the GSO code.
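A quick sketch for inspecting and toggling these (device name assumed;
whether each offload helps depends on your card and workload):
# see which offloads are on and which are fixed by the driver
ethtool -k eth0 | egrep 'segmentation|fragmentation|receive-offload'
# toggle individual offloads
ethtool -K eth0 tso on gso on gro on
ethtool -K eth0 lro off    # e.g. off while IP forwarding is enabled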
Traffic steering was on by default in the version of Linux I was using,
but it is worth checking if you're using older versions:
https://www.kernel.org/doc/Documentation/networking/scaling.txt
(from the txt link) note: Some advanced NICs allow steering packets to
queues based on programmable filters. For example, webserver bound TCP
port 80 packets can be directed to their own receive queue. Such
"n-tuple" filters can be configured from ethtool
(--config-ntuple).
Interestingly, investigate your network card's hashing algorithms and how
it distributes traffic over its ring buffers; on some cards you can
adjust the RSS hash function. Alas, the card I was using was stuck with
"toeplitz" for its hashing, while the others (xor and crc32) were
disabled and unavailable. The indirection table can be adjusted with
"ethtool -X", but that didn't really assist much in this case.
ethtool -x <dev>
RX flow hash indirection table for ens192 with 8 RX ring(s):
0: 0 1 2 3 4 5 6 7
8: 0 1 2 3 4 5 6 7
16: 0 1 2 3 4 5 6 7
24: 0 1 2 3 4 5 6 7
RSS hash key:
Operation not supported
RSS hash function:
toeplitz: on
xor: off
crc32: off
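If your card allows it, the table can be respread evenly over all rings
with something like:
# distribute the RSS indirection table evenly across the 8 RX rings
ethtool -X ens192 equal 8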
Check the default sizes of the RX/TX ring buffers; they may be suboptimal.
ethtool -g ens192
Ring parameters for ens192:
Pre-set maximums:
RX: 4096
RX Mini: 0
RX Jumbo: 4096
TX: 4096
Current hardware settings:
RX: 1024
RX Mini: 0
RX Jumbo: 256
TX: 512
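Given the pre-set maximums above, raising the rings to their maximum is
an easy experiment (bigger rings trade a little latency for fewer drops
under bursts):
# grow the RX/TX rings to the hardware maximum
ethtool -G ens192 rx 4096 tx 4096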
If you're using port channels, make sure you have the correct hashing
policy enabled at the switch end...
I haven't investigated this option yet, but some switches also do scaling
to assist (certainly with virtualisation)... maybe one day I will get
around to this...
Additionally, Cisco describe the VM-FEX optimisation you should have:
https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualizati…
Note:
Table 4. Scaling of Dynamic vNIC with VMDirectPath, Virtual Machines
Running on Linux Guest with VMXNET3 Emulated Driver and Multi-Queue Enabled
Table 5. Scaling of Dynamic vNIC with VMDirectPath, Virtual Machines
Running on Linux Guest with VMXNET3 Emulated Driver and Multi-Queue Disabled
Another thing to consider/investigate: Open vSwitch vs. bridging. If
you're using veth pairs to send your traffic into namespaces, you can get
quite varied performance results by trying openvswitch/brctl.
I really enjoyed the investigation path. Again, thanks to Fırat for the
pointer; otherwise it would have taken longer to get to the answer...
Tony
On Fri, Jun 21, 2019 at 6:50 AM fırat sönmez <firatssonmez(a)gmail.com> wrote:
> Hi,
>
> It has been over 2 years that I have worked with gtp and I kind of had the
> same problem that time, we had a 10gbit cable and tried to see how much udp
> flow we could get. I think we used iperf to test it and when we list all
> the processes, the ksoftirq was using all the resource. Then I found this
> page: https://blog.cloudflare.com/how-to-receive-a-million-packets/. I do
> not remember the exact solution, but I guess when you configure your out
> ethernet interface with the command below, it must work then. To my
> understanding all the packets are processed in the same core in your
> situation, because the port number is always the same. So, for example, if
> you add another network with gtp-u tunnel on another port (different than
> 3386) then again your packets will be processed on the other core, too. But
> with the below command, the interface will be configured in a way that it
> wont check the port to process on which core it should be processed, but it
> will use the hash from the packet to distribute over the cores.
> ethtool -n (your_out_eth_interface) rx-flow-hash udp4
>
> Hope it will work for you.
>
> Fırat
>
> On Wed, Jun 19, 2019 at 15:07, Tony Clark <chiefy.padua(a)gmail.com>
> wrote:
>
>> Dear All,
>>
>> I've been using the GTP-U kernel module to communicate with a P-GW.
>>
>> Running Fedora 29, kernel 4.18.16-300.fc29.x86_64.
>>
>> At high traffic levels through the GTP-U tunnel I see the performance
>> degrade as 100% CPU is consumed by a single ksoftirqd process.
>>
>> It is running on a multi-cpu machine and as far as I can tell the load is
>> evenly spread across the cpus (ie either manually via smp_affinity, or even
>> irqbalance, checking /proc/interrupts so forth.).
>>
>> Has anyone else experienced this?
>>
>> Is there any particular area you could recommend I investigate to find
>> the root cause of this bottleneck, as i'm starting to scratch my head where
>> to look next...
>>
>> Thanks in advance
>> Tony
>>
>> ---- FYI
>>
>> modinfo gtp
>> filename:
>> /lib/modules/4.18.16-300.fc29.x86_64/kernel/drivers/net/gtp.ko.xz
>> alias: net-pf-16-proto-16-family-gtp
>> alias: rtnl-link-gtp
>> description: Interface driver for GTP encapsulated traffic
>> author: Harald Welte <hwelte(a)sysmocom.de>
>> license: GPL
>> depends: udp_tunnel
>> retpoline: Y
>> intree: Y
>> name: gtp
>> vermagic: 4.18.16-300.fc29.x86_64 SMP mod_unload
>>
>> modinfo udp_tunnel
>> filename:
>> /lib/modules/4.18.16-300.fc29.x86_64/kernel/net/ipv4/udp_tunnel.ko.xz
>> license: GPL
>> depends:
>> retpoline: Y
>> intree: Y
>> name: udp_tunnel
>> vermagic: 4.18.16-300.fc29.x86_64 SMP mod_unload
>>
>
Dear all,
Please disregard my last message.
First I want to introduce myself: I'm Gael, and a few months ago I bought a Lime-Mini to use as a BTS with EDGE. I have spent some time working on that and it does work, but I have noticed that the uplink MCS is not stable; it changes a lot even with the mobile near the board. I have read that this issue was solved in ticket 1833, but that fix is not working for me; I hope this is not a problem with my board. Do you have any suggestion for this behaviour?
To work around the issue, as I read in another post, I force the MCS by setting the minimum and maximum values to zero up to the MCS to be used. It works fine with a pcu config file like: ... mcs7 0 35 mcs8 35 35.
It works, but it does not look right, and if I switch to MCS 9 via the VTY it works too, but it is not possible to start the system with that configuration, so I have to start it using MCS 7.
With this config I can reach 130 kbps using a class 12 mobile (4 TS RX+TX), and I have 6 timeslots configured for PDCH. I would guess this mobile should reach almost 200 kbps (4 TS * 54 kbps). What is the maximum speed anyone has reached?
Thanks in advance for your support.
Best regards.
--
Securely sent with Tutanota. Get your own encrypted, ad-free mailbox:
https://tutanota.com
From: Taehee Yoo <ap420073(a)gmail.com>
[ Upstream commit e30155fd23c9c141cbe7d99b786e10a83a328837 ]
If an invalid role is sent from userspace, gtp_encap_enable() will fail.
It should then call gtp_encap_disable_sock(), but the current code
doesn't, which causes a memory leak.
Fixes: 91ed81f9abc7 ("gtp: support SGSN-side tunnels")
Signed-off-by: Taehee Yoo <ap420073(a)gmail.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
drivers/net/gtp.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
index f38e32a7ec9c..dba3869b61be 100644
--- a/drivers/net/gtp.c
+++ b/drivers/net/gtp.c
@@ -845,8 +845,13 @@ static int gtp_encap_enable(struct gtp_dev *gtp, struct nlattr *data[])
if (data[IFLA_GTP_ROLE]) {
role = nla_get_u32(data[IFLA_GTP_ROLE]);
- if (role > GTP_ROLE_SGSN)
+ if (role > GTP_ROLE_SGSN) {
+ if (sk0)
+ gtp_encap_disable_sock(sk0);
+ if (sk1u)
+ gtp_encap_disable_sock(sk1u);
return -EINVAL;
+ }
}
gtp->sk0 = sk0;
--
2.20.1