'struct net_device' is refcounted and can stick around for quite a
while if someone is still holding a reference to it. However, we
free the vport that it is attached to in the next RCU grace period
after detach. This patch sets the device's vport pointer to NULL on
detach and adds the appropriate NULL checks.
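A minimal userspace sketch of the pattern this describes, with illustrative names (`vport_detach`, `vport_send`) rather than the actual OVS functions; in the kernel the pointer update and read would go through `rcu_assign_pointer()` and `rcu_dereference()`:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model: the device keeps a back-pointer to its vport,
 * which is cleared on detach before the vport is freed in the next
 * RCU grace period. */
struct vport { int port_no; };

struct net_device {
    struct vport *vport;   /* NULL once detached */
};

static void vport_detach(struct net_device *dev)
{
    dev->vport = NULL;     /* rcu_assign_pointer() in-kernel */
}

/* Callers must now check for a detached device before using it. */
static int vport_send(struct net_device *dev)
{
    struct vport *vport = dev->vport;  /* rcu_dereference() in-kernel */
    if (!vport)
        return -1;         /* device already detached */
    return 0;
}
```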
In some places we put the return type on the same line as the rest
of the function definition; in other places we didn't. Reformat
everything to match kernel style.
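As a small illustration of the style in question (the function itself is hypothetical, only the formatting matters): kernel style keeps the return type on the same line as the function name,

```c
#include <assert.h>

/* Kernel style: return type on the same line as the declarator. */
static int internal_dev_example(int x)
{
    return x + 1;
}
/* rather than splitting it across lines:
 *
 *   static int
 *   internal_dev_example(int x)
 */
```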
Internal devices currently keep track of stats themselves. However,
we now have stats tracking in the vport layer, so convert to use
that instead to avoid code duplication and take advantage of
additional features such as 64-bit counters.
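A hedged sketch of what centralized stats tracking looks like; the struct and function names below are illustrative, not the actual vport API:

```c
#include <assert.h>
#include <stdint.h>

/* 64-bit counters kept once in the vport layer, so device types
 * like the internal device no longer track their own copies. */
struct vport_stats {
    uint64_t rx_packets;
    uint64_t rx_bytes;
};

struct vport {
    struct vport_stats stats;
};

/* The vport layer updates the counters at one point in the receive
 * path, removing the duplicated per-device bookkeeping. */
static void vport_record_rx(struct vport *vport, uint64_t bytes)
{
    vport->stats.rx_packets++;
    vport->stats.rx_bytes += bytes;
}
```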
Since vport implementations have no header files, they needed to be
declared as extern before being used. They are currently declared
in vport.c but this isn't safe because the compiler will silently
accept it if the type is incorrect. This moves those declarations
into vport.h, which is included by all implementations and will
cause errors about conflicting types if there is a mismatch.
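A simplified model of the fix, with illustrative names; the point is that a declaration shared through the header gives the compiler one authoritative type to check definitions against:

```c
#include <assert.h>
#include <string.h>

struct vport_ops {
    const char *type;
    int (*send)(void);
};

/* vport.h would carry:  extern struct vport_ops internal_vport_ops;
 * If a .c file then defined internal_vport_ops with a different
 * type, the compiler would report conflicting types instead of
 * silently accepting a stale local extern in vport.c. */
static int internal_send(void) { return 0; }

struct vport_ops internal_vport_ops = {
    .type = "internal",
    .send = internal_send,
};
```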
Pull some generic implementations of vport functions out of the
GRE vport so they can be used by other vport types.
Also move the code that sets an internal device's MTU to the minimum
of the attached devices into the generic vport_set_mtu layer.
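The minimum-MTU computation itself is straightforward; this is a sketch with an invented helper name, not the real vport_set_mtu signature:

```c
#include <assert.h>

/* An internal device's MTU is capped at the smallest MTU among the
 * devices attached to the datapath. */
static int min_attached_mtu(const int *mtus, int n)
{
    int min = 65535;   /* assumed upper bound for illustration */
    for (int i = 0; i < n; i++)
        if (mtus[i] < min)
            min = mtus[i];
    return min;
}
```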
Places that update per-cpu stats without locking need to have bottom
halves disabled. Otherwise, while running in process context, we can
be interrupted by a softirq in the middle of an update.
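A userspace model of the invariant; the `local_bh_disable()`/`local_bh_enable()` macros below are stand-ins for the kernel primitives, and the counter stands in for a per-cpu statistic:

```c
#include <assert.h>
#include <stdint.h>

/* Stub flag modeling whether bottom halves are disabled. */
static int bh_disabled;
#define local_bh_disable() (bh_disabled = 1)
#define local_bh_enable()  (bh_disabled = 0)

static uint64_t tx_packets;   /* stands in for a per-cpu counter */

static void vport_update_tx_stats(void)
{
    /* The invariant the patch enforces: no lockless per-cpu update
     * without bottom halves disabled. */
    assert(bh_disabled);
    tx_packets++;
}

static void send_packet(void)
{
    local_bh_disable();
    vport_update_tx_stats();
    local_bh_enable();
}
```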
Enables checksum offloading, scatter/gather, and TSO on internal
devices. While these optimizations were not previously enabled on
internal ports, we could already receive these types of packets from
Xen guests. This has the obvious performance benefits when these
packets can be passed directly to hardware.
There is also a more subtle benefit for GRE on Xen. GRE packets
pass through OVS twice - once before encapsulation and once after
encapsulation, moving through an internal device in the process.
If it is an SG packet (as is common on Xen), a copy was necessary
to linearize it for the internal device. However, Xen uses the
memory allocator to track packets, so when the original packet is
freed after the copy, netback notifies the guest that the packet
has been sent, despite the fact that it is actually still sitting
in the transmit queue. The guest then sends packets as fast as the
CPU can handle, overflowing the transmit queue. By enabling SG on
the internal device, we avoid the copy and keep the accounting
correct.
In certain circumstances this patch can decrease performance for
TCP. TCP has its own mechanism for tracking in-flight packets
and therefore does not benefit from the corrected socket accounting.
However, certain NICs do not like SG when it is not being used for
TSO (these packets can no longer be handled by TSO after GRE
encapsulation). These NICs presumably enable SG even though they
can't handle it well because TSO requires SG.
Tested controllers (all 1G):
Marvell 88E8053 (large performance hit)
Broadcom BCM5721 (small performance hit)
Intel 82571EB (no change)
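The feature change itself amounts to setting a few bits; the flag values below are illustrative stand-ins for the kernel's NETIF_F_* bits on netdev->features:

```c
#include <assert.h>
#include <stdint.h>

#define F_SG      (1u << 0)   /* scatter/gather */
#define F_HW_CSUM (1u << 1)   /* checksum offload */
#define F_TSO     (1u << 2)   /* TCP segmentation offload */

struct internal_dev { uint32_t features; };

static void internal_dev_enable_offloads(struct internal_dev *dev)
{
    /* TSO requires SG, so the three are enabled together. */
    dev->features |= F_SG | F_HW_CSUM | F_TSO;
}
```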
We currently acquire dp_mutex when we are notified that the MTU
of a device attached to the datapath has changed so that we can
set the internal devices to the minimum MTU. However, it is not
required to hold dp_mutex because we already hold RTNL lock, and
taking it here causes a deadlock, so don't do it.
Specifically, the issue is that DP mutex is acquired twice: once in
dp_device_event() before calling set_internal_devs_mtu() and then
again in internal_dev_change_mtu() when it is actually being changed
(since the MTU can also be set directly). Since it's not a recursive
mutex, the second acquisition deadlocks.
Needed by XAPI to accurately report bond statistics.
Ugh.
Bug NIC-63.
Currently the datapath directly accesses devices through their
Linux functions. Obviously this doesn't work for virtual devices
that are not backed by an actual Linux device. This creates a
new virtual port layer which handles all interaction with devices.
The existing support for Linux devices is then implemented on top
of this layer as two device types, splitting out and renaming dp_dev
to internal_dev. There were several places where datapath devices
had to be handled in a special manner; this cleans that up by putting
all the special casing in a single location.
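A minimal sketch of the shape of such a layer, with illustrative names: the datapath dispatches through an ops table instead of calling Linux netdevice functions directly, so device types without a backing Linux device plug in the same way as real ones.

```c
#include <assert.h>
#include <string.h>

struct vport_ops {
    const char *type;
    int (*send)(int len);
};

static int netdev_send(int len)   { return len; }  /* real Linux device */
static int internal_send(int len) { return len; }  /* former dp_dev */

static const struct vport_ops netdev_vport_ops   = { "netdev",   netdev_send };
static const struct vport_ops internal_vport_ops = { "internal", internal_send };

/* The datapath no longer special-cases device kinds; it just
 * dispatches through the ops table. */
static int vport_send(const struct vport_ops *ops, int len)
{
    return ops->send(len);
}
```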