Multi-Protocol PPP

Users can run IP, IPv6 and IPX on a PPP session over RS-232 or emulations of it.

PPP over Serial

Server End

A file like this could be the ppp peers file at the server end.

ipx-network 0x00000002
ipx-router-name rs232
ipx-node 1:2
ipx-routing 2
ipx-routing 4
ipv6 ::1,::2
mtu 65535
mru 65535

User end

I have this ppp peers configuration file at the guest's end of the connection. It is configured to autodetect just about everything that pppd can run.

mtu 65535
mru 65535
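With files like these saved under /etc/ppp/peers/ at each end, the serial link can then be brought up with something like the following (the peer name multiproto, serial device and speed are just examples):

  pppd call multiproto /dev/ttyS0 115200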

Jumboframe PPPoE

PPP can run within ethernet. We can set up tunnels of this sort to test ppp client functions.

It has been found that both pppd and pppoe-server need patching to attempt huge MTU negotiation per RFC 4638. The source comments point out this may trigger issues, though as this is research that is not a concern.

Server and client are best tried in separate machines or VMs, so you can check that jumbos work and observe the goodies via Wireshark. The interfaces need to see each other via ethernet, though that can be a VLAN or bridge.

Start a pppoe-server; it uses the client parts of pppd (though possibly not the plugin), so after the initial negotiation you will see it breed pppd pet processes, recognisable by their "rp_pppoe_sess" and "rp_pppoe_service" options.
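A sketch of a server invocation using the stock rp-pppoe flags (interface name and addresses are examples; -F keeps it in the foreground):

  pppoe-server -F -I eth0 -L 192.0.2.1 -R 192.0.2.100 -N 4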

Start a client; it may also be possible to use the OS's stock pppd, as the modifications mainly target the server side.
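A minimal client invocation using the rp-pppoe pppd plugin might look like the following; the interface name and MTU are examples, and a patched pppd is assumed for MTUs above 1492:

  pppd plugin rp-pppoe.so eth0 mtu 6120 mru 6120 noauth noipdefault defaultroute debug nodetach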

With this change applied, I got ppp to negotiate an MTU limited only by the underlying ethernet MTU in most cases; here it is an MTU of 6120, yes, bigger than 1500. Superjumbos have now even been observed passing back and forth with the kernel-mode driver.

I think we can go even bigger, supersize even. To test this between containers running on the same physical machine, it may be necessary to open /dev/ppp to userland with an /etc/udev/rules.d/example.rules rule like:
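One sketch of such a rule, assuming the stock kernel device name and that world read/write access is acceptable for the experiment:

  KERNEL=="ppp", MODE="0666"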

With virtual ethernet between containers, physical limitations are out of the way, so it can be seen just how big a jumbo frame can really go with pppoe:

Launch a couple of network dens, presuming the host's systemd-networkd catches the veth peers and attaches them to a host bridge:

  netden ()
  {
      unshare -Unrum --propagation unchanged /bin/sh -c 'mount -t ramfs none $HOME; ip link add eth0 mtu 65535 up type veth peer name rt_$(printf %x $(ps -p $$ -o netns=)) netns 1; /bin/bash'
  }

65535 is the most veth allows for now. Using pppoe over it, the containers can exchange pings, even an enormous "ping -Mdo -s 65498".
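Working backwards from that ping size: with ping -M do the ICMP payload is the link MTU minus the 20-byte IPv4 header and the 8-byte ICMP header, so -s 65498 implies a PPP MTU of 65526 here. A quick sketch of the arithmetic:

```shell
# ICMP payload that fits in one unfragmented IPv4 packet:
# the MTU minus a 20-byte IPv4 header minus an 8-byte ICMP header.
max_ping_payload() {
    echo $(( $1 - 20 - 8 ))
}

max_ping_payload 65526   # the PPP MTU implied above -> 65498
max_ping_payload 1500    # plain ethernet for comparison -> 1472
```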

Is it any good on ADSL or VDSL?

This depends upon the provider. Going beyond 1500 is expected to make many things possible, such as IPv6 tunnels which are very nearly as good as native where there is jumbo connectivity to the tunnel provider, or bringing mobile data home natively.

PPPoA on ADSL was not so limited by MTU caps in pppd, so on some ISPs it was possible to get bigger.

Testing pppoe jumboframe support

This has been tried with the PPPoE client in Windows 10; possibly some router appliances can be evaluated this way as well. The Windows 10 WAN Miniport driver seems not to do RFC 4638, let alone go beyond 1500; it may get retested upon updates to see if that changes. Other routers are unchecked so far.


A file like this is used at the other end, substituting the IPv4 addresses as needed.


With IPv6, only the link-local addresses are specified. Radvd is run on the host's end of the connection in the same way as it is for ethernet, and the pppd at the guest end then automatically puts the IPv6 network prefix from radvd together with the interface address from the ipv6 option in the peers file above to form the full ipv6 address.

Example: After the connection is up, firstly give the local end a real address like this: ifconfig ppp0 add 2001:db8::1/64

Then when radvd is run or SIGHUP'd with the following file, the guest ppp end automatically gets the IPv6 address 2001:db8::2/64.

interface ppp0
{
        IgnoreIfMissing on;
        AdvSendAdvert on;
        prefix 2001:db8::/64
        {
                AdvOnLink on;
                AdvAutonomous on;
        };
};

It's useful to put the ifconfig ppp0 add 2001:db8::1/64 and killall -HUP radvd in a suitable script in /etc/ppp/ipv6-up.d/, tied to the corresponding ppp connection.
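A hypothetical script of that sort (the filename and address are examples, and it assumes the Debian convention where /etc/ppp/ipv6-up exports the interface name as PPP_IFACE to the run-parts scripts):

  #!/bin/sh
  # Example /etc/ppp/ipv6-up.d/radvd-prefix: runs when a ppp link comes up.
  if [ "$PPP_IFACE" = "ppp0" ]; then
      ifconfig "$PPP_IFACE" add 2001:db8::1/64
      killall -HUP radvd
  fi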

It may also be necessary to add a route to the ppp host from elsewhere in the network if it does not already receive data destined for the chosen IPv6 prefix. Something like ip -f inet6 route add 2001:db8::/64 dev eth0 may do it.


As of 2018, neither ifconfig nor iproute2 will configure IPX addresses, though ifconfig shows them, and iproute2 instead lets you configure DECnet addresses. As of 2019 IPX has been removed completely from Linux, so continued use needs to move to userspace.

Utilities to configure IPX addressing are in ncpfs, which is no longer carried by Debian, so a local build is needed.

  ./ipx_configure --auto-primary on --auto-interface on
  ./ipx_interface add bridge.vlan EtherII

socat can be used to send and receive IPX packets. Following their address format, the layout is a 16-bit socket, a 32-bit network, a 48-bit node (here the broadcast address) and an 8-bit type.

  socat SOCKET-RECV:4:2:4:x500000000000FFFFFFFFFFFF1100 -
  socat SOCKET-SENDTO:4:2:4:x500000000000FFFFFFFFFFFF1100 -

After IPX is gone from the kernel, it needs packet sockets, something like:

  socat SOCKET-SENDTO:17:2:$((0x8137)):x8137$(printf %02x $(</sys/class/net/br41/ifindex))00000000000000ACDE482345670000 -
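The address string can be composed with a hypothetical helper that mirrors the one-liner above: the 2-byte ethertype, the interface index, zero padding, the destination MAC, and 2 bytes of trailing padding:

```shell
# Compose a socat AF_PACKET address string from an interface index
# and a destination MAC (colons stripped). Assumes the ifindex fits
# in a single byte, as does the printf %02x in the one-liner above.
ipx_pkt_addr() {
    printf 'x8137%02x00000000000000%s0000' "$1" "$2"
}

ipx_pkt_addr 3 ACDE48234567   # -> x81370300000000000000ACDE482345670000
```

It could then be used as, for example, socat SOCKET-SENDTO:17:2:$((0x8137)):$(ipx_pkt_addr $(</sys/class/net/br41/ifindex) ACDE48234567) -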

net-snmp can also be recompiled for exotic (i.e. non-Internet Protocol) transports by adding the following to the ./configure call in debian/rules; the ATM transport needs libatm1-dev:

  --with-transports="UDPIPv6 TCPIPv6 UDP TCP Alias Unix Callback AAL5PVC IPX"

and IPX can then be used with it

  ./snmpwalk ipx::FFFFFFFFFFFF

You might not be running this protocol, but some games like to use it. IPX was also once given its own allocation in the IPv6 addressing plan, so IPv6 address fields can represent IPX and IPv4 addresses as well as native IPv6.

ipxripd may also be needed, but version 0.7 will not run on Linux 2.6 unless built with a slight change to the source code, like so:

--- ipxripd-0.7/ipxkern.c	2005-08-30 18:30:40.000000000 +0100
+++ ipxripd-0.7/ipxkern.c	2005-08-30 18:31:19.000000000 +0100
@@ -57,7 +57,7 @@
 	FILE *ipx_route;
 	char buf[512];
-	ipx_route = fopen("/proc/net/ipx_route", "r");
+	ipx_route = fopen("/proc/net/ipx/route", "r");
 	if (ipx_route == NULL)
@@ -115,7 +115,7 @@
 	FILE *ipx_ifc;
 	char buf[512];
-	ipx_ifc = fopen("/proc/net/ipx_interface", "r");
+	ipx_ifc = fopen("/proc/net/ipx/interface", "r");
 	if (ipx_ifc == NULL)

It can be applied and installed as follows:

apt-get source ipxripd
patch -p0 < patchfile
cd ipxripd-0.7/
dpkg-buildpackage -rfakeroot
cd ..
dpkg -i *.deb

IPX configuration

Firstly you'd enable it on the network of the ppp host. The master of that network (i.e. the DHCP/radvd server) would get the following in /etc/network/interfaces:

iface eth0 ipx static
        netnum 1
        frame EtherII

The other machines would have the following appended to /etc/network/interfaces, although they could also be statically configured if you don't mind having to change them all whenever you want to change the network number.

iface eth0 ipx dynamic
        frame EtherII

The static means that the machine will use the specified IPX network number. It needs to be different for each IPX subnet in a connected internetwork of subnets.

The dynamic means that the machine does not use IPX at all until it sees an IPX packet, and then communicates using the IPX network number seen in that packet. Some documentation refers to this as network number 0, meaning "I don't have a configured network number".

Now you hopefully have an IPX-enabled ethernet subnet, with entries in /proc/net/ipx showing the information. ipxripd needs to be running too, so that ppp will get its routes copied into the tables.

pppd can now participate, and ipxd broadcasts routes to the ethernet when it is started. If your ppp guest is a Windows 98 machine, it may need to be told the Network Address under Advanced, set the same as ipx-network in the host's ppp peers file.