to troubleshoot bridging instabilities. IPvlan L2 mode is well suited for
isolated VLANs only trunked into a pair of ToRs that can provide a loop-free,
non-blocking fabric. The next step is to route at the edge via IPvlan L3 mode,
which reduces the failure domain to the local host only.
- L3 mode needs to be on a separate subnet from the default namespace, since it
requires a netlink route in the default namespace pointing to the IPvlan parent
interface.
- The parent interface used in this example is `eth0` and it is on the subnet
`192.168.1.0/24`. Notice the `docker network` is not on the same subnet
as `eth0`.
- Unlike IPvlan L2 modes, different subnets/networks can ping one another as
long as they share the same parent interface `-o parent=`.
```console
$ ip a show eth0
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:50:56:39:45:2e brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.250/24 brd 192.168.1.255 scope global eth0
```
- A traditional gateway doesn't mean much to an L3 mode IPvlan interface since
no broadcast traffic is allowed. Because of that, the container default
gateway points to the container's `eth0` device. See below for CLI output
of `ip route` or `ip -6 route` from inside an L3 container for details.
The mode `-o ipvlan_mode=l3` must be explicitly specified since the default
IPvlan mode is `l2`.
The following example does not specify a parent interface. Rather than
rejecting the network creation, the network drivers will create a dummy type
link for the user; containers on such a network are isolated and can only
communicate with one another.
```console
# Create the IPvlan L3 network
$ docker network create -d ipvlan \
    --subnet=192.168.214.0/24 \
    --subnet=10.1.214.0/24 \
    -o ipvlan_mode=l3 ipnet210

# Start a container on each subnet
$ docker run --net=ipnet210 --ip=192.168.214.10 -itd alpine /bin/sh
$ docker run --net=ipnet210 --ip=10.1.214.10 -itd alpine /bin/sh

# Test L3 connectivity from 192.168.214.0/24 to 10.1.214.0/24
$ docker run --net=ipnet210 --ip=192.168.214.9 -it --rm alpine ping -c 2 10.1.214.10

# Test L3 connectivity from 10.1.214.0/24 to 192.168.214.0/24
$ docker run --net=ipnet210 --ip=10.1.214.9 -it --rm alpine ping -c 2 192.168.214.10
```
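Because no `-o parent=` was given, the driver created a dummy link on the host
to serve as the parent interface. As a quick check, you can list dummy-type
links on the Docker host; the link name is assigned by the driver and will
vary by host:

```console
# On the Docker host: the driver-created parent appears among dummy-type links
$ ip link show type dummy
```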
> [!NOTE]
>
> Notice that there is no `--gateway=` option in the network create. The field
> is ignored if one is specified in `l3` mode. Take a look at the container
> routing table from inside of the container:
>
> ```console
> # Inside an L3 mode container
> $ ip route
> default dev eth0
> 192.168.214.0/24 dev eth0 src 192.168.214.10
> ```
In order to ping the containers from a remote Docker host, or for a container
to be able to ping a remote host, the remote host or the physical network in
between needs a route for the container subnets that points to the IP address
of the container's Docker host eth interface.
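For example, on a remote Linux host you could add static routes for the two
container subnets via the Docker host's `eth0` address (`192.168.1.250` in the
earlier output). This is a minimal sketch using this example's addresses; in
practice such routes are often distributed by a routing protocol instead:

```console
# On a remote host: route the container subnets via the Docker host's eth0 address
$ ip route add 192.168.214.0/24 via 192.168.1.250
$ ip route add 10.1.214.0/24 via 192.168.1.250
```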
### Dual stack IPv4 IPv6 IPvlan L2 mode
- Not only does Libnetwork give you complete control over IPv4 addressing, it
also gives you the same control over IPv6 addressing, with feature parity
between the two address families.
- The next example will start with IPv6 only. Start two containers on the same
VLAN `139` and ping one another. Since the IPv4 subnet is not specified, the
default IPAM will provision a default IPv4 subnet. That subnet is isolated
unless the upstream network is explicitly routing it on VLAN `139`.
```console
# Create a v6 network
$ docker network create -d ipvlan \
    --ipv6 --subnet=2001:db8:abc2::/64 --gateway=2001:db8:abc2::22 \
    -o parent=eth0.139 v6ipvlan139

# Start a container on the network
$ docker run --net=v6ipvlan139 -it --rm alpine /bin/sh
```
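To see which IPv4 subnet IPAM provisioned alongside the IPv6 one, you can
inspect the network's IPAM configuration; the `inet` line in the next output
shows the container address that was allocated from it:

```console
# Show the IPAM configuration (IPv4 and IPv6 subnets) for the network
$ docker network inspect -f '{{json .IPAM.Config}}' v6ipvlan139
```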
View the container eth0 interface and v6 routing table:
```console
# Inside the IPv6 container
$ ip a show eth0
75: eth0@if55: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether 00:50:56:2b:29:40 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.2/16 scope global eth0