$ docker run --net=ipvlan114 --ip=192.168.114.11 -it --rm alpine /bin/sh
```
A key takeaway is that operators have the ability to map their physical network into
their virtual network to integrate containers into their environment with no
operational overhauls required. NetOps drops an 802.1Q trunk into the
Docker host. That virtual link is the value passed to `-o parent=` when creating the
network. For untagged (non-VLAN) links, it is as simple as `-o parent=eth0`; for
802.1Q trunks with VLAN IDs, each network gets mapped to the corresponding
VLAN and subnet from the network.
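
As a rough sketch of the untagged case, a network can be created directly on `eth0`. The network name `untagged_net` and the gateway address are illustrative assumptions; the subnet matches the `eth0` subnet shown later in this example:

```console
$ docker network create -d ipvlan \
    --subnet=192.168.1.0/24 \
    --gateway=192.168.1.1 \
    -o parent=eth0 untagged_net
```
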
For example, NetOps provides the VLAN IDs and the associated subnets for the VLANs
being passed on the Ethernet link to the Docker host server. Those values are
plugged into the `docker network create` commands when provisioning the
Docker networks. These are persistent configurations that are applied every time
the Docker Engine starts, which alleviates having to manage often complex
configuration files. The parent interfaces can also be managed manually: if they
are pre-created, Docker networking uses them as parent interfaces and never
modifies them. Example mappings from NetOps to Docker network commands are as
follows (a full `docker network create` command built from the first mapping is
shown after the list):
- VLAN: 10, Subnet: 172.16.80.0/24, Gateway: 172.16.80.1
- `--subnet=172.16.80.0/24 --gateway=172.16.80.1 -o parent=eth0.10`
- VLAN: 20, Subnet: 172.16.50.0/22, Gateway: 172.16.50.1
- `--subnet=172.16.50.0/22 --gateway=172.16.50.1 -o parent=eth0.20`
- VLAN: 30, Subnet: 10.1.100.0/16, Gateway: 10.1.100.1
- `--subnet=10.1.100.0/16 --gateway=10.1.100.1 -o parent=eth0.30`
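
Taking the first mapping as an example, the full provisioning command might look like the following sketch. The network name `vlan10_net` is an arbitrary choice, not something NetOps provides:

```console
$ docker network create -d ipvlan \
    --subnet=172.16.80.0/24 \
    --gateway=172.16.80.1 \
    -o parent=eth0.10 vlan10_net
```
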
### IPvlan L3 mode example
IPvlan requires routes to be distributed to each endpoint. The driver only
builds the IPvlan L3 mode port and attaches the container to the interface. Route
distribution throughout a cluster is beyond the initial implementation of this
single-host-scoped driver. In L3 mode, the Docker host acts much like a
router starting new networks in the container. Those networks are unknown to the
upstream network without route distribution. For those curious how IPvlan L3
fits into container networking, see the following examples.
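
To sketch what route distribution means in practice: without a routing protocol, an upstream Linux router (or another host on `192.168.1.0/24`) would need a static route pointing a container subnet at the Docker host. The container subnet `192.168.214.0/24` here is purely illustrative; `192.168.1.250` is the Docker host's `eth0` address shown below:

```console
$ ip route add 192.168.214.0/24 via 192.168.1.250
```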

IPvlan L3 mode drops all broadcast and multicast traffic. This reason alone
makes IPvlan L3 mode a prime candidate for those looking for massive scale and
predictable network integrations. It is predictable and in turn leads to
greater uptime because there is no bridging involved. Bridging loops have been
responsible for high-profile outages that can be hard to pinpoint depending on
the size of the failure domain. This is due to the cascading nature of BPDUs
(Bridge Protocol Data Units) that are flooded throughout a broadcast domain (VLAN)
to find and block topology loops. Eliminating bridging domains, or at the least,
keeping them isolated to a pair of ToRs (top of rack switches), reduces hard-to-troubleshoot
bridging instabilities. IPvlan L2 mode is well suited for
isolated VLANs only trunked into a pair of ToRs that can provide a loop-free,
non-blocking fabric. The next step is to route at the edge via IPvlan L3
mode, which reduces the failure domain to the local host only.
- L3 mode needs to be on a separate subnet from the default namespace, since it
requires a netlink route in the default namespace pointing to the IPvlan parent
interface.
- The parent interface used in this example is `eth0` and it is on the subnet
`192.168.1.0/24`. Notice that the `docker network` is not on the same subnet
as `eth0`.
- Unlike IPvlan L2 mode, different subnets/networks can ping one another as
long as they share the same parent interface (`-o parent=`).
```console
$ ip a show eth0
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:50:56:39:45:2e brd ff:ff:ff:ff:ff:ff
inet 192.168.1.250/24 brd 192.168.1.255 scope global eth0