{"id":2623,"date":"2016-07-21T11:19:42","date_gmt":"2016-07-21T11:19:42","guid":{"rendered":"https:\/\/live-infoblox-blog.pantheonsite.io\/?p=2623"},"modified":"2022-10-20T14:34:25","modified_gmt":"2022-10-20T21:34:25","slug":"using-the-infoblox-ipam-driver-for-docker","status":"publish","type":"post","link":"https:\/\/www.infoblox.com\/blog\/community\/using-the-infoblox-ipam-driver-for-docker\/","title":{"rendered":"Using the Infoblox IPAM Driver for Docker"},"content":{"rendered":"<p>As you saw in my previous blog\u00a0<a href=\"https:\/\/community.infoblox.com\/t5\/Community-Blog\/How-Docker-Networking-Works-and-the-Importance-of-IPAM\/ba-p\/6871\" target=\"_self\" rel=\"noopener noreferrer\">How Docker Networking Works and the Importance of IPAM Functionality<\/a>, Docker\u2019s networking model enables 3rd party vendors to \u2018plug-in\u2019 enterprise class network solutions. Docker requires the services of an IPAM infrastructure to enable the creation of network address spaces\/pools, subnets and the allocation of individual IP addresses for the container-based microservices. In a complex container deployment is important to have a service like Infoblox IPAM to help maintain consistency in a very dynamic multi-host environment dealing with IP address and network creation and deletions. \u00a0In this paper we examine the details of the Infoblox Docker IPAM driver for specific use cases and including command syntax.<\/p>\n<p>Our focus in this post is around the User Defined Network and more specifically the bridge type. Near the end, we will build a real example of a 3 node cluster configured to share a network.<\/p>\n<h2 id=\"toc-hId-649905281\">Network Subnet\/Addresses Lifecycle<\/h2>\n<p>You can create separate networks for different microservices based applications across multiple-hosts that do not need to interact, therefore isolating the traffic between containers. 
You can also create a common shared network across multiple hosts of cooperating applications and associated microservices. If each microservice has its own subnet, this can also simplify any security rules used to control traffic between microservices. Alternately, you may have pre-defined networks or VLANs within your environment to which you would like to attach containers. The Docker container networking model (CNM), and the competing Container Network Interface (CNI), enable the creation and management of these networks to serve all of these use cases and more.<\/p>\n<p>In this post, we will focus on Docker, with a later post showing similar functionality using CNI.<\/p>\n<p>In Docker, networks are created using the\u00a0<span style=\"font-family: courier new,courier;\">docker network create<\/span>\u00a0command. \u00a0This can be run manually, or by an orchestrator. In either case, you must specify the IPAM and network drivers to use, and can optionally specify a subnet. For example:<\/p>\n<pre><span style=\"font-family: courier new,courier; font-size: small;\">docker network create --driver bridge --ipam-driver infoblox --subnet 10.1.1.0\/24 blue\r\n<\/span><\/pre>\n<p>creates a bridge network\u00a0<span style=\"font-family: courier new,courier;\">blue<\/span>\u00a0with the specified subnet, using the\u00a0<span style=\"font-family: courier new,courier;\">infoblox<\/span>\u00a0IPAM driver (which has previously been started as a container, and has registered itself with the docker daemon via an API &#8211; more details below). The IPAM driver is invoked here with a \u201cRequestPool\u201d call to allocate the specified subnet.<\/p>\n<p>Optionally, the user can leave the subnet selection up to the IPAM system:<\/p>\n<pre><span style=\"font-family: courier new,courier; font-size: small;\">docker network create --driver bridge --ipam-driver infoblox red\r\n<\/span><\/pre>\n<p>creates a bridge network but does not pass any subnet into the IPAM driver. 
This allows the IPAM driver to decide on the subnet, using whatever logic or criteria are established by the driver. In our driver, we provide the\u00a0<em>next available network<\/em>\u00a0in this case; meaning the next subnet of the appropriate size that is available.<\/p>\n<p>Our driver provides options passed at driver startup that anchor these \u201cnext available networks\u201d in a larger pool. \u00a0In Infoblox terminology, this larger pool is called a \u201cnetwork container\u201d (not to be confused with a Docker container). For example, if we started up the driver with the Infoblox network container as \u201c10.10.0.0\/16\u201d and a default prefix length of \u201c24\u201d, then allocating several networks in a row without the\u00a0<span style=\"font-family: courier new,courier;\">&#8211;subnet\u00a0<\/span>option would allocate 10.10.0.0\/24, 10.10.1.0\/24, 10.10.2.0\/24, and 10.10.3.0\/24. If the user manually went into the grid master (GM) user interface and allocated, say, 10.10.4.0\/24 and 10.10.5.0\/24, then the next Docker network would get allocated as 10.10.6.0\/24. This ensures that there is no accidental IP overlap, and allows the network administrator to allocate a large pool (say, a \/18 or \/20) to the container infrastructure, leaving the detailed division of that pool to the application developers. This is a useful feature for organizing your address spaces as you plan a large deployment of container application clusters.<\/p>\n<p>The CLI and API also allow the user to specify arbitrary network and IPAM driver options. With our driver, the user can use this to control the prefix length of the subnet:<\/p>\n<pre><span style=\"font-family: courier new,courier; font-size: small;\">docker network create --driver bridge \\\r\n  --ipam-driver infoblox --ipam-opt prefix-length=28 green\r\n<\/span><\/pre>\n<p>creates a bridge network \u201cgreen\u201d with the \u201cnext available\u201d \/28 subnet. 
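To make the "next available network" behavior concrete, here is a small sketch using Python's standard ipaddress module. It illustrates the selection logic only; the function name and the in-memory list of allocated subnets are hypothetical, and the real driver asks the Infoblox grid for the next available network rather than computing it locally:

```python
import ipaddress

def next_available_subnet(container_cidr, allocated_cidrs, prefix_length):
    """Return the first subnet of the given prefix length inside the
    network container that does not overlap any allocated subnet."""
    container = ipaddress.ip_network(container_cidr)
    allocated = [ipaddress.ip_network(c) for c in allocated_cidrs]
    for candidate in container.subnets(new_prefix=prefix_length):
        if not any(candidate.overlaps(a) for a in allocated):
            return str(candidate)
    raise RuntimeError("network container exhausted")

# 10.10.0.0/24 through 10.10.5.0/24 are taken (whether allocated by
# Docker or manually in the grid UI), so the next network is 10.10.6.0/24.
taken = [f"10.10.{i}.0/24" for i in range(6)]
print(next_available_subnet("10.10.0.0/16", taken, 24))  # 10.10.6.0/24
```

Note that the search is by overlap, not by equality, so a manually allocated larger block inside the container is skipped over just like a Docker-allocated subnet would be.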
In this way, you can conserve IPs when creating subnets for specific applications. We also use the IPAM options to provide a way to specify a network by name, rather than forcing you to either specify a subnet or get the next available subnet. We will use this in the use case below to share a subnet between multiple hosts.<\/p>\n<p>Our IPAM driver must also support the various network drivers, so it accepts parameters that initialize separate network containers for the local and global address spaces.<\/p>\n<p>When our driver process starts up, the user can specify various parameters: \u00a0grid connectivity parameters, the network view name, the starting address for the network container (or a comma-separated list) and the default prefix length to be used when doing the \u201cnext available network\u201d. The prefix lengths and network containers are specified on a per-address-space basis, as you will generally have different subnet sizes and address ranges for the local and global spaces. 
For example, here is a command that starts our driver:<\/p>\n<pre><span style=\"font-family: courier new,courier; font-size: small;\">docker run -v \/var\/run:\/var\/run -v \/run\/docker:\/run\/docker infoblox\/ipam-driver \\\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0--plugin-dir \/run\/docker\/plugins --driver-name infoblox \\\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0--grid-host 172.22.128.240 --wapi-port 443 --wapi-username admin \\\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0--wapi-password infoblox --wapi-version 2.3 --ssl-verify false \\\r\n<strong>\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0--global-view docker-global \\\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0--global-network-container 172.22.192.0\/18 --global-prefix-length 25 \\\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0--local-view docker-local \\\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0--local-network-container 10.123.0.0\/16 --local-prefix-length 24\r\n<\/strong><\/span><\/pre>\n<p>This will allocate \u201cglobal\u201d subnets from 172.22.192.0\/18 in network view \u201cdocker-global\u201d, with a default prefix length of 25. \u201cLocal\u201d subnets will come from 10.123.0.0\/16 in network view \u201cdocker-local\u201d, with a default prefix length of 24.<\/p>\n<p>The other parameters specify how the Docker daemon communicates with the driver (the\u00a0<span style=\"font-family: courier new,courier;\">&#8211;plugin-dir<\/span>\u00a0and\u00a0<span style=\"font-family: courier new,courier;\">&#8211;driver-name\u00a0<\/span>as well as the\u00a0<span style=\"font-family: courier new,courier;\">-v\u00a0<\/span>mounts), and the connection parameters for the Infoblox grid.<\/p>\n<p>Finally, when the network is no longer needed, it is deleted with the\u00a0<span style=\"font-family: courier new,courier;\">docker network rm\u00a0<\/span>command. 
At that point, the IPAM driver will receive a ReleasePool API call so that the subnet may be de-allocated.<\/p>\n<h2 id=\"toc-hId-678534432\">Container\/Address Lifecycle<\/h2>\n<p>The IPAM driver is also called to allocate IP addresses on a given Docker network for a microservice container. The gateway address is always allocated right when the network is created. After that, addresses are allocated when an endpoint is attached to a container on a specific network. An \u201cendpoint\u201d is the Docker terminology for a connection between a specific container and a specific network.<\/p>\n<p>An endpoint is created when a container is run and a network is specified. For example,<\/p>\n<pre><span style=\"font-family: courier new,courier; font-size: small;\">docker run -it --net blue ubuntu \/bin\/bash<\/span><\/pre>\n<p>would create an Ubuntu container with an interface in the \u201cblue\u201d network, and run the\u00a0<span style=\"font-family: courier new,courier;\">\/bin\/bash<\/span>\u00a0command. At that time, the IPAM driver would be called to allocate an IP in the \u201cblue\u201d network via a \u201cRequestAddress\u201d API call.<\/p>\n<p>You may also create an endpoint by attaching an existing container to a network; in that case, the RequestAddress would be made at that time:<\/p>\n<pre><span style=\"font-family: courier new,courier; font-size: small;\">docker network connect blue mycontainer<\/span><\/pre>\n<p>On the tear-down side, you can use the\u00a0<span style=\"font-family: courier new,courier;\">docker network disconnect\u00a0<\/span>command, or it will be automatically disconnected when the container exits. This calls a \u201cReleaseAddress\u201d API on the IPAM driver.<\/p>\n<h2 id=\"toc-hId-707163583\">Example: A shared, flat container address space<\/h2>\n<p>Now that you have learned the basic commands for our IPAM driver to create networks and allocate IP addresses, let&#8217;s expand on how to put it to use. 
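Under the hood, the lifecycle calls described above (RequestPool, RequestAddress, ReleaseAddress, ReleasePool) reach the IPAM driver as JSON requests on the plugin socket it registered with the Docker daemon. The sketch below shows the rough shape of such a dispatcher with a purely in-memory pool table; it is a hypothetical illustration of the call flow, not the actual driver, which forwards each call to the Infoblox grid over WAPI:

```python
import ipaddress
import itertools

# In-memory stand-in for the Infoblox grid; the real driver forwards these
# calls to the grid master. Field names follow the libnetwork IPAM
# remote-driver JSON messages.
pools = {}
pool_ids = itertools.count(1)

def handle(call, req):
    """Dispatch one IPAM driver call and return its JSON-able response."""
    if call == "RequestPool":
        # `docker network create --subnet ...` passes Pool explicitly;
        # otherwise the driver is free to choose ("next available network").
        net = ipaddress.ip_network(req["Pool"])
        pid = str(next(pool_ids))
        pools[pid] = {"net": net, "used": set()}
        return {"PoolID": pid, "Pool": str(net), "Data": {}}
    if call == "RequestAddress":
        pool = pools[req["PoolID"]]
        for host in pool["net"].hosts():  # next unused IP in the subnet
            if host not in pool["used"]:
                pool["used"].add(host)
                return {"Address": f"{host}/{pool['net'].prefixlen}"}
        raise RuntimeError("pool exhausted")
    if call == "ReleaseAddress":
        pools[req["PoolID"]]["used"].discard(ipaddress.ip_address(req["Address"]))
        return {}
    if call == "ReleasePool":
        del pools[req["PoolID"]]
        return {}
    raise ValueError(f"unknown call {call}")
```

In the real plugin, these handlers sit behind HTTP endpoints on the Unix socket under the plugin directory, which is why the driver container mounts \/run\/docker\/plugins.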
In the previous blog, we discussed how the macvlan driver can enable containers to have addresses on the external network, but macvlan is new in Docker 1.12. Here, we demonstrate a simple technique to enable similar functionality using the \u2018user-defined\u2019 bridge driver and Infoblox IPAM driver, albeit with some manual intervention.<\/p>\n<p>In this scenario, we have three hosts with two NICs each attached to separate physical networks. One NIC, which we will call the\u00a0<i>management<\/i>\u00a0NIC, will be eth0 on these hosts and will have an IP address. The second NIC will remain\u00a0<i>unnumbered<\/i>, meaning it has no IP associated with it. Instead, it is just used to bridge the internal Linux bridge with the external physical network.<\/p>\n<p>For this example, we will use the logical network topology shown in Figure 1.<\/p>\n<p><span class=\"lia-message-image-wrapper\"><img decoding=\"async\" class=\"lia-media-image\" tabindex=\"0\" title=\"blog-3-bridged-container-net.png\" src=\"https:\/\/cixhp49439.i.lithium.com\/t5\/image\/serverpage\/image-id\/687iF8E9489C6A8EDDC0\/image-size\/original?v=v2&amp;px=-1\" alt=\"blog-3-bridged-container-net.png\" border=\"0\" \/><\/span><\/p>\n<p>Looking at the diagram, notice that each host is attached to the management network, and this is also where you will find the Infoblox appliance. There is an instance of the Docker daemon and the Infoblox IPAM driver running on each host. These two communicate with one another via a Unix domain socket, and the IPAM driver communicates with the Infoblox appliance via HTTPS over the management network. Also on each host is a bridge &#8211; this is the bridge that is created by the <span style=\"font-family: courier new,courier;\">docker network create<\/span>\u00a0command, which we will see how to do below. 
Additionally, we have added the eth1 interfaces to these bridges. Those interfaces are connected to another network, the container network. The host NICs themselves do not have IP addresses, nor does the bridge.<\/p>\n<p>Once this environment is completely configured, the parts shown in orange above will all constitute a single L2 broadcast domain. When a container puts a broadcast frame (say, an ARP request) on its local Linux bridge, the bridge will push that frame to all containers on the local host, as well as out the eth1 interface onto the external network. From there it will be delivered to the other hosts and in turn their containers. This means that traffic is allowed out the eth1 interfaces from any container to any other container.<\/p>\n<p>We also need a common L3 subnet to make the connectivity fully functional. This is where our IPAM driver comes in. Rather than independently allocating subnets, we use a driver-specific\u00a0<span style=\"font-family: courier new,courier;\">&#8211;ipam-opt<\/span>\u00a0flag (\u201c<span style=\"font-family: courier new,courier;\">network-name<\/span>\u201d) to tell the driver to use the same subnet on each host. The subnet is tagged in Infoblox with an extensible attribute corresponding to this value, allowing the driver to search Infoblox for the requested, named subnet. We\u2019ll see this in more detail below.<\/p>\n<p>Let\u2019s look at the steps needed to set this up. First, of course, we have to set up the physical (or virtual) networking of the hosts and the Infoblox appliance, and get Docker running on the hosts. In our example, we deployed a three-node CoreOS cluster on OpenStack, using VMs for the hosts. CoreOS already has Docker installed and running. We also ran a VM version of the Infoblox appliance in the same cloud. The specific cloud provider or physical infrastructure isn\u2019t critical, though different providers and infrastructure will need to be configured in different ways. 
For OpenStack, the Heat templates and scripts used may be found at\u00a0<a href=\"https:\/\/github.com\/infobloxopen\/engcloud\/tree\/master\/mddi\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">https:\/\/github.com\/infobloxopen\/engcloud\/tree\/master\/mddi<\/a>\u00a0if you wish to duplicate the environment.<\/p>\n<p>Once we have the basic network topology described above, we need to finish the picture by instantiating the Infoblox IPAM driver on each host, creating the appropriate bridges via\u00a0<span style=\"font-family: courier new,courier;\">docker network create<\/span>, and running some containers to test it out.<\/p>\n<p>To run the IPAM driver, you log into the first host and simply run this command (modifying the Infoblox connectivity parameters for your environment):<\/p>\n<pre><span style=\"font-family: courier new,courier; font-size: small;\">docker run --detach -v \/var\/run:\/var\/run -v \/run\/docker:\/run\/docker \\\r\n  infoblox\/ipam-driver --grid-host mddi-gm.engcloud.infoblox.com \\\r\n  --wapi-username admin --wapi-password infoblox \\\r\n  --local-view default --local-network-container \"10.0.0.0\/8\" \\\r\n  --local-prefix-length 22<\/span><\/pre>\n<p>This will execute the containerized version of the IPAM driver. We don\u2019t bother to specify the\u00a0<span style=\"font-family: courier new,courier;\">global<\/span>\u00a0view and other parameters, since we aren\u2019t using them in this exercise. After this runs, we can check the logs to make sure the container came up properly:<\/p>\n<pre><span style=\"font-family: courier new,courier; font-size: small;\">core@host-0 ~ $ docker run --detach ... 
(the command above) ...\r\na4a5345059679226e0e769efbe7677223a7d07b91669e838145f5be639aaa579\r\ncore@host-0 ~ $ docker logs a4a5\r\n2016\/07\/13 17:51:42 Created Plugin Directory: '\/run\/docker\/plugins'\r\n2016\/07\/13 17:51:42 Driver Name: 'infoblox'\r\n2016\/07\/13 17:51:42 Socket File: '\/run\/docker\/plugins\/infoblox.sock'\r\n2016\/07\/13 17:51:42 Docker id is '6EDV:6ZSN:DA4P:MS3B:PNWH:IL7B:JKI2:EZQY:NSAT:FXMV:SSPO:4QNA'\r\ncore@host-0 ~ $<\/span><\/pre>\n<p>Next, let\u2019s create the docker network. We are going to create a bridge network, which is normally considered to be a \u201chost local\u201d network. That is, it can communicate outbound via NAT, but no inbound traffic is possible without port mappings. In our case, we will use a few extra commands to allow containers on the bridge to communicate directly with containers on other hosts. The command<\/p>\n<pre><span style=\"font-family: courier new,courier; font-size: small;\">docker network create --ipam-driver infoblox \\\r\n\u00a0 --ipam-opt network-name=container-net \\\r\n\u00a0 --driver bridge \\\r\n\u00a0 --opt com.docker.network.bridge.name=container-net \\\r\n\u00a0 container-net<\/span><\/pre>\n<p>instantiates a new bridge on this host (<span style=\"font-family: courier new,courier;\">&#8211;driver bridge<\/span>). The option passed via\u00a0<span style=\"font-family: courier new,courier;\">&#8211;opt<\/span>\u00a0allows us to name the bridge, making future commands a little easier. The\u00a0<span style=\"font-family: courier new,courier;\">&#8211;ipam-driver<\/span>\u00a0option is what tells Docker to use the Infoblox IPAM driver, which has already registered itself with the Docker daemon. And the\u00a0<span style=\"font-family: courier new,courier;\">&#8211;ipam-opt<\/span>\u00a0is used by the driver to choose the right subnet. It will look for a network in Infoblox with the specified name; if one is not found, it will allocate it. Otherwise, it will re-use that subnet. 
This option changes the \u201cnext available network\u201d behavior that would normally be associated with the\u00a0<span style=\"font-family: courier new,courier;\">network create<\/span>\u00a0command without a\u00a0<span style=\"font-family: courier new,courier;\">&#8211;subnet<\/span>\u00a0option.<\/p>\n<p>Let\u2019s look at the output of the host\u00a0<span style=\"font-family: courier new,courier;\">ip addr<\/span>\u00a0command after running this:<\/p>\n<pre><span style=\"font-family: courier new,courier; font-size: small;\">core@host-0 ~ $ ip addr\r\n[...snip irrelevant lines...]\r\n8: container-net: &lt;NO-CARRIER,BROADCAST,MULTICAST,UP&gt; mtu 1500 qdisc noqueue state DOWN group default\r\n\u00a0\u00a0\u00a0link\/ether 02:42:71:3a:0f:5f brd ff:ff:ff:ff:ff:ff\r\n\u00a0\u00a0\u00a0inet 10.0.0.1\/22 scope global container-net\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0valid_lft forever preferred_lft forever\r\ncore@host-0 ~ $\r\n<\/span><\/pre>\n<p>A bridge\u00a0<em>container-net<\/em>\u00a0has been created, and it has been given the IP 10.0.0.1. 
Looking back in Infoblox in Figure 2, we see that the network container was created.<\/p>\n<p><span class=\"lia-message-image-wrapper\"><img decoding=\"async\" class=\"lia-media-image\" tabindex=\"0\" title=\"blog-3-figure-2.png\" src=\"https:\/\/cixhp49439.i.lithium.com\/t5\/image\/serverpage\/image-id\/688iA3EBB5CDC7511BA8\/image-size\/original?v=v2&amp;px=-1\" alt=\"blog-3-figure-2.png\" border=\"0\" \/><\/span><\/p>\n<p>Drilling into that in Figure 3, we can see that a subnet was allocated, and the \u201cNetwork Name\u201d extensible attribute was assigned our \u201ccontainer-net\u201d name.<\/p>\n<p><span class=\"lia-message-image-wrapper\"><img decoding=\"async\" class=\"lia-media-image\" tabindex=\"0\" title=\"blog-3-figure-3.png\" src=\"https:\/\/cixhp49439.i.lithium.com\/t5\/image\/serverpage\/image-id\/689i93434611D8C2B0EF\/image-size\/original?v=v2&amp;px=-1\" alt=\"blog-3-figure-3.png\" border=\"0\" \/><\/span><\/p>\n<p>Finally, looking at Figure 4, we see that the bridge IP address 10.0.0.1 has been allocated.<\/p>\n<p><span class=\"lia-message-image-wrapper\"><img decoding=\"async\" class=\"lia-media-image\" tabindex=\"0\" title=\"blog-3-figure-4.png\" src=\"https:\/\/cixhp49439.i.lithium.com\/t5\/image\/serverpage\/image-id\/690i059DFF70EBD729D2\/image-size\/original?v=v2&amp;px=-1\" alt=\"blog-3-figure-4.png\" border=\"0\" \/><\/span><\/p>\n<p>This is a problem. We already have an infrastructure router with address 10.0.0.1. 
We don\u2019t even really want the bridge to have an IP &#8211; ideally we would be able to tell Docker not to give it one &#8211; and just tell it to use 10.0.0.1 as the default gateway for all of the containers. However, the bridge driver does not offer that level of configuration &#8211; we\u2019ll have to wait for the macvlan driver for that.<\/p>\n<p>The IPAM driver works around this issue by always returning the same IP for the same MAC; it happens that when Docker requests an IP for the gateway, it always passes MAC 00:00:00:00:00:00. So, as long as we have reserved that IP in Infoblox without a MAC (a \u201cReservation,\u201d not a \u201cFixed Address\u201d), the driver will end up returning the reserved router IP. Then, we can just remove this IP from the bridge; the containers will be configured by Docker to have the right gateway, since Docker still sees 10.0.0.1 as the gateway address:<\/p>\n<pre><span style=\"font-family: courier new,courier; font-size: small;\">core@host-0 ~ $ sudo ip addr del 10.0.0.1\/22 dev container-net<\/span><\/pre>\n<p>We could have just let Docker allocate an IP for the bridge (that is, have the IPAM driver return a new IP even though MAC 00:00:00:00:00:00 already has one). However, this is more of an issue than it sounds. In that case, a container\u2019s default route will point to the bridge IP, and any traffic bound for a network other than 10.0.0.0\/22 will be routed through the host network namespace and out the host management interface, rather than via 10.0.0.1 on eth1. 
This means that services outside of 10.0.0.0\/22 will see the requests coming from the NAT\u2019d host IPs, rather than from the original container IP.<\/p>\n<p>Now let\u2019s launch a container and try to ping the infrastructure router at 10.0.0.1:<\/p>\n<pre><span style=\"font-family: courier new,courier; font-size: small;\">core@host-0 ~ $ docker run -it --net container-net alpine sh\r\n\/ # ping 10.0.0.1\r\nPING 10.0.0.1 (10.0.0.1): 56 data bytes\r\n^C\r\n--- 10.0.0.1 ping statistics ---\r\n7 packets transmitted, 0 packets received, 100% packet loss\r\n\/ #<\/span><\/pre>\n<p>So, what went wrong? Why couldn\u2019t we ping the external router? Well, we forgot to add eth1 to the bridge. Let\u2019s fix that now by running the following command to add the eth1 interface to the container-net bridge:<\/p>\n<pre><span style=\"font-family: courier new,courier; font-size: small;\">\/ # exit\r\ncore@host-0 ~ $ sudo brctl addif container-net eth1<\/span><\/pre>\n<p>This \u201cplugs\u201d the eth1 interface into the container-net bridge, thus connecting the bridge to the outside network. Let\u2019s try that ping again:<\/p>\n<pre><span style=\"font-family: courier new,courier; font-size: small;\">core@host-0 ~ $ docker run -it --net container-net alpine sh\r\n\/ # ping 10.0.0.1\r\nPING 10.0.0.1 (10.0.0.1): 56 data bytes\r\n64 bytes from 10.0.0.1: seq=0 ttl=64 time=1.877 ms\r\n64 bytes from 10.0.0.1: seq=1 ttl=64 time=0.482 ms\r\n^C\r\n--- 10.0.0.1 ping statistics ---\r\n2 packets transmitted, 2 packets received, 0% packet loss\r\nround-trip min\/avg\/max = 0.482\/1.179\/1.877 ms\r\n\/ #<\/span><\/pre>\n<p>And there we have it! The container attached to the internal bridge can talk to the outside router.<\/p>\n<p>Now, let\u2019s do the same thing on our other two hosts by running the same commands. 
Open separate ssh sessions to each host, and execute these commands:<\/p>\n<pre><span style=\"font-family: courier new,courier; font-size: small;\">docker run --detach -v \/var\/run:\/var\/run -v \/run\/docker:\/run\/docker \\\r\n  infoblox\/ipam-driver --grid-host mddi-gm.engcloud.infoblox.com \\\r\n  --wapi-username admin --wapi-password infoblox \\\r\n  --local-view default --local-network-container \"10.0.0.0\/8\" \\\r\n  --local-prefix-length 22\r\ndocker network create --ipam-driver infoblox \\\r\n  --ipam-opt network-name=container-net \\\r\n  --driver bridge \\\r\n  --opt com.docker.network.bridge.name=container-net \\\r\n  container-net\r\nsudo ip addr del 10.0.0.1\/22 dev container-net\r\nsudo brctl addif container-net eth1\r\ndocker run -it --net container-net alpine sh<\/span><\/pre>\n<p>You should now see the \u201c<span style=\"font-family: courier new,courier;\">\/ #<\/span>\u201d prompt for the alpine container on each host. If you look back in the Infoblox appliance, you will see something like Figure 5, showing the different allocated IPs and the associated container MAC addresses.<\/p>\n<p><span class=\"lia-message-image-wrapper\"><img decoding=\"async\" class=\"lia-media-image\" tabindex=\"0\" title=\"blog-3-figure-5.png\" src=\"https:\/\/cixhp49439.i.lithium.com\/t5\/image\/serverpage\/image-id\/691i7354F7685078B491\/image-size\/original?v=v2&amp;px=-1\" alt=\"blog-3-figure-5.png\" border=\"0\" \/><\/span><\/p>\n<p>For demonstration purposes, I reserved an IP address (\u201cReserved IP\u201d) via the Infoblox UI before running the container on the third host. 
When Docker requests the next IP, Infoblox skips that IP.<\/p>\n<p>On one of the hosts, run the\u00a0<span style=\"font-family: courier new,courier;\">ip addr<\/span>\u00a0command to get the IP address (you can also see that the MAC matches the one in Figure 5):<\/p>\n<pre><span style=\"font-family: courier new,courier; font-size: small;\">\/ # ip addr\r\n1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue state UNKNOWN qlen 1\r\n\u00a0\u00a0\u00a0link\/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00\r\n\u00a0\u00a0\u00a0inet 127.0.0.1\/8 scope host lo\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0valid_lft forever preferred_lft forever\r\n\u00a0\u00a0\u00a0inet6 ::1\/128 scope host\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0valid_lft forever preferred_lft forever\r\n11: eth0@if12: &lt;BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN&gt; mtu 1500 qdisc noqueue state UP\r\n\u00a0\u00a0\u00a0link\/ether <span style=\"color: #ff0000;\"><strong>02:42:9a:e5:b5:2d<\/strong><\/span> brd ff:ff:ff:ff:ff:ff\r\n\u00a0\u00a0\u00a0inet <span style=\"color: #ff0000;\"><strong>10.0.0.3\/22<\/strong><\/span> scope global eth0\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0valid_lft forever preferred_lft forever\r\n\u00a0\u00a0\u00a0inet6 fe80::42:9aff:fee5:b52d\/64 scope link\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0valid_lft forever preferred_lft forever\r\n\/ #<\/span><\/pre>\n<p>Then, let\u2019s listen on a network socket in that same container with:<\/p>\n<pre><span style=\"font-family: courier new,courier; font-size: small;\">\/ # nc -l -p 2000<\/span><\/pre>\n<p>Switching back to another host, you can see the cross-host networking working by using\u00a0<span style=\"font-family: courier new,courier;\">telnet<\/span>\u00a0to connect to the specific host and port above:<\/p>\n<pre><span style=\"font-family: courier new,courier; font-size: small;\">\/ # telnet 10.0.0.3 2000\r\nHi from host-0!<\/span><\/pre>\n<p>Back on the host running\u00a0<span style=\"font-family: courier 
new,courier;\">nc<\/span>, we see the message show up:<\/p>\n<pre><span style=\"font-family: courier new,courier; font-size: small;\">\/ # nc -l -p 2000\r\nHi from host-0!<\/span><\/pre>\n<h2 id=\"toc-hId-735792734\">Conclusion<\/h2>\n<p>In this post, you learned about the basic Docker networking commands, and how the use of Docker network commands improves the integration of networking within the Docker infrastructure. Furthermore you can see how the use of an external, centralized IPAM can increase the flexibility of your Docker networking solution and enable cross-host networking without the complexity and performance concerns of overlays.<\/p>\n<p>&nbsp;<\/p>\n<p>One thing that would make the external IPAM even more useful would be for the Infoblox driver to capture information about the Docker containers that own each IP &#8211; for example, the host name, container name and other meta-data. Unfortunately right now this is only possible to a limited extent, because the Docker IPAM interface does not pass the information needed. We have submitted a pull request to Docker (<a href=\"https:\/\/github.com\/docker\/libnetwork\/pull\/977\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">https:\/\/github.com\/docker\/libnetwork\/pull\/977<\/a>) to address this, and hope to see that added. In a future version of the driver, we will work around this issue to get some of the information by querying the Docker API using the MAC address. 
This will work for container-specific information, but not for network information (at least, not efficiently).<\/p>\n<p>In the next post, we will learn about the alternate networking stack for containers, the Container Network Interface (CNI), and see how we can achieve a similar result with that.<\/p>\n","protected":false},"author":206,"featured_media":2626,"categories":[3],"tags":[118,51]}