Summary of Docker Network Configuration Methods
When Docker starts, it creates a virtual network interface named docker0 on the host. By default it picks 172.17.42.1/16; the 16-bit netmask gives containers up to 65534 usable IP addresses. docker0 is simply a virtual Ethernet bridge that automatically forwards packets between the interfaces attached to it, which lets containers talk to the host and to one another. The problem is how to let Docker containers on different hosts communicate. Configuring Docker networking effectively is still fairly complex at present, so many open-source projects have emerged to address it, such as flannel, Kubernetes, weave, and pipework.
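As a quick sanity check on the /16 figure above, the number of usable host addresses in a subnet can be computed in the shell (a minimal sketch; two addresses are reserved for the network and broadcast addresses):

```shell
# Usable host addresses for a given prefix length:
# 2^(32 - prefix) minus the network and broadcast addresses.
usable_hosts() {
  prefix=$1
  echo $(( (1 << (32 - prefix)) - 2 ))
}

usable_hosts 16   # docker0's default /16 -> 65534
usable_hosts 24   # a typical per-host /24 -> 254
```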
1. flannel
Produced by the CoreOS team, flannel is an etcd-based overlay network service that gives each host its own subnet. flannel (originally named Rudder) simplifies network configuration for Docker containers in a cluster, avoids subnet conflicts between containers on different hosts, and greatly reduces the amount of port-mapping work. The code is at https://github.com/coreos/flannel. It works as follows:
An overlay network is first configured with an IP range and the size of the subnet for each host. For example, one could configure the overlay to use 10.100.0.0/16 and each host to receive a /24 subnet. Host A could then receive 10.100.5.0/24 and host B could get 10.100.18.0/24. flannel uses etcd to maintain a mapping between allocated subnets and real host IP addresses. For the data path, flannel uses UDP to encapsulate IP datagrams to transmit them to the remote host. We chose UDP as the transport protocol for its ease of passing through firewalls. For example, AWS Classic cannot be configured to pass IPoIP or GRE traffic as its security groups only support TCP/UDP/ICMP. (Quoted from https://coreos.com/blog/introducing-rudder/)
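Concretely, flannel reads its overlay configuration from a well-known etcd key. A configuration sketch, assuming etcd and the etcdctl client are available on the host (key path and JSON schema as used by flannel at the time):

```shell
# Store the overlay configuration where flanneld looks for it:
# a /16 for the whole cluster, carved into /24 subnets per host.
etcdctl set /coreos.com/network/config \
  '{ "Network": "10.100.0.0/16", "SubnetLen": 24 }'

# Each flanneld instance then leases a /24 from this range and
# records the subnet -> host-IP mapping back into etcd.
```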
2. Kubernetes
Kubernetes is an open-source project from Google for container management and orchestration. It lets users easily deploy, monitor, and control containerized applications across a cluster of container hosts.
Kubernetes has a networking concept quite similar to SDN: through a service proxy, it creates an IP address that can front an arbitrary number of containers. Front-end applications, or users of the service, call it through this single IP address and need not care about any other details. This proxying scheme has an SDN flavor, but it is not built on the typical layer 2/3 mechanisms of SDN.
Kubernetes uses a proxying method, whereby a particular service — defined as a query across containers — gets its own IP address. Behind that address could be hordes of containers that all provide the same service — but on the front end, the application or user tapping that service just uses the one IP address.
This means the number of containers running a service can grow or shrink as necessary, and no customer or application tapping the service has to care. Imagine if that service were a mobile network back-end process, for instance; during traffic surges, more containers running the process could be added, and they could be deleted once traffic returned to normal. Discovery of the specific containers running the service is handled in the background, as is the load balancing among those containers. Without the proxying, you could add more containers, but you’d have to tell users and applications about it; Google’s method eliminates that need for configuration. (https://www.sdncentral.com/news/docker-kubernetes-containers-open-sdn-possibilities/2014/07/)
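The proxying idea above can be caricatured in a few lines of shell (a toy sketch only, not kube-proxy: one stable "service address" fronts a changing list of backends, and requests are spread round-robin; all addresses are made up):

```shell
# Hypothetical backend pool behind one stable service IP.
BACKENDS="10.100.5.2 10.100.5.3 10.100.5.4"

# Pick the backend for request number $1, round-robin.
# Growing or shrinking BACKENDS changes nothing for callers.
pick_backend() {
  n=$1
  set -- $BACKENDS
  idx=$(( n % $# + 1 ))
  eval echo "\${$idx}"
}

pick_backend 0   # -> 10.100.5.2
pick_backend 1   # -> 10.100.5.3
pick_backend 3   # -> 10.100.5.2  (wraps around)
```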
3. Assign containers on different hosts IP addresses in the same subnet. One configuration method is described at http://www.cnblogs.com/feisky/p/4063162.html, which is based on a Linux bridge; other approaches work as well, such as using Open vSwitch + GRE to build tunnels between the hosts:
# From http://goldmann.pl/blog/2014/01/21/connecting-docker-containers-on-multiple-hosts/
# Edit this variable: the 'other' host.
REMOTE_IP=188.226.138.185
# Edit this variable: the bridge address on 'this' host.
BRIDGE_ADDRESS=172.16.42.1/24
# Name of the bridge (should match /etc/default/docker).
BRIDGE_NAME=docker0

# bridges
# Deactivate the docker0 bridge
ip link set $BRIDGE_NAME down
# Remove the docker0 bridge
brctl delbr $BRIDGE_NAME
# Delete the Open vSwitch bridge
ovs-vsctl del-br br0
# Add the docker0 bridge
brctl addbr $BRIDGE_NAME
# Set up the IP for the docker0 bridge
ip a add $BRIDGE_ADDRESS dev $BRIDGE_NAME
# Activate the bridge
ip link set $BRIDGE_NAME up
# Add the br0 Open vSwitch bridge
ovs-vsctl add-br br0
# Create the tunnel to the other host and attach it to the
# br0 bridge
ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre options:remote_ip=$REMOTE_IP
# Add the br0 bridge to docker0 bridge
brctl addif $BRIDGE_NAME br0

# iptables rules
# Enable NAT
iptables -t nat -A POSTROUTING -s 172.16.42.0/24 ! -d 172.16.42.0/24 -j MASQUERADE
# Accept incoming packets for existing connections
iptables -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# Accept all non-intercontainer outgoing packets
iptables -A FORWARD -i docker0 ! -o docker0 -j ACCEPT
# By default allow all outgoing traffic
iptables -A FORWARD -i docker0 -o docker0 -j ACCEPT
# Restart Docker daemon to use the new BRIDGE_NAME
service docker restart

4. Use weave to assign container IPs (usage described at http://www.cnblogs.com/feisky/p/4093717.html). weave's features include:
- Application isolation: containers in different subnets are isolated from each other by default, even when they sit on the same physical host; containers on different physical hosts are also isolated by default
- Connecting containers across physical hosts: weave connect $OTHER_HOST
- Attaching networks dynamically: a container not started through weave can be attached with weave attach 10.0.1.1/24 $id (and removed with detach)
- Security: weave launch -password wEaVe sets a password used to encrypt traffic between weave peers
- Communicating with the host network: weave expose 10.0.1.102/24; this IP is configured on the weave bridge
- Checking weave routing status: weave ps
- External access to Docker containers via NAT
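Putting these features together, a typical two-host weave session looks roughly like this (a sketch based on the weave CLI of that era; host variables and addresses are made up):

```shell
# On host1: start the weave router, with a peer password
weave launch -password wEaVe

# On host2: start weave the same way and connect back to host1
weave launch -password wEaVe
weave connect $HOST1

# Start a container with a weave-assigned address on 10.0.1.0/24
C=$(weave run 10.0.1.1/24 -t -i ubuntu)

# Attach an already-running container ($C2) to the weave network
weave attach 10.0.1.2/24 $C2

# Let the host itself reach the 10.0.1.0/24 containers
weave expose 10.0.1.102/24
```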
5. Change each host's default Docker subnet, then add the other host's Docker subnet to each host's routing table; combined with iptables rules, this enables cross-host container communication. Configuration is as follows:
設(shè)有兩臺虛擬機
- v1: 192.168.124.51
- v2: 192.168.124.52
Change the docker0 subnet on each VM: 172.17.1.1/24 on v1 and 172.17.2.1/24 on v2.
# v1
sudo ifconfig docker0 172.17.1.1 netmask 255.255.255.0
sudo bash -c 'echo DOCKER_OPTS="-b=docker0" >> /etc/default/docker'
sudo service docker restart

# v2
sudo ifconfig docker0 172.17.2.1 netmask 255.255.255.0
sudo bash -c 'echo DOCKER_OPTS="-b=docker0" >> /etc/default/docker'
sudo service docker restart
Then on v1 add v2's Docker subnet to the routing table, and on v2 add v1's Docker subnet:
# v1 192.168.124.51
sudo route add -net 172.17.2.0 netmask 255.255.255.0 gw 192.168.124.52
sudo iptables -t nat -F POSTROUTING
sudo iptables -t nat -A POSTROUTING -s 172.17.1.0/24 ! -d 172.17.0.0/16 -j MASQUERADE

# v2 192.168.124.52
sudo route add -net 172.17.1.0 netmask 255.255.255.0 gw 192.168.124.51
sudo iptables -t nat -F POSTROUTING
sudo iptables -t nat -A POSTROUTING -s 172.17.2.0/24 ! -d 172.17.0.0/16 -j MASQUERADE

At this point, Docker containers on the two virtual machines can reach each other.
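The MASQUERADE rules above NAT only traffic bound for destinations outside the container range; the `! -d 172.17.0.0/16` exclusion works because both per-host subnets sit inside 172.17.0.0/16, which a throwaway check can confirm (it simply compares the first two octets):

```shell
# "yes" if the dotted-quad address falls inside 172.17.0.0/16,
# i.e. its first two octets are 172.17.
in_172_17_16() {
  case "$1" in
    172.17.*) echo yes ;;
    *)        echo no  ;;
  esac
}

in_172_17_16 172.17.1.0     # v1's container subnet -> yes, exempt from NAT
in_172_17_16 172.17.2.0     # v2's container subnet -> yes, exempt from NAT
in_172_17_16 192.168.124.51 # a host address -> no, so that traffic is NATed
```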
Reposted from: https://www.cnblogs.com/yxwkf/p/5394577.html