Sunday, January 25, 2015

Set up Two Node RDO Juno ML2&OVS&VXLAN Cluster running Docker Hypervisor on Controller and KVM on Compute (CentOS 7, Fedora 21)

****************************************************************************************
UPDATE as of 01/31/2015 to get Docker && Nova-Docker working on Fedora 21
****************************************************************************************
Per https://github.com/docker/docker/issues/10280,
download systemd-218-3.fc22.src.rpm, rebuild the 218-3 RPMs and upgrade systemd.
First, install the packages required for rpmbuild :-

 $ sudo yum install audit-libs-devel autoconf  automake cryptsetup-devel \
    dbus-devel docbook-style-xsl elfutils-devel  \
    glib2-devel  gnutls-devel  gobject-introspection-devel \
    gperf     gtk-doc intltool kmod-devel libacl-devel \
    libblkid-devel     libcap-devel libcurl-devel libgcrypt-devel \
    libidn-devel libmicrohttpd-devel libmount-devel libseccomp-devel \
    libselinux-devel libtool pam-devel python3-devel python3-lxml \
    qrencode-devel  python2-devel  xz-devel

Second, unpack the SRPM and build the binary RPMs :-

$ rpm -ivh systemd-218-3.fc22.src.rpm
$ cd ~/rpmbuild/SPECS
$ rpmbuild -bb systemd.spec
$ cd ../RPMS/x86_64

Third, install the freshly built RPMs :-

$ sudo yum install libgudev1-218-3.fc21.x86_64.rpm \
libgudev1-devel-218-3.fc21.x86_64.rpm \
systemd-218-3.fc21.x86_64.rpm \
systemd-compat-libs-218-3.fc21.x86_64.rpm \
systemd-debuginfo-218-3.fc21.x86_64.rpm \
systemd-devel-218-3.fc21.x86_64.rpm \
systemd-journal-gateway-218-3.fc21.x86_64.rpm \
systemd-libs-218-3.fc21.x86_64.rpm \
systemd-python-218-3.fc21.x86_64.rpm \
systemd-python3-218-3.fc21.x86_64.rpm

.  .  .  .  .  .  .  .  .  .

Dependencies Resolved

=================================================================================================
 Package                  Arch    Version      Repository                                   Size
=================================================================================================
Installing:
 libgudev1-devel          x86_64  218-3.fc21   /libgudev1-devel-218-3.fc21.x86_64          281 k
 systemd-debuginfo        x86_64  218-3.fc21   /systemd-debuginfo-218-3.fc21.x86_64         69 M
 systemd-journal-gateway  x86_64  218-3.fc21   /systemd-journal-gateway-218-3.fc21.x86_64  571 k
Updating:
 libgudev1                x86_64  218-3.fc21   /libgudev1-218-3.fc21.x86_64                 51 k
 systemd                  x86_64  218-3.fc21   /systemd-218-3.fc21.x86_64                   22 M
 systemd-compat-libs      x86_64  218-3.fc21   /systemd-compat-libs-218-3.fc21.x86_64      237 k
 systemd-devel            x86_64  218-3.fc21   /systemd-devel-218-3.fc21.x86_64            349 k
 systemd-libs             x86_64  218-3.fc21   /systemd-libs-218-3.fc21.x86_64             1.0 M
 systemd-python           x86_64  218-3.fc21   /systemd-python-218-3.fc21.x86_64           185 k
 systemd-python3          x86_64  218-3.fc21   /systemd-python3-218-3.fc21.x86_64          191 k

Transaction Summary
=================================================================================================
Install  3 Packages
Upgrade  7 Packages

Total size: 94 M
Is this ok [y/d/N]: y

  See also  https://ask.openstack.org/en/question/59789/attempt-to-install-nova-docker-driver-on-fedora-21/
*************************************************************************************** 
As the final result of performing the configuration below, the Juno dashboard will automatically spawn, launch and run Nova-Docker containers on the Controller, while the usual nova instances run on the KVM hypervisor (libvirt driver) on the Compute Node.

Set up initial configuration via RDO Juno packstack run

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin && VXLAN )
- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)


juno1dev.localdomain   -  Controller (192.168.1.127)
juno2dev.localdomain   -  Compute   (192.168.1.137)

Management && Public network: 192.168.1.0/24
VXLAN tunnel endpoints: 192.168.0.127 (Controller), 192.168.0.137 (Compute)
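
The tunnel endpoints above sit on a dedicated second NIC (enp5s1 here). A minimal static ifcfg for it on the Controller could look like the fragment below (the Compute node would use 192.168.0.137); this is a sketch, adjust the device name to your hardware:

```
DEVICE=enp5s1
BOOTPROTO=static
IPADDR=192.168.0.127
NETMASK=255.255.255.0
ONBOOT=yes
NM_CONTROLLED=no
IPV6INIT=no
```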


Answer File :-

[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.168.1.127
CONFIG_COMPUTE_HOSTS=192.168.1.137
CONFIG_NETWORK_HOSTS=192.168.1.127
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.168.1.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.168.1.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.168.1.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=keystone
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=20G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=enp5s1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=enp2s0
CONFIG_NOVA_NETWORK_PRIVIF=enp5s1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=enp5s1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789

CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.168.1.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4
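
For reference, packstack consumes the whole answer file in a single run; the invocation is along these lines (the file name here is an assumption):

```shell
# Run as root on the Controller; packstack reaches the Compute node
# over ssh using CONFIG_SSH_KEY from the answer file.
packstack --answer-file=./answer-file-juno.txt
```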

Updates on the Controller only :-
[root@juno1 network-scripts(keystone_admin)]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.168.1.127"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.168.1.255"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"

[root@juno1 network-scripts(keystone_admin)]# cat ifcfg-enp2s0
DEVICE="enp2s0"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

************************
On Controller :-
************************
# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart
# reboot

[root@juno1dev ~(keystone_admin)]# ifconfig

br-ex: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.127  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::222:15ff:fe63:e4e2  prefixlen 64  scopeid 0x20<link>
        ether 00:22:15:63:e4:e2  txqueuelen 0  (Ethernet)
        RX packets 516087  bytes 305856360 (291.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 474282  bytes 62485754 (59.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


enp2s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::222:15ff:fe63:e4e2  prefixlen 64  scopeid 0x20<link>
        ether 00:22:15:63:e4:e2  txqueuelen 1000  (Ethernet)
        RX packets 1121900  bytes 1194013198 (1.1 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 768667  bytes 82497428 (78.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 17

enp5s1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.127  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::2e0:53ff:fe13:174c  prefixlen 64  scopeid 0x20<link>
        ether 00:e0:53:13:17:4c  txqueuelen 1000  (Ethernet)
        RX packets 376087  bytes 49012215 (46.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1136402  bytes 944635587 (900.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 1381792  bytes 250829475 (239.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1381792  bytes 250829475 (239.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0



After packstack completes, switch both nodes to the IPv4 iptables firewall.
*********************************************************************************
As of 01/25/2015, dnsmasq fails to serve private subnets unless the following
lines in /etc/sysconfig/iptables are commented out
*********************************************************************************

# -A INPUT -j REJECT --reject-with icmp-host-prohibited
# -A FORWARD -j REJECT --reject-with icmp-host-prohibited
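
A scripted way to make the same edit on each node (a sketch; it assumes the rules live in /etc/sysconfig/iptables, where packstack writes them):

```shell
# comment_out_reject FILE
# Comments out the two blanket REJECT rules that block dnsmasq,
# keeping a .bak copy of the original file.
comment_out_reject() {
    sed -i.bak \
        -e 's/^-A INPUT -j REJECT --reject-with icmp-host-prohibited/# &/' \
        -e 's/^-A FORWARD -j REJECT --reject-with icmp-host-prohibited/# &/' \
        "$1"
}

# Typical use on each node, then reload the firewall:
# comment_out_reject /etc/sysconfig/iptables && systemctl restart iptables
```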
 
Set up Nova-Docker on Controller&&Network Node
***************************
Initial docker setup
***************************
# yum install python-pbr

# yum install docker-io -y
# yum install -y python-pip git
 
# git clone https://github.com/stackforge/nova-docker
# cd nova-docker
# git checkout stable/juno
# python setup.py install
# systemctl start docker
# systemctl enable docker
# chmod 660  /var/run/docker.sock
#  mkdir /etc/nova/rootwrap.d


************************************************************************************
On Fedora 21, even running systemd 218-3, you should expect
six.__version__ to be dropped to 1.2 right after `python setup.py install`.

Then run :-

# pip install --upgrade six

Downloading/unpacking six from https://pypi.python.org/packages/3.3/s/six/six-1.9.0-py2.py3-none-any.whl#md5=9ac7e129a80f72d6fc1f0216f6e9627b
  Downloading six-1.9.0-py2.py3-none-any.whl
Installing collected packages: six
  Found existing installation: six 1.7.3
    Uninstalling six:
      Successfully uninstalled six
Successfully installed six
Cleaning up...
***************************************************************************************

Proceed as normal.

************************************************
Create the docker.filters file:
************************************************

vi /etc/nova/rootwrap.d/docker.filters

Insert the following lines :-

# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root

*****************************************
Add a line to /etc/glance/glance-api.conf
*****************************************
container_formats=ami,ari,aki,bare,ovf,ova,docker
:wq

*************************************
Add the nova user to the docker group and restart glance-api
*************************************
# usermod -G docker nova
# systemctl restart openstack-glance-api
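
Before the dashboard can launch containers, at least one Docker image has to be uploaded to glance. The flow below follows the stackforge/nova-docker README, with rastasheep/ubuntu-sshd used purely as an example; the glance image name must match the docker tag exactly:

```shell
# Pull an image locally, then stream it into glance as a raw,
# docker-format image named after its docker tag.
docker pull rastasheep/ubuntu-sshd:latest
docker save rastasheep/ubuntu-sshd:latest | glance image-create \
    --is-public True --container-format docker \
    --disk-format raw --name rastasheep/ubuntu-sshd:latest
```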

********************************************************************************
Creating the openstack-nova-docker service per http://blog.oddbit.com/2015/01/17/running-novalibvirt-and-novadocker-on-the-same-host/
Due to the answer-file configuration, in our case /etc/nova/nova.conf on the Controller doesn't set any compute_driver at all, while the Compute node runs the libvirt driver.
*********************************************************************************

Create new file /etc/nova/nova-docker.conf


[DEFAULT]
 host=juno1dev.localdomain
 compute_driver=novadocker.virt.docker.DockerDriver
 log_file=/var/log/nova/nova-docker.log
 state_path=/var/lib/nova-docker
 
Create an openstack-nova-compute.service unit for Docker, and save it as
/etc/systemd/system/openstack-nova-docker.service
 
[Unit]
Description=OpenStack Nova Compute Server (Docker)
After=syslog.target network.target

[Service]
Environment=LIBGUESTFS_ATTACH_METHOD=appliance
Type=notify
Restart=always
User=nova
ExecStart=/usr/bin/nova-compute --config-file /etc/nova/nova.conf \
          --config-file /etc/nova/nova-docker.conf

[Install]
WantedBy=multi-user.target

 
SCP /usr/bin/nova-compute from the Compute node to the Controller, then run :- 
 
# systemctl daemon-reload
# systemctl enable openstack-nova-docker
# systemctl start openstack-nova-docker
 
Update /etc/nova/nova.conf on Compute Node

vif_plugging_is_fatal=False
vif_plugging_timeout=0
# systemctl restart openstack-nova-compute 
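
The same edits can be scripted; the sketch below assumes crudini (installed on packstack-managed nodes) is available:

```shell
# Tolerate missing VIF-plugged notifications in the mixed Docker/KVM setup
crudini --set /etc/nova/nova.conf DEFAULT vif_plugging_is_fatal False
crudini --set /etc/nova/nova.conf DEFAULT vif_plugging_timeout 0
systemctl restart openstack-nova-compute
```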
 

********************************************************************************
On Fedora 21 keep these entries as is (no changes). However, to launch a new
instance on Compute, you will have to stop the openstack-nova-docker service
on the Controller, just for the 2-3 minutes the instance takes to go from
spawn => active, then restart openstack-nova-docker on the Controller.
********************************************************************************

As the final result, the dashboard will automatically spawn, launch and run Nova-Docker containers on the Controller, while the usual nova instances run on the KVM hypervisor (libvirt driver) on the Compute Node.
 
 
  
 
 
[root@juno1dev ~(keystone_admin)]# nova service-list
+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                 | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | juno1dev.localdomain | internal | enabled | up    | 2015-01-26T06:42:16.000000 | -               |
| 2  | nova-scheduler   | juno1dev.localdomain | internal | enabled | up    | 2015-01-26T06:42:16.000000 | -               |
| 3  | nova-conductor   | juno1dev.localdomain | internal | enabled | up    | 2015-01-26T06:42:24.000000 | -               |
| 4  | nova-cert        | juno1dev.localdomain | internal | enabled | up    | 2015-01-26T06:42:16.000000 | -               |
| 5  | nova-compute     | juno2dev.localdomain | nova     | enabled | up    | 2015-01-26T06:42:23.000000 | -               |
| 6  | nova-compute     | juno1dev.localdomain | nova     | enabled | up    | 2015-01-26T06:42:24.000000 | -               |
+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+

[root@juno1dev ~(keystone_admin)]# systemctl | grep nova

openstack-nova-api.service          loaded active running   OpenStack Nova API Server
openstack-nova-cert.service         loaded active running   OpenStack Nova Cert Server
openstack-nova-conductor.service    loaded active running   OpenStack Nova Conductor Server
openstack-nova-consoleauth.service  loaded active running   OpenStack Nova VNC console auth Server
openstack-nova-docker.service       loaded active running   OpenStack Nova Compute Server (Docker)
openstack-nova-novncproxy.service   loaded active running   OpenStack Nova NoVNC Proxy Server
openstack-nova-scheduler.service    loaded active running   OpenStack Nova Scheduler Server
 
 
 
  
 
******************************************* 
Tuning the VNC console in the dashboard :-
*******************************************
 
Controller - 192.168.1.127 


Services running: nova-consoleauth, nova-novncproxy. Settings in nova.conf :-

novncproxy_host=0.0.0.0 
novncproxy_port=6080 
novncproxy_base_url=http://192.168.1.127:6080/vnc_auto.html 


Compute - 192.168.1.137 

Services running: nova-compute. Settings in nova.conf :-
 
vnc_enabled=True
novncproxy_base_url=http://192.168.1.137:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.1.137
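
Once both sides are restarted, the proxy can be sanity-checked from the CLI; MyInstance below is a placeholder for any running instance name:

```shell
# The returned URL should match the novncproxy_base_url set above.
nova get-vnc-console MyInstance novnc
```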

References
 
https://ask.openstack.org/en/question/520/vnc-console-in-dashboard-fails-to-connect-ot-server-code-1006/