
Category: Cloud Computing & Virtualization

1.1 Installing a three-node OpenStack system

Note: this build mainly follows the installation manual. For networking, VLAN was chosen at first, but because the hardware switch did not support it, the configuration was changed from VLAN to GRE.
Build the following test environment.

Topology diagram (image not reproduced here)


Node name: ascontroller
  CPU: 1
  MEM: 2GB
  HDD: SCSI 30GB (root disk)
  NIC: eth0: 172.28.160.205/255.255.255.0

Node name: asnetnode
  CPU: 1
  MEM: 1GB
  HDD: SCSI 20GB (root disk)
  NIC: eth0: 172.28.160.203/255.255.255.0
       eth1: 10.0.0.103/255.255.255.0
       eth2: 192.168.70.103/255.255.255.0

Node name: ascomnode1
  CPU: 1
  MEM: 4GB
  HDD: SCSI 30GB (root disk)
  NIC: eth0: 172.28.160.204/255.255.255.0
       eth1: 10.0.0.104/255.255.255.0

The environment uses the following networks:

• Management network (172.28.160.0/255.255.255.0)

• Data network (10.0.0.0/255.255.255.0)

• External network (192.168.70.0/255.255.255.0)

 

• Edit the hosts file:

vi /etc/hosts

172.28.160.205 ascontroller

172.28.160.204 ascomnode1

172.28.160.202 ascomnode2

172.28.160.203 asnetnode

 

• Set the machine's hostname (repeat on each node with that node's own name):

vi /etc/sysconfig/network

NETWORKING=yes

HOSTNAME=ascontroller

 

Confirm that all the nodes can ping each other.
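The ping check above can be scripted. A minimal sketch, assuming the /etc/hosts entries shown earlier (the `check_nodes` helper name is ours, not part of the manual):

```shell
# check_nodes FILE: read "IP name" pairs (hosts-file format) and ping each
# name once, reporting which nodes answer.
check_nodes() {
  awk 'NF >= 2 && $1 !~ /^#/ {print $2}' "$1" | while read -r name; do
    if ping -c 1 -W 1 "$name" >/dev/null 2>&1; then
      echo "$name: OK"
    else
      echo "$name: UNREACHABLE"
    fi
  done
}
```

Run it on each node as `check_nodes /etc/hosts`; every host should report OK before continuing.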

1.1.1 Controller node (ascontroller)

1.1.2 Installing the common packages

1.1.2.1 Installing the NTP service

On the controller node:

yum install ntp

vi /etc/ntp.conf

# Hosts on local network are less restricted.

#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap   # specifies the range of hosts allowed access

# Use public servers from the pool.ntp.org project.

# Please consider joining the pool (http://www.pool.ntp.org/join.html).

#server 0.centos.pool.ntp.org iburst   # the Internet servers are commented out

#server 1.centos.pool.ntp.org iburst

#server 2.centos.pool.ntp.org iburst

#server 3.centos.pool.ntp.org iburst

server 127.127.1.0   # local clock

fudge 127.127.1.0 stratum 10

 

service ntpd start

chkconfig ntpd on

 

 

1.1.2.2 Stopping the firewall

Stop iptables before installing.

service iptables stop

chkconfig iptables off

 

1.1.2.3 Installing the MySQL server

yum install mysql mysql-server MySQL-python

service mysqld start

chkconfig mysqld on

 

Set the password of the MySQL server's "root" user to "admin123".

[root@ascontroller ~]# mysql

Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 667

Server version: 5.1.73 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> grant all privileges on *.* to root@"%" identified by "admin123" with grant option;

 

1.1.2.4 Installing the message queue server

yum install qpid-cpp-server memcached

service qpidd start

chkconfig qpidd on

 

The OpenStack installation proper begins below.

 

1.1.3 OpenStack base packages

1.1.3.1 Installing the Yum repositories

yum install http://repos.fedorapeople.org/repos/openstack/openstack-havana/rdo-release-havana-6.noarch.rpm

yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

 

1.1.3.2 Installing the packages used by OpenStack

yum install openstack-utils

yum install openstack-selinux

yum install crudini

 

1.1.4 Keystone

1.1.4.1 Installing Keystone

yum install openstack-keystone python-keystoneclient

 

Configuring Keystone

Set up the required database settings:

openstack-config --set /etc/keystone/keystone.conf sql connection mysql://keystone:admin123@ascontroller/keystone

openstack-db --init --service keystone --password admin123

Note the highlighted values. The openstack-config command writes settings into a configuration file; openstack-db creates the database tables. Here the user name "keystone" and the password "admin123" are added to the database and written into the configuration file. Adjust them to suit your own environment.

For simplicity, every user's password in the rest of this guide is set to "admin123".
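All openstack-config --set does is write a key/value pair under a section of an INI file (it is a thin wrapper around crudini). A minimal stand-in sketch, with a hypothetical `ini_set` helper of ours, makes the effect visible without OpenStack installed:

```shell
# ini_set FILE SECTION KEY VALUE: naive stand-in for "openstack-config --set".
# Unlike the real tool it always appends; it does not merge existing sections.
ini_set() {
  printf '[%s]\n%s = %s\n' "$2" "$3" "$4" >> "$1"
}

rm -f /tmp/keystone.demo.conf
ini_set /tmp/keystone.demo.conf sql connection \
  "mysql://keystone:admin123@ascontroller/keystone"
cat /tmp/keystone.demo.conf
```

The real command produces the same `[sql]` section with `connection = ...` inside /etc/keystone/keystone.conf.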

 

• Set the administrative token string:

ADMIN_TOKEN=$(openssl rand -hex 10)

echo $ADMIN_TOKEN

openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN

The $ADMIN_TOKEN environment variable holds the generated value, which will be needed again later. Because it is also saved in keystone.conf, it can be looked up there at any time.
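Since the token is persisted in keystone.conf, it can be read back later. A sketch with an ad-hoc `get_admin_token` helper (demonstrated against a sample file rather than the real /etc/keystone/keystone.conf):

```shell
# get_admin_token FILE: print the admin_token value from a keystone.conf-style file.
get_admin_token() {
  awk -F' *= *' '$1 == "admin_token" {print $2}' "$1"
}

# Demo against a sample file using the token value from this guide:
printf '[DEFAULT]\nadmin_token = 35e889d14eeab295a81e\n' > /tmp/keystone.sample.conf
get_admin_token /tmp/keystone.sample.conf
```

On the controller the same idea is `get_admin_token /etc/keystone/keystone.conf` (or `openstack-config --get /etc/keystone/keystone.conf DEFAULT admin_token`).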

 

• Set up the PKI signing keys:

keystone-manage pki_setup --keystone-user keystone --keystone-group keystone

chown -R keystone:keystone /etc/keystone/* /var/log/keystone/keystone.log

 

Start the Keystone service and enable it at boot:

service openstack-keystone start

chkconfig openstack-keystone on

 

1.1.4.2 Keystone setup

• Defining users, tenants and roles

On the Keystone command line the token can be passed with the "--os-token" option, or through the OS_SERVICE_TOKEN environment variable.

The environment variable is used here.

export OS_SERVICE_TOKEN=35e889d14eeab295a81e   # the value recorded earlier

export OS_SERVICE_ENDPOINT=http://ascontroller:35357/v2.0

 

Create the administrative tenant "admin" and the service tenant "service":

keystone tenant-create --name=admin --description="Admin Tenant"

keystone tenant-create --name=service --description="Service Tenant"

 

Create the administrative user "admin":

keystone user-create --name=admin --pass=admin123 --email=admin@example.com

 

Create the administrative role "admin":

keystone role-create --name=admin

 

Associate the user with the role and the tenant:

keystone user-role-add --user=admin --tenant=admin --role=admin

 

1.1.4.3 Environment variable script: keystonerc

① Create the environment variable script:

vi keystonerc

export OS_USERNAME=admin

export OS_PASSWORD=admin123

export OS_TENANT_NAME=admin

export SERVICE_TOKEN=35e889d14eeab295a81e

export OS_AUTH_URL=http://ascontroller:5000/v2.0

export SERVICE_ENDPOINT=http://ascontroller:35357/v2.0

 

1.1.4.4 Keystone service and API endpoint

• Create the Keystone service entry:

keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"

+-------------+----------------------------------+

|   Property  |              Value               |

+-------------+----------------------------------+

| description |    Keystone Identity Service     |

|      id     | 74595f66e26d4994b0029220ad16a978 |

|     name    |             keystone             |

|     type    |             identity             |

+-------------+----------------------------------+
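To avoid copying the id out of the table by hand, the "id" row can be parsed. A sketch with an ad-hoc `extract_id` filter of ours, assuming the exact Property/Value table layout shown above (columns separated by "|"):

```shell
# extract_id: read a keystone Property/Value table on stdin and print the
# value of the "id" row.
extract_id() {
  awk -F'|' '{gsub(/ /, "", $2)} $2 == "id" {gsub(/ /, "", $3); print $3}'
}

# Demo against the rows printed above:
printf '%s\n' \
  '| description |    Keystone Identity Service     |' \
  '|      id     | 74595f66e26d4994b0029220ad16a978 |' | extract_id
```

In practice it can be piped directly, e.g. `SERVICE_ID=$(keystone service-create --name=keystone --type=identity --description="Keystone Identity Service" | extract_id)`.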

 

• Register the Keystone API endpoint (use the id printed by the command above):

keystone endpoint-create \
--service-id=74595f66e26d4994b0029220ad16a978 \
--publicurl=http://ascontroller:5000/v2.0 \
--internalurl=http://ascontroller:5000/v2.0 \
--adminurl=http://ascontroller:35357/v2.0

 

1.1.4.5 Testing the Keystone installation

① Clear the OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT environment variables:

unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT

 

② Obtain a token for the "admin" user:

keystone --os-username=admin --os-password=admin123 --os-auth-url=http://ascontroller:35357/v2.0 token-get

 

③ As the administrative user "admin", retrieve the user list:

keystone --os-username=admin --os-password=admin123 --os-tenant-name=admin --os-auth-url=http://ascontroller:35357/v2.0 user-list

 

④ Source the environment script:

source keystonerc

 

⑤ Run the following commands to confirm the environment variables are effective:

keystone token-get

keystone user-list

You should get the same results as in steps ② and ③.

 

1.1.5 Glance

1.1.5.1 Installing Glance

yum install openstack-glance

 

1.1.5.2 Configuring Glance

Glance stores its information in a database, so set up the database connection:

openstack-config --set /etc/glance/glance-api.conf DEFAULT sql_connection mysql://glance:admin123@ascontroller/glance

openstack-config --set /etc/glance/glance-registry.conf DEFAULT sql_connection mysql://glance:admin123@ascontroller/glance

※ See the notes in the Keystone section.

 

• Create the database tables:

openstack-db --init --service glance --password admin123

 

• Create the administrative "glance" user:

keystone user-create --name=glance --pass=admin123 --email=glance@example.com

keystone user-role-add --user=glance --tenant=service --role=admin

 

Configure /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf:

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host ascontroller

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password admin123

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host ascontroller

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name service

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password admin123

 

Set up the /etc/glance/glance-api-paste.ini and /etc/glance/glance-registry-paste.ini files:

cp /usr/share/glance/glance-api-dist-paste.ini /etc/glance/glance-api-paste.ini

cp /usr/share/glance/glance-registry-dist-paste.ini /etc/glance/glance-registry-paste.ini

Add the following content to both files:

vi /etc/glance/glance-api-paste.ini

[filter:authtoken]

paste.filter_factory=keystoneclient.middleware.auth_token:filter_factory

auth_host=ascontroller

admin_user=glance

admin_tenant_name=service

admin_password=admin123

flavor=keystone

vi /etc/glance/glance-registry-paste.ini

[filter:authtoken]

paste.filter_factory=keystoneclient.middleware.auth_token:filter_factory

auth_host=ascontroller

admin_user=glance

admin_tenant_name=service

admin_password=admin123

flavor=keystone

 

• Create the Glance OpenStack service entry:

keystone service-create --name=glance --type=image --description="Glance Image Service"

 

• Create the endpoint for the Glance service (the service id comes from the previous command's output):

keystone endpoint-create \
--service-id=5b6258f1b163404c827f6b55b6ee8ef6 \
--publicurl=http://ascontroller:9292 \
--internalurl=http://ascontroller:9292 \
--adminurl=http://ascontroller:9292

 

• Enable the Glance services at boot:

chkconfig openstack-glance-api on

chkconfig openstack-glance-registry on

 

1.1.5.3 Starting the Glance services

 

service openstack-glance-api start

service openstack-glance-registry start

 

1.1.5.4 Testing the Glance service

① Download an image from the network:

mkdir images

cd images/

wget http://cdn.download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img

 

② Register the image:

glance image-create --name="cirros" --disk-format=qcow2 --container-format=bare --is-public=true < cirros-0.3.1-x86_64-disk.img

+------------------+--------------------------------------+

| Property         | Value                                |

+------------------+--------------------------------------+

| checksum         | 4dd25c46fd12a540fde3a4c984c6087e     |

| container_format | bare                                 |

| created_at       | 2014-02-12T07:57:31                  |

| deleted          | False                                |

| deleted_at       | None                                 |

| disk_format      | qcow2                                |

| id               | cdf008cd-524c-4e58-8bc1-62263c153def |

| is_public        | True                                 |

| min_disk         | 0                                    |

| min_ram          | 0                                    |

| name             | cirros                               |

| owner            | None                                 |

| protected        | False                                |

| size             | 255524864                            |

| status           | active                               |

| updated_at       | 2014-02-12T07:57:40                  |

+------------------+--------------------------------------+

 

③ List the registered images:

glance image-list

+--------------------------------------+--------+-------------+------------------+-----------+--------+

| ID                                   | Name   | Disk Format | Container Format | Size      | Status |

+--------------------------------------+--------+-------------+------------------+-----------+--------+

| cdf008cd-524c-4e58-8bc1-62263c153def | cirros | qcow2       | bare             | 255524864 | active |

+--------------------------------------+--------+-------------+------------------+-----------+--------+

 

1.1.6 Nova

1.1.6.1 Installing Nova

yum install openstack-nova python-novaclient

 

1.1.6.2 Configuring Nova

• Database settings:

openstack-config --set /etc/nova/nova.conf database connection mysql://nova:admin123@ascontroller/nova

 

• Message queue settings:

openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend nova.openstack.common.rpc.impl_qpid

openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname ascontroller

 

• Create the database tables:

openstack-db --init --service nova --password admin123

 

• VNC settings:

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.0.0.105

openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 10.0.0.105

openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 10.0.0.105

(10.0.0.105 is the controller's internal IP.)

 

• Create the administrative "nova" user:

. keystonerc

keystone user-create --name=nova --pass=admin123 --email=nova@example.com

keystone user-role-add --user=nova --tenant=service --role=admin

 

Configure Keystone authentication:

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host ascontroller

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357

openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova

openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service

openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password admin123

openstack-config --set /etc/nova/nova.conf DEFAULT neutron_metadata_proxy_shared_secret admin123

openstack-config --set /etc/nova/nova.conf DEFAULT service_neutron_metadata_proxy true

 

Configure the /etc/nova/api-paste.ini file:

vi /etc/nova/api-paste.ini

[filter:authtoken]

paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

auth_host = ascontroller

auth_port = 35357

auth_protocol = http

auth_uri = http://ascontroller:5000/v2.0

admin_tenant_name = service

admin_user = nova

admin_password = admin123

 

vi /etc/nova/nova.conf

#api_paste_config=api-paste.ini

api_paste_config=/etc/nova/api-paste.ini   # added

 

• Create the "nova" service entry:

keystone service-create --name=nova --type=compute --description="Nova Compute service"

+-------------+----------------------------------+

|   Property  |              Value               |

+-------------+----------------------------------+

| description |       Nova Compute service       |

|      id     | 915caf148790496b979ce1e9426655ac |

|     name    |               nova               |

|     type    |             compute              |

+-------------+----------------------------------+

 

• Create the endpoint for the Nova service:

keystone endpoint-create \
--service-id=915caf148790496b979ce1e9426655ac \
--publicurl=http://ascontroller:8774/v2/%\(tenant_id\)s \
--internalurl=http://ascontroller:8774/v2/%\(tenant_id\)s \
--adminurl=http://ascontroller:8774/v2/%\(tenant_id\)s

The --service-id= value is the id printed by the command above. If it was not recorded, it can be retrieved with:

[root@ascontroller ~]# . keystonerc
[root@ascontroller ~]# keystone service-list
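The id can also be pulled out of the keystone service-list table by service name. A sketch with an ad-hoc `service_id` filter of ours, assuming the standard four-column layout (| id | name | type | description |):

```shell
# service_id NAME: read a "keystone service-list" table on stdin and print
# the id whose name column matches NAME.
service_id() {
  awk -F'|' -v n="$1" '{gsub(/ /, "", $2); gsub(/ /, "", $3)} $3 == n {print $2}'
}

# Demo against a sample row in that layout:
printf '| 915caf148790496b979ce1e9426655ac | nova | compute | Nova Compute service |\n' \
  | service_id nova
```

On the controller this becomes `keystone service-list | service_id nova`, which can feed --service-id directly.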

 

• Enable the Nova services at boot:

 

for service in api cert consoleauth scheduler novncproxy conductor; do

chkconfig openstack-nova-$service on

done

 

1.1.6.3 Starting the Nova services

 

for service in api consoleauth scheduler novncproxy conductor cert; do

service openstack-nova-$service start

done

 

1.1.6.4 Testing the Nova installation

• Retrieve the image list:

source keystonerc

nova image-list

 

• Retrieve the service list:

[root@ascontroller ~]# nova service-list

+------------------+--------------+----------+---------+-------+----------------------------+-----------------+

| Binary           | Host         | Zone     | Status  | State | Updated_at                 | Disabled Reason |

+------------------+--------------+----------+---------+-------+----------------------------+-----------------+

| nova-cert        | ascontroller | internal | enabled | up    | 2014-03-18T06:24:50.000000 | None            |

| nova-conductor   | ascontroller | internal | enabled | up    | 2014-03-18T06:24:48.000000 | None            |

| nova-consoleauth | ascontroller | internal | enabled | up    | 2014-03-18T06:24:50.000000 | None            |

| nova-scheduler   | ascontroller | internal | enabled | up    | 2014-03-18T06:24:50.000000 | None            |

| nova-network     | ascomnode1   | internal | enabled | down  | 2014-02-21T04:17:00.000000 | None            |

| nova-compute     | ascomnode1   | nova     | enabled | up    | 2014-03-18T06:24:54.000000 | None            |

| nova-compute     | ascomnode2   | nova     | enabled | up    | 2014-03-18T06:24:46.000000 | None            |

+------------------+--------------+----------+---------+-------+----------------------------+-----------------+

 

1.1.7 Horizon

1.1.7.1 Installing Horizon

yum install memcached python-memcached mod_wsgi openstack-dashboard

 

1.1.7.2 Configuring Horizon

Memcached settings:

vi /etc/sysconfig/memcached

PORT="11211"

USER="memcached"

MAXCONN="1024"

CACHESIZE="64"

OPTIONS=""

 

Edit /etc/openstack-dashboard/local_settings:

vi /etc/openstack-dashboard/local_settings

# Locate the CACHES block and modify it as follows:
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION' : '127.0.0.1:11211',
    }
}

# Add the IP addresses and networks allowed to access the dashboard.

ALLOWED_HOSTS = ['localhost','172.28.160.212','172.28.160.205','172.28.160.204','172.28.160.203']

# Set this to match the local time zone.

TIME_ZONE = "UTC"

# If everything runs on a single machine, set as follows.

OPENSTACK_HOST = "127.0.0.1"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"

※ The manual never creates the "Member" role; I created it myself. If it causes problems, create it with "keystone role-create --name=Member".

 

Enable the Horizon services at boot:

 

chkconfig httpd on

chkconfig memcached on

 

1.1.7.3 Starting the Horizon services

Start the Horizon services:

service httpd start

service memcached start

 

1.1.7.4 Testing the Horizon installation

• Access the Dashboard from a Firefox/Chrome web browser at the following URL:

http://172.28.160.205/dashboard/auth/login/

 

• If problems occur, check the following file to diagnose the error:

/var/log/httpd/error_log

 

1.1.8 Configuration information used by the network node

1.1.8.1 The Neutron database

mysql --user=root --password=admin123

CREATE DATABASE neutron;

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'admin123';

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'admin123';

 

1.1.8.2 Adding the Keystone user

keystone user-create --name=neutron --pass=admin123 --email=neutron@example.com

keystone user-role-add --user=neutron --tenant=service --role=admin

keystone service-create --name=neutron --type=network --description="OpenStack Networking Service"

 

[root@ascontroller ~]# keystone service-create --name=neutron --type=network --description="OpenStack Networking Service"

WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored).

+-------------+----------------------------------+

|   Property  |              Value               |

+-------------+----------------------------------+

| description |   OpenStack Networking Service   |

|      id     | 83f10830495b4024b49f26dceed83318 |

|     name    |             neutron              |

|     type    |             network              |

+-------------+----------------------------------+

 

1.1.8.3 Adding the endpoint

# --service-id is the id from the output of the command above
keystone endpoint-create \
--service-id 83f10830495b4024b49f26dceed83318 \
--publicurl http://ascontroller:9696 \
--adminurl http://ascontroller:9696 \
--internalurl http://ascontroller:9696

 

1.1.9 Network node (asnetnode)

The network node has three NICs.

 

1.1.9.1 Installing the common packages

MySQL client:

yum install mysql MySQL-python

For the /etc/hosts file and the NTP installation and configuration, refer to the controller node section.

OpenStack common packages:

yum install http://repos.fedorapeople.org/repos/openstack/openstack-havana/rdo-release-havana-6.noarch.rpm

yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

yum install openstack-utils

yum install openstack-selinux

yum upgrade

reboot

 

1.1.9.2 Stopping the firewall

Stop the firewall before installing.

service iptables stop

 

1.1.9.3 Installing Neutron

yum install openstack-neutron

 

1.1.9.4 Enabling the Neutron services at boot

 

for s in neutron-{dhcp,metadata,l3}-agent; do chkconfig $s on; done

 

1.1.9.5 Configuring Neutron

IP forwarding and filtering settings:

vi /etc/sysctl.conf

net.ipv4.ip_forward=1

net.ipv4.conf.all.rp_filter=0

net.ipv4.conf.default.rp_filter=0

Apply the changes:

sysctl -p
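A quick textual check that the three keys actually landed in the file. A sketch with an ad-hoc `check_sysctl` helper, demonstrated here against a sample file (on the node, run it against /etc/sysctl.conf, and read the live values back with `sysctl -n net.ipv4.ip_forward` etc.):

```shell
# check_sysctl FILE: print the ip_forward/rp_filter lines found in FILE.
check_sysctl() {
  grep -E '^net\.ipv4\.(ip_forward|conf\.(all|default)\.rp_filter)=' "$1"
}

# Demo against a sample file containing the settings from this section:
printf 'net.ipv4.ip_forward=1\nnet.ipv4.conf.all.rp_filter=0\nnet.ipv4.conf.default.rp_filter=0\n' > /tmp/sysctl.demo
check_sysctl /tmp/sysctl.demo
```

All three lines should be printed; a missing line means the edit to /etc/sysctl.conf did not take.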

 

Edit the configuration files:

openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host ascontroller

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_port 35357

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol http

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password admin123

openstack-config --set /etc/neutron/neutron.conf AGENT root_helper "sudo neutron-rootwrap /etc/neutron/rootwrap.conf"

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_qpid

openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname ascontroller

openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_port 5672

openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_username guest

openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_password guest

openstack-config --set /etc/neutron/neutron.conf DATABASE sql_connection mysql://neutron:admin123@ascontroller/neutron

 

openstack-config --set /etc/neutron/api-paste.ini filter:authtoken paste.filter_factory keystoneclient.middleware.auth_token:filter_factory

openstack-config --set /etc/neutron/api-paste.ini filter:authtoken auth_host ascontroller

openstack-config --set /etc/neutron/api-paste.ini filter:authtoken auth_uri http://ascontroller:5000

openstack-config --set /etc/neutron/api-paste.ini filter:authtoken admin_tenant_name service

openstack-config --set /etc/neutron/api-paste.ini filter:authtoken admin_user neutron

openstack-config --set /etc/neutron/api-paste.ini filter:authtoken admin_password admin123

 

1.1.9.6 Installing the plug-in

Open vSwitch (OVS) plug-in:

yum install openstack-neutron-openvswitch

 

1.1.9.7 Starting Open vSwitch and enabling it at boot

 

service openvswitch start

chkconfig openvswitch on

 

1.1.9.8 Open vSwitch plug-in settings

• Add the bridges:

 

ovs-vsctl add-br br-int

ovs-vsctl add-br br-ex

 

• Add the NIC connected to the external network to br-ex:

ovs-vsctl add-port br-ex eth2

Here the external NIC is "eth2".

 

• Modify the network settings of "eth2":

vi /etc/sysconfig/network-scripts/ifcfg-eth2

# Must be configured as follows; the manual's instructions are wrong

DEVICE="eth2"

NM_CONTROLLED="yes"

ONBOOT=yes

# keep the original MAC address

HWADDR=00:0c:29:11:b7:8c

TYPE=Ethernet

UUID=da0bf67c-9572-4281-b2e3-db85af0bcc4c

BOOTPROTO=none

PROMISC=yes

 

• Create the NIC settings for "br-ex":

vi /etc/sysconfig/network-scripts/ifcfg-br-ex

# Create it as follows

DEVICE=br-ex

NM_CONTROLLED="no"

ONBOOT=yes

IPV6INIT=no

# the IP settings previously on eth2

IPADDR=192.168.70.103

NETMASK=255.255.255.0

GATEWAY=192.168.70.1

BOOTPROTO=none

 

• Configure the L3 agent:

vi /etc/neutron/l3_agent.ini

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

use_namespaces = True

ovs_use_veth = True

 

• Configure the DHCP agent:

vi /etc/neutron/dhcp_agent.ini

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

use_namespaces = True

ovs_use_veth = True

 

• Configure Neutron to use Open vSwitch:

vi /etc/neutron/neutron.conf

core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2

 

• Configure the security group driver:

vi /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

[securitygroup]

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

 

1.1.9.9 Configuring GRE

• Configure the Neutron OVS plug-in to use GRE tunneling:

vi /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

[ovs]

tenant_network_type = gre

tunnel_id_ranges = 1:1000

enable_tunneling = True

integration_bridge = br-int

tunnel_bridge = br-tun

# the IP of eth1

local_ip = 10.0.0.103

 

1.1.9.10 Enabling the OVS plug-in service at boot

chkconfig neutron-openvswitch-agent on

 

1.1.9.11 Plug-in settings

openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq

 

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_url http://ascontroller:5000/v2.0

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_region regionOne

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_tenant_name service

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_user neutron

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_password admin123

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip ascontroller

# The following value must match the controller node's setting.

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret admin123

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT neutron_insecure True

 

1.1.9.12 Creating the symlink

cd /etc/neutron

ln -s plugins/openvswitch/ovs_neutron_plugin.ini plugin.ini

 

1.1.9.13 Restarting the network services

service neutron-dhcp-agent restart

service neutron-l3-agent restart

service neutron-metadata-agent restart

service neutron-openvswitch-agent restart

 

1.1.9.14 Starting the firewall

 

service iptables start

chkconfig iptables on

 

This completes the network node installation.

 

1.1.10 Installing Neutron on the controller node

1.1.10.1 Installing Neutron

yum install openstack-neutron python-neutron python-neutronclient

 

1.1.10.2 Neutron settings

 

openstack-config --set /etc/neutron/neutron.conf DATABASE sql_connection mysql://neutron:admin123@ascontroller/neutron

 

openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone

 

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host ascontroller

 

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://ascontroller:35357/v2.0

 

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service

 

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron

 

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password admin123

 

openstack-config --set /etc/neutron/api-paste.ini filter:authtoken paste.filter_factory keystoneclient.middleware.auth_token:filter_factory

 

openstack-config --set /etc/neutron/api-paste.ini filter:authtoken auth_host ascontroller

 

openstack-config --set /etc/neutron/api-paste.ini filter:authtoken admin_tenant_name service

 

openstack-config --set /etc/neutron/api-paste.ini filter:authtoken admin_user neutron

 

openstack-config --set /etc/neutron/api-paste.ini filter:authtoken admin_password admin123

 

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_qpid

 

openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname ascontroller

 

openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_port 5672

 

openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_username guest

 

openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_password guest

 

openstack-config --set /etc/neutron/neutron.conf AGENT root_helper "sudo neutron-rootwrap /etc/neutron/rootwrap.conf"

 

 

1.1.10.3 Installing the plug-in

yum install openstack-neutron-openvswitch

 

1.1.10.4 Core plug-in setting

 

vi /etc/neutron/neutron.conf

core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2

 

1.1.10.5 GRE settings

vi /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

[ovs]

tenant_network_type = gre

tunnel_id_ranges = 1:1000

enable_tunneling = True

 

1.1.10.6 Configuring Nova to use Neutron

Configure Nova to use Neutron's networking features:

openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API

 

openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url http://ascontroller:9696

 

openstack-config --set /etc/nova/nova.conf DEFAULT neutron_auth_strategy keystone

 

openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_tenant_name service

 

openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_username neutron

 

openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_password admin123

 

openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url http://ascontroller:35357/v2.0

 

openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver

 

openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver

 

openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron

 

1.1.10.7 Creating the symlink

ln -s /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini /etc/neutron/plugin.ini

 

1.1.10.8 Starting the neutron-server service

service neutron-server start

chkconfig neutron-server on

 

1.1.10.9 Confirming that neutron-server is running

[root@ascontroller ~]# service neutron-server status

 

 

This completes the Neutron installation on the controller node.

 

 

1.1.11 Creating the networks

There are two ways to create the networks:

1. Using the Dashboard web UI.
2. Using the command line.

• Dashboard UI method (the screenshots are not reproduced here)

 

 

 

1.1.11.1 Command-line method

[root@asnetnode ~]# . keystonerc

[root@asnetnode ~]# neutron net-create ext-net --router:external=True --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 2

Created a new network:

+---------------------------+--------------------------------------+

| Field                     | Value                                |

+---------------------------+--------------------------------------+

| admin_state_up            | True                                 |

| id                        | cf369cd3-b3f3-41ca-b26e-81cbc18dd9c7 |

| name                      | ext-net                              |

| provider:network_type     | vlan                                 |

| provider:physical_network | physnet1                             |

| provider:segmentation_id  | 2                                    |

| router:external           | True                                 |

| shared                    | False                                |

| status                    | ACTIVE                               |

| subnets                   |                                      |

| tenant_id                 | 226ebdcfc68a4764bee9a832cdc94fc4     |

+---------------------------+--------------------------------------+

[root@asnetnode ~]# neutron subnet-create ext-net \

> --allocation-pool start=192.168.70.50,end=192.168.70.100 \

> --gateway=192.168.70.1 --enable_dhcp=False \

> 192.168.70.0/24

Created a new subnet:

+------------------+-----------------------------------------------------+

| Field            | Value                                               |

+------------------+-----------------------------------------------------+

| allocation_pools | {"start": "192.168.70.50", "end": "192.168.70.100"} |

| cidr             | 192.168.70.0/24                                     |

| dns_nameservers  |                                                     |

| enable_dhcp      | False                                               |

| gateway_ip       | 192.168.70.1                                        |

| host_routes      |                                                     |

| id               | b1a2fcac-6420-4d49-9fe3-e777e2db2672                |

| ip_version       | 4                                                   |

| name             |                                                     |

| network_id       | cf369cd3-b3f3-41ca-b26e-81cbc18dd9c7                |

| tenant_id        | 226ebdcfc68a4764bee9a832cdc94fc4                    |

+------------------+-----------------------------------------------------+

[root@asnetnode ~]# keystone tenant-create --name demo

WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored).

+-------------+----------------------------------+

|   Property  |              Value               |

+-------------+----------------------------------+

| description |                                  |

|   enabled   |               True               |

|      id     | d19ef78d5a3c4d328d37b4f6474b3482 |

|     name    |               demo               |

+-------------+----------------------------------+

[root@asnetnode ~]# neutron router-create ext-to-int --tenant-id d19ef78d5a3c4d328d37b4f6474b3482

Created a new router:

+-----------------------+--------------------------------------+

| Field                 | Value                                |

+-----------------------+--------------------------------------+

| admin_state_up        | True                                 |

| external_gateway_info |                                      |

| id                    | 21549ed8-160f-430c-a4a1-f128c3cd92de |

| name                  | ext-to-int                           |

| status                | ACTIVE                               |

| tenant_id             | d19ef78d5a3c4d328d37b4f6474b3482     |

+-----------------------+--------------------------------------+

The output above is from the original VLAN-based setup; because the hardware switch does not support VLANs, the configuration was later changed to GRE.

 

1.1.12             Compute node

※  First, confirm that the host supports hardware virtualization:

[root@ascomnode2 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo

1

The result must be 1 or greater.
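The check above, and the libvirt_type choice it drives, can be wrapped in a small script. This helper is a sketch, not part of the original guide; on VMs like the ones in this setup the result will usually be qemu:

```shell
# Sketch: pick a libvirt_type from the CPU flags. kvm needs vmx (Intel)
# or svm (AMD); with neither flag present, fall back to plain qemu emulation.
count=$(egrep -c '(vmx|svm)' /proc/cpuinfo 2>/dev/null || true)
if [ "${count:-0}" -ge 1 ]; then
    libvirt_type=kvm
else
    libvirt_type=qemu
fi
echo "libvirt_type=$libvirt_type"
```

The resulting value is what the libvirt_type option in nova.conf should carry.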

 

1.1.12.1          Installing the common packages

See the network node section.

 

1.1.12.2          Stopping the firewall

Stop the firewall (iptables) before installing.

service iptables stop

 

1.1.12.3          Installing nova

yum install openstack-nova-compute

 

1.1.12.4          Configuring nova

Set the configuration options:

openstack-config --set /etc/nova/nova.conf database connection mysql://nova:admin123@ascontroller/nova

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host ascontroller

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357

openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova

openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service

openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password admin123

 

openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend nova.openstack.common.rpc.impl_qpid

openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname ascontroller

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.0.0.104

openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True

openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0

openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 10.0.0.104

openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://ascontroller:6080/vnc_auto.html

openstack-config --set /etc/nova/nova.conf DEFAULT glance_host ascontroller

openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_type qemu
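Typos in the commands above are easy to miss; the values can be read back with `openstack-config --get`, or with the plain-awk lookup sketched below. The helper and the sample file are illustrative only (on the node, the real file is /etc/nova/nova.conf):

```shell
# Sketch: read one key out of an ini-style file, the way
# `openstack-config --get FILE SECTION KEY` would, using only awk.
ini_get() {  # usage: ini_get FILE SECTION KEY
    awk -F' *= *' -v s="[$2]" -v k="$3" '
        $0 == s        { hit = 1; next }   # entered the wanted section
        /^\[/          { hit = 0 }         # left it at the next header
        hit && $1 == k { print $2; exit }
    ' "$1"
}

# Throwaway stand-in for /etc/nova/nova.conf:
cat > /tmp/nova-sample.conf <<'EOF'
[DEFAULT]
my_ip = 10.0.0.104
glance_host = ascontroller
EOF

ini_get /tmp/nova-sample.conf DEFAULT glance_host   # prints: ascontroller
```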

 

Configure the api-paste.ini file:

vi /etc/nova/api-paste.ini

[filter:authtoken]

paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

auth_host = ascontroller

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = nova

admin_password = admin123

 

Point nova.conf at the api-paste.ini file:

vi /etc/nova/nova.conf

api_paste_config=/etc/nova/api-paste.ini

 

Set the services to start automatically:

chkconfig libvirtd on

chkconfig messagebus on

chkconfig openstack-nova-compute on

 

1.1.12.5        Starting the compute services

service libvirtd start

service messagebus start

service openstack-nova-compute start

 

1.1.12.6        Packet filtering settings

Disable reverse-path filtering:

vi /etc/sysctl.conf

net.ipv4.conf.all.rp_filter=0

net.ipv4.conf.default.rp_filter=0

Apply the settings:

sysctl -p
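Before running sysctl -p it is worth confirming that both keys actually landed in the file. The check below is a sketch run against a throwaway stand-in for /etc/sysctl.conf:

```shell
# Sketch: verify both rp_filter keys are set to 0 in a sysctl.conf-style
# file. /tmp/sysctl-sample.conf stands in for /etc/sysctl.conf here.
cat > /tmp/sysctl-sample.conf <<'EOF'
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
EOF

missing=0
for key in net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter; do
    grep -q "^${key}[ =]*0" /tmp/sysctl-sample.conf || missing=1
done
echo "missing=$missing"   # 0 means both keys are present
```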

 

1.1.12.7        Installing the Neutron plugin

 

yum install openstack-neutron-openvswitch

 

1.1.12.8        Starting the Neutron plugin

service openvswitch start

chkconfig openvswitch on

 

1.1.12.9        Configuring the Neutron plugin

Add the integration bridge:

ovs-vsctl add-br br-int

 

Edit /etc/neutron/neutron.conf:

vi /etc/neutron/neutron.conf

auth_uri = http://ascontroller:5000

core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2

api_paste_config = /etc/neutron/api-paste.ini

rpc_backend = neutron.openstack.common.rpc.impl_qpid

 

GRE settings:

vi /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

[ovs]

tenant_network_type = gre

tunnel_id_ranges = 1:1000

enable_tunneling = True

integration_bridge = br-int

tunnel_bridge = br-tun

local_ip = 10.0.0.104

※ local_ip must be the IP address of the NIC on the data network.
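Instead of hard-coding 10.0.0.104, local_ip can be derived from the data-network NIC (eth1 in the topology above). The parsing is sketched against a canned line of `ip -4 -o addr show` output, since the live interface is specific to this setup:

```shell
# Sketch: extract the IPv4 address from one-line `ip -4 -o addr` output.
first_ipv4() { awk '{ split($4, a, "/"); print a[1]; exit }'; }

# On the compute node this would be:
#   local_ip=$(ip -4 -o addr show eth1 | first_ipv4)
# Canned sample of what that command prints on ascomnode1:
sample='3: eth1    inet 10.0.0.104/24 brd 10.0.0.255 scope global eth1'
local_ip=$(echo "$sample" | first_ipv4)
echo "local_ip=$local_ip"   # prints: local_ip=10.0.0.104
```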

 

Firewall driver settings:

vi /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

[securitygroup]

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

※ The setting above must match the one on the network node.

  For the network node's settings, see the [network plugin] section.
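A mismatch can be caught by diffing the compute node's plugin file against the network node's copy (fetched, for example, over ssh — an assumption, not a step from this guide). The comparison itself is sketched on two local stand-in files:

```shell
# Sketch: the two plugin configs must agree on firewall_driver.
# /tmp/net.ini and /tmp/comp.ini stand in for the two nodes' copies of
# /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini.
cat > /tmp/net.ini <<'EOF'
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
EOF
cp /tmp/net.ini /tmp/comp.ini

if diff -q /tmp/net.ini /tmp/comp.ini > /dev/null; then
    echo "configs match"
else
    echo "configs differ"
fi
```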

 

1.1.12.10  Starting the plugin automatically

Open vSwitch (OVS) plug-in

chkconfig neutron-openvswitch-agent on

 

1.1.12.11  Settings for using Neutron

 

openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host ascontroller

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://ascontroller:35357/v2.0

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password admin123

 

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_qpid

openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname ascontroller

openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_port 5672

openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_username guest

openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_password guest

 

openstack-config --set /etc/neutron/neutron.conf AGENT root_helper "sudo neutron-rootwrap /etc/neutron/rootwrap.conf"

openstack-config --set /etc/neutron/neutron.conf DATABASE sql_connection mysql://neutron:admin123@ascontroller/neutron

 

openstack-config --set /etc/neutron/api-paste.ini filter:authtoken paste.filter_factory keystoneclient.middleware.auth_token:filter_factory

openstack-config --set /etc/neutron/api-paste.ini filter:authtoken auth_host ascontroller

openstack-config --set /etc/neutron/api-paste.ini filter:authtoken admin_tenant_name service

openstack-config --set /etc/neutron/api-paste.ini filter:authtoken admin_user neutron

openstack-config --set /etc/neutron/api-paste.ini filter:authtoken admin_password admin123

 

openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API

openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url http://ascontroller:9696

openstack-config --set /etc/nova/nova.conf DEFAULT neutron_auth_strategy keystone

openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_tenant_name service

openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_username neutron

openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_password admin123

openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url http://ascontroller:35357/v2.0

openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver

openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver

 

openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

 

1.1.12.12        Restarting the services

service openstack-nova-compute restart

service neutron-openvswitch-agent restart

 

1.1.12.13        Confirming the services are running

[root@ascomnode1 ~]# service openstack-nova-compute status

[root@ascomnode1 ~]# service neutron-openvswitch-agent status

 

 

1.1.12.14        Starting the firewall

Re-enable the firewall:

service iptables start

chkconfig iptables on

 

This completes the compute node installation.

 

