
Efficient httpd service with DRBD and unified resource management; highly available NFS file-sharing storage

DRBD is a software-based, network-based block-replication storage solution.

This article uses four machines, all running CentOS 6.0.

What follows is my own working procedure; just change the IPs and hostnames for your environment. Think of it as a deployment script of sorts. O(∩_∩)O~

DRBD replicates over the IP network, so a cluster can use DRBD as its shared storage device without any extra hardware investment, which saves a lot of cost.

The two machines lv1 and lv2 are the httpd front ends, made highly available with keepalived; a VIP is exposed for clients to access.

We use DRBD to back NFS in production and it works reasonably well; NFS has never gone down on us, so call it stable. For bigger projects these days we go straight to MFS, which performs better.

 

node1 and node2 use DRBD to mirror the file storage; a VIP is exposed as the nfsserver_ip for the httpd service, which simplifies configuration and unifies data management.

Notes: the hostnames must be set (both /etc/hosts and hostname); the DRBD backing partition must be a dedicated, unformatted partition; the filesystem can be mounted on only one node at a time, and only on the primary; formatting, too, can only be done on the primary.
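A quick way to sanity-check those preconditions on each node (a sketch; it assumes the backing partition is /dev/sdb1):

hostname                                   # should print masternfs or slavenfs
grep -E 'masternfs|slavenfs' /etc/hosts    # both entries should be present
blkid /dev/sdb1                            # should print nothing: no filesystem signature yet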

 

Technologies involved: httpd, keepalived, drbd, nfs, heartbeat.

echo "10.10.10.22 masternfs ">>/etc/hosts
echo "10.10.10.24 slavenfs ">>/etc/hosts
hostname masternfs
hostname slavenfs
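hostname only changes the running name; on CentOS 6 it must also be set in /etc/sysconfig/network to survive a reboot. For example, on masternfs (use slavenfs on the peer):

sed -i 's/^HOSTNAME=.*/HOSTNAME=masternfs/' /etc/sysconfig/network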
 
fdisk -l                   # find the spare disk
fdisk /dev/sdb             # create a dedicated partition (/dev/sdb1) and do NOT format it
partprobe /dev/sdb         # re-read the partition table
cat /proc/partitions       # confirm sdb1 shows up
 
yum -y install kmod-drbd83 drbd83
modprobe drbd
lsmod |grep drbd
rm /etc/drbd.conf -f
cp -f /usr/share/doc/drbd83*/drbd.conf /etc/
cd /etc/drbd.d/
cp global_common.conf global_common.conf.bak
 
vim global_common.conf
global {
        usage-count no;
        # minor-count dialog-refresh disable-ip-verification
}
 
common {
        protocol C;
 
 
        startup {
                 wfc-timeout 120;
                 degr-wfc-timeout 120;
        }
 
       disk {
                on-io-error detach;
                fencing resource-only;
        }
 
        net {
                cram-hmac-alg "sha1";
                shared-secret "mydrbdlab";
        }
 
        syncer {
                rate 100M;
        }
}
 
vim web.res
 
resource web {
        on masternfs {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 10.10.10.22:7898;
                meta-disk internal;
        }
        on slavenfs {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 10.10.10.24:7898;
                meta-disk internal;
        }
}
# both nodes need both files; copy them to the peer, e.g. from masternfs:
scp global_common.conf web.res 10.10.10.24:/etc/drbd.d/
# create the resource metadata; run on both nodes (you should see a success message)
drbdadm create-md web

# start drbd; run on both nodes
service drbd start
# overview of the current state
drbd-overview
# detailed runtime state
cat /proc/drbd
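At this stage both nodes should still show Secondary roles and Inconsistent disks, along these lines (the version header will differ):

 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----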


Set the primary node:
drbdadm -- --overwrite-data-of-peer primary web
drbd-overview
mkfs -t ext3 -L drbdweb /dev/drbd0
mkdir /web
mount /dev/drbd0 /web
# check the primary/secondary state; drbd can be mounted on only one node at a time
cat /proc/drbd
service drbd status
df -h
 
yum -y install nfs* portmap
vi /etc/exports
/web *(rw)
service portmap start
chkconfig portmap on
service nfs start
chkconfig nfs on
 
# in the stop section of /etc/init.d/nfs, change the killproc
# nfsd signal from -2 to -9 so nfsd dies quickly during failover
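A one-liner for that edit (a sketch; it keeps a .bak backup and assumes the stock CentOS 6 wording of the init script):

sed -i.bak 's/killproc nfsd -2/killproc nfsd -9/' /etc/init.d/nfs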
 
yum install heartbeat* libnet* -y
 
cd /usr/share/doc/heartbeat*
cp -f authkeys haresources ha.cf /etc/ha.d/
cd /etc/ha.d/
cat >> ha.cf <<EOF
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility     local0
keepalive 2
deadtime 10
udpport 694
#the peer's IP (on slavenfs use 10.10.10.22 here instead)
ucast eth0 10.10.10.24
#usually the gateway; the ping node is only used to test network connectivity
ping 10.10.10.1
auto_failback off
node  masternfs
node  slavenfs
EOF
 
echo "masternfs IPaddr::10.10.10.88/24/eth0 drbddisk::web Filesystem::/dev/drbd0::/data::ext3 killnfsd" >> haresources
 
cat >> authkeys <<EOF
auth 1
1 crc
EOF
 
chmod 600 /etc/ha.d/authkeys
echo "killall -9 nfsd; /etc/init.d/nfs restart; exit 0" >> /etc/ha.d/resource.d/killnfsd
chmod 755 /etc/ha.d/resource.d/killnfsd
 
service heartbeat start
chkconfig heartbeat on  
 
Check whether a node is primary or secondary:
# drbdadm role web    <----- output on node1
Primary/Secondary
# drbdadm role web    <----- output on node2
Secondary/Primary
 
If we want to demote the current node to secondary:
# umount /web
# drbdadm secondary web
Then promote the original secondary to primary:
# mkdir /web
# drbdadm primary web
# mount /dev/drbd0 /web    <---- note: do NOT format /dev/drbd0 again at this point
# cd /web
[root@node2 web]# ls
inittab  lost+found    <-------- the files are still there, confirming replication
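Once heartbeat holds the VIP, the share can also be checked from any client; a sketch using the VIP configured above:

showmount -e 10.10.10.88
mount -t nfs 10.10.10.88:/web /mnt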

lv1: 192.168.182.130

lv2: 192.168.182.129     VIP: 192.168.182.200 (this VIP is what clients access)

node1: 192.168.182.133

node2: 192.168.182.134   VIP: 192.168.182.150 (this VIP acts as the NFS server address for mounting)

To keep things simple, selinux and iptables are disabled throughout; in a real environment you would of course configure them properly instead.

Part 1: configure lv1 and lv2 and test that the front end works.

Configure the hostnames, IPs, and hosts file, and disable selinux and iptables.

1. On both machines run: yum install -y httpd ipvsadm keepalived

To tell the lv1 and lv2 pages apart, label each test page "lv1" and "lv2" respectively.

2. Next, configure keepalived.

Configuration on lv1:

vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
  notification_email {
    [email protected]
  }
  notification_email_from [email protected]
  smtp_server 127.0.0.1
  smtp_connect_timeout 30
  router_id LV_ha
}

vrrp_instance httpd {
   state MASTER
   interface eth0
   virtual_router_id 51
   priority 100
   advert_int 1
   authentication {
       auth_type PASS
       auth_pass 1111
   }
   virtual_ipaddress {
       192.168.182.200
   }
}

virtual_server 192.168.182.200 80 {
   delay_loop 2
   lb_algo rr
   lb_kind DR
   persistence_timeout 50
   protocol TCP

   real_server 192.168.182.130 80 {
       weight 3
       notify_down /var/www/httpd.sh
       TCP_CHECK {
           connect_timeout 3
           nb_get_retry 3
           delay_before_retry 3
           connect_port 80
       }
   }
}

Configuration on lv2 (lv2 is the standby, so use state BACKUP and a priority lower than lv1's):

vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
  notification_email {
    [email protected]
  }
  notification_email_from [email protected]
  smtp_server 127.0.0.1
  smtp_connect_timeout 30
  router_id LV_ha
}

vrrp_instance httpd {
   state BACKUP
   interface eth0
   virtual_router_id 51
   priority 90
   advert_int 1
   authentication {
       auth_type PASS
       auth_pass 1111
   }
   virtual_ipaddress {
       192.168.182.200
   }
}

virtual_server 192.168.182.200 80 {
   delay_loop 2
   lb_algo rr
   lb_kind DR
   persistence_timeout 50
   protocol TCP

   real_server 192.168.182.129 80 {
       weight 3
       notify_down /var/www/httpd.sh
       TCP_CHECK {
           connect_timeout 3
           nb_get_retry 3
           delay_before_retry 3
           connect_port 80
       }
   }
}

Create httpd.sh on both lv1 and lv2; when the health check sees httpd down, it kills keepalived so the VIP moves to the other node:

vim /var/www/httpd.sh

#!/bin/sh
pkill keepalived

chmod +x httpd.sh

Now test that the above works and that failover happens correctly.

At this moment lv1 is serving the page, consistent with its vrrp priority of 100.

Now stop httpd on lv1:

lv2 is now serving. When you start httpd and keepalived on lv1 again, service switches back to lv1; this is not shown here.

Part 2: configure drbd, heartbeat, and nfs on node1 and node2 and test them.

1. Configure hosts and install drbd, heartbeat, and nfs.

1> On node1 and node2:

vim /etc/hosts

192.168.182.133    node1
192.168.182.134    node2

2> Install drbd:

yum -y install gcc kernel-devel kernel-headers flex

wget
tar zxvf drbd-8.4.3.tar.gz
cd drbd-8.4.3
./configure --prefix=/usr/local/drbd --with-km
make KDIR=/usr/src/kernels/2.6.32-71.el6.i686/
make install
mkdir -p /usr/local/drbd/var/run/drbd
cp /usr/local/drbd/etc/rc.d/init.d/drbd /etc/rc.d/init.d
chkconfig --add drbd
chkconfig drbd on
cd drbd
cp drbd.ko /lib/modules/`uname -r`/kernel/lib/
depmod
modprobe drbd

Confirm the drbd module is loaded (lsmod | grep drbd).

That completes the drbd installation on both machines; now the configuration.

First, on node1 and node2, use fdisk to partition the newly added disk (an 8 GB disk was added here for drbd); remember: do NOT format it.

node1:

cd /usr/local/drbd/etc/drbd.d

mv global_common.conf global_common.conf.bak

vim global_common.conf

global {
   usage-count yes;   # whether to take part in DRBD's usage statistics; default yes
}
common {
   net {
      protocol C;     # DRBD's protocol C: a write counts as complete only after the remote host confirms it
   }
}

vim r0.res

resource r0 {
    on node1 {                        # each host section starts with "on" followed by the hostname
        device /dev/drbd1;            # the drbd device name
        disk /dev/sdb1;               # /dev/drbd1 is backed by the /dev/sdb1 partition
        address 192.168.182.133:7789; # DRBD's listen address and port, used to talk to the peer
        meta-disk internal;
    }
    on node2 {
        device /dev/drbd1;
        disk /dev/sdb1;
        address 192.168.182.134:7789;
        meta-disk internal;
    }
}

Copy both configuration files into /etc/drbd.d/ on the two hosts.

2. Start DRBD.

Run the following on both nodes.
Before starting DRBD, create the metadata area DRBD needs on each host's sdb1 partition. On both hosts:
[root@node1 ~]# drbdadm create-md r0    (or run drbdadm create-md all)
[root@node2 ~]# drbdadm create-md r0
Start the service on both nodes:
[root@node1 ~]# /etc/init.d/drbd start
[root@node2 ~]# /etc/init.d/drbd start
Preferably start them at the same time.
Check the node state from either node (cat /proc/drbd).

The output fields mean:
ro is the role; when drbd first starts, both nodes default to Secondary.
ds is the disk state: "Inconsistent/Inconsistent" means the two nodes' data is not yet in sync, while "UpToDate/UpToDate" means both copies are current.
ns counts the packets sent over the network.
dw is disk-write information.
dr is disk-read information.

Set the primary node.
Since neither node is primary by default, pick the host that should become primary and run the following there:

drbdsetup /dev/drbd1 primary --o

After this command has been run once, later role changes can use the other command:

drbdadm primary r0    (or drbdadm primary all)

Once executed, the data on the two machines' corresponding disks starts to synchronize.

From the output you can see that "ro" has become "Primary/Secondary" and "ds" has become "UpToDate/Inconsistent" ("current/inconsistent"): the data is syncing between the primary and standby disks, at 8.4% progress and roughly 10 MB/s in this run.
Wait a moment and check the sync state again:

You can see the sync has finished, and the "ds" state is now "UpToDate/UpToDate" ("current/current").

Format the disk:

mkfs.ext4 /dev/drbd1

After that it can be mounted and used.

3. Install heartbeat and nfs.

yum install heartbeat nfs libnet -y

cp /usr/share/doc/heartbeat-3.0.4/{authkeys,ha.cf,haresources} /etc/ha.d/

1) node1's ha.cf:

logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 120
udpport 694
ucast eth0 192.168.182.134
auto_failback off
node node1
node node2
ping 192.168.182.2
respawn root /usr/lib/heartbeat/ipfail

node2's ha.cf is the same, except ucast eth0 192.168.182.133 (the peer's IP).

Configure /etc/ha.d/authkeys (same on node2):

auth 2
#1 crc
2 sha1 heartbeat
#3 md5 Hello!

Configure /etc/ha.d/haresources (same on node2):

node1 IPaddr::192.168.182.150/24/eth0 drbddisk::r0 Filesystem::/dev/drbd1::/root/data::ext4 nfs

(The Filesystem mount point must match the directory exported below.)

chmod 600 authkeys    (same on node2)

cp /usr/local/drbd/etc/ha.d/resource.d/drbddisk /etc/ha.d/resource.d/    (same on node2)

Start heartbeat:

/etc/init.d/heartbeat start

At this point you will see the VIP come up on node1's NIC.

Stop heartbeat on node1 and you will see node2 take over.

Automatic mounting, drbd role switching, and VIP failover all work.

Configure the NFS export on both node1 and node2:

[root@node1 ~]# vim /etc/exports

/root/data      *(rw)

[root@node1 ~]# exportfs -r
[root@node1 ~]# exportfs
/root/data        <world>

4. Next, mount the share on the two lv machines as the httpd document root.

lv1, lv2:

mount -t nfs 192.168.182.150:/root/data /var/www/html

You can put this in fstab so it mounts at boot:

192.168.182.150:/root/data    /var/www/html    nfs    defaults    0    0

5. Now for the tests:

1> Create index.html in /root/data/ on node1, containing: node heartbeat test

2> Crashing lv1 gives the same result as the earlier front-end test; service is unaffected.

3> Now crash node1: the service fails over to node2. Then edit index.html to make the change visible; here the marker "two" is appended.

node1: /etc/init.d/heartbeat stop

node2:

[root@node2 data]# vim index.html

node heartbeat test two

Now access the VIP again:

Everything still works. OK!

This makes unified resource management convenient while adding reliability.

This article comes from the "Coffee_蓝山" blog; please retain this attribution.

NFS high-availability shared file storage (nfs1/nfs2)

NFS1
         IP1: 10.10.10.166     heartbeat and data-replication NIC; configure no gateway, just add a route
         VIP: 10.10.10.150

NFS2
         IP1: 10.10.10.167
         VIP: 10.10.10.50

Add the route with: route add -host IP dev eth0, and also put it in rc.local.

DRBD needs at least one partition split off on its own.

Install the drbd repo.

CentOS 7:

yum install glibc* -y

# rpm --import

# rpm -Uvh

CentOS 6:

rpm -Uvh elrepo-release-6-6.el6.elrepo.noarch.rpm

or

rpm -Uvh

Configure the Aliyun yum mirror:

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

CentOS 6:

wget -O /etc/yum.repos.d/CentOS-Base.repo

CentOS 7:

wget -O /etc/yum.repos.d/CentOS-Base.repo

yum makecache

Configure DRBD.

Partition the disk on each of the two hosts:

parted -s /dev/sdb mklabel gpt                    # convert the partition table to GPT
parted -s /dev/sdb mkpart primary 0% 80%
parted -s /dev/sdb mkpart primary 81% 100%

Print the partitioning result:

[root@mfsmaster ~]# parted /dev/sdb p

Model: VMware, VMware Virtual S (scsi)
Disk /dev/sdb: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  17.2GB  17.2GB               primary
 2      17.4GB  21.5GB  4079MB               primary

The goal is two partitions: sdb1 for data replication and sdb2 for the log. With fdisk the same split looks like this:

fdisk /dev/sdb

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2610, default 1): <Enter>
Last cylinder, +cylinders or +size{K,M,G} (1305-2610, default 2610): +10G

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p

Run through it once more to create sdb2, then press w to save the partition table.

Install DRBD.

Upgrade the kernel: yum install kernel kernel-devel kernel-headers -y

yum install kmod-drbd83 drbd83 -y

Load DRBD into the kernel:

modprobe drbd

If loading fails, run depmod and then reboot.

echo "modprobe drbd > /dev/null 2>&1" > /etc/sysconfig/modules/drbd.modules

Check that it loaded:

lsmod | grep -i drbd

Check where drbd.ko was installed:

modprobe -l | grep -i drbd

After a successful install, the drbd tools (drbdadm, drbdsetup) live under /sbin/.

Configure drbd:

vim /etc/drbd.conf

#include "drbd.d/global_common.conf";
#include "drbd.d/*.res";

global {
        usage-count no;
}

common {
        syncer { rate 200M; }
}

resource nfsha {
        protocol C;

        startup {
                 wfc-timeout 120;
                 degr-wfc-timeout 120;
        }

        net {
                cram-hmac-alg "sha1";
                shared-secret "nfs-ha";
        }

        disk {
                on-io-error detach;
                fencing resource-only;
        }

        device /dev/drbd0;

        on nfs1 {
                disk /dev/sdb1;
                address 10.10.10.166:7788;
                meta-disk internal;
        }

        on nfs2 {
                disk /dev/sdb1;
                address 10.10.10.167:7788;
                meta-disk internal;
        }
}

Explanation (an annotated example of the same structure; this generic sample uses resource r0 and hosts master-drbd/slave-drbd):

global {

        usage-count no;  # do not take part in the official usage statistics

}

common {

        syncer { rate 200M; }                  # maximum sync rate between primary and standby nodes (bytes per second)

}

resource r0 {                                  # the resource name

        protocol C;                            # the protocol used

        handlers {

                # These are EXAMPLE handlers only.

                # They may have severe implications,

                # like hard resetting the node under certain circumstances.

                # Be careful when chosing your poison.

 

                 pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";

                 pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";

                 local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";

                # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";

                 split-brain "/usr/lib/drbd/notify-split-brain.sh root";

                 out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";

                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";

                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;

        }

 

        startup {

                 wfc-timeout 120;

                 degr-wfc-timeout 120;

        }

 

        net {

                cram-hmac-alg "sha1";   //DRBD同步时使用的申明办法和密码音讯

                shared-secret "nfsha";

        }

        

         disk {                                 # together with the fencing setting, avoids switching over while data is out of sync

                on-io-error detach;

                fencing resource-only;

        }

        device /dev/drbd0;

        on master-drbd {                        # each host section starts with "on" plus the hostname (must match uname -n)

                disk /dev/sdb1;                 # drbd0 is backed by the /dev/sdb1 partition

                address 192.168.100.10:7788;    # listen address and port

                meta-disk internal;             # how DRBD stores its metadata

        }

        on slave-drbd {

                disk /dev/sdb1;

                address 192.168.100.20:7788;

                meta-disk internal;

        }

 

}

 

 

Copy the configuration to the other host.
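For example, with scp (assuming the peer is reachable as nfs2):

scp /etc/drbd.conf nfs2:/etc/drbd.conf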

 

Start DRBD (on both hosts, at the same time).

First create the metadata blocks DRBD uses for bookkeeping:

                   两台主机分别实践drbdadm create-md nfsha或然drbdadm create-md all

Start DRBD: service drbd start

Check the node state:

                   cat /proc/drbd

                   version: 8.3.16 (api:88/proto:86-97)

GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2014-11-24 14:51:37

 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----

    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:5242684

        

         ro表示剧中人物音信,第二回运行时都为Secondary(备卡塔尔(قطر‎状态

         ds表示磁盘消息,Inconsistent表示磁盘数据不相近状态

         ns表示网络发送的数目包音信

         dw表示磁盘写音讯

         dr表示磁盘读新闻

 

Set the primary node:

         drbdsetup /dev/drbd0 primary -o

drbdadm primary nfsha    (or drbdadm primary all)

To demote to secondary: drbdadm secondary nfsha

After starting, check the state again:

         version: 8.3.16 (api:88/proto:86-97)

GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2014-11-24 14:51:37

 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----

    ns:4812800 nr:0 dw:0 dr:4813472 al:0 bm:293 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:429884

         [=================>..] sync'ed: 91.9% (416/5116)M

         finish: 0:00:10 speed: 40,304 (38,812) K/sec

        

sync'ed: the synchronization progress.

Wait a while and check again (the sync can take a long time):

         version: 8.3.16 (api:88/proto:86-97)

GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2014-11-24 14:51:37

 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----

    ns:5242684 nr:0 dw:0 dr:5243356 al:0 bm:320 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

 

Once the ds state becomes UpToDate/UpToDate, the sync has succeeded.

 

Mount the DRBD device

mount can only be used on the primary, so only the primary can be formatted and mounted.

To mount on the standby, first unmount on the primary, promote the standby to primary, and then mount.

 

Format on the primary:

         mkfs.ext4 /dev/drbd0

         tune2fs -c -1 /dev/drbd0    # disable mount-count-triggered fsck

Mount:

mkdir /data

         mount /dev/drbd0 /data

         关闭DRBD开机自运维

Test:

          dd if=/dev/zero of=/data/test.file bs=100M count=2
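To confirm the write really replicated, you can checksum the file before and after the switchover below (a sketch):

md5sum /data/test.file        # on the current primary: note the hash
# ...perform the switchover steps below, then on the new primary:
md5sum /data/test.file        # should print the same hash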

Check whether the standby node is in sync:

On the primary:

          umount /data

          drbdadm secondary all

        

Promote the standby to primary:

          drbdadm primary nfsha

          mount /dev/drbd0 /data

After mounting, check that test.file is present.

          

          

 

 

Install Heartbeat

Configure the VIP:

cd /etc/sysconfig/network-scripts/

cp ifcfg-eth0 ifcfg-eth0:0

 

DEVICE=eth0:0

ONBOOT=yes

IPADDR=10.10.10.50

NETMASK=255.255.255.255
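Bring the alias interface up (one way, using the standard network scripts):

ifup eth0:0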

 

yum install pam-devel -y

yum install python-devel -y

yum install gcc-c++ -y

yum install glib* -y

yum install libxslt* -y

yum install tkinter* -y

yum install elfutils* -y

yum install lm_sensors* -y

yum install perl-Compress* perl-libwww* perl-HTML* perl-XML* perl-Net* perl-IO* perl-Digest* -y

yum install bzip2* -y

yum install ncurses* -y

yum install imake* -y

yum install autoconf* -y

yum install flex -y

yum install beecrypt* -y

yum install net-snmp* -y

yum install perl-LDAP-0.40-1.el6.noarch.rpm -y

yum install perl-Parse-* perl-Mail-DKIM* -y

yum install libnet* -y

yum install openssl openssl-devel -y

 

tar xf libnet-1.1.2.1.tar.gz

cd libnet

./configure

make && make install

cd ..

tar xf heartbeat-2.0.7.tar.gz

cd heartbeat-2.0.7

./ConfigureMe configure --enable-fatal-warnings=no --disable-swig --disable-snmp-subagent

./ConfigureMe make --enable-fatal-warnings=no || gmake

make install

 

 

cd /usr/share/doc/heartbeat-2.0.7/

cp ha.cf haresources authkeys /etc/ha.d/

cd /etc/ha.d/

 

 

Set up the heartbeat configuration files

(nfs1)

Edit ha.cf and add the configuration below:

 

 

# vi /etc/ha.d/ha.cf

logfile         /var/log/ha-log

logfacility     local0

keepalive       2

deadtime        5

ucast           eth0 10.10.10.167    # the peer's NIC and IP

auto_failback   off

node           nfs1 nfs2

 

(nfs2)

Edit ha.cf and add the configuration below:

 

# vi /etc/ha.d/ha.cf

logfile         /var/log/ha-log

logfacility     local0

keepalive       2

deadtime        5

ucast           eth0 10.10.10.166

auto_failback   off

node            nfs1 nfs2

 

Edit the two-node authentication file authkeys and add the following (node1, node2):

 

 

# vi /etc/ha.d/authkeys

auth 1

1 crc

Give the authentication file 600 permissions:

 

 

# chmod 600 /etc/ha.d/authkeys

Edit the cluster resources file (nfs1, nfs2):

 

 

# vi /etc/ha.d/haresources

nfs1 IPaddr::10.10.10.50/24/eth0 drbddisk::nfsha Filesystem::/dev/drbd0::/data::ext4 killnfsd

The IP here is the virtual IP; pay attention to the NIC (eth0), the resource name (nfsha), and the mount point (/data).

Note: the IPaddr, Filesystem, and other scripts referenced in this file live under /etc/ha.d/resource.d/. You can also place service start scripts there (for example mysql or www) and add the script name to the /etc/ha.d/haresources entry, so that heartbeat starts the script when it starts.

 

IPaddr::10.10.10.50/24/eth0: uses the IPaddr script to configure the floating virtual IP that serves clients

drbddisk::nfsha: uses the drbddisk script to take over and release the DRBD resource between the primary and secondary nodes

Filesystem::/dev/drbd0::/data::ext4: uses the Filesystem script to mount and unmount the disk
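These resource scripts can also be run by hand when debugging a takeover; drbddisk, shown in full below, accepts start/stop/status:

/etc/ha.d/resource.d/drbddisk nfsha status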

 

Create the script file killnfsd, used to restart the NFS service (node1, node2):

 

# vi /etc/ha.d/resource.d/killnfsd

killall -9 nfsd; /etc/init.d/nfs restart;exit 0

Give it 755 execute permission:

 

 

chmod 755 /etc/ha.d/resource.d/killnfsd

 

Create the DRBD script file drbddisk (nfs1, nfs2):

 

编写drbddisk,增多上边包车型地铁本子内容

 

 

# vi /etc/ha.d/resource.d/drbddisk

 

#!/bin/bash

#

# This script is inteded to be used as resource script by heartbeat

#

# Copright 2003-2008 LINBIT Information Technologies

# Philipp Reisner, Lars Ellenberg

#

###

 

DEFAULTFILE="/etc/default/drbd"

DRBDADM="/sbin/drbdadm"

 

if [ -f $DEFAULTFILE ]; then

 . $DEFAULTFILE

fi

 

if [ "$#" -eq 2 ]; then

 RES="$1"

 CMD="$2"

else

 RES="all"

 CMD="$1"

fi

 

## EXIT CODES

# since this is a "legacy heartbeat R1 resource agent" script,

# exit codes actually do not matter that much as long as we conform to

#  

# but it does not hurt to conform to lsb init-script exit codes,

# where we can.

#  

#LSB-Core-generic/LSB-Core-generic/iniscrptact.html

####

 

drbd_set_role_from_proc_drbd()

{

local out

if ! test -e /proc/drbd; then

ROLE="Unconfigured"

return

fi

 

dev=$( $DRBDADM sh-dev $RES )

minor=${dev#/dev/drbd}

if [[ $minor = *[!0-9]* ]] ; then

# sh-minor is only supported since drbd 8.3.1

minor=$( $DRBDADM sh-minor $RES )

fi

if [[ -z $minor ]] || [[ $minor = *[!0-9]* ]] ; then

ROLE=Unknown

return

fi

 

if out=$(sed -ne "/^ *$minor: cs:/ { s/:/ /g; p; q; }" /proc/drbd); then

set -- $out

ROLE=${5%/**}

: ${ROLE:=Unconfigured} # if it does not show up

else

ROLE=Unknown

fi

}

 

case "$CMD" in

   start)

# try several times, in case heartbeat deadtime

# was smaller than drbd ping time

try=6

while true; do

$DRBDADM primary $RES && break

let "--try" || exit 1 # LSB generic error

sleep 1

done

;;

   stop)

# heartbeat (haresources mode) will retry failed stop

# for a number of times in addition to this internal retry.

try=3

while true; do

$DRBDADM secondary $RES && break

# We used to lie here, and pretend success for anything != 11,

# to avoid the reboot on failed stop recovery for "simple

# config errors" and such. But that is incorrect.

# Don't lie to your cluster manager.

# And don't do config errors...

let --try || exit 1 # LSB generic error

sleep 1

done

;;

   status)

if [ "$RES" = "all" ]; then

   echo "A resource name is required for status inquiries."

   exit 10

fi

ST=$( $DRBDADM role $RES )

ROLE=${ST%/**}

case $ROLE in

Primary|Secondary|Unconfigured)

# expected

;;

*)

# unexpected. whatever...

# If we are unsure about the state of a resource, we need to

# report it as possibly running, so heartbeat can, after failed

# stop, do a recovery by reboot.

# drbdsetup may fail for obscure reasons, e.g. if /var/lock/ is

# suddenly readonly.  So we retry by parsing /proc/drbd.

drbd_set_role_from_proc_drbd

esac

case $ROLE in

Primary)

echo "running (Primary)"

exit 0 # LSB status "service is OK"

;;

Secondary|Unconfigured)

echo "stopped ($ROLE)"

exit 3 # LSB status "service is not running"

;;

*)

# NOTE the "running" in below message.

# this is a "heartbeat" resource script,

# the exit code is _ignored_.

echo "cannot determine status, may be running ($ROLE)"

exit 4 #  LSB status "service status is unknown"

;;

esac

;;

   *)

echo "Usage: drbddisk [resource] {start|stop|status}"

exit 1

;;

esac

 

exit 0

Give it 755 execute permission:

 

 

 chmod 755 /etc/ha.d/resource.d/drbddisk

Start the HeartBeat service

 

在七个节点上运维HeartBeat服务,先运行node1:(node1,node2卡塔尔(قطر‎

 

 

 service heartbeat start

If the virtual IP 192.168.0.190 can now be pinged from other machines, the configuration succeeded.
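The NFS side can be checked from a client as well (a sketch; use whichever VIP you configured in haresources):

showmount -e 10.10.10.50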

 

Configure NFS (nfs1, nfs2):

Install NFS: yum install nfs-utils rpcbind

 

Edit the exports file and add the following:

 

 

# vi /etc/exports

/data        *(rw,no_root_squash)

Restart the NFS services:

 

 

 

 

 

 service rpcbind restart

 service nfs restart

 chkconfig rpcbind on

 chkconfig nfs off

Note: NFS is deliberately not enabled at boot, because the /etc/ha.d/resource.d/killnfsd script controls starting NFS.

 

Part 5: Testing high availability

 

1. Normal hot-standby switchover

Mount the NFS share on a client:

 

 

# mount -t nfs 192.168.0.190:/store /tmp

Simulate a failure by stopping the heartbeat service on the primary node1: the standby node2 takes over immediately and seamlessly, and reads and writes on the client's NFS mount keep working.

 

The DRBD state on the standby node2 at this point:

 

 

# service drbd status

drbd driver loaded OK; device status:

version: 8.4.3 (api:1/proto:86-101)

GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd2.corp.com, 2015-05-12 21:05:41

m:res  cs         ro                 ds                 p  mounted     fstype

0:r0   Connected  Primary/Secondary  UpToDate/UpToDate  C  /store      ext4

2. Crash switchover

Force a crash by cutting node1's power directly.

 

node2 again takes over immediately and seamlessly; reads and writes on the client's NFS mount keep working.

 

The DRBD state on node2 at this point:

 

# service drbd status

drbd driver loaded OK; device status:

version: 8.4.3 (api:1/proto:86-101)

GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd2.corp.com, 2015-05-12 21:05:41

m:res  cs         ro                 ds                 p  mounted     fstype

0:r0   Connected  Primary/Unknown    UpToDate/DUnknown  C  /store      ext4

 

 

nfs  heartbeat、drbd不要设置成开机自运转

 

After the failed primary comes back, start drbd first, and only then heartbeat.
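In commands, the recovery on the returning node would look something like this (a sketch; check the peer's state before promoting anything):

service drbd start
cat /proc/drbd              # wait for Connected / UpToDate
service heartbeat start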

 
