
ORACLE 11G RAC Installation: Configuring Shared Disks via VMware

sinye56 2024-09-27 21:08

Introduction:

Build Oracle 11g RAC on your own PC with VMware, sharing disks between the virtual machines by editing the VM parameter (.vmx) file.

Goal: set up a RAC environment.

Environment: VMware Workstation 8.0.0 + ORACLE 11gR2 + Linux 5.8 x86_64

Two virtual machines are created in VMware, each with 2.5 GB of memory.

Install and configure the first virtual machine, add the shared disks, and finally clone it to create the second one.

Disk layout notes:

To keep things simple, this article uses five disks of the same size, combined into a single disk group with External redundancy.

More typically, OCR is placed on three disks of the same size with Normal redundancy (this disk group can be selected directly during the grid installation).

DATA and FRA each get a separate disk with External redundancy (create these disk groups with asmca after grid is installed).
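For that more typical layout, the DATA and FRA disk groups are created after the grid installation, either with the asmca GUI or, equivalently, from SQL*Plus on the ASM instance. A minimal sketch as the grid user, assuming ASMLib labels DISK4 and DISK5 were set aside for them (these labels are placeholders, not the single-group layout actually used in this article):

export ORACLE_SID=+ASM1          # ASM instance name on node 1
sqlplus / as sysasm <<'EOF'
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK 'ORCL:DISK4';
CREATE DISKGROUP FRA  EXTERNAL REDUNDANCY DISK 'ORCL:DISK5';
EOF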

Linux installation and configuration:

VM configuration (create one virtual machine first; once it is fully configured, shut it down and clone the second one from it):


After the second network adapter has been added, the VM configuration is basically done. Start RAC1 and install the Linux operating system.

Linux 5.8 installation:

The OS installation itself is omitted here; a minimal install is fine, or choose packages to suit your needs.

Note:

Two NICs, one bridged and one host-only, corresponding to the public IP and the private IP respectively.
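For reference, a sketch of what the two interface files might look like on RAC1, using the addresses planned in the /etc/hosts section below (the gateway value is an assumption; use whatever your bridged LAN actually uses):

# /etc/sysconfig/network-scripts/ifcfg-eth0  -- bridged NIC, public IP
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.3.250
NETMASK=255.255.255.0
GATEWAY=192.168.3.1      # assumption: adjust to your LAN's gateway

# /etc/sysconfig/network-scripts/ifcfg-eth1  -- host-only NIC, private interconnect
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.200.5
NETMASK=255.255.255.0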

Installing RAC:

1. Required software packages:

linux.x64_11gR2_database_1of2.zip

linux.x64_11gR2_database_2of2.zip

linux.x64_11gR2_grid.zip

oracleasm-2.6.18-308.el5-2.0.5-1.el5.x86_64.rpm

oracleasmlib-2.0.4-1.el5.x86_64.rpm

oracleasm-support-2.1.7-1.el5.x86_64.rpm

pdksh-5.2.14-1.i386.rpm --- depending on the OS version and environment the ksh package may be needed instead; the installer's prerequisite check will show which one

2. Network configuration:

Edit /etc/hosts on RAC1 and map each host name to its IP.

Add the following lines for rac1 and rac2:

#public ip

192.168.3.250 rac1

192.168.3.251 rac2

#private ip: used for the RAC interconnect

192.168.200.5 rac1priv

192.168.200.6 rac2priv

#vip: virtual ip

192.168.3.240 rac1vip

192.168.3.241 rac2vip

#scan

192.168.3.243 racscan

3. Check whether the required rpm packages are installed; install any that are missing with yum (both the 32-bit and 64-bit versions may be needed):

rpm -q binutils elfutils-libelf elfutils-libelf-devel compat-libstdc++-33 gcc gcc-c++ glibc glibc-devel glibc-common ksh libgcc libstdc++ libstdc++-devel libaio libaio-devel make sysstat unixODBC unixODBC-devel

Install the packages reported as missing by the check above:

yum install -y libaio-devel numactl-devel sysstat unixODBC unixODBC-devel


The ASMLib packages also need to be installed: rpm -ivh *.rpm

oracleasm-2.6.18-308.el5-2.0.5-1.el5.x86_64.rpm

oracleasmlib-2.0.4-1.el5.x86_64.rpm

oracleasm-support-2.1.7-1.el5.x86_64.rpm

pdksh-5.2.14-1.i386.rpm --- this may conflict with the ksh package; if unsure, wait and install it only if the graphical prerequisite check asks for it

4. Create groups and users:

Groups:

groupadd oinstall

groupadd dba

groupadd oper

groupadd asmadmin

groupadd asmdba

groupadd asmoper

Users:

useradd -g oinstall -G dba,asmadmin,asmdba,asmoper grid

useradd -g oinstall -G dba,oper,asmdba oracle

[root@RAC1 ~]# passwd grid

Changing password for user grid.

New UNIX password:

BAD PASSWORD: it is based on a dictionary word

Retype new UNIX password:

passwd: all authentication tokens updated successfully.

[root@RAC1 ~]# passwd oracle

Changing password for user oracle.

New UNIX password:

BAD PASSWORD: it is based on a dictionary word

Retype new UNIX password:

passwd: all authentication tokens updated successfully.

5. Create directories and set permissions:

[root@RAC1 ~]# mkdir -p /u01/app/grid

[root@RAC1 ~]# mkdir -p /u01/app/11.2.0/grid

[root@RAC1 ~]# mkdir -p /u01/app/oracle

[root@RAC1 ~]# chown -R grid:oinstall /u01/app

[root@RAC1 ~]# chown oracle:oinstall /u01/app/oracle

[root@RAC1 ~]# chown grid:oinstall /u01

[root@RAC1 ~]# chmod 775 /u01/

[oracle@RAC1 ~]$ mkdir -p /u01/app/oracle/product/11.2.0/dbhome_1

6. Set environment variables:

grid user:

vi .bash_profile

#new add

ORACLE_BASE=/u01/app/gridbase

ORACLE_HOME=/u01/app/grid

PATH=$ORACLE_HOME/bin:$PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH

DISPLAY=192.168.3.137:0.0

export ORACLE_BASE ORACLE_HOME LD_LIBRARY_PATH DISPLAY

oracle user:

#new add

ORACLE_BASE=/u01/app/oracle

ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1

ORACLE_SID=racdb1

PATH=$ORACLE_HOME/bin:$PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH

DISPLAY=192.168.3.137:0.0

export ORACLE_BASE ORACLE_HOME LD_LIBRARY_PATH DISPLAY ORACLE_SID

7. Configure kernel parameters:

[root@RAC1 ~]# vi /etc/sysctl.conf

#new add

kernel.shmall = 2097152

kernel.shmmax = 2147483648

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

fs.file-max = 6815744

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 4194304

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

fs.aio-max-nr = 1048576

fs.file-max = 6815744

[root@RAC1 ~]# sysctl -p

[root@RAC1 ~]# vi /etc/profile

#new add

if [ $USER = "oracle" ] || [ $USER = "grid" ] ; then

if [ $SHELL = "/bin/ksh" ]; then

ulimit -p 16384

ulimit -n 65536

else

ulimit -u 16384 -n 65536

fi

umask 022

fi

[root@RAC1 ~]# vi /etc/security/limits.conf

#new add

oracle soft nproc 2047

oracle hard nproc 16384

oracle soft nofile 1024

oracle hard nofile 65536

grid soft nproc 2047

grid hard nproc 16384

grid soft nofile 1024

grid hard nofile 65536

[root@RAC1 ~]# vi /etc/pam.d/login

#new add

session required pam_limits.so

Adjust the shared memory size: edit /etc/fstab, find the /dev/shm line, and add size=3000m (the exact value may differ on your system):

[root@RAC1 ~]# vi /etc/fstab

tmpfs /dev/shm tmpfs defaults,size=3000m 0 0

[root@RAC1 ~]# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/sda1 25G 6.4G 17G 28% /

tmpfs 1.2G 0 1.2G 0% /dev/shm

/dev/hdc 4.3G 4.3G 0 100% /mnt

[root@RAC1 ~]#mount -o remount /dev/shm

[root@RAC1 ~]# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/sda1 25G 6.4G 17G 28% /

/dev/hdc 4.3G 4.3G 0 100% /mnt

tmpfs 3.0G 0 3.0G 0% /dev/shm

8. Time synchronization: you can either use an NTP server or rely on Oracle's own CTSS (ctssd).

To use ctssd, disable NTP:

[root@RAC1 ~]# service ntpd stop

[root@RAC1 ~]# chkconfig ntpd off

[root@RAC1 ~]# mv /etc/ntp.conf /etc/ntp.conf.bak ---- rename the config file, otherwise the later prerequisite check will still detect and default to NTP

9. Create the shared disks (shut down the virtual machine created above, then add the disks):

Method:

Create ordinary virtual disks in VMware, then edit the disk settings so that they can be shared.


Repeat these steps to create five disks of 8 GB each (the same size is used throughout for simplicity), attached to a new SCSI controller.

(Disk sizes are up to you; a typical layout is three equally sized OCR disks, one DATA disk and one FRA disk. A command-line alternative to the GUI is sketched below.)
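If you prefer the command line to the VMware GUI, the same disks can be created with vmware-vdiskmanager; a sketch, run on the host from the RAC1 VM's directory (the file names and the lsilogic adapter are assumptions):

vmware-vdiskmanager -c -s 8GB -a lsilogic -t 2 sharedisk1.vmdk
vmware-vdiskmanager -c -s 8GB -a lsilogic -t 2 sharedisk2.vmdk
vmware-vdiskmanager -c -s 8GB -a lsilogic -t 2 sharedisk3.vmdk
vmware-vdiskmanager -c -s 8GB -a lsilogic -t 2 sharedisk4.vmdk
vmware-vdiskmanager -c -s 8GB -a lsilogic -t 2 sharedisk5.vmdk
# -t 2 creates a preallocated single-file disk; preallocated disks are generally required for sharing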

Edit the RAC1 virtual machine's configuration file so that the five disks created above can be used by both virtual machines.

Edit the RAC1.vmx file:

Find the line tools.remindInstall = "TRUE" and add the following lines below it:

disk.locking = "FALSE"

diskLib.dataCacheMaxSize = "0"

diskLib.dataCacheMaxReadAheadSize = "0"

diskLib.dataCacheMinReadAheadSize = "0"

diskLib.dataCachePageSize = "4096"

diskLib.maxUnsyncedWrites = "0"

Then add:

scsi1.sharedBus = "VIRTUAL"

And add the following entries under each disk, adjusting the controller and target numbers accordingly:

scsi1:1.writeThrough = "TRUE"

scsi1:1.deviceType = "plainDisk"

scsi1:1.redo = ""
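Put together, a complete stanza for the first shared disk might look like the following sketch (the .vmdk file name is whatever you chose when creating the disk, and scsi1.virtualDev = "lsilogic" is an assumption matching the adapter used above):

scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "VIRTUAL"
scsi1:1.present = "TRUE"
scsi1:1.fileName = "sharedisk1.vmdk"
scsi1:1.writeThrough = "TRUE"
scsi1:1.deviceType = "plainDisk"
scsi1:1.redo = ""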

When the edits are complete, start RAC1.

10. Partition the five newly added disks:

[root@RAC1 ~]# fdisk /dev/sdb

Then answer: n -> p -> 1 -> default -> default -> w

Repeat this for each of the five disks.

If the new partitions are not visible after partitioning, reboot the server.
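A non-interactive sketch of the same n -> p -> 1 -> defaults -> w sequence for all five disks (the device names sdb through sdf are assumptions based on the order the disks were added):

for d in sdb sdc sdd sde sdf; do
    echo -e "n\np\n1\n\n\nw" | fdisk /dev/$d
done
partprobe    # re-read the partition tables; reboot if the new partitions still do not appear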

11. Configure and initialize ASMLib (when prompted for the default group, asmadmin is the best choice).

Create the ASM disks.

Check the ASM disk information.
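The screenshots for this step are not reproduced here; a sketch of the usual ASMLib commands, run as root (the DISK1..DISK5 labels and the sdb1..sdf1 partitions are assumptions matching the five disks partitioned above):

/etc/init.d/oracleasm configure      # interface user: grid, group: asmadmin, start on boot: y, scan on boot: y
/etc/init.d/oracleasm createdisk DISK1 /dev/sdb1
/etc/init.d/oracleasm createdisk DISK2 /dev/sdc1
/etc/init.d/oracleasm createdisk DISK3 /dev/sdd1
/etc/init.d/oracleasm createdisk DISK4 /dev/sde1
/etc/init.d/oracleasm createdisk DISK5 /dev/sdf1
/etc/init.d/oracleasm listdisks      # all five disks should be listed
# On RAC2, after cloning, the labeled disks only need to be scanned:
/etc/init.d/oracleasm scandisks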

This completes the configuration on RAC1; next, clone it to create RAC2.

12. Clone the second virtual machine

Shut down RAC1 and copy all of the files in the RAC1 directory into a RAC2 directory.

Open the copied virtual machine in VMware as RAC2:

Then rename it to RAC2.

Start both RAC1 and RAC2.

When RAC2 starts, VMware asks whether the virtual machine was moved or copied; choose "I copied it".

Configure RAC2 with the network settings planned earlier:

Change the hostname.

Change the IP addresses.

Reboot RAC2 so the changes take effect.

Note: you may need to delete the HWADDR line from the NIC configuration files and clear the NIC entries in /etc/udev/rules.d/70-persistent-net.rules, then reboot. A condensed sketch of these RAC2 changes follows.
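A minimal sketch of the RAC2 changes just described (the interface and file names are the usual RHEL 5 ones and are assumptions):

sed -i 's/^HOSTNAME=.*/HOSTNAME=RAC2/' /etc/sysconfig/network
vi /etc/sysconfig/network-scripts/ifcfg-eth0   # IPADDR=192.168.3.251, delete the HWADDR line
vi /etc/sysconfig/network-scripts/ifcfg-eth1   # IPADDR=192.168.200.6, delete the HWADDR line
> /etc/udev/rules.d/70-persistent-net.rules    # clear the cached NIC-to-MAC bindings
reboot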

13. Configure user equivalence between RAC1 and RAC2, i.e. passwordless SSH.

(This can also be skipped here and configured later from the graphical installer.)

Configure SSH for the grid user:

RAC1:

[root@RAC1 ~]# su - grid

[grid@RAC1 ~]$ ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/home/grid/.ssh/id_dsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/grid/.ssh/id_dsa.

Your public key has been saved in /home/grid/.ssh/id_dsa.pub.

The key fingerprint is:

04:d6:4e:ab:1a:f2:c3:ef:79:bd:96:16:88:4b:8f:fd grid@RAC1

RAC2:

[root@RAC2 ~]# su - grid

[grid@RAC2 ~]$ ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/home/grid/.ssh/id_dsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/grid/.ssh/id_dsa.

Your public key has been saved in /home/grid/.ssh/id_dsa.pub.

The key fingerprint is:

41:35:d3:87:90:c8:28:67:4b:dd:48:0d:52:f0:b4:e3 grid@RAC2

On RAC1 (alternatively, use the ssh-copy-id command):

Append RAC1's public key to authorized_keys:

[grid@RAC1 .ssh]$ cat id_dsa.pub >>authorized_keys

Append RAC2's public key to authorized_keys:

[grid@RAC1 .ssh]$ ssh rac2 cat .ssh/id_dsa.pub >>authorized_keys

The authenticity of host 'rac2 (192.168.3.251)' can't be established.

RSA key fingerprint is aa:cf:68:e6:f6:5e:cb:42:79:62:af:d7:4a:19:16:c0.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'rac2,192.168.3.251' (RSA) to the list of known hosts.

grid@rac2's password:

Then copy authorized_keys to the .ssh directory on RAC2:

[grid@RAC1 .ssh]$ scp authorized_keys rac2:~/.ssh/

grid@rac2's password:

authorized_keys 100% 1198 1.2KB/s 00:00

Verify:

RAC1:

[grid@RAC1 .ssh]$ ssh rac2 date

Fri Jan 30 11:09:50 CST 2015

[grid@RAC1 .ssh]$ ssh rac1 date

Fri Jan 30 11:09:53 CST 2015

RAC2:

[grid@RAC2 .ssh]$ ssh rac1 date

Fri Jan 30 11:09:33 CST 2015

[grid@RAC2 .ssh]$ ssh rac2 date

Fri Jan 30 11:09:35 CST 2015

Configure the oracle user in the same way; a condensed sketch using ssh-copy-id follows.
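A condensed sketch using ssh-copy-id instead of copying the keys by hand (run all of this as the oracle user on both nodes):

ssh-keygen -t dsa                                 # accept the defaults, empty passphrase
ssh-copy-id -i ~/.ssh/id_dsa.pub oracle@rac1
ssh-copy-id -i ~/.ssh/id_dsa.pub oracle@rac2
ssh rac1 date && ssh rac2 date                    # should return the dates without a password prompt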

14. GRID prerequisite check

Upload linux.x64_11gR2_grid.zip to node rac1.

RAC1:

Unzip the grid package:

[grid@RAC1 app]$ unzip linux.x64_11gR2_grid.zip

[grid@RAC1 app]$ mv grid/ ../app/g

Run the cluster verification utility script runcluvfy.sh to check RAC1 and RAC2 thoroughly:

[grid@RAC1 g]$ ./runcluvfy.sh -help

USAGE:

runcluvfy [-help]

runcluvfy stage {-list|-help}

runcluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]

runcluvfy comp {-list|-help}

runcluvfy comp <component-name> <component-specific options> [-verbose]

Run the following command to check that the CRS configuration on both nodes is correct:

[grid@RAC1 g]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose

………………..

………………..

Check: Total memory

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 2.39GB (2510244.0KB) 1.5GB (1572864.0KB) passed

rac1 2.39GB (2510244.0KB) 1.5GB (1572864.0KB) passed

Result: Total memory check passed

Check: Available memory

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 2.29GB (2398232.0KB) 50MB (51200.0KB) passed

rac1 2.17GB (2273192.0KB) 50MB (51200.0KB) passed

Result: Available memory check passed

…………

…………

If the line below appears at the end, everything is fine; otherwise review the reported errors and fix them as suggested:

Pre-check for cluster services setup was successful.

15. Install GRID:

Install through the graphical installer, displayed over Xmanager:

[grid@RAC1 g]$ env |grep DIS

DISPLAY=192.168.3.137:0.0

[grid@RAC1 g]$ ./runInstaller


Since all prerequisites were configured correctly, the checks pass and no error screen appears.

After a while a dialog box pops up asking you to run the root scripts:

With multiple nodes, run the scripts on the local node first, then on the remaining nodes.

[root@RAC1 ~]# cd /u01/app/oraInventory/

[root@RAC1 oraInventory]# ./orainstRoot.sh

Changing permissions of /u01/app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.

The execution of the script is complete.

…………………………

[root@RAC1 oraInventory]# cd /u01/app/grid

[root@RAC1 grid]# ./root.sh

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME= /u01/app/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

Copying dbhome to /usr/local/bin ...

Copying oraenv to /usr/local/bin ...

Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

2015-01-30 14:12:56: Parsing the host name

2015-01-30 14:12:56: Checking for super user privileges

2015-01-30 14:12:56: User has super user privileges

Using configuration parameter file: /u01/app/grid/crs/install/crsconfig_params

Creating trace directory

LOCAL ADD MODE

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

root wallet

root wallet cert

root cert export

peer wallet

profile reader wallet

pa wallet

peer wallet keys

pa wallet keys

peer cert request

pa cert request

peer cert

pa cert

peer root cert TP

profile reader root cert TP

pa root cert TP

peer pa cert TP

pa peer cert TP

profile reader pa cert TP

profile reader peer cert TP

peer user cert

pa user cert

Adding daemon to inittab

CRS-4123: Oracle High Availability Services has been started.

ohasd is starting

ADVM/ACFS is not supported on centos-release-5-8.el5.centos

………………………….

………………………….

Checking swap space: must be greater than 500 MB. Actual 5122 MB Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.

This output means everything completed successfully.

Next, run the same scripts on RAC2, in the same order.

RAC2:

[root@RAC2 ~]# cd /u01/app/oraInventory/

[root@RAC2 oraInventory]# ./orainstRoot.sh

Changing permissions of /u01/app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.

The execution of the script is complete.

[root@RAC2 ~]# cd /u01/app/grid

[root@RAC2 grid]# ./root.sh

Running Oracle 11g root.sh script...

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME= /u01/app/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

Copying dbhome to /usr/local/bin ...

Copying oraenv to /usr/local/bin ...

Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

……………………..

Checking swap space: must be greater than 500 MB. Actual 5122 MB Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.

Seeing this output means there were no problems; if errors do appear, search online for solutions.

After the scripts have finished, click "OK" to continue the installation.

At the very end one error is reported:

Open the log file mentioned in the prompt and check the error details:

You will find the following:

INFO: Checking Single Client Access Name (SCAN)...

INFO: Checking name resolution setup for "racscan"...

INFO: ERROR:

INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "racscan"

INFO: ERROR:

INFO: PRVF-4657 : Name resolution setup check for "racscan" (IP address: 192.168.3.243) failed

INFO: ERROR:

INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "racscan"

INFO: Verification of SCAN VIP and Listener setup failed

If this is the only error, it can be ignored; it just means the SCAN name resolves through a single /etc/hosts entry rather than through DNS.

GRID installation is now complete.

16. Verify the GRID installation

[grid@RAC1 g]$ crsctl check crs

CRS-4638: Oracle High Availability Services is online

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

The newer command:

[grid@RAC1 ~]$crsctl stat resource -t

The older, somewhat deprecated command:

[grid@RAC1 ~]$ crs_stat -t -v

Name Type R/RA F/FT Target State Host

----------------------------------------------------------------------

ora.DATA.dg ora....up.type 0/5 0/ ONLINE ONLINE rac1

ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE rac1

ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE rac1

ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE rac1

ora.eons ora.eons.type 0/3 0/ ONLINE ONLINE rac1

ora.gsd ora.gsd.type 0/5 0/ OFFLINE OFFLINE

ora....network ora....rk.type 0/5 0/ ONLINE ONLINE rac1

ora.oc4j ora.oc4j.type 0/5 0/0 OFFLINE OFFLINE

ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE rac1

ora....SM1.asm application 0/5 0/0 ONLINE ONLINE rac1

ora....C1.lsnr application 0/5 0/0 ONLINE ONLINE rac1

ora.rac1.gsd application 0/5 0/0 OFFLINE OFFLINE

ora.rac1.ons application 0/3 0/0 ONLINE ONLINE rac1

ora.rac1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE rac1

ora....SM2.asm application 0/5 0/0 ONLINE ONLINE rac2

ora....C2.lsnr application 0/5 0/0 ONLINE ONLINE rac2

ora.rac2.gsd application 0/5 0/0 OFFLINE OFFLINE

ora.rac2.ons application 0/3 0/0 ONLINE ONLINE rac2

ora.rac2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE rac2

ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE rac1

This shows the cluster is installed and running.

17. Install the ORACLE database software

linux.x64_11gR2_database_1of2.zip

linux.x64_11gR2_database_2of2.zip

Unzip the packages:

unzip linux.x64_11gR2_database_1of2.zip

unzip linux.x64_11gR2_database_2of2.zip

Start the database software installation:

[oracle@RAC1 database]$ ./runInstaller

The installation is complete once the root.sh script has been run.

Note: when running root.sh, just press Enter at each prompt; do not answer Y.

18. Create the database with DBCA

[oracle@RAC1 ~]$ dbca

A General Purpose database is fine here; the exact choice does not matter for this exercise:

The database is now created (this step takes quite a while).

19. Verify the RAC cluster environment:

RAC1:

[root@RAC1 ~]# ps -elf|grep smon

0 S grid 3755 1 0 78 0 - 119593 - 14:16 ? 00:00:00 asm_smon_+ASM1

0 S oracle 17048 1 0 75 0 - 260369 - 18:24 ? 00:00:00 ora_smon_racdb1

0 S root 17423 13738 0 78 0 - 15297 pipe_w 18:28 pts/1 00:00:00 grep smon

RAC2:

[oracle@RAC2 ~]$ ps -elf|grep smon

0 S grid 5310 1 0 78 0 - 119593 ? 14:31 ? 00:00:00 asm_smon_+ASM2

0 S oracle 16294 1 0 75 0 - 261648 ? 18:24 ? 00:00:00 ora_smon_racdb2

0 R oracle 16661 16608 0 78 0 - 15296 - 18:30 pts/0 00:00:00 grep smon

Log in to the database on RAC1 and create a table t:

SQL> create table t(id int,names char(10));

Table created.

Check on RAC2 whether the table exists:

[oracle@RAC2 ~]$ sqlplus / as sysdba

SQL> desc t

Name Null? Type

----------------------------------------- -------- ----------------------------

ID NUMBER(38)

NAMES CHAR(10)

This shows the database is working correctly.

20. Common RAC commands

To shut down: stop the database first, then the clusterware.

To start up: start the clusterware first, then the database.

Starting and stopping the clusterware must be done as root.

Start and stop CRS:

/u01/app/grid/bin/crsctl start crs

/u01/app/grid/bin/crsctl stop crs

Start & Stop clusterware (must be run as root): run on RAC1 only to start or stop the whole cluster:

/u01/app/grid/bin/crsctl start cluster -all

/u01/app/grid/bin/crsctl stop cluster -all

Start & Stop database instances (as the oracle user): run on RAC1 only:

srvctl start database -d racdb

srvctl stop database -d racdb
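Putting the ordering rule above together, a typical full stop and start of the racdb database created earlier might look like this sketch:

# Shutdown: database first (as oracle), then the clusterware (as root)
srvctl stop database -d racdb
/u01/app/grid/bin/crsctl stop cluster -all

# Startup: clusterware first (as root), then the database (as oracle)
/u01/app/grid/bin/crsctl start cluster -all
srvctl start database -d racdb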


Basic RAC administration uses the crsctl and srvctl command-line tools.

Check the cluster status as the grid user (root appears to work as well):

[grid@RAC1 ~]$ crsctl check cluster

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

Check the current CRS status:

[grid@RAC1 bin]$ ./crsctl check crs

CRS-4638: Oracle High Availability Services is online

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

Check the status of the OHASD daemon:

[root@RAC1 bin]# ./crsctl check has

CRS-4638: Oracle High Availability Services is online

Check clock synchronization:

[grid@RAC1 ~]$ crsctl check ctss

CRS-4701: The Cluster Time Synchronization Service is in Active mode.

CRS-4702: Offset (in msec): 0

cluvfy comp clocksync -verbose

Check the cluster nodes:

[grid@RAC1 ~]$ olsnodes -n

rac1 1

rac2 2

Check the OCR:

[grid@RAC1 ~]$ ocrcheck

Status of Oracle Cluster Registry is as follows :

Version : 3

Total space (kbytes) : 262120

Used space (kbytes) : 2528

Available space (kbytes) : 259592

ID : 589651344

Device/File Name : +DATA

Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check bypassed due to non-privileged user

Check the voting disk:

[grid@RAC1 ~]$ crsctl query css votedisk

## STATE File Universal Id File Name Disk group

-- ----- ----------------- --------- ---------

1. ONLINE 17dea0b410b64ff7bf961bb1da36062a (ORCL:DISK1) [DATA]

Located 1 voting disk(s).

Check the ASM instances:

[grid@RAC1 ~]$ srvctl status asm -a

ASM is running on rac1,rac2

ASM is enabled.

Show the detailed status of the RAC resources:

[grid@RAC1 ~]$ crs_stat -t -v

Name Type R/RA F/FT Target State Host

----------------------------------------------------------------------

ora.DATA.dg ora....up.type 0/5 0/ ONLINE ONLINE rac1

ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE rac1

ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE rac2

ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE rac1

ora.eons ora.eons.type 0/3 0/ ONLINE ONLINE rac1

ora.gsd ora.gsd.type 0/5 0/ OFFLINE OFFLINE

ora....network ora....rk.type 0/5 0/ ONLINE ONLINE rac1

ora.oc4j ora.oc4j.type 0/5 0/0 OFFLINE OFFLINE

ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE rac1

ora....SM1.asm application 0/5 0/0 ONLINE ONLINE rac1

ora....C1.lsnr application 0/5 0/0 ONLINE ONLINE rac1

ora.rac1.gsd application 0/5 0/0 OFFLINE OFFLINE

ora.rac1.ons application 0/3 0/0 ONLINE ONLINE rac1

ora.rac1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE rac1

ora....SM2.asm application 0/5 0/0 ONLINE ONLINE rac2

ora....C2.lsnr application 0/5 0/0 ONLINE ONLINE rac2

ora.rac2.gsd application 0/5 0/0 OFFLINE OFFLINE

ora.rac2.ons application 0/3 0/0 ONLINE ONLINE rac2

ora.rac2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE rac2

ora.racdb.db ora....se.type 0/2 0/1 ONLINE ONLINE rac1

ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE rac2
