Oracle RAC: Replacing Storage and Migrating Data
We use the ASM rebalance feature to replace the storage and migrate the data with essentially zero downtime. Summary of the procedure:
1) Ensure connectivity between the new storage and all current RAC nodes;
2) Carve LUNs on the new storage; this is also an opportunity to re-plan the storage layout;
3) Migrate the OCR and voting disks;
4) Add ASM disks allocated from the new storage to the existing ASM disk groups, letting ASM rebalance move the data;
5) Drop the ASM disks that reside on the old storage;
6) Observation period.
1 Current Storage Information
Current ASM disk group, OCR, and voting disk information:
ASM disk groups:
ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED NORMAL N 512 4096 1048576 3071982 3071091 298 1535396 0 N BACK/
MOUNTED NORMAL N 512 4096 1048576 4095976 1561759 633568 464095 0 N DATA/
MOUNTED NORMAL N 512 4096 1048576 102396 101470 326 50572 0 N OCR/
ASM currently has three disk groups, BACK, DATA, and OCR, about 7 TB in total. They mainly hold data files, archived redo logs, and the OCR. The disks in each disk group are listed below:
SQL> select NAME,PATH,total_mb,free_mb from v$asm_disk;
NAME PATH TOTAL_MB FREE_MB
------------------------------ ------------------------------ ---------- ----------
BACK_VOL1 ORCL:BACK_VOL1 1023994 390436
DATA_VOL1 ORCL:DATA_VOL1 1023994 390450
DATA_VOL2 ORCL:DATA_VOL2 1023994 390447
DATA_VOL3 ORCL:DATA_VOL3 1023994 390426
DATA_VOL4 ORCL:DATA_VOL4 1023994 1023697
DATA_VOL5 ORCL:DATA_VOL5 1023994 1023698
DATA_VOL6 ORCL:DATA_VOL6 1023994 1023696
OCR_VOL1 ORCL:OCR_VOL1 31376 31075
OCR_VOL2 ORCL:OCR_VOL2 31376 31077
OCR_VOL3 ORCL:OCR_VOL3 39644 39318
10 rows selected.
OCR & voting disk information:
[grid@oracle1 bin]$ ./ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 2720
Available space (kbytes) : 259400
ID : 2006438789
Device/File Name : +OCR
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Because the OCR and voting disks also live in ASM on the same old storage, they must be migrated to the new storage as well.
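Before changing any OCR configuration, a manual OCR backup is a sensible precaution (not part of the original steps; run as root from the Grid Infrastructure bin directory):
./ocrconfig -manualbackup        # take an on-demand OCR backup
./ocrconfig -showbackup manual   # confirm the manual backup was written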
2 Partitioning the New Storage
Requirements (performed by the storage engineer):
2.1. Shared storage: both servers must be able to see the LUNs allocated from the new storage (a quick check is sketched below).
2.2. Keep the LUN sizes and counts consistent with the existing ASM disk group layout.
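A minimal way to confirm both nodes see the same new LUNs, assuming the hostnames oracle1/oracle2 used elsewhere in this article and root SSH between the nodes, is to compare the disk inventories:
fdisk -l 2>/dev/null | grep "^Disk /dev/sd"                 # on oracle1
ssh oracle2 "fdisk -l 2>/dev/null | grep '^Disk /dev/sd'"   # should list the same new devices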
3 Disk Layout After LUN Provisioning
[root@oracle1 sbin]# fdisk -l
Disk /dev/cciss/c0d0: 1000.1 GB, 1000171331584 bytes
255 heads, 63 sectors/track, 121597 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/cciss/c0d0p1 * 1 13 104391 83 Linux
/dev/cciss/c0d0p2 14 121597 976623480 8e Linux LVM
Disk /dev/sda: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 1 130541 1048570551 83 Linux
Disk /dev/sdb: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 130541 1048570551 83 Linux
Disk /dev/sdc: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 130541 1048570551 83 Linux
Disk /dev/sdd: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdd1 1 130541 1048570551 83 Linux
Disk /dev/sde: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sde1 1 130541 1048570551 83 Linux
Disk /dev/sdf: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdf1 1 130541 1048570551 83 Linux
Disk /dev/sdg: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdg1 1 130541 1048570551 83 Linux
Disk /dev/sdh: 107.3 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdh2 1 4000 32129968+ 83 Linux
/dev/sdh3 4001 8000 32130000 83 Linux
/dev/sdh4 8001 13054 40596255 83 Linux
WARNING: The size of this disk is 2.9 TB (2919504019456 bytes).
DOS partition table format can not be used on drives for volumes
larger than 2.2 TB (2199023255040 bytes). Use parted(1) and GUID
partition table format (GPT).
Disk /dev/sdi: 2919.5 GB, 2919504019456 bytes
255 heads, 63 sectors/track, 354942 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdi1 1 130000 1044224968+ 83 Linux
/dev/sdi2 130001 267349 1103255842+ 83 Linux
Disk /dev/sdj: 1073.7 GB, 1073741824000 bytes -------------------------------- newly added LUNs start here
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdj doesn't contain a valid partition table
Disk /dev/sdk: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdk doesn't contain a valid partition table
Disk /dev/sdl: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdl doesn't contain a valid partition table
Disk /dev/sdm: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdm doesn't contain a valid partition table
Disk /dev/sdn: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdn doesn't contain a valid partition table
Disk /dev/sdo: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdo doesn't contain a valid partition table
Disk /dev/sdp: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdp doesn't contain a valid partition table
Disk /dev/sdq: 107.3 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdq doesn't contain a valid partition table
Disk /dev/sdr: 2919.5 GB, 2919504019456 bytes
255 heads, 63 sectors/track, 354942 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdr doesn't contain a valid partition table
Disk /dev/sds: 322.1 GB, 322122547200 bytes
255 heads, 63 sectors/track, 39162 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sds1 1 39162 314568733+ 8e Linux LVM
Disk /dev/sdt: 322.1 GB, 322122547200 bytes
255 heads, 63 sectors/track, 39162 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdt1 1 39162 314568733+ 8e Linux LVM
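Note that fdisk reports no partition table on the new LUNs (/dev/sdj through /dev/sdq), yet the oracleasm createdisk commands below reference partitions such as /dev/sdj1 and /dev/sdq1-3, so a partitioning step is implied in between. A minimal sketch, assuming one full-size partition per 1 TB LUN and /dev/sdq split into three partitions mirroring the old OCR volume sizes, could be:
# one whole-disk partition on each new data/backup LUN (scripted fdisk answers: n, p, 1, defaults, w)
for d in sdj sdk sdl sdm sdn sdo sdp; do
    echo -e "n\np\n1\n\n\nw" | fdisk /dev/$d
done
# /dev/sdq is partitioned interactively into three partitions for OCR_VOL4/5/6
partprobe   # re-read the partition tables; run on the second node as well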
4 Configure the New ASM Disks
/etc/init.d/oracleasm createdisk DATA_VOL01 /dev/sdj1
/etc/init.d/oracleasm createdisk DATA_VOL02 /dev/sdk1
/etc/init.d/oracleasm createdisk DATA_VOL03 /dev/sdl1
/etc/init.d/oracleasm createdisk DATA_VOL04 /dev/sdm1
/etc/init.d/oracleasm createdisk DATA_VOL05 /dev/sdn1
/etc/init.d/oracleasm createdisk DATA_VOL06 /dev/sdo1
/etc/init.d/oracleasm createdisk BACK_VOL01 /dev/sdp1
/etc/init.d/oracleasm createdisk OCR_VOL4 /dev/sdq1
/etc/init.d/oracleasm createdisk OCR_VOL5 /dev/sdq2
/etc/init.d/oracleasm createdisk OCR_VOL6 /dev/sdq3
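The createdisk commands are run as root on one node only; on the second node the new labels are picked up with a rescan (standard ASMLib steps):
/etc/init.d/oracleasm scandisks   # on oracle2
/etc/init.d/oracleasm listdisks   # confirm the new DATA_VOL0x / BACK_VOL01 / OCR_VOL4-6 labels are visible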
5 Create the New OCRNEW Disk Group
su - grid
sqlplus / as sysasm
CREATE DISKGROUP OCRNEW NORMAL REDUNDANCY
DISK 'ORCL:OCR_VOL4' NAME VOL4
DISK 'ORCL:OCR_VOL5' NAME VOL5
DISK 'ORCL:OCR_VOL6' NAME VOL6 ATTRIBUTE 'compatible.asm'='11.2';
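A disk group created this way is mounted only on the ASM instance where the statement ran. Before registering it as an OCR location, verify that it is also mounted on the other ASM instance, for example:
SQL> select name, state from v$asm_diskgroup where name = 'OCRNEW';
SQL> alter diskgroup OCRNEW mount;   -- only needed if it shows DISMOUNTED on this instance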
6 Add OCRNEW as an OCR Location
[root@oracle1 bin]# ./ocrconfig -add +OCRNEW
[root@oracle1 bin]# ./ocrcheck -config
Oracle Cluster Registry configuration is :
Device/File Name : +OCR
Device/File Name : +OCRNEW
[root@oracle1 bin]# more /etc/oracle/ocr.loc
#Device/file getting replaced by device +OCRNEW
ocrconfig_loc=+OCR
ocrmirrorconfig_loc=+OCRNEW
local_only=false
[root@oracle1 bin]#
The OCRNEW disk group has now been added as an OCR location.
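ocrconfig -add is a cluster-wide change and only needs to be run once. As an extra sanity check (not in the original steps), the second node should show the same configuration, assuming the hostname oracle2:
ssh oracle2 cat /etc/oracle/ocr.loc   # expect ocrconfig_loc=+OCR and ocrmirrorconfig_loc=+OCRNEW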
Migrate the voting files
Current voting disk information:
[grid@oracle1 ~]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 14f694d9d4414f9ebf85d3ce6b9aef0b (ORCL:OCR_VOL1) [OCR]
2. ONLINE 9f9ee7281c954f8abfcc6e88c33257ac (ORCL:OCR_VOL2) [OCR]
3. ONLINE 38114fd602194fa9bf4d05655b3d89b7 (ORCL:OCR_VOL3) [OCR]
Located 3 voting disk(s).
[grid@oracle1 ~]$ crsctl replace votedisk +OCRNEW
Successful addition of voting disk 00634ef593ee4f92bf48e8c089cb5565.
Successful addition of voting disk 232159722de04f67bf03a78b757e3bec.
Successful addition of voting disk a340d5b23aac4f6fbf9f7b1d59088fa5.
Successful deletion of voting disk 14f694d9d4414f9ebf85d3ce6b9aef0b.
Successful deletion of voting disk 9f9ee7281c954f8abfcc6e88c33257ac.
Successful deletion of voting disk 38114fd602194fa9bf4d05655b3d89b7.
Successfully replaced voting disk group with +OCRNEW.
CRS-4266: Voting file(s) successfully replaced
7 Move the ASM Instance spfile to OCRNEW
Create the ASM instance spfile on the newly created OCRNEW disk group (run as the grid user, logged in to the ASM instance on one node):
SQL> create pfile='/home/grid/asmpfile.ora' from spfile;
File created.
SQL> create spfile='+OCRNEW' from pfile='/home/grid/asmpfile.ora';
File created.
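The new spfile location is recorded in the GPnP profile and only takes effect when the ASM instances restart. A quick way to confirm the registered location (not shown in the original article) is asmcmd; the path should match the spfile value displayed after the restart later in this article:
[grid@oracle1 ~]$ asmcmd spget    # prints the ASM spfile path registered in the GPnP profile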
8 Remove the Old OCR Disk Group
[root@oracle1 bin]# ./ocrconfig -delete +OCR
Check the new OCR and voting disk status and locations:
[root@oracle1 bin]# ./ocrcheck && ./crsctl query css votedisk
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 2768
Available space (kbytes) : 259352
ID : 2006438789
Device/File Name : +OCRNEW
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 00634ef593ee4f92bf48e8c089cb5565 (ORCL:OCR_VOL4) [OCRNEW]
2. ONLINE 232159722de04f67bf03a78b757e3bec (ORCL:OCR_VOL5) [OCRNEW]
3. ONLINE a340d5b23aac4f6fbf9f7b1d59088fa5 (ORCL:OCR_VOL6) [OCRNEW]
Located 3 voting disk(s).
SYS@+ASM1> alter diskgroup OCR dismount;
Diskgroup altered.
SYS@+ASM2> drop diskgroup OCR including contents;
Diskgroup dropped.
SYS@+ASM2> select GROUP_NUMBER,NAME,STATE,type,TOTAL_MB,free_mb,VOTING_FILES,COMPATIBILITY from v$asm_diskgroup;
GROUP_NUMBER NAME STATE TYPE TOTAL_MB FREE_MB V COMPATIBILITY
------------ ------------------------------ ----------- ------ ---------- ---------- - ------------------------------------------------------------
1 BACK MOUNTED NORMAL 3071982 3070675 N 11.2.0.0.0
2 DATA MOUNTED NORMAL 4095976 1561759 N 11.2.0.0.0
3 OCRNEW MOUNTED NORMAL 102396 101470 N 11.2.0.0.0
SYS@+ASM2> select GROUP_NUMBER,DISK_NUMBER,STATE,REDUNDANCY,TOTAL_MB,FREE_MB,name,path,failgroup from v$asm_disk order by GROUP_NUMBER;
GROUP_NUMBER DISK_NUMBER STATE REDUNDA TOTAL_MB FREE_MB NAME PATH FAILGROUP
------------ ----------- -------- ------- ---------- ---------- ------------------------------ ------------------------------ ------------------------------
0 0 NORMAL UNKNOWN 0 0 ORCL:OCR_VOL1
0 1 NORMAL UNKNOWN 0 0 ORCL:OCR_VOL2
0 2 NORMAL UNKNOWN 0 0 ORCL:OCR_VOL3
1 1 NORMAL UNKNOWN 1023994 1023559 DATA_VOL5 ORCL:DATA_VOL5 DATA_VOL5
1 0 NORMAL UNKNOWN 1023994 1023559 DATA_VOL4 ORCL:DATA_VOL4 DATA_VOL4
1 2 NORMAL UNKNOWN 1023994 1023557 DATA_VOL6 ORCL:DATA_VOL6 DATA_VOL6
2 2 NORMAL UNKNOWN 1023994 390447 DATA_VOL2 ORCL:DATA_VOL2 DATA_VOL2
2 1 NORMAL UNKNOWN 1023994 390450 DATA_VOL1 ORCL:DATA_VOL1 DATA_VOL1
2 0 NORMAL UNKNOWN 1023994 390436 BACK_VOL1 ORCL:BACK_VOL1 BACK_VOL1
2 3 NORMAL UNKNOWN 1023994 390426 DATA_VOL3 ORCL:DATA_VOL3 DATA_VOL3
3 0 NORMAL UNKNOWN 31376 31075 VOL4 ORCL:OCR_VOL4 VOL4
GROUP_NUMBER DISK_NUMBER STATE REDUNDA TOTAL_MB FREE_MB NAME PATH FAILGROUP
------------ ----------- -------- ------- ---------- ---------- ------------------------------ ------------------------------ ------------------------------
3 1 NORMAL UNKNOWN 31376 31077 VOL5 ORCL:OCR_VOL5 VOL5
3 2 NORMAL UNKNOWN 39644 39318 VOL6 ORCL:OCR_VOL6 VOL6
13 rows selected.
This completes the OCR and voting file migration.
9 At this point you can restart the clusterware (CRS) to verify that the OCR and voting disks were migrated successfully. The restart is optional, but testing it is recommended.
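A minimal restart test, run as root from the Grid Infrastructure bin directory on each node in turn (rolling, so the database stays available on the surviving node):
./crsctl stop crs             # stop Clusterware on this node
./crsctl start crs            # start it again and wait for the resources to come online
./crsctl check cluster -all   # confirm CRS, CSS and EVM are online on all nodes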
Check the OCR & voting disk locations and the ASM instance spfile location:
[root@oracle1 bin]# ./ocrcheck && ./crsctl query css votedisk
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 2768
Available space (kbytes) : 259352
ID : 2006438789
Device/File Name : +OCRNEW
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 00634ef593ee4f92bf48e8c089cb5565 (ORCL:OCR_VOL4) [OCRNEW]
2. ONLINE 232159722de04f67bf03a78b757e3bec (ORCL:OCR_VOL5) [OCRNEW]
3. ONLINE a340d5b23aac4f6fbf9f7b1d59088fa5 (ORCL:OCR_VOL6) [OCRNEW]
Located 3 voting disk(s).
SQL> show parameter spfile;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string +OCRNEW/oracle-cluster/asmpara
meterfile/registry.253.8456918
87
SQL>
10 Migrate the Data and Backup Disk Groups
SQL> alter diskgroup DATA add disk 'ORCL:DATA_VOL01' rebalance power 11;
Diskgroup altered.
SQL> alter diskgroup DATA add disk 'ORCL:DATA_VOL02' rebalance power 11;
Diskgroup altered.
SQL> alter diskgroup DATA add disk 'ORCL:DATA_VOL03' rebalance power 11;
Diskgroup altered.
SQL> alter diskgroup DATA add disk 'ORCL:DATA_VOL04' rebalance power 11;
Diskgroup altered.
SQL> alter diskgroup back add disk 'ORCL:DATA_VOL05' rebalance power 11;
Diskgroup altered.
SQL> alter diskgroup back add disk 'ORCL:DATA_VOL06' rebalance power 11;
SQL> alter diskgroup BACK add disk 'ORCL:BACK_VOL01' rebalance power 11;
Diskgroup altered.
Because rebalance power 11 is specified, ASM automatically redistributes the data in the affected disk groups (DATA and BACK) across all of their member disks.
Once the rebalance has finished, a query against V$ASM_OPERATION returns no rows.
SQL> select * from V$ASM_OPERATION;
no rows selected
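While a rebalance is still running, V$ASM_OPERATION reports its progress; a simple monitoring query (a sketch; the exact column set varies slightly between versions) is:
SQL> select group_number, operation, state, power, sofar, est_work, est_rate, est_minutes from v$asm_operation;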
11 Drop the Old Disks from the Data Disk Groups
alter diskgroup data drop disk 'BACK_VOL1' rebalance power 11;
alter diskgroup data drop disk 'DATA_VOL1' rebalance power 11;
alter diskgroup data drop disk 'DATA_VOL2' rebalance power 11;
alter diskgroup data drop disk 'DATA_VOL3' rebalance power 11;
alter diskgroup back drop disk 'DATA_VOL4' rebalance power 11;
alter diskgroup back drop disk 'DATA_VOL5' rebalance power 11;
alter diskgroup back drop disk 'DATA_VOL6' rebalance power 11;
ASM rebalances not only when disks are added to a disk group, but also when disks are dropped: the data on the dropped disk is rebalanced onto the remaining disks of that disk group.
After dropping the old ASM disks this way, all ASM data resides on the new storage.
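Before removing the ASMLib labels or unmapping the old LUNs, confirm that the drop rebalance has completed and the old disks have been released; dropped disks should show GROUP_NUMBER 0 and a HEADER_STATUS of FORMER (a sketch of the check):
SQL> select * from v$asm_operation;   -- must return no rows before proceeding
SQL> select path, header_status, group_number from v$asm_disk order by path;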
09:40:38 SQL> select a.NAME GROUP_NAME,a.TOTAL_MB,a.FREE_MB GROUP_FREE_MB,b.OS_MB,b.FREE_MB,b.name,b.path from v$asm_diskgroup a,v$asm_disk b where a.GROUP_NUMBER=b.GROUP_NUMBER;
GROUP_NAME TOTAL_MB GROUP_FREE_MB OS_MB FREE_MB NAME PATH
------------------------------ ---------- ------------- ---------- ---------- ------------------------------ ----------------------------------------
BACK 3071982 3070868 1023994 1023622 DATA_VOL05 ORCL:DATA_VOL05
BACK 3071982 3070868 1023994 1023624 DATA_VOL06 ORCL:DATA_VOL06
OCRNEW 102396 101470 31376 31075 VOL4 ORCL:OCR_VOL4
OCRNEW 102396 101470 31376 31077 VOL5 ORCL:OCR_VOL5
OCRNEW 102396 101470 39644 39318 VOL6 ORCL:OCR_VOL6
DATA 4095976 1561759 1023994 390437 DATA_VOL01 ORCL:DATA_VOL01
DATA 4095976 1561759 1023994 390440 DATA_VOL02 ORCL:DATA_VOL02
DATA 4095976 1561759 1023994 390443 DATA_VOL03 ORCL:DATA_VOL03
DATA 4095976 1561759 1023994 390439 DATA_VOL04 ORCL:DATA_VOL04
BACK 3071982 3070868 1023994 1023622 BACK_VOL01 ORCL:BACK_VOL01
12 Delete the Old ASMLib Disk Labels
[root@oracle1 bin]# oracleasm listdisks
BACK_VOL01
BACK_VOL1
DATA_VOL01
DATA_VOL02
DATA_VOL03
DATA_VOL04
DATA_VOL05
DATA_VOL06
DATA_VOL1
DATA_VOL2
DATA_VOL3
DATA_VOL4
DATA_VOL5
DATA_VOL6
OCR_VOL4
OCR_VOL5
OCR_VOL6
oracleasm deletedisk DATA_VOL1
oracleasm deletedisk DATA_VOL2
oracleasm deletedisk DATA_VOL3
oracleasm deletedisk DATA_VOL4
oracleasm deletedisk DATA_VOL5
oracleasm deletedisk DATA_VOL6
oracleasm deletedisk BACK_VOL1
oracleasm deletedisk OCR_VOL1
oracleasm deletedisk OCR_VOL2
oracleasm deletedisk OCR_VOL3
[root@oracle2 bin]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Cleaning disk "BACK_VOL1"
Cleaning disk "DATA_VOL1"
Cleaning disk "DATA_VOL2"
Cleaning disk "DATA_VOL3"
Cleaning disk "DATA_VOL4"
Cleaning disk "DATA_VOL5"
Cleaning disk "DATA_VOL6"
Scanning system for ASM disks...
[root@oracle2 bin]# oracleasm listdisks
BACK_VOL01
DATA_VOL01
DATA_VOL02
DATA_VOL03
DATA_VOL04
DATA_VOL05
DATA_VOL06
OCR_VOL4
OCR_VOL5
OCR_VOL6
At this point all of the data has been migrated from the old storage to the new storage.