X5-2 Eighth Rack and X5-2 Quarter Rack have the same hardware and look exactly the same. The only difference is that on an Eighth Rack only half of the compute power and storage space is usable. In an Eighth Rack the compute nodes have half of their CPU cores activated – 18 cores per server. The same applies to the storage cells – 16 cores per cell are enabled, and six hard disks and two flash cards are active.
While this is true for X3, X4 and X5, things have slightly changed with X6. Up until now, eighth rack configurations had all the hard disks and flash cards installed but only half of them were usable. The new Exadata X6-2 Eighth Rack High Capacity configuration ships with half of the hard disks and flash cards removed. To extend an X6-2 HC Eighth Rack to a Quarter Rack you need to add the high capacity disks and flash cards to the system. This is only required for High Capacity configurations because X6-2 Eighth Rack Extreme Flash storage servers have all flash drives enabled.
Here are the main steps of the upgrade:
- Activate Database Server Cores
- Activate Storage Server Cores and disks
- Create eight new cell disks per cell – six hard disks and two flash disks
- Create all grid disks (DATA01, RECO01, DBFS_DG) and add them to the disk groups
- Expand the flashcache onto the new flash disks
- Recreate the flashlog on all flash cards
Here are a few things you need to keep in mind before you start:
- The compute node upgrade requires a reboot for the changes to take effect.
- The storage cell upgrade does NOT require a reboot; it is an online operation.
- The upgrade is low risk – your data stays secure and redundant at all times.
- This post is about the X5 upgrade. If you are upgrading an X6, then before you begin you need to install the six 8 TB disks in HDD slots 6 – 11 and the two F320 flash cards in PCIe slots 1 and 4.
Upgrade of the compute nodes
Well, this is really straightforward and you can do it at any time. Remember that you need to restart the server for the change to take effect:
dbmcli -e alter dbserver pendingCoreCount=36 force
DBServer exa01db01 successfully altered. Please reboot the system to make the new pendingCoreCount effective.
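Before rebooting, it does not hurt to confirm the pending value took; coreCount and pendingCoreCount are both standard dbserver attributes:

dbmcli -e list dbserver attributes coreCount,pendingCoreCount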
Reboot the server to activate the new cores. It will take around 10 minutes for the server to come back online.
Check the number of cores after server comes back:
dbmcli -e list dbserver attributes coreCount
coreCount: 36/36
Make sure you’ve got the right number of cores. These systems allow capacity on demand (CoD) and in my case the customer wanted me to activate only 28 cores per server.
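For a capacity on demand setup like that, it is the same command with the lower value – a sketch, assuming 28 cores is within your licensed limits (the server still needs a reboot afterwards):

dbmcli -e alter dbserver pendingCoreCount=28 force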
Upgrade of the storage cells
As I said earlier, the upgrade of the storage cells does NOT require a reboot and can be done online at any time.
The following needs to be done on each cell. You can, of course, use dcli, but I wanted to do it cell by cell and make sure each operation finished successfully.
1. First, upgrade the configuration from an eighth to a quarter rack:
[root@exa01cel01 ~]# cellcli -e list cell attributes cpuCount,eighthRack
cpuCount: 16/32
eighthRack: TRUE
[root@exa01cel01 ~]# cellcli -e alter cell eighthRack=FALSE
Cell exa01cel01 successfully altered
[root@exa01cel01 ~]# cellcli -e list cell attributes cpuCount,eighthRack
cpuCount: 32/32
eighthRack: FALSE
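If you prefer to push the change to all three cells in one go, the same commands can be wrapped in dcli from a database node – a sketch, reusing the cell_group file from the examples further down:

dcli -g cell_group -l root "cellcli -e alter cell eighthRack=FALSE"
dcli -g cell_group -l root "cellcli -e list cell attributes cpuCount,eighthRack"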
2. Create cell disks on top of the newly activated physical disks
Like I said – this is an online operation and you can do it at any time:
[root@exa01cel01 ~]# cellcli -e create celldisk all
CellDisk CD_06_exa01cel01 successfully created
CellDisk CD_07_exa01cel01 successfully created
CellDisk CD_08_exa01cel01 successfully created
CellDisk CD_09_exa01cel01 successfully created
CellDisk CD_10_exa01cel01 successfully created
CellDisk CD_11_exa01cel01 successfully created
CellDisk FD_02_exa01cel01 successfully created
CellDisk FD_03_exa01cel01 successfully created
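As a quick sanity check, something like the following lists every cell disk with its size and status – you should now see twelve CD_* and four FD_* cell disks per cell, all in normal status:

cellcli -e list celldisk attributes name,size,status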
3. Expand the flashcache onto the new flash cards
This is again an online operation and it can be run at any time:
[root@exa01cel01 ~]# cellcli -e alter flashcache all
Flash cache exa01cel01_FLASHCACHE altered successfully
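To confirm the flash cache now spans all four flash cell disks, list its details:

cellcli -e list flashcache detail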
4. Recreate the flashlog
The flashlog is always 512MB in size, but to make use of the new flash cards it has to be recreated. Use the DROP FLASHLOG command to drop the flash log, then the CREATE FLASHLOG command to create it again. The DROP FLASHLOG command can be run at runtime, but it does not complete until all redo data on the flash disk has been written to hard disk.
Here is an important note from Oracle:
If FORCE is not specified, then the DROP FLASHLOG command fails if there is any saved redo. If FORCE is specified, then all saved redo is purged, and Oracle Exadata Smart Flash Log is removed.
[root@exa01cel01 ~]# cellcli -e drop flashlog
Flash log exa01cel01_FLASHLOG successfully dropped
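The create step that follows the drop is straightforward; the output below is indicative:

[root@exa01cel01 ~]# cellcli -e create flashlog all
Flash log exa01cel01_FLASHLOG successfully created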
5. Create grid disks
The best way to do that is to query the current grid disk sizes and use them to create the new grid disks. Use the following queries to obtain the size of each grid disk. We query disk 02 because the first two cell disks hold the system area and do not carry DBFS_DG grid disks.
[root@exa01db01 ~]# dcli -g cell_group -l root cellcli -e "list griddisk attributes name, size where name like \'DATA.*02.*\'"
exa01cel01: DATA01_CD_02_exa01cel01 2.8837890625T
[root@exa01db01 ~]# dcli -g cell_group -l root cellcli -e "list griddisk attributes name, size where name like \'RECO.*02.*\'"
exa01cel01: RECO01_CD_02_exa01cel01 738.4375G
[root@exa01db01 ~]# dcli -g cell_group -l root cellcli -e "list griddisk attributes name, size where name like \'DBFS_DG.*02.*\'"
exa01cel01: DBFS_DG_CD_02_exa01cel01 33.796875G
Then you can either generate the commands and run them on each cell or use dcli to create them on all three cells:
dcli -g cell_group -l celladmin "cellcli -e create griddisk DATA_CD_06_\`hostname -s\` celldisk=CD_06_\`hostname -s\`,size=2.8837890625T" dcli -g cell_group -l celladmin "cellcli -e create griddisk DATA_CD_07_\`hostname -s\` celldisk=CD_07_\`hostname -s\`,size=2.8837890625T" dcli -g cell_group -l celladmin "cellcli -e create griddisk DATA_CD_08_\`hostname -s\` celldisk=CD_08_\`hostname -s\`,size=2.8837890625T" dcli -g cell_group -l celladmin "cellcli -e create griddisk DATA_CD_09_\`hostname -s\` celldisk=CD_09_\`hostname -s\`,size=2.8837890625T" dcli -g cell_group -l celladmin "cellcli -e create griddisk DATA_CD_10_\`hostname -s\` celldisk=CD_10_\`hostname -s\`,size=2.8837890625T" dcli -g cell_group -l celladmin "cellcli -e create griddisk DATA_CD_11_\`hostname -s\` celldisk=CD_11_\`hostname -s\`,size=2.8837890625T" dcli -g cell_group -l celladmin "cellcli -e create griddisk RECO_CD_06_\`hostname -s\` celldisk=CD_06_\`hostname -s\`,size=738.4375G" dcli -g cell_group -l celladmin "cellcli -e create griddisk RECO_CD_07_\`hostname -s\` celldisk=CD_07_\`hostname -s\`,size=738.4375G" dcli -g cell_group -l celladmin "cellcli -e create griddisk RECO_CD_08_\`hostname -s\` celldisk=CD_08_\`hostname -s\`,size=738.4375G" dcli -g cell_group -l celladmin "cellcli -e create griddisk RECO_CD_09_\`hostname -s\` celldisk=CD_09_\`hostname -s\`,size=738.4375G" dcli -g cell_group -l celladmin "cellcli -e create griddisk RECO_CD_10_\`hostname -s\` celldisk=CD_10_\`hostname -s\`,size=738.4375G" dcli -g cell_group -l celladmin "cellcli -e create griddisk RECO_CD_11_\`hostname -s\` celldisk=CD_11_\`hostname -s\`,size=738.4375G" dcli -g cell_group -l celladmin "cellcli -e create griddisk DBFS_DG_CD_06_\`hostname -s\` celldisk=CD_06_\`hostname -s\`,size=33.796875G" dcli -g cell_group -l celladmin "cellcli -e create griddisk DBFS_DG_CD_07_\`hostname -s\` celldisk=CD_07_\`hostname -s\`,size=33.796875G" dcli -g cell_group -l celladmin "cellcli -e create griddisk DBFS_DG_CD_08_\`hostname -s\` celldisk=CD_08_\`hostname -s\`,size=33.796875G" dcli -g cell_group -l celladmin "cellcli -e create griddisk DBFS_DG_CD_09_\`hostname -s\` celldisk=CD_09_\`hostname -s\`,size=33.796875G" dcli -g cell_group -l celladmin "cellcli -e create griddisk DBFS_DG_CD_10_\`hostname -s\` celldisk=CD_10_\`hostname -s\`,size=33.796875G" dcli -g cell_group -l celladmin "cellcli -e create griddisk DBFS_DG_CD_11_\`hostname -s\` celldisk=CD_11_\`hostname -s\`,size=33.796875G"
6. The final step is to add the newly created grid disks to ASM
Connect to the ASM instance using sqlplus as sysasm and disable the appliance mode:
SQL> ALTER DISKGROUP DATA01 set attribute 'appliance.mode'='FALSE';
SQL> ALTER DISKGROUP RECO01 set attribute 'appliance.mode'='FALSE';
SQL> ALTER DISKGROUP DBFS_DG set attribute 'appliance.mode'='FALSE';
Add the disks to the disk groups. You can either queue the statements on one instance or run them on both ASM instances in parallel:
SQL> ALTER DISKGROUP DATA01 ADD DISK 'o/*/DATA01_CD_0[6-9]*','o/*/DATA01_CD_1[0-1]*' REBALANCE POWER 128;
SQL> ALTER DISKGROUP RECO01 ADD DISK 'o/*/RECO01_CD_0[6-9]*','o/*/RECO01_CD_1[0-1]*' REBALANCE POWER 128;
SQL> ALTER DISKGROUP DBFS_DG ADD DISK 'o/*/DBFS_DG_CD_0[6-9]*','o/*/DBFS_DG_CD_1[0-1]*' REBALANCE POWER 128;
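To watch the rebalance progress, a query along these lines against gv$asm_operation is easier to read than a plain select *:

SQL> select inst_id, group_number, operation, state, power, sofar, est_work, est_minutes from gv$asm_operation;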
Monitor the rebalance using the query above against gv$asm_operation and, once it completes, change the appliance mode back to TRUE:
SQL> ALTER DISKGROUP DATA01 set attribute 'appliance.mode'='TRUE';
SQL> ALTER DISKGROUP RECO01 set attribute 'appliance.mode'='TRUE';
SQL> ALTER DISKGROUP DBFS_DG set attribute 'appliance.mode'='TRUE';
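To double check that the attribute is back to TRUE on all three disk groups, v$asm_attribute can be queried, for example:

SQL> select dg.name, a.value from v$asm_diskgroup dg, v$asm_attribute a where dg.group_number = a.group_number and a.name = 'appliance.mode';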
And at this point you are done with the upgrade. I strongly recommend running the latest exachk report and making sure there are no issues with the configuration.
A problem you might encounter is that the flash is not fully utilized; in my case I had 128MB free on each flash card:
[root@exa01db01 ~]# dcli -g cell_group -l root "cellcli -e list celldisk attributes name,freespace where disktype='flashdisk'" exa01cel01: FD_00_exa01cel01 128M exa01cel01: FD_01_exa01cel01 128M exa01cel01: FD_02_exa01cel01 128M exa01cel01: FD_03_exa01cel01 128M exa01cel02: FD_00_exa01cel02 128M exa01cel02: FD_01_exa01cel02 128M exa01cel02: FD_02_exa01cel02 128M exa01cel02: FD_03_exa01cel02 128M exa01cel03: FD_00_exa01cel03 128M exa01cel03: FD_01_exa01cel03 128M exa01cel03: FD_02_exa01cel03 128M exa01cel03: FD_03_exa01cel03 128M
This seems to be a known bug; to fix it you need to recreate both the flashcache and the flashlog.
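A sketch of the recreate sequence on one cell, assuming write-through flash cache; if write-back flash cache is enabled, flush it first with alter flashcache all flush and wait for the flush to finish before dropping. Creating the flashlog before the flashcache lets the flashlog take its 512MB and the flashcache use the rest:

cellcli -e drop flashcache
cellcli -e drop flashlog
cellcli -e create flashlog all
cellcli -e create flashcache all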