Sunday, July 25, 2021

Upgrade Grid Infrastructure from 12c to 19c


The upgrade will be done in rolling mode.

PRECHECKS:

1. Note the output of below commands on each node.

crsctl stat res -t
crsctl check crs


2. Check activeversion, softwareversion and releaseversion:

crsctl query crs activeversion
crsctl query crs softwareversion
crsctl query crs releaseversion

activeversion:

This is the Grid Infrastructure version in effect across all nodes of the cluster. Until the rolling upgrade is completed on all nodes, it keeps showing the previous version of the grid.

releaseversion:

It shows the grid version installed in the binaries of the local node.

softwareversion [node_name]:

It lets you check the grid version of a remote node. It tells the grid version that has been successfully started on the node specified in the command. If no node is specified, it shows the same for the local node.

crsctl query crs softwareversion [node_name]

If you do not provide a node name, then Oracle Clusterware displays the version of Oracle Clusterware running on the local server.

Example

The crsctl query crs softwareversion command returns output similar to the following:

Oracle Clusterware version on node [node1] is [11.2.0.2.0]


3. Apply required patches to your existing 12c grid.
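
For example, to see which patches are already present in the existing 12c grid home (the path below is the source home used later in the cluvfy example), run the following as the grid owner:

/drcrs/app/oracle/product/grid12c/OPatch/opatch lsinventory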

4. Download the 19c grid software and unzip it into the new grid home on your first node.
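
For example, assuming the software was staged alongside the new grid home (the zip file name below is illustrative of the 19c grid image):

cd /u01/oracle/app/grid19c
unzip -q /u01/oracle/app/LINUX.X64_193000_grid_home.zip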

5. Run the orachk tool as the grid owner (oracle).

The orachk tool will generate a report of recommendations that need to be taken care of before upgrading.

export GRID_HOME=/u01/oracle/app/grid19c
cd /u01/oracle/app/grid19c/suptools/orachk/
./orachk -u -o pre



Analyze the html report for any recommendations.

6. Run cluvfy as the grid owner (oracle).



cd /u01/oracle/app/grid19c

Syntax:
./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome <source_crs_home> -dest_crshome <dest_crs_home> -dest_version 19.0.0.0.0 -fixup -verbose

Example:
./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /drcrs/app/oracle/product/grid12c -dest_crshome /u01/oracle/app/grid19c -dest_version 19.0.0.0.0 -fixup -verbose



If any errors are reported, fix them before proceeding further.



DRY RUN PHASE

It may restart the cluster nodes, so arrange a downtime window for it if possible.

The dry run phase does not make any changes to the existing grid setup; it only checks system readiness.

As per the Oracle documentation, the dry run performs the below activities:

· Validates storage and network configuration for the new release

· Checks if the system meets the software and hardware requirements for the new release.

· Checks the patch requirements and applies the necessary patches before starting the upgrade.

· Writes system configuration issues or errors in the gridSetupActions<timestamp>.log log file



-- Run as the grid owner (oracle)

unset ORACLE_BASE
unset ORACLE_HOME
unset ORACLE_SID

cd /u01/oracle/app/grid19c
./gridSetup.sh -dryRunForUpgrade



Follow the GUI instructions.

The warnings for the public network interface can be ignored.

It will ask you to run rootupgrade.sh on the local node only; do not run it on the remote nodes.
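
For example, run it as root from the new grid home on the local node (path as used throughout this post):

/u01/oracle/app/grid19c/rootupgrade.sh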

Once the dry run upgrade is successful, let's proceed with the actual upgrade.


ACTUAL UPGRADE:

Now we will proceed with the actual upgrade in rolling mode.

-- Run as the grid owner (oracle)
unset ORACLE_BASE
unset ORACLE_HOME
unset ORACLE_SID

cd /u01/oracle/app/grid19c
./gridSetup.sh

Follow the GUI instructions.

Alternatively, you can execute it using a response file.

Run the gridSetup.sh script with the response file:

./gridSetup.sh -executeConfigTools -responseFile /u01/crsapp/grid19c/install/response/gridinstall_5_07_2019.rsp


This will skip the already executed tasks and complete the pending configuration.

It will ask you to run rootupgrade.sh first on the local node. It can then be executed in parallel on the remaining nodes, except the last one; run the script on the last node at the end.
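
For example, in a four-node cluster (node names are illustrative), the order would be as follows, each run as root from the new grid home:

node1: /u01/oracle/app/grid19c/rootupgrade.sh    # local node first
node2: /u01/oracle/app/grid19c/rootupgrade.sh    # can run in parallel with node3
node3: /u01/oracle/app/grid19c/rootupgrade.sh
node4: /u01/oracle/app/grid19c/rootupgrade.sh    # last node, run at the end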



The rootupgrade.sh does the following:

1. Sets ORACLE_OWNER and ORACLE_HOME.

2. Puts oraenv and coraenv in the /usr/local/bin/ directory.

3. Adds entries in the /etc/oratab file.

4. Relinks the binaries.

5. Checks and validates the CRS configuration present in the /u01/oracle/app/grid19c/crs/install/crsconfig_params file.

6. Stops the TFA process.



Once the rootupgrade.sh script execution has completed on both nodes, proceed to resume the installer session.

We have successfully upgraded the grid to 19c.



POST CHECKS:

7. Check activeversion, softwareversion and releaseversion:

crsctl query crs activeversion

crsctl query crs softwareversion
crsctl query crs releaseversion






oracle@node1:~$ crsctl query crs activeversion

Oracle Clusterware active version on the cluster is [19.0.0.0.0]



oracle@node1:~$ crsctl query crs softwareversion

Oracle Clusterware version on node [node1] is [19.0.0.0.0]

oracle@node2:~$ crsctl query crs activeversion

Oracle Clusterware active version on the cluster is [19.0.0.0.0]

oracle@node2:~$ crsctl query crs softwareversion

Oracle Clusterware version on node [node2] is [19.0.0.0.0]



8. Run the below commands on each node and compare their output with the precheck output.



crsctl stat res -t

crsctl check crs



TROUBLESHOOTING:

1. If the rootupgrade.sh script failed on the local node:



If the rootupgrade.sh script failed on the local node, either due to an error or because the system rebooted during that time, analyze the error and fix it. Once fixed, resume the upgrade with the below steps.

-- Run the rootupgrade.sh script again on node 1:

cd /u01/crsapp/grid19c
/u01/crsapp/grid19c/rootupgrade.sh

-- Run the rootupgrade.sh script again on node 2:

cd /u01/crsapp/grid19c
/u01/crsapp/grid19c/rootupgrade.sh

Sunday, May 9, 2021

Direct 12c database Upgrade high level steps



1. Back up the source database. Enable flashback if possible and create a guaranteed restore point.
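
For example (the restore point name is illustrative):

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;

SQL> CREATE RESTORE POINT PRE_12C_UPGRADE GUARANTEE FLASHBACK DATABASE;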

2. Direct upgrade to 12c is possible from 10.2.0.5, 11.1.0.7 and 11.2.0.2 (or later), so check the source DB version and:

a. If it is a 10.2 release lower than 10.2.0.5, upgrade it to 10.2.0.5 first.

b. If it is an 11.1 release lower than 11.1.0.7, upgrade it to 11.1.0.7 first.

c. If it is an 11.2 release lower than 11.2.0.2, upgrade it to 11.2.0.2 first.

Also, a 12c database cannot be downgraded to 10g; it can only be downgraded to 11g.

3. Install the 12c binaries in a separate ORACLE_HOME directory.

/home/oracle/stage/database/runInstaller -silent -responseFile /home/oracle/stage/database/response/db_install.rsp

4. Run the pre-upgrade tool against the source database. The script is shipped in the new ORACLE_HOME:

$ORACLE_HOME/rdbms/admin/preupgrd.sql

5. It will create the pre-upgrade and post-upgrade fixup scripts. Run them as suggested by the tool.

6. Purge the recycle bin. DB Control is no longer used in 12c; it is removed during the upgrade, but to save downtime it is recommended to remove it beforehand. This is also recommended by the pre-upgrade tool.
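
For example, to purge the recycle bin before the upgrade:

SQL> PURGE DBA_RECYCLEBIN;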

7. Disable cron jobs and jobs scheduled outside the database until the upgrade is completed.

8. Shut down the source DB and the listener.

9. Copy the parameter file, password file, and network files into the new ORACLE_HOME.
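
A minimal sketch, assuming the SID is orcl and using OLD_ORACLE_HOME / NEW_ORACLE_HOME as placeholders for the two homes (names are illustrative):

cp $OLD_ORACLE_HOME/dbs/spfileorcl.ora  $NEW_ORACLE_HOME/dbs/
cp $OLD_ORACLE_HOME/dbs/orapworcl       $NEW_ORACLE_HOME/dbs/
cp $OLD_ORACLE_HOME/network/admin/*.ora $NEW_ORACLE_HOME/network/admin/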

10. From Oracle Database 12c home, start up the database using STARTUP UPGRADE and execute catctl.pl

SQL> startup UPGRADE

$ORACLE_HOME/perl/bin/perl catctl.pl catupgrd.sql

11. Run post upgrade fixup scripts.

12. Run the Post-Upgrade Status tool to show a summary and status of the upgrade. If there are any errors or any component is INVALID, then troubleshoot to fix the error.

SQL> @?/rdbms/admin/utlu121s.sql

13. 12c ships with time zone file version 18, so it is recommended to update the database time zone file after the upgrade.

14. Recompile any invalid objects. (utlrp.sql chooses the degree of parallelism automatically; to set it explicitly, utlprp.sql accepts the degree as a parameter.)

SQL> @?/rdbms/admin/utlrp.sql

15. Run the utluiobj.sql (Upgrade Invalid Objects tool) script to identify new invalid objects after the upgrade. This script will work only if you have run the preupgrd.sql script before the upgrade. It outputs the difference between the invalid objects that exist after the upgrade and invalid objects that existed prior to the upgrade. Fix up the new invalid objects.

SQL> @?/rdbms/admin/utluiobj.sql

16. Start the listener from the new oracle home.

17. Re-enable Database Vault if required.

18. Re-enable the cron jobs.

19. Drop the restore point once it is no longer needed, as it can take up huge space:
SELECT NAME,SCN,TIME,GUARANTEE_FLASHBACK_DATABASE,STORAGE_SIZE/1024/1024/1024 STORAGE_SIZE_GB,DATABASE_INCARNATION# FROM V$RESTORE_POINT;

drop restore point <restore Point name>;

20. Ask the app team and all other stakeholder teams for confirmation that all is OK, then set the COMPATIBLE parameter to 12.1.0 to enable the new features.

SQL> ALTER SYSTEM SET compatible = '12.1.0' SCOPE=spfile;








Preupgrade tool:

If $ORACLE_BASE is defined, the generated scripts and log files are saved in $ORACLE_BASE/cfgtoollogs/db_unique_name/preupgrade directory. If $ORACLE_BASE is not defined, then the generated scripts and log files are created in $ORACLE_HOME/cfgtoollogs/db_unique_name/preupgrade.

It will check the following and reports issues if found:

a) Whether the COMPATIBLE parameter is set to 11.0.0 or higher. As a consequence, a 10g database cannot be downgraded after the upgrade; it would have to be restored from backup if a downgrade is ever needed.

b) DB parameters: whether there is a need to remove hidden parameters, deprecated parameters, or underscore events.

c) Components of the database.

d) Size and free space of the SYSTEM, SYSAUX, UNDO and TEMP tablespaces, and whether the SYS and SYSTEM users have SYSTEM as their default tablespace.

e) Resource limits such as PROCESSES.

f) Existing users and roles whose names clash with ones introduced in 12c, e.g. AUDSYS, AUDIT_VIEWER, etc.

g) DB Control is no longer used in 12c; it is removed during the upgrade, but to save downtime it is recommended to remove it beforehand using the steps below:

- Stop EM Database Control:

$> emctl stop dbconsole

- Connect to the Database using the SYS account AS SYSDBA:

SET ECHO ON;

SET SERVEROUTPUT ON;

@emremove.sql

h) Invalid objects.

i) Gathering of dictionary statistics:

EXECUTE dbms_stats.gather_dictionary_stats;

Please create stats on fixed objects after the upgrade using the command:

EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;


j) If the JOB_QUEUE_PROCESSES value is set too low.

k) Purge recyclebin

l) Time zone. 12c ships with time zone file version 18, so it is recommended to update the database time zone file after the upgrade using the below:

-- Fix Summary:

-- Update the timezone using the DBMS_DST package after upgrade is complete.

BEGIN
  dbms_preup.run_fixup_and_report('OLD_TIME_ZONES_EXIST');
END;
/

It also creates two fixup scripts, one pre-upgrade and one post-upgrade:

Pre-Upgrade Fixup Script (run in source database environment):

/u01/app/oracle/cfgtoollogs/ocad11/preupgrade/preupgrade_fixups.sql

Post-Upgrade Fixup Script (run shortly after upgrade):

/u01/app/oracle/cfgtoollogs/ocad11/preupgrade/postupgrade_fixups.sql

Fixup scripts must be reviewed prior to being executed.

These scripts fix only the trivial issues that are non-impacting and do not require DBA attention; they do not fix issues that could damage the database. For example, dropping Enterprise Manager Database Control, resizing tablespaces, dropping users and roles, gathering statistics, and so on must be handled manually by the DBA. The tool will tell you to fix those issues manually before the upgrade.

m) Whether Database Vault is enabled. It may recommend disabling it, because it is a requirement to disable it before the upgrade and re-enable it afterwards if needed.

n) If any files are in backup mode or in media-recovery-needed state.

o) If the standby database is in sync with the primary.






Catctl.pl step:

Set the below environment variables to point to the new ORACLE_HOME (a sample export block follows the list):

ORACLE_HOME

ORACLE_SID

LD_LIBRARY_PATH

PATH

ORACLE_BASE
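
A minimal sketch, assuming the new home is /u01/app/oracle/product/12.1.0/dbhome_1 and the SID is orcl (both hypothetical):

export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
export ORACLE_SID=orcl
export ORACLE_BASE=/u01/app/oracle
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:$PATH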

$ cd $ORACLE_HOME/rdbms/admin

$ sqlplus "/ as sysdba"



SQL> startup UPGRADE

$ORACLE_HOME/perl/bin/perl catctl.pl -n 6 -l $ORACLE_BASE/admin/$ORACLE_SID/upgrade catupgrd.sql



-n sets the degree of parallelism; the default is 4.

-l sets an explicit log directory.



The DB gets shut down once the upgrade completes. Check the upgrade logs and make sure no errors occurred in them.


Exadata IB switches patching

Imp points:
Starting with release 11.2.3.3.0, the patchmgr utility is used to upgrade and downgrade the InfiniBand switches.
The IB switch patch is delivered with the Exadata storage patch.
IB switch patches are released semi-annually to annually.
IB switches can be patched in rolling fashion only.


Environment
Exadata Half Rack X4-2
4 Compute nodes, 7 Storage cells and 2 IB Switches
Current IB Switch Version 2.2.7-1



Steps:
1. Identify the IB switches in the cluster.

[root@dm01dbadm01 ~]# ibswitches
Switch : 0x002128469b8aa0a0 ports 36 “SUN DCS 36P QDR dm01sw-iba01 10.209.41.246” enhanced port 0 lid 5 lmc 0
Switch : 0x002128469b97a0a0 ports 36 “SUN DCS 36P QDR dm01sw-ibb01 10.209.41.247” enhanced port 0 lid 4 lmc 0


2. Identify the current IB switch software version on all the Switches
[root@dm01db01 patch_18.1.12.0.0.190111]# ssh dm01sw-iba01 version
SUN DCS 36p version: 2.2.7-1
Build time: Aug 4 2017 12:20:53
SP board info:
Manufacturing Date: 2014.05.20
Serial Number: “NCDFxxxxx”
Hardware Revision: 0x0107
Firmware Revision: 0x0000
BIOS version: SUN0R100
BIOS date: 06/22/2010



3. Log in to Exadata compute node 1 as the root user and navigate to the Exadata storage software staging area:

[root@dm01dbadm01 ESS_121220]# cd /u01/app/oracle/software/exa_patches/patch_18.1.12.0.0.190111
[root@dm01dbadm01 patch_18.1.12.0.0.190111]# pwd
/u01/app/oracle/software/exa_patches/patch_18.1.12.0.0.190111
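
The ~/ibswitch_group file referenced by patchmgr below is simply a text file listing the IB switch hostnames, one per line (the names here are taken from the ibswitches output above):

[root@dm01dbadm01 ~]# cat ~/ibswitch_group
dm01sw-iba01
dm01sw-ibb01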

4. Execute the following to perform the IB Switch precheck
[root@dm01db01 patch_18.1.12.0.0.190111]# ./patchmgr -ibswitches ~/ibswitch_group -upgrade -ibswitch_precheck

5. Upgrade the IB Switches using the following command:
[root@dm01db01 patch_18.1.12.0.0.190111]# ./patchmgr -ibswitches ~/ibswitch_group -upgrade


6. Verify that all the IB switches are upgraded to the latest version.
[root@dm01db01 ~]# ssh dm01sw-ibb01 version
SUN DCS 36p version: 2.2.11-2
Build time: Aug 27 2018 11:18:39
SP board info:
Manufacturing Date: 2014.05.19
Serial Number: “NCDFxxxxx”
Hardware Revision: 0x0107
Firmware Revision: 0x0000
BIOS version: SUN0R100
BIOS date: 06/22/2010
[root@dm01db01 ~]#

[root@dm01db01 ~]# ssh dm01sw-iba01 version
SUN DCS 36p version: 2.2.11-2
Build time: Aug 27 2018 11:18:39
SP board info:
Manufacturing Date: 2014.05.20
Serial Number: “NCDFxxxxx”
Hardware Revision: 0x0107
Firmware Revision: 0x0000
BIOS version: SUN0R100
BIOS date: 06/22/2010

Exadata compute nodes patching


Imp points:

Patching a database node takes 45 minutes to one hour

You can perform the actual update of Exadata database servers in a rolling (using the -rolling flag) or non-rolling fashion. The default is non-rolling.

Starting with release 12.2.1.1.0, Exadata software updates for the Exadata database server can be applied only through the patchmgr utility. The dbserver.patch.zip file contains a special version of patchmgr used to orchestrate dbnodeupdate when updating Exadata database nodes.

The patchmgr utility for updating Exadata database servers is not the same as the patchmgr script shipped with the Exadata software update.

Pre-requisites:

1. Note the output of below commands from each db node:
imageinfo
crsctl stat res -t

2. Check and note the ILOM IP of each DB node. Connect to the ILOM of each DB node and verify that each ILOM is running fine.

3. If you are planning to update all Exadata database servers at once, it is a requirement to run the update utility from a Linux node outside the group of Exadata database servers being updated. This is because the update utility cannot update the Exadata database server it is currently running on. If you have no other systems running Oracle Linux or Oracle Solaris you can run the update utility from one of the Exadata database servers. In such cases be sure that the Exadata database server where the update utility is running is not listed in the dbs_group file you specify.
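
The dbs_group file used by patchmgr in the commands below is just a text file listing the database server hostnames to update, one per line; for example (hostnames are illustrative, and the driving node is excluded if it is one of the database servers):

[root@dm01 ]# cat ~/dbs_group
dm01dbadm02
dm01dbadm03
dm01dbadm04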

4. You need to set up ssh equivalence for the root user from the driving node to the root user of all Exadata database servers that will be updated.
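
One common way to do this on Exadata (a sketch; assumes dcli is available and you can supply the root password once per node):

[root@dm01 ]# ssh-keygen -t rsa          # only if the root user has no SSH key yet
[root@dm01 ]# dcli -g ~/dbs_group -l root -k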

5. Download the latest dbserver.patch.zip from My Oracle Support note 1553103.1 and verify that the patchmgr utility is the latest version.

6. Download the latest Exachk from My Oracle Support note 1070954.1.

7. Run ExaChk tool

[root@dm01 ]# ./exachk

8. Run the prerequisite check with the -nomodify_at_prereq flag weeks before doing the patching.

[root@dm01 ]# nohup ./patchmgr -dbnodes dbs_group -precheck -iso_repo /u01/exa/p22750145_121230_Linux-x86-64.zip -target_version 12.1.2.3.0.160207.3 -nomodify_at_prereq &


If it finds any unknown RPM packages that conflict with the patch, it will show an error like the one below:

exadb01: ERROR: Custom packages found that require removal before updating to another major Oracle Linux release, see exadb01:/var/log/cellos/unkown_packages-rpt.070519194937.txt for more details.


Imp Contents of this file:

ALERT : Custom packages found (see above)

# These custom packages MUST be removed before proceeding a major Oracle Linux upgrade.

# Run dbnodeupdate.sh with additional -R flag at update time to have these packages automatically removed - OR -

# Run /var/log/cellos/remove_unkown_packages.070519194937.sh manually as root before re-running prereq check.

#

# RECOMMENDED : Let dbnodeupdate.sh remove the packages right after the mandatory backup by using the -R flag

# WARNING : Removing custom packages may impact functionality

# NOTE : When removed functionality is still required after the update, reinstall the rpm after the update

#################################################################################


Note:

By default, prerequisite check warnings such as active NFS mounts result in a failed check. If you want to allow active NFS mounts, then starting in release 12.1.2.1.1 you can use the -allow_active_network_mounts flag.

9. Fix the issues found in these two reports and re-run them until no issues remain.

Perform a "backup only" run using the -backup flag.

[root@dm01 ]# nohup ./patchmgr -dbnodes dbs_group -backup -iso_repo /u01/exa/p22750145_121230_Linux-x86-64.zip -target_version 12.1.2.3.0.160207.3 -allow_active_network_mounts &

Note:
The -backup action backs up the active root and /boot file system only and is sufficient for rolling back (failed) updates.

Re-running the update utility with the -backup flag (or in default updating mode) will overwrite existing backups.

Intervention steps:
1. Perform a "backup only" run using the -backup flag.
2. Run the ExaChk tool.
3. Run the prerequisite check with the -nomodify_at_prereq flag weeks before doing the patching.
4. Check and fix the critical issues found in them and re-run until no issues remain.
5. Remove any blocking rpms and re-run the prerequisite check to validate that all prerequisites pass.
6. Run the prerequisite check without the -nomodify_at_prereq flag to allow the removal of conflicting rpms using the ISO.

[root@dm01 ]# nohup ./patchmgr -dbnodes dbs_group -precheck -iso_repo /u01/exa/p22750145_121230_Linux-x86-64.zip -target_version 12.1.2.3.0.160207.3 &

7. Perform the update. Use the -nobackup flag to skip the backup because you already made a "backup only" run.

[root@dm01 ]# nohup ./patchmgr -dbnodes ~/dbs_group -upgrade -iso_repo /u01/iso/p23557378_121223_Linux-x86-64.zip -target_version 12.1.2.2.3.160720 -allow_active_network_mounts -nobackup &

Note:
You can perform the actual update of Exadata database servers in a rolling (using the -rolling flag) or non-rolling fashion. The default is non-rolling.

To perform the update in rolling fashion, use the -rolling option. Below is an example:

[root@myclustercel01 ~]# nohup ./patchmgr -dbnodes ~/dbs_group -upgrade -iso_repo /tmp/SAVE/p28666206_*_Linux-x86-64.zip -target_version 18.1.9.0.0.181006 -allow_active_network_mounts -rolling &


8. After the update, reinstall any non-Exadata RPMs that you removed before the Exadata update, if they are still needed.

9. Run Exachk. Check the issues and fix them.

10. Run imageinfo on each node. Verify that each node is running with the expected version.

11. Run crsctl stat res -t on each node. Compare its output with the precheck output.


Rollback Steps:

[root@dm01 ]# ./patchmgr -dbnodes dbs_group -rollback -target_version 12.1.2.3.0.160207


Note:
Having only one inactive system partition limits the rollback options to only the previous active image.

Firmware updates are not rolled back when rolling back to a previous image.

The Oracle Exadata System Software releases support later firmware releases. After rolling back, run the following commands to apply older firmware versions if needed:

/etc/init.d/lsidiag stop
/etc/init.d/lsi_mrdsnmpd stop
/opt/oracle.cellos/CheckHWnFWProfile -action updatefw -mode exact

The last command only applies to releases 11.2.3.3.0 or later.

Sunday, March 15, 2020

Hugepages in oracle database

HugePages are supported by many operating systems. We will consider the Linux operating system since it is the most widely used in the market.

Let's understand Hugepages step by step.

I.   What are Hugepages and why they are used
II.  Hugepages advantages
III. Conditions to use Hugepages
IV. How to use them.

I. What are HugePages
In any operating system, memory is allocated in terms of pages. The default page size in Linux is 4 KB, and these pages are managed using the page table. So if your Oracle database SGA size is, say, 5 GB, it would be allocated in 5 GB / 4 KB ≈ 1.3 million pages, which means about 1.3 million entries in the page table to do the mapping. That would be an extensive overhead for the OS to manage.

To overcome this issue, HugePages were introduced. They are basically large memory pages with a default size of 2 MB. So if your database SGA size is 5 GB, you would need only 5 GB / 2 MB = 2,560 pages. With this, the OS needs just 2,560 entries in the page table to manage these pages, so the overhead on the OS is drastically reduced, which leads to better overall performance.


II. Hugepages advantages
1. Contiguous pages are preallocated and cannot be used for anything else but for System shared memory (for example, SGA)
2. Hugepages are locked in memory and never swapped out, which forces the SGA to always stay in physical memory.
3. Fewer page table entries -> less memory required for storing the page table -> less overhead on the OS to manage the pages.

III. Conditions to use Hugepages

1. If the SGA size is >= 5GB, it is strongly recommended to configure HugePages at the OS level.

2. Hugepagesize can vary depending on the kernel and hardware architecture of the server.

3. Oracle large pages support began in Oracle 10g Release 1. So Hugepages can be used in Oracle Database 10gR1 and later versions.

4. Automatic Memory Management (AMM) is not compatible with Linux HugePages, i.e. MEMORY_TARGET cannot be used with HugePages. Instead, Automatic Shared Memory Management (SGA_TARGET) and Automatic PGA Management (PGA_AGGREGATE_TARGET) should be used, as they are compatible with HugePages.


5. Disable Transparent HugePages
Transparent HugePages memory is enabled by default with Red Hat Enterprise Linux 6, Oracle Linux 6, etc. Transparent HugePages memory differs from standard HugePages memory because the kernel khugepaged thread allocates memory dynamically during runtime, whereas standard HugePages memory is pre-allocated at startup and does not change during runtime. It can degrade database performance considerably, so Oracle strongly recommends that you disable Transparent HugePages and use standard HugePages on all Oracle Database servers.


To check whether Transparent HugePages is enabled, run one of the following commands as the root user:

Red Hat Enterprise Linux kernels:
# cat /sys/kernel/mm/redhat_transparent_hugepage/enabled
Other kernels:
# cat /sys/kernel/mm/transparent_hugepage/enabled

The following is a sample output showing that Transparent HugePages is in use, as the [always] flag is enabled.

[always] never

Note:

If Transparent HugePages are not being used then the /sys/kernel/mm/transparent_hugepage or /sys/kernel/mm/redhat_transparent_hugepage files do not exist.

To disable Transparent HugePages, perform the following steps. DO THIS ONLY IF YOU FIND TRANSPARENT HUGEPAGES ALREADY ENABLED IN YOUR OS.

Add the following entry to the kernel boot line in the /etc/grub.conf file:
transparent_hugepage=never
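
For example, a kernel line in /etc/grub.conf would end up looking something like the following (the kernel version and other boot parameters are purely illustrative; only the trailing transparent_hugepage=never is the addition):

kernel /vmlinuz-2.6.39-400.0.0.el6uek.x86_64 ro root=/dev/mapper/vg_root-lv_root rhgb quiet transparent_hugepage=never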


IV. How to use them.

1. Check current status of Hugepages. Run the following command to determine the current HugePage usage. 
$ grep Huge /proc/meminfo
AnonHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
$


Clearly, in the above output HugePages_Total and HugePages_Free are zero, which means HugePages are not currently being used.


2. Edit /etc/sysctl.conf file.

The below script calculates the recommended number of HugePages (vm.nr_hugepages); Oracle provides such a script in a My Oracle Support note. Copy the script onto your server and give it execute permissions.
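
The original Oracle script is not reproduced here; the following is a minimal sketch of the same idea (it sums the shared memory segments currently allocated, i.e. the running SGAs, and divides by the HugePage size), so treat it as illustrative rather than the official script:

#!/bin/bash
# Sketch: recommend vm.nr_hugepages based on currently allocated shared memory.
HPG_SZ=$(grep Hugepagesize /proc/meminfo | awk '{print $2}')   # HugePage size in kB
NUM_PG=1
for SEG_BYTES in $(ipcs -m | awk '{print $5}' | grep "[0-9][0-9]*")
do
    MIN_PG=$(echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q)
    if [ "$MIN_PG" -gt 0 ]; then
        NUM_PG=$(echo "$NUM_PG+$MIN_PG+1" | bc -q)
    fi
done
echo "Recommended setting: vm.nr_hugepages = $NUM_PG"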

Make sure that the Oracle database is running and AMM is not enabled, i.e. MEMORY_TARGET is not being used.

Run this script.

It will give the output like below:

$ ./hugepages_setting.sh 
Recommended setting: vm.nr_hugepages = 306
$

Edit the "/etc/sysctl.conf" file from "root" user and add the following entry in it.

vm.nr_hugepages=306


Run the following command as the "root" user.

# sysctl -p

3. Edit /etc/security/limits.conf file. 

Check for the following entries in "/etc/security/limits.conf":
* soft memlock 626688
* hard memlock 626688


Both values should be equal to the total HugePages memory in KB, i.e. HugePages * Hugepagesize, so in our case it will be 306*2048 = 626688. If the values are found to be lower, edit the file and change them.


The limits.conf file is used to limit system resources. The memlock option sets the maximum amount of locked-in memory in kilobytes; this is memory that will not be paged out and always remains in physical memory. By putting in these values, we ensure that the HugePages always remain resident in physical memory and are never paged out by the OS.


4. Stop all the database services and restart the server.


5. Check the HugePages information again.


$ grep Huge /proc/meminfo
AnonHugePages:         0 kB
HugePages_Total:     306
HugePages_Free:       98
HugePages_Rsvd:       93
HugePages_Surp:        0
Hugepagesize:       2048 kB
$



Clearly, HugePages are now being used at the OS level.
 
6. Enable database to use large pages. 

Let's enable HugePages at the database level by setting the USE_LARGE_PAGES parameter to ONLY. Note that this parameter was introduced in 11.2.0.2, so please skip this step if your database version is older than 11.2.0.2.

ALTER SYSTEM SET use_large_pages=only SCOPE=SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP;


Below are the available values of this parameter:

Values:

1. TRUE (the DEFAULT value)

In Oracle Database 11.2.0.2, if there are not enough large pages configured on the system, then regular sized pages will be used to allocate SGA memory.

In Oracle Database 11.2.0.3 and later releases, Oracle allocates as much of the SGA as it can in large pages, and if it runs out, it will allocate the rest of the SGA using regular sized pages. In this supported mixed page mode allocation, the database will exhaust the available large pages before switching to regular sized pages.


2. FALSE
Specifies that the instance will not use large pages.

3. ONLY

Specifies that the instance will fail to start if large pages cannot be used for the entire SGA memory.

It is recommended by Oracle to always set this parameter to a value of ONLY if you are going to use large pages (hugepages).

Friday, June 21, 2019

Procedure to create acfs file system

Pre-checks

1. The disk space of the ACFS file system you want to create will be allocated from an ASM diskgroup, so please make sure that much free space is available in the diskgroup.
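
For example, to check the free space in the target diskgroup beforehand:

ASMCMD [+] > lsdg DATA01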

 Procedure:

Here we are creating an ACFS file system of 2 GB in the +DATA01 diskgroup.

1. First, create a volume of 2 GB in the +DATA01 diskgroup.

ASMCMD [+] > volcreate -G DATA01 -s 2G volume01

The volume name is volume01.


2. Check the details of the newly created volume.

ASMCMD [+] > volinfo -G DATA01 volume01

Diskgroup Name: DATA01

         Volume Name: VOLUME01

         Volume Device: /dev/asm/volume01-123

         State: ENABLED

Make a note of volume name and volume device of the new volume. The volume device will be used in the next step.

3. Create the ACFS file system:

/sbin/mkfs -t acfs /dev/asm/volume01-123


4. Register the new volume created for the file system in the ACFS mount registry:

/sbin/acfsutil registry -a  /dev/asm/volume01-123 /acfsmounts/acfs1

Here, /acfsmounts/acfs1 is the directory on which the ACFS file system is mounted.

5. Check whether the file system has been mounted by the ACFS mount registry service. If not, manually mount the file system using the new volume device:


/bin/mount -t acfs <volume_device> /acfsmounts/acfs1

                                  
Post Checks

1. df -hP | grep asm; check that the ACFS file system shows up in the df output.

2. Check the ASM alert log for any errors.

Friday, March 29, 2019

oracle database upgrade to 11.2.0.4

Here we are upgrading oracle database from 11.2.0.3 to 11.2.0.4.

Considering the OS version is Linux.

1. Check and make sure you have a recent consistent backup of the database. Create a restore point as well if possible.

2. Disable all the cron entries on the db server e.g. backups and other stuff which runs on db.

crontab -e

To disable the entries (comment out every line in vi):

:%s/^/####/g
:wq!

3. Take the prechecks.

spool precheck.log

select name,open_mode from v$database;
archive log list
set lines 120
select TABLESPACE_NAME,EXTENT_MANAGEMENT,contents from dba_tablespaces;
select tablespace_name,sum(bytes/1024/1024) from dba_temp_files group by tablespace_name;
select file_name,bytes/1024/1024 from dba_temp_files;
col comp_name for a40
set lines 120
select comp_name,status,version from dba_registry;
show parameter NLS_LENGTH_SEMANTICS
show parameter CLUSTER_DATABASE
show parameter parallel_max_server
show parameter undo_management
show parameter job_queue_process
show parameter pool
show parameter remote_login_password
show parameter spfile
show parameter pga
show parameter sga
show parameter disk_as
select owner,object_type,count(*) from dba_objects group by owner,object_type
order by 1,2;
select owner,count(*) from dba_objects group by owner order by 1;
col owner for a15
col object_name for a35
select OWNER,OBJECT_NAME,OBJECT_TYPE,status from DBA_OBJECTS where
status = 'INVALID';
select count(*) from dba_objects where status='INVALID';
col password for a20
col username for a15
col account_status for a15
set lines 120
select USERNAME,PASSWORD,ACCOUNT_STATUS,PROFILE,EXPIRY_DATE from dba_users
order by ACCOUNT_STATUS;

select * from v$log;
select PROPERTY_NAME,PROPERTY_VALUE from database_properties;
select * from registry$history;

SELECT DISTINCT owner
FROM DBA_DEPENDENCIES
WHERE referenced_name
IN ('UTL_TCP','UTL_SMTP','UTL_MAIL','UTL_HTTP','UTL_INADDR')
AND owner NOT IN ('SYS','ORDPLUGINS');

4. Install the 11.2.0.4 binaries into a new 11.2.0.4 directory inside the product directory.

Upload the 11.2.0.4 binaries zip files onto the server, say under /home/oracle/stage/. Make sure the Oracle database OS user has access to them; change their ownership and permissions if needed.

As the oracle user, unzip the 11.2.0.4 software:

su - oracle
cd /home/oracle/stage
unzip p13390677_112040_Linux-x86-64_1of7.zip
unzip p13390677_112040_Linux-x86-64_2of7.zip

Both zips extract into a single directory named "database":

$ ls
database  p13390677_112040_Linux-x86-64_1of7.zip  p13390677_112040_Linux-x86-64_2of7.zip

Response files
Once the Oracle 11gR2 binaries are unzipped, you can find dedicated files called "response files", used for silent-mode installations, in the directory /home/oracle/stage/database/response.

The response files store parameters necessary to install Oracle components:
db_install.rsp – used to install Oracle binaries and install/upgrade a database in silent mode
dbca.rsp – used to install/configure/delete a database in silent mode
netca.rsp – used to configure a simple network for the Oracle database in silent mode

$ cd /home/oracle/stage/database/response
$ ls
dbca.rsp  db_install.rsp  netca.rsp

Install the Oracle binaries. First take a backup of the response file:

cp db_install.rsp db_install.rsp.bck

Edit file db_install.rsp to set parameters required to install binaries.


--force to install only database software
oracle.install.option=INSTALL_DB_SWONLY
--set your hostname
ORACLE_HOSTNAME=oel6.dbaora.com
--set unix group for oracle inventory
UNIX_GROUP_NAME=oinstall
--set directory for oracle inventory
INVENTORY_LOCATION=/ora01/app/oraInventory
--set oracle home for binaries
ORACLE_HOME=/ora01/app/oracle/product/11.2.0.4/
--set oracle base
ORACLE_BASE=/ora01/app/oracle 
--set version of binaries to install -- EE - enterprise edition
oracle.install.db.InstallEdition=EE
--force to install advanced options
oracle.install.db.EEOptionsSelection=true
--specify which advanced options to install:
-- oracle.oraolap:11.2.0.4.0 - Oracle OLAP
-- oracle.rdbms.dm:11.2.0.4.0 - Oracle Data Mining
-- oracle.rdbms.dv:11.2.0.4.0 - Oracle Database Vault
-- oracle.rdbms.lbac:11.2.0.4.0 - Oracle Label Security
-- oracle.rdbms.partitioning:11.2.0.4.0 - Oracle Partitioning
-- oracle.rdbms.rat:11.2.0.4.0 - Oracle Real Application Testing
oracle.install.db.optionalComponents=oracle.rdbms.partitioning:11.2.0.4.0,oracle.oraolap:11.2.0.4.0,oracle.rdbms.dm:11.2.0.4.0,oracle.rdbms.rat:11.2.0.4.0
--specify extra groups for database management
oracle.install.db.DBA_GROUP=dba
oracle.install.db.OPER_GROUP=oper0

Once editing is complete, start the binaries installation:

cd /home/oracle/stage/database
./runInstaller -silent -responseFile /home/oracle/stage/database/response/db_install.rsp


The output is as follows:
[oracle@oel6 database]$ ./runInstaller -silent -responseFile /home/oracle/stage/database/response/db_install.rsp
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB. Actual 41752 MB Passed
Checking swap space: must be greater than 150 MB. Actual 4095 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-02-06_09-58-53PM. Please wait ...

[oracle@oel6 database]$ No protocol specified
[WARNING] - My Oracle Support Username/Email Address Not Specified
[SEVERE] - The product will be registered anonymously using the specified email address.
You can find the log of this install session at: /ora01/app/oraInventory/logs/installActions2015-02-06_09-58-53PM.log

The installation of Oracle Database 11g was successful. Please check '/ora01/app/oraInventory/logs/silentInstall2015-02-06_09-58-53PM.log' for more details.

 As a root user, execute the following script(s):
1. /ora01/app/oraInventory/orainstRoot.sh
2. /ora01/app/oracle/product/11.2.0/db_1/root.sh
Successfully Setup Software.

You are then asked to run the root.sh script as the root user. Once that is done, the binaries are installed.

Run utlu112i.sql from the new home's 11.2.0.4/rdbms/admin location against the source database.
WARNING: Database contains INVALID objects prior to upgrade.
The list of invalid SYS/SYSTEM objects was written to registry$sys_inv_objs.
The list of non-SYS/SYSTEM objects was written to registry$nonsys_inv_objs.
Use utluiobj.sql after the upgrade to identify any new invalid objects due to the upgrade.

Copy utltzuv2.sql from 11.2.0.4/rdbms/admin to backup location.

The time zone file version should be 14; if not, upgrade the time zone file after the DB upgrade.

SQL> select * from v$timezone_file;

FILENAME             VERSION
-------------------- ----------
timezlrg_14.dat              14

SQL>


Take a Before_upgrade.log:

spool Before_upgrade.log

select tablespace_name,sum(bytes/1024/1024) from dba_data_files group by tablespace_name;
select file_name from dba_data_files;
select file_name,bytes/1024/1024 from dba_temp_files;
select * from v$timezone_file;
select value from NLS_DATABASE_PARAMETERS where parameter='NLS_CHARACTERSET';
SELECT * FROM nls_database_parameters;
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;   -- already done as part of utlu112i.sql
grant analyze any to sys;
exec dbms_stats.create_stat_table('SYS','dictstattab');
exec dbms_stats.export_schema_stats('SYSMAN','dictstattab',statown => 'SYS');
exec dbms_stats.export_schema_stats('XDB','dictstattab',statown => 'SYS');


SELECT * FROM v$recover_file;

-- This should return no rows.

SELECT * FROM v$backup WHERE status!='NOT ACTIVE'; -- This should return no rows.


SELECT b.FILE#,d.FILE_ID,d.FILE_NAME,b.status
FROM v$backup b, dba_data_files d
WHERE b.file#=d.file_id and b.status!='NOT ACTIVE';

select * from dba_2pc_pending;
-- If this returns rows, you should do the following:

SELECT local_tran_id FROM dba_2pc_pending;
EXECUTE dbms_transaction.purge_lost_db_entry('');
COMMIT;

SELECT tablespace_name FROM dba_tables WHERE table_name='AUD$';

SELECT name FROM v$controlfile;

SELECT username, default_tablespace FROM dba_users WHERE username in ('SYS','SYSTEM');

-- If DEFAULT_TABLESPACE is anything other than the SYSTEM tablespace, modify the
-- default tablespace to SYSTEM by using the below commands.

ALTER user SYS default tablespace SYSTEM;
ALTER user SYSTEM default tablespace SYSTEM;

Check the CONNECT role and grant the needed privileges directly to users, since in 11g the CONNECT role has only the CREATE SESSION privilege.

Put the below contents in a script file, say gen_analyze.sql (the name is just for illustration), using the vi editor; it spools the generated statements to analyze.sql:

Set verify off
Set space 0
Set line 120
Set heading off
Set feedback off
Set pages 1000
Spool analyze.sql
SELECT 'Analyze cluster "'||cluster_name||'" validate structure cascade;' FROM dba_clusters  WHERE owner='SYS'
UNION
SELECT 'Analyze table "'||table_name||'" validate structure cascade;' FROM dba_tables
WHERE owner='SYS' AND partitioned='NO' AND (iot_type='IOT' OR iot_type is NULL)
UNION
SELECT 'Analyze table "'||table_name||'" validate structure cascade into invalid_rows;' FROM dba_tables WHERE owner='SYS' AND partitioned='YES';
spool off

Now run the generator and then the generated script at the SQL prompt:

@gen_analyze.sql
@analyze.sql

create pfile='/dbs/init.ora' from spfile;

SELECT name,description from SYS.V$PARAMETER WHERE name LIKE '\_%' ESCAPE '\';

CREATE RESTORE POINT PRE_UPGRADE_DATABASE GUARANTEE FLASHBACK DATABASE;


Stop the database, listener, and agent. Take a backup of the old Oracle Home:

mv 11.2.0 11.2.0_old

Change ORACLE_HOME to the new Oracle Home path in /etc/oratab and /var/opt/oracle/oratab, and reload the environment variables so they point to the new location.

Create the upgrade driver script (vi upgrade_.sql) with the following contents:

set echo on
set time on
set timing on
spool /home/oracle/pre_11204/upgrade_.log
@?/rdbms/admin/catupgrd.sql
spool off
exit

Start up the database in upgrade mode:

SQL> startup upgrade

Run the below to execute catupgrd.sql in the background:

nohup sqlplus "/as sysdba" @upgrade_.sql &

It can run for an hour or more. Once it completes:

Run utlu112s.sql and utlrp.sql from the new ORACLE_HOME for the post checks.

$ sqlplus "/as sysdba"
SQL> startup
SQL> spool utlu112s.log
SQL>@?/rdbms/admin/utlu112s.sql
SQL>spool off

 --Compile all invalid objects.
SQL> @?/rdbms/admin/utlrp.sql

 --make sure all sys owned objects are valid.
 col owner for a20;
select owner,object_type,count(*) from dba_objects where status='INVALID'
group by owner,object_type order by owner,object_type ;

Apply psu if needed.

Connect to the RMAN catalog and upgrade the catalog using the below command:

UPGRADE CATALOG;

It will ask you to type this command again, so type and run it again. It takes about 5 to 10 minutes to complete.
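
For example (the catalog connection details are illustrative):

$ rman target / catalog rcat_owner@rcatdb

RMAN> UPGRADE CATALOG;
RMAN> UPGRADE CATALOG;   -- RMAN asks for the command a second time as confirmation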

If all is ok, drop the restore point:
SELECT NAME,SCN,TIME,GUARANTEE_FLASHBACK_DATABASE,
STORAGE_SIZE/1024/1024/1024 STORAGE_SIZE_GB,DATABASE_INCARNATION#
FROM V$RESTORE_POINT;

drop restore point <restore_point_name>;

Check with the app team and ask them to validate everything from the app side. If all is OK, get confirmation from them to update the COMPATIBLE parameter.

-- Update the COMPATIBLE parameter and restart the DB

alter system set compatible='11.2.0.4' scope=spfile;
shutdown immediate
startup

$ cd $ORACLE_HOME/lib
$ ls -lrt libsql*
-rw-r--r-- 1 oracle dba 1385072 Jul 20 2013 libsqlplusO.so
-rw-r--r-- 1 oracle dba 1331536 Aug 21 2013 libsql11.a
-rw-r--r-- 1 oracle dba 2189440 Jan 17 15:59 libsqlplus.a
-rw-r----- 1 oracle dba 1307632 Jan 17 16:01 libsqlplus.so $

Make sure the permissions of each of the above files are 644.

If not, change them using chmod:

chmod 644 libsql*

If the database is under VCS control, ask the SA team to change the below parameters in the main.cf file:

tns entry    : /opt/app/p2dti1d3/oracle/product/11.2.0.4
$ORACLE_HOME : /opt/app/p2dti1d3/oracle/product/11.2.0.4
compatible   : 11.2.0.4

Revert the cron entries: re-enable all the cron entries on the DB server, e.g. backups and other jobs that run against the DB.

crontab -e
:%s/####//g
:wq!