CHAPTER 16
Running an initial data load
Overview of initial data load methods
You can use Oracle GoldenGate to:
● Perform a standalone batch load to populate database tables for migration or other purposes.
● Load data into database tables as part of an initial synchronization run in preparation for change synchronization with Oracle GoldenGate.

The initial load can be performed from an active source database. Users and applications can access and update data while the load is running. You can perform the initial load from a quiesced source database if you delay access to the source tables until the target load is completed.
Supported load methods
You can use Oracle GoldenGate to load data in any of the following ways:
◠“Loading data with a database utility†on page 217. The utility performs the initial load.
◠“Loading data from file to Replicat†on page 218. Extract writes records to an extract file and Replicat applies them to the target tables. This is the slowest initial-load method.
◠“Loading data from file to database utility†on page 222. Extract writes records to extract files in external ASCII format. The files are used as data files for input into target tables by a bulk load utility. Replicat creates the run and control files.
◠“Loading data with an Oracle GoldenGate direct load†on page 227. Extract communicates with Replicat directly across TCP/IP without using a Collector process or files. Replicat applies the data through the database engine.
◠“Loading data with a direct bulk load to SQL*Loader†on page 231. Extract extracts records in external ASCII format and delivers them directly to Replicat, which delivers them to Oracle’s SQL*Loader bulk-load utility. This is the fastest method of loading Oracle data with Oracle GoldenGate.
Â
Using parallel processing in an initial load
For all initial load methods except those performed with a database utility, you can load large databases more quickly by using parallel Oracle GoldenGate processes.
To use parallel processing
1. Follow the directions in this chapter for creating an initial-load Extract and an initial-load Replicat for each set of parallel processes that you want to use.
2. With the TABLE and MAP parameters, specify a different set of tables for each pair of Extract-Replicat processes, or use the SQLPREDICATE option of TABLE to partition the rows of large tables among the different Extract processes, as in the sketch below.
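For example, a minimal sketch of how two parallel initial-load Extracts might split one large table with SQLPREDICATE (the schema, table, and column names are illustrative only):

-- Parameter file of the first initial-load Extract
TABLE hr.big_orders, SQLPREDICATE "WHERE order_id <= 5000000";
-- Parameter file of the second initial-load Extract
TABLE hr.big_orders, SQLPREDICATE "WHERE order_id > 5000000";

Each Extract selects only its own partition of the rows, and each is paired with its own initial-load Replicat.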
Prerequisites for initial load
Disable DDL processing
Before executing an initial load, disable DDL extraction and replication. DDL processing is controlled by the DDL parameter in the Extract and Replicat parameter files.
Prepare the target tables
The following are suggestions that can make the load go faster and help you to avoid errors.
● Data: Make certain that the target tables are empty. Otherwise, there may be duplicate-row errors or conflicts between existing rows and rows that are being loaded.
● Constraints: Disable foreign-key constraints and check constraints. Foreign-key constraints can cause errors, and check constraints can slow down the loading process. Constraints can be reactivated after the load concludes successfully.
● Indexes: Remove indexes from the target tables. Indexes are not necessary for inserts, and they slow down the loading process significantly, because the database updates every index on a table for each row that is inserted into it. You can add the indexes back after the load is finished.
● Keys: To use the HANDLECOLLISIONS function to reconcile incremental data changes with the load, each target table must have a primary or unique key. If you cannot create a key through your application, use the KEYCOLS option of the TABLE and MAP parameters to specify columns as a substitute key for Oracle GoldenGate's purposes, as in the sketch below. A key helps identify which row to process. If you cannot create keys, the source database must be quiesced for the load.
Configure the Manager process
On the source and target systems, configure and start a Manager process. One Manager can be used for the initial-load processes and the change-synchronization processes.
Create a data-definitions file
A data-definitions file is required if the source and target databases have dissimilar definitions. Oracle GoldenGate uses this file to convert the data to the format required by the target database.
Create change-synchronization groups
NOTE If the load is performed from a quiet source database and will not be followed by continuous change synchronization, you can omit these groups.
To prepare for the capture and replication of transactional changes during the initial load, create online Extract and Replicat groups. You will start these groups during the load procedure. See the instructions in this documentation that are appropriate for the type of replication configuration that you will be using.
Do not start the Extract or Replicat groups until instructed to do so in the initial-load instructions. Change synchronization keeps track of transactional changes while the load is being applied, and then the target tables are reconciled with those changes.
NOTE The first time that Extract starts in a new Oracle GoldenGate configuration, any open transactions will be skipped. Only transactions that begin after Extract starts are captured.
If the source database will remain active during the initial load, include the HANDLECOLLISIONS parameter in the Replicat parameter file; otherwise do not use it.
HANDLECOLLISIONS accounts for collisions that occur during the overlap of time between the initial load and the ongoing change replication. It reconciles insert operations for which the row already exists, and it reconciles update and delete operations for which the row does not exist.
It can be used in these ways, as illustrated in the sketch below:
● globally for all tables in a parameter file
● as an on/off toggle for groups of tables
● within MAP statements to enable or disable the error handling for specific table pairs
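For example, a minimal sketch of the toggle usage in a Replicat parameter file (the schema names are illustrative only):

HANDLECOLLISIONS
MAP hr.*, TARGET hr.*;
NOHANDLECOLLISIONS
MAP fin.*, TARGET fin.*;

Here the error handling is enabled for the hr tables and disabled for the fin tables; HANDLECOLLISIONS can also be placed inside a single MAP statement to scope it to one table pair.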
Sharing parameters between process groups
Some of the parameters that you use in a change-synchronization parameter file also are required in an initial-load Extract and initial-load Replicat parameter file. You can copy those parameters from one parameter file to another, or you can store them in a central file and use the OBEY parameter in each parameter file to retrieve them. Alternatively, you can create an Oracle GoldenGate macro for the shared parameters and then call the macro from each parameter file with the MACRO parameter.
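For example, a minimal sketch of the OBEY approach, assuming the shared connection parameters are kept in a central file named dirprm/connect.prm (the file name and credentials are illustrative only):

-- dirprm/connect.prm:
USERID ggs, PASSWORD ggs123

-- In each Extract or Replicat parameter file:
OBEY dirprm/connect.prm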
Loading data with a database utility
To use a database copy utility to establish the target data, you start a change-synchronization Extract group to extract ongoing data changes while the database utility makes and applies a static copy of the data. When the copy is finished, you start the change-synchronization Replicat group to re-synchronize rows that were changed while the copy was being applied. From that point forward, both Extract and Replicat continue running to maintain data synchronization. This method does not involve any special initial-load Extract or Replicat processes.
To load data with a database utility
1. On the source and target systems, run GGSCI and start the Manager process.
START MANAGER
2. On the source system, start change extraction.
START EXTRACT <group name>
Where: <group name> is the name of the Extract group.
3. On the source system, start making the copy.
4. Wait until the copy is finished and record the time of completion.
5. View the Replicat parameter file to make certain that the HANDLECOLLISIONS parameter is listed. If not, add the parameter with the EDIT PARAMS command.
VIEW PARAMS <group name>
EDIT PARAMS <group name>
Where: <group name> is the name of the Replicat group.
6. On the target system, start change replication.
START REPLICAT <group name>
Where: <group name> is the name of the Replicat group.
7. On the target system, issue the following command to verify the status of change replication.
INFO REPLICAT <group name>
8. Continue to issue the INFO REPLICAT command until you have verified that change replication has posted all of the change data that was generated during the initial load. Reference the time of completion that you recorded. For example, if the copy stopped at 12:05, make sure change replication has posted data up to that point.
9. On the target system, issue the following command to turn off the HANDLECOLLISIONS parameter and disable the initial-load error handling.
SEND REPLICAT <group name>, NOHANDLECOLLISIONS
10. On the target system, edit the Replicat parameter file to remove the HANDLECOLLISIONS parameter. This prevents HANDLECOLLISIONS from being enabled again the next time Replicat starts.
EDIT PARAMS <group name>
11. Save and close the parameter file.
From this point forward, Oracle GoldenGate continues to synchronize data changes.
Â
Loading data from file to Replicat
To use Replicat to establish the target data, you use an initial-load Extract to extract source records from the source tables and write them to an extract file in canonical format. From the file, an initial-load Replicat loads the data using the database interface. During the load, the change-synchronization groups extract and replicate incremental changes, which are then reconciled with the results of the load. During the load, the records are applied to the target database one record at a time, so this method is considerably slower than any of the other initial load methods. This method permits data transformation to be done on either the source or target system.
To load data from file to Replicat
1. On the source and target systems, run GGSCI and start Manager.
START MANAGER
2. On the source system, issue the following command to create an initial-load Extract parameter file.
EDIT PARAMS <initial-load Extract name>
3. Enter the parameters listed below in the order shown, starting a new line for each parameter statement.
4. Save and close the parameter file.
Initial-load Extract parameters for loading data from file to Replicat
SOURCEISTABLE
Designates Extract as an initial-load process that extracts records directly from the source tables.

[SOURCEDB <dsn>,] [USERID <user> [, PASSWORD <password>]]
Specifies database connection information. These parameters also allow for authentication at the operating-system level.
SOURCEDB specifies a data source name, if required in the connection information. Not required for Oracle.
USERID specifies database credentials, if required. For Oracle, you can include a host string, for example:
USERID ggs@ora1.ora, PASSWORD ggs123

RMTHOST <hostname>, MGRPORT <portnumber>
Specifies the target system and the port where Manager is running.

RMTFILE <full pathname> [, MAXFILES <n>, MEGABYTES <n>]
Specifies the extract file to which the load data will be written. Oracle GoldenGate creates this file during the load. Checkpoints are not maintained with RMTFILE. Use the MAXFILES and MEGABYTES options to control the size.
<full pathname> is the fully qualified name of the file.
MAXFILES creates a series of files that are aged as needed; use it if the file could exceed the operating system's file size limitations. MEGABYTES designates the size of each file.

TABLE <owner>.<table>;
Specifies a source table or tables for initial data extraction.
<owner> is the schema name.
<table> is the name of the table or a group of tables defined with wildcards. To exclude tables from a wildcard specification, use the TABLEEXCLUDE parameter.
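Putting these together, a minimal sketch of a complete initial-load Extract parameter file (the credentials, host, port, file path, and table names are illustrative only):

SOURCEISTABLE
USERID ggs, PASSWORD ggs123
RMTHOST target1, MGRPORT 7809
RMTFILE /ggs/dirdat/initld, MAXFILES 10, MEGABYTES 2000
TABLE hr.*;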
5. On the target system, issue the following command to create an initial-load Replicat parameter file.
EDIT PARAMS <initial-load Replicat name>
6. Enter the parameters listed below in the order shown, starting a new line for each parameter statement.
Initial-load Replicat parameters for loading data from file to Replicat
SPECIALRUN
Implements the initial-load Replicat as a one-time run that does not use checkpoints.

END RUNTIME
Directs the initial-load Replicat to terminate when the load is finished.

[TARGETDB <dsn>,] [USERID <user> [, PASSWORD <password>]]
Specifies database connection information. These parameters also allow for authentication at the operating-system level.
TARGETDB specifies a data source name, if required in the connection information. Not required for Oracle.
USERID specifies database credentials, if required. For Oracle, you can include a host string, for example:
USERID ggs@ora1.ora, PASSWORD ggs123

EXTFILE <full pathname> | EXTTRAIL <full pathname>
Specifies the extract file specified with the Extract parameter RMTFILE.
<full pathname> is the fully qualified name of the file.
Use EXTTRAIL only if you used the MAXFILES option of the RMTFILE parameter in the Extract parameter file.

{SOURCEDEFS <full pathname>} | ASSUMETARGETDEFS
Specifies how to interpret data definitions.
Use SOURCEDEFS if the source and target tables have different definitions; specify the source-definitions file generated by DEFGEN.
Use ASSUMETARGETDEFS if the source and target tables have the same definitions.

MAP <owner>.<table>, TARGET <owner>.<table>;
Specifies a relationship between a source and target table or tables.
<owner> is the schema name.
<table> is the name of a table or a wildcard definition for multiple tables. To exclude tables from a wildcard specification, use the MAPEXCLUDE parameter.
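A minimal sketch of the matching initial-load Replicat parameter file (names are illustrative only; EXTTRAIL is used because the Extract sketch above specified MAXFILES):

SPECIALRUN
END RUNTIME
USERID ggs, PASSWORD ggs123
EXTTRAIL /ggs/dirdat/initld
ASSUMETARGETDEFS
MAP hr.*, TARGET hr.*;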
7. Save and close the file.
8. On the source system, start change extraction.
START EXTRACT <group name>
9. From the directory where Oracle GoldenGate is installed on the source system, start the initial-load Extract.
$ /<GoldenGate directory>/extract paramfile dirprm/<initial-load Extract name>.prm reportfile <report file path>
Where: <initial-load Extract name> is the name of the initial-load Extract that you used when creating the parameter file, and <report file path> is the fully qualified name of the Extract report file.
10. Verify the progress and results of the initial extraction by viewing the Extract report file, using the operating system's standard method for viewing files.
11. Wait until the initial extraction is finished.
12. On the target system, start the initial-load Replicat.
$ /<GoldenGate directory>/replicat paramfile dirprm/<initial-load Replicat name>.prm reportfile <report file path>
Where: <initial-load Replicat name> is the name of the initial-load Replicat that you used when creating the parameter file, and <report file path> is the fully qualified name of the Replicat report file.
13. When the initial-load Replicat is finished running, verify the results by viewing the Replicat report file, using the operating system's standard method for viewing files.
14. On the target system, start change replication.
START REPLICAT <group name>
15. On the target system, issue the following command to verify the status of change replication.
INFO REPLICAT <group name>
16. Continue to issue the INFO REPLICAT command until you have verified that Replicat posted all of the change data that was generated during the initial load. For example, if the initial-load Extract stopped at 12:05, make sure Replicat posted data up to that point.
17. On the target system, issue the following command to turn off the HANDLECOLLISIONS parameter and disable the initial-load error handling.
SEND REPLICAT <group name>, NOHANDLECOLLISIONS
18. On the target system, edit the Replicat parameter file to remove the HANDLECOLLISIONS parameter. This prevents HANDLECOLLISIONS from being enabled again the next time Replicat starts.
EDIT PARAMS <group name>
19. Save and close the parameter file.
From this point forward, Oracle GoldenGate continues to synchronize data changes.
Loading data from file to database utility
To use a database bulk-load utility, you use an initial-load Extract to extract source records from the source tables and write them to an extract file in external ASCII format. The file can be read by Oracle's SQL*Loader, Microsoft's BCP, DTS, or SQL Server Integration Services (SSIS) utility, or IBM's Load Utility (LOADUTIL). During the load, the change-synchronization groups extract and replicate incremental changes, which are then reconciled with the results of the load. As part of the load procedure, Oracle GoldenGate uses the initial-load Replicat to create the run and control files required by the database utility. Any data transformation must be performed by the initial-load Extract on the source system, because the control files are generated dynamically and cannot be pre-configured with transformation rules.
To load data from file to database utility
1. Make certain to satisfy "Prerequisites for initial load".
2. On the source and target systems, run GGSCI and start Manager.
START MANAGER
3. On the source system, issue the following command to create an initial-load Extract parameter file.
EDIT PARAMS <initial-load Extract name>
4. Enter the parameters listed below in the order shown, starting a new line for each parameter statement.
Initial-load Extract parameters for loading from file to database utility
SOURCEISTABLE
Designates Extract as an initial-load process that extracts records directly from the source tables.

[SOURCEDB <dsn>,] [USERID <user> [, PASSWORD <password>]]
Specifies database connection information. These parameters also allow for authentication at the operating-system level.
SOURCEDB specifies a data source name, if required in the connection information. Not required for Oracle.
USERID specifies database credentials, if required. For Oracle, you can include a host string, for example:
USERID ggs@ora1.ora, PASSWORD ggs123

RMTHOST <hostname>, MGRPORT <portnumber> [, PARAMS -E -d <source definitions file>]
Specifies the target system and the port where Manager is running.
-E converts ASCII to EBCDIC.
-d <source definitions file> specifies the source-definitions file.
The PARAMS clause is necessary when loading with IBM's Load Utility, because Oracle GoldenGate will need to refer to the source-definitions file.

RMTFILE <full pathname> [, MAXFILES <n>, MEGABYTES <n>]
Specifies the extract file to which the load data will be written. Oracle GoldenGate creates this file during the load. Checkpoints are not maintained with RMTFILE.
<full pathname> is the fully qualified name of the file.
MAXFILES creates a series of files that are aged as needed; use it if the file could exceed the operating system's file size limitations. MEGABYTES designates the size of each file.

FORMATASCII, {BCP | SQLLOADER}
Directs output to be formatted as ASCII text rather than the default canonical format.
BCP is used for BCP, DTS, or SSIS.
SQLLOADER is used for Oracle SQL*Loader or IBM Load Utility.

TABLE <owner>.<table>;
Specifies a source table or tables for initial data extraction.
<owner> is the schema name.
<table> is the name of the table or a group of tables defined with wildcards. To exclude tables from a wildcard specification, use the TABLEEXCLUDE parameter.
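Putting these together, a minimal sketch of an initial-load Extract parameter file for a SQL*Loader-format load (names are illustrative only; FORMATASCII is placed before RMTFILE so that it governs the format of that file):

SOURCEISTABLE
USERID ggs, PASSWORD ggs123
RMTHOST target1, MGRPORT 7809
FORMATASCII, SQLLOADER
RMTFILE /ggs/dirdat/initld, MAXFILES 10, MEGABYTES 2000
TABLE hr.*;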
5. Save and close the parameter file.
6. On the target system, issue the following command to create an initial-load Replicat parameter file.
EDIT PARAMS <initial-load Replicat name>
7. Enter the parameters listed below in the order shown, starting a new line for each parameter statement.
Initial-load Replicat parameters for loading from file to database utility
GENLOADFILES
Generates the run and control files for the database utility.

[TARGETDB <dsn>,] [USERID <user> [, PASSWORD <password>]]
Specifies database connection information. These parameters also allow for authentication at the operating-system level.
TARGETDB specifies a data source name, if required in the connection information. Not required for Oracle.
USERID specifies database credentials, if required. For Oracle, you can include a host string, for example:
USERID ggs@ora1.ora, PASSWORD ggs123

EXTFILE <full pathname> | EXTTRAIL <full pathname>
Specifies the extract file specified with the Extract parameter RMTFILE.
<full pathname> is the fully qualified name of the file.
Use EXTTRAIL only if you used the MAXFILES option of the RMTFILE parameter in the Extract parameter file.

{SOURCEDEFS <full pathname>} | ASSUMETARGETDEFS
Specifies how to interpret data definitions.
Use SOURCEDEFS if the source and target tables have different definitions; specify the source-definitions file generated by DEFGEN.
Use ASSUMETARGETDEFS if the source and target tables have the same definitions.

MAP <owner>.<table>, TARGET <owner>.<table>;
Specifies a relationship between a source and target table or tables.
<owner> is the schema name.
<table> is the name of a table or a wildcard definition for multiple tables. To exclude tables from a wildcard specification, use the MAPEXCLUDE parameter.
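A minimal sketch of the matching initial-load Replicat parameter file that generates the run and control files (names are illustrative only):

GENLOADFILES
USERID ggs, PASSWORD ggs123
EXTTRAIL /ggs/dirdat/initld
ASSUMETARGETDEFS
MAP hr.*, TARGET hr.*;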
8. Save and close the parameter file.
9. On the source system, start change extraction.
START EXTRACT <group name>
10. From the directory where Oracle GoldenGate is installed on the source system, start the initial-load Extract.
$ /<GoldenGate directory>/extract paramfile dirprm/<initial-load Extract name>.prm reportfile <report file path>
11. Verify the progress and results of the initial extraction by viewing the Extract report file, using the operating system's standard method for viewing files.
12. Wait until the initial extraction is finished.
13. On the target system, start the initial-load Replicat.
$ /<GoldenGate directory>/replicat paramfile dirprm/<initial-load Replicat name>.prm reportfile <report file path>
14. When the initial-load Replicat is finished running, verify the results by viewing the Replicat report file, using the operating system's standard method for viewing files.
15. Using the ASCII-formatted extract files and the run and control files created by the initial-load Replicat, load the data with the database utility.
16. Wait until the load into the target tables is complete.
17. On the target system, start change replication.
START REPLICAT <group name>
18. On the target system, issue the following command to verify the status of change replication.
INFO REPLICAT <group name>
19. Continue to issue the INFO REPLICAT command until you have verified that Replicat posted all of the change data that was generated during the initial load. For example, if the initial-load Extract stopped at 12:05, make sure Replicat posted data up to that point.
20. On the target system, issue the following command to turn off the HANDLECOLLISIONS parameter and disable the initial-load error handling.
SEND REPLICAT <group name>, NOHANDLECOLLISIONS
21. On the target system, edit the Replicat parameter file to remove the HANDLECOLLISIONS parameter. This prevents HANDLECOLLISIONS from being enabled again the next time Replicat starts.
EDIT PARAMS <group name>
22. Save and close the parameter file.
From this point forward, Oracle GoldenGate continues to synchronize data changes.
Loading data with an Oracle GoldenGate direct load
To use an Oracle GoldenGate direct load, you run an Oracle GoldenGate initial-load Extract to extract the source records and send them directly to an initial-load Replicat task. A task is started dynamically by the Manager process and does not require the use of a Collector process or file. The initial-load Replicat task delivers the load in large blocks to the target database. Transformation and mapping can be done by Extract, Replicat, or both. During the load, the change-synchronization groups extract and replicate incremental changes, which are then reconciled with the results of the load.
NOTE This method does not support extraction of LOB or LONG data. As an alternative, see "Loading data from file to Replicat" or "Loading data from file to database utility".
You can control which port is used by Replicat by specifying the DYNAMICPORTLIST parameter in the Manager parameter file. When starting a process such as Replicat, Manager first looks for a port defined with DYNAMICPORTLIST. If no ports are listed, Manager chooses a port number by incrementing from its own port number until a port is available.
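For example, a minimal sketch of the relevant Manager parameter file entries (the port numbers are illustrative only):

PORT 7809
DYNAMICPORTLIST 7840-7850

With this configuration, Manager assigns the dynamically started Replicat task a port from the 7840-7850 range.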
To load data with an Oracle GoldenGate direct load
1. Make certain to satisfy "Prerequisites for initial load".
2. On the source and target systems, run GGSCI and start Manager.
START MANAGER
3. On the source system, issue the following command to create the initial-load Extract.
ADD EXTRACT <initial-load Extract name>, SOURCEISTABLE
Where: <initial-load Extract name> is the name of the initial-load Extract, up to eight characters. SOURCEISTABLE designates Extract as an initial-load process that reads complete records directly from the source tables. Do not use any of the other ADD EXTRACT service options or datasource arguments.
4. On the source system, issue the following command to create an initial-load Extract parameter file.
EDIT PARAMS <initial-load Extract name>
5. Enter the parameters listed below in the order shown, starting a new line for each parameter statement.
Initial-load Extract parameters for Oracle GoldenGate direct load
EXTRACT <initial-load Extract name>
Specifies the initial-load Extract that you created.

[SOURCEDB <dsn>,] [USERID <user> [, PASSWORD <password>]]
Specifies database connection information. These parameters also allow for authentication at the operating-system level.
SOURCEDB specifies a data source name, if required in the connection information. Not required for Oracle.
USERID specifies database credentials, if required. For Oracle, you can include a host string, for example:
USERID ggs@ora1.ora, PASSWORD ggs123

RMTHOST <hostname>, MGRPORT <portnumber>
Specifies the target system and the port where Manager is running.

RMTTASK REPLICAT, GROUP <initial-load Replicat name>
Directs Manager on the target system to dynamically start the initial-load Replicat as a one-time task.
<initial-load Replicat name> is the name of the initial-load Replicat group.

TABLE <owner>.<table>;
Specifies a source table or tables for initial data extraction.
<owner> is the schema name.
<table> is the name of the table or a group of tables defined with wildcards. To exclude tables from a wildcard specification, use the TABLEEXCLUDE parameter.
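Putting these together, a minimal sketch of a complete direct-load Extract parameter file (the group, credential, host, and table names are illustrative only):

EXTRACT initext
USERID ggs, PASSWORD ggs123
RMTHOST target1, MGRPORT 7809
RMTTASK REPLICAT, GROUP initrep
TABLE hr.*;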
6. Save and close the file.
7. On the target system, issue the following command to create the initial-load Replicat task.
ADD REPLICAT <initial-load Replicat name>, SPECIALRUN
Where: <initial-load Replicat name> is the name of the initial-load Replicat task. SPECIALRUN identifies the initial-load Replicat as a one-time run, not a continuous process.
8. On the target system, issue the following command to create an initial-load Replicat parameter file.
EDIT PARAMS <initial-load Replicat name>
9. Enter the parameters listed below in the order shown, starting a new line for each parameter statement.
Initial-load Replicat parameters for Oracle GoldenGate direct load
REPLICAT <initial-load Replicat name>
Specifies the initial-load Replicat task to be started by Manager. Use the name that you specified when you created the initial-load Replicat task in step 7.

[TARGETDB <dsn>,] [USERID <user> [, PASSWORD <password>]]
Specifies database connection information. These parameters also allow for authentication at the operating-system level.
TARGETDB specifies a data source name, if required in the connection information. Not required for Oracle.
USERID specifies database credentials, if required. For Oracle, you can include a host string, for example:
USERID ggs@ora1.ora, PASSWORD ggs123

{SOURCEDEFS <full pathname>} | ASSUMETARGETDEFS
Specifies how to interpret data definitions.
Use SOURCEDEFS if the source and target tables have different definitions; specify the source-definitions file generated by DEFGEN.
Use ASSUMETARGETDEFS if the source and target tables have the same definitions.

MAP <owner>.<table>, TARGET <owner>.<table>;
Specifies a relationship between a source and target table or tables.
<owner> is the schema name.
<table> is the name of a table or a wildcard definition for multiple tables. To exclude tables from a wildcard specification, use the MAPEXCLUDE parameter.
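A minimal sketch of the matching initial-load Replicat parameter file (names are illustrative only):

REPLICAT initrep
USERID ggs, PASSWORD ggs123
ASSUMETARGETDEFS
MAP hr.*, TARGET hr.*;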
10. Save and close the parameter file.
11. On the source system, start change extraction.
START EXTRACT <group name>
12. On the source system, start the initial-load Extract.
START EXTRACT <initial-load Extract name>
13. On the target system, issue the following command to find out if the load is finished. Wait until the load is finished before going to the next step.
VIEW REPORT <initial-load Replicat name>
14. On the target system, start change replication.
START REPLICAT <group name>
15. On the target system, issue the following command to verify the status of change replication.
INFO REPLICAT <group name>
16. Continue to issue the INFO REPLICAT command until you have verified that Replicat posted all of the change data that was generated during the initial load. For example, if the initial-load Extract stopped at 12:05, make sure Replicat posted data up to that point.
17. On the target system, issue the following command to turn off the HANDLECOLLISIONS parameter and disable the initial-load error handling.
SEND REPLICAT <group name>, NOHANDLECOLLISIONS
18. On the target system, edit the Replicat parameter file to remove the HANDLECOLLISIONS parameter. This prevents HANDLECOLLISIONS from being enabled again the next time Replicat starts.
EDIT PARAMS <group name>
19. Save and close the parameter file.
From this point forward, Oracle GoldenGate continues to synchronize data changes.
Loading data with a direct bulk load to SQL*Loader
To use Oracle's SQL*Loader utility to establish the target data, you run an Oracle GoldenGate initial-load Extract to extract the source records and send them directly to an initial-load Replicat task. A task is a process that is started dynamically by the Manager process and does not require the use of a Collector process or file. The initial-load Replicat task interfaces with the API of SQL*Loader to load data as a direct-path bulk load. Data mapping and transformation can be done by the initial-load Extract, the initial-load Replicat, or both. During the load, the change-synchronization groups extract and replicate incremental changes, which are then reconciled with the results of the load. You can control which port is used by Replicat by specifying the DYNAMICPORTLIST parameter in the Manager parameter file. When starting a process such as Replicat, Manager first looks for a port defined with DYNAMICPORTLIST. If no ports are listed, Manager chooses a port number by incrementing from its own port number until a port is available.
Limitations:
◠This method only works with Oracle’s SQL*Loader. Do not use it for other databases.
â— This method does not support extraction of LOB or LONG data. As an alternative, see “Loading data from file to Replicat†or “Loading data from file to database utilityâ€.
â— This method does not support materialized views that contain LOBs, regardless of their size. It also does not support data encryption.
Â
To load data with a direct bulk load to SQL*Loader
1. Make certain that you have addressed the requirements in "Prerequisites for initial load".
2. (Oracle 9i and later) Grant LOCK ANY TABLE to the Replicat database user on the target Oracle database.
3. On the source and target systems, run GGSCI and start Manager.
START MANAGER
4. On the source system, issue the following command to create the initial-load Extract.
ADD EXTRACT <initial-load Extract name>, SOURCEISTABLE
Where: <initial-load Extract name> is the name of the initial-load Extract, up to eight characters. SOURCEISTABLE designates Extract as an initial-load process that reads complete records directly from the source tables. Do not use any of the other ADD EXTRACT service options or datasource arguments.
5. On the source system, issue the following command to create an initial-load Extract parameter file.
EDIT PARAMS <initial-load Extract name>
6. Enter the parameters listed below in the order shown, starting a new line for each parameter statement.
Initial-load Extract parameters for a direct bulk load to SQL*Loader
EXTRACT <initial-load Extract name>
Specifies the initial-load Extract that you created.

[SOURCEDB <dsn>,] [USERID <user> [, PASSWORD <password>]]
Specifies database connection information. These parameters also allow for authentication at the operating-system level.
SOURCEDB specifies a data source name, if required in the connection information. Not required for Oracle.
USERID specifies database credentials, if required. For Oracle, you can include a host string, for example:
USERID ggs@ora1.ora, PASSWORD ggs123

RMTHOST <hostname>, MGRPORT <portnumber>
Specifies the target system and the port where Manager is running.

RMTTASK REPLICAT, GROUP <initial-load Replicat name>
Directs Manager on the target system to dynamically start the initial-load Replicat as a one-time task.
<initial-load Replicat name> is the name of the initial-load Replicat group.

TABLE <owner>.<table>;
Specifies a table or tables for initial data extraction.
<owner> is the schema name.
<table> is the name of the table or a group of tables defined with wildcards. To exclude tables from a wildcard specification, use the TABLEEXCLUDE parameter.
7. Enter any appropriate optional parameters.
8. Save and close the file.
9. On the target system, issue the following command to create the initial-load Replicat.
ADD REPLICAT <initial-load Replicat name>, SPECIALRUN
Where: <initial-load Replicat name> is the name of the initial-load Replicat task. SPECIALRUN identifies the initial-load Replicat as a one-time task, not a continuous process.
10. On the target system, issue the following command to create an initial-load Replicat parameter file.
EDIT PARAMS <initial-load Replicat name>
11. Enter the parameters listed below in the order shown, starting a new line for each parameter statement.
Initial-load Replicat parameters for direct load to SQL*Loader
REPLICAT <initial-load Replicat name>
Specifies the initial-load Replicat task to be started by Manager. Use the name that you specified when you created the initial-load Replicat task in step 9.

USERID <user>, PASSWORD <password>
Specifies the user ID and password to be used by the initial-load Replicat for connecting to the Oracle target database. You can include a host string, for example:
USERID ggs@ora1.ora, PASSWORD ggs123
This parameter also allows for authentication at the operating-system level.

BULKLOAD
Directs Replicat to interface directly with the Oracle SQL*Loader interface.

{SOURCEDEFS <full pathname>} | ASSUMETARGETDEFS
Specifies how to interpret data definitions.
Use SOURCEDEFS if the source and target tables have different definitions; specify the source-definitions file generated by DEFGEN.
Use ASSUMETARGETDEFS if the source and target tables have the same definitions.

MAP <owner>.<table>, TARGET <owner>.<table>;
Specifies a relationship between a source and target table or tables.
<owner> is the schema name.
<table> is the name of a table or a wildcard definition for multiple tables. To exclude tables from a wildcard specification, use the MAPEXCLUDE parameter.
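A minimal sketch of a complete initial-load Replicat parameter file for the direct bulk load (names are illustrative only):

REPLICAT initrep
USERID ggs, PASSWORD ggs123
BULKLOAD
ASSUMETARGETDEFS
MAP hr.*, TARGET hr.*;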
12. Save and close the parameter file.
13. On the source system, start change extraction.
START EXTRACT <group name>
14. On the source system, start the initial-load Extract.
START EXTRACT <initial-load Extract name>
WARNING Do not start the initial-load Replicat. The Manager process starts it automatically and terminates it when the load is finished.
15. On the target system, issue the following command to determine when the load is finished. Wait until the load is finished before proceeding to the next step.
VIEW REPORT <initial-load Replicat name>
16. On the target system, start change replication.
START REPLICAT <group name>
17. On the target system, issue the following command to verify the status of change replication.
INFO REPLICAT <group name>
18. Continue to issue the INFO REPLICAT command until you have verified that Replicat posted all of the change data that was generated during the initial load. For example, if the initial-load Extract stopped at 12:05, make sure Replicat posted data up to that point.
19. On the target system, issue the following command to turn off the HANDLECOLLISIONS parameter and disable the initial-load error handling.
SEND REPLICAT <group name>, NOHANDLECOLLISIONS
20. On the target system, edit the Replicat parameter file to remove the HANDLECOLLISIONS parameter. This prevents HANDLECOLLISIONS from being enabled again the next time Replicat starts.
EDIT PARAMS <group name>
21. Save and close the parameter file.
From this point forward, Oracle GoldenGate continues to synchronize data changes.
exachk is located either in /home/oracle/exachk or, in later versions, /opt/oracle.SupportTools/exachk.

exachk supports the following -profile values:
asm            ASM checks
clusterware    clusterware checks
dba            DBA checks
storage        storage server checks
switch         switch checks
sysadmin       sysadmin checks
virtual_infra  all OVS checks, all control VM checks, all NTP-related checks, and the stale VNICs check (Exalogic only)
exachk can be executed with the following arguments:
-o             Argument to an option. If -o is followed by v, V, verbose, or VERBOSE, exachk prints checks that pass on the screen; if -o is not specified, it prints only failures. For example: exachk -a -o v
-r             Exclude High Availability best practices checks (see the user guide for more details).
-clusternodes  Pass comma-separated node names to run exachk only on selected nodes.
-cells         Pass comma-separated storage server names to run exachk only on selected storage servers.
-ibswitches    Pass comma-separated InfiniBand switch names to run exachk only on selected InfiniBand switches.
-localonly     Run exachk only on the local node.
-nopass        Do not print passed checks in the HTML report.
-diff [-outfile <file>]  Diff two exachk reports. Pass a directory name, zip file, or HTML report file for each of the two reports.
-c             Used only under the guidance of Oracle Support or development to override default components.
-profile       Pass a specific profile (see the list of supported profiles above).
-a             All (perform the best practice check and the recommended patch check).
-b             Best practice check only; no recommended patch check.
-h             Show usage.
-v             Show version.
-p             Patch check only.
-m             Exclude checks for the Maximum Availability Architecture scorecard (see the user guide for more details).
-u             Run exachk to check pre-upgrade or post-upgrade best practices; -o pre or -o post is mandatory with -u, for example: ./exachk -u -o pre
-f             Run offline; checks are performed on data already collected from the customer system.
exachk version 2.2.1 uses the "nm2user" userid for IB switch validation. If you wish to have exachk use the "root" userid for IB switch validation (the original pre-2.2.1 behavior), set the RAT_IBSWITCH_USER environment variable.
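For example, a minimal sketch in shell syntax (set the variable before invoking exachk):

$ export RAT_IBSWITCH_USER=root
$ ./exachk -a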
Typically, there is very little or no screen output following the clusterwide checks banner. The last screen output to appear is the file reference information, as shown below:

Detailed report (html) -
/home/oracle/exachk_215/20120524/exachk_dbm_053012_102825/exachk_dbm_053012_102825.html

UPLOAD (if required) -
/home/oracle/exachk_215/20120524/exachk_dbm_053012_102825.zip
If the diskgroup SYSTEMDG is not mounting (because of multiple disk failures), the following procedure can be used to recreate it. Follow this procedure only if you are NOT able to mount the SYSTEMDG diskgroup even with the FORCE option.
Sometimes, if one of the disks is missing, the diskgroup will not mount and will report an error about the missing disk; if all the other members are available, you can mount the diskgroup with the FORCE option and later drop the missing disk. Once the disk is replaced, you can add it back to the diskgroup.
Fix
In this note we assume that the OCR and voting disk are on the SYSTEMDG diskgroup and that its failure to mount prevents CRS and CSS from coming up.
1. You should have a backup of the OCR file. Check the following location on all the DB nodes and pick the latest backup of the OCR:
$GRID_HOME/crsdata//backup.ocr
Copy this file to the node from where you will be performing the steps given in this note.
2. Start the CRS in exclusive mode. Note that you will not be able to start the CRS in normal mode, since the diskgroup (SYSTEMDG) on which the OCR exists does not exist or is not mounted.
-- For 11.2.0.1:
# crsctl start crs -excl
# crsctl stop res ora.crsd -init
-- For 11.2.0.2 onwards:
# crsctl start crs -excl -nocrs
3. Find out all the disks which were in the SYSTEMDG diskgroup:
# dcli -g cell_group -l root "cellcli -e list griddisk where name like \'.*SYSTEM.*\' "
SYSTEMDG_CD_02_dmorlcel01    active
SYSTEMDG_CD_03_dmorlcel01    active
SYSTEMDG_CD_04_dmorlcel01    active
SYSTEMDG_CD_05_dmorlcel01    active
SYSTEMDG_CD_06_dmorlcel01    active
SYSTEMDG_CD_07_dmorlcel01    active
SYSTEMDG_CD_08_dmorlcel01    active
SYSTEMDG_CD_09_dmorlcel01    active
SYSTEMDG_CD_10_dmorlcel01    active
SYSTEMDG_CD_11_dmorlcel01    active
Execute the above command from any of the DB nodes, where the cell_group file has the IPs or names of all the cell nodes. Ensure that all the griddisks are usable in status, which means the status should not be 'not present'.
4. Recreate the diskgroup using the following command. Log in to the ASM instance and execute it.
First find out the compatible parameter setting for the diskgroup: go to the ASM alert.log file and check for the following entry for this disk group:
"NOTE: Instance updated compatible.asm to 11.2.0.2.0 for grp <group number>"
Specify the same value in the command below.
SQL> create diskgroup SYSTEMDG normal redundancy disk 'o/*/SYSTEMDG_CD*' force attribute 'compatible.rdbms'='11.2', 'compatible.asm'='11.2', 'au_size'='4M', 'cell.smart_scan_capable'='TRUE';
If you replaced any disk after SYSTEMDG was dismounted, you may get an error stating that that particular disk does not require the FORCE option. This is because all the other disks still have the ASM header, while the new disk is clean, has no header, and so does not require the force option. In this case, temporarily drop the griddisk from the cell:
cellcli> drop griddisk <griddisk name> force
After this, re-execute the diskgroup creation command; this time it will not detect the newly inserted disk (since we have dropped it) and will create the diskgroup with the remaining disks. Once you finish the steps given in this note, you can recreate this griddisk and add it to ASM (steps 11 and 12).
5. cd to the directory containing the OCR backup, then restore it:
# ocrconfig -restore backup00.ocr
6. Start the CRS daemon (only needed in 11.2.0.1):
# crsctl start res ora.crsd -init
7. Recreate the voting disk:
# crsctl replace votedisk +SYSTEMDG
8. Recreate the ASM spfile (only if the ASM spfile was on the SYSTEMDG diskgroup).
Open the ASM alert log, go to the location where the ASM instance was last started successfully before SYSTEMDG got dismounted, and find the ASM parameters:

processes                 = 1000
large_pool_size           = 16M
instance_type             = "asm"
cluster_interconnects     = "192.168.10.20"
memory_target             = 1040M
remote_login_passwordfile = "EXCLUSIVE"
asm_diskstring            = "o/*/*"
asm_diskgroups            = "DATA"
asm_diskgroups            = "RECO"
asm_power_limit           = 1024
diagnostic_dest           = "/u01/app/oracle"

Certain parameters, for example cluster_interconnects, are instance specific, so you need to open the ASM alert.log from all the nodes and find the correct value of such parameters in order to prepare the pfile.
After collecting all the parameters, prepare the final pfile, which will look like the following:

*.processes                 = 1000
*.large_pool_size           = 16M
*.instance_type             = "asm"
+ASM1.cluster_interconnects = "192.168.10.20"
+ASM2.cluster_interconnects = "192.168.10.21"
*.memory_target             = 1040M
*.remote_login_passwordfile = "EXCLUSIVE"
*.asm_diskstring            = "o/*/*"
+ASM1.asm_diskgroups        = "DATA"
+ASM1.asm_diskgroups        = "RECO"
*.asm_power_limit           = 4
*.diagnostic_dest           = "/u01/app/oracle"

Now the SPFILE can be created using this PFILE:
$ sqlplus / as sysasm
SQL> create spfile='+SYSTEMDG' from pfile='/tmp/asmpfile.ora';
9. Stop the CRS:
# $GRID_HOME/bin/crsctl stop crs -f
10. Start the CRS:
# $GRID_HOME/bin/crsctl start crs
Execute the following steps 11 and 12 only if you dropped any griddisk in step 4.
11. If in step 4 you had to drop any griddisk (because of the newly inserted disk), create it once again:
cellcli> create griddisk <griddisk name> celldisk=<celldisk name>
For example:
cellcli> create griddisk SYSTEMDG_CD_02_dmorlcel01 celldisk=CD_02_dmorlcel01
12. Add this griddisk to the SYSTEMDG diskgroup. Log in to the ASM instance:
$ sqlplus / as sysasm
SQL> alter diskgroup SYSTEMDG add disk 'o/192.168.10.6/SYSTEMDG_CD_02_dmorlcel01';
Procedure
This section demonstrates the resizing procedure in detail.
1. Ensure prerequisites are met
- FREE_MB > REQUIRED_MIRROR_FREE_MB in V$ASM_DISKGROUP for the disk group that will shrink (typically RECO); ideally, FREE_MB > 2 * REQUIRED_MIRROR_FREE_MB (USED_MB may be a more realistic measure; verify for your system), which cuts the number of rebalances in half by doing an ADD/DROP operation simultaneously for each cell, except the first and last cells. On the first cell you will do a DROP disk operation, and on the last one you will do an ADD disk operation.
- If the RECO disk group is configured as high redundancy, you can do this procedure on a Half Rack or greater. You will not be able to do this procedure on a Quarter Rack, because ASM will not allow you to drop a failure group such that only one copy of the data remains.
- All disk groups are backed up and you have tested the recovery strategy.
- The load on the system will not inhibit rebalancing activity at the desired power level.
- All DCLI commands should use the celladmin user on the cells as a best practice. This may require establishing equivalency to the celladmin user from the oracle user on the DB nodes.
- Run the Exachk script (MOS note 1070954.1) one or two weeks before doing this procedure to ensure the system is healthy.
2. Calculate the RECO and DATA disk group and grid disk sizing changes
In this step we assume that you have properly estimated the space needed for the objects to be stored in RECO (without considering redundancy), e.g., the number of archivelogs you want to store. This value is known as USABLE_MB and is expressed in megabytes. The system in the examples below was a quarter-rack installation containing three cells, for a total of 36 grid disks in each of the DATA_A and RECO_A disk groups. We are using the DATA_A and RECO_A disk groups instead of the customary DATA and RECO disk groups to make our testing flexible and not interfere with the other disk groups (this also demonstrates that the procedure will work for any two disk groups).
We will first obtain information about the current disk group sizes.
a. Query 1: Find the amount of free space in the disk groups:

select group_number, name, type, total_mb, free_mb, required_mirror_free_mb, usable_file_mb, (total_mb - free_mb) used_mb
from v$asm_diskgroup
order by group_number;

GROUP_NUMBER NAME         TYPE     TOTAL_MB   FREE_MB REQUIRED_MIRROR_FREE_MB USABLE_FILE_MB   USED_MB
------------ ------------ ------ ---------- --------- ----------------------- -------------- ---------
...
           4 DATA_A_DM01  NORMAL    6659712   6577052                 2219904        2178574     82660
           5 RECO_A_DM01  NORMAL   10742976  10721816                 3580992        3570412     21160

For our example here, we will choose to resize DATA_A_DM01 and RECO_A_DM01. Other disk groups won't be affected and hence aren't shown.
We need to determine the number of cells and disks that are being used; this is accomplished by the following queries.
b. Query 2: Find miscellaneous information about the ASM disks:

SQL> SELECT dg.name, count(1) "Num Disks"
FROM v$asm_disk d, v$asm_diskgroup dg
WHERE d.group_number = dg.group_number
GROUP BY dg.name;

NAME         Num Disks
------------ ---------
DATA_A_DM01         36
DATA_DM01           36
DBFS_DG             30
RECO_A_DM01         36
RECO_DM01           36

SQL> SELECT d.group_number, dg.name, d.os_mb
FROM v$asm_disk d, v$asm_diskgroup dg
WHERE d.group_number = dg.group_number
AND dg.name IN ('DATA_A_DM01','RECO_A_DM01')
ORDER BY dg.name;
...
           4 DATA_A_DM01   184992
           4 DATA_A_DM01   184992
           5 RECO_A_DM01   298416
           5 RECO_A_DM01   298416
           5 RECO_A_DM01   298416
c. Query 3: Find the number of failure groups (cells):

SELECT dg.name DG_NAME, count(distinct failgroup) NUM_FAILGROUPS
FROM v$asm_disk d, v$asm_diskgroup dg
WHERE d.group_number = dg.group_number
AND dg.name like 'RECO%'
GROUP BY dg.name;

DG_NAME      NUM_FAILGROUPS
------------ --------------
RECO_A_DM01               3
For this specific example, based on our needs, we have determined that RECO_A requires 1 TB of usable file space (instead of the 3.5 TB it currently has), and we will resize the DATA_A and RECO_A disk groups accordingly. This should give DATA_A an increase of about 2.5 TB, for a total of about 4.6 TB of usable free space.
We decide to reserve space in case of cell failure instead of disk failure, so we will use this formula to calculate the total size of the RECO disk group we will need (see the discussion in the Exadata Database Machine Owner's Guide, Chapter 7, Resizing Storage Grid Disks):

RECO_TOTAL_MB = 2 * C / (C - 1) * USABLE_MB

Using the above queries, we know the following:
C = NUM_FAILGROUPS (in query 3) = 3
D = DISK_COUNT (in query 2) in the RECO and DATA disk groups = 36
We apply the formulas, with our chosen value of USABLE_MB = 1 TB:

RECO_TOTAL_MB = 2 * C / (C - 1) * USABLE_MB = 2 * 3/2 * 1048576 = 3145728 MB
RECO_GD_MB = CEIL(RECO_TOTAL_MB / D / 16) * 16 = CEIL(3145728 / 36 / 16) * 16 = 87392 MB

Note: The CEIL() function rounds up to the next largest integer; it is used here to round up to the next 16 MB boundary.

DATA_GD_MB = (DATA_OS_MB + RECO_OS_MB) - RECO_GD_MB = (184992 + 298416) - 87392 = 396016 MB
New DATA disk group total size = DATA_GD_MB * D = 396016 * 36 = 14256576 MB ~ 14 TB
As a sanity check, we look at the existing grid disk definitions for RECO_A:

$ dcli -g cell_group -l celladmin "cellcli -e list griddisk attributes name,size where name like \'RECO_A.*\'"
dm01cel04: RECO_A_CD_01_dm01cel04   291.421875G
...
dm01cel06: RECO_A_CD_12_dm01cel06   291.421875G

The current size of the RECO_A grid disks agrees with the value of OS_MB in query 2 (291.421875G = 298416 MB). These will be re-created with the size of 87392 MB we calculated above.
This command will be used to find the DBFS_DG starting offset to use when we recreate the RECO grid disks (this will likely be different in your environment):

$ dcli -g cell_group -l celladmin "cellcli -e list griddisk attributes name,offset" | grep -i dbfs_dg
dm01cel04: DBFS_DG_CD_03_dm01cel04   854.9375G
...
dm01cel06: DBFS_DG_CD_12_dm01cel06   854.9375G

We also look at the existing grid disk definitions for DATA_A:

$ dcli -g cell_group -l celladmin "cellcli -e list griddisk attributes name,size where name like \'DATA_A.*\'"
dm01cel04: DATA_A_CD_01_dm01cel04   180.65625G
...
dm01cel06: DATA_A_CD_12_dm01cel06   180.65625G

The current size of the DATA_A grid disks agrees with the value of OS_MB in query 2. These grid disks will be resized to 396016 MB. We sanity-check our calculations here to ensure we are growing, not shrinking, the DATA_A grid disks (396016 > 184992).
3. DROP the ASM disks from the RECO disk group on the first cell
First, ensure no rebalancing operations are occurring by running this query (run this and all subsequent SQL statements shown in the examples below in an ASM instance); no rows should return from this query:

SQL> SELECT * FROM gv$asm_operation;

If no rebalancing operations are occurring, drop the ASM disks from the RECO disk group for one cell (in this example we have cells dm01cel04 - dm01cel06 in a split rack):

SQL> alter diskgroup RECO_A_DM01 drop disks in failgroup DM01CEL04 rebalance power 32 NOWAIT;

In the example above we use a power level of 32, with the assumption that ASM is running version 11.2.0.2 or higher. Your choice of power level depends on how much workload is on the system at the time of the rebalance operation, how quickly you wish the rebalance to finish, and how much higher power levels will impact the workload. We do not recommend rebalance power levels higher than 32. If there is application activity that is sensitive to I/O performance, you may want to start with a small rebalance power and increase it (using the ALTER DISKGROUP command) to higher power levels until the application begins to be impacted.

4. Wait for the rebalance to finish
Ensure no rebalancing operations are occurring by running this query; no rows should return:

SQL> SELECT * FROM gv$asm_operation;

Note: Rebalancing involves a "compaction" phase after the file extents are rebalanced and redundancy is established. This compaction phase may take some time, and its progress is not reported in gv$asm_operation. Please be patient while this activity proceeds. To monitor progress, see MOS note 1274322.1 for the script in the entry "Check ASM rebalance forward progress if you suspect a problem".
5. Drop the RECO grid disks on the cell
a. Ensure that the ASM disks were properly dropped:

column header_status heading HSTATUS format A15
column name format A35
column path format A55
set lines 200
set pages 100

SQL> SELECT group_number, path, failgroup, header_status, mount_status
FROM v$asm_disk
WHERE failgroup = 'DM01CEL04'
ORDER BY header_status, path;

PATH                                       FAILGROUP   HSTATUS  MOUNT_S
------------------------------------------ ----------- -------- -------
o/192.168.227.203/RECO_A_CD_01_dm01cel04   DM01CEL04   FORMER   CLOSED
o/192.168.227.203/RECO_A_CD_02_dm01cel04   DM01CEL04   FORMER   CLOSED
...
o/192.168.227.203/RECO_A_CD_11_dm01cel04   DM01CEL04   FORMER   CLOSED
o/192.168.227.203/RECO_A_CD_12_dm01cel04   DM01CEL04   FORMER   CLOSED

The GROUP_NUMBER should be zero if the disks were successfully dropped (usually you will also see the HEADER_STATUS as "FORMER" and the MOUNT_STATUS as "CLOSED"); otherwise, do not proceed until you investigate the state of the disks further.

b. Verify the grid disks are in the proper status to proceed:

$ dcli -c dm01cel04 -l celladmin "cellcli -e list griddisk attributes name,asmmodestatus,asmdeactivationoutcome"
RECO_A_CD_01_dm01cel04   UNUSED   Yes
...
RECO_A_CD_12_dm01cel04   UNUSED   Yes

If the ASMMODESTATUS is "UNUSED" and the ASMDEACTIVATIONOUTCOME is "Yes", then it is safe to proceed. Otherwise, investigate further and correct before continuing.

c. Drop the grid disks from the cell.
From the first DB node, run the following DCLI command as the oracle user (be sure the cell name is correct!):

$ dcli -c dm01cel04 -l celladmin "cellcli -e drop griddisk all harddisk prefix=RECO_A"

NOTE: You may need to add the FORCE option to the "drop griddisk" command if the cell doesn't allow the command to succeed and you are sure that the ASM disks on the specific cell were successfully dropped in steps 3 and 4 above.
6. Recreate the RECO grid disks on the cell at the smaller size
We will need the size and offset calculations that were done in step 2. For this particular example:
- The DBFS_DG disk group has its starting offset at 854.9375 GB = 875456 MB
- We calculated the new size for the RECO_A grid disks as 87392 MB
Therefore, the starting offset for the RECO_A grid disks will be: 875456 - 87392 = 788064 MB. This makes the RECO_A grid disks contiguous with the DBFS_DG grid disks on the inner part of the celldisk.
From the first DB node, run the following DCLI command as the oracle user to create the RECO_A grid disks:

$ dcli -c dm01cel04 -l celladmin "cellcli -e CREATE GRIDDISK ALL HARDDISK PREFIX='RECO_A', offset=788064M, size=87392M"

Verify the grid disks were created properly and contiguous with DBFS_DG:

$ dcli -c dm01cel04 -l celladmin "cellcli -e list griddisk attributes name, offset, size"

Verify the free space now available in the celldisk to use for the DATA_A grid disks in a later step (there should be only one "offset, size" pair listed per celldisk):

$ dcli -c dm01cel04 -l celladmin "cellcli -e list celldisk attributes name,freespace,freespacemap"
dm01cel04: CD_01_dm01cel04   291.421875G   ((offset=769.59375G,size=291.421875G))
7. Add the RECO ASM disks from the cell to the RECO disk group and drop the RECO ASM disks from the next cell
If sufficient free space exists in the RECO disk group (twice the amount of REQUIRED_MIRROR_FREE_MB), then you can run this command:

SQL> alter diskgroup RECO_A_DM01 drop disks in failgroup DM01CEL05 add disk 'o/*/RECO_A*dm01cel04' rebalance power 32 NOWAIT;

Otherwise, you'll need to run the add and drop commands separately:

SQL> alter diskgroup RECO_A_DM01 add disk 'o/*/RECO_A*dm01cel04' rebalance power 32 NOWAIT;

Wait for the rebalance to finish successfully:

SQL> SELECT * FROM gv$asm_operation;

SQL> alter diskgroup RECO_A_DM01 drop disks in failgroup DM01CEL05 rebalance power 32 NOWAIT;
8. Wait for the rebalance to finish
Ensure no rebalancing operations are occurring by running this query; no rows should return:

SQL> SELECT * FROM gv$asm_operation;

Verify the ASM disks were dropped and the grid disks are ready to be dropped, as in steps 5a and 5b above. Ensure the dm01cel04 RECO_A ASM disks were added back successfully with HEADER_STATUS="MEMBER" and MOUNT_STATUS="CACHED", and that the dm01cel05 RECO ASM disks are now unused:

PATH                                       FAILGROUP   HSTATUS  MOUNT_S
------------------------------------------ ----------- -------- -------
o/192.168.227.203/RECO_A_CD_01_dm01cel04   DM01CEL04   MEMBER   CACHED
o/192.168.227.203/RECO_A_CD_02_dm01cel04   DM01CEL04   MEMBER   CACHED
...
o/192.168.227.204/RECO_A_CD_01_dm01cel05   DM01CEL05   FORMER   CLOSED
o/192.168.227.204/RECO_A_CD_02_dm01cel05   DM01CEL05   FORMER   CLOSED
9. Repeat steps 5 through 8 for the remaining cells
Be careful to change the name of the cells and failure groups as you copy and execute commands on the various cells.
Note: On the final cell, we only need to add the ASM disks back into the disk group with the command:
SQL> alter diskgroup RECO_A_DM01 add disk 'o/*/RECO_A*dm01cel06' rebalance power 32 NOWAIT;
10. Wait for Rebalance to Finish
Ensure no rebalancing operations are occurring by running this query; no rows should return:
SQL> SELECT * FROM gv$asm_operation;
Verify ASM disks were added properly using the query in step 5a above (be sure to change the failgroup name); the HEADER_STATUS should be "MEMBER" and the MOUNT_STATUS should be "CACHED":
PATH                                      FAILGROUP   HSTATUS  MOUNT_S
----------------------------------------- ----------- -------- -------
o/192.168.227.205/RECO_A_CD_01_dm01cel06  DM01CEL06   MEMBER   CACHED
o/192.168.227.205/RECO_A_CD_02_dm01cel06  DM01CEL06   MEMBER   CACHED
When rebalancing is done, you may proceed to the next step.
11. Resize all DATA grid disks up to the desired size
Now that all RECO_A grid disks have been recreated to a smaller size, there should be sufficient free space in every celldisk to extend the DATA_A grid disks.
a. Confirm available space:
$ dcli -g cell_group -l celladmin "cellcli -e list celldisk attributes name,freespace"
dm01cel04: CD_01_dm01cel04   206.078125G
…
b. Confirm the current size of the DATA_A grid disks:
$ dcli -g cell_group -l celladmin "cellcli -e list griddisk attributes name,size"
dm01cel04: DATA_A_CD_01_dm01cel04   180.65625G
…
c. Calculate the new size for the DATA_A grid disks:
Current size = 180.65625G
Free space = 206.078125G
New size = current size + free space = 180.65625G + 206.078125G = 386.734375G = 396016 MB
We note that this is exactly what we calculated in step 2 for "DATA GD_MB", so we will proceed; any discrepancies should be investigated.
d. Execute the DATA_A grid disk resizing commands:
Hint: You can obtain a list of disks to resize using this command (substitute the group number of the DATA_A disk group):
SQL> select name||',' from v$asm_disk where group_number = <group_number> order by path;

$ dcli -c dm01cel04 -l celladmin "cellcli -e alter griddisk DATA_A_CD_01_DM01CEL04, DATA_A_CD_02_DM01CEL04, DATA_A_CD_03_DM01CEL04, DATA_A_CD_04_DM01CEL04, DATA_A_CD_05_DM01CEL04, DATA_A_CD_06_DM01CEL04, DATA_A_CD_07_DM01CEL04, DATA_A_CD_08_DM01CEL04, DATA_A_CD_09_DM01CEL04, DATA_A_CD_10_DM01CEL04, DATA_A_CD_11_DM01CEL04, DATA_A_CD_12_DM01CEL04 size=396016M"
This operation is immediate and can be done online. After running this on all cells (change the -c parameter and grid disk names accordingly), verify the DATA_A grid disks are indeed larger on all cells:
$ dcli -g cell_group -l root "cellcli -e list griddisk attributes name,offset,size"
Verify that all free space on the celldisks is allocated to the grid disks:
$ dcli -g cell_group -l celladmin "cellcli -e list celldisk attributes name,freespace,freespacemap"
…
dm01cel06: CD_01_dm01cel06   0
dm01cel06: CD_02_dm01cel06   0
e. Verify the ASM disk sizes match the grid disk sizes for DATA_A:
SELECT name, failgroup, os_mb FROM v$asm_disk
WHERE group_number = (SELECT group_number FROM v$asm_diskgroup
                      WHERE name = 'DATA_A_DM01')
ORDER BY failgroup;
Note: Executing this query will refresh ASM's cache and ensure the next step succeeds.
12. Resize all DATA ASM disks up to the same size as the grid disks in the previous step
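Before running the resize, a quick cross-check that every DATA_A disk reports the expected OS size can prevent the size mismatch the caution below warns about (a sketch; os_mb should equal the grid disk size set in step 11):
SQL> SELECT failgroup, COUNT(*) AS disks, MIN(os_mb) AS min_os_mb, MAX(os_mb) AS max_os_mb
     FROM v$asm_disk
     WHERE group_number = (SELECT group_number FROM v$asm_diskgroup
                           WHERE name = 'DATA_A_DM01')
     GROUP BY failgroup
     ORDER BY failgroup;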
ALTER DISKGROUP DATA_A_DM01 RESIZE ALL SIZE 396016M REBALANCE POWER 32 NOWAIT;
This will trigger a rebalance.
CAUTION: Be sure to set the size for the ASM disks to EXACTLY the same size as the grid disks created and verified in step 11.c; failure to match the sizes could corrupt the disk group.
13. Wait for Rebalance to Finish
Ensure no rebalancing operations are occurring by running this query; no rows should return:
SQL> SELECT * FROM gv$asm_operation;
14. Verify all sizes
When the rebalance is finished, verify that the disk group sizes match what was calculated in step 2:
GROUP_NUMBER NAME         TYPE     TOTAL_MB   FREE_MB REQUIRED_MIRROR_FREE_MB USABLE_FILE_MB  USED_MB
------------ ------------ ------ ---------- --------- ----------------------- -------------- --------
…
           4 DATA_A_DM01  NORMAL   14256576  14173916                 4752192        4710862    82660
           5 RECO_A_DM01  NORMAL    3146112   3124952                 1048704        1038124    21160
We confirm that the DATA_A_DM01 disk group grew to approximately 14 TB and RECO_A_DM01 shrank to about 3 TB.
15. End of Procedure
DISABLING WRITE-BACK FLASH CACHE (SETTING FLASHCACHEMODE TO WRITETHROUGH)
NOTE - For the "list griddisk attributes name, asmmodestatus, asmdeactivationoutcome" check in the last step for each cell, ensure that the status is ONLINE (not SYNCING) before moving on to the next storage server.
The following query, run with "sqlplus / as sysasm", is useful for checking that you have 12 disks per cell for disk groups 1 and 3 (DATA and RECO) and 10 for disk group 2 (DBFS):
select group_number, substr(path,1,15), count(1) from v$asm_disk group by group_number, substr(path,1,15);
1) Check griddisk status by verifying that the griddisk attribute "asmdeactivationoutcome" = "Yes" for all griddisks on this cell.
The following command should return no rows; do not proceed if any griddisk is returned:
dcli -g /home/oracle/cell_group -l root cellcli -e "list griddisk where asmdeactivationoutcome != 'Yes' attributes name,asmmodestatus,asmdeactivationoutcome"
2) Determine the number of bytes to be flushed by checking how much dirty data is in the flash cache.
This provides the number of bytes that need to be de-staged to disk per cell, which gives an indication of how long the flush will take.
dcli -g /home/oracle/cell_group -l root cellcli -e "list metriccurrent attributes name,metricvalue where name like \'FC_BY_DIRTY.*\' "
Flush the flash cache across all cells.
To perform this step, it is recommended to have two separate sessions: one to execute the command below and the other to monitor its progression in the next step.
Issue the following command to begin the flush in one of the two sessions:
dcli -g /home/oracle/cell_group -l root cellcli -e "alter flashcache all flush"
If any errors occur, they will be displayed in this session; otherwise, this session will show a successful flush across all cells.
3) Check the flush status across all cells
a. Execute the following command every few minutes in the second session to monitor the progress.
As dirty blocks are de-staged to disk, this count will reduce to zero (0).
This will take some time, and you can derive a time estimate by executing the following command repeatedly:
dcli -g /home/oracle/cell_group -l root cellcli -e "list metriccurrent attributes name,metricvalue where name like \'FC_BY_DIRTY.*\' "
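If you prefer not to re-type the command, a simple loop such as this re-samples the counters once a minute (a sketch; the interval is arbitrary):
while true
do
  dcli -g /home/oracle/cell_group -l root cellcli -e "list metriccurrent attributes name,metricvalue where name like \'FC_BY_DIRTY.*\' "
  sleep 60
done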
b. The following command should return "working" for each flash disk on each cell while the cache is being flushed, and "completed" when it is finished.
Execute the following command in the second session:
dcli -g /home/oracle/cell_group -l root cellcli -e "LIST CELLDISK ATTRIBUTES name, flushstatus, flusherror" | grep FD
The following steps are to be executed individually on each cell, one at a time.
All steps that must be performed directly on a cell use the cellcli utility.
Log onto the first cell that will have the write-back flash cache disabled.
4) Drop the flash cache for this cell after the flush completes
cellcli -e drop flashcache
5) Inactivate all griddisks on the cell
cellcli -e alter griddisk all inactive
6) Shut down the cellsrv service
cellcli -e alter cell shutdown services cellsrv
7) Reset the cell flash cache state to writethrough
cellcli -e "alter cell flashCacheMode=writethrough"
8) Restart the cellsrv service
cellcli -e alter cell startup services cellsrv
9) Reactivate the griddisks on the cell
cellcli -e alter griddisk all active
10) Recreate the flash cache
cellcli -e create flashcache all
11) Check the status of this cell's flash cache state
cellcli -e list cell detail | grep flashCacheMode
12) Check the griddisks of the cell
Before moving on to the next cell, check the attribute "asmModestatus" of all of the griddisks and make sure they are all "ONLINE" and the attribute "asmdeactivationoutcome" is set to "Yes".
It may be necessary to execute the following command several times until the "asmModestatus" shows "ONLINE".

cellcli -e list griddisk attributes name,asmmodestatus,asmdeactivationoutcome
========================================================
Checking on all nodes:
dcli -g /home/oracle/cell_group -l root cellcli -e "list cell detail" | grep flashCacheMode
node01-mgmt: flashCacheMode:        writethrough
node02-mgmt: flashCacheMode:        writethrough
node03-mgmt: flashCacheMode:        writethrough
node04-mgmt: flashCacheMode:        writethrough
node05-mgmt: flashCacheMode:        writethrough
node06-mgmt: flashCacheMode:        writethrough
node07-mgmt: flashCacheMode:        writethrough
RESTORING THE OCR AND VOTING DISK AFTER LOSING THE CRS DISK GROUP
1. Locate the latest automatic OCR backup
When using a non-shared CRS home, automatic OCR backups can be located on any node of the cluster; consequently, all nodes need to be checked for the most recent backup:
$ ls -lrt $CRS_HOME/cdata/rac_cluster1/
-rw------- 1 root root 7331840 Mar 10 18:52 week.ocr
-rw------- 1 root root 7651328 Mar 26 01:33 week_.ocr
-rw------- 1 root root 7651328 Mar 29 01:33 day.ocr
-rw------- 1 root root 7651328 Mar 30 01:33 day_.ocr
-rw------- 1 root root 7651328 Mar 30 01:33 backup02.ocr
-rw------- 1 root root 7651328 Mar 30 05:33 backup01.ocr
-rw------- 1 root root 7651328 Mar 30 09:33 backup00.ocr
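Where the clusterware is still healthy enough to respond, ocrconfig -showbackup also lists the automatic backups together with the node and time they were taken; in this broken-stack scenario, checking the backup directory on each node as above is the reliable approach:
# $CRS_HOME/bin/ocrconfig -showbackup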
2. Make sure the Grid Infrastructure is shut down on all nodes
Given that the OCR disk group is missing, the GI stack will not be functional on any node; however, there may still be various daemon processes running. On each node, shut down the GI stack using the force (-f) option:
# $CRS_HOME/bin/crsctl stop crs -f
3. Start the CRS stack in exclusive mode
On the node that has the most recent OCR backup, log on as root and start CRS in exclusive mode. This mode allows ASM to start and stay up without the presence of a Voting disk and without the CRS daemon process (crsd.bin) running.
11.2.0.1:
# $CRS_HOME/bin/crsctl start crs -excl
…
CRS-2672: Attempting to start 'ora.asm' on 'racnode1'
CRS-2676: Start of 'ora.asm' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'racnode1'
CRS-2676: Start of 'ora.crsd' on 'racnode1' succeeded
Please note: This document assumes that the CRS disk group was completely lost, in which case the CRS daemon (resource ora.crsd) will terminate again due to the inaccessibility of the OCR, even if the above message indicates that the start succeeded.
If this is not the case, i.e. the CRS disk group is still present (but corrupt or incorrect), the CRS daemon needs to be shut down manually using:
# $CRS_HOME/bin/crsctl stop res ora.crsd -init
otherwise the subsequent OCR restore will fail.
11.2.0.2 and above:
# $CRS_HOME/bin/crsctl start crs -excl -nocrs
CRS-4123: Oracle High Availability Services has been started.
…
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'racnode1'
CRS-2672: Attempting to start 'ora.ctssd' on 'racnode1'
CRS-2676: Start of 'ora.drivers.acfs' on 'racnode1' succeeded
CRS-2676: Start of 'ora.ctssd' on 'racnode1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'racnode1'
CRS-2676: Start of 'ora.asm' on 'racnode1' succeeded
IMPORTANT:
A new option '-nocrs' was introduced with 11.2.0.2, which prevents the start of the ora.crsd resource. It is vital that this option is specified; otherwise, the failure to start the ora.crsd resource will tear down ora.cluster_interconnect.haip, which in turn will cause ASM to crash.
4. Label the CRS disk for ASMLIB use
If using ASMLIB, the disk to be used for the CRS disk group needs to be stamped first; as the root user do:
# /usr/sbin/oracleasm createdisk ASMD40 /dev/sdh1
Writing disk header: done
Instantiating disk: done
5. Create the CRS diskgroup via sqlplus
The disk group can now be (re-)created via sqlplus as the grid user. The compatible.asm attribute must be set to 11.2 in order for the disk group to be used by CRS:
$ sqlplus / as sysasm
SQL*Plus: Release 11.2.0.1.0 Production on Tue Mar 30 11:47:24 2010
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Real Application Clusters and Automatic Storage Management options
SQL> create diskgroup CRS external redundancy disk 'ORCL:ASMD40' attribute 'COMPATIBLE.ASM' = '11.2';
Diskgroup created.
SQL> exit
6. Restore the latest OCR backup
Now that the CRS disk group is created and mounted, the OCR can be restored; this must be done as the root user:
# cd $CRS_HOME/cdata/rac_cluster1/
# $CRS_HOME/bin/ocrconfig -restore backup00.ocr
7. Start the CRS daemon on the current node (11.2.0.1 only !)
Now that the OCR has been restored, the CRS daemon can be started; this is needed to recreate the Voting file. Skip this step for 11.2.0.2.0 and above.
# $CRS_HOME/bin/crsctl start res ora.crsd -init
CRS-2672: Attempting to start 'ora.crsd' on 'racnode1'
CRS-2676: Start of 'ora.crsd' on 'racnode1' succeeded
8. Recreate the Voting file
The Voting file needs to be initialized in the CRS disk group:
# $CRS_HOME/bin/crsctl replace votedisk +CRS
Successful addition of voting disk 00caa5b9c0f54f3abf5bd2a2609f09a9.
Successfully replaced voting disk group with +CRS.
CRS-4266: Voting file(s) successfully replaced
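You can confirm the voting file location afterwards with:
# $CRS_HOME/bin/crsctl query css votedisk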
9. Recreate the SPFILE for ASM (optional)
Please note: Starting with 11gR2, ASM can start without a PFILE or SPFILE, so this step can possibly be skipped if you are:
- not using an SPFILE for ASM
- not using a shared SPFILE for ASM
- using a shared SPFILE not stored in ASM (e.g. on a cluster file system)
Also use extra care with regard to the asm_diskstring parameter, as it impacts the discovery of the voting disks.
Please verify the previous settings using the ASM alert log.
Prepare a pfile (e.g. /tmp/asm_pfile.ora) with the ASM startup parameters; these may vary from the example below. If in doubt, consult the ASM alert log, as the ASM instance startup should list all non-default parameter values. Please note that the last startup of ASM (in step 3 via the CRS start) will not have used an SPFILE, so a startup prior to the loss of the CRS disk group would need to be located.
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/oragrid'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'
Now the SPFILE can be created using this PFILE:
$ sqlplus / as sysasm
SQL*Plus: Release 11.2.0.1.0 Production on Tue Mar 30 11:52:39 2010
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Real Application Clusters and Automatic Storage Management options
SQL> create spfile='+CRS' from pfile='/tmp/asm_pfile.ora';
File created.
SQL> exit
10. Shutdown CRS
Since CRS is running in exclusive mode, it needs to be shut down to allow CRS to run on all nodes again. Use of the force (-f) option may be required:
# $CRS_HOME/bin/crsctl stop crs -f
…
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racnode1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
11. Rescan ASM disks
If using ASMLIB, rescan all ASM disks on each node as the root user:
# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "ASMD40"
12. Start CRS
As the root user, submit the CRS startup on all cluster nodes:
# $CRS_HOME/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
13. Verify CRS
To verify that CRS is fully functional again:
# $CRS_HOME/bin/crsctl check cluster -all
**************************************************************
racnode1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
# $CRS_HOME/bin/crsctl status resource -t
…
DYNAMIC RESOURCE MASTERING (DRM) IN RAC
In RAC, every data block is mastered by an instance. Mastering a block simply means that the master instance keeps track of its state. The following query shows the current and previous master and the remaster count for each object:
SELECT INST_ID, OBJECT_NAME, GC_MASTERING_POLICY, CURRENT_MASTER,
       PREVIOUS_MASTER, REMASTER_CNT
FROM   GV$GCSPFMASTER_INFO G, DBA_OBJECTS O
WHERE  G.DATA_OBJECT_ID = O.DATA_OBJECT_ID
ORDER BY INST_ID, REMASTER_CNT;
INST_ID  OBJECT_NAME         GC_MASTERING_POLICY  CURRENT_MASTER  PREVIOUS_MASTER  REMASTER_CNT
-------  ------------------  -------------------  --------------  ---------------  ------------
      1  ZTLOY_MSH_ISU~0     Affinity                          0                2            11
      1  CRMD_ORDER_INDEX    Affinity                          0                3            11
      1  LOYD_MA_GENATTR~ZL  Affinity                          1                0            11
      1  CRMM_BUAG_H~Z03     Affinity                          2                1            11
      1  ZTC_LOY_MSH_BHV~Z0  Affinity                          0                1            12
      1  BUT020~MAD          Affinity                          2                0            12
      1  CRMM_BUAG_H~1       Affinity                          1                0            12
      1  COMD_EVHIST         Affinity                          0                3            12
      1  LOYD_MA_GENATTR~0   Affinity                          0                1            15
      1  LOYD_MA_GENATTR     Affinity                          0                1            17
      1  SNAP                Affinity                          3                0            19
      1  BUT000~FRD          Affinity                          0                1            23
To disable DRM, set the following database parameters:
_gc_policy_time=0
_gc_undo_affinity=FALSE
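A minimal sketch of setting these via the spfile (assuming a shared spfile is in use; both parameters are static, so the instances must be restarted, and underscore parameters should only be changed under guidance from Oracle Support):
SQL> ALTER SYSTEM SET "_gc_policy_time"=0 SCOPE=SPFILE SID='*';
SQL> ALTER SYSTEM SET "_gc_undo_affinity"=FALSE SCOPE=SPFILE SID='*';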
EXADATA CELL WAIT EVENTS
select name from v$event_name where name like '%cell%';
NAME
----------------------------------------------------------------
cell smart table scan
cell smart index scan
cell statistics gather
cell smart incremental backup
cell smart file creation
cell smart restore from backup
cell single block physical read
…
From AWR:
cell single block physical read: 533,424,212 waits in 293,122 seconds = 550 microseconds (!)
per single-block IO. Compare with 'db file sequential read' on non-Exadata systems.
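The same average can be computed directly from v$system_event rather than from an AWR report (a sketch):
SQL> SELECT event, total_waits,
            ROUND(time_waited_micro/total_waits) AS avg_wait_us
     FROM v$system_event
     WHERE event = 'cell single block physical read';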
FLASH CACHE EFFICIENCY BY SEGMENT
select r.object_name, r.value as reads, o.value as freads, o.value/r.value as quality
from
( select owner, object_name, sum(value) as value
  from gv$segment_statistics
  where owner = 'SAPCRM'
  and statistic_name = 'physical reads'
  group by owner, object_name) r,
( select owner, object_name, sum(value) as value
  from gv$segment_statistics
  where owner = 'SAPCRM'
  and statistic_name = 'optimized physical reads'
  group by owner, object_name) o
where
r.owner = o.owner(+) and
r.object_name = o.object_name(+) and
r.value > 10000000
order by r.object_name;
e.g.
OBJECT_NAME                        READS       FREADS  QUALITY
ADR2                          13,417,977    8,024,460     0.59
ADRC                          44,346,940   29,051,952     0.65
BUT000                        53,060,460   51,298,173     0.96
BUT020                        10,392,989   10,380,525     0.99
COMD_EVHIST                   22,815,295      270,896     0.01
COMM_PRODUCT                  15,434,447    2,575,635     0.16
CRMM_BUAG                     10,588,323    6,598,218     0.62
CRM_ICI_TRACES                90,444,640       18,449        0
IBIN                          12,880,422    7,987,396     0.62
LOYD_MSH_MEMS              2,261,610,110    3,364,355        0  ***
LOYD_PT_TXN                   42,122,415   18,280,917     0.43
SCAPPTSEG                     12,080,171      279,222     0.02
SMW3_BDOC2                    14,079,632    2,685,403     0.19
SYS_LOB0000107104C00006$$    225,295,545  224,433,450     0.99
VBDATA                        12,818,971    2,622,517      0.2
ZTC_LOY_MSH_BHV               11,899,581   11,169,939     0.93
ZTC_LOY_MSH_BHVD             102,109,972   44,811,860     0.43
ZTC_LOY_MSH_BHVDRE            22,577,318    9,574,038     0.42
ZTC_LOY_MSH_BHVD~0            12,026,472   12,019,270     0.99
ZTPRO_MI_EVENT                15,444,970       37,786        0
Therefore, queries accessing LOYD_MSH_MEMS, or the table itself, need modifying to improve query performance.
N.B. for full table scans, data is not cached.
FLASH CACHE HIT RATIO PER INSTANCE
select a.inst_id, b.total_io, a.flash_io, a.flash_io/b.total_io as quality
from (SELECT inst_id, value as flash_io FROM GV$SYSSTAT WHERE NAME IN ('cell flash cache read hits')) a,
     (SELECT inst_id, value as total_io FROM GV$SYSSTAT WHERE NAME IN ('physical read total IO requests')) b
where a.inst_id = b.inst_id order by a.inst_id;
INST_ID     TOTAL_IO     FLASH_IO  QUALITY
      1  940,403,933  872,071,164     0.92
      2  139,959,018  114,791,948     0.82
      3   98,157,948   92,874,490     0.94
      4  146,588,414  140,561,659     0.95
STORAGE INDEX SAVINGS
select a.value as all_bytes, b.value as saved_bytes
from ( select sum(value) as value from gv$sysstat where name like 'cell physical IO interconnect bytes') a,
     ( select sum(value) as value from gv$sysstat where name like 'cell physical IO bytes saved by storage index') b;
ALL_BYTES           SAVED_BYTES
71,236,249,253,864  671,241,273,344
OPTIMIZED FILE CREATION SAVINGS
SELECT name, sum(value) as value
FROM GV$SYSSTAT
WHERE name = 'cell physical IO bytes saved during optimized file creation'
group by name;

cell physical IO bytes saved during optimized file creation    343,598,301,184
SMART SCAN EFFICIENCY
SELECT (a.value+b.value)/c.value as efficiency
FROM v$sysstat a, v$sysstat b, v$sysstat c
WHERE a.name = 'cell IO uncompressed bytes'
AND b.name = 'cell physical IO bytes saved by storage index'
AND c.name = 'cell physical IO interconnect bytes returned by smart scan';
EFFICIENCY
----------
45.9
EXTERNAL TABLE WITH A PREPROCESSOR
cd /oracle
cat > get_uptime.sh <<'EOF'
#!/bin/ksh
/usr/bin/uptime
EOF
chmod 777 /oracle/get_uptime.sh

create or replace directory exec_dir as '/oracle';
create or replace directory load_dir as '/oracle';
drop table uptime;
CREATE TABLE uptime
(
  data varchar2(255)
)
ORGANIZATION external
( TYPE oracle_loader
  DEFAULT DIRECTORY load_dir
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    preprocessor exec_dir:'get_uptime.sh'
    FIELDS TERMINATED BY "|" LDRTRIM
  )
  location ('get_uptime.sh')
)
/
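Querying the table now executes the preprocessor, so each select returns the host's current uptime line:
SQL> select data from uptime;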
CLONEDB.PL: CLONING A TEST DATABASE OVER DNFS
# clonedb.pl - This script generates two sql scripts that can be used to
# create your test clones. Run this from your testdb Oracle Home env.
#
# Before running this script make sure the following env variables are set:
#
# MASTER_COPY_DIR - env variable to point to the directory where the
#                   backup/snapshot of your Master database is kept
#
# CLONE_FILE_CREATE_DEST - env variable to point to the directory where
#                          clonedb files will be created, including datafiles,
#                          log files, control files
#
# CLONEDB_NAME - Cloned database name
#
# S7000_TARGET - Set if the nfs host providing the filesystem for the backup
#                and the clones is an S7000 series machine and we wish to
#                employ its cloning features.
#
# perl clonedb.pl <master init.ora> <sqlscript1> <sqlscript2>
#
# Arg1 - Full path of the Master db init.ora file from your production env
# Arg2 - sqlscript1
# Arg3 - sqlscript2
#
# This script copies the init.ora file from your master db env to your
# clonedb env in the CLONE_FILE_CREATE_DEST directory.
#
# After running this script go through the test database parameter file to
# make sure all parameters look correct.
#
# Go through crtdb.sql to make sure the log names are correct.
# If all files look good do the following steps:
#
# sqlplus system/manager
# @crtdb.sql
# @dbren.sql
#
# Now your test database should be available for use.
#
if ($#ARGV != 2) {
print "usage: perl clonedb.pl <master init.ora> <sqlscript1> <sqlscript2>\n";
exit;
}
if (!$ENV{'MASTER_COPY_DIR'}) {
print "MASTER_COPY_DIR env not set. Set this and rerun it again \n";
exit;
}
if (!$ENV{'CLONE_FILE_CREATE_DEST'}) {
print "CLONE_FILE_CREATE_DEST env not set. Set this and rerun it again \n";
exit;
}
if (!$ENV{'CLONEDB_NAME'}) {
print "CLONEDB_NAME env not set. Set this and rerun it again \n";
exit;
}
($orafile)=$ARGV[0];
($sqlfile1)=$ARGV[1];
($sqlfile2)=$ARGV[2];
# Set env variables specific to your clone database
$neworafile = "$ENV{'CLONE_FILE_CREATE_DEST'}/init$ENV{'CLONEDB_NAME'}.ora";
$cldboh = "$ENV{'ORACLE_HOME'}";
$cldbosid = "$ENV{'ORACLE_SID'}";
$cldbname = "$ENV{'CLONEDB_NAME'}";
$cldbctlfl = "$ENV{'CLONE_FILE_CREATE_DEST'}/$ENV{'CLONEDB_NAME'}_ctl.dbf";
$mastercopydir = "$ENV{'MASTER_COPY_DIR'}";
$clonedbdir = "$ENV{'CLONE_FILE_CREATE_DEST'}";
$s7000 = $ENV{S7000_TARGET} ? 1 : 0 ;
# Check that CLONE_FILE_CREATE_DEST exists
if (! opendir(CLONEDIR, $clonedbdir))
{
print("CLONE_FILE_CREATE_DEST directory does not exist.\n");
print("Create this directory and rerun the script \n");
exit;
}
closedir(CLONEDIR);
# Copy the production init.ora, renaming parameters as needed
open (INFILE,$orafile);
open (OUTFILE,">$neworafile");
@skipparam=("instance_name","service_names","diagnostic_dest");
@inparam=("db_name","control_files");
@outparm=($cldbname,$cldbctlfl);
$skiplen = @skipparam;
$inlen = @inparam;
for $ln (<INFILE>)
{
$newln = $ln;
# look for any include files and read their contents
if ($newln =~ "ifile")
{
@lnsp = split("=",$newln);
open(INCFILE, $lnsp[1]);
print OUTFILE "# Copy from $lnsp[1] \n";
for $ln (<INCFILE>)
{
$newln = $ln;
for ($i=0; $i<$skiplen; $i++){
if ($newln =~ /$skipparam[$i]/)
{
$newln="\n";
}
}
for ($i=0; $i<$inlen; $i++){
if ($newln =~ /$inparam[$i]/)
{
@lnsp = split("=",$newln);
$lnsp[1]=$outparm[$i];
$newln=$inparam[$i]."=".$lnsp[1]."\n";
}
}
print OUTFILE "$newln";
}
close INCFILE;
print OUTFILE "# End Copy";
}
else
{
for ($i=0; $i<$skiplen; $i++){
if ($newln =~ /$skipparam[$i]/)
{
$newln="\n";
}
}
for ($i=0; $i<$inlen; $i++){
if ($newln =~ /$inparam[$i]/)
{
@lnsp = split("=",$newln);
$lnsp[1]=$outparm[$i];
$newln=$inparam[$i]."=".$lnsp[1]."\n";
}
}
print OUTFILE "$newln";
}
}
# Add db_create_file_dest and log_archive_dest parameters
print OUTFILE "db_create_file_dest=$clonedbdir\n";
print OUTFILE "log_archive_dest=$clonedbdir\n";
#print OUTFILE "clonedb=TRUE\n";
close INFILE;
close OUTFILE;
# Create the clone db rename-file sql
if (!$s7000)
{
$target=$mastercopydir;
} else {
$target=$clonedbdir;
}
# XXX Needs to be modified to just deal with datafiles.
system ("cd $target; ls -d $target/* >> dnfsa1axxx.log");
system ("cp $target/dnfsa1axxx.log .; rm $target/dnfsa1axxx.log");
open(INPFILE,"dnfsa1axxx.log");
open(INTFILE,">filenamexxx.txt");
open(OUTFILE1,">$sqlfile2");
open(OUTFILE,">dnfsa2axxx.log");
for $ln (<INPFILE>)
{
print INTFILE "$ln";
}
close INTFILE;
close INPFILE;
open(INTFILE,"filenamexxx.txt");
$refline=" ";
for $line (<INTFILE>)
{
$line =~ s/\s+$//;
if ($refline ne " ")
{
print OUTFILE "'"."$refline"."'".", \n";
}
$refline = $line;
}
if ($refline ne " ")
{
print OUTFILE "'"."$refline"."' \n";
}
close INTFILE;
if (!$s7000)
{
print OUTFILE1 "declare \n";
print OUTFILE1 "begin \n";
open(INTFILE,"filenamexxx.txt");
$i = 0;
for $lne (<INTFILE>)
{
$lne =~ s/\s+$//;
print OUTFILE1 "dbms_dnfs.clonedb_renamefile('$lne' , '$clonedbdir/ora_data_$cldbname$i.dbf'); \n";
$i++;
}
close INTFILE;
print OUTFILE1 "end; \n";
print OUTFILE1 "/ \n";
print OUTFILE1 "show errors; \n";
}
print OUTFILE1 "alter database open resetlogs;\n";
# Add a default temp tablespace in the clone env
print OUTFILE1 "drop tablespace TEMP;\n";
print OUTFILE1 "create temporary tablespace TEMP;";
close OUTFILE;
close OUTFILE1;
# Create the create-controlfile script
open(INPFILE1,"dnfsa2axxx.log");
open(INPSQLFILE,">interm.sql");
open (OUTSQLFILE,">$sqlfile1");
print INPSQLFILE ("
SET ECHO ON
SET FEEDBACK 1
SET NUMWIDTH 10
SET LINESIZE 80
SET TRIMSPOOL ON
SET TAB OFF
SET PAGESIZE 100
STARTUP NOMOUNT PFILE=$neworafile
CREATE CONTROLFILE REUSE SET DATABASE $cldbname RESETLOGS
MAXLOGFILES 32
MAXLOGMEMBERS 2
MAXINSTANCES 1
MAXLOGHISTORY 908
LOGFILE
GROUP 1 '$clonedbdir/$cldbname\_log1.log' SIZE 100M BLOCKSIZE 512,
GROUP 2 '$clonedbdir/$cldbname\_log2.log' SIZE 100M BLOCKSIZE 512
DATAFILE
CHARACTER SET WE8DEC; ");
close INPSQLFILE;
open(INPSQLFILE,"interm.sql");
for $ln (<INPSQLFILE>)
{
print OUTSQLFILE "$ln";
if ($ln =~ /DATAFILE/)
{
for $ln0 (<INPFILE1>)
{
print OUTSQLFILE "$ln0";
}
}
}
close OUTSQLFILE;
close INPFILE1;
close INPSQLFILE;
unlink("interm.sql");
unlink("dnfsa1axxx.log");
unlink("dnfsa2axxx.log");
unlink("filenamexxx.txt");
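A sketch of a complete invocation, using hypothetical paths for the environment variables (the three arguments follow the header comments above):
$ export MASTER_COPY_DIR=/nfs/prod_backup         # hypothetical backup location
$ export CLONE_FILE_CREATE_DEST=/u02/clone1       # hypothetical clone destination
$ export CLONEDB_NAME=CLONE1
$ perl clonedb.pl /tmp/initPROD.ora crtdb.sql dbren.sql
$ sqlplus system/manager @crtdb.sql
$ sqlplus system/manager @dbren.sql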