isitdevops.com/databases

Useful computing notes
  • Archive
  • Home
  • Posts

Categories

  • Chef (Cat) (2)
  • Chef Cookbook Tutorials (8)
  • CHEF Full Tutorial (18)
  • Jenkins (Cat) (1)
  • Oracle (354)
    • Oracle 11g (Sect) (309)
      • Oracle 11.2 Notes (Cat) (18)
      • Oracle ASM (Cat) (1)
      • Oracle Data Guard Notes (Cat) (45)
      • Oracle DB and SQL performance analysis (Cat) (67)
      • Oracle DB Objects (tables, sql etc) (Cat) (56)
      • Oracle Exadata (Cat) (21)
      • Oracle Flashback Notes (Cat) (12)
      • Oracle Golden Gate (Cat) (11)
      • Oracle odds and ends (Cat) (39)
      • Oracle oui and dbca (Cat) (4)
      • Oracle RAC (Cat) (24)
      • Oracle Rman Notes (Cat) (10)
      • Oracle sql one liners (Cat) (1)
    • Oracle 12c (Sect) (45)
      • 12c Cloud control (Cat) (4)
      • 12c Cloud Control Agent (Cat) (2)
      • 12c General (Cat) (19)
      • 12c Performance (Cat) (4)
      • 12c rman (cat) (16)
  • SQL Server (Sect) (31)
    • MS SQL Server (Cat) (31)
  • Uncategorized (7)
  • Unix Notes (Sect) (50)
    • Configuration (Cat) (16)
    • Resource Management (Cat) (18)
    • Unix notes (Cat) (16)


BizTalk database backup job setup

on August 3, 2015 at 2:13 pm
Posted In: MS SQL Server (Cat)

 

If BizTalk Server is being installed as part of the environment build, a separate backup job, specifically for BizTalk, will need to be configured after the BizTalk specialists have deployed the software.

Firstly, identify a suitable Windows user account (usually a domain service account) for running the backup job. Create a SQL Server login for the account (unless it already exists) and then assign it the BTS_BACKUP_USERS role either via the GUI or the following T-SQL:
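If the login does not already exist, it can be created first; a minimal sketch, assuming the same example account name used in the role assignments below:

-- Create the Windows login for the backup account (account name is illustrative)
USE [master]
GO
CREATE LOGIN [Domain\BizTalk_SVC_BKP] FROM WINDOWS
GO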

-- Assign backup role to nominated user
USE [BizTalkDTADb]
EXEC sp_addrolemember 'BTS_BACKUP_USERS', 'Domain\BizTalk_SVC_BKP'

USE [BizTalkMgmtDb]
EXEC sp_addrolemember 'BTS_BACKUP_USERS', 'Domain\BizTalk_SVC_BKP'

USE [BizTalkMsgBoxDb]
EXEC sp_addrolemember 'BTS_BACKUP_USERS', 'Domain\BizTalk_SVC_BKP'

USE [BizTalkRuleEngineDb]
EXEC sp_addrolemember 'BTS_BACKUP_USERS', 'Domain\BizTalk_SVC_BKP'

USE [SSODB]
EXEC sp_addrolemember 'BTS_BACKUP_USERS', 'Domain\BizTalk_SVC_BKP'

Once that’s done, navigate to “SQL Server Agent” in SSMS and expand the “Jobs” folder. Right-click on the “Backup BizTalk Server (BizTalkMgmtDb)” job that’s been created, and click on “Properties”. Update the “Owner” of the job to the user chosen above.

 

Click on the “Steps” page from the menu on the left, highlight the “BackupFull” step and Edit.

 

Update the “Command” string so that the destination path reflects an appropriate directory for hosting the BizTalk backup files. It’s usually a good idea to use the same disk as the other database backups, but a different parent directory so that any existing maintenance plans with “Maintenance Cleanup Tasks” do not interfere with purging of the BizTalk backup files.

As an example, the following directory is used here: \MSSQL\BizTalk_Backups

 

Repeat the steps above for the “MarkAndBackupLog” step of the job.

 

Update the “Clear Backup History” step with an appropriate retention period for the backup history. The default of 14 days is usually acceptable, but given that we usually only keep 2 days’ worth of backups online, this can be reflected here also.
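For reference, this step calls the history purge procedure that BizTalk creates in the management database; a sketch of the command with the retention reduced to 2 days (assuming the standard job definition, adjust @DaysToKeep to suit):

exec [dbo].[sp_DeleteBackupHistory] @DaysToKeep=2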

 

NOTE: The above step does not purge the physical backup files. An additional step (below) will be added to take care of this.

For the same step, click on the “Advanced” option and update the “On success action” to “Go to the next step”. Then change the “On failure action” to “Quit the job reporting failure”.

 

From the “Job step list” page, click “New”.

 

Create a new step with the following properties, updating the path, server name, and hours’ retention as appropriate to your requirements:

Step Name: Delete Old Backups

Type: PowerShell

Run as: SQL Server Agent Service Account

Command:

$path = '\MSSQL\BizTalk_Backups'
$filter = 'SNAPSQL01*.bak'
$retention = (Get-Date).AddHours(-48)
Get-ChildItem (Join-Path $path $filter) -Recurse | Where-Object { ($_.PSIsContainer -eq $false) -and ($_.LastWriteTime -lt $retention) } | Remove-Item
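Before scheduling the step, the deletion can be dry-run by appending -WhatIf to the Remove-Item call, which lists the files that would be removed without actually deleting anything.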

 

Click on the “Advanced” page. Update the “On success action” to “Quit the job reporting success”. Then change the “On failure action” to “Quit the job reporting failure”.

 

Finally, check the “Job History” after around 30 minutes to ensure the job is completing successfully; by default the job runs every 15 minutes.
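The same check can be scripted against msdb if the GUI is not convenient; a minimal sketch, assuming the default job name created by BizTalk:

-- Recent outcomes for the BizTalk backup job (run_status: 1 = succeeded, 0 = failed)
SELECT TOP 10 j.name, h.run_date, h.run_time, h.run_status
FROM msdb.dbo.sysjobhistory h
JOIN msdb.dbo.sysjobs j ON j.job_id = h.job_id
WHERE j.name = 'Backup BizTalk Server (BizTalkMgmtDb)'
AND h.step_id = 0 -- job outcome rows only
ORDER BY h.instance_id DESC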


Further information around configuration of the BizTalk Backup job can be found in the MSDN article here:

https://msdn.microsoft.com/en-us/library/aa546765.aspx



2008 R2 Reporting services Configuration

on August 3, 2015 at 1:59 pm
Posted In: MS SQL Server (Cat)

 

Start “Reporting Services Configuration Manager” and connect to the Report Server instance. Click on the “Encryption Keys” option from the left hand menu (unless you’re automatically prompted).

 

Click on “Backup” then provide an appropriate path (usually a network share) and password to store the encrypted key should recovery be required at a later date.
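The same backup can also be scripted with the rskeymgmt utility from a command prompt; a sketch, assuming a default SSRS instance (the file path and password are illustrative):

rskeymgmt -e -f \\backupserver\rs_keys\ssrs_key.snk -p StrongPassword1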

 

 


2008 R2 Installation Guide

on August 3, 2015 at 1:54 pm
Posted In: MS SQL Server (Cat)

 

 

A guide to installing SQL Server 2008 R2 for a single instance. In addition, post installation steps have been added where appropriate, to ensure a consistent approach is taken to the configuration of the MSSQL instance.

Included are some basic requirements for SQL Server Reporting Services (SSRS) and BizTalk, which are optional components and not always applicable to the installation.

The installation of Service Pack 3 for SQL Server 2008 R2 has also been summarised, which is the latest Service Pack available at the time of writing this document.

Pre-Installation Requirements

Windows Page File

Ensure that the Windows server has been built with a dedicated disk for the Windows Page File (swap space). Generally, this should be sized at 1.5 times the size of the amount of physical RAM installed on the machine, with a “system managed page file size” configuration. For servers with large amounts of physical RAM, the 1.5 times rule may not be appropriate and should be considered on an individual build basis.

Disk Configuration

Whilst not always possible, the recommended disk configuration to use for a dedicated SQL Server database server is as follows:

Disk Letter   Contains                              Typical Path
C:\           Operating System                      \
D:\           MSSQL software, System DB data/logs   \Program Files\Microsoft SQL Server
E:\           User DB data                          \MSSQL\DATA
F:\           User DB logs                          \MSSQL\LOGS
G:\           Temp DB data                          \MSSQL\Temp_DATA
H:\           Temp DB logs                          \MSSQL\Temp_LOGS
I:\           DB backups                            \MSSQL\Backups

Using the default Allocation Unit Size (4KB) for the backup disk keeps open the option of NTFS-compressing the backup files (<20GB) later to preserve disk space, since NTFS compression is not supported on larger cluster sizes.

The “Bytes Per Cluster” value can be verified per disk by running the following from the command line (with “Administrator” rights):

fsutil fsinfo ntfsinfo L: | find "Bytes Per Cluster"

Bytes Per Cluster : 65536

Further information around disk partition alignment recommendations can be found here:

https://technet.microsoft.com/en-us/library/dd758814(v=sql.100).aspx

Service Account and Security Rights

A domain level SQL Server service account should be created in readiness for the installation, as this will be used to run the various SQL Server services.

Add the SQL Server service account to the server’s local “Administrators” group. In addition, add the same service account to the following “Local Security Policies”:

· Replace a process level token

· Lock Pages in Memory (LPIM) (may not be necessary in Windows Server 2008+ builds)

· Perform Volume Maintenance Tasks

 

If it is not possible to run the services under a domain account, and this is not a Production environment build, then you may consider using “NT AUTHORITY\Network Service”. This is a low-privilege account, but one that retains the computer’s network credentials.

Installation Instructions

From the MSSQL installation kit, run setup.exe and select “Installation” from the side menu. Select “New installation or add features to an existing installation.”

Ensure that the “Setup Support Rules” pass without any issue and continue.

If the product key has been pre-populated, leave as is. Otherwise obtain a valid and appropriate product key and enter it here.

Accept the license terms and continue.

Proceed with the install of the “Setup Support Files” if prompted.

Further “Setup Support Rules” are checked at this stage. The “Windows Firewall” rule will typically flag a warning, which can be ignored. All other checks should pass OK.

Opt for a “SQL Server Feature Installation” and continue.

Feature Selection

Select the features appropriate to the requirements of the installation. Typically the following features would be installed:

· Database Engine Services (no sub-components)

· Reporting Services (optional)

· SQL Server Books Online

· Management Tools – Basic

· Management Tools – Complete

 

Detailed descriptions for each of the features can be found in the Microsoft documentation here:

https://msdn.microsoft.com/en-us/library/ms143786(v=sql.105).aspx

 

Additional “Installation Rules” will perform further pre-requisite checks. Ensure these pass OK and continue.

Instance Configuration

Update the “Instance root directory” path to an appropriate disk (C:\ should be avoided). If a named instance is required, it should be provided here with an appropriate naming convention.

Disk space requirements will be checked next, and any issues highlighted.

Change the “Startup Type” for the “SQL Server Agent” service to Automatic.

An appropriate domain service account should be used if at all possible to run each of the services listed.

If this is not possible, and not a Production environment build, then you may consider using “NT AUTHORITY\Network Service”.

This is a low privilege account, but one that retains the computer’s network credentials.

Make sure the collation is set to “Latin1_General_CI_AS” and continue.

On the “Account Provisioning” tab, ensure that the “Authentication Mode” is set to “Mixed Mode”. Additionally, add in accounts for other DBA logins/groups. These will be granted “sysadmin” rights and therefore have unrestricted access to the SQL Server instance.

 

You will also need to supply a password for the “SA” account (“System Administrator”) which has the same “sysadmin” rights as the listed users, but uses SQL Server authentication. Make sure the password conforms to the current password complexity standard, and is recorded securely.
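Once the instance is up, the chosen authentication mode can be double-checked from T-SQL; a minimal check, where 0 indicates Mixed Mode and 1 indicates Windows authentication only:

SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS [WindowsAuthOnly]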

Update the “Data root directory” to match the location used for the “Instance root directory” path earlier on. In addition, update the remaining Data, Log and Backup directory paths as appropriate.

The drive letters may differ slightly per installation, but if possible, the following directory structures should be used:

Directory        Path
MSSQL software   D:\Program Files\Microsoft SQL Server
User DB data     E:\MSSQL\DATA
User DB logs     F:\MSSQL\LOGS
Temp DB data     G:\MSSQL\Temp_DATA
Temp DB logs     H:\MSSQL\Temp_LOGS
DB backups       I:\MSSQL\Backups

 

 

Leave the FILESTREAM feature disabled, and continue.

If “Reporting Services” was selected from the “Feature Selection”, you’ll be presented with an additional Reporting Services configuration screen. Continue using the “Install the native mode default configuration” option.

Ensure that the “Send Windows and SQL Server Error Reports” option is unticked and continue.

Ensure that the “Installation Configuration Rules” completed OK and continue.

Finally, check the installation summary, specifically the directory paths, features, and edition of SQL Server (usually Standard) to make sure everything is as expected, then click Install.

Once installed, check the log file for any issues, and click Close.

Service Pack 3 Installation

Accept the license terms and continue.

Leave all of the features selected, so that all components are patched to the same level.

Wait for the “Check Files In Use” check to complete. Everything that needs to be stopped should already be stopped at this point, but depending on the output here you may need to explicitly stop applications/services before the patching can continue.

Double check the correct features are listed for patching, then click Update. The “Patch Level” shown indicates the currently installed version. In this example it is 10.50.1600.1, which is SQL Server 2008 R2 RTM (base release).

Once the update has completed, click Close.

To verify the Service Pack has been applied, you can log into SQL Server Management Studio (SSMS) and the version number will be displayed to the right of the instance name. Alternatively, you can run the following T-SQL:

SELECT CAST(SUBSTRING(@@version ,0, CHARINDEX(CHAR(10), @@version )) AS CHAR(70)) AS [Version]

Version

———————————————————————

Microsoft SQL Server 2008 R2 (SP3) – 10.50.6000.34 (X64)
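Alternatively, SERVERPROPERTY returns the same details without any string parsing; a minimal sketch:

SELECT SERVERPROPERTY('ProductVersion') AS [Version],
       SERVERPROPERTY('ProductLevel')   AS [ServicePack],
       SERVERPROPERTY('Edition')        AS [Edition]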

Further information around SQL Server version information can be found here in the Microsoft KB:

http://support.microsoft.com/en-us/kb/321185

A similar process to the above is followed when applying Cumulative Updates.

NOTE: Some patches may require server restarts before/after the update process.

Post Installation Steps

Max Server Memory Settings

By default this is effectively set to unlimited, which can lead to the Operating System, or other SQL Server instances, being starved of physical memory at times, and so it should always be adjusted.

The guide below assumes an x64 installation running SQL Server 2005, 2008, 2008 R2, or 2012:

Physical RAM Installed      SQL Max Server          Physical RAM Remaining
Total GB      Total MB      Memory Setting (MB)     Free MB      Free GB      Remaining %
2             2,048         1,450                   598          0.6          29.2
4             4,096         3,000                   1,096        1.1          26.8
6             6,144         4,800                   1,344        1.3          21.9
8             8,192         6,400                   1,792        1.8          21.9
12            12,288        10,000                  2,288        2.2          18.6
16            16,384        13,500                  2,884        2.8          17.6
24            24,576        21,500                  3,076        3.0          12.5
32            32,768        29,000                  3,768        3.7          11.5
48            49,152        44,000                  5,152        5.0          10.5
64            65,536        60,000                  5,536        5.4          8.4
72            73,728        68,000                  5,728        5.6          7.8
96            98,304        92,000                  6,304        6.2          6.4
128           131,072       124,000                 7,072        6.9          5.4
256           262,144       250,000                 12,144       11.9         4.6
1024          1,048,576     1,029,000               19,576       19.1         1.9

If more than one SQL Server instance (MSSQL) is running on the server, split the allocated SQL Max Server Memory across the instances, appropriate to demands of each environment.

For example, a server with 16GB of physical memory installed running MSSQL and SSRS would typically result in the MSSQL instance being configured to use 12,500MB (the 13,500MB from the table above, minus 1,000MB reserved for SSRS).

This update can either be performed using the GUI or via T-SQL.

From SSMS, right click on the instance and open up the “Properties”. Click on the “Memory” page on the left hand menu, and adjust the “Maximum server memory (in MB)” value as required. Leave the “Minimum server memory (in MB)” value as zero, and accept the changes (no restart is required).

T-SQL:

-- Adjust Max Server Memory
EXEC sys.sp_configure 'show advanced options', '1'
RECONFIGURE WITH OVERRIDE
EXEC sys.sp_configure 'max server memory (MB)', '16500'
RECONFIGURE WITH OVERRIDE
EXEC sys.sp_configure 'show advanced options', '0'
RECONFIGURE WITH OVERRIDE

Database Settings

From SSMS, right click on the instance and open up the “Properties”. Click on the “Database Settings” page on the left hand menu, and tick the “Compress backup” option. Also check that the “Database default locations” paths for the data and log files are correct, although they should already be set from the install.

T-SQL:

-- Enable compression for backups
EXEC sys.sp_configure 'backup compression default', '1'
RECONFIGURE WITH OVERRIDE

Advanced Optimization Settings

From the same instance properties window, click on the “Advanced” page on the left hand menu, and change the “Set Optimize for Ad hoc Workloads” option to “True”. This helps optimize ad-hoc queries by reducing plan cache bloat, and reducing memory pressure.

T-SQL:

-- Set Optimize for Ad hoc Workloads to True
EXEC sys.sp_configure 'show advanced options', '1'
RECONFIGURE WITH OVERRIDE
EXEC sys.sp_configure 'optimize for ad hoc workloads', '1'
RECONFIGURE WITH OVERRIDE
EXEC sys.sp_configure 'show advanced options', '0'
RECONFIGURE WITH OVERRIDE

Database Mail Configuration

There is no current requirement to configure database mail, but this URL explains how to do so:

http://www.snapdba.com/2013/04/enabling-and-configuring-database-mail-in-sql-server-using-t-sql/

Error Logs Retention

From SSMS, navigate to “Management” and then right click on “SQL Server Logs” then “Configure”.

Tick the “Limit the number of error log files before they are recycled” box and change the “Maximum number of error log files” to 62, allowing for 2 months’ worth of logs.

T-SQL:

-- Configure SQL Error Log retention for 62 days (2 months)
EXEC xp_instance_regwrite 'HKEY_LOCAL_MACHINE', 'Software\Microsoft\MSSQLServer\MSSQLServer',
'NumErrorLogs', REG_DWORD, 62

SQL Server Agent Job History

From SSMS, right click on “SQL Server Agent” then “Properties”. Click on the “History” page on the left hand menu and make sure the “Limit size of job history log” box is ticked. Change the “Maximum job history log size (in rows)” to 10000 and the “Maximum job history rows per job” to 100.

T-SQL:

-- Change the SQL agent history retention
EXEC msdb.dbo.sp_set_sqlagent_properties @jobhistory_max_rows=10000,
@jobhistory_max_rows_per_job=100

TempDB configuration

The default TempDB sizing and configuration is usually sufficient for most installations. However, some environments may benefit from splitting the data file into multiple files of equal size, especially if latch contention issues are experienced and TempDB usage is quite high.

Creating multiple TempDB files enables MSSQL to split the I/O transactions across each of the data files, optimizing performance when multiple CPUs perform the SQL operations simultaneously. This can also prevent or alleviate blocking activity.

The advice from Microsoft is such that:

“As a general rule, if the number of logical processors is less than or equal to 8, use the same number of data files as logical processors. If the number of logical processors is greater than 8, use 8 data files and then if contention continues, increase the number of data files by multiples of 4 (up to the number of logical processors) until the contention is reduced to acceptable levels or make changes to the workload/code.”

However, previous experience has shown that using one data file per logical core can sometimes lead to further contention and performance issues. From experience, the best approach is to create equally sized TempDB data files based on half the number of logical processor cores, with a maximum of 8 files.
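As an illustration, the sketch below assumes an 8-core server (8 / 2 = 4 files), adding three data files alongside the existing tempdev file; the path follows the disk layout above, and the names and sizes are illustrative (new files should match the existing file size):

-- Check the logical core count first
SELECT cpu_count FROM sys.dm_os_sys_info

-- Add three further tempdb data files of equal size (names, paths, and sizes are examples)
USE master
GO
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'G:\MSSQL\Temp_DATA\tempdb2.ndf', SIZE = 1024MB, FILEGROWTH = 256MB)
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'G:\MSSQL\Temp_DATA\tempdb3.ndf', SIZE = 1024MB, FILEGROWTH = 256MB)
ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'G:\MSSQL\Temp_DATA\tempdb4.ndf', SIZE = 1024MB, FILEGROWTH = 256MB)
GO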

Further details can be found in the Microsoft KB here:

http://support.microsoft.com/en-us/kb/2154845

Additional information about working with TempDB can be found here:

https://technet.microsoft.com/en-us/library/cc966545.aspx

 


2000 – Move system databases

on August 3, 2015 at 12:43 pm
Posted In: MS SQL Server (Cat)


Master

Update the -d and -l startup parameters for your SQL Server instance with the new location for your master MDF and LDF files (do not move any files at this stage). To do this, open SQL Server Enterprise Manager, right-click the server name and click Properties.

 


Open up the Startup Parameters.

Remove the existing master MDF and LDF entries:

-dD:\MSSQL\MSSQL\Data\master.mdf

-lD:\MSSQL\MSSQL\Data\mastlog.ldf

 


Add the new file locations:

-dC:\MSSQL\Data\master.mdf

-lC:\MSSQL\Data\mastlog.ldf

 


Stop SQL Server (from a Command Prompt):

net stop SQLSERVERAGENT

net stop MSSQLSERVER

Move the master.mdf and mastlog.ldf files from D:\MSSQL\MSSQL\Data to the new location at C:\MSSQL\Data.

Start SQL Server (from a Command Prompt), leaving the SQL Server Agent down:


net start MSSQLSERVER


TempDB

Check the current tempdb file locations:

USE tempdb

GO

EXEC sp_helpfile

GO


Then issue ALTER DATABASE SQL commands to change the file location of the tempdb database files:

USE master

GO

ALTER DATABASE tempdb
MODIFY FILE (NAME = 'tempdev', FILENAME = 'C:\MSSQL\Data\tempdb.mdf');
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = 'templog', FILENAME = 'C:\MSSQL\Data\templog.ldf');
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = 'templog2', FILENAME = 'C:\MSSQL\Data\templog2.ldf');
GO

Restart SQL Server:

net stop MSSQLSERVER

net start MSSQLSERVER

Now delete the old tempdb files (they are recreated in the new location at startup).


Model and msdb

The model database should be moved first, followed by the msdb database.

The order in which these are moved is very important and could prevent the instance from starting if done incorrectly.

Open up the Startup Parameters and add the -T3608 startup parameter to the instance.

Navigate to the msdb database properties, then select the Options tab.

Restrict access to the msdb database by placing it into single-user mode first.

Repeat for the MODEL database.

Both the model and msdb databases should have ‘(Single User)’ appended to their names when viewed in SQL Server Enterprise Manager.

 

 


Restart SQL Server:

 

net stop MSSQLSERVER

net start MSSQLSERVER

You will not be able to access any user databases at this time, and you should not perform any operations other than the steps below while this trace flag is set.

Open SQL Query Analyzer and detach the model, then msdb, databases using the following commands:

USE master
GO
sp_detach_db 'model'
GO
sp_detach_db 'msdb'
GO

Move the MDF and LDF files for the msdb and model databases to the new location.

Attach the model and msdb databases (in that order) using their new file locations:

USE master

GO

sp_attach_db 'model', 'C:\MSSQL\Data\model.mdf', 'C:\MSSQL\Data\modellog.ldf'
GO
sp_attach_db 'msdb', 'C:\MSSQL\Data\msdbdata.mdf', 'C:\MSSQL\Data\msdblog.ldf'

GO

 

 


The order of reattachment is important here – you must deal with model first and then msdb.

Go back to SQL Server Enterprise Manager and remove the -T3608 startup parameter.


Finally, restart SQL Server and bring up the SQL Server Agent:

net stop MSSQLSERVER

net start MSSQLSERVER

net start SQLSERVERAGENT

 

You can also check the file locations for a particular System database at any time by running:

USE <database_name>

GO

EXEC sp_helpfile

GO

 


References:

https://support2.microsoft.com/default.aspx?scid=kb;en-us;224071

 

 


2008 – Move system databases

on August 3, 2015 at 12:30 pm
Posted In: MS SQL Server (Cat)


Get the old file locations first:

SELECT name AS LogicalName, physical_name AS PhysicalPath,
CASE type_desc
WHEN 'LOG' THEN 'Log'
WHEN 'ROWS' THEN 'Data'
ELSE type_desc END AS FileType,
state_desc AS Status
FROM sys.master_files
WHERE database_id < 5 -- System DBs
--WHERE database_id >= 5 -- User DBs
--WHERE database_id = DB_ID('model') -- Specific DB

 

From SQL Server 2008 onwards there is no longer a requirement to use the 3608 trace flag, or to start the instance in single-user mode.

ALTER DATABASE ... MODIFY FILE commands can be used for the model and msdb moves, just like tempdb.


 

Model, msdb, and tempdb

Issue the following ALTER DATABASE T-SQL commands, updating the new file paths as appropriate:

-- e.g. Move model
USE master
GO
ALTER DATABASE model MODIFY FILE (NAME = modeldev, FILENAME = 'C:\SQLData\model.mdf');
GO
ALTER DATABASE model MODIFY FILE (NAME = modellog, FILENAME = 'C:\SQLLogs\modelLog.ldf');
GO

Then repeat the same pattern for the remaining databases:

-- Move msdb
-- Move tempdb

 

Stop SQL Server (from a Command Prompt):

net stop SQLSERVERAGENT

net stop MSSQLSERVER

Move the model.mdf and MSDBData.mdf files from D:\SQLData\ to C:\SQLData\ (do not move tempdb.mdf).

Move the modelLog.ldf and MSDBLog.ldf files from D:\SQLLogs\ to C:\SQLLogs\ (do not move tempLog.ldf).

Delete the tempdb.mdf and tempLog.ldf files from D:\SQLData\ and D:\SQLLogs\; these will be recreated at startup of the SQL instance so do not need to be moved.

Start SQL Server (from a Command Prompt), leaving the SQL Server Agent stopped for now:

net start MSSQLSERVER

 


Master

There is no longer a requirement to place the SQL instance in master recovery mode first.

Start “SQL Server Configuration Manager”, click on the “SQL Server Services” option, right-click on the SQL Server instance and click on “Properties”. Click on the “Advanced” tab.

Edit the “Startup Parameters” by changing the existing master MDF and LDF entries:

-dC:\SQLData\master.mdf;-eC:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\LOG\ERRORLOG;-lC:\SQLLogs\mastlog.ldf


-d is the fully qualified path for the master database data file.

-e is the fully qualified path for the error log file.

-l is the fully qualified path for the master database log file.

Apply the changes, then stop SQL Server (from a Command Prompt):

net stop MSSQLSERVER

Move the master.mdf file from D:\SQLData\ to C:\SQLData\ and the mastlog.ldf file from D:\SQLLogs\ to C:\SQLLogs\.

Restart SQL Server along with the SQL Server Agent:

net start MSSQLSERVER

net start SQLSERVERAGENT

 


You can check the locations of the System database files after the move(s) by running:

SELECT name AS LogicalName, physical_name AS PhysicalPath,
CASE type_desc
WHEN 'LOG' THEN 'Log'
WHEN 'ROWS' THEN 'Data'
ELSE type_desc END AS FileType,
state_desc AS Status
FROM sys.master_files
WHERE database_id < 5 -- System DBs

 

 


Fragmentation Report

on August 3, 2015 at 12:16 pm
Posted In: MS SQL Server (Cat)

DECLARE @DatabaseName VARCHAR(64)
SELECT @DatabaseName = ''   -- set the target database name here

DECLARE @MaxFragmentationPercent DECIMAL
DECLARE @Mode VARCHAR(10)
DECLARE @FillFactor DECIMAL
DECLARE @TableName VARCHAR(255)

DECLARE @Command   VARCHAR(255)
DECLARE @ObjectID  INT
DECLARE @IndexName VARCHAR(255)

-- Create table to hold list of tables in database
CREATE TABLE #TableList ( TableName VARCHAR(255) )

SELECT @Command = 'SELECT t1.Name FROM ' + @DatabaseName + '..sysobjects t1 INNER JOIN ' + @DatabaseName + '..sysindexes t2 ON t1.id = t2.id WHERE t1.type = ''U'' AND t2.indid = 1'

INSERT INTO #TableList
EXEC (@Command)

-- Build cursor containing list of tables in database
DECLARE tables CURSOR FOR
SELECT TableName FROM #TableList

-- Create table to hold fragmentation statistics
CREATE TABLE #fraglist (
ObjectName CHAR (255),
ObjectId INT,
IndexName CHAR (255),
IndexId INT,
Lvl INT,
CountPages INT,
CountRows INT,
MinRecSize INT,
MaxRecSize INT,
AvgRecSize INT,
ForRecCount INT,
Extents INT,
ExtentSwitches INT,
AvgFreeBytes INT,
AvgPageDensity INT,
ScanDensity DECIMAL,
BestCount INT,
ActualCount INT,
LogicalFrag DECIMAL,
ExtentFrag DECIMAL)

OPEN tables

-- Loop through all the tables in the tables cursor
FETCH NEXT FROM tables INTO @TableName

WHILE @@FETCH_STATUS = 0
BEGIN

-- DBCC SHOWCONTIG to get fragmentation stats for indexes on current table
SELECT @Command = 'USE ' + @DatabaseName + '; DBCC SHOWCONTIG (''' + @DatabaseName + '..' + @TableName + ''', 1 ) WITH FAST,
TABLERESULTS, ALL_INDEXES, NO_INFOMSGS'

INSERT INTO #fraglist
EXEC (@Command)

FETCH NEXT FROM tables INTO @TableName
END

CLOSE tables
DEALLOCATE tables

SELECT CAST(ObjectName AS VARCHAR(30)) AS ObjectName, CAST(IndexName AS VARCHAR(40)) AS IndexName, CountPages, ScanDensity, BestCount, ActualCount, LogicalFrag
FROM #fraglist
WHERE CountPages >= 128
AND IndexId != 0
AND IndexId != 255
ORDER BY LogicalFrag DESC

DROP TABLE #TableList
DROP TABLE #fraglist
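On SQL Server 2005 and later, the deprecated DBCC SHOWCONTIG approach above can be replaced with the sys.dm_db_index_physical_stats DMV; a minimal sketch of an equivalent report, run from within the target database:

-- LIMITED is the lightweight scan mode, comparable to WITH FAST above
SELECT OBJECT_NAME(ps.object_id) AS ObjectName, i.name AS IndexName,
ps.page_count AS CountPages, ps.avg_fragmentation_in_percent AS LogicalFrag
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
JOIN sys.indexes i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE ps.page_count >= 128
AND ps.index_id NOT IN (0, 255)
ORDER BY ps.avg_fragmentation_in_percent DESC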


Table Growth

on July 27, 2015 at 2:31 pm
Posted In: MS SQL Server (Cat)

DECLARE @dbname sysname

/* Work with current database if a database name is not specified */

SET @dbname = 'DB1'

SELECT
CONVERT(char, backup_start_date, 111) AS [Date],
CONVERT(char, backup_start_date, 108) AS [Time],
@dbname AS [Database Name],
[filegroup_name] AS [Filegroup Name],
logical_name AS [Logical Filename],
physical_name AS [Physical Filename],
CONVERT(numeric(9,2), file_size/1048576) AS [File Size (MB)],
Growth AS [Growth Percentage (%)]
FROM
(
SELECT    b.backup_start_date, a.backup_set_id, a.file_size, a.logical_name,
a.[filegroup_name], a.physical_name,
(
SELECT    CONVERT(numeric(15,2), ((a.file_size * 100.00)/i1.file_size)-100)
FROM    msdb.dbo.backupfile i1
WHERE    i1.backup_set_id =
(
SELECT    MAX(i2.backup_set_id)
FROM    msdb.dbo.backupfile i2 JOIN msdb.dbo.backupset i3
ON      i2.backup_set_id = i3.backup_set_id
WHERE    i2.backup_set_id < a.backup_set_id AND
i2.file_type = 'D' AND
i3.database_name = @dbname AND
i2.logical_name = a.logical_name AND
i2.logical_name = i1.logical_name AND
i3.type = 'D'
) AND
i1.file_type = 'D'
) AS Growth
FROM    msdb.dbo.backupfile a JOIN msdb.dbo.backupset b
ON a.backup_set_id = b.backup_set_id
WHERE    b.database_name = @dbname AND
a.file_type = 'D' AND
b.type = 'D'

) AS Derived
WHERE (Growth <> 0.0) OR (Growth IS NULL)
ORDER BY logical_name, [Date]


List Table sizes

on July 27, 2015 at 2:28 pm
Posted In: MS SQL Server (Cat)

SELECT
s.Name AS SchemaName,
t.NAME AS TableName,
p.rows AS RowCounts,
(SUM(a.total_pages) * 8) AS TotalSpaceKB,
(SUM(a.used_pages) * 8) AS UsedSpaceKB,
((SUM(a.total_pages) - SUM(a.used_pages)) * 8) AS UnusedSpaceKB
FROM
sys.tables t
INNER JOIN
sys.schemas s ON s.schema_id = t.schema_id
INNER JOIN     
sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN
sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN
sys.allocation_units a ON p.partition_id = a.container_id
WHERE
t.NAME LIKE 'if%'    -- filter out system tables for diagramming
AND t.is_ms_shipped = 0
AND i.OBJECT_ID > 255
GROUP BY
t.Name, s.Name, p.Rows
ORDER BY
totalspacekb, s.Name, t.Name

SELECT
(SUM(a.total_pages) * 8) AS TotalSpaceKB
FROM
sys.tables t
INNER JOIN
sys.schemas s ON s.schema_id = t.schema_id
INNER JOIN     
sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN
sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN
sys.allocation_units a ON p.partition_id = a.container_id
WHERE
t.NAME LIKE 'if%'    -- filter out system tables for diagramming
AND t.is_ms_shipped = 0
AND i.OBJECT_ID > 255
GROUP BY
t.Name, s.Name, p.Rows


Creating Maintenance Plans

on July 27, 2015 at 1:34 pm
Posted In: MS SQL Server (Cat)


Instance Level

The following SQL job ensures the SQL Server Agent logs and SQL Server Error logs are recycled on a daily basis. This will help ensure that neither log grows too large in size, and can therefore easily be accessed for troubleshooting purposes.

Supporting screenshots have been added below this section, using the “Cycle error logs” plan as an example.

Start > Microsoft SQL Server > SQL Server Management Studio > SQL Server Agent > Jobs > New Job

Name: Cycle error logs

Category: Database Maintenance

Steps:

• Step 1
  • Step Name: Cycle agent log
  • Type: T-SQL
  • Database: msdb
  • Command:

exec sp_cycle_agent_errorlog

  • On success action: Go to the next step

• Step 2
  • Step Name: Cycle error log
  • Type: T-SQL
  • Database: msdb
  • Command:

exec sp_cycle_errorlog

  • On success action: Quit the job reporting success

Schedule Name: Daily at midnight

Schedule: Daily at 00:00

BizTalk Backup Job Configuration

If BizTalk Server is being installed as part of the environment build, a separate backup job, specifically for BizTalk will need to be configured after the BizTalk specialists have deployed the software.

Firstly, identify a suitable Windows user account (usually a domain service account) for running the backup job. Create a SQL Server login for the account (unless it already exists) and then assign it the BTS_BACKUP_USERS role either via the GUI or the following T-SQL:

-- Assign backup role to nominated user
USE [BizTalkDTADb]
EXEC sp_addrolemember 'BTS_BACKUP_USERS', 'STORES\BizTalk_SVC_BKP'

USE [BizTalkMgmtDb]
EXEC sp_addrolemember 'BTS_BACKUP_USERS', 'STORES\BizTalk_SVC_BKP'

USE [BizTalkMsgBoxDb]
EXEC sp_addrolemember 'BTS_BACKUP_USERS', 'STORES\BizTalk_SVC_BKP'

USE [BizTalkRuleEngineDb]
EXEC sp_addrolemember 'BTS_BACKUP_USERS', 'STORES\BizTalk_SVC_BKP'

USE [SSODB]
EXEC sp_addrolemember 'BTS_BACKUP_USERS', 'STORES\BizTalk_SVC_BKP'

Once that’s done, navigate to “SQL Server Agent” in SSMS and expand the “Jobs” folder. Right-click on the “Backup BizTalk Server (BizTalkMgmtDb)” job that’s been created, and click on “Properties”. Update the “Owner” of the job to the user chosen above.


Click on the “Steps” page from the menu on the left, highlight the “BackupFull” step and Edit.

Update the “Command” string so that the destination path reflects an appropriate directory for hosting the BizTalk backup files. It’s usually a good idea to use the same disk as the other database backups, but a different parent directory so that any existing maintenance plans with “Maintenance Cleanup Tasks” do not interfere with purging of the BizTalk backup files.

As an example, the following directory is used here: V:\MSSQL\BizTalk_Backups


Repeat the steps above for the “MarkAndBackupLog” step of the job.

Update the “Clear Backup History” step with an appropriate retention period for the backup history. The default of 14 days is usually acceptable, but given that we usually only keep 2 days’ worth of backups online, this can be reflected here also.

NOTE: The above step does not purge the physical backup files. An additional step (below) will be added to take care of this.


For the same step, click on the “Advanced” option and update the “On success action” to “Go to the next step”. Then change the “On failure action” to “Quit the job reporting failure”.

From the “Job step list” page, click “New”.


Create a new step with the following properties, updating the path, server name, and hours’ retention as appropriate to your requirements:

Step Name: Delete Old Backups

Type: PowerShell

Run as: SQL Server Agent Service Account

Command:

$path = 'V:\MSSQL\BizTalk_Backups'
$filter = 'SNAPSQL01*.bak'
$retention = (Get-Date).AddHours(-48)
Get-ChildItem (Join-Path $path $filter) -Recurse | Where-Object { ($_.PSIsContainer -eq $false) -and ($_.LastWriteTime -lt $retention) } | Remove-Item


Click on the “Advanced” page. Update the “On success action” to “Quit the job reporting success”. Then change the “On failure action” to “Quit the job reporting failure”.

Finally, check the “Job History” after around 30 minutes to ensure the job is completing successfully; by default the job runs every 15 minutes.

Further information around configuration of the BizTalk Backup job can be found in the MSDN article here:

https://msdn.microsoft.com/en-us/library/aa546765.aspx


Large Index Rebuilds

The dbo.Index_Rebuilds Stored Procedure (SP) has been created internally to address large ad-hoc index rebuilds. Whilst the automated index management tool is sufficient in most cases, there are occasions when large indexes incur timeouts before completing the rebuild or reorganize operations. When this happens, not only does the operation fail, but it can take a long time to roll back the transaction, which can also significantly degrade performance.

Accepted Parameters

A list of accepted parameters is included within the SP and also below for reference:

@MaxFrag Max level of acceptable fragmentation (default is 20.0).

@MaxDensity Max level of acceptable density (default is 80.0).

@DatabaseName Database to check. This determines the table whitelist used (@TableInList).

@RebuildLimit Limits how many indexes are rebuilt/reorganized (default is 1).

@ReorgMax Fragmentation level at which point rebuilds are initiated. Anything lower than this limit will execute a reorganize instead, 0 will disable reorgs (default is 30.0).

@RunMode Options are ‘REPORT’ for report only, with no REBUILD/REORG operations being performed, or ‘ACTIVE’ mode which executes the REBUILD/REORG operations (default is ‘CHECK’ mode).

Example Executions

EXEC Index_Rebuilds @MaxFrag=5.0, @MaxDensity=80.0, @DatabaseName='DB1', @RebuildLimit=1, @ReorgMax=0, @RunMode='REPORT'

Job outcome from supplied parameters:

· Run in REPORT only mode against DB1

· Rebuild/reorganize limit of 1

· Indexes with fragmentation levels above 5% OR page density of less than 80% will be candidates

· Indexes will be rebuilt, nothing will be reorganized (@ReorgMax=0)

EXEC Index_Rebuilds @DatabaseName='DB2', @RebuildLimit=1, @ReorgMax=0, @RunMode='ACTIVE'

Job outcome from supplied parameters:

· Run in ACTIVE (execute) mode against DB2

· Rebuild/reorganize limit of 1

· Indexes with fragmentation levels above 20% OR page density of less than 80% will be candidates (DEFAULTS)

· Indexes will be rebuilt, nothing will be reorganized (@ReorgMax=0)

EXEC Index_Rebuilds @DatabaseName='DB2', @RebuildLimit=2, @ReorgMax=30, @RunMode='ACTIVE'

Job outcome from supplied parameters:

· Run in ACTIVE (execute) mode against DB2

· Rebuild/reorganize limit of 2

· Indexes with fragmentation levels above 20% OR page density of less than 80% will be candidates (DEFAULTS)

· Indexes with less than 30% fragmentation will be reorganized, anything equal to or above this will be rebuilt

Example SQL Server Scheduled Job

 

Name: DB2 Large Index Rebuilds

Category: Database Maintenance

Description: Weekly index rebuild job for the most fragmented indexes

Steps:

• Step 1
  • Step Name: Rebuild index
  • Type: Operating system (CmdExec)
  • Run as: SQL Server Agent Service Account
  • Command:

sqlcmd -S MSSQLSERVER01 -E -d DB2 -Q "EXEC Index_Rebuilds @MaxFrag=50.0, @MaxDensity=80.0, @DatabaseName='DB2', @RebuildLimit=1, @ReorgMax=0, @RunMode='ACTIVE'" -o "Q:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log\DB2_Large_Index_Rebuild.log"

  • On success action: Quit the job reporting success

Schedule Name: Weekly index

Schedule: Every Weds at 00:20


Example Output File

***********************

* May 27 2015 12:20AM *

***********************

Listing index candidates for maintenance. Only 1 index(es) will be worked on…

IndexName TableName DensityPercent FragPercent

—————————————- —————————— ————– ———–

PK__RETAIL_T__F05C7E984850AF91 RETAIL_TRANSACTION 79.87 97.52

IDX_RTL_TRN_NO RETAIL_TRANSACTION 74.47 35.11

IDX_END_DT RETAIL_TRANSACTION 78.3 31.56

Starting maintenance…

*** Index [PK__RETAIL_T__F05C7E984850AF91] is 97.52% fragmented ***

Executing statement: ALTER INDEX [PK__RETAIL_T__F05C7E984850AF91] ON [DB2].[dbo].[RETAIL_TRANSACTION] REBUILD

Started at 00:22:04:857

Running in ACTIVE mode. Index maintenance WILL be performed.

Finished at 00:32:54:510

Duration: 00:10:49:653

Procedure completed.

 

 

 


Add data or log files to the database

on July 27, 2015 at 1:05 pm
Posted In: MS SQL Server (Cat)

Using SQL Server Management Studio


To add data or log files to a database

  1. In Object Explorer, connect to an instance of the SQL Server Database Engine and then expand that instance.

  2. Expand Databases, right-click the database to which the files will be added, and then click Properties.

  3. In the Database Properties dialog box, select the Files page.

  4. To add a data or transaction log file, click Add.

  5. In the Database files grid, enter a logical name for the file. The file name must be unique within the database.

  6. Select the file type, data or log.

  7. For a data file, select the filegroup in which the file should be included from the list, or select to create a new filegroup. Transaction logs cannot be put in filegroups.

  8. Specify the initial size of the file. Make the data file as large as possible, based on the maximum amount of data you expect in the database.

  9. To specify how the file should grow, click (…) in the Autogrowth column. Select from the following options:

    1. To allow for the currently selected file to grow as more data space is required, select the Enable Autogrowth check box and then select from the following options:

    2. To specify that the file should grow by fixed increments, select In Megabytes and specify a value.

    3. To specify that the file should grow by a percentage of the current file size, select In Percent and specify a value.

  10. To specify the maximum file size limit, select from the following options:

    1. To specify the maximum size the file should be able to grow to, select Restricted File Growth (MB) and specify a value.

    2. To allow for the file to grow as much as needed, select Unrestricted File Growth.

    3. To prevent the file from growing, clear the Enable Autogrowth check box. The size of the file will not grow beyond the value specified in the Initial Size (MB) column.

    Note:

    The maximum database size is determined by the amount of disk space available and the licensing limits determined by the version of SQL Server that you are using.

  11. Specify the path for the file location. The specified path must exist before adding the file.

    Note:

    By default, the data and transaction logs are put on the same drive and path to accommodate single-disk systems, but may not be optimal for production environments. For more information, see Database Files and Filegroups.

  12. Click OK.


Using Transact-SQL

To add data or log files to a database

  1. Connect to the Database Engine.

  2. From the Standard bar, click New Query.

  3. Copy and paste the following example into the query window and click Execute. The example adds a filegroup with two files to a database. The example creates the filegroup Test1FG1 in the AdventureWorks2012 database and adds two 5-MB files to the filegroup.

USE master
GO
ALTER DATABASE AdventureWorks2012
ADD FILEGROUP Test1FG1;
GO
ALTER DATABASE AdventureWorks2012
ADD FILE
(
    NAME = test1dat3,
    FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\t1dat3.ndf',
    SIZE = 5MB,
    MAXSIZE = 100MB,
    FILEGROWTH = 5MB
),
(
    NAME = test1dat4,
    FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\t1dat4.ndf',
    SIZE = 5MB,
    MAXSIZE = 100MB,
    FILEGROWTH = 5MB
)
TO FILEGROUP Test1FG1;
GO

For more examples, see ALTER DATABASE File and Filegroup Options (Transact-SQL).


General – Move System Databases

on July 27, 2015 at 1:04 pm
Posted In: MS SQL Server (Cat)

Planned Relocation and Scheduled Disk Maintenance Procedure

To move a system database data or log file as part of a planned relocation or scheduled maintenance operation, follow these steps. This procedure applies to all system databases except the master and Resource databases.

  1. For each file to be moved, run the following statement.

    ALTER DATABASE database_name MODIFY FILE ( NAME = logical_name , FILENAME = 'new_path\os_file_name' ) 
  2. Stop the instance of SQL Server or shut down the system to perform maintenance. For more information, see Start, Stop, Pause, Resume, Restart the Database Engine, SQL Server Agent, or SQL Server Browser Service.

  3. Move the file or files to the new location.

  4. Restart the instance of SQL Server or the server. For more information, see Start, Stop, Pause, Resume, Restart the Database Engine, SQL Server Agent, or SQL Server Browser Service.

  5. Verify the file change by running the following query.

    SELECT name, physical_name AS CurrentLocation, state_desc FROM sys.master_files WHERE database_id = DB_ID(N'<database_name>');

If the msdb database is moved and the instance of SQL Server is configured for Database Mail, complete these additional steps.

  1. Verify that Service Broker is enabled for the msdb database by running the following query.

    SELECT is_broker_enabled  FROM sys.databases WHERE name = N'msdb'; 

    For more information about enabling Service Broker, see ALTER DATABASE (Transact-SQL).

  2. Verify that Database Mail is working by sending a test mail.
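For the test mail in step 2, sp_send_dbmail can be used; a minimal sketch where the profile name and recipient are illustrative:

    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'DBA Mail Profile',   -- illustrative profile name
        @recipients   = 'dba@example.com',    -- illustrative recipient
        @subject      = 'Database Mail test after msdb move',
        @body         = 'Test message.';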


Failure Recovery Procedure

If a file must be moved because of a hardware failure, follow these steps to relocate the file to a new location. This procedure applies to all system databases except the master and Resource databases.

Important:

If the database cannot be started, that is, it is in suspect mode or in an unrecovered state, only members of the sysadmin fixed server role can move the file.

  1. Stop the instance of SQL Server if it is started.

  2. Start the instance of SQL Server in master-only recovery mode by entering one of the following commands at the command prompt. The parameters specified in these commands are case sensitive. The commands fail when the parameters are not specified as shown.

    • For the default (MSSQLSERVER) instance, run the following command:

      NET START MSSQLSERVER /f /T3608 
    • For a named instance, run the following command:

      NET START MSSQL$instancename /f /T3608 

    For more information, see Start, Stop, Pause, Resume, Restart the Database Engine, SQL Server Agent, or SQL Server Browser Service.

  3. For each file to be moved, use sqlcmd commands or SQL Server Management Studio to run the following statement.

    ALTER DATABASE database_name MODIFY FILE( NAME = logical_name , FILENAME = 'new_path\os_file_name' ) 

    For more information about using the sqlcmd utility, see Use the sqlcmd Utility.

  4. Exit the sqlcmd utility or SQL Server Management Studio.

  5. Stop the instance of SQL Server. For example, run NET STOP MSSQLSERVER.

  6. Move the file or files to the new location.

  7. Restart the instance of SQL Server. For example, run NET START MSSQLSERVER.

  8. Verify the file change by running the following query.

    SELECT name, physical_name AS CurrentLocation, state_desc FROM sys.master_files WHERE database_id = DB_ID(N'<database_name>');


Moving the master Database

To move the master database, follow these steps.

  1. From the Start menu, point to All Programs, point to Microsoft SQL Server, point to Configuration Tools, and then click SQL Server Configuration Manager.

  2. In the SQL Server Services node, right-click the instance of SQL Server (for example, SQL Server (MSSQLSERVER)) and choose Properties.

  3. In the SQL Server (instance_name) Properties dialog box, click the Startup Parameters tab.

  4. In the Existing parameters box, select the -d parameter to move the master data file. Click Update to save the change.

    In the Specify a startup parameter box, change the parameter to the new path of the master database.

  5. In the Existing parameters box, select the -l parameter to move the master log file. Click Update to save the change.

    In the Specify a startup parameter box, change the parameter to the new path of the master database.

    The parameter value for the data file must follow the -d parameter and the value for the log file must follow the -l parameter. The following example shows the parameter values for the default location of the master data file.

    -dC:\Program Files\Microsoft SQL Server\MSSQL.MSSQLSERVER\MSSQL\DATA\master.mdf

    -lC:\Program Files\Microsoft SQL Server\MSSQL.MSSQLSERVER\MSSQL\DATA\mastlog.ldf

    If the planned relocation for the master data file is E:\SQLData, the parameter values would be changed as follows:

    -dE:\SQLData\master.mdf

    -lE:\SQLData\mastlog.ldf

  6. Stop the instance of SQL Server by right-clicking the instance name and choosing Stop.

  7. Move the master.mdf and mastlog.ldf files to the new location.

  8. Restart the instance of SQL Server.

  9. Verify the file change for the master database by running the following query.

    SELECT name, physical_name AS CurrentLocation, state_desc
    FROM sys.master_files
    WHERE database_id = DB_ID('master');
    GO

Moving the Resource Database

The location of the Resource database is <drive>:\Program Files\Microsoft SQL Server\MSSQL.<instance_name>\MSSQL\Binn\. The database cannot be moved.


Follow-up: After Moving All System Databases

If you have moved all of the system databases to a new drive or volume or to another server with a different drive letter, make the following updates.

  • Change the SQL Server Agent log path. If you do not update this path, SQL Server Agent will fail to start.

  • Change the database default location. Creating a new database may fail if the drive letter and path specified as the default location do not exist.

Change the SQL Server Agent Log Path

  1. From SQL Server Management Studio, in Object Explorer, expand SQL Server Agent.

  2. Right-click Error Logs and click Configure.

  3. In the Configure SQL Server Agent Error Logs dialog box, specify the new location of the SQLAGENT.OUT file. The default location is C:\Program Files\Microsoft SQL Server\MSSQL.<instance_name>\MSSQL\Log\.

Change the database default location

  1. From SQL Server Management Studio, in Object Explorer, right-click the SQL Server server and click Properties.

  2. In the Server Properties dialog box, select Database Settings.

  3. Under Database Default Locations, browse to the new location for both the data and log files.

  4. Stop and start the SQL Server service to complete the change.
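If you prefer to script that step, the default locations live in the instance registry keys. A minimal sketch, assuming the instance-aware (widely used but undocumented) xp_instance_regwrite procedure; the E:\ and F:\ paths are placeholders:

-- Hedged sketch: set the default data and log locations via the instance registry keys.
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultData', REG_SZ, N'E:\SQLData';
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultLog', REG_SZ, N'F:\SQLLog';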


Examples

A. Moving the tempdb database

The following example moves the tempdb data and log files to a new location as part of a planned relocation.

Note

Because tempdb is re-created each time the instance of SQL Server is started, you do not have to physically move the data and log files. The files are created in the new location when the service is restarted in step 3. Until the service is restarted, tempdb continues to use the data and log files in the existing location.

  1. Determine the logical file names of the tempdb database and their current location on the disk.

    SELECT name, physical_name AS CurrentLocation FROM sys.master_files WHERE database_id = DB_ID(N'tempdb');
    GO
  2. Change the location of each file by using ALTER DATABASE.

    USE master;
    GO
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLData\tempdb.mdf');
    GO
    ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'F:\SQLLog\templog.ldf');
    GO
  3. Stop and restart the instance of SQL Server.

  4. Verify the file change.

    SELECT name, physical_name AS CurrentLocation, state_desc FROM sys.master_files WHERE database_id = DB_ID(N'tempdb');
  5. Delete the tempdb.mdf and templog.ldf files from the original location.

 Comment 

Move database files

on July 27, 2015 at 1:03 pm
Posted In: MS SQL Server (Cat)

The procedures in this topic require the logical name of the database files. To obtain the name, query the name column in the sys.master_files catalog view.

Starting with SQL Server 2008 R2, full-text catalogs are integrated into the database rather than being stored in the file system. The full-text catalogs now move automatically when you move a database.

Planned Relocation Procedure

To move a data or log file as part of a planned relocation, follow these steps:

  1. Run the following statement.

    ALTER DATABASE database_name SET OFFLINE;
  2. Move the file or files to the new location.

  3. For each file moved, run the following statement.

    ALTER DATABASE database_name MODIFY FILE ( NAME = logical_name, FILENAME = 'new_path\os_file_name' );
  4. Run the following statement.

    ALTER DATABASE database_name SET ONLINE;
  5. Verify the file change by running the following query.

    SELECT name, physical_name AS CurrentLocation, state_desc FROM sys.master_files WHERE database_id = DB_ID(N'<database_name>');

Relocation for Scheduled Disk Maintenance

To relocate a file as part of a scheduled disk maintenance process, follow these steps:

  1. For each file to be moved, run the following statement.

    ALTER DATABASE database_name MODIFY FILE ( NAME = logical_name, FILENAME = 'new_path\os_file_name' );
  2. Stop the instance of SQL Server or shut down the system to perform maintenance. For more information, see Start, Stop, Pause, Resume, Restart the Database Engine, SQL Server Agent, or SQL Server Browser Service.

  3. Move the file or files to the new location.

  4. Restart the instance of SQL Server or the server. For more information, see Start, Stop, Pause, Resume, Restart the Database Engine, SQL Server Agent, or SQL Server Browser Service.

  5. Verify the file change by running the following query.

    SELECT name, physical_name AS CurrentLocation, state_desc FROM sys.master_files WHERE database_id = DB_ID(N'<database_name>');

Failure Recovery Procedure

If a file must be moved because of a hardware failure, use the following steps to relocate the file to a new location.

Important

If the database cannot be started, that is, it is in suspect mode or in an unrecovered state, only members of the sysadmin fixed server role can move the file.

  1. Stop the instance of SQL Server if it is started.

  2. Start the instance of SQL Server in master-only recovery mode by entering one of the following commands at the command prompt.

    • For the default (MSSQLSERVER) instance, run the following command.

      NET START MSSQLSERVER /f /T3608
    • For a named instance, run the following command.

      NET START MSSQL$instancename /f /T3608

    For more information, see Start, Stop, Pause, Resume, Restart the Database Engine, SQL Server Agent, or SQL Server Browser Service.

  3. For each file to be moved, use sqlcmd commands or SQL Server Management Studio to run the following statement.

    ALTER DATABASE database_name MODIFY FILE( NAME = logical_name, FILENAME = 'new_path\os_file_name' );

    For more information about how to use the sqlcmd utility, see Use the sqlcmd Utility.

  4. Exit the sqlcmd utility or SQL Server Management Studio.

  5. Stop the instance of SQL Server.

  6. Move the file or files to the new location.

  7. Start the instance of SQL Server. For example, run: NET START MSSQLSERVER.

  8. Verify the file change by running the following query.

    SELECT name, physical_name AS CurrentLocation, state_desc FROM sys.master_files WHERE database_id = DB_ID(N'<database_name>');

Examples

The following example moves the AdventureWorks2012 log file to a new location as part of a planned relocation.

USE master;
GO
-- Return the logical file name.
SELECT name, physical_name AS CurrentLocation, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'AdventureWorks2012')
    AND type_desc = N'LOG';
GO
ALTER DATABASE AdventureWorks2012 SET OFFLINE;
GO
-- Physically move the file to a new location.
-- In the following statement, modify the path specified in FILENAME to
-- the new location of the file on your server.
ALTER DATABASE AdventureWorks2012
    MODIFY FILE ( NAME = AdventureWorks2012_Log,
                  FILENAME = 'C:\NewLoc\AdventureWorks2012_Log.ldf');
GO
ALTER DATABASE AdventureWorks2012 SET ONLINE;
GO
-- Verify the new location.
SELECT name, physical_name AS CurrentLocation, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'AdventureWorks2012')
    AND type_desc = N'LOG';

 Comment 

Backup a transaction log

on July 27, 2015 at 1:01 pm
Posted In: MS SQL Server (Cat)

Using SQL Server Management Studio

To back up a transaction log

  1. After connecting to the appropriate instance of the SQL Server Database Engine, in Object Explorer, click the server name to expand the server tree.

  2. Expand Databases, and, depending on the database, either select a user database or expand System Databases and select a system database.

  3. Right-click the database, point to Tasks, and then click Back Up. The Back Up Database dialog box appears.

  4. In the Database list box, verify the database name. You can optionally select a different database from the list.

  5. Verify that the recovery model is either FULL or BULK_LOGGED.

  6. In the Backup type list box, select Transaction Log.

  7. Optionally, you can select Copy Only Backup to create a copy-only backup. A copy-only backup is a SQL Server backup that is independent of the sequence of conventional SQL Server backups. For more information, see Copy-Only Backups (SQL Server).

    Note

    When the Differential option is selected, you cannot create a copy-only backup.

  8. Either accept the default backup set name suggested in the Name text box, or enter a different name for the backup set.

  9. Optionally, in the Description text box, enter a description of the backup set.

  10. Specify when the backup set will expire:

    • To have the backup set expire after a specific number of days, click After (the default option), and enter the number of days after set creation that the set will expire. This value can be from 0 to 99999 days; a value of 0 days means that the backup set will never expire.

      The default value is set in the Default backup media retention (in days) option of the Server Properties dialog box (Database Settings page). To access this dialog box, right-click the server name in Object Explorer and select properties; then select the Database Settings page.

    • To have the backup set expire on a specific date, click On, and enter the date on which the set will expire.

  11. Choose the type of backup destination by clicking Disk, URL or Tape. To select the paths of up to 64 disk or tape drives containing a single media set, click Add. The selected paths are displayed in the Backup to list box.

    To remove a backup destination, select it and click Remove. To view the contents of a backup destination, select it and click Contents.

  12. To view or select the advanced options, click Options in the Select a page pane.

  13. Select an Overwrite Media option, by clicking one of the following:

    • Back up to the existing media set

      For this option, click either Append to the existing backup set or Overwrite all existing backup sets. For more information, see Media Sets, Media Families, and Backup Sets (SQL Server).

      Optionally, select Check media set name and backup set expiration to cause the backup operation to verify the date and time at which the media set and backup set expire.

      Optionally, enter a name in the Media set name text box. If no name is specified, a media set with a blank name is created. If you specify a media set name, the media (tape or disk) is checked to see whether the actual name matches the name you enter here.

      If you leave the media name blank and select the check box to verify it, the check succeeds only if the media name on the media is also blank.

    • Back up to a new media set, and erase all existing backup sets

      For this option, enter a name in the New media set name text box, and, optionally, describe the media set in the New media set description text box. For more information, see Media Sets, Media Families, and Backup Sets (SQL Server).

  14. In the Reliability section, optionally, check:

    • Verify backup when finished.

    • Perform checksum before writing to media, and, optionally, Continue on checksum error. For information on checksums, see Possible Media Errors During Backup and Restore (SQL Server).

  15. In the Transaction log section:

    • For routine log backups, keep the default selection, Truncate the transaction log by removing inactive entries.

    • To back up the tail of the log (that is, the active log), check Back up the tail of the log, and leave database in the restoring state.

      A tail-log backup is taken after a failure, to back up the tail of the log and so prevent work loss. Back up the active log (a tail-log backup) after a failure, before beginning to restore the database, or when failing over to a secondary database. Selecting this option is equivalent to specifying the NORECOVERY option in the BACKUP LOG statement of Transact-SQL. For more information about tail-log backups, see Tail-Log Backups (SQL Server).

  16. If you are backing up to a tape drive (as specified in the Destination section of the General page), the Unload the tape after backup option is active. Clicking this option activates the Rewind the tape before unloading option.

  17. SQL Server 2008 Enterprise and later supports backup compression. By default, whether a backup is compressed depends on the value of the backup-compression default server configuration option. However, regardless of the current server-level default, you can compress a backup by checking Compress backup, and you can prevent compression by checking Do not compress backup.

    To view the current backup compression default

    • View or Configure the backup compression default Server Configuration Option


Encryption

To encrypt the backup file check the Encrypt backup check box. Select an encryption algorithm to use for encrypting the backup file and provide a Certificate or Asymmetric key. The available algorithms for encryption are:

  • AES 128

  • AES 192

  • AES 256

  • Triple DES

Using Transact-SQL

To back up a transaction log

  1. Execute the BACKUP LOG statement to back up the transaction log, specifying the following:

    • The name of the database to which the transaction log that you want to back up belongs.

    • The backup device where the transaction log backup is written.

Example (Transact-SQL)

Important

This example uses the AdventureWorks2012 database, which uses the simple recovery model. To permit log backups, before taking a full database backup, the database was set to use the full recovery model. For more information, see View or Change the Recovery Model of a Database (SQL Server).

This example creates a transaction log backup for the AdventureWorks2012 database to the previously created named backup device, MyAdvWorks_FullRM_log1.

BACKUP LOG AdventureWorks2012
   TO MyAdvWorks_FullRM_log1;
GO
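The backup can equally be written straight to a file rather than a pre-defined backup device; a minimal sketch, with the path as a placeholder:

BACKUP LOG AdventureWorks2012
   TO DISK = N'Z:\SQLServerBackups\AdventureWorks2012_Log.trn'
   WITH CHECKSUM;
GO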


Using PowerShell

  1. Use the Backup-SqlDatabase cmdlet and specify Log for the value of the -BackupAction parameter.

    The following example creates a log backup of the MyDB database to the default backup location of the server instance Computer\Instance.

    # Enter this command at the PowerShell command prompt.
    C:\PS> Backup-SqlDatabase -ServerInstance Computer\Instance -Database MyDB -BackupAction Log

To set up and use the SQL Server PowerShell provider

  • SQL Server PowerShell Provider

 Comment 

Troubleshoot a full transaction log

on July 27, 2015 at 1:00 pm
Posted In: MS SQL Server (Cat)

From MSDN

Troubleshoot a Full Transaction Log (SQL Server Error 9002)

SQL Server 2012

This topic discusses possible responses to a full transaction log and suggests how to avoid it in the future. When the transaction log becomes full, the SQL Server Database Engine issues a 9002 error. The log can fill when the database is online or in recovery. If the log fills while the database is online, the database remains online but can only be read, not updated. If the log fills during recovery, the Database Engine marks the database as RESOURCE PENDING. In either case, user action is required to make log space available.

Responding to a Full Transaction Log

The appropriate response to a full transaction log depends partly on what condition or conditions caused the log to fill. To discover what is preventing log truncation in a given case, use the log_reuse_wait and log_reuse_wait_desc columns of the sys.databases catalog view. For more information, see sys.databases (Transact-SQL). For descriptions of factors that can delay log truncation, see The Transaction Log (SQL Server).
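For example, to see what truncation is waiting on for a given database (the database name is a sample):

SELECT name, log_reuse_wait, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'AdventureWorks2012';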

Important

If the database was in recovery when the 9002 error occurred, after resolving the problem, recover the database by using ALTER DATABASE database_name SET ONLINE.

Alternatives for responding to a full transaction log include:

  • Backing up the log.

  • Freeing disk space so that the log can automatically grow.

  • Moving the log file to a disk drive with sufficient space.

  • Increasing the size of a log file.

  • Adding a log file on a different disk.

  • Completing or killing a long-running transaction.

These alternatives are discussed in the following sections; for the last option, a short sketch follows directly below. Choose a response that fits your situation best.
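A minimal sketch for identifying and killing a long-running transaction; the session id is a placeholder, and killing the session rolls its transaction back:

DBCC OPENTRAN;   -- reports the oldest active transaction and its session id (SPID)
KILL 57;         -- 57 is hypothetical; use the SPID reported by DBCC OPENTRAN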

Backing up the Log

Under the full recovery model or bulk-logged recovery model, if the transaction log has not been backed up recently, the lack of a log backup might be what is preventing log truncation. If the log has never been backed up, you must create two log backups to permit the Database Engine to truncate the log to the point of the last backup. Truncating the log frees space for new log records. To keep the log from filling up again, take log backups frequently.

To create a transaction log backup

Important

If the database is damaged, see Tail-Log Backups (SQL Server).

  • Back Up a Transaction Log (SQL Server)

  • SqlBackup (SMO)

Freeing Disk Space

You might be able to free disk space on the disk drive that contains the transaction log file for the database by deleting or moving other files. The freed disk space allows the recovery system to enlarge the log file automatically.
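To check whether the log file will actually be able to grow once space is freed, inspect its growth settings; run this in the affected database (size is reported in 8 KB pages):

SELECT name, size/128.0 AS size_mb, growth, is_percent_growth, max_size
FROM sys.database_files
WHERE type_desc = N'LOG';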

Moving the Log File to a Different Disk

If you cannot free enough disk space on the drive that currently contains the log file, consider moving the file to another drive with sufficient space.

Important

Log files should never be placed on compressed file systems.

To move a log file

  • Move Database Files

Increasing the Size of a Log File

If space is available on the log disk, you can increase the size of the log file. The maximum size for log files is two terabytes (TB) per log file.

To increase the file size

If autogrow is disabled, the database is online, and sufficient space is available on the disk, either:

  • Manually increase the file size to produce a single growth increment.

  • Turn on autogrow by using the ALTER DATABASE statement to set a non-zero growth increment for the FILEGROWTH option.

Note

In either case, if the current size limit has been reached, increase the MAXSIZE value.
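A minimal sketch of both approaches; the logical file name, sizes and growth values are placeholders:

-- Manually grow the log file in a single increment.
ALTER DATABASE AdventureWorks2012
MODIFY FILE (NAME = AdventureWorks2012_Log, SIZE = 8192MB);

-- Or re-enable autogrow with a fixed increment and a higher cap.
ALTER DATABASE AdventureWorks2012
MODIFY FILE (NAME = AdventureWorks2012_Log, FILEGROWTH = 512MB, MAXSIZE = 16384MB);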

Adding a Log File on a Different Disk

Add a new log file to the database on a different disk that has sufficient space by using ALTER DATABASE ADD LOG FILE.

To add a log file

  • Add Data or Log Files to a Database
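A sketch of the statement itself; the file name, path and sizes are placeholders:

ALTER DATABASE AdventureWorks2012
ADD LOG FILE (NAME = AdventureWorks2012_Log2,
              FILENAME = 'G:\SQLLog\AdventureWorks2012_Log2.ldf',
              SIZE = 4096MB, FILEGROWTH = 512MB);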

See Also

Reference

ALTER DATABASE (Transact-SQL)
sp_add_log_file_recover_suspect_db (Transact-SQL)

Concepts

Manage the Size of the Transaction Log File
Transaction Log Backups (SQL Server)

 

 Comment 

TOP 10 I/O

on July 27, 2015 at 12:54 pm
Posted In: MS SQL Server (Cat)

doesn't work, but could.

SELECT TOP 10
    (total_logical_reads/execution_count) AS avg_reads,
    (total_logical_writes/execution_count) AS avg_writes,
    (total_physical_reads/execution_count) AS avg_physical_reads,
    execution_count,
    statement_start_offset AS stmt_start_offset,
    (SELECT SUBSTRING(text, statement_start_offset/2 + 1,
            (CASE WHEN statement_end_offset = -1
                  THEN LEN(CONVERT(nvarchar(MAX), text)) * 2
                  ELSE statement_end_offset
             END - statement_start_offset)/2)
     FROM sys.dm_exec_sql_text(sql_handle)) AS query_text,
    (SELECT query_plan FROM sys.dm_exec_query_plan(plan_handle)) AS query_plan
FROM sys.dm_exec_query_stats
ORDER BY (total_logical_reads + total_logical_writes)/execution_count DESC

 

 Comment 

Backups, Log files and shrinking log files

on July 27, 2015 at 12:53 pm
Posted In: MS SQL Server (Cat)

 

 

-- SELECT name, log_reuse_wait_desc FROM sys.databases

 

 

Displays information about the oldest active transaction and the oldest distributed and nondistributed replicated transactions, if any, within the specified database.

Results are displayed only if there is an active transaction or if the database contains replication information.

An informational message is displayed if there are no active transactions.

-- dbcc opentran;

Virtual log files in the log file

-- dbcc loginfo

Fix is:

BACKUP LOG dbname TO DISK='backup_path' (or CHECKPOINT if recovery model is Simple)

DBCC SHRINKFILE (log_logical_file, 1);

ALTER DATABASE dbname MODIFY FILE (NAME=log_logical_file, SIZE=new_size);

-- SELECT sdb.Name AS DatabaseName,
-- COALESCE(CONVERT(VARCHAR(12), MAX(bus.backup_finish_date), 101),'-') AS LastBackUpTime
-- FROM sys.sysdatabases sdb
-- LEFT OUTER JOIN msdb.dbo.backupset bus ON bus.database_name = sdb.name
-- GROUP BY sdb.Name

-- SELECT B.name as Database_Name, ISNULL(STR(ABS(DATEDIFF(day, GetDate(),
-- MAX(Backup_finish_date)))), 'NEVER') as DaysSinceLastBackup,
-- ISNULL(Convert(char(10), MAX(backup_finish_date), 101), 'NEVER') as LastBackupDate
-- FROM master.dbo.sysdatabases B LEFT OUTER JOIN msdb.dbo.backupset A
-- ON A.database_name = B.name AND A.type = 'L' GROUP BY B.Name ORDER BY B.name

SELECT B.name as Database_Name, A.type, backup_start_date, backup_finish_date
FROM master.dbo.sysdatabases B LEFT OUTER JOIN msdb.dbo.backupset A
ON A.database_name = B.name
-- and a.database_name = 'Auditworks'
-- AND A.type = 'D'
order by backup_start_date desc

-- select * from msdb.dbo.backups

 

 

 Comment 

Statistics

on July 27, 2015 at 12:51 pm
Posted In: MS SQL Server (Cat)

 

SELECT OBJECT_NAME(id), name, STATS_DATE(id, indid), rowmodctr FROM sys.sysindexes

WHERE STATS_DATE(id, indid)

AND rowmodctr>0 AND id IN (SELECT object_id FROM sys.tables) order by rowmodctr desc;

 

SELECT 'UPDATE statistics dbo.' + OBJECT_NAME(id) FROM sys.sysindexes

WHERE STATS_DATE(id, indid)

AND rowmodctr>0 AND id IN (SELECT object_id FROM sys.tables) and name not like '_WA_%';

 


 

 Comment 

Shrink Datafiles

on July 27, 2015 at 12:50 pm
Posted In: MS SQL Server (Cat)

 

SQL Server 2008-2014

USE UserDB;

GO

DBCC SHRINKFILE (DataFile1, 7);

GO

or

DBCC SHRINKFILE ( { file_name | file_id } { [ , EMPTYFILE ] | [ [ , target_size ] [ , { NOTRUNCATE | TRUNCATEONLY } ] ] } ) [ WITH NO_INFOMSGS ]

SQL Server 2005

SQLDatabase.TransactionLog.Truncate();

SQLDatabase.Shrink(5, SQLDMO.SQLDMO_SHRINK_TYPE.SQLDMOShrink_NoTruncate);

SQLDatabase.TransactionLog.Truncate();

SQLDatabase.Shrink(5, SQLDMO.SQLDMO_SHRINK_TYPE.SQLDMOShrink_Default);

USE [SQLdefragRepository]

GO

DBCC SHRINKFILE (N'SQLdefragRepository_log', 1000) -- value in MB

GO

 

 Comment 

Table Sizes

on July 27, 2015 at 12:49 pm
Posted In: MS SQL Server (Cat)

 

EXEC sp_MSforeachtable @command1="EXEC sp_spaceused '?'"
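sp_spaceused reports per table but in no particular order; a sortable alternative, summing reserved pages from sys.dm_db_partition_stats (run in the target database):

SELECT OBJECT_NAME(object_id) AS table_name,
       SUM(reserved_page_count) * 8 / 1024.0 AS reserved_mb
FROM sys.dm_db_partition_stats
GROUP BY object_id
ORDER BY reserved_mb DESC;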

 

 Comment 

Calculate space used in a database file.

on July 27, 2015 at 12:49 pm
Posted In: MS SQL Server (Cat)

 

 

Select *, fileproperty(name, 'SpaceUsed') as Used From dbo.sysfiles;

SELECT name, size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS AvailableSpaceInMB FROM sys.database_files;

select d.name, m.name, m.physical_name from sys.master_files m

inner join sys.databases d on (m.database_id = d.database_id) order by 1, 2;


 

 Comment 

TABLE and INDEX fragmentation

on July 27, 2015 at 12:48 pm
Posted In: MS SQL Server (Cat)

SQL Server 2005

SELECT dbschemas.[name] as 'Schema', dbtables.[name] as 'Table', dbindexes.[name] as 'Index', indexstats.avg_fragmentation_in_percent, indexstats.page_count FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) AS indexstats

INNER JOIN sys.tables dbtables ON dbtables.[object_id] = indexstats.[object_id]

INNER JOIN sys.schemas dbschemas ON dbtables.[schema_id] = dbschemas.[schema_id]

INNER JOIN sys.indexes AS dbindexes ON dbindexes.[object_id] = indexstats.[object_id]

AND indexstats.index_id = dbindexes.index_id

WHERE indexstats.database_id = DB_ID()

ORDER BY indexstats.avg_fragmentation_in_percent desc;

 

ALTER INDEX <index_name> ON <table_name> REBUILD WITH (MAXDOP = 1);

dbcc show_statistics(CommandLog,PK_CommandLog);

select * from sys.dm_db_index_physical_stats(dbid, objid, indexid, partition_no, mode);

Use NULLs to get values for all databases, objects, etc., as in the sketch below.
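For example (LIMITED is the cheapest scan mode; DETAILED reads every page):

select * from sys.dm_db_index_physical_stats(NULL, NULL, NULL, NULL, 'LIMITED');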

 

 Comment 

Calculate page life expectancy

on July 27, 2015 at 12:46 pm
Posted In: MS SQL Server (Cat)

SELECT [object_name], [counter_name], [cntr_value] FROM sys.dm_os_performance_counters WHERE [object_name] LIKE '%Manager%' AND [counter_name] = 'Page life expectancy'

 

 Comment 

One Offs

on July 27, 2015 at 12:45 pm
Posted In: MS SQL Server (Cat)

select DB_ID('Maintenance')

select OBJECT_ID('CommandLog')

select * from sys.dm_db_index_physical_stats(13, 2105058535, -1, 0, NULL)

list traces -- SELECT * FROM ::fn_trace_getinfo(default)

delete trace -- oracle sp_trace_setstatus @traceid=?, @status=2

SELECT name FROM sys.databases


 

 

 Comment 

List all files in a database

on July 27, 2015 at 12:43 pm
Posted In: MS SQL Server (Cat)

List all files

SQL Server 2000

select d.name, f.name, f.filename from sysaltfiles f inner join sysdatabases d on (f.dbid = d.dbid) order by 1,2

SQL Server 2005/8

select d.name, m.name, m.physical_name from sys.master_files m inner join sys.databases d on (m.database_id = d.database_id) order by 1, 2

 

 Comment 

Find database growth statistics

on July 27, 2015 at 12:41 pm
Posted In: MS SQL Server (Cat)

 

Find database growth stats by looking at backup sizes.

SELECT
CONVERT(char, backup_start_date, 111) AS [Date], -- yyyy/mm/dd format
CONVERT(char, backup_start_date, 108) AS [Time],
'dbname' AS [Database Name],
[filegroup_name] AS [Filegroup Name],
logical_name AS [Logical Filename], physical_name AS [Physical Filename],
CONVERT(numeric(9,2), file_size/1048576) AS [File Size (MB)],
Growth AS [Growth Percentage (%)]
FROM
(SELECT
b.backup_start_date, a.backup_set_id, a.file_size, a.logical_name,
a.[filegroup_name], a.physical_name,
(SELECT CONVERT(numeric(5,2), ((a.file_size * 100.00)/i1.file_size)-100)
FROM msdb.dbo.backupfile i1
WHERE i1.backup_set_id =
(SELECT MAX(i2.backup_set_id)
FROM msdb.dbo.backupfile i2 JOIN msdb.dbo.backupset i3
ON i2.backup_set_id = i3.backup_set_id
WHERE i2.backup_set_id < a.backup_set_id AND i2.file_type = 'D'
AND i3.database_name = 'dbname'
AND i2.logical_name = a.logical_name
AND i2.logical_name = i1.logical_name
AND i3.type = 'D'
)
AND i1.file_type = 'D'
) AS Growth
FROM msdb.dbo.backupfile a
JOIN msdb.dbo.backupset b ON a.backup_set_id = b.backup_set_id
WHERE b.database_name = 'dbname'
AND a.file_type = 'D'
AND b.type = 'D'
) AS Derived
WHERE
(Growth <> 0.0) OR (Growth IS NULL) ORDER BY logical_name, [Date]

 

 

 Comment 

opatch util cleanup

on July 2, 2014 at 2:37 pm
Posted In: Oracle odds and ends (Cat)

Clean out old patch backups

opatch util cleanup

 Comment 

Login when profile hangs

on July 2, 2014 at 2:34 pm
Posted In: Resource Management (Cat)

On your putty session pass the remote command as /bin/ksh, this will prevent .profile from running.
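For example, with PuTTY's command-line client plink, or with OpenSSH; the user and host below are placeholders:

plink -ssh oracle@dbhost /bin/ksh
ssh -t oracle@dbhost /bin/ksh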

 Comment 

NIC Speeds

on July 2, 2014 at 2:33 pm
Posted In: Resource Management (Cat)

To find NIC speeds the following command will provide the answer:

for n in `seq 0 15`; do echo -n "eth${n}:"; ethtool eth${n} | grep Speed; done

You can also look in /proc/net/bonding/bondx

===

netstat -an 1 | find "10.217.107."

===

 

 Comment 

IO Performance investigation example

on July 2, 2014 at 2:28 pm
Posted In: Oracle DB and SQL performance analysis (Cat)

Looking at the segment level stats (shown below stacked), it looks like it is indeed TABLE1 which is generating the high I/O.

So we can "alter table TABLE1 cache" (and place it in a keep pool) to remove this completely. We need to think about long-term growth when sizing the pool as well. Also, perhaps scanning the table is not the most efficient use of resource; maybe there is an application improvement which can be made there.

For further info here is a comparison of I/O profile with prior to the transition.

In both cases the vast majority of logical reads are table scans. (this is an extreme example of an application I/O profile in that respect).

However in the red (first) case almost none of those translate to physical reads – i.e. the reads are cached in the SGA.

In the green (second) case the majority translate into physical reads to the SAN (and the absolute throughput is much lower).

So more evidence to backup the fact that we need to ensure we cache the relevant segments in the SGA (and in the longer term perhaps look to preventing the table scans in the first place).

 

 

Since on ASM there is no OS level caching, it is important that all such queries are cached where possible in the SGA, otherwise via direct path read they will not be cached and will have to go to physical disk. This one is fairly easy to do via the cache directive + keep pool since the segment size is fairly small.
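A minimal sketch of the cache directive plus keep pool approach described above; the pool size is a placeholder and must allow for growth:

-- size a KEEP pool, then pin the hot table in it
ALTER SYSTEM SET db_keep_cache_size = 2G SCOPE=BOTH SID='*';
ALTER TABLE table1 STORAGE (BUFFER_POOL KEEP);
ALTER TABLE table1 CACHE;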

 Comment 

Tips – 1

on July 2, 2014 at 2:23 pm
Posted In: Oracle Golden Gate (Cat)

By default, the way in which trail files are copied from the source to the target hosts is by transferring small packets of data before the trail file write on the target host is issued.

Confirmation that that data has successfully been written to disk (using fsync on Linux) does not occur very frequently, which in the event of a role transition opens up the possibility of the trail file getting out of sync with a checkpoint from one or more Replicat processes.

To reduce occurrences, the data pump must be configured with the following parameters:

RMTHOST , MGRPORT , PARAMS -f -B1048576, TCPBUFSIZE 1048576, TCPFLUSHBYTES 1048576

The TCPBUFSIZE and TCPFLUSHBYTES combination of parameters increases the transfer buffer between the source and target to 1MB, with the server/collector process on the target issuing a write for every 1MB (-B write buffer size parameter) received, followed directly with a write confirmation (-f flush and sync).

This works in 11.2.0.19
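A hypothetical data pump parameter file entry putting these together; the target host name and manager port are placeholders:

RMTHOST tgt-host, MGRPORT 7809, PARAMS -f -B1048576, TCPBUFSIZE 1048576, TCPFLUSHBYTES 1048576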

And

After a Data Guard failover, if the Replicat fails to start due to an error with the checkpoint being larger than the trail file (OGG-01705) do the following to restart the Replicat:

Example error:

2014-05-16 11:44:04 ERROR OGG-01705 Input checkpoint position 388369723 for input trail file '/u01/goldengate/dirdat_os/bb000028' is greater than the size of the file (387583957). Please consult Oracle Knowledge Management Doc ID 1138409.1 for instructions.

If using Oracle GoldenGate version 11.2.1.0.7 or later AND the Replicat is using a checkpoint table, the process can be restarted with:

GGSCI> START REPLICAT <replicat_name> FILTERDUPTRANSACTIONS

This only works with Oracle GoldenGate v11.2.1.0.7 and later, and when a checkpoint table is used.

Refer to My Oracle Support note 1536741.1 for more detailed process restart instructions.

 

 

 Comment 

Add Linux User

on July 2, 2014 at 2:15 pm
Posted In: Unix notes (Cat)

useradd -d /home/hallamp -G 55555 -c "Paul Hallam DBA" -s /bin/ksh -m hallamp

passwd hallamp

 

 

 Comment 

ACFS Volume commands

on July 2, 2014 at 2:14 pm
Posted In: Oracle RAC (Cat)

Asmcmd> volcreate -G DBDUMP2_B_ACFS1 -s 64G --redundancy unprotected V_DBDUMP2

Asmcmd> volinfo -G DBDUMP2_B_ACFS1 V_DBDUMP2

Diskgroup Name: DBDUMP2_B_ACFS1

Volume Name: V_DBDUMP2

Volume Device: /dev/asm/v_dbdump2-115

State: ENABLED

Size (MB): 64512

Resize Unit (MB): 32

Redundancy: UNPROT

Stripe Columns: 4

Stripe Width (K): 128

Usage:

Mountpath:

ASMCMD> exit

#  /sbin/mkfs -t acfs /dev/asm/v_dbdump2-115

AS ROOT

/sbin/acfsutil registry -a /dev/asm/v_dbdump2-115 /oracle/dbdump2

mount -t acfs /dev/asm/v_dbdump2-115 /oracle/dbdump2

chown -R oracle:dba /oracle/dbdump2

NOTE - I can't remember if we need to create the directory "/oracle/dbdump2" before doing the root commands. If the commands fail then we obviously need to do "mkdir /oracle/dbdump2".

Doc Id 1371067.1 explains about issues with the asm drivers and how to get around them


 Comment 

ACFS VOLUME COMMANDS

on July 2, 2014 at 2:12 pm
Posted In: Oracle RAC (Cat)

Let's create a script to mount/umount an ACFS filesystem using the Oracle user (oracle).

cd /u01/app/grid/product/11.2.0/grid/crs/script

vi acfs_ORAHOME11GR2.sh

#!/bin/sh

case $1 in
'start')
  # Check if the volume is enabled; if not, enable it
  if [ ! -f /dev/asm/db112_dbh1-220 ]; then
    ORACLE_HOME=/u01/app/grid/product/11.2.0/grid
    ORACLE_SID=+ASM
    $ORACLE_HOME/bin/asmcmd <<EOF
volenable -G DG_ORAHOME11GR2_1 db112_dbh1
EOF
  fi
  # Mount the filesystem
  /usr/bin/sudo /bin/mount -t acfs /dev/asm/db112_dbh1-220 /u01/app/oracle/product/11.2.0/dbhome_1
  # Change permissions of the filesystem
  if [ $? = "0" ]; then
    /usr/bin/sudo /bin/chown oracle:oinstall /u01/app/oracle/product/11.2.0/dbhome_1
    /usr/bin/sudo /bin/chmod 775 /u01/app/oracle/product/11.2.0/dbhome_1
    exit 0
  fi
  RET=$?
  ;;
'stop')
  # Stop the filesystem
  /usr/bin/sudo /bin/umount -t acfs /dev/asm/db112_dbh1-220
  RET=$?
  ;;
'clean')
  /usr/bin/sudo /bin/umount -t acfs /dev/asm/db112_dbh1-220
  RET=$?
  ;;
'check')
  # Check if the filesystem is mounted
  OUTCMD=`/bin/mount | grep '/dev/asm/db112_dbh1-220' | wc -l`
  if [ $OUTCMD = 1 ]; then
    RET=0
  else
    RET=1
  fi
  ;;
esac

# 0: success; 1: error
if [ $RET -eq 0 ]; then
  exit 0
else
  exit 1
fi

 

I used sudo to perform the root tasks, so you must allow the oracle user to perform those root actions without a password.

Using root privileges (or the root user), type "vi /etc/sudoers" to edit the sudoers file.

 

# vi /etc/sudoers

Comment out the line "Defaults    requiretty"

# Defaults    requiretty

Add user Oracle above this:

# %wheel        ALL=(ALL)       NOPASSWD: ALL

oracle          ALL=(ALL)       NOPASSWD: /bin/chown, /bin/chmod, /bin/mount, /bin/umount

Registering Resource in OHAS

Register the resource "acfs.orahome11gR2_1.fs" on OHAS, creating a dependency on the diskgroup "DG_ORAHOME11GR2_1".

START_DEPENDENCIES: Specifies a set of relationships that OHAS considers when starting a resource.

 

You can specify a space-separated list of dependencies on several resources and resource types on which a particular resource can depend.

hard: Specify a hard start dependency for a resource when you want the resource to start only when a particular resource or resource of a particular type starts.

Oracle recommends that resources with hard start dependencies also have pullup start dependencies.

pullup: When you specify the pullup start dependency for a resource, then this resource starts as a result of named resources starting.

 

So, we must specify START_DEPENDENCIES referencing diskgroup DG_ORAHOME11GR2_1, which holds the ORACLE_HOME filesystem. That means that when you try to start the resource "acfs.orahome11gR2_1.fs", diskgroup DG_ORAHOME11GR2_1 must be started; if it is not, "pullup" will try to start diskgroup DG_ORAHOME11GR2_1 before trying to start the resource "acfs.orahome11gR2_1.fs".

STOP_DEPENDENCIES: Specifies a set of relationships that OHAS considers when stopping a resource.

hard: Specify a hard stop dependency for a resource that you want to stop when named resources or resources of a particular resource type stop.

 

So, if we try to stop (using OHAS) diskgroup DG_ORAHOME11GR2_1 while the ACFS (ORACLE_HOME) remains mounted (ONLINE), OHAS must raise CRS-2529.

 

CRS-2529: Unable to act on '<resource>' because that would require stopping or relocating '<dependent resource>', but the force option was not specified

$ crsctl add resource acfs.orahome11gR2_1.fs
-type local_resource
-attr "
ACTION_SCRIPT=/u01/app/grid/product/11.2.0/grid/crs/script/acfs_ORAHOME11GR2.sh,
AUTO_START=always,
START_TIMEOUT=100,
STOP_TIMEOUT=100,
CHECK_INTERVAL=10,
START_DEPENDENCIES=hard(ora.DG_ORAHOME11GR2_1.dg)pullup(ora.DG_ORAHOME11GR2_1.dg),
STOP_DEPENDENCIES='hard(ora.DG_ORAHOME11GR2_1.dg)'"

 

More info about the attributes used here can be found at:

http://download.oracle.com/docs/cd/E11882_01/rac.112/e16794/resatt.htm

 

$ crsctl status resource acfs.orahome11gR2_1.fs

NAME=acfs.orahome11gR2_1.fs

TYPE=local_resource

TARGET=OFFLINE

STATE=OFFLINE

 

$ df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/mapper/VolGroup00-LogVol00

54G  9.0G   42G  18% /

/dev/sda1              99M   31M   64M  33% /boot

tmpfs                 2.5G  176M  2.3G   8% /dev/shm

 

# Mounting Filesystem with OHAS:

$ crsctl start resource acfs.orahome11gR2_1.fs

CRS-2672: Attempting to start 'acfs.orahome11gR2_1.fs' on 'macedonia'

CRS-2676: Start of 'acfs.orahome11gR2_1.fs' on 'macedonia' succeeded

 

$ df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/mapper/VolGroup00-LogVol00

54G  9.0G   42G  18% /

/dev/sda1              99M   31M   64M  33% /boot

tmpfs                 2.5G  176M  2.3G   8% /dev/shm

/dev/asm/db112_dbh1-220

15G  10M   15G  1% /u01/app/oracle/product/11.2.0/dbhome_1

 

$ crsctl status resource acfs.orahome11gR2_1.fs

NAME=acfs.orahome11gR2_1.fs

TYPE=local_resource

TARGET=ONLINE

STATE=ONLINE on macedonia

Trying to stop the diskgroup without unmounting the filesystem:

 

$ srvctl stop diskgroup -g DG_ORAHOME11GR2_1

PRCR-1065 : Failed to stop resource ora.DG_ORAHOME11GR2_1.dg

CRS-2529: Unable to act on 'ora.DG_ORAHOME11GR2_1.dg' because that would require stopping or relocating 'acfs.orahome11gR2_1.fs', but the force option was not specified

 

Now you can restart your server and make sure that the filesystem is mounted at startup by OHAS.

 

If everything is ok you can install Oracle Software on ACFS.

 Comment 

ORA-27300 – fork failed

on July 2, 2014 at 2:03 pm
Posted In: Oracle RAC (Cat)

ORA-27300: OS system dependent operation:fork failed with status: 11

ORA-27301: OS failure message: Resource temporarily unavailable

 

ORA-27302: failure occurred at: skgpspawn3

The processes and resources started by CRS (Grid Infrastructure) do not inherit the ulimit setting for "max user processes" from the /etc/security/limits.conf setting (Doc ID 1594606.1).

This is happening due to bug 17301761 "THE MAXIMUM PROCESS VALUE OF OHASD.BIN IS TOO SMALL IF THE CRS STARTS MANUALLY".

1) Go to GI_HOME/bin
2) Make a backup of ohasd script file
3) In the ohasd script file, locate the following code:

Linux)
# MEMLOCK limit is for Bug 9136459
ulimit -l unlimited
if [ "$?" != "0" ]
then
$CLSECHO -p has -f crs -l -m 6021 "l" "unlimited"
fi
ulimit -c unlimited
if [ "$?" != "0" ]
then
$CLSECHO -p has -f crs -l -m 6021 "c" "unlimited"
fi

ulimit -n 65536
In the above code, insert the following line just before the line with "ulimit -n 65536":
ulimit -u 16384
4) Recycle CRS manually so that ohasd picks up the new ulimit setting.
After the database is started, issue "ps -ef | grep pmon" and get its pid.
Then issue "cat /proc/<pid>/limits | grep process" and check that Max processes is set to 16384.
Setting the number of processes to 16384 should be enough for most servers, since having 16384 processes normally means the server is very heavily loaded. Using a smaller number like 4096 or 8192 should also suffice for most users.
In addition to the above, the ohasd template needs to be modified to ensure that the new ulimit setting persists even after a patch is applied.
1) Go to GI_HOME/crs/sbs
2) Make a backup of crswrap.sh.sbs
3) In crswrap.sh.sbs, insert the following line just before the line "# MEMLOCK limit is for Bug 9136459":
ulimit -u 16384
Finally, although the above setting successfully increases the number-of-processes limit, please test this on a test server before setting the ulimit in production.

 Comment 

ssh to host fails – connection refused

on July 2, 2014 at 1:58 pm
Posted In: Oracle odds and ends (Cat)

[root]# nslookup pcrmlp1dbx

Address: 10.218.11.157

[root]# dcli -g /home/oracle/dbs_group -l root "ifconfig -a | grep IPADDR"

NODE: inet addr:IPADDR  Bcast:10.218.15.255  Mask:255.255.248.0

[root]# ifconfig -a

.

bondeth0:8 Link encap:Ethernet  HWaddr 90:E2:BA:23:E2:E4

inet addr:IPADDR Bcast:10.218.15.255  Mask:255.255.248.0

UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1

A putty connection to vipname fails (Network error: Connection refused)

===

Maybe ssh is running on a different IP address?

Do a netstat -an | grep IPADDR and ensure :22 (sshd) appears.

Update /etc/ssh/sshd_config (requires HP/root):

ListenAddress 0.0.0.0 - listen on all IP addresses

ListenAddress IPADDR - adds your VIP to the list (you'd need to add this to all cluster nodes)

 

 

 Comment 

ORA-15063: ASM discovered an insufficient number of disks for diskgroup “DATA”

on July 2, 2014 at 1:54 pm
Posted In: Oracle RAC (Cat)

Mounting Diskgroup Fails With ORA-15063 and V$ASM_DISK Shows PROVISIONED (Doc ID 1487443.1)

Symptoms

1. ASM is restarted and some ASM diskgroups will not mount, with an error similar to the following.

SQL> alter diskgroup DATA mount;

alter diskgroup DATA mount

*

ERROR at line 1:

ORA-15032: not all alterations performed

ORA-15017: diskgroup “DATA” cannot be mounted

ORA-15063: ASM discovered an insufficient number of disks for diskgroup “DATA”

2. The v$asm_disk view returns all ASM device information, but the v$asm_disk.header_status column shows the status as

"PROVISIONED".

SQL> select group_number gn,disk_number dn, mount_status, header_status,mode_status,state, total_mb,

free_mb, label, path from v$asm_disk order by group_number, disk_number;

GN DN MOUNT_STATUS HEADER_STATU MODE_STATUS STATE TOTAL_MB FREE_MB

LABEL PATH

0 0 CLOSED PROVISIONED ONLINE NORMAL 0 0

CRS ORCL:CRS

0 1 CLOSED PROVISIONED ONLINE NORMAL 0 0

DATA ORCL:DATA

3. kfed output for the device affected shows 0xaa55 on kfdhdb.acdb.ub2spare field ( 2 bytes at 510th location ).

Example>

$ kfed read /dev/oracleasm/disks/DATA

kfbh.endian: 1 ; 0x000: 0x01

kfbh.hard: 130 ; 0x001: 0x82

..

kfdhdb.acdb.ents: 0 ; 0x1dc: 0x0000

kfdhdb.acdb.ub2spare: 43605 ; 0x1de: 0xaa55

Cause

0xaa55 on a little-endian server like Linux, or 0x55aa on a big-endian server like Sun SPARC, indicates a boot signature (or magic number) of an MBR (Master Boot Record) partition.

This issue could happen outside ASM when some tool on the OS (or a human) puts partition information on the affected device.

Solution

This situation can happen while the affected diskgroup is mounted, and a maintenance window needs to be scheduled to repair the device header.

1. Shutdown ASM instance

2. Take a 1M dd backup just in case.

$ dd if=/dev/oracleasm/disks/DATA of=/tmp/DATA.dd bs=1M count=1

3. Repair the device header in the following way. Repeat the following example procedure for all the affected devices.

Check the AU (Allocation Unit) size for the disk in the following way, for the aus option in the kfed command.

$ kfed read /dev/oracleasm/disks/DATA | grep ausize

kfdhdb.ausize: 1048576 ; 0x0bc: 0x00100000

^^^^^^

If ASM version is 11.1.0.7 or above,

$ kfed repair /dev/oracleasm/disks/DATA aus=1048576

If ASM version is less than 11.1.0.7,

$ kfed read /dev/oracleasm/disks/DATA text=./DATA.kfed

$ kfed write /dev/oracleasm/disks/DATA CHKSUM=YES aus=1048576 text=./DATA.kfed

4. Start up ASM and mount the affected diskgroup.

5. The procedure above only works with the symptom described above (0xaa55 at the 510th location of the device header).

There are many other possibilities causing PROVISIONED status, in which case the procedure above may not work.

Note that if the procedure above does not resolve the issue, the corruption must have spread to other blocks beyond the disk header. Contact Oracle Support for further investigation.

 

 Comment 

ORA-19643 – INCREMENTAL-START SCN IS TOO RECENT

on July 2, 2014 at 1:50 pm
Posted In: Oracle odds and ends (Cat)

Evidently one situation on 11.2.0.3 where the ORA-19643 can happen is where an rman catalog is shared between primary and standby and then for example an INCREMENTAL LEVEL 1 is created at PRIMARY, and the STANDBY has not yet applied logs - which can advance the standby datafile's checkpoint beyond the last incremental backup's checkpoint scn.

Thus the error ORA-19643 may be signalled intermittently because of a timing issue, i.e., it depends on when the incremental backup is created at primary and the delay in applying the logs at standby.

Is it possible that this could be the situation here (i.e. are you interleaving incrementals between primary and standby, with standby logs being applied after some, intentional or otherwise, delay)?

(for ref Bug 16622161 : ORA-19643: DATAFILE 11: INCREMENTAL-START SCN IS TOO RECENT)

 Comment 

Grid Control Agent startup

on July 2, 2014 at 1:49 pm
Posted In: Oracle odds and ends (Cat)

1) requiretty commented out in /etc/sudoers file

2) Following sudo commands added on all hosts for oracrs:

sudo -u oraoem /oracle/product/1120/crs/script/agentstartup.sh

3) Create the file /oracle/product/1120/crs/crs/script/agentstartup.sh containing:

#!/bin/bash

#

ORACLE_BASE=/oracle/product/middleware

ORACLE_HOME=/oracle/product/middleware/agent12c

ORACLE_SID=agent12c

LD_LIBRARY_PATH=$ORACLE_HOME/lib

PATH=$ORACLE_HOME/bin:$PATH

AGENT_HOME=/oracle/product/middleware/agent12c

export ORACLE_BASE

export ORACLE_HOME

export ORACLE_SID

export LD_LIBRARY_PATH

export PATH

export AGENT_HOME

agent_start () {

$AGENT_HOME/bin/emctl start agent

}

agent_stop () {

$AGENT_HOME/bin/emctl stop agent

}

agent_check () {

$AGENT_HOME/bin/emctl status agent

}

case "$1" in

start)

agent_start

;;

stop)

agent_stop

;;

check)

agent_check

;;

clean)

agent_clean $(ps -C emagent -o pid=)

;;

*)

echo $"Usage: `basename $0` {start|stop|status|clean}"

exit 1

esac

 

4) Create the file /oracle/product/1120/crs/crs/script/crs_agent12c.sh containing:

#!/bin/bash

#

# dbcagent.sh - script to start and stop the EM 12c agent

#

ORACLE_BASE=/oracle/product/middleware

ORACLE_HOME=/oracle/product/middleware/agent12c

ORACLE_SID=agent12c

LD_LIBRARY_PATH=$ORACLE_HOME/lib

PATH=$ORACLE_HOME/bin:$PATH

export ORACLE_BASE

export ORACLE_HOME

export LD_LIBRARY_PATH

export ORACLE_SID

export PATH

agent_start () {

sudo -u oraoem /oracle/product/1120/crs/crs/script/agentstartup.sh start

}

agent_stop () {

sudo -u oraoem /oracle/product/1120/crs/crs/script/agentstartup.sh stop

}

agent_check () {

sudo -u oraoem /oracle/product/1120/crs/crs/script/agentstartup.sh check

}

case "$1" in

start)

agent_start

;;

stop)

agent_stop

;;

check)

agent_check

;;

clean)

agent_clean $(ps -C emagent -o pid=)

;;

*)

echo $"Usage: `basename $0` {start|stop|status|clean}"

exit 1

esac

 

5) chmod 755 /oracle/product/1120/crs/crs/script/*

6) ON NODE

crsctl add resource app.agent_NODE -type application -attr "AUTO_START=restore, ACTION_SCRIPT=/oracle/product/1120/crs/crs/script/crs_agent12c.sh, CHECK_INTERVAL=60,FAILOVER_DELAY=0, FAILURE_INTERVAL=0,FAILURE_THRESHOLD=0,HOSTING_MEMBERS=NODE,PLACEMENT=restricted,RESTART_ATTEMPTS=10,SCRIPT_TIMEOUT=360"

7)

crsctl stop res app.agent_NODE etc all now works.

 

 Comment 

Oracle Network Connections / Listeners

on July 2, 2014 at 1:45 pm
Posted In: Oracle odds and ends (Cat)

A few sections from the available documentation on how oracle connections work.

May help, may not, but it’s all I can find.

SOURCE/CLIENT

The source/client is wyclapph017 IP 10.217.28.189

DATABASE HOST ADDRESSES

The cluster scan (Single Client Access Name) is racprd007b-scan on port 1526 (IP 10.217.105.73 and 74 and 75).

These addresses are shared across doclorad004 and doclorad005.

The database listener is on port 1536 and uses the IP addresses:

Doclorad004

10.217.105.98  doclorad004.uk.centricaplc.com

10.217.105.147 doclorad004-vip.uk.centricaplc.com

Doclorad005

10.217.105.174 doclorad005.uk.centricaplc.com

10.217.105.148 doclorad005-vip.uk.centricaplc.com

===

"When a client submits a request, the SCAN listener listening on a SCAN IP address and the SCAN port is contacted on the client's behalf. Because all services on the cluster are registered with the SCAN listener, the SCAN listener replies with the address of the local listener on the least-loaded node (each SCAN listener keeps updated cluster load statistics) where the service is currently being offered. Finally, the client establishes the connection to the service through the listener on the node where the service is offered. All of these actions take place transparently to the client, without any explicit configuration required in the client."

 

===

The listener uses the dispatcher as a type of service handler to which it can direct client requests.
When a client request arrives, the listener performs one of the following actions:
• Hands the connection request directly to a dispatcher.
• Issues a redirect message to the client, containing the protocol address of a dispatcher.

The client then terminates the network session to the listener and establishes a network session to the dispatcher, using the network address provided in the redirect message.

The listener uses direct hand off whenever possible.
Redirect messages are used, for example, when dispatchers are remote to the listener.

===

When the client makes a connection, Connection packet is sent to a Scan listener.
Connection is then redirected to a Node Listener.
Node Listener forks the shadow process for the dedicated connection to use or passes the connection onto dispatcher for shared server connection model.
The most intense operation a TNS listener does, the fork (bequeath) of a shadow process is completed by the Node listener.

 

How does the client connection work through SCAN?
The client connection attempt, based on the IP provided from DNS, reaches one of the SCAN listeners.
The SCAN listener is aware of the load and availability conditions of all instances in the cluster.
It directs the client to the most suitable instance.
This is done by directing the client to the listener on the node-vip of the most suitable instance.
Once the client connection reaches the local listener, it hands off the connection to the database.

 Comment 

MOVE TABLESPACE IN ASM

on July 2, 2014 at 1:45 pm
Posted In: Oracle odds and ends (Cat)

SQL> select tablespace_name, file_id, file_name from dba_data_files where tablespace_name in ('SYSTEM','SYSAUX');

TABLESPACE    FILE_ID FILE_NAME

SYSTEM              1 +DBAASPREP_A_DATA1/DB1_a/datafile/system.355.815059249

SYSAUX              2 +DBAASPREP_A_DATA1/DB1_a/datafile/sysaux.357.815059395

oracle DB1_1 NODE1> srvctl stop database -d DB1_A

SQL> startup mount;

oracle DB1_1 NODE1> rman target /

RMAN> run{

2> BACKUP AS COPY DATAFILE 1 format "+DBAASPREP_A_SYSTEM1";

3> BACKUP AS COPY DATAFILE 2 format "+DBAASPREP_A_SYSTEM1";

4> }

RMAN> run{

2> SWITCH DATAFILE 1 TO DATAFILECOPY '+DBAASPREP_A_SYSTEM1/DB1_a/datafile/system.348.817731609';

3> SWITCH DATAFILE 2 TO DATAFILECOPY '+DBAASPREP_A_SYSTEM1/DB1_a/datafile/sysaux.347.817731625';

4> }

datafile 1 switched to datafile copy

input datafile copy RECID=83 STAMP=817731619 file name=+DBAASPREP_A_SYSTEM1/DB1_a/datafile/system.348.817731609

datafile 2 switched to datafile copy

input datafile copy RECID=84 STAMP=817731631 file name=+DBAASPREP_A_SYSTEM1/DB1_a/datafile/sysaux.347.817731625

 

SQL> alter database open;

SQL> select tablespace_name, file_id, file_name from dba_data_files where tablespace_name in ('SYSTEM','SYSAUX');

TABLESPACE    FILE_ID FILE_NAME

———- ———- ————————————————————

SYSTEM              1 +DBAASPREP_A_SYSTEM1/DB1_a/datafile/system.348.817731609

SYSAUX              2 +DBAASPREP_A_SYSTEM1/DB1_a/datafile/sysaux.347.817731625

 

ASMCMD> rm +DBAASPREP_A_DATA1/DB1_a/datafile/SYSTEM.355.815059249

ASMCMD> rm +DBAASPREP_A_DATA1/DB1_a/datafile/sysaux.357.815059395

ASMCMD>

 

 

 Comment 

NID

on July 2, 2014 at 1:41 pm
Posted In: Oracle odds and ends (Cat)

export DBNAME=XYTLSU01

startup mount

nid target="sys/password1"

startup mount;

alter database open resetlogs;

select dbid,name,open_mode,activation#,created from v$database;

 Comment 

Grid Shows Scan listener down

on July 2, 2014 at 1:40 pm
Posted In: Oracle RAC (Cat)

$ ps -ef|grep SCAN

oraoem    2452 29821  0 11:18 pts/1    00:00:00 grep SCAN

oracle   21115     1  0 Mar07 ?        00:04:42 /oracle/GRID/11203/bin/tnslsnr LISTENER_SCAN2 -inherit

oracle   21117     1  0 Mar07 ?        00:04:43 /oracle/GRID/11203/bin/tnslsnr LISTENER_SCAN3 -inherit

$ grep SCAN $AGENT_HOME/sysman/emd/targets.xml

In fact, Grid has got all three SCAN listeners in the wrong place.

 

To fix, run the following emcli commands on the Grid OMS server (as oraoem):

$ id

uid=60003(oraoem) gid=602(oinstall) groups=600(dba),601(oper),602(oinstall),603(asmadmin),604(asmoper),605(asmdba)

$ which emcli

/oracle/product/middleware/oms11g/bin/emcli

$ emcli login -username=sysman

Enter password

Login successful

$ emcli relocate_targets -src_agent=NODE2:3872 -dest_agent=NODE1:3872 -target_name=LISTENER_SCAN1_scanname -target_type=oracle_listener -copy_from_src -force=yes

Moved all targets from NODE2:3872 to NODE1:3872

$ emcli relocate_targets -src_agent=NODE1:3872 -dest_agent=NODE2:3872 -target_name=LISTENER_SCAN2_scanname -target_type=oracle_listener -copy_from_src -force=yes

Moved all targets from NODE1:3872 to NODE2:3872

$ emcli relocate_targets -src_agent=NODE1:3872 -dest_agent=NODE2:3872 -target_name=LISTENER_SCAN3_scanname -target_type=oracle_listener -copy_from_src -force=yes

Moved all targets from NODE1:3872 to NODE2:3872

$ emcli logout

Logout successful

 

 Comment 

ora.diskmon in offline state

on July 2, 2014 at 1:34 pm
Posted In: Oracle RAC (Cat)

"…. If you installed or upgraded to GI 11.2.0.3, you will see "ora.diskmon" in OFFLINE status. … it's disabled in non-Exadata environments, so just ignore it."

 Comment 

CPU, CORE information

on July 2, 2014 at 1:18 pm
Posted In: Configuration (Cat)

# Processors
echo "CPUS = "`cat /proc/cpuinfo | grep "processor" | wc -l`

# Cores
NO_CORES=`cat /proc/cpuinfo | grep "cpu cores" | cut -f3 -d" " | uniq`
echo "Cores = $NO_CORES"

# Hyperthreading
NO_SIBS=`cat /proc/cpuinfo | grep "siblings" | cut -f2 -d" " | uniq`

echo "Hyperthreading = "`[[ $NO_SIBS -gt $NO_CORES ]] && echo "Yes" || echo "No"`

 Comment 

strace

on July 2, 2014 at 1:08 pm
Posted In: Resource Management (Cat)

strace -f -o asm_rename.out /etc/init.d/oracleasm force-renamedisk

 Comment 

Change ASM password

on July 2, 2014 at 1:07 pm
Posted In: Oracle RAC (Cat)

Possibly the easiest was to change the ASM password is to use the following on each instance.

$ +ASM1

ASMCMD> orapwusr --modify --password sys

Enter password: ******
ASMCMD> exit

 

NOTE: This is not applicable for SYSASM privilege.

 Comment 

Alerts when standby is down

on July 2, 2014 at 1:06 pm
Posted In: Oracle Data Guard Notes (Cat)

STATUS:

1) Change the mode to max performance.
2) change log modes to ASYNC
3) Set transport=off
4) set apply=off
Perform 3 log switches on each of the two live instances.
Then record the state of dataguard for the configuration, each cluster database and each instance.
Here you will see that:
1) INTENDED_STATE IS TRANSPORT=OFF
2) All databases are in ASYNC mode
3) All databases are in MAXPERFORMANCE

There are no alerts in the alert log.
We then shutdown the standby database and start getting alerts in the primary database alert log which, for live systems, are sent to production support.

Oracle's answer to "why the alerts" is:

The reason is that when standby db is down, the client connection from EM (via data guard broker) which tried to connect to (SERVICE_NAME=PCTIR1_A_DGB.uk.centricaplc.com) failed since the service name is not there when the db is down.

If we disable dg broker then the alerts stop.

SO

 

Whenever we need to bring down a production standby we have to disable DG broker to ensure no alerts are generated for prod support by the live database.
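A minimal sketch, run on the primary before the standby outage and reversed afterwards:

ALTER SYSTEM SET dg_broker_start = FALSE SID='*';
-- ... standby maintenance window ...
ALTER SYSTEM SET dg_broker_start = TRUE SID='*';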

 Comment 

restore database validate

on July 2, 2014 at 1:03 pm
Posted In: Oracle Rman Notes (Cat)

The 'restore database validate' command only checks the latest database backup. If you also want to check the archivelogs you should run:

restore archivelog all validate;

That will check the archivelogs based on the retention policy of your database. If you want to restrict this validation to the last 3 days of archives, use:

restore archivelog from time 'sysdate-3' validate;

Make sure you also check the controlfile and the spfile with:

restore controlfile validate;

restore spfile validate;

 

 Comment 

Service Issue on Grid

on July 2, 2014 at 1:01 pm
Posted In: Oracle RAC (Cat)

alter system set remote_listener='';

wait till the scan listener is de-registered for the services of that instance on all the nodes

then

alter system set remote_listener='<scan_name>:<port>';

Check whether the scan listener registers properly

 Comment 

OEM – Grid Control – Incident Rule creation

on July 2, 2014 at 1:00 pm
Posted In: Oracle odds and ends (Cat)

1) Create the job you want to be alerted on.

2) Set yourself up under "Enterprise Manager Password & E-mail"

3) Go to Setup -> Incidents -> Incident Rules

4) Select “Create Rule Set”

a. Create the rule set for the job in question and then go back

 

5) Select Actions -> e-mail -> Subscribe me

 Comment 

Archive log history

on July 2, 2014 at 12:54 pm
Posted In: Oracle odds and ends (Cat)

Select inst_id, sum(blocks*block_size)/1024/1024 as MB, trunc(completion_time)
From gv$archived_log
Group by inst_id, trunc(completion_time);

AND

SQL TO GET "APPROXIMATE" TIME TO WRITE AN ARCHIVELOG TO DISK

select distinct a.thread#, a.sequence#, a.completion_time,
(a.blocks * a.block_size)/1024/1024 as MB,
(a.completion_time - b.first_time)*86400 as "Elapsed Write Secs",
ROUND ( ((a.blocks * a.block_size)/1024/1024) / DECODE(((a.completion_time - b.first_time)*86400),0,1,((a.completion_time - b.first_time)*86400) )) AS "M/S"
from v$archived_log a, v$archived_log b
where a.sequence# +1 = b.sequence#
and a.thread# = b.thread#
and a.name like '+%' -- SHOULD USE destination = 1, not name
and a.completion_time between
to_date('01/aug/2013 17:00:00','dd/mon/yyyy hh24:mi:ss') and
to_date('19/aug/2013 22:00:00','dd/mon/yyyy hh24:mi:ss')
and a.thread# = 1
order by 1,2;

THREAD#  SEQUENCE# COMPLETION_TIME              MB Elapsed Write Secs        M/S
---------- ---------- -------------------- ---------- ------------------ ----------
1      19508 17-aug-2013 14:22:25 2555.51221                 13        197
1      19509 17-aug-2013 14:31:17 2562.60986                 11        233
1      19510 17-aug-2013 14:37:05 2534.86865                 11        230

etc

 

AND

SQL TO GET “APPROXIMATE” TIME TO WRITE AN ARCHIVELOG TO DISK

select distinct a.thread#, a.sequence#, a.completion_time,

(a.blocks * a.block_size)/1024/1024 as MB,

(a.completion_time – b.first_time)*86400 as “Elapsed Write Secs”,

ROUND ( ((a.blocks * a.block_size)/1024/1024) / DECODE(((a.completion_time – b.first_time)*86400),0,1,((a.completion_time – b.first_time)*86400) )) AS “M/S”

from v$archived_log a, v$archived_log b

where a.sequence# +1 = b.sequence#

and a.thread# = b.thread#

and a.name like ‘+%’ — SHOULD USE destination = 1, not name

and a.completion_time between

to_date(’01/aug/2013 17:00:00′,’dd/mon/yyyy hh24:mi:ss’) and

to_date(’19/aug/2013 22:00:00′,’dd/mon/yyyy hh24:mi:ss’)

and a.thread# = 1

order by 1,2;

THREAD#  SEQUENCE# COMPLETION_TIME              MB Elapsed Write Secs        M/S

———- ———- ——————– ———- —————— ———-

1      19508 17-aug-2013 14:22:25 2555.51221                 13        197

1      19509 17-aug-2013 14:31:17 2562.60986                 11        233

1      19510 17-aug-2013 14:37:05 2534.86865                 11        230

 

 

etc

 Comment 

Modify SCAN Listener

on July 2, 2014 at 12:50 pm
Posted In: Oracle RAC (Cat)

srvctl config scan_listener

srvctl modify scan_listener -p TCP:1522

srvctl config scan_listener

srvctl stop scan_listener

 

srvctl start scan_listener
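
To confirm the listeners come back cleanly on the new port, a quick sketch (LISTENER_SCAN1 is the default name of the first SCAN listener):

srvctl status scan_listener
lsnrctl status LISTENER_SCAN1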

 Comment 

OEM SNMP Traps

on July 2, 2014 at 12:45 pm
Posted In: Oracle odds and ends (Cat)

1. Log on to OEM 12c as SYSMAN

2. From the Setup menu, select Notifications, Notification Methods.

 Comment 

RMAN Restore multi-piece SR

on July 2, 2014 at 12:38 pm
Posted In: Oracle Rman Notes (Cat)

set serveroutput on size 1000000

DECLARE

v_dev varchar2(50);
v_done boolean:=FALSE;

type t_fileTable is table of varchar2(255) index by binary_integer;
v_fileTable t_fileTable;

type t_dataTable is table of varchar2(4000) index by binary_integer;
v_dataTable t_dataTable;

v_maxPieces number:=1;
v_maxFiles number:=1;

v_restore_from varchar2(4);
v_file_no number:=0;
v_file_name varchar2(513);
v_debug boolean:=FALSE; -- Optional: change to TRUE to debug the PL/SQL.
v_msr boolean;

BEGIN

-- CHANGE 1.
-- Populate the file table below with the datafiles to be restored.
-- This matches the output from the logs.
v_dataTable(1):='fno=1 name=/oracle/dbdump/VCEMV1/datafile1.dbf';

-- CHANGE 2.
-- Set the below to the max number of files to be restored from the above table.
v_maxFiles := 1;

-- CHANGE 3.
-- Set the below to the names of the backup pieces you wish to reference.
-- Use backup.log if available to locate all backup pieces for a backup.
-- If the backup is on disk ensure you include directories as well;
-- if tape then just use the backup piece name.
v_fileTable(1):='/oracle/dbdump/VCEMV1/dblevel0_VCEMV1_2355_1.rman';
v_fileTable(2):='/oracle/dbdump/VCEMV1/dblevel0_VCEMV1_2355_2.rman';
v_fileTable(3):='/oracle/dbdump/VCEMV1/dblevel0_VCEMV1_2355_3.rman';
v_fileTable(4):='/oracle/dbdump/VCEMV1/dblevel0_VCEMV1_2355_4.rman';

-- CHANGE 4.
-- Set the below to the total number of backup pieces in the backup to restore.
v_maxPieces:=4;

-- CHANGE 5. (Optional if tape)
-- Set the below to TAPE if restoring from tape.
v_restore_from := 'DISK';

-- CHANGE 6 - change this to TRUE for a multi-section backup.
v_msr := TRUE;

IF (v_msr) THEN
v_maxFiles := 1;
END IF;

dbms_output.put_line('Restoring All Data Files :');
dbms_output.put_line('--------------------------');

FOR i IN 1..v_maxFiles LOOP

v_file_no := substr(v_dataTable(i),5,instr(v_dataTable(i),' ',1,1)-5);
v_file_name := substr(v_dataTable(i),instr(v_dataTable(i),'=',1,2)+1);

dbms_output.put_line('Attempting To Restore :'||v_file_name);

FOR i IN 1..v_maxPieces LOOP
BEGIN
IF v_restore_from = 'DISK' THEN
v_dev:=sys.dbms_backup_restore.deviceAllocate(type=>null,ident=>'D1');
ELSE
-- CHANGE 7. (Optional, tape only).
-- Remember to set params correctly for the tape media.
-- If not required then remove the parameter.
v_dev:=sys.dbms_backup_restore.deviceAllocate(type=>'sbt_tape',ident=>'T1',params=>'SBT_LIBRARY=oracle.disksbt,ENV=(BACKUP_DIR=/u003/backup/ORA1020/)');
END IF;

sys.dbms_backup_restore.restoreSetDatafile;
sys.dbms_backup_restore.restoreDataFileTo(dfnumber=>v_file_no,toname=>v_file_name);

IF (i = 1 AND v_msr) THEN
sys.dbms_backup_restore.initMSR(dfnumber=>v_file_no,fname=>v_file_name);
END IF;

dbms_output.put_line('------------------------');
dbms_output.put_line('Trying : '||v_fileTable(i)||' From : '||v_restore_from);

sys.dbms_backup_restore.restoreBackupPiece(done=>v_done,handle=>v_fileTable(i),params=>null);
EXCEPTION
WHEN OTHERS THEN
IF v_debug THEN
dbms_output.put_line(SQLERRM);
ELSE
dbms_output.put_line('Data File: Not Found');
END IF;
sys.dbms_backup_restore.deviceDeallocate;
sys.dbms_backup_restore.restoreCancel;
END;

IF v_done THEN
dbms_output.put_line('Data File : Found');
dbms_output.put_line('------------------------');
dbms_output.put_line('RESTORED Data File To: '||v_file_name);
sys.dbms_backup_restore.deviceDeallocate;
sys.dbms_backup_restore.restoreCancel;
v_done:=FALSE;
EXIT WHEN NOT v_msr;
END IF;

END LOOP;

IF (v_msr) THEN
sys.dbms_backup_restore.setParms(p0 => 5, p1 => v_file_no, p5 => v_file_name);
END IF;

END LOOP;

END;
/

 Comment 

How to check the setting of all hidden parameters

on July 2, 2014 at 12:37 pm
Posted In: Oracle odds and ends (Cat)

Run the following from an SQL*Plus session (connected as SYS):

ttitle - center 'All Instance Parameters' skip 2
col ksppinm  format a28 heading 'Parameter' justify c trunc
col ksppstvl format a40 heading 'Value'     justify c trunc
col ksppstdf format a8  heading 'Default?'  justify c trunc

select
a.ksppinm,
b.ksppstvl,
b.ksppstdf
from
x$ksppi a, x$ksppcv b
where a.indx = b.indx
order by a.ksppinm;

 Comment 

Parallel Update SQL

on July 2, 2014 at 12:36 pm
Posted In: Oracle odds and ends (Cat)

set serveroutput on

DECLARE

l_sql_stmt VARCHAR2(1000);
l_try NUMBER;
l_status NUMBER;

BEGIN

-- Create the TASK
DBMS_PARALLEL_EXECUTE.CREATE_TASK ('blds_convert');

-- Chunk the table by ROWID
DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID('blds_convert','SAPISU','SWW_CONTOB', true, 5);

-- Execute the DML in parallel
l_sql_stmt := 'update /*+ ROWID (dda) */ SAPISU.SWW_CONTOB e SET e.LOGSYS = ''AAA-100'' WHERE e.LOGSYS = ''JBA-100'' and rowid BETWEEN :start_id AND :end_id';
-- l_sql_stmt := 'update /*+ ROWID (dda) */ SWW_CONTOB e SET e.LOGSYS = ''AAA-100''';

dbms_output.put_line('SQL: ' || l_sql_stmt );

DBMS_PARALLEL_EXECUTE.RUN_TASK('blds_convert', l_sql_stmt, DBMS_SQL.NATIVE, parallel_level => 10);
-- dbms_output.put_line('start: ' || start_id || ' end ' || end_id );

-- If there is an error, RESUME it for at most 2 times.
L_TRY := 0;
L_status := DBMS_PARALLEL_EXECUTE.TASK_STATUS('blds_convert');
dbms_output.put_line('OUT: ' || L_TRY || ' Status ' || L_status );

WHILE (L_TRY < 2 AND L_status != DBMS_PARALLEL_EXECUTE.FINISHED)
LOOP
L_TRY := L_TRY + 1;
DBMS_PARALLEL_EXECUTE.RESUME_TASK('blds_convert');
L_status := DBMS_PARALLEL_EXECUTE.TASK_STATUS('blds_convert');
END LOOP;

-- Done with processing; drop the task
DBMS_PARALLEL_EXECUTE.DROP_TASK('blds_convert');

END;
/
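
While the task runs (or after a failure), progress can be checked from the chunk view; a minimal sketch, run from another session (the task name matches the one above):

SELECT status, COUNT(*)
FROM user_parallel_execute_chunks
WHERE task_name = 'blds_convert'
GROUP BY status;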

 Comment 

Huge Pages

on July 2, 2014 at 12:34 pm
Posted In: Oracle odds and ends (Cat)

cat /proc/sys/vm/nr_hugepages

The HugePages settings should be set differently on each size of server, as below:

Small servers:     vm.nr_hugepages = 8961

Medium servers: vm.nr_hugepages = 23897

 

Large servers:     vm.nr_hugepages = 71691

 

HugePages is covered in MOS notes 361323.1 & 1134002.1.
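
A quick sketch for verifying the pages were actually reserved (the sysctl value shown is the medium-server figure from above):

# current allocation and usage
grep -i hugepages /proc/meminfo

# persist the setting across reboots
echo "vm.nr_hugepages = 23897" >> /etc/sysctl.conf
sysctl -p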

 


 Comment 

OEM – Grid Control – Timeout – 11g

on July 2, 2014 at 12:30 pm
Posted In: Oracle odds and ends (Cat)

1.$OMS_HOME/bin/emctl set property -name oracle.sysman.eml.maxInactiveTime -value -1 -sysman_pwd grid_pwd
Oracle Enterprise Manager 11g Release 1 Grid Control
Copyright (c) 1996, 2010 Oracle Corporation. All rights reserved.
Property oracle.sysman.eml.maxInactiveTime for oms EM_BOX:4889_Management_Service has been set to value -1

#oracle.sysman.eml.maxInactiveTime=time_in_minutes
-1 : Unlimited Duration

2. Restart services to apply changed value

[oracle@EM_BOX config]$ $OMS_HOME/bin/emctl stop oms
[oracle@EM_BOX config]$ $OMS_HOME/bin/emctl start oms

 Comment 

DataProtector Backup List

on July 2, 2014 at 11:50 am
Posted In: Unix notes (Cat)

/opt/omni/lbin/devbra -dev  |grep HU

# ./omnidb -session -datalist "Oracle8

./omnidb -session -type backup -last 1

./omnidb -session 2011/11/09-228 -report

./omnidb -session 2011/11/08-388 -report

./omnidb -session -datalist "Oracle8 DOC_CENTRICA_doclorap005_ORA_XE1_ON_Weekly" -last 7

/var/opt/omni/log

When encountering rman backup errors, it is worth taking a look in the directory /var/opt/omni/log at the files debug.log, inet.log and oracle8.log.

 

/opt/omni/bin/omnidb -oracle8 "nodename-b:NODENAME_ORA_DBNAME_ON_ARCHIVEDBNAME_A_16583:804933109:1>.dbf" -detail

 

omnidb is part of the Data Protector GUI and command line interface package.

List all backups from all servers, all users:

/opt/omni/bin/omnidb -session -detail

Because the channel for the archivelog backup had ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=DB_A,OB2BARLIST=NODENAME_ORA_DBNAME_ON_ARCHIVE)

Same as before, but limited to just the orabackup user:

/opt/omni/bin/omnidb -user orabackup -session -detail

One sample paragraph of output:

SessionID : 2012/05/09-213

Backup Specification: Oracle8 NODENAME_ORA_DBNAME_ON_ARCHIVE

Session type        : Backup (full)

Started             : Wed 09 May 2012 03:01:00 PM BST

Finished            : Wed 09 May 2012 03:09:56 PM BST

Status              : Completed

Number of warnings  : 0

Number of errors    : 0

User                : orabackup

Group               : dba

Host                : hpnodename.hpecs.net

List details of all objects in a particular backup sessionid:

/opt/omni/bin/omnidb -session "2012/05/09-213" -detail

(Sample paragraph of output)

Object name : nodename-b.hpecs.net:NODENAME_ORA_DBNAME_ON_ARCHIVEdbname_A_8011:782838060:1>.dbf

Object type        : Oracle8

Object status      : Completed

Started            : Wed 09 May 2012 03:01:59 PM BST

Finished           : Wed 09 May 2012 03:02:16 PM BST

Object size        : 154368 KB

Backup type        : Full

Protection         : Protected for 30 days

Catalog retention  : Same as data protection.

Version type       : Normal

Access             : Public

Number of warnings : 0

Number of errors   : 0

Device name        : VLS01_06.D02

Backup ID          : 2012/05/09-213

Copy ID            : 62653 (Orig)

Encrypted          : No

DiskAgent ID       : 1336572061

 

Search by RMAN handle:

/opt/omni/bin/omnidb -oracle8 "hpnodename.hpecs.net:NODENAME_ORA_DBNAME_ON_ARCHIVEDBNAME_A_8011:782838060:1>.dbf" -detail

(handle is NODENAME_ORA_PSTGC1_ON_ARCHIVEDBNAME_A_8011:782838060:1>.dbf)

SessionID     : 2012/05/09-213

Started            : Wed 09 May 2012 03:01:59 PM BST

Finished           : Wed 09 May 2012 03:02:16 PM BST

Object status      : Completed

Object size        : 154368 KB

Backup type        : Full

Protection         : Protected for 30 days

Catalog retention  : Same as data protection.

Version type       : Normal

Access             : Public

Number of warnings : 0

Number of errors   : 0

Device name        : VLS01_06.D02

Backup ID          : 2012/05/09-213

Copy ID            : 62653 (Orig)

Encrypted          : No

 Comment 

Finding out more about the automatic stats gathering job

on July 2, 2014 at 11:43 am
Posted In: Oracle 11.2 Notes (Cat)

Found this in note 1233203.1 (attached):

· How can I see the history of the automatic stats job for each day?

 

SELECT client_name, window_name, jobs_created, jobs_started, jobs_completed
 FROM dba_autotask_client_history
 WHERE client_name like '%stats%';
 
                                                  JOBS    JOBS     JOBS
 CLIENT_NAME                     WINDOW_NAME      CREATED STARTED  COMPLETED
 ------------------------------- ---------------- ------- -------- ----------
 auto optimizer stats collection THURSDAY_WINDOW        1        1          1
 auto optimizer stats collection SUNDAY_WINDOW          3        3          3
 auto optimizer stats collection MONDAY_WINDOW          1        1          1
 auto optimizer stats collection SATURDAY_WINDOW        2        2          2
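
A related check is whether the stats client is enabled at all; a minimal sketch against DBA_AUTOTASK_CLIENT:

SELECT client_name, status
FROM dba_autotask_client
WHERE client_name LIKE '%stats%';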

 

 

 Comment 

Monitor GCS Cache

on July 2, 2014 at 11:42 am
Posted In: Oracle RAC (Cat)

This section describes how to monitor GCS performance by identifying objects that are read and modified frequently, and the service times imposed by remote access. Waiting for blocks to arrive may constitute a significant portion of the response time, in the same way that reading from disk could increase block access delays; however, cache fusion transfers are in most cases faster than disk access latencies.

The following wait events indicate that the remotely cached blocks were shipped to the local instance without having been busy, pinned or requiring a log flush:

· gc current block 2-way

· gc current block 3-way

· gc cr block 2-way

· gc cr block 3-way

The object statistics for gc current blocks received and gc cr blocks received enable quick identification of the indexes and tables which are shared by the active instances. As mentioned earlier, creating an ADDM analysis will, in most cases, point you to the SQL statements and database objects that could be impacted by inter-instance contention.

Any increases in the average wait times for the events mentioned in the preceding list could be caused by the following occurrences:

· High load: CPU shortages, long run queues, scheduling delays

· Misconfiguration: using public instead of private interconnect for message and block traffic

If the average wait times are acceptable and no interconnect or load issues can be diagnosed, then the accumulated time waited can usually be attributed to a few SQL statements which need to be tuned to minimize the number of blocks accessed.

The column CLUSTER_WAIT_TIME in V$SQLAREA represents the wait time incurred by individual SQL statements for global cache events and will identify the SQL which may need to be tuned.
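
As a sketch, the top offenders can be listed directly from V$SQLAREA (the top-10 cut-off is arbitrary):

select * from (
select sql_id, cluster_wait_time, executions
from v$sqlarea
order by cluster_wait_time desc
) where rownum <= 10;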

Block-Related Wait Events

The main wait events for block-related waits are:

· gc current block 2-way

· gc current block 3-way

· gc cr block 2-way

· gc cr block 3-way

 

The block-related wait event statistics indicate that a block was received as either the result of a 2-way or a 3-way message, that is, the block was sent from either the resource master requiring 1 message and 1 transfer, or was forwarded to a third node from which it was sent, requiring 2 messages and 1 block transfer.

 Comment 

Creating RAC database Service with Physical Standby Role on a database

on July 2, 2014 at 11:40 am
Posted In: Oracle Data Guard Notes (Cat)

Creating RAC database Service with Physical Standby Role on a database

 

Since the standby database is not open for read/write, the service has to be created on the primary first; once the redo entries have been applied to the standby database, create the service on the standby.

 

Steps:

 

On Primary:

1. Login as oracle user,  set the environment to the target database SID and create the service:

oracle DB1_1 NODE1> srvctl add service -d DB1_B -s SERVICE_NAME -r "DB1_1,DB1_2" -l PHYSICAL_STANDBY -q FALSE -e NONE -m NONE -w 0 -z 0

 

If the above command fails with permission issues (PRCR-1006,PRCR-1071,CRS-2566), go to Step 2, otherwise skip to Step 5.

 

2. Login as root and execute following commands:

# . oraenv

ORACLE_SID = [root] ? DB1_1

# export ORACLE_HOME=/oracle/product/1120/rdbms/a

# df -h /oracle/dbadmin

Filesystem            Size  Used Avail Use% Mounted on

/dev/asm/acfs_vol_1-126                       49G   15G   35G  31% /oracle/dbadmin

 

# /oracle/product/1120/rdbms/a/bin/srvctl modify filesystem -d /dev/asm/acfs_vol_1-126 -u oracle

 

3. Login as crs user and execute below commands:

NODE1> +ASM1

NODE1> df -h /oracle/dbadmin

NODE1> srvctl config filesystem -d /dev/asm/acfs_vol_1-126

 

4. Login as oracle, set the environment to the target database SID and add the service:

oracle DB1_1 NODE1> srvctl add service -d DB1_B -s SERVICE_NAME -r "DB1_1,DB1_2" -l PHYSICAL_STANDBY -q FALSE -e NONE -m NONE -w 0 -z 0

 

5. As oracle user:

oracle DB1_1 NODE1> srvctl start service -d DB1_B -s SERVICE_NAME

 

Verify if the service has started on primary:

 

oracle DB1_1 NODE1> srvctl status service -d DB1_B -s SERVICE_NAME

 

Switch the logfile:

SQL> alter system archive log current;

 

On Standby:

1. Login as oracle user,  set the environment to the target database and execute:

oracle DB1_1 NODE2> srvctl add service -d DB1_A -s SERVICE_NAME -r "DB1_1,DB1_2" -l PHYSICAL_STANDBY -q FALSE -e NONE -m NONE -w 0 -z 0

 

If the above command fails with permission issues (PRCR-1006,PRCR-1071,CRS-2566), go to Step 2, otherwise skip to Step 4.

 

2. Login as root and execute following commands:

# export ORACLE_HOME=/oracle/product/1120/rdbms/a

# df -h /oracle/dbadmin

Filesystem           1K-blocks      Used Available Use% Mounted on

/dev/asm/acfs_vol_1-306                      51380224   5678340  45701884  12% /oracle/dbadmin

# /oracle/product/1120/rdbms/a/bin/srvctl modify filesystem -d /dev/asm/acfs_vol_1-306 -u oracle

 

3. Login as crs user and execute:

+ASM1 NODE2> srvctl config filesystem -d /dev/asm/acfs_vol_1-306

 

4. Login as oracle, set the environment to the target database SID and ensure that the new service has been replicated to standby database:

SQL> SELECT count(1) from DBA_SERVICES WHERE name='SERVICE_NAME';

Above query should return 1.

 

On Primary:

Login as oracle user, set the environment to the target database and stop the service as it should not be running here:

oracle DB1_1 NODE1> srvctl stop service -d DB1_B -s SERVICE_NAME

 

On Standby:

1. Now create the service as oracle user:

oracle DB1_1 NODE2> srvctl add service -d DB1_A -s SERVICE_NAME -r "DB1_1,DB1_2" -l PHYSICAL_STANDBY -q FALSE -e NONE -m NONE -w 0 -z 0

 

2. start the service:

oracle DB1_1 NODE2> srvctl start service -s SERVICE_NAME -d DB1_A

 

check the status of the service:

oracle DB1_1 NODE2> srvctl status service -s SERVICE_NAME -d DB1_A

 

 

TNS entry:

SERVICE_NAME =

(DESCRIPTION_LIST=

(LOAD_BALANCE=off)

(FAILOVER=on)

(DESCRIPTION=        (CONNECT_TIMEOUT=10)(TRANSPORT_CONNECT_TIMEOUT=3)(RETRY_COUNT=1)

(ADDRESS_LIST=

(LOAD_BALANCE=on)

(ADDRESS=(PROTOCOL=TCP)(HOST=CLUSTER_A-scan)(PORT=1526)))

(CONNECT_DATA=(SERVICE_NAME= SERVICE_NAME)))

(DESCRIPTION=        (CONNECT_TIMEOUT=10)(TRANSPORT_CONNECT_TIMEOUT=3)(RETRY_COUNT=1)

(ADDRESS_LIST=

(LOAD_BALANCE=on)

(ADDRESS=(PROTOCOL=TCP)(HOST=CLUSTER_B-scan)(PORT=1526)))

(CONNECT_DATA=(SERVICE_NAME= SERVICE_NAME))))

 Comment 

duplicate script – without catalog & target – only need to connect to auxiliary NOREDO

on July 2, 2014 at 11:32 am
Posted In: Oracle Data Guard Notes (Cat)

The backup was from a standby database in read only mode with Apply OFF.

rman auxiliary / log='rman_dup2_DB2_A.log' << RMAN_CMD
set echo on
run
{
set newname for database to '+DB2_A_DATA1';
set newname for tablespace D_GGS_1 to '+DB2_A_SYSTEM';
set newname for tablespace system to '+DB2_A_SYSTEM';
set newname for tablespace sysaux to '+DB2_A_SYSTEM';
set newname for tablespace psapundo to '+DB2_A_SYSTEM';
set newname for tablespace psapundo_2 to '+DB2_A_SYSTEM';
set newname for tablespace psapundo_3 to '+DB2_A_SYSTEM';
ALLOCATE AUXILIARY CHANNEL d10 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d11 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d12 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d13 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d14 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d15 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d16 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d17 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d18 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d19 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d20 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d21 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d22 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d23 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d24 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d26 DEVICE TYPE DISK;
duplicate database to DB2 backup location '/oracle/dbdump1/DB1/1/oracle/dbdump/LP1_CHERISHED'
spfile
set db_unique_name='DB2_A'
set instance_number = '1'
set filesystemio_options='NONE'
set disk_asynch_io='FALSE'
set db_domain='UK.CENTRICAPLC.COM'
set compatible='11.2.0.3.0'
set cluster_database='FALSE'
set diagnostic_dest='/oracle/DB2/saptrace'
set log_archive_dest_1='location="+DB2_A_ARCH", valid_for=(ALL_LOGFILES,ALL_ROLES)'
set audit_file_dest='/oracle/DB2/saptrace/audit'
set control_files='+DB2_A_ARCH/DB2_A/cntrlDB2.dbf','+DB2_A_DATA1/DB2_A/cntrlDB2.dbf','+DB2_A_RECO/DB2_A/cntrlDB2.dbf'
set db_create_file_dest='+DB2_A_DATA1'
set db_create_online_log_dest_1='+DB2_A_OLOG'
set db_create_online_log_dest_2='+DB2_A_MLOG'
set db_recovery_file_dest='+DB2_A_RECO'
nofilenamecheck
noredo;
}
RMAN_CMD
exit 0

 


 Comment 

duplicate script – without catalog & target – only need to connect to auxiliary

on July 2, 2014 at 11:28 am
Posted In: Oracle Data Guard Notes (Cat)

rman auxiliary / log='rman_dup_DBN1_B.log' << RMAN_CMD
set echo on
run
{
set newname for database to '+DBN1_DATA1';
set newname for tablespace system to '+DBN1_SYSTEM';
set newname for tablespace sysaux to '+DBN1_SYSTEM';
set newname for tablespace undo1 to '+DBN1_SYSTEM';
set newname for tablespace undo2 to '+DBN1_SYSTEM';
ALLOCATE AUXILIARY CHANNEL d10 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d11 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d12 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d13 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d14 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d15 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d16 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d17 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d18 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d19 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d20 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d21 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d22 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d23 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d24 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL d26 DEVICE TYPE DISK;
duplicate database to DBN1 backup location '/oracle/dbdump/DBN2'
spfile
set db_unique_name='DBN1_B'
set instance_number = '1'
set filesystemio_options='NONE'
set disk_asynch_io='FALSE'
set db_domain='UK.CENTRICAPLC.COM'
set compatible='11.2.0.3.0'
set cluster_database='FALSE'
set log_archive_dest_1='location="+DBN1_ARCH01", valid_for=(ALL_LOGFILES,ALL_ROLES)'
set control_files='+DBN1_ARCH01/DBN1_b/cntrl1.dbf','+DBN1_RECO/DBN1_b/cntrl1.dbf'
set db_create_file_dest='+DBN1_DATA1'
set db_create_online_log_dest_1='+DBN1_OLOG'
set db_create_online_log_dest_2='+DBN1_MLOG'
set db_recovery_file_dest='+DBN1_RECO'
nofilenamecheck;
}
RMAN_CMD
exit 0

 

 Comment 

VI Cheatsheet

on July 2, 2014 at 11:18 am
Posted In: Unix notes (Cat)
vi Cheat Sheet

 Comment 

12c Changes

on July 2, 2014 at 11:15 am
Posted In: Oracle 11.2 Notes (Cat)

Oracle has announced a new version of its main database: Oracle 12c.

One of the most important features is the concept of pluggable databases.

This allows us to take a normal database, with all its users, objects, grants, etc., and plug it into a larger container database. Such a pluggable database can easily be moved to another container, if needed on another server. This new feature improves efficiency and security, and will make upgrades much easier.

Apart from this, about 500 other new features are available in Oracle 12c, but details about them will become clear in the near future.
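
A sketch of the unplug/plug round trip (12c syntax; the PDB name and XML path here are illustrative):

-- in the source container database
ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/tmp/pdb1.xml';
DROP PLUGGABLE DATABASE pdb1 KEEP DATAFILES;

-- in the target container database
CREATE PLUGGABLE DATABASE pdb1 USING '/tmp/pdb1.xml' NOCOPY;
ALTER PLUGGABLE DATABASE pdb1 OPEN;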

 

 Comment 

ADRCI Notes

on July 2, 2014 at 11:12 am
Posted In: Oracle 11.2 Notes (Cat)

 

> adrci

ADRCI: Release 11.2.0.3.0 – Production on Thu Sep 13 09:40:02 2012

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/oracle/product/11.2.0"

adrci> show homes

ADR Homes:

diag/tnslsnr/nodename/lsnr_vipa_name

diag/tnslsnr/nodename/listener

diag/asm/+asm/+ASM1

adrci> set home diag/asm/+asm/+ASM1

adrci> purge -age 1440

adrci> exit

>

 

Run this for each home listed to purge all dumps etc older than 1440 minutes.
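
To avoid setting each home by hand, the purge can be scripted; a minimal sketch, assuming adrci is on the PATH:

for h in $(adrci exec="show homes" | grep diag)
do
  adrci exec="set home $h; purge -age 1440"
done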

 Comment 

List Live/Standby Backups from Catalogue

on July 2, 2014 at 11:09 am
Posted In: Oracle Rman Notes (Cat)

This script is run against the RMAN catalog and will produce a report on backups (similar to Jensen’s summary in the logs).

It can generate info about different databases ( both sides of a live / standby) for specified dates etc..

 

col db_name format a7

col db_unique_name format a9

col st_time format a17

col end_time format a17

col time_taken_display for a9

col output_bytes_display for a12

col status format a18

col input_type format a12

set lines 132

select distinct a.DB_NAME, c.session_key,
b.DB_UNIQUE_NAME, c.input_type, c.status, to_char(c.START_TIME, 'dd/mm/yyyy hh24:mi') st_time,
to_char(c.END_TIME,'dd/mm/yyyy hh24:mi') end_time,
c.output_bytes_display, c.time_taken_display
from rc_rman_status a, rc_site b, rc_rman_backup_job_details c
where b.site_key = a.site_key
and c.session_key = a.session_key
and trunc(a.start_time) > '10-AUG-12'
and a.DB_NAME in ('JPW')
order by 1,3,6

/
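
A variant of the same query for spotting problem runs (a sketch against the same catalog views; RC_RMAN_BACKUP_JOB_DETAILS.STATUS reports values such as FAILED or COMPLETED WITH WARNINGS):

select distinct a.DB_NAME, c.session_key, c.status,
to_char(c.START_TIME, 'dd/mm/yyyy hh24:mi') st_time
from rc_rman_status a, rc_site b, rc_rman_backup_job_details c
where b.site_key = a.site_key
and c.session_key = a.session_key
and c.status != 'COMPLETED'
order by 1,4
/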

 

 Comment 

Steps required to repair voting disks

on July 2, 2014 at 11:06 am
Posted In: Oracle RAC (Cat)

An alternative Suggestion – undrop ASM disk

Although this document covers the steps followed at the time for this case another suggestion came in once the work was completed.

 

alter diskgroup <diskgroup_name> undrop disks;

This seems like a sensible approach given an error received earlier in the day trying to add the erroneous disk without clearing it out first:

ORA-15033: disk '/dev/oracleasm/disks/OCR_VOTE5' belongs to diskgroup "OCR_VOTE"

Had this been attempted and been successful, it would probably have been enough once the final checks had been carried out. I say this as ocrcheck had shown a clean bill of health for the OCR throughout this process.

 

Reference

http://docs.oracle.com/cd/E11882_01/rac.112/e16794/votocr.htm#CHDHBBIJ

Steps Required to restore a Voting Disk

Make a note of the current Voting disk details.

[root]# /oracle/dbadmin/scripts/multipath_l.ksh -a

RAW Device Size ASM Disk Based on Minor,Major

==========      ====           ========    ========    ===========

:

VOTE1_01        2.0G           OCR_VOTE1   /dev/dm-54  [253,54]

VOTE2_01        2.0G           OCR_VOTE2   /dev/dm-56  [253,56]

VOTE3_01        2.0G           OCR_VOTE3   /dev/dm-58  [253,58]

VOTE4_01        2.0G           OCR_VOTE4   /dev/dm-59  [253,59]

VOTE5_01 2.0G           OCR_VOTE5   /dev/dm-60  [253,60]

 

Take a Manual Backup (just in case)

As root on one node

 

cd /oracle/GRID/11203/bin

./ocrconfig -manualbackup

:

2012/06/21 15:30:34     /oracle/GRID/11203/cdata/clustername/backup_20120621_153034.ocr

./ocrconfig -showbackup

:

wyclorah011     2012/06/21 15:30:34     /oracle/GRID/11203/cdata/racsaplp1a/backup_20120621_153034.ocr

Shutdown CRS and restart on one node in exclusive mode.

[root]# pwd

/oracle/GRID/11203/bin

[root]# ./crsctl stop crs

[root]# ./crsctl stop crs

[root]# ./crsctl stop crs

CRS-4000: Command Stop failed, or completed with errors.

So I forced the issue

[root]# ./crsctl stop crs -f

This hung trying to stop the ASM instance (alert log showed this). So I killed the ASM pmon process which immediately freed up the stop crs which, in turn, completed successfully.

Then restart on one node in exclusive mode.

[root]# ./crsctl start crs -excl -nocrs

ensure that the crsd process did not start

[root]# ./crsctl stat res -init -t

——————————————————————————–

NAME           TARGET  STATE        SERVER                   STATE_DETAILS

Cluster Resources

ora.asm 1 ONLINE ONLINE node1 Started

ora.cluster_interconnect.haip 1 ONLINE ONLINE wnode1

ora.crf 1 OFFLINE OFFLINE

ora.crsd 1        OFFLINE OFFLINE

ora.cssd 1 ONLINE ONLINE node1

ora.cssdmonitor 1 ONLINE ONLINE node1

ora.ctssd 1 ONLINE ONLINE node1 OBSERVER

ora.diskmon 1 OFFLINE OFFLINE

ora.drivers.acfs 1 ONLINE ONLINE node1

ora.evmd 1 OFFLINE OFFLINE

ora.gipcd 1 ONLINE ONLINE node1

ora.gpnpd 1 ONLINE ONLINE node1

ora.mdnsd 1 ONLINE ONLINE node1

Re-create the errant OCR Disk.

We can see from this query that the disk is still a valid ASM disk and marked as a Voting disk.

oracle wyclorah010> . ./crs_env

wyclorah010[+ASM1]>sqlplus / as sysasm

SQL> select group_number, name, failgroup, path from v$asm_disk where voting_file='Y';

GROUP_NUMBER NAME FAILGROUP PATH

0 /dev/oracleasm/disks/OCR_VOTE5

16 OCR_VOTE_0003          OCR_VOTE_0003               /dev/oracleasm/disks/OCR_VOTE4

16 OCR_VOTE_0002          OCR_VOTE_0002               /dev/oracleasm/disks/OCR_VOTE3

16 OCR_VOTE_0001          OCR_VOTE_0001               /dev/oracleasm/disks/OCR_VOTE2

16 OCR_VOTE_0000          OCR_VOTE_0000               /dev/oracleasm/disks/OCR_VOTE1

Earlier in the day I had tried to add it back into the diskgroup and was given short shrift.

From the ASM alert log :

ORA-15033: disk '/dev/oracleasm/disks/OCR_VOTE5' belongs to diskgroup "OCR_VOTE"

ERROR: ALTER DISKGROUP OCR_VOTE ADD DISK '/dev/oracleasm/disks/OCR_VOTE5' SIZE 2048M /* ASMCA */

So I deleted it, followed by a scan of the disks on the other nodes.

[root]# oracleasm querydisk '/dev/oracleasm/disks/OCR_VOTE5'

Device "/dev/oracleasm/disks/OCR_VOTE5" is marked an ASM disk with the label "OCR_VOTE5"

[root]# oracleasm deletedisk OCR_VOTE5

Clearing disk header: done

Dropping disk: done

[root]# oracleasm scandisks

Reloading disk partitions: done

Cleaning any stale ASM disks…

Cleaning disk “OCR_VOTE5”

Scanning system for ASM disks…

[root]# oracleasm scandisks

Reloading disk partitions: done

Cleaning any stale ASM disks…

Cleaning disk “OCR_VOTE5”

Scanning system for ASM disks…

And then re-created the ASM disk. Good job I made a note of this earlier.

[root]# oracleasm createdisk OCR_VOTE5 /dev/mapper/VOTE5_01

Writing disk header: done

Instantiating disk: done

[root]# oracleasm scandisks

Reloading disk partitions: done

Cleaning any stale ASM disks…

Scanning system for ASM disks…

Instantiating disk “OCR_VOTE5”

[root]# oracleasm scandisks

Reloading disk partitions: done

Cleaning any stale ASM disks…

Scanning system for ASM disks…

Instantiating disk “OCR_VOTE5”

Add the disk to the diskgroup.

[root]# su – oracle

Emergency Local Admin Environment configured

oracle > . ./crs_env

[+ASM1]>sqlplus / as sysasm

SQL> ALTER DISKGROUP OCR_VOTE ADD DISK '/dev/oracleasm/disks/OCR_VOTE5' SIZE 2048M;

Diskgroup altered.

Restore the OCR

I'm not sure that I needed to do this; ocrcheck always returned a valid status when run before attempting this fix. I wish I had run another ocrcheck and crsctl query css votedisk before doing this restore.

Anyway, the restore was run as follows :

[root]# ./ocrconfig -restore /oracle/GRID/11203/cdata/clustername/day.ocr

The note I was following suggested that I should run the following on the other nodes

ocrconfig -repair -replace

but I missed this, it doesn’t seem to have mattered.

Check Voting Diskgroup and OCR Integrity.

[+ASM1]>sqlplus / as sysasm

SQL> select group_number, name, failgroup, path from v$asm_disk where voting_file='Y';

GROUP_NUMBER NAME FAILGROUP PATH

16 OCR_VOTE_0004                  OCR_VOTE_0004                  /dev/oracleasm/disks/OCR_VOTE5

16 OCR_VOTE_0003                  OCR_VOTE_0003                  /dev/oracleasm/disks/OCR_VOTE4

16 OCR_VOTE_0002                  OCR_VOTE_0002                  /dev/oracleasm/disks/OCR_VOTE3

16 OCR_VOTE_0001                  OCR_VOTE_0001                  /dev/oracleasm/disks/OCR_VOTE2

16 OCR_VOTE_0000                  OCR_VOTE_0000                  /dev/oracleasm/disks/OCR_VOTE1

[root]# ./ocrcheck

Status of Oracle Cluster Registry is as follows :

Version                  :          3

Total space (kbytes)     :     262120

Used space (kbytes)      :       5260

Available space (kbytes) :     256860

ID                       :  207396515

Device/File Name         :  +OCR_VOTE

Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

[root]# ./crsctl query css votedisk

##  STATE    File Universal Id                File Name Disk group

1. ONLINE   16ab9ac4f2d34f69bf4537800239bef7 (/dev/oracleasm/disks/OCR_VOTE1) [OCR_VOTE]

2. ONLINE   01d692b759e94f0cbf1bd86fb62b4ccf (/dev/oracleasm/disks/OCR_VOTE2) [OCR_VOTE]

3. ONLINE   a06ebbed329c4f7bbfc496b73d506d6f (/dev/oracleasm/disks/OCR_VOTE3) [OCR_VOTE]

4. ONLINE   32b346e3daed4f75bf54fc7628d02ae2 (/dev/oracleasm/disks/OCR_VOTE4) [OCR_VOTE]

5. ONLINE   1ff50824870d4ffdbf9d9cd4fe4df1dd (/dev/oracleasm/disks/OCR_VOTE5) [OCR_VOTE]

Located 5 (yes five) voting disk(s).

Stop CRS on the on exclusive node and restart on the other three.

[root]# ./crsctl stop crs

And then restart

[root]# ./crsctl start crs

CRS-4123: Oracle High Availability Services has been started.

[root]# ./crsctl start crs

CRS-4123: Oracle High Availability Services has been started.

[root]# ./crsctl start crs

CRS-4123: Oracle High Availability Services has been started.

Check everything comes up on all nodes.

[root]# ./crsctl stat res -init -t

On all nodes

[root]# ./crsctl query css votedisk

##  STATE    File Universal Id                File Name Disk group

1. ONLINE   16ab9ac4f2d34f69bf4537800239bef7 (/dev/oracleasm/disks/OCR_VOTE1) [OCR_VOTE]

2. ONLINE   01d692b759e94f0cbf1bd86fb62b4ccf (/dev/oracleasm/disks/OCR_VOTE2) [OCR_VOTE]

3. ONLINE   a06ebbed329c4f7bbfc496b73d506d6f (/dev/oracleasm/disks/OCR_VOTE3) [OCR_VOTE]

4. ONLINE   32b346e3daed4f75bf54fc7628d02ae2 (/dev/oracleasm/disks/OCR_VOTE4) [OCR_VOTE]

5. ONLINE   1ff50824870d4ffdbf9d9cd4fe4df1dd (/dev/oracleasm/disks/OCR_VOTE5) [OCR_VOTE]

Located 5 voting disk(s).

[root]# ./ocrcheck

Status of Oracle Cluster Registry is as follows :

Version                  :          3

Total space (kbytes)     :     262120

Used space (kbytes)      :       5260

Available space (kbytes) :     256860

ID                       :  207396515

Device/File Name         :  +OCR_VOTE

Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

oracle +ASM1 > cluvfy comp ocr -n all -verbose

Verifying OCR integrity

Checking OCR integrity…

Checking the absence of a non-clustered configuration…

All nodes free of non-clustered, local-only configurations

ASM Running check passed. ASM is running on all specified nodes

Checking OCR config file “/etc/oracle/ocr.loc”…

OCR config file “/etc/oracle/ocr.loc” check successful

Disk group for ocr location “+OCR_VOTE” available on all the nodes

NOTE:

This check does not verify the integrity of the OCR contents. Execute ‘ocrcheck’ as a privileged user to verify the contents of OCR.

OCR integrity check passed

Verification of OCR integrity was successful.

oracle +ASM2 > crsstat | grep OFFL

ora.gsd                        OFFLINE, OFFLINE, OFFLINE

OK

 

 Comment 

srvctl tracing

on July 2, 2014 at 10:49 am
Posted In: Oracle RAC (Cat)

srvctl fails on one node but works on another

./srvctl

./srvctl: line 179: $JRE $JRE_OPTIONS -classpath $CLASSPATH $TRACE oracle.ops.opsctl.OPSCTLDriver “$@”

the obvious checks were already done :

check timestamp of srvctl and its size against another node

diff srvctl against another server

check version of java & path to version of java

environment variables match

o/s strace of srvctl did not give any clues to the problem.

other crs commands worked e.g. crsctl

no failures in /var/log/messages

To enable oracle tracing of srvctl:

oracle +ASM1 > export SRVM_TRACE=true (MOS 178683.1)

oracle +ASM1 > srvctl

Error occurred during initialization of VM

java.lang.OutOfMemoryError: unable to create new native thread

oracle +ASM1 > free -m

ps -ef | grep java

(pid 17377 is process for the oc4j CRS process)

oracle   17377     1  0 09:22 ?        00:00:30 /oracle/GRID/11203/jdk/jre//bin/java -server -Xcheck:jni -Xms128M -Xmx384M -Djava.awt.headless=true -Ddisable.checkForUpdate=true -Dstdstream.filesize=100 -Dstdstream.filenumber=10 -DTRACING.ENABLED=false -Doracle.wlm.dbwlmlogger.logging.level=INFO -Dport.rmi=23792 -jar /oracle/GRID/11203/oc4j/j2ee/home/oc4j.jar -config /oracle/GRID/11203/oc4j/j2ee/home/OC4J_DBWLM_config/server.xml -out /oracle/GRID/11203/oc4j/j2ee/home/log/oc4j.out -err /oracle/GRID/11203/oc4j/j2ee/home/log/oc4j.err

JRE_OPTIONS is a variable in the srvctl script where the default is JRE_OPTIONS=""

New Java process when running srvctl from another window with JRE_OPTIONS="" and SRVM_TRACE=true

(the DTRACING is due to SRVM_TRACE set to TRUE)

oracle   15025 15019  0 13:26 pts/0    00:00:00 /oracle/GRID/11203/jdk/jre//bin/java -DORACLE_HOME=/oracle/GRID/11203 -classpath /oracle/GRID/11203/jlib/netcfg.jar:/oracle/GRID/11203/jdk/jre//jre/lib/rt.jar:/oracle/GRID/11203/jdk/jre//jre/lib/i18n.jar:/oracle/GRID/11203/jlib/srvm.jar:/oracle/GRID/11203/jlib/srvmhas.jar:/oracle/GRID/11203/jlib/srvmasm.jar:/oracle/GRID/11203/jlib/supercluster.jar:/oracle/GRID/11203/jlib/supercluster-common.jar:/oracle/GRID/11203/ons/lib/ons.jar:/oracle/GRID/11203/srvm/jlib/srvctl.jar:/oracle/GRID/11203/jlib/gns.jar -DTRACING.ENABLED=true -DTRACING.LEVEL=2 -Djava.net.preferIPv4Stack=true -Djava.util.logging.config.file=/oracle/GRID/11203/srvm/admin/logging.properties oracle.ops.opsctl.OPSCTLDriver

oracle   15942 15905  0 13:26 pts/0    00:00:00 /oracle/GRID/11203/jdk/jre//bin/java -DORACLE_HOME=/oracle/GRID/11203 -classpath /oracle/GRID/11203/jlib/netcfg.jar:/oracle/GRID/11203/jdk/jre//jre/lib/rt.jar:/oracle/GRID/11203/jdk/jre//jre/lib/i18n.jar:/oracle/GRID/11203/jlib/srvm.jar:/oracle/GRID/11203/jlib/srvmhas.jar:/oracle/GRID/11203/jlib/srvmasm.jar:/oracle/GRID/11203/jlib/supercluster.jar:/oracle/GRID/11203/jlib/supercluster-common.jar:/oracle/GRID/11203/ons/lib/ons.jar:/oracle/GRID/11203/srvm/jlib/srvctl.jar:/oracle/GRID/11203/jlib/gns.jar -DTRACING.ENABLED=true -DTRACING.LEVEL=2 -Djava.net.preferIPv4Stack=true -Djava.util.logging.config.file=/oracle/GRID/11203/srvm/admin/logging.properties oracle.ops.opsctl.OPSCTLDriver

New Java process when running srvctl from another window with JRE_OPTIONS="" and SRVM_TRACE unset

oracle   26374 26366  0 13:28 pts/0    00:00:00 /oracle/GRID/11203/jdk/jre//bin/java -DORACLE_HOME=/oracle/GRID/11203 -classpath /oracle/GRID/11203/jlib/netcfg.jar:/oracle/GRID/11203/jdk/jre//jre/lib/rt.jar:/oracle/GRID/11203/jdk/jre//jre/lib/i18n.jar:/oracle/GRID/11203/jlib/srvm.jar:/oracle/GRID/11203/jlib/srvmhas.jar:/oracle/GRID/11203/jlib/srvmasm.jar:/oracle/GRID/11203/jlib/supercluster.jar:/oracle/GRID/11203/jlib/supercluster-common.jar:/oracle/GRID/11203/ons/lib/ons.jar:/oracle/GRID/11203/srvm/jlib/srvctl.jar:/oracle/GRID/11203/jlib/gns.jar -Djava.net.preferIPv4Stack=true -Djava.util.logging.config.file=/oracle/GRID/11203/srvm/admin/logging.properties oracle.ops.opsctl.OPSCTLDriver

with JRE_OPTIONS="-Xms256M -Xmx1024M" and SRVM_TRACE unset

oracle     695   685  0 13:35 pts/0    00:00:00 /oracle/GRID/11203/jdk/jre//bin/java -Xms256M -Xmx1024M -DORACLE_HOME=/oracle/GRID/11203 -classpath /oracle/GRID/11203/jlib/netcfg.jar:/oracle/GRID/11203/jdk/jre//jre/lib/rt.jar:/oracle/GRID/11203/jdk/jre//jre/lib/i18n.jar:/oracle/GRID/11203/jlib/srvm.jar:/oracle/GRID/11203/jlib/srvmhas.jar:/oracle/GRID/11203/jlib/srvmasm.jar:/oracle/GRID/11203/jlib/supercluster.jar:/oracle/GRID/11203/jlib/supercluster-common.jar:/oracle/GRID/11203/ons/lib/ons.jar:/oracle/GRID/11203/srvm/jlib/srvctl.jar:/oracle/GRID/11203/jlib/gns.jar -Djava.net.preferIPv4Stack=true -Djava.util.logging.config.file=/oracle/GRID/11203/srvm/admin/logging.properties oracle.ops.opsctl.OPSCTLDriver


Conclusion: the OS was intermittently unable to satisfy the memory requirement for srvctl. JRE_OPTIONS was set with a small and a large memory value, which worked, but this was just a coincidence. Even after JRE_OPTIONS was set back to null, srvctl carried on working.
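
The "unable to create new native thread" message usually points at process or memory limits rather than Java itself; a quick sketch of the checks worth capturing next time it occurs:

free -m
ulimit -a | egrep -i 'processes|memory|stack'
ps -u oracle | wc -l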

 

 Comment 

CRS Filesystem dependency change

on July 2, 2014 at 10:46 am
Posted In: Oracle RAC (Cat)

1. Deregister filesystem from previous install

# /sbin/acfsutil registry -d /dev/asm/acfs_vol1-395

# /sbin/acfsutil registry -d /dev/asm/acfs_vol2-395

2. Add filesystem to CRS as a resource

# ./srvctl add filesystem -d /dev/asm/acfs_vol1-395 -g ACFS -v ACFS_VOL1 -m /oracle/dbadmin -u oracle

# ./srvctl add filesystem -d /dev/asm/acfs_vol2-395 -g ACFS -v ACFS_VOL2 -m /oracle/dbdump -u oracle

3. Start Filesystem

# ./srvctl start filesystem -d /dev/asm/acfs_vol1-395

# ./srvctl start filesystem -d /dev/asm/acfs_vol2-395

4. Check Status of Filesystem

# ./srvctl status filesystem -d /dev/asm/acfs_vol1-395

ACFS file system is running on node1, node2

# ./srvctl status filesystem -d /dev/asm/acfs_vol2-395

ACFS file system is running on node1, node2

5. Make the database dependent on the filesystems

oracle>  srvctl modify database -d dbname -j /oracle/dbadmin,/oracle/dbdump

 

 

 Comment 

Documentation / Tutorials

on July 2, 2014 at 9:12 am
Posted In: Oracle 11.2 Notes (Cat)

Tutorials

https://apex.oracle.com/pls/apex/f?p=44785:1:1969705861759208:::::

Interesting Oracle docs

http://www.oracle.com/us/dm/h2fy11/100047284-index-oem-1512886.html?msgid=3-5904687985

Reasonably simple explanation of subnets

http://www.techrepublic.com/article/ip-subnetting-made-easy/6089187


 Comment 

UDEV ASM Linux

on July 2, 2014 at 9:07 am
Posted In: Oracle RAC (Cat)

1.1 Oracle block devices

Due to the fact that Red Hat will not be supporting raw devices after Enterprise Linux 5, Oracle now recommends using block devices instead of raw devices.

The following instructions for setting up the ASM devices are taken from Oracle technical note number 602952.1.

 

1.1.1 Partition devices

Identify the sd devices to be used by asm.

"cat /proc/partitions" will list all devices; look at the "sd" devices.

e.g.

# cat /proc/partitions

major minor  #blocks  name

8     0    20971520 sda

8     1      104391 sda1

8     2    20860402 sda2

8    16    12582912 sdb

8    32    12582912 sdc

8    48     1048576 sdd

8    64     1048576 sde

8    80     1048576 sdf

8    96     1048576 sdg

8   112     1048576 sdh

Here the ASM volumes are sdb and sdc (each 12Gb)

The OCR and Voting disks are sdd to sdh (each 1Gb)

Now we have identified the disks we need to create partitions on them.

For each device enter:

$ fdisk /dev/sd??

Command (m for help): u

Command (m for help): p

Command (m for help): n

Command (m for help): p

1

32768

w

(Where ?? are the characters used to identify the device.)

The reason we are using 32768 as the start block for the device is because of the way scsi/san volume mapping uses the first few blocks of a device and subsequently causes excessive write I/O. This is better explained in the following link:

http://www.pythian.com/news/411/aligning-asm-disks-on-linux

e.g.

# fdisk /dev/sdb

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel. Changes will remain in memory only,

until you decide to write them. After that, of course, the previous

content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): u

Changing display/entry units to sectors

Command (m for help): p

Disk /dev/dm-5: 5394 MB, 5394923520 bytes

255 heads, 63 sectors/track, 655 cylinders, total 10536960 sectors

Units = sectors of 1 * 512 = 512 bytes

Device Boot      Start         End      Blocks   Id  System

Command (m for help): n

Command action

e   extended

p   primary partition (1-4)

p

Partition number (1-4): 1

First sector (63-10536959, default 63): 32768

Last sector or +size or +sizeM or +sizeK (32768-25165823, default 25165823):

Using default value 25165823

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.

#

1.2 Configure device protection

Create or edit the file /etc/udev/rules.d/98-oracle-perm.rules and add the following:

#OCR DISKS

KERNEL=="sdb*", GROUP="oinstall", MODE="640"
KERNEL=="sdc*", GROUP="oinstall", MODE="640"

#VOTE DISKS
KERNEL=="sdd*", OWNER="oracle", GROUP="oinstall", MODE="660"
KERNEL=="sde*", OWNER="oracle", GROUP="oinstall", MODE="660"

#ASM DISKS
KERNEL=="sdf*", OWNER="oracle", GROUP="oinstall", MODE="660"
KERNEL=="sdg*", OWNER="oracle", GROUP="oinstall", MODE="660"
KERNEL=="sdh*", OWNER="oracle", GROUP="oinstall", MODE="660"

Changing the sd device name where necessary.

 

Create or edit the file /etc/udev/rules.d/99-oracle-naming.rules and add the following:

KERNEL=="sdb1", NAME="ocr1"
KERNEL=="sdc1", NAME="ocr2"
KERNEL=="sdd1", NAME="vote1"
KERNEL=="sde1", NAME="vote2"
KERNEL=="sdf1", NAME="vote3"
KERNEL=="sdg1", NAME="asm2"
KERNEL=="sdh1", NAME="asm1"

Changing the sd device name where necessary.

 

Reboot the system.

After the reboot we should see the asm, ocr and vote volumes in /dev.

 

1.2.1 Map the asm volumes

For each of the asm volumes execute the command:

/etc/init.d/oracleasm createdisk ASMn /dev/asmn

Replacing the “n” with the relevant numbers.

Perform the following on all nodes in the cluster:

# /etc/init.d/oracleasm scandisks

# /etc/init.d/oracleasm listdisks

The ASM volumes should now be listed in the directory /dev/oracleasm/disks

# ls /dev/oracleasm/disks

ASM1     ASM2    ASM3 etc

 

ALTERNATIVE COMMANDS

================

You have to add the disk information to the below rules file on all nodes

# view /etc/udev/rules.d/99-oracle-asmdevices.rules

Ex entry

KERNEL=="dm-[0-9]*", ENV{DM_UUID}=="mpath-360060e801526af00000126af00001c2b", NAME="ISP_A_ARCH_10", OWNER="oracle", GROUP="asmdba", MODE="0660"

Below information for the above line will be available in /etc/multipath.conf file

360060e801526af00000126af00001c2b

ISP_A_ARCH_10

Reload and Restart the udev rules on all nodes

# udevadm control --reload-rules

# start_udev

 

When you look in the /dev/mapper directory you can see the new disk added, and you can see the disk in asmca.

 

 Comment 

UDEV ASM OEL 6.3

on July 2, 2014 at 9:06 am
Posted In: Oracle RAC (Cat)

1) This example was performed on an OEL 6.3 configuration (ASM release 11.2.0.3 – 64-bit):

[grid@asmlnx2 ~]$ uname -a

Linux asmlnx2 2.6.39-200.24.1.el6uek.x86_64 #1 SMP Sat Jun 23 02:39:07 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux

[grid@asmlnx2 ~]$ cat /etc/*release*

Oracle Linux Server release 6.3

Red Hat Enterprise Linux Server release 6.3 (Santiago)

Oracle Linux Server release 6.3

2) Identify the new disk in your Linux configuration (e.g. OEL 6.3 or RedHat 6.3):

[grid@asmlnx2 ~]$ ls -l /dev/xvdv

brw-rw---- 1 root disk 202, 5376 Feb 11 21:31 /dev/xvdv

3) Create a new disk partition on it:

[root@asmlnx2 ~]#

 

fdisk /dev/xvdv

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel with disk identifier 0xba335ded.

Changes will remain in memory only, until you decide to write them.

After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to

switch off the mode (command 'c') and change display units to

sectors (command 'u').

Command (m for help): n

Command action

e extended

p primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-261, default 1):

Using default value 1


Last cylinder, +cylinders or +size{K,M,G} (1-261, default 261):

Using default value 261

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.

[root@asmlnx2 ~]# partprobe /dev/xvdv

[root@asmlnx2 ~]# partprobe /dev/xvdv1

[root@asmlnx2 ~]# ls -l /dev/xvdv

brw-rw---- 1 root disk 202, 5376 Feb 11 21:38 /dev/xvdv

[root@asmlnx2 ~]# ls -l /dev/xvdv1

brw-rw---- 1 root disk 202, 5377 Feb 11 21:38 /dev/xvdv1

4) Set the new UDEV rules on the new partition ("/dev/xvdv1") as follows:

[root@asmlnx2 ~]# vi /etc/udev/rules.d/99-oracle-asmdevices.rules

KERNEL=="xvdv1", NAME="asmdisk1_udev_p1", OWNER="grid", GROUP="asmadmin", MODE="0660"

5) Reload and Restart the udev rules:

[root@asmlnx2 ~]# udevadm control --reload-rules

[root@asmlnx2 ~]# start_udev

Starting udev: [ OK ]

6) Make sure the new udev partition device is created with the correct ownership (grid & asmadmin) and permissions (brw-rw----):

[root@asmlnx2 ~]# ls -ltr /dev/asm*

brw-rw---- 1 grid asmadmin 202, 5377 Feb 11 21:47 /dev/asmdisk1_udev_p1

7) Now you can use it to create a new diskgroup or add it to an existing diskgroup as follows:

[root@asmlnx2 ~]# su – grid

[grid@asmlnx2 ~]$ . oraenv

ORACLE_SID = [grid] ? +ASM

The Oracle base has been set to /u01/app/grid

[grid@asmlnx2 ~]$ sqlplus "/as sysasm"

SQL*Plus: Release 11.2.0.3.0 Production on Mon Feb 11 21:56:31 2013

Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production

With the Automatic Storage Management option

SQL> show parameter asm_diskstring


NAME TYPE VALUE

———————————— ———– ——————————

asm_diskstring string

SQL> alter system set asm_diskstring = '/dev/asmdisk1_udev_p1' scope=both;

System altered.

SQL> show parameter asm_diskstring

NAME TYPE VALUE

———————————— ———– ——————————

asm_diskstring string

 

/dev/asmdisk1_udev_p1

SQL> select path from v$asm_disk where path like '%udev%';

PATH

——————————————————————————–

/dev/asmdisk1_udev_p1

SQL> create diskgroup UDEVDG external redundancy disk '/dev/asmdisk1_udev_p1';

Diskgroup created.

 Comment 

Management server reboot

on September 3, 2013 at 10:48 am
Posted In: Oracle Exadata (Cat)

 

1. Rebooting the server can be done in two ways. One way is to turn off the power from the ILOM interface. This approach did not work in our case. The other way is to log in to the cell server and invoke the reboot command.

2. During this activity it is prudent to have an active console window, so the overall progress can be monitored. There is a button in the ILOM interface screen which can be used to open such a console window. On this occasion, however, a window was opened via SSH to the server and the console initiated manually. Please note that the console may time out after a certain period of time, so it is prudent to start it up just before it is needed.

 

INCREASE THE POWER LIMIT IN ASM TO SPEED UP THE REBALANCING.

$ sqlplus / as sysdba

SQL> show parameter POWER

asm_power_limit                      integer     2

SQL> alter system set asm_power_limit = 9 scope = both;

SQL> show parameter POWER

asm_power_limit                      integer     9

 

MONITOR THE PROCESS FROM THE ILOM

# ssh <cell-name>-ilom

 

Oracle(R) Integrated Lights Out Manager

-> start /SP/console

 

# ssh docexacel13-ilom

Oracle(R) Integrated Lights Out Manager

-> reset /SYS

Are you sure you want to reset /SYS (y/n)? y

Performing hard reset on /SYS

 

After a short while, output will start streaming in the console.
Monitor its progress until the login prompt appears.
Then, proceed with the checks, as described below.

 

sqlplus / as sysdba

SQL> select * from gv$asm_operation;

 INST_ID GROUP_NUMBER OPERA STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE EST_MINUTES ERROR_CODE

         1            1 REBAL WAIT          9

         1            3 REBAL WAIT          9

         3            1 REBAL WAIT          9

         3            3 REBAL RUN           9          9      91743     553951       8149          56

         4            1 REBAL WAIT          9

         4            3 REBAL WAIT          9

         2            1 REBAL RUN           9          9     116410     630450       9268          55

         2            3 REBAL WAIT          9

8 rows selected.

 

Keep monitoring until the rebalancing is complete, i.e. all groups are in WAIT state.

# dcli -g /home/oracle/cell_group -l root cellcli -e list alerthistory | grep 7445

 

# cellcli

Cell Efficiency Ratio: 9,022

CellCLI> list cell detail

         name:                  

         bbuTempThreshold:       60

         bbuChargeThreshold:     800

         bmcType:                IPMI

         cellVersion:            OSS_11.2.3.2.0_LINUX.X64_121226

         cpuCount:               24

         diagHistoryDays:        7

         fanCount:               8/8

         fanStatus:              normal

         flashCacheMode:         WriteThrough

         id:                     1238FM500A

         interconnectCount:      3

         interconnect1:          bondib0

         iormBoost:              0.0

         ipaddress1:             192.168.1.1/22

         kernelVersion:          2.6.32-400.1.1.el5uek

         locatorLEDStatus:       off

         makeModel:              Oracle Corporation SUN FIRE X4270 M3 SAS

         metricHistoryDays:      7

         notificationMethod:     snmp

         notificationPolicy:     critical,warning,clear

         offloadEfficiency:      9,021.8

         powerCount:             2/2

         powerStatus:            normal

         releaseVersion:         11.2.3.2.0

         releaseTrackingBug:     14212264,16042459

         snmpSubscriber:         host=mgmt6.uk.centricaplc.com,port=1830,community=public

                                 host=mgmt8.uk.centricaplc.com,port=1830,community=public

         status:                 online

         temperatureReading:     26.0

         temperatureStatus:      normal

         upTime:                 0 days, 0:02

         cellsrvStatus:          running

         msStatus:               running

         rsStatus:               running

 

CellCLI> list physicaldisk

         19:0            K60M7N          normal

         19:1            KZNBBL          normal

         19:2            K5ZLEN          normal

         19:3            K60P5N          normal

         19:4            K3ZRAN          normal

         19:5            K2AH4N          normal

         19:6            K3YWGN          normal

         19:7            K6015N          normal

         19:8            K5ZTGN          normal

         19:9            K608XN          normal

         19:10           K60NWN          normal

         19:11           K9U7TN          normal

         FLASH_1_0       5L0046ZR        normal

         FLASH_1_1       5L0046ZN        normal

         FLASH_1_2       5L0046W8        normal

         FLASH_1_3       5L0046SY        normal

         FLASH_2_0       5L0046LD        normal

         FLASH_2_1       5L0046NY        normal

         FLASH_2_2       5L0046RD        normal

         FLASH_2_3       5L0046PL        normal

         FLASH_4_0       5L0046SQ        normal

         FLASH_4_1       5L0047A4        normal

         FLASH_4_2       5L00470Q        normal

         FLASH_4_3       5L00472Q        normal

         FLASH_5_0       5L0046LF        normal

         FLASH_5_1       5L0046V8        normal

         FLASH_5_2       5L004773        normal

         FLASH_5_3       5L0046TP        normal

 

CellCLI> list griddisk

         DATA_CD_00_3mgmt      active

         DATA_CD_01_3mgmt      active

         DATA_CD_02_3mgmt      active

         DATA_CD_03_3mgmt      active

         DATA_CD_04_3mgmt      active

         DATA_CD_05_3mgmt      active

         DATA_CD_06_3mgmt      active

         DATA_CD_07_3mgmt      active

         DATA_CD_08_3mgmt      active

         DATA_CD_09_3mgmt      active

         DATA_CD_10_3mgmt      active

         DATA_CD_11_docexacel13mgmt      active

         DBFS_DG_CD_02_3mgmt   active

         DBFS_DG_CD_03_3mgmt   active

         DBFS_DG_CD_04_3mgmt   active

         DBFS_DG_CD_05_3mgmt   active

         DBFS_DG_CD_06_3mgmt   active

         DBFS_DG_CD_07_3mgmt   active

         DBFS_DG_CD_08_3mgmt   active

         DBFS_DG_CD_09_3mgmt   active

         DBFS_DG_CD_10_3mgmt   active

         DBFS_DG_CD_11_3mgmt   active

         RECO_CD_00_3mgmt      active

         RECO_CD_01_3mgmt      active

         RECO_CD_02_3mgmt      active

         RECO_CD_03_3mgmt      active

         RECO_CD_04_3mgmt      active

         RECO_CD_06_3mgmt      active

         RECO_CD_07_3mgmt      active

         RECO_CD_08_3mgmt      active

         RECO_CD_09_3mgmt      active

         RECO_CD_10_3mgmt      active

         RECO_CD_11_13mgmt      active

 

When the status for all groups is observed to be “WAIT”, the rebalance is complete and the ASM power limit can be returned to its original value.

sqlplus / as sysdba

SQL> alter system set asm_power_limit = 2 scope = both;

System altered.

SQL> show parameter POWER

asm_power_limit                      integer     2

 

 


FlashCache Writeback – Disable

on September 3, 2013 at 10:15 am
Posted In: Oracle Exadata (Cat)

These steps have been followed for PP1.

NOTE – In the final list griddisk attributes name,asmmodestatus,asmdeactivationoutcome step for each cell, ensure that asmmodestatus is ONLINE (not SYNCING) before moving on to the next storage server.

The following query, run in sqlplus / as sysasm, is useful for checking that each cell presents 12 disks for diskgroups 1 and 3 (DATA and RECO) and 10 for diskgroup 2 (DBFS):
select group_number, substr(path,1,15), count(1) from v$asm_disk group by group_number, substr(path,1,15);

==========================================================================================================

1. Check griddisk status by verifying that the griddisk attribute “asmdeactivationoutcome” = “Yes” for all griddisks on this cell. The following command should return no rows; do not proceed if any griddisk is returned.
dcli -g /home/oracle/cell_group -l root cellcli -e "list griddisk where asmdeactivationoutcome != 'Yes' attributes name,asmmodestatus,asmdeactivationoutcome"

2. Determine the amount of data to be flushed by checking how much dirty data is in the flash cache. This gives the number of bytes per cell that must be de-staged to disk, which indicates how long the flush will take.
dcli -g /home/oracle/cell_group -l root cellcli -e "list metriccurrent attributes name,metricvalue where name like \'FC_BY_DIRTY.*\' "

3. Flush the flashcache across all cells

To perform this step, it is recommended to have two separate sessions: one to execute the command below and the other to monitor its progression in the next step.

Issue the following command to begin the flush in one of the two sessions:
dcli -g /home/oracle/cell_group -l root cellcli -e "alter flashcache all flush"

If any errors occur, they will be displayed in this session; otherwise, it will show a successful flush across all cells.

4. Check the flush status across all cells

a. Execute the following command every few minutes in the second session to monitor the progress. As dirty blocks are de-staged to disk, the count will reduce to zero (0). This will take some time; running the command repeatedly lets you estimate how long the flush will take.
dcli -g /home/oracle/cell_group -l root cellcli -e "list metriccurrent attributes name,metricvalue where name like \'FC_BY_DIRTY.*\' "

b. The following command should return “working” for each flash disk on each cell while the cache is being flushed, and “completed” when it has finished. Execute it in the second session:
dcli -g /home/oracle/cell_group -l root cellcli -e "LIST CELLDISK ATTRIBUTES name, flushstatus, flusherror" | grep FD
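If preferred, the two monitoring commands in step 4 can be wrapped in a rough polling loop rather than rerun by hand (a sketch only; the 300-second interval is arbitrary and the group file path is as used above):

# Poll flush progress; interrupt with Ctrl-C once FC_BY_DIRTY reaches zero
# and every flush status shows "completed"
while true; do
  date
  dcli -g /home/oracle/cell_group -l root cellcli -e "list metriccurrent attributes name,metricvalue where name like \'FC_BY_DIRTY.*\' "
  dcli -g /home/oracle/cell_group -l root cellcli -e "LIST CELLDISK ATTRIBUTES name, flushstatus, flusherror" | grep FD
  sleep 300
done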

 

The following steps are executed on each cell individually, one cell at a time. All steps performed directly on a cell use the cellcli utility.

Log onto the first cell that will have the write back flash cache disabled.

5. Drop the flashcache for this cell after the flush completes
cellcli -e drop flashcache

6. Inactivate all griddisks on the cell
cellcli -e alter griddisk all inactive

7. Shut down the cellsrv service
cellcli -e alter cell shutdown services cellsrv

8. Reset the cell flash cache state to writethrough
cellcli -e "alter cell flashCacheMode=writethrough"

9. Restart the cellsrv service
cellcli -e alter cell startup services cellsrv

10. Reactivate the griddisks on the cell
cellcli -e alter griddisk all active

11. Recreate the flash cache
cellcli -e create flashcache all

12. Check the status of this cell flash cache state
cellcli -e list cell detail | grep flashCacheMode

13. Check the griddisks of the cell

Before moving on to the next cell, check the attribute “asmmodestatus” of all of the griddisks and make sure they are all “ONLINE” and that the attribute “asmdeactivationoutcome” is “Yes”. It may be necessary to execute the following command several times until “asmmodestatus” shows “ONLINE”.
cellcli -e list griddisk attributes name,asmmodestatus,asmdeactivationoutcome
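For reference, steps 5 to 12 can be gathered into a single per-cell sequence (a sketch of the same commands, run as root on one cell; the step 13 check must still pass before the next cell is started):

# Disable write back flash cache on ONE cell; run locally on that cell
cellcli -e drop flashcache                           # step 5
cellcli -e alter griddisk all inactive               # step 6
cellcli -e alter cell shutdown services cellsrv      # step 7
cellcli -e "alter cell flashCacheMode=writethrough"  # step 8
cellcli -e alter cell startup services cellsrv       # step 9
cellcli -e alter griddisk all active                 # step 10
cellcli -e create flashcache all                     # step 11
cellcli -e list cell detail | grep flashCacheMode    # step 12
# Step 13: repeat until all griddisks show asmmodestatus ONLINE
cellcli -e list griddisk attributes name,asmmodestatus,asmdeactivationoutcome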

 
========================================================

Checking flashCacheMode on all cells:
dcli -g /home/oracle/cell_group -l root cellcli -e "list cell detail" | grep flashCacheMode


FlashCache Writeback – Enable

on September 3, 2013 at 10:13 am
Posted In: Oracle Exadata (Cat)

Exadata Smart Flashcache

Enabling Write Back Cache

# dcli -g cell_group -l celladmin cellcli -e list cell attributes flashCacheMode

: WriteThrough

:
: WriteThrough

i.e. no write back cache

CellCli> drop flashcache
CellCli> alter cell shutdown services cellsrv
CellCli> alter cell flashCacheMode = WriteBack
CellCli> alter cell startup services cellsrv
CellCli> create flashcache all
CellCli> list cell attributes flashCacheMode    (should now return WriteBack)
REPEAT FOR ALL CELL SERVERS
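As a sketch, the same sequence could be driven from a database server with a loop over the cells (cell names are invented; for a rolling change, first verify asmdeactivationoutcome = Yes on each cell, as in the disable procedure):

# Repeat the enable sequence on each cell in turn (names assumed)
for cell in cel01 cel02 cel03; do
  ssh root@${cell} 'cellcli -e drop flashcache;
    cellcli -e alter cell shutdown services cellsrv;
    cellcli -e "alter cell flashCacheMode = WriteBack";
    cellcli -e alter cell startup services cellsrv;
    cellcli -e create flashcache all'
done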

# dcli -g cell_group -l celladmin cellcli -e list cell attributes flashCacheMode

: WriteBack

:
: WriteBack


Diagnostics – OSWATCHER

on September 3, 2013 at 8:29 am
Posted In: Oracle Exadata (Cat)

To find where OSWatcher is running from, use:
ps -ef | grep osw


Diagnostics – SOSREPORT

on September 3, 2013 at 8:28 am
Posted In: Oracle Exadata (Cat)

Normally in /usr/sbin.
Run as root.
No parameters are required, and the process makes no changes to the system.

/usr/sbin/sosreport


Diagnostics – SUNDIAG

on September 3, 2013 at 8:26 am
Posted In: Oracle Exadata (Cat)

Run on instruction from Oracle. sundiag.sh is found in /opt/oracle.SupportTools on both compute and storage nodes.
Run as root with no parameters.


Infiniband Checks

on September 3, 2013 at 7:26 am
Posted In: Oracle Exadata (Cat)

InfiniBand Checks

cd /opt/oracle.SupportTools

./CheckSWProfile.sh -I IS_SPINE,,

ibdiagnet -c 1000    # check network quality

ibqueryerrors.pl -rR -s LinkDowned,RcvSwRelayErrors,XmtDiscards,XmtWait    # check port configuration

ibclearcounters    # clear old InfiniBand error counters

 

InfiniBand Switch Checks

Log on to InfiniBand as root

cd /usr/local/diag

version

showunhealthy

env_test

listlinkup

ibnetdiscover

ibcheckstate -v

 


Cell Server Administration

on August 28, 2013 at 12:47 pm
Posted In: Oracle Exadata (Cat)

For a half rack we have 7 cells.

Each cell has 12 disks

Each physical disk contains a cell disk, which is divided into a number of grid disks, each of which holds one ASM disk.

 

Cell alerts can be raised by Exacheck, check on the cell using cellcli

e.g.

oracle> ssh celladmin@cell01-mgmt

cellcli

cellcli> list alerthistory where alerttype='stateless'

Clear alerts:

cellcli> alter alerthistory <alert id> examinedBy="<name>"

Sometimes the ASM details do not appear in “list griddisk ... detail” output:

CELLCLI> list griddisk DATA_CD_01_ detail
:
availableTo:
:

In that case, restart cellsrv:

CELLCLI> alter cell restart services cellsrv

If this fails with CELL-02620 (unmapped CELLSRV), restart the MS service instead:

CELLCLI> alter cell restart services ms

To list the cells in use run the following against an ASM instance

SQL> select distinct substr(path,3,13) from v$asm_disk;

This will list the IP addresses of the cell servers.

 

 

$ ssh -l root <cell>

$ cellcli

cellcli> list cell detail

cellcli> list physicaldisk

cellcli> list celldisk

cellcli> list physicaldisk detail

cellcli> list celldisk detail

cellcli> list griddisk detail

cellcli> list alerthistory where alerttype='stateless'

OR from the db server

$ dcli -g cells -l celladmin cellcli -e list cell detail

 

EXAMPLE 1:
MISSING DISK BECAUSE OF INCORRECT SIZINGS

in ASM

SQL> select group_number, name from v$asm_diskgroup;

1 DATA

2 SYS

3 RECO

 

SQL> select group_number, substr(path,1,15), count(1) from v$asm_disk group by group_number, substr(path,1,15);

GROUP_NUMBER SUBSTR(path,1,15) COUNT

============================

1 o/192.168.10.1 12

1 o/192.168.10.2 12

1 o/192.168.10.3 12

1 o/192.168.10.4 12

1 o/192.168.10.5 12

1 o/192.168.10.6 12

1 o/192.168.10.7 12

2 o/192.168.10.1 12

2 o/192.168.10.2 12

2 o/192.168.10.3 11

2 o/192.168.10.4 12

2 o/192.168.10.5 12

2 o/192.168.10.6 12

2 o/192.168.10.7 12

3 o/192.168.10.1 12

3 o/192.168.10.2 12

3 o/192.168.10.3 12

3 o/192.168.10.4 12

3 o/192.168.10.5 12

3 o/192.168.10.6 12

3 o/192.168.10.7 12

Therefore the SYS group is missing one disk on 192.168.10.3

Run cellcli on that cell server

cellcli> list griddisk

SYS_DG_CD_01_ active

SYS_DG_CD_02_ active

*** disk 3 is missing

SYS_DG_CD_04_ active

etc

Trying to recreate the volume gives errors:

cellcli> create griddisk SYS_DG_CD_03_ cellDisk=CD_03_

CELL-02566: No space for the specified allocation.

Investigating further shows that one of the RECO disks is incorrectly sized:

cellcli> list griddisk RECO_CD_03_ detail

name:                   RECO_CD_03_

asmDiskgroupName:       RECO

asmDiskName:            RECO_CD_03_

:

size:                   164G (whereas all the others are 105G)

 

RESIZE the disk

SQL> alter diskgroup RECO drop disk RECO_CD_03_;

CELLCLI> drop griddisk RECO_CD_03_

If there are issues whilst it’s being removed from the disk group, try again:

CELLCLI> drop griddisk RECO_CD_03_

CELLCLI> create griddisk RECO_CD_03_ cellDisk=CD_03_, size=105G

SQL> alter diskgroup RECO add disk 'o/192.168.10.3/RECO_CD_03_' name RECO_CD_03_;

Now we should be able to create the SYS disk

CELLCLI> create griddisk SYS_DG_CD_03_ cellDisk=CD_03_

SQL> alter diskgroup SYS_DG add disk 'o/192.168.10.3/SYS_DG_CD_03_' name SYS_DG_CD_03_;

 

EXAMPLE 2:
RECREATING A DISK REPLACED BY ORACLE ENGINEERS

Create the CELLDISK

CellCLI> CREATE CELLDISK CD_07_ physicalDisk='nm76dks'

CELL-04527: Cannot complete the creation of cell disk CD_07_wycexacel10mgmt. Received error: CELL-04539: Cannot create Single Disk LUN on physical disk 14:7 because there is already a LUN on this physical disk.

CellCLI> list PHYSICALDISK nm76dks detail

name:                   14:7

deviceId:               21

diskType:               HardDisk

enclosureDeviceId:      14

errMediaCount:          0

errOtherCount:          0

foreignState:           false

luns:                   0_7

makeModel:              “HITACHI HUS1560SCSUN600G”

physicalFirmware:       A6C0

physicalInsertTime:     2012-12-05T10:23:36+00:00

physicalInterface:      sas

physicalSerial:         nm76dks

physicalSize:           558.9109999993816G

slotNumber:             7

status:                 normal

CellCLI> CREATE CELLDISK CD_07_ lun=0_7

CellDisk CD_07_ successfully created

CellCLI> list  CELLDISK CD_07_ detail

name:                   CD_07_

comment:

creationTime:           2012-12-05T13:35:11+00:00

deviceName:             /dev/sdac

devicePartition:        /dev/sdac

diskType:               HardDisk

errorCount:             0

freeSpace:              556.8125G

freeSpaceMap:           offset=32M,size=19.96875G

offset=20.015625G,size=537.84375G

id:                     9b43a158-567e-4071-b267-576305a7f957

interleaving:           none

lun:                    0_7

physicalDisk:           nm76dks

raidLevel:              0

size:                   557G

status:                 normal

 

Create the Griddisks

CellCLI> create griddisk DATA_CD_07_ cellDisk= CD_07_, size=423G

GridDisk DATA_CD_07_ successfully created

 

CellCLI> create griddisk RECO_CD_07_ cellDisk= CD_07_, size=105G

GridDisk RECO_CD_07_ successfully created

 

CellCLI> create griddisk DBFS_DG_CD_07_ cellDisk= CD_07_, size= 29G

GridDisk DBFS_DG_CD_07_wycexacel01mgmt successfully created

The sizes to use can be found by listing the existing grid disks in the same group.
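For example, to see the sizes of the existing grid disks on the cell:

CellCLI> list griddisk attributes name,size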

 

Create the ASM disks

Example syntax

SQL> alter diskgroup SYS_DG add disk 'o/192.168.10.3/SYS_DG_CD_03_' name SYS_DG_CD_03_;

Restart the cell server (cellsrv) when the ASM attributes do not appear in “list griddisk” output:

CellCLI> list griddisk DATA_CD_07_ detail

name:                   DATA_CD_07_

availableTo:

cachingPolicy:          default

cellDisk:               CD_07_

comment:

creationTime:           2012-12-05T13:51:14+00:00

diskType:               HardDisk

errorCount:             0

id:                     d875363f-1f96-44a9-8488-9a1ab595e137

offset:                 32M

size:                   423G

status:                 active

CellCLI> alter cell restart services cellsrv

CellCLI> list griddisk DATA_CD_07_ detail

name:                   DATA_CD_07_

asmDiskgroupName:       DATA

asmDiskName:            DATA_CD_07_

asmFailGroupName:      

availableTo:

cachingPolicy:          default

cellDisk:               CD_07_

comment:

creationTime:           2012-12-05T13:51:14+00:00

diskType:               HardDisk

errorCount:             0

id:                     d875363f-1f96-44a9-8488-9a1ab595e137

offset:                 32M

size:                   423G

status:                 active

 

UNMAPPED CELLSRV ERROR

CELL-02620: An unmapped CELLSRV error has occurred. The internal message is: msosscommerr#3.

WORKAROUND:

cellcli> alter cell restart services ms

ILOM Administration

on August 28, 2013 at 12:37 pm
Posted In: Oracle Exadata (Cat)

Reset an ilom from a cell

$ ipmitool sunoem cli

-> reset /SP

Are you sure you want to reset /SP (y/n)? y

Performing reset on /SP

->

 

Reset an ilom from an ib switch (run from a db server)

$ ssh <ib switch> -l ilom-admin

-> reset /SP

Are you sure you want to reset /SP (y/n)? y

Performing reset on /SP

->

 

Add a read only user

$ ssh <ilom>

-> create /SP/users/checkcomms password=checkscc role=o

-> show /SP/users/checkcomms

 


Exadata One liners

on August 28, 2013 at 9:39 am
Posted In: Oracle Exadata (Cat)

List Hardware serial Numbers

root> dmidecode

 

Change Passwords

To change the oracle password on all db servers:

root> dcli -g /home/oracle/db_group -l root "echo <new password> | passwd --stdin oracle"

To change the root password on all db servers:

root> dcli -g /home/oracle/db_group -l root "echo <new password> | passwd --stdin root"

 

To unlock the oracle account:

root> pam_tally2 -r -u oracle

 

Run exacheck

oracle> /home/oracle/exacheck/exechk_zip

or

oracle> /opt/oracle.SupportTools/exachk

 

Cell alerts can be raised by Exacheck, check on the cell using cellcli

e.g.

oracle> ssh celladmin@cell01-mgmt

cellcli

cellcli> list alerthistory where alerttype='stateless'

Clear alerts:

cellcli> alter alerthistory <alert id> examinedBy="<name>"

 

 


Golden Gate Manual Notes

on June 19, 2013 at 7:20 am
Posted In: Oracle Golden Gate (Cat)

CHAPTER 16

Running an initial data load

Overview of initial data load methods

You can use Oracle GoldenGate to:

● Perform a standalone batch load to populate database tables for migration or other purposes.

● Load data into database tables as part of an initial synchronization run in preparation for change synchronization with Oracle GoldenGate.

 

The initial load can be performed from an active source database. Users and applications can access and update data while the load is running. You can perform initial load from a quiesced source database if you delay access to the source tables until the target load is completed.

 

Supported load methods

You can use Oracle GoldenGate to load data in any of the following ways:

● “Loading data with a database utility” on page 217. The utility performs the initial load.

● “Loading data from file to Replicat” on page 218. Extract writes records to an extract file and Replicat applies them to the target tables. This is the slowest initial-load method.

● “Loading data from file to database utility” on page 222. Extract writes records to extract files in external ASCII format. The files are used as data files for input into target tables by a bulk load utility. Replicat creates the run and control files.

● “Loading data with an Oracle GoldenGate direct load” on page 227. Extract communicates with Replicat directly across TCP/IP without using a Collector process or files. Replicat applies the data through the database engine.

● “Loading data with a direct bulk load to SQL*Loader” on page 231. Extract extracts records in external ASCII format and delivers them directly to Replicat, which delivers them to Oracle’s SQL*Loader bulk-load utility. This is the fastest method of loading Oracle data with Oracle GoldenGate.

 

Running an initial data load

Using parallel processing in an initial load

For all initial load methods except those performed with a database utility, you can load large databases more quickly by using parallel Oracle GoldenGate processes.

 

To use parallel processing

·         Follow the directions in this chapter for creating an initial-load Extract and an initial-load Replicat for each set of parallel processes that you want to use.

 

·         With the TABLE and MAP parameters, specify a different set of tables for each pair of Extract-Replicat processes, or you can use the SQLPREDICATE option of TABLE to partition the rows of large tables among the different Extract processes.

 

Prerequisites for initial load

 

Disable DDL processing

Before executing an initial load, disable DDL extraction and replication. DDL processing is controlled by the DDL parameter in the Extract and Replicat parameter files.

 

Prepare the target tables

The following are suggestions that can make the load go faster and help you to avoid errors.

● Data: Make certain that the target tables are empty. Otherwise, there may be duplicate-row errors or conflicts between existing rows and rows that are being loaded.

● Constraints: Disable foreign-key constraints and check constraints. Foreign-key constraints can cause errors, and check constraints can slow down the loading process. Constraints can be reactivated after the load concludes successfully.

● Indexes: Remove indexes from the target tables. Indexes are not necessary for inserts. They will slow down the loading process significantly. For each row that is inserted into a table, the database will update every index on that table. You can add back the indexes after the load is finished.

● Keys: To use the HANDLECOLLISIONS function to reconcile incremental data changes with the load, each target table must have a primary or unique key. If you cannot create a key through your application, use the KEYCOLS option of the TABLE and MAP parameters to specify columns as a substitute key for Oracle GoldenGate’s purposes. A key helps identify which row to process. If you cannot create keys, the source database must be quiesced for the load.
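As an illustration, a MAP entry using KEYCOLS might look like the following (schema, table, and column names are invented):

MAP hr.employees, TARGET hr.employees, KEYCOLS (employee_id);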

 

Configure the Manager process

On the source and target systems, configure and start a Manager process. One Manager can be used for the initial-load processes and the change-synchronization processes.

 

Create a data-definitions file

A data-definitions file is required if the source and target databases have dissimilar definitions. Oracle GoldenGate uses this file to convert the data to the format required by the target database.

 

Create change-synchronization groups

NOTE If the load is performed from a quiet source database and will not be followed by continuous change synchronization, you can omit these groups.

To prepare for the capture and replication of transactional changes during the initial load, create online Extract and Replicat groups. You will start these groups during the load procedure. See the instructions in this documentation that are appropriate for the type of replication configuration that you will be using.

Do not start the Extract or Replicat groups until instructed to do so in the initial-load instructions. Change synchronization keeps track of transactional changes while the load is being applied, and then the target tables are reconciled with those changes.

NOTE The first time that Extract starts in a new Oracle GoldenGate configuration, any open transactions will be skipped. Only transactions that begin after Extract starts are captured.

If the source database will remain active during the initial load, include the HANDLECOLLISIONS parameter in the Replicat parameter file; otherwise do not use it.

HANDLECOLLISIONS accounts for collisions that occur during the overlap of time between the initial load and the ongoing change replication. It reconciles insert operations for which the row already exists, and it reconciles update and delete operations for which the row does not exist.

It can be used in these ways:

● globally for all tables in a parameter file

● as an on/off toggle for groups of tables

● within MAP statements to enable or disable the error handling for specific table pairs.

 

Sharing parameters between process groups

Some of the parameters that you use in a change-synchronization parameter file also are required in an initial-load Extract and initial-load Replicat parameter file. You can copy those parameters from one parameter file to another, or you can store them in a central file and use the OBEY parameter in each parameter file to retrieve them. Alternatively, you can create an Oracle GoldenGate macro for the shared parameters and then call the macro from each parameter file with the MACRO parameter.
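For example, shared settings could be held once in a central fragment and pulled into each parameter file with OBEY (the file name is invented):

-- dirprm/connect.prm holds the shared lines, e.g.:
USERID ggs@ora1.ora, PASSWORD ggs123

-- each Extract/Replicat parameter file then retrieves them with:
OBEY dirprm/connect.prm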

 

Loading data with a database utility

To use a database copy utility to establish the target data, you start a change-synchronization Extract group to extract ongoing data changes while the database utility makes and applies a static copy of the data. When the copy is finished, you start the change-synchronization Replicat group to re-synchronize rows that were changed while the copy was being applied. From that point forward, both Extract and Replicat continue running to maintain data synchronization. This method does not involve any special initial-load Extract or Replicat processes.

 

To load data with a database utility

·         On the source and target systems, run GGSCI and start the Manager process.

START MANAGER

·         On the source system, start change extraction.

START EXTRACT <group>

Where: <group> is the name of the Extract group.

·         On the source system, start making the copy.

·         Wait until the copy is finished and record the time of completion.

·         View the Replicat parameter file to make certain that the HANDLECOLLISIONS parameter is listed. If not, add the parameter with the EDIT PARAMS command.

VIEW PARAMS <group>

EDIT PARAMS <group>

Where: <group> is the name of the Replicat group.

·         On the target system, start change replication.

START REPLICAT <group>

Where: <group> is the name of the Replicat group.

·         On the target system, issue the following command to verify the status of change replication.

INFO REPLICAT <group>

·         Continue to issue the INFO REPLICAT command until you have verified that change replication has posted all of the change data that was generated during the initial load. Reference the time of completion that you recorded. For example, if the copy stopped at 12:05, make sure change replication has posted data up to that point.

·         On the target system, issue the following command to turn off the HANDLECOLLISIONS parameter and disable the initial-load error handling.

SEND REPLICAT <group>, NOHANDLECOLLISIONS

·         On the target system, edit the Replicat parameter file to remove the HANDLECOLLISIONS parameter. This prevents HANDLECOLLISIONS from being enabled again the next time Replicat starts.

EDIT PARAMS <group>

·         Save and close the parameter file.

 

From this point forward, Oracle GoldenGate continues to synchronize data changes.

 

Loading data from file to Replicat

To use Replicat to establish the target data, you use an initial-load Extract to extract source records from the source tables and write them to an extract file in canonical format. From the file, an initial-load Replicat loads the data using the database interface. During the load, the change-synchronization groups extract and replicate incremental changes, which are then reconciled with the results of the load. During the load, the records are applied to the target database one record at a time, so this method is considerably slower than any of the other initial load methods. This method permits data transformation to be done on either the source or target system.

 

To load data from file to Replicat

·         On the source and target systems, run GGSCI and start Manager.

START MANAGER

·         On the source system, issue the following command to create an initial-load Extract parameter file.

EDIT PARAMS <initial-load Extract name>

·         Enter the parameters listed below in the order shown, starting a new line for each parameter statement.

·          Save and close the parameter file.

 

Initial-load Extract parameters for loading data from file to Replicat

SOURCEISTABLE

Designates Extract as an initial load process extracting records directly from the source tables.

[SOURCEDB <dsn>,][USERID <user>[, PASSWORD <password>]]

SOURCEDB specifies a data source name, if required in the connection information. Not required for Oracle.

USERID specifies database credentials, if required. For Oracle, you can include a host string, for example:

USERID ggs@ora1.ora, PASSWORD ggs123

Specifies database connection information. These parameters also allow for authentication at the operating-system level.

RMTHOST <hostname>, MGRPORT <portnumber>

Specifies the target system and port where Manager is running.

RMTFILE <filename>, [MAXFILES <number>, MEGABYTES <n>]

<filename> is the fully qualified name of the file.

MAXFILES creates a series of files that are aged as needed. Use if the file could exceed the operating system’s file size limitations. MEGABYTES designates the size of each file.

Specifies the extract file to which the load data will be written. Oracle GoldenGate creates this file during the load. Checkpoints are not maintained with RMTFILE. Use the MAXFILES and MEGABYTES options to control the size.

TABLE <owner>.<table>;

<owner> is the schema name.

<table> is the name of the table or a group of tables defined with wildcards. To exclude tables from a wildcard specification, use the TABLEEXCLUDE parameter.

Specifies a source table or tables for initial data extraction.
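Assembled from the parameters above, a hypothetical initial-load Extract parameter file might read as follows (host, port, file name, and schema are invented):

SOURCEISTABLE
USERID ggs@ora1.ora, PASSWORD ggs123
RMTHOST tgthost, MGRPORT 7809
RMTFILE /ggs/dirdat/initld.dat, MAXFILES 10, MEGABYTES 2000
TABLE hr.*;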

 

·         On the target system, issue the following command to create an initial-load Replicat parameter file.

EDIT PARAMS <initial-load Replicat name>

·         Enter the parameters listed below in the order shown, starting a new line for each parameter statement.

 

Initial-load Replicat parameters for loading data from file to Replicat

SPECIALRUN

Implements the initial-load Replicat as a one-time run that does not use checkpoints.

END RUNTIME

Directs the initial-load Replicat to terminate when the load is finished.

[TARGETDB <dsn>,][USERID <user>[, PASSWORD <password>]]

TARGETDB specifies a data source name, if required in the connection information. Not required for Oracle. USERID specifies database credentials, if required. For Oracle, you can include a host string, for example:

USERID ggs@ora1.ora, PASSWORD ggs123

Specifies database connection information. These parameters also allow for authentication at the operating-system level.

EXTFILE <filename> | EXTTRAIL <filename>

<filename> is the fully qualified name of the file.

Use EXTTRAIL only if you used the MAXFILES option of the RMTFILE parameter in the Extract parameter file.

Specifies the extract file specified with the Extract parameter RMTFILE.

{SOURCEDEFS <full pathname>} | ASSUMETARGETDEFS

Use SOURCEDEFS if the source and target tables have different definitions. Specify the source-definitions file generated by DEFGEN.

 Use ASSUMETARGETDEFS if the source and target tables have the same definitions.

Specifies how to interpret data definitions.

MAP <owner>.<table>, TARGET <owner>.<table>;

<owner> is the schema name.

<table> is the name of a table or a wildcard definition for multiple tables. To exclude tables from a wildcard specification, use the MAPEXCLUDE parameter. Specifies a relationship between a source and target table or tables.
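A matching hypothetical initial-load Replicat parameter file (again with invented names, and assuming identical source and target definitions):

SPECIALRUN
END RUNTIME
USERID ggs@ora1.ora, PASSWORD ggs123
EXTFILE /ggs/dirdat/initld.dat
ASSUMETARGETDEFS
MAP hr.*, TARGET hr.*;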

·         Save and close the file.

·         On the source system, start change extraction.

START EXTRACT <group>

·         From the directory where Oracle GoldenGate is installed on the source system, start the initial-load Extract.

$ /<Oracle GoldenGate directory>/extract paramfile dirprm/<initial-load Extract name>.prm reportfile <report file>

Where: <initial-load Extract name> is the name of the initial-load Extract that you used when creating the parameter file, and <report file> is the fully qualified name of the Extract report file.

·         Verify the progress and results of the initial extraction by viewing the Extract report file using the operating system’s standard method for viewing files.

·         Wait until the initial extraction is finished.

·         On the target system, start the initial-load Replicat.

$ /<Oracle GoldenGate directory>/replicat paramfile dirprm/<initial-load Replicat name>.prm reportfile <report file>

Where: <initial-load Replicat name> is the name of the initial-load Replicat that you used when creating the parameter file, and <report file> is the fully qualified name of the Replicat report file.

·         When the initial-load Replicat is finished running, verify the results by viewing the Replicat report file using the operating system’s standard method for viewing files.

·         On the target system, start change replication.

START REPLICAT <group>

·         On the target system, issue the following command to verify the status of change replication.

INFO REPLICAT <group>

·         Continue to issue the INFO REPLICAT command until you have verified that Replicat posted all of the change data that was generated during the initial load. For example, if the initial-load Extract stopped at 12:05, make sure Replicat posted data up to that point.

·         On the target system, issue the following command to turn off the HANDLECOLLISIONS parameter and disable the initial-load error handling.

SEND REPLICAT <group>, NOHANDLECOLLISIONS

·         On the target system, edit the Replicat parameter file to remove the HANDLECOLLISIONS parameter. This prevents HANDLECOLLISIONS from being enabled again the next time Replicat starts.

EDIT PARAMS <group>

·         Save and close the parameter file.

From this point forward, Oracle GoldenGate continues to synchronize data changes.

 

Loading data from file to database utility

To use a database bulk-load utility, you use an initial-load Extract to extract source records from the source tables and write them to an extract file in external ASCII format. The file can be read by Oracle’s SQL*Loader, Microsoft’s BCP, DTS, or SQL Server Integration Services (SSIS) utility, or IBM’s Load Utility (LOADUTIL). During the load, the change-synchronization groups extract and replicate incremental changes, which are then reconciled with the results of the load. As part of the load procedure, Oracle GoldenGate uses the initial-load Replicat to create run and control files required by the database utility. Any data transformation must be performed by the initial-load Extract on the source system because the control files are generated dynamically and cannot be pre-configured with transformation rules.

 

To load data from file to database utility

·         Make certain to satisfy “Prerequisites for initial load” on page 215.

·         On the source and target systems, run GGSCI and start Manager.

START MANAGER

·         On the source system, issue the following command to create an initial-load Extract parameter file.

EDIT PARAMS <initial-load Extract name>

·         Enter the parameters listed below in the order shown, starting a new line for each parameter statement.

 

Initial-load Extract parameters for loading from file to database utility

SOURCEISTABLE

Designates Extract as an initial load process that extracts records directly from the source tables.

[SOURCEDB <dsn>,][USERID <user>[, PASSWORD <password>]]

SOURCEDB specifies a data source name, if required in the connection information. Not required for Oracle.

USERID specifies database credentials, if required. For Oracle, you can include a host string, for example:

USERID ggs@ora1.ora, PASSWORD ggs123

Specifies database connection information. These parameters also allow for authentication at the operating-system level.

RMTHOST <hostname>, MGRPORT <portnumber> [, PARAMS -E -d <defs file>]

-E converts ASCII to EBCDIC.

-d <defs file> specifies the source definitions file.

Specifies the target system and port where Manager is running.

The PARAMS clause is necessary when loading with IBM’s Load Utility, because Oracle GoldenGate will need to refer to the source definitions file.

RMTFILE <filename>, [MAXFILES <number>, MEGABYTES <n>]

<filename> is the fully qualified name of the file.

MAXFILES creates a series of files that are aged as needed. Use if the file could exceed the operating system’s file size limitations.

MEGABYTES designates the size of each file.

Specifies the extract file to which the load data will be written. Oracle GoldenGate creates this file during the load. Checkpoints are not maintained with RMTFILE.

FORMATASCII, {BCP | SQLLOADER}

BCP is used for BCP, DTS, or SSIS.

SQLLOADER is used for Oracle SQL*Loader or IBM Load Utility.

Directs output to be formatted as ASCII text rather than the default canonical format.

TABLE <owner>.<table>;

<owner> is the schema name.

<table> is the name of the table or a group of tables defined with wildcards. To exclude tables from a wildcard specification, use the TABLEEXCLUDE parameter. Specifies a source table or tables for initial data extraction.
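For illustration, a hypothetical initial-load Extract parameter file targeting SQL*Loader (names invented):

SOURCEISTABLE
USERID ggs@ora1.ora, PASSWORD ggs123
RMTHOST tgthost, MGRPORT 7809
RMTFILE /ggs/dirdat/initld.dat
FORMATASCII, SQLLOADER
TABLE hr.*;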

·         Save and close the parameter file.

·         On the target system, issue the following command to create an initial-load Replicat parameter file.

EDIT PARAMS <initial-load Replicat name>

·         Enter the parameters listed below in the order shown, starting a new line for each parameter statement.

 

Initial-load Replicat parameters for loading from file to database utility

GENLOADFILES