isitdevops.com/databases

Useful computing notes

Interesting 12c Password Management document

on May 3, 2017 at 8:36 am
Posted In: 12c General (Cat)

Interesting password management document

What’s New in Oracle Database 12c Password file?


sql for Optimizer and sql tuning jobs

on April 25, 2017 at 8:38 am
Posted In: 12c General (Cat)

select client_name,JOB_NAME,JOB_STATUS,JOB_START_TIME,JOB_DURATION from DBA_AUTOTASK_JOB_HISTORY where JOB_START_TIME >systimestamp -7 and client_name='sql tuning advisor';

select window_name, autotask_status, optimizer_stats from dba_autotask_window_clients;

select * from DBA_SCHEDULER_WINDOWS;

select window_name, resource_plan, REPEAT_INTERVAL, duration from dba_scheduler_windows;

select client_name,JOB_NAME,JOB_STATUS,JOB_START_TIME,JOB_DURATION from DBA_AUTOTASK_JOB_HISTORY where JOB_START_TIME >systimestamp -7;

The autotask client_name values are:
auto optimizer stats collection
sql tuning advisor
auto space advisor

select client_name,JOB_NAME,JOB_STATUS,JOB_START_TIME,JOB_DURATION from DBA_AUTOTASK_JOB_HISTORY where JOB_START_TIME >systimestamp -7 and client_name='auto optimizer stats collection';

select JOB_START_TIME,JOB_DURATION from DBA_AUTOTASK_JOB_HISTORY where JOB_START_TIME >systimestamp -7 and client_name='auto optimizer stats collection';
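The reporting queries above can be paired with DBMS_AUTO_TASK_ADMIN to switch a task on or off; a minimal sketch, assuming the standard client names listed above:

BEGIN
  -- disable the SQL Tuning Advisor autotask in all maintenance windows
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
  -- and re-enable it later
  DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;
/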


Move GG using old trail files

on April 7, 2017 at 9:16 am
Posted In: Oracle Golden Gate (Cat)

1) Stop the Replicat that applies the data to database B.
2) info replicate_name_B, showch
3) send replicate_name_B status
4) info replicate_name_B, details
5) In the new GoldenGate home for database B, create the Replicat.
6) Alter replicate_name_new_B with the extseqno and RBA noted above.
7) start replicate_name_new_B
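A minimal ggsci sketch of steps 5–7, assuming the checkpoint from step 2 showed trail sequence 12 and RBA 3456789 (hypothetical values and trail path):

ggsci> ADD REPLICAT replicate_name_new_B, EXTTRAIL /ogg/dirdat/aa
ggsci> ALTER REPLICAT replicate_name_new_B, EXTSEQNO 12, EXTRBA 3456789
ggsci> START REPLICAT replicate_name_new_B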


Linux up/down arrow search

on January 25, 2017 at 10:02 am
Posted In: Configuration (Cat)

add the following 2 lines to .bash_profile

bind '"\e[A": history-search-backward'
bind '"\e[B": history-search-forward'
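The same bindings can be made available to every readline-enabled program (not just bash) by putting the raw settings in ~/.inputrc instead; a sketch:

"\e[A": history-search-backward
"\e[B": history-search-forward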

Triggers and issues

on January 5, 2017 at 2:00 pm
Posted In: Oracle Golden Gate (Cat)

MATERIALIZED VIEWS

Use FOR UPDATE when creating the MV or

If the base tables are being replicated, exclude the materialized view from replication, e.g. MAPEXCLUDE .

impdp fails on materialized views

ORA-31685: Object type failed due to insufficient privileges. Failing sql is:
ORA-31685: Object type MATERIALIZED_VIEW:"ACQ"."DIM_MERCHANT_MV" failed due to insufficient privileges. Failing sql is:
CREATE MATERIALIZED VIEW …..

Oracle Doc ID 1563217.1

select status, object_id, object_type, owner||'.'||object_name "OWNER.OBJECT" from dba_objects where object_name like '%DIM_MERCHANT_MV%' order by 4,2;

Ignore the error if the Materialized View already exists in the target database.

Also check if the schema on the source database still has the create materialized view privilege.

TRIGGERS

To enable the firing of triggers in the destination database by tables that are being replicated by GoldenGate, no matter which schema the trigger is defined in, we need to add instructions to the GG Replicat parameter file.

e.g.

Trigger cross_schema_ddl.trans_details_sbi_trig maintains fdn_central.trans_details_settle_batch_id whenever the settle_batch_id column of rdr_central.trans_details is created or changed.

SQLEXEC "begin dbms_ddl.set_trigger_firing_property('CROSS_SCHEMA_DDL','TRANS_DETAILS_SBI_TRIG',FALSE); end;"

Changing the setting to TRUE will DISABLE trigger firing by the GoldenGate apply process, but not by other database sessions.

DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY (
   trig_owner   IN VARCHAR2,
   trig_name    IN VARCHAR2,
   property     IN INTEGER,
   setting      IN BOOLEAN);

trig_owner

Schema of the trigger to set

trig_name

Name of the trigger to set

fire_once

    • If TRUE, the trigger is set to fire once. By default, the fire_once parameter is set to TRUE for DML and DDL triggers.

    • If FALSE, the trigger is set to always fire, unless the apply_server_only property is set to TRUE, which overrides the fire_once setting.

property

    • DBMS_DDL.fire_once to set the fire_once property of the trigger

    • DBMS_DDL.apply_server_only to indicate whether trigger fires only in the context of SQL apply processes maintaining a logical standby database or Streams apply processes

setting

Value of property being set

DML triggers created on a table have their fire-once property set to TRUE. In this case the triggers only fire when the table is modified by a user process; they are automatically disabled inside Oracle processes maintaining a logical standby database (SQL Apply) or performing replication (Streams Apply), and thus do not fire when a SQL Apply or Streams Apply process modifies the table. There are two ways to make a trigger fire when a SQL Apply or Streams Apply process changes a maintained table: (a) set the fire-once property of the trigger to FALSE, which allows it to fire both in the context of a user process and of a SQL or Streams Apply process, or (b) set the apply-server-only property to TRUE, making the trigger fire only in the context of a SQL Apply or Streams Apply process and not in the context of a user process.

    • FIRE_ONCE=TRUE, APPLY_SERVER_ONLY=FALSE

      This is the default property setting for a DML trigger. The trigger only fires when user process modifies the base table.

    • FIRE_ONCE=TRUE or FALSE, APPLY_SERVER_ONLY=TRUE

      The trigger only fires when a SQL Apply or Streams Apply process modifies the base table; it does not fire when a user process modifies the base table. Thus the apply-server-only property overrides the fire-once property of a trigger.
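As a concrete sketch, using the owner and trigger name from the example above and the documented DBMS_DDL constants:

BEGIN
  -- let the trigger fire for user sessions AND for apply/replication processes
  DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY(
    trig_owner => 'CROSS_SCHEMA_DDL',
    trig_name  => 'TRANS_DETAILS_SBI_TRIG',
    property   => DBMS_DDL.fire_once,
    setting    => FALSE);
END;
/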

CONSTRAINTS ISSUES

To ensure CFEDWH and CFEBIDEMO constraints are the same:

To disable the constraints

Run the following on CFEDWH

set lines 256
set pages 0
set head off
SELECT 'alter table '||owner || '.' ||table_name||' DISABLE NOVALIDATE constraint '||constraint_name||';' FROM dba_constraints WHERE OWNER = 'ACQ' and constraint_name not like 'SYS_%' and constraint_name not like 'BIN%';

And run the output from above on CFEBIDEMO

Run the following on CFEDWH

set lines 256
set pages 0
set head off
SELECT 'alter table '||owner || '.' ||table_name||' ' || substr(status,1,length(status)-1) || ' ' || decode(VALIDATED,'VALIDATED',' VALIDATE ', 'NOT VALIDATED', ' NOVALIDATE ') ||
' constraint '||constraint_name||';' FROM dba_constraints WHERE OWNER = 'ACQ' and constraint_name not like 'SYS_%' and constraint_name not like 'BIN%';

And run the output from above on CFEBIDEMO

RMAN WILL NOT DELETE ARCHIVE LOGS

Refer to RMAN-08137: WARNING: Archived Log Not Deleted, Needed For Standby Or Upstream Capture Process (Doc ID 1993799.1)

https://support.oracle.com/epmos/faces/SearchDocDisplay?_adf.ctrl-state=16snt5y8wp_4&_afrLoop=192052942692042

Caused by an extract not being deleted cleanly.

SQL>SELECT CAPTURE_NAME, CAPTURE_TYPE, STATUS, to_char(REQUIRED_CHECKPOINT_SCN,'999999999999999') as REQ_SCN, to_char(OLDEST_SCN,'999999999999999') as OLDEST_SCN FROM DBA_CAPTURE;

CAPTURE_NAME     CAPTURE_TY STATUS   REQ_SCN          OLDEST_SCN
---------------- ---------- -------- ---------------- ----------------
OGG$CAP_CFEAQEXT LOCAL      DISABLED 201737048        201737048

SQL>

We can see that CFEAQEXT extract is still holding archive logs in the database.

To clean up :

$ GG

$ ggsci

ggsci> dblogin useridalias ggadmindc

Successfully logged into database.

ggsci> unregister extract CFEAQEXT database

2016-10-31 10:32:56 INFO OGG-01750 Successfully unregistered EXTRACT CFEAQEXT from database.

One should now be able to delete the archive logs using RMAN.

rman target /

RMAN> delete archivelog all completed before 'sysdate-1';

GOLDENGATE ERROR ORA-01732

Error in file is:

2016-11-03 16:22:51  WARNING OGG-00869  Oracle GoldenGate Delivery for Oracle, cfeaqrep.prm:  OCI Error ORA-01732: data manipulation operation not legal on this view (status = 1732),

SQL .

2016-11-03 16:22:51  WARNING OGG-01004  Oracle GoldenGate Delivery for Oracle, cfeaqrep.prm:  Aborted grouped transaction on ‘ACQ.TRANS_DETAILS_RECOGN_MV’, Database error 1732

(OCI Error ORA-01732: data manipulation operation not legal on this view (status = 1732),

Issue is that for Replicat, the materialized view must be updateable.

Options:

  1. Create the Materialized view as updatable (with FOR UPDATE)
  2. If the base tables that compose the Materialized view are being replicated, then exclude the materialized view from replication using the mapexclude parameter: MAPEXCLUDE .

TEMP SPACE ISSUE

Import from live fails for central if TEMP is less than 215G

UNDO SPACE ISSUE

Import from live fails for central if UNDO is less than 380G

GOLDENGATE TABLESPACE FULL

Shutdown all goldengate processes including the mgr.

alter table ggs_admin.ggs_ddl_hist enable row movement;

alter table ggs_admin.ggs_ddl_hist shrink space cascade;

alter table ggs_admin.ggs_ddl_hist disable row movement;

As soon as the database starts logging archive execute

alter tablespace goldengate autoextend on maxsize 64g;

I did not wait for the shrink operations to complete.

ORA-01466: unable to read data – table definition has changed

Tried an export of the ACQ schema from cfedwh for GG to cfebidemo, but it kept failing with errors similar to this title.

The issue was that I had taken the SCN for the export, development had then processed ETL, and only THEN did I try the export.

Because the tables had changed since that SCN, the export kept hitting this error.

Solution:

Take a new current_scn value from the database and immediately run the export.

Error reading LCR from data source. Status 509, data source type 0.

e.g.

2016-12-13 13:45:34 ERROR OGG-02171 Oracle GoldenGate Delivery for Oracle, cfeaqrep.prm: Error reading LCR from data source. Status 509, data source type 0. 
2016-12-13 13:45:34 ERROR OGG-02198 Oracle GoldenGate Delivery for Oracle, cfeaqrep.prm: Incompatible record (logical EOF) in trail file /backup/oracle/OGG/dirdat/cfedwh/AQ000000002, rba 110,658,340. 
2016-12-13 13:45:39 ERROR OGG-01668 Oracle GoldenGate Delivery for Oracle, cfeaqrep.prm: PROCESS ABENDING.

Logdump >open /backup/oracle/OGG/dirdat/cfedwh/AQ000000002 
Current LogTrail is /backup/oracle/OGG/dirdat/cfedwh/AQ000000002 
Logdump >count 
LogTrail /backup/oracle/OGG/dirdat/cfedwh/AQ000000002 has 147666 records 
Total Data Bytes 238186214 
Avg Bytes/Record 1613 
GGSPKUpdate 147657 
DDL 4 
Metadata Records 4 
Others 1 
After Images 147661 

Average of 10 Transactions 
Bytes/Trans ….. 24527418 
Records/Trans … 14766 
Files/Trans ….. 1


logdump>open /backup/oracle/OGG/dirdat/cfedwh/AQ000000002 
logdump>pos 110658340 
logdump>ghdr on 
logdump>detail data on 
logdump>usertoken detail 
logdump>ggstoken detail 
logdump>n

Bad record found at RBA 110658340, format 5.50 (Header token)
000a ffff | ….


It seems that the record is corrupted; follow the note:
Resync / Recover Data ( Tables , Trails ) Trails Are Missing, Damaged, Incompatible, Corrupt or Suspect for Extract or Replicat ( Doc ID 1614665.1 )

TRY AGAIN

select scn_to_timestamp (529512856) from dual;

info cfeaqext, showch

ALTER EXTRACT cfeaqext  SCN 529512856

ALTER EXTRACT cfeaqext  ETROLLOVER

info cfeaqext, showch

alter replicat CFEAQREP

ALTER REPLICAT CFEAQREP EXTSEQNO 133

ALTER REPLICAT CFEAQREP EXTRBA 0

start replicat CFEAQREP (starting after the SCN value worked OK…)

ALTER REPLICAT CFEAQREP EXTSEQNO 3

ALTER REPLICAT CFEAQREP EXTRBA 0

ADD EXTRACT cfeaqext integrated tranlog, begin now

register extract cfeaqext database

ADD EXTTRAIL /backup/oracle/OGG/dirdat/cfedwh/AQ  EXTRACT cfeaqext  MEGABYTES 250

start extract cfeaqext

info cfeaqext, showch

ALTER EXTRACT cfeaqext  SCN 529512856

ALTER EXTRACT cfeaqext  ETROLLOVER

info cfeaqext, showch

start extract cfeaqext

add replicat cfeaqrep integrated exttrail /backup/oracle/OGG/dirdat/cfedwh/AQ

start replicat cfeaqrep aftercsn 529512856

info cfeaqrep, detail

info cfeaqrep

THEN

You can find the last applied csn by info replicat nnnn, DETAIL

example
=======
Integrated Replicat low watermark: 483311260
(All source transactions prior to this scn have been applied)
Note down the above low watermark scn: 483311260.
Connect to the source DB.
Convert the SCN to a timestamp:
select scn_to_timestamp (483311260) from dual;
Alter the extract to capture from this timestamp or csn value.
You need to use alter … etrollover to capture into a new trail file.



logdump commands

on January 5, 2017 at 1:52 pm
Posted In: Oracle Golden Gate (Cat)
open /backup/oracle/OGG/dirdat/cfedwh/AQ000000008
pos 0
fileheader detail
n
ghdr on
detail on
detail data
usertoken on
ggstoken on
open dirdat/cfedwh/AQ000000009
filter include filename ACQ.STORES
n
SQL> SELECT SEQUENCE#,NEXT_CHANGE# FROM V$ARCHIVED_LOG;

some ggsci replicat analysis and restart commands

on January 5, 2017 at 1:47 pm
Posted In: Oracle Golden Gate (Cat)

alter replicat cfeaqrep, exttrail /backup/oracle/OGG/dirdat/cfedwh/AQ000000006

stop replicat cfeaqrep

alter replicat cfeaqrep extseqno 11, extrba 0

start replicat cfeaqrep aftercsn 662496036

 

select scn_to_timestamp (529512856) from dual;

info cfeaqext, showch
ALTER EXTRACT cfeaqext  SCN 529512856
ALTER EXTRACT cfeaqext  ETROLLOVER
info cfeaqext, showch
alter replicat CFEAQREP
ALTER REPLICAT CFEAQREP EXTSEQNO 133
ALTER REPLICAT CFEAQREP EXTRBA 0
start replicat CFEAQREP (starting after the SCN value worked OK…)

ALTER REPLICAT CFEAQREP EXTSEQNO 3
ALTER REPLICAT CFEAQREP EXTRBA 0

ADD EXTRACT cfeaqext integrated tranlog, begin now
register extract cfeaqext database
ADD EXTTRAIL /backup/oracle/OGG/dirdat/cfedwh/AQ  EXTRACT cfeaqext  MEGABYTES 250
start extract cfeaqext

info cfeaqext, showch
ALTER EXTRACT cfeaqext  SCN 529512856
ALTER EXTRACT cfeaqext  ETROLLOVER
info cfeaqext, showch
start extract cfeaqext

add replicat cfeaqrep integrated exttrail /backup/oracle/OGG/dirdat/cfedwh/AQ
start replicat cfeaqrep aftercsn 529512856
info cfeaqrep, detail
info cfeaqrep


L2 – Chef Client Bootstrap and Logging

on December 16, 2016 at 10:59 am
Posted In: CHEF Full Tutorial

From a Chef Workstation

$ knife bootstrap  -x root -P password a.localhost.net -N a

 

To improve logging on the client

$ cd /etc/chef

vi client.rb

add the line "log_level :info"

$ chef-client # to run convergence

Reset sshd_config to remove root login


L1 – Install Chef Workstation

on December 16, 2016 at 10:45 am
Posted In: CHEF Full Tutorial

Open the chef server web page and login

https://chef-server/login

Administration Tab

Select the organisation you have created.

Select "Starter Kit" from the left hand side.

Select "Download Starter Kit" and save it somewhere safe; you only really want to do this once for ALL systems and users.

Copy the kit to the directory where you want your chef-repo to be, in this case it is the user home directory.

$ unzip chef-starter.zip

Install the chef workstation software

$ curl -L https://www.opscode.com/chef/install.sh | sudo bash # install chef workstation

Test the connection to the server

$ cd chef-repo

$ knife node list

This may return nothing if we have not added any nodes yet.

If an ssl certificate error is generated we can use the following command to get the Chef server certificate.

$ knife ssl fetch

and then retry the test

$ knife node list

$

 

 


L0 – Install chef server

on December 15, 2016 at 9:14 am
Posted In: CHEF Full Tutorial

CHEF SERVER

uninstall

chef-server-ctl uninstall

opscode-analytics-ctl uninstall

Install

As root

Download the package from https://downloads.chef.io/chef-server/.

rpm -Uvh chef-server-core-12.11.1-1.el7.x86_64.rpm

chef-server-ctl reconfigure

chef-server-ctl user-create username christian surname email.com 'passwd' --filename /home/username/name.pem
chef-server-ctl org-create orgname 'description' --association_user username --filename /home/dirname/org_validator.pem
 
Use Chef management console to manage data bags, attributes, run-lists, roles, environments, and cookbooks from a web user interface.
chef-server-ctl install chef-manage
chef-server-ctl reconfigure
chef-manage-ctl reconfigure
Use Chef push jobs to run jobs, an action or a command to be executed, against nodes independently of a chef-client run.
chef-server-ctl install opscode-push-jobs-server
chef-server-ctl reconfigure
opscode-push-jobs-server-ctl reconfigure
Use Reporting to keep track of what happens during every chef-client run across all of the infrastructure being managed by Chef. Run Reporting with Chef management console to view reports from a web user interface.
chef-server-ctl install opscode-reporting
chef-server-ctl reconfigure
opscode-reporting-ctl reconfigure

chef-server-ctl user-list

If you reset the chef server address etc then remember you need to run chef-server-ctl reconfigure

chef-server-ctl reconfigure
Some useful diagnostic commands
yum update
wget http://public-yum.oracle.com/
cd /etc/yum.repos.d
wget http://public-yum.oracle.com/public-yum-ol7.repo
yum update
shutdown -r now
vi /etc/sudo.conf
vi sudoers
hostname
vi /etc/hostname
ifconfig
netstat -tulpn | grep 443
curl localhost:443
curl https://localhost:443
curl https://localhost:443 --insecure
ps faux | grep nginx
cd /var/log/opscode/nginx/
less access.log
less config
ps faux | grep nginx
grep 443 *
iptables -L
service iptables stop
iptables -L
service iptables status
service firewalld status
service firewalld stop
chkconfig --list iptables
systemctl disable firewalld

L24 – Create cookbook versions for use in the environments

on December 2, 2016 at 2:17 pm
Posted In: CHEF Full Tutorial

To create a second version of the apache cookbook

$ cd chef-repo/cookbooks/apache/attributes

$ vi default.rb AND ADD THE LINE

default["author"]["name"] = "PaulH"

$cd ../recipes

$ vi default.rb and ADD the author attribute

Change the index.html creation to:

template "/var/www/#{sitename}/index.html" do
source "index.html.erb"
mode "0644"
variables ({
:site_title=>data["site_title"],
:tobewritten=>"To Be Written",
:author_name=>node["author"]["name"]
})
notifies :restart, "service[httpd]"
end

$ cd ../templates/default

$ vi index.html.erb

add a heading line e.g.

H2 H2

$ cd ../../
$ vi metadata.rb
Change
version          '0.1.0'
TO
version          '0.1.5'
$ knife cookbook upload apache
Using the website we can see that the current apache version is 0.1.5.
So let's make DEV use this version and we'll set live to use 0.1.0.
BUT
The author name won't be changed in DEV, because we set up an override_attribute in the DEV environment file which overrides any values set in the particular cookbooks used.
If we remove the override_attribute then the new author_name will be displayed.

L23 – Create an Environment

on December 2, 2016 at 2:13 pm
Posted In: CHEF Full Tutorial
$ cd chef-repo
We will use a Ruby file rather than a JSON file for this example.
To get current recipe versions we can look at, for example, chef-repo/cookbooks/apache/metadata.rb, and we will see a version number defined.
$ mkdir environments
$ cd environments
$ vi devenv1.rb
name "devenv1"
description "A dev environment"
cookbook "apache", "= 0.1.2"
override_attributes({
"author" => {
"name" => "a n author"
}
})

i.e.
relies on apache cookbook v 0.1.2, whereas production is using 0.1.0.
The attribute in an attribute file would look like: default["author"]["name"] = "A different author"
So we would define all specific DEV attributes in this file, and they would override the same attributes defined in the attributes files of the recipe.

$ knife environment list -w
_default: …..
$ knife environment from file devenv1.rb

On Node

$ cd /etc/chef
$ vi client.rb and ADD
environment "devenv1"
$ chef-client   (fails because the pinned apache cookbook version does not yet exist)
 
 

 

 

 


L22 – Chef Environments

on December 1, 2016 at 5:21 pm
Posted In: CHEF Full Tutorial

Map workflow for chef.

Different cookbook versions for different environments,

e.g. development and production

Environment information can be stored in JSON or .rb files.

Located in chef-repo/environments

example for a dev

name "dev1"
description "Dev Test 1"
cookbook "security","=0.1.0"
cookbook "motd","=0.2.0"
cookbook "apache","=0.3.0"
override_attributes({
"author" => {
"name" => true
}
})
Override precedence is greater than default, force_default and normal.
Modify the client.rb with an environment setting.
Knife-flip can set the environment for a node
We can assign it on the node.
We will:
Configure DEV and PROD environments
Modify cookbooks with versions
Set new version to dev and old to prod
Change client.rb to look at the correct environment

L21 – Recipe to create user accounts from data bags

on December 1, 2016 at 5:19 pm
Posted In: CHEF Full Tutorial
$ cd chef-repo/cookbooks
$ knife cookbook create addusers
$ cd addusers
$ cd recipes
# remember each data_bag item must have a unique id attribute
$ vi default.rb
# Search the users data_bag for all users
search(:users, "*:*").each do |data|
user data["id"] do
comment data["comment"]
uid data["uid"]
gid data["gid"]
home data["home"]
shell data["shell"]
end
end

$ knife cookbook upload addusers

$ knife node run_list add "recipe[addusers]"
cc:
run_list:
recipe[secdel]
recipe[apache]
recipe[addusers]
CLIENT
$ chef-client
We can now see that the phh user has been added to the system.

Groups

$ vi groups.rb
# Search the groups data_bag for all groups
search(:groups, "*:*").each do |data|
group data["id"] do
gid data["gid"]
members data["members"]
# note: home and shell are user attributes, so they do not apply to groups
end
end
 
$ vi default.rb
ADD at the end:
include_recipe "addusers::groups"
$ knife cookbook upload addusers

Run the chef-client on the client again and we will see the group has been added.
 
NOW ADD the default user USER to the group
cd chef-repo/data_bags/groups
vi groups.json
change
"members": ["phh"]
TO
"members": ["phh", "USER"]

$ knife data_bag from file groups groups.json

NOW chef-client will amend the users WITHOUT having to change any recipes.
 
 

L20 – Data Bags

on November 22, 2016 at 5:58 pm
Posted In: CHEF Full Tutorial

A DATABAG is a GLOBAL variable stored in JSON (JavaScript Object Notation).

Type of data, e.g.:

User Account information

Sudo users

API / Database schema Credentials

etc

Can be encrypted
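For the encrypted case, a minimal sketch (the secret file path and the bag/item names here are hypothetical, not from the original post):

$ openssl rand -base64 512 > ~/.chef/encrypted_data_bag_secret     # generate a shared secret
$ knife data bag create credentials db_passwords --secret-file ~/.chef/encrypted_data_bag_secret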

e.g. users on a system, sudo users, API/DB Credentials etc
{
"id":"PHH",
"comment":"Another User",
"uid":"1000",
"home":"/home/oracle",
"shell":"/bin/bash"
}
Set up data_bags directories for our users and our groups data_bags
$ cd chef-repo
$ mkdir data_bags
$ cd data_bags

$ mkdir {users,groups}

Set up the wheel sudoers group

$ cd ../groups
$ vi sudoers.json
{
"id":"wheel",
"gid":"1000",
"members":["phh"]
}
 
Set up the users data_bag for user phh
$ cd users
$ vi phh.json
{
"id":"phh",
"comment":"Another User",
"uid":"10",
"gid":"1000",
"home":"/home/oracle",
"shell":"/bin/bash",
"password":"mypwd"
}
 
Create the data_bag items
$ knife data bag create users
Created data_bag[users]
 
$ knife data bag create groups
Created data_bag[groups]
Go to the WEB and look at the data_bags
-> Policy -> databags
Add the data_bag items
$cd ..
$ knife data_bag from file users phh.json
$ knife data_bag from file groups sudoers.json

Search the data bags

$ knife search users "id:phh"
1 item found
chef_type: data_bag_item
:
etc
 
$ knife search users "id:phh" -a home
1 item found
chef_type: data_bag_item
:
home:   /home/admin
:
 
etc
 
SAME as normal search
 

L19 – Search with knife

on November 17, 2016 at 2:04 pm
Posted In: CHEF Full Tutorial

Use

$ ohai

on the client to see the state.

OR use the chef server web interface.

knife search node key:pattern -a attribute

 

SHOW

====

$ knife node show cc -a apache

cc:
apache:
package: httpd
sites:
vchefh1h.com:
domain:     vchefh1h.com
port:       80
site_title: VCHEFH1H
vchefh2h.com:
domain:     vchefh2h.com
port:       80
site_title: VCHEFH2H
 
$ knife node show cc -a apache.sites
cc:
apache.sites:
vchefh1h.com:
domain:     vchefh1h.com
port:       80
site_title: VCHEFH1H
vchefh2h.com:
domain:     vchefh2h.com
port:       80
site_title: VCHEFH2H
[hallapa@cw chef-repo]$
 
$ knife node show -a apache.sites.name1
apache.sites.name1:
domain:     name1.phh.com
port:       80
site_title: node1site

$ knife node show -a apache.sites.name1.domain

apache.sites.name1.domain:     name1.phh.com

$ knife node show cc -a memory
cc:
memory:
active:           873276kB
anon_pages:       910920kB
bounce:           0kB
buffers:          20kB
cached:           586884kB
commit_limit:     2989000kB
committed_as:     4242948kB
dirty:            8kB
free:             104836kB
hugepage_size:    2048kB
hugepages:
free:     0
reserved: 0
surplus:  0
total:    0
inactive:         633960kB
mapped:           125484kB
nfs_unstable:     0kB
page_tables:      31924kB
slab:             81084kB
slab_reclaimable: 37532kB
slab_unreclaim:   43552kB
swap:
cached: 10916kB
free:   1863680kB
total:  2097148kB
total:            1783708kB
vmalloc_chunk:    34359697400kB
vmalloc_total:    34359738367kB
vmalloc_used:     32728kB
writeback:        0kB

$ knife node show cc -a memory.active
cc:
memory.active: 873276kB

$ knife node show cc -a memory.swap
cc:
memory.swap:
cached: 10916kB
free:   1863680kB
total:  2097148kB
SEARCH

======

$ knife search node "os:linux"

1 items found

Node Name:   cc

Environment: _default

FQDN:        cc

IP:          10.32.9.34

Run List:    recipe[secdel], recipe[apache]

Roles:

Recipes:     secdel, secdel::default, apache, apache::default, php::default

Platform:    oracle 7.3

Tags:

$ knife search node "os:linux" -a recipes
1 items found
cc:
recipes:
secdel
secdel::default
apache
apache::default
php::default

$ knife search node "os:linux" -a memory.swap.free

1 items found

cc:

memory.swap.free: 1863680kB

$ knife search node "os:linux" -a "platform:*" -a memory.swap.free
1 items found
cc:
memory.swap.free: 1863680kB
platform:*:
$ knife search node "os:GNU/Linux" -a "kernel.release"
1 items found
cc:
kernel.release: 4.1.12-61.1.22.el7uek.x86_64
$ knife search node "os:GNU*" -a "kernel.release"
1 items found
cc:
kernel.release: 4.1.12-61.1.22.el7uek.x86_64
$ knife search node "kernel_release:4*"
1 items found
Node Name:   cc
Environment: _default
FQDN:        cc
IP:          10.32.9.34
Run List:    recipe[secdel], recipe[apache]
Roles:
Recipes:     secdel, secdel::default, apache, apache::default, php::default
Platform:    oracle 7.3
Tags:
$ knife search node "kernel_release:4*" -a kernel.release
1 items found
cc:
kernel.release: 4.1.12-61.1.22.el7uek.x86_64
$ knife search node "os:G* AND platform:oracle" -a kernel.release
1 items found
cc:
kernel.release: 4.1.12-61.1.22.el7uek.x86_64
$ knife search node "os:G* AND platform:oracle" -a platform_family
1 items found
cc:
platform_family: rhel
e.g.

knife search node "platform:ubuntu" -a memory.swap.free

knife search node "platform:*" -a memory.swap.free

knife search node "kernel_release:3.2*"

knife search node "kernel_release:3.2*" -a kernel.release

knife search node "kernel_release:*" -a kernel.release

knife search node "os:linux AND platform:ubuntu" -a kernel

knife search node "os:linux AND platform_family:debian" -a platform

knife search node "os:linux AND platform_family:debian" -a platform.family   (searching for a nested value)

knife search node "os:linux AND platform_family:debian" -a platform_family

 


L18 – Adding chef-repo to Git

on November 17, 2016 at 1:47 pm
Posted In: CHEF Full Tutorial

Create a repository on Git

Not using ssl, just username/password

$ cd chef-repo

 

$ git init

$ git add .

$ git commit -m “First chef repo”

$ git config --global user.name "My Name"

$ git config --global user.email my.email

$ git remote add origin https://github.com/mygitname/MyRepositoryName.git

$ git push -u origin master

 

We can now see all our chef-repo files in git.

 

 

 


L17 – Adding Platform Support

on November 17, 2016 at 1:36 pm
Posted In: CHEF Full Tutorial

Two clients running different OSes: Ubuntu and CentOS

Enable root login on all systems for bootstrap

Ubuntu:

vi /etc/ssh/sshd_config

enable root login

service ssh restart

$ knife bootstrap a.b.com -x root -P -N

$ knife node run_list add nodename “recipe[apache]”

$ knife node run_list add nodename -b “recipe[apache]” “recipe[security]”

Now we need to modify the recipes to work for Ubuntu and CentOS.

vi cookbooks/apache/attributes/default.rb

default["apache"]["sites"]["nodename1"] = { "site_title" => "SITE ONE", "port" => 80, "domain" => "nodename1.domain.com" }
default["apache"]["sites"]["nodename2"] = { "site_title" => "SITE TWO", "port" => 80, "domain" => "nodename2.domain.com" }
default["apache"]["sites"]["nodename3"] = { "site_title" => "SITE THREE", "port" => 80, "domain" => "nodename3.domain.com" }
case node["platform"]
when "centos"
default["apache"]["package"] = "httpd"
when "ubuntu"
default["apache"]["package"] = "apache2"
end

vi cookbooks/apache/recipes/default.rb

 

#
# Cookbook Name:: apache
# Recipe:: default
#
# Copyright 2016, YOUR_COMPANY_NAME
#
# All rights reserved - Do Not Redistribute
#
if node["platform"] == "ubuntu"
execute "apt-get update -y" do
end
end
package "apache2" do
package_name node["apache"]["package"]
end
node["apache"]["sites"].each do |sitename, data|
document_root = "/var/www/#{sitename}"
directory document_root do
mode "0755"
recursive true
end
if node["platform"] == "ubuntu"
template_location = "/etc/apache2/sites-enabled/#{sitename}.conf"
elsif node["platform"] == "centos"
template_location = "/etc/httpd/conf.d/#{sitename}.conf"
elsif node["platform"] == "oracle"
template_location = "/etc/httpd/conf.d/#{sitename}.conf"
end
template template_location do
source "vhost.erb"
mode "0644"
variables ({
:document_root=>document_root,
:port=>data["port"],
:domain=>data["domain"]
})
notifies :restart, "service[httpd]"
end
template "/var/www/#{sitename}/index.html" do
source "index.html.erb"
mode "0644"
variables ({
:site_title=>data["site_title"],
:tobewritten=>"To Be Written"
})
notifies :restart, "service[httpd]"
end
end
execute "rm /etc/httpd/conf.d/welcome.conf" do
only_if do
File.exist?("/etc/httpd/conf.d/welcome.conf")
end
notifies :restart, "service[httpd]"
end
execute "rm /etc/httpd/conf.d/README" do
only_if do
File.exist?("/etc/httpd/conf.d/README")
end
notifies :restart, "service[httpd]"
end
service "httpd" do
service_name node["apache"]["package"]
action [:enable, :start]
end
include_recipe "php::default"

Cr Lf in vi

on November 10, 2016 at 4:23 pm
Posted In: Unix notes (Cat)

To replace CRLF in vi

 

:set magic

:set nolist

Replace ",)" with ")"

:%s/,\n)/)/g

Can also enter a literal ^M with Ctrl-V Ctrl-M.

 


L17 – Add platform support to Cookbooks

on November 10, 2016 at 3:38 pm
Posted In: Uncategorized

Add an ubuntu node to chef

$ knife bootstrap -x root -P PWD -N

$ knife node run_list add "recipe[apache]"

$ knife node run_list add <unique node name> -b "recipe[apache]" "recipe[security]"

Modify Apache recipe for different OS

$ cd cookbooks/apache/attributes

$ vi default.rb

default["apache"]["sites"]["nodename1"] = { "site_title" => "SITE ONE", "port" => 80, "domain" => "nodename1.domain.com" }
default["apache"]["sites"]["nodename2"] = { "site_title" => "SITE TWO", "port" => 80, "domain" => "nodename2.domain.com" }
default["apache"]["sites"]["anothernode"] = { "site_title" => "ANOTHERNODE", "port" => 80, "domain" => "anothernode.domain.com" }
case node["platform"]
when "centos"
default["apache"]["package"] = "httpd"
when "ubuntu"
default["apache"]["package"] = "apache2"
end

 

$ cd cookbooks/apache/recipes

$ vi default.rb

package "apache2" do
package_name node["apache"]["package"]
end
node["apache"]["sites"].each do |sitename, data|
document_root = "/content/sites/#{sitename}"
directory document_root do
mode "0755"
recursive true
end
if node["platform"] == "ubuntu"
template_location = "/etc/apache2/sites-enabled/#{sitename}.conf"
elsif node["platform"] == "centos"
template_location = "/etc/httpd/conf.d/#{sitename}.conf"
end
template template_location do
source "vhost.erb"
mode "0644"
variables ({
:document_root => document_root,
:port => data["port"],
:domain => data["domain"]
})
notifies :restart, "service[httpd]"
end
template "/content/sites/#{sitename}/index.html" do
source "index.html.erb"
mode "0644"
variables ({
:site_title => data["site_title"],
:tobewritten => "To Be Written"
})
notifies :restart, "service[httpd]"
end
end
execute "rm /etc/httpd/conf.d/welcome.conf" do
only_if do
File.exist?("/etc/httpd/conf.d/welcome.conf")
end
notifies :restart, "service[httpd]"
end
execute "rm /etc/httpd/conf.d/README" do
only_if do
File.exist?("/etc/httpd/conf.d/README")
end
notifies :restart, "service[httpd]"
end
service "httpd" do
service_name node["apache"]["package"]
action [:enable, :start]
end

L16 – Execute commands on the node

on November 10, 2016 at 3:10 pm
Posted In: CHEF Full Tutorial

Execute linux commands on a chef client.

There is a default file created as part of the apache install; let's delete it!

/etc/httpd/conf.d/welcome.conf

$ vi ~/chef-repo/cookbooks/apache/recipes/default.rb

AND ADD

execute "rm /etc/httpd/conf.d/welcome.conf" do
only_if do
File.exist?("/etc/httpd/conf.d/welcome.conf")
end
notifies :restart, "service[httpd]"
end

execute "rm /etc/httpd/conf.d/README" do
only_if do
File.exist?("/etc/httpd/conf.d/README")
end
notifies :restart, "service[httpd]"
end


$ knife cookbook upload apache

etc




L15 – Copy config files to the Node

on November 10, 2016 at 3:00 pm
Posted In: CHEF Full Tutorial

We will make a change to the php.ini file and ensure chef updates all nodes the next time a configuration run is performed.

Copy /etc/php.ini from a node to ~/cookbooks/php/files/default

Then make a change, e.g. add a line to the php.ini file

[PHP]

; php.ini managed by chef

etc

 

$ cd ~/cookbooks/php/recipes/

$ vi default.rb

package "php" do
action :install
end

cookbook_file "/etc/php.ini" do
source "php.ini"
mode "0644"
end

 

$ knife cookbook upload php

NODE

====

$ chef-client

Old file is saved as /var/chef/backup/etc/php.ini.chef-date.nnnn


Replace temp tablespace tempfile

on November 10, 2016 at 1:56 pm
Posted In: 12c General (Cat)

Process to replace a smallfile temporary tablespace with a bigfile temp tablespace in ASM.

 

create temporary tablespace temp_temp tempfile size 31g;

alter database default temporary tablespace temp_temp;

SELECT a.tablespace, a.segfile#, a.segblk#, a.blocks, b.sid, b.serial#, b.username, b.osuser, b.status
FROM v$session b,v$sort_usage a WHERE b.saddr = a.session_addr;

 

alter system kill session 'SID_NUMBER, SERIAL#NUMBER';

drop tablespace temp including contents and datafiles;

create bigfile temporary tablespace temp tempfile size 220G autoextend on maxsize 240g;

alter database default temporary tablespace temp;

drop tablespace temp_temp including contents and datafiles;
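A quick sanity check after the swap (a sketch using the standard dictionary views):

select tablespace_name, round(bytes/1024/1024/1024) as gb, autoextensible from dba_temp_files;

select property_value from database_properties where property_name = 'DEFAULT_TEMP_TABLESPACE';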

 

 


Moving MGMTDB

on November 8, 2016 at 11:52 am
Posted In: 12c General (Cat)

Grid Infrastructure Management Repository Database (MGMTDB)

Default location is same storage as OCR and Voting files.

MOS note 1589394.1 for moving MGMTDB, and MOS note 2065175.1 for the new MDBUtil.

BACKUP THE DATA

$ GRID_HOME/bin/oclumon dumpnodeview [[-allnodes] | [-n node1 node2 noden] [-last "duration"] | [-s "time_stamp" -e "time_stamp"] [-i interval] [-v]] [-h]

Stop the cluster health monitor – on EACH node
$ GRID_HOME/bin/crsctl stop res ora.crf -init
$ GRID_HOME/bin/crsctl modify res ora.crf -attr ENABLED=0 -init

$ srvctl status mgmtdb
$ GRID_HOME/bin/dbca -silent -deleteDatabase -sourceDB -MGMTDB

$ srvctl status mgmtdb

$GRID_HOME/bin/dbca -silent -createDatabase -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType ASM -diskGroupName {+NEW_DG} -datafileJarLocation $GRID_HOME/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -oui_internal

OR

$GRID_HOME/bin/dbca -silent -createDatabase -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType FS -datafileDestination {NEW_FS} -datafileJarLocation $GRID_HOME/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -oui_internal

$ srvctl status mgmtdb

ONLY THE CONTAINER DB IS OPEN, WE NOW NEED TO CREATE THE PLUGGABLE DATABASE.

$ olsnodes -c
to get the cluster name

Create the management repository pluggable database
$ GRID_HOME/bin/dbca -silent -createPluggableDatabase -sourceDB -MGMTDB -pdbName {pluggable_db_name} -createPDBFrom RMANBACKUP -PDBBackUpfile $GRID_HOME/assistants/dbca/templates/mgmtseed_pdb.dfb -PDBMetadataFile $GRID_HOME/assistants/dbca/templates/mgmtseed_pdb.xml -createAsClone true -internalSkipGIHomeCheck

SQL> select dbid,name,cdb from v$database;

$ GRID_HOME/bin/mgmtca
SQL> alter session set container=pluggable_db_name;
SQL> exec dbms_service.create_service('pluggable_db_name','pluggable_db_name');
SQL> exec dbms_service.start_service('pluggable_db_name');

$ GRID_HOME/bin/crsctl start res ora.crf -init
$ GRID_HOME/bin/oclumon manage -get reppath

 


Alter Pluggable seed database

on November 8, 2016 at 9:51 am
Posted In: 12c General (Cat)

MODIFY THE 12c SEED DATABASE
============================
This is an initial empty setup
SQL> select name, cdb from v$database;
ORCPDB1 YES

SQL> show pdbs
CON_ID CON_NAME   OPEN MODE  RESTRICTED
——- ———- ———- ———-
2 PDB$SEED   READ ONLY  NO

SQL> alter session set "_oracle_script"=true;

SQL> shutdown immediate

SQL> startup

SQL> select name,open_mode from v$pdbs;
NAME            OPEN_MODE
————— ———-
PDB$SEED        READ WRITE

SQL> show con_name
PDB$SEED

SQL> create tablespace MYTBS1 datafile '/oradata/orpcdb1/pdbseed/mytbs1.dbf' size 200M;

SQL> alter database default tablespace MYTBS1;

SQL> create user devops identified by temp123 default tablespace MYTBS1 password expire;

SQL> shutdown immediate

SQL> startup open read only

SQL> select name,open_mode from v$pdbs;
NAME            OPEN_MODE
————— ———-
PDB$SEED        READ ONLY

SQL> alter session set "_oracle_script"=true;

SQL> show con_name

SQL> create pluggable database MYpdb1 admin user measadmin identified by measadmin file_name_convert=('/oradata/orpcdb1/pdbseed/','/oradata/orpcdb1/mypdb1/');

SQL> show pdbs
CON_ID CON_NAME                       OPEN MODE  RESTRICTED
———- —————————— ———- ———-
2 PDB$SEED                       READ ONLY  NO
3 MYPDB1                         MOUNTED

SQL> alter pluggable database Mypdb1 open;

The new database now already has the user and tablespace.


L14 – Recipe Includes and Dependencies

on November 8, 2016 at 8:52 am
Posted In: CHEF Full Tutorial

$ knife cookbook create php

$ cd cookbooks/php/recipes

$ vi default.rb

# found by using "yum search php"
package "php5" do
action :install
end
$ cd ~/cookbooks/apache
$ vi metadata.rb   # and add
depends   "php"
$ knife cookbook upload php
$ cd recipes
$ vi default.rb
# add the following at the end of the file
include_recipe "php::default"
$ knife cookbook upload apache
NODE
====
chef-client to run

This ensures php is installed before any Apache install.

 


Chef Attribute precedence Diagram

on November 8, 2016 at 8:11 am
Posted In: Chef (Cat)

(Image: overview_chef_attributes_table.png, the Chef attribute precedence table)


L13 – Add HTML templates dynamically

on November 3, 2016 at 9:19 am
Posted In: CHEF Full Tutorial

WORKSTATION

$ vi cookbooks/apache/attributes/default.rb

default["apache"]["sites"]["vchefh1h.com"] = { "site_title" => "VCHEFH1H", "port" => 80, "domain" => "vchefh1h.com" }

default["apache"]["sites"]["vchefh2h.com"] = { "site_title" => "VCHEFH2H", "port" => 80, "domain" => "vchefh2h.com" }

 

$ vi cookbooks/apache/templates/default/index.html.erb

 

$ vi cookbooks/apache/recipes/default.rb
#
# Cookbook Name:: apache
# Recipe:: default
#
# Copyright 2016, YOUR_COMPANY_NAME
#
# All rights reserved - Do Not Redistribute
#
package "httpd" do
action :install
end
node["apache"]["sites"].each do |sitename, data|
document_root = "/var/www/#{sitename}"
directory document_root do
mode "0755"
recursive true
end
template "/etc/httpd/conf.d/#{sitename}.conf" do
source "vhost.erb"
mode "0644"
variables ({
:document_root=>document_root,
:port=>data["port"],
:domain=>data["domain"]
})
notifies :restart, "service[httpd]"
end
template "/var/www/#{sitename}/index.html" do
source "index.html.erb"
mode "0644"
variables ({
:site_title=>data["site_title"],
:tobewritten=>"To Be Written"
})
notifies :restart, "service[httpd]"
end
end
service "httpd" do
action [:enable, :start]
end
$ knife cookbook upload apache

CLIENT

Run convergence

$ chef-client

Add the addresses to the hosts file if not using dns

192.168.1.56 cc cc.localnet

192.168.1.56 vchefh1h.com vchefh1h

192.168.1.56 vchefh2h.com vchefh2h

Check the web pages

http://cc.localnet/

http://vchefh1h.com/

http://vchefh2h.com/


L12 – Adding Attributes and Templates

on November 3, 2016 at 8:59 am
Posted In: CHEF Full Tutorial

WORKSTATION

$ cd cookbooks/apache/

$ cd attributes

$ vi cookbooks/apache/attributes/default.rb

default["apache"]["sites"]["vchefh1h.com"] = { "port" => 80, "domain" => "vchefh1h.com" }
default["apache"]["sites"]["vchefh2h.com"] = { "port" => 80, "domain" => "vchefh2h.com" }

 

$ vi cookbooks/apache/recipes/default.rb

#
# Cookbook Name:: apache
# Recipe:: default
#
# Copyright 2016, YOUR_COMPANY_NAME
#
# All rights reserved - Do Not Redistribute
#
package "httpd" do
action :install
end
node["apache"]["sites"].each do |sitename, data|
document_root = "/var/www/#{sitename}"
directory document_root do
mode "0755"
recursive true
end
template "/etc/httpd/conf.d/#{sitename}.conf" do
source "vhost.erb"
mode "0644"
variables ({
:document_root=>document_root,
:port=>data["port"],
:domain=>data["domain"]
})
notifies :restart, "service[httpd]"
end
end
service "httpd" do
action [:enable, :start]
end

$ vi cookbooks/apache/templates/default/vhost.erb

ServerName

DocumentRoot

ServerAlias

$ knife cookbook upload apache

CLIENT

chef-client

$ cd /var/www/vchefh1h.com/

$ vi index.html

VCHEFH1H

VCHEFH1H

To Be Written

$ cd /var/www/vchefh2h.com/

$ vi index.html

VCHEFH2H

VCHEFH2H

To Be Written

 

 


L11 – Start the apache Recipe

on November 3, 2016 at 8:33 am
Posted In: CHEF Full Tutorial

WORKSTATION

$ cd chef-repo

$ knife cookbook create apache

$ cd cookbooks/apache/recipes

$ vi default.rb

#
# Cookbook Name:: apache
# Recipe:: default
#
# Copyright 2016, YOUR_COMPANY_NAME
#
# All rights reserved - Do Not Redistribute
#
package "httpd" do
action :install
end
service "httpd" do
action [:enable, :start]
end

$ knife cookbook upload apache

$ knife node run_list add "recipe[apache]"

CLIENT

$ chef-client

WORKSTATION

$ knife cookbook create secdel

$ cd cookbooks/secdel/recipes

$ vi default.rb

file "/etc/chef/validation.pem" do
action :delete
end

$ knife cookbook upload secdel

$ knife node run_list add -b "recipe[apache]" "recipe[secdel]"

CLIENT

$ chef-client

 


L10 – Getting started

on November 3, 2016 at 8:21 am
Posted In: CHEF Full Tutorial

GETTING SET UP

ON CLIENT

Enable ssh

$ vi /etc/ssh/sshd_config

Uncomment "#PermitRootLogin yes"

$ service sshd restart

ON CHEF SERVER

Download the starter kit from web page – ONLY DO THIS ONCE

ON WORKSTATION

Get chef-starter.zip from wherever we downloaded it to.

then

$ curl -L https://www.opscode.com/chef/install.sh | sudo bash # install chef workstation

$ knife bootstrap  -x root -P -N

# install chef agent on the client

ON CLIENT

$ cd /etc/chef

vi client.rb

add the line "log_level :info"

$ chef-client # to run convergence

Reset sshd_config to remove root login

 


UNDO usage

on October 25, 2016 at 4:52 pm
Posted In: 12c Performance (Cat)

This shows you the amount of undo size used by each session that is currently active.

select s.sid,
       s.username,
       sum(ss.value) / 1024 / 1024 as undo_size_mb
from   v$sesstat ss
       join v$session s on s.sid = ss.sid
       join v$statname stat on stat.statistic# = ss.statistic#
where  stat.name = 'undo change vector size'
and    s.type != 'BACKGROUND'
and    s.username is not null
group by s.sid, s.username;

TOTAL TEMP USAGE

select b.Total_MB,
       b.Total_MB - round(a.used_blocks*8/1024) Current_Free_MB,
       round(used_blocks*8/1024)                Current_Used_MB,
      round(max_used_blocks*8/1024)             Max_used_MB
from v$sort_segment a,
 (select round(sum(bytes)/1024/1024) Total_MB from dba_temp_files ) b;
  TOTAL_MB CURRENT_FREE_MB CURRENT_USED_MB MAX_USED_MB
---------- --------------- --------------- -----------
    188416           23036          165380      165381

session temp usage

col hash_value for a40
col tablespace for a10
col username for a15
set linesize 132 pagesize 1000
SELECT s.sid, s.username, u.tablespace, s.sql_hash_value||'/'||u.sqlhash hash_value, u.segtype, u.contents, u.blocks
FROM v$session s, v$tempseg_usage u
WHERE s.saddr=u.session_addr
order by u.blocks;
       SID USERNAME        TABLESPACE HASH_VALUE                               SEGTYPE   CONTENTS      BLOCKS
---------- --------------- ---------- ---------------------------------------- --------- --------- ----------
        23 SYS             TEMP       3732391352/3732391352                    DATA      TEMPORARY        128
        23 SYS             TEMP       3732391352/3732391352                    INDEX     TEMPORARY        128
:
       357 SYS             TEMP       1357698168/693605571                     SORT      TEMPORARY     861952
       357 SYS             TEMP       1357698168/693605571                     SORT      TEMPORARY     872704
       357 SYS             TEMP       1357698168/693605571                     SORT      TEMPORARY     931456
       357 SYS             TEMP       1357698168/693605571                     SORT      TEMPORARY     945280
select hash_value, sorts, rows_processed/executions
from v$sql
where hash_value in (select hash_value from v$open_cursor where sid=357)
and sorts > 0
and PARSING_SCHEMA_NAME='SYS'
order by rows_processed/executions;


Monitoring Import

on October 25, 2016 at 10:52 am
Posted In: 12c General (Cat)

IMPDP

select * from dba_datapump_jobs;
select * from dba_datapump_sessions;
select username,opname,target_desc,sofar,totalwork,message from v$session_longops;
select sid, serial#, sofar, totalwork, dp.owner_name, dp.state, dp.job_mode
from gv$session_longops sl, gv$datapump_job dp
where sl.opname = dp.job_name and sofar != totalwork;

select    sid,    serial#
from    v$session s,    dba_datapump_sessions d
where     s.saddr = d.saddr;

select    sid,    serial#,    sofar,    totalwork
from    v$session_longops;

select x.job_name,b.state,b.job_mode,b.degree
, x.owner_name,z.sql_text, p.message
, p.totalwork, p.sofar
, round((p.sofar/p.totalwork)*100,2) done
, p.time_remaining
from dba_datapump_jobs b 
left join dba_datapump_sessions x on (x.job_name = b.job_name)
left join v$session y on (y.saddr = x.saddr)
left join v$sql z on (y.sql_id = z.sql_id)
left join v$session_longops p ON (p.sql_id = y.sql_id)
WHERE y.module like 'Data Pump %';
/* AND p.time_remaining > 0; */


Alias in exec

on October 19, 2016 at 9:04 am
Posted In: Uncategorized

shopt -s expand_aliases

# run the alias command
AL
exec >> $LOG 2>&1
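A fuller sketch of the whole pattern (AL and LOG are stand-ins for whatever alias and log file the real script uses):

#!/bin/bash
shopt -s expand_aliases            # aliases are not expanded in non-interactive shells by default
source ~/.bash_aliases             # load the alias definitions, e.g. alias AL='cd /u01/scripts'
LOG=/tmp/myscript.log              # hypothetical log file
AL                                 # run the alias
exec >> $LOG 2>&1                  # from here on, append stdout and stderr to the log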

Clean up thousands of audit files

on October 18, 2016 at 3:53 pm
Posted In: Unix notes (Cat)

#!/bin/bash

#find . -name '*.aud' > /tmp/temp.txt
find . -type f -mtime +7 > /tmp/temp.txt
for i in $(cat /tmp/temp.txt)
do
# skip any file that is still open
if ! [[ $(lsof | grep /u01/app/oracle/admin/accounts/adump/$i) ]]
then
# only move audit files (match on the filename, not the file contents)
if [[ $i == *.aud ]]
then
mv $i backuplogs
fi
fi
done

#tar -czf backuplogs/audits_${Dstamp} backuplogs/*.aud
#rm backuplogs/*.aud


Identify RAC Network

on October 18, 2016 at 2:35 pm
Posted In: 12c General (Cat)

Cluster name = olsnodes -c

VIP Names = olsnodes -n -i

srvctl config nodeapps -a

crsctl stat res -p

PRIVATE Interconnect = olsnodes -n -i -l -p

OCR info

oifcfg getif

oifcfg iflist -p -n

netstat -i | -n | -r | -rnv

ifconfig -a

dig +noall +answer

srvctl config scan

srvctl config scan_listener

 

 


alias , xargs

on October 13, 2016 at 12:01 pm
Posted In: Unix notes (Cat)

Alias – from

http://unix.stackexchange.com/questions/1496/why-doesnt-my-bash-script-recognize-aliases

Aliases are not expanded when the shell is not interactive, unless the expand_aliases shell option is set using shopt (see the description of shopt under SHELL BUILTIN COMMANDS below).

So put a

shopt -s expand_aliases

in your script.

Make sure to source your aliases file after setting this in your script.

shopt -s expand_aliases source ~/.bash_aliases

XARGS – from

http://stackoverflow.com/questions/937716/how-do-you-send-the-output-of-ls-to-mv

$ ls | xargs -Ifile mv file file.txt
$ ls
a.txt  b.txt  c.txt  x.txt  y.txt  z.txt

Oracle to SQL Server and infinity issues

on October 6, 2016 at 10:37 am
Posted In: Oracle Golden Gate (Cat)

 

Map and convert 3 columns in the table that may have binary_double Infinity values.

MAP Src_schema.table, TARGET Dst_schema.table, COLMAP ( USEDEFAULTS, ENTRY_AMT = @IF ( @STRFIND( @STRUP(columnA) , 'INF') ,9999999999999, columnA), columnB = @IF ( @STRFIND( @STRUP(columnB) , 'INF') ,9999999999999, columnB), columnC = @IF ( @STRFIND( @STRUP(columnC) , 'INF') ,9999999999999, columnC));



Get bind variables

on October 6, 2016 at 9:43 am
Posted In: 12c General (Cat)

select * from gV$SQL_BIND_CAPTURE where sql_id=;

e.g.

select * from gV$SQL_BIND_CAPTURE where sql_id='080uc49jbt7m9';
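To pull out just the bind names and values, the standard columns of the same view can be listed explicitly, e.g.:

select name, position, datatype_string, value_string, last_captured
from gV$SQL_BIND_CAPTURE
where sql_id = '080uc49jbt7m9'
order by position;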


GG DDL History

on August 25, 2016 at 2:22 pm
Posted In: Oracle Golden Gate (Cat)

 

select METADATA_TEXT from ggs_admin.ggs_ddl_hist where ddltype = 'CREATE';

 

Columns of ggs_admin.ggs_ddl_hist:

SEQNO          NOT NULL NUMBER
OBJECTID       NUMBER
DATAOBJECTID   NUMBER
DDLTYPE        VARCHAR2(130)
OBJECTNAME     VARCHAR2(400)
OBJECTOWNER    VARCHAR2(400)
OBJECTTYPE     VARCHAR2(130)
FRAGMENTNO     NOT NULL NUMBER
OPTIME         NOT NULL CHAR(19)
STARTSCN       NUMBER
METADATA_TEXT  NOT NULL VARCHAR2(4000)
AUDITCOL       VARCHAR2(80)

Use DDLOPTIONS ADDTRANDATA to:

◆ enable Oracle's supplemental logging automatically for new tables created with a CREATE TABLE.
◆ update supplemental logging for tables affected by an ALTER TABLE to add or drop columns.
◆ update supplemental logging for tables that are renamed.
◆ update supplemental logging for tables where unique or primary keys are added or dropped.

The default is: DDLOPTIONS ADDTRANDATA RETRYOP RETRYDELAY 10 MAXRETRIES 10

By default, truncate operations are not captured from the source or replicated to the target.

Use the HANDLECOLLISIONS and NOHANDLECOLLISIONS parameters to control whether or not Replicat tries to resolve duplicate-record and missing-record errors when applying SQL on the target.
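For example, HANDLECOLLISIONS can be switched on in the Replicat parameter file while an initial load catches up (a sketch; the process name, alias and schema are illustrative, not from the original post):

REPLICAT cfeaqrep
USERIDALIAS ggadmindc
HANDLECOLLISIONS
MAP ACQ.*, TARGET ACQ.*;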

The GoldenGate DDL support tables include:

CHCKPNTTABLE
CHCKPNTTABLE_LOX
CHKPTAB
CHKPTAB_LOX
GGS_DDL_COLUMNS
GGS_DDL_HIST
GGS_DDL_HIST_ALT
GGS_DDL_LOG_GROUPS
GGS_DDL_OBJECTS
GGS_DDL_PARTITIONS
GGS_DDL_PRIMARY_KEYS
GGS_DDL_RULES
GGS_DDL_RULES_LOG
GGS_MARKER
GGS_SETUP
GGS_STICK
GGS_TEMP_COLS
GGS_TEMP_UK

select METADATA_TEXT from ggs_admin.ggs_ddl_hist where DDLTYPE = 'CREATE' order by SEQNO

GG Investigations

on August 23, 2016 at 7:52 am
Posted In: Oracle Golden Gate (Cat)

GG LOG FILE

———————–
Filename = ARMACREP.rpt
See the following error:

OGG-06510 Using the following key columns for target table CF_ACC.ACC_PRELOAD_DESTINATION: PAYER_ID, CURRENCY_CODE_ALPHA3.

2016-08-19 10:24:11 ERROR OGG-01296 Error mapping from CF_ACC.ACC_PRELOAD_DESTINATION to CF_ACC.ACC_PRELOAD_DESTINATION.

Last log location read:
FILE: /backup/oracle/OGG/accounts/dirdat/AC000000005
SEQNO: 5
RBA: 13671788
TIMESTAMP: 2016-08-18 13:46:10.993587
EOF: NO
READERR: 0

 

logdump > open /backup/oracle/OGG/accounts/dirdat/AC000000005
logdump>ghdr on
logdump>detail data on
logdump>ggstoken detail on
logdump>pos 13671788
logdump > n

___________________________________________________________________

Hdr-Ind    :     E  (x45)     Partition  :     .  (x0c)

UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)

RecLength  :    50  (x0032)   IO Time    : 2016/08/18 13:46:10.993.587

IOType     :   135  (x87)     OrigNode   :   255  (xff)

TransInd   :     .  (x03)     FormatType :     R  (x52)

SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)

AuditRBA   :      11229 AuditPos   : 3576336

Continued  :     N  (x00)     RecCount   :     1  (x01)


2016/08/18 13:46:10.993.587 GGSUnifiedPKUpdate   Len    50 RBA 13671788

Name: CF_ACC.ACC_PRELOAD_DESTINATION  (TDR Index: 33)

After  Image:                                             Partition 12   G  s

0000 0017 0000 000a 0000 0000 0000 0006 b8f4 0001 | ………………..

0005 ffff 0000 0000 0000 0a00 0000 0000 0000 06b8 | ………………..

ff00 0100 05ff ff00 0000                          | ……….

Before Image          Len    27 (x0000001b)

BeforeColumnLen     23 (x00000017)

Column     0 (x0000), Len    10 (x000a)

0000 0000 0000 0006 b8f4                          | ……….

Column     1 (x0001), Len     5 (x0005)

ffff 0000 00                                      | …..


After Image           Len    23 (x00000017)

Column     0 (x0000), Len    10 (x000a)

0000 0000 0000 0006 b8ff                          | ……….

Column     1 (x0001), Len     5 (x0005)

ffff 0000 00                                      | …..


GGS tokens:

TokenID x52 ‘R’ ORAROWID         Info x00  Length   20

4141 4161 4932 4141 4a41 4143 574c 4841 4141 0001 | AAAaI2AAJAACWLHAAA..

TokenID x4c ‘L’ LOGCSN           Info x00  Length    9

3131 3035 3939 3232 35                            | 110599225

TokenID x36 ‘6’ TRANID           Info x00  Length   10

3231 2e39 2e32 3033 3934                          | 21.9.20394

TokenID x69 ‘i’ ORATHREADID      Info x01  Length    2

0002                                              | ..


Logdump 7 >n

___________________________________________________________________

Hdr-Ind    :     E  (x45)     Partition  :     .  (x0c)

UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)

RecLength  :    88  (x0058)   IO Time    : 2016/08/18 13:46:10.993.637

IOType     :   134  (x86)     OrigNode   :   255  (xff)

TransInd   :     .  (x00)     FormatType :     R  (x52)

SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)

AuditRBA   :      11229       AuditPos   : 4526096

Continued  :     N  (x00)     RecCount   :     1  (x01)


2016/08/18 13:46:10.993.637 GGSUnifiedUpdate     Len    88 RBA 13671951

Name: CF_ACC.ACC_AGGREGATED_TRANSACTIONS  (TDR Index: 1)

After  Image:                                             Partition 12   G  b

0000 002a 0000 000a 0000 0000 0000 042b 692c 0001 | …*………..+i,..

000a 0000 0000 0000 0000 001c 0002 000a 0000 0000 | ………………..

0000 0000 0013 0000 000a 0000 0000 0000 042b 692c | ……………..+i,

0001 000a 0000 0000 0000 0000 001d 0002 000a 0000 | ………………..

0000 0000 0000 0013                               | ……..

Before Image          Len    46 (x0000002e)

BeforeColumnLen     42 (x0000002a)

Column     0 (x0000), Len    10 (x000a)

0000 0000 0000 042b 692c                          | …….+i,

Column     1 (x0001), Len    10 (x000a)

0000 0000 0000 0000 001c                          | ……….

Column     2 (x0002), Len    10 (x000a)

0000 0000 0000 0000 0013                          | ……….


After Image           Len    42 (x0000002a)

Column     0 (x0000), Len    10 (x000a)

0000 0000 0000 042b 692c                          | …….+i,

Column     1 (x0001), Len    10 (x000a)

0000 0000 0000 0000 001d                          | ……….

Column     2 (x0002), Len    10 (x000a)

0000 0000 0000 0000 0013                          | ……….


GGS tokens:

TokenID x52 ‘R’ ORAROWID         Info x00  Length   20

4141 4159 4570 4141 4741 414d 4d75 4f41 4153 0001 | AAAYEpAAGAAMMuOAAS..

TokenID x4c ‘L’ LOGCSN           Info x00  Length    9

3131 3035 3939 3232 37                            | 110599227

TokenID x36 ‘6’ TRANID           Info x00  Length   11

3231 2e31 342e 3230 3339 35                       | 21.14.20395

TokenID x69 ‘i’ ORATHREADID      Info x01  Length    2

0002                                              | ..

logdump > n

 

 

SET MARKUP HTML ON SPOOL ON HEAD "LOGMNR - INFO"
SET ECHO ON
REM Spooling to html file

spool LOGMNR_Info1.html
show parameter comp;
select global_name, sysdate from global_name;
select * from v$version;
select * from v$nls_parameters where parameter in ('NLS_LANGUAGE', 'NLS_TERRITORY', 'NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

ALTER SESSION SET NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS';
set numformat 9999999999999999999999
exec dbms_logmnr.add_logfile('<archive log path for sequence 11229>');
exec dbms_logmnr.start_logmnr(options=>dbms_logmnr.dict_from_online_catalog);
select username, os_username, DATA_OBJ#, rbasqn, RBABLK, rbabyte, RS_ID, row_id, rollback, seg_name, seg_owner, operation, timestamp, sql_redo, scn, cscn from v$logmnr_contents where row_id='AAAaI2AAJAACWLHAAA';
EXECUTE DBMS_LOGMNR.END_LOGMNR;

spool off

SET MARKUP HTML OFF
SET ECHO ON
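To find the archive log path to paste into the add_logfile call above, something like the following should work (thread 2 assumed from the ORATHREADID token in the trail record):

select thread#, sequence#, name
from v$archived_log
where sequence# = 11229
and thread# = 2;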


 Comment 

GoldenGate Node A to Node B to Node C

on August 22, 2016 at 10:37 am
Posted In: Oracle Golden Gate (Cat)

NODE A

NODE A EXTRACT

extract EXTA
SETENV (NLS_LANG = "ENGLISH_UNITED KINGDOM.AL32UTF8")
SETENV (ORACLE_HOME = "/u01/app/oracle/product/12.1.0.2/dba")
SETENV (ORACLE_SID = "dba")
USERIDALIAS ggadmin
TRANLOGOPTIONS DBLOGREADER
DISCARDFILE disc/dba.dsc, append, megabytes 500
EXTTRAIL /backup/oracle/gg/dirdat/AP
LOGALLSUPCOLS
UPDATERECORDFORMAT COMPACT
DDL INCLUDE ALL
DDLOPTIONS ADDTRANDATA
TABLEEXCLUDE *.DBMS*
TABLE PHH.*;
SEQUENCE PHH.*;

NODE A PUMP

extract PMPA
SETENV (NLS_LANG = "ENGLISH_UNITED KINGDOM.AL32UTF8")
SETENV (ORACLE_HOME = "/u01/app/oracle/product/12.1.0.2/dba")
SETENV (ORACLE_SID = "dba")
USERIDALIAS ggadmin
RMTHOST nodeB, MGRPORT 8880
RMTTRAIL /backup/oracle/gg/dirdat/AP
PASSTHRU
TABLE PHH.*;
SEQUENCE PHH.*;

NODE B

NODE B REPLICAT

REPLICAT REPB
SETENV (NLS_LANG = "ENGLISH_UNITED KINGDOM.AL32UTF8")
SETENV (ORACLE_HOME = "/u01/app/oracle/product/12.1.0.2/dbb")
SETENV (ORACLE_SID = "dbb")
ASSUMETARGETDEFS
USERIDALIAS ggadmin
DISCARDFILE /backup/oracle/gg/disc/dbb.dsc, append, megabytes 100
DDL INCLUDE ALL
DDLOPTIONS UPDATEMETADATA
MAPEXCLUDE PHH.TABN
MAP PHH.*, TARGET PHH.*;

NODE B EXTRACT

extract EXTB
SETENV (NLS_LANG = "ENGLISH_UNITED KINGDOM.AL32UTF8")
SETENV (ORACLE_HOME = "/u01/app/oracle/product/12.1.0.2/DBB")
SETENV (ORACLE_SID = "DBB")
USERIDALIAS ggadmin
DISCARDFILE disc/DBB.dsc, append, megabytes 500
EXTTRAIL /backup/oracle/gg/dirdat/BP
LOGALLSUPCOLS
UPDATERECORDFORMAT COMPACT
DDL INCLUDE ALL
DDLOPTIONS GETREPLICATES, GETAPPLOPS, REMOVECOMMENTS BEFORE
TABLEEXCLUDE *.DBMS*
TABLE PHH.*;
SEQUENCE PHH.*;

NODE B PUMP

extract PMPB
SETENV (NLS_LANG = "ENGLISH_UNITED KINGDOM.AL32UTF8")
SETENV (ORACLE_HOME = "/u01/app/oracle/product/12.1.0.2/dbb")
SETENV (ORACLE_SID = "dbb")
USERIDALIAS ggadmin
RmtHost NODEC, MgrPort 8880
RmtTrail /backup/oracle/gg/dirdat/CP
PASSTHRU
TABLE PHH.*;
SEQUENCE PHH.*;

NODEC

NODE C REPLICAT

REPLICAT REPC
SETENV (NLS_LANG = "ENGLISH_UNITED KINGDOM.AL32UTF8")
SETENV (ORACLE_HOME = "/u01/app/oracle/product/12.1.0.2/dbc")
SETENV (ORACLE_SID = "dbc")
ASSUMETARGETDEFS
USERIDALIAS ggadmin
DISCARDFILE /backup/oracle/gg/disc/dbc.dsc, append, megabytes 100
DDL INCLUDE ALL
MAPEXCLUDE PHH.TESTZ
MAP PHH.*, TARGET PHH.*;

GGSCI

NODE A

ADD EXTRACT EXTA integrated tranlog, begin now

register extract EXTA database

ADD EXTTRAIL /backup/oracle/gg/dirdat/AP  EXTRACT EXTA  MEGABYTES 250

start extract EXTA

ADD EXTRACT PMPA EXTTRAILSOURCE /backup/oracle/gg/dirdat/AP

ADD RMTTRAIL /backup/oracle/gg/dirdat/AP EXTRACT PMPA  MEGABYTES 250

start extract PMPA

NODE B

add replicat REPB integrated exttrail /backup/oracle/gg/dirdat/AP

start replicat REPB

ADD EXTRACT EXTB integrated tranlog, begin now
REGISTER EXTRACT EXTB database
ADD EXTTRAIL /backup/oracle/gg/dirdat/BP  EXTRACT EXTB MEGABYTES 250
ADD EXTRACT PMPB EXTTRAILSOURCE /backup/oracle/gg/dirdat/BP
ADD RMTTRAIL /backup/oracle/gg/dirdat/CP EXTRACT PMPB  MEGABYTES 250
START EXTRACT EXTB
START EXTRACT PMPB

NODE C

add replicat REPC integrated exttrail /backup/oracle/gg/dirdat/CP
start replicat REPC
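Once everything is started, a quick sanity check on each node (standard ggsci commands; adjust the group names per node):

ggsci> info all
ggsci> info extract EXTA, detail
ggsci> stats replicat REPB, latest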
 Comment 

X11 forwarding

on August 12, 2016 at 9:26 am
Posted In: Configuration (Cat)

Make sure that you have the following line in your /etc/ssh/sshd_config file (the OpenSSH keyword is X11Forwarding):

 X11Forwarding              yes
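Restart sshd to pick up the change, then test from a client; xclock is just a convenient X program, any will do:

# systemctl restart sshd
$ ssh -X user@server xclock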

 Comment 

Generate Sourcedefs

on August 10, 2016 at 1:18 pm
Posted In: Oracle Golden Gate (Cat)

ggsci > Edit params defgen

defsfile ./dirsql/SOURCE_DEFGEN.def

userid gguser password ******

TABLE schema.*;

 

Enter the parameters listed above in the order shown and invoke DEFGEN from GG_HOME location

$ ./defgen paramfile ./dirprm/defgen.prm

 

Use FTP or SFTP to transfer this SOURCE_DEFGEN.def file to the TARGET server

 

Specify this copied SOURCE_DEFGEN.def file location in REPLICAT param file

e.g. SOURCEDEFS ./dirsql/SOURCE_DEFGEN.def

 Comment 

Oracle Wallets

on August 10, 2016 at 12:17 pm
Posted In: 12c General (Cat)

To create a wallet

mkstore -wrl /tmp/wallet -create
Enter password: mysecret
PKI-01002: Invalid password.     (the password needs at least eight characters with both letters and numbers)
Enter password: mysecret1 (not echoed)
Enter password again: mysecret1 (not echoed)

To add entries to the wallet

mkstore -wrl <wallet_location> -createCredential <db_connect_string> <username> <password>

The db_connect_string must be identical to the connection string that you specify in the URL used in the datasource definition (the part of the string that follows the @). It can be either the short form or the long form of the URL.

For example:

myhost:1521/myservice

or

(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=myhost-scan)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=myservice)))

To list the wallet entries

$ mkstore -wrl /u01/app/oracle/wallet -listCredential
Oracle Secret Store Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.
Enter wallet password:
List credential (index: connect_string username)
2: <connect_string> <username>
1: <connect_string> <username>
$

To use the wallet

Add the following to the sqlnet.ora

WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA = (DIRECTORY = /.../wallet)))

MKSTORE COMMAND

mkstore [-create] [-createSSO] [-createLSSO] [-createALO] [-delete] [-deleteSSO] [-list]
        [-createEntry alias secret] [-viewEntry alias] [-modifyEntry alias secret] [-deleteEntry alias]
        [-createCredential connect_string username password] [-listCredential]
        [-modifyCredential connect_string username password] [-deleteCredential connect_string]
        [-help] [-nologo]

CHANGE WALLET PASSWORD

$ orapki wallet change_pwd -wallet /u01/.../wallets
Oracle PKI Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.
Enter wallet password:
Enter New Wallet Password:
Confirm Wallet Password:
Wallet password has been changed.
$
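Putting it together, a minimal sketch (paths and connect string are examples; logging in with sqlplus /@connect_string also needs SQLNET.WALLET_OVERRIDE = TRUE in sqlnet.ora):

$ mkstore -wrl /u01/app/oracle/wallet -createCredential myhost:1521/myservice scott <password>
$ sqlplus /@myhost:1521/myservice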
 Comment 

Ruby

on August 5, 2016 at 8:45 am
Posted In: Jernkins (Cat)
** = exponent
% = modulo
puts – prints the string followed by a newline.
print – no newline generated after the string.
"some string".length
.length
.reverse
.upcase
.downcase
.capitalize!
if string.include? "s"
.gsub!(/s/, "th")
puts "paul".upcase

COMMENTS

# single line comment
=begin
Multiple line
comment
=end
name = "Paul".downcase.reverse.upcase => "LUAP"
gets.chomp => get input and remove CR
print "What's your first name? "
first_name = gets.chomp.capitalize!
print "What's your last name? "
last_name = gets.chomp.capitalize!
print "What's your city? "
city_name = gets.chomp.capitalize!
print "What's your state? "
state_name = gets.chomp.upcase!
puts "#{first_name} #{last_name} from #{city_name} in #{state_name}"
CONTROL FLOW
if
elsif
else
end
unless
==, !=, <, <=, >, >=, &&, ||, !
EXAMPLE
print "Thtring, pleathe!: "
user_input = gets.chomp
user_input.downcase!
if user_input.include? "s"
user_input.gsub!(/s/, "th")
else
puts "Nothing to do here!"
end
puts "Your string is: #{user_input}"

LOOPS

counter = 1
while counter <= 10
puts counter
counter = counter + 1
end
for num in 1..10 OR for num in 1...10 # two dots include the end value (1 to 10); three dots exclude it (1 to 9)
puts num
end
Infinite loop => loop { print "~Hello World" }
i = 0
loop do
i += 1
print "#{i}"
break if i > 5
end
for i in 1..5 # skips even numbers
next if i % 2 == 0
print i
end

OPERATORS

+=, -=, *=, /=

Arrays

my_array = [1,2,3,4,5]
multi_array = [[1,2,3],[1,2,3],[1,2,3]]
multi_array.each -- will access each array in the array.
my_array.each do |x|
:
end
my_array.each { |item| print item * 2}

Hashes

key value pairs.
my_hash = { "name" => "Eric", "age" => 26, "hungry?" => true }
puts my_hash["name"]
puts my_hash["age"]
puts my_hash["hungry?"]
my_hash = Hash.new
pets = Hash.new
pets["Rio"] = "Moluccan Cockatoo"
family = { "Homer" => "dad", "Marge" => "mom", "Lisa" => "sister", "Maggie" => "sister", "Abe" => "grandpa", "Santa's Little Helper" => "dog" }
pets.each { |x| puts "#{x}" }
family.each { |x, y| puts "#{x}: #{y}" }

Times Split

10.times { print "This will print ten times "}
words = sometext.split(" ")
words.each do |astring |

SORTS

puts "Text please: "
text = gets.chomp
words = text.split(" ")
frequencies = Hash.new(0)
words.each { |word| frequencies[word] += 1 }
frequencies = frequencies.sort_by {|a, b| b }
frequencies.reverse!
frequencies.each { |word, frequency| puts word + " " + frequency.to_s }
my_array.sort!
COMBINED
a <=> b returns -1, 0 or +1
# To sort our books in ascending order, in-place
books.sort! { |firstBook, secondBook| firstBook <=> secondBook }
# Sort your books in descending order, in-place
books.sort! { |firstBook, secondBook| secondBook <=> firstBook }

METHODS

def puts_1_to_10
  (1..10).each { |i| puts i }
end

puts_1_to_10 # Ignore this for now. We'll explain it soon!

BLOCKS

1.times do
  puts "I'm a code block!"
end
1.times { puts "As am I!" }

# block that capitalizes each string in the array
["ryan", "jane"].each {|string| puts "#{string[0].upcase}#{string[1..-1]}"} # prints "Ryan", then "Jane"


 Comment 

CHEF Basic Commands – WITH solo

on August 5, 2016 at 7:41 am
Posted In: Chef (Cat)

CHEF SERVER

chef-server-ctl uninstall

opscode-analytics-ctl uninstall

 

Download the package from https://downloads.chef.io/chef-server/.

rpm -Uvh chef-server-core-12.11.1-1.el7.x86_64.rpm

chef-server-ctl reconfigure

chef-server-ctl user-create hallapa paul halla p.h@email.com 'admin123' --filename /home/hallapa/ph.pem
chef-server-ctl org-create phh 'phh_at_Cashflows' --association_user hallapa --filename /home/hallapa/phh_validator.pem

Use Chef management console to manage data bags, attributes, run-lists, roles, environments, and cookbooks from a web user interface.
chef-server-ctl install chef-manage

chef-server-ctl reconfigure

chef-manage-ctl reconfigure
Use Chef push jobs to run jobs (an action or a command to be executed) against nodes independently of a chef-client run.
chef-server-ctl install opscode-push-jobs-server
chef-server-ctl reconfigure

opscode-push-jobs-server-ctl reconfigure
Use Reporting to keep track of what happens during every chef-client run across all of the infrastructure being managed by Chef. Run Reporting with Chef management console to view reports from a web user interface.

chef-server-ctl install opscode-reporting
chef-server-ctl reconfigure

opscode-reporting-ctl reconfigure

chef-server-ctl user-list


https:///login

 

$ sudo su – root

# cd ~

# curl -L https://www.opscode.com/chef/install.sh | bash

# chef-solo -v

# wget http://github.com/opscode/chef-repo/tarball/master

# tar -zxf master
# mv chef-chef-repo* chef-repo
# rm master
# cd chef-repo
# mkdir .chef
# echo "cookbook_path [ '/root/chef-repo/cookbooks' ]" > .chef/knife.rb
# chef generate cookbook phpapp

 

cd ~/chef-repo/cookbooks
knife cookbook site download apache2
tar zxf apache2*
rm apache2*.tar.gz
knife cookbook site download apt etc
knife cookbook site download iptables etc
knife cookbook site download logrotate etc
knife cookbook site download pacman etc
knife cookbook site download compat_resource etc
TO INSTALL APACHE2 USING MY COOKBOOK
cd phpapp
vi metadata.rb # and add the line
depends "apache2"
vi recipes/default.rb # add the lines
include_recipe "apache2"
apache_site "default" do
  enable true
end
cd ..
vi solo.rb # create solo.rb with:
file_cache_path "/root/chef-solo"
cookbook_path "/root/chef-repo/cookbooks"
vi web.json # create web.json with:
{
  "run_list": [ "recipe[apt]", "recipe[phpapp]" ]
}
chef-solo -c solo.rb -j web.json
APACHE2 NOW INSTALLED
 Comment 

GG Installation

on August 4, 2016 at 4:44 pm
Posted In: 12c General (Cat)

Introduction

This document details the Oracle GoldenGate (GGS) installation and configuration for Linux with an Oracle database as the source and Microsoft Windows with a SQL Server database as the destination.

Windows installation and Configuration

Requirements

Oracle GoldenGate for SQL Server on Microsoft Windows requires:

A full Oracle client installation

A SQL Server database user and schema

An ODBC DSN for the SQL Server database.

Oracle Client Installation

Go to oracle technet -> downloads -> Oracle Database

http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html .

Select “See all” next to the Microsoft Windows x64 (64-bit).

Select the Oracle Database Client for Microsoft Windows (x64) download.

Once the download has completed unzip the file and run setup.exe.

Select "Administrator" and "Next".

Select English and next.

Select "Use Windows Built-in Account"

Enter the Base and Software location directories.

For this installation we used “D:\Oracle\” and “D:\Oracle\Client”

Select Install

SQLServer Login and Schema

Login

Connect to the database as an administrator.

Right click Security and select New -> Login.

Add GGUSER and select the relevant database as the default database; do not select OK.

Select User Mapping from the left hand menu and then:

Select the database from the top screen and db_owner from the bottom screen.

Select OK to exit.

Schema

With the database selected, choose New Query and execute the command:

create schema gguser;

Right click the Security option under the database and select new -> user

Enter a name for the user and select the “…” next to the Login name field.

Select Browse from the Select Login page, select the user we created and select OK.

Enter the schema name we created in the Default Schema box.

Select OK

ODBC Setup

Select Start > Settings > Control Panel.

Double-click Administrative Tools.

Double-click Data Sources (ODBC)

Select the System DSN tab and then select the Add button.

The Create New Data Source dialog box appears.

Select SQL Server and then Finish

Enter a name for the source, e.g. “GGToSQL”

Enter a description, e.g. “Oracle GG to MS SQL Server”

Enter the server name.

Select “Next”

Normally we would use a windows authenticated user but for this example we have added the sql server user GGUSER.

Use the drop down list to change the to the relevant database.

Select Next and then Finish.

GoldenGate Installation

Install

Unzip the download file provided by Oracle into

D:\oracle\GoldenGate

Please note the directory name cannot include spaces.

Change to the windows directory

D:\oracle\GoldenGate

Enter

ggsci

At the ggsci prompt enter the commands

create subdirs

exit

Windows Service

The GoldenGate manager process is not installed as a windows service by default and must be manually added.

As a system administrator:

start -> run -> cmd

cd <GoldenGate directory, e.g. D:\oracle\GoldenGate>

install ADDEVENTS ADDSERVICE AUTOSTART

This will start the service on system boot under the local system account and add events to the windows event manager.

GoldenGate Directory Setup

The create subdirs GoldenGate command has created the relevant directories under the GoldenGate home.

As the trail and discard files can grow quite large and we do not want to endanger other processes running off the D drive, a separate volume was created for these files. We need to create the relevant directories on this volume.

G:

mkdir goldengatefiles

cd goldengatefiles

mkdir dirdat

mkdir dirrpt

GoldenGate Configuration

All commands will be run from the D:\oracle\GoldenGate directory

Credentialstore

Use a credential store to remove the requirement for passwords to be typed on the command line.

ggsci> add credentialstore

ggsci> alter credentialstore add user gguser password gguser alias gguser

This is the sql server database login chosen earlier.

 

Checkpoint table

$ ggsci

ggsci> edit params ./GLOBALS

CHECKPOINTTABLE gguser.chkptab

ggsci> exit

The exit and restart is required to reload the GLOBALS file.

Note the GLOBALS file must be uppercase, in the top level directory (goldengate) and have no file extension.

$ ggsci

ggsci> dblogin sourcedb gguser useridalias gguser

ggsci> add checkpointtable

Manager service.

$ggsci

ggsci> edit params mgr

PORT 7908

DYNAMICPORTLIST 7908-7915

ACCESSRULE, PROG *, IPADDR ip1.ip1.ip1.ip1, ALLOW

SOURCEDB gguser

USERIDALIAS gguser

PURGEOLDEXTRACTS G:\goldengatefiles\dirdat\TP*, minkeephours 12

Start and stop the manager service using the windows services panel


Linux Installation and Configuration

GoldenGate Linux user

Create the oracle goldengate user as a member of the oracle database installation group; this will enable Classic capture mode should it ever be required.

useradd -g oinstall oraggs

mkdir /u01/app/oracle/ggs_12201

chown oracle:oinstall /u01/app/oracle/ggs_12201

chmod 775 /u01/app/oracle/ggs_12201

passwd oraggs

Log in as oraggs

Edit ~/.bash_profile and add the following:

export PATH=/u01/app/oracle/ggs_12201:$PATH

export LD_LIBRARY_PATH=/u01/app/oracle/ggs_12201:/lib:$LD_LIBRARY_PATH

alias ggs="cd /u01/app/oracle/ggs_12201"

Install goldenGate

Download the Oracle GoldenGate zip file to /tmp and then expand it.

This unzips as /tmp/fbo_ggs_Linux_x64_shiphome.

Change directory to Disk1 and run the installer.

cd Disk1

./runInstaller

Select the 12c installation

Enter the correct locations.

 

 

Select Install

Select Close

GoldenGate Directory Setup

The GoldenGate directories have been created under the GoldenGate home.

As for the windows installation we should create directories on a separate volume for the GoldenGate trail and log files.

mkdir /u01/oracle/

mkdir GG_DIRECTORY

mkdir GG_DIRECTORY/dirdat

mkdir GG_DIRECTORY/dirrpt

Database Modifications

The oracle database should be in archivelog mode, logging and with flashback on.

SQL> select log_mode from v$database;

SQL> select flashback_on from v$database;

SQL> select force_logging from v$database;

If any are not in force then make the necessary changes.
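For example, if any of the three are off (archivelog mode needs a bounce; the other two can be set with the database open):

SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> alter database force logging;
SQL> alter database flashback on;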

Enable GoldenGate

SQL> alter system set ENABLE_GOLDENGATE_REPLICATION=true;

Create the GoldenGate user

Sql> create user oraggs identified by oraggs default tablespace users temporary tablespace temp;

Sql> grant create session, connect, resource, dba to oraggs;

Sql> alter user oraggs quota unlimited on users;

Sql> grant execute on dbms_goldengate_auth to oraggs;

Sql> grant execute on dbms_flashback to oraggs;

Sql> exec dbms_goldengate_auth.grant_admin_privilege(‘ORAGGS’);

Sql> alter pluggable database add supplemental log data;

Sql> alter system switch logfile;

GoldenGate Configuration

All commands will be run from the GG_DIRECTORY directory as oraggs.

The commands all assume the oraggs user profile has been executed to define the oracle database home and sid environment variables.

Credentialstore

Use a credential store to remove the requirement for passwords to be typed on the command line.

ggsci> add credentialstore

ggsci> alter credentialstore add user oraggs password oraggs alias oraggs

Checkpoint table

$ggsci

ggsci> edit params ./GLOBALS

CHECKPOINTTABLE oraggs.chkptab

ggsci> exit

The exit and restart is required to reload the GLOBALS file.

Note the GLOBALS file must be uppercase, in the top level directory (goldengate) and have no file extension.

$ ggsci

ggsci> dblogin useridalias oraggs

ggsci> add checkpointtable

Manager service.

$ggsci

ggsci> edit params mgr

PORT 7908

DYNAMICPORTLIST 7909-7915

ACCESSRULE, PROG *, IPADDR ip2.ip2.ip2.ip2, ALLOW

USERIDALIAS oraggs

PURGEOLDEXTRACTS GG_DIRECTORY/dirdat/TP*, minkeephours 12

Start and stop the manager service

ggsci> stop mgr

ggsci> start mgr

Trandata

On the source oracle goldengate server

ggsci> dblogin useridalias oraggs

ggsci> ADD TRANDATA <schema>.*

ggsci> exit

Oracle to SQLServer initial load

Mapping tables by name

SQLServer setup

$ ggsci

ggsci> add replicat RnameINI, SPECIALRUN

ggsci> edit params RnameINI

REPLICAT RnameINI

TARGETDB gguser, USERIDALIAS gguser

discardfile G:\goldengatefiles\INIFAIL.dsc, APPEND

MAP <schema>.<table>, TARGET <schema>.<table>;

: : :

MAP <schema>.<table>, TARGET <schema>.<table>;

ggsci> start replicat RnameINI

N.B. In the above command we are naming all the tables to be processed

Oracle Setup

As oraggs

$ggsci

ggsci> add extract EnameINI, SOURCEISTABLE

ggsci> edit params enameINI

EXTRACT EnameINI

USERIDALIAS oraggs

RMTHOST ip2.ip2.ip2.ip2, mgrport 7908

RMTTASK REPLICAT, GROUP RnameINI

TABLE <schema>.<table>;

:

TABLE <schema>.<table>;

The tables listed are the same as those in the RnameINI parameter file used on the Windows server.

 

ggsci> start extract EnameINI

ggsci> info all

ggsci> send extract EnameINI, report

 

n.b. If the extract completes quickly then info all and the reports will both show that the extract is not currently running; this is normal, but you should check the ggserr.log file.

Oracle to SQLServer Change Delivery

Oracle Setup

As oraggs

$ggsci

ggsci> add extract Ename1, SOURCEISTABLE

ggsci> edit params Ename1

EXTRACT Ename1

USERIDALIAS oraggs

RMTHOST ip2.ip2.ip2.ip2, mgrport 7908

RMTTASK REPLICAT, GROUP Rname1

TABLE <schema>.<table>;

:

TABLE <schema>.<table>;

 

The tables listed are the same as those in the Rname1 parameter file used on the Windows server.

 

ggsci> start extract Ename1

ggsci> info all

ggsci> send extract Ename1, report

SQLServer setup

$ ggsci

ggsci> add replicat Rname1, exttrail G:\GoldenGatefiles\dirdat\PP

ggsci> edit params Rname1

REPLICAT Rname1

TARGETDB gguser, USERIDALIAS gguser

discardfile G:\goldengatefiles\INIFAIL.dsc, APPEND

MAP ., TARGET .;

: : :

MAP ., TARGET .;

Oracle Setup

As oraggs

$ggsci

ggsci> add extract Ename2, integrated tranlog, begin now

ggsci> register extract Ename2

ggsci> edit params Ename2

EXTRACT Ename2

USERIDALIAS oraggs

EXTTRAIL ./dirdat/TP

TABLE <schema>.<table>;

:

TABLE <schema>.<table>;

 

ggsci> add exttrail GG_DIRECTORY/dirdat/TP, extract Ename2

ggsci> add extract Pname1, exttrailsource GG_DIRECTORY/dirdat/TP

ggsci> register extract Pname1

ggsci> edit params Pname1

EXTRACT Pname1

USERIDALIAS oraggs

RMTHOST ip2.ip2.ip2.ip2, mgrport 7908

RMTTRAIL G:\goldengatefiles\dirdat\PP

TABLE <schema>.<table>;

:

TABLE <schema>.<table>;

 

ggsci> add rmttrail G:\GoldenGatefiles\dirdat\PP, extract Pname1,  megabytes 10

 

ggsci> start extract Ename2

ggsci> start extract Pname1

SQLServer

ggsci> start replicat Rname1

 

Changes made to the source tables should now be replicated to the destination tables.

Report Commands

Here are some of the commands to generate reports.

ggsci> info all

 

ggsci> send extract Ename1, report

ggsci> view report Ename1

 

ggsci> send extract Pname1, report

ggsci> view report Pname1

 

ggsci> send replicat Rname1, report

ggsci> view report rname1

 

ggsci> info extract Ename1

ggsci> info exttrail *

 

n.b. For initial load extracts, if the extract completes quickly then info all and the reports will both show that the extract is not currently running; this is normal, but you should check the ggserr.log file.


 

Security

For encryption of the trail files both on the system and in transit between the systems we will use wallets.

Master Key and Wallet

Create the master key and wallet.

$ ggsci

ggsci> create wallet

Created wallet at location ‘dirwlt’.

Opened wallet at location ‘dirwlt’.

ggsci> add masterkey

Master key ‘OGG_DEFAULT_MASTERKEY’ added to wallet at location ‘dirwlt’.

ggsci> info masterkey

Masterkey Name:                 OGG_DEFAULT_MASTERKEY

Creation Date:

Version:        Creation Date:                  Status:

1                                               Current

ggsci> info masterkey version 1

Masterkey Name:                 OGG_DEFAULT_MASTERKEY

Creation Date:

Version:                        1

Renew Date:

Status:                         Current

Key Hash (SHA1):                0…6

 

Copy the wallet 'dirwlt/cwallet.sso' to the Windows server.

Parameter files

For the extract group parameter file (e.g. ESRCDST1) add the following line:

ENCRYPTTRAIL AES256

For the pump group parameter file (e.g. PSRCDST1) add the following line:

RMTHOSTOPTIONS ENCRYPT AES256

 

Restart all extracts and the files should now be encrypted. As long as the wallet has been copied to the dirwlt directory on the windows server decryption is automatic.

 

 

 Comment 

CHEF create an Apache Virtualhost on AWS EC2 Ubuntu

on August 4, 2016 at 12:39 pm
Posted In: Chef Cookbook Tutorials

Continued from CHEF configure WordPress on AWS EC2 Ubuntu

We need to define a template for the Apache VirtualHost that will run WordPress.

cd cookbooks/phpapp

vi templates/default/site.conf.erb

# Auto generated by Chef. Changes will be overwritten.

<VirtualHost *:80>

  ServerName <%= @params[:server_name] %>

  DocumentRoot <%= @params[:docroot] %>

  <Directory <%= @params[:docroot] %>>
    Options FollowSymLinks
    AllowOverride FileInfo Options
    AllowOverride All
    Order allow,deny
    Allow from all
  </Directory>

  <Directory />
    Options FollowSymLinks
    AllowOverride None
  </Directory>

</VirtualHost>

We’ll now use a new resource which is provided by the apache2 cookbook called web_app to create an Apache VirtualHost using our template site.conf.erb.

vi recipes/default.rb

# Disable the default apache site, i.e. make the following change, which is just after the include_recipe lines:

apache_site "default" do

enable true

end

TO

apache_site "default" do

enable false

end

# Then add the following lines at the end of the file.

web_app 'phpapp' do

template 'site.conf.erb'

docroot node['phpapp']['path']

server_name node['phpapp']['server_name']

end

vi attributes/default.rb # and add the line

default['phpapp']['server_name'] = "phpapp"

If your server has a hostname set up in DNS, override the default attribute and specify the actual name in web.json. If there is no proper hostname defined, ignore this step.

cd ../..

vi web.json # and change the contents to

{

"mysql": {"server_root_password": "808052769e2c6d909027a2905b224bad", "server_debian_password": "569d1ed2d46870cc020fa87be83af98d", "server_repl_password": "476911180ee92a2ee5a471f33340f6f4"},

"phpapp": {"db_password": "212b09752d173876a84d374333ae1ffe", "server_name": "###.###.###"},

"run_list": [ "recipe[apt]", "recipe[phpapp]" ]

}

chef-solo -c solo.rb -j web.json

Visit the web server and see if that’s worked.

http:///phpapp/wp-admin/install.php

 

 Comment 

CHEF Cookbook Tutorial on AWS EC2 Ubuntu

on August 4, 2016 at 11:47 am
Posted In: Chef Cookbook Tutorials

From http://gettingstartedwithchef.com/first-steps-with-chef.html

The simplest way to use Chef is chef-solo. It allows you to install, configure and manage the packages required by your application without the complication of any client and server configuration. We’ll start with the common scenario that you have to setup a website designer with a WordPress environment.

Install chef

$ sudo su – root

# cd ~

curl -L https://www.opscode.com/chef/install.sh | bash

Check chef
chef-solo -v

So, what do we need to do to get our web server up and running?

  • Install and configure Apache
  • Install and configure MySQL
  • Install and configure PHP
  • Deploy our website code to the site

How do we do that? We write our first Chef cookbook. But before we do that we should setup a file structure that will help us organise our various Chef files. Opscode, the makers of Chef provide one. They call it simply the Chef Repository.

wget http://github.com/opscode/chef-repo/tarball/master

tar -zxf master
mv chef-chef-repo* chef-repo
rm master
cd chef-repo
Our Chef cookbook should unsurprisingly reside within the cookbooks directory. We’re going to call it “phpapp”. We can use the command knife to help us manage our cookbooks. First we should tell knife where to find our cookbooks directory.
 
mkdir .chef
echo "cookbook_path [ '/root/chef-repo/cookbooks' ]" > .chef/knife.rb
knife cookbook create phpapp   # TO BE REPLACED BY “chef generate cookbook phpapp”

 

Next page

CHEF install apache on AWS EC2 Ubuntu

 

 Comment 

CHEF configure WordPress on AWS EC2 Ubuntu

on August 4, 2016 at 11:25 am
Posted In: Chef Cookbook Tutorials

Carried on from CHEF install WordPress on AWS EC2 Ubuntu

WordPress keeps its configuration in a file called wp-config.php. We need to create that file and put database names and user access details inside it. Chef provides a resource called template that can do just that.

cd cookbooks/phpapp

vi recipes/default.rb # add the lines

wp_secrets = Chef::Config[:file_cache_path] + '/wp-secrets.php'

if File.exist?(wp_secrets)
  salt_data = File.read(wp_secrets)
else
  require 'open-uri'
  salt_data = open('https://api.wordpress.org/secret-key/1.1/salt/').read
  open(wp_secrets, 'wb') do |file|
    file << salt_data
  end
end

template node['phpapp']['path'] + '/wp-config.php' do
  source 'wp-config.php.erb'
  mode 0755
  owner 'root'
  group 'root'
  variables(
    :database   => node['phpapp']['database'],
    :user       => node['phpapp']['db_username'],
    :password   => node['phpapp']['db_password'],
    :wp_secrets => salt_data)
end

WordPress comes with an example configuration file which we’d usually alter to create our template but for brevity we’ll just specify a cut down version below.

Create templates/default/wp-config.php.erb

vi templates/default/wp-config.php.erb

 

Variables that should be inserted into the template are done so with <%= @variable %>. You can also include node attributes inside templates using <%= node['attribute'] %>, although that can prevent your template from being easily reused elsewhere, so it is not considered best practice.

cd ../..

chef-solo -c solo.rb -j web.json

Next

CHEF create an Apache Virtualhost on AWS EC2 Ubuntu

 

 

 Comment 

CHEF install WordPress on AWS EC2 Ubuntu

on August 4, 2016 at 11:19 am
Posted In: Chef Cookbook Tutorials

Carried on from CHEF configure MySQL on AWS EC2 Ubuntu

We want to setup each new WordPress build with the latest version so we’ll fetch it from WordPress.org. But how? Fortunately enough, Chef comes with the resource remote_file which will do just that for us.

cd cookbooks/phpapp

vi recipes/default.rb # and add the following lines

wordpress_latest = Chef::Config[:file_cache_path] + "/wordpress-latest.tar.gz"

remote_file wordpress_latest do
  source "http://wordpress.org/latest.tar.gz"
  mode "0644"
end

directory node["phpapp"]["path"] do
  owner "root"
  group "root"
  mode "0755"
  action :create
  recursive true
end

execute "untar-wordpress" do
  cwd node['phpapp']['path']
  command "tar --strip-components 1 -xzf " + wordpress_latest
  creates node['phpapp']['path'] + "/wp-settings.php"
end

vi attributes/default.rb # and add the following
default["phpapp"]["path"] = "/var/www/phpapp"
cd ../..
chef-solo -c solo.rb -j web.json
Next CHEF configure WordPress on AWS EC2 Ubuntu
 Comment 

CHEF configure MySQL on AWS EC2 Ubuntu

on August 4, 2016 at 10:22 am
Posted In: Chef Cookbook Tutorials

Carried on from CHEF install PHP on AWS EC2 Ubuntu

Now if you were setting up a server by hand, this is where you’d manually setup a database, copy the web application code over, create a MySQL user for the website and configure virtual hosts. Instead, we’ll use Chef to automate the setup of our application. This allows us to set up multiple servers and know we will always get the same results.

Before we setup our database we need to fetch a final few cookbooks. The database cookbook provides resources that allow us to easily manage databases and database users. The database cookbook depends on the postgresql, mariadb, xfs and aws cookbooks so we’ll need those as well even though we won’t be using them. We’ll also fetch the mysql2_chef_gem cookbook which the database cookbook requires when used with MySQL.

cd /root/chef-repo/cookbooks
knife cookbook site download database
tar zxf database-*.tar.gz
knife cookbook site download postgresql
tar zxf postgresql-*.tar.gz
knife cookbook site download xfs
tar zxf xfs-*.tar.gz
knife cookbook site download mariadb
tar zxf mariadb-*.tar.gz
knife cookbook site download aws
tar zxf aws-*.tar.gz
knife cookbook site download mysql2_chef_gem 0.1.0
tar zxf mysql2_chef_gem-*.tar.gz
root@intro:~/chef-repo/cookbooks# rm *.tar.gz

cd phpapp
vi metadata.rb # and add the following 2 lines
depends "database"
depends "mysql2_chef_gem"

To setup our web application we need to:

  1. Create a MySQL database for our application
  2. Create a MySQL database user for our application
  3. Fetch the code for our web application
  4. Create a configuration file with database details and other configuration options for our web application
  5. Create an Apache VirtualHost for our web application

A resource is an action that your recipe can perform. The template resource creates a file by expanding variables in a template. The user resource can be used to manage users. The database cookbook provides the resource mysql_database which we will now use to perform the first step.

vi recipes/default.rb
#
# Cookbook Name:: phpapp
# Recipe:: default
#
# Copyright 2013, YOUR_COMPANY_NAME
#
# All rights reserved - Do Not Redistribute
#
include_recipe "apache2"
include_recipe "mysql::client"
include_recipe "mysql::server"
include_recipe "php"
include_recipe "php::module_mysql"
include_recipe "apache2::mod_php5"
include_recipe "mysql::ruby"
include_recipe "mysql2_chef_gem"

apache_site "default" do
  enable true
end

mysql_database node['phpapp']['database'] do
  connection ({:host => 'localhost', :username => 'root', :password => node['mysql']['server_root_password']})
  action :create
end

The "mysql_database node['phpapp']['database'] do" resource lets us define the database name later as an attribute; it works because we have also included the mysql2_chef_gem recipe, which installs a Ruby library (called a "gem") that allows Chef to interact with a MySQL server.

Attributes

The database attribute can be defined in a few places. We could define the attribute in web.json like we did with the MySQL ones but that makes our cookbook unnecessarily difficult to use. We want to provide the option to use a database called something other than phpapp but we should really provide a default.

vi cookbooks/phpapp/attributes/default.rb # and add the line

default["phpapp"]["database"] = "phpapp"

Create the Database

cd ../..

chef-solo -c solo.rb -j web.json

Create the Database User

cd cookbooks/phpapp

vi recipes/default.rb # append the lines
mysql_database_user node['phpapp']['db_username'] do
  connection ({:host => 'localhost', :username => 'root', :password => node['mysql']['server_root_password']})
  password node['phpapp']['db_password']
  database_name node['phpapp']['database']
  privileges [:select, :update, :insert, :create, :delete]
  action :grant
end
vi attributes/default.rb # add the line
default["phpapp"]["db_username"] = "phpapp"
cd ../..
vi web.json # add the following

{
  "mysql": {"server_root_password": "808052769e2c6d909027a2905b224bad", "server_debian_password": "569d1ed2d46870cc020fa87be83af98d", "server_repl_password": "476911180ee92a2ee5a471f33340f6f4"},
  "phpapp": {"db_password": "212b09752d173876a84d374333ae1ffe"},
  "run_list": [ "recipe[apt]", "recipe[phpapp]" ]
}

chef-solo -c solo.rb -j web.json

MySQL is now up and running; try:

1) mysql -p      (password: 808052769e2c6d909027a2905b224bad)

2) mysql -u phpapp -p      (password: 212b09752d173876a84d374333ae1ffe)

Both get to the mysql database prompt.
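To double-check what Chef created (the '%' host below is an assumption about how the database cookbook grants access):

mysql> show databases;
mysql> show grants for 'phpapp'@'%';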

Next Page

CHEF install WordPress on AWS EC2 Ubuntu

 

 Comment 

CHEF install PHP on AWS EC2 Ubuntu

on August 4, 2016 at 8:51 am
Posted In: Chef Cookbook Tutorials

 

FOLLOWS ON FROM “CHEF install MySQL on AWS EC2 Ubuntu“

 

The php cookbook depends on the xml, yum-epel, windows, and iis cookbooks, so we’ll need those even though we won’t be using all of them. We’ll also have to install sub-dependencies yum (a dependency of yum-epel), chef_handler, and powershell (dependencies of windows).

 

cd /root/chef-repo/cookbooks

knife cookbook site download php

tar zxf php*.tar.gz

knife cookbook site download xml

tar zxf xml-*.tar.gz

knife cookbook site download yum

tar zxf yum-*.tar.gz

knife cookbook site download yum-epel

tar zxf yum-epel-*.tar.gz

knife cookbook site download powershell

tar zxf powershell-*.tar.gz

knife cookbook site download iis

tar zxf iis-*.tar.gz

rm *.tar.gz

 

Here we add the PHP default recipe, one to install the PHP MySQL extension and one to enable the Apache PHP module mod_php. We also enable the default site so we can check our installation has worked.

 

cd phpapp

vi metadata.rb # add the following line

depends “php”

vi recipes/default.rb # change the include_recipe lines to the following

include_recipe “apache2”

include_recipe “mysql::client”

include_recipe “mysql::server”

include_recipe “php”

include_recipe “php::module_mysql”

include_recipe “apache2::mod_php5”

cd ../..
chef-solo -c solo.rb -j web.json

N.B. I had to edit php/metadata.json and change the text “mysql”:”>= 6.0.0″ to “mysql”:”>= 0.0.0″

TEST
Open /var/www/test.php in your editor and insert the lines:
<?php
phpinfo();
?>
NOW

vi /etc/apache2/apache2.conf  # and change the <Directory /var/www/> section to the following:

<Directory /var/www/>
        Options FollowSymLinks
        AllowOverride None
        Require all granted
</Directory>

Now go to http://yourserver/test.php

Next page CHEF configure MySQL on AWS EC2 Ubuntu

 
 Comment 

CHEF install MYSQL on AWS EC2 Ubuntu

on August 4, 2016 at 7:40 am
Posted In: Chef Cookbook Tutorials

FOLLOWS ON FROM “CHEF install apache on AWS EC2 Ubuntu”

Next we’ll setup MySQL. As the community site has a cookbook for MySQL, the process is similar to Apache. Again we’ll ask knife to fetch the cookbook from the community site for us.

 

cd cookbooks

knife cookbook site download mysql

tar zxf mysql*

rm mysql-*.tar.gz

Install mysql as both a client and server as we only have the one machine.

cd mysql/recipes/

cd ../../phpapp

 

vi metadata.rb # and add the line

depends "mysql"

vi recipes/default.rb # and after the include apache2 add the following lines

include_recipe "mysql::client"

include_recipe "mysql::server"

cd ..

 

knife cookbook site download openssl

tar zxf openssl*.tar.gz

rm openssl*.tar.gz

knife cookbook site download build-essential

tar zxf build-essential-*.tar.gz

rm build-essential-*.tar.gz

knife cookbook site download homebrew

tar zxf homebrew-*.tar.gz

rm homebrew-*.tar.gz

knife cookbook site download windows

tar zxf windows-*.tar.gz

rm windows-*.tar.gz

knife cookbook site download chef_handler

tar zxf chef_handler-*.tar.gz

rm chef_handler-*.tar.gz

knife cookbook site download chef-sugar

tar zxf chef-sugar-*.tar.gz

rm chef-sugar-*.tar.gz

knife cookbook site download seven_zip

tar zxf seven_zip-*.tar.gz

rm seven_zip-*.tar.gz

knife cookbook site download mingw

tar zxf mingw-*.tar.gz

rm mingw-*.tar.gz

vi mysql/metadata.rb # remove the version numbers for openssl and build-essential, i.e. change

depends 'openssl', '~> <version>'
depends 'build-essential', '~> <version>'

TO

depends 'openssl'
depends 'build-essential'

cd ..
We need to define a root password for MySQL. This is an attribute. In Chef, attributes are values which we use to configure our applications or platform. An attribute could be a port number for Apache. Often a sensible default is specified inside a cookbook. Such a default for a web server port would be 80. There’s no sensible default MySQL password, so we need to specify one. Open web.json.
 
vi web.json # and amend to look like:
{
  "mysql": {"server_root_password": "808052769e2c6d909027a2905b224bad", "server_debian_password": "569d1ed2d46870cc020fa87be83af98d", "server_repl_password": "476911180ee92a2ee5a471f33340f6f4"},
  "run_list": [ "recipe[apt]", "recipe[phpapp]" ]
}
chef-solo -c solo.rb -j web.json
DONE – NOW INSTALL PHP

Next Page CHEF install PHP on AWS EC2 Ubuntu

 

 

 

 Comment 

CHEF install apache on AWS EC2 Ubuntu

on August 3, 2016 at 4:24 pm
Posted In: Chef Cookbook Tutorials

Continued from

CHEF Cookbook Tutorial on AWS EC2 Ubuntu

 

cd cookbooks/phpapp

 

Now we’d need write our cookbook to install and configure Apache, MySQL and PHP.
Welcome to the Opscode Community cookbook site.
Here you’ll find lots of well crafted, tested and battle hardened cookbooks that will do most of the work for you. Think of them as libraries you will use inside your code. We’ll start with the apache2 cookbook. There’s no need to manually download it from the community site, knife has this functionality built in. We’ll also install the apt cookbook. This will help us ensure chef-solo does an apt-get update before we install any packages.
 
cd ..
knife cookbook site download apache2
tar zxf apache2*
rm apache2*.tar.gz
knife cookbook site download apt
tar zxf apt*
rm apt*.tar.gz
knife cookbook site download iptables
tar zxf iptables*
rm iptables*.tar.gz
knife cookbook site download logrotate
tar zxf logrotate*
rm logrotate*.tar.gz
knife cookbook site download pacman
tar zxf pacman*
rm pacman*.tar.gz
knife cookbook site download compat_resource
tar xvf compat_resource-12.10.7.tar.gz
rm compat_resource-12.10.7.tar.gz
cd phpapp
vi metadata.rb # and add the line
depends "apache2"
vi recipes/default.rb # add the lines
include_recipe "apache2"
apache_site "default" do
  enable true
end
cd ..
vi solo.rb # create solo.rb with:
file_cache_path "/root/chef-solo"
cookbook_path "/root/chef-repo/cookbooks"
vi web.json # create web.json with:
{
  "run_list": [ "recipe[apt]", "recipe[phpapp]" ]
}
chef-solo -c solo.rb -j web.json
DONE
Next Page is CHEF install MySQL on AWS EC2 Ubuntu
 Comment 

Connection test

on July 29, 2016 at 2:22 pm
Posted In: Unix notes (Cat)

nc -z targetip 7809      # e.g. test a GoldenGate manager port (mgrport 7809)

nc - arbitrary TCP and UDP connections and listens

DESCRIPTION

The nc (or netcat) utility is used for just about anything under the sun involving TCP or UDP.  It can open TCP connections, send UDP packets, listen on arbitrary TCP and UDP ports, do port scanning, and deal with both IPv4 and IPv6.  Unlike telnet(1), nc scripts nicely, and separates error messages onto standard error instead of sending them to standard output, as telnet(1) does with some.

Common uses include:

·   simple TCP proxies

·   shell-script based HTTP clients and servers

·   network daemon testing

·   a SOCKS or HTTP ProxyCommand for ssh(1)

·   and much, much more

The options are as follows:

-4      Forces nc to use IPv4 addresses only.

-6      Forces nc to use IPv6 addresses only.

-D      Enable debugging on the socket.

-d      Do not attempt to read from stdin.

-h      Prints out nc help.

-i interval Specifies a delay time interval between lines of text sent and received.  Also causes a delay time between connections to multiple ports.

-k Forces nc to stay listening for another connection after its current connection is completed.  It is an error to use this option without the -l option.

-l Used to specify that nc should listen for an incoming connection rather than initiate a connection to a remote host.  It is an error to use this option in conjunction with the -p, -s, or -z options.  Additionally, any timeouts specified with the -w option are ignored.

-n Do not do any DNS or service lookups on any specified addresses, hostnames or ports.

-p source_port Specifies the source port nc should use, subject to privilege restrictions and availability.  It is an error to use this option in conjunction with the -l option.

-r Specifies that source and/or destination ports should be chosen randomly instead of sequentially within a range or in the order that the system assigns them.

-S Enables the RFC 2385 TCP MD5 signature option.

-s source_ip_address Specifies the IP of the interface which is used to send the packets. It is an error to use this option in conjunction with the -l option.

-T ToS Specifies IP Type of Service (ToS) for the connection. Valid values are the tokens “lowdelay”, “throughput”, “reliability”, or an 8-bit hexadecimal value preceded by “0x”.

-C Send CRLF as line-ending

-t Causes nc to send RFC 854 DON’T and WON’T responses to RFC 854 DO and WILL requests. This makes it possible to use nc to script telnet sessions.

-U Specifies to use Unix Domain Sockets.

-u Use UDP instead of the default option of TCP.

-v Have nc give more verbose output.

-w timeout If a connection and stdin are idle for more than timeout seconds, then the connection is silently closed. The -w flag has no effect on the -l option, i.e. nc will listen forever for a connection, with or without the -w flag. The default is no timeout.

-X proxy_version Requests that nc should use the specified protocol when talking to the proxy server. Supported protocols are “4” (SOCKS v.4), “5” (SOCKS v.5) and “connect” (HTTPS proxy). If the protocol is not specified, SOCKS version 5 is used.

-x proxy_address[:port] Requests that nc should connect to hostname using a proxy at proxy_address and port. If port is not specified, the well-known port for the proxy protocol is used (1080 for SOCKS, 3128 for HTTPS).

-z Specifies that nc should just scan for listening daemons, without sending any data to them. It is an error to use this option in conjunction with the -l option.

hostname can be a numerical IP address or a symbolic hostname (unless the -n option is given). In general, a hostname must be specified, unless the -l option is given (in which case the local host is used).

port[s] can be single integers or ranges. Ranges are in the form nn-mm. In general, a destination port must be specified, unless the -U option is given (in which case a socket must be specified).

CLIENT/SERVER MODEL It is quite simple to build a very basic client/server model using nc. On one console, start nc listening on a specific port for a connection.

For example:

$ nc -l 1234

nc is now listening on port 1234 for a connection. On a second console (or a second machine), connect to the machine and port being listened on: $ nc 127.0.0.1 1234
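Back to the original use case: checking a GoldenGate manager port and its dynamic port range from the source host (ports as used elsewhere in these notes; the range form needs an nc build that supports port ranges):

$ nc -zv targethost 7809
$ nc -zv targethost 7909-7915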

 Comment 

DCD – Dead Connection Detection

on July 27, 2016 at 2:40 pm
Posted In: 12c Performance (Cat)

For 12c read oracle articles:

Doc ID 257650.1

Doc ID 1591874.1

Basically:

set EXPIRE_TIME
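EXPIRE_TIME goes in sqlnet.ora on the database server and is in minutes, e.g. to probe idle sessions every 10 minutes:

# $ORACLE_HOME/network/admin/sqlnet.ora
SQLNET.EXPIRE_TIME = 10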

 Comment 

Trace File Analyzer – TFA

on July 27, 2016 at 11:29 am
Posted In: 12c Performance (Cat)

Trace File Analyzer Collector (TFA)

is a diagnostic collection utility to simplify diagnostic data collection on Oracle Clusterware/Grid Infrastructure, RAC and Single Instance Database systems.

TFA is similar to the diagcollection utility packaged with Oracle Clusterware in that it collects and packages diagnostic data – however – TFA is MUCH more powerful than diagcollection with its ability to centralize and automate the collection of diagnostic information.

TFA provides the following key benefits:

  • Increase Productivity
  • Reduce Diagnostic Data Upload Package Size
  • Reduce SR Resolution Times
  • Single Source and Interface for Diagnostic Tools

TFA Collector Features:

  • Simplifies diagnostic data collections
  • A single command performs clusterwide diagnostic collection for ALL components
  • Diagnostic files “trimmed” around the incident time
  • Collected diagnostics are consolidated on a single node
  • Increased efficiency of admin staff
  • Includes and integrates the following tools that were previously packaged as the "RAC and DB Support Tools Bundle" (MOS version only):
    • ORAchk to promote proactive maintenance
    • OSWatcher, Procwatcher, SQLT, ORAtop and DARDA to ensure such data is available for reactive types of issues
USAGE

[grid]$ tfactl diagcollect -all -since 6h

Collecting data for all components using above parameters…

 

Collecting data for all nodes

Collection Id : 20160727092122DBN

 

Repository Location in DBN : /u01/app/grid/tfa/repository

 

Collection monitor will wait up to 30 seconds for collections to start

 

2016/07/27 09:21:25 UTC : Collection Name : tfa_Wed_Jul_27_09_21_22_UTC_2016.zip

2016/07/27 09:21:25 UTC : Sending diagcollect request to host : HOSTN

2016/07/27 09:21:25 UTC : Scanning of files for Collection in progress…

2016/07/27 09:21:25 UTC : Collecting extra files…

2016/07/27 09:22:18 UTC : Completed collection of extra files…

2016/07/27 09:22:25 UTC : Getting list of files satisfying time range [07/27/2016 03:21:25 UTC, 07/27/2016 09:22:25 UTC]

2016/07/27 09:22:25 UTC : Starting Thread to identify stored files to collect

2016/07/27 09:22:25 UTC : Getting List of Files to Collect

2016/07/27 09:22:26 UTC : Trimming file : HOST/crs/HOST/crs/trace/crsd_oraagent_grid.trc with original file size : 10MB

:

2016/07/27 09:22:29 UTC : Trimming file : HOST/asm/+asm/+ASM2/trace/+ASM2_lms0_10658.trc with original file size : 1.7MB

 

2016/07/27 09:22:29 UTC : HOST: Zipped 100 Files so Far

:

2016/07/27 09:22:29 UTC : HOST: Zipped 1000 Files so Far

 

:

2016/07/27 09:22:44 UTC : Collecting ADR incident files…

 

2016/07/27 09:22:49 UTC : Completed Zipping of all files

2016/07/27 09:22:49 UTC : Cleaning up temporary files

2016/07/27 09:22:49 UTC : Finished Cleaning up temporary files

2016/07/27 09:22:49 UTC : Finalizing the Collection Zip File

2016/07/27 09:22:49 UTC : Finished Finalizing the Collection Zip File

2016/07/27 09:22:49 UTC : Total Number of Files checked : 45905

2016/07/27 09:22:49 UTC : Total Size of all Files Checked : 18GB

2016/07/27 09:22:49 UTC : Number of files containing required range : 1172

2016/07/27 09:22:49 UTC : Total Size of Files containing required range : 4.4GB

2016/07/27 09:22:49 UTC : Number of files trimmed : 30

2016/07/27 09:22:49 UTC : Total Size of data prior to zip : 506MB

2016/07/27 09:22:49 UTC : Saved 4.1GB by trimming files

2016/07/27 09:22:49 UTC : Zip file size : 33MB

2016/07/27 09:22:49 UTC : Total time taken : 84s

2016/07/27 09:22:49 UTC : Remote Collection in Progress…

2016/07/27 09:22:49 UTC : lhrdb01:Completed Collection

2016/07/27 09:22:49 UTC : Completed collection of zip files.

Logs are being collected to: /u01/app/grid/tfa/repository/collection_…
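Other tfactl commands worth knowing (standard options):

$ tfactl print status
$ tfactl print config
$ tfactl analyze -since 1d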


 

 

 Comment 

SQL Debug

on July 27, 2016 at 11:23 am
Posted In: 12c Performance (Cat)

SQL> ORADEBUG SETOSPID 10189

Oracle pid: 81, Unix process pid: 10189, image: oracle@nnn.com

SQL> ORADEBUG TRACEFILE_NAME;

/u01/app/oracle/diag/rdbms/dbn/dbn_id/trace/db_id_ora_10189.trc

SQL> oradebug event 10046 trace name context forever, level 12

SQL> oradebug event 10046 trace name context off

 

SQL> oradebug session_event 10046 trace name context forever,level 12

 

SQL> oradebug event 10046 trace name context off
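An alternative that does not need oradebug access is DBMS_MONITOR; sid and serial# come from v$session (the values below are placeholders):

SQL> exec dbms_monitor.session_trace_enable(session_id => 81, serial_num => 123, waits => TRUE, binds => TRUE);
SQL> exec dbms_monitor.session_trace_disable(session_id => 81, serial_num => 123);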

 

 

 

 Comment 

Install 12c RAC database

on June 10, 2016 at 4:48 pm
Posted In: 12c General (Cat)

As ORACLE

cd /media/database

./runInstaller




 

 Comment 

Install 12c grid – PART 1

on June 10, 2016 at 4:41 pm
Posted In: 12c General (Cat)

PLEASE NOTE:

This is a 12c standard cluster installation using ASM on my test VBox nodes. I know there were some issues (swap space etc.) but as these were minimal I didn't care; I just wanted to make sure I got my grid install documented.

If you don’t like this then feel free to go elsewhere.

Not using GNS, Not using Leaf Nodes, Using local ASM

Configure the hosts file

edit /etc/hosts

127.0.0.1 localhost localhost.localdomain

::1 localhost localhost.localdomain

192.168.0.25 ambari-RAC1 ambari-RAC1.localdomain

192.168.0.26 ambari-RAC2 ambari-RAC2.localdomain

192.168.0.67 ambari-cluster ambari-cluster.localdomain

192.168.0.25 ambari-RAC1-vip ambari-RAC1-vip.localdomain

192.168.0.26 ambari-RAC2-vip ambari-RAC2-vip.localdomain

10.1.1.5 ambari-RAC1-priv ambari-RAC1-priv.localdomain

10.1.1.6 ambari-RAC2-priv ambari-RAC2-priv.localdomain

Set up the hostname

For ambari-rac1

$ hostname

e.g.

hostname ambari-rac1.localdomain

$ domainname

(none)

$ vi /etc/hostname

e.g.

ambari-rac1.localdomain

$ hostnamectl set-hostname ambari-rac1.localdomain

$ nmcli general hostname

Create Groups

oinstall, dba, oper, bckpdba, asmdba, dgdba, kmdba, asmadmin, asmoper

Create Users

grid – member of wheel, oinstall, asmadmin, asmdba, asmoper, dba,bckpdba

oracle – member of wheel, oinstall, dba, oper, bckpdba, asmdba, dgdba ,  kmdba

Profiles

Grid /home/grid/.bash_profile

GRID_BASE=/u01/app/grid

ORACLE_BASE=/u01/app/grid

CRS_HOME=/u01/app/12.1.0/grid

GRID_HOME=/u01/app/12.1.0/grid

ORACLE_HOME=/u01/app/12.1.0/grid

ORA_CRS_HOME=/u01/app/12.1.0/grid

ORACLE_SID=+ASM??

ORA_ENV=12102

export EDITOR ORACLE_BASE ORACLE_HOME GRID_BASE GRID_HOME ORACLE_SID ORA_ENV ORACLE_HOSTNAME CRS_HOME

PATH=$PATH:/usr/sbin:$ORACLE_BASE/dba/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch

export PATH

Oracle /home/oracle/.bash_profile

ORACLE_BASE=/u01/app/oracle

ORACLE_HOME=/u01/app/oracle/product/12102/dbhome_1

ORACLE_SID=orcl

ORA_ENV=dbhome_1

export EDITOR ORACLE_BASE ORACLE_HOME ORACLE_SID ORA_ENV ORACLE_HOSTNAME

NLS_DATE_FORMAT=’YYYYMMDD.HH24MISS’

NLS_LANG=american_america.al32utf8

export NLS_DATE_FORMAT NLS_LANG

PATH=$PATH:/usr/sbin:$ORACLE_BASE/dba/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch

export PATH

Ulimits

edit /etc/security/limits.conf and add the following lines:

grid hard nofile 65536

oracle hard memlock 4335826

SWAPFILE

for a 4 gig memory system that already has a 2 gig swapfile:

# dd if=/dev/zero of=/swapfile bs=1024 count=2097152

# mkswap /swapfile

# chmod 0600 /swapfile

# swapon /swapfile

# vi /etc/fstab                                  ## to make permanent on reboot

/swapfile swap swap defaults 0 0

# cat /proc/swaps                            ## to check swap space

# free ## to check swap space

SSH

Not strictly necessary as the installer can do this

ORACLE DIRECTORIES AND ORAINST

mkdir -p /u01/app/grid

mkdir -p /u01/app/12.1.0/grid

mkdir -p /u01/app/oraInventory

chown -R grid:oinstall /u01/app/grid

chown -R grid:oinstall /u01/app/12.1.0

mkdir -p /u01/app/oracle/product/12102

chown -R oracle:oinstall /u01/app/oracle

Install packages

Download the Oracle public yum repo file from http://public-yum.oracle.com/public-yum-ol7.repo into /etc/yum.repos.d, then:

yum install oracle-rdbms-server-12cR1-preinstall

Install the ASM drivers

yum install kmod-oracleasm oracleasm-support

Install cvuqdisk from the install kit

rpm -Uvh /kit dir/grid/rpm/cvuqdisk-1.0.9-1.rpm

Install asmlib

rpm -Uvh oracleasmlib-2.0.12-1.el7.x86_64.rpm

Update OS

yum update -y

Remove NTPD, stop and disable avahi-daemon

Deconfigure NTP (so that the Cluster Time Synchronization Service runs in active mode), and stop and disable the avahi-daemon service, as sketched below.
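
On OL7 with systemd that amounts to something like the following sketch (assuming the ntp and avahi packages are installed):

systemctl stop ntpd avahi-daemon.socket avahi-daemon.service
systemctl disable ntpd avahi-daemon.socket avahi-daemon.service
mv /etc/ntp.conf /etc/ntp.conf.orig      ## so CTSS runs in active mode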

system parameters

create /etc/sysctl.d/ora_params with the following:

# Oracle 11g R2 and 12c R1 kernel tuning
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

These will be loaded on system reboot; to apply them immediately, run sysctl --system.

Configure ASM (if using it)

/etc/init.d/oracleasm configure

grid          ## default user to own the driver interface

oinstall      ## default group

y             ## start the Oracle ASM library driver on boot

y             ## scan for Oracle ASM disks on boot

/usr/sbin/oracleasm init

/etc/init.d/oracleasm scandisks

/etc/init.d/oracleasm listdisks

initialise the disks.

Assuming sdb and sdc are the disks:

fdisk /dev/sdb

n for new

carriage return

carriage return

carriage return

carriage return

w

fdisk /dev/sdc

n

carriage return

carriage return

carriage return

carriage return

w

/usr/sbin/oracleasm createdisk diskb /dev/sdb1

/usr/sbin/oracleasm createdisk diskc /dev/sdc1
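
createdisk only needs running on one node; on the other node just rescan so the labelled disks appear (a quick check):

/usr/sbin/oracleasm scandisks
/usr/sbin/oracleasm listdisks      ## should now list diskb and diskc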

Install GRID

chown grid:oinstall /u01

chown grid:oinstall /u01/app

chown grid:oinstall /u01/app/grid

Unzip the Oracle grid software

Run cluvfy

as the grid user

/unzip directory/runcluvfy.sh stage -pre crsinst -n ambari-rac1,ambari-rac2 -fixupnoexec

Run fixup to resolve most issues and manually correct any others.

Run the grid installer

/unzip directory/runInstaller

Install and Configure Oracle Grid Infrastructure for a Cluster

Configure a standard cluster

Typical Installation

Cluster name: ambari-cluster

Nodes: ambari-rac1, ambari-rac1-vip
Nodes: ambari-rac2, ambari-rac2-vip

SSH Connectivity: use the Setup button to configure passwordless SSH between the nodes.

GO TO install 12c grid Part 2

 Comment 

Install 12c grid – PART 2

on June 10, 2016 at 1:18 pm
Posted In: 12c General (Cat)

 

 

chown grid:oinstall /u01

chmod 777 /u01

Try again.

Considering I cloned these from a successful build, I don't understand why the kernel versions are different!

Why is resolv.conf incorrect?

Select “Ignore All” and Next.

DNS / NIS Name Service Check – Ignored.

We can ignore this issue as we have ensured all names and addresses are in all hosts files.

Various warnings and errors exist, BUT we have already checked the ones being ignored (kernel version, swap size and DNS) and know they work.

 


 

 Comment 

ASM Preferred read

on September 11, 2015 at 11:09 am
Posted In: 12c General (Cat)

N.B. There is a bug: this does not work with Flex ASM 12.1, but it should be fixed by now.

Suppose that you have an extended 3-node RAC:

  • racnode1 located in SITE A
  • racnode2 located in SITE A
  • racnode3 located in SITE B

and 2 active ASM instances:

  • +ASM1 located in SITE A
  • +ASM3 located in SITE B
srvctl status asm
ASM is running on racnode3,racnode1

Set the asm_preferred_read_failure_groups parameter for the DATAP diskgroup as follows:

SQL> alter system set asm_preferred_read_failure_groups='DATAP.SITEB' sid='+ASM3';
System altered.
SQL> alter system set asm_preferred_read_failure_groups='DATAP.SITEA' sid='+ASM1';
System altered.

so that +ASM3 prefers to read from SITE B and +ASM1 from SITE A.
BUT we find the instance on node 3 is actually using +ASM1:

SQL> select I.INSTANCE_NAME,C.INSTANCE_NAME,C.DB_NAME
  2  from gv$instance I, gv$asm_client C
  3  where C.INST_ID=I.INST_ID and C.instance_name='NOPBDT3';

INSTANCE_NAME    INSTANCE_NAME    DB_NAME
---------------- ---------------- --------
+ASM1            NOPBDT3          NOPBDT

Check using

asmcmd iostat

asmiostat.sh
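
You can also check which disks each ASM instance treats as preferred directly from the standard views; a minimal sketch:

SQL> select inst_id, name, failgroup, preferred_read from gv$asm_disk order by inst_id, failgroup;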

You can temporarily fix this as follows (connected to the +ASM1 instance):

SQL> ALTER SYSTEM RELOCATE CLIENT 'NOPBDT3:NOPBDT';

System altered.

SQL> select I.INSTANCE_NAME,C.INSTANCE_NAME,C.DB_NAME
  2  from gv$instance I, gv$asm_client C
  3  where C.INST_ID=I.INST_ID and C.instance_name='NOPBDT3';

INSTANCE_NAME    INSTANCE_NAME    DB_NAME
---------------- ---------------- --------
+ASM3            NOPBDT3          NOPBDT
+ASM3            NOPBDT3          NOPBDT

 



 Comment 

Flex cluster srvctl and asm Commands

on September 11, 2015 at 10:33 am
Posted In: 12c General (Cat)

Check the status of the ASM instance in RAC

[oracle@oel6-112-rac1 Desktop]$ srvctl status asm
ASM is running on oel6-112-rac2,oel6-112-rac1

Command to check whether Oracle Flex ASM is enabled or not

[oracle@oel6-112-rac1 Desktop]$ asmcmd
ASMCMD> showclustermode
ASM cluster : Flex mode enabled
ASMCMD> showclusterstate
Normal

Commands to check the ASM status and configuration in detail

[oracle@oel6-112-rac1 Desktop]$ srvctl status asm -detail
ASM is running on oel6-112-rac2,oel6-112-rac1
ASM is enabled.
[oracle@oel6-112-rac1 Desktop]$ srvctl config asm -detail
ASM home: /u01/app/12.1.0/grid
Password file: +DATA/orapwASM
ASM listener: LISTENER
ASM is enabled.
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM


Bringing Down the ASM instance in RAC Database instance1

[oracle@oel6-112-rac1 Desktop]$ srvctl stop asm -node oel6-112-rac1 -stopoption abort -force

Check the status of ASM instance in RAC Database instance1

[oracle@oel6-112-rac1 Desktop]$ srvctl status asm
PRCR-1070 : Failed to check if resource ora.asm is registered
Cannot communicate with crsd

Checking the status of cluster services in RAC Database instance1

[oracle@oel6-112-rac1 Desktop]$ crsctl check cluster
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

Checking the status of ASM & RAC Database in instance1

[oracle@oel6-112-rac1 Desktop]$ ps -ef | grep pmon
oracle    3813     1  0 17:40 ?        00:00:00 mdb_pmon_-MGMTDB
oracle    5806     1  0 17:42 ?        00:00:00 ora_pmon_orcl1
oracle    6193     1  0 17:42 ?        00:00:00 apx_pmon_+APX1

Note: here a database instance is normally served by the ASM instance running on its own node. If for some reason that ASM instance cannot be brought up, or its service goes down, the database instance can still be brought up, because with Flex ASM it will look for an ASM instance running elsewhere in the same cluster.

Check the status of RAC Database instance running without ASM instance in RAC database instance1

[oracle@oel6-112-rac1 Desktop]$ . oraenv
ORACLE_SID = [orcl1] ? orcl1
ORACLE_HOME = [/home/oracle] ? /u01/app/oracle/product/12.1.0/db_1
The Oracle base remains unchanged with value /u01/app/oracle

Log into Database instance from RAC database instance1

[oracle@oel6-112-rac1 Desktop]$ sqlplus /nolog

SQL*Plus: Release 12.1.0.1.0 Production on Wed Sep 25 18:24:36 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

SQL> connect sys/oracle@orcl as sysdba
Connected.

SQL> select instance_name,instance_number from gv$instance;

INSTANCE_NAME    INSTANCE_NUMBER
---------------- ---------------
orcl2                          2
orcl1                          1

SQL> select instance_name,instance_number from v$instance;

INSTANCE_NAME    INSTANCE_NUMBER
---------------- ---------------
orcl2                          2

Connecting to ASM instance of RAC Database instance2 from RAC Database instance1

[oracle@oel6-112-rac1 Desktop]$ . oraenv
ORACLE_SID = [orcl1] ? +ASM2
ORACLE_HOME = [/home/oracle] ? /u01/app/12.1.0/grid
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@oel6-112-rac1 Desktop]$ asmcmd --privilege sysasm --inst +ASM2
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576     15342     4782                0            4782              0             Y  DATA/
ASMCMD> exit

check how many ASM instances have been specified for use

# srvctl config asm | grep -i count
ASM instance count: 2

change the configuration to have 3 ASM instances active

# srvctl modify asm -count 3

# srvctl status asm -detail

 Comment 

Grid / Cloud Control Reports

on September 4, 2015 at 1:01 pm
Posted In: 12c General (Cat)

 

The 11g public reports page is:

http://<host>:<port>/em/public/reports

12c does not support this; Business Intelligence Publisher replaces the reports feature:

https://<host>:<port>/em/faces/core-bip-reports

 

 

 

 

 Comment 

2005 – list sql text

on September 4, 2015 at 10:45 am
Posted In: MS SQL Server (Cat)

DECLARE @sqltext varbinary(128)
SELECT @sqltext = sql_handle
FROM sys.sysprocesses
WHERE spid = <spid>          -- the session id you are interested in

SELECT TEXT FROM sys.dm_exec_sql_text(@sqltext)
GO

 

SELECT DEST.TEXT
FROM sys.[dm_exec_connections] SDEC
CROSS APPLY sys.[dm_exec_sql_text](SDEC.[most_recent_sql_handle]) AS DEST
WHERE SDEC.[most_recent_session_id] = <session_id>

List sessions:

SELECT TOP 50 * FROM(SELECT COALESCE(OBJECT_NAME(s2.objectid),'Ad-Hoc') AS ProcName,
execution_count,s2.objectid,
(SELECT TOP 1 SUBSTRING(s2.TEXT,statement_start_offset / 2+1 ,
( (CASE WHEN statement_end_offset = -1
THEN (LEN(CONVERT(NVARCHAR(MAX),s2.TEXT)) * 2)
ELSE statement_end_offset END)- statement_start_offset) / 2+1)) AS sql_statement,
last_execution_time
FROM sys.dm_exec_query_stats AS s1
CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2 ) x
WHERE sql_statement NOT LIKE 'SELECT TOP 50 * FROM(SELECT %'
--and OBJECTPROPERTYEX(x.objectid,'IsProcedure') = 1
ORDER BY last_execution_time DESC

 

 Comment 

2005 List Active Sessions

on September 4, 2015 at 9:47 am
Posted In: MS SQL Server (Cat)

LIST ACTIVE SESSIONS – sp_who

 

spid  ecid  status      loginame             hostname         blk  dbname         cmd               request_id
----  ----  ----------  -------------------  ---------------  ---  -------------  ----------------  ----------
1     0     background  sa                                    0    NULL           RESOURCE MONITOR  0
2     0     suspended   sa                                    0    NULL           LOG WRITER        0
51    0     sleeping    STORES\RLittle       CC-P-AOMAPP      0    master         AWAITING COMMAND  0
52    0     sleeping    FOUNDATION           ACC-JY           0    Foundation     AWAITING COMMAND  0
53    0     sleeping    CRM_HOST_LIVE        WEB-GB           0    CRM_HOST_LIVE  SELECT            0
54    0     sleeping    NT AUTHORITY\SYSTEM  CC-P-AOMDB       0    msdb           AWAITING COMMAND  0
55    0     sleeping    Admin                CCUK00130T001    0    CRM_HOST_LIVE  AWAITING COMMAND  0
56    0     sleeping    bte_ideradba         VM0227           0    master         AWAITING COMMAND  0
57    0     sleeping    NT AUTHORITY\SYSTEM  CC-P-AOMDB       0    msdb           AWAITING COMMAND  0
58    0     sleeping    NT AUTHORITY\SYSTEM  CC-P-AOMDB       0    ReportServer   AWAITING COMMAND  0
59    0     sleeping    CRM_HOST_LIVE        CC-P-AOMAPP      0    CRM_HOST_LIVE  AWAITING COMMAND  0
60    0     sleeping    FOUNDATION           VIRTUALXP-63912  0    Foundation     SELECT            0

 

LIST ACTIVE SESSIONS – T_sql

SELECT DISTINCT name AS database_name, session_id, host_name, login_time, login_name, reads, writes
FROM sys.dm_exec_sessions
LEFT OUTER JOIN sys.dm_tran_locks ON sys.dm_exec_sessions.session_id = sys.dm_tran_locks.request_session_id
INNER JOIN sys.databases ON sys.dm_tran_locks.resource_database_id = sys.databases.database_id
WHERE resource_type <> 'DATABASE'
AND name = 'CRM_HOST_LIVE'
ORDER BY name

 

 

database_name   session_id  host_name   login_time               login_name     reads  writes
-------------   ----------  ----------  -----------------------  -------------  -----  ------
CRM_HOST_LIVE   76          _ICT_CIM_5  2015-09-04 09:53:05.800  CRM_HOST_LIVE  3      11
CRM_HOST_LIVE   84          WEB-GB      2015-09-01 10:18:47.557  CRM_HOST_LIVE  0      0

 

GET EXCLUSIVE LOCKS – T_sql

SELECT DISTINCT name AS database_name, session_id, host_name, login_time, login_name, reads, writes
FROM sys.dm_exec_sessions
LEFT OUTER JOIN sys.dm_tran_locks ON sys.dm_exec_sessions.session_id = sys.dm_tran_locks.request_session_id
INNER JOIN sys.databases ON sys.dm_tran_locks.resource_database_id = sys.databases.database_id
WHERE resource_type <> 'DATABASE'
AND request_mode LIKE '%X%'
--AND name = 'CRM_HOST_LIVE'
ORDER BY name

Kill a blocking session with:

KILL <spid>

 

 Comment 

12c CDB PDB

on August 7, 2015 at 11:21 am
Posted In: 12c General (Cat)

Oracle Database 12c Release 1 (12.1) introduces the multitenant architecture.

This database architecture has a multitenant container database (CDB) that includes a root container, CDB$ROOT, a seed database, PDB$SEED, and multiple pluggable databases (PDBs).

To applications, each pluggable database appears as a separate database, equivalent to a pre-12c (release 11g) database.

The root container, CDB$ROOT, holds common objects that are accessible to every PDB utilizing metadata links or object links.

The seed database, PDB$SEED, is used when creating a new PDB to seed the new database.

The key benefit of the Oracle Database 12c multitenant architecture is that the database resources, such as CPU and memory, can be shared across all of the PDBs. This architecture also enables many databases to be treated as one for tasks such as upgrades or patches, and backups.
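
A few quick queries to see the containers from the root (standard 12c views; a sketch):

SQL> show pdbs
SQL> select con_id, name, open_mode from v$pdbs;
SQL> select name, cdb from v$database;          -- CDB column is YES for a container database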

 Comment 

Rman recovery – Advanced Scenarios

on August 5, 2015 at 2:35 pm
Posted In: 12c rman (cat)

Recovering a NOARCHIVELOG Database with Incremental Backups

STARTUP FORCE MOUNT;
RESTORE DATABASE FROM TAG "consistent_whole_backup";
RECOVER DATABASE NOREDO;
ALTER DATABASE OPEN RESETLOGS;

Restoring the Server Parameter File

STARTUP FORCE NOMOUNT;
RUN
{
ALLOCATE CHANNEL c1 DEVICE TYPE sbt PARMS …;
SET UNTIL TIME 'SYSDATE-7';
SET CONTROLFILE AUTOBACKUP FORMAT
FOR DEVICE TYPE sbt TO '/disk1/control_files/autobackup_%F';
SET DBID 123456789;
RESTORE SPFILE TO '/tmp/spfileTEMP.ora' FROM AUTOBACKUP MAXDAYS 10;
}

Restoring the Server Parameter File from a Control File Autobackup

SET DBID 320066378;
RUN
{
SET CONTROLFILE AUTOBACKUP FORMAT
FOR DEVICE TYPE DISK TO 'autobackup_format';
RESTORE SPFILE FROM AUTOBACKUP;
}

 

RESTORE SPFILE TO PFILE '/tmp/initTEMP.ora';

STARTUP FORCE PFILE='/tmp/initTEMP.ora';


 Comment 

Performing Complete Recovery of CDBs / PDBs

on August 5, 2015 at 2:14 pm
Posted In: 12c rman (cat)


Performing Complete Recovery of a Whole CDB

When you recover a whole CDB, you recover the root and all PDBs in a single operation.
RESTORE DATABASE;
RECOVER DATABASE;

Performing Complete Recovery of the Root

You might consider recovering only the root if a data corruption or user error occurs that affects only the root. However, Oracle strongly recommends that you recover all PDBs after recovering the root to prevent metadata inconsistencies among the root and the PDBs. In this case, it might be preferable to perform a complete recovery of the whole CDB.

SHUTDOWN IMMEDIATE
STARTUP MOUNT
RESTORE DATABASE ROOT;
RECOVER DATABASE ROOT;

(Strongly recommended) Recover all PDBs, including the seed PDB, with the RESTORE PLUGGABLE DATABASE and RECOVER PLUGGABLE DATABASE commands. The following example recovers the PDBs sales and hr:
RESTORE PLUGGABLE DATABASE 'PDB$SEED', sales, hr;
RECOVER PLUGGABLE DATABASE 'PDB$SEED', sales, hr;
Examine the output to see if media recovery was successful. If so, open the CDB and all PDBs:
ALTER DATABASE OPEN;
ALTER PLUGGABLE DATABASE ALL OPEN;

Performing Complete Recovery of PDBs with RMAN

There are two approaches to recovering a PDB with RMAN:
• Connect to the root and then use the RESTORE PLUGGABLE DATABASE and RECOVER PLUGGABLE DATABASE commands. This approach enables you to recover multiple PDBs with a single command.
• Connect to the PDB and use the RESTORE DATABASE and RECOVER DATABASE commands. This approach recovers only a single PDB and enables you to use the same commands used for recovering non-CDB databases.

To recover one or more PDBs while connected to the root:

ALTER PLUGGABLE DATABASE sales, hr CLOSE;

If any data files are missing, an error occurs and you cannot close a PDB. You must then connect to the PDB to which the missing data file belongs, take the missing data file offline, and then close the PDB.
The following command takes the data file 12 offline:
ALTER PLUGGABLE DATABASE DATAFILE 12 OFFLINE;

Issue the RESTORE PLUGGABLE DATABASE and RECOVER PLUGGABLE DATABASE commands.
The following example recovers the seed database PDB$SEED and the PDBs sales and hr:
RESTORE PLUGGABLE DATABASE 'pdb$seed', sales, hr;
RECOVER PLUGGABLE DATABASE 'pdb$seed', sales, hr;

Connect to the PDB to which the missing data file belongs and then make the data file online. The following command makes the data file 12 online:
ALTER DATABASE DATAFILE 12 ONLINE;

Examine the output to see if media recovery was successful. If so, open the PDBs.
ALTER PLUGGABLE DATABASE sales, hr OPEN;

To connect to and recover one PDB:

Connect to the PDB as a local user with SYSDBA system privilege.

ALTER PLUGGABLE DATABASE CLOSE;

ALTER DATABASE DATAFILE 12 OFFLINE;

RESTORE DATABASE;

RECOVER DATABASE;

ALTER DATABASE DATAFILE 12 ONLINE;

ALTER PLUGGABLE DATABASE OPEN;

Cloud Control can be used for Performing Complete Recovery of PDBs.

Cloud Control can be used for Performing Complete Recovery of tablespaces in a PDB.

Performing Complete Recovery of CDBs After Switching to a Copy

To switch a data file in a CDB, connect to the root and use the same steps that you would use for a non-CDB.
To switch a data file in a PDB, use one of the following techniques:
• Connect to the root and use the SWITCH … PLUGGABLE DATABASE or SWITCH DATAFILE command. This enables you to switch the data files for one or more PDBs.
• Connect to the PDB and use the SWITCH DATABASE or SWITCH DATAFILE command to switch data files in that PDB.

 

 Comment 

Some restore commands

on August 5, 2015 at 2:12 pm
Posted In: 12c rman (cat)

VALIDATE DATABASE;

If the output indicates that a data file is inaccessible. You can then run the REPORT SCHEMA command to obtain the tablespace name and file name for the data file.

RMAN> REPORT SCHEMA;

SELECT STATUS FROM V$INSTANCE;

SELECT FILE#, STATUS, ERROR, RECOVER, TABLESPACE_NAME, NAME FROM V$DATAFILE_HEADER WHERE RECOVER = 'YES' OR (RECOVER IS NULL AND ERROR IS NOT NULL);

SELECT FILE#, ERROR, ONLINE_STATUS, CHANGE#, TIME FROM V$RECOVER_FILE;

SELECT r.FILE# AS df#, d.NAME AS df_name, t.NAME AS tbsp_name, d.STATUS, r.ERROR, r.CHANGE#, r.TIME FROM V$RECOVER_FILE r, V$DATAFILE d, V$TABLESPACE t WHERE t.TS# = d.TS# AND d.FILE# = r.FILE#;

If you do not have a record of the DBID of your database, then you can find it in the following places without opening your database:

• The DBID is used in forming the file name for the control file autobackup. Locate this file.

By default, the format of the autobackup file for all configured devices is the substitution variable %F in the FORMAT clause. This variable format translates into c-IIIIIIIIII-YYYYMMDD-QQ, with the placeholders defined as follows:
IIIIIIIIII stands for the DBID.
YYYYMMDD is a time stamp of the day the backup is generated.
QQ is the hexadecimal sequence that starts with 00 and has a maximum of FF.
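
For example, an autobackup named c-659628168-20150805-02 (a hypothetical file name) decodes as DBID 659628168, generated on 5 August 2015, the third autobackup of that day (the sequence starts at 00).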

• If you have any text files that preserve the output from an RMAN session, then the DBID is displayed by the RMAN client when it starts up and connects to your database.

RESTORE DATABASE PREVIEW;

RESTORE ARCHIVELOG FROM TIME 'SYSDATE-7' PREVIEW;

RESTORE DATABASE PREVIEW SUMMARY;

RESTORE ARCHIVELOG ALL PREVIEW RECALL;

RUN
{
SET ARCHIVELOG DESTINATION TO '/oracle/temp_restore';
RESTORE ARCHIVELOG ALL;
}

RUN
{
SET ARCHIVELOG DESTINATION TO '/oracle/temp_restore';
RESTORE DATABASE;
RECOVER DATABASE; # restores and recovers logs automatically
}

RUN
{
# Set a new location for logs 1 through 100.
SET ARCHIVELOG DESTINATION TO '/fs1/tmp';
RESTORE ARCHIVELOG FROM SEQUENCE 1 UNTIL SEQUENCE 100;
# Set a new location for logs 101 through 200.
SET ARCHIVELOG DESTINATION TO '/fs2/tmp';
RESTORE ARCHIVELOG FROM SEQUENCE 101 UNTIL SEQUENCE 200;
# Set a new location for logs 201 through 300.
SET ARCHIVELOG DESTINATION TO '/fs3/tmp';
RESTORE ARCHIVELOG FROM SEQUENCE 201 UNTIL SEQUENCE 300;
# restore and recover data files as needed
.
.
}

 

RUN
{
SET NEWNAME FOR DATAFILE 2 TO '/disk2/df2.dbf';
SET NEWNAME FOR DATAFILE 3 TO '/disk2/df3.dbf';
SET NEWNAME FOR DATAFILE 4 TO '/disk2/df4.dbf';
RESTORE DATABASE;
SWITCH DATAFILE ALL;
RECOVER DATABASE;
}

To restore and recover a tablespace:

ALTER TABLESPACE users OFFLINE IMMEDIATE;

RESTORE TABLESPACE users;

RECOVER TABLESPACE users;

OR

RUN
{
# specify the new location for each datafile
SET NEWNAME FOR DATAFILE '/disk1/oracle/dbs/users01.f' TO '/disk2/users01.f';
SET NEWNAME FOR DATAFILE '/disk1/oracle/dbs/users02.f' TO '/disk2/users02.f';
SET NEWNAME FOR DATAFILE '/disk1/oracle/dbs/users03.f' TO '/disk2/users03.f';
RESTORE TABLESPACE users;
SWITCH DATAFILE ALL; # update control file with new file names
RECOVER TABLESPACE users;
}

ALTER TABLESPACE users ONLINE;

Performing Complete Recovery After Switching to a Copy

Switching to a Data File Copy

ALTER DATABASE DATAFILE 4 OFFLINE;
SWITCH DATAFILE 4 TO COPY;
RECOVER DATAFILE 4;
ALTER DATABASE DATAFILE 4 ONLINE;

To switch to a database copy and perform recovery:

SWITCH DATABASE TO COPY;
RECOVER DATABASE;
ALTER DATABASE OPEN;

 

 

 

 Comment 

Data Recovery Advisor

on August 5, 2015 at 1:41 pm
Posted In: 12c rman (cat)

Data Recovery Advisor and Oracle Real Application Clusters

In the current release, Data Recovery Advisor only supports single-instance databases. Oracle Real Application Clusters (Oracle RAC) databases are not supported.

Data Recovery Advisor and Oracle Data Guard

In a Data Guard environment, Data Recovery Advisor cannot do the following:
â– Use files transferred from a physical standby database to repair failures on a primary database
â– Diagnose and repair failures on a standby database
However, if the primary database is unavailable, then Data Recovery Advisor may recommend a failover to a standby database. After the failover you can repair the old primary database. If you are using Enterprise Manager Cloud Control in a Data Guard configuration, then you can initiate a failover through the Data Recovery Advisor recommendations page.

Data Recovery Advisor and CDBs

In the current release, Data Recovery Advisor can only be used to diagnose and repair data corruptions in non-CDBs and the root of a multitenant container database (CDB). Data Recovery Advisor is not supported for pluggable databases (PDBs).

Data Recovery Advisor Commands

RMAN>
LIST FAILURE;
LIST FAILURE 101 DETAIL;
LIST FAILURE LOW;
LIST FAILURE CLOSED;
LIST FAILURE EXCLUDE FAILURE 234234;
VALIDATE DATABASE;
VALIDATE CHECK LOGICAL SKIP INACCESSIBLE DATABASE;
ADVISE FAILURE;
ADVISE FAILURE LOW;    (similarly CRITICAL, HIGH, or ALL)
ADVISE FAILURE 101;
REPAIR FAILURE PREVIEW;
REPAIR FAILURE;
CHANGE FAILURE 101 PRIORITY LOW;
CHANGE FAILURE CLOSED;

 Comment 

Incarnations and PDB, CDB

on August 5, 2015 at 1:28 pm
Posted In: 12c rman (cat)

Relationship Among Database Incarnations

Database incarnations can stand in the following relationships to each other:

• The current incarnation is the one in which the database is currently operating.

• The incarnation from which the current incarnation branched following an OPEN RESETLOGS operation is the parent incarnation of the current incarnation.

• The parent of the parent incarnation is an ancestor incarnation. Any parent of an ancestor incarnation is also an ancestor of the current incarnation.

• The direct ancestral path of the current incarnation begins with the earliest incarnation and includes only the branches to an ancestor of the current incarnation, the parent incarnation, or the current incarnation.
An incarnation number is used to uniquely tag and identify a stream of redo.

Incarnations of PDBs

A pluggable database (PDB) incarnation is a subincarnation of the multitenant container database (CDB) and is expressed as (database_incarnation, pdb_incarnation). For example, if the CDB is incarnation 5, and a PDB is incarnation 3, then the fully specified incarnation number of the PDB is (5, 3). The initial incarnation of a PDB is 0. Subsequent incarnations are unique but not always sequential numbers.

The V$PDB_INCARNATION view contains information about all PDB incarnations.

Use the following query to display the current incarnation of a PDB:
select PDB_INCARNATION# from v$pdb_incarnation where STATUS = 'CURRENT' and CON_ID = PDB_container_id;

 

 

 

 

 

 Comment 

Maintaining the Fast Recovery Area

on August 5, 2015 at 12:54 pm
Posted In: 12c rman (cat)

Deletion Rules for the Fast Recovery Area

The fast recovery area holds both permanent and transient files. The following rules govern when files become eligible for deletion from the recovery area:
• Permanent files are never eligible for deletion.
• Files that are obsolete under the retention policy are eligible for deletion.
• Transient files that have been copied to tape are eligible for deletion.
• Archived redo logs are not eligible for deletion until all the consumers of the logs have satisfied their requirements; an archived redo log deletion policy determines when logs are eligible to be deleted. Consumers of logs can include RMAN, standby databases, Oracle Streams databases, and the Flashback Database feature.
• Foreign archived logs that have been mined by a LogMiner session on a logical standby database are eligible for deletion. Because it is generated from a different database than the current database, a foreign archived redo log has a different DBID from the current archived redo logs.

The safe and reliable way to control deletion from the fast recovery area is to configure your retention policy and archived log deletion policy. To increase the likelihood that files moved to tape are retained on disk, increase the fast recovery area quota.

Monitoring Fast Recovery Area Space Usage

SELECT * FROM V$RECOVERY_FILE_DEST;

SELECT * FROM V$RECOVERY_AREA_USAGE;

Managing Space for Flashback Logs in the Fast Recovery Area

You cannot manage the flashback logs in the fast recovery area directly other than by setting the flashback retention target or using guaranteed restore points. Nevertheless, you can manage fast recovery area space as a whole to maximize the space available for retention of flashback logs. In this way you increase the likelihood of achieving the flashback target.
To make space for flashback logs, back up the other contents of your fast recovery area to tape with commands such as BACKUP RECOVERY AREA, BACKUP BACKUPSET, and so on. Oracle Database automatically removes obsolete files from the fast recovery area. If offloading backups to tape still does not create enough space to satisfy the backup retention policy and flashback retention target, then allocate more space in the fast recovery area.

Responding to a Full Fast Recovery Area

If the RMAN retention policy requires keeping a set of backups larger than the fast recovery area disk quota, or if the retention policy is set to NONE, then the fast recovery area can fill completely with no reclaimable space.
The database issues a warning alert when reclaimable space is less than 15% and a critical alert when reclaimable space is less than 3%. To warn the DBA of this condition, an entry is added to the alert log and to the DBA_OUTSTANDING_ALERTS table (used by Enterprise Manager). Nevertheless, the database continues to consume space in the fast recovery area until there is no reclaimable space left.
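
A sketch of checking for these alerts from SQL*Plus:

SQL> select reason, object_name from dba_outstanding_alerts;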
When the recovery area is completely full, the error displayed is as follows, where nnnnn is the number of bytes required and mmmmm is the disk quota:
ORA-19809: limit exceeded for recovery files
ORA-19804: cannot reclaim nnnnn bytes disk space from mmmmm limit
You have several choices for how to resolve a full fast recovery area when no files are eligible for deletion:
• Make more disk space available and increase DB_RECOVERY_FILE_DEST_SIZE to reflect the additional space.
• Move backups from the fast recovery area to tertiary storage such as tape. One convenient way to back up all of your recovery area files to tape together is the BACKUP RECOVERY AREA command. After you transfer backups from the recovery area to tape, you can delete files from the fast recovery area. Flashback logs cannot be backed up outside the recovery area and are not backed up by BACKUP RECOVERY AREA.
• Run DELETE for any files that have been removed with an operating system utility. If you use host operating system commands to delete files, then the database is not aware of the resulting free space. You can run the RMAN CROSSCHECK command to have RMAN recheck the contents of the fast recovery area and identify expired files, and then use the DELETE EXPIRED command to delete every expired backup from the RMAN repository.
• Ensure that your guaranteed restore points are necessary. If not, delete them. Flashback logs that are not needed for a guaranteed restore point are deleted automatically to gain space for other files in the fast recovery area. A guaranteed restore point forces the retention of flashback logs required to perform Flashback Database to the restore point SCN.
• Review your backup retention policy and, if using Data Guard, your archived redo log deletion policy.

Disabling the Fast Recovery Area

ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='' SCOPE=BOTH SID='*';

Responding to an Instance Crash During File Creation

ORA-19816: WARNING: Files may exist in location that are not known to database.
In such a situation, use the RMAN command CATALOG RECOVERY AREA to recatalog any such files. If the file header of the file in question is corrupted, then delete the file manually with an operating system utility.

 

 

 Comment 

Making Database Connections with RMAN – CDB + PDB

on August 5, 2015 at 12:31 pm
Posted In: 12c rman (cat)

 

Keyword     Type of connection           Description

TARGET      Target database              A database to be backed up or recovered by RMAN.

CATALOG     Recovery catalog database    A database that provides an optional backup store for the RMAN
                                         repository, in addition to the control file.

AUXILIARY   Auxiliary instance or        A physical standby database, or a database instance created for
            auxiliary database           performing a specific task such as creating a duplicate database,
                                         transporting tablespaces, or performing tablespace point-in-time
                                         recovery (TSPITR). For many tasks that use an auxiliary database,
                                         RMAN creates an automatic auxiliary instance for use during the
                                         task, connects to it, performs the task, and then destroys it when
                                         the task is completed. You do not give any explicit command.

Authentication Using the Operating System

The following are the prerequisites for connecting to a database using operating system authentication (OS authentication):

• You must set the ORACLE_SID environment variable, specifying the system identifier (SID) for the database.
• You must be a member of the OSDBA operating system group to connect with the SYSDBA privilege or the OSBACKUPDBA operating system group to connect with the SYSBACKUP privilege.

On UNIX and Linux, the OSDBA group is typically named dba, and the OSBACKUPDBA group is typically named backupdba.

These names are assigned during database installation.
% rman target '"/ as sysdba"'
% rman target '"/ as sysbackup"'
% rman target /

If neither AS SYSBACKUP nor AS SYSDBA is specified in the connection string, then the default used is AS SYSDBA.

About Backup and Recovery of CDBs

You can perform RMAN operations on a whole CDB, the root only, or one or more PDBs. You make RMAN connections to CDBs according to the following rules:
• To perform operations on the whole CDB (for example, to back up the whole CDB), you connect as target to the root.

• To perform operations on the root only (for example, to back up the root), you connect as target to the root.

• To perform operations on a single PDB, you can connect as target either to the root or directly to the PDB.
  – If you connect to the root, you must use the PLUGGABLE DATABASE syntax in your RMAN commands. For example, to back up a PDB, you use the BACKUP PLUGGABLE DATABASE command.
  – If instead you connect directly to a PDB, you can use the same commands that you would use when connecting to a non-CDB. For example, to back up a PDB, you would use the BACKUP DATABASE command.

• To perform operations on two or more PDBs with a single command, you connect as target to the root. For example, to back up both the sales and hr PDBs, you connect to the root and submit:
BACKUP PLUGGABLE DATABASE sales, hr;

Restrictions When Connected to a PDB

The following operations are not available when you connect as target directly to a PDB:
• Back up archived logs
• Delete archived logs
• Delete archived log backups
• Restore archived logs (RMAN does restore archived logs when required during media recovery.)
• Point-in-time recovery (PITR)
• TSPITR
• Table recovery
• Duplicate database
• Flashback operations
• Running Data Recovery Advisor
• Report/delete obsolete
• Register database
• Import catalog
• Reset database
• Configuring the RMAN environment (using the CONFIGURE command)

Connecting Locally to the Root

rman target sys
target database Password: password
connected to target database: CDB (DBID=659628168)

Connecting to the Root with Operating System Authentication

rman target /
connected to target database: CDB (DBID=659628168)

Connecting to the Root with a Net Service Name

rman target c##bkuser@sales
target database Password: password
connected to target database: CDB (DBID=659628168)

Connecting as Target to a PDB

To connect as target to a PDB, you must:
• Connect with a net service name that resolves to a database service for that PDB.
• Connect as a local user or common user with the SYSDBA privilege.
The following example makes these assumptions:
• You want to perform RMAN operations on a PDB named hrpdb.
• The net service name hrpdb resolves to a database service for the hrpdb PDB.
• The local user hrbkup was created in the hrpdb PDB and granted the SYSDBA privilege.
rman target hrbkup@hrpdb
target database Password: password
connected to target database: CDB (DBID=659628168)

Specifying the Location of RMAN Output

By default, RMAN writes command output to standard output. To redirect output to a log file, enter the LOG parameter on the command line when you start RMAN, as in the following example:
# rman LOG /tmp/rman.log
In this case, RMAN displays command input but does not display the RMAN output. The easiest way to send RMAN output both to a log file and to standard output is to use the Linux tee command or its equivalent. For example, the following technique enables both input and output to be visible in the RMAN command-line interface:
# rman | tee rman.log

Setting Globalization Support Environment Variables for RMAN

Before invoking RMAN, it may be useful to set the NLS_DATE_FORMAT and NLS_LANG environment variables. These variables determine the format used for the time parameters in RMAN commands such as RESTORE, RECOVER, and REPORT.
The following example shows typical language and date format settings:
NLS_LANG=american
NLS_DATE_FORMAT='Mon DD YYYY HH24:MI:SS'

If you are going to use RMAN to connect to an unmounted database and mount the database later while RMAN is still connected, then set the NLS_LANG environment variable so that it also specifies the character set used by the database.
A database that is not mounted assumes the default character set, which is US7ASCII. If your character set is different from the default, then RMAN returns errors after the database is mounted.

For example, if the character set is WE8DEC, then to avoid errors, you can set the NLS_LANG variable as follows:
NLS_LANG=american_america.we8dec
For the environment variable NLS_DATE_FORMAT to be applied and override the defaults set for the server in the server initialization file, the environment variable NLS_LANG must also be set.

 

 Comment 

Backing Up CDBs and PDBs

on August 5, 2015 at 12:16 pm
Posted In: 12c rman (cat)

Backing Up a Whole CDB

Backing up a whole CDB is similar to backing up a non-CDB. When you back up a whole CDB, RMAN backs up the root, all the PDBs, and the archived redo logs. You can then recover either the whole CDB, the root only, or one or more PDBs from the CDB backup.

Connect to the root as a common user with the SYSBACKUP or SYSDBA privilege.

 Comment 

Backing up the database

on August 5, 2015 at 12:12 pm
Posted In: 12c rman (cat)

RMAN can create backups on disk as image copies or as backup sets.

You can override this default with the AS COPY or AS BACKUPSET clauses.

BACKUP AS COPY DEVICE TYPE DISK DATABASE;

BACKUP AS BACKUPSET DATABASE;

BACKUP AS BACKUPSET DEVICE TYPE DISK DATABASE;

BACKUP AS BACKUPSET DEVICE TYPE SBT DATABASE;

You can use FROM TAG to copy an image copy with a specific tag, and then use TAG to assign the output copy a different tag.

E.G. creates new copies of all image copies of the database that have the tag full_cold_copy and gives the new copies the tag new_full_cold_copy.
BACKUP AS COPY COPY OF DATABASE FROM TAG full_cold_copy TAG new_full_cold_copy;

Views to Identify Multisection Backups

Use the MULTI_SECTION column of the V$BACKUP_SET view or the recovery catalog view RC_BACKUP_SET to determine if a backup is a multisection backup. For multisection backups, the MULTI_SECTION column contains the value YES.

Views That Contain Metadata for Multisection Backups

The V$BACKUP_DATAFILE and RC_BACKUP_DATAFILE views provide information about the number of blocks in each section of a multisection backup. The BLOCKS column specifies the number of blocks in each multisection backup.
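
For example, a quick sketch of both checks:

SQL> select recid, multi_section from v$backup_set order by recid;
SQL> select file#, section_size, blocks from v$backup_datafile where section_size > 0;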

Making Multisection Backups Using Image Copies

RMAN enables you to create multisection backups using image copies.

Multisection backups provide better performance by using multiple channels to back up large files in parallel. Starting with Oracle Database 12c Release 1 (12.1), you can create multisection full backups that are stored as image copies. While the image copy is being created, multiple channels are used to write files sections. However, the output of this operation is one copy for each data file.
Use the SECTION SIZE clause to create multisection backups. If the section size that you specify is larger than the size of the file, then RMAN does not use multisection backups for that file. If you specify a small section size that would produce more than 256 sections, then RMAN increases the section size to a value that results in exactly 256 sections

BACKUP AS COPY SECTION SIZE 500M DATABASE;

About Manual Backups of the Control File

A manual backup of the control file is different from a control file autobackup.

RMAN makes a control file autobackup after the files specified in the BACKUP command are backed up. Thus, the autobackup—unlike a manual control file backup—contains metadata about the backup that just completed. Also, RMAN can automatically restore autobackups without the use of a recovery catalog.

You can make a manual backup of the current control file either as a backup set or as an image copy. For a backup set, RMAN first creates a snapshot control file for read consistency. You can configure the file name and location of the snapshot control file. A snapshot control file is not needed for an image copy.

In an Oracle Real Application Clusters (Oracle RAC) environment, the following restrictions apply:
• The snapshot control file location must be on shared storage, that is, storage that is accessible by all Oracle RAC instances.
• The destination of an image copy of the current control file must be shared storage.

 

 

 Comment 

basic Flashback and restore points commands and performance

on August 5, 2015 at 11:58 am
Posted In: 12c rman (cat)

CREATE RESTORE POINT before_upgrade;
CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;

LIST RESTORE POINT restore_point_name;
LIST RESTORE POINT ALL;

SELECT NAME, SCN, TIME, DATABASE_INCARNATION#, GUARANTEE_FLASHBACK_DATABASE,STORAGE_SIZE FROM V$RESTORE_POINT;

DROP RESTORE POINT before_app_upgrade;

ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=4320; # 3 days

ALTER DATABASE FLASHBACK ON;

ALTER TABLESPACE tbs_3 FLASHBACK OFF;
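
To confirm the result, a sketch using the standard views:

SQL> select flashback_on from v$database;
SQL> select name, flashback_on from v$tablespace;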

If 'flashback buf free by RVWR' is the top wait event, then you know that Oracle Database cannot write flashback logs quickly enough.

The V$FLASHBACK_DATABASE_STAT view shows the bytes of flashback data logged by the database. Each row in the view shows the statistics accumulated (typically over the course of an hour). The FLASHBACK_DATA and REDO_DATA columns describe bytes of flashback data and redo data written respectively during the time interval, while the DB_DATA column describes bytes of data blocks read and written. The columns FLASHBACK_DATA and REDO_DATA correspond to sequential writes, whereas DB_DATA column corresponds to random reads and writes.
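
For example, a minimal sketch of reading the view:

SQL> select begin_time, end_time, flashback_data, db_data, redo_data from v$flashback_database_stat order by begin_time;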

 

Because of the difference between sequential I/O and random I/O, a better indication of I/O overhead is the number of I/O operations issued for flashback logs. The V$SYSSTAT statistics shown can tell you the number of I/O operations that your instance has issued for various purposes.

physical write I/O request    The number of write operations issued for writing data blocks
physical read I/O request     The number of read operations issued for reading data blocks
redo writes                   The number of write operations issued for writing to the redo log
flashback log writes          The number of write operations issued for writing to flashback logs
flashback log write bytes     Total size in bytes of flashback database data written from this instance
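
A sketch of pulling the flashback-related counters:

SQL> select name, value from v$sysstat where name like 'flashback log%' or name = 'redo writes';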

When flashback is enabled or when there are guaranteed restore points, the background process RVWR writes flashback data to flashback database logs in the fast recovery area.

If RVWR encounters an I/O error, then the following behavior is expected:

• If there are any guaranteed restore points defined, then the instance fails when RVWR encounters I/O errors.

• If no guaranteed restore points are defined, then the instance remains unaffected when RVWR encounters I/O errors. Note the following: on a primary database, Oracle Database automatically disables Flashback Database while the database is open. All existing transactions and queries proceed unaffected. This behavior is expected for both single-instance and Oracle RAC databases.

 

 

 

 

 Comment 

Stored Scripts

on August 5, 2015 at 10:55 am
Posted In: 12c rman (cat)

You can use a stored script as an alternative to a command file for managing frequently used sequences of RMAN commands. The script is stored in the recovery catalog rather than on the file system.

create a stored script:

1.Start RMAN and connect to a target database and recovery catalog (if used).

2.Run the CREATE SCRIPT command.
CREATE SCRIPT full_backup
{    
BACKUP DATABASE PLUS ARCHIVELOG;
DELETE OBSOLETE;
}

For a global script, the syntax is similar:
CREATE GLOBAL SCRIPT global_full_backup
{    
BACKUP DATABASE PLUS ARCHIVELOG;
DELETE OBSOLETE;
}

Optionally, you can provide a COMMENT with descriptive information:
CREATE GLOBAL SCRIPT global_full_backup
COMMENT 'use only with ARCHIVELOG mode databases'
{    
BACKUP DATABASE PLUS ARCHIVELOG;
DELETE OBSOLETE;
}

You can also create a script by reading its contents from a text file. The file must begin with a left brace ({) character, contain a series of commands valid within a RUN block, and end with a right brace (}) character. Otherwise, a syntax error is signalled, just as if the commands were entered at the keyboard.
CREATE SCRIPT full_backup FROM FILE '/tmp/my_script_file.txt';

3.Examine the output.
If no errors are displayed, then RMAN successfully created the script and stored it in the recovery catalog.

Replacing Stored Scripts

To update stored scripts, use the REPLACE SCRIPT command. If you are replacing a local script, then you must be connected to the target database that you connected to when you created the script. If the script does not exist, then RMAN creates it.

To replace a stored script:
REPLACE SCRIPT full_backup
{
BACKUP DATABASE PLUS ARCHIVELOG;
}

You can update global scripts by specifying the GLOBAL keyword as follows:
REPLACE GLOBAL SCRIPT global_full_backup
COMMENT 'A script for full backup to be used with any database'
{
BACKUP AS BACKUPSET DATABASE PLUS ARCHIVELOG;
}

As with CREATE SCRIPT, you can update a local or global stored script from a text file with the following form of the command:
REPLACE GLOBAL SCRIPT global_full_backup FROM FILE '/tmp/my_script_file.txt';

Executing Stored Scripts

Use the EXECUTE SCRIPT command to run a stored script. If GLOBAL is specified, then a global script with this name must exist in the recovery catalog; otherwise, RMAN returns error RMAN-06004. If GLOBAL is not specified, then RMAN searches for a local stored script defined for the current target database. If no local script with this name is found, then RMAN searches for a global script by the same name and executes it if one is found.

If needed, use SHOW to examine your configured channels.
Your script uses the automatic channels configured at the time you execute the script. Use ALLOCATE CHANNEL commands in the script if you must override the configured channels. Because of the RUN block, if an RMAN command in the script fails, subsequent RMAN commands in the script do not execute.

RUN
{
EXECUTE SCRIPT full_backup;
}

The preceding command invokes a local script if one exists with the name specified. If no local script is found, but there is a global script with the name specified, then RMAN executes the global script.

You can also use EXECUTE GLOBAL SCRIPT to control which script is invoked if a local and a global script have the same name. If there is no local script called global_full_backup, the following two commands have the same effect:
RUN
{
EXECUTE GLOBAL SCRIPT global_full_backup;
}

RUN
{
EXECUTE SCRIPT global_full_backup;
}

Creating and Executing Dynamic Stored Scripts

You can specify substitution variables in the CREATE SCRIPT command. When you start RMAN on the command line, the USING clause specifies one or more values for use in substitution variables in a command file. As in SQL*Plus, &1 indicates where to place the first value, &2 indicates where to place the second value, and so on.

Create a command file that contains a CREATE SCRIPT statement with substitution variables for values that must be dynamically updated.
The following example uses substitution variables for the name of the tape set, for a string in the FORMAT specification, and for the name of the restore point.
CREATE SCRIPT quarterly {
ALLOCATE CHANNEL c1
DEVICE TYPE sbt
PARMS 'ENV=(OB_MEDIA_FAMILY=&1)';
BACKUP
TAG &2
FORMAT '/disk2/bck/&1%U.bck'
KEEP FOREVER
RESTORE POINT &3
DATABASE;
}

Connect RMAN to a target database (which must be mounted or open) and recovery catalog, specifying the initial values for the recovery catalog script.
For example, enter the following command:
# rman TARGET / CATALOG rco@catdb USING arc_backup bck0906 FY06Q3

A recovery catalog is required for KEEP FOREVER, but is not required for any other KEEP option.

Run the command file created in the first step to create the stored script.
For example, run the /tmp/catscript.rman command file as follows:
RMAN> @/tmp/catscript.rman
This step creates but does not execute the stored script.

Every quarter, execute the stored script, passing values for the substitution variables.
The following example executes the recovery catalog script named quarterly. The example specifies arc_backup as the name of the media family (set of tapes), bck1206 as part of the FORMAT string and FY06Q4 as the name of the restore point.
RUN
{
EXECUTE SCRIPT quarterly
USING arc_backup
bck1206
FY06Q4;
}

Printing Stored Scripts

PRINT SCRIPT full_backup;
PRINT SCRIPT full_backup TO FILE '/tmp/my_script_file.txt';
PRINT GLOBAL SCRIPT global_full_backup;
PRINT GLOBAL SCRIPT global_full_backup TO FILE '/tmp/my_script_file.txt';

Listing Stored Script Names

Use the LIST … SCRIPT NAMES command to display the names of scripts defined in the recovery catalog. LIST GLOBAL SCRIPT NAMES and LIST ALL SCRIPT NAMES are the only commands that work when RMAN is connected to a recovery catalog without connecting to a target instance; the other forms of the LIST … SCRIPT NAMES command require a recovery catalog connection.

LIST SCRIPT NAMES;
LIST GLOBAL SCRIPT NAMES;
LIST ALL SCRIPT NAMES;
For each script listed, the output indicates which target database the script is defined for (or whether a script is global).

Deleting Stored Scripts

If you use DELETE SCRIPT without GLOBAL, and there is no stored script for the target database with the specified name, then RMAN looks for a global stored script by the specified name and deletes the global script if it exists. For example, suppose you enter the following command:
DELETE SCRIPT 'global_full_backup';
In this case, RMAN looks for a script global_full_backup defined for the connected target database, and if it does not find one, it searches the global scripts for a script called global_full_backup and deletes that script.
To delete a global stored script, use DELETE GLOBAL SCRIPT:
DELETE GLOBAL SCRIPT 'global_full_backup';

Executing a Stored Script at RMAN Startup

To run the RMAN client and start a stored script in the recovery catalog on startup, use the SCRIPT argument when starting the RMAN client. For example, you could enter the following command to execute script /tmp/fbkp.cmd:
% rman TARGET / CATALOG rco@catdb SCRIPT '/tmp/fbkp.cmd';
You must connect to the recovery catalog, which contains the stored script, and the target database, to which the script applies, when starting the RMAN client.
If local and global stored scripts are defined with the same name, then RMAN always executes the local script.

 Comment 

rman useful commands

on August 5, 2015 at 10:42 am
Posted In: 12c rman (cat)

 

CATALOG DATAFILECOPY '/disk1/old_datafiles/01_01_2003/users01.dbf';
CATALOG ARCHIVELOG   '/disk1/arch_logs/archive1_731.dbf', '/disk1/arch_logs/archive1_732.dbf';
CATALOG BACKUPPIECE  '/disk1/backups/backup_820.bkp';
CATALOG START WITH   '/disk1/backups/';

 

 Comment 

Create a catalog, Register a database

on August 5, 2015 at 10:40 am
Posted In: 12c rman (cat)

Typical Recovery Catalog Space Requirements for 1 Year

Type of space                   Space requirement
SYSTEM tablespace               90 MB
Temp tablespace                 5 MB
Rollback or undo tablespace     5 MB
Recovery catalog tablespace     15 MB for each database registered in the recovery catalog
Online redo logs                1 MB each (three groups, each with two members)

To create the recovery catalog schema in the recovery catalog database:

Start SQL*Plus and connect with administrator privileges to the database containing the recovery catalog. In this example, the database is catdb.

Create a user and schema for the recovery catalog. For example, you could enter the following SQL statement

CREATE USER rco IDENTIFIED BY password  TEMPORARY TABLESPACE temp   DEFAULT TABLESPACE tools  QUOTA UNLIMITED ON tools; 

Grant the RECOVERY_CATALOG_OWNER role to the schema owner. This role provides the user with all privileges required to maintain and query the recovery catalog.

GRANT RECOVERY_CATALOG_OWNER TO rco; 

To create the recovery catalog:

Start RMAN and connect to the database that will contain the catalog. Connect to the database as the recovery catalog owner.

Run the CREATE CATALOG command to create the catalog. The creation of the catalog can take several minutes. If the catalog tablespace is this user’s default tablespace, then you can run the following command:

RMAN> CREATE CATALOG; 

You can specify the tablespace name for the catalog in the CREATE CATALOG command. For example:

RMAN> CREATE CATALOG TABLESPACE cat_tbs; 

You can check the results by using SQL*Plus to query the recovery catalog to see which tables were created:

SQL> SELECT TABLE_NAME FROM USER_TABLES; 

Registering a Database in the Recovery Catalog

About Standby Database Registration

In a Data Guard environment, the primary and standby databases share the same DBID and database name. To be eligible for registration in the recovery catalog, each database in the Data Guard environment must have different DB_UNIQUE_NAME values. The DB_UNIQUE_NAME parameter for a database is set in its initialization parameter file.

If you use RMAN in a Data Guard environment, then you can use the REGISTER DATABASE command only for the primary database. You can use the following techniques to register a standby database in the recovery catalog:

  • When you connect to a standby database as TARGET, RMAN automatically registers the database in the recovery catalog.

  • When you run the CONFIGURE DB_UNIQUE_NAME command for a standby database that is not known to the recovery catalog, RMAN automatically registers this standby database if its primary database is registered.
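
A sketch of the second technique, connected to the primary as target with the catalog (the DB_UNIQUE_NAME and connect identifier values are assumptions):

RMAN> CONFIGURE DB_UNIQUE_NAME 'standby1' CONNECT IDENTIFIER 'standby1_cs';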

Registering a Database with the REGISTER DATABASE Command

Start RMAN and connect to a target database and recovery catalog. The recovery catalog database must be open.

 

# rman TARGET / CATALOG rco@catdb; 

If the target database is not mounted, then mount or open it:

STARTUP MOUNT; 

Register the target database in the connected recovery catalog:

REGISTER DATABASE; 

RMAN creates rows in the catalog tables to contain information about the target database, then copies all pertinent data about the target database from the control file into the catalog, synchronizing the catalog with the control file.

Verify that the registration was successful by running REPORT SCHEMA:

REPORT SCHEMA;

Report of database schema for database with db_unique_name TRGT

List of Permanent Datafiles
===========================
File Size(MB)   Tablespace       RB segs Datafile Name
---- ---------- ---------------- ------- -------------------
1        307200 SYSTEM             NO    /oracle/oradata/trgt/system01.dbf
2         20480 UNDOTBS            YES   /oracle/oradata/trgt/undotbs01.dbf
3         10240 CWMLITE            NO    /oracle/oradata/trgt/cwmlite01.dbf
4         10240 DRSYS              NO    /oracle/oradata/trgt/drsys01.dbf
5         10240 EXAMPLE            NO    /oracle/oradata/trgt/example01.dbf
6         10240 INDX               NO    /oracle/oradata/trgt/indx01.dbf
7         10240 TOOLS              NO    /oracle/oradata/trgt/tools01.dbf
8         10240 USERS              NO    /oracle/oradata/trgt/users01.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    200      TEMP                 32767       /oracle/oradata/trgt/tbs_tmp.dbf

 

 

 Comment 

Backup the recovery catalog strategy

on August 5, 2015 at 10:27 am
Posted In: 12c rman (cat)

When backing up the recovery catalog database, you can use RMAN to make the backups. Start RMAN with the NOCATALOG option so that the repository for RMAN is the control file in the catalog database.


Follow these guidelines when developing an RMAN backup strategy for the recovery catalog database:

  • Run the recovery catalog database in ARCHIVELOG mode so that you can do point-in-time recovery if needed.

  • Set the retention policy to a REDUNDANCY value greater than 1.

  • Back up the database to two separate media (for example, disk and tape).

  • Run BACKUP DATABASE PLUS ARCHIVELOG at regular intervals, to a media manager if available, or just to disk.

  • Do not use another recovery catalog as the repository for the backups.

  • Configure the control file autobackup feature to ON.

With this strategy, the control file autobackup feature ensures that the recovery catalog database can always be recovered, so long as the control file autobackup is available.
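
The two configuration items above are one-liners in RMAN when connected to the catalog database as target:

RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;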

 

 Comment 

Virtual Private Catalogs

on August 5, 2015 at 10:23 am
Posted In: 12c rman (cat)

Every RMAN recovery catalog starting with Oracle Database 11g supports virtual private catalogs, but they are not used unless explicitly created. There is no restriction on the number of virtual private catalogs that can be created beneath one recovery catalog. Each virtual private catalog is owned by a database schema user different from the user who owns the recovery catalog.

After you set up a virtual private catalog user, the recovery catalog administrator grants that user the privilege to use the catalog for one or more databases that are currently registered in the recovery catalog. The administrator of the recovery catalog can also grant the privilege to register new databases while using a virtual private catalog.

Creating and Managing Virtual Private Catalogs with Oracle Database 12c Release 1 (12.1.0.2)

This section assumes that you created the base recovery catalog.

Assume that the following databases are registered in the base recovery catalog: prod1, prod2, and prod3. The database user who owns the base recovery catalog is rco. You want to create database user vpc1 and grant this user access privileges only to prod1 and prod2. Next, you want to connect to the virtual private catalog owned by vpc1, register the database prod1 with the virtual private catalog, and store backup metadata for prod1 in the virtual private catalog.

To create and grant privileges to a virtual private catalog owner:

1. Start SQL*Plus and connect to the recovery catalog database with administrator privileges.

2. Create the user who will own the virtual private catalog.
For example, if you want database user vpc1 to own the virtual private catalog, then execute the following command (replacing password with a user-defined password):
SQL> CREATE USER vpc1 IDENTIFIED BY password DEFAULT TABLESPACE vpcusers QUOTA UNLIMITED ON vpcusers;

3. Grant the CREATE SESSION privilege to the user and then exit SQL*Plus.
The following example grants the CREATE SESSION privilege to user vpc1:
SQL> GRANT CREATE SESSION TO vpc1;

4. Start RMAN and connect to the recovery catalog database as the base recovery catalog owner (not the virtual private catalog owner).
The following example connects to the base recovery catalog as rco:
RMAN> CONNECT CATALOG rco@catdb;

5. Grant the desired privileges to the virtual private catalog owner.
The following example grants user vpc1 access to the metadata for prod1 and prod2 (but not prod3):
RMAN> GRANT CATALOG FOR DATABASE prod1 TO vpc1;
RMAN> GRANT CATALOG FOR DATABASE prod2 TO vpc1;

You can also use a DBID instead of a database name. The virtual private catalog user does not have access to the metadata for any other databases registered in the recovery catalog.

You can also grant the user the ability to register new target databases in the recovery catalog. For example:
RMAN> GRANT REGISTER DATABASE TO vpc1;

To register a database with a virtual private catalog and store backup metadata:

1. Start RMAN and connect to the recovery catalog database as the virtual private catalog owner (not the base recovery catalog owner). Connect to the database that you want to register as TARGET.
RMAN> CONNECT TARGET /
RMAN> CONNECT CATALOG vpc1@catdb;

2. Register the database whose metadata must be stored in the virtual private catalog.
The following example registers the database with the virtual private catalog owner vpc1.
RMAN> REGISTER DATABASE;

3. Back up the database using the BACKUP command with the required clauses.
Metadata related to the backup is stored in the virtual private catalog.
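For example, a minimal sketch of step 3 (add FORMAT, TAG and channel clauses as your environment requires):

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;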

 Comment 

RMAN New Features

on August 5, 2015 at 10:03 am
Posted In: 12c rman (cat)

12.1.0.2

Oracle Virtual Private Database (VPD) for RMAN virtual private catalog

The RMAN recovery catalog is created and managed using VPD. This provides better performance and scalability when a large number of virtual private catalogs are created.

12.1.0.1

Support for multitenant container databases and pluggable databases

RMAN provides backup and recovery of multitenant container databases (CDBs), which are introduced in Oracle Database 12c Release 1 (12.1). This support includes backup and point-in-time recovery of specified pluggable databases (PDBs).

SYSBACKUP Privilege

The SYSBACKUP administrative privilege encompasses the permissions required for backup and recovery, including the ability to connect to a closed database. System administrators can grant SYSBACKUP instead of SYSDBA to users who perform backup and recovery, thus reducing the proliferation of the SYSDBA privilege. In contrast to SYSDBA, SYSBACKUP does not include data access privileges such as SELECT ANY TABLE.

Storage Snapshot Optimization

Storage Snapshot Optimization enables you to use third-party technologies to take a storage snapshot of your database without putting the database in BACKUP mode. You can then use the snapshot to recover all or part of the database.

SQL Interface Improvements

You can now issue most SQL commands in RMAN without preceding the command with the SQL keyword. For a few commands that exist in both RMAN and SQL and have very different uses, you can specify the SQL keyword to eliminate ambiguity. You no longer need to enclose the SQL command in quotes, which greatly simplifies the syntax when the SQL command itself requires quotation marks. The SQL ALTER command replaces the equivalent RMAN command.
The new RMAN DESCRIBE command provides the functionality of the SQL*Plus DESCRIBE command.
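For example, the following all work directly at the 12c RMAN prompt (the data file path is illustrative):

RMAN> DESCRIBE dba_data_files;
RMAN> SELECT tablespace_name FROM dba_tablespaces;
RMAN> ALTER TABLESPACE users ADD DATAFILE '/u01/oradata/trgt/users02.dbf' SIZE 100M;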

Multisection Backup Improvements

RMAN provides multisection backup support for incremental backups and image copies. Wherever possible, unused block compression and Block Change Tracking are used in conjunction with multisection incremental backups. This improves backup and restore performance.

Restoring and Recovering Files Over a Network

You can now restore or recover a database, data files, tablespaces, or control files by using backup sets from a physical standby database. RMAN transfers the backup sets, over the network, to the destination host. This is useful in a Data Guard environment when you want to synchronize the standby and primary databases.
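A minimal sketch, assuming an Oracle Net service name standby_db that resolves to the physical standby:

RMAN> RESTORE DATABASE FROM SERVICE standby_db;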

Active Database Duplication Improvements

RMAN can now perform active database duplication using backup sets. When sufficient auxiliary channels are allocated, the auxiliary instance connects to the target instance and retrieves the backup sets over the network, thus reducing the processing load on the target instance. Unused block compression can be used during the duplication process, thus reducing the size of backups transported over the network. You can specify the binary compression level to be used. You can also encrypt backups and use multisection backups while performing active database duplication.
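A minimal sketch of active duplication over backup sets (the duplicate database name is illustrative; compression, encryption and SECTION SIZE clauses can be added as needed):

RMAN> DUPLICATE TARGET DATABASE TO dupdb FROM ACTIVE DATABASE USING BACKUPSET;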

Cross-Platform Backup and Restore Improvements

You can transport data across platforms by using full and incremental backup sets. Incremental backups can reduce overall application downtime during cross-platform data migration.

Recovering Tables and Table Partitions from RMAN Backups

RMAN can recover tables and table partitions to a specified point in time from previously-created RMAN backups.
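A minimal sketch (owner, table, SCN and paths are illustrative):

RMAN> RECOVER TABLE scott.emp
      UNTIL SCN 1853289
      AUXILIARY DESTINATION '/u01/aux'
      REMAP TABLE scott.emp:emp_recovered;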

Unified auditing and RMAN

Unified auditing consolidates all the Oracle Database audit records into one single audit trail. To use unified auditing, you must first upgrade your database to Oracle Database 12c Release 1 (12.1) and then migrate your database to use unified auditing.

DUPLICATE enhancements

You can specify that the duplicate database must not be opened using RESETLOGS after it is created. You may prefer not to open the duplicate database if you want to change the initialization parameters of the duplicate database or if opening the duplicate database may start services in the duplicate database that will conflict with the source database.

 Comment 

Oracle Database Introduction

on August 5, 2015 at 9:39 am
Posted In: 12c General (Cat)

Short Summary from the Introduction to Oracle Database

A database management system (DBMS) is software that controls the storage, organization, and retrieval of data. Typically, a DBMS has the following elements:

  • Kernel code

    This code manages memory and storage for the DBMS.

  • Repository of metadata

    This repository is usually called a data dictionary.

  • Query language

    This language enables applications to access the data.

The relational model is the basis for a relational database management system (RDBMS). Essentially, an RDBMS moves data into a database, stores the data, and retrieves it so that applications can manipulate it.

Data Concurrency

A requirement of a multiuser RDBMS is the control of data concurrency, which is the simultaneous access of the same data by multiple users. Without concurrency controls, users could change data improperly, compromising data integrity. For example, one user could update a row while a different user simultaneously updates it.

If multiple users access the same data, then one way of managing concurrency is to make users wait. However, the goal of a DBMS is to reduce wait time so it is either nonexistent or negligible. All SQL statements that modify data must proceed with as little interference as possible. Destructive interactions, which are interactions that incorrectly update data or alter underlying data structures, must be avoided.

Oracle Database uses locks to control concurrent access to data. A lock is a mechanism that prevents destructive interaction between transactions accessing a shared resource. Locks help ensure data integrity while allowing maximum concurrent access to data.

Data Consistency

In Oracle Database, each user must see a consistent view of the data, including visible changes made by a user’s own transactions and committed transactions of other users. For example, the database must prevent dirty reads, which occur when one transaction sees uncommitted changes made by another concurrent transaction.

Oracle Database always enforces statement-level read consistency, which guarantees that the data that a single query returns is committed and consistent for a single point in time. Depending on the transaction isolation level, this point is the time at which the statement was opened or the time the transaction began. The Flashback Query feature enables you to specify this point in time explicitly.

The database can also provide read consistency to all queries in a transaction, known as transaction-level read consistency. In this case, each statement in a transaction sees data from the same point in time, which is the time at which the transaction began.
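For example, a Flashback Query against a hypothetical emp table (assuming sufficient undo retention):

SELECT * FROM emp AS OF TIMESTAMP systimestamp - INTERVAL '5' MINUTE;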


Physical Storage Structures

The physical database structures are the files that store the data. When you execute the SQL command CREATE DATABASE, the following files are created:

  • Data files

    Every Oracle database has one or more physical data files, which contain all the database data. The data of logical database structures, such as tables and indexes, is physically stored in the data files.

  • Control files

    Every Oracle database has a control file. A control file contains metadata specifying the physical structure of the database, including the database name and the names and locations of the database files.

  • Online redo log files

    Every Oracle Database has an online redo log, which is a set of two or more online redo log files. An online redo log is made up of redo entries (also called redo records), which record all changes made to data.

Many other files are important for the functioning of an Oracle database server. These include parameter files and networking files. Backup files and archived redo log files are offline files important for backup and recovery.

Logical Storage Structures

This section discusses logical storage structures. The following logical storage structures enable Oracle Database to have fine-grained control of disk space use:

  • Data blocks

    At the finest level of granularity, Oracle Database data is stored in data blocks. One data block corresponds to a specific number of bytes on disk.

  • Extents

    An extent is a specific number of logically contiguous data blocks, obtained in a single allocation, used to store a specific type of information.

  • Segments

    A segment is a set of extents allocated for a user object (for example, a table or index), undo data, or temporary data.

  • Tablespaces

    A database is divided into logical storage units called tablespaces. A tablespace is the logical container for a segment. Each tablespace consists of at least one data file.

Instance Memory Structures

Oracle Database creates and uses memory structures for program code, data shared among users, and private data areas for each connected user. The following memory structures are associated with an instance:

  • System Global Area (SGA)

    The SGA is a group of shared memory structures that contain data and control information for one database instance. Examples of SGA components include the database buffer cache and shared SQL areas. Starting in Oracle Database 12c Release 1 (12.1.0.2), the SGA can contain an optional In-Memory Column Store (IM column store), which enables data to be populated in memory in a columnar format.

  • Program Global Areas (PGA)

A PGA is a memory region that contains data and control information for a server or background process. Access to the PGA is exclusive to the process. Each server process and background process has its own PGA.

Multitenant Architecture

Oracle Database 12c supports the multitenant architecture, which enables an Oracle database to be a multitenant container database (CDB). A CDB is a single physical database that contains zero, one, or many user-created pluggable databases. A pluggable database (PDB) is a portable collection of schemas, schema objects, and nonschema objects that appears to an Oracle Net client as a non-CDB. A non-CDB is a traditional Oracle database that cannot contain PDBs.

Starting in Oracle Database 12c, you must create a database as either a CDB or non-CDB. You can plug a non-CDB into a CDB as a PDB. To move a PDB to a non-CDB, you must use Oracle Data Pump.

The figures in the source document show two separate non-CDB databases and the same data after being consolidated into the CDB named MYCDB.

Physically, MYCDB is an Oracle database. MYCDB has one database instance (although multiple instances are possible in Oracle Real Application Clusters) and one set of database files, just like a non-CDB.

MYCDB contains two PDBs: hrpdb and salespdb. As shown in Figure 1-3, these PDBs appear to their respective applications just as they did before database consolidation. To administer the CDB itself or any PDB within it, a CDB administrator can connect to the root container, which is a collection of schemas, schema objects, and nonschema objects to which all PDBs belong.

CDBs and non-CDBs have architectural differences. This manual assumes the architecture of a non-CDB unless otherwise indicated.

 

 Comment 

12.1.0.2 Major New features

on August 5, 2015 at 9:28 am
Posted In: 12c General (Cat)

 

This document lists the major new features identified by Oracle; it is not a comprehensive list of all the 12c new features (e.g. the RMAN VPD support for the catalog, covered earlier, is not listed here).

Advanced Index Compression

(Database Administrator’s Guide).

Advanced Index Compression improves the compression ratios significantly while still providing efficient access to the index.

APPROX_COUNT_DISTINCT()

(Database SQL Language Reference).

The new APPROX_COUNT_DISTINCT function returns approximate distinct-value counts significantly faster than an exact COUNT(DISTINCT ...) over large volumes of data.
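For example, against a hypothetical sales table:

SELECT APPROX_COUNT_DISTINCT(cust_id) AS approx_customers FROM sales;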

Attribute Clustering

(Database Data Warehousing Guide).

Attribute clustering is a table-level directive that clusters data in close physical proximity based on the content of certain columns. This directive applies to any kind of direct path operation, such as a bulk insert or a move operation.
Storing data that logically belongs together in close physical proximity can greatly reduce the amount of data to be processed and can lead to better compression ratios.
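A minimal sketch (table and column names are illustrative):

CREATE TABLE sales_ac (
  cust_id NUMBER,
  prod_id NUMBER,
  amount  NUMBER
)
CLUSTERING BY LINEAR ORDER (cust_id, prod_id);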

Automatic Big Table Caching

(Database VLDB and Partitioning Guide).

In previous releases, in-memory parallel query did not work well when multiple scans contended for cache memory. This feature implements a new cache called big table cache for table scan workload.

Flashback Data Archive (FDA) Support for Multitenant Container Databases

(Database Development Guide).

Flashback Data Archive (FDA) is supported for multitenant container databases (CDBs) in this release.

Pluggable Database – CONTAINERS Clause

(Database Administrator’s Guide).

The CONTAINERS clause is a new way of looking at multitenant container databases (CDBs). With this clause, data can be aggregated from a single identical table or view across many pluggable databases (PDBs) from the root container. The CONTAINERS clause accepts a table or view name as an input parameter that is expected to exist in all PDBs in that container. Data from a single PDB or a set of PDBs can be included with the use of CON_ID in the WHERE clause. For example:
SELECT ename FROM CONTAINERS(scott.emp) WHERE CON_ID IN (45, 49);

Pluggable Database – File Placement in OMF

(Database Administrator’s Guide)

The new parameter, CREATE_FILE_DEST, allows administrators to set a default location for Oracle Managed Files (OMF) data files in the pluggable database (PDB). When not set, the PDB inherits the value from the root container.
If a file system directory is specified as the default location, then the directory must already exist; Oracle does not create it. The directory must have appropriate permissions that allow Oracle to create files in it. Oracle generates unique names for the files. A file created in this manner is an Oracle-managed file.
The CREATE_FILE_DEST parameter allows administrators to structure the PDB files independently of the multitenant container database (CDB) file destination. This feature helps administrators to plug or to unplug databases from one container to another in a shared storage environment.

Pluggable Database – Logging Clause

(Database Administrator’s Guide).

The PDB LOGGING or NOLOGGING clause can be specified in a CREATE or ALTER PLUGGABLE DATABASE statement to set or modify the logging attribute of the pluggable database (PDB). This attribute is used to establish the logging attribute of tablespaces created within the PDB if the LOGGING clause was not specified in the CREATE TABLESPACE statement.
If a PDB LOGGING clause is not specified in the CREATE PLUGGABLE DATABASE statement, the logging attribute of the PDB defaults to LOGGING.
This new clause improves the manageability of PDBs in a multitenant container database (CDB).

Pluggable Database – Metadata Clone

(Database Administrator’s Guide).

An administrator can now create a clone of a pluggable database only with the data model definition. The dictionary data in the source is copied as is but all user-created table and index data from the source is discarded.
This feature enhances cloning functionality and facilitates rapid provisioning of development environments.

Pluggable Database – Remote Clone

(Database Administrator’s Guide).

The new release of Oracle Multitenant fully supports remote full and snapshot clones over a database link. A non-multitenant container database (CDB) can be adopted as a pluggable database (PDB) simply by cloning it over a database link. Remote snapshot cloning is also supported across two CDBs sharing the same storage.
This feature further improves rapid provisioning of pluggable databases. Administrators can spend less time on provisioning and focus more on other innovative operations.

Pluggable Database – Snapshot Cloning Additional Platform Support

(Database Administrator’s Guide).

With the initialization parameter CLONEDB set to true, snapshot clones of a pluggable database are supported on any local, Network File System (NFS) or clustered file system with Oracle Direct NFS (dNFS) enabled. The source of the clone must remain read-only while the target needs to be on a file system that supports sparseness.
Snapshot cloning support is now extended to other third-party vendor systems.
This feature eases the requirement of specific file systems for snapshot clones of pluggable databases. With file-system-agnostic snapshot clones, pluggable databases can be provisioned even faster than before.

Pluggable Database – STANDBYS Clause

(Database Administrator’s Guide).

The STANDBYS clause allows a user to specify whether a pluggable database (PDB) needs to be a part of the existing standby databases. The STANDBYS clause takes two values: ALL (the default) and NONE. When a PDB is created with STANDBYS=NONE, the PDB's data files are offlined and marked as UNNAMED on all of the standby databases. Any new standby database instantiated after the PDB has been created must explicitly disable the PDB for recovery to exclude it from the standby database. If a PDB needs to be enabled on a standby database after it was excluded there, its data files must be copied to the standby database from the primary database and the control file updated to reflect their paths; a user then connects to the PDB on that standby database and issues ALTER PLUGGABLE DATABASE ENABLE RECOVERY, which automatically onlines all of the data files belonging to the PDB.
This feature increases consolidation density by supporting different service-level agreements (SLAs) in the same multitenant container database (CDB).
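For example (PDB names are illustrative):

CREATE PLUGGABLE DATABASE devpdb FROM prodpdb STANDBYS=NONE;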

Pluggable Database – State Management Across CDB Restart

(Database Administrator’s Guide).

The SAVE STATE clause and DISCARD STATE clause are now available with the ALTER PLUGGABLE DATABASE SQL statement to preserve the open mode of a pluggable database (PDB) across multitenant container database (CDB) restarts.
If SAVE STATE is specified, open mode of specified PDB is preserved across CDB restart on instances specified in the INSTANCES clause. Similarly, with the DISCARD STATE clause, the open mode of specified PDB is no longer preserved.
These new SQL clauses provide the flexibility to choose the automatic startup of application PDBs when a CDB undergoes a restart. This feature enhances granular control and effectively reduces downtime of an application in planned or unplanned outages.
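For example:

ALTER PLUGGABLE DATABASE pdb1 OPEN;
ALTER PLUGGABLE DATABASE pdb1 SAVE STATE;
-- and to stop preserving the open mode:
ALTER PLUGGABLE DATABASE pdb1 DISCARD STATE;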

Pluggable Database – Subset Cloning

(Database Administrator’s Guide).

The USER_TABLESPACES clause allows a user to specify which tablespaces need to be available in the new pluggable database (PDB). A typical use is migration from a non-multitenant container database (CDB) in which schema-based consolidation was used to separate data belonging to multiple tenants, to a CDB in which each tenant's data is kept in a separate PDB: the USER_TABLESPACES clause helps to create one PDB for each schema in the non-CDB.
This powerful clause helps convert cumbersome schema-based consolidations to more agile and efficient pluggable databases.
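A minimal sketch (the source and tablespace names are illustrative):

CREATE PLUGGABLE DATABASE hrpdb FROM orclpdb USER_TABLESPACES=('HR_DATA');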

Full Database Caching

(Database Performance Tuning Guide).

Full database caching can be used to cache the entire database in memory.
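For example, force full database caching mode is enabled while the database is mounted but not open:

STARTUP MOUNT;
ALTER DATABASE FORCE FULL DATABASE CACHING;
ALTER DATABASE OPEN;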

In-Memory Aggregation

(Database SQL Tuning Guide).

In-Memory Aggregation optimizes queries that join dimension tables to fact tables and aggregate data (for example, star queries) using CPU and memory efficient KEY VECTOR and VECTOR GROUP BY aggregation operations. These operations may be automatically chosen by the SQL optimizer based on cost estimates.

In-Memory Column Store

(Database Administrator’s Guide).

In-Memory Column Store enables objects (tables or partitions) to be stored in memory in a columnar format. The columnar format enables scans, joins and aggregates to perform much faster than the traditional on-disk formats for analytical style queries.
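A minimal sketch: reserve column store memory (a static parameter, so a restart is needed) and then mark a table for in-memory population (the table name is illustrative):

ALTER SYSTEM SET inmemory_size = 2G SCOPE=SPFILE;
-- after restarting the instance:
ALTER TABLE sales INMEMORY;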

JSON Support

(XML DB Developer’s Guide).

This feature adds support for storing, querying and indexing JavaScript Object Notation (JSON) data to Oracle Database and allows the database to enforce that JSON stored in the Oracle Database conforms to the JSON rules. This feature also allows JSON data to be queried using a PATH based notation and adds new operators that allow JSON PATH based queries to be integrated into SQL operations.
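A minimal sketch (table, constraint and data are illustrative):

CREATE TABLE orders (
  id  NUMBER PRIMARY KEY,
  doc CLOB CONSTRAINT orders_doc_json CHECK (doc IS JSON)
);

INSERT INTO orders VALUES (1, '{"customer":"Smith","total":42}');

SELECT JSON_VALUE(doc, '$.customer') AS customer FROM orders WHERE id = 1;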

New FIPS 140 Parameter for Encryption

(Database Security Guide).

The new database parameter, DBFIPS_140, provides the ability to turn on and off the Federal Information Processing Standards (FIPS) 140 cryptographic processing mode inside the Oracle database.

Rapid Home Provisioning

(Clusterware Administration and Deployment Guide).

Rapid Home Provisioning allows deployment of Oracle homes based on gold images stored in a catalog of pre-created homes.

Zone Maps

(Data Warehousing Guide).

For full table access, zone maps allow I/O pruning of data based on the physical location of the data on disk, acting like an anti-index.
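A minimal sketch (zone maps are only supported on certain Oracle engineered systems; the table and columns are illustrative):

CREATE MATERIALIZED ZONEMAP sales_zmap ON sales (cust_id, prod_id);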

 Comment 

Ola Hallengren TSQL source

on August 5, 2015 at 8:44 am
Posted In: MS SQL Server (Cat)

Source for lots of TSQL:

http://ola.hallengren.com

 

 Comment 

Date Time

on August 5, 2015 at 8:42 am
Posted In: MS SQL Server (Cat)

SF_DATE(date, hours, mins) returns the given date with its time portion set to hours:mins (e.g. sf_date(d, 6, 10) gives d at 06:10), or NULL for the year-1900 sentinel date:

CREATE FUNCTION [dbo].[SF_DATE] (@pd_date DATETIME, @pn_hours SMALLINT, @pn_mins SMALLINT)
RETURNS DATETIME
AS
BEGIN
  -- Treat the year-1900 sentinel as "no date"
  IF ( YEAR(@pd_date) = 1900 ) RETURN( NULL )
  -- Rebuild the datetime from the dd/mm/yyyy date (style 103) plus the supplied time
  RETURN( CONVERT(DATETIME, CONVERT(VARCHAR(10), @pd_date, 103) + ' ' + CONVERT(VARCHAR, @pn_hours) + ':' + CONVERT(VARCHAR, @pn_mins), 103) )
END
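A quick usage sketch, assuming the function has been created in the current database:

SELECT dbo.SF_DATE(GETDATE(), 6, 10) AS morning_run
-- returns today's date with the time set to 06:10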

 

 Comment 

Calculate database space used for a database

on August 5, 2015 at 8:40 am
Posted In: MS SQL Server (Cat)

use dbname

select *, fileproperty(name, 'SpaceUsed') as Used from dbo.sysfiles

GO

 Comment 

Calculate database space used for all databases

on August 5, 2015 at 8:36 am
Posted In: MS SQL Server (Cat)

DECLARE @DBInfo TABLE 
(

ServerName VARCHAR(100), 
DatabaseName VARCHAR(100), 
FileSizeMB INT, 
LogicalFileName sysname, 
PhysicalFileName NVARCHAR(520), 
Status sysname, 
Updateability sysname, 
RecoveryMode sysname, 
FreeSpaceMB INT, 
FreeSpacePct VARCHAR(7), 
FreeSpacePages INT, 
PollDate datetime

)

 

DECLARE @command VARCHAR(5000)

 

SELECT @command = 'USE [?]
SELECT @@servername AS ServerName,
       ''?'' AS DatabaseName,
       CAST(sysfiles.size/128.0 AS int) AS FileSize,
       sysfiles.name AS LogicalFileName,
       sysfiles.filename AS PhysicalFileName,
       CONVERT(sysname, DatabasePropertyEx(''?'', ''Status'')) AS Status,
       CONVERT(sysname, DatabasePropertyEx(''?'', ''Updateability'')) AS Updateability,
       CONVERT(sysname, DatabasePropertyEx(''?'', ''Recovery'')) AS RecoveryMode,
       CAST(sysfiles.size/128.0 - CAST(FILEPROPERTY(sysfiles.name, ''SpaceUsed'') AS int)/128.0 AS int) AS FreeSpaceMB,
       CAST(100 * (CAST(((sysfiles.size/128.0 - CAST(FILEPROPERTY(sysfiles.name, ''SpaceUsed'') AS int)/128.0)/(sysfiles.size/128.0)) AS decimal(4,2))) AS varchar(8)) + ''%'' AS FreeSpacePct,
       GETDATE() AS PollDate
FROM dbo.sysfiles'

INSERT INTO @DBInfo 
(ServerName, DatabaseName, FileSizeMB,  LogicalFileName,  PhysicalFileName,  Status,  Updateability,  RecoveryMode, FreeSpaceMB,  FreeSpacePct,  PollDate)

EXEC sp_MSForEachDB @command 

SELECT  ServerName,  DatabaseName,  FileSizeMB,  LogicalFileName,  PhysicalFileName,  Status,  Updateability,  RecoveryMode,  FreeSpaceMB,  FreeSpacePct,  PollDate 
FROM @DBInfo 
ORDER BY  ServerName,  DatabaseName

===========================================

 Comment 

Preferred Credentials

on August 4, 2015 at 3:02 pm
Posted In: 12c Cloud control (Cat)

Preferred credentials are used by Grid control (GC) monitoring pages as well as batch jobs.

Using preferred credentials is a relatively easy way to manage database passwords in GC. Although it does not actually change the passwords in the relevant databases, it does mean that each GC user only needs to know their own account and password to access their systems.

 

In grid control

select Setup -> Security -> Preferred Credentials

Select the “HOST” credentials line.

Select “Manage Preferred credentials”.

Select the “Normal Host credentials” line.

Select “Set”

The following popup window appears.

Using the windows domain DBATask user, enter the relevant password and create a name for the credential.

Select “Save”

Select ALL the hosts in the list. Select “Set”

Use the named credential created earlier. Select “Save”

Modify any unix systems to use oracle username/password.

Repeat for “Database Instance credentials”

For which one can set

“Normal database credentials”,

“SYSDBA credentials” and

“Database Host credentials”

 

 Comment 

Account and Alerting setup

on August 4, 2015 at 12:01 pm
Posted In: 12c Cloud control (Cat)

 

Set up Administrator user accounts

In grid control as an administrator

 

Select “Setup -> Security -> Administrators”

 

Select the “Create” button

 

Enter the User name and password

Select the “Default” profile

Enter the E-mail address (used for alerting).

Select “Super Administrator”

Select “Next”

 

Select “Next”

 

Select “Finish”

Set up Monitoring user accounts

In grid control as an administrator

 

Select “Setup -> Security -> Administrators”

 

Select the “Create” button

 

Enter the User name and password

Select the “Default” profile

Enter the E-mail address (used for alerting).

Do NOT select “Super Administrator”

Select “Next”

 

Add the role “EM_ALL_VIEWER” to the default roles “EM_USER” and “PUBLIC” and then select “Next”

 

Select the following privileges and then “Next”

· Connect to any viewable target

· View all Java Service targets

· View Any Database Service

· View any Target

· Monitor Enterprise Manager

· View logs for Java Services

· View any Resource Provider

· View Cloud Home

 

Select “Next”

 

Select “Finish”

Set up user alerting

In grid control as an administrator

 

Select “Setup -> Incidents -> Incident Rules”

 

Select the rule set to be associated with the user.

In this case I have selected the default “Incident Management rule set for all targets” as we have not yet created any specific rule sets.

Select “E-mail another Administrator”

 

Select the search icon.

 

Select the radio button next to the user to be e-mailed.

Select “Select”

 

Select OK

 

The screen confirms that the user has been setup.

 

 Comment 

Modify Metrics

on August 4, 2015 at 10:34 am
Posted In: 12c Cloud control (Cat)

 

Modify a template metric

Select “Enterprise -> Monitoring -> Monitoring Templates

 

Select the template to be modified and then select “Edit”

 

Select the “Metric Thresholds” tab.

 

 

Find the metric to be modified.

Select the ‘3 pencils’ at the end of the line to edit the metric.

 

Select “Edit”

 

To disable the metric completely set all thresholds to blanks.

 

Here we have modified the number of occurrences to 5.

Select “Continue”

 

Select “Continue”

 

Select “Ok”

Apply a modified template

Please remember that where metrics have been modified for an individual target applying the template may overwrite those settings.

Select “Enterprise -> Monitoring -> Monitoring Templates

 

Select the template and then select “Apply…”

 

Ensure the radio button for “Template will only override metrics that are common to both template and target” is selected.

Select “Add”

 

Select the relevant targets and then select “Select”

 

Select “Ok”

 

Success screen.

 

Modifying Target specific metrics

Select a target; in this case we will use the host BTEMON1P as an example and modify the disk space metric such that the D drive can be 95% full before any warnings are issued.

 

 

 


Select Host -> Monitoring -> Metric and Collection Settings

 

 


Select a metric, e.g. “Filesystems -> Filesystem Space Available(%)

Select the Pencil icon at the end of the line.

 

 


Select “Add”

Select the ‘search’ symbol on the line that has just been added and select the D drive.

 

 


Select “Select”

 

 

 


Remove the “\” after the drive letter (looks like a bug with windows setup).

Set the new metric thresholds for Warning and Critical, e.g. 5% and 4% free.

Select “Continue”

 

The modified metric is now visible.

Select “Ok” to apply the changes.

 

 

 Comment 

Disable Compliance Alerting

on August 4, 2015 at 10:03 am
Posted In: 12c Cloud control (Cat)

In grid control as an administrator

Select “Enterprise -> Compliance -> Library”

Select the “Compliance Standards” tab.

Near the bottom of the list select the line with “Security Recommendations for Oracle Products”

Select “Override Target Type Settings…”

For “Compliance Standard Evaluation Status” select “Disabled”

Select “Ok”

 

 

 

 Comment 

Agent Known Issues

on August 3, 2015 at 3:15 pm
Posted In: Uncategorized

 


SR 3-11030109811 : GC 12c OMS unable to obtain metrics from newly installed agent

Agent is up and running.

No data is uploaded to grid control.

Grid control can see the host but nothing else; i.e. the databases, listeners etc. cannot be discovered.

The analysis was that the internal targets did not get added when the agent was installed. Manually adding the targets worked.

$AGENT_INST/bin/emctl status agent
$AGENT_INST/bin/emctl secure agent
$AGENT_INST/bin/emctl pingOMS
$AGENT_INST/bin/emctl upload agent
$AGENT_INST/bin/emctl config agent addinternaltargets
$AGENT_INST/bin/emctl status agent

SR 3-11063183161 : After adding agents on numerous hosts all listeners are shown as down

Agent is up and running.

Listener is up and running.

Grid control believes the listener is down.

The issue was that the hostname used as Machine Name under Grid Control “Oracle Listener” -> “Target Setup” -> “Monitoring Configuration” is linked to an ipv6 address based on the output of the ping command:

On the listener host:

# ping btemon2p.btexpedite.co.uk

Pinging BTEMON2P.btexpedite.co.uk [fe80::d79:a032:5bfc:3688%11] with 32 bytes of data:
Reply from fe80::d79:a032:5bfc:3688%11: time<1ms
Reply from fe80::d79:a032:5bfc:3688%11: time<1ms
Reply from fe80::d79:a032:5bfc:3688%11: time<1ms
Reply from fe80::d79:a032:5bfc:3688%11: time<1ms

Changing the Machine Name of the Listener in Grid Control “Oracle Listener” -> “Target Setup” -> “Monitoring Configuration” to the IP address on which it is registered overcame the issue.

 

 Comment 

Installing 12c Agent on Windows using PsExec

on August 3, 2015 at 2:57 pm
Posted In: 12c Cloud Control Agent (Cat)

 

References

Installing Oracle Management Agent in Silent Mode

Configuring Enterprise Manager for Firewalls

 

Introduction

This document is a guide to deploying an Oracle Management Agent to a Windows host, using Oracle Enterprise Manager Cloud Control 12c (EM12c) Release 5.

From EM12c Release 4 onwards, Oracle Management Agents can now be deployed on Windows hosts using PsExec from another Windows server, which acts as a staging host.

 

Windows staging server setup

Download the PsExec utility onto the Microsoft Windows staging host server from the following URL:

http://technet.microsoft.com/en-us/sysinternals/bb897553.aspx

export OMS_HOME=/u01/app/oracle/middleware/em12cr5/oms

$OMS_HOME/bin/emcli login -username=sysman -password=******

Run the following command from the OMS host to obtain the list of platforms for which Management Agent software is available for download:

$OMS_HOME/bin/emcli get_supported_platforms

Create a zip image for the Management Agent software:

export ZIP_LOC=/usr/bin/zip

$OMS_HOME/bin/emcli get_agentimage -destination=/swlib/win_x64_agent -platform=”Microsoft Windows x64 (64-bit)” -version=12.1.0.5.0

=== Partition Detail ===

Space free : 6 GB

Space required : 1 GB

Check the logs at /u01/app/oracle/middleware/gc_inst/em/EMGC_OMS1/sysman/emcli/setup/.emcli/get_agentimage_2015-06-25_11-09-33-AM.log

Downloading /swlib/win_x64_agent/12.1.0.5.0_AgentCore_233.zip

File saved as /swlib/win_x64_agent/12.1.0.5.0_AgentCore_233.zip

Downloading /swlib/win_x64_agent/12.1.0.5.0_PluginsOneoffs_233.zip

File saved as /swlib/win_x64_agent/12.1.0.5.0_PluginsOneoffs_233.zip

Downloading /swlib/win_x64_agent/unzip

File saved as /swlib/win_x64_agent/unzip

Agent Image Download completed successfully.

Transfer the following file to the Windows staging server:

/swlib/win_x64_agent/12.1.0.5.0_AgentCore_233.zip

Download the agentDeployPsExec.bat script onto the staging host, from the following location:

$OMS_HOME/sysman/prov/agentpush/scripts/agentDeployPsExec.bat

Create a response file to specify the Management Agent install inputs.

Create agent response file (agent.rsp) accordingly:

# Oracle Management Agent response file

# http://docs.oracle.com/cd/E24628_01/install.121/e22624/appdx_install_agent_psexec.htm#CIHCHIGA

HOST_NAMES=.company.co.uk

USER_NAME=\GCUser

PASSWORD=******

PSEXEC_DIR=E:\EM12c\staging\PSTools

AGENT_IMAGE_PATH=E:\EM12c\staging\core\12.1.0.5.0_AgentCore_233.zip

AGENT_BASE_DIR=E:\app\oracle\agent12c

AGENT_REGISTRATION_PASSWORD=******

EM_UPLOAD_PORT=4903

OMS_HOST=

REMOTE_SYS_DIR=C:\Windows

LOG_LOC=E:\EM12c\staging\logs

ADDITIONAL_PARAMETERS=AGENT_PORT=3872:b_startAgent=true:s_agentHomeName=agent12cR5


Target Windows Setup


Setup EMUSER account

Add the \GCUser account to the local Administrators group.

Modify Firewall properties

The Windows firewall settings need to be modified to allow communication between the agent and the OMS host.

Start the firewall gui.

Start -> Administrative Tools -> Windows Firewall with Advanced configuration

Modify the settings as shown in the following screenshots.

Right click on “Inbound Rules” and select “New Rule”

 

 

Enter the port numbers shown

 

 

Select

 

 

 

Enter the rule name as “Oracle management Server to Oracle Management agent”

 

 

Right click on “Outbound Rules” and select “New Rule”

 

 

Modify SQLNET.ORA

For security purposes the sqlnet.ora file in use on the agent host may have an entry such as:

TCP.INVITED_NODES = ( node1, node2, node3 )

This entry is used to restrict oracle tcp access to the host; if this entry exists then the grid control host must be added to the invited nodes list; e.g.

TCP.INVITED_NODES = ( node1, node2, node3, OEM )

 


Agent deployment

1. Logon onto the Windows EM12c staging server.

2. Update the following response file to include the target host name, credentials, and an appropriate installation directory for the agent (highlighted in yellow below):

E:\EM12c\staging\core\agent.rsp

Example contents:

HOST_NAMES=

USER_NAME=\GCUser

PASSWORD=password123

PSEXEC_DIR=E:\EM12c\staging\PSTools

AGENT_IMAGE_PATH=E:\EM12c\staging\core\12.1.0.5.0_AgentCore_233.zip

AGENT_BASE_DIR=E:\app\oracle\agent12c

AGENT_REGISTRATION_PASSWORD=password123

EM_UPLOAD_PORT=4903

OMS_HOST=

REMOTE_SYS_DIR=C:\Windows

LOG_LOC=E:\EM12c\staging\logs

ADDITIONAL_PARAMETERS=AGENT_PORT=3872:b_startAgent=true:s_agentHomeName=agent12cR5


3. Open a Command Prompt Window (as Administrator) and change directory to the PsExec tools folder:

cd /d E:\EM12c\staging\PSTools

4. Test connectivity to the target server using PsExec with the \GCUser account:

psexec.exe \\ -u \GCUser -p ****** -d -c E:\EM12c\staging\PSTools\\test.bat

PsExec v2.11 – Execute processes remotely

Copyright (C) 2001-2014 Mark Russinovich

Sysinternals – www.sysinternals.com

test.bat started on with process ID 5068.

E:\EM12c\staging\PSTools>

5. Assuming the test completes without any issues, run the following command from the staging host to install the agent on the selected Windows target host:

E:\EM12c\staging\PSTools>E:\EM12c\staging\core\agentDeployPsExec.bat PROPERTIES_FILE=E:\EM12c\staging\core\agent.rsp

E:\EM12c\staging\PSTools>echo off

===================================================================

Agent deployment started on host :

Creating installation base directory …

Copying unzip to the target host …

Copying agent software to the target host …

Extracting agent software …

Installing agent ….

Performing clean up …

Agent deployment completed successfully on host :

E:\EM12c\staging\PSTools>

You should see a message saying the deployment completed successfully.

The agent target is now ready to be configured in Oracle Enterprise Manager.



Known issues

Passwords with special characters

Special characters in the \GCUser password (or equivalent account) can lead to PsExec command failures. Should this occur, PsExec will produce an error like the one below:

The handle is invalid.

Connecting to …

Couldn’t access :

Account hasn’t been added to the local Administrators group on the target host

If the \GCUser account doesn’t have local administrator rights on the target server, you will get an access denied error when trying to connect via PsExec:

Couldn’t access

Access is denied.

 

 

 Comment 

Installing 12c Agent on Windows manually.

on August 3, 2015 at 2:50 pm
Posted In: 12c Cloud Control Agent (Cat)

 


References

Installing Oracle Management Agent in Silent Mode

Configuring Enterprise Manager for Firewalls

Setup the agent host

On the destination host determine the current Oracle BASE directory.

e.g. O:\Oracle

Create a temporary directory, e.g. EM12C, to hold the installation kit and log files.

e.g. O:\Oracle\EM12C

Copy the agent release kit software to the destination host and unzip the file.

i.e. copy 12.1.0.5.0_AgentCore_233.zip to O:\Oracle\EM12C

unzip 12.1.0.5.0_AgentCore_233.zip

This will create the directory O:\Oracle\EM12C\12.1.0.5.0_AgentCore_233

Modify agent installation response file

Inside the expanded release kit directory we will find the file agent.rsp which needs to have the following lines added at the end;

e.g. edit O:\Oracle\EM12C\12.1.0.5.0_AgentCore_233\agent.rsp

AGENT_IMAGE_PATH=O:\Oracle\EM12C\12.1.0.5.0_AgentCore_233\12.1.0.5.0_AgentCore_233.zip

AGENT_REGISTRATION_PASSWORD=

EM_UPLOAD_PORT=4903

OMS_HOST=

REMOTE_SYS_DIR=C:\Windows

LOG_LOC=O:\Oracle\EM12C\logs

ADDITIONAL_PARAMETERS=AGENT_PORT=3872:b_startAgent=true:s_agentHomeName=agent12cR5

Change text in bold as appropriate.

Install agent software only

Open a command window and change the default location to the installation kit directory.

cd /d O:\Oracle\EM12C\12.1.0.5.0_AgentCore_233

Execute the command to install the agent software.

agentDeploy.bat AGENT_BASE_DIR=O:\Oracle\agent12c OMS_HOST= EM_UPLOAD_PORT=4903 AGENT_REGISTRATION_PASSWORD= RESPONSE_FILE=O:\Oracle\EM12C\12.1.0.5.0_AgentCore_233\agent.rsp -softwareOnly

Change text in bold as appropriate.

Modify Firewall properties

The Windows firewall settings need to be modified to allow communication between the agent and the OMS host.

Start the firewall gui.

Start -> Administrative Tools -> Windows Firewall with Advanced configuration

Modify the settings as shown in the following screenshots.

Right click on “Inbound Rules” and select “New Rule”

 

Enter the port numbers shown

Select

Enter the rule name as “Oracle management Server to Oracle Management agent”

Right click on “Outbound Rules” and select “New Rule”

 

Modify SQLNET.ORA

For security purposes the sqlnet.ora file in use on the agent host may have an entry such as:

TCP.INVITED_NODES = ( node1, node2, node3 )

This entry is used to restrict oracle tcp access to the host; if this entry exists then the grid control host must be added to the invited nodes list; e.g.

TCP.INVITED_NODES = ( node1, node2, node3, OEM )

Configure the agent

Once the firewalls etc have been configured we can reconfigure the agent.

agentDeploy.bat AGENT_BASE_DIR=O:\Oracle\agent12c OMS_HOST= EM_UPLOAD_PORT=4903 AGENT_REGISTRATION_PASSWORD= RESPONSE_FILE=O:\Oracle\EM12C\12.1.0.5.0_AgentCore_233\agent.rsp -configOnly

 

 

 

 

 

 

 Comment 
  • Page 1 of 5
  • 1
  • 2
  • 3
  • 4
  • 5
  • »

©2006-2017 isitdevops.com/databases | Powered by WordPress with ComicPress | Subscribe: RSS | Back to Top ↑