Interesting password management document
What's New in Oracle Database 12c Password file?
select client_name,JOB_NAME,JOB_STATUS,JOB_START_TIME,JOB_DURATION from DBA_AUTOTASK_JOB_HISTORY where JOB_START_TIME >systimestamp -7 and client_name='sql tuning advisor';
select window_name, autotask_status, optimizer_stats from dba_autotask_window_clients;
select * from DBA_SCHEDULER_WINDOWS;
select window_name, resource_plan, REPEAT_INTERVAL, duration from dba_scheduler_windows;
select client_name,JOB_NAME,JOB_STATUS,JOB_START_TIME,JOB_DURATION from DBA_AUTOTASK_JOB_HISTORY where JOB_START_TIME >systimestamp -7;
auto optimizer stats collection
sql tuning advisor
auto space advisor
select client_name,JOB_NAME,JOB_STATUS,JOB_START_TIME,JOB_DURATION from DBA_AUTOTASK_JOB_HISTORY where JOB_START_TIME >systimestamp -7 and client_name='auto optimizer stats collection';
select JOB_START_TIME,JOB_DURATION from DBA_AUTOTASK_JOB_HISTORY where JOB_START_TIME >systimestamp -7 and client_name='auto optimizer stats collection';
1) Stop the replicat which replicates the data to database B.
2) info replicate_name_B, showch
3) send replicate_name_B status
4) info replicate_name_B, details
5) On the new GoldenGate home for database B, create the replicat.
6) Alter replicate_name_new_B with the extseqno and RBA.
7) start replicate_name_new_B
add the following 2 lines to .bash_profile
bind '"\e[A": history-search-backward' bind '"\e[B": history-search-forward'
Use FOR UPDATE when creating the MV, or,
if the tables are being replicated, exclude the materialized view from replication, e.g. with MAPEXCLUDE.
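A minimal sketch of the FOR UPDATE option (the base table and query here are illustrative, not the real DIM_MERCHANT_MV definition):

sqlplus / as sysdba <<'EOF'
-- FOR UPDATE makes the MV updateable, so a Replicat can apply DML to it
create materialized view acq.dim_merchant_mv
  for update
  as select * from acq.dim_merchant;   -- hypothetical base table
EOF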
ORA-31685: Object type MATERIALIZED_VIEW:"ACQ"."DIM_MERCHANT_MV" failed due to insufficient privileges. Failing sql is:
CREATE MATERIALIZED VIEW …..
Oracle Doc ID 1563217.1
select status, object_id, object_type, owner||'.'||object_name "OWNER.OBJECT" from dba_objects where object_name like '%DIM_MERCHANT_MV%' order by 4,2;
Ignore the error if the Materialized View already exists in the target database.
Also check if the schema on the source database still has the create materialized view privilege.
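A quick check for that privilege, run on the source database (grants received via a role are not shown by this view, so check dba_role_privs as well if it returns no rows):

sqlplus -s / as sysdba <<'EOF'
select grantee, privilege
from   dba_sys_privs
where  privilege = 'CREATE MATERIALIZED VIEW'
and    grantee   = 'ACQ';
EOF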
To enable the firing of triggers in the destination database by tables that are being replicated by GoldenGate, no matter which schema the trigger is defined in, we need to add instructions to the GG replicat file.
e.g.
Trigger cross_schema_ddl.trans_details_sbi_trig, which maintains fdn_central.trans_details_settle_batch_id whenever the rdr_central.trans_details column settle_batch_id is changed/created.
SQLEXEC "begin dbms_ddl.set_trigger_firing_property('CROSS_SCHEMA_DDL','TRANS_DETAILS_SBI_TRIG',FALSE); end;"
DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY (
trig_owner IN VARCHAR2, trig_name IN VARCHAR2, property IN INTEGER, setting IN BOOLEAN);
trig_owner | Schema of the trigger to set
trig_name  | Name of the trigger to set
property   | Property being set (FIRE_ONCE or APPLY_SERVER_ONLY)
setting    | Value of the property being set
DML triggers created on a table have their fire-once property set to TRUE. In this case, the triggers only fire when the table is modified by a user process, and they are automatically disabled inside Oracle processes maintaining either a logical standby database (SQL Apply) or doing replication (Streams Apply), and thus do not fire when a SQL Apply or Streams Apply process modifies the table. There are two ways to fire a trigger as a result of a SQL Apply or Streams Apply process making a change to a maintained table: (a) set the fire-once property of the trigger to FALSE, which allows it to fire both in the context of a user process and of a SQL or Streams Apply process, or (b) set the apply-server-only property to TRUE, making the trigger fire only in the context of a SQL Apply or Streams Apply process and not in the context of a user process.
FIRE_ONCE=TRUE, APPLY_SERVER_ONLY=FALSE
This is the default property setting for a DML trigger. The trigger only fires when a user process modifies the base table.
FIRE_ONCE=TRUE or FALSE, APPLY_SERVER_ONLY=TRUE
The trigger only fires when a SQL Apply or Streams Apply process modifies the base table. The trigger does not fire when a user process modifies the base table. Thus the apply-server-only property overrides the fire-once property of a trigger.
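A sketch of setting these properties directly in SQL*Plus; the trigger names are the ones from the example above, and the second call assumes the 12c overload that takes a property constant:

sqlplus / as sysdba <<'EOF'
-- fire-once = FALSE: fire for user processes AND apply processes
exec dbms_ddl.set_trigger_firing_property('CROSS_SCHEMA_DDL','TRANS_DETAILS_SBI_TRIG', FALSE);
-- 12c overload: fire ONLY for apply processes
exec dbms_ddl.set_trigger_firing_property('CROSS_SCHEMA_DDL','TRANS_DETAILS_SBI_TRIG', dbms_ddl.apply_server_only, TRUE);
EOF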
To ensure CFEDWH and CFEBIDEMO constraints are the same:
To disable the constraints
Run the following on CFEDWH
set lines 256
set pages 0
set head off
SELECT 'alter table '||owner || '.' ||table_name||' DISABLE NOVALIDATE constraint '||constraint_name||';' FROM dba_constraints WHERE OWNER = 'ACQ' and constraint_name not like 'SYS_%' and constraint_name not like 'BIN%';
And run the output from above on CFEBIDEMO
Run the following on CFEDWH
set lines 256
set pages 0
set head off
SELECT 'alter table '||owner || '.' ||table_name||' ' || substr(status,1,length(status)-1) || ' ' || decode(VALIDATED,'VALIDATED',' VALIDATE ', 'NOT VALIDATED', ' NOVALIDATE ') ||
' constraint '||constraint_name||';' FROM dba_constraints WHERE OWNER = 'ACQ' and constraint_name not like 'SYS_%' and constraint_name not like 'BIN%';
And run the output from above on CFEBIDEMO
Refer to RMAN-08137: WARNING: Archived Log Not Deleted, Needed For Standby Or Upstream Capture Process (Doc ID 1993799.1)
Caused by an extract not being deleted cleanly.
SQL> SELECT CAPTURE_NAME, CAPTURE_TYPE, STATUS, to_char(REQUIRED_CHECKPOINT_SCN,'999999999999999') as REQ_SCN, to_char(OLDEST_SCN,'999999999999999') as OLDEST_SCN FROM DBA_CAPTURE;
CAPTURE_NAME CAPTURE_TY STATUS REQ_SCN OLDEST_SCN
———————– —————— ———– ————- ——————-
OGG$CAP_CFEAQEXT LOCAL DISABLED 201737048 201737048
SQL>
We can see that CFEAQEXT extract is still holding archive logs in the database.
To clean up :
$ GG
$ ggsci
ggsci> dblogin useridalias ggadmindc
Successfully logged into database.
ggsci> unregister extract CFEAQEXT database
2016-10-31 10:32:56 INFO OGG-01750 Successfully unregistered EXTRACT CFEAQEXT from database.
One should now be able to delete the archive logs using RMAN.
rman target /
RMAN> delete archivelog all completed before 'sysdate-1';
The errors in the report file are:
2016-11-03 16:22:51 WARNING OGG-00869 Oracle GoldenGate Delivery for Oracle, cfeaqrep.prm: OCI Error ORA-01732: data manipulation operation not legal on this view (status = 1732),
SQL
2016-11-03 16:22:51 WARNING OGG-01004 Oracle GoldenGate Delivery for Oracle, cfeaqrep.prm: Aborted grouped transaction on 'ACQ.TRANS_DETAILS_RECOGN_MV', Database error 1732
(OCI Error ORA-01732: data manipulation operation not legal on this view (status = 1732),
The issue is that, for Replicat, the materialized view must be updateable.
Options:
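For the exclusion option, a sketch of the relevant replicat parameter lines (file name and schema taken from the error above; MAPEXCLUDE is typically placed before the wildcard MAP):

cat >> dirprm/cfeaqrep.prm <<'EOF'
MAPEXCLUDE ACQ.TRANS_DETAILS_RECOGN_MV
MAP ACQ.*, TARGET ACQ.*;
EOF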
Import from live fails for central if TEMP is less than 215G
Import from live fails for central if UNDO is less than 380G
Shutdown all goldengate processes including the mgr.
alter table ggs_admin.ggs_ddl_hist enable row movement;
alter table ggs_admin.ggs_ddl_hist shrink space cascade;
alter table ggs_admin.ggs_ddl_hist disable row movement;
As soon as the database starts generating archive logs, execute
alter tablespace goldengate autoextend on maxsize 64g;
I did not wait for the shrink operations to complete.
Tried an export of the ACQ schema from cfedwh for GG to cfebidemo, but it kept failing with errors similar to this title.
The issue was that I had taken the SCN for the export, ETL had then been processed by development, and THEN I tried the export.
As the tables had changed since that SCN, the export was having issues.
Solution:
Take a new current_scn value from the database and immediately run the export.
e.g.
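A sketch of the idea (the schema, directory, and dump file names are illustrative):

scn=$(sqlplus -s / as sysdba <<'EOF' | tr -d '[:space:]'
set heading off feedback off
select current_scn from v$database;
EOF
)
# run the export straight away, consistent as of that SCN
expdp system schemas=ACQ directory=DATA_PUMP_DIR dumpfile=acq.dmp flashback_scn=$scn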
2016-12-13 13:45:34 ERROR OGG-02171 Oracle GoldenGate Delivery for Oracle, cfeaqrep.prm: Error reading LCR from data source. Status 509, data source type 0.
2016-12-13 13:45:34 ERROR OGG-02198 Oracle GoldenGate Delivery for Oracle, cfeaqrep.prm: Incompatible record (logical EOF) in trail file /backup/oracle/OGG/dirdat/cfedwh/AQ000000002, rba 110,658,340.
2016-12-13 13:45:39 ERROR OGG-01668 Oracle GoldenGate Delivery for Oracle, cfeaqrep.prm: PROCESS ABENDING.
Logdump> open /backup/oracle/OGG/dirdat/cfedwh/AQ000000002
Current LogTrail is /backup/oracle/OGG/dirdat/cfedwh/AQ000000002
Logdump> count
LogTrail /backup/oracle/OGG/dirdat/cfedwh/AQ000000002 has 147666 records
Total Data Bytes 238186214
Avg Bytes/Record 1613
GGSPKUpdate 147657
DDL 4
Metadata Records 4
Others 1
After Images 147661
Average of 10 Transactions
Bytes/Trans ..... 24527418
Records/Trans ... 14766
Files/Trans ..... 1
logdump> open /backup/oracle/OGG/dirdat/cfedwh/AQ000000002
logdump> pos 110658340
logdump> ghdr on
logdump> detail data on
logdump> usertoken detail
logdump> ggstoken detail
logdump> n
Bad record found at RBA 110658340, format 5.50 (Header token)
 000a ffff | ....
It seems that the record is corrupted; follow the note:
Resync / Recover Data ( Tables , Trails ) Trails Are Missing, Damaged, Incompatible, Corrupt or Suspect for Extract or Replicat (Doc ID 1614665.1)
select scn_to_timestamp (529512856) from dual;
info cfeaqext, showch
ALTER EXTRACT cfeaqext SCN 529512856
ALTER EXTRACT cfeaqext ETROLLOVER
info cfeaqext, showch
alter replicat CFEAQREP
ALTER REPLICAT CFEAQREP EXTSEQNO 133
ALTER REPLICAT CFEAQREP EXTRBA 0
start replicat CFEAQREP after scn value ok…
ALTER REPLICAT CFEAQREP EXTSEQNO 3
ALTER REPLICAT CFEAQREP EXTRBA 0
ADD EXTRACT cfeaqext integrated tranlog, begin now
register extract cfeaqext database
ADD EXTTRAIL /backup/oracle/OGG/dirdat/cfedwh/AQ EXTRACT cfeaqext MEGABYTES 250
start extract cfeaqext
info cfeaqext, showch
ALTER EXTRACT cfeaqext SCN 529512856
ALTER EXTRACT cfeaqext ETROLLOVER
info cfeaqext, showch
start extract cfeaqext
add replicat cfeaqrep integrated exttrail /backup/oracle/OGG/dirdat/cfedwh/AQ
start replicat cfeaqrep aftercsn 529512856
info cfeaqrep, detail
info cfeaqrep
start replicat cfeaqrep aftercsn 529512856
THEN
You can find the last applied CSN with: info replicat nnnn, DETAIL
example
=======
Integrated Replicat low watermark: 483311260
(All source transactions prior to this SCN have been applied)
Note down the above low watermark SCN: 483311260
Connect to the source DB.
Convert the SCN to a timestamp:
select scn_to_timestamp (483311260) from dual;
Alter the extract to capture from this timestamp or CSN value.
You need to use ALTER EXTRACT ... ETROLLOVER to capture into a new trail file.
alter replicat cfeaqrep, exttrail /backup/oracle/OGG/dirdat/cfedwh/AQ000000006
stop replicat cfeaqrep
alter replicat cfeaqrep extseqno 11, extrba 0
start replicat cfeaqrep aftercsn 662496036
From a Chef Workstation
$ knife bootstrap -x root -P password a.localhost.net -N a
To improve logging on the client
$ cd /etc/chef
vi client.rb
add the line "log_level :info"
$ chef-client # to run convergence
Reset sshd_config to remove root login
Open the chef server web page and login
https://chef-server/login
Administration Tab
Select the organisation you have created.
Select "Starter Kit" from the left hand side.
Select "Download Starter Kit" and save it somewhere safe; you only really want to do this once for ALL systems and users.
Copy the kit to the directory where you want your chef-repo to be, in this case it is the user home directory.
$unzip chef-starter.zip
Install the chef workstation software
$ curl -L https://www.opscode.com/chef/install.sh | sudo bash # install chef workstation
Test the connection to the server
$ cd chef-repo
$ knife node list
This may return nothing if we have not added any nodes yet.
If an SSL certificate error is generated, we can use the following command to fetch the Chef server certificate.
$ knife ssl fetch
and then retry the test
$ knife node list
$
CHEF SERVER
chef-server-ctl uninstall
opscode-analytics-ctl uninstall
As root
Download the package from https://downloads.chef.io/chef-server/.
rpm -Uvh chef-server-core-12.11.1-1.el7.x86_64.rpm
chef-server-ctl reconfigure
chef-server-ctl user-list
If you reset the Chef server address (etc.), remember that you need to run chef-server-ctl reconfigure.
To create a second version of the apache cookbook
$cd chef-repo/cookbooks/apache/attributes
$ vi default.rb AND ADD THE LINE
default["author"]["name"] = "PaulH"
$cd ../recipes
$vi default.rb ADD the author attribute
Change the index.html creation to:
template "/var/www/#{sitename}/index.html" do
$ cd ../templates/default
$ vi index.html.erb
add a heading line, e.g. an <h2> element
$ cd chef-repo
we will use a ruby file rather than a JSON file for this example
To get current recipe versions we can look at, for example, chef-repo/cookbooks/apache/metadata.rb, where we will see a version number defined.
$mkdir environments
$cd environments
$ vi devenv1.rb
name "devenv1"
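A minimal devenv1.rb sketch (the description and the version pin are illustrative):

cat > environments/devenv1.rb <<'EOF'
name "devenv1"
description "Development environment"
# pin this environment to a specific apache cookbook version
cookbook "apache", "= 0.2.0"
EOF
$ knife environment from file environments/devenv1.rb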
On Node
$ cd /etc/chef
Map workflow for chef.
Different cookbook versions for different environments,
e.g. development and production
Environment information can be stored in JSON or .rb files.
Located in chef-repo/environments
example for a dev
name "dev1"
A DATABAG is a GLOBAL variable stored in JSON - JavaScript Object Notation.
Type of data, e.g.:
User Account information
Sudo users
API / Database schema Credentials
etc
Can be encrypted
e.g. users on a system, sudo users, API/DB credentials etc.
$ mkdir {users,groups}
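A hedged example of creating a plain (unencrypted) data bag with one user item; the bag and item names are illustrative:

mkdir -p data_bags/users
cat > data_bags/users/deploy.json <<'EOF'
{
  "id": "deploy",
  "comment": "example user item"
}
EOF
knife data bag create users
knife data bag from file users data_bags/users/deploy.json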
Use
$ ohai
on the client to see the state.
OR use the chef server web interface.
knife search node key:pattern -a attribute
SHOW
====
$ knife node show cc -a apache
cc:
  apache:
    package: httpd
    sites:
      vchefh1h.com:
        domain:     vchefh1h.com
        port:       80
        site_title: VCHEFH1H
      vchefh2h.com:
        domain:     vchefh2h.com
        port:       80
        site_title: VCHEFH2H
$ knife node show cc -a apache.sites
cc:
  apache.sites:
    vchefh1h.com:
      domain:     vchefh1h.com
      port:       80
      site_title: VCHEFH1H
    vchefh2h.com:
      domain:     vchefh2h.com
      port:       80
      site_title: VCHEFH2H
$ knife node show -a apache.sites.name1
  apache.sites.name1:
$ knife node show -a apache.sites.name1.domain
  apache.sites.name1.domain: name1.phh.com
$ knife node show cc -a memory
cc:
  memory:
    active:           873276kB
    anon_pages:       910920kB
    bounce:           0kB
    buffers:          20kB
    cached:           586884kB
    commit_limit:     2989000kB
    committed_as:     4242948kB
    dirty:            8kB
    free:             104836kB
    hugepage_size:    2048kB
    hugepages:
      free:     0
      reserved: 0
      surplus:  0
      total:    0
    inactive:         633960kB
    mapped:           125484kB
    nfs_unstable:     0kB
    page_tables:      31924kB
    slab:             81084kB
    slab_reclaimable: 37532kB
    slab_unreclaim:   43552kB
    swap:
      cached: 10916kB
      free:   1863680kB
      total:  2097148kB
    total:            1783708kB
    vmalloc_chunk:    34359697400kB
    vmalloc_total:    34359738367kB
    vmalloc_used:     32728kB
    writeback:        0kB
$ knife node show cc -a memory.active
cc:
  memory.active: 873276kB
SEARCH
======
$ knife search node "os:linux"
1 items found
Node Name:   cc
Environment: _default
FQDN:        cc
IP:          10.32.9.34
Run List:    recipe[secdel], recipe[apache]
Roles:
Recipes:     secdel, secdel::default, apache, apache::default, php::default
Platform:    oracle 7.3
Tags:
$ knife search node "os:linux" -a memory.swap.free
1 items found
cc:
  memory.swap.free: 1863680kB
knife search node "platform:ubuntu" -a memory.swap.free
knife search node "platform:*" -a memory.swap.free
knife search node "kernel_release:3.2*"
knife search node "kernel_release:3.2*" -a kernel.release
knife search node "kernel_release:*" -a kernel.release
knife search node "os:linux AND platform:ubuntu" -a kernel
knife search node "os:linux AND platform_family:debian" -a platform
knife search node "os:linux AND platform_family:debian" -a platform.family
Searching for a nested value:
knife search node "os:linux AND platform_family:debian" -a platform_family
Create a repository on Git
Not using ssl, just username/password
$ cd chef-repo
$ git init
$ git add .
$ git commit -m "First chef repo"
$ git config --global user.name "My Name"
$ git config --global user.email my.email
$ git remote add origin https://github.com/mygitname/MyRepositoryName.git
$ git push -u origin master
We can now see all our chef-repo files in git.
Two clients running different OSes, Ubuntu and CentOS.
Enable root login on all systems for bootstrap
Ubuntu:
vi /etc/ssh/sshd_config
enable root login
service ssh restart
$ knife bootstrap a.b.com -x root -P -N
$ knife node run_list add nodename “recipe[apache]”
$ knife node run_list add nodename -b “recipe[apache]” “recipe[security]”
Now we need to modify the recipes to work for Ubuntu and CentOS.
vi cookbooks/apache/attributes/default.rb
default["apache"]["sites"]["nodename1"] = { "site_title" => "SITE ONE", "port" => 80, "domain" => "nodename1.domain.com" }
default["apache"]["sites"]["nodename2"] = { "site_title" => "SITE TWO", "port" => 80, "domain" => "nodename2.domain.com" }
default["apache"]["sites"]["nodename3"] = { "site_title" => "SITE THREE", "port" => 80, "domain" => "nodename3.domain.com" }
case node["platform"]
when "centos"
  default["apache"]["package"] = "httpd"
when "ubuntu"
  default["apache"]["package"] = "apache2"
end
vi cookbooks/apache/recipes/default.rb
#
# Cookbook Name:: apache
# Recipe:: default
#
# Copyright 2016, YOUR_COMPANY_NAME
#
# All rights reserved - Do Not Redistribute
#
if node["platform"] == "ubuntu"
  execute "apt-get update -y" do
  end
end
package "apache2" do
  package_name node["apache"]["package"]
end
node["apache"]["sites"].each do |sitename, data|
  document_root = "/var/www/#{sitename}"
  directory document_root do
    mode "0755"
    recursive true
  end
  if node["platform"] == "ubuntu"
    template_location = "/etc/apache2/sites-enabled/#{sitename}.conf"
  elsif node["platform"] == "centos"
    template_location = "/etc/httpd/conf.d/#{sitename}.conf"
  elsif node["platform"] == "oracle"
    template_location = "/etc/httpd/conf.d/#{sitename}.conf"
  end
  template template_location do
    source "vhost.erb"
    mode "0644"
    variables ({
      :document_root => document_root,
      :port => data["port"],
      :domain => data["domain"]
    })
    notifies :restart, "service[httpd]"
  end
  template "/var/www/#{sitename}/index.html" do
    source "index.html.erb"
    mode "0644"
    variables ({
      :site_title => data["site_title"],
      :tobewritten => "To Be Written"
    })
    notifies :restart, "service[httpd]"
  end
end
execute "rm /etc/httpd/conf.d/welcome.conf" do
  only_if do
    File.exist?("/etc/httpd/conf.d/welcome.conf")
  end
  notifies :restart, "service[httpd]"
end
execute "rm /etc/httpd/conf.d/README" do
  only_if do
    File.exist?("/etc/httpd/conf.d/README")
  end
  notifies :restart, "service[httpd]"
end
service "httpd" do
  service_name node["apache"]["package"]
  action [:enable, :start]
end
include_recipe "php::default"

To replace CRLF in vi
:set magic
:set nolist
Replace ",
:%s/,\n)/)/g
Can also use "
Add an ubuntu node to chef
$ knife bootstrap <ip> -x root -P <password> -N <unique node name>
$ knife node run_list add <unique node name> "recipe[apache]"
$ knife node run_list add <unique node name> -b "recipe[apache]" "recipe[security]"
Modify Apache recipe for different OS
$ cd cookbooks/apache/attributes
$ vi default.rb
default["apache"]["sites"]["nodename1"] = { "site_title" => "SITE ONE", "port" => 80, "domain" => "nodename1.domain.com" }
default["apache"]["sites"]["nodename2"] = { "site_title" => "SITE TWO", "port" => 80, "domain" => "nodename2.domain.com" }
default["apache"]["sites"]["anothernode"] = { "site_title" => "ANOTHERNODE", "port" => 80, "domain" => "anothernode.domain.com" }
case node["platform"]
when "centos"
  default["apache"]["package"] = "httpd"
when "ubuntu"
  default["apache"]["package"] = "apache2"
end
$ cd cookbooks/apache/recipes
$ vi default.rb
package "apache2" do
source "index.html.erb"
mode "0644"
variables ( :site_title => data["site_title"], :tobewritten => "To Be Written" )
notifies :restart, "service[httpd]"
Execute linux commands on a chef client.
There is a default file created as part of the Apache install; let's delete it!
/etc/httpd/conf.d/welcome.conf
$ vi ~/chef-repo/cookbooks/apache/recipes/default.rb
AND ADD
execute "rm /etc/httpd/conf.d/welcome.conf" do
  only_if do
    File.exist?("/etc/httpd/conf.d/welcome.conf")
  end
  notifies :restart, "service[httpd]"
end
execute "rm /etc/httpd/conf.d/README" do
  only_if do
    File.exist?("/etc/httpd/conf.d/README")
  end
  notifies :restart, "service[httpd]"
end
$ knife cookbook upload apache
etc
We will make a change to the php.ini file and ensure chef updates all nodes the next time a configuration run is performed.
Copy /etc/php.ini from a node to ~/cookbooks/php/files/default
Then make a change, e.g. add a line to the php.ini file
[PHP]
; php.ini managed by chef
etc

$ cd ~/cookbooks/php/recipes
$ vi default.rb
package "php" do
  action :install
end

cookbook_file "/etc/php.ini" do
  source "php.ini"
  mode "0644"
end

$ knife cookbook upload php
NODE
====
$ chef-client
Old file is saved as /var/chef/backup/etc/php.ini.chef-date.nnnn
Process to replace a smallfile temporary tablespace with a bigfile temp tablespace in ASM.

create temporary tablespace temp_temp tempfile size 31g;
alter database default temporary tablespace temp_temp;
SELECT a.tablespace, a.segfile#, a.segblk#, a.blocks, b.sid, b.serial#, b.username, b.osuser, b.status
FROM v$session b, v$sort_usage a WHERE b.saddr = a.session_addr;

alter system kill session 'SID_NUMBER,SERIAL#NUMBER';
drop tablespace temp including contents and datafiles;
create bigfile temporary tablespace temp tempfile size 220G autoextend on maxsize 240g;
alter database default temporary tablespace temp;
drop tablespace temp_temp including contents and datafiles;
Grid Infrastructure Management Repository Database (MGMTDB)
Default location is the same storage as the OCR and voting files.
MOS note 1589394.1 for moving MGMTDB, and MOS note 2065175.1 for the new MDBUtil.
BACKUP THE DATA
$ GRID_HOME/bin/oclumon dumpnodeview [[-allnodes] | [-n node1 node2 noden] [-last “duration”] | [-s “time_stamp” -e “time_stamp”] [-i interval] [-v]] [-h]
Stop the cluster health monitor – on EACH node
$ GRID_HOME/bin/crsctl stop res ora.crf -init
$ GRID_HOME/bin/crsctl modify res ora.crf -attr ENABLED=0 -init
$ srvctl status mgmtdb
$ GRID_HOME/bin/dbca -silent -deleteDatabase -sourceDB -MGMTDB
$ srvctl status mgmtdb
$GRID_HOME/bin/dbca -silent -createDatabase -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType ASM -diskGroupName {+NEW_DG} -datafileJarLocation $GRID_HOME/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -oui_internal
OR
$GRID_HOME/bin/dbca -silent -createDatabase -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType FS -datafileDestination {NEW_FS} -datafileJarLocation $GRID_HOME/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -oui_internal
$ srvctl status mgmtdb
ONLY THE CONTAINER DB IS OPEN, WE NOW NEED TO CREATE THE PLUGGABLE DATABASE.
$ olsnodes -c
to get the cluster name
Create the management repository pluggable database
$ GRID_HOME/bin/dbca -silent -createPluggableDatabase -sourceDB -MGMTDB -pdbName {pluggable_db_name} -createPDBFrom RMANBACKUP -PDBBackUpfile $GRID_HOME/assistants/dbca/templates/mgmtseed_pdb.dfb -PDBMetadataFile $GRID_HOME/assistants/dbca/templates/mgmtseed_pdb.xml -createAsClone true -internalSkipGIHomeCheck
SQL> select dbid,name,cdb from v$database;
$ GRID_HOME/bin/mgmtca
SQL> alter session set container=pluggable_db_name;
SQL> exec dbms_service.create_service('pluggable_db_name','pluggable_db_name');
SQL> exec dbms_service.start_service('pluggable_db_name');
$ GRID_HOME/bin/crsctl start res ora.crf -init
$ GRID_HOME/bin/oclumon manage -get reppath
MODIFY THE 12c SEED DATABASE
============================
This is an initial empty setup
SQL> select name, cdb from v$database;
ORCPDB1 YES
SQL> show pdbs
CON_ID CON_NAME   OPEN MODE  RESTRICTED
------ ---------- ---------- ----------
     2 PDB$SEED   READ ONLY  NO
SQL> alter session set "_oracle_script"=true;
SQL> shutdown immediate
SQL> startup
SQL> select name,open_mode from v$pdbs;
NAME            OPEN_MODE
--------------- ----------
PDB$SEED        READ WRITE
SQL> show con_name
PDB$SEED
SQL> create tablespace MYTBS1 datafile '/oradata/orpcdb1/pdbseed/mytbs1.dbf' size 200M;
SQL> alter database default tablespace MYTBS1;
SQL> create user devops identified by temp123 default tablespace MYTBS1 password expire;
SQL> shutdown immediate
SQL> startup open read only
SQL> select name,open_mode from v$pdbs;
NAME            OPEN_MODE
--------------- ----------
PDB$SEED        READ ONLY
SQL> alter session set "_oracle_script"=true;
SQL> show con_name
SQL> create pluggable database MYpdb1 admin user measadmin identified by measadmin file_name_convert=('/oradata/orpcdb1/pdbseed/','/oradata/orpcdb1/mypdb1/');
SQL> show pdbs
CON_ID CON_NAME                       OPEN MODE  RESTRICTED
------ ------------------------------ ---------- ----------
     2 PDB$SEED                       READ ONLY  NO
     3 MYPDB1                         MOUNTED
SQL> alter pluggable database Mypdb1 open;
The new database now already has the user and tablespace.
$ knife cookbook create php
$ cd cookbooks/php/recipes
$ vi default.rb
# found by using "yum search php"
package "php5" do
  action :install
end
$ cd ~/cookbooks/apache
$ vi metadata.rb   # and add
This ensures php is installed before any Apache install.
$ vi cookbooks/apache/attributes/default.rb
default["apache"]["sites"]["vchefh1h.com"] = { "site_title" => "VCHEFH1H", "port" => 80, "domain" => "vchefh1h.com" }
default["apache"]["sites"]["vchefh2h.com"] = { "site_title" => "VCHEFH2H", "port" => 80, "domain" => "vchefh2h.com" }

$ vi cookbooks/apache/templates/default/index.html.erb
Run convergence
$ chef-client
Add the addresses to the hosts file if not using dns
192.168.1.56 cc cc.localnet
192.168.1.56 vchefh1h.com vchefh1h
192.168.1.56 vchefh2h.com vchefh2h
Check the web pages
http://cc.localnet/
http://vchefh1h.com/
http://vchefh2h.com/
$ cd cookbooks/apache/
$ cd attributes
$ vi cookbooks/apache/attributes/default.rb
default["apache"]["sites"]["vchefh1h.com"] = { "port" => 80, "domain" => "vchefh1h.com" }
default["apache"]["sites"]["vchefh2h.com"] = { "port" => 80, "domain" => "vchefh2h.com" }
$ vi cookbooks/apache/recipes/default.rb
#
# Cookbook Name:: apache
# Recipe:: default
#
# Copyright 2016, YOUR_COMPANY_NAME
#
# All rights reserved - Do Not Redistribute
#
package "httpd" do
  action :install
end
node["apache"]["sites"].each do |sitename, data|
  document_root = "/var/www/#{sitename}"
  directory document_root do
    mode "0755"
    recursive true
  end
  template "/etc/httpd/conf.d/#{sitename}.conf" do
    source "vhost.erb"
    mode "0644"
    variables ({
      :document_root => document_root,
      :port => data["port"],
      :domain => data["domain"]
    })
    notifies :restart, "service[httpd]"
  end
end
service "httpd" do
  action [:enable, :start]
end
$ vi cookbooks/apache/templates/default/vhost.erb
ServerName <%= @domain %>
DocumentRoot <%= @document_root %>
ServerAlias
chef-client
$ cd /var/www/vchefh1h.com/
$ vi index.html
VCHEFH1H
$ cd /var/www/vchefh2h.com/
$ vi index.html
VCHEFH2H
$ cd chef-repo
$ knife cookbook create apache
$ cd cookbooks/apache/recipes
$ vi default.rb
#
# Cookbook Name:: apache
# Recipe:: default
#
# Copyright 2016, YOUR_COMPANY_NAME
#
# All rights reserved – Do Not Redistribute
#
package “httpd” do
action :install
end
service “httpd” do
action [:enable, :start]
end
$ knife node run_list add "recipe[apache]"
$ chef-client
$ knife cookbook create secdel
$ cd cookbooks/secdel/recipes
$ vi default.rb
file "/etc/chef/validation.pem" do
action :delete
end
$ knife cookbook upload secdel
$ knife node run_list add -b "recipe[apache]" "recipe[secdel]"
$ chef-client
Enable ssh
$ vi /etc/ssh/sshd_config
Uncomment "#PermitRootLogin yes"
$ service sshd restart
Download the starter kit from web page – ONLY DO THIS ONCE
Get chef-starter.zip from wherever we downloaded it to.
then
$ curl -L https://www.opscode.com/chef/install.sh | sudo bash # install chef workstation
$ knife bootstrap
# install chef agent on the client
ON CLIENT
$ cd /etc/chef
vi client.rb
add the line "log_level :info"
$ chef-client # to run convergence
Reset sshd_config to remove root login
This shows you the amount of undo size used by each session that is currently active.
select s.sid, s.username, sum(ss.value) / 1024 / 1024 as undo_size_mb from v$sesstat ss join v$session s on s.sid = ss.sid join v$statname stat on stat.statistic# = ss.statistic# where stat.name = 'undo change vector size' and s.type <> 'BACKGROUND' and s.username IS NOT NULL group by s.sid, s.username;
TOTAL TEMP USAGE
select b.Total_MB,
       b.Total_MB - round(a.used_blocks*8/1024) Current_Free_MB,
       round(used_blocks*8/1024)     Current_Used_MB,
       round(max_used_blocks*8/1024) Max_used_MB
from   v$sort_segment a,
       (select round(sum(bytes)/1024/1024) Total_MB from dba_temp_files) b;

  TOTAL_MB CURRENT_FREE_MB CURRENT_USED_MB MAX_USED_MB
---------- --------------- --------------- -----------
    188416           23036          165380      165381

session temp usage
col hash_value for a40
col tablespace for a10
col username for a15
set linesize 132 pagesize 1000
SELECT s.sid, s.username, u.tablespace, s.sql_hash_value||'/'||u.sqlhash hash_value, u.segtype, u.contents, u.blocks
FROM v$session s, v$tempseg_usage u
WHERE s.saddr=u.session_addr
order by u.blocks;

 SID USERNAME        TABLESPACE HASH_VALUE                               SEGTYPE   CONTENTS   BLOCKS
---- --------------- ---------- ---------------------------------------- --------- --------- -------
  23 SYS             TEMP       3732391352/3732391352                    DATA      TEMPORARY     128
  23 SYS             TEMP       3732391352/3732391352                    INDEX     TEMPORARY     128
:
357 SYS TEMP 1357698168/693605571 SORT TEMPORARY 861952
357 SYS TEMP 1357698168/693605571 SORT TEMPORARY 872704
357 SYS TEMP 1357698168/693605571 SORT TEMPORARY 931456
357 SYS TEMP 1357698168/693605571 SORT TEMPORARY 945280
select hash_value, sorts, rows_processed/executions
from   v$sql
where  hash_value in (select hash_value from v$open_cursor where sid=357)
and    sorts > 0
and    PARSING_SCHEMA_NAME='SYS'
order  by rows_processed/executions;
IMPDP
select * from dba_datapump_jobs;
select * from dba_datapump_sessions;
select username,opname,target_desc,sofar,totalwork,message from v$session_longops;
select sid, serial#, sofar, totalwork, dp.owner_name, dp.state, dp.job_mode
from gv$session_longops sl, gv$datapump_job dp
where sl.opname = dp.job_name and sofar != totalwork;
select sid, serial#
from   v$session s, dba_datapump_sessions d
where  s.saddr = d.saddr;
select sid, serial#, sofar, totalwork
from   v$session_longops;
select x.job_name,b.state,b.job_mode,b.degree
, x.owner_name,z.sql_text, p.message
, p.totalwork, p.sofar
, round((p.sofar/p.totalwork)*100,2) done
, p.time_remaining
from dba_datapump_jobs b
left join dba_datapump_sessions x on (x.job_name = b.job_name)
left join v$session y on (y.saddr = x.saddr)
left join v$sql z on (y.sql_id = z.sql_id)
left join v$session_longops p ON (p.sql_id = y.sql_id)
WHERE y.module like 'Data Pump %';
/* AND p.time_remaining > 0; */
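If you need to interact with a running job rather than just watch it, impdp can attach to it by job name (take the name from dba_datapump_jobs above; the one used here is just the usual default pattern):

impdp system attach=SYS_IMPORT_FULL_01
# at the Import> prompt:
#   status        -- show worker progress
#   parallel=4    -- change the degree of parallelism
#   stop_job      -- stop (can be restarted with another attach)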
#!/bin/bash
# expand aliases used inside this non-interactive script
shopt -s expand_aliases
#find . -name "*.aud" > /tmp/temp.txt
find . -type f -mtime +7 > /tmp/temp.txt
for i in $(cat /tmp/temp.txt)
do
  # skip files that are still held open by a process
  if ! [[ $(lsof | grep /u01/app/oracle/admin/accounts/adump/$i) ]]
  then
    # only move audit files
    if [[ $i == *.aud ]]
    then
      mv $i backuplogs
    fi
  fi
done
#tar -czf backuplogs/audits_${Dstamp} backuplogs/*.aud
#rm backuplogs/*.aud
Cluster name = olsnodes -c
VIP Names = olsnodes -n -i
srvctl config nodeapps -a
crsctl stat res -p
Private interconnect = olsnodes -n -i -l -p
oifcfg getif
oifcfg iflist -p -n
netstat -i | -n | -r | -rnv
ifconfig -a
dig
srvctl config scan
srvctl config scan_listener
Â
Â
http://unix.stackexchange.com/questions/1496/why-doesnt-my-bash-script-recognize-aliases
Aliases are not expanded when the shell is not interactive, unless the expand_aliases shell option is set using shopt (see the description of shopt under SHELL BUILTIN COMMANDS below).
So put a
shopt -s expand_aliases
in your script.
Make sure to source your aliases file after setting this in your script.
shopt -s expand_aliases
source ~/.bash_aliases
http://stackoverflow.com/questions/937716/how-do-you-send-the-output-of-ls-to-mv
$ ls | xargs -Ifile mv file file.txt
$ ls
a.txt b.txt c.txt x.txt y.txt z.txt
Map and convert 3 columns in the table that may have binary_double Infinity values.
MAP Src_schema.table, TARGET Dst_schema.table, COLMAP ( USEDEFAULTS,
  ENTRY_AMT = @IF ( @STRFIND( @STRUP(columnA) , 'INF') , 9999999999999, columnA),
  columnB   = @IF ( @STRFIND( @STRUP(columnB) , 'INF') , 9999999999999, columnB),
  columnC   = @IF ( @STRFIND( @STRUP(columnC) , 'INF') , 9999999999999, columnC));
select * from gV$SQL_BIND_CAPTURE where sql_id=
e.g.
select * from gV$SQL_BIND_CAPTURE where sql_id='080uc49jbt7m9';

select METADATA_TEXT from ggs_admin.ggs_ddl_hist where ddltype = 'CREATE';
Name           Null?     Type
-------------  --------  ---------------
SEQNO          NOT NULL  NUMBER
OBJECTID                 NUMBER
DATAOBJECTID             NUMBER
DDLTYPE                  VARCHAR2(130)
OBJECTNAME               VARCHAR2(400)
OBJECTOWNER              VARCHAR2(400)
OBJECTTYPE               VARCHAR2(130)
FRAGMENTNO     NOT NULL  NUMBER
OPTIME         NOT NULL  CHAR(19)
STARTSCN                 NUMBER
METADATA_TEXT  NOT NULL  VARCHAR2(4000)
AUDITCOL                 VARCHAR2(80)
Use DDLOPTIONS ADDTRANDATA to add supplemental logging automatically to tables created or altered by replicated DDL.
By default, truncate operations are not captured from the source or replicated to the target (the GETTRUNCATES parameter enables this).
Use the HANDLECOLLISIONS and NOHANDLECOLLISIONS parameters to control whether or not Replicat tries to resolve duplicate-record and missing-record errors when applying SQL on the target, as in the sketch below.
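A sketch of how HANDLECOLLISIONS is typically used for an initial-load replicat; the process, alias, and schema names are illustrative:

cat > dirprm/repinit.prm <<'EOF'
REPLICAT REPINIT
USERIDALIAS ggadmin
HANDLECOLLISIONS
MAP ACQ.*, TARGET ACQ.*;
EOF
# once source and target are reconciled, turn it off without a restart:
#   ggsci> send replicat REPINIT, NOHANDLECOLLISIONS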
CHCKPNTTABLE
CHCKPNTTABLE_LOX
CHKPTAB
CHKPTAB_LOX
GGS_DDL_COLUMNS
GGS_DDL_HIST
GGS_DDL_HIST_ALT
GGS_DDL_LOG_GROUPS
GGS_DDL_OBJECTS
GGS_DDL_PARTITIONS
GGS_DDL_PRIMARY_KEYS
GGS_DDL_RULES
GGS_DDL_RULES_LOG
GGS_MARKER
GGS_SETUP
GGS_STICK
GGS_TEMP_COLS
GGS_TEMP_UK
GG LOG FILE
———————–
Filename = ARMACREP.rpt
See the following error:
OGG-06510 Using the following key columns for target table CF_ACC.ACC_PRELOAD_DESTINATION: PAYER_ID, CURRENCY_CODE_ALPHA3.
2016-08-19 10:24:11 ERROR OGG-01296 Error mapping from CF_ACC.ACC_PRELOAD_DESTINATION to CF_ACC.ACC_PRELOAD_DESTINATION.
Last log location read:
FILE: /backup/oracle/OGG/accounts/dirdat/AC000000005
SEQNO: 5
RBA: 13671788
TIMESTAMP: 2016-08-18 13:46:10.993587
EOF: NO
READERR: 0
logdump> open /backup/oracle/OGG/accounts/dirdat/AC000000005
logdump> ghdr on
logdump> detail data on
logdump> ggstoken detail on
logdump> pos 13671788
logdump> n
___________________________________________________________________
Hdr-Ind    :     E  (x45)    Partition  :     .  (x0c)
UndoFlag   :     .  (x00)    BeforeAfter:     A  (x41)
RecLength  :    50  (x0032)  IO Time    : 2016/08/18 13:46:10.993.587
IOType     :   135  (x87)    OrigNode   :   255  (xff)
TransInd   :     .  (x03)    FormatType :     R  (x52)
SyskeyLen  :     0  (x00)    Incomplete :     .  (x00)
AuditRBA   :  11229          AuditPos   : 3576336
Continued  :     N  (x00)    RecCount   :     1  (x01)
2016/08/18 13:46:10.993.587 GGSUnifiedPKUpdate   Len    50 RBA 13671788
Name: CF_ACC.ACC_PRELOAD_DESTINATION  (TDR Index: 33)
After  Image:                                             Partition 12   G  s
 0000 0017 0000 000a 0000 0000 0000 0006 b8f4 0001 | ....................
 0005 ffff 0000 0000 0000 0a00 0000 0000 0000 06b8 | ....................
 ff00 0100 05ff ff00 0000                          | ..........
Before Image          Len    27 (x0000001b)
BeforeColumnLen       23 (x00000017)
Column     0 (x0000), Len    10 (x000a)
 0000 0000 0000 0006 b8f4                          | ..........
Column     1 (x0001), Len     5 (x0005)
 ffff 0000 00                                      | .....
After Image           Len    23 (x00000017)
Column     0 (x0000), Len    10 (x000a)
 0000 0000 0000 0006 b8ff                          | ..........
Column     1 (x0001), Len     5 (x0005)
 ffff 0000 00                                      | .....
GGS tokens:
TokenID x52 'R' ORAROWID         Info x00  Length   20
 4141 4161 4932 4141 4a41 4143 574c 4841 4141 0001 | AAAaI2AAJAACWLHAAA..
TokenID x4c 'L' LOGCSN           Info x00  Length    9
 3131 3035 3939 3232 35                            | 110599225
TokenID x36 '6' TRANID           Info x00  Length   10
 3231 2e39 2e32 3033 3934                          | 21.9.20394
TokenID x69 'i' ORATHREADID      Info x01  Length    2
 0002                                              | ..
Logdump 7 >n
___________________________________________________________________
Hdr-Ind    :     E  (x45)    Partition  :     .  (x0c)
UndoFlag   :     .  (x00)    BeforeAfter:     A  (x41)
RecLength  :    88  (x0058)  IO Time    : 2016/08/18 13:46:10.993.637
IOType     :   134  (x86)    OrigNode   :   255  (xff)
TransInd   :     .  (x00)    FormatType :     R  (x52)
SyskeyLen  :     0  (x00)    Incomplete :     .  (x00)
AuditRBA   :  11229          AuditPos   : 4526096
Continued  :     N  (x00)    RecCount   :     1  (x01)
2016/08/18 13:46:10.993.637 GGSUnifiedUpdate     Len    88 RBA 13671951
Name: CF_ACC.ACC_AGGREGATED_TRANSACTIONS  (TDR Index: 1)
After  Image:                                             Partition 12   G  b
 0000 002a 0000 000a 0000 0000 0000 042b 692c 0001 | ...*...........+i,..
 000a 0000 0000 0000 0000 001c 0002 000a 0000 0000 | ....................
 0000 0000 0013 0000 000a 0000 0000 0000 042b 692c | ..................+i,
 0001 000a 0000 0000 0000 0000 001d 0002 000a 0000 | ....................
 0000 0000 0000 0013                               | ........
Before Image          Len    46 (x0000002e)
BeforeColumnLen       42 (x0000002a)
Column     0 (x0000), Len    10 (x000a)
 0000 0000 0000 042b 692c                          | .......+i,
Column     1 (x0001), Len    10 (x000a)
 0000 0000 0000 0000 001c                          | ..........
Column     2 (x0002), Len    10 (x000a)
 0000 0000 0000 0000 0013                          | ..........
After Image           Len    42 (x0000002a)
Column     0 (x0000), Len    10 (x000a)
 0000 0000 0000 042b 692c                          | .......+i,
Column     1 (x0001), Len    10 (x000a)
 0000 0000 0000 0000 001d                          | ..........
Column     2 (x0002), Len    10 (x000a)
 0000 0000 0000 0000 0013                          | ..........
GGS tokens:
TokenID x52 'R' ORAROWID         Info x00  Length   20
 4141 4159 4570 4141 4741 414d 4d75 4f41 4153 0001 | AAAYEpAAGAAMMuOAAS..
TokenID x4c 'L' LOGCSN           Info x00  Length    9
 3131 3035 3939 3232 37                            | 110599227
TokenID x36 '6' TRANID           Info x00  Length   11
 3231 2e31 342e 3230 3339 35                       | 21.14.20395
TokenID x69 'i' ORATHREADID      Info x01  Length    2
 0002                                              | ..
logdump > n
SET MARKUP HTML ON SPOOL ON HEAD "LOGMNR - INFO"
SET ECHO ON
REM Spooling to html file
spool LOGMNR_Info1.html
show parameter comp;
select global_name, sysdate from global_name;
select * from v$version;
select * from v$nls_parameters where parameter in ('NLS_LANGUAGE', 'NLS_TERRITORY', 'NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
ALTER SESSION SET NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS';
set numformat 9999999999999999999999
exec dbms_logmnr.add_logfile(' Archive log path for sequence --> 11229 ');
exec dbms_logmnr.start_logmnr(options=>dbms_logmnr.dict_from_online_catalog);
select username, os_username, DATA_OBJ#, rbasqn, RBABLK, rbabyte, RS_ID, row_id, rollback, seg_name, seg_owner, operation, timestamp, sql_redo, scn, cscn from v$logmnr_contents where row_id='AAAaI2AAJAACWLHAAA';
EXECUTE DBMS_LOGMNR.END_LOGMNR;
spool off
SET MARKUP HTML OFF
SET ECHO OFF
extract EXTA
SETENV (NLS_LANG = ENGLISH_UNITED KINGDOM.AL32UTF8)
SETENV (ORACLE_HOME = "/u01/app/oracle/product/12.1.0.2/dba")
SETENV (ORACLE_SID = "dba")
USERIDALIAS ggadmin
TRANLOGOPTIONS DBLOGREADER
DISCARDFILE disc/dba.dsc, append, megabytes 500
EXTTRAIL /backup/oracle/gg/dirdat/AP
LOGALLSUPCOLS
UPDATERECORDFORMAT COMPACT
DDL INCLUDE ALL
DDLOPTIONS ADDTRANDATA
TABLEEXCLUDE *.DBMS**
TABLE PHH.*;
SEQUENCE PHH.*;
extract PMPA
SETENV (NLS_LANG = ENGLISH_UNITED KINGDOM.AL32UTF8)
SETENV (ORACLE_HOME = "/u01/app/oracle/product/12.1.0.2/dba")
SETENV (ORACLE_SID = "dba")
USERIDALIAS ggadmin
RmtHost nodeB, MgrPort 8880
RmtTrail /backup/oracle/gg/dirdat/AP
PASSTHRU
TABLE PHH.*;
SEQUENCE PHH.*;
REPLICAT REPB
SETENV (NLS_LANG = ENGLISH_UNITED KINGDOM.AL32UTF8)
SETENV (ORACLE_HOME = "/u01/app/oracle/product/12.1.0.2/dbb")
SETENV (ORACLE_SID = "dbb")
ASSUMETARGETDEFS
USERIDALIAS ggadmin
DISCARDFILE /backup/oracle/gg/disc/dbb.dsc, append, megabytes 100
DDL INCLUDE ALL
DDLOPTIONS UPDATEMETADATA
MAPEXCLUDE PHH.TABN
MAP PHH.*, TARGET PHH.*;
extract EXTB
SETENV (NLS_LANG = ENGLISH_UNITED KINGDOM.AL32UTF8)
SETENV (ORACLE_HOME = "/u01/app/oracle/product/12.1.0.2/DBB")
SETENV (ORACLE_SID = "DBB")
USERIDALIAS ggadmin
DISCARDFILE disc/DBB.dsc, append, megabytes 500
EXTTRAIL /backup/oracle/gg/dirdat/BP
LOGALLSUPCOLS
UPDATERECORDFORMAT COMPACT
DDL INCLUDE ALL
DDLOPTIONS GETREPLICATES, GETAPPLOPS, REMOVECOMMENTS BEFORE
TABLEEXCLUDE *.DBMS**
TABLE PHH.*;
SEQUENCE PHH.*;
ADD EXTRACT EXTA integrated tranlog, begin now
register extract EXTA database
ADD EXTTRAIL /backup/oracle/gg/dirdat/AP EXTRACT EXTA MEGABYTES 250
start extract EXTA
ADD EXTRACT PMPA EXTTRAILSOURCE /backup/oracle/gg/dirdat/AP
ADD RMTTRAIL /backup/oracle/gg/dirdat/AP EXTRACT PMPA Â MEGABYTES 250
start extract PMPA
NODE B
add replicat REPB integrated exttrail /backup/oracle/gg/dirdat/AP
start replicat REPB
Make sure that you have the following line in your /etc/ssh/sshd_config file:
X11Forwarding yes
ggsci> edit params defgen
defsfile ./dirsql/SOURCE_DEFGEN.def
userid gguser password ******
TABLE schema.*;

Enter the parameters listed above in the order shown and invoke DEFGEN from the GG_HOME location.
$ ./defgen paramfile ./dirprm/defgen.prm
Use FTP or SFTP to transfer this SOURCE_DEFGEN.def file to the TARGET server.
Specify this copied SOURCE_DEFGEN.def file location in REPLICAT param file
e.g. SOURCEDEFS ./dirsql/SOURCE_DEFGEN.def
To create a wallet
mkstore -wrl /tmp/wallet -create
To add entries to the wallet
Enter password: mysecret
PKI-01002: Invalid password.
Enter password: mysecret1 (not echoed)
Enter password again: mysecret1 (not echoed)

mkstore -wrl <wallet_location> -createCredential <db_connect_string> <username> <password>

The db_connect_string must be identical to the connection string that you specify in the URL used in the datasource definition (the part of the string that follows the @). It can be either the short form or the long form of the URL.
For example: myhost:1521/myservice or (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=myhost-scan)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=myservice)))

To list the wallet entries:
$ mkstore -wrl /u01/app/oracle/wallet -listCredential
Oracle Secret Store Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.
Enter wallet password:
List credential (index: connect_string username)
2: <connect_string2> <username2>
1: <connect_string1> <username1>
$
To use the wallet
Add the following to the sqlnet.ora
WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)
   (METHOD_DATA = (DIRECTORY = /.../wallet)))

MKSTORE COMMAND
[-create] [-createSSO] [-createLSSO] [-createALO] [-delete] [-deleteSSO] [-list]
[-createEntry alias secret] [-viewEntry alias] [-modifyEntry alias secret] [-deleteEntry alias]
[-createCredential connect_string username password] [-listCredential]
[-modifyCredential connect_string username password] [-deleteCredential connect_string]
[-help] [-nologo]
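With the wallet and sqlnet.ora in place, clients can authenticate using just the stored connect string; this assumes a credential was created for myhost:1521/myservice as above:

sqlplus /@myhost:1521/myservice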
CHANGE WALLET PASSWORD
$ orapki wallet change_pwd -wallet /u01/.../wallets
Oracle PKI Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.
Enter wallet password:
Enter New Wallet Password:
Confirm Wallet Password:
Wallet password has been changed.
$
** = exponent
% = modulo
puts - includes a newline after the string.
print - no newline generated after string
"some string".length
.length
.reverse
.upcase
.downcase
.capitalize!
if string.include? "s"
.gsub!(/s/, "th")
puts "paul".upcase
COMMENTS
# siingle line comment
=begin
Multiple line
comment
=end
name = "Paul".downcase.reverse.upcase => "LUAP"
gets.chomp => get input and remove CR
print "What's your first name? "
first_name = gets.chomp.capitalize!
print "What's your last name? "
last_name = gets.chomp.capitalize!
print "What's your city? "
city_name = gets.chomp.capitalize!
print "What's your state? "
state_name = gets.chomp.upcase!
puts "#{first_name} #{last_name} from #{city_name} in #{state_name}"
CONTROL FLOW
if
elsif
else
end
unless
==, !=, <, <=, >, >=, &&, ||, !
EXAMPLE
print "Thtring, pleathe!: "
user_input = gets.chomp
user_input.downcase!
if user_input.include? "s"
user_input.gsub!(/s/, "th")
else
puts "Nothing to do here!"
end
puts "Your string is: #{user_input}"LOOPS
counter = 1
while counter < 11
  puts counter
counter = counter + 1
end
for num in 1...10 OR for num in 1..10 # note: two dots include 10, three dots exclude it
puts num
end
Infinite loop =>Â loop { print "~Hello World" }
i = 0
loop do
i += 1
print "#{i}"
break if i > 5
end
for i in 1..5 # skips even numbers
next if i % 2 == 0
print i
end
OPERATORS
+=, -=, *=, /=
Arrays
my_array = [1,2,3,4,5]
multi_array = [[1,2,3],[1,2,3],[1,2,3]]
multi_array.each -- will access each array in the array.
my_array.each do |x|
:
end
my_array.each { |item| print item * 2 }
Hashes
key value pairs.
my_hash = { "name" => "Eric", "age" => 26, "hungry?" => true }
puts my_hash["name"]
puts my_hash["age"]
puts my_hash["hungry?"]
my_hash = Hash.new
pets = Hash.new
pets["Rio"] = "Moluccan Cockatoo"
family = { "Homer" => "dad", "Marge" => "mom", "Lisa" => "sister", "Maggie" => "sister", "Abe" => "grandpa", "Santa's Little Helper" => "dog" }
pets.each { |x| puts "#{x}" }
family.each { |x, y| puts "#{x}: #{y}" }
Times Split
10.times { print "This will print ten times "}
words = sometext.split(" ")
words.each do |astring|
SORTS
puts "Text please: "
text = gets.chomp
words = text.split(" ")
frequencies = Hash.new(0)
words.each { |word| frequencies[word] += 1 }
frequencies = frequencies.sort_by {|a, b| b }
frequencies.reverse!
frequencies.each { |word, frequency| puts word + " " + frequency.to_s }
my_array.sort!
COMBINED
a <=> b returns -1, 0, or +1
# To sort our books in ascending order, in-place
books.sort! { |firstBook, secondBook| firstBook <=> secondBook }
# Sort your books in descending order, in-place
books.sort! { |firstBook, secondBook| secondBook <=> firstBook }
METHODS
def puts_1_to_10
  (1..10).each { |i| puts i }
end
puts_1_to_10 # Ignore this for now. We'll explain it soon!
BLOCKS
1.times do
  puts "I'm a code block!"
end
1.times { puts "As am I!" }
# block that capitalizes each string in the array
["ryan", "jane"].each { |string| puts "#{string[0].upcase}#{string[1..-1]}" } # prints "Ryan", then "Jane"
CHEF SERVER
chef-server-ctl uninstall
opscode-analytics-ctl uninstall
Download the package from https://downloads.chef.io/chef-server/.
rpm -Uvh chef-server-core-12.11.1-1.el7.x86_64.rpm
chef-server-ctl reconfigure
chef-server-ctl user-list
https:///login
$ sudo su - root
# cd ~
# curl -L https://www.opscode.com/chef/install.sh | bash
# wget http://github.com/opscode/chef-repo/tarball/master
This document details the Oracle GoldenGate (OGG) installation and configuration for Linux with an Oracle database as the source and Microsoft Windows with a SQL Server database as the destination.
Oracle GoldenGate for Microsoft Windows SQL Server requires:
A full Oracle client installation
A SQL Server database user and schema
An ODBC DSN setup for the MS SQL Server database.
Go to oracle technet -> downloads -> Oracle Database
http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html .
Select "See all" next to the Microsoft Windows x64 (64-bit) entry.
Select the Oracle Database Client for Microsoft Windows (x64) download.
Once the download has completed, unzip the file and run setup.exe.
Select "Administrator" and "Next".
Select English and Next.
Select "Use Windows Built-in Account".
Enter the Base and Software location directories.
For this installation we used "D:\Oracle\" and "D:\Oracle\Client".
Select Install
Connect to the database as an administrator.
Right click Security and select New -> Login.
Add GUSER and select the relevant database as the default database; do not select OK.
Select User Mapping from the left hand menu and then:
Select the database from the top screen and db_owner from the bottom screen.
Select OK to exit.
With the database selected, choose New Query and execute the command:
create schema gguser;
Right click the Security option under the database and select new -> user
Enter a name for the user and select the "..." button next to the Login name field.
Select Browse from the Select Login page, select the user we created and select OK.
Enter the schema name we created in the Default Schema box.
Select OK
Select Start > Settings > Control Panel.
Double-click Administrative Tools.
Double-click Data Sources (ODBC)
Select the System DSN tab and then select the Add button.
The Create New Data Source dialog box appears.
Select SQL Server and then Finish
Enter a name for the source, e.g. "GGToSQL"
Enter a description, e.g. "Oracle GG to MS SQL Server"
Enter the server name.
Select "Next"
Normally we would use a Windows authenticated user, but for this example we have added the SQL Server user GGUSER.
Use the drop-down list to change the default database to the relevant database.
Select Next and then Finish.
Unzip the download file provided by oracle into
D:\oracle\GoldenGate,
please note the directory name cannot include spaces.
Change to the windows directory
D:\oracle\GoldenGate
Enter
ggsci
At the ggsci prompt enter the commands
create subdirs
exit
The GoldenGate manager process is not installed as a windows service by default and must be manually added.
As a system administrator:
start -> run -> cmd
cd D:\oracle\GoldenGate
install ADDEVENTS ADDSERVICE AUTOSTART
This will start the service on system boot under the local system account and add events to the windows event manager.
The create subdirs GoldenGate command has created the relevant directories under the GoldenGate home.
As the trail and discard files can grow quite large, and we do not want to endanger other processes running off the D drive, a separate volume was created for these files. We need to create the relevant directories on this volume.
cd G:
mkdir goldengatefiles
cd goldengatefiles
mkdir dirdat
mkdir dirrpt
All commands will be run from the D:\oracle\GoldenGate directory
Use a credential store to remove the requirement for passwords to be typed on the command line.
ggsci> add credentialstore
ggsci> alter credentialstore add user gguser password gguser alias gguser
This is the sql server database login chosen earlier.
$ ggsci
ggsci> edit params ./GLOBALS
CHECKPOINTTABLE gguser.chkptab
ggsci> exit
The exit and restart is required to reload the GLOBALS file.
Note the GLOBALS file must be uppercase, in the top level directory (goldengate) and have no file extension.
$ ggsci
ggsci> dblogin sourcedb gguser useridalias gguser
ggsci> add checkpointtable
$ggsci
ggsci> edit params mgr
PORT 7908
DYNAMICPORTLIST 7908-7915
ACCESSRULE, PROG *, IPADDR ip1.ip1.ip1.ip1, ALLOW
SOURCEDB gguser
USERIDALIAS gguser
PURGEOLDEXTRACTS G:\goldengatefiles\dirdat\TP*, minkeephours 12
Start and stop the manager service using the windows services panel
Create the oracle goldengate user as a member of the oracle database installation group; this will enable Classic capture mode should it ever be required.
useradd -g oinstall oraggs
mkdir /u01/app/oracle/ggs_12201
chown oracle:oinstall /u01/app/oracle/ggs_12201
chmod 775 /u01/app/oracle/ggs_12201
passwd oraggs
Log in as oraggs
Edit ~/.bash_profile and add the following:
export PATH=/u01/app/oracle/ggs_12201:$PATH
export LD_LIBRARY_PATH=/u01/app/oracle/ggs_12201:/lib:$LD_LIBRARY_PATH
alias ggs="cd /u01/app/oracle/ggs_12201"
Download to /tmp and then expand the oracle GoldenGate zip file.
This unzips as /tmp/fbo_ggs_Linux_x64_shiphome.
Change directory to Disk1 and run the installer.
cd Disk1
./runInstaller
Select the 12c installation
Enter the correct locations.
Select Install
Select Close
The GoldenGate directories have been created under the GoldenGate home.
As for the windows installation we should create directories on a separate volume for the GoldenGate trail and log files.
mkdir /u01/oracle/
mkdir GG_DIRECTORY
mkdir GG_DIRECTORY/dirdat
mkdir GG_DIRECTORY/dirrpt
The Oracle database should be in archivelog mode, with force logging and flashback on.
SQL> select log_mode from v$database;
SQL> select flashback_on from v$database;
SQL> select force_logging from v$database;
If any are not in force then make the necessary changes.
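A sketch of the changes if anything is missing (switching to archivelog needs a mount-state restart, and flashback also requires db_recovery_file_dest to be set):

sqlplus / as sysdba <<'EOF'
shutdown immediate
startup mount
alter database archivelog;
alter database force logging;
alter database flashback on;
alter database open;
EOF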
Enable GoldenGate
SQL> alter system set ENABLE_GOLDENGATE_REPLICATION=true;
Create the GoldenGate user
Sql> create user oraggs identified by oraggs default tablespace users temporary tablespace temp;
Sql> grant create session, connect, resource, dba to oraggs;
Sql> alter user oraggs quota unlimited on users;
Sql> grant execute on dbms_goldengate_auth to oraggs;
Sql> grant execute on dbms_flashback to oraggs;
Sql> exec dbms_goldengate_auth.grant_admin_privilege('ORAGGS');
Sql> alter pluggable database add supplemental log data;
Sql> alter system switch logfile;
All commands will be run from the GG_DIRECTORY directory as oraggs.
The commands all assume the oraggs user profile has been executed to define the oracle database home and sid environment variables.
Use a credential store to remove the requirement for passwords to be typed on the command line.
ggsci> add credentialstore
ggsci> alter credentialstore add user oraggs password oraggs alias oraggs
$ggsci
ggsci> edit params ./GLOBALS
CHECKPOINTTABLE oraggs.chkptab
ggsci> exit
The exit and restart is required to reload the GLOBALS file.
Note the GLOBALS file must be uppercase, in the top level directory (goldengate) and have no file extension.
$ ggsci
ggsci> dblogin useridalias oraggs
ggsci> add checkpointtable
$ggsci
ggsci> edit params mgr
PORT 7908
DYNAMICPORTLIST 7909-7915
ACCESSRULE, PROG *, IPADDR ip2.ip2.ip2.ip2, ALLOW
USERIDALIAS oraggs
PURGEOLDEXTRACTS GG_DIRECTORY/dirdat/TP*, minkeephours 12
Start and stop the manager service
ggsci> stop mgr
ggsci> start mgr
On the source oracle goldengate server
ggsci> dblogin useridalias oraggs
ggsci> ADD TRANDATA <schema>.*
ggsci> exit
$ ggsci
ggsci> add replicat RnameINI, SPECIALRUN
ggsci> edit params RnameINI
REPLICAT RnameINI
TARGETDB gguser, USERIDALIAS gguser
discardfile G:\goldengatefiles\INIFAIL.dsc, APPEND
MAP <schema>.<table>, TARGET <schema>.<table>;
: : :
MAP <schema>.<table>, TARGET <schema>.<table>;
ggsci> start replicat RnameINI
N.B. In the above command we are naming all the tables to be processed
As oraggs
$ggsci
ggsci> add extract EnameINI, SOURCEISTABLE
ggsci> edit params enameINI
EXTRACT EnameINI
USERIDALIAS oraggs
RMTHOST ip2.ip2.ip2.ip2, mgrport 7908
RMTTASK REPLICAT, GROUP RnameINI
TABLE schema.table;
:
TABLE schema.table;
The tables listed are the same as those in the RnameINI parameter file used on the Windows server.
ggsci> start extract EnameINI
ggsci> info all
ggsci> send extract EnameINI, report
n.b. If the extract completes quickly, then info all and the reports will both show that the extract is not currently running; this is normal, but you should check the ggserr.log file.
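For example, a quick check from the GoldenGate home:
$ tail -50 ggserr.log
$ grep -iE "error|warn" ggserr.log | tail -20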
As oraggs
$ggsci
ggsci> add extract Ename1, SOURCEISTABLE
ggsci> edit params Ename1
EXTRACT Ename1
USERIDALIAS oraggs
RMTHOST ip2.ip2.ip2.ip2, mgrport 7908
RMTTASK REPLICAT, GROUP Rname1
TABLE schema.table;
:
TABLE schema.table;
The tables listed are the same as those in the Rname1 parameter file used on the Windows server.
ggsci> start extract Ename1
ggsci> info all
ggsci> send extract Ename1, report
$ ggsci
ggsci> add replicat Rname1, exttrail G:\GoldenGatefiles\dirdat\PP
ggsci> edit params Rname1
REPLICAT Rname1
TARGETDB gguser, USERIDALIAS gguser
discardfile G:\goldengatefiles\INIFAIL.dsc, APPEND
MAP schema.table, TARGET schema.table;
: : :
MAP schema.table, TARGET schema.table;
As oraggs
$ggsci
ggsci> add extract Ename2, integrated tranlog, begin now
ggsci> register extract Ename2
ggsci> edit params Ename2
EXTRACT Ename2
USERIDALIAS oraggs
EXTTRAIL ./dirdat/TP
TABLE schema.table;
:
TABLE schema.table;
ggsci> add exttrail GG_DIRECTORY/dirdat/TP, extract Ename2
ggsci> add extract Pname1, exttrailsource GG_DIRECTORY/dirdat/TP
ggsci> register extract Pname1
ggsci> edit params Pname1
EXTRACT Pname1
USERIDALIAS oraggs
RMTHOST ip2.ip2.ip2.ip2, mgrport 7908
RMTTRAIL G:\goldengatefiles\dirdat\PP
TABLE schema.table;
:
TABLE schema.table;
ggsci> add rmttrail G:\GoldenGatefiles\dirdat\PP, extract Pname1, megabytes 10
ggsci> start extract Ename2
ggsci> start extract Pname1
ggsci> start replicat Rname1
Changes made to the source tables should now be replicated to the destination tables.
Here are some of the commands to generate reports.
ggsci> info all
ggsci> send extract Ename1, report
ggsci> view report Ename1
ggsci> send extract Pname1, report
ggsci> view report Pname1
ggsci> send replicat Rname1, report
ggsci> view report rname1
ggsci> info extract Ename1
ggsci> info exttrail *
n.b. For initial load extracts, if the extract completes quickly then info all and the reports will both show that the extract is not currently running; this is normal, but you should check the ggserr.log file.
For encryption of the trail files both on the system and in transit between the systems we will use wallets.
Create the master key and wallet.
$ ggsci
ggsci> create wallet
Created wallet at location 'dirwlt'.
Opened wallet at location 'dirwlt'.
ggsci> add masterkey
Master key 'OGG_DEFAULT_MASTERKEY' added to wallet at location 'dirwlt'.
ggsci> info masterkey
Masterkey Name:     OGG_DEFAULT_MASTERKEY
Creation Date:
Version:    Creation Date:    Status:
1                             Current
ggsci> info masterkey version 1
Masterkey Name:     OGG_DEFAULT_MASTERKEY
Creation Date:
Version:            1
Renew Date:
Status:             Current
Key Hash (SHA1):    0…6
Copy the wallet 'dirwlt/cwallet.sso' to the windows server.
For the extract group parameter file (e.g. ESRCDST1) add the following line:
ENCRYPTTRAIL AES256
For the pump group parameter file (e.g. PSRCDST1) add the following line:
RMTHOSTOPTIONS ENCRYPT AES256
Restart all extracts and the trail files should now be encrypted. As long as the wallet has been copied to the dirwlt directory on the windows server, decryption is automatic.
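The restart can be scripted, since ggsci reads commands from standard input; a sketch using the example group names above:
$ ggsci <<'EOF'
stop extract ESRCDST1
stop extract PSRCDST1
start extract ESRCDST1
start extract PSRCDST1
info all
EOF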
Continued from CHEF configure WordPress on AWS EC2 Ubuntu
We need to define a template for the Apache VirtualHost that will run WordPress.
cd cookbooks/phpapp
vi templates/default/site.conf.erb
# Auto generated by Chef. Changes will be overwritten.
<VirtualHost *:80>
  ServerName <%= @params[:server_name] %>
  DocumentRoot <%= @params[:docroot] %>
  <Directory <%= @params[:docroot] %>>
    Options FollowSymLinks
    AllowOverride FileInfo Options
    AllowOverride All
    Order allow,deny
    Allow from all
  </Directory>
  <Directory />
    Options FollowSymLinks
    AllowOverride None
  </Directory>
</VirtualHost>
We’ll now use a new resource which is provided by the apache2 cookbook called web_app to create an Apache VirtualHost using our template site.conf.erb.
vi recipes/default.rb
# Disable the default apache site, i.e. just after the include_recipe lines change
apache_site "default" do
  enable true
end
TO
apache_site "default" do
  enable false
end
web_app 'phpapp' do
  template 'site.conf.erb'
  docroot node['phpapp']['path']
  server_name node['phpapp']['server_name']
end
vi attributes/default.rb # and add the line
default['phpapp']['server_name'] = "phpapp"
If your server has a hostname set up in DNS, override the default attribute and specify the actual name in web.json. If there is no proper hostname defined, ignore this step.
cd ../..
vi web.json # and change the contents to
{
"mysql": {"server_root_password": "808052769e2c6d909027a2905b224bad", "server_debian_password": "569d1ed2d46870cc020fa87be83af98d", "server_repl_password": "476911180ee92a2ee5a471f33340f6f4"},
"phpapp": {"db_password": "212b09752d173876a84d374333ae1ffe", "server_name": "###.###.###"},
"run_list": [ "recipe[apt]", "recipe[phpapp]" ]
}
Visit the web server and see if that's worked:
http://yourserver/phpapp/wp-admin/install.php
From http://gettingstartedwithchef.com/first-steps-with-chef.html
The simplest way to use Chef is chef-solo. It allows you to install, configure and manage the packages required by your application without the complication of any client and server configuration. We’ll start with the common scenario that you have to setup a website designer with a WordPress environment.
Install chef
$ sudo su - root
# cd ~
curl -L https://www.opscode.com/chef/install.sh | bash
So, what do we need to do to get our web server up and running?
How do we do that? We write our first Chef cookbook. But before we do that we should setup a file structure that will help us organise our various Chef files. Opscode, the makers of Chef provide one. They call it simply the Chef Repository.
wget http://github.com/opscode/chef-repo/tarball/master
Next page
Carried on from CHEF install WordPress on AWS EC2 Ubuntu
WordPress keeps its configuration in a file called wp-config.php. We need to create that file and put database names and user access details inside it. Chef provides a resource called template that can do just that.
cd cookbooks/phpapp
vi recipes/default.rb # add the lines
wp_secrets = Chef::Config[:file_cache_path] + '/wp-secrets.php'
if File.exist?(wp_secrets)
  salt_data = File.read(wp_secrets)
else
  require 'open-uri'
  salt_data = open('https://api.wordpress.org/secret-key/1.1/salt/').read
  open(wp_secrets, 'wb') do |file|
    file << salt_data
  end
end
template node['phpapp']['path'] + '/wp-config.php' do
  source 'wp-config.php.erb'
  mode 0755
  owner 'root'
  group 'root'
  variables(
    :database   => node['phpapp']['database'],
    :user       => node['phpapp']['db_username'],
    :password   => node['phpapp']['db_password'],
    :wp_secrets => salt_data)
end
WordPress comes with an example configuration file which we’d usually alter to create our template but for brevity we’ll just specify a cut down version below.
Create templates/default/wp-config.php.erb
vi templates/default/wp-config.php.erb
Variables that should be inserted into the template are done so with <%= @variable %>. You can also include node attributes inside templates using <%= node['attribute'] %>, although that can prevent your template from being easily reused elsewhere, so it is not considered best practice.
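A minimal cut-down version of the template might look like this (a sketch: the @variable names match the variables() call above, and the real WordPress sample config defines many more settings):
<?php
define('DB_NAME',     '<%= @database %>');
define('DB_USER',     '<%= @user %>');
define('DB_PASSWORD', '<%= @password %>');
define('DB_HOST',     'localhost');
<%= @wp_secrets %>
$table_prefix = 'wp_';
if ( !defined('ABSPATH') )
    define('ABSPATH', dirname(__FILE__) . '/');
require_once(ABSPATH . 'wp-settings.php');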
cd ../..
chef-solo -c solo.rb -j web.json
Next
Carried on from CHEF configure MySQL on AWS EC2 Ubuntu
We want to setup each new WordPress build with the latest version so we’ll fetch it from WordPress.org. But how? Fortunately enough, Chef comes with the resource remote_file which will do just that for us.
cd cookbooks/phpapp
vi recipes/default.rb # and add the following lines
wordpress_latest = Chef::Config[:file_cache_path] + "/wordpress-latest.tar.gz"
remote_file wordpress_latest do
  source "http://wordpress.org/latest.tar.gz"
  mode "0644"
end
directory node["phpapp"]["path"] do
  owner "root"
  group "root"
  mode "0755"
  action :create
  recursive true
end
execute "untar-wordpress" do
  cwd node['phpapp']['path']
  command "tar --strip-components 1 -xzf " + wordpress_latest
  creates node['phpapp']['path'] + "/wp-settings.php"
end
Carrried on from CHEF install PHP on AWS EC2 Ubuntu
Now if you were setting up a server by hand, this is where you’d manually setup a database, copy the web application code over, create a MySQL user for the website and configure virtual hosts. Instead, we’ll use Chef to automate the setup of our application. This allows us to set up multiple servers and know we will always get the same results.
Before we setup our database we need to fetch a final few cookbooks. The database cookbook provides resources that allow us to easily manage databases and database users. The database cookbook depends on the postgresql, mariadb, xfs and aws cookbooks so we’ll need those as well even though we won’t be using them. We’ll also fetch the mysql2_chef_gem cookbook which the database cookbook requires when used with MySQL.
cd /root/chef-repo/cookbooks
knife cookbook site download database
tar zxf database-*.tar.gz
knife cookbook site download postgresql
tar zxf postgresql-*.tar.gz
knife cookbook site download xfs
tar zxf xfs-*.tar.gz
knife cookbook site download mariadb
tar zxf mariadb-*.tar.gz
knife cookbook site download aws
tar zxf aws-*.tar.gz
knife cookbook site download mysql2_chef_gem 0.1.0
tar zxf mysql2_chef_gem-*.tar.gz
rm *.tar.gz
cd phpapp
vi metadata.rb # and add the following 2 lines
depends "database"
depends "mysql2_chef_gem"
To set up our web application we need to perform a number of steps, the first being to create the database.
A resource is an action that your recipe can perform. The template resource creates a file by expanding variables in a template. The user resource can be used to manage users. The database cookbook provides the resource mysql_database which we will now use to perform the first step.
vi recipes/default.rb
#
# Cookbook Name:: phpapp
# Recipe:: default
#
# Copyright 2013, YOUR_COMPANY_NAME
#
# All rights reserved - Do Not Redistribute
#
include_recipe "apache2"
include_recipe "mysql::client"
include_recipe "mysql::server"
include_recipe "php"
include_recipe "php::module_mysql"
include_recipe "apache2::mod_php5"
include_recipe "mysql::ruby"
include_recipe "mysql2_chef_gem"
apache_site "default" do
  enable true
end
mysql_database node['phpapp']['database'] do
  connection ({:host => 'localhost', :username => 'root', :password => node['mysql']['server_root_password']})
  action :create
end
The line mysql_database node['phpapp']['database'] do creates the database, taking its name from an attribute we define below. The mysql2_chef_gem recipe installs a Ruby library (called a "gem") that allows Chef to interact with a MySQL server.
The database attribute can be defined in a few places. We could define the attribute in web.json like we did with the MySQL ones but that makes our cookbook unnecessarily difficult to use. We want to provide the option to use a database called something other than phpapp but we should really provide a default.
vi cookbooks/phpapp/attributes/default.rb # and add the line
default["phpapp"]["database"] = "phpapp"
cd ../..
chef-solo -c solo.rb -j web.json
cd cookbooks/phpapp
vi recipes/default.rb # append the lines
mysql_database_user node['phpapp']['db_username'] do
  connection ({:host => 'localhost', :username => 'root', :password => node['mysql']['server_root_password']})
  password node['phpapp']['db_password']
  database_name node['phpapp']['database']
  privileges [:select, :update, :insert, :create, :delete]
  action :grant
end
vi attributes/default.rb # Add the line
default["phpapp"]["db_username"] = "phpapp"
cd ../..
vi web.json # add the following
{
"mysql": {"server_root_password": "808052769e2c6d909027a2905b224bad", "server_debian_password": "569d1ed2d46870cc020fa87be83af98d", "server_repl_password": "476911180ee92a2ee5a471f33340f6f4"},
"phpapp": {"db_password": "212b09752d173876a84d374333ae1ffe"},
"run_list": [ "recipe[apt]", "recipe[phpapp]" ]
}
chef-solo -c solo.rb -j web.json
MySQL is now up and running, try:
1) mysql -u root -p    (password 808052769e2c6d909027a2905b224bad)
2) mysql -u phpapp -p    (password 212b09752d173876a84d374333ae1ffe)
Both get to the mysql database prompt.
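A quick non-interactive check using the same passwords from web.json:
$ mysql -u root -p808052769e2c6d909027a2905b224bad -e 'SHOW DATABASES;'
$ mysql -u phpapp -p212b09752d173876a84d374333ae1ffe -e 'SHOW GRANTS FOR CURRENT_USER;'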
Next Page
FOLLOWS ON FROM "CHEF install MySQL on AWS EC2 Ubuntu"
The php cookbook depends on the xml, yum-epel, windows, and iis cookbooks, so we’ll need those even though we won’t be using all of them. We’ll also have to install sub-dependencies yum (a dependency of yum-epel), chef_handler, and powershell (dependencies of windows).
cd /root/chef-repo/cookbooks
knife cookbook site download php
tar zxf php*.tar.gz
knife cookbook site download xml
tar zxf xml-*.tar.gz
knife cookbook site download yum
tar zxf yum-*.tar.gz
knife cookbook site download yum-epel
tar zxf yum-epel-*.tar.gz
knife cookbook site download powershell
tar zxf powershell-*.tar.gz
knife cookbook site download iis
tar zxf iis-*.tar.gz
rm *.tar.gz
Here we add the PHP default recipe, one to install the PHP MySQL extension and one to enable the Apache PHP module mod_php. We also enable the default site so we can check our installation has worked.
cd phpapp
vi metadata.rb # add the following line
depends "php"
vi recipes/default.rb # change the include_recipe lines to the following
include_recipe "apache2"
include_recipe "mysql::client"
include_recipe "mysql::server"
include_recipe "php"
include_recipe "php::module_mysql"
include_recipe "apache2::mod_php5"
cd ../..
chef-solo -c solo.rb -j web.json
N.B. I had to edit php/metadata.json and change the text "mysql":">= 6.0.0" to "mysql":">= 0.0.0"
TEST
Open /var/www/test.php in your editor and insert the lines
<?php
phpinfo();
?>
NOW
vi /etc/apache2/apache2.conf # and change the <Directory /var/www/> section to the following:
Options FollowSymLinks
AllowOverride None
Require all granted
Now go to http://yourserver/test.php
Next page CHEF configure MySQL on AWS EC2 Ubuntu
FOLLOWS ON FROM "CHEF install apache on AWS EC2 Ubuntu"
Next we’ll setup MySQL. As the community site has a cookbook for MySQL, the process is similar to Apache. Again we’ll ask knife to fetch the cookbook from the community site for us.
cd cookbooks
knife cookbook site download mysql
tar zxf mysql*
rm mysql-*.tar.gz
Install mysql as both a client and server as we only have the one machine.
cd mysql/recipes/
cd ../../phpapp
vi metadata.rb # and add the line
depends "mysql"
vi recipes/default.rb # and after the include apache2 add the following lines
include_recipe "mysql::client"
include_recipe "mysql::server"
cd ..
knife cookbook site download openssl
tar zxf openssl*.tar.gz
rm openssl*.tar.gz
knife cookbook site download build-essential
tar zxf build-essential-*.tar.gz
rm build-essential-*.tar.gz
knife cookbook site download homebrew
tar zxf homebrew-*.tar.gz
rm homebrew-*.tar.gz
knife cookbook site download windows
tar zxf windows-*.tar.gz
rm windows-*.tar.gz
knife cookbook site download chef_handler
tar zxf chef_handler-*.tar.gz
rm chef_handler-*.tar.gz
knife cookbook site download chef-sugar
tar zxf chef-sugar-*.tar.gz
rm chef-sugar-*.tar.gz
knife cookbook site download seven_zip
tar zxf seven_zip-*.tar.gz
rm seven_zip-*.tar.gz
knife cookbook site download mingw
tar zxf mingw-*.tar.gz
rm mingw-*.tar.gz
vi mysql/metadata.rb # remove the version numbers for openssl and build_essential
Next Page CHEF install PHP on AWS EC2 Ubuntu
Continued from
cd cookbooks/phpapp
nc -z targetip 7809    # e.g. check a GoldenGate manager port
nc -- arbitrary TCP and UDP connections and listens
DESCRIPTION
The nc (or netcat) utility is used for just about anything under the sun involving TCP or UDP. It can open TCP connections, send UDP packets, listen on arbitrary TCP and UDP ports, do port scanning, and deal with both IPv4 and IPv6. Unlike telnet(1), nc scripts nicely, and separates error messages onto standard error instead of sending them to standard output, as telnet(1) does with some.
Common uses include:
·  simple TCP proxies
·  shell-script based HTTP clients and servers
·  network daemon testing
·  a SOCKS or HTTP ProxyCommand for ssh(1)
·  and much, much more
The options are as follows:
-4    Forces nc to use IPv4 addresses only.
-6    Forces nc to use IPv6 addresses only.
-D    Enable debugging on the socket.
-d    Do not attempt to read from stdin.
-h    Prints out nc help.
-i interval    Specifies a delay time interval between lines of text sent and received. Also causes a delay time between connections to multiple ports.
-k    Forces nc to stay listening for another connection after its current connection is completed. It is an error to use this option without the -l option.
-l    Used to specify that nc should listen for an incoming connection rather than initiate a connection to a remote host. It is an error to use this option in conjunction with the -p, -s, or -z options. Additionally, any timeouts specified with the -w option are ignored.
-n    Do not do any DNS or service lookups on any specified addresses, hostnames or ports.
-p source_port    Specifies the source port nc should use, subject to privilege restrictions and availability. It is an error to use this option in conjunction with the -l option.
-r    Specifies that source and/or destination ports should be chosen randomly instead of sequentially within a range or in the order that the system assigns them.
-S    Enables the RFC 2385 TCP MD5 signature option.
-s source_ip_address    Specifies the IP of the interface which is used to send the packets. It is an error to use this option in conjunction with the -l option.
-T ToS    Specifies IP Type of Service (ToS) for the connection. Valid values are the tokens "lowdelay", "throughput", "reliability", or an 8-bit hexadecimal value preceded by "0x".
-C Send CRLF as line-ending
-t Causes nc to send RFC 854 DON’T and WON’T responses to RFC 854 DO and WILL requests. This makes it possible to use nc to script telnet sessions.
-U Specifies to use Unix Domain Sockets.
-u Use UDP instead of the default option of TCP.
-v Have nc give more verbose output.
-w timeout If a connection and stdin are idle for more than timeout seconds, then the connection is silently closed. The -w flag has no effect on the -l option, i.e. nc will listen forever for a connection, with or without the -w flag. The default is no timeout.
-X proxy_version    Requests that nc should use the specified protocol when talking to the proxy server. Supported protocols are "4" (SOCKS v.4), "5" (SOCKS v.5) and "connect" (HTTPS proxy). If the protocol is not specified, SOCKS version 5 is used.
-x proxy_address[:port]    Requests that nc should connect to hostname using a proxy at proxy_address and port. If port is not specified, the well-known port for the proxy protocol is used (1080 for SOCKS, 3128 for HTTPS).
-z Specifies that nc should just scan for listening daemons, without sending any data to them. It is an error to use this option in conjunction with the -l option.
hostname can be a numerical IP address or a symbolic hostname (unless the -n option is given). In general, a hostname must be specified, unless the -l option is given (in which case the local host is used).
port[s] can be single integers or ranges. Ranges are in the form nn-mm. In general, a destination port must be specified, unless the -U option is given (in which case a socket must be specified).
CLIENT/SERVER MODEL
It is quite simple to build a very basic client/server model using nc. On one console, start nc listening on a specific port for a connection.
For example:
$ nc -l 1234
nc is now listening on port 1234 for a connection. On a second console, connect to the listening port:
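$ nc 127.0.0.1 1234
There is now a connection between the two consoles; anything typed on one appears on the other, and Ctrl-C closes the connection.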
For 12c read oracle articles:
Doc ID 257650.1
Doc ID 1591874.1
Basically:
set SQLNET.EXPIRE_TIME in sqlnet.ora
Trace File Analyzer Collector (TFA)
TFA is a diagnostic collection utility that simplifies diagnostic data collection on Oracle Clusterware/Grid Infrastructure, RAC and Single Instance Database systems.
TFA is similar to the diagcollection utility packaged with Oracle Clusterware in that it collects and packages diagnostic data – however – TFA is MUCH more powerful than diagcollection with its ability to centralize and automate the collection of diagnostic information.
Key benefits: TFA centralizes and automates the collection of diagnostic information across all nodes with a single command. For example:
[grid]$ tfactl diagcollect -all -since 6h
Collecting data for all components using above parameters…
Collecting data for all nodes
Collection Id : 20160727092122DBN
Repository Location in DBN : /u01/app/grid/tfa/repository
Collection monitor will wait up to 30 seconds for collections to start
2016/07/27 09:21:25 UTC : Collection Name : tfa_Wed_Jul_27_09_21_22_UTC_2016.zip
2016/07/27 09:21:25 UTC : Sending diagcollect request to host : HOSTN
2016/07/27 09:21:25 UTC : Scanning of files for Collection in progress…
2016/07/27 09:21:25 UTC : Collecting extra files…
2016/07/27 09:22:18 UTC : Completed collection of extra files…
2016/07/27 09:22:25 UTC : Getting list of files satisfying time range [07/27/2016 03:21:25 UTC, 07/27/2016 09:22:25 UTC]
2016/07/27 09:22:25 UTC : Starting Thread to identify stored files to collect
2016/07/27 09:22:25 UTC : Getting List of Files to Collect
2016/07/27 09:22:26 UTC : Trimming file : HOST/crs/HOST/crs/trace/crsd_oraagent_grid.trc with original file size : 10MB
:
2016/07/27 09:22:29 UTC : Trimming file : HOST/asm/+asm/+ASM2/trace/+ASM2_lms0_10658.trc with original file size : 1.7MB
2016/07/27 09:22:29 UTC : HOST: Zipped 100 Files so Far
:
2016/07/27 09:22:29 UTC : HOST: Zipped 1000 Files so Far
:
2016/07/27 09:22:44 UTC : Collecting ADR incident files…
2016/07/27 09:22:49 UTC : Completed Zipping of all files
2016/07/27 09:22:49 UTC : Cleaning up temporary files
2016/07/27 09:22:49 UTC : Finished Cleaning up temporary files
2016/07/27 09:22:49 UTC : Finalizing the Collection Zip File
2016/07/27 09:22:49 UTC : Finished Finalizing the Collection Zip File
2016/07/27 09:22:49 UTC : Total Number of Files checked : 45905
2016/07/27 09:22:49 UTC : Total Size of all Files Checked : 18GB
2016/07/27 09:22:49 UTC : Number of files containing required range : 1172
2016/07/27 09:22:49 UTC : Total Size of Files containing required range : 4.4GB
2016/07/27 09:22:49 UTC : Number of files trimmed : 30
2016/07/27 09:22:49 UTC : Total Size of data prior to zip : 506MB
2016/07/27 09:22:49 UTC : Saved 4.1GB by trimming files
2016/07/27 09:22:49 UTC : Zip file size : 33MB
2016/07/27 09:22:49 UTC : Total time taken : 84s
2016/07/27 09:22:49 UTC : Remote Collection in Progress…
2016/07/27 09:22:49 UTC : lhrdb01:Completed Collection
2016/07/27 09:22:49 UTC : Completed collection of zip files.
Logs are being collected to: /u01/app/grid/tfa/repository/collection_…
SQL> ORADEBUG SETOSPID 10189
Oracle pid: 81, Unix process pid: 10189, image: oracle@nnn.com
SQL> ORADEBUG TRACEFILE_NAME;
/u01/app/oracle/diag/rdbms/dbn/dbn_id/trace/db_id_ora_10189.trc
SQL> oradebug event 10046 trace name context forever, level 12
SQL> oradebug event 10046 trace name context off
SQL> oradebug session_event 10046 trace name context forever,level 12
SQL> oradebug event 10046 trace name context off
As ORACLE
cd /media/database
./runInstaller
PLEASE NOTE:
This is a 12c standard cluster installation using ASM on my test VBox nodes; I know there were some issues with swap space etc. but as these were minimal I didn't care, I just wanted to make sure I got my grid install documented.
If you don’t like this then feel free to go elsewhere.
Not using GNS, Not using Leaf Nodes, Using local ASM
edit /etc/hosts
127.0.0.1 localhost localhost.localdomain
::1 localhost localhost.localdomain
192.168.0.25 ambari-RAC1 ambari-RAC1.localdomain
192.168.0.26 ambari-RAC2 ambari-RAC2.localdomain
192.168.0.67 ambari-cluster ambari-cluster.localdomain
192.168.0.25 ambari-RAC1-vip ambari-RAC1-vip.localdomain
192.168.0.26 ambari-RAC2-vip ambari-RAC2-vip.localdomain
10.1.1.5 ambari-RAC1-priv ambari-RAC1-priv.localdomain
10.1.1.6 ambari-RAC2-priv ambari-RAC2-priv.localdomain
For ambari-rac1
$ hostname
e.g.
hostname ambari-rac1.localdomain
$ domainname
(none)
$ vi /etc/hostname
e.g.
ambari-rac1.localdomain
$ hostnamectl set-hostname ambari-rac1.localdomain
$ nmcli general hostname
oinstall, dba, oper, bckpdba, asmdba, dgdba, kmdba, asmadmin, asmoper
grid – member of wheel, oinstall, asmadmin, asmdba, asmoper, dba,bckpdba
oracle – member of wheel, oinstall, dba, oper, bckpdba, asmdba, dgdba , kmdba
GRID_BASE=/u01/app/grid
ORACLE_BASE=/u01/app/grid
CRS_HOME=/u01/app/12.1.0/grid
GRID_HOME=/u01/app/12.1.0/grid
ORACLE_HOME=/u01/app/12.1.0/grid
ORA_CRS_HOME=/u01/app/12.1.0/grid
ORACLE_SID=+ASM??
ORA_ENV=12102
export EDITOR ORACLE_BASE ORACLE_HOME GRID_BASE GRID_HOME ORACLE_SID ORA_ENV ORACLE_HOSTNAME CRS_HOME
PATH=$PATH:/usr/sbin:$ORACLE_BASE/dba/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch
export PATH
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/12102/dbhome_1
ORACLE_SID=orcl
ORA_ENV=dbhome_1
export EDITOR ORACLE_BASE ORACLE_HOME ORACLE_SID ORA_ENV ORACLE_HOSTNAME
NLS_DATE_FORMAT='YYYYMMDD.HH24MISS'
NLS_LANG=american_america.al32utf8
export NLS_DATE_FORMAT NLS_LANG
PATH=$PATH:/usr/sbin:$ORACLE_BASE/dba/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch
export PATH
edit /etc/security/limits.conf and add the following lines:
grid hard nofile 65536
oracle hard memlock 4335826
for a 4 gig memory system that already has a 2 gig swapfile:
# dd if=/dev/zero of=/swapfile bs=1024 count=2097152
# mkswap /swapfile
# chmod 0600 /swapfile
# swapon /swapfile
# vi /etc/fstab           ## to make permanent on reboot
/swapfile swap swap defaults 0 0
# cat /proc/swaps         ## to check swap space
# free                    ## to check swap space
Not strictly necessary as the installer can do this
mkdir -p /u01/app/grid
mkdir -p /u01/app/12.1.0/grid
mkdir -p /u01/app/oraInventory
chown -R grid:oinstall /u01/app/grid
chown -R grid:oinstall /u01/app/12.1.0
mkdir -p /u01/app/oracle/product/12102
chown -R oracle:oinstall /u01/app/oracle
yum install oracle-rdbms-server-12cR1-preinstall
kmod-oracleasm
oracleasm-support
rpm -Uvh /kit dir/grid/rpm/cvuqdisk-1.0.9-1.rpm
rpm -Uvh oracleasmlib-2.0.12-1.el7.x86_64.rpm
yum update -y
stop and disable
create /etc/sysctl.d/ora_params.conf with the following (the .conf suffix is required for the file to be picked up at boot):
# Oracle 11g R2 and 12c R1 kernel tuning
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
These will be loaded on system reboot.
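To load them immediately without a reboot:
# sysctl -p /etc/sysctl.d/ora_params.conf
or reload every file under /etc/sysctl.d:
# sysctl --system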
/etc/init.d/oracleasm configure
grid
oinstall
y
y
/usr/sbin/oracleasm init
/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks
Assuming sdb and sdc are the disks:
fdisk /dev/sdb
n for new
carriage return
carriage return
carriage return
carriage return
w
fdisk /dev/sdc
n
carriage return
carriage return
carriage return
carriage return
w
/usr/sbin/oracleasm createdisk diskb /dev/sdb1
/usr/sbin/oracleasm createdisk diskc /dev/sdc1
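Verify the disk labels before carrying on (querydisk -p prints the matching device):
/usr/sbin/oracleasm listdisks
/usr/sbin/oracleasm querydisk -p diskb
/usr/sbin/oracleasm querydisk -p diskc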
chown grid:oinstall /u01
chown grid:oinstall /u01/app
chown grid:oinstall /u01/app/grid
Unzip the oracle grid software
as the grid user
/unzip directory/runcluvfy.sh stage -pre crsinst -n ambari-rac1,ambari-rac2 -fixupnoexec
Run fixup to resolve most issues and manually correct any others.
/unzip directory/runInstaller
Install and configure oracle grid infrastructure for a cluster
Configure a standard cluster
Typical Installation
Cluster name: ambari-cluster
Nodes: ambari-rac1, ambari-rac1-vip
Nodes: ambari-rac2, ambari-rac2-vip
SSH Connectivity.
Setup
GO TO install 12c grid Part 2
chown grid:oinstall /u01
chmod 777 /u01
Try again
Considering I cloned these from a successful build I don't understand why the kernel versions are different!
Why is resolv.conf incorrect?
Select "Ignore All" and Next
DNS /NIS Name Service Check – Ignored
We can ignore this issue as we have ensured all names and addresses are in all hosts files.
Various warnings and errors exist BUT we have already ignored the warnings and errors , swap size and DNS as we know they work.
N.B. Bug: this doesn't work with Flex ASM 12.1, but should be fixed by now.
Suppose that you have an extended 3-node RAC:
and 2 active ASM instances:
$ srvctl status asm
ASM is running on racnode3,racnode1
set the asm_preferred_read_failure_groups parameter that way for the DATAP diskgroup
SQL> alter system set asm_preferred_read_failure_groups='DATAP.SITEB' sid='+ASM3';
System altered.
SQL> alter system set asm_preferred_read_failure_groups='DATAP.SITEA' sid='+ASM1';
System altered.
So that ASM3 prefers to read from SITEB and ASM1 from SITEA
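To check the setting has taken effect you can query GV$ASM_DISK, which has a PREFERRED_READ column (a quick sketch, run against an ASM instance):
sqlplus / as sysasm <<'EOF'
select inst_id, group_number, failgroup, preferred_read
from gv$asm_disk
order by inst_id, group_number, failgroup;
EOF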
BUT we find the instance on node 3 is actually using ASM1:
SQL> select I.INSTANCE_NAME, C.INSTANCE_NAME, C.DB_NAME
  2  from gv$instance I, gv$asm_client C
  3  where C.INST_ID=I.INST_ID and C.instance_name='NOPBDT3';

INSTANCE_NAME    INSTANCE_NAME    DB_NAME
---------------- ---------------- ------------
+ASM1            NOPBDT3          NOPBDT
asmcmd iostat
asmiostat.sh
SQL> ALTER SYSTEM RELOCATE CLIENT 'NOPBDT3:NOPBDT';
System altered.
SQL> select I.INSTANCE_NAME, C.INSTANCE_NAME, C.DB_NAME
  2  from gv$instance I, gv$asm_client C
  3  where C.INST_ID=I.INST_ID and C.instance_name='NOPBDT3';

INSTANCE_NAME    INSTANCE_NAME    DB_NAME
---------------- ---------------- --------
+ASM3            NOPBDT3          NOPBDT
+ASM3            NOPBDT3          NOPBDT
[oracle@oel6-112-rac1 Desktop]$ srvctl status asm
ASM is running on oel6-112-rac2,oel6-112-rac1

[oracle@oel6-112-rac1 Desktop]$ asmcmd
ASMCMD> showclustermode
ASM cluster : Flex mode enabled
ASMCMD> showclusterstate
Normal

[oracle@oel6-112-rac1 Desktop]$ srvctl status asm -detail
ASM is running on oel6-112-rac2,oel6-112-rac1
ASM is enabled.

[oracle@oel6-112-rac1 Desktop]$ srvctl config asm -detail
ASM home: /u01/app/12.1.0/grid
Password file: +DATA/orapwASM
ASM listener: LISTENER
ASM is enabled.
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM

[oracle@oel6-112-rac2 Desktop]$ asmcmd
ASMCMD> showclustermode
ASM cluster : Flex mode enabled
ASMCMD> showclusterstate
Normal
ASMCMD> exit

[oracle@oel6-112-rac2 Desktop]$ srvctl config asm -detail
ASM home: /u01/app/12.1.0/grid
Password file: +DATA/orapwASM
ASM listener: LISTENER
ASM is enabled.
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM

[oracle@oel6-112-rac1 Desktop]$ srvctl stop asm -node oel6-112-rac1 -stopoption abort -force

[oracle@oel6-112-rac1 Desktop]$ srvctl status asm
PRCR-1070 : Failed to check if resource ora.asm is registered
Cannot communicate with crsd

[oracle@oel6-112-rac1 Desktop]$ crsctl check cluster
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

[oracle@oel6-112-rac1 Desktop]$ ps -ef | grep pmon
oracle 3813 1 0 17:40 ? 00:00:00 mdb_pmon_-MGMTDB
oracle 5806 1 0 17:42 ? 00:00:00 ora_pmon_orcl1
oracle 6193 1 0 17:42 ? 00:00:00 apx_pmon_+APX1
Note: Here a database instance is associated with the specific ASM instance running in the specific node. If for some reason the ASM instance was unable to be brought up or the service goes down, the database instance can still be brought up as the database instance will look for an ASM instance running on the same cluster.
[oracle@oel6-112-rac1 Desktop]$ . oraenv
ORACLE_SID = [orcl1] ? orcl1
ORACLE_HOME = [/home/oracle] ? /u01/app/oracle/product/12.1.0/db_1
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@oel6-112-rac1 Desktop]$ sqlplus /nolog
SQL*Plus: Release 12.1.0.1.0 Production on Wed Sep 25 18:24:36 2013
Copyright (c) 1982, 2013, Oracle. All rights reserved.
SQL> connect sys/oracle@orcl as sysdba
Connected.
SQL> select instance_name,instance_number from gv$instance;

INSTANCE_NAME    INSTANCE_NUMBER
--------------- ----------------
orcl2            2
orcl1            1

SQL> select instance_name,instance_number from v$instance;

INSTANCE_NAME    INSTANCE_NUMBER
--------------- ----------------
orcl2            2

SQL> connect sys/oracle@orcl as sysdba
Connected.

[oracle@oel6-112-rac1 Desktop]$ . oraenv
ORACLE_SID = [orcl1] ? +ASM2
ORACLE_HOME = [/home/oracle] ? /u01/app/12.1.0/grid
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@oel6-112-rac1 Desktop]$ asmcmd --privilege sysasm --inst +ASM2
ASMCMD> lsdg
State    Type    Rebal  Sector  Block  AU       Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N      512     4096   1048576  15342     4782     0                4782            0              Y             DATA/
ASMCMD>
# srvctl config asm | grep -i count
ASM instance count: 2
# srvctl modify asm -count 3
# srvctl status asm -detail
The 11g public reports page is:
http://host:port/em/public/reports
12c does not support this; Business Intelligence Publisher is replacing reports:
https://host:port/em/faces/core-bip-reports
DECLARE @sqltext varbinary(128)
SELECT @sqltext = sql_handle
FROM sys.sysprocesses
WHERE spid =
SELECT TEXT FROM sys.dm_exec_sql_text(@sqltext)
GO
SELECT DEST.TEXT
FROM sys.[dm_exec_connections] SDEC
CROSS APPLY sys.[dm_exec_sql_text](SDEC.[most_recent_sql_handle]) AS DEST
WHERE SDEC.[most_recent_session_id] =
List sessions:
SELECT TOP 50 * FROM(SELECT COALESCE(OBJECT_NAME(s2.objectid),'Ad-Hoc') AS ProcName,
execution_count, s2.objectid,
(SELECT TOP 1 SUBSTRING(s2.TEXT, statement_start_offset / 2 + 1,
( (CASE WHEN statement_end_offset = -1
THEN (LEN(CONVERT(NVARCHAR(MAX), s2.TEXT)) * 2)
ELSE statement_end_offset END) - statement_start_offset) / 2 + 1)) AS sql_statement,
last_execution_time
FROM sys.dm_exec_query_stats AS s1
CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2 ) x
WHERE sql_statement NOT like 'SELECT TOP 50 * FROM(SELECT %'
--and OBJECTPROPERTYEX(x.objectid,'IsProcedure') = 1
ORDER BY last_execution_time DESC
spid | ecid | status     | loginame            | hostname        | blk | dbname        | cmd              | request_id
1    | 0    | background | sa                  |                 | 0   | NULL          | RESOURCE MONITOR | 0
2    | 0    | suspended  | sa                  |                 | 0   | NULL          | LOG WRITER       | 0
51   | 0    | sleeping   | STORES\RLittle      | CC-P-AOMAPP     | 0   | master        | AWAITING COMMAND | 0
52   | 0    | sleeping   | FOUNDATION          | ACC-JY          | 0   | Foundation    | AWAITING COMMAND | 0
53   | 0    | sleeping   | CRM_HOST_LIVE       | WEB-GB          | 0   | CRM_HOST_LIVE | SELECT           | 0
54   | 0    | sleeping   | NT AUTHORITY\SYSTEM | CC-P-AOMDB      | 0   | msdb          | AWAITING COMMAND | 0
55   | 0    | sleeping   | Admin               | CCUK00130T001   | 0   | CRM_HOST_LIVE | AWAITING COMMAND | 0
56   | 0    | sleeping   | bte_ideradba        | VM0227          | 0   | master        | AWAITING COMMAND | 0
57   | 0    | sleeping   | NT AUTHORITY\SYSTEM | CC-P-AOMDB      | 0   | msdb          | AWAITING COMMAND | 0
58   | 0    | sleeping   | NT AUTHORITY\SYSTEM | CC-P-AOMDB      | 0   | ReportServer  | AWAITING COMMAND | 0
59   | 0    | sleeping   | CRM_HOST_LIVE       | CC-P-AOMAPP     | 0   | CRM_HOST_LIVE | AWAITING COMMAND | 0
60   | 0    | sleeping   | FOUNDATION          | VIRTUALXP-63912 | 0   | Foundation    | SELECT           | 0
SELECT DISTINCT name AS database_name, session_id, host_name, login_time, login_name, reads, writes
FROM sys.dm_exec_sessions
LEFT OUTER JOIN
sys.dm_tran_locks ON sys.dm_exec_sessions.session_id = sys.dm_tran_locks.request_session_id
INNER JOIN
sys.databases ON sys.dm_tran_locks.resource_database_id = sys.databases.database_id
WHERE resource_type <> 'DATABASE'
AND name = 'CRM_HOST_LIVE'
ORDER BY name
database_name | session_id | host_name  | login_time              | login_name    | reads | writes
CRM_HOST_LIVE | 76         | _ICT_CIM_5 | 2015-09-04 09:53:05.800 | CRM_HOST_LIVE | 3     | 11
CRM_HOST_LIVE | 84         | WEB-GB     | 2015-09-01 10:18:47.557 | CRM_HOST_LIVE | 0     | 0
SELECT DISTINCT name AS database_name, session_id, host_name, login_time, login_name, reads, writes
FROM sys.dm_exec_sessions
LEFT OUTER JOIN
sys.dm_tran_locks ON sys.dm_exec_sessions.session_id = sys.dm_tran_locks.request_session_id
INNER JOIN
sys.databases ON sys.dm_tran_locks.resource_database_id = sys.databases.database_id
WHERE resource_type <> 'DATABASE'
AND request_mode like '%X%'
--AND name = 'CRM_HOST_LIVE'
ORDER BY name
Oracle Database 12c Release 1 (12.1) introduces the multitenant architecture.
This database architecture has a multitenant container database (CDB) that includes a root container, CDB$ROOT, a seed database, PDB$SEED, and multiple pluggable databases (PDBs).
Each pluggable database is equivalent to a separate database instance in Oracle Database release 11g.
The root container, CDB$ROOT, holds common objects that are accessible to every PDB utilizing metadata links or object links.
The seed database, PDB$SEED, is used when creating a new PDB to seed the new database.
The key benefit of the Oracle Database 12c multitenant architecture is that the database resources, such as CPU and memory, can be shared across all of the PDBs. This architecture also enables many databases to be treated as one for tasks such as upgrades or patches, and backups.
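For example, a quick look at the containers in a CDB:
sqlplus / as sysdba <<'EOF'
show con_name
select con_id, name, open_mode from v$pdbs;
EOF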
RESTORE SPFILE TO PFILE '/tmp/initTEMP.ora';
STARTUP FORCE PFILE='/tmp/initTEMP.ora';
When you recover a whole CDB, you recover the root and all PDBs in a single operation.
RESTORE DATABASE;
RECOVER DATABASE;
You might consider recovering only the root if a data corruption or user error occurs that affects only the root. However, Oracle strongly recommends that you recover all PDBs after recovering the root to prevent metadata inconsistencies among the root and the PDBs. In this case, it might be preferable to perform a complete recovery of the whole CDB.
SHUTDOWN IMMEDIATE
STARTUP MOUNT
RESTORE DATABASE ROOT;
RECOVER DATABASE ROOT;
(Strongly recommended) Recover all PDBs, including the seed PDB.
a. Issue the RESTORE PLUGGABLE DATABASE and RECOVER PLUGGABLE DATABASE commands.
The following example recovers the PDBs sales and hr:
RESTORE PLUGGABLE DATABASE 'PDB$SEED', sales, hr;
RECOVER PLUGGABLE DATABASE 'PDB$SEED', sales, hr;
b. Examine the output to see if media recovery was successful. If so, proceed to the next step.
Open the CDB and all PDBs.
ALTER DATABASE OPEN;
ALTER PLUGGABLE DATABASE ALL OPEN;
There are two approaches to recovering a PDB with RMAN:
■ Connect to the root and then use the RESTORE PLUGGABLE DATABASE and RECOVER PLUGGABLE DATABASE commands. This approach enables you to recover multiple PDBs with a single command.
■ Connect to the PDB and use the RESTORE DATABASE and RECOVER DATABASE commands. This approach recovers only a single PDB and enables you to use the same commands used for recovering non-CDB databases.
ALTER PLUGGABLE DATABASE sales, hr CLOSE;
If any data files are missing, an error occurs and you cannot close a PDB. You must then connect to the PDB to which the missing data file belongs, take the missing data file offline, and then close the PDB.
The following command takes the data file 12 offline:
ALTER PLUGGABLE DATABASE DATAFILE 12 OFFLINE;
Issue the RESTORE PLUGGABLE DATABASE and RECOVER PLUGGABLE DATABASE commands.
The following example recovers the seed database PDB$SEED and the PDBs sales and hr:
RESTORE PLUGGABLE DATABASE 'pdb$seed', sales, hr;
RECOVER PLUGGABLE DATABASE 'pdb$seed', sales, hr;
Connect to the PDB to which the missing data file belongs and then make the data file online. The following command makes the data file 12 online:
ALTER DATABASE DATAFILE 12 ONLINE;
Examine the output to see if media recovery was successful. If so, open the PDBs.
ALTER PLUGGABLE DATABASE sales, hr OPEN;
Connect to the PDB as a local user with SYSDBA system privilege.
ALTER PLUGGABLE DATABASE CLOSE;
ALTER DATABASE DATAFILE 12 OFFLINE;
RESTORE DATABASE;
RECOVER DATABASE;
ALTER DATABASE DATAFILE 12 ONLINE;
ALTER PLUGGABLE DATABASE OPEN;
To switch a data file in a CDB, connect to the root and use the same steps that you would use for a non-CDB.
To switch a data file in a PDB, use one of the following techniques:
■ Connect to the root and use the SWITCH ... PLUGGABLE DATABASE or SWITCH DATAFILE command. This enables you to switch the data files for one or more PDBs.
■ Connect to the PDB and use the SWITCH DATABASE or SWITCH DATAFILE command to switch data files in that PDB.
VALIDATE DATABASE;
If the output indicates that a data file is inaccessible, you can run the REPORT SCHEMA command to obtain the tablespace name and file name for the data file.
RMAN> REPORT SCHEMA;
SELECT STATUS FROM V$INSTANCE;
SELECT FILE#, STATUS, ERROR, RECOVER, TABLESPACE_NAME, NAME FROM V$DATAFILE_HEADER WHERE RECOVER = 'YES' OR (RECOVER IS NULL AND ERROR IS NOT NULL);
SELECT FILE#, ERROR, ONLINE_STATUS, CHANGE#, TIME FROM V$RECOVER_FILE;
SELECT r.FILE# AS df#, d.NAME AS df_name, t.NAME AS tbsp_name, d.STATUS, r.ERROR, r.CHANGE#, r.TIME FROM V$RECOVER_FILE r, V$DATAFILE d, V$TABLESPACE t WHERE t.TS# = d.TS# AND d.FILE# = r.FILE#;
If you do not have a record of the DBID of your database, then you can find it in the following places without opening your database:
■ The DBID is used in forming the file name for the control file autobackup. Locate this file.
By default, the format of the autobackup file for all configured devices is the substitution variable %F in the FORMAT clause. This variable format translates into c-IIIIIIIIII-YYYYMMDD-QQ, with the placeholders defined as follows:
IIIIIIIIII stands for the DBID.
YYYYMMDD is a time stamp of the day the backup is generated.
QQ is the hexadecimal sequence that starts with 00 and has a maximum of FF.
■ If you have any text files that preserve the output from an RMAN session, then the DBID is displayed by the RMAN client when it starts up and connects to your database.
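With the DBID in hand, a minimal sketch of restoring the control file from its autobackup (the DBID here is the example CDB DBID used later in this document; substitute your own):
rman TARGET / <<'EOF'
STARTUP NOMOUNT;
SET DBID 659628168;
RESTORE CONTROLFILE FROM AUTOBACKUP;
ALTER DATABASE MOUNT;
EOF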
RESTORE DATABASE PREVIEW;
RESTORE ARCHIVELOG FROM TIME 'SYSDATE-7' PREVIEW;
RESTORE DATABASE PREVIEW SUMMARY;
RESTORE ARCHIVELOG ALL PREVIEW RECALL;
RUN
{
SET ARCHIVELOG DESTINATION TO '/oracle/temp_restore';
RESTORE ARCHIVELOG ALL;
}
RUN
{
SET ARCHIVELOG DESTINATION TO '/oracle/temp_restore';
RESTORE DATABASE;
RECOVER DATABASE; # restores and recovers logs automatically
}
RUN
{
# Set a new location for logs 1 through 100.
SET ARCHIVELOG DESTINATION TO '/fs1/tmp';
RESTORE ARCHIVELOG FROM SEQUENCE 1 UNTIL SEQUENCE 100;
# Set a new location for logs 101 through 200.
SET ARCHIVELOG DESTINATION TO '/fs2/tmp';
RESTORE ARCHIVELOG FROM SEQUENCE 101 UNTIL SEQUENCE 200;
# Set a new location for logs 201 through 300.
SET ARCHIVELOG DESTINATION TO '/fs3/tmp';
RESTORE ARCHIVELOG FROM SEQUENCE 201 UNTIL SEQUENCE 300;
# restore and recover data files as needed
.
.
}
RUN
{
SET NEWNAME FOR DATAFILE 2 TO '/disk2/df2.dbf';
SET NEWNAME FOR DATAFILE 3 TO '/disk2/df3.dbf';
SET NEWNAME FOR DATAFILE 4 TO '/disk2/df4.dbf';
RESTORE DATABASE;
SWITCH DATAFILE ALL;
RECOVER DATABASE;
}
ALTER TABLESPACE users OFFLINE IMMEDIATE;
RESTORE TABLESPACE users;
RECOVER TABLESPACE users;
OR
RUN
{
# specify the new location for each datafile
SET NEWNAME FOR DATAFILE '/disk1/oracle/dbs/users01.f' TO '/disk2/users01.f';
SET NEWNAME FOR DATAFILE '/disk1/oracle/dbs/users02.f' TO '/disk2/users02.f';
SET NEWNAME FOR DATAFILE '/disk1/oracle/dbs/users03.f' TO '/disk2/users03.f';
RESTORE TABLESPACE users;
SWITCH DATAFILE ALL; # update control file with new file names
RECOVER TABLESPACE users;
}
ALTER TABLESPACE users ONLINE;
ALTER DATABASE DATAFILE 4 OFFLINE;
SWITCH DATAFILE 4 TO COPY;
RECOVER DATAFILE 4;
ALTER DATABASE DATAFILE 4 ONLINE;
SWITCH DATABASE TO COPY;
RECOVER DATABASE;
ALTER DATABASE OPEN;
In the current release, Data Recovery Advisor only supports single-instance databases. Oracle Real Application Clusters (Oracle RAC) databases are not supported.
In a Data Guard environment, Data Recovery Advisor cannot do the following:
■ Use files transferred from a physical standby database to repair failures on a primary database
■ Diagnose and repair failures on a standby database
However, if the primary database is unavailable, then Data Recovery Advisor may recommend a failover to a standby database. After the failover you can repair the old primary database. If you are using Enterprise Manager Cloud Control in a Data Guard configuration, then you can initiate a failover through the Data Recovery Advisor recommendations page.
In the current release, Data Recovery Advisor can only be used to diagnose and repair data corruptions in non-CDBs and the root of a multitenant container database (CDB). Data Recovery Advisor is not supported for pluggable databases (PDBs).
RMAN>
LIST FAILURE;
LIST FAILURE 101 DETAIL;
LIST FAILURE LOW;
LIST FAILURE CLOSED;
LIST FAILURE EXCLUDE FAILURE 234234;
VALIDATE DATABASE;
VALIDATE CHECK LOGICAL SKIP INACCESSIBLE DATABASE;
ADVISE FAILURE;
ADVISE FAILURE LOW etc;
ADVISE FAILURE 101;
REPAIR FAILURE PREVIEW;
REPAIR FAILURE;
CHANGE FAILURE 101 PRIORITY LOW;
CHANGE FAILURE CLOSED;
Database incarnations can stand in the following relationships to each other:
■ The current incarnation is the one in which the database is currently operating.
■ The incarnation from which the current incarnation branched following an OPEN RESETLOGS operation is the parent incarnation of the current incarnation.
■ The parent of the parent incarnation is an ancestor incarnation. Any parent of an ancestor incarnation is also an ancestor of the current incarnation.
■ The direct ancestral path of the current incarnation begins with the earliest incarnation and includes only the branches to an ancestor of the current incarnation, the parent incarnation, or the current incarnation.
An incarnation number is used to uniquely tag and identify a stream of redo.
A pluggable database (PDB) incarnation is a subincarnation of the multitenant container database (CDB) and is expressed as (database_incarnation, pdb_incarnation). For example, if the CDB is incarnation 5, and a PDB is incarnation 3, then the fully specified incarnation number of the PDB is (5, 3). The initial incarnation of a PDB is 0. Subsequent incarnations are unique but not always sequential numbers.
The V$PDB_INCARNATION view contains information about all PDB incarnations.
Use the following query to display the current incarnation of a PDB:
select PDB_INCARNATION# from v$pdb_incarnation where STATUS = 'CURRENT' and CON_ID = PDB_container_id;
“Overview of the Fast Recovery Area” on page 5-16 explains the contents of the fast recovery area and the difference between permanent and transient files. Review this section before proceeding. The following rules govern when files become eligible for deletion from the recovery area:
■ Permanent files are never eligible for deletion.
■ Files that are obsolete under the retention policy are eligible for deletion.
■ Transient files that have been copied to tape are eligible for deletion.
■ Archived redo logs are not eligible for deletion until all the consumers of the logs have satisfied their requirements.
"Configuring an Archived Redo Log Deletion Policy" on page 5-29 explains how to configure an archived redo log deletion policy that determines when logs are eligible to be deleted. Consumers of logs can include RMAN, standby databases, Oracle Streams databases, and the Flashback Database feature.
■ Foreign archived logs that have been mined by a LogMiner session on a logical standby database are eligible for deletion. Because it is generated from a different database than the current database, a foreign archived redo log has a different DBID than the current archived redo logs.
The safe and reliable way to control deletion from the fast recovery area is to configure your retention policy and archived log deletion policy. To increase the likelihood that files moved to tape are retained on disk, increase the fast recovery area quota.
SELECT * FROM V$RECOVERY_FILE_DEST;
SELECT * FROM V$RECOVERY_AREA_USAGE;
You cannot manage the flashback logs in the fast recovery area directly other than by setting the flashback retention target or using guaranteed restore points. Nevertheless, you can manage fast recovery area space as a whole to maximize the space available for retention of flashback logs. In this way you increase the likelihood of achieving the flashback target.
To make space for flashback logs, back up the other contents of your fast recovery area to tape with commands such as BACKUP RECOVERY AREA, BACKUP BACKUPSET, and so on. Oracle Database automatically removes obsolete files from the fast recovery area. If offloading backups to tape still does not create enough space to satisfy the backup retention policy and flashback retention target, then allocate more space in the fast recovery area.
If the RMAN retention policy requires keeping a set of backups larger than the fast recovery area disk quota, or if the retention policy is set to NONE, then the fast recovery area can fill completely with no reclaimable space.
The database issues a warning alert when reclaimable space is less than 15% and a critical alert when reclaimable space is less than 3%. To warn the DBA of this condition, an entry is added to the alert log and to the DBA_OUTSTANDING_ALERTS table (used by Enterprise Manager). Nevertheless, the database continues to consume space in the fast recovery area until there is no reclaimable space left.
When the recovery area is completely full, the error displayed is as follows, where nnnnn is the number of bytes required and mmmmm is the disk quota:
ORA-19809: limit exceeded for recovery files
ORA-19804: cannot reclaim nnnnn bytes disk space from mmmmm limit
You have several choices for how to resolve a full fast recovery area when no files are eligible for deletion:
■ Make more disk space available and increase DB_RECOVERY_FILE_DEST_SIZE to reflect the additional space.
■ Move backups from the fast recovery area to tertiary storage such as tape.
One convenient way to back up all of your recovery area files to tape together is the BACKUP RECOVERY AREA command. After you transfer backups from the recovery area to tape, you can delete files from the fast recovery area. Flashback logs cannot be backed up outside the recovery area and are not backed up by BACKUP RECOVERY AREA.
■ Run DELETE for any files that have been removed with an operating system utility.
If you use host operating system commands to delete files, then the database is not aware of the resulting free space. You can run the RMAN CROSSCHECK command to have RMAN recheck the contents of the fast recovery area and identify expired files, and then use the DELETE EXPIRED command to delete every expired backup from the RMAN repository (see the sketch after this list).
■ Ensure that your guaranteed restore points are necessary. If not, delete them.
Flashback logs that are not needed for a guaranteed restore point are deleted automatically to gain space for other files in the fast recovery area. A guaranteed restore point forces the retention of flashback logs required to perform Flashback Database to the restore point SCN.
■ Review your backup retention policy and, if using Data Guard, your archived redo log deletion policy.
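A minimal sketch of the crosscheck/cleanup described in the third bullet above:
rman TARGET / <<'EOF'
CROSSCHECK BACKUP;
CROSSCHECK COPY;
CROSSCHECK ARCHIVELOG ALL;
DELETE NOPROMPT EXPIRED BACKUP;
DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;
EOF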
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='' SCOPE=BOTH SID='*';
ORA-19816: WARNING: Files may exist in location that are not known to database.
In such a situation, use the RMAN command CATALOG RECOVERY AREA to recatalog any such files. If the file header of the file in question is corrupted, then delete the file manually with an operating system utility.
Type of database connection | Keyword
Target database | TARGET
Recovery catalog database | CATALOG
Auxiliary instance or auxiliary database | AUXILIARY
The following are the prerequisites for connecting to a database using operating system authentication (OS authentication):
■ You must set the ORACLE_SID environment variable, specifying the system identifier (SID) for the database.
■ You must be a member of the OSDBA operating system group to connect with the SYSDBA privilege or the OSBACKUPDBA operating system group to connect with the SYSBACKUP privilege.
On UNIX and Linux, the OSDBA group is typically named dba, and the OSBACKUPDBA group is typically named backupdba.
These names are assigned during database installation.
% rman target '"/ as sysdba"'
% rman target '"/ as sysbackup"'
rman target /
If neither AS SYSBACKUP nor AS SYSDBA is specified in the connection string, then the default used is AS SYSDBA.
You can perform RMAN operations on a whole CDB, the root only, or one or more PDBs. You make RMAN connections to CDBs according to the following rules:
■ To perform operations on the whole CDB (for example, to back up the whole CDB) you connect as target to the root.
■ To perform operations on the root only (for example, to back up the root) you connect as target to the root.
■ To perform operations on a single PDB, you can connect as target either to the root or directly to the PDB.
– If you connect to the root, you must use the PLUGGABLE DATABASE syntax in your RMAN commands. For example, to back up a PDB, you use the BACKUP PLUGGABLE DATABASE command.
– If instead you connect directly to a PDB, you can use the same commands that you would use when connecting to a non-CDB. For example, to back up a PDB, you would use the BACKUP DATABASE command.
■ To perform operations on two or more PDBs with a single command, you connect as target to the root.
For example, to back up both the sales and hr PDBs, you connect to the root and submit the following command:
BACKUP PLUGGABLE DATABASE sales, hr;
The following operations are not available when you connect as target directly to a PDB:
■ Back up archived logs
■ Delete archived logs
■ Delete archived log backups
■ Restore archived logs (RMAN does restore archived logs when required during media recovery.)
■ Point-in-time recovery (PITR)
■ TSPITR
■ Table recovery
■ Duplicate database
■ Flashback operations
■ Running Data Recovery Advisor
■ Report/delete obsolete
■ Register database
■ Import catalog
■ Reset database
■ Configuring the RMAN environment (using the CONFIGURE command)
rman target sys
target database Password: password
connected to target database: CDB (DBID=659628168)
rman target /
connected to target database: CDB (DBID=659628168)
rman target c##bkuser@sales
target database Password: password
connected to target database: CDB (DBID=659628168)
To connect as target to a PDB, you must:
■ Connect with a net service name that resolves to a database service for that PDB.
■ Connect as a local user or common user with the SYSDBA privilege.
The following example assumes the following:
■ You want to perform RMAN operations on a PDB named hrpdb.
■ The net service name hrpdb resolves to a database service for the hrpdb PDB.
■ The local user hrbkup was created in the hrpdb PDB and granted the SYSDBA privilege.
rman target hrbkup@hrpdb
target database Password: password
connected to target database: CDB (DBID=659628168)
By default, RMAN writes command output to standard output. To redirect output to a log file, enter the LOG parameter on the command line when you start RMAN, as in the following example:
# rman LOG /tmp/rman.log
In this case, RMAN displays command input but does not display the RMAN output. The easiest way to send RMAN output both to a log file and to standard output is to use the Linux tee command or its equivalent. For example, the following technique enables both input and output to be visible in the RMAN command-line interface:
# rman | tee rman.log
Before invoking RMAN, it may be useful to set the NLS_DATE_FORMAT and NLS_LANG environment variables. These variables determine the format used for the time parameters in RMAN commands such as RESTORE, RECOVER, and REPORT.
The following example shows typical language and date format settings:
NLS_LANG=american
NLS_DATE_FORMAT='Mon DD YYYY HH24:MI:SS'
If you are going to use RMAN to connect to an unmounted database and mount the database later while RMAN is still connected, then set the NLS_LANG environment variable so that it also specifies the character set used by the database.
A database that is not mounted assumes the default character set, which is US7ASCII. If your character set is different from the default, then RMAN returns errors after the database is mounted.
For example, if the character set is WE8DEC, then to avoid errors, you can set the NLS_LANG variable as follows:
NLS_LANG=american_america.we8dec
For the environment variable NLS_DATE_FORMAT to be applied and override the defaults set for the server in the server initialization file, the environment variable NLS_LANG must also be set.
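For example, before starting RMAN:
$ export NLS_LANG=american_america.we8dec
$ export NLS_DATE_FORMAT='Mon DD YYYY HH24:MI:SS'
$ rman target /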
Backing up a whole CDB is similar to backing up a non-CDB. When you back up a whole CDB, RMAN backs up the root, all the PDBs, and the archived redo logs. You can then recover either the whole CDB, the root only, or one or more PDBs from the CDB backup.
Connect to the root as a common user with the SYSBACKUP or SYSDBA privilege.
RMAN can create backups on disk as image copies or as backup sets.
You can override this default with the AS COPY or AS BACKUPSET clauses.
BACKUP AS COPY DEVICE TYPE DISK DATABASE;
BACKUP AS BACKUPSET DATABASE;
BACKUP AS BACKUPSET DEVICE TYPE DISK DATABASE;
BACKUP AS BACKUPSET DEVICE TYPE SBT DATABASE;
You can use FROM TAG to copy an image copy with a specific tag, and then use TAG to assign the output copy a different tag.
For example, the following command creates new copies of all image copies of the database that have the tag full_cold_copy and gives the new copies the tag new_full_cold_copy:
BACKUP AS COPY COPY OF DATABASE FROM TAG full_cold_copy TAG new_full_cold_copy;
Use the MULTI_SECTION column of the V$BACKUP_SET view or the recovery catalog view RC_BACKUP_SET to determine if a backup is a multisection backup. For multisection backups, the MULTI_SECTION column contains the value YES.
The V$BACKUP_DATAFILE and RC_BACKUP_DATAFILE views provide information about the number of blocks in each section of a multisection backup. The BLOCKS column specifies the number of blocks in each multisection backup.
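For example, a quick check against V$BACKUP_SET (a sketch; filter further as needed):
SELECT recid, multi_section FROM V$BACKUP_SET WHERE multi_section = 'YES';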
RMAN enables you to create multisection backups using image copies.
Multisection backups provide better performance by using multiple channels to back up large files in parallel. Starting with Oracle Database 12c Release 1 (12.1), you can create multisection full backups that are stored as image copies. While the image copy is being created, multiple channels are used to write file sections. However, the output of this operation is one copy for each data file.
Use the SECTION SIZE clause to create multisection backups. If the section size that you specify is larger than the size of the file, then RMAN does not use multisection backups for that file. If you specify a small section size that would produce more than 256 sections, then RMAN increases the section size to a value that results in exactly 256 sections.
BACKUP AS COPY SECTION SIZE 500M DATABASE;
A manual backup of the control file is different from a control file autobackup.
RMAN makes a control file autobackup after the files specified in the BACKUP command are backed up. Thus, the autobackup—unlike a manual control file backup—contains metadata about the backup that just completed. Also, RMAN can automatically restore autobackups without the use of a recovery catalog.
You can make a manual backup of the current control file either as a backup set or as an image copy. For a backup set, RMAN first creates a snapshot control file for read consistency. You can configure the file name and location of the snapshot control file. A snapshot control file is not needed for an image copy.
In an Oracle Real Application Clusters (Oracle RAC) environment, the following restrictions apply:
■ The snapshot control file location must be on shared storage, that is, storage that is accessible by all Oracle RAC instances.
■ The destination of an image copy of the current control file must be shared storage.
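As a sketch, enabling autobackups and naming the snapshot control file might look like this (the path is illustrative and, for Oracle RAC, should be on shared storage):
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/snapcf_cdb.f';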
CREATE RESTORE POINT before_upgrade;
CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;
LIST RESTORE POINT restore_point_name;
LIST RESTORE POINT ALL;
SELECT NAME, SCN, TIME, DATABASE_INCARNATION#, GUARANTEE_FLASHBACK_DATABASE,STORAGE_SIZE FROM V$RESTORE_POINT;
DROP RESTORE POINT before_app_upgrade;
ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=4320; # 3 days
ALTER DATABASE FLASHBACK ON;
ALTER TABLESPACE tbs_3 FLASHBACK OFF;
If 'flashback buf free by RVWR' is the top wait event, then you know that Oracle Database cannot write flashback logs very quickly.
The V$FLASHBACK_DATABASE_STAT view shows the bytes of flashback data logged by the database. Each row in the view shows the statistics accumulated (typically over the course of an hour). The FLASHBACK_DATA and REDO_DATA columns describe bytes of flashback data and redo data written respectively during the time interval, while the DB_DATA column describes bytes of data blocks read and written. The columns FLASHBACK_DATA and REDO_DATA correspond to sequential writes, whereas DB_DATA column corresponds to random reads and writes.
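For example (a simple sketch):
SELECT begin_time, end_time, flashback_data, db_data, redo_data
FROM V$FLASHBACK_DATABASE_STAT
ORDER BY begin_time;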
Because of the difference between sequential I/O and random I/O, a better indication of I/O overhead is the number of I/O operations issued for flashback logs. The V$SYSSTAT statistics shown can tell you the number of I/O operations that your instance has issued for various purposes.
Statistic | Description |
---|---|
Physical write I/O requests | The number of write operations issued for writing data blocks |
Physical read I/O requests | The number of read operations issued for reading data blocks |
Redo writes | The number of write operations issued for writing to the redo log |
Flashback log writes | The number of write operations issued for writing to flashback logs |
Flashback log write bytes | Total size in bytes of flashback database data written from this instance |
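A sketch of pulling the flashback-related counters (exact statistic names can vary by release; check V$STATNAME on your version):
SELECT name, value FROM V$SYSSTAT WHERE name LIKE 'flashback log%';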
When flashback is enabled or when there are guaranteed restore points, the background process RVWR writes flashback data to flashback database logs in the fast recovery area.
If RVWR encounters an I/O error, then the following behavior is expected:
■ If there are any guaranteed restore points defined, then the instance fails when RVWR encounters I/O errors.
■ If no guaranteed restore points are defined, then the instance remains unaffected when RVWR encounters I/O errors. Note that on a primary database, Oracle Database automatically disables Flashback Database while the database is open; all existing transactions and queries proceed unaffected. This behavior is expected for both single-instance and Oracle RAC databases.
You can use a stored script as an alternative to a command file for managing frequently used sequences of RMAN commands. The script is stored in the recovery catalog rather than on the file system.
1.Start RMAN and connect to a target database and recovery catalog (if used).
2.Run the CREATE SCRIPT command.
CREATE SCRIPT full_backup
{
BACKUP DATABASE PLUS ARCHIVELOG;
DELETE OBSOLETE;
}
For a global script, the syntax is similar:
CREATE GLOBAL SCRIPT global_full_backup
{
BACKUP DATABASE PLUS ARCHIVELOG;
DELETE OBSOLETE;
}
Optionally, you can provide a COMMENT with descriptive information:
CREATE GLOBAL SCRIPT global_full_backup
COMMENT 'use only with ARCHIVELOG mode databases'
{
BACKUP DATABASE PLUS ARCHIVELOG;
DELETE OBSOLETE;
}
You can also create a script by reading its contents from a text file. The file must begin with a left brace ({) character, contain a series of commands valid within a RUN block, and end with a right brace (}) character. Otherwise, a syntax error is signalled, just as if the commands were entered at the keyboard.
CREATE SCRIPT full_backup FROM FILE '/tmp/my_script_file.txt';
3.Examine the output.
If no errors are displayed, then RMAN successfully created the script and stored it in the recovery catalog.
To update stored scripts, use the REPLACE SCRIPT command. If you are replacing a local script, then you must be connected to the target database that you connected to when you created the script. If the script does not exist, then RMAN creates it.
To replace a stored script:
REPLACE SCRIPT full_backup
{
BACKUP DATABASE PLUS ARCHIVELOG;
}
You can update global scripts by specifying the GLOBAL keyword as follows:
REPLACE GLOBAL SCRIPT global_full_backup
COMMENT 'A script for full backup to be used with any database'
{
BACKUP AS BACKUPSET DATABASE PLUS ARCHIVELOG;
}
As with CREATE SCRIPT, you can update a local or global stored script from a text file with the following form of the command:
REPLACE GLOBAL SCRIPT global_full_backup FROM FILE '/tmp/my_script_file.txt';
Use the EXECUTE SCRIPT command to run a stored script. If GLOBAL is specified, then a global script with this name must exist in the recovery catalog; otherwise, RMAN returns error RMAN-06004. If GLOBAL is not specified, then RMAN searches for a local stored script defined for the current target database. If no local script with this name is found, then RMAN searches for a global script by the same name and executes it if one is found.
If needed, use SHOW to examine your configured channels.
Your script uses the automatic channels configured at the time you execute the script. Use ALLOCATE CHANNEL commands in the script if you must override the configured channels. Because of the RUN block, if an RMAN command in the script fails, subsequent RMAN commands in the script do not execute.
RUN
{
EXECUTE SCRIPT full_backup;
}
The preceding command invokes a local script if one exists with the name specified. If no local script is found, but there is a global script with the name specified, then RMAN executes the global script.
You can also use EXECUTE GLOBAL SCRIPT to control which script is invoked if a local and a global script have the same name. If there is no local script called global_full_backup, the following two commands have the same effect:
RUN
{
EXECUTE GLOBAL SCRIPT global_full_backup;
}
RUN
{
EXECUTE SCRIPT global_full_backup;
}
You can specify substitution variables in the CREATE SCRIPT command. When you start RMAN on the command line, the USING clause specifies one or more values for use in substitution variables in a command file. As in SQL*Plus, &1 indicates where to place the first value, &2 indicates where to place the second value, and so on.
Create a command file that contains a CREATE SCRIPT statement with substitution variables for values that must be dynamically updated.
The following example uses substitution variables for the name of the tape set, for a string in the FORMAT specification, and for the name of the restore point.
CREATE SCRIPT quarterly {
ALLOCATE CHANNEL c1
DEVICE TYPE sbt
PARMS 'ENV=(OB_MEDIA_FAMILY=&1)';
BACKUP
TAG &2
FORMAT '/disk2/bck/&1%U.bck'
KEEP FOREVER
RESTORE POINT &3
DATABASE;
}
Connect RMAN to a target database (which must be mounted or open) and recovery catalog, specifying the initial values for the recovery catalog script.
For example, enter the following command:
# rman TARGET / CATALOG rco@catdb USING arc_backup bck0906 FY06Q3
A recovery catalog is required for KEEP FOREVER, but is not required for any other KEEP option.
Run the command file created in the first step to create the stored script.
For example, run the /tmp/catscript.rman command file as follows:
RMAN> @/tmp/catscript.rman
This step creates but does not execute the stored script.
Every quarter, execute the stored script, passing values for the substitution variables.
The following example executes the recovery catalog script named quarterly. The example specifies arc_backup as the name of the media family (set of tapes), bck1206 as part of the FORMAT string and FY06Q4 as the name of the restore point.
RUN
{
EXECUTE SCRIPT quarterly
USING arc_backup
bck1206
FY06Q4;
}
PRINT SCRIPT full_backup;
PRINT SCRIPT full_backup TO FILE '/tmp/my_script_file.txt';
PRINT GLOBAL SCRIPT global_full_backup;
PRINT GLOBAL SCRIPT global_full_backup TO FILE '/tmp/my_script_file.txt';
Use the LIST … SCRIPT NAMES command to display the names of scripts defined in the recovery catalog. LIST GLOBAL SCRIPT NAMES and LIST ALL SCRIPT NAMES are the only commands that work when RMAN is connected to a recovery catalog without connecting to a target instance; the other forms of the LIST … SCRIPT NAMES command also require a connection to a target database.
LIST SCRIPT NAMES;
LIST GLOBAL SCRIPT NAMES;
LIST ALL SCRIPT NAMES;
For each script listed, the output indicates which target database the script is defined for (or whether a script is global).
If you use DELETE SCRIPT without GLOBAL, and there is no stored script for the target database with the specified name, then RMAN looks for a global stored script by the specified name and deletes the global script if it exists. For example, suppose you enter the following command:
DELETE SCRIPT 'global_full_backup';
In this case, RMAN looks for a script global_full_backup defined for the connected target database; if it does not find one, it searches the global scripts for a script called global_full_backup and deletes that script.
To delete a global stored script, use DELETE GLOBAL SCRIPT:
DELETE GLOBAL SCRIPT 'global_full_backup';
To run the RMAN client and start a stored script in the recovery catalog on startup, use the SCRIPT argument when starting the RMAN client. For example, you could enter the following command to execute script /tmp/fbkp.cmd:
% rman TARGET / CATALOG rco@catdb SCRIPT '/tmp/fbkp.cmd';
You must connect to the recovery catalog, which contains the stored script, and the target database, to which the script applies, when starting the RMAN client.
If local and global stored scripts are defined with the same name, then RMAN always executes the local script.
Â
CATALOG DATAFILECOPY '/disk1/old_datafiles/01_01_2003/users01.dbf';
CATALOG ARCHIVELOG '/disk1/arch_logs/archive1_731.dbf', '/disk1/arch_logs/archive1_732.dbf';
CATALOG BACKUPPIECE '/disk1/backups/backup_820.bkp';
CATALOG START WITH '/disk1/backups/';
Typical Recovery Catalog Space Requirements for 1 Year
Type of Space | Space Requirement |
---|---|
SYSTEM tablespace | 90 MB |
Temp tablespace | 5 MB |
Rollback or undo tablespace | 5 MB |
Recovery catalog tablespace | 15 MB for each database registered in the recovery catalog |
Online redo logs | 1 MB each (three groups, each with two members) |
To create the recovery catalog schema in the recovery catalog database:
Start SQL*Plus and connect with administrator privileges to the database containing the recovery catalog. In this example, the database is catdb.
Create a user and schema for the recovery catalog. For example, you could enter the following SQL statement:
CREATE USER rco IDENTIFIED BY password TEMPORARY TABLESPACE temp DEFAULT TABLESPACE tools QUOTA UNLIMITED ON tools;
Grant the RECOVERY_CATALOG_OWNER role to the schema owner. This role provides the user with all privileges required to maintain and query the recovery catalog.
GRANT RECOVERY_CATALOG_OWNER TO rco;
To create the recovery catalog:
Start RMAN and connect to the database that will contain the catalog. Connect to the database as the recovery catalog owner.
Run the CREATE CATALOG command to create the catalog. The creation of the catalog can take several minutes. If the catalog tablespace is this user's default tablespace, then you can run the following command:
RMAN> CREATE CATALOG;
You can specify the tablespace name for the catalog in the CREATE CATALOG command. For example:
RMAN> CREATE CATALOG TABLESPACE cat_tbs;
You can check the results by using SQL*Plus to query the recovery catalog to see which tables were created:
SQL> SELECT TABLE_NAME FROM USER_TABLES;
In a Data Guard environment, the primary and standby databases share the same DBID and database name. To be eligible for registration in the recovery catalog, each database in the Data Guard environment must have a different DB_UNIQUE_NAME value. The DB_UNIQUE_NAME parameter for a database is set in its initialization parameter file.
If you use RMAN in a Data Guard environment, then you can use the REGISTER DATABASE command only for the primary database. You can use the following techniques to register a standby database in the recovery catalog:
■ When you connect to a standby database as TARGET, RMAN automatically registers the database in the recovery catalog.
■ When you run the CONFIGURE DB_UNIQUE_NAME command for a standby database that is not known to the recovery catalog, RMAN automatically registers this standby database if its primary database is registered.
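A sketch of the second technique, assuming a standby whose DB_UNIQUE_NAME is standby1 and which is reachable through the net service name standby1_cs:
CONFIGURE DB_UNIQUE_NAME 'standby1' CONNECT IDENTIFIER 'standby1_cs';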
Start RMAN and connect to a target database and recovery catalog. The recovery catalog database must be open.
# rman TARGET / CATALOG rco@catdb;
If the target database is not mounted, then mount or open it:
STARTUP MOUNT;
Register the target database in the connected recovery catalog:
REGISTER DATABASE;
RMAN creates rows in the catalog tables to contain information about the target database, then copies all pertinent data about the target database from the control file into the catalog, synchronizing the catalog with the control file.
Verify that the registration was successful by running REPORT SCHEMA:
REPORT SCHEMA;
Report of database schema for database with db_unique_name TRGT
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- ---------- ------- -------------------
1    307200   SYSTEM     NO      /oracle/oradata/trgt/system01.dbf
2    20480    UNDOTBS    YES     /oracle/oradata/trgt/undotbs01.dbf
3    10240    CWMLITE    NO      /oracle/oradata/trgt/cwmlite01.dbf
4    10240    DRSYS      NO      /oracle/oradata/trgt/drsys01.dbf
5    10240    EXAMPLE    NO      /oracle/oradata/trgt/example01.dbf
6    10240    INDX       NO      /oracle/oradata/trgt/indx01.dbf
7    10240    TOOLS      NO      /oracle/oradata/trgt/tools01.dbf
8    10240    USERS      NO      /oracle/oradata/trgt/users01.dbf
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- ---------- ----------- --------------------
1    200      TEMP       32767       /oracle/oradata/trgt/tbs_tmp.dbf
When backing up the recovery catalog database, you can use RMAN to make the backups. Start RMAN with the NOCATALOG option so that the repository for RMAN is the control file in the catalog database.
Follow these guidelines when developing an RMAN backup strategy for the recovery catalog database:
■ Run the recovery catalog database in ARCHIVELOG mode so that you can do point-in-time recovery if needed.
■ Set the retention policy to a REDUNDANCY value greater than 1.
■ Back up the database to two separate media (for example, disk and tape).
■ Run BACKUP DATABASE PLUS ARCHIVELOG at regular intervals, to a media manager if available, or just to disk.
■ Do not use another recovery catalog as the repository for the backups.
■ Configure the control file autobackup feature to ON.
With this strategy, the control file autobackup feature ensures that the recovery catalog database can always be recovered, so long as the control file autobackup is available. A sketch of the corresponding RMAN settings follows.
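A minimal sketch, run while connected to the catalog database with NOCATALOG (the REDUNDANCY value of 2 is illustrative):
CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
BACKUP DATABASE PLUS ARCHIVELOG;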
Every RMAN recovery catalog starting with Oracle Database 11g supports virtual private catalogs, but they are not used unless explicitly created. There is no restriction on the number of virtual private catalogs that can be created beneath one recovery catalog. Each virtual private catalog is owned by a database schema user different from the user who owns the recovery catalog.
After you set up a virtual private catalog user, the administrator for the recovery catalog grants each virtual private catalog owner the privilege to use that catalog for one or more databases that are currently registered in the recovery catalog. The administrator of the recovery catalog can also grant the privilege to register new databases while using a virtual private catalog.
Creating and Managing Virtual Private Catalogs with Oracle Database 12c Release 1 (12.1.0.2)
This section assumes that you created the base recovery catalog.
Assume that the following databases are registered in the base recovery catalog: prod1, prod2, and prod3. The database user who owns the base recovery catalog is rco. You want to create database user vpc1 and grant this user access privileges only to prod1 and prod2. Next, you want to connect to the virtual private catalog owned by vpc1, register the database prod1 with the virtual private catalog, and store backup metadata for prod1 in the virtual private catalog.
1.Start SQL*Plus and connect to the recovery catalog database with administrator privileges.
2.Create the user who will own the virtual private catalog.
For example, if you want database user vpc1 to own the virtual private catalog, then execute the following command (replacing password with a user-defined password):
SQL> CREATE USER vpc1 IDENTIFIED BY password DEFAULT TABLESPACE vpcusers QUOTA UNLIMITED ON vpcusers;
3.Grant the CREATE SESSION privilege to the user and then exit SQL*Plus.
The following example grants the CREATE SESSION privilege to user vpc1:
SQL> GRANT CREATE SESSION TO vpc1;
4.Start RMAN and connect to the recovery catalog database as the base recovery catalog owner (not the virtual private catalog owner).
The following example connects to the base recovery catalog as rco:
RMAN> CONNECT CATALOG rco@catdb;
5.Grant desired privileges to the virtual private catalog owner.
The following example grants user vpc1 access to the metadata for prod1 and prod2 (but not prod3):
RMAN> GRANT CATALOG FOR DATABASE prod1 TO vpc1;
RMAN> GRANT CATALOG FOR DATABASE prod2 TO vpc1;
You can also use a DBID instead of a database name. The virtual private catalog user does not have access to the metadata for any other databases registered in the recovery catalog.
You can also grant the user the ability to register new target databases in the recovery catalog. For example:
RMAN> GRANT REGISTER DATABASE TO vpc1;
1.Start RMAN and connect to the recovery catalog database as the virtual private catalog owner (not the base recovery catalog owner). Connect to the database that you want to register as TARGET.
RMAN> CONNECT TARGET /
RMAN> CONNECT CATALOG vpc1@catdb;
2.Register the database whose metadata must be stored in the virtual private catalog.
The following example registers the database with the virtual private catalog owner vpc1.
RMAN> REGISTER DATABASE;
3.Back up the database using the BACKUP command with the required clauses.
Metadata related to the backup is stored in the virtual private catalog.
The RMAN recovery catalog is created and managed using Oracle Virtual Private Database (VPD). This provides better performance and scalability when a large number of virtual private catalogs are created.
RMAN provides backup and recovery of multitenant container databases (CDBs), which are introduced in Oracle Database 12c Release 1 (12.1). This support includes backup and point-in-time recovery of specified pluggable databases (PDBs).
The SYSBACKUP administrative privilege encompasses the permissions required for backup and recovery, including the ability to connect to a closed database. System administrators can grant SYSBACKUP instead of SYSDBA to users who perform backup and recovery, thus reducing the proliferation of the SYSDBA privilege. In contrast to SYSDBA, SYSBACKUP does not include data access privileges such as SELECT ANY TABLE.
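For example (a sketch; the user name bkpadm and service name sales are illustrative):
SQL> GRANT SYSBACKUP TO bkpadm;
RMAN> CONNECT TARGET "bkpadm@sales AS SYSBACKUP"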
Storage Snapshot Optimization enables you to use third-party technologies to take a storage snapshot of your database without putting the database in BACKUP mode. You can then use the snapshot to recover all or part of the database.
You can now issue most SQL commands in RMAN without preceding the command with the SQL keyword. For a few commands that exist in both RMAN and SQL and have very different uses, you can specify the SQL keyword to eliminate ambiguity. You no longer need to enclose the SQL command in quotes, which greatly simplifies the syntax when the SQL command itself requires quotation marks. The SQL ALTER command replaces the RMAN command.
The new RMAN DESCRIBE command provides the functionality of the SQL*Plus DESCRIBE command.
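For example, both of the following now work directly at the RMAN prompt (a sketch; the tablespace, file name, and table are illustrative):
RMAN> ALTER TABLESPACE users ADD DATAFILE '/disk1/users02.dbf' SIZE 100M;
RMAN> DESCRIBE hr.employees;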
RMAN provides multisection backup support for incremental backups and image copies. Wherever possible, unused block compression and Block Change Tracking are used in conjunction with multisection incremental backups. This improves backup and restore performance.
You can now restore or recover a database, data files, tablespaces, or control files by using backup sets from a physical standby database. RMAN transfers the backup sets, over the network, to the destination host. This is useful in a Data Guard environment when you want to synchronize the standby and primary databases.
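A sketch, assuming the physical standby is reachable through the net service name standby_cs:
RMAN> RESTORE DATABASE FROM SERVICE standby_cs;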
RMAN can now perform active database duplication using backup sets. When sufficient auxiliary channels are allocated, the auxiliary instance connects to the target instance and retrieves the backup sets over the network, thus reducing the processing load on the target instance. Unused block compression can be used during the duplication process, thus reducing the size of backups transported over the network. You can specify the binary compression level to be used. You can also encrypt backups and use multisection backups while performing active database duplication.
You can transport data across platforms by using full and incremental backup sets. Incremental backups can reduce overall application downtime during cross-platform data migration.
RMAN can recover tables and table partitions to a specified point in time from previously-created RMAN backups.
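A sketch of such a recovery (the table name, SCN, auxiliary destination, and remapped name are illustrative):
RMAN> RECOVER TABLE hr.employees
        UNTIL SCN 1853289
        AUXILIARY DESTINATION '/u01/aux'
        REMAP TABLE hr.employees:employees_recovered;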
Unified auditing consolidates all the Oracle Database audit records into one single audit trail. To use unified auditing, you must first upgrade your database to Oracle Database 12c Release 1 (12.1) and then migrate your database to use unified auditing.
You can specify that the duplicate database must not be opened using RESETLOGS after it is created. You may prefer not to open the duplicate database if you want to change the initialization parameters of the duplicate database or if opening the duplicate database may start services in the duplicate database that will conflict with the source database.
A database management system (DBMS) is software that controls the storage, organization, and retrieval of data. Typically, a DBMS has the following elements:
Kernel code
This code manages memory and storage for the DBMS.
Repository of metadata
This repository is usually called a data dictionary.
Query language
This language enables applications to access the data.
The relational model is the basis for a relational database management system (RDBMS). Essentially, an RDBMS moves data into a database, stores the data, and retrieves it so that applications can manipulate it.
A requirement of a multiuser RDBMS is the control of data concurrency, which is the simultaneous access of the same data by multiple users. Without concurrency controls, users could change data improperly, compromising data integrity. For example, one user could update a row while a different user simultaneously updates it.
If multiple users access the same data, then one way of managing concurrency is to make users wait. However, the goal of a DBMS is to reduce wait time so it is either nonexistent or negligible. All SQL statements that modify data must proceed with as little interference as possible. Destructive interactions, which are interactions that incorrectly update data or alter underlying data structures, must be avoided.
Oracle Database uses locks to control concurrent access to data. A lock is a mechanism that prevents destructive interaction between transactions accessing a shared resource. Locks help ensure data integrity while allowing maximum concurrent access to data.
In Oracle Database, each user must see a consistent view of the data, including visible changes made by a user’s own transactions and committed transactions of other users. For example, the database must prevent dirty reads, which occur when one transaction sees uncommitted changes made by another concurrent transaction.
Oracle Database always enforces statement-level read consistency, which guarantees that the data that a single query returns is committed and consistent for a single point in time. Depending on the transaction isolation level, this point is the time at which the statement was opened or the time the transaction began. The Flashback Query feature enables you to specify this point in time explicitly.
The database can also provide read consistency to all queries in a transaction, known as transaction-level read consistency. In this case, each statement in a transaction sees data from the same point in time, which is the time at which the transaction began.
The physical database structures are the files that store the data. When you execute the SQL command CREATE DATABASE, the following files are created:
Data files
Every Oracle database has one or more physical data files, which contain all the database data. The data of logical database structures, such as tables and indexes, is physically stored in the data files.
Control files
Every Oracle database has a control file. A control file contains metadata specifying the physical structure of the database, including the database name and the names and locations of the database files.
Online redo log files
Every Oracle Database has an online redo log, which is a set of two or more online redo log files. An online redo log is made up of redo entries (also called redo records), which record all changes made to data.
Many other files are important for the functioning of an Oracle database server. These include parameter files and networking files. Backup files and archived redo log files are offline files important for backup and recovery.
This section discusses logical storage structures. The following logical storage structures enable Oracle Database to have fine-grained control of disk space use:
Data blocks
At the finest level of granularity, Oracle Database data is stored in data blocks. One data block corresponds to a specific number of bytes on disk.
Extents
An extent is a specific number of logically contiguous data blocks, obtained in a single allocation, used to store a specific type of information.
Segments
A segment is a set of extents allocated for a user object (for example, a table or index), undo data, or temporary data.
Tablespaces
A database is divided into logical storage units called tablespaces. A tablespace is the logical container for a segment. Each tablespace consists of at least one data file.
Oracle Database creates and uses memory structures for program code, data shared among users, and private data areas for each connected user. The following memory structures are associated with an instance:
System Global Area (SGA)
The SGA is a group of shared memory structures that contain data and control information for one database instance. Examples of SGA components include the database buffer cache and shared SQL areas. Starting in Oracle Database 12c Release 1 (12.1.0.2), the SGA can contain an optional In-Memory Column Store (IM column store), which enables data to be populated in memory in a columnar format.
Program Global Areas (PGA)
A PGA is a memory region that contains data and control information for a server or background process. Access to the PGA is exclusive to the process. Each server process and background process has its own PGA.
Oracle Database 12c supports the multitenant architecture, which enables an Oracle database to be a multitenant container database (CDB). A CDB is a single physical database that contains zero, one, or many user-created pluggable databases. A pluggable database (PDB) is a portable collection of schemas, schema objects, and nonschema objects that appears to an Oracle Net client as a non-CDB. A non-CDB is a traditional Oracle database that cannot contain PDBs.
Starting in Oracle Database 12c, you must create a database as either a CDB or non-CDB. You can plug a non-CDB into a CDB as a PDB. To move a PDB to a non-CDB, you must use Oracle Data Pump.
After consolidation, the same data is stored in the CDB named MYCDB.
Physically, MYCDB is an Oracle database. MYCDB has one database instance (although multiple instances are possible in Oracle Real Application Clusters) and one set of database files, just like a non-CDB.
MYCDB contains two PDBs: hrpdb and salespdb. These PDBs appear to their respective applications just as they did before database consolidation. To administer the CDB itself or any PDB within it, a CDB administrator can connect to the root container, which is a collection of schemas, schema objects, and nonschema objects to which all PDBs belong.
CDBs and non-CDBs have architectural differences. This manual assumes the architecture of a non-CDB unless otherwise indicated.
This document lists the major new features identified by Oracle and is not a comprehensive list of all the 12c new features; e.g. RMAN VPD for the catalog is not listed here.
(Database Administrator’s Guide).
Advanced Index Compression improves the compression ratios significantly while still providing efficient access to the index.
(Database SQL Language Reference).
Processing of large volumes of data is significantly faster
(Database Data Warehousing Guide).
Attribute clustering is a table-level directive that clusters data in close physical proximity based on the content of certain columns. This directive applies to any kind of direct path operation, such as a bulk insert or a move operation.
Storing data that logically belongs together in close physical proximity can greatly reduce the amount of data to be processed and can lead to better compression ratios.
(Database VLDB and Partitioning Guide).
In previous releases, in-memory parallel query did not work well when multiple scans contended for cache memory. This feature implements a new cache called big table cache for table scan workload.
(Database Development Guide).
Flashback Data Archive (FDA) is supported for multitenant container databases (CDBs) in this release.
(Database Administrator’s Guide).
The CONTAINERS clause is a new way of looking at multitenant container databases (CDBs). With this clause, data can be aggregated from a single identical table or view across many pluggable databases (PDBs) from the root container. The CONTAINERS clause accepts a table or view name as an input parameter that is expected to exist in all PDBs in that container. Data from a single PDB or a set of PDBs can be included with the use of CON_ID in the WHERE clause. For example:
SELECT ename FROM CONTAINERS(scott.emp) WHERE CON_ID IN (45, 49);
(Database Administrator’s Guide)
The new parameter, CREATE_FILE_DEST, allows administrators to set a default location for Oracle Managed Files (OMF) data files in the pluggable database (PDB). When not set, the PDB inherits the value from the root container.
If a file system directory is specified as the default location, then the directory must already exist; Oracle does not create it. The directory must have appropriate permissions that allow Oracle to create files in it. Oracle generates unique names for the files. A file created in this manner is an Oracle-managed file.
The CREATE_FILE_DEST parameter allows administrators to structure the PDB files independently of the multitenant container database (CDB) file destination. This feature helps administrators to plug or to unplug databases from one container to another in a shared storage environment.
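A sketch of the clause at PDB creation time (names and path are illustrative):
SQL> CREATE PLUGGABLE DATABASE devpdb ADMIN USER devadm IDENTIFIED BY password
       CREATE_FILE_DEST = '/u02/oradata/devpdb';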
(Database Administrator’s Guide).
The PDB LOGGING or NOLOGGING clause can be specified in a CREATE or ALTER PLUGGABLE DATABASE statement to set or modify the logging attribute of the pluggable database (PDB). This attribute is used to establish the logging attribute of tablespaces created within the PDB if the LOGGING clause was not specified in the CREATE TABLESPACE statement.
If a PDB LOGGING clause is not specified in the CREATE PLUGGABLE DATABASE statement, the logging attribute of the PDB defaults to LOGGING.
This new clause improves the manageability of PDBs in a multitenant container database (CDB).
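For example (a sketch; the PDB name is illustrative, and the attribute can later be changed with ALTER PLUGGABLE DATABASE):
SQL> CREATE PLUGGABLE DATABASE devpdb ADMIN USER devadm IDENTIFIED BY password NOLOGGING;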
(Database Administrator’s Guide).
An administrator can now create a clone of a pluggable database only with the data model definition. The dictionary data in the source is copied as is but all user-created table and index data from the source is discarded.
This feature enhances cloning functionality and facilitates rapid provisioning of development environments.
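For example, cloning only the data model (a sketch; names are illustrative):
SQL> CREATE PLUGGABLE DATABASE devpdb FROM prodpdb NO DATA;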
(Database Administrator’s Guide).
The new release of Oracle Multitenant fully supports remote full and snapshot clones over a database link. A non-CDB can be adopted as a pluggable database (PDB) simply by cloning it over a database link. Remote snapshot cloning is also supported across two CDBs sharing the same storage.
This feature further improves rapid provisioning of pluggable databases. Administrators can spend less time on provisioning and focus more on other innovative operations.
(Database Administrator’s Guide).
With the initialization parameter CLONEDB set to true, snapshot clones of a pluggable database are supported on any local, Network File Storage (NFS) or clustered file systems with Oracle Direct NFS (dNFS) enabled. The source of the clone must remain read-only while the target needs to be on a file system that supports sparseness.
Snapshot cloning support is now extended to other third party vendor systems.
This feature eases the requirement of specific file systems for snapshot clones of pluggable databases. With file system agnostic snapshot clones, pluggable databases can be provisioned even faster than before.
(Database Administrator’s Guide).
The STANDBYS clause allows a user to specify whether a pluggable database (PDB) needs to be a part of the existing standby databases. The STANDBYS clause takes two values: ALL (the default) and NONE. When a PDB is created with STANDBYS=NONE, the PDB's data files are offlined and marked as UNNAMED on all of the standby databases. Any new standby database instantiated after the PDB has been created must explicitly disable the PDB for recovery to exclude it from the standby database. If a PDB needs to be enabled on a standby database after it was excluded there, the PDB data files must be copied to the standby database from the primary database, and the control file must be updated to reflect their paths. A user then connects to the PDB on that standby database and issues ALTER PLUGGABLE DATABASE ENABLE RECOVERY, which automatically onlines all of the data files belonging to the PDB.
This feature increases consolidation density by supporting different service-level agreements (SLAs) in the same multitenant container database (CDB).
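For example (a sketch; names are illustrative):
SQL> CREATE PLUGGABLE DATABASE salespdb2 FROM salespdb STANDBYS=NONE;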
(Database Administrator’s Guide).
The SAVE STATE clause and DISCARD STATE clause are now available with the ALTER PLUGGABLE DATABASE SQL statement to preserve the open mode of a pluggable database (PDB) across multitenant container database (CDB) restarts.
If SAVE STATE is specified, the open mode of the specified PDB is preserved across a CDB restart on the instances specified in the INSTANCES clause. Similarly, with the DISCARD STATE clause, the open mode of the specified PDB is no longer preserved.
These new SQL clauses provide the flexibility to choose the automatic startup of application PDBs when a CDB undergoes a restart. This feature enhances granular control and effectively reduces downtime of an application in planned or unplanned outages.
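For example (a sketch; hrpdb is illustrative):
SQL> ALTER PLUGGABLE DATABASE hrpdb SAVE STATE;
SQL> ALTER PLUGGABLE DATABASE hrpdb DISCARD STATE;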
(Database Administrator’s Guide).
The USER_TABLESPACES clause allows a user to specify which tablespaces need to be available in the new pluggable database (PDB). An example of the application of this clause is a case where a customer is migrating from a non-multitenant container database (CDB) where schema-based consolidation was used to separate data belonging to multiple tenants to a CDB where data belonging to each tenant is kept in a separate PDB. The USER_TABLESPACES clause helps to create one PDB for each schema in the non-CDB.
This powerful clause helps convert cumbersome schema-based consolidations to more agile and efficient pluggable databases.
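A sketch of plugging in a non-CDB while keeping only one schema's tablespaces (the manifest path and tablespace name are illustrative):
SQL> CREATE PLUGGABLE DATABASE hr_pdb USING '/tmp/noncdb.xml'
       USER_TABLESPACES=('HR_DATA') COPY;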
(Database Performance Tuning Guide).
Full database caching can be used to cache the entire database in memory.
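For example, with the database mounted and a buffer cache large enough to hold the entire database (a sketch):
SQL> ALTER DATABASE FORCE FULL DATABASE CACHING;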
(Database SQL Tuning Guide).
In-Memory Aggregation optimizes queries that join dimension tables to fact tables and aggregate data (for example, star queries) using CPU and memory efficient KEY VECTOR and VECTOR GROUP BY aggregation operations. These operations may be automatically chosen by the SQL optimizer based on cost estimates.
(Database Administrator’s Guide).
In-Memory Column Store enables objects (tables or partitions) to be stored in memory in a columnar format. The columnar format enables scans, joins and aggregates to perform much faster than the traditional on-disk formats for analytical style queries.
(XML DB Developer’s Guide).
This feature adds support for storing, querying and indexing JavaScript Object Notation (JSON) data in Oracle Database and allows the database to enforce that JSON stored in the database conforms to the JSON rules. This feature also allows JSON data to be queried using a path-based notation and adds new operators that allow JSON path-based queries to be integrated into SQL operations.
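A sketch of the feature (the table and the JSON path are illustrative):
SQL> CREATE TABLE orders (id NUMBER PRIMARY KEY, doc CLOB CONSTRAINT doc_is_json CHECK (doc IS JSON));
SQL> SELECT JSON_VALUE(doc, '$.customer.name') FROM orders;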
(Database Security Guide).
The new database parameter, DBFIPS_140, provides the ability to turn on and off the Federal Information Processing Standards (FIPS) 140 cryptographic processing mode inside the Oracle database.
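A sketch of enabling it (set in the spfile and restart; check the scope rules for your exact release):
SQL> ALTER SYSTEM SET DBFIPS_140 = TRUE SCOPE=SPFILE;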
(Clusterware Administration and Deployment Guide).
Rapid Home Provisioning allows deploying of Oracle homes based on gold images stored in a catalog of pre-created homes.
(Data Warehousing Guide).
For full table access, zone maps allow I/O pruning of data based on the physical location of the data on disk, acting like an anti-index.
Source for lots of TSQL:
http://ola.hallengren.com
sf_date(,6,10)
CREATE FUNCTION [dbo].[SF_DATE]( @pd_date DATETIME, @pn_hours SMALLINT, @pn_mins SMALLINT) RETURNS datetime
AS
BEGIN
-- Treat the SQL Server "zero" date (year 1900) as NULL
IF ( Year(@pd_date) = 1900 ) RETURN( NULL )
-- Rebuild the datetime from the date part (dd/mm/yyyy, style 103) plus the supplied hours and minutes
RETURN( CONVERT(datetime, CONVERT(varchar(10), @pd_date, 103) + ' ' + CONVERT(varchar, @pn_hours) + ':' + CONVERT(varchar, @pn_mins), 103) )
END
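For example, a call returning today's date at 14:30:
SELECT dbo.SF_DATE(GETDATE(), 14, 30);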
use dbname
select *, fileproperty(name, 'SpaceUsed') as Used From dbo.sysfiles
GO
DECLARE @DBInfo TABLE
(
ServerName VARCHAR(100),
DatabaseName VARCHAR(100),
FileSizeMB INT,
LogicalFileName sysname,
PhysicalFileName NVARCHAR(520),
Status sysname,
Updateability sysname,
RecoveryMode sysname,
FreeSpaceMB INT,
FreeSpacePct VARCHAR(7),
FreeSpacePages INT,
PollDate datetime
)
DECLARE @command VARCHAR(5000)
SELECT @command = 'Use [' + '?' + '] SELECT @@servername as ServerName, ' + '''' + '?' + '''' + ' AS DatabaseName, CAST(sysfiles.size/128.0 AS int) AS FileSize, sysfiles.name AS LogicalFileName, sysfiles.filename AS PhysicalFileName, CONVERT(sysname,DatabasePropertyEx(''?'',''Status'')) AS Status, CONVERT(sysname,DatabasePropertyEx(''?'',''Updateability'')) AS Updateability, CONVERT(sysname,DatabasePropertyEx(''?'',''Recovery'')) AS RecoveryMode, CAST(sysfiles.size/128.0 - CAST(FILEPROPERTY(sysfiles.name, ' + '''' + 'SpaceUsed' + '''' + ' ) AS int)/128.0 AS int) AS FreeSpaceMB, CAST(100 * (CAST (((sysfiles.size/128.0 - CAST(FILEPROPERTY(sysfiles.name,' + '''' + 'SpaceUsed' + '''' + ' ) AS int)/128.0)/(sysfiles.size/128.0)) AS decimal(4,2))) AS varchar(8)) + ' + '''' + '%' + '''' + ' AS FreeSpacePct, GETDATE() as PollDate FROM dbo.sysfiles'
INSERT INTO @DBInfo
(ServerName, DatabaseName, FileSizeMB, LogicalFileName, PhysicalFileName, Status, Updateability, RecoveryMode, FreeSpaceMB, FreeSpacePct, PollDate)
EXEC sp_MSForEachDB @command
SELECT ServerName, DatabaseName, FileSizeMB, LogicalFileName, PhysicalFileName, Status, Updateability, RecoveryMode, FreeSpaceMB, FreeSpacePct, PollDate
FROM @DBInfo
ORDER BY ServerName, DatabaseName
===========================================
Preferred credentials are used by Grid control (GC) monitoring pages as well as batch jobs.
Using preferred credentials is a relatively easy way to manage database passwords in GC. Although it does not actually change the passwords in the relevant databases, it does mean that each GC user only needs to know their own account and password to access their systems.
In grid control
select Setup -> Security -> Preferred Credentials
Select the "HOST" credentials line.
Select "Manage Preferred Credentials".
Select the "Normal Host credentials" line.
Select "Set"
The following popup window appears.
Using the windows domain DBATask user, enter the relevant password and create a name for the credential.
Select "Save"
Select ALL the hosts in the list. Select "Set"
Use the named credential created earlier. Select "Save"
Modify any unix systems to use oracle username/password.
For each database one can set "Normal database credentials", "SYSDBA credentials" and "Database Host credentials".
In grid control as an administrator
Select "Setup -> Security -> Administrators"
Select the "Create" button
Enter the User name and password
Select the "Default" profile
Enter the E-mail address (used for alerting).
Select "Super Administrator"
Select "Next"
Select "Next"
Select "Finish"
In grid control as an administrator
Select "Setup -> Security -> Administrators"
Select the "Create" button
Enter the User name and password
Select the "Default" profile
Enter the E-mail address (used for alerting).
Do NOT select "Super Administrator"
Select "Next"
Add the role "EM_ALL_VIEWER" to the default roles "EM_USER" and "PUBLIC" and then select "Next"
Select the following privileges and then "Next"
· Connect to any viewable target
· View all Java Service targets
· View Any Database Service
· View any Target
· Monitor Enterprise Manager
· View logs for Java Services
· View any Resource Provider
· View Cloud Home
Select "Next"
Select "Finish"
In grid control as an administrator
Select "Setup -> Incidents -> Incident Rules"
Select the rule set to be associated with the user.
In this case I have selected the default "Incident Management rule set for all targets" as we have not yet created any specific rule sets.
Select "E-mail another Administrator"
Select the search icon.
Select the radio button next to the user to be e-mailed.
Select "Select"
Select OK
The screen confirms that the user has been set up.
Select "Enterprise -> Monitoring -> Monitoring Templates"
Select the template to be modified and then select "Edit"
Select the "Metric Thresholds" tab.
Find the metric to be modified.
Select the '3 pencils' icon at the end of the line to edit the metric.
Select "Edit"
To disable the metric completely set all thresholds to blanks.
Here we have modified the number of occurrences to 5.
Select "Continue"
Select "Continue"
Select "Ok"
Please remember that where metrics have been modified for an individual target, applying the template may overwrite those settings.
Select "Enterprise -> Monitoring -> Monitoring Templates"
Select the template and then select "Apply…"
Ensure the radio button for "Template will only override metrics that are common to both template and target" is selected.
Select "Add"
Select the relevant targets and then select "Select"
Select "Ok"
Success screen.
Select a target; in this case we will use the host BTEMON1P as an example and modify the disk space metric such that the D drive can be 95% full before any warnings are issued.
Select Host -> Monitoring -> Metric and Collection Settings
Select a metric, e.g. "Filesystems -> Filesystem Space Available(%)"
Select the Pencil icon at the end of the line.
Select "Add"
Select the 'search' symbol on the line that has just been added and select the D drive.
Select "Select"
Remove the "\" after the drive letter (looks like a bug with windows setup).
Set the new metric thresholds for Warning and Critical, e.g. 5% and 4% free.
Select "Continue"
The modified metric is now visible.
Select "Ok" to apply the changes.
In grid control as an administrator
Select "Enterprise -> Compliance -> Library"
Select the "Compliance Standards" tab.
Near the bottom of the list select the line with "Security Recommendations for Oracle Products"
Select "Override Target Type Settings…"
For "Compliance Standard Evaluation Status" select "Disabled"
Select "Ok"
SR 3-11030109811 : GC 12c OMS unable to obtain metrics from newly installed agent
Agent is up and running.
No data is uploaded to grid control.
Grid control can see the host but nothing else; i.e. the databases, listeners etc. cannot be discovered.
The analysis was that the internal targets did not get added when the agent was installed. Manually adding the targets worked.
$AGENT_INST/bin/emctl status agent
$AGENT_INST/bin/emctl secure agent
$AGENT_INST/bin/emctl pingOMS
$AGENT_INST/bin/emctl upload agent
$AGENT_INST/bin/emctl config agent addinternaltargets
$AGENT_INST/bin/emctl status agent
SR 3-11063183161 : After adding agents on numerous hosts all listeners are shown as down
Agent is up and running.
Listener is up and running.
Grid control believes the agent is down.
The issue was that the hostname used as Machine Name under Grid Control “Oracle Listener” -> “Target Setup” -> “Monitoring Configuration” is linked to an ipv6 address based on the output of the ping command:
On the listener host:
# ping btemon2p.btexpedite.co.uk
Pinging BTEMON2P.btexpedite.co.uk [fe80::d79:a032:5bfc:3688%11] with 32 bytes of data:
Reply from fe80::d79:a032:5bfc:3688%11: time<1ms
Reply from fe80::d79:a032:5bfc:3688%11: time<1ms
Reply from fe80::d79:a032:5bfc:3688%11: time<1ms
Reply from fe80::d79:a032:5bfc:3688%11: time<1ms
Changing the Machine Name of the Listener in Grid Control “Oracle Listener” -> “Target Setup” -> “Monitoring Configuration” to the ip on which it is registered overcame the issue.
Installing Oracle Management Agent in Silent Mode
Configuring Enterprise Manager for Firewalls
This document is a guide to deploying an Oracle Management Agent to a Windows host, using Oracle Enterprise Manager Cloud Control 12c (EM12c) Release 5.
From EM12c Release 4 onwards, Oracle Management Agents can now be deployed on Windows hosts using PsExec from another Windows server, which acts as a staging host.
Â
Download the PsExec utility onto the Microsoft Windows staging host server from the following URL:
http://technet.microsoft.com/en-us/sysinternals/bb897553.aspx
export OMS_HOME=/u01/app/oracle/middleware/em12cr5/oms
$OMS_HOME/bin/emcli login -username=sysman -password=******
Run the following command from the OMS host to obtain the list of platforms for which the Management Agent software is downloaded and applied:
$OMS_HOME/bin/emcli get_supported_platforms
Create a zip image for the Management Agent software:
export ZIP_LOC=/usr/bin/zip
$OMS_HOME/bin/emcli get_agentimage -destination=/swlib/win_x64_agent -platform="Microsoft Windows x64 (64-bit)" -version=12.1.0.5.0
=== Partition Detail ===
Space free : 6 GB
Space required : 1 GB
Check the logs at /u01/app/oracle/middleware/gc_inst/em/EMGC_OMS1/sysman/emcli/setup/.emcli/get_agentimage_2015-06-25_11-09-33-AM.log
Downloading /swlib/win_x64_agent/12.1.0.5.0_AgentCore_233.zip
File saved as /swlib/win_x64_agent/12.1.0.5.0_AgentCore_233.zip
Downloading /swlib/win_x64_agent/12.1.0.5.0_PluginsOneoffs_233.zip
File saved as /swlib/win_x64_agent/12.1.0.5.0_PluginsOneoffs_233.zip
Downloading /swlib/win_x64_agent/unzip
File saved as /swlib/win_x64_agent/unzip
Agent Image Download completed successfully.
Transfer the following file to the Windows staging server:
/swlib/win_x64_agent/12.1.0.5.0_AgentCore_233.zip
Download the agentDeployPsExec.bat script onto the staging host, from the following location:
$OMS_HOME/sysman/prov/agentpush/scripts/agentDeployPsExec.bat
Create a response file to specify the Management Agent install inputs.
Create agent response file (agent.rsp) accordingly:
# Oracle Management Agent response file
# http://docs.oracle.com/cd/E24628_01/install.121/e22624/appdx_install_agent_psexec.htm#CIHCHIGA
HOST_NAMES=.company.co.uk
USER_NAME=\GCUser
PASSWORD=******
PSEXEC_DIR=E:\EM12c\staging\PSTools
AGENT_IMAGE_PATH=E:\EM12c\staging\core\12.1.0.5.0_AgentCore_233.zip
AGENT_BASE_DIR=E:\app\oracle\agent12c
AGENT_REGISTRATION_PASSWORD=******
EM_UPLOAD_PORT=4903
OMS_HOST=
REMOTE_SYS_DIR=C:\Windows
LOG_LOC=E:\EM12c\staging\logs
ADDITIONAL_PARAMETERS=AGENT_PORT=3872:b_startAgent=true:s_agentHomeName=agent12cR5
Add the \GCUser account to the local Administrators group.
The windows firewall settings need to be modified to allow communication between the agent and the oms host.
Start the firewall gui.
Start -> Administrative Tools -> Windows Firewall with Advanced configuration
Modify the settings as shown in the following screenshots.
Right click on "Inbound Rules" and select "New Rule"
Enter the port numbers shown
Select
Enter the rule name as "Oracle Management Server to Oracle Management Agent"
Right click on "Outbound Rules" and select "New Rule"
For security purposes the sqlnet.ora file in use on the agent host may have an entry such as:
TCP.INVITED_NODES = ( node1, node2, node3 )
This entry is used to restrict oracle tcp access to the host; if this entry exists then the grid control host must be added to the invited nodes list; e.g.
TCP.INVITED_NODES = ( node1, node2, node3, OEM )
1. Log on to the Windows EM12c staging server.
2. Update the following response file to include the target host name, credentials, and an appropriate installation directory for the agent (highlighted in yellow below):
E:\EM12c\staging\core\agent.rsp
Example contents:
HOST_NAMES=
USER_NAME=\GCUser
PASSWORD=password123
PSEXEC_DIR=E:\EM12c\staging\PSTools
AGENT_IMAGE_PATH=E:\EM12c\staging\core\12.1.0.5.0_AgentCore_233.zip
AGENT_BASE_DIR=E:\app\oracle\agent12c
AGENT_REGISTRATION_PASSWORD=password123
EM_UPLOAD_PORT=4903
OMS_HOST=
REMOTE_SYS_DIR=C:\Windows
LOG_LOC=E:\EM12c\staging\logs
ADDITIONAL_PARAMETERS=AGENT_PORT=3872:b_startAgent=true:s_agentHomeName=agent12cR5
3. Open a Command Prompt Window (as Administrator) and change directory to the PsExec tools folder:
cd /d E:\EM12c\staging\PSTools
4. Test connectivity to the target server using PsExec with the \GCUser account:
psexec.exe \\ -u \GCUser -p ****** -d -c E:\EM12c\staging\PSTools\\test.bat
PsExec v2.11 - Execute processes remotely
Copyright (C) 2001-2014 Mark Russinovich
Sysinternals – www.sysinternals.com
test.bat started on with process ID 5068.
E:\EM12c\staging\PSTools>
5. Assuming the test completes without any issues, run the following command from the staging host to install the agent on the selected Windows target host:
E:\EM12c\staging\PSTools>E:\EM12c\staging\core\agentDeployPsExec.bat PROPERTIES_FILE=E:\EM12c\staging\core\agent.rsp
E:\EM12c\staging\PSTools>echo off
===================================================================
Agent deployment started on host :
Creating installation base directory …
Copying unzip to the target host …
Copying agent software to the target host …
Extracting agent software …
Installing agent ….
Performing clean up …
Agent deployment completed successfully on host :
E:\EM12c\staging\PSTools>
You should see a message saying the deployment completed successfully.
The agent target is now ready to be configured in Oracle Enterprise Manager.
Passwords with special characters
Special characters in the \GCUser password (or equivalent account) can lead to PsExec command failures. Should this occur, PsExec will produce an error like the one below:
The handle is invalid.
Connecting to …
Couldn’t access :
Account hasn’t been added to the local Administrators group on the target host
If the \GCUser account doesn’t have local administrator rights on the target server, you will get an access denied error when trying to connect via PsExec:
Couldn’t access
Access is denied.
Installing Oracle Management Agent in Silent Mode
Configuring Enterprise Manager for Firewalls
On the destination host determine the current Oracle BASE directory.
e.g. O:\Oracle
Create a temporary directory, e.g. EM12C, to hold the installation kit and log files.
e.g. O:\Oracle\EM12C
Copy the agent release kit software to the destination host and unzip the file.
i.e. copy 12.1.0.5.0_AgentCore_233.zip to O:\Oracle\EM12C
unzip 12.1.0.5.0_AgentCore_233.zip
This will create the directory O:\Oracle\EM12C\12.1.0.5.0_AgentCore_233
Inside the expanded release kit directory we will find the file agent.rsp, which needs to have the following lines added at the end:
e.g. edit O:\Oracle\EM12C\12.1.0.5.0_AgentCore_233\agent.rsp
AGENT_IMAGE_PATH=O:\Oracle\EM12C\12.1.0.5.0_AgentCore_233\12.1.0.5.0_AgentCore_233.zip
AGENT_REGISTRATION_PASSWORD=
EM_UPLOAD_PORT=4903
OMS_HOST=
REMOTE_SYS_DIR=C:\Windows
LOG_LOC=O:\Oracle\EM12C\logs
ADDITIONAL_PARAMETERS=AGENT_PORT=3872:b_startAgent=true:s_agentHomeName=agent12cR5
Change text in bold as appropriate.
Open a command window and change the default location to the installation kit directory.
cd /d O:\Oracle\EM12C\12.1.0.5.0_AgentCore_233
Execute the command to install the agent software.
agentDeploy.bat AGENT_BASE_DIR=O:\Oracle\agent12c OMS_HOST=
EM_UPLOAD_PORT=4903 AGENT_REGISTRATION_PASSWORD=
RESPONSE_FILE=O:\Oracle\EM12C\12.1.0.5.0_AgentCore_233\agent.rsp
-softwareOnly
Change text in bold as appropriate.
The windows firewall settings need to be modified to allow communication between the agent and the oms host.
Start the firewall gui.
Start -> Administrative Tools -> Windows Firewall with Advanced configuration
Modify the settings as shown in the following screenshots.
Right click on "Inbound Rules" and select "New Rule"
Enter the port numbers shown
Select
Enter the rule name as "Oracle Management Server to Oracle Management Agent"
Right click on "Outbound Rules" and select "New Rule"
Â
For security purposes the sqlnet.ora file in use on the agent host may have an entry such as:
TCP.INVITED_NODES = ( node1, node2, node3 )
This entry is used to restrict oracle tcp access to the host; if this entry exists then the grid control host must be added to the invited nodes list; e.g.
TCP.INVITED_NODES = ( node1, node2, node3, OEM )
Once the firewalls etc have been configured we can reconfigure the agent.
agentDeploy.bat AGENT_BASE_DIR=O:\Oracle\agent12c OMS_HOST=
EM_UPLOAD_PORT=4903 AGENT_REGISTRATION_PASSWORD=
RESPONSE_FILE=O:\Oracle\EM12C\12.1.0.5.0_AgentCore_233\agent.rsp
-configOnly