Wednesday, 16 July 2008

loading objects into shared pool

Pin A Cursor In Memory:

VARIABLE x REFCURSOR
BEGIN OPEN :x for
SELECT * FROM all_tables;
END;
/
--Determine address and hash value of the SQL statement

SELECT address, hash_value
FROM gv$open_cursor
WHERE sql_text LIKE '%ALL_TABLES%';

-- substitute your query results for mine, below.

exec sys.dbms_shared_pool.keep('1C5B28DC,3958201300', 'C');

Pin A Trigger In Memory:
exec dbms_shared_pool.keep('TESTTRG','R');

Pin A Package, Procedure Or Function In Memory (this is the default):
exec dbms_shared_pool.keep('PKG_TEST','P');

Pin A Sequence In Memory:
exec dbms_shared_pool.keep('SEQ_TEST','Q');



DBMS_SHARED_POOL.UNKEEP (name VARCHAR2, flag CHAR DEFAULT 'P');
exec dbms_shared_pool.unkeep('UWCLASS.TESTPROC', 'P');

Tuesday, 15 July 2008

sga components used,free

select round(sum(bytes)/1024/1024,2)||' MB' total_sga,
round(sum(bytes)/1024/1024 - sum(decode(name,'free memory',bytes,0))/1024/1024,2)||' MB' used,
round(sum(decode(name,'free memory',bytes,0))/1024/1024,2)||' MB' free
from v$sgastat;


free memory in shared pool:

select * from v$sgastat
where name='free memory'
and pool='shared pool';

Library cache hit ratio:

select sum(pinhits)/sum(pins) from v$librarycache

Dictionary cache hit ratio:

select (sum(gets-getmisses-fixed))/sum(gets) "row cache" from v$rowcache;

Monday, 14 July 2008

log sequence number

Log sequence number:

Oracle Database assigns each redo log file a new log sequence number every time a log switch occurs and LGWR begins writing to it. When the database archives redo log files, the archived log retains its log sequence number. A redo log file that is cycled back for use is given the next available log sequence number.

Each online or archived redo log file is uniquely identified by its log sequence number. During crash, instance, or media recovery, the database properly applies redo log files in ascending order by using the log sequence number of the necessary archived and redo log files.

If the database is opened with reset logs, the log sequence number is reset to 1.
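These sequence numbers can be checked directly from the standard v$ views; a quick sketch:

```sql
-- sequence number currently assigned to each online redo log group
select group#, sequence#, status, archived
from v$log;

-- sequence numbers retained by the archived copies
select sequence#, name, first_change#
from v$archived_log
order by sequence#;
```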

Sunday, 13 July 2008

statspack

SQL> @C:\esas\oracle\app\oracle\product\10.2.0\server\RDBMS\ADMIN\spcreate

or

SQL> CONNECT / AS SYSDBA
SQL> define default_tablespace='ESASBRANCH'
SQL> define temporary_tablespace='TEMP'
SQL> define perfstat_password='SYS'
SQL> @C:\esas\oracle\app\oracle\product\10.2.0\server\RDBMS\ADMIN\spcreate
**********************


SQL> CONNECT perfstat/SYS
SQL> EXECUTE statspack.snap;

or

variable snap number;
begin :snap := statspack.snap;
end;
/

SQL> print snap
SNAP
----------
12


select name,snap_id,to_char(snap_time,'DD.MM.YYYY:HH24:MI:SS')
"Date/Time" from stats$snapshot,v$database;


To have Statspack take a snapshot at the top of every hour:
@C:\esas\oracle\app\oracle\product\10.2.0\server\RDBMS\ADMIN\spauto

desc user_jobs;

SELECT job
FROM user_jobs;

exec dbms_job.remove(1);

To have it take a snapshot every half hour:
EXECUTE DBMS_JOB.INTERVAL(job_number,'SYSDATE+(1/48)');


To remove a specified job:
EXECUTE DBMS_JOB.REMOVE(job_number);
---------------

report:

connect perfstat/sys

@C:\esas\oracle\app\oracle\product\10.2.0\server\RDBMS\ADMIN\spreport

OR
connect perfstat/my_perfstat_password
define begin_snap=1
define end_snap=2
define report_name=batch_run
@C:\esas\oracle\app\oracle\product\10.2.0\server\RDBMS\ADMIN\spreport


Truncating All Statspack Data
@C:\esas\oracle\app\oracle\product\10.2.0\server\RDBMS\ADMIN\sptrunc

or

purge (sppurge.sql deletes all snapshots between the lower and upper bound snapshot IDs)
@C:\esas\oracle\app\oracle\product\10.2.0\server\RDBMS\ADMIN\sppurge

or
drop

@C:\esas\oracle\app\oracle\product\10.2.0\server\RDBMS\ADMIN\spdrop

************************************

Generate Execution Plan for given SQL statement

sqlplus perfstat/perfstat
SQL> @?/rdbms/admin/sprepsql.sql
Enter the Hash Value, in this example: 1279400914

sql_trace

exec dbms_system.set_sql_trace_in_session(sid,serial#,true);
exec dbms_system.set_sql_trace_in_session(38,3,true);
exec dbms_system.set_sql_trace_in_session(38,3,false);




tkprof C:\oracle\product\10.2.0\admin\orcl\udump\orcl_ora_224_tuncay_sql_trace.trc C:\sql_trace.txt explain=esastest/esastest sys=yes waits=yes

tkprof C:\oracle\product\10.2.0\admin\orcl\udump\orcl_ora_224_tunca_sql_trace.trc C:\toutput.txt insert=C:\tkprof.sql record=C:\allsql.sql


alter session set sql_trace =true

alter session set tracefile_identifier = 'tuncay_sql_trace';





----------------
select b.username, c.value || '\' || lower(d.value) || '_ora_' ||
to_char(a.spid, 'fm00000') || '.trc' "TRACE_FILE",
a.pid PID, b.serial# session_number
from v$process a, v$session b, v$parameter c, v$parameter d
where a.addr = b.paddr
and c.name = 'user_dump_dest'
and d.name = 'db_name'
and b.username is not null;


/* Formatted on 2008/06/05 14:49 (Formatter Plus v4.8.8) */
CREATE OR REPLACE PROCEDURE start_trace (
v_sid IN NUMBER,
v_serial# IN NUMBER,
seconds IN NUMBER
)
---------------------------------------------
-- 2003 - Oracle Utilities
-- D. Moore
--
-- This procedure serves as a wrapper to
-- session tracing. It accepts
-- a sid and serial#, along with the amount of
-- time in seconds that the trace should last.
-- The trace will be stopped when that time
-- period expires. After tracing is turned
-- off, the name of the trace file will be
-- displayed.
---------------------------------------------
IS
v_user VARCHAR2 (32);
stop_trace_cmd VARCHAR2 (200);
DURATION NUMBER;
v_spid NUMBER;
dump_dest VARCHAR2 (200);
db_name VARCHAR2 (32);
v_version VARCHAR2 (32);
v_compatible VARCHAR2 (32);
file_name VARCHAR2 (32);
no_session_found EXCEPTION;
BEGIN
BEGIN
SELECT a.username, b.spid
INTO v_user, v_spid
FROM v$session a, v$process b
WHERE a.SID = v_sid AND a.serial# = v_serial# AND a.paddr = b.addr;
EXCEPTION
WHEN NO_DATA_FOUND
THEN
RAISE no_session_found;
END;

DBMS_SYSTEM.set_sql_trace_in_session (v_sid, v_serial#, TRUE);
DBMS_OUTPUT.put_line ('Tracing Started for User: ' || v_user);
DBMS_OUTPUT.put_line ( 'Tracing Start Time: '
|| TO_CHAR (SYSDATE, 'MM-DD-YYYY HH24:MI:SS')
);

---------------------------------------------------
-- Sleep for the amount of seconds specified as
-- seconds input parameter. When complete, stop
-- the tracing and display the resulting trace file
-- name
---------------------------------------------------
IF seconds IS NULL
THEN
DURATION := 60;
ELSE
DURATION := seconds;
END IF;

DBMS_LOCK.sleep (DURATION);
-- the time allotted has now expired. Disable
-- tracing and output the trace file information
DBMS_SYSTEM.set_sql_trace_in_session (v_sid, v_serial#, FALSE);
DBMS_OUTPUT.put_line ( 'Tracing Stop Time: '
|| TO_CHAR (SYSDATE, 'MM-DD-YYYY HH24:MI:SS')
);

-- get all of the data needed to format the trace file name
SELECT VALUE
INTO dump_dest
FROM v$parameter
WHERE NAME = 'user_dump_dest';

SELECT VALUE
INTO db_name
FROM v$parameter
WHERE NAME = 'db_name';

-- we need the version of the database in order to determine
-- the naming scheme for the trace file
DBMS_UTILITY.db_version (v_version, v_compatible);

IF SUBSTR (v_version, 1, 1) = '9'
THEN
file_name := db_name || '_ora_' || v_spid || '.trc';
ELSIF SUBSTR (v_version, 1, 3) = '8.1'
THEN
file_name := 'ora_' || v_spid || '_' || db_name || '.trc';
ELSIF SUBSTR (v_version, 1, 3) = '8.0'
THEN
file_name := 'ora_' || v_spid || '.trc';
END IF;

DBMS_OUTPUT.put_line ('Trace Directory: ' || dump_dest);
DBMS_OUTPUT.put_line ('Trace Filename: ' || file_name);
EXCEPTION
WHEN no_session_found
THEN
DBMS_OUTPUT.put_line ('No session found for sid and serial# specified');
END start_trace;


---------------------
exec start_trace(17, 6157, 30);

schedules

BEGIN
DBMS_SCHEDULER.create_schedule ('S1',repeat_interval => 'FREQ=WEEKLY;BYHOUR=15;BYDAY=SUN',start_date => SYSTIMESTAMP);
DBMS_SCHEDULER.create_schedule ('S2',repeat_interval => 'FREQ=HOURLY;INTERVAL=2;BYDAY=MON,TUE,WED,THU,FRI,SAT;BYHOUR=8,9,10,11,12,13,14,15,16,17,18,19,20,21,22',start_date => SYSTIMESTAMP );
DBMS_SCHEDULER.create_schedule ('S3', repeat_interval =>'S1,S2',start_date => SYSTIMESTAMP);
END;
/


begin
dbms_scheduler.create_job
(job_name => 'TK',
job_type => 'PLSQL_BLOCK',
job_action=> 'declare d date; begin select sysdate into d from dual; end;',
schedule_name=>'S3',
enabled=>true,
auto_drop=>false,
comments=>'test job running on combined schedule S3');
end;
/


BEGIN
dbms_scheduler.drop_schedule(
schedule_name => 'S1');
END;
/




CREATE OR REPLACE procedure SYS.print_schedule_dates
( schedule in varchar2,
start_date in timestamp with time zone default dbms_scheduler.stime(),
number_of_dates in pls_integer default 10 )
is
date_after timestamp with time zone := start_date - interval '1' second;
next_date timestamp with time zone;
begin
for i in 1 .. number_of_dates
loop
dbms_scheduler.evaluate_calendar_string
(schedule, start_date, date_after, next_date);
dbms_output.put_line(to_char(next_date,'DY DD-MON-YYYY (DDD-IW) HH24:MI:SS TZH:TZM TZR'));
date_after := next_date;
end loop;
end;
/


exec print_schedule_dates('S3');

job scripts

BEGIN sys.dbms_scheduler.create_schedule(
repeat_interval => 'FREQ=DAILY;BYHOUR=12;BYMINUTE=0;BYSECOND=0',
start_date => systimestamp,
comments => 'schedule for the backup job',
schedule_name => 'my_backup_job_schedule');
END;
/


BEGIN sys.dbms_scheduler.create_job(
job_name => 'daily_backup_job',
job_type => 'EXECUTABLE',
job_action => 'C:\esas\oracle\hotbackup.bat',
schedule_name => 'my_backup_job_schedule',
job_class => 'DEFAULT_JOB_CLASS',
comments => 'Job which backup the db',
auto_drop => FALSE,
enabled => TRUE);
END;
/

EXIT




BEGIN
DBMS_SCHEDULER.DROP_JOB (
job_name =>'DAILY_BACKUP_JOB');
END;
/


BEGIN
dbms_scheduler.drop_schedule(
schedule_name => 'my_backup_job_schedule');
END;
/


select * from dba_scheduler_jobs;

select next_run_date from dba_scheduler_jobs where job_name='DAILY_BACKUP_JOB';


select ADDITIONAL_INFO,ERROR# from DBA_SCHEDULER_JOB_RUN_DETAILS where job_name='DAILY_BACKUP_JOB';

flashback scripts

FLASHBACK TABLE t1
TO TIMESTAMP (SYSDATE - 5);

FLASHBACK TABLE x to before drop;
FLASHBACK TABLE persons TO TIMESTAMP TO_TIMESTAMP
('2005-04-05 10:00:00', 'YYYY-MM-DD HH24:MI:SS');


SELECT * FROM "BIN$xTMPjHZ6SG+1xnDIaR9E+g==$0";

DROP TABLE test PURGE;


select * from temp as of timestamp to_timestamp('2008-04-04 10:52:00','YYYY-MM-DD HH24:MI:SS');

select versions_starttime, versions_endtime, versions_xid, versions_operation
from t1 versions between timestamp minvalue and maxvalue
order by VERSIONS_STARTTIME


SELECT xid, operation, start_scn,commit_scn, logon_user, undo_sql
FROM flashback_transaction_query
WHERE xid = HEXTORAW('0024002F0005766E');


SELECT versions_xid
, versions_startscn
, versions_endscn
, DECODE( versions_operation
, 'I', 'Insert'
, 'U', 'Update'
, 'D', 'Delete'
, 'Original'
) "Operation"
FROM t1
VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE
/




SELECT versions_startscn, versions_starttime,
versions_endscn, versions_endtime,
versions_xid, versions_operation
FROM t1 VERSIONS BETWEEN TIMESTAMP
TO_TIMESTAMP('2008-05-24 13:00:00', 'YYYY-MM-DD HH24:MI:SS')
AND TO_TIMESTAMP('2008-05-24 14:35:00', 'YYYY-MM-DD HH24:MI:SS')

PARALLELISM rman

You can achieve parallelism within your script explicitly:

run
{
allocate channel c1 type disk;
allocate channel c2 type disk;
backup
(datafile 1,2,3 channel c1)
(archivelog all channel c2);
}

Or

configure device type disk parallelism 3;

The backup command will be divided among the number of channels which you defined in the parallelism configuration.

backup
(datafile 1,2)
(datafile 3,4)
(archivelog all)

Tuesday, 8 July 2008

Block number of the row

dbms_rowid.rowid_block_number(rowid)

it simply returns the block number that the rowid points to.

select id, dbms_rowid.rowid_block_number(rowid) from t;

recovery question

After the online backup, let's say there are no transactions, therefore no log switches nor checkpoints occur. So the checkpoint SCN of the datafiles and control file should be the same and the database should be consistent. Is recovery still needed?

There are always transactions: if the database is up and running, you have transactions.

ops$tkyte%ORA10GR2> @test
ops$tkyte%ORA10GR2> select dbms_flashback.get_system_change_number from dual;

GET_SYSTEM_CHANGE_NUMBER
------------------------
65882054

ops$tkyte%ORA10GR2> exec dbms_lock.sleep(3)

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select dbms_flashback.get_system_change_number from dual;

GET_SYSTEM_CHANGE_NUMBER
------------------------
65882055

The database is ALWAYS doing something. And yes, you would need recovery; there are things happening before, during and after your hot backups - always.

Monday, 7 July 2008

blocks that reside in the buffer_cache

select o.object_name,count(*) number_of_blocks
from dba_objects o, V$BH bh
where o.data_object_id=bh.objd
and o.owner!='SYS'
group by o.object_name
order by count(*) desc



Hit ratio:
select name,physical_reads,db_block_gets,consistent_gets,1-(physical_reads/(db_block_gets+consistent_gets)) "HIT RATIO" from v$buffer_pool_statistics;

Hit Ratios

free memory in shared pool:
select * from v$sgastat
where name='free memory'
and pool='shared pool'

Library cache hit ratio:
select sum(pinhits)/sum(pins) from v$librarycache

Dictionary cache hit ratio:
select (sum(gets-getmisses-fixed))/sum(gets)"row cache" from v$rowcache


select parameter,sum(gets),sum(getmisses),100*sum(gets-getmisses)/sum(gets) pct_succ_gets,sum(modifications) updates
from v$rowcache
where gets>0
group by parameter

archivelog destination

alter system set log_archive_dest_1='location=D:\..\..' scope=both;

DBWR and LGWR

DBWR
------
checkpoint
dirty buffer reach threshold
no free buffers
timeout
drop table
truncate table


LGWR
--------
commit
more than 1MB
1/3rd full
before DBWR
every 3 seconds
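To see how often those LGWR-driven log switches actually occur, v$log_history can be bucketed by hour; the 24-hour window below is just an example:

```sql
-- log switches per hour over the last day
select to_char(first_time,'YYYY-MM-DD HH24') hour,
       count(*) switches
from v$log_history
where first_time > sysdate - 1
group by to_char(first_time,'YYYY-MM-DD HH24')
order by 1;
```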

database buffer cache, library cache

Database buffer cache:

LRU is used.
When a query is processed, Oracle first looks in the database buffer cache for each block it needs.
If a block is not found in the database buffer cache, the server process reads it from the datafile and places a copy in the buffer cache.

Library cache:

Stores information about the most recently used SQL and PL/SQL statements.
SHARED SQL area: stores and shares the execution plan, parse tree, etc.
SHARED PL/SQL area: the second execution of the same/identical statement can reuse what is already available there.

Oracle will first look in the shared pool for an existing execution plan and check the statement for syntactic and semantic correctness, then execute the statement, looking in the buffer cache for the data it needs.
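Cursor sharing can be observed from v$sql; the /* demo */ comment is only a hypothetical tag to make the statement easy to find:

```sql
-- run the same tagged statement a few times, then:
select sql_text, parse_calls, executions, loads
from v$sql
where sql_text like 'select /* demo */%';
-- executions climbs with each run, while the cursor is
-- loaded (hard parsed) only once, so loads stays low
```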

changing global_name

update global_name set global_name='ORCL';
commit;

append and parallel hints, nologging

in order to append into a table, you have to lock it - preventing all other sessions from touching it.
so, if you insert /*+ APPEND */ into table select .... - table is locked.
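A sketch of how to confirm that lock, assuming the deneme table used below and an uncommitted APPEND insert in the current session:

```sql
-- mode 6 (exclusive) TM lock held after insert /*+ APPEND */
select l.sid, l.type, l.lmode
from v$lock l
where l.type = 'TM'
and l.id1 = (select object_id
             from user_objects
             where object_name = 'DENEME');
```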



alter session enable parallel dml;

create table deneme ( x int );

insert /*+ parallel(deneme,8) */ into deneme select 0 from dual;

SELECT * FROM deneme

ORA-12838: cannot read/modify an object after modifying it in parallel




**********************************************************************************
create table deneme ( x int );
insert /*+ append */ into deneme select 0 from dual;
SELECT * FROM deneme
ORA-12838: cannot read/modify an object after modifying it in parallel




However, not all operations support NOLOGGING mode, the following is a list of operations that support NOLOGGING:

direct load (SQL*Loader)
direct-load INSERT
CREATE TABLE ... AS SELECT
CREATE INDEX
ALTER TABLE ... MOVE PARTITION
ALTER TABLE ... SPLIT PARTITION
ALTER INDEX ... SPLIT PARTITION
ALTER INDEX ... REBUILD
ALTER INDEX ... REBUILD PARTITION
INSERT, UPDATE, and DELETE on LOBs in NOCACHE NOLOGGING mode stored out of line


***********************************************************
parallel and append => direct path load
directly into datafile bypass buffer cache and above HWM
***********************************************************

direct path load + logging = no undo, but full redo
direct path load + nologging = no undo, almost no redo (exceptions are space transactions and block invalidation record)
conventional path load + logging = full undo, full redo
conventional path load + nologging = full undo, full redo (i.e. nologging has no effect if direct path is not used)
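A rough way to verify that table yourself: check the session's 'redo size' statistic before and after each kind of insert; the difference between two runs of this query is the redo generated in between:

```sql
-- per-session statistic; run before and after the load
select n.name, s.value
from v$mystat s, v$statname n
where s.statistic# = n.statistic#
and n.name = 'redo size';
```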




alter table big_table parallel ;

Written this way, the degree of parallelism is left to Oracle: if its resources allow, Oracle will dynamically increase the degree of parallelism,
and if resources are busy it will decrease it.


Nologging:

NOLOGGING is a CREATE TABLE and ALTER TABLE clause; you can't use it as part of an INSERT statement.

snapshot too old

- Increasing undo retention is one method.

- Array fetching out the table names (making the query run for a short time) is another.

- Using gather schema stats is another (the easiest).

- From Oracle Metalink: to prevent ORA-01555 due to delayed block cleanout, you
can use select count(*) from the table to force the block cleanout.

use operations that can bypass undo:

truncate
insert /*+ append */
disable indexes, rebuild afterwards

--------
More simply put:

o at the beginning of your transaction you removed a row from block #1.

o you commit - allowing the undo that protected this modification to be overwritten.

o you in fact DO overwrite that undo - as the rollback segment wraps around.

o later you need to REVISIT block #1, but you cannot, as the undo you need has been overwritten BY YOU
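Since the root cause is needed undo being overwritten, the first knob to look at is undo retention; the 3600 below is only an example value:

```sql
show parameter undo_retention

alter system set undo_retention = 3600 scope=both;  -- seconds
```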

enterprise manager

C:> emca -config dbcontrol db -repos recreate

./emctl start dbconsole
./emctl stop dbconsole


setup host credential in EM for windows:

control panel->Admin Tools->Local Security Policy
Local Policies ->user rights assignments
add user to "Logon as a Batch Job"

Sunday, 6 July 2008

shrink_me

Here is an example showing what I tried to draw above:

ops$tkyte@ORA9IR2> create tablespace shrink_me
2 datafile '/tmp/shrink_me.dbf' size 704k
3 segment space management manual
4 uniform size 64k
5 /

Tablespace created.

we have exactly 10 64k extents we can use. (the 11th 64k block of space is used by
Oracle to manage these locally managed tablespaces in the datafile)


ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table t1 ( x int, a char(2000) default 'a', b char(2000)
default 'b', c char(2000) default 'c') tablespace shrink_me;

Table created.

ops$tkyte@ORA9IR2> create table t2 ( x int, a char(2000) default 'a', b char(2000)
default 'b', c char(2000) default 'c') tablespace shrink_me;

Table created.

Each row in these tables will consume a block (8 rows/extent - but don't forget the
first block is borrowed by Oracle to manage space in the segment...)

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select file_id, block_id first_block, block_id+blocks-1 last_block,
segment_name
2 from dba_extents
3 where tablespace_name = 'SHRINK_ME'
4 union all
5 select file_id, block_id, block_id+blocks-1, 'free'
6 from dba_free_space
7 where tablespace_name = 'SHRINK_ME'
8 order by file_id, first_block
9 /

FILE_ID FIRST_BLOCK LAST_BLOCK SEGMENT_NAME
---------- ----------- ---------- ------------------------------
13 9 16 T1
13 17 24 T2
13 25 88 free

so, we have the starting scenario - T1 has an extent, T2 has one and the rest of the
file is "free space", now lets fill up t1:


ops$tkyte@ORA9IR2> insert into t1 (x) select rownum from all_objects where rownum <= 56;

56 rows created.

ops$tkyte@ORA9IR2> commit;

Commit complete.

ops$tkyte@ORA9IR2> select file_id, block_id first_block, block_id+blocks-1 last_block,
segment_name
2 from dba_extents
3 where tablespace_name = 'SHRINK_ME'
4 union all
5 select file_id, block_id, block_id+blocks-1, 'free'
6 from dba_free_space
7 where tablespace_name = 'SHRINK_ME'
8 order by file_id, first_block
9 /

FILE_ID FIRST_BLOCK LAST_BLOCK SEGMENT_NAME
---------- ----------- ---------- ------------------------------
13 9 16 T1
13 17 24 T2
13 25 32 T1
13 33 40 T1
13 41 48 T1
13 49 56 T1
13 57 64 T1
13 65 72 T1
13 73 80 T1
13 81 88 free

10 rows selected.

we have the middle scenario - if we dropped T1 now, all of the T1's would become free
space and we could shrink the file, however:


ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into t2 (x) select rownum from all_objects where rownum <= 8;

8 rows created.

ops$tkyte@ORA9IR2> commit;

Commit complete.

ops$tkyte@ORA9IR2> select file_id, block_id first_block, block_id+blocks-1 last_block,
segment_name
2 from dba_extents
3 where tablespace_name = 'SHRINK_ME'
4 union all
5 select file_id, block_id, block_id+blocks-1, 'free'
6 from dba_free_space
7 where tablespace_name = 'SHRINK_ME'
8 order by file_id, first_block
9 /

FILE_ID FIRST_BLOCK LAST_BLOCK SEGMENT_NAME
---------- ----------- ---------- ------------------------------
13 9 16 T1
13 17 24 T2
13 25 32 T1
13 33 40 T1
13 41 48 T1
13 49 56 T1
13 57 64 T1
13 65 72 T1
13 73 80 T1
13 81 88 T2

10 rows selected.

Now the entire tablespace is full - no more free space - but we drop t1 and get LOTS of
free space:

ops$tkyte@ORA9IR2> drop table t1;

Table dropped.

ops$tkyte@ORA9IR2> select file_id, block_id first_block, block_id+blocks-1 last_block,
segment_name
2 from dba_extents
3 where tablespace_name = 'SHRINK_ME'
4 union all
5 select file_id, block_id, block_id+blocks-1, 'free'
6 from dba_free_space
7 where tablespace_name = 'SHRINK_ME'
8 order by file_id, first_block
9 /

FILE_ID FIRST_BLOCK LAST_BLOCK SEGMENT_NAME
---------- ----------- ---------- ------------------------------
13 9 16 free
13 17 24 T2
13 25 80 free
13 81 88 T2

the tablespace is 80% empty - but we cannot shrink it - we cannot shrink it at all, let
alone to 300k - because the LAST EXTENT is taken by T2

ops$tkyte@ORA9IR2> alter database datafile '/tmp/shrink_me.dbf' resize 300k;
alter database datafile '/tmp/shrink_me.dbf' resize 300k
*
ERROR at line 1:
ORA-03297: file contains used data beyond requested RESIZE value


but lets move T2 around...

ops$tkyte@ORA9IR2> alter table t2 move;

Table altered.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select file_id, block_id first_block, block_id+blocks-1 last_block,
segment_name
2 from dba_extents
3 where tablespace_name = 'SHRINK_ME'
4 union all
5 select file_id, block_id, block_id+blocks-1, 'free'
6 from dba_free_space
7 where tablespace_name = 'SHRINK_ME'
8 order by file_id, first_block
9 /

FILE_ID FIRST_BLOCK LAST_BLOCK SEGMENT_NAME
---------- ----------- ---------- ------------------------------
13 9 16 T2
13 17 24 free
13 25 32 T2
13 33 88 free

now we have lots of free space at the end of the file and we can resize:

ops$tkyte@ORA9IR2> alter database datafile '/tmp/shrink_me.dbf' resize 300k;

Database altered.

ops$tkyte@ORA9IR2> select file_id, block_id first_block, block_id+blocks-1 last_block,
segment_name
2 from dba_extents
3 where tablespace_name = 'SHRINK_ME'
4 union all
5 select file_id, block_id, block_id+blocks-1, 'free'
6 from dba_free_space
7 where tablespace_name = 'SHRINK_ME'
8 order by file_id, first_block
9 /

FILE_ID FIRST_BLOCK LAST_BLOCK SEGMENT_NAME
---------- ----------- ---------- ------------------------------
13 9 16 T2
13 17 24 free
13 25 32 T2


don't forget, if you move a table, you have to then rebuild the indexes

number of rows per block

Does Oracle have a "max number of rows per block" limit such as DB2's 255 row limit? I cannot find this explicitly in any of the documentation (I have looked). I'm asking this supposing that x number of rows would actually fit in a block, pctfree is set appropriately, etc. Thanks!

very blocksize dependent. for example,
8k block:
ops$tkyte%ORA10GR2> create table t ( x int ) pctfree 0;
Table created.
ops$tkyte%ORA10GR2> insert into t select null from all_objects where rownum <= 10000;
10000 rows created.
ops$tkyte%ORA10GR2> select count(*)
2 from t
3 where dbms_rowid.rowid_block_number(rowid) = (select
min(dbms_rowid.rowid_block_number(rowid)) from t);
COUNT(*)
----------
733

and then 16k...
ops$tkyte%ORA10GR2> create table t ( x int ) pctfree 0 tablespace ts16k;
Table created.
ops$tkyte%ORA10GR2> insert into t select null from all_objects where rownum <= 10000;
10000 rows created.
ops$tkyte%ORA10GR2> select count(*)
2 from t
3 where dbms_rowid.rowid_block_number(rowid) = (select
min(dbms_rowid.rowid_block_number(rowid)) from t);

COUNT(*)
----------
1478

shrink,alter table move,deallocate unused

ALTER TABLE MOVE doesn't reset empty_blocks; it will reset the HWM. That is, if your table has, for example, 16 used blocks but only 12 of them contain data (think "delete", "insert", "update"), after an ALTER TABLE MOVE the HWM will again be 12. However, empty_blocks always shows how many of the blocks allocated to the segment have never been used (above the HWM). That's why when you submit a SHRINK that space can be reclaimed: it is above the HWM.


I believe this is better explained with an example:

SQL> create table t ( x number )
2 tablespace users
3 storage ( initial 10M next 10M )
4 /
Table created.
SQL> analyze table t compute statistics;
Table analyzed.
SQL> select blocks, extents from user_segments where segment_name = 'T';
BLOCKS EXTENTS
---------- ----------
1280 10

SQL> select blocks, empty_blocks from user_tables where table_name = 'T';
BLOCKS EMPTY_BLOCKS
---------- ------------
0 1280


So, I started creating a table named T and requested initially 10mb allocated, which turned out to be 1280 blocks and 10 extents. From there you can see:
- table T has 1280 blocks allocated (blocks in user_segment)
- none of which are *formatted* to receive data (blocks in user_tables)

Then, I insert some data:

SQL> insert into t
2 select rownum
3 from dual
4 connect by level <= 100000;

100000 rows created.

SQL> analyze table t compute statistics;
Table analyzed.
SQL> select blocks, extents from user_segments where segment_name = 'T';
BLOCKS EXTENTS
---------- ----------
1280 10

SQL> select blocks, empty_blocks from user_tables where table_name = 'T';
BLOCKS EMPTY_BLOCKS
---------- ------------
186 1094


I inserted 100,000 rows, from there you can see:
- allocated blocks/extents for the table did not change
- however, the blocks formatted to receive data rose to 186, and the remaining blocks are empty

186 blocks are the HWM now, because those are the blocks that at some time were formatted to receive data. Blocks above 186 are allocated blocks which have never been formatted to receive data.

I will delete some data now to show you that it will not raise empty_blocks, nor will it lower the number of blocks formatted to receive data (that is, the HWM).


SQL> delete from t where rownum <= 90000;

90000 rows deleted.

SQL> analyze table t compute statistics;
Table analyzed.
SQL> select blocks, empty_blocks from user_tables where table_name = 'T';
BLOCKS EMPTY_BLOCKS
---------- ------------
186 1094

See, the delete did nothing to change the HWM, but..

SQL> select count(distinct dbms_rowid.rowid_block_number(rowid)) used_blocks from t;

USED_BLOCKS
-----------
16

tells me only 16 of those 186 contain data. The remaining blocks belong to the segment's freelist, to be used for inserts/updates.
Now, I will *move* the table to show you how it will re-adjust the HWM.


SQL> alter table t move tablespace users;
Table altered.
SQL> analyze table t compute statistics;
Table analyzed.
SQL> select blocks, empty_blocks from user_tables where table_name = 'T';
BLOCKS EMPTY_BLOCKS
---------- ------------
20 1260
See, it shrank the HWM down to just 20 from 186 and raised the empty_blocks, but..

SQL> select blocks, extents from user_segments where segment_name = 'T';
BLOCKS EXTENTS
---------- ----------
1280 10

tells you it did nothing to *shrink* the allocated space assigned to the segment, meaning that at this stage the segment will still be using the same amount of storage at the operating system level. Now, to *reclaim* that space we will use shrink.


SQL> alter table t enable row movement;
Table altered.
SQL> alter table t shrink space;
Table altered.
SQL> analyze table t compute statistics;
Table analyzed.
SQL> select blocks, extents from user_segments where segment_name = 'T';
BLOCKS EXTENTS
---------- ----------
128 1
SQL> select blocks, empty_blocks from user_tables where table_name = 'T';
BLOCKS EMPTY_BLOCKS
---------- ------------
20 108

There, you see the table actually shrank from 1280 blocks allocated and 10 extents to its minimum: 128 blocks and just 1 extent. I hope it has cleared your doubts.

Shrink = below AND above HWM
Deallocate unused = above HWM only
Alter table move = below HWM
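A sketch of the "above HWM only" case, using a hypothetical table t:

```sql
-- give back only the never-used space above the HWM
alter table t deallocate unused;

-- or keep 1 MB of it still allocated
alter table t deallocate unused keep 1m;
```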

corrupt blocks

analyze TABLE table_name validate structure cascade;

select segment_type, owner||'.'||segment_name
from dba_extents
where file_id = 4 and 29830 between block_id and block_id+blocks-1;

SELECT tablespace_name, segment_type, owner, segment_name
FROM dba_extents
where file_id =4
and block_ID =4151

V$backup_corruption
select * from v$database_block_corruption;

************************************************


*******************************************

C:\Documents and Settings\Administrator>dbv file=C:\esas\oracle\oradata\XE\esasbranch.dbf feedback=100

***********************************************
BEGIN
DBMS_REPAIR.ADMIN_TABLES (
TABLE_NAME => 'REPAIR_TABLE',
TABLE_TYPE => dbms_repair.repair_table,
ACTION => dbms_repair.create_action,
TABLESPACE => 'ESASBRANCH');
END;
/

BEGIN
DBMS_REPAIR.ADMIN_TABLES (
TABLE_NAME => 'ORPHAN_KEY_TABLE',
TABLE_TYPE => dbms_repair.orphan_table,
ACTION => dbms_repair.create_action,
TABLESPACE => 'ESASBRANCH');
END;
/


SET SERVEROUTPUT ON
DECLARE num_corrupt INT;
BEGIN
num_corrupt := 0;
DBMS_REPAIR.CHECK_OBJECT (
SCHEMA_NAME => 'ESASBRANCH',
OBJECT_NAME => 'WAYBILL',
REPAIR_TABLE_NAME => 'REPAIR_TABLE',
CORRUPT_COUNT => num_corrupt);
DBMS_OUTPUT.PUT_LINE('number corrupt: ' || TO_CHAR (num_corrupt));
END;
/

SELECT OBJECT_NAME, BLOCK_ID, CORRUPT_TYPE, MARKED_CORRUPT,
CORRUPT_DESCRIPTION, REPAIR_DESCRIPTION
FROM REPAIR_TABLE;


SET SERVEROUTPUT ON
DECLARE num_fix INT;
BEGIN
num_fix := 0;
DBMS_REPAIR.FIX_CORRUPT_BLOCKS (
SCHEMA_NAME => 'ESASBRANCH',
OBJECT_NAME=> 'ACCOUNT',
OBJECT_TYPE => dbms_repair.table_object,
REPAIR_TABLE_NAME => 'REPAIR_TABLE',
FIX_COUNT=> num_fix);
DBMS_OUTPUT.PUT_LINE('num fix: ' || TO_CHAR(num_fix));
END;
/

SELECT OBJECT_NAME, BLOCK_ID, MARKED_CORRUPT
FROM REPAIR_TABLE;

****** FOR INDEXES ***********

SET SERVEROUTPUT ON
DECLARE num_orphans INT;
BEGIN
num_orphans := 0;
DBMS_REPAIR.DUMP_ORPHAN_KEYS (
SCHEMA_NAME => 'ESASBRANCH',
OBJECT_NAME => 'PK_DEPT',
OBJECT_TYPE => dbms_repair.index_object,
REPAIR_TABLE_NAME => 'REPAIR_TABLE',
ORPHAN_TABLE_NAME=> 'ORPHAN_KEY_TABLE',
KEY_COUNT => num_orphans);
DBMS_OUTPUT.PUT_LINE('orphan key count: ' || TO_CHAR(num_orphans));
END;
/



BEGIN
DBMS_REPAIR.REBUILD_FREELISTS (
SCHEMA_NAME => 'ESASBRANCH',
OBJECT_NAME => 'ACCOUNTADDRESS',
OBJECT_TYPE => dbms_repair.table_object);
END;
/

*******Enabling or Disabling the Skipping of Corrupt Blocks************

BEGIN
DBMS_REPAIR.SKIP_CORRUPT_BLOCKS (
SCHEMA_NAME => 'ESASBRANCH',
OBJECT_NAME => 'WAYBILL',
OBJECT_TYPE => dbms_repair.table_object,
FLAGS => dbms_repair.skip_flag);
END;
/


SELECT OWNER, TABLE_NAME, SKIP_CORRUPT FROM DBA_TABLES
WHERE OWNER = 'ESASBRANCH';





***********************************************************************************************************
/* Fix the bitmap status for all the blocks in table ACCOUNT in schema ESASBRANCH */
EXECUTE DBMS_REPAIR.SEGMENT_FIX_STATUS('ESASBRANCH', 'ACCOUNT');

/* Mark a single block of table ACCOUNT in the ESASBRANCH schema as FULL */

EXECUTE DBMS_REPAIR.SEGMENT_FIX_STATUS('ESASBRANCH', 'ACCOUNT', DBMS_REPAIR.TABLE_OBJECT, 4, 372115, 1);

**************************************************************************
run {
allocate channel ch1 type disk;
blockrecover datafile 9 block 13 datafile 2 block 19;
}

BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 FROM BACKUPSET;

RMAN> BACKUP VALIDATE DATABASE ARCHIVELOG ALL;

SQL> select * from v$database_block_corruption;

*************************
RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE ARCHIVELOG ALL;

query the V$DATABASE_BLOCK_CORRUPTION

RMAN> blockrecover corruption list;
**************************
RMAN> BACKUP VALIDATE DATABASE;
RMAN> BLOCKRECOVER CORRUPTION LIST;
****************************

dropping all user objects

-- remember to connect first as the user/owner of the objects you want to drop. Do NOT run this as another user or SYS

declare
del_objs integer;
obj_del integer;
cursor get_objs is
select object_type,
'"' || object_name || '"' || decode(object_type, 'TABLE', ' cascade constraints', null) obj_nam
from user_objects
where object_type in ('TABLE',
'VIEW',
'PACKAGE',
'SEQUENCE',
'PROCEDURE',
'FUNCTION',
'SYNONYM', 'MATERIALIZED VIEW')
order by object_type;
cursor get_objs2 is
select object_type, '"' || object_name || '"' obj_nam
from user_objects
where object_type in ('TYPE');
begin
for objrec in get_objs loop
execute immediate ('drop ' || objrec.object_type || ' ' || objrec.obj_nam);
end loop;

for objrec in get_objs2 loop
begin
execute immediate ('drop ' || objrec.object_type || ' ' || objrec.obj_nam);
exception
when others then
null;
end;
end loop;
end;
/



-- Remember to purge the Recyclebin if you are using 10g

purge recyclebin
/

-- Confirm all is dropped

select * from user_objects
/

redo scripts

select group#,status,member
from v$logfile;

select group#,members,status,archived
from v$log;

alter database drop logfile member 'C:\esas\oracle\oradata\XE\REDO2M.LOG';
alter database add logfile member '' to group 1;
alter database clear logfile group 1;

SQL> startup mount
SQL> recover database until cancel using backup controlfile
SQL> cancel
SQL> alter database open resetlogs;

alter database drop logfile member 'C:\esas\oracle\oradata\XE\REDO1M.LOG';
alter database drop logfile member 'C:\esas\oracle\oradata\XE\REDO2M.LOG';
alter database drop logfile member 'C:\esas\oracle\oradata\XE\REDO3M.LOG';

SQL> recover database until cancel;
Media recovery complete.
SQL> alter database open resetlogs;
Database altered.

ALTER DATABASE ADD LOGFILE MEMBER 'C:\ESAS\ORACLE\ORADATA\XE\REDO1M.LOG' TO GROUP 1
ALTER DATABASE ADD LOGFILE MEMBER 'C:\ESAS\ORACLE\ORADATA\XE\REDO2M.LOG' TO GROUP 2
ALTER DATABASE ADD LOGFILE MEMBER 'C:\ESAS\ORACLE\ORADATA\XE\REDO3M.LOG' TO GROUP 3

alter database drop logfile member 'C:\ESAS\ORACLE\ORADATA\XE\REDO3M.LOG'
alter database clear logfile group 2;
alter database clear logfile unarchived group 2;
ALTER DATABASE DROP LOGFILE GROUP 2;
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;

alter database add logfile group 4 ( 'D:\DATAFILES\ATM\REDO04A.LOG', 'D:\DATAFILES\ATM\REDO04B.LOG' ) size 100M


Redo logs should be sized so that a log switch occurs roughly every 20 minutes.
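To check how often switches actually occur, one sketch (assuming access to the standard V$LOG_HISTORY dictionary view, which records one row per switch) counts switches per hour:

```sql
-- Sketch: log switches per hour; compare against the ~3/hour target.
select to_char(first_time, 'YYYY-MM-DD HH24') hour,
       count(*) switches
from   v$log_history
group  by to_char(first_time, 'YYYY-MM-DD HH24')
order  by 1;
```

If the counts run well above three per hour, the redo log files are probably too small.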

rman scripts

configure channel device type disk rate 5M;   -- or a lower limit such as 100K
CONFIGURE DEVICE TYPE DISK PARALLELISM 4;
SHOW DEVICE TYPE;

report schema;

*****************************************************************************
STARTUP NOMOUNT PFILE='C:\esas\oracle\app\oracle\admin\XE\pfile\initXE.ora';
RESTORE SPFILE FROM AUTOBACKUP;
RESTORE SPFILE TO 'C:\spfileTEMP.ora' FROM AUTOBACKUP;
******************************************************************************
restore tablespace "UNDO";

run {
set until time "to_date('07 02 2008 09:00:00', 'MM DD YYYY HH24:MI:SS')";
restore database;
recover database;
}

run {
set until time "to_date('01 18 2008 10:00:00', 'MM DD YYYY HH24:MI:SS')";
restore database;
recover database;
alter database open resetlogs;
}

alter database datafile 'C:\esas\oracle\oradata\XE\ESASBRANCH.DBF' offline;

alter tablespace x online

run{
restore database;
recover database until time "to_date('01 18 2008 10:00:00', 'MM DD YYYY HH24:MI:SS')";
alter database open resetlogs;
}

RESTORE CONTROLFILE FROM AUTOBACKUP;
recover database until cancel using backup controlfile

RMAN> configure device type disk backup type to compressed backupset;

----------------------------------------------------------------------
RESTORE CONTROLFILE FROM AUTOBACKUP;

ALTER DATABASE DATAFILE 'C:\ESAS\ORACLE\ORADATA\XE\ESASBRANCH.DBF' OFFLINE;
ALTER TABLESPACE ESASBRANCH OFFLINE;

restore controlfile from '/sbclocal/app/oracle/admin/IBADCP01/dumprman/controlfile_IBADCP01_c-4238742468-20071219-01';

run{
allocate channel c1 device type sbt;
backup recovery area;
}

run{
allocate channel c1 device type sbt;
backup database plus archivelog;
}
crosscheck backup
DELETE NOPROMPT EXPIRED BACKUPSET;
run {allocate channel c1 device type disk
format='C:\usr.bak';
backup datafile 4;
}

run{
backup
format='C:\%d_%s_%p'
database plus archivelog;
}

sql 'alter database archive log current';
list backup summary;

run{
backup device type disk tag 'kasim28' database;
backup device type disk tag 'kasim28' archivelog all not backed up delete all input;
delete noprompt obsolete device type disk;
}

CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
configure device type disk backup type to compressed backupset;


SELECT archivelog_change#-1 FROM v$database;

run { set until scn 3412816;
restore database;
recover database;
alter database open resetlogs;
}

run {
shutdown immediate;
startup mount;
set until time 'Nov 26 2007 17:00:00';
# set until scn 1000; # alternatively, you can specify SCN
# set until sequence 9923; # you can specify log sequence number
restore database;
recover database;
alter database open resetlogs;
}

RECOVER DATABASE UNTIL TIME "to_date('2008-04-22:13:20:00','YYYY-MM-DD:HH24:MI:SS')";

run {
sql "ALTER TABLESPACE users OFFLINE" ;
restore tablespace users ;
recover tablespace users ;
sql "ALTER TABLESPACE users ONLINE" ;
}

select value from v$parameter where name='db_recovery_file_dest_size';

run{
allocate channel t1 type 'sbt_tape' parms 'ENV=(tdpo_optfile=C:\program files\tivoli\tsm\agentoba\tdpo.opt)';
backup database;
}

run{
backup device type disk tag '%TAG' database;
SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
backup device type disk tag '%TAG' archivelog all not backed up delete all input;
delete noprompt obsolete device type disk;
}
******************* EM backup job to tape *******************

tablespace usage

select avail_tname "TABLESPACE NAME",
       round(avail_space,2) "ALLOCATED (GB)",
       round(free_space,2) "FREE (GB)",
       round((free_space/avail_space)*100,2) "FREE (%)"
from (select tablespace_name avail_tname, sum(bytes)/1024/1024/1024 avail_space
      from dba_data_files group by tablespace_name) a,
     (select tablespace_name free_tname, sum(bytes)/1024/1024/1024 free_space
      from dba_free_space group by tablespace_name) b
where a.avail_tname = b.free_tname
order by 4;

----------------------
select ddf.tablespace_name,
       total/(1024*1024) ttlsize,
       (total - nvl(freespace,0))/(1024*1024) used,
       freespace/(1024*1024) freespace
from (select tablespace_name, sum(bytes) total
      from dba_data_files group by tablespace_name) ddf,
     (select tablespace_name, sum(bytes) freespace
      from dba_free_space group by tablespace_name) dfs
where ddf.tablespace_name = dfs.tablespace_name(+);


select fs.tablespace_name "Tablespace",
       (df.totalspace - fs.freespace) "Used MB",
       fs.freespace "Free MB",
       df.totalspace "Total MB",
       round(100 * (fs.freespace / df.totalspace)) "Pct. Free"
from (select tablespace_name, round(sum(bytes) / 1048576) totalspace
      from dba_data_files
      group by tablespace_name) df,
     (select tablespace_name, round(sum(bytes) / 1048576) freespace
      from dba_free_space
      group by tablespace_name) fs
where df.tablespace_name = fs.tablespace_name
order by 1;

temp tablespace scripts

select count(*) from dba_temp_files;

select tablespace_name from dba_tablespaces;

alter tablespace TEMP add tempfile 'C:\ESAS\ORACLE\ORADATA\XE\TEMP.DBF' size 100M; -- a SIZE (or REUSE) clause is required for a new tempfile; 100M is an example

ALTER DATABASE TEMPFILE 'C:\ESAS\ORACLE\ORADATA\XE\TEMP.DBF' RESIZE 100M;

ALTER DATABASE TEMPFILE 'C:\ESAS\ORACLE\ORADATA\XE\TEMP.DBF' AUTOEXTEND OFF;

SELECT tablespace_name, file_name, bytes FROM dba_temp_files WHERE tablespace_name = 'TEMP';



select * from v$sort_usage;

SELECT A.tablespace_name tablespace, D.mb_total,
SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_used,
D.mb_total - SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_free
FROM v$sort_segment A,
(
SELECT B.name, C.block_size, SUM (C.bytes) / 1024 / 1024 mb_total
FROM v$tablespace B, v$tempfile C
WHERE B.ts#= C.ts#
GROUP BY B.name, C.block_size
) D
WHERE A.tablespace_name = D.name
GROUP by A.tablespace_name, D.mb_total;





SELECT S.sid || ',' || S.serial# sid_serial, S.username, S.osuser, P.spid, S.module,
P.program, SUM (T.blocks) * TBS.block_size / 1024 / 1024 mb_used, T.tablespace,
COUNT(*) statements
FROM v$sort_usage T, v$session S, dba_tablespaces TBS, v$process P
WHERE T.session_addr = S.saddr
AND S.paddr = P.addr
AND T.tablespace = TBS.tablespace_name
GROUP BY S.sid, S.serial#, S.username, S.osuser, P.spid, S.module,
P.program, TBS.block_size, T.tablespace
ORDER BY sid_serial;



SELECT S.sid || ',' || S.serial# sid_serial, S.username, Q.hash_value, Q.sql_text,
T.blocks * TBS.block_size / 1024 / 1024 mb_used, T.tablespace
FROM v$sort_usage T, v$session S, v$sqlarea Q, dba_tablespaces TBS
WHERE T.session_addr = S.saddr
AND T.sqladdr = Q.address
AND T.tablespace = TBS.tablespace_name
ORDER BY S.sid;


select b.tablespace,b.segfile#,b.segblk#,b.blocks,a.sid,a.serial#,a.username,a.osuser,a.status
from V$SESSION a,V$SORT_USAGE b where a.saddr = b.session_addr order by b.tablespace,b.segfile#,b.segblk#,b.blocks;

undo_retention

You tell Oracle "I'd like to keep undo for a certain period of time, say 1 hour." What that means is: if you ran a transaction at noon, it generated some undo (rollback data). If undo retention is set to an hour, Oracle will try to retain that undo -- not overwrite it -- for at least an hour. It might still overwrite it if you are pressured for space, but it will try to keep it for an hour. (In 10g you can force it to be kept, by making transactions that want to overwrite it fail instead.)

1) Setting a high undo retention means you are saying to Oracle: PLEASE try to keep this undo for N hours. Oracle will extend undo segments as much as possible as it needs space in order to accommodate that request. If it cannot extend any more (the undo tablespace is full and no autoextend is possible on the datafiles), it will prematurely expire the oldest undo data in that segment, meaning the undo retention request you made will not be observed.

select max(maxquerylen) from v$undostat;
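To size the undo tablespace for a chosen retention, one hedged sketch multiplies the retention setting by the peak undo-generation rate from V$UNDOSTAT (whose rows cover 10-minute intervals) and the block size; treat the result as a rough lower bound, not an exact figure:

```sql
-- Sketch: MB of undo needed to honour the current UNDO_RETENTION,
-- based on the busiest interval recorded in V$UNDOSTAT.
select (select value from v$parameter where name = 'undo_retention')
       * max(undoblks / ((end_time - begin_time) * 86400))   -- peak blocks/sec
       * (select value from v$parameter where name = 'db_block_size')
       / 1024 / 1024 as undo_mb_needed
from   v$undostat;
```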

Some useful unix commands

removing all trc files:
find . -name "*.trc" -exec rm {} \;
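A self-contained sketch of the same pattern; the scratch directory and filenames are made up here purely so the example touches nothing real:

```shell
# Demo of the find/-exec rm pattern in a throwaway directory.
demo=$(mktemp -d)
touch "$demo/ora_1234.trc" "$demo/alert.log"
find "$demo" -name "*.trc" -print           # dry run: list what would be removed
find "$demo" -name "*.trc" -exec rm {} \;   # remove the trace files
ls "$demo"                                  # alert.log survives
```

Running the `-print` form first is a cheap safety check before the destructive `-exec rm` pass.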

In an AIX server, log files are as follows:
1) /var/adm/wtmp is a binary file; use the following command to read it:
/usr/lib/acct/fwtmp < /var/adm/wtmp
fast shutdown and reboot
svmon -G

memory issues :
#!/bin/ksh
[[ -z $1 ]] && count=10 || count=$1
ps gxu | grep PID | grep -v grep
ps gxu | /usr/bin/sort -k 4.0,5r | grep -v PID | head -n $count

ps aux | head -1; ps aux | sort -rn +3 | head
svmon -Put 5

finding file :
find / -name 'filename'

iostat 4 5
sar -u 1 10 -->cpu
sar -u 10 8
sar -d 5 2 ->io


ioscan -fn
tar -cvf /dev/rmt/c2t5dOBEST /tmp
tar -xvf sample.tar -> extract sample.tar
tar -tvf sample.tar -> list contents of sample.tar
tar -cf sample.tar /home/../.. -> put everything into one file (no compression)

Changing username

1-connect sys/sys as sysdba
2-select user# , name from user$;
3-update user$ set name='INTERBASE' where name='TJAY';
4-commit;
5-shutdown immediate;
6- startup mount;
7- alter database open;
8- alter user INTERBASE identified by o;

Saturday, 5 July 2008

Rman scenarios

1-) An online backup is not consistent: when you restore yesterday's backup, the datafiles are not consistent, i.e. the checkpoint SCN in each datafile header is not in sync with the control file. To bring the datafiles in sync with each other and with the control file, you need recovery.


2-) Oracle first checks the control file checkpoint SCN: if the checkpoint SCN in a datafile header is earlier than the control file checkpoint SCN, Oracle needs redo. Recovery is, overall, the process of applying redo to datafiles. In more depth, recovery synchronizes the checkpoint SCN in each datafile header with the checkpoint SCN in the control file; in broader terms, the goal of recovery is to make sure the checkpoint SCN in each datafile header matches the checkpoint SCN in the control file.


3-) When you restore an old control file, it does not know where the redo chain ends, which forces an OPEN RESETLOGS.
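The SCN comparison described in scenario 2 can be checked directly; this sketch joins V$DATAFILE (the control file's view of each datafile) to V$DATAFILE_HEADER (what the datafile header itself records):

```sql
-- Sketch: a header SCN behind the control file SCN means the file needs recovery.
select f.file#,
       f.checkpoint_change#  ctl_scn,
       h.checkpoint_change#  hdr_scn,
       case when f.checkpoint_change# = h.checkpoint_change#
            then 'IN SYNC' else 'NEEDS RECOVERY' end status
from   v$datafile f, v$datafile_header h
where  f.file# = h.file#;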