Friday, September 25, 2020

Oracle Extent Allocation Mystery and wasted/fragmented space in partitioned table PART-2

 In Part-1 - Oracle Extent Allocation for non-partitioned tables, we learned how Oracle allocates extents for non-partitioned tables, and that the default scheme is appropriate in most cases. However, when your application uses large tables with partitions, sub-partitions, or composite partitioning, I would suggest you take a closer look at the way Oracle allocates extents for such tables. In most cases where the extents are not densely populated with rows, a lot of storage space is wasted: a few million rows can end up requiring hundreds of GBs of storage that could have been accommodated in much less.


      Recently, while investigating data fragmentation in one of our client's databases, I was amazed to see huge wasted space in the large partitioned tables with multiple sub-partitions. The database I was investigating was an OLTP database with a couple of large partitioned/sub-partitioned tables that occupy the majority of the database storage, and I was surprised to see that more than 70% of the storage was not properly utilized, due to the extent allocation mystery for partitioned tables which I will be revealing in this article.

  
  

SQL>  CREATE TABLE "T_OBJ_SUB1"
  2     (	"OWNER"       VARCHAR2(128 BYTE) ,
  3  	"OBJECT_NAME"     VARCHAR2(128 BYTE) ,
  4  	"SUBOBJECT_NAME" VARCHAR2(128 BYTE) ,
  5  	"OBJECT_ID" NUMBER,
  6  	"DATA_OBJECT_ID" NUMBER,
  7  	"OBJECT_TYPE" VARCHAR2(23 BYTE) ,
  8  	"CREATED" DATE,
  9  	"LAST_DDL_TIME" DATE,
 10  	"TIMESTAMP" VARCHAR2(19 BYTE) ,
 11  	"STATUS" VARCHAR2(7 BYTE) ,
 12  	"TEMPORARY" VARCHAR2(1 BYTE) ,
 13  	"GENERATED" VARCHAR2(1 BYTE) ,
 14  	"SECONDARY" VARCHAR2(1 BYTE) ,
 15  	"NAMESPACE" NUMBER,
 16  	"EDITION_NAME" VARCHAR2(128 BYTE) ,
 17  	"SHARING" VARCHAR2(18 BYTE) ,
 18  	"EDITIONABLE" VARCHAR2(1 BYTE) ,
 19  	"ORACLE_MAINTAINED" VARCHAR2(1 BYTE) ,
 20  	"APPLICATION" VARCHAR2(1 BYTE) ,
 21  	"DEFAULT_COLLATION" VARCHAR2(100 BYTE) ,
 22  	"DUPLICATED" VARCHAR2(1 BYTE) ,
 23  	"SHARDED" VARCHAR2(1 BYTE) ,
 24  	"CREATED_APPID" NUMBER,
 25  	"CREATED_VSNID" NUMBER,
 26  	"MODIFIED_APPID" NUMBER,
 27  	"MODIFIED_VSNID" NUMBER,
 28  	ins_dt date
 29     )  SEGMENT CREATION IMMEDIATE
 30    PCTFREE 10 PCTUSED 40 INITRANS 10 MAXTRANS 255
 31   NOCOMPRESS LOGGING
 32   tablespace IPYPRTBPS1
 33    --STORAGE  --commented
 34    PARTITION BY RANGE ("INS_DT") INTERVAL (NUMTODSINTERVAL (1, 'DAY'))
 35    SUBPARTITION BY HASH (  "OWNER",  "OBJECT_NAME",  "SUBOBJECT_NAME"  )
 36    SUBPARTITION TEMPLATE (
 37      SUBPARTITION "T1"  ,
 38      SUBPARTITION "T2" ,
 39      SUBPARTITION "T3" ,
 40      SUBPARTITION "T4" ,
 41      SUBPARTITION "T5" ,
 42      SUBPARTITION "T6" ,
 43      SUBPARTITION "T7" ,
 44      SUBPARTITION "T8"  )
 45  (PARTITION "P_FIRST"  VALUES LESS THAN (TIMESTAMP' 2018-02-12 00:00:00')
 46  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 47    STORAGE(
 48    BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
 49     ( SUBPARTITION "P_FIRST_T1" SEGMENT CREATION DEFERRED
 50     NOCOMPRESS ,
 51    SUBPARTITION "P_FIRST_T2" SEGMENT CREATION DEFERRED
 52     NOCOMPRESS ,
 53    SUBPARTITION "P_FIRST_T3" SEGMENT CREATION DEFERRED
 54     NOCOMPRESS ,
 55    SUBPARTITION "P_FIRST_T4" SEGMENT CREATION DEFERRED
 56     NOCOMPRESS ,
 57    SUBPARTITION "P_FIRST_T5" SEGMENT CREATION DEFERRED
 58     NOCOMPRESS ,
 59    SUBPARTITION "P_FIRST_T6" SEGMENT CREATION DEFERRED
 60     NOCOMPRESS ,
 61    SUBPARTITION "P_FIRST_T7" SEGMENT CREATION DEFERRED
 62     NOCOMPRESS ,
 63    SUBPARTITION "P_FIRST_T8" SEGMENT CREATION DEFERRED
 64    NOCOMPRESS ) )  ENABLE ROW MOVEMENT ;

Table created.

SQL> 
SQL> insert into t_obj_sub1 (select o.*,o.created from dba_objects o);

194556 rows created.

SQL> commit;

Commit complete.
 
 

Now let's see how the extents have been allocated for this table when we created it without specifying a storage clause. Please note that we have truncated some content in the output below to keep it short. If you observe the output, the first extent allocated for each sub-partition segment is 8MB in size, which is 1024 blocks x the 8KB block size.



SQL> set line 500 pagesize 50 echo on
SQL> col owner for a10
SQL> col segment_name for a20
SQL> col partition_name for  a25
SQL> col segment_type for a25
SQL> col tablespace_name for a15
SQL> col extent_id for 99
SQL> col file_id for 99
SQL> col block_id for 9999999999
SQL> col bytes for 9999999999
SQL> col blocks for 99999
SQL> 
SQL> 
SQL> select owner,segment_name,partition_name,segment_type,tablespace_name,extent_id,file_id,block_id,bytes,blocks
  2   from  dba_extents where owner='TEST' and segment_name='T_OBJ_SUB1';

OWNER      SEGMENT_NAME         PARTITION_NAME            SEGMENT_TYPE              TABLESPACE_NAME EXTENT_ID FILE_ID    BLOCK_ID       BYTES BLOCKS
---------- -------------------- ------------------------- ------------------------- --------------- --------- ------- ----------- ----------- ------
TEST       T_OBJ_SUB1           SYS_SUBP137446            TABLE SUBPARTITION        IPYPRTBPS1              0       8      266496     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137448            TABLE SUBPARTITION        IPYPRTBPS1              0       8      270592     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137454            TABLE SUBPARTITION        IPYPRTBPS1              0       8      275712     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137455            TABLE SUBPARTITION        IPYPRTBPS1              0       8      272640     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137459            TABLE SUBPARTITION        IPYPRTBPS1              0       8      277760     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137467            TABLE SUBPARTITION        IPYPRTBPS1              0       8      286976     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137469            TABLE SUBPARTITION        IPYPRTBPS1              0       8      280704     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137470            TABLE SUBPARTITION        IPYPRTBPS1              0       8      284928     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137472            TABLE SUBPARTITION        IPYPRTBPS1              0       8      283904     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137473            TABLE SUBPARTITION        IPYPRTBPS1              0       8      282880     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137475            TABLE SUBPARTITION        IPYPRTBPS1              0       8      290944     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137476            TABLE SUBPARTITION        IPYPRTBPS1              0       8      291968     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137487            TABLE SUBPARTITION        IPYPRTBPS1              0       8      300288     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137488            TABLE SUBPARTITION        IPYPRTBPS1              0       8      296192     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137489            TABLE SUBPARTITION        IPYPRTBPS1              0       8      295168     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137493            TABLE SUBPARTITION        IPYPRTBPS1              0       8      305408     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137495            TABLE SUBPARTITION        IPYPRTBPS1              0       8      309504     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137496            TABLE SUBPARTITION        IPYPRTBPS1              0       8      311552     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137497            TABLE SUBPARTITION        IPYPRTBPS1              0       8      302336     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137499            TABLE SUBPARTITION        IPYPRTBPS1              0       8      306432     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137507            TABLE SUBPARTITION        IPYPRTBPS1              0       8      314624     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137509            TABLE SUBPARTITION        IPYPRTBPS1              0       8      313600     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137512            TABLE SUBPARTITION        IPYPRTBPS1              0       8      315648     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137516            TABLE SUBPARTITION        IPYPRTBPS1              0       8      317696     8388608   1024
:
:
:
TEST       T_OBJ_SUB1           SYS_SUBP137372            TABLE SUBPARTITION        IPYPRTBPS1              0       9     2998400     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137373            TABLE SUBPARTITION        IPYPRTBPS1              0       9     2996352     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137374            TABLE SUBPARTITION        IPYPRTBPS1              0       9     3000448     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137378            TABLE SUBPARTITION        IPYPRTBPS1              0       9     3005568     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137381            TABLE SUBPARTITION        IPYPRTBPS1              0       9     3004544     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137382            TABLE SUBPARTITION        IPYPRTBPS1              0       9     3006592     8388608   1024
TEST       T_OBJ_SUB1           SYS_SUBP137385            TABLE SUBPARTITION        IPYPRTBPS1              0       9     3009664     8388608   1024

342 rows selected.
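
The numbers in the output above can be sanity-checked with a little arithmetic (a quick sketch, not part of the SQL*Plus session): each first extent is 1024 blocks of 8KB, i.e. 8MB, and with 342 such segments the table already occupies well over 2.5GB for fewer than 200K rows.

```python
# Sanity check on the extent sizes reported by dba_extents (sketch).
block_size_bytes = 8 * 1024        # 8KB tablespace block size
blocks_per_extent = 1024           # BLOCKS column in the output above
extent_bytes = blocks_per_extent * block_size_bytes
print(extent_bytes)                # 8388608, matching the BYTES column

segments = 342                     # "342 rows selected." above
total_mb = segments * extent_bytes / 1024 / 1024
print(total_mb)                    # 2736.0 MB allocated for ~195K rows
```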


 
Now let us observe the storage clause picked up by default when this partitioned table is created without a storage clause. First we need to gather statistics so that the statistics-related columns are populated in dba_tab_subpartitions and the related dictionary views. Please note that we have truncated some content in the output below to keep it short.


SQL> 
SQL> exec dbms_stats.gather_table_stats(ownname => 'TEST', tabname => 'T_OBJ_SUB1',GRANULARITY=>'ALL', estimate_percent => dbms_stats.auto_sample_size, degree => '4',cascade => TRUE, method_opt=>'FOR ALL COLUMNS SIZE AUTO');

PL/SQL procedure successfully completed.

SQL> 
SQL> col table_owner for a10
SQL> col table_name for a15
SQL> col partition_name for a15
SQL> col subpartition_name for a25
SQL> col partition_position for 999
SQL> col subpartition_position for 9999
SQL> col tablespace_name for a15
SQL> col initial_extent for 999999999999
SQL> col next_extent  for 999999999999
SQL> col num_rows for 999999999999
SQL> col blocks for 99999999
SQL> col last_analyzed for a25
SQL> 
SQL> select table_owner,table_name ,partition_name,subpartition_name,partition_position,subpartition_position,tablespace_name,initial_extent, next_extent ,
  2  num_rows, blocks,last_analyzed
  3  from dba_tab_subpartitions where  table_name='T_OBJ_SUB1';

TABLE_OWNE TABLE_NAME      PARTITION_NAME  SUBPARTITION_NAME         PARTITION_POSITION SUBPARTITION_POSITION TABLESPACE_NAME INITIAL_EXTENT   NEXT_EXTENT      NUM_ROWS    BLOCKS LAST_ANALYZED 
---------- --------------- --------------- ------------------------- ------------------ --------------------- --------------- -------------- ------------- ------------- --------- --------------
TEST       T_OBJ_SUB1      P_FIRST         P_FIRST_T1                                 1                     1 IPYPRTBPS1             8388608       1048576          8971      1006 23-SEP-20     
TEST       T_OBJ_SUB1      P_FIRST         P_FIRST_T2                                 1                     2 IPYPRTBPS1             8388608       1048576          9069      1006 23-SEP-20     
TEST       T_OBJ_SUB1      P_FIRST         P_FIRST_T3                                 1                     3 IPYPRTBPS1             8388608       1048576          8850      1006 23-SEP-20     
TEST       T_OBJ_SUB1      P_FIRST         P_FIRST_T8                                 1                     8 IPYPRTBPS1             8388608       1048576          9067      1006 23-SEP-20     
TEST       T_OBJ_SUB1      P_FIRST         P_FIRST_T5                                 1                     5 IPYPRTBPS1             8388608       1048576          9088      1006 23-SEP-20     
TEST       T_OBJ_SUB1      P_FIRST         P_FIRST_T6                                 1                     6 IPYPRTBPS1             8388608       1048576          9134      1006 23-SEP-20     
TEST       T_OBJ_SUB1      P_FIRST         P_FIRST_T7                                 1                     7 IPYPRTBPS1             8388608       1048576          9144      1006 23-SEP-20     
TEST       T_OBJ_SUB1      P_FIRST         P_FIRST_T4                                 1                     4 IPYPRTBPS1             8388608       1048576          9033      1006 23-SEP-20     
TEST       T_OBJ_SUB1      SYS_P137258     SYS_SUBP137250                             2                     1 IPYPRTBPS1             8388608       1048576             3      1006 23-SEP-20     
TEST       T_OBJ_SUB1      SYS_P137258     SYS_SUBP137251                             2                     2 IPYPRTBPS1             8388608       1048576             2      1006 23-SEP-20     
TEST       T_OBJ_SUB1      SYS_P137258     SYS_SUBP137252                             2                     3 IPYPRTBPS1             8388608       1048576             4      1006 23-SEP-20     
TEST       T_OBJ_SUB1      SYS_P137258     SYS_SUBP137257                             2                     8 IPYPRTBPS1             8388608       1048576             1      1006 23-SEP-20     
TEST       T_OBJ_SUB1      SYS_P137258     SYS_SUBP137254                             2                     5 IPYPRTBPS1             8388608       1048576             4      1006 23-SEP-20     
TEST       T_OBJ_SUB1      SYS_P137258     SYS_SUBP137255                             2                     6 IPYPRTBPS1             8388608       1048576             2      1006 23-SEP-20     
TEST       T_OBJ_SUB1      SYS_P137258     SYS_SUBP137256                             2                     7 IPYPRTBPS1             8388608       1048576             2      1006 23-SEP-20     
TEST       T_OBJ_SUB1      SYS_P137258     SYS_SUBP137253                             2                     4 IPYPRTBPS1             8388608       1048576             4      1006 23-SEP-20     
TEST       T_OBJ_SUB1      SYS_P137339     SYS_SUBP137331                             3                     1 IPYPRTBPS1             8388608       1048576             4      1006 23-SEP-20     
TEST       T_OBJ_SUB1      SYS_P137339     SYS_SUBP137332                             3                     2 IPYPRTBPS1             8388608       1048576             1      1006 23-SEP-20     
TEST       T_OBJ_SUB1      SYS_P137339     SYS_SUBP137333                             3                     3 IPYPRTBPS1             8388608       1048576             4      1006 23-SEP-20     
TEST       T_OBJ_SUB1      SYS_P137339     SYS_SUBP137334                             3                     4 IPYPRTBPS1             8388608       1048576             2      1006 23-SEP-20     
TEST       T_OBJ_SUB1      SYS_P137339     SYS_SUBP137335                             3                     5 IPYPRTBPS1             8388608       1048576             1      1006 23-SEP-20     
TEST       T_OBJ_SUB1      SYS_P137339     SYS_SUBP137336                             3                     6 IPYPRTBPS1             8388608       1048576             3      1006 23-SEP-20     
TEST       T_OBJ_SUB1      SYS_P137339     SYS_SUBP137337                             3                     7 IPYPRTBPS1             8388608       1048576             4      1006 23-SEP-20     
TEST       T_OBJ_SUB1      SYS_P137339     SYS_SUBP137338                             3                     8 IPYPRTBPS1             8388608       1048576             1      1006 23-SEP-20     
TEST       T_OBJ_SUB1      SYS_P137357     SYS_SUBP137349                             4                     1 IPYPRTBPS1             8388608       1048576             1      1006 23-SEP-20     
TEST       T_OBJ_SUB1      SYS_P137357     SYS_SUBP137350                             4                     2 IPYPRTBPS1             8388608       1048576             3      1006 23-SEP-20     
TEST       T_OBJ_SUB1      SYS_P137357     SYS_SUBP137351                             4                     3 IPYPRTBPS1             8388608       1048576             3      1006 23-SEP-20     
TEST       T_OBJ_SUB1      SYS_P137357     SYS_SUBP137352                             4                     4 IPYPRTBPS1             8388608       
:
:
TEST       T_OBJ_SUB1      SYS_P137429     SYS_SUBP137423                            57                     3 IPYPRTBPS1             8388608       1048576             3       376 23-SEP-20     
TEST       T_OBJ_SUB1      SYS_P137429     SYS_SUBP137421                            57                     1 IPYPRTBPS1             8388608       1048576             2      1006 23-SEP-20     
TEST       T_OBJ_SUB1      SYS_P137429     SYS_SUBP137422                            57                     2 IPYPRTBPS1             8388608       1048576             1      1006 23-SEP-20     

456 rows selected.

If you observe the above output, the INITIAL_EXTENT is 8MB (8388608 bytes) and the NEXT_EXTENT size is 1MB (1048576 bytes) under the default storage clause. In the output below you can observe that almost all the extent space in each sub-partition is wasted: in total 2.50GB (2527086 KB) of storage is being wasted, whereas the actual data is approximately 25MB (26097 KB). Please note that we have truncated some content in the output below to keep it short.


SQL>
SQL> --Wasted space below HWM
SQL> col owner for a20
SQL> col table_name for a30
SQL> col LAST_ANALYZED for a20
SQL> col EXTENT_MANAGEMENT for a20
SQL> col SEGMENT_SPACE_MANAGEMENT for a20
SQL> col tablespace_name for a20
SQL> col sgment_space_mgmt for a15
SQL> col partition_name for a25
SQL> col subparition_name for a30
SQL>  
SQL> 
SQL> select * from
  2  (
  3  select
  4     table_owner,table_name,PARTITION_NAME,subpartition_name,last_analyzed,s.tablespace_name,EXTENT_MANAGEMENT EXTENT_MGMT,SEGMENT_SPACE_MANAGEMENT SEGMENT_SPACE_MGMT,NUM_ROWS,BLOCKS,EMPTY_BLOCKS,round((blocks*8),2) "size (kb)" ,
  5     round((num_rows*avg_row_len/1024),2) "actual_data (kb)",
  6     (round((blocks*8),2) - round((num_rows*avg_row_len/1024),2)) "wasted_space (kb)"
  7  from
  8     dba_tab_subpartitions T, dba_tablespaces s
  9  where
 10     (round((blocks*8),2) > round((num_rows*avg_row_len/1024),2))
 11  and
 12  t.tablespace_name=s.tablespace_name
 13  order by (round((blocks*8),2) - round((num_rows*avg_row_len/1024),2)) desc
 14  )
 15  where table_owner='TEST' AND TABLE_NAME='T_OBJ_SUB1' ;

TABLE_OWNE TABLE_NAME                     PARTITION_NAME            SUBPARTITION_NAME         LAST_ANALYZED        TABLESPACE_NAME      EXTENT_MGM SEGMEN      NUM_ROWS    BLOCKS EMPTY_BLOCKS  size (kb) actual_data (kb) wasted_space (kb)                                                                                                                                                                                                                                                                       
---------- ------------------------------ ------------------------- ------------------------- -------------------- -------------------- ---------- ------ ------------- --------- ------------ ---------- ---------------- -----------------                                                                                                                                                                                                                                                                       
TEST       T_OBJ_SUB1                     SYS_P137690               SYS_SUBP137682            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO               1      1006            0       8048              .11           8047.89                                                                                                                                                                                                                                                                       
TEST       T_OBJ_SUB1                     SYS_P137708               SYS_SUBP137705            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO               1      1006            0       8048              .11           8047.89                                                                                                                                                                                                                                                                       
TEST       T_OBJ_SUB1                     SYS_P137708               SYS_SUBP137706            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO               1      1006            0       8048              .11           8047.89                                                                                                                                                                                                                                                                       
TEST       T_OBJ_SUB1                     SYS_P137717               SYS_SUBP137714            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO               1      1006            0       8048              .11           8047.89                                                                                                                                                                                                                                                                       
TEST       T_OBJ_SUB1                     SYS_P137753               SYS_SUBP137748            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO               1      1006            0       8048              .11           8047.89                                                                                                                                                                                                                                                                       
TEST       T_OBJ_SUB1                     SYS_P137357               SYS_SUBP137355            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO               1      1006            0       8048              .11           8047.89                                                                                                                                                                                                                                                                       
TEST       T_OBJ_SUB1                     SYS_P137483               SYS_SUBP137476            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO               1      1006            0       8048              .11           8047.89                                                                                                                                                                                                                                                                       
TEST       T_OBJ_SUB1                     SYS_P137510               SYS_SUBP137507            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO               1      1006            0       8048              .11           8047.89                                                                                                                                                                                                                                                                       
TEST       T_OBJ_SUB1                     SYS_P137339               SYS_SUBP137332            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO               1      1006            0       8048              .11           8047.89                                                                                                                                                                                                                                                                       
TEST       T_OBJ_SUB1                     SYS_P137339               SYS_SUBP137335            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO               1      1006            0       8048              .11           8047.89
:
:
TEST       T_OBJ_SUB1                     SYS_P137690               SYS_SUBP137683            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO               1       754            0       6032              .11           6031.89                                                                                                                                                                                                                                                                       
TEST       T_OBJ_SUB1                     SYS_P137528               SYS_SUBP137520            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO               1       376            0       3008              .14           3007.86                                                                                                                                                                                                                                                                       
TEST       T_OBJ_SUB1                     SYS_P137429               SYS_SUBP137423            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO               3       376            0       3008              .39           3007.61                                                                                                                                                                                                                                                                       
TEST       T_OBJ_SUB1                     SYS_P137294               SYS_SUBP137289            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO             803       124            0        992           105.08            886.92                                                                                                                                                                                                                                                                       

320 rows selected.
SQL> 
SQL> 
SQL> 
SQL> select table_owner,table_name,sum(size_kb),sum(actual_data_kb),sum(wasted_space_kb) from
  2  (
  3  select
  4     table_owner,table_name,PARTITION_NAME,subpartition_name,   last_analyzed,s.tablespace_name,EXTENT_MANAGEMENT EXTENT_MGMT,SEGMENT_SPACE_MANAGEMENT SEGMENT_SPACE_MGMT,NUM_ROWS,BLOCKS,EMPTY_BLOCKS,
  5     round((blocks*8),2) size_kb ,
  6     round((num_rows*avg_row_len/1024),2) actual_data_kb,
  7     (round((blocks*8),2) - round((num_rows*avg_row_len/1024),2)) wasted_space_kb
  8  from
  9     dba_tab_subpartitions T, dba_tablespaces s
 10  where
 11     (round((blocks*8),2) > round((num_rows*avg_row_len/1024),2))
 12  and
 13  t.tablespace_name=s.tablespace_name
 14  order by (round((blocks*8),2) - round((num_rows*avg_row_len/1024),2)) desc
 15  )
 16  where table_owner='TEST' AND TABLE_NAME='T_OBJ_SUB1'
 17  group by table_owner,table_name;

TABLE_OWNE TABLE_NAME                     SUM(SIZE_KB) SUM(ACTUAL_DATA_KB) SUM(WASTED_SPACE_KB)
---------- ------------------------------ ------------ ------------------- --------------------                                                   
TEST       T_OBJ_SUB1                          2553184            26097.06           2527086.94

SQL> 
SQL> 
SQL> 

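The summary above reconciles with the per-row output (a quick arithmetic check using the figures reported by the queries, not part of the session): a sub-partition of 1006 blocks occupies 1006 x 8KB = 8048KB even when it holds a single row, and the totals subtract cleanly.

```python
# Reconcile the wasted-space figures reported above (sketch).
blocks = 1006                       # BLOCKS for a typical sub-partition
size_kb = blocks * 8                # 8KB blocks
print(size_kb)                      # 8048 KB, even for a single-row sub-partition

total_size_kb = 2553184.0           # SUM(SIZE_KB) from the summary query
actual_kb = 26097.06                # SUM(ACTUAL_DATA_KB)
wasted_kb = round(total_size_kb - actual_kb, 2)
print(wasted_kb)                    # 2527086.94 KB, i.e. roughly 2.5GB wasted
```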
 
So the important question is: how can we control this wasted space for partitioned/sub-partitioned tables? There are TWO ways you can control the storage clause, as mentioned below:
#1: Specify a 64K or smaller extent size when you define the partitioned table.
#2: Set the hidden parameter _partition_large_extents to false. (We will discuss this in another article.)



 Now let's examine method #1, where we create the partitioned/sub-partitioned table T_OBJ_S2 with a smaller initial extent size, i.e. STORAGE(INITIAL 10K NEXT 20K). Oracle will allocate a 64K initial extent. Please note that if you specify an initial extent of 64K or less, Oracle will allocate at least a 64K initial extent, and the same storage allocation algorithm as for normal tables applies: the first 16 extents are 64K each, and from the 17th extent onward the extent size grows to 1MB and so on, which is evident from the example below. With the smaller extent size, all the data in this case fit in 54MB instead of the 2.50GB required in the previous case with default storage. It still depends on how densely you populate each extent: if you populate enough rows per extent, you should be fine, but if you use the default extent allocation scheme for partitioned/sub-partitioned tables and do not populate enough rows in the extents allocated for the partitions/sub-partitions, you may have huge wasted space in your database.
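
The allocation pattern just described can be modelled with a small sketch (the 64K-then-1MB step-up is the behaviour described in this article; real AUTOALLOCATE continues stepping up for larger segments):

```python
def autoallocate_extent_kb(extent_id):
    """Extent size by position, per the scheme described above:
    extents 0-15 are 64KB each, later extents are 1MB (sketch only)."""
    return 64 if extent_id < 16 else 1024

# A sub-partition holding ~1MB of row data grows through sixteen
# 64KB extents before any 1MB extent is needed.
needed_kb, allocated_kb, extents = 1024, 0, 0
while allocated_kb < needed_kb:
    allocated_kb += autoallocate_extent_kb(extents)
    extents += 1
print(extents, allocated_kb)   # 16 extents, 1024 KB allocated
```

This is why a sparsely populated sub-partition now wastes at most tens of KB instead of a full 8MB initial extent.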


SQL> 
SQL>   CREATE TABLE "T_OBJ_S2"
  2     (	"OWNER"       VARCHAR2(128 BYTE) ,
  3  	"OBJECT_NAME"     VARCHAR2(128 BYTE) ,
  4  	"SUBOBJECT_NAME" VARCHAR2(128 BYTE) ,
  5  	"OBJECT_ID" NUMBER,
  6  	"DATA_OBJECT_ID" NUMBER,
  7  	"OBJECT_TYPE" VARCHAR2(23 BYTE) ,
  8  	"CREATED" DATE,
  9  	"LAST_DDL_TIME" DATE,
 10  	"TIMESTAMP" VARCHAR2(19 BYTE) ,
 11  	"STATUS" VARCHAR2(7 BYTE) ,
 12  	"TEMPORARY" VARCHAR2(1 BYTE) ,
 13  	"GENERATED" VARCHAR2(1 BYTE) ,
 14  	"SECONDARY" VARCHAR2(1 BYTE) ,
 15  	"NAMESPACE" NUMBER,
 16  	"EDITION_NAME" VARCHAR2(128 BYTE) ,
 17  	"SHARING" VARCHAR2(18 BYTE) ,
 18  	"EDITIONABLE" VARCHAR2(1 BYTE) ,
 19  	"ORACLE_MAINTAINED" VARCHAR2(1 BYTE) ,
 20  	"APPLICATION" VARCHAR2(1 BYTE) ,
 21  	"DEFAULT_COLLATION" VARCHAR2(100 BYTE) ,
 22  	"DUPLICATED" VARCHAR2(1 BYTE) ,
 23  	"SHARDED" VARCHAR2(1 BYTE) ,
 24  	"CREATED_APPID" NUMBER,
 25  	"CREATED_VSNID" NUMBER,
 26  	"MODIFIED_APPID" NUMBER,
 27  	"MODIFIED_VSNID" NUMBER,
 28  	ins_dt date
 29     )  SEGMENT CREATION IMMEDIATE
 30    tablespace IPYPRTBPS1
 31    STORAGE(INITIAL 10K NEXT 20K)
 32    PARTITION BY RANGE ("INS_DT") INTERVAL (NUMTODSINTERVAL (1, 'DAY'))
 33    SUBPARTITION BY HASH (  "OWNER",  "OBJECT_NAME",  "SUBOBJECT_NAME"  )
 34    SUBPARTITION TEMPLATE (
 35      SUBPARTITION "T1"  ,
 36      SUBPARTITION "T2" ,
 37      SUBPARTITION "T3" ,
 38      SUBPARTITION "T4" ,
 39      SUBPARTITION "T5" ,
 40      SUBPARTITION "T6" ,
 41      SUBPARTITION "T7" ,
 42      SUBPARTITION "T8"  )
 43  (PARTITION "P_FIRST"  VALUES LESS THAN (TIMESTAMP' 2018-02-12 00:00:00')
 44      ( SUBPARTITION "P_FIRST_T1" SEGMENT CREATION DEFERRED
 45     NOCOMPRESS ,
 46    SUBPARTITION "P_FIRST_T2" SEGMENT CREATION DEFERRED
 47     NOCOMPRESS ,
 48    SUBPARTITION "P_FIRST_T3" SEGMENT CREATION DEFERRED
 49     NOCOMPRESS ,
 50    SUBPARTITION "P_FIRST_T4" SEGMENT CREATION DEFERRED
 51     NOCOMPRESS ,
 52    SUBPARTITION "P_FIRST_T5" SEGMENT CREATION DEFERRED
 53     NOCOMPRESS ,
 54    SUBPARTITION "P_FIRST_T6" SEGMENT CREATION DEFERRED
 55     NOCOMPRESS ,
 56    SUBPARTITION "P_FIRST_T7" SEGMENT CREATION DEFERRED
 57     NOCOMPRESS ,
 58    SUBPARTITION "P_FIRST_T8" SEGMENT CREATION DEFERRED
 59    NOCOMPRESS ) )  ENABLE ROW MOVEMENT ;

Table created.

SQL> 
SQL> 
SQL> insert into t_obj_s2 (select o.*,o.created from dba_objects o);

195070 rows created.

SQL> 
SQL> 
SQL> 
SQL> commit;

Commit complete.

SQL> 
SQL> 

SQL> BEGIN
  2  dbms_stats.gather_table_stats(ownname => 'TEST', tabname => 'T_OBJ_S2',GRANULARITY=>'ALL', estimate_percent => dbms_stats.auto_sample_size, degree => '4',cascade => TRUE, method_opt=>'FOR ALL COLUMNS SIZE AUTO');
  3  END;
  4  /

PL/SQL procedure successfully completed.

SQL> 
SQL> 
SQL> 
SQL> col owner for a10
SQL> col segment_name for a20
SQL> col partition_name for  a25
SQL> col segment_type for a25
SQL> col tablespace_name for a15
SQL> col extent_id for 99
SQL> col file_id for 99
SQL> col block_id for 9999999999
SQL> col bytes for 9999999999
SQL> col blocks for 99999
SQL> 
SQL> select owner,segment_name,partition_name,segment_type,tablespace_name,extent_id,file_id,block_id,bytes,blocks
  2   from  dba_extents where owner='TEST' and segment_name='T_OBJ_S2';

OWNER      SEGMENT_NAME         PARTITION_NAME            SEGMENT_TYPE              TABLESPACE_NAME EXTENT_ID FILE_ID    BLOCK_ID       BYTES BLOCKS
---------- -------------------- ------------------------- ------------------------- --------------- --------- ------- ----------- ----------- ------
TEST       T_OBJ_S2             SYS_SUBP137941            TABLE SUBPARTITION        IPYPRTBPS1              0       9      209144       65536      8
TEST       T_OBJ_S2             SYS_SUBP137783            TABLE SUBPARTITION        IPYPRTBPS1              0       9      160936       65536      8
TEST       T_OBJ_S2             SYS_SUBP137783            TABLE SUBPARTITION        IPYPRTBPS1              1       8       68768       65536      8
TEST       T_OBJ_S2             SYS_SUBP137783            TABLE SUBPARTITION        IPYPRTBPS1              2      11       56488       65536      8
TEST       T_OBJ_S2             SYS_SUBP137783            TABLE SUBPARTITION        IPYPRTBPS1              3      10       56504       65536      8
TEST       T_OBJ_S2             SYS_SUBP137783            TABLE SUBPARTITION        IPYPRTBPS1              4       9      282784       65536      8
TEST       T_OBJ_S2             SYS_SUBP137788            TABLE SUBPARTITION        IPYPRTBPS1              0       9      147672       65536      8
TEST       T_OBJ_S2             SYS_SUBP137788            TABLE SUBPARTITION        IPYPRTBPS1              1       8       64760       65536      8
TEST       T_OBJ_S2             SYS_SUBP137788            TABLE SUBPARTITION        IPYPRTBPS1              2      11       56512       65536      8
TEST       T_OBJ_S2             SYS_SUBP137788            TABLE SUBPARTITION        IPYPRTBPS1              3      10       56528       65536      8
TEST       T_OBJ_S2             SYS_SUBP137788            TABLE SUBPARTITION        IPYPRTBPS1              4       9      282808       65536      8
TEST       T_OBJ_S2             SYS_SUBP137794            TABLE SUBPARTITION        IPYPRTBPS1              0       9      160984       65536      8
TEST       T_OBJ_S2             SYS_SUBP137794            TABLE SUBPARTITION        IPYPRTBPS1              1       8       71920       65536      8
TEST       T_OBJ_S2             SYS_SUBP137794            TABLE SUBPARTITION        IPYPRTBPS1              2      11       56552       65536      8
TEST       T_OBJ_S2             SYS_SUBP137871            TABLE SUBPARTITION        IPYPRTBPS1              0       9      198816       65536      8
TEST       T_OBJ_S2             SYS_SUBP137871            TABLE SUBPARTITION        IPYPRTBPS1              1       8       60640       65536      8
TEST       T_OBJ_S2             SYS_SUBP137877            TABLE SUBPARTITION        IPYPRTBPS1              0       9      197840       65536      8
TEST       T_OBJ_S2             SYS_SUBP137877            TABLE SUBPARTITION        IPYPRTBPS1              1       8       60656       65536      8
TEST       T_OBJ_S2             SYS_SUBP138053            TABLE SUBPARTITION        IPYPRTBPS1              0       9      228568       65536      8
TEST       T_OBJ_S2             SYS_SUBP137891            TABLE SUBPARTITION        IPYPRTBPS1              0       9      198848       65536      8
:
:
:
TEST       T_OBJ_S2             SYS_SUBP138163            TABLE SUBPARTITION        IPYPRTBPS1              0       9      281800       65536      8
TEST       T_OBJ_S2             SYS_SUBP138164            TABLE SUBPARTITION        IPYPRTBPS1              0       9      281784       65536      8
TEST       T_OBJ_S2             SYS_SUBP138166            TABLE SUBPARTITION        IPYPRTBPS1              0       9      281792       65536      8
TEST       T_OBJ_S2             SYS_SUBP138169            TABLE SUBPARTITION        IPYPRTBPS1              0       9      281824       65536      8
TEST       T_OBJ_S2             SYS_SUBP138170            TABLE SUBPARTITION        IPYPRTBPS1              0       9      281816       65536      8
TEST       T_OBJ_S2             SYS_SUBP138171            TABLE SUBPARTITION        IPYPRTBPS1              0       9      281832       65536      8
TEST       T_OBJ_S2             SYS_SUBP138178            TABLE SUBPARTITION        IPYPRTBPS1              0       9      282840       65536      8
TEST       T_OBJ_S2             SYS_SUBP138179            TABLE SUBPARTITION        IPYPRTBPS1              0       9      282856       65536      8
TEST       T_OBJ_S2             SYS_SUBP138180            TABLE SUBPARTITION        IPYPRTBPS1              0       9      282848       65536      8
TEST       T_OBJ_S2             SYS_SUBP138183            TABLE SUBPARTITION        IPYPRTBPS1              0       9      282864       65536      8
TEST       T_OBJ_S2             SYS_SUBP138186            TABLE SUBPARTITION        IPYPRTBPS1              0       9      283800       65536      8
TEST       T_OBJ_S2             SYS_SUBP138187            TABLE SUBPARTITION        IPYPRTBPS1              0       9      283832       65536      8
TEST       T_OBJ_S2             SYS_SUBP138189            TABLE SUBPARTITION        IPYPRTBPS1              0       9      283816       65536      8
TEST       T_OBJ_S2             SYS_SUBP138190            TABLE SUBPARTITION        IPYPRTBPS1              0       9      282872       65536      8
TEST       T_OBJ_S2             SYS_SUBP138191            TABLE SUBPARTITION        IPYPRTBPS1              0       9      283824       65536      8
TEST       T_OBJ_S2             SYS_SUBP138193            TABLE SUBPARTITION        IPYPRTBPS1              0       9      283808       65536      8


753 rows selected.
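
Since every populated subpartition is its own segment with at least one 64 KB extent, a quick way to see how many segments this one table spawned and how much space they allocate is to aggregate dba_segments. This is a sketch, assuming the same TEST schema and access to the DBA views:

```sql
-- One row per segment type: how many subpartition segments exist and their total size
SELECT segment_type,
       COUNT(*)             AS segment_count,
       SUM(bytes)/1024/1024 AS total_mb
  FROM dba_segments
 WHERE owner        = 'TEST'
   AND segment_name = 'T_OBJ_S2'
 GROUP BY segment_type;
```

With 753 extents of 64 KB each spread across hundreds of subpartition segments, most segments never grow past their first extent, which is exactly where the waste accumulates.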


SQL> 
SQL> --Wasted space below HWM
SQL> col owner for a20
SQL> col table_name for a30
SQL> col LAST_ANALYZED for a20
SQL> col EXTENT_MANAGEMENT for a20
SQL> col SEGMENT_SPACE_MANAGEMENT for a20
SQL> col tablespace_name for a20
SQL> col sgment_space_mgmt for a15
SQL> col partition_name for a25
SQL> col subparition_name for a30
SQL> 
SQL> 
SQL> select * from
  2  (
  3  select
  4     table_owner,table_name,PARTITION_NAME,subpartition_name,last_analyzed,s.tablespace_name,EXTENT_MANAGEMENT EXTENT_MGMT,SEGMENT_SPACE_MANAGEMENT SEGMENT_SPACE_MGMT,NUM_ROWS,BLOCKS,EMPTY_BLOCKS,round((blocks*8),2) "size (kb)" ,
  5     round((num_rows*avg_row_len/1024),2) "actual_data (kb)",
  6     (round((blocks*8),2) - round((num_rows*avg_row_len/1024),2)) "wasted_space (kb)"
  7  from
  8     dba_tab_subpartitions T, dba_tablespaces s
  9  where
 10    -- (round((blocks*8),2) > round((num_rows*avg_row_len/1024),2))
 11  --and
 12  t.tablespace_name=s.tablespace_name
 13  order by (round((blocks*8),2) - round((num_rows*avg_row_len/1024),2)) desc
 14  )
 15  where table_owner='TEST' AND TABLE_NAME='T_OBJ_S2' ;

TABLE_OWNE TABLE_NAME                     PARTITION_NAME            SUBPARTITION_NAME         LAST_ANALYZED        TABLESPACE_NAME      EXTENT_MGM SEGMEN      NUM_ROWS    BLOCKS EMPTY_BLOCKS  size (kb) actual_data (kb) wasted_space (kb) 
---------- ------------------------------ ------------------------- ------------------------- -------------------- -------------------- ---------- ------ ------------- --------- ------------ ---------- ---------------- ----------------- 
TEST       T_OBJ_S2                       SYS_P137816               SYS_SUBP137813            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO            5997       244            0       1952           802.33           1149.67 
TEST       T_OBJ_S2                       SYS_P137816               SYS_SUBP137814            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO            5983       244            0       1952            806.3            1145.7 
TEST       T_OBJ_S2                       SYS_P137816               SYS_SUBP137812            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO            5998       244            0       1952           808.32           1143.68 
TEST       T_OBJ_S2                       SYS_P137816               SYS_SUBP137809            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO            6048       244            0       1952           809.16           1142.84 
TEST       T_OBJ_S2                       SYS_P137816               SYS_SUBP137808            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO            6012       244            0       1952           810.21           1141.79 
TEST       T_OBJ_S2                       SYS_P137816               SYS_SUBP137810            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO            6043       244            0       1952           814.39           1137.61 
TEST       T_OBJ_S2                       SYS_P137816               SYS_SUBP137811            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO            6105       244            0       1952           816.78           1135.22 
TEST       T_OBJ_S2                       SYS_P137816               SYS_SUBP137815            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO            6092       244            0       1952           820.99           1131.01 
TEST       T_OBJ_S2                       P_FIRST                   P_FIRST_T3                23-SEP-20            IPYPRTBPS1           LOCAL      AUTO            8850       244            0       1952           1218.6             733.4 
TEST       T_OBJ_S2                       P_FIRST                   P_FIRST_T1                23-SEP-20            IPYPRTBPS1           LOCAL      AUTO            8971       244            0       1952          1235.26            716.74 
TEST       T_OBJ_S2                       P_FIRST                   P_FIRST_T8                23-SEP-20            IPYPRTBPS1           LOCAL      AUTO            9067       244            0       1952          1239.63            712.37 
TEST       T_OBJ_S2                       P_FIRST                   P_FIRST_T4                23-SEP-20            IPYPRTBPS1           LOCAL      AUTO            9033       244            0       1952           1243.8             708.2 
:
:
:
TEST       T_OBJ_S2                       SYS_P137969               SYS_SUBP137965            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO               0         0            0          0                0                 0 
TEST       T_OBJ_S2                       SYS_P137969               SYS_SUBP137966            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO               0         0            0          0                0                 0 
TEST       T_OBJ_S2                       SYS_P137969               SYS_SUBP137967            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO               0         0            0          0                0                 0 
TEST       T_OBJ_S2                       SYS_P137969               SYS_SUBP137968            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO               0         0            0          0                0                 0 
TEST       T_OBJ_S2                       SYS_P137978               SYS_SUBP137970            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO               0         0            0          0                0                 0 
TEST       T_OBJ_S2                       SYS_P137978               SYS_SUBP137972            23-SEP-20            IPYPRTBPS1           LOCAL      AUTO               0         0            0          0                0                 0 

456 rows selected.

SQL> 
SQL> 
SQL> select table_owner,table_name,sum(size_kb),sum(actual_data_kb),sum(wasted_space_kb) from
  2  (
  3  select
  4     table_owner,table_name,PARTITION_NAME,subpartition_name,   last_analyzed,s.tablespace_name,EXTENT_MANAGEMENT EXTENT_MGMT,SEGMENT_SPACE_MANAGEMENT SEGMENT_SPACE_MGMT,NUM_ROWS,BLOCKS,EMPTY_BLOCKS,
  5     round((blocks*8),2) size_kb ,
  6     round((num_rows*avg_row_len/1024),2) actual_data_kb,
  7     (round((blocks*8),2) - round((num_rows*avg_row_len/1024),2)) wasted_space_kb
  8  from
  9     dba_tab_subpartitions T, dba_tablespaces s
 10  where
 11     (round((blocks*8),2) > round((num_rows*avg_row_len/1024),2))
 12  and
 13  t.tablespace_name=s.tablespace_name
 14  order by (round((blocks*8),2) - round((num_rows*avg_row_len/1024),2)) desc
 15  )
 16  where table_owner='TEST' AND TABLE_NAME='T_OBJ_S2'
 17  group by table_owner,table_name;

TABLE_OWNE TABLE_NAME                     SUM(SIZE_KB) SUM(ACTUAL_DATA_KB) SUM(WASTED_SPACE_KB)
---------- ------------------------------ ------------ ------------------- --------------------
TEST       T_OBJ_S2                              54072            26163.87             27908.13

SQL> 
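
From the summary above, roughly half of the allocated space is wasted (27,908 KB of 54,072 KB, about 52%). The inner query can be wrapped to report the percentage directly; this is a hypothetical variant of the query above, which like the original assumes an 8 KB block size:

```sql
-- Wasted space as a percentage of allocated space, per table
SELECT table_owner, table_name,
       ROUND(100 * SUM(wasted_space_kb) / NULLIF(SUM(size_kb), 0), 2) AS pct_wasted
  FROM (
        SELECT table_owner, table_name,
               ROUND(blocks * 8, 2) AS size_kb,
               ROUND(blocks * 8, 2)
                 - ROUND(num_rows * avg_row_len / 1024, 2) AS wasted_space_kb
          FROM dba_tab_subpartitions
       )
 WHERE table_owner = 'TEST' AND table_name = 'T_OBJ_S2'
 GROUP BY table_owner, table_name;
```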

 

Thursday, September 24, 2020

Oracle Extent Allocation for non-partitioned table - Part-1

As we all know, Oracle allocates storage space logically using blocks, extents (contiguous blocks), and segments for tables, indexes, partitions, and subpartitions, within tablespaces consisting of one or more physical datafiles.

Oracle 9i introduced the LOCALLY MANAGED TABLESPACE (LMT); prior to that, Oracle used the DICTIONARY MANAGED TABLESPACE. Along with LMT, AUTOMATIC SEGMENT SPACE MANAGEMENT (ASSM) was introduced, which removes the need to manage freelists and freelist groups by using bitmaps to describe the space usage of each block within a segment.

Let us first understand how Oracle performs extent allocation for normal/non-partitioned tables.


Below is the sample script we generally use to create locally managed ASSM tablespace.

CREATE TABLESPACE DATA
DATAFILE  '+DATA001/PGTEST/DATAFILE/ipydata01.dbf' SIZE 1400M 
AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED LOGGING ONLINE
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT AUTO;
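
Once created, the tablespace settings can be verified from the data dictionary. This is a quick check against DBA_TABLESPACES; column formatting is omitted for brevity:

```sql
-- Verify extent and segment space management for the new tablespace
SELECT tablespace_name,
       extent_management,         -- LOCAL for a locally managed tablespace
       allocation_type,           -- SYSTEM when AUTOALLOCATE is used
       segment_space_management   -- AUTO for ASSM
  FROM dba_tablespaces
 WHERE tablespace_name = 'DATA';
```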


Now, we will create a non-partitioned table without a storage clause and insert data from the dba_objects dictionary view to populate it.




SQL> set line 500 echo on                                     
SQL>
SQL>   drop table t_obj;

Table dropped.

SQL>
SQL>
SQL> CREATE TABLE "T_OBJ"
  2     ("OWNER" VARCHAR2(128 BYTE),				
  3  "OBJECT_NAME" VARCHAR2(128 BYTE),
  4  "SUBOBJECT_NAME" VARCHAR2(128 BYTE),
  5  "OBJECT_ID" NUMBER,
  6  "DATA_OBJECT_ID" NUMBER,
  7  "OBJECT_TYPE" VARCHAR2(23 BYTE),
  8  "CREATED" DATE,
  9  "LAST_DDL_TIME" DATE,
 10  "TIMESTAMP" VARCHAR2(19 BYTE),
 11  "STATUS" VARCHAR2(7 BYTE),
 12  "TEMPORARY" VARCHAR2(1 BYTE),
 13  "GENERATED" VARCHAR2(1 BYTE),
 14  "SECONDARY" VARCHAR2(1 BYTE),
 15  "NAMESPACE" NUMBER,
 16  "EDITION_NAME" VARCHAR2(128 BYTE),
 17  "SHARING" VARCHAR2(18 BYTE),
 18  "EDITIONABLE" VARCHAR2(1 BYTE),
 19  "ORACLE_MAINTAINED" VARCHAR2(1 BYTE),
 20  "APPLICATION" VARCHAR2(1 BYTE),
 21  "DEFAULT_COLLATION" VARCHAR2(100 BYTE),
 22  "DUPLICATED" VARCHAR2(1 BYTE),
 23  "SHARDED" VARCHAR2(1 BYTE),
 24  "CREATED_APPID" NUMBER,
 25  "CREATED_VSNID" NUMBER,
 26  "MODIFIED_APPID" NUMBER,
 27  "MODIFIED_VSNID" NUMBER
 28     )   SEGMENT CREATION IMMEDIATE
 29    PCTFREE 10 PCTUSED 40 INITRANS 10 MAXTRANS 255
 30   NOCOMPRESS LOGGING
 31    TABLESPACE "DATA" ;

Table created.

SQL>

SQL>
SQL> insert into t_obj (select * from dba_objects);

51684 rows created.

SQL> commit;

Commit complete.

SQL> 

SQL>
  

Let us check the extent allocation details for the table segment created above.


  
SQL> set line 200 pagesize 50

SQL> col owner for a10
SQL> col segment_name for a20
SQL> col partition_name for  a25
SQL> col segment_type for a25
SQL> col tablespace_name for a15
SQL> col extent_id for 99
SQL> col file_id for 9999
SQL> col block_id for 9999999999
SQL> col bytes for 9999999999
SQL> col blocks for 99999
SQL>
SQL> select owner,segment_name,partition_name,segment_type,tablespace_name,extent_id,file_id,block_id,bytes,blocks
  2   from  dba_extents where owner='ADMIN' and segment_name='T_OBJ';

OWNER      SEGMENT_NAME  PARTITION_NAME    SEGMENT_TYPE    TABLESPACE_NAME EXTENT_ID FILE_ID    BLOCK_ID       BYTES BLOCKS
---------- ------------- ----------------- --------------- --------------- --------- ------- ----------- ----------- ------
ADMIN      T_OBJ                           TABLE           DATA                    0    4619      828416       65536      8
ADMIN      T_OBJ                           TABLE           DATA                    1    4619      828424       65536      8
ADMIN      T_OBJ                           TABLE           DATA                    2    4619      828432       65536      8
ADMIN      T_OBJ                           TABLE           DATA                    3    4619      828440       65536      8
ADMIN      T_OBJ                           TABLE           DATA                    4    4619      828448       65536      8
ADMIN      T_OBJ                           TABLE           DATA                    5    4619      828456       65536      8
ADMIN      T_OBJ                           TABLE           DATA                    6    4619      828464       65536      8
ADMIN      T_OBJ                           TABLE           DATA                    7    4619      828472       65536      8
ADMIN      T_OBJ                           TABLE           DATA                    8    4619      828480       65536      8
ADMIN      T_OBJ                           TABLE           DATA                    9    4619      828488       65536      8
ADMIN      T_OBJ                           TABLE           DATA                   10    4619      828496       65536      8
ADMIN      T_OBJ                           TABLE           DATA                   11    4619      828504       65536      8
ADMIN      T_OBJ                           TABLE           DATA                   12    4619      828512       65536      8
ADMIN      T_OBJ                           TABLE           DATA                   13    4619      828520       65536      8
ADMIN      T_OBJ                           TABLE           DATA                   14    4619      828528       65536      8
ADMIN      T_OBJ                           TABLE           DATA                   15    4619      828536       65536      8
ADMIN      T_OBJ                           TABLE           DATA                   16    4619      828544     1048576    128
ADMIN      T_OBJ                           TABLE           DATA                   17    4619      828672     1048576    128
ADMIN      T_OBJ                           TABLE           DATA                   18    4619      828800     1048576    128
ADMIN      T_OBJ                           TABLE           DATA                   19    4619      828928     1048576    128
ADMIN      T_OBJ                           TABLE           DATA                   20    4619      829056     1048576    128
ADMIN      T_OBJ                           TABLE           DATA                   21    4619      829184     1048576    128
ADMIN      T_OBJ                           TABLE           DATA                   22    4619      829312     1048576    128
ADMIN      T_OBJ                           TABLE           DATA                   23    4619      829440     1048576    128

24 rows selected.

SQL>

In the above output, if you observe the EXTENT_ID, BYTES, and BLOCKS columns carefully, you will see that the first 16 extents allocated for this segment are 64 KB each (65536 bytes = 8 blocks x 8 KB), and from the 17th extent onwards the extent size increases to 1 MB (128 blocks x 8 KB). Note that this database was created with the default block size of 8 KB.


Remember that we created tablespace DATA at the beginning with EXTENT MANAGEMENT LOCAL and AUTOALLOCATE; the extent allocation behavior below is a result of AUTOALLOCATE. Please note that this behavior is not documented by Oracle and can change at any time; however, from 9i through 19c the following pattern has been evident:

 
  • First 16 extents: 64 KB each (1 MB in total).
  • Next 63 extents: 1 MB each (until total space reaches 64 MB).
  • Next 120 extents: 8 MB each (until total space reaches 1 GB).
  • Then an extent size of 64 MB is chosen,

and so on.
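
You can verify these tiers yourself by grouping the segment's extents by size. This is a sketch against the ADMIN.T_OBJ segment created above, assuming access to the DBA views:

```sql
-- Count extents at each size tier for the segment
SELECT bytes/1024           AS extent_kb,
       COUNT(*)             AS num_extents,
       SUM(bytes)/1024/1024 AS total_mb
  FROM dba_extents
 WHERE owner        = 'ADMIN'
   AND segment_name = 'T_OBJ'
 GROUP BY bytes
 ORDER BY bytes;
```

For the 24-extent segment above, this should show two tiers: sixteen 64 KB extents and eight 1 MB extents.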

Extent allocation with AUTOALLOCATE in a locally managed tablespace works well for most non-partitioned tables; however, it can become more complicated with partitioned tables, especially those with multiple subpartitions. We discuss this in PART-2 - Oracle Extent Allocation Mystery and wasted/fragmented space in partitioned table.



Tuesday, August 4, 2020

Oracle Data Guard Protection Modes

Oracle Data Guard protection modes allow us to choose between database availability, performance, and protection; based on the protection mode we choose, we need to configure and perform certain steps to achieve it.

The Oracle 12cR1 documentation below explains the details of Oracle Data Guard protection modes:




Tuesday, February 4, 2020

Steps to apply Rolling GI and RDBMS patch to Oracle Grid & RDBMS home and database- Oracle Database Patch 30501932 - GI Jan 2020 Release Update 12.2.0.1.200114

Scope of this Article:



The steps mentioned here to apply the patch to the GI and RDBMS homes and the database using the rolling method, with minimal to no downtime, can be used for any quarterly patch; this particular article, however, was prepared while applying the Jan'20 PSU patch (Oracle Database Patch 30501932 - GI Jan 2020 Release Update 12.2.0.1.200114) on IBM AIX 64-bit Power systems. The steps below will give you a fair idea of how to apply a PSU patch; however, I would highly encourage you to go through the patch readme thoroughly before proceeding with the patching. It is always better to apply a patch on a lower environment before applying it to production.

First of all, identify the patches required for your environment, download them, and keep the zip files in a specific location on the server. For example, I have downloaded and kept the patch zip files at the below location on all nodes of the RAC cluster.

/backup/PATCH/Jan2020

Few Important points:

1. This document can be used to apply a PSU patch on Unix/Linux, and some portions can be used for Windows as well.
2. This method applies to 12cR2 patching, which is quite different from patching in 11g and prior releases.
3. Binary patching activity will be performed as the root user.
4. Only the GI patch needs to be downloaded for RAC (it includes both the GI and RDBMS components); the same patch is applied to both the Grid Infrastructure home and the RDBMS home.
5. datapatch, the post-patch SQL load/execution, needs to be performed only once, on any single node, after binary patching has completed on all nodes.
6. It is recommended to follow the patch readme; there may be small differences depending on the patch version and OS version.
7. For a RAC environment, the rolling method is best as there will NOT be complete application downtime; however, in-flight transactions may be impacted during the activity.


###1.1 Patch Information

The Grid Infrastructure patches are cumulative and include the Database Update Content and CPU program security content.




 Configuration                                                       GI Version   Database Versions   Patch   OPatch Command   Comments
------------------------------------------------------------------- ------------ ------------------- ------- ---------------- ---------------------------------------------------
 GI Home in conjunction with RAC, RACOne, or Single Instance home    12.2.0.1     12.2.0.1            GI RU   opatchauto       GI Home and all the Database Homes will be patched

 
 Table 1-2 Patch Numbers Getting Installed as Part of this Bundle Patch

 Patch Number            Description                                            Applicable Homes
==================      ===================                                    ============================
30593149                Database Jan 2020 Release Update 12.2.0.1.200114       Only DB Home for non-Oracle RAC setup.
                                                                               Both DB Homes and Grid Home for Oracle RAC setup.
30585969                OCW Jan 2020 Release Update 12.2.0.1.200114            Both DB Homes and Grid Home
30586063                ACFS Jan 2020 Release Update 12.2.0.1.200114           Only Grid Home
26839277                DBWLM Release Update 12.2.0.1.170913                   Only Grid Home
30591794                Tomcat Release Update 12.2.0.1.0                       Only Grid Home


  


###1.2 Patch Installation and Deinstallation  

==>1.2.1 Patch Installation Prerequisites

Note: Make sure that crontab jobs and any other scheduled jobs are commented out during patching.

-->1.2.1.1 OPatch Utility Information 
You must use OPatch utility version 12.2.0.1.17 or later to apply this patch on all platforms. Oracle recommends that you use the latest released OPatch version for 12.2 releases,
which is available for download from My Oracle Support patch 6880880 by selecting the ARU link for the 12.2.0.1.0 release. It is recommended that you download the OPatch utility and
the patch to a shared location so they can be accessed from any node in the cluster for patch application on each node.

When patching the GI Home, a shared location on ACFS only needs to be unmounted on the node where the GI Home is being patched.

The new opatch utility should be updated in all the Oracle RAC database homes and the GI home that are being patched. 


oracle@PRODHOSTDB01:[/home/oracle]$cd $ORACLE_HOME/OPatch
oracle@PRODHOSTDB01:[/u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch]$opatch version
OPatch Version: 12.2.0.1.6 <--Which is less than 12.2.0.1.17 so we need to download latest 6880880 support patch.

OPatch succeeded.
oracle@PRODHOSTDB01:[/u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch]$

Download the OPatch utility to a temporary directory.

For each Oracle RAC database home and the GI home that are being patched, run the following commands as the home owner to extract the OPatch utility.

Unzip the patch as the grid home owner, except for installations that do not have any grid homes.
PATCH Location: /backup/PATCH/Jan2020
$ unzip <OPATCH-ZIP> -d <ORACLE_HOME>
$ <ORACLE_HOME>/OPatch/opatch version
The version output of the previous command should be 12.2.0.1.17 or later.

~~~>Make sure that you change the permissions on the directory from which you are applying the patches; otherwise you may face issues during patching due to permission problems.
oracle@PRODHOSTDB01:[/backup/PATCH/Jan2020]$chmod -R 775 /backup/PATCH/Jan2020

Unzip patch 6880880:

###Backing up the binary homes: make sure to back up the binaries on all nodes

For GI Home: Run it as GRID user

-->To run backup of home in background
cd /backup/PATCH/Jan2020
nohup tar cvf - /u01/app/12.2.0/grid | gzip > /backup/PATCH/Jan2020/GRID_HOME_bkp1_30012020.tar.gz &
-->to run backup of home in foreground
tar cvf - /u01/app/12.2.0/grid | gzip > /backup/PATCH/Jan2020/Grid_home_bkp1_30012020.tar.gz 

OH home: RUN it as Oracle user
-->To run backup of home in background
cd /backup/PATCH/Jan2020
nohup tar cvf - /u01/app/oracle/product/12.2.0.1/dbhome_1 | gzip > ORACLE_HOME_bkp1_30012020.tar.gz &

OR

-->to run backup of home in foreground
cd /backup/PATCH/Jan2020
tar cvf - /u01/app/oracle/product/12.2.0.1/dbhome_1 | gzip >  /backup/PATCH/Jan2020/ORACLE_HOME_bkp1_08102019.tar.gz


-->1.2.1.2 Validation of Oracle Inventory

Before beginning patch application, check the consistency of the inventory information for the GI home and each database home to be patched. Run the following command as the respective Oracle home owner to check consistency.

Note: Perform this on both nodes:

For Grid Home:
$ /u01/app/12.2.0/grid/OPatch/opatch lsinventory -detail -oh <ORACLE_HOME>

/u01/app/12.2.0/grid/OPatch/opatch lsinventory -detail -oh /u01/app/12.2.0/grid |tee -a /backup/PATCH/Jan2020/grid_OPatch_lsinventory_B4.lst

/u01/app/12.2.0/grid/OPatch/opatch lsinventory -detail -oh /u01/app/12.2.0/grid |tee -a /backup/PATCH/Jan2020/grid_OPatch_lsinventory_detail_B4.lst

For RDBMS Home as oracle user:
If this command succeeds, it lists the Oracle components that are installed in the home. Save the output so you have the status prior to the patch apply.

/u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch/opatch lsinventory -oh /u01/app/oracle/product/12.2.0.1/dbhome_1 |tee -a /backup/PATCH/Jan2020/rdbms_OPatch_lsinventory_B4.lst
/u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch/opatch lsinventory -detail -oh /u01/app/oracle/product/12.2.0.1/dbhome_1 |tee -a /backup/PATCH/Jan2020/rdbms_OPatch_lsinventory_detail_B4.lst

~~>>Execute below SQL to check the status of the patches from DB:

sqlplus / as sysdba
spool /backup/PATCH/Jan2020/datapatch_status_B4.lst

set line 500
 col patch_id for 9999999999
 col patch_uid for 9999999999
 col version for a10
 col action for a10
 col install_id  for 99
 col action_time for a30
 col description for a70
 col bundle_series for a20

 select patch_id, patch_uid, version, action, install_id, action_time, description, bundle_series  from dba_registry_sqlpatch;

spool off


~~>1.2.1.3 Download and Unzip the Patch : I assume this step has been done


~~>1.2.1.4 Run OPatch Conflict Check

•For the Grid Infrastructure home, as the home user (adjust the patch location as appropriate):
Login as grid

 $GRID_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /backup/PATCH/Jan2020/30501932/30593149 |tee -a  /backup/PATCH/Jan2020/grid_ConflictCheck_30501932_30593149.txt
 $GRID_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /backup/PATCH/Jan2020/30501932/30585969 |tee -a  /backup/PATCH/Jan2020/grid_ConflictCheck_30501932_30585969.txt
 $GRID_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /backup/PATCH/Jan2020/30501932/30586063 |tee -a  /backup/PATCH/Jan2020/grid_ConflictCheck_30501932_30586063.txt
 $GRID_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /backup/PATCH/Jan2020/30501932/26839277 |tee -a  /backup/PATCH/Jan2020/grid_ConflictCheck_30501932_26839277.txt
 $GRID_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /backup/PATCH/Jan2020/30501932/30591794 |tee -a  /backup/PATCH/Jan2020/grid_ConflictCheck_30501932_30591794.txt


•For the Database home, as the home user (adjust the patch location as appropriate):
Login as oracle

 $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /backup/PATCH/Jan2020/30501932/30593149 |tee -a /backup/PATCH/Jan2020/oracle_ConflictCheck_30501932_30593149.txt
 $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /backup/PATCH/Jan2020/30501932/30585969 |tee -a /backup/PATCH/Jan2020/oracle_ConflictCheck_30501932_30585969.txt

The report will indicate the interim patches that conflict with patch 30501932 and the interim patches that patch 30501932 supersedes.

Note:

When OPatch starts, it validates the patch and ensures that there are no conflicts with the software already installed in the ORACLE_HOME.
OPatch categorizes conflicts into the following types:
Conflicts with a patch already applied to the ORACLE_HOME.
In this case, stop the patch installation and contact Oracle Support Services.
Conflicts with a subset patch already applied to the ORACLE_HOME.
In this case, continue with the patch installation, because the new patch contains all the fixes from the existing patch in the ORACLE_HOME; the subset patch is automatically rolled back before the new patch is installed.
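Because the conflict checks above were teed to report files, a quick scan of those files can confirm that every check passed before moving on. A sketch only; the `failed` string match is an assumption based on OPatch's `Prereq "..." passed./failed.` output, so eyeball any file this flags:

```shell
# Sketch: scan the saved conflict-check reports for a failed prereq.
# Assumption: a failing run contains the word "failed" in its output.
for f in /backup/PATCH/Jan2020/*ConflictCheck*.txt; do
  if grep -qi 'failed' "$f"; then
    echo "REVIEW: $f reports a failed prereq check"
  fi
done
```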


~~>1.2.1.5 Run OPatch SystemSpace Check

Check if enough free space is available on the ORACLE_HOME filesystem for the patches to be applied as given below:

⇒For Grid Infrastructure Home, as home user (grid):
===============================
1. Create file /backup/PATCH/Jan2020/patch_list_gihome.txt with the following content:

su - grid

cd /backup/PATCH/Jan2020

echo "/backup/PATCH/Jan2020/30501932/30593149" >> /backup/PATCH/Jan2020/patch_list_gihome.txt
echo "/backup/PATCH/Jan2020/30501932/30585969" >> /backup/PATCH/Jan2020/patch_list_gihome.txt
echo "/backup/PATCH/Jan2020/30501932/30586063" >> /backup/PATCH/Jan2020/patch_list_gihome.txt
echo "/backup/PATCH/Jan2020/30501932/26839277" >> /backup/PATCH/Jan2020/patch_list_gihome.txt
echo "/backup/PATCH/Jan2020/30501932/30591794" >> /backup/PATCH/Jan2020/patch_list_gihome.txt
cat  /backup/PATCH/Jan2020/patch_list_gihome.txt

2.Run the opatch command to check if enough free space is available in the Grid Infrastructure Home:

 $GRID_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /backup/PATCH/Jan2020/patch_list_gihome.txt

⇒For Database Home, as home user (oracle):
===============================
su - oracle

1. Create file /backup/PATCH/Jan2020/patch_list_dbhome.txt with the following content:

echo "/backup/PATCH/Jan2020/30501932/30593149" >> /backup/PATCH/Jan2020/patch_list_dbhome.txt
echo "/backup/PATCH/Jan2020/30501932/30585969" >> /backup/PATCH/Jan2020/patch_list_dbhome.txt
cat /backup/PATCH/Jan2020/patch_list_dbhome.txt


2. Run opatch command to check if enough free space is available in the Database Home:
$ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /backup/PATCH/Jan2020/patch_list_dbhome.txt

The command reports a pass or fail message depending on the available space:
If OPatch reports 'Prereq "checkSystemSpace" failed.', clean up space on the filesystem; the required amount is not available.
If OPatch reports 'Prereq "checkSystemSpace" passed.', no action is needed; proceed with the patch installation.

~~~~~~~~~~~~~~~~~~~~~~output of checkSystemSpace execution~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
===>Node-1: checkSystemSpace for GRID Home
==================================


"/backup/PATCH/Jan2020/patch_list_gihome.txt" [New file] 7 lines, 202 characters
grid@PRODHOSTDB01:[/backup/PATCH/Jan2020]$cat "/backup/PATCH/Jan2020/patch_list_gihome.txt"

 /backup/PATCH/Jan2020/30501932/30593149
/backup/PATCH/Jan2020/30501932/30585969
/backup/PATCH/Jan2020/30501932/30586063
/backup/PATCH/Jan2020/30501932/26839277
/backup/PATCH/Jan2020/30501932/30591794

 grid@PRODHOSTDB01:[/backup/PATCH/Jan2020]$GRID_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /backup/PATCH/Jan2020/patch_list_gihome.txt
ksh: GRID_HOME/OPatch/opatch:  not found.
grid@PRODHOSTDB01:[/backup/PATCH/Jan2020]$$GRID_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /backup/PATCH/Jan2020/patch_list_gihome.txt
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation.  All rights reserved.

 PREREQ session

 Oracle Home       : /u01/app/12.2.0/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/12.2.0/grid/oraInst.loc
OPatch version    : 12.2.0.1.19
OUI version       : 12.2.0.1.4
Log file location : /u01/app/12.2.0/grid/cfgtoollogs/opatch/opatch2020-01-30_11-23-25AM_1.log

 Invoking prereq "checksystemspace"

 Prereq "checkSystemSpace" passed.

 OPatch succeeded.
  

~~>>CheckSystemSpace for Oracle_home:RDBMS on node-1

oracle@PRODHOSTDB01:[/home/oracle]$
oracle@PRODHOSTDB01:[/home/oracle]$echo "/backup/PATCH/Jan2020/30501932/30593149" >> /backup/PATCH/Jan2020/patch_list_dbhome.txt
oracle@PRODHOSTDB01:[/home/oracle]$echo "/backup/PATCH/Jan2020/30501932/30585969" >> /backup/PATCH/Jan2020/patch_list_dbhome.txt
oracle@PRODHOSTDB01:[/home/oracle]$cat /backup/PATCH/Jan2020/patch_list_dbhome.txt
/backup/PATCH/Jan2020/30501932/30593149
/backup/PATCH/Jan2020/30501932/30585969
oracle@PRODHOSTDB01:[/home/oracle]$$ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /backup/PATCH/Jan2020/patch_list_dbhome.txt
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/oracle/product/12.2.0.1/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.2.0.1/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.19
OUI version       : 12.2.0.1.4
Log file location : /u01/app/oracle/product/12.2.0.1/dbhome_1/cfgtoollogs/opatch/opatch2020-01-30_11-25-59AM_1.log

Invoking prereq "checksystemspace"

Prereq "checkSystemSpace" passed.

OPatch succeeded.
oracle@PRODHOSTDB01:[/home/oracle]$
  

===>Node-2: checkSystemSpace for GRID Home
==================================

root@PRODHOSTDB02:[/backup/PATCH/Jan2020]#su - grid
grid@PRODHOSTDB02:[/home/grid]$
grid@PRODHOSTDB02:[/home/grid]$cd /backup/PATCH/Jan2020

 grid@PRODHOSTDB02:[/backup/PATCH/Jan2020]$
grid@PRODHOSTDB02:[/backup/PATCH/Jan2020]$echo "/backup/PATCH/Jan2020/30501932/30593149" >> /backup/PATCH/Jan2020/patch_list_gihome.txt
grid@PRODHOSTDB02:[/backup/PATCH/Jan2020]$echo "/backup/PATCH/Jan2020/30501932/30585969" >> /backup/PATCH/Jan2020/patch_list_gihome.txt
grid@PRODHOSTDB02:[/backup/PATCH/Jan2020]$echo "/backup/PATCH/Jan2020/30501932/30586063" >> /backup/PATCH/Jan2020/patch_list_gihome.txt
grid@PRODHOSTDB02:[/backup/PATCH/Jan2020]$echo "/backup/PATCH/Jan2020/30501932/26839277" >> /backup/PATCH/Jan2020/patch_list_gihome.txt
grid@PRODHOSTDB02:[/backup/PATCH/Jan2020]$echo "/backup/PATCH/Jan2020/30501932/30591794" >> /backup/PATCH/Jan2020/patch_list_gihome.txt
grid@PRODHOSTDB02:[/backup/PATCH/Jan2020]$cat  /backup/PATCH/Jan2020/patch_list_gihome.txt
/backup/PATCH/Jan2020/30501932/30593149
/backup/PATCH/Jan2020/30501932/30585969
/backup/PATCH/Jan2020/30501932/30586063
/backup/PATCH/Jan2020/30501932/26839277
/backup/PATCH/Jan2020/30501932/30591794
grid@PRODHOSTDB02:[/backup/PATCH/Jan2020]$
grid@PRODHOSTDB02:[/backup/PATCH/Jan2020]$
grid@PRODHOSTDB02:[/backup/PATCH/Jan2020]$$GRID_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /backup/PATCH/Jan2020/patch_list_gihome.txt
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation.  All rights reserved.

 PREREQ session

 Oracle Home       : /u01/app/12.2.0/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/12.2.0/grid/oraInst.loc
OPatch version    : 12.2.0.1.19
OUI version       : 12.2.0.1.4
Log file location : /u01/app/12.2.0/grid/cfgtoollogs/opatch/opatch2020-01-30_11-44-12AM_1.log

 Invoking prereq "checksystemspace"

 Prereq "checkSystemSpace" passed.

 OPatch succeeded.
grid@PRODHOSTDB02:[/backup/PATCH/Jan2020]$
grid@PRODHOSTDB02:[/backup/PATCH/Jan2020]$
grid@PRODHOSTDB02:[/backup/PATCH/Jan2020]$exit
root@PRODHOSTDB02:[/backup/PATCH/Jan2020]#su - oracle
oracle@PRODHOSTDB02:[/home/oracle]$
oracle@PRODHOSTDB02:[/home/oracle]$echo "/backup/PATCH/Jan2020/30501932/30593149" >> /backup/PATCH/Jan2020/patch_list_dbhome.txt
oracle@PRODHOSTDB02:[/home/oracle]$echo "/backup/PATCH/Jan2020/30501932/30585969" >> /backup/PATCH/Jan2020/patch_list_dbhome.txt
oracle@PRODHOSTDB02:[/home/oracle]$cat /backup/PATCH/Jan2020/patch_list_dbhome.txt
/backup/PATCH/Jan2020/30501932/30593149
/backup/PATCH/Jan2020/30501932/30585969
oracle@PRODHOSTDB02:[/home/oracle]$
oracle@PRODHOSTDB02:[/home/oracle]$
oracle@PRODHOSTDB02:[/home/oracle]$$ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /backup/PATCH/Jan2020/patch_list_dbhome.txt
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation.  All rights reserved.

 PREREQ session

 Oracle Home       : /u01/app/oracle/product/12.2.0.1/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.2.0.1/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.19
OUI version       : 12.2.0.1.4
Log file location : /u01/app/oracle/product/12.2.0.1/dbhome_1/cfgtoollogs/opatch/opatch2020-01-30_11-45-00AM_1.log

 Invoking prereq "checksystemspace"

 Prereq "checkSystemSpace" passed.

 OPatch succeeded.
oracle@PRODHOSTDB02:[/home/oracle]$

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~End of Output CheckSystemSpace~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


==>1.2.2 One-off Patch Conflict Detection and Resolution

Run as ROOT user:

export GRID_HOME=/u01/app/12.2.0/grid
$GRID_HOME/OPatch/opatchauto apply /backup/PATCH/Jan2020/30501932 -analyze -oh $GRID_HOME

export ORACLE_HOME=/u01/app/oracle/product/12.2.0.1/dbhome_1
$ORACLE_HOME/OPatch/opatchauto apply /backup/PATCH/Jan2020/30501932 -analyze -oh $ORACLE_HOME


===You can use the below command to analyze both homes together.
export GRID_HOME=/u01/app/12.2.0/grid
$GRID_HOME/OPatch/opatchauto apply /backup/PATCH/Jan2020/30501932 -analyze
===
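To keep a record of what the combined analyze decided, the output can be teed to a file and the summary lines pulled out afterwards. A sketch; the tee target and the grepped strings are assumptions based on the opatchauto summary format in this section:

```shell
# Sketch: capture the combined analyze output and extract the summary
# lines ("SKIPPED", "SUCCESSFULLY analyzed", final status). The output
# file name is an assumption.
export GRID_HOME=/u01/app/12.2.0/grid
$GRID_HOME/OPatch/opatchauto apply /backup/PATCH/Jan2020/30501932 -analyze \
  2>&1 | tee /backup/PATCH/Jan2020/opatchauto_analyze.out
grep -E "SKIPPED|SUCCESSFULLY analyzed|OPatchAuto (successful|failed)" \
  /backup/PATCH/Jan2020/opatchauto_analyze.out
```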

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~output of execution of analyze~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
===>Node -1 for GRID home:
=============================

root@PRODHOSTDB01:[/]#export GRID_HOME=/u01/app/12.2.0/grid
root@PRODHOSTDB01:[/]#$GRID_HOME/OPatch/opatchauto apply /backup/PATCH/Jan2020/30501932 -analyze -oh $GRID_HOME
which: 0652-141 There is no sudo in /bin /usr/bin /usr/local/bin /usr/bin /etc /usr/sbin /usr/ucb /usr/bin/X11 /sbin /usr/java8_64/jre/bin /usr/java8_64/bin.

OPatchauto session is initiated at Thu Jan 30 11:34:02 2020

System initialization log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-01-30_11-34-11AM.log.

Session log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/opatchauto2020-01-30_11-34-53AM.log
The id for this session is ZE7B

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.2.0/grid
Patch applicability verified successfully on home /u01/app/12.2.0/grid

OPatchAuto successful.

--------------------------------Summary--------------------------------

Analysis for applying patches has completed successfully:

Host:PRODHOSTdb01
CRS Home:/u01/app/12.2.0/grid
Version:12.2.0.1.0


==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /backup/PATCH/Jan2020/30501932/30585969
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-30_11-35-18AM_1.log

Patch: /backup/PATCH/Jan2020/30501932/30586063
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-30_11-35-18AM_1.log

Patch: /backup/PATCH/Jan2020/30501932/26839277
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-30_11-35-18AM_1.log

Patch: /backup/PATCH/Jan2020/30501932/30591794
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-30_11-35-18AM_1.log

Patch: /backup/PATCH/Jan2020/30501932/30593149
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-30_11-35-18AM_1.log



OPatchauto session completed at Thu Jan 30 11:35:30 2020
Time taken to complete the session 1 minute, 28 seconds
root@PRODHOSTDB01:[/]#

  


===>Node -1 for RDBMS home:
=============================


root@PRODHOSTDB01:[/]#export ORACLE_HOME=/u01/app/oracle/product/12.2.0.1/dbhome_1
root@PRODHOSTDB01:[/]#$ORACLE_HOME/OPatch/opatchauto apply /backup/PATCH/Jan2020/30501932 -analyze -oh $ORACLE_HOME
which: 0652-141 There is no sudo in /bin /usr/bin /usr/local/bin /usr/bin /etc /usr/sbin /usr/ucb /usr/bin/X11 /sbin /usr/java8_64/jre/bin /usr/java8_64/bin.

OPatchauto session is initiated at Thu Jan 30 11:30:21 2020

System initialization log file is /u01/app/oracle/product/12.2.0.1/dbhome_1/cfgtoollogs/opatchautodb/systemconfig2020-01-30_11-30-30AM.log.

Session log file is /u01/app/oracle/product/12.2.0.1/dbhome_1/cfgtoollogs/opatchauto/opatchauto2020-01-30_11-31-23AM.log
The id for this session is YCBP

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.2.0.1/dbhome_1
Patch applicability verified successfully on home /u01/app/oracle/product/12.2.0.1/dbhome_1


Verifying SQL patch applicability on home /u01/app/oracle/product/12.2.0.1/dbhome_1
SQL patch applicability verified successfully on home /u01/app/oracle/product/12.2.0.1/dbhome_1

OPatchAuto successful.

--------------------------------Summary--------------------------------

Analysis for applying patches has completed successfully:

Host:PRODHOSTdb01
RAC Home:/u01/app/oracle/product/12.2.0.1/dbhome_1
Version:12.2.0.1.0


==Following patches were SKIPPED:

Patch: /backup/PATCH/Jan2020/30501932/30586063
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /backup/PATCH/Jan2020/30501932/26839277
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /backup/PATCH/Jan2020/30501932/30591794
Reason: This patch is not applicable to this specified target type - "rac_database"


==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /backup/PATCH/Jan2020/30501932/30585969
Log: /u01/app/oracle/product/12.2.0.1/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-30_11-31-45AM_1.log

Patch: /backup/PATCH/Jan2020/30501932/30593149
Log: /u01/app/oracle/product/12.2.0.1/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-30_11-31-45AM_1.log

OPatchauto session completed at Thu Jan 30 11:32:46 2020
Time taken to complete the session 2 minutes, 25 seconds
root@PRODHOSTDB01:[/]#

  


===>Node -2 for GRID home:
=============================


root@PRODHOSTDB02:[/backup/PATCH/Jan2020]#export GRID_HOME=/u01/app/12.2.0/grid
root@PRODHOSTDB02:[/backup/PATCH/Jan2020]#$GRID_HOME/OPatch/opatchauto apply /backup/PATCH/Jan2020/30501932 -analyze -oh $GRID_HOME
which: 0652-141 There is no sudo in /bin /usr/bin /usr/local/bin /usr/bin /etc /usr/sbin /usr/ucb /usr/bin/X11 /sbin /usr/java8_64/jre/bin /usr/java8_64/bin.

 OPatchauto session is initiated at Thu Jan 30 11:47:24 2020

 System initialization log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-01-30_11-47-33AM.log.

 Session log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/opatchauto2020-01-30_11-48-10AM.log
The id for this session is W7XV

 Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.2.0/grid
Patch applicability verified successfully on home /u01/app/12.2.0/grid

 OPatchAuto successful.

 --------------------------------Summary--------------------------------

 Analysis for applying patches has completed successfully:

 Host:PRODHOSTdb02
CRS Home:/u01/app/12.2.0/grid
Version:12.2.0.1.0

 
 ==Following patches were SUCCESSFULLY analyzed to be applied:

 Patch: /backup/PATCH/Jan2020/30501932/30585969
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-30_11-48-36AM_1.log

 Patch: /backup/PATCH/Jan2020/30501932/30586063
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-30_11-48-36AM_1.log

 Patch: /backup/PATCH/Jan2020/30501932/26839277
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-30_11-48-36AM_1.log

 Patch: /backup/PATCH/Jan2020/30501932/30591794
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-30_11-48-36AM_1.log

 Patch: /backup/PATCH/Jan2020/30501932/30593149
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-30_11-48-36AM_1.log

 OPatchauto session completed at Thu Jan 30 11:48:46 2020
Time taken to complete the session 1 minute, 23 seconds
root@PRODHOSTDB02:[/backup/PATCH/Jan2020]#
  

===>Node -2 for RDBMS home:
=============================

root@PRODHOSTDB02:[/backup/PATCH/Jan2020]#export ORACLE_HOME=/u01/app/oracle/product/12.2.0.1/dbhome_1
root@PRODHOSTDB02:[/backup/PATCH/Jan2020]#$ORACLE_HOME/OPatch/opatchauto apply /backup/PATCH/Jan2020/30501932 -analyze -oh $ORACLE_HOME
which: 0652-141 There is no sudo in /bin /usr/bin /usr/local/bin /usr/bin /etc /usr/sbin /usr/ucb /usr/bin/X11 /sbin /usr/java8_64/jre/bin /usr/java8_64/bin.

 OPatchauto session is initiated at Thu Jan 30 11:48:47 2020

 System initialization log file is /u01/app/oracle/product/12.2.0.1/dbhome_1/cfgtoollogs/opatchautodb/systemconfig2020-01-30_11-48-55AM.log.

 Session log file is /u01/app/oracle/product/12.2.0.1/dbhome_1/cfgtoollogs/opatchauto/opatchauto2020-01-30_11-49-45AM.log
The id for this session is KAXM

 Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.2.0.1/dbhome_1
Patch applicability verified successfully on home /u01/app/oracle/product/12.2.0.1/dbhome_1

 
 Verifying SQL patch applicability on home /u01/app/oracle/product/12.2.0.1/dbhome_1
SQL patch applicability verified successfully on home /u01/app/oracle/product/12.2.0.1/dbhome_1

 OPatchAuto successful.

 --------------------------------Summary--------------------------------

 Analysis for applying patches has completed successfully:

 Host:PRODHOSTdb02
RAC Home:/u01/app/oracle/product/12.2.0.1/dbhome_1
Version:12.2.0.1.0

 
 ==Following patches were SKIPPED:

 Patch: /backup/PATCH/Jan2020/30501932/30586063
Reason: This patch is not applicable to this specified target type - "rac_database"

 Patch: /backup/PATCH/Jan2020/30501932/26839277
Reason: This patch is not applicable to this specified target type - "rac_database"

 Patch: /backup/PATCH/Jan2020/30501932/30591794
Reason: This patch is not applicable to this specified target type - "rac_database"

 
 ==Following patches were SUCCESSFULLY analyzed to be applied:

 Patch: /backup/PATCH/Jan2020/30501932/30585969
Log: /u01/app/oracle/product/12.2.0.1/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-30_11-50-07AM_1.log

 Patch: /backup/PATCH/Jan2020/30501932/30593149
Log: /u01/app/oracle/product/12.2.0.1/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-30_11-50-07AM_1.log

OPatchauto session completed at Thu Jan 30 11:51:07 2020
Time taken to complete the session 2 minutes, 21 seconds
root@PRODHOSTDB02:[/backup/PATCH/Jan2020]#


==> Check that all the CRS services are up and listed:



 ps -ef|grep d.bin

 $crsctl status res -t
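Rather than eyeballing the full resource table, a quick scan for OFFLINE entries can be scripted. A sketch only; note that some resources (e.g. ora.proxy_advm in this cluster) are OFFLINE by design, so compare against your pre-patch baseline rather than expecting an empty result:

```shell
# Sketch: list any resource line showing OFFLINE; print a marker when
# none are found. Some resources are legitimately OFFLINE - compare
# with the pre-patch baseline, don't expect zero matches.
$GRID_HOME/bin/crsctl status res -t | grep OFFLINE || echo "no OFFLINE resources"
```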
Node-1:
====================
grid@PRODHOSTDB01:[/home/grid]$ps -ef|grep d.bin
    root 14156106        1   1   Jan 27      -  1:08 /u01/app/12.2.0/grid/bin/cssdagent
    grid 14549326        1   0   Jan 27      -  5:54 /u01/app/12.2.0/grid/bin/oraagent.bin
    root 14614814        1   0   Jan 27      -  2:35 /u01/app/12.2.0/grid/bin/octssd.bin reboot
    grid 14942694 16777666   0   Jan 27      -  0:25 /u01/app/12.2.0/grid/bin/evmlogger.bin -o /u01/app/12.2.0/grid/log/[HOSTNAME]/evmd/evmlogger.info -l /u01/app/12.2.0/grid/log/[HOSTNAME]/evmd/evmlogger.log
    root 15073760        1   0   Jan 27      -  3:43 /u01/app/12.2.0/grid/bin/ohasd.bin reboot _ORA_BLOCKING_STACK_LOCALE=AMERICAN_AMERICA.WE8ISO8859P1
    root 15466990        1   0   Jan 27      -  0:00 /bin/sh /u01/app/12.2.0/grid/bin/ocssd
    root 15860074        1   0   Jan 27      -  1:07 /u01/app/12.2.0/grid/bin/cssdmonitor
    grid 16122176        1   0   Jan 27      -  0:31 /u01/app/12.2.0/grid/bin/mdnsd.bin
    root 16318772        1   0   Jan 27      -  4:11 /u01/app/12.2.0/grid/bin/orarootagent.bin
    grid 16515512        1   0   Jan 27      -  2:32 /u01/app/12.2.0/grid/bin/gipcd.bin
    grid 16646534 15466990   1   Jan 27      -  5:59 /u01/app/12.2.0/grid/bin/ocssd.bin
    grid 16777666        1   0   Jan 27      -  2:36 /u01/app/12.2.0/grid/bin/evmd.bin
    grid 17236460        1   1   Jan 27      -  0:54 /u01/app/12.2.0/grid/bin/scriptagent.bin
    grid 18219414        1   0   Jan 27      -  0:01 /u01/app/12.2.0/grid/bin/tnslsnr LISTENER_SCAN2 -no_crs_notify -inherit
    grid 18481628        1   0   Jan 27      -  0:01 /u01/app/12.2.0/grid/bin/tnslsnr LISTENER_SCAN3 -no_crs_notify -inherit
    root 18547174        1   0   Jan 27      -  8:59 /u01/app/12.2.0/grid/bin/ologgerd -M
    root 18809210        1   0   Jan 27      -  7:54 /u01/app/12.2.0/grid/bin/osysmond.bin
    grid 19595702        1   0   Jan 27      -  0:37 /u01/app/12.2.0/grid/bin/gpnpd.bin
    root 19726724        1   0   Jan 27      - 75:05 /u01/app/12.2.0/grid/bin/orarootagent.bin
    grid 19792224        1   0   Jan 27      -  0:28 /u01/app/12.2.0/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
    grid 19988910        1   0   Jan 27      -  0:02 /u01/app/12.2.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
    grid 23331312        1   0   Jan 27      - 54:21 /u01/app/12.2.0/grid/bin/oraagent.bin
    grid 24773044        1   0   Jan 27      -  0:02 /u01/app/12.2.0/grid/bin/tnslsnr MGMTLSNR -no_crs_notify -inherit
    root 26935722        1   0   Jan 27      -  4:23 /u01/app/12.2.0/grid/bin/crsd.bin reboot
    grid 29688294 42008834   0 11:58:57  pts/2  0:00 grep d.bin
  oracle 30474534        1   0   Jan 28      -  0:47 /u01/app/12.2.0/grid/bin/oraagent.bin

 
 
 grid@PRODHOSTDB01:[/home/grid]$crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       PRODHOSTdb01              STABLE
               ONLINE  ONLINE       PRODHOSTdb02              STABLE
ora.DATA.dg
               ONLINE  ONLINE       PRODHOSTdb01              STABLE
               ONLINE  ONLINE       PRODHOSTdb02              STABLE
ora.FRA.dg
               ONLINE  ONLINE       PRODHOSTdb01              STABLE
               OFFLINE OFFLINE      PRODHOSTdb02              STABLE
ora.GIMR.dg
               ONLINE  ONLINE       PRODHOSTdb01              STABLE
               ONLINE  ONLINE       PRODHOSTdb02              STABLE
ora.INDX.dg
               ONLINE  ONLINE       PRODHOSTdb01              STABLE
               OFFLINE OFFLINE      PRODHOSTdb02              STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       PRODHOSTdb01              STABLE
               ONLINE  ONLINE       PRODHOSTdb02              STABLE
ora.OCR.dg
               ONLINE  ONLINE       PRODHOSTdb01              STABLE
               ONLINE  ONLINE       PRODHOSTdb02              STABLE
ora.REDO.dg
               ONLINE  ONLINE       PRODHOSTdb01              STABLE
               OFFLINE OFFLINE      PRODHOSTdb02              STABLE
ora.net1.network
               ONLINE  ONLINE       PRODHOSTdb01              STABLE
               ONLINE  ONLINE       PRODHOSTdb02              STABLE
ora.ons
               ONLINE  ONLINE       PRODHOSTdb01              STABLE
               ONLINE  ONLINE       PRODHOSTdb02              STABLE
ora.proxy_advm
               OFFLINE OFFLINE      PRODHOSTdb01              STABLE
               OFFLINE OFFLINE      PRODHOSTdb02              STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       PRODHOSTdb02              STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       PRODHOSTdb01              STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       PRODHOSTdb01              STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       PRODHOSTdb01              169.254.113.95 10.80
                                                             .0.2,STABLE
ora.asm
      1        ONLINE  ONLINE       PRODHOSTdb01              Started,STABLE
      2        ONLINE  ONLINE       PRODHOSTdb02              Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       PRODHOSTdb01              STABLE
ora.PRODHOSTdb01.vip
      1        ONLINE  ONLINE       PRODHOSTdb01              STABLE
ora.PRODHOSTdb02.vip
      1        ONLINE  ONLINE       PRODHOSTdb02              STABLE
ora.testdb.db
      1        ONLINE  ONLINE       PRODHOSTdb01              Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             .1/dbhome_1,STABLE
      2        ONLINE  ONLINE       PRODHOSTdb02              Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             .1/dbhome_1,STABLE
ora.testdb.testdb_svc.svc
      1        ONLINE  ONLINE       PRODHOSTdb01              STABLE
      2        ONLINE  ONLINE       PRODHOSTdb02              STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       PRODHOSTdb01              Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       PRODHOSTdb01              STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       PRODHOSTdb02              STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       PRODHOSTdb01              STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       PRODHOSTdb01              STABLE
--------------------------------------------------------------------------------
grid@PRODHOSTDB01:[/home/grid]$

  

Node-2:
=====================



 grid@PRODHOSTDB02:[/backup/PATCH/Jan2020]$ps -ef|grep d.bin
    root 13631752        1   2   Jan 27      - 48:45 /u01/app/12.2.0/grid/bin/orarootagent.bin
    root 13828582        1   0   Jan 27      -  0:00 /bin/sh /u01/app/12.2.0/grid/bin/ocssd
    root 13959532        1   0   Jan 27      -  5:09 /u01/app/12.2.0/grid/bin/crsd.bin reboot
    grid 14287182 16318832   0   Jan 27      -  0:25 /u01/app/12.2.0/grid/bin/evmlogger.bin -o /u01/app/12.2.0/grid/log/[HOSTNAME]/evmd/evmlogger.info -l /u01/app/12.2.0/grid/log/[HOSTNAME]/evmd/evmlogger.log
    grid 14352682 13828582   0   Jan 27      -  5:51 /u01/app/12.2.0/grid/bin/ocssd.bin
    grid 14811500        1   0   Jan 27      -  0:29 /u01/app/12.2.0/grid/bin/mdnsd.bin
    root 14942564        1   0   Jan 27      -  6:51 /u01/app/12.2.0/grid/bin/osysmond.bin
    root 15008054        1   0   Jan 27      -  1:07 /u01/app/12.2.0/grid/bin/cssdagent
    root 15139242        1   1   Jan 27      -  2:32 /u01/app/12.2.0/grid/bin/octssd.bin reboot
    root 15663392        1   0   Jan 27      -  1:05 /u01/app/12.2.0/grid/bin/cssdmonitor
    grid 16187658        1   0   Jan 27      -  8:20 /u01/app/12.2.0/grid/bin/oraagent.bin
    grid 16253186        1   0   Jan 27      -  2:30 /u01/app/12.2.0/grid/bin/gipcd.bin
    grid 16318832        1   0   Jan 27      -  2:33 /u01/app/12.2.0/grid/bin/evmd.bin
    root 16384338        1   0   Jan 27      -  3:31 /u01/app/12.2.0/grid/bin/ohasd.bin reboot _ORA_BLOCKING_STACK_LOCALE=AMERICAN_AMERICA.WE8ISO8859P1
    grid 16515334        1   0   Jan 27      -  0:02 /u01/app/12.2.0/grid/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
    root 16581012        1   0   Jan 27      -  2:48 /u01/app/12.2.0/grid/bin/orarootagent.bin
    grid 16712134        1   1   Jan 27      - 23:35 /u01/app/12.2.0/grid/bin/oraagent.bin
    grid 16843084        1   0   Jan 27      -  0:33 /u01/app/12.2.0/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
    grid 16908772        1   0   Jan 27      -  0:36 /u01/app/12.2.0/grid/bin/gpnpd.bin
    grid 18547146        1   0   Jan 27      -  0:01 /u01/app/12.2.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
  oracle 19530086        1   0   Jan 28      -  0:44 /u01/app/12.2.0/grid/bin/oraagent.bin
    grid 35389790 19595636   0 11:59:26  pts/0  0:00 grep d.bin
grid@PRODHOSTDB02:[/backup/PATCH/Jan2020]$

 grid@PRODHOSTDB02:[/home/grid]$crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       PRODHOSTdb01              STABLE
               ONLINE  ONLINE       PRODHOSTdb02              STABLE
ora.DATA.dg
               ONLINE  ONLINE       PRODHOSTdb01              STABLE
               ONLINE  ONLINE       PRODHOSTdb02              STABLE
ora.FRA.dg
               ONLINE  ONLINE       PRODHOSTdb01              STABLE
               ONLINE  ONLINE       PRODHOSTdb02              STABLE
ora.GIMR.dg
               ONLINE  ONLINE       PRODHOSTdb01              STABLE
               ONLINE  ONLINE       PRODHOSTdb02              STABLE
ora.INDX.dg
               ONLINE  ONLINE       PRODHOSTdb01              STABLE
               ONLINE  ONLINE       PRODHOSTdb02              STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       PRODHOSTdb01              STABLE
               ONLINE  ONLINE       PRODHOSTdb02              STABLE
ora.OCR.dg
               ONLINE  ONLINE       PRODHOSTdb01              STABLE
               ONLINE  ONLINE       PRODHOSTdb02              STABLE
ora.REDO.dg
               ONLINE  ONLINE       PRODHOSTdb01              STABLE
               ONLINE  ONLINE       PRODHOSTdb02              STABLE
ora.net1.network
               ONLINE  ONLINE       PRODHOSTdb01              STABLE
               ONLINE  ONLINE       PRODHOSTdb02              STABLE
ora.ons
               ONLINE  ONLINE       PRODHOSTdb01              STABLE
               ONLINE  ONLINE       PRODHOSTdb02              STABLE
ora.proxy_advm
               OFFLINE OFFLINE      PRODHOSTdb01              STABLE
               OFFLINE OFFLINE      PRODHOSTdb02              STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       PRODHOSTdb02              STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       PRODHOSTdb01              STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       PRODHOSTdb01              STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       PRODHOSTdb01              169.254.113.95 10.80
                                                             .0.2,STABLE
ora.asm
      1        ONLINE  ONLINE       PRODHOSTdb01              Started,STABLE
      2        ONLINE  ONLINE       PRODHOSTdb02              Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       PRODHOSTdb01              STABLE
ora.PRODHOSTdb01.vip
      1        ONLINE  ONLINE       PRODHOSTdb01              STABLE
ora.PRODHOSTdb02.vip
      1        ONLINE  ONLINE       PRODHOSTdb02              STABLE
ora.testdb.db
      1        ONLINE  ONLINE       PRODHOSTdb01              Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             .1/dbhome_1,STABLE
      2        ONLINE  ONLINE       PRODHOSTdb02              Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             .1/dbhome_1,STABLE
ora.testdb_svc.svc
      1        ONLINE  ONLINE       PRODHOSTdb01              STABLE
      2        ONLINE  ONLINE       PRODHOSTdb02              STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       PRODHOSTdb01              Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       PRODHOSTdb01              STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       PRODHOSTdb02              STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       PRODHOSTdb01              STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       PRODHOSTdb01              STABLE
--------------------------------------------------------------------------------
grid@PRODHOSTDB02:[/home/grid]$
  
~~~~~~~~~~~~~~~~~~~~~~~end of output crsctl cluster resources status~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


==>1.2.4 Patch Installation
###########################################################################################

==>Check the status of database connections and their distribution across the instances before starting the patch; capture any other baseline information, such as invalid objects, at this point as well.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
run your scripts if any to gather any details before proceeding with the patch
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
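As a minimal sketch of scripting this evidence gathering, the helper below runs each check and saves its output to a dated file. The `capture` helper, the `EVDIR` location, and the example script names are illustrative assumptions, not part of the original runbook:

```shell
#!/bin/sh
# Hypothetical helper: run a check and save its stdout+stderr to a dated evidence file.
EVDIR=${EVDIR:-./patch_evidence}
mkdir -p "$EVDIR"

capture() {
    # capture <label> <command...>
    name=$1; shift
    "$@" > "$EVDIR/${name}_$(date +%Y%m%d).txt" 2>&1
}

# Example checks to capture before patching (adjust to your environment):
# capture sessions sqlplus -s / as sysdba @session_count.sql   # connection distribution
# capture invalid  sqlplus -s / as sysdba @invalid_objects.sql # invalid objects baseline
# capture crs      crsctl status res -t                        # cluster resource state
```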

Now shut down the instance on node-1. You can use the srvctl stop instance command or shut it down manually as shown below; I always prefer a manual shutdown:


oracle@PRODHOSTDB01:[/home/oracle]$sqlplus '/ as sysdba'

 SQL*Plus: Release 12.2.0.1.0 Production on Thu Jan 30 12:11:54 2020

 Copyright (c) 1982, 2017, Oracle.  All rights reserved.

 
 Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

 SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>

  

==>Patching Oracle RAC Database Homes and GI Separately 


•Case 1: Oracle RAC, where the GI Home and the Database Homes are not shared and ACFS file system is not configured.

root@PRODHOSTDB01:[/]#ps -ef|grep sql
    root 29360580 43123118   0 12:14:52  pts/2  0:00 grep sql
root@PRODHOSTDB01:[/]#


==>For Grid Home:
================
---------------------------------------------------------------------------
su - grid

export GRID_HOME=/u01/app/12.2.0/grid    <--Not Mandatory
which opatch
opatch version

==> Make sure the patch directory has full permissions, otherwise patching may fail with a file-permission error.
root@IPYDRDB01:[/backup/Jan2020]#chmod -R 777 /backup/Jan2020/29699173

echo $PATH
export PATH=/u01/app/12.2.0/grid/OPatch:$PATH

opatch version

root@# /u01/app/12.2.0/grid/OPatch/opatchauto apply /backup/PATCH/Jan2020/30501932 -oh /u01/app/12.2.0/grid
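Since opatchauto will pick up whichever opatch the environment resolves first, a quick sanity check before running it can save a failed session. The `check_opatch_home` function below is a hypothetical helper, not an Oracle-supplied tool:

```shell
#!/bin/sh
# Hypothetical helper: confirm that the opatch resolved from PATH belongs to
# the home you are about to patch, before invoking opatchauto.
check_opatch_home() {
    home=$1
    found=$(command -v opatch) || { echo "opatch not found in PATH"; return 1; }
    case $found in
        "$home"/OPatch/opatch) echo "OK: $found" ;;
        *) echo "WARNING: PATH resolves opatch to $found, not $home/OPatch/opatch"; return 1 ;;
    esac
}

# Example:
# export PATH=/u01/app/12.2.0/grid/OPatch:$PATH
# check_opatch_home /u01/app/12.2.0/grid || exit 1
```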


==>Oracle - DB:
================
-------------------------------------------------------------------------
su - oracle

export ORACLE_HOME=/u01/app/oracle/product/12.2.0.1/dbhome_1
export PATH=/u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch:$PATH
echo $PATH
which opatch
opatch version

root@#  /u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch/opatchauto apply /backup/PATCH/Jan2020/30501932 -oh /u01/app/oracle/product/12.2.0.1/dbhome_1


You can either patch the GRID and RDBMS homes separately, as above, or patch both homes on a node in a single shot with one opatchauto command.

====>To patch both homes together:
As the root user, execute the following command on each node of the cluster:
# <GI_HOME>/OPatch/opatchauto apply <UNZIPPED_PATCH_LOCATION>/29708720
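If you prefer to script the per-home route rather than the one-shot command, a minimal loop might look like the sketch below; the `apply_both_homes` function and the `OPATCHAUTO`/`PATCH` variables are assumptions for illustration:

```shell
#!/bin/sh
# Hypothetical wrapper: apply the same patch to a list of homes in sequence,
# stopping at the first failure. Set OPATCHAUTO and PATCH for your environment.
apply_both_homes() {
    for home in "$@"; do
        "$OPATCHAUTO" apply "$PATCH" -oh "$home" || { echo "patching $home failed" >&2; return 1; }
    done
}

# Example (run as root):
# OPATCHAUTO=/u01/app/12.2.0/grid/OPatch/opatchauto
# PATCH=/backup/PATCH/Jan2020/30501932
# apply_both_homes /u01/app/12.2.0/grid /u01/app/oracle/product/12.2.0.1/dbhome_1
```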


~~~~~~~~~~~~~~~~~~~~~~Output patch apply~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
==>Node-1 : Grid Patching:
=============================



 root@PRODHOSTDB01:[/]#export GRID_HOME=/u01/app/12.2.0/grid
root@PRODHOSTDB01:[/]#echo $PATH
/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/java8_64/jre/bin:/usr/java8_64/bin
root@PRODHOSTDB01:[/]#export PATH=/u01/app/12.2.0/grid/OPatch:$PATH
root@PRODHOSTDB01:[/]#echo $PATH
/u01/app/12.2.0/grid/OPatch:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/java8_64/jre/bin:/usr/java8_64/bin
root@PRODHOSTDB01:[/]#opatch version
OPatch Version: 12.2.0.1.19

 OPatch succeeded.
root@PRODHOSTDB01:[/]#/u01/app/12.2.0/grid/OPatch/opatchauto apply /backup/PATCH/Jan2020/30501932 -oh /u01/app/12.2.0/grid
which: 0652-141 There is no sudo in /bin /usr/bin /usr/local/bin /u01/app/12.2.0/grid/OPatch /usr/bin /etc /usr/sbin /usr/ucb /usr/bin/X11 /sbin /usr/java8_64/jre/bin/usr/java8_64/bin.

 OPatchauto session is initiated at Thu Jan 30 12:17:43 2020

 System initialization log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-01-30_12-17-51PM.log.

 Session log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/opatchauto2020-01-30_12-18-19PM.log
The id for this session is JS77

 Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.2.0/grid
Patch applicability verified successfully on home /u01/app/12.2.0/grid

 
 Bringing down CRS service on home /u01/app/12.2.0/grid
Prepatch operation log file location: /u01/app/grid/crsdata/PRODHOSTdb01/crsconfig/crspatch_PRODHOSTdb01_2020-01-30_12-18-58AM.log
CRS service brought down successfully on home /u01/app/12.2.0/grid

 
 Start applying binary patch on home /u01/app/12.2.0/grid
Successfully executed command: /usr/sbin/slibclean

 
 Binary patch applied successfully on home /u01/app/12.2.0/grid

 
 Starting CRS service on home /u01/app/12.2.0/grid

 Postpatch operation log file location: /u01/app/grid/crsdata/PRODHOSTdb01/crsconfig/crspatch_PRODHOSTdb01_2020-01-30_12-28-34AM.log
CRS service started successfully on home /u01/app/12.2.0/grid

 OPatchAuto successful.

 --------------------------------Summary--------------------------------

 Patching is completed successfully. Please find the summary as follows:

 Host:PRODHOSTdb01
CRS Home:/u01/app/12.2.0/grid
Version:12.2.0.1.0
Summary:

==Following patches were SUCCESSFULLY applied:

 Patch: /backup/PATCH/Jan2020/30501932/26839277
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-30_12-21-35PM_1.log

 Patch: /backup/PATCH/Jan2020/30501932/30585969
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-30_12-21-35PM_1.log

 Patch: /backup/PATCH/Jan2020/30501932/30586063
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-30_12-21-35PM_1.log

 Patch: /backup/PATCH/Jan2020/30501932/30591794
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-30_12-21-35PM_1.log

 Patch: /backup/PATCH/Jan2020/30501932/30593149
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-30_12-21-35PM_1.log



OPatchauto session completed at Thu Jan 30 12:40:54 2020
Time taken to complete the session 23 minutes, 12 seconds
root@PRODHOSTDB01:[/]#
 
  


~~~~~~~~~~RDBMS/ORACLE_HOME patching output~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


root@PRODHOSTDB01:[/]#
root@PRODHOSTDB01:[/]#
root@PRODHOSTDB01:[/]#export ORACLE_HOME=/u01/app/oracle/product/12.2.0.1/dbhome_1
root@PRODHOSTDB01:[/]#export PATH=/u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch:$PATH
root@PRODHOSTDB01:[/]#echo $PATH
/u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch:/u01/app/12.2.0/grid/OPatch:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/java8_64/jre/bin:/usr/java8_64/bin
root@PRODHOSTDB01:[/]#which opatch
/u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch/opatch
root@PRODHOSTDB01:[/]#opatch version
OPatch Version: 12.2.0.1.19

 OPatch succeeded.
root@PRODHOSTDB01:[/]#/u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch/opatchauto apply /backup/PATCH/Jan2020/30501932 -oh /u01/app/oracle/product/12.2.0.1/dbhome_1
which: 0652-141 There is no sudo in /bin /usr/bin /usr/local/bin /u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch /u01/app/12.2.0/grid/OPatch /usr/bin /etc /usr/sbin /usr/ucb /usr/bin/X11 /sbin /usr/java8_64/jre/bin /usr/java8_64/bin.

 OPatchauto session is initiated at Thu Jan 30 12:48:26 2020

 System initialization log file is /u01/app/oracle/product/12.2.0.1/dbhome_1/cfgtoollogs/opatchautodb/systemconfig2020-01-30_12-48-35PM.log.

 Session log file is /u01/app/oracle/product/12.2.0.1/dbhome_1/cfgtoollogs/opatchauto/opatchauto2020-01-30_12-49-28PM.log
The id for this session is 1172

 Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.2.0.1/dbhome_1
Patch applicability verified successfully on home /u01/app/oracle/product/12.2.0.1/dbhome_1

 
 Verifying SQL patch applicability on home /u01/app/oracle/product/12.2.0.1/dbhome_1
SQL patch applicability verified successfully on home /u01/app/oracle/product/12.2.0.1/dbhome_1

 
 Preparing to bring down database service on home /u01/app/oracle/product/12.2.0.1/dbhome_1
Successfully prepared home /u01/app/oracle/product/12.2.0.1/dbhome_1 to bring down database service

 
 Bringing down database service on home /u01/app/oracle/product/12.2.0.1/dbhome_1
Database service successfully brought down on home /u01/app/oracle/product/12.2.0.1/dbhome_1

 
 Performing prepatch operation on home /u01/app/oracle/product/12.2.0.1/dbhome_1
Perpatch operation completed successfully on home /u01/app/oracle/product/12.2.0.1/dbhome_1

 
 Start applying binary patch on home /u01/app/oracle/product/12.2.0.1/dbhome_1
Successfully executed command: /usr/sbin/slibclean

 Binary patch applied successfully on home /u01/app/oracle/product/12.2.0.1/dbhome_1

 
 Performing postpatch operation on home /u01/app/oracle/product/12.2.0.1/dbhome_1
Postpatch operation completed successfully on home /u01/app/oracle/product/12.2.0.1/dbhome_1

 
 Starting database service on home /u01/app/oracle/product/12.2.0.1/dbhome_1
Database service successfully started on home /u01/app/oracle/product/12.2.0.1/dbhome_1

 
 Preparing home /u01/app/oracle/product/12.2.0.1/dbhome_1 after database service restarted
No step execution required.........

 
 Trying to apply SQL patch on home /u01/app/oracle/product/12.2.0.1/dbhome_1
SQL patch applied successfully on home /u01/app/oracle/product/12.2.0.1/dbhome_1

 OPatchAuto successful.

 --------------------------------Summary--------------------------------

 Patching is completed successfully. Please find the summary as follows:

 Host:PRODHOSTdb01
RAC Home:/u01/app/oracle/product/12.2.0.1/dbhome_1
Version:12.2.0.1.0
Summary:

 ==Following patches were SKIPPED:

 Patch: /backup/PATCH/Jan2020/30501932/30586063
Reason: This patch is not applicable to this specified target type - "rac_database"

 Patch: /backup/PATCH/Jan2020/30501932/26839277
Reason: This patch is not applicable to this specified target type - "rac_database"

 Patch: /backup/PATCH/Jan2020/30501932/30591794
Reason: This patch is not applicable to this specified target type - "rac_database"

 
 ==Following patches were SUCCESSFULLY applied:

 Patch: /backup/PATCH/Jan2020/30501932/30585969
Log: /u01/app/oracle/product/12.2.0.1/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-30_12-50-06PM_1.log

 Patch: /backup/PATCH/Jan2020/30501932/30593149
Log: /u01/app/oracle/product/12.2.0.1/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2020-01-30_12-50-06PM_1.log

 
 Patching session reported following warning(s):
_________________________________________________

 [WARNING] The database instance 'TESTDB1' from '/u01/app/oracle/product/12.2.0.1/dbhome_1', in host'PRODHOSTdb01' is not running. SQL changes, if any,  will not be applied.
To apply. the SQL changes, bring up the database instance and run the command manually from any one node (run as oracle).
Refer to the readme to get the correct steps for applying the sql changes.

 
 OPatchauto session completed at Thu Jan 30 12:54:26 2020
Time taken to complete the session 6 minutes, 0 second
root@PRODHOSTDB01:[/]#

 


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

####>Start the Clusterware stack if it is not started automatically: 


>>As the root user, with GRID_HOME set:

crsctl start crs

root@TESTDB02:[/backup/PATCH]#echo $GRID_HOME
/u01/app/12.2.0.1/grid
root@TESTDB02:[/backup/PATCH]#$GRID_HOME/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
root@TESTDB02:[/backup/PATCH]#crsctl status res -t


>>To check status of Cluster services (all the CRS services are listed).
$crsctl status res -t
ps -ef|grep d.bin
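A scripted version of the `ps -ef|grep d.bin` check might look like the sketch below; the daemon names listed (ohasd/ocssd/crsd/evmd) are the usual Clusterware processes, but treat the list as illustrative:

```shell
#!/bin/sh
# Hypothetical check: count core Clusterware daemons in 'ps -ef'-style input.
# Reads the process listing on stdin so it can be tested with canned output.
count_crs_daemons() {
    grep -Ec 'ohasd\.bin|ocssd\.bin|crsd\.bin|evmd\.bin'
}

# Example on a live node:
# n=$(ps -ef | count_crs_daemons)
# [ "$n" -gt 0 ] && echo "$n CRS daemons running"
```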

sqlplus / as sysdba

>>>Take the lsinventory details for the DB and GRID homes. 
opatch lsinventory -detail > grid_OPatch_lsinventory_detail_After_DBGrid_patch.lst

/u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch/opatch lsinventory -detail > rdbms_OPatch_lsinventory_detail_After_DB_Patch.lst


==>start the instance:


 grid@PRODHOSTDB01:[/home/grid]$srvctl status database -d TESTDB
Instance TESTDB1 is not running on node PRODHOSTdb01
Instance TESTDB2 is running on node PRODHOSTdb02

 
 srvctl start instance -d TESTDB -i TESTDB1

 srvctl status database -d TESTDB

 grid@PRODHOSTDB01:[/home/grid]$srvctl status database -d TESTDB
Instance TESTDB1 is running on node PRODHOSTdb01
Instance TESTDB2 is running on node PRODHOSTdb02
grid@PRODHOSTDB01:[/home/grid]$

 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
grid@TESTDB01:[/home/grid]$crs_stat -t

 grid@PRODHOSTDB01:[/home/grid]$crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM.lsnr ora....er.type ONLINE    ONLINE    PRODHOSTdb01
ora.DATA.dg    ora....up.type ONLINE    ONLINE    PRODHOSTdb01
ora.FRA.dg     ora....up.type ONLINE    ONLINE    PRODHOSTdb01
ora.GIMR.dg    ora....up.type ONLINE    ONLINE    PRODHOSTdb01
ora.INDX.dg    ora....up.type ONLINE    ONLINE    PRODHOSTdb01
ora....ER.lsnr ora....er.type ONLINE    ONLINE    PRODHOSTdb01
ora....AF.lsnr ora....er.type OFFLINE   OFFLINE
ora....N1.lsnr ora....er.type ONLINE    ONLINE    PRODHOSTdb01
ora....N2.lsnr ora....er.type ONLINE    ONLINE    PRODHOSTdb02
ora....N3.lsnr ora....er.type ONLINE    ONLINE    PRODHOSTdb02
ora.MGMTLSNR   ora....nr.type ONLINE    ONLINE    PRODHOSTdb01
ora.OCR.dg     ora....up.type ONLINE    ONLINE    PRODHOSTdb01
ora.REDO.dg    ora....up.type ONLINE    ONLINE    PRODHOSTdb01
ora.asm        ora.asm.type   ONLINE    ONLINE    PRODHOSTdb01
ora.cvu        ora.cvu.type   ONLINE    ONLINE    PRODHOSTdb02
ora....01.lsnr application    ONLINE    ONLINE    PRODHOSTdb01
ora....b01.ons application    ONLINE    ONLINE    PRODHOSTdb01
ora....b01.vip ora....t1.type ONLINE    ONLINE    PRODHOSTdb01
ora....02.lsnr application    ONLINE    ONLINE    PRODHOSTdb02
ora....b02.ons application    ONLINE    ONLINE    PRODHOSTdb02
ora....b02.vip ora....t1.type ONLINE    ONLINE    PRODHOSTdb02
ora.TESTDB.db ora....se.type ONLINE    ONLINE    PRODHOSTdb01
ora....svc.svc ora....ce.type ONLINE    ONLINE    PRODHOSTdb01
ora.mgmtdb     ora....db.type ONLINE    ONLINE    PRODHOSTdb01
ora....network ora....rk.type ONLINE    ONLINE    PRODHOSTdb01
ora.ons        ora.ons.type   ONLINE    ONLINE    PRODHOSTdb01
ora.proxy_advm ora....vm.type OFFLINE   OFFLINE
ora.qosmserver ora....er.type ONLINE    ONLINE    PRODHOSTdb02
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    PRODHOSTdb01
ora.scan2.vip  ora....ip.type ONLINE    ONLINE    PRODHOSTdb02
ora.scan3.vip  ora....ip.type ONLINE    ONLINE    PRODHOSTdb02
grid@PRODHOSTDB01:[/home/grid]$

 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
###>>Get the post-patch evidence from Node-1 after patch application:


After completing the patch application, check the consistency of the inventory information for the GI home and each database home that was patched. Run the following command as the respective Oracle home owner to check consistency.

Note: Perform this on both nodes:

For Grid:
============
$ /u01/app/12.2.0/grid/OPatch/opatch lsinventory -detail -oh <ORACLE_HOME>

/u01/app/12.2.0/grid/OPatch/opatch lsinventory  -oh /u01/app/12.2.0/grid |tee -a /backup/PATCH/Jan2020/grid_OPatch_lsinventory_After.lst
/u01/app/12.2.0/grid/OPatch/opatch lsinventory -detail -oh /u01/app/12.2.0/grid |tee -a /backup/PATCH/Jan2020/grid_OPatch_lsinventory_detail_After.lst

For RDBMS as oracle user:
=============
If this command succeeds, it lists the Oracle components that are installed in the home. Save the output so you have a record of the status after the patch apply.


/u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch/opatch lsinventory -oh /u01/app/oracle/product/12.2.0.1/dbhome_1 |tee -a /backup/PATCH/Jan2020/rdbms_OPatch_lsinventory_After.lst
/u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch/opatch lsinventory -detail -oh /u01/app/oracle/product/12.2.0.1/dbhome_1 |tee -a /backup/PATCH/Jan2020/rdbms_OPatch_lsinventory_detail_After.lst
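To confirm at a glance which patches the session added, the before and after lsinventory captures can be compared. The `new_patches` helper below is a sketch: it assumes lines of the form `Patch  30593149  : applied on ...`, and the exact wording varies by OPatch version, so adjust the pattern to your output:

```shell
#!/bin/sh
# Hypothetical helper: print patch IDs present in the post-patch lsinventory
# capture but missing from the pre-patch one.
new_patches() {
    before=$1; after=$2
    b=$(mktemp); a=$(mktemp)
    grep -oE 'Patch[[:space:]]+[0-9]+' "$before" | tr -cd '0-9\n' | sort -u > "$b"
    grep -oE 'Patch[[:space:]]+[0-9]+' "$after"  | tr -cd '0-9\n' | sort -u > "$a"
    comm -13 "$b" "$a"   # IDs only in the "after" capture
    rm -f "$b" "$a"
}

# Example (file names assume you saved a "Before" capture earlier):
# new_patches rdbms_OPatch_lsinventory_Before.lst rdbms_OPatch_lsinventory_After.lst
```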

~~~~~~~~~~~~~~~~~~~~~~~~completion of node-1  patching activity~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

###====>Perform the same steps on Node-2:


==>1.2.4 Patch Installation ON NODE-2
########################################################################################

sqlplus / as sysdba
01:28:49 SQL> shut immediate;
                                       
Note: After shutting down instance 1, we observed that connections/transactions moved over only gradually; after 10-15 minutes all connections (3,593 in total) had moved to node-2. Below is the output from instance-2:

Patching Oracle RAC Database Homes and GI Separately

•Case 1: Oracle RAC, where the GI Home and the Database Homes are not shared and ACFS file system is not configured.

oracle@:[/home/oracle]$ps -ef| grep sqlplus
  oracle  9307060  7275390   0 03:13:07  pts/3  0:00 grep sqlplus
oracle@:[/home/oracle]$


==>Grid HOME:
==============
---------------------------------------------------------------------------

==> Make sure the patch directory has full permissions, otherwise patching may fail with a file-permission error.

root@TESTDB02:[/backup/PATCH]#chmod -R 775 /backup/PATCH/Jan2020

-->Make sure that the instance is down. This step is optional, but it is better to shut down the instance manually to confirm it shuts down cleanly.
   Log in as oracle and shut down the instance:  "shutdown immediate;"
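One way to script the "is it really down" check is to look for the instance's pmon background process. The `instance_is_down` function below is an illustrative sketch that reads `ps -ef` output on stdin:

```shell
#!/bin/sh
# Hypothetical check: succeed only if no ora_pmon_<SID> process appears in
# the 'ps -ef'-style listing read from stdin.
instance_is_down() {
    ! grep -q "ora_pmon_$1"
}

# Example on a live node:
# ps -ef | instance_is_down TESTDB2 && echo "TESTDB2 is down, safe to patch"
```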

-->Now as a root User:

export GRID_HOME=/u01/app/12.2.0/grid    <--Not Mandatory

echo $PATH
export PATH=/u01/app/12.2.0/grid/OPatch:$PATH
which opatch
opatch version

root@# /u01/app/12.2.0/grid/OPatch/opatchauto apply /backup/PATCH/Jan2020/30501932 -oh /u01/app/12.2.0/grid


==>Oracle - DB:
==============
-------------------------------------------------------------------------
-->As a root user:

export ORACLE_HOME=/u01/app/oracle/product/12.2.0.1/dbhome_1   <-- Not mandatory

export PATH=/u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch:$PATH
echo $PATH
which opatch
opatch version

root@# /u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch/opatchauto apply /backup/PATCH/Jan2020/30501932 -oh /u01/app/oracle/product/12.2.0.1/dbhome_1

==>start the instance as an oracle user:
su - oracle

srvctl start instance -d TESTDB -i TESTDB2

srvctl status database  -d TESTDB



~~~~~~~~~~~~output of patch application on node-2~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


oracle@PRODHOSTDB02:[/home/oracle]$sqlplus / as sysdba

 SQL*Plus: Release 12.2.0.1.0 Production on Sun Feb 2 14:19:07 2020

 Copyright (c) 1982, 2017, Oracle.  All rights reserved.

 
 Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

 SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>

 -----------------
root@PRODHOSTDB02:[/]#echo $PATH
/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/java8_64/jre/bin:/usr/java8_64/bin
root@PRODHOSTDB02:[/]#export PATH=/u01/app/12.2.0/grid/OPatch:$PATH
root@PRODHOSTDB02:[/]#echo $PATH
/u01/app/12.2.0/grid/OPatch:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/java8_64/jre/bin:/usr/java8_64/bin
root@PRODHOSTDB02:[/]#opatch version
OPatch Version: 12.2.0.1.19

 OPatch succeeded.
root@PRODHOSTDB02:[/]#which opatch
/u01/app/12.2.0/grid/OPatch/opatch
root@PRODHOSTDB02:[/]#/u01/app/12.2.0/grid/OPatch/opatchauto apply /backup/PATCH/Jan2020/30501932 -oh /u01/app/12.2.0/grid
which: 0652-141 There is no sudo in /bin /usr/bin /usr/local/bin /u01/app/12.2.0/grid/OPatch /usr/bin /etc /usr/sbin /usr/ucb /usr/bin/X11 /sbin /usr/java8_64/jre/bin /usr/java8_64/bin.

 OPatchauto session is initiated at Sun Feb  2 14:39:53 2020

 System initialization log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-02-02_02-40-02PM.log.

 Session log file is /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/opatchauto2020-02-02_02-40-41PM.log
The id for this session is XEQZ

 Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.2.0/grid
Patch applicability verified successfully on home /u01/app/12.2.0/grid

 
 Bringing down CRS service on home /u01/app/12.2.0/grid
Prepatch operation log file location: /u01/app/grid/crsdata/PRODHOSTdb02/crsconfig/crspatch_PRODHOSTdb02_2020-02-02_02-41-20PM.log
CRS service brought down successfully on home /u01/app/12.2.0/grid

 
 Start applying binary patch on home /u01/app/12.2.0/grid
Successfully executed command: /usr/sbin/slibclean

 Binary patch applied successfully on home /u01/app/12.2.0/grid

 
 Starting CRS service on home /u01/app/12.2.0/grid
Postpatch operation log file location: /u01/app/grid/crsdata/PRODHOSTdb02/crsconfig/crspatch_PRODHOSTdb02_2020-02-02_02-49-56PM.log
CRS service started successfully on home /u01/app/12.2.0/grid

 OPatchAuto successful.

 --------------------------------Summary--------------------------------

 Patching is completed successfully. Please find the summary as follows:

 Host:PRODHOSTdb02
CRS Home:/u01/app/12.2.0/grid
Version:12.2.0.1.0
Summary:

 ==Following patches were SUCCESSFULLY applied:

 Patch: /backup/PATCH/Jan2020/30501932/26839277
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-02_14-42-55PM_1.log

 Patch: /backup/PATCH/Jan2020/30501932/30585969
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-02_14-42-55PM_1.log

 Patch: /backup/PATCH/Jan2020/30501932/30586063
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-02_14-42-55PM_1.log

 Patch: /backup/PATCH/Jan2020/30501932/30591794
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-02_14-42-55PM_1.log

 Patch: /backup/PATCH/Jan2020/30501932/30593149
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-02_14-42-55PM_1.log

 
 OPatchauto session completed at Sun Feb  2 14:55:05 2020
Time taken to complete the session 15 minutes, 12 seconds
root@PRODHOSTDB02:[/]#
  
----------------------------

==>Output of patch apply for RDBMS Home on Node-2:
===================================================


 root@PRODHOSTDB02:[/]#echo $PATH
/u01/app/12.2.0/grid/OPatch:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/java8_64/jre/bin:/usr/java8_64/bin
root@PRODHOSTDB02:[/]#export PATH=/u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch:$PATH
root@PRODHOSTDB02:[/]#echo $PATH
/u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch:/u01/app/12.2.0/grid/OPatch:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/java8_64/jre/bin:/usr/java8_64/bin
root@PRODHOSTDB02:[/]#which opatch
/u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch/opatch
root@PRODHOSTDB02:[/]#opatch version
OPatch Version: 12.2.0.1.19

 OPatch succeeded.
root@PRODHOSTDB02:[/]#
root@PRODHOSTDB02:[/]#/u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch/opatchauto apply /backup/PATCH/Jan2020/30501932 -oh /u01/app/oracle/product/12.2.0.1/dbhome_1
which: 0652-141 There is no sudo in /bin /usr/bin /usr/local/bin /u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch /u01/app/12.2.0/grid/OPatch /usr/bin /etc /usr/sbin /usr/ucb /usr/bin/X11 /sbin /usr/java8_64/jre/bin /usr/java8_64/bin.

 OPatchauto session is initiated at Sun Feb  2 15:03:59 2020

 System initialization log file is /u01/app/oracle/product/12.2.0.1/dbhome_1/cfgtoollogs/opatchautodb/systemconfig2020-02-02_03-04-07PM.log.

 Session log file is /u01/app/oracle/product/12.2.0.1/dbhome_1/cfgtoollogs/opatchauto/opatchauto2020-02-02_03-04-47PM.log
The id for this session is F3V4

 Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.2.0.1/dbhome_1
Patch applicability verified successfully on home /u01/app/oracle/product/12.2.0.1/dbhome_1

 
 Verifying SQL patch applicability on home /u01/app/oracle/product/12.2.0.1/dbhome_1
SQL patch applicability verified successfully on home /u01/app/oracle/product/12.2.0.1/dbhome_1

 
 Preparing to bring down database service on home /u01/app/oracle/product/12.2.0.1/dbhome_1
Successfully prepared home /u01/app/oracle/product/12.2.0.1/dbhome_1 to bring down database service

 
 Bringing down database service on home /u01/app/oracle/product/12.2.0.1/dbhome_1
Database service successfully brought down on home /u01/app/oracle/product/12.2.0.1/dbhome_1

 
 Performing prepatch operation on home /u01/app/oracle/product/12.2.0.1/dbhome_1
Perpatch operation completed successfully on home /u01/app/oracle/product/12.2.0.1/dbhome_1

 
 Start applying binary patch on home /u01/app/oracle/product/12.2.0.1/dbhome_1
Successfully executed command: /usr/sbin/slibclean

 Binary patch applied successfully on home /u01/app/oracle/product/12.2.0.1/dbhome_1

 
 Performing postpatch operation on home /u01/app/oracle/product/12.2.0.1/dbhome_1
Postpatch operation completed successfully on home /u01/app/oracle/product/12.2.0.1/dbhome_1

 
 Starting database service on home /u01/app/oracle/product/12.2.0.1/dbhome_1
Database service successfully started on home /u01/app/oracle/product/12.2.0.1/dbhome_1

 
 Preparing home /u01/app/oracle/product/12.2.0.1/dbhome_1 after database service restarted
No step execution required.........

 
 Trying to apply SQL patch on home /u01/app/oracle/product/12.2.0.1/dbhome_1
SQL patch applied successfully on home /u01/app/oracle/product/12.2.0.1/dbhome_1

 OPatchAuto successful.

 --------------------------------Summary--------------------------------

 Patching is completed successfully. Please find the summary as follows:

 Host:PRODHOSTdb02
RAC Home:/u01/app/oracle/product/12.2.0.1/dbhome_1
Version:12.2.0.1.0
Summary:

 ==Following patches were SKIPPED:

 Patch: /backup/PATCH/Jan2020/30501932/30586063
Reason: This patch is not applicable to this specified target type - "rac_database"

 Patch: /backup/PATCH/Jan2020/30501932/26839277
Reason: This patch is not applicable to this specified target type - "rac_database"

 Patch: /backup/PATCH/Jan2020/30501932/30591794
Reason: This patch is not applicable to this specified target type - "rac_database"

 
 ==Following patches were SUCCESSFULLY applied:

 Patch: /backup/PATCH/Jan2020/30501932/30585969
Log: /u01/app/oracle/product/12.2.0.1/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-02_15-05-26PM_1.log

 Patch: /backup/PATCH/Jan2020/30501932/30593149
Log: /u01/app/oracle/product/12.2.0.1/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-02_15-05-26PM_1.log

 
 Patching session reported following warning(s):
_________________________________________________

 [WARNING] The database instance 'TESTDB2' from '/u01/app/oracle/product/12.2.0.1/dbhome_1', in host'PRODHOSTdb02' is not running. SQL changes, if any,  will not be applied.
To apply. the SQL changes, bring up the database instance and run the command manually from any one node (run as oracle).
Refer to the readme to get the correct steps for applying the sql changes.

 OPatchauto session completed at Sun Feb  2 15:09:45 2020
Time taken to complete the session 5 minutes, 47 seconds
root@PRODHOSTDB02:[/]#
root@PRODHOSTDB02:[/]#
root@PRODHOSTDB02:[/]#su - oracle
oracle@PRODHOSTDB02:[/home/oracle]$
oracle@PRODHOSTDB02:[/home/oracle]$
oracle@PRODHOSTDB02:[/home/oracle]$srvctl start instance -d TESTDB -i TESTDB2
oracle@PRODHOSTDB02:[/home/oracle]$
oracle@PRODHOSTDB02:[/home/oracle]$srvctl status database  -d TESTDB
Instance TESTDB1 is running on node PRODHOSTdb01
Instance TESTDB2 is running on node PRODHOSTdb02

oracle@PRODHOSTDB02:[/home/oracle]$
  
---------------------------------------------------------------
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~END of output of binary patch application on node-2~~~~~~~~~~~~~~~~~~~~~~~~~~



####==>Datapatch application: this should be run from one node only.

==>Run as an oracle user


cd $ORACLE_HOME/OPatch
./datapatch -verbose
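Because datapatch can leave a patch in a retryable error state (as happened here with the ORA-04021 timeouts shown below), it is worth scanning the captured output before declaring success. The `datapatch_needs_retry` helper is an illustrative sketch:

```shell
#!/bin/sh
# Hypothetical check: succeed if the saved datapatch output reports any patch
# left in the "WITH ERRORS (RETRYABLE)" state.
datapatch_needs_retry() {
    grep -q 'WITH ERRORS (RETRYABLE)' "$1"
}

# Example:
# ./datapatch -verbose | tee /tmp/datapatch.out
# datapatch_needs_retry /tmp/datapatch.out && echo "re-run datapatch after clearing blockers"
```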

~~~~~~~~~~~~~~~~~~~~~~Output of datapatch application (run from one node only)~~~~~~~~~~~~~~~~~~~~~~
-------------

oracle@PRODHOSTDB02:[/u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch]$./datapatch -verbose
SQL Patching tool version 12.2.0.1.0 Production on Sun Feb  2 15:21:24 2020
Copyright (c) 2012, 2020, Oracle.  All rights reserved.

 Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_34800046_2020_02_02_15_21_24/sqlpatch_invocation.log

 Connecting to database...OK
Bootstrapping registry and package to current versions...done
Determining current state...done

 Current state of SQL patches:
Bundle series DBRU:
  ID 200114 in the binary registry and not installed in the SQL registry

 Adding patches to installation queue and performing prereq checks...
Installation queue:
  Nothing to roll back
  The following patches will be applied:
    30593149 (DATABASE JAN 2020 RELEASE UPDATE 12.2.0.1.200114)

 Installing patches...

 Patch installation complete.  Total patches installed: 1

 Validating logfiles...
Patch 30593149 apply: WITH ERRORS (RETRYABLE)
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/30593149/23339232/30593149_apply_TESTDB_2020Feb02_15_22_57.log (errors)
    Error at line 10007: ORA-04021: timeout occurred while waiting to lock object SYS.DBMS_AQ
    Error at line 10051: ORA-04021: timeout occurred while waiting to lock object SYS.DBMS_AQADM_VAR
    Error at line 10057: ORA-04021: timeout occurred while waiting to lock object SYS.DBMS_AQADM_SYS

 Adding patches to retry installation queue...
Retry installation queue:
  Nothing to roll back
  The following patches will be applied:
    30593149 (DATABASE JAN 2020 RELEASE UPDATE 12.2.0.1.200114)

 Installing patches...

 Patch installation complete.  Total patches installed: 1

 Validating logfiles...
Patch 30593149 apply: SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/30593149/23339232/30593149_apply_TESTDB_2020Feb02_16_11_06.log (no errors)
SQL Patching tool complete on Sun Feb  2 16:21:59 2020
oracle@PRODHOSTDB02:[/u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch]$
oracle@PRODHOSTDB02:[/u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch]$

  

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~End of datapatch application~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-----While running datapatch we hit the issue below because AQ (Advanced Queuing) jobs were running. If you encounter a similar issue while applying datapatch with AQ jobs running, refer to this article for the solution: datapatch execution takes time and fails with ORA-04021 as AQ jobs are running while PSU Patching on 12c


~~>Check the status of datapatch/modified SQL

set line 500
col patch_id for 9999999999
col patch_uid for 9999999999
col version for a10
col action for a10
col install_id  for 99
col action_time for a30
col description for a60
col bundle_series for a20
col status for a30

select patch_id, patch_uid, version, action, install_id, action_time, description, bundle_series,status  from dba_registry_sqlpatch;


SQL> set line 500
SQL> col patch_id for 9999999999
SQL> col patch_uid for 9999999999
SQL> col version for a10
SQL> col action for a10
SQL> col install_id  for 99
SQL> col action_time for a30
SQL> col description for a60
SQL> col bundle_series for a20
SQL> col status for a30
SQL>
SQL> select patch_id, patch_uid, version, action, install_id, action_time, description, bundle_series,status  from dba_registry_sqlpatch;

    PATCH_ID   PATCH_UID VERSION    ACTION     INSTALL_ID ACTION_TIME                    DESCRIPTION                                                  BUNDLE_SERIES        STATUS
----------- ----------- ---------- ---------- ---------- ------------------------------ ------------------------------------------------------------ -------------------- ------------------------------
   30593149    23339232 12.2.0.1   APPLY               1 02-FEB-20 04.11.06.311937 PM   DATABASE JAN 2020 RELEASE UPDATE 12.2.0.1.200114             DBRU                 WITH ERRORS (RETRYABLE)
   30593149    23339232 12.2.0.1   APPLY               1 02-FEB-20 04.21.59.239175 PM   DATABASE JAN 2020 RELEASE UPDATE 12.2.0.1.200114             DBRU                 SUCCESS

 SQL>
  

Hope this helps.