Thursday 17 November 2011

In Oracle Database 10g, the Automatic Shared Memory Management (ASMM) feature was introduced to automatically determine the sizes of the database buffer cache (default pool), shared pool, large pool, and Java pool (starting with 10gR2, the Streams pool is included) by setting the single parameter SGA_TARGET.
This feature removes manual tasks such as analyzing the database workload and redistributing memory across the SGA pools.
Benefits of Automatic Shared Memory Management
Automatic Shared Memory Management simplifies the configuration of the SGA. Before Oracle Database 10g, the buffer cache, shared pool, Java pool, and large pool had to be sized manually for the database. Undersizing can lead to poor performance and out-of-memory errors (ORA-4031), while oversizing can waste memory. This feature enables you to specify a total memory amount to be used for all SGA components (buffer cache, shared pool, Java pool, and large pool). The Oracle database periodically redistributes memory between these components according to workload requirements. Before Oracle Database 10g, the user did not have exact control over the total size of the SGA, since memory was allocated by Oracle for the fixed SGA and for other internal metadata allocations over and above the total size of the user-specified SGA parameters. The SGA size parameter SGA_TARGET now includes all the memory in the SGA: all the automatically sized components, the manually sized components, and any internal allocations during startup.
Configuring Automatic Shared Memory Management
Automatic Shared Memory Management can be configured by using the SGA_TARGET initialization parameter. If you specify a non-zero value for SGA_TARGET, the following four memory pools are automatically sized:
Database buffer cache (Default pool)
Shared pool
Large pool
Java pool
Starting with 10gR2, the Streams pool is also included
If you set SGA_TARGET to 0, Automatic Shared Memory Management is disabled. The default value of SGA_TARGET is 0.
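For example, with an SPFILE in use, ASMM could be enabled dynamically with something like the following (the 2G figure is only illustrative, and SGA_TARGET cannot exceed SGA_MAX_SIZE):

```sql
-- Enable Automatic Shared Memory Management with a 2G target
ALTER SYSTEM SET SGA_TARGET = 2G SCOPE = BOTH;
```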
The individual parameters used before Oracle 10g to specify the sizes of the automatically sized components have not been made obsolete. The initialization parameters that size these pools (DB_CACHE_SIZE, SHARED_POOL_SIZE, LARGE_POOL_SIZE, and JAVA_POOL_SIZE) are now referred to as auto-tuned SGA parameters.
The following are now referred to as manually sized components:
Log buffer
Other buffer caches (KEEP/RECYCLE, other block sizes)
Streams pool (new in Oracle Database 10g)
Fixed SGA and other internal allocations
Note: STATISTICS_LEVEL must be set to TYPICAL (default) or ALL to use Automatic Shared Memory Management.
Behavior of Auto-Tuned SGA Parameters
When SGA_TARGET is not set or set to zero, auto-tuned SGA components behave as they did in previous releases. The only exception is the Shared Pool. As of 10g, internal overhead allocations for metadata are now included in the value of the SHARED_POOL_SIZE parameter.
Behavior of Manual SGA Parameters
Below are the manual SGA size parameters:
DB_KEEP_CACHE_SIZE
DB_RECYCLE_CACHE_SIZE
DB_nK_CACHE_SIZE (n = 2, 4, 8, 16, 32)
LOG_BUFFER
STREAMS_POOL_SIZE
Manual SGA parameters are specified by the user, and the given sizes precisely control the sizes of their corresponding components.
When SGA_TARGET is set, the total size of the manual SGA size parameters is subtracted from the SGA_TARGET value, and the balance is given to the auto-tuned SGA components.
For example, if SGA_TARGET = 8G and DB_KEEP_CACHE_SIZE = 1G, the total size of the four auto-tuned components (shared pool, Java pool, default buffer cache, and large pool) is limited to 7GB. The 7GB also includes the fixed SGA and the log buffer; only after those have been allocated is the rest of the memory divided between the auto-tuned components. The size of the keep cache is 1GB, as specified by the parameter.
Resizing SGA_TARGET
SGA_TARGET is a dynamic parameter and can be changed through Enterprise Manager or with the ALTER SYSTEM command.
SGA_TARGET can be increased up to the value of SGA_MAX_SIZE. It can be reduced until any one of the auto-tuned components reaches its minimum size (either a user-specified minimum or an internally determined minimum). If you increase the value of SGA_TARGET, the additional memory is distributed according to the auto-tuning policy across the auto-tuned components. If you reduce the value of SGA_TARGET, the memory is taken away by the auto-tuning policy from one or more of the auto-tuned components. Therefore any change in the value of SGA_TARGET affects only the sizes of the auto-tuned components.
For example, if SGA_TARGET = 8G and DB_KEEP_CACHE_SIZE = 1G and you increase SGA_TARGET to 9G, the additional 1GB is distributed only among the components controlled by SGA_TARGET. The value of DB_KEEP_CACHE_SIZE is not affected. Likewise, if SGA_TARGET is reduced to 7G, the 1GB is only taken from the components controlled by SGA_TARGET. This decrease does not affect the settings of the manually controlled parameters like DB_KEEP_CACHE_SIZE.
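The resize operations just described could be issued as follows (the sizes are simply the figures from the example, and the increase must still fit within SGA_MAX_SIZE):

```sql
-- Grow the auto-tuned components by 1G; DB_KEEP_CACHE_SIZE is untouched
ALTER SYSTEM SET SGA_TARGET = 9G;

-- Shrink the auto-tuned components; manual components are again unaffected
ALTER SYSTEM SET SGA_TARGET = 7G;
```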
Disable Automatic Shared Memory Tuning
You can dynamically disable automatic shared memory tuning by setting SGA_TARGET to zero. In this case the values of all the auto-tuned parameters are set to the current sizes of the components, even if the user had earlier specified a different non-zero value for an auto-tuned parameter. For example, suppose the value of SGA_TARGET is 8G and the user-specified value of SHARED_POOL_SIZE is 1G.
If the system has internally adjusted the size of the shared pool component to 2G, then setting SGA_TARGET to zero results in SHARED_POOL_SIZE being set to 2G, overriding the original user-specified value.
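As a sketch, disabling auto-tuning and then checking the size the shared pool was frozen at might look like this (from SQL*Plus):

```sql
-- Disable Automatic Shared Memory Management
ALTER SYSTEM SET SGA_TARGET = 0;

-- SHARED_POOL_SIZE now reflects the component's current size
SHOW PARAMETER shared_pool_size
```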
Manually Resizing Auto-Tuned Parameters
As discussed above, when SGA_TARGET is set, the default value for the auto-tuned parameters is zero, and a nonzero value serves as a lower bound for the size of the corresponding component. Therefore, when an auto-tuned parameter is resized manually, the resize results in a change to the size of the component only if the new value is larger than the present size of the component.
For example, if you set SGA_TARGET to 8G and set SHARED_POOL_SIZE to 2G, you ensure that the Shared Pool has at least 2G at all times to accommodate the necessary memory allocations.
If you adjust the value of SHARED_POOL_SIZE to 1G, there is no immediate effect on the size of the shared pool. It simply gives the automatic memory tuning algorithm the freedom to later reduce the Shared Pool to 1G if required.
On the other hand, if the size of the Shared Pool is 1G to begin with, then adjusting the value of SHARED_POOL_SIZE to 2G results in the Shared Pool component growing to a size of 2G. The memory used in this grow operation is taken from one or more auto-tuned components, and the sizes of the manual components are not affected.
SGA Background Process
The Automatic Shared Memory Management feature uses a new background process named Memory Manager (MMAN). MMAN serves as the SGA Memory Broker and coordinates the sizing of the memory components. The SGA Memory Broker keeps track of the sizes of the components and pending resize operations.
Automatic Shared Memory Principles
The SGA Memory Broker observes the system and workload in order to determine the ideal distribution of memory. It is never complacent and performs this check every few minutes so that memory can always be present where needed. In the absence of automatic shared memory tuning, components had to be sized to anticipate their individual worst-case memory requirements.
For example, consider a system that runs large OLTP jobs during the day that require large buffer caches, and runs parallel batch jobs at night that require large values for the large pool. The DBA would have to simultaneously configure both the buffer cache and the large pool to accommodate their peak requirements.
With SGA auto-tuning, when the OLTP job runs, the buffer cache has most of the memory to allow for good I/O performance. When the DSS batch job starts later, the memory automatically migrates to the large pool so that it can be used by Parallel Query operations.
Based on workload information, automatic shared memory tuning:
Captures statistics periodically in the background
Uses the different memory advisories
Performs “what-if” analyses to determine best distribution of memory
Moves memory to where it is most needed
Eliminates the need to configure parameters for the worst-case scenario
Resurrects component sizes from last shutdown if SPFILE is used
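The current auto-tuned component sizes and the history of resize operations can be inspected through dynamic performance views; a minimal sketch:

```sql
-- Current sizes of the dynamically sized SGA components
SELECT component, current_size, min_size, max_size
FROM   v$sga_dynamic_components;

-- History of automatic (and manual) resize operations
SELECT component, oper_type, initial_size, final_size, end_time
FROM   v$sga_resize_ops;
```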
MMAN
======
The Memory Manager (MMAN) process serves as the SGA Memory Broker and coordinates the sizing of the SGA memory components, as described under SGA Background Process above.
RVWR
======
Flashback database
—————————–
- This is a new feature introduced in 10g.
- Flashing back a database means returning it to a previous state.
- The Flashback Database feature provides a way to quickly revert an entire Oracle database to the state it was in at a past point in time.
- This is different from traditional point-in-time recovery.
- A new background process, Recovery Writer (RVWR), is introduced, which is responsible for writing flashback logs that store pre-images of data blocks.
- One can use Flashback Database to back out changes that:
- Have resulted in logical data corruptions.
- Are a result of user error.
- This feature is not applicable for recovering the database in case of media failure.
- The time required for flashing back a database to a specific time in the past is DIRECTLY PROPORTIONAL to the number of changes made, not to the size of the database.
Jnnn
=====
These are job queue processes which are spawned as needed by CJQ0 to complete scheduled jobs. This is not a new process.
CTWR
=====
This is a new process, Change Tracking Writer (CTWR), which works with the new block change tracking feature in 10g for fast RMAN incremental backups.
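The feature CTWR supports is switched on at the database level; for example (the tracking file location here is only illustrative):

```sql
-- Enable fast incremental backups; CTWR then maintains this file
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/db/rman_change_track.f';

-- Verify
SELECT status, filename FROM v$block_change_tracking;
```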
MMNL
=====
The Memory Monitor Light (MMNL) process is a new process in 10g which works with the Automatic Workload Repository new features (AWR) to write out full statistics buffers to disk as needed.
MMON
======
The Manageability Monitor (MMON) process was introduced in 10g and is associated with the Automatic Workload Repository new features used for automatic problem detection and self-tuning. MMON writes out the required statistics for AWR on a scheduled basis.
M000
======
These are the MMON background slave (m000) processes.
CJQn
=====
This is the Job Queue monitoring process which is initiated with the job_queue_processes parameter. This is not new.
RBAL
====
This is the ASM related process that performs rebalancing of disk resources controlled by ASM.
ARBx
====
These processes are managed by the RBAL process and are used to do the actual rebalancing of ASM
controlled disk resources. The number of ARBx processes invoked is directly influenced by the asm_power_limit parameter.
ASMB
=====
The ASMB process is used to provide information to and from the Cluster Synchronization Services used by ASM to manage the disk resources. It is also used to update statistics and provide a heartbeat mechanism.
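To check which of the background processes described above are actually running on an instance, one common idiom is to query V$BGPROCESS (PADDR is non-zero for started processes):

```sql
SELECT name, description
FROM   v$bgprocess
WHERE  paddr <> '00'
ORDER  BY name;
```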
We cannot start the Oracle database; we get an error message telling us that one datafile is lost or corrupted. What can we do?
ORA-01157: cannot identify/lock data file 10 - see DBWR trace file
ORA-01110: data file 10: '/u01/db/test.dbf'
If the database is in ARCHIVELOG mode with a working online backup concept, you are a lucky person: recover the database and everything is OK. But what can we do if the database is in NOARCHIVELOG mode and no backup is present?
a). If the database is still running do the following
1). Switch the damaged datafile to the RECOVER status
sql> ALTER DATABASE DATAFILE '/u01/db/test.dbf' OFFLINE DROP;
sql> SELECT file#,status,bytes,name FROM v$datafile;
  FILE# STATUS       BYTES NAME
------- -------- --------- ------------------------------
      1 SYSTEM   104857600 /u01/db/SOL3/sys/SOL3_sys1.dbf
      2 RECOVER    2097152 /u01/db/test.dbf
2). Stop and start the database to verify that the database can be started without 'test.dbf'.
sql> SHUTDOWN IMMEDIATE;
sql> STARTUP;
3). DROP the tablespace to which the datafile belongs
sql> DROP TABLESPACE test INCLUDING CONTENTS;
b). If the database is not running do the following
With a damaged or lost datafile that is still in ONLINE mode, the database can only be stopped with SHUTDOWN ABORT. Therefore it's better to switch the datafile to the RECOVER status as shown above before stopping the database. However, there is also a way to switch the datafile to the RECOVER status when the database is stopped.
1). Mount the database and switch the damaged datafile to the RECOVER status
sql> STARTUP MOUNT;
sql> ALTER DATABASE DATAFILE '/u01/db/test.dbf' OFFLINE DROP;
sql> ALTER DATABASE OPEN;
2). DROP the tablespace to which the datafile belongs
sql> DROP TABLESPACE test INCLUDING CONTENTS;
3). Stop and start the database to verify that the database can be started without 'test.dbf'.
sql> SHUTDOWN IMMEDIATE;
sql> STARTUP;
Important note
1) The term 'OFFLINE DROP' is misleading; it is not possible to drop a datafile with this command. The only purpose of this command is to start up a database with a damaged or missing datafile when the database is in NOARCHIVELOG mode.
2) The command 'ALTER DATABASE DATAFILE ... OFFLINE DROP' changes the datafile status from ONLINE to RECOVER. In this mode, the database can be started even if the datafile is not present.
3) If space management (e.g. CREATE TABLE ...) occurs for this datafile, Oracle will try to allocate space in this 'dropped' datafile and fail.
4) The only way to drop a datafile is to drop the tablespace to which the datafile belongs. Unfortunately, you will lose data if you don't have a current backup.
FLASHBACK QUERY
—————————–
CREATE TABLE flashback_query_test (
  id  NUMBER(10)
);

SELECT current_scn, TO_CHAR(SYSTIMESTAMP, 'YYYY-MM-DD HH24:MI:SS') FROM v$database;

CURRENT_SCN TO_CHAR(SYSTIMESTAM
----------- -------------------
     722452 2004-03-29 13:34:12

INSERT INTO flashback_query_test (id) VALUES (1);
COMMIT;

SELECT COUNT(*) FROM flashback_query_test;

  COUNT(*)
----------
         1

SELECT COUNT(*) FROM flashback_query_test AS OF TIMESTAMP TO_TIMESTAMP('2004-03-29 13:34:12', 'YYYY-MM-DD HH24:MI:SS');

  COUNT(*)
----------
         0

SELECT COUNT(*) FROM flashback_query_test AS OF SCN 722452;

  COUNT(*)
----------
         0
FLASHBACK VERSION QUERY
——————————————–
CREATE TABLE flashback_version_query_test (
  id           NUMBER(10),
  description  VARCHAR2(50)
);

INSERT INTO flashback_version_query_test (id, description) VALUES (1, 'ONE');
COMMIT;

SELECT current_scn, TO_CHAR(SYSTIMESTAMP, 'YYYY-MM-DD HH24:MI:SS') FROM v$database;

CURRENT_SCN TO_CHAR(SYSTIMESTAM
----------- -------------------
     725202 2004-03-29 14:59:08

UPDATE flashback_version_query_test SET description = 'TWO' WHERE id = 1;
COMMIT;
UPDATE flashback_version_query_test SET description = 'THREE' WHERE id = 1;
COMMIT;

SELECT current_scn, TO_CHAR(SYSTIMESTAMP, 'YYYY-MM-DD HH24:MI:SS') FROM v$database;

CURRENT_SCN TO_CHAR(SYSTIMESTAM
----------- -------------------
     725219 2004-03-29 14:59:36

COLUMN versions_startscn FORMAT 99999999999999999
COLUMN versions_starttime FORMAT A24
COLUMN versions_endscn FORMAT 99999999999999999
COLUMN versions_endtime FORMAT A24
COLUMN versions_xid FORMAT A16
COLUMN versions_operation FORMAT A1
COLUMN description FORMAT A11
SET LINESIZE 200

SELECT versions_startscn, versions_starttime,
       versions_endscn, versions_endtime,
       versions_xid, versions_operation,
       description
FROM   flashback_version_query_test
       VERSIONS BETWEEN TIMESTAMP TO_TIMESTAMP('2004-03-29 14:59:08', 'YYYY-MM-DD HH24:MI:SS')
       AND TO_TIMESTAMP('2004-03-29 14:59:36', 'YYYY-MM-DD HH24:MI:SS')
WHERE  id = 1;

 VERSIONS_STARTSCN VERSIONS_STARTTIME          VERSIONS_ENDSCN VERSIONS_ENDTIME         VERSIONS_XID     V DESCRIPTION
------------------ ------------------------ ------------------ ------------------------ ---------------- - -----------
            725212 29-MAR-04 02.59.16 PM                                                02001C0043030000 U THREE
            725209 29-MAR-04 02.59.16 PM                725212 29-MAR-04 02.59.16 PM    0600030021000000 U TWO
                                                        725209 29-MAR-04 02.59.16 PM                       ONE
SELECT versions_startscn, versions_starttime,
       versions_endscn, versions_endtime,
       versions_xid, versions_operation,
       description
FROM   flashback_version_query_test
       VERSIONS BETWEEN SCN 725202 AND 725219
WHERE  id = 1;

 VERSIONS_STARTSCN VERSIONS_STARTTIME          VERSIONS_ENDSCN VERSIONS_ENDTIME         VERSIONS_XID     V DESCRIPTION
------------------ ------------------------ ------------------ ------------------------ ---------------- - -----------
            725212 29-MAR-04 02.59.16 PM                                                02001C0043030000 U THREE
            725209 29-MAR-04 02.59.16 PM                725212 29-MAR-04 02.59.16 PM    0600030021000000 U TWO
                                                        725209 29-MAR-04 02.59.16 PM                       ONE
FLASHBACK TRANSACTION QUERY
—————————————————-
SELECT xid, operation, start_scn,commit_scn, logon_user, undo_sql
FROM   flashback_transaction_query
WHERE  xid = HEXTORAW('0600030021000000');

XID              OPERATION                         START_SCN COMMIT_SCN
---------------- -------------------------------- ---------- ----------
LOGON_USER
------------------------------
UNDO_SQL
----------------------------------------------------------------------------------------------------
0600030021000000 UPDATE                               725208     725209
SCOTT
update "SCOTT"."FLASHBACK_VERSION_QUERY_TEST" set "DESCRIPTION" = 'ONE' where ROWID = 'AAAMP9AAEAAAA
AYAAA';

0600030021000000 BEGIN                                725208     725209
SCOTT

XID              OPERATION                         START_SCN COMMIT_SCN
---------------- -------------------------------- ---------- ----------
LOGON_USER
------------------------------
UNDO_SQL
----------------------------------------------------------------------------------------------------
2 rows selected.
FLASHBACK TABLE
—————————–
Flashback Table requires the following privileges:
1) The FLASHBACK ANY TABLE system privilege, or the FLASHBACK object privilege on the table
2) SELECT, INSERT, DELETE and ALTER privileges on the table
3) Row movement must be enabled on the table
CREATE TABLE flashback_table_test (
  id  NUMBER(10)
);

ALTER TABLE flashback_table_test ENABLE ROW MOVEMENT;

SELECT current_scn FROM v$database;

CURRENT_SCN
-----------
     715315

INSERT INTO flashback_table_test (id) VALUES (1);
COMMIT;

SELECT current_scn FROM v$database;

CURRENT_SCN
-----------
     715340

FLASHBACK TABLE flashback_table_test TO SCN 715315;

SELECT COUNT(*) FROM flashback_table_test;

  COUNT(*)
----------
         0

FLASHBACK TABLE flashback_table_test TO SCN 715340;

SELECT COUNT(*) FROM flashback_table_test;

  COUNT(*)
----------
         1
FLASHBACK DATABASE
———————————–
The database must be in ARCHIVELOG mode and flashback must be enabled to perform this. When placed in flashback mode, we can observe flashback logs being generated in the flash recovery area.
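Setting up these prerequisites (assuming a flash recovery area is already configured and the database is not yet in ARCHIVELOG mode; the retention value is only illustrative) might look like:

```sql
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
-- Retention target in minutes; 1440 = 24 hours
ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET = 1440;
ALTER DATABASE FLASHBACK ON;
ALTER DATABASE OPEN;

-- Verify
SELECT flashback_on FROM v$database;
```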
-- Create a dummy table.
CONN scott/tiger
CREATE TABLE flashback_database_test (
  id  NUMBER(10)
);

-- Flashback 5 minutes.
CONN sys/password AS SYSDBA
SHUTDOWN IMMEDIATE
STARTUP MOUNT EXCLUSIVE
FLASHBACK DATABASE TO TIMESTAMP SYSDATE-(1/24/12);
ALTER DATABASE OPEN RESETLOGS;

-- Check that the table is gone.
CONN scott/tiger
DESC flashback_database_test
The following forms of the command can also be used with Flashback Database:
FLASHBACK DATABASE TO TIMESTAMP my_date;
FLASHBACK DATABASE TO BEFORE TIMESTAMP my_date;
FLASHBACK DATABASE TO SCN my_scn;
FLASHBACK DATABASE TO BEFORE SCN my_scn;
Hi Friends,
I want to present some information regarding the occurrence of the ORA-04030 error on a 32-bit Windows platform.
Whenever we observe ORA-04030 along with ORA-600 [723], it may have occurred due to the following reason:
On 32-bit Windows, a process can by default address only 2GB of user-mode memory (the 4GB virtual address space is split into 2GB for the process and 2GB for the kernel). So whenever your Oracle process memory tries to grow beyond this 2GB limit, you will observe ORA-600 and ORA-04030 errors.
The following are the workarounds to this problem:
1. Reduce system SGA
or
2. Reduce PGA/UGA usage (e.g. SORT_AREA_SIZE)
Note: The above workarounds may lead to performance problems, as you are decreasing memory sizes
The solutions would be:
1) Increase memory addressability from 2GB to 3GB by adding the /3GB switch to the boot.ini entry (Start -> Run -> msconfig -> boot.ini), changing
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /NoExecute=alwaysoff
to
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /NoExecute=OptIn /3GB
or
2) Upgrade to 64-bit operating system
Workaround: a temporary fix to get the system back to normal immediately
Solution: a permanent fix to the problem
If you are running in 9i or above, there are two parameters (not one) which govern whether or not you are capable of running in shared server mode.
DISPATCHERS governs whether a dispatcher runs. You have to have at least one of them configured before shared server is possible. However, the mere fact that a dispatcher runs does NOT mean your database is running in shared server mode. There also have to be shared server processes capable of handling the work dispatched by the dispatcher(s), and those are configured with the SHARED_SERVERS parameter. If that's set to any number greater than zero, you have shared server processes running on your instance. But that STILL doesn't mean you're running in shared server mode! If you have SHARED_SERVERS=57 and no dispatcher, you simply have 57 processes sitting around doing nothing whatsoever (and incapable of doing useful work!)
In short, you have to have DISPATCHERS and SHARED_SERVERS set.
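A minimal configuration would be something like the following (the protocol, dispatcher count, and shared server count are only illustrative; size them for your workload):

```sql
ALTER SYSTEM SET DISPATCHERS = '(PROTOCOL=TCP)(DISPATCHERS=2)';
ALTER SYSTEM SET SHARED_SERVERS = 5;

-- Check what is actually running
SELECT name, status FROM v$dispatcher;
SELECT name, status FROM v$shared_server;
```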
Note, for example, that 10g configures a single dispatcher for all databases by default (if they're created with DBCA and you don't get in there to stop it happening), but it does NOT configure SHARED_SERVERS, so by default a 10g database does not run in shared server mode.
The other thing I’d clarify is that a database doesn’t really run in shared server mode anyway! The fact that your instance has a dispatcher and shared server processes running doesn’t necessarily mean your users will end up connected to the dispatcher and having shared server processes handling their job requests. They will by default, but if the tnsnames.ora they use to connect (or its centralised equivalent) contains the line SERVER=DEDICATED, then they will get to use dedicated server processes, no matter what the dispatcher or shared server processes might think about it!
With dispatchers and shared server processes configured, in other words, an instance can “support shared server connection requests”. That’s rather different than “running in shared server mode”. The distinction is important because privileged actions (startup, shutdown, backup and recover commands) cannot be processed by a shared server process, so it’s important for an instance that is configured for normal users to use shared server processes to still support the connection to dedicated server processes by suitably credentialled users.
If a user does end up connected to a shared server process, there is usually a performance penalty to pay compared to using a dedicated server process. A user submits a query and instead of it being immediately processed by a server process, it gets submitted to a dispatcher …which promptly sticks it on a job queue! You then have to wait for a shared server process to become free and decide to pick your job off the queue. That’s inevitably slower than doing it the dedicated way.
People use shared server as the first line of scaling up their databases… and you’re right that it primarily depends on the number of users connected to the server concurrently. In dedicated server mode, a new connection means a new process gets spawned (or a new thread on Windows) and a new connection socket is opened. Servers can only handle so many connection sockets, processes or threads before they start to keel over under the strain. Shared server, as the name suggest, means that new connections do not cause new server processes to be spawned. So 300 users can be processed with, maybe, 30 or 40 processes in total. If your box would normally keel over handling 300 dedicated connections, then clearly with that sort of sharing ratio, you’d be able to scale to nearer 3000 users before it starts wilting by using shared processes.
But it’s also a bit subtler than that: a data warehouse would be daft to implement shared server, even if it did have 300+ concurrent users. That’s because the users of such systems typically run queries that run for hours… and a shared process that is nabbed to perform one job for hours on end isn’t really a shared process any more, is it?! So the rule of thumb as to when to implement shared server is yes, (a) when your concurrent user count starts reaching levels that your server just doesn’t seem able to sustain any more AND (b) when you can be sure that the users tend to issue short, sharp queries -say, about 3 seconds or so to process, max.
Again, there are mixed states to get through, too. You might have lots of OLTP-type sub-3-second transactions on the same database on which one or two users regularly run big reports. In that case, you make sure the reporters have a tnsnames.ora that says SERVER=DEDICATED and the OLTP-type people use one that has SERVER=SHARED in it; configure the DISPATCHERS and SHARED_SERVERS parameters for the instance and then those that can benefit from shared servers can do so and those that wouldn't won't be stealing shared processes from those that can!
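For example, the reporters' tnsnames.ora entry might look like this (the alias, host, and service name are hypothetical; the OLTP users' entry would be identical except for SERVER=SHARED):

```
REPORTS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
      (SERVER = DEDICATED)
    )
  )
```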
The alternative approach for those with more cash is to go and buy better server hardware that can cope with the user community numbers! Shared Server configuration, however, comes free. You pays your money and you takes your choices!
Windows 2000 Server / Oracle 8i
Everything worked for 2 years without problems... no one did anything to the system... no new installs or anything.
After creating a new listener... this listener works and the Windows service is running. But only for 3-5 minutes... after that, the service is stopped.
If I try to start the service, the system says "Error 3: the system can't find the file ...".
If I run tnslsnr from cmd, the listener works for another 3-5 minutes and stops... and no "can't find file" error is shown.
I checked log file .. no error …
I checked trc file :
nsglhe: exit
nsevwait: entry
nsevwait: 3 registered connection(s)
nsevwait: 0 added to NT list for 0x8
nsevwait: 2 added to NT list for 0x8
nsevwait: 3 added to NT list for 0x2
nsevwait: 0 pre-posted event(s)
nsevwait: waiting for transport event (0 thru 3)…
ntctst: size of NTTEST list is 2 – calling poll
ntctspoll: Testing for 1 cxds on protocol Windows NT Named Pipes NT Protocol Adapter
ntctspoll: Testing for 2 cxds on protocol select
sntseltst: Testing for CONNECTIONS on socket 600
sntseltst: Testing for DATA on socket 576
The trace ends with the lines above. I tried to run it a few times and it always ends with those lines.
Is it possible that this is a network adapter problem or a disk error? I'm just guessing.
How can I see what is wrong with the listener and why it stopped? Setting TRACE to ADMIN or SUPPORT didn't tell me anything.
Running lsnrctl status or start shows standard errors; the 32-bit Windows error says "unknown error". So now the question would be: why was the listener running for 5 minutes and then stopping (when running it manually)?

The thing that went wrong is that Automatic Updates was ON and Service Pack 4 was installed. This made the listener stop working.
Sometimes OS-level patches will also affect Oracle services.
During a SELECT statement, we get the error ORA-01578: ORACLE data block corrupted (file # 1, block # 3840).
It tells us that the corruption took place in file #1, which is always a SYSTEM tablespace file. This may mean that the corruption can only be resolved by recreating the database followed by a full database import. If that is the case, do a full database export with the database unavailable to users, immediately followed by the database recreation and import, in order to prevent loss of data. But first find out exactly which object is corrupted; it may be a rollback segment or an index that can simply be recreated. Let's see how we can find this out.
Connect to a DBA account:
SELECT segment_type, segment_name
  FROM dba_extents
WHERE file_id = 1
   AND block_id <= 3840
   AND block_id + blocks > 3840;

SEGMENT_TYPE      SEGMENT_NAME
----------------- ----------------------------------------
INDEX             I_SOURCE1
or use:
SELECT segment_type, segment_name
  FROM dba_extents
WHERE file_id = 1
   AND 3840 BETWEEN block_id AND block_id + blocks -1;
SEGMENT_TYPE      SEGMENT_NAME
----------------- ----------------------------------------
INDEX             I_SOURCE1
This is a simulated example; although file #1 ALWAYS belongs to the SYSTEM tablespace, block #3840 could be anything. Lower block numbers in file #1 are likely bootstrap segments. In this case I_SOURCE1 is corrupted; this is a SYS index that cannot be dropped, so only recreating the database will resolve the problem. If the corrupted object is a table, it must be understood that the data within the corrupted blocks is lost. You can try to save as much of the data as possible before the bad block and create a new table around the corrupted part. Get the ROWIDs around the corrupted block with:
SELECT ROWIDTOCHAR(rowid) FROM <table-name>;
The ALTER SYSTEM SUSPEND statement suspends a database by halting all input and output (I/O) to datafiles (file header and file data) and control files, thus allowing a database to be backed up without I/O interference. When the database is suspended all preexisting I/O operations are allowed to complete and any new database accesses are placed in a queued state.
The suspend command suspends the database, and is not specific to an instance. Therefore, in an Oracle Real Application Clusters environment, if the suspend command is entered on one system, then internal locking mechanisms will propagate the halt request across instances, thereby quiescing all active instances in a given cluster. However, do not start a new instance while you suspend another instance, since the new instance will not be suspended.
Use the ALTER SYSTEM RESUME statement to resume normal database operations. You can specify the SUSPEND and RESUME from different instances. For example, if instances 1, 2, and 3 are running, and you issue an ALTER SYSTEM SUSPEND statement from instance 1, then you can issue a RESUME from instance 1, 2, or 3 with the same effect.
The suspend/resume feature is useful in systems that allow you to mirror a disk or file and then split the mirror, providing an alternative backup and restore solution. If you use a system that is unable to split a mirrored disk from an existing database while writes are occurring, then you can use the suspend/resume feature to facilitate the split.
The suspend/resume feature is not a suitable substitute for normal shutdown operations, however, since copies of a suspended database can contain uncommitted updates.
Do not use the ALTER SYSTEM SUSPEND statement as a substitute for placing a tablespace in hot backup mode. Precede any database suspend operation by an ALTER TABLESPACE BEGIN BACKUP statement.
The following statements illustrate ALTER SYSTEM SUSPEND/RESUME usage. The V$INSTANCE view is queried to confirm database status.
SQL> ALTER SYSTEM SUSPEND;
System altered.
SQL> SELECT DATABASE_STATUS FROM V$INSTANCE;
DATABASE_STATUS
-----------------
SUSPENDED
SQL> ALTER SYSTEM RESUME;
System altered.
SQL> SELECT DATABASE_STATUS FROM V$INSTANCE;
DATABASE_STATUS
-----------------
ACTIVE

Problem
If you try to truncate a table on a remote database, you will get the following error:
ORA-02021: DDL operations are not allowed on a
           remote database.
You have a database link to the remote database, so you can see objects there and execute code (e.g. procedures, functions, packages). The solution is to create the following procedure on the remote database, then execute it from the local one.
CREATE OR REPLACE PROCEDURE
Truncate_Remote_Table(p_table_name VARCHAR2) AS
/*
   Procedure Name: Truncate_Remote_Table
   Purpose: To truncate a table on a remote database.
            This procedure is created on the remote
            database and executed from the local one via
            a database link, passing in the name of a
            table that exists on the remote database.
*/
   v_sql_error_code    PLS_INTEGER;
   v_sql_error_message VARCHAR2(512);
BEGIN
   EXECUTE IMMEDIATE 'TRUNCATE TABLE ' || p_table_name;
EXCEPTION
   WHEN OTHERS THEN
      v_sql_error_code    := SQLCODE;
      v_sql_error_message := SQLERRM(v_sql_error_code);
      DBMS_OUTPUT.ENABLE(5000);
      DBMS_OUTPUT.PUT_LINE('OTHER ERROR');
      DBMS_OUTPUT.PUT_LINE(v_sql_error_message);
END Truncate_Remote_Table;
/
To execute the procedure, use the following from the local database:
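The call itself is missing from the original post; a minimal sketch, where the database link name (remote_db) and table name (MY_TABLE) are placeholders for your own:

```sql
-- remote_db and MY_TABLE are hypothetical names; substitute
-- your actual database link and remote table name.
EXEC Truncate_Remote_Table@remote_db('MY_TABLE')
```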

Top-N queries
Suppose you want to retrieve and sort information on the 4 most recently hired employees in a very efficient way. This can be achieved using an inline view combined with ORDER BY and ROWNUM.
Inline Views
An inline view is a subquery placed in the FROM clause. Unlike a regular subquery, it may contain an ORDER BY clause. The ROWNUM pseudocolumn is then used in the outer query to limit the ordered rows and complete the Top-N SQL query.
Example
We need to retrieve and sort information on the 4 most recently hired employees from the following list.
SELECT empno,ename,TO_CHAR(hiredate,'DD.MM.YYYY') "hiredate"
 FROM emp
ORDER BY hiredate DESC;
     EMPNO ENAME      hiredate
---------- ---------- ----------
      7876 ADAMS      12.01.1983
      7788 SCOTT      09.12.1982
      7934 MILLER     23.01.1982
      7900 JAMES      03.12.1981
      7902 FORD       03.12.1981
      7839 KING       17.11.1981
      7654 MARTIN     28.09.1981
      7844 TURNER     08.09.1981
      7782 CLARK      09.06.1981
      7698 BLAKE      01.05.1981
      7566 JONES      02.04.1981
      7521 WARD       22.02.1981
      7499 ALLEN      20.02.1981
      7369 SMITH      17.12.1980
A first approach is to use the following query, which does not return the Top-4 rows!
SELECT empno,ename,hiredate
  FROM emp
 WHERE ROWNUM < 5
ORDER BY hiredate DESC;
     EMPNO ENAME      HIREDATE
---------- ---------- ----------
      7566 JONES      02.04.1981
      7521 WARD       22.02.1981
      7499 ALLEN      20.02.1981
      7369 SMITH      17.12.1980
The solution is to use an inline view with an ORDER BY clause, and a ROWNUM condition in the outer query to complete the Top-N SQL query.
SELECT *
  FROM (SELECT empno,ename,hiredate
          FROM emp
        ORDER BY hiredate DESC)
WHERE ROWNUM < 5;
     EMPNO ENAME      HIREDATE
---------- ---------- ----------
      7876 ADAMS      12.01.1983
      7788 SCOTT      09.12.1982
      7934 MILLER     23.01.1982
      7900 JAMES      03.12.1981
The purpose of this query is to retrieve and sort information on the 4 most recently hired employees. A Top-N query like this is more efficient than a regular query: because the outer WHERE clause contains ROWNUM < 5, Oracle only has to keep track of the top 4 rows as the data is retrieved, instead of sorting all of the rows in the table at once. Cool, isn't it?
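As a side note, the same Top-N result can be written with the ROW_NUMBER analytic function; a sketch against the same EMP table:

```sql
-- ROW_NUMBER assigns 1, 2, 3, ... in descending hiredate
-- order inside the inline view; the outer query keeps the
-- first four.
SELECT empno, ename, hiredate
  FROM (SELECT empno, ename, hiredate,
               ROW_NUMBER() OVER (ORDER BY hiredate DESC) AS rn
          FROM emp)
 WHERE rn < 5
 ORDER BY rn;
```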

A tablespace that manages its own extents maintains a bitmap in each datafile to keep track of the free or used status of blocks in that datafile. Each bit in the bitmap corresponds to a block or a group of blocks. When an extent is allocated or freed for reuse, Oracle changes the bitmap values to show the new status of the blocks. These changes do not generate rollback information because they do not update tables in the data dictionary. 
Local management of extents automatically tracks adjacent free space, eliminating the need to coalesce free extents. The sizes of extents that are managed locally can be determined automatically by the system. Alternatively, all extents can have the same size in a locally-managed tablespace.
A tablespace that manages its extents locally can have either uniform extent sizes or variable extent sizes that are determined automatically by the system. When you create the tablespace, the UNIFORM or AUTOALLOCATE (system-managed) option specifies the type of allocation.
For system-managed extents, you can specify the size of the initial extent and Oracle determines the optimal size of additional extents, with a minimum extent size of 64 KB. This is the default for permanent tablespaces.
For uniform extents, you can specify an extent size or use the default size, which is 1 MB. Temporary tablespaces that manage their extents locally can only use this type of allocation.
The storage parameters NEXT, PCTINCREASE, MINEXTENTS, MAXEXTENTS, and DEFAULT STORAGE are not valid for extents that are managed locally. Note that in releases prior to Oracle9i Release 2 you cannot create a locally managed SYSTEM tablespace. Locally managed temporary tablespaces must be of type "temporary" (not "permanent").
Advantages
Local management of extents avoids recursive space management operations, which can occur in dictionary-managed tablespaces if consuming or releasing space in an extent results in another operation that consumes or releases space in a rollback segment or data dictionary table. Local management of extents automatically tracks adjacent free space, eliminating the need to coalesce free extents.
Example:
1. An insert causes a request for an extent.
2. Oracle allocates space in the data tablespace.
3. If the tablespace is dictionary-managed, this causes an update of system tables in the data dictionary.
4. Consequently, an update is made to the redo log.
So a large number of inserts into a tablespace with a small extent size may cause many I/Os to the SYSTEM tablespace and, consequently, to the redo log files. Likewise, large sorts in otherwise read-only databases may cause many I/Os to the log files, because temporary tablespace extent allocation updates the SYSTEM tablespace.
How to calculate the size of locally managed tablespaces ?
When creating tablespaces with a uniform extent size it is important to understand that 64 Kbytes per datafile is allocated for the storage management information. When creating database files, add an additional 64 Kbytes to the size of your datafile.
Consider the following example to illustrate the matter:
SQL> CREATE TABLESPACE demo1
     DATAFILE 'D:\Oradata\SAP1\tab\SAP1_demo1.dbf' SIZE 10M
     EXTENT MANAGEMENT LOCAL UNIFORM SIZE 5M;
Tablespace created.
SQL> select bytes from dba_free_space where
     TABLESPACE_NAME = 'DEMO1';
     BYTES
----------
   5242880
What happens here is that we ask for 5 Mbyte extents in a 10 Mbyte file. After 64 Kbytes are allocated for the bitmap, we are left with one 5 Mbyte extent and one extent of slightly less than 5 Mbytes. We cannot use the smaller extent, so it does not show up: it is wasted. This can also happen with larger uniform extents, when the remaining space in the datafile is just 64 Kbytes short of accommodating your uniform extent size.
SQL> drop TABLESPACE demo1;
Tablespace dropped.
If you change the test case to allow for the extra 64 Kbytes:
SQL> CREATE TABLESPACE demo1
     DATAFILE 'D:\Oradata\SAP1\tab\SAP1_demo1.dbf' SIZE 10304K
     EXTENT MANAGEMENT LOCAL UNIFORM SIZE 5M;
Tablespace created.
SQL> select bytes from dba_free_space where
     TABLESPACE_NAME = 'DEMO1';
     BYTES
----------
  10485760
You can see that when we add 64 Kbytes to the datafile size, the full 2 extents we want are there. When using uniform sizing, locally managed tablespaces should have datafiles that are 64 Kbytes LARGER than a multiple of their extent size.
Sizing LMT
More and more we are using locally managed tablespaces. They offer a number of benefits, so why not use this feature?
Some thought is needed when you decide to use uniform extent allocation. With the uniform method, you specify an extent size when you create the tablespace, and all extents for all objects created within that tablespace will be that size.
The uniform method also provides an enforcement mechanism, because you can’t override the uniform extent size of locally managed tablespaces when you create a schema object such as a table or an index.
Calculate the Size of Tablespaces
The goal is to allocate only as much disk space as is really needed and really used. With uniform extent allocation you can calculate, or at least estimate, the number of extents you want to allocate. Gaps and unused disk space within the tablespace should be avoided.
Let's assume that we create a tablespace with a uniform extent size of 1 MByte and 10 extents. Remember that a locally managed tablespace uses another 64 KBytes for the header bitmap:
10 * 1 * 1024K + 64K = 10304K
Note that all calculations are made in KBytes and that your chosen extent size must be a multiple of your defined block size. The following statement creates this locally managed tablespace with a uniform extent size of 1 MByte:
CREATE TABLESPACE uni_test
  DATAFILE 'C:\Oradata\ASU1\tab\uni_test.dbf'
  SIZE 10304K
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1024K;
Check the Size and Number of Extents
Now every object created within the newly created tablespace gets its uniform extent size of 1 MByte:
CREATE TABLE tab_1 (
  num NUMBER
  ) TABLESPACE uni_test;
CREATE TABLE tab_2 (
  num NUMBER,
  text VARCHAR2(255)
  ) TABLESPACE uni_test
  STORAGE
      (INITIAL       100K
       NEXT          100K
       MINEXTENTS    1
       MAXEXTENTS    UNLIMITED
       PCTINCREASE   0);
CREATE TABLE tab_3 (
  num NUMBER,
  text VARCHAR2(255),
  create_date DATE
  ) TABLESPACE uni_test
  STORAGE
      (MINEXTENTS    2
       MAXEXTENTS    UNLIMITED
       PCTINCREASE   0);
If you include a STORAGE clause when you create tables or indexes, Oracle allocates as many extents as you indicate. Table TAB_1 is allocated with one extent; so is table TAB_2, because its requested 100 KBytes fit into a single 1 MByte extent. Table TAB_3 is created with two extents (MINEXTENTS 2); the same could also be achieved by defining an INITIAL value of 2 MBytes.
The allocated blocks and extents can be verified using the view DBA_SEGMENTS:
SELECT segment_name, segment_type, blocks, extents
  FROM dba_segments
 WHERE owner = 'TEST'
 ORDER BY EXTENTS
/
SEGMENT_NAME         SEGMENT_TYPE           BLOCKS    EXTENTS
-------------------- ------------------ ---------- ----------
TAB_1                TABLE                     256          1
TAB_2                TABLE                     256          1
TAB_3                TABLE                     512          2
The free space in the tablespace UNI_TEST can be verified using the view DBA_FREE_SPACE:
SELECT tablespace_name, bytes, blocks
  FROM dba_free_space
 WHERE tablespace_name = 'UNI_TEST'
/
TABLESPACE_NAME                     BYTES     BLOCKS
------------------------------ ---------- ----------
UNI_TEST                          6291456       1536
That means there are still 1536 blocks available in the tablespace UNI_TEST. How many extents is that? It can be calculated by multiplying the number of available blocks by the block size and dividing by the extent size:
1536 * 4K / 1024K = 6 extents
That fits with our calculations and verifications: 4 extents are already used, and another 6 extents could be used to fill up the whole tablespace.
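The same number can be derived directly from DBA_FREE_SPACE; a sketch, assuming the 1 MByte uniform extent size used above:

```sql
-- Free bytes divided by the uniform extent size (1 MByte)
-- gives the number of extents still available.
SELECT SUM(bytes) / (1024 * 1024) AS free_extents
  FROM dba_free_space
 WHERE tablespace_name = 'UNI_TEST';
-- with 6291456 bytes free: 6291456 / 1048576 = 6 extents
```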
Check the File Size
If you check the physical file size used for the tablespace UNI_TEST, you may be surprised: instead of the calculated 10304 KBytes (10,551,296 bytes) you will find a disk file size of 10,555,392 bytes. Oracle allocates one additional block, which cannot be used for object allocation. Some Oracle tools, such as the Tablespace Manager, show the total number of blocks according to the disk file size. In our example this is 2577 blocks, but only 2576 blocks minus 64 KBytes (for the header bitmap) are usable.
Summary
Keep the following rules in mind when sizing tablespaces:
Each extent size is a multiple of your defined block size.
The usable tablespace size is a multiple of your estimated number of extents.
The tablespace size defined in the CREATE TABLESPACE statement adds 64 KBytes for the header bitmap (HB) to the usable tablespace size.
The physical file size adds one block (AB) to the defined tablespace size.
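The sizing rule for a uniform-extent datafile can be checked with a one-line calculation; a sketch for the 10-extent, 1 MByte example above:

```shell
# Datafile size for 10 uniform extents of 1024K each,
# plus 64K for the locally managed tablespace's header bitmap.
echo "$(( 10 * 1024 + 64 ))K"   # prints 10304K
```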
Sometimes we may get a requirement to move an Oracle home (whatever the version may be) to a different location.
Here are the steps to do the same on a Windows machine:
1. Stop all database related services running from this Oracle home
2. Take a cold backup of your database
3. Uninstall Oracle from the present drive using OUI
It is better to restart the system once, so as to clear registry entries and directories
4. Install again the same version of Oracle in the drive which you want
5. Create the service for the database using ORADIM utility and start it
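Step 5 might look like the following; the SID and password shown are placeholders, not values from the original post:

```
C:\> oradim -NEW -SID PROD -SYSPWD change_me -STARTMODE auto
C:\> oradim -STARTUP -SID PROD
```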
Hi Friends,

Just now I found out a very interesting concept in the architecture.

When an instance starts, the background processes related to reading/writing the database (DBWR and LGWR) read the redo log files and data files while opening the database. This actually dates back to Oracle 6.0, which had no CKPT process, so LGWR performed the checkpoints itself. Even though Oracle later changed this and introduced CKPT, in 10g LGWR still reads the datafiles while opening the database.

For more information read following link.
http://forums.oracle.com/forums/thread.jspa?messageID=2631627&#2631627
If you are using the Standard Edition of the Oracle Database and want to move to the Enterprise Edition, then complete the following steps:
  • Ensure that the release number of your Standard Edition server software is the same as that of the Enterprise Edition server software. For example, if your Standard Edition server software is release 10.2.0, then you should upgrade to release 10.2.0 of the Enterprise Edition. 
  • Shut down your database. 
  • If your operating system is Windows, then stop all Oracle services, including the OracleServiceSID Oracle service, where SID is the instance name. 
  • Deinstall the Standard Edition server software.
  • Install the Enterprise Edition server software using the Oracle Universal Installer. Select the same Oracle home that was used for the deinstalled Standard Edition. During the installation, be sure to select the Enterprise Edition. When prompted, choose Software Only from the Database Configuration screen. 
  • Start up your database.
Your database is now upgraded to the Enterprise Edition.
Note:
If you have a Standard Edition database at a release prior to Oracle Database 10g, you can change it to an Enterprise Edition database by first installing the Enterprise Edition and then following the normal upgrade procedures, as described in this manual.
Moving an existing Oracle9i or Oracle Database 10g system from manual undo management (using rollback segments) to automatic undo management requires a shutdown and restart: you must shut down the database, set the UNDO_MANAGEMENT parameter to AUTO, and then restart the database, assuming you have created the UNDO tablespace first. To get some guidance on how large to make that UNDO tablespace, you can use a function from the Undo Advisor PL/SQL package created for exactly that purpose, as in this example:
SQL> set serveroutput on;
SQL> declare utbsize_in_MB number;
  2    begin
  3    utbsize_in_MB := DBMS_UNDO_ADV.RBU_MIGRATION;
  4    dbms_output.put_line(TO_CHAR(utbsize_in_MB));
  5    end;
  6    /
187
PL/SQL procedure successfully completed.
In this case, the return value of 187 is the suggested starting size in megabytes for an UNDO tablespace based on current rollback segment utilization in an existing system.
SQL> select * from address;
NAME     VORNAME  BIRTH
-------- -------- -----
zahn     martin   1954
zahn     martin   1954
hodler   kurt     1962
wyss     bruno    1965
Get the Duplicates with:
SELECT name,vorname FROM address
GROUP BY name,vorname
HAVING COUNT(*) > 1;
NAME     VORNAME
-------- --------
zahn     martin

Delete the Duplicates with:

DELETE from address A
WHERE (A.name, A.vorname, A.birth) IN
    (SELECT B.name, B.vorname, B.birth FROM address B
      WHERE A.name = B.name AND A.vorname = B.vorname
        AND A.birth = B.birth AND A.rowid > B.rowid);
1 row deleted.
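An equivalent formulation, keeping the row with the smallest ROWID in each duplicate group; a sketch against the same ADDRESS table:

```sql
-- Any row whose ROWID is larger than the minimum ROWID of
-- its (name, vorname, birth) group is a duplicate.
DELETE FROM address A
 WHERE A.rowid > (SELECT MIN(B.rowid)
                    FROM address B
                   WHERE B.name    = A.name
                     AND B.vorname = A.vorname
                     AND B.birth   = A.birth);
```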
Sometimes we may get the error below while working on the database or when starting it:
ORA-16014: log 1 sequence# 53 not archived, no available destinations
ORA-00312: online log 1 thread 1: 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\PROD\REDO01.LOG'
A possible reason is that a redo log file was about to be archived at the same time the database was shut down, leaving the redo log file in an inconsistent state.
When i tried to take backup using RMAN, i got this error
C:\>rman target /
Recovery Manager: Release 10.2.0.1.0 – Production on Thu Mar 18 12:40:04 2010
Copyright (c) 1982, 2005, Oracle.  All rights reserved.
connected to target database: PROD (DBID=120728071, not open)
RMAN> sql 'alter database open';
using target database control file instead of recovery catalog
sql statement: alter database open
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of sql command on default channel at 03/18/2010 12:41:42
RMAN-11003: failure during parse/execution of SQL statement: alter database open
ORA-16014: log 1 sequence# 53 not archived, no available destinations
ORA-00312: online log 1 thread 1: 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\PROD\REDO01.LOG'
Then i checked if archivelog mode is enabled
C:\>sqlplus “/ as sysdba”
SQL*Plus: Release 10.2.0.1.0 – Production on Thu Mar 18 12:42:13 2010
Copyright (c) 1982, 2005, Oracle.  All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Production
With the Partitioning, OLAP and Data Mining options
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     53
Next log sequence to archive   53
Current log sequence           55
So, archivelog mode is enabled. I then checked the status of the redo log groups to see if any of them were INVALID:
SQL> select group#,status from v$log;
    GROUP# STATUS
---------- ----------------
         1 INACTIVE
         3 CURRENT
         2 INACTIVE
So, we don't have any INVALID groups either. I tried to open the database, but ran into the above-mentioned error.
Then I tried to clear the logfile using the command below:
SQL> alter database clear logfile 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\PROD\REDO01.LOG';
alter database clear logfile 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\PROD\REDO01.LOG'
*
ERROR at line 1:
ORA-00350: log 1 of instance prod (thread 1) needs to be archived
ORA-00312: online log 1 thread 1:
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\PROD\REDO01.LOG'
It wasn't allowed because the log had not yet been archived, so I tried the command below, and it worked:
SQL> alter database clear unarchived logfile 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\PROD\REDO01.LOG';
Database altered.
This command tells Oracle not to archive the file and to clear its contents.
Now the database opens fine:
SQL> alter database open;
Database altered.
SQL> select name,open_mode from v$database;
NAME      OPEN_MODE
--------- ----------
PROD      READ WRITE
If you get a similar or the same error, try clearing the redo log files first.
Recently I faced a typical situation where a user performed a delete on a huge table, then suddenly terminated the delete statement, which led to corruption of one undo segment.
Because of this, other transactions hitting this segment ended up with an error, so I used the procedure below to recover that undo segment.
Note : the procedure I followed may differ from other solutions
Initially I tried to drop the undo tablespace (after creating a new one), but Oracle would not allow it, so I followed the steps below:
1) create a pfile from the spfile (if your database already uses a pfile, take a backup of that file; this step is then not necessary)
2) shutdown database
3) edit the pfile and include the following line
*._corrupt_rollback_segments="name of the segment that is giving the problem"
we can find the corrupt segment name using dba_rollback_segs
4) startup pfile='pfile name'
5) drop the old undo tablespace (remember, I already created a new one and made it the default ;-))
6) shutdown the database
7) startup using spfile (if no spfile used, edit your pfile and remove the parameter which was included in step 3)
Now it should start normally and your work will be smoother.
If you have any more solutions, please post them so that they will be helpful to all.
1. Command to check whether the database is running at the OS level
Windows:
Check whether the service is started in services.msc; you should see something like this.
Note: if you shut down the database at the SQL prompt, the service will still show as "started". See the output below:
C:\>sqlplus “/ as sysdba”
SQL*Plus: Release 10.2.0.1.0 – Production on Mon Feb 15 14:22:22 2010
Copyright (c) 1982, 2005, Oracle.  All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Production
With the Partitioning, OLAP and Data Mining options
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
but services.msc still shows the service as "started".
I tried the above on Windows XP, so results may differ on other versions of Windows.
Unix :
Use the "ps -ef | grep smon" command; the output is:
[oracle@viadbscph003 logs]$ ps -ef | grep smon
oracle   15020     1  0 Feb08 ?        00:01:15 ora_smon_iasdb1
Note: from the above output alone we cannot say that the database is available. We can only say that when we also see output like the following from the ps -ef | grep ora command:
oracle   30670     1  0 10:24 ?        00:00:00 oracleiasdb1 (LOCAL=NO)
oracle   30684     1  0 10:24 ?        00:00:00 oracleiasdb1 (LOCAL=NO)
oracle   31128     1  0 10:25 ?        00:00:00 oracleiasdb1 (LOCAL=NO)
oracle   31348     1  0 Feb14 ?        00:00:00 oracleiasdb1 (LOCAL=NO)
oracle   31405     1  0 Feb12 ?        00:00:00 oracleiasdb1 (LOCAL=NO)
oracle   31505     1  2 10:26 ?        00:00:00 oracleiasdb1 (LOCAL=NO)
oracle   31508     1  0 10:26 ?        00:00:00 oracleiasdb1 (LOCAL=NO)
So, if SMON shows up in the ps output, it means your instance is started for sure.
2. How to check how many databases exist on the server (counting databases even if they are down)?
Windows:
Check services.msc, which will show all the database services. If a database is up and running, its service shows as "started"; otherwise, it shows nothing.
Unix:
We can get information on all up-and-running DBs using the ps -ef | grep smon command, but to also get the list of shut-down databases we should use the oratab file, which exists in either the /etc or /var directory.
Note: if a database was created manually and its information was never added to oratab, we are going to miss it in the count.
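Counting the entries in oratab can be scripted; a sketch, using a sample file here since the real path varies by platform (/etc/oratab on Linux, for example):

```shell
# Sketch: count the instances listed in an oratab file.
# /tmp/oratab.sample stands in for the real /etc/oratab.
cat > /tmp/oratab.sample <<'EOF'
# oratab sample: SID:ORACLE_HOME:autostart_flag
PROD:/u01/app/oracle/product/10.2.0:Y
TEST:/u01/app/oracle/product/10.2.0:N
EOF
# Skip comments and blank lines, take the SID field, count.
grep -vE '^[[:space:]]*(#|$)' /tmp/oratab.sample | cut -d: -f1 | sort -u | wc -l
```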
A user in the database can access another schema's tables even though privileges were not granted to him directly.
To show this, here is the following demo.
step 1 : I created a user u1 and created a table using the emp table of scott
Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.
C:\>sqlplus
SQL*Plus: Release 10.2.0.1.0 – Production on Thu Jan 21 18:17:12 2010
Copyright (c) 1982, 2005, Oracle.  All rights reserved.
Enter user-name: / as sysdba
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Production
With the Partitioning, OLAP and Data Mining options
SQL> create user u1 identified by u1;
User created.
SQL> grant connect,create table to u1;
Grant succeeded.
SQL> alter user u1 quota unlimited on users;
User altered.
SQL> create table u1.x as select * from scott.emp;
Table created.
step 2 : I granted select on this table to public
SQL> conn u1/u1
Connected.
SQL> select * from tab;
TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
X                              TABLE
SQL> grant select on x to public;
Grant succeeded.
step 3 : I created another user and granted only the create session privilege, via the CONNECT role
SQL> conn / as sysdba
Connected.
SQL> create user u2 identified by u2;
User created.
SQL> grant connect to u2;
Grant succeeded.
SQL> select privilege from dba_sys_privs where grantee='U2';
no rows selected
SQL> select privilege from dba_tab_privs where grantee='U2';
no rows selected
SQL> select granted_role from dba_role_privs where grantee='U2';
GRANTED_ROLE
——————————
CONNECT
step 4 : I connected as user u2 and, surprisingly, you can see that I am able to access table X, which belongs to u1
SQL> conn u2/u2
Connected.
SQL> select * from u1.x;
     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM
———- ———- ——— ———- ——— ———- ———-
    DEPTNO
———-
      7369 SMITH      CLERK           7902 17-DEC-80        800
        20
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300
        30
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500
        30
     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM
———- ———- ——— ———- ——— ———- ———-
    DEPTNO
———-
      7566 JONES      MANAGER         7839 02-APR-81       2975
        20
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400
        30
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850
        30
     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM
———- ———- ——— ———- ——— ———- ———-
    DEPTNO
———-
      7782 CLARK      MANAGER         7839 09-JUN-81       2450
        10
      7788 SCOTT      ANALYST         7566 19-APR-87       3000
        20
      7839 KING       PRESIDENT            17-NOV-81       5000
        10
     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM
———- ———- ——— ———- ——— ———- ———-
    DEPTNO
———-
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0
        30
      7876 ADAMS      CLERK           7788 23-MAY-87       1100
        20
      7900 JAMES      CLERK           7698 03-DEC-81        950
        30
     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM
———- ———- ——— ———- ——— ———- ———-
    DEPTNO
———-
      7902 FORD       ANALYST         7566 03-DEC-81       3000
        20
      7934 MILLER     CLERK           7782 23-JAN-82       1300
        10
14 rows selected.
From this we can conclude that any privilege granted to PUBLIC can be used by all users in the database (even though the privileges are not assigned to them directly).
If some privileges are already assigned to PUBLIC in your database, be careful and get confirmation before trying to revoke them.
Note : I tried this on a 10g database; if you want, you can give it a try on 9i ;-)
Many times I have seen questions from DBAs asking how to shrink the undo tablespace. Remember: once undo starts growing, we cannot shrink it back to its original size. All we can do is the following.
Assume you initially created the undo tablespace with 5 GB and it has now grown to 35 GB, and you want to get back to 5 GB. Here is what you can do:
1. create new undo tablespace with 5 GB of size
2. make newly created undo as default using
sql> alter system set undo_tablespace='UNDOTBS_NAME' scope=both;
3. drop the old undo tablespace along with datafiles also
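A sketch of the three steps above; the tablespace names and the datafile path are hypothetical:

```sql
-- 1. Create the new 5 GB undo tablespace.
CREATE UNDO TABLESPACE undotbs2
  DATAFILE '/u01/oradata/PROD/undotbs2_01.dbf' SIZE 5G;

-- 2. Make it the default undo tablespace.
ALTER SYSTEM SET undo_tablespace = 'UNDOTBS2' SCOPE=BOTH;

-- 3. Drop the old one, including its datafiles.
DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;
```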
But you can shrink the temp tablespace, once its extents are deallocated, using
sql> alter database tempfile '<path>' resize <size>;
Suggestion : never place UNDO and TEMP in autoextend-on mode
Let's say I created a new tablespace with 8 datafiles.
SQL> select file_name from dba_data_files where tablespace_name='TESTINDEX';
FILE_NAME
--------------------------------------------------------------------------------
/data1/oradata/TESTINDEX01.dbf
/data1/oradata/TESTINDEX02.dbf
/data1/oradata/TESTINDEX03.dbf
/data1/oradata/TESTINDEX04.dbf
/data1/oradata/TESTINDEX05.dbf
/data1/oradata/TESTINDEX06.dbf
/data1/oradata/TESTINDEX07.dbf
/data1/oradata/TESTINDEX08.dbf
8 rows selected.
I created a table called TEST in this tablespace.
Now, many people think that space (extents) will be allocated from the 1st datafile of this tablespace, and once it is full, from the 2nd, 3rd, and so on. That is not the real picture.
The fact is, when I start inserting data into the table, extents are allocated from all the datafiles in ROUND ROBIN fashion (if you don't know about this, just do a google!).
So, in our example, extents go to all 8 datafiles.
Based on the allocation of extents and the size of the data, an object can therefore reside in all of the datafiles or only some of them.
How can we check in which datafile my object resides?
select a.segment_name, a.file_id, b.file_name datafile_name
  from dba_extents a, dba_data_files b
 where a.file_id = b.file_id
   and a.segment_name = '<YOUR OBJECT NAME>';
How can we check what objects are there in a datafile?
select a.segment_name, a.file_id, b.file_name datafile_name
  from dba_extents a, dba_data_files b
 where a.file_id = b.file_id
   and b.file_name = '<your datafile name with path>';
Here is a detailed explanation from Oracle guru Aman Sharma on hard parsing.
Please join the forums and acquire as much knowledge about the Oracle DB as you can.
We have all read the point in the Oracle documentation that "SMON releases temp segments occupied by completed transactions".
Reading the above statement, many DBAs will think it works like this:
1) a transaction T1 occupies the temp tablespace while sorting data
2) another transaction T2 hits the database, which also requires the temp tablespace
3) SMON releases the temp segments occupied by T1 (after T1 has completed) so that they can be allocated to T2
But in practice this does not happen; it works the following way:
1) T1 hits the database, and Oracle allocates temp segments to it
2) T2 hits the database, and Oracle checks whether T1 has completed its job. If T1 has finished, those temp segments are directly allocated to T2; otherwise, Oracle allocates new temp segments from the remaining free space in the tablespace
Why does Oracle do this?
Because allocation and deallocation of segments is expensive, Oracle chooses to allocate temp segments once and then manage them from that point on.
Because of this you may observe your temp tablespace getting full when users run some big sorts. The actual usage per user can be found using the V$SORT_USAGE view.
you can use below query
*******************************
Temp segment usage per session
*******************************
SELECT S.sid || ',' || S.serial# sid_serial, S.username, S.osuser, P.spid, S.module,
P.program, SUM (T.blocks) * TBS.block_size / 1024 / 1024 mb_used, T.tablespace,
COUNT(*) statements
FROM v$sort_usage T, v$session S, dba_tablespaces TBS, v$process P
WHERE T.session_addr = S.saddr
AND S.paddr = P.addr
AND T.tablespace = TBS.tablespace_name
GROUP BY S.sid, S.serial#, S.username, S.osuser, P.spid, S.module,
P.program, TBS.block_size, T.tablespace
ORDER BY sid_serial;
So, from now on, remember: SMON only deallocates temp segments while the database is shutting down (or for certain other actions); the rest of the time it just maintains them.
Hi buddies, here is one way of changing the database name, which we used to follow prior to the 10g version. Hope the steps will be clear to you.
step 1 : Take the controlfile trace
C:\>sqlplus “/ as sysdba”
SQL*Plus: Release 10.2.0.1.0 – Production on Fri Mar 26 17:03:56 2010
Copyright (c) 1982, 2005, Oracle.  All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Production
With the Partitioning, OLAP and Data Mining options
SQL> select * from v$instance;
INSTANCE_NUMBER INSTANCE_NAME
————— —————-
HOST_NAME
—————————————————————-
VERSION           STARTUP_T STATUS       PAR    THREAD# ARCHIVE LOG_SWITCH_WAIT
—————– ——— ———— — ———- ——- —————
LOGINS     SHU DATABASE_STATUS   INSTANCE_ROLE      ACTIVE_ST BLO
———- — —————– —————— ——— —
              1 prod
NKUMAR56-1
10.2.0.1.0        26-MAR-10 OPEN         NO           1 STARTED
ALLOWED    NO  ACTIVE            PRIMARY_INSTANCE   NORMAL    NO
SQL> alter database backup controlfile to trace;
Database altered.
Note : The trace file will be created in the udump directory. If you don't know the path of udump, simply issue
SQL> show parameter dump
NAME                                 TYPE        VALUE
———————————— ———– ——————————
background_core_dump                 string      partial
background_dump_dest                 string      C:\ORACLE\PRODUCT\10.2.0\ADMIN
                                                 \PROD\BDUMP
core_dump_dest                       string      C:\ORACLE\PRODUCT\10.2.0\ADMIN
                                                 \PROD\CDUMP
max_dump_file_size                   string      UNLIMITED
shadow_core_dump                     string      partial
user_dump_dest                       string      C:\ORACLE\PRODUCT\10.2.0\ADMIN
                                                 \PROD\UDUMP
step 2 : Take the CREATE CONTROLFILE script from the trace file, copy it to a new file and save it with a .sql extension (in my case I saved it as control.sql). In the script, change
1. REUSE to SET
2. the old database name to the new database name
Here I changed PROD to PRODDB
CREATE CONTROLFILE SET DATABASE "PRODDB" NORESETLOGS  ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\PROD\REDO01.LOG'  SIZE 50M,
  GROUP 2 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\PROD\REDO02.LOG'  SIZE 50M,
  GROUP 3 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\PROD\REDO03.LOG'  SIZE 50M
-- STANDBY LOGFILE
DATAFILE
  'C:\ORACLE\PRODUCT\10.2.0\ORADATA\PROD\SYSTEM01.DBF',
  'C:\ORACLE\PRODUCT\10.2.0\ORADATA\PROD\UNDOTBS01.DBF',
  'C:\ORACLE\PRODUCT\10.2.0\ORADATA\PROD\SYSAUX01.DBF',
  'C:\ORACLE\PRODUCT\10.2.0\ORADATA\PROD\USERS01.DBF',
  'C:\ORACLE\PRODUCT\10.2.0\ORADATA\PROD\RMAN_TBS01.DBF'
CHARACTER SET WE8MSWIN1252
;
step 3 : shut down the database cleanly
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
step 4 : startup in NOMOUNT stage
SQL> startup nomount
ORACLE instance started.
Total System Global Area  289406976 bytes
Fixed Size                  1248576 bytes
Variable Size              88081088 bytes
Database Buffers          192937984 bytes
Redo Buffers                7139328 bytes
step 5 : change DB_NAME parameter in spfile
SQL> alter system set db_name=proddb scope=spfile;
System altered.
Note : If you are not using an spfile, then after step 4 edit your pfile and change the value of the DB_NAME parameter. You can use the following command to check whether you are using an spfile:
SQL> show parameter spfile
NAME                                 TYPE        VALUE
———————————— ———– ——————————
spfile                               string      C:\ORACLE\PRODUCT\10.2.0\DB_1\
                                                 DBS\SPFILEPROD.ORA
If you see a value, you are using an spfile; otherwise you are using a pfile.
step 6 : Again shutdown and startup in nomount stage
SQL> shutdown immediate
ORA-01507: database not mounted
ORACLE instance shut down.
SQL> startup nomount
ORACLE instance started.
Total System Global Area  289406976 bytes
Fixed Size                  1248576 bytes
Variable Size              88081088 bytes
Database Buffers          192937984 bytes
Redo Buffers                7139328 bytes
step 7 : remove the old controlfiles from their locations (you can get the locations from the CONTROL_FILES parameter)
step 8 : execute the controlfile script, which creates new controlfiles in the respective locations
SQL> @c:\control.sql
Control file created.
step 9 : open the database with the RESETLOGS option and check the database name; it should show the new name
SQL> alter database open resetlogs;
Database altered.
SQL> select name from v$database;
NAME
———
PRODDB
Let's see the steps to perform a complete recovery of the database if we lose all the controlfiles
  1. Take the trace of controlfile using below command
    sql> alter database backup controlfile to trace;
    Note : The above command works only if the database is still up and running. If not, you need the latest controlfile trace. If that is not available but you still have all the redolog and datafile information, you can take a trace from another database and modify the name, paths and sizes of the redolog files and datafiles
  2. From the controlfile trace, copy the second CREATE CONTROLFILE command (up to and including the character set clause) to another text file and save it with a .sql extension (I generally save it as create_control.sql)
  3. change the RESETLOGS option to NORESETLOGS in that sql file.
  4. sql> shutdown immediate;
  5. sql> startup nomount;
  6. sql> @create_control.sql (your current directory should be the location of this file, or you can give the path before the file name)
    Note : This will create the controlfile and place the database in MOUNT state. If any errors are observed, we need to debug them.
  7. sql> alter database open;
Hope this helps you… :-)
Many DBA learners (including me) assume that an index is a storage unit similar to a table, i.e. that an index stores data in the same row-and-column format as a table.
Let's assume that is true. If the entry I need is stored at the 10,000th position in the index, Oracle would have to scan 9,999
entries before finding the exact one, and only then pick up the rowid. Using this rowid, Oracle would then look into the table to fetch the actual row.
From this we can understand that search time would keep growing as the index stores lakhs or crores of rowids.
If that were the case, we would never say indexes provide data faster :-)
So how are indexes actually stored?
Indexes are stored in a tree structure, with a root value on top, branch nodes in the middle and leaves at the bottom.
An example is shown below.
Here, 122 is the root value. While searching, Oracle compares the value supplied in the WHERE condition with this root value; if it is greater than the
root value, it moves to the right-hand side, and if lower, it moves to the left-hand side.
For example, assume we used below query
SQL> select * from emp where empno=139;
1) Oracle first compares 139 with 122, finds that 139 is greater than 122, and moves to the right-hand side
2) It then compares with 136, finds 139 is still greater, moves to the middle leaf, and there is a HIT!
3) From that leaf it takes the rowid and uses it to look up the actual row in the table
So here we get our result in just 3 comparisons. This is possible because the index is stored in tree format.
Another question: why do indexes become UNUSABLE after a table move?
As already discussed, indexes are based on rowids. During a table move, internally Oracle will
1) create a table with an Oracle-defined name and the same structure as the original, and transfer the rows to the new table
2) once done, drop the original table and rename the new one from the Oracle-defined name to the user-defined name
As a result, the rowid of every row in the table changes, and when the index tries to access the table, the rowids it stored
earlier no longer match. For this reason, Oracle marks the indexes UNUSABLE.
We can make unusable indexes valid only by rebuilding them; internally Oracle re-creates the index during the rebuild.
Sometimes index rebuilding may take time. In such cases you can use the NOLOGGING clause to speed it up, if the database is in archivelog mode.
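To find out which indexes were marked UNUSABLE after a move, and to rebuild one of them, you can use something like the following sketch (the index name in the rebuild is just an example):

```sql
-- list indexes that have become unusable
SELECT owner, index_name, status
FROM   dba_indexes
WHERE  status = 'UNUSABLE';

-- rebuild one of them with minimal redo (example index name)
ALTER INDEX scott.emp_name_idx REBUILD NOLOGGING;
```

Check DBA_IND_PARTITIONS as well if the table is partitioned, since partition-level index segments carry their own status.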
Many a time DBAs get a requirement to move an index-organized table (IOT) to a different tablespace.
If you try to move the table in the usual way using the MOVE command, it fails with the following error:
SQL> alter table protas.SYS_IOT_OVER_67262 move tablespace protasdata;
alter table protas.SYS_IOT_OVER_67262 move tablespace protasdata
                   *
ERROR at line 1:
ORA-25191: cannot reference overflow table of an index-organized table
Here, I tried to move the table SYS_IOT_OVER_67262, which is the overflow table of an IOT. PROTASDATA is the new tablespace name.
When you search Google for the error, you will get the following action:
Action: Issue the statement against the parent index-organized table containing the specified overflow table.
From this we understand that the IOT name is different from the overflow table name. To find the IOT name for this table, use the query below:
SQL> select IOT_NAME from dba_tables where owner='PROTAS' and table_name='SYS_IOT_OVER_67262';
IOT_NAME
——————————
TAS_PNRRAW_LINES
Now try the MOVE command with the IOT name:
SQL> alter table protas.TAS_PNRRAW_LINES move tablespace protasdata;
Sometimes even the above command fails because we did not specify the overflow tablespace. In that case, use the command below:
SQL> alter table protas.TAS_PNRRAW_LINES move tablespace protasdata overflow tablespace protasdata;
Generally when we move tables, indexes become UNUSABLE. We then rebuild the indexes using:
SQL> alter index protas.TAS_PNRRAW_LINES_PK rebuild online;
To move the index to a new tablespace while rebuilding, we can use:
SQL> alter index protas.TAS_PNRRAW_LINES_PK rebuild online tablespace protasdata;
alter index protas.TAS_PNRRAW_LINES_PK rebuild online tablespace protasdata
*
ERROR at line 1:
ORA-28650: Primary index on an IOT cannot be rebuilt
Here TAS_PNRRAW_LINES_PK is the primary key index on the IOT. Because the table itself is already structured as an index (i.e. in tree format), the primary key index does not need to be rebuilt separately, so you can ignore this error.
Hi Friends, this article will guide you through installing Oracle Text (Context) on a 10g database. Generally, if we go for a full installation, Oracle installs Context the first time. But if you get a requirement to install it on a database that doesn't have it pre-installed, you can use the steps below.
1) Create a tablespace with name DRSYS
This step is optional and you can use any existing tablespace. In 9i this tablespace was created by default, but 10g uses SYSAUX by default.
2) run @?/ctx/admin/catctx.sql ctxsys sysaux temp nolock
catctx.sql creates the necessary packages and a user CTXSYS with the same password as the username, and assigns SYSAUX as the default tablespace and TEMP as the temporary tablespace. nolock specifies that the account should not be locked.
3) run @?/ctx/admin/defaults/drdefxx.sql
This script creates the objects required for index creation. The trailing "xx" in the script name represents the language/region.
For example, for the US it is drdefus.sql and for the UK it is drdefuk.sql
4) grant execute on ctxsys.ctx_ddl to public;
This grants execute permission on the package.
That's it!!! You are done.
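To verify the installation went through, you can check the component registry — a quick sketch:

```sql
-- Oracle Text should show up as VALID after a successful install
SELECT comp_name, version, status
FROM   dba_registry
WHERE  comp_id = 'CONTEXT';
```

If the status is not VALID, review the spool output of catctx.sql before going further.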
Sometimes you may get the error below when using Context through an application:
ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
ORA-20000: Oracle Text error:
DRG-10700: preference does not exist: CTXSYS.DEFAULT_LEXER
ORA-06512: at "CTXSYS.DRUE", line 126
ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 54
ORA-06512: at line 1
This is because you have not run the region-specific script. As discussed in installation step 3, you need to run the appropriate script for your region; that is the solution.
For more details on this error read Oracle Metalink DOC 107626.1
Hope it Helps…
Hi Friends, today I faced a problem with EM dbconsole on one of my databases; here is how I resolved it.
Database version : 10.2.0.4 Standard Edition
OS version : Windows 2003 Server with Service Pack 2
Problem : The sysadmin performed a firmware upgrade on my database server and rebooted it. From that moment EM dbconsole stopped working.
1) I checked the status of dbconsole as follows
C:\Documents and Settings\oracms>emctl status dbconsole
Oracle Enterprise Manager 10g Database Control Release 10.2.0.4.0
Copyright (c) 1996, 2007 Oracle Corporation.  All rights reserved.
https://cmsp-database.klpcph.local:5500/em/console/aboutApplication
Oracle Enterprise Manager 10g is not running.
——————————————————————
Logs are generated in directory N:\oracle\product\10.2.0\db/cmsp-database.klpcph.local_KLPPROD01/sysman/log
2) Then I tried starting dbconsole and got the following error
C:\Documents and Settings\oracms>emctl start dbconsole
Oracle Enterprise Manager 10g Database Control Release 10.2.0.4.0
Copyright (c) 1996, 2007 Oracle Corporation.  All rights reserved.
https://cmsp-database.klpcph.local:5500/em/console/aboutApplication
Starting Oracle Enterprise Manager 10g Database Control ...The OracleDBConsoleKLPPROD01 service is starting........................
The OracleDBConsoleKLPPROD01 service could not be started.
A service specific error occurred: 1.
More help is available by typing NET HELPMSG 3547.
3) Then I tried to find out what this message means
C:\Documents and Settings\oracms>NET HELPMSG 3547
A service specific error occurred: ***.
EXPLANATION
A service-specific error occurred.
ACTION
Refer to the Help or documentation for that service to determine the problem.
As you can see, this does not tell us the exact problem.
4) Sometimes this problem occurs because the SYSMAN and DBSNMP users got locked, so I checked their account status
C:\Documents and Settings\oracms>sqlplus "/ as sysdba"
SQL*Plus: Release 10.2.0.4.0 – Production on Tue May 18 08:26:24 2010
Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.
Connected to:
Oracle Database 10g Release 10.2.0.4.0 – 64bit Production
SQL> select username,account_status from dba_users;
USERNAME                       ACCOUNT_STATUS
—————————— ——————————–
MGMT_VIEW                      OPEN
SYS                            OPEN
SYSTEM                         OPEN
DBSNMP                         OPEN
SYSMAN                         OPEN
So from the above, it's clear that both user accounts are open.
5) Then I checked whether the agent was running
C:\Documents and Settings\oracms>emctl status agent
Oracle Enterprise Manager 10g Database Control Release 10.2.0.4.0
Copyright (c) 1996, 2007 Oracle Corporation.  All rights reserved.
—————————————————————
Agent is Not Running
6) Because the agent was not running, I tried to start it
C:\Documents and Settings\oracms>emctl start agent
Oracle Enterprise Manager 10g Database Control Release 10.2.0.4.0
Copyright (c) 1996, 2007 Oracle Corporation.  All rights reserved.
The service name is invalid.
More help is available by typing NET HELPMSG 2185.
As suggested, I checked what this message means:
C:\Documents and Settings\oracms>NET HELPMSG 2185
The service name is invalid.
EXPLANATION
You tried to start a service that is not configured on this system.
ACTION
Check the spelling of the service name or check the configuration information for the service using
the Services option from Server Manager.

7) Restarting the server could be a solution, but we cannot do that on a production database, so I recreated the repository.
Do remember, recreating the Enterprise Manager repository has no impact on database functionality.
Before recreating, note down the following information:
Database hostname
Database SID
Listener port number
password for SYS user
password for DBSNMP user
password for SYSMAN user
Also, it is important that the DBSNMP and SYSMAN accounts are open.
Use the following command to recreate the repository:
C:\Documents and Settings\oracms>emca -config dbcontrol db -repos recreate
STARTED EMCA at May 18, 2010 8:28:24 AM
EM Configuration Assistant, Version 10.2.0.1.0 Production
Copyright (c) 2003, 2005, Oracle.  All rights reserved.
Enter the following information:
Database SID: KLPSTAGE01
Database Control is already configured for the database KLPSTAGE01
You have chosen to configure Database Control for managing the database KLPSTAGE01
This will remove the existing configuration and the default settings and perform a fresh configuration
Do you wish to continue? [yes(Y)/no(N)]: y
Listener port number: 1521
Password for SYS user:
Password for DBSNMP user:
Password for SYSMAN user:
Email address for notifications (optional):
Outgoing Mail (SMTP) server for notifications (optional):
—————————————————————–
You have specified the following settings
Database ORACLE_HOME ……………. N:\oracle\product\10.2.0\db
Database hostname ……………. klpdbscph601.klpcph.local
Listener port number ……………. 1521
Database SID ……………. KLPSTAGE01
Email address for notifications ……………
Outgoing Mail (SMTP) server for notifications ……………
—————————————————————–
Do you wish to continue? [yes(Y)/no(N)]: y
This successfully created the repository and started my dbconsole.
8) So I was happy that my problem was solved, but when I checked the EM dbconsole status, it gave me another shock:
C:\Documents and Settings\oracms>emctl status dbconsole
Oracle Enterprise Manager 10g Database Control Release 10.2.0.4.0
Copyright (c) 1996, 2007 Oracle Corporation.  All rights reserved.
https://klpdbscph601.klpcph.local:5500/em/console/aboutApplication
EM Daemon is not running.
——————————————————————
Logs are generated in directory N:\oracle\product\10.2.0\db/klpdbscph601.klpcph.local_KLPSTAGE01/sysman/log
Then I started looking at the log and found the following warning:
May 18, 2010 8:33:00 AM oracle.sysman.emcp.ParamsManager getLocalListener
WARNING: Error retrieving listener for klpdbscph601.klpcph.local
So finally, I reloaded the listener, and it started working :-)
C:\Documents and Settings\oracms>lsnrctl reload LISTENER_STAGE01
LSNRCTL for 64-bit Windows: Version 10.2.0.4.0 – Production on 18-MAY-2010 08:37:26
Copyright (c) 1991, 2007, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=klpdbscph601.klpcph.local)(PORT=1523)))
The command completed successfully
C:\Documents and Settings\oracms>emctl status dbconsole
Oracle Enterprise Manager 10g Database Control Release 10.2.0.4.0
Copyright (c) 1996, 2007 Oracle Corporation.  All rights reserved.
https://klpdbscph601.klpcph.local:5500/em/console/aboutApplication
Oracle Enterprise Manager 10g is running.
——————————————————————
Logs are generated in directory N:\oracle\product\10.2.0\db/klpdbscph601.klpcph.local_KLPSTAGE01/sysman/log
Note : We are not using Grid Control, so this applies only to the 10g EM dbconsole problem. Don't apply it to Grid Control, and test it before use.
Hi Friends, recently I tried to configure Oracle Enterprise Manager on one of my databases and issued the command below
$ emctl status dbconsole
I got the following output
OC4J Configuration issue. /opt/oracle/oracle/product/10.2.0/DB/oc4j/j2ee/OC4J_DBConsole_iss.dk_PAS not found.
Then I found that we need a directory with that exact name in the specified location, and there was a directory called OC4J_DBConsole there. I renamed it to OC4J_DBConsole_iss.dk_PAS, which solved the issue.
Hope this helps you; in the next post I will present different EM commands.
Good day friends...
It's been a long time since I posted a technical article; the reason is that I have been busy with KT for my new project.
Yesterday, while doing RMAN cloning to a different server to recover a schema that had been dropped, I got a peculiar error. I will post the steps on how to retrieve a single table or schema using an RMAN backup later.
When I was about to start the new instance on the server (which already hosts 3 databases), I got the error below
SQL> startup
ORA-27102: out of memory
SVR4 Error: 22: Invalid argument
This was the first time I got this error and I wondered why. Then it struck me that the RAM size might not be sufficient to allocate memory to a new instance.
I checked the cause as below
$ oerr ORA 27102
27102, 00000, “out of memory”
// *Cause: Out of memory
// *Action: Consult the trace file for details
This makes it clear that there is no straight answer here. I then checked the alert log file and found the following lines:
Starting ORACLE instance (normal)
Wed Aug 11 10:37:26 2010
WARNING: EINVAL creating segment of size 0x0000000080002000
fix shm parameters in /etc/system or equivalent
When I checked at OS level, the background processes were being started but the OS was unable to allocate memory for the SGA and PGA.
After analysis, I found that the shared memory limit at the OS level (the SHMMAX/SHMALL equivalent, which on Solaris 10 is the project resource control max-shm-memory) was set too low.
In such cases, we need to do the following:
1. $ prtconf | grep Mem
Memory size: 8192 Megabytes
2. $ id -p
uid=500(oracle) gid=201(dba) projid=200(MCSDBT)
3. $ prctl -n project.max-shm-memory -i project 200
project: 200: MCSDBT
NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
project.max-shm-memory
        privileged      2.0GB      -   deny                                 -
        system          16.0EB    max   deny                                 -
Here you can see the privileged value is only 2 GB, so we need to increase it using the command below:
4. $ prctl -n project.max-shm-memory -r -v 10G -i project 200
5. $ prctl -n project.max-shm-memory -i project 200
project: 200: MCSDBT
NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
project.max-shm-memory
        privileged      10.0GB      -   deny                                 -
        system          16.0EB    max   deny                                 -
Now it has been changed to 10 GB; try again and you will be able to start your instance.
Note: Sometimes you may not have the root privilege to execute this command; in that case, take your sysadmin's help.
The value changed to 10 GB is temporary, i.e. it will revert to 2 GB once you reboot the server.
Note : The above procedure applies to Oracle 10.2.0.4 running on Solaris 10. Commands may differ for other versions and OS flavours.
This article introduces some of the new background processes in 11g. As per the Oracle documentation, there are 56 new background processes added in 11g Release 1.
Let's have a look at some important ones:
MMAN – responsible for ASMM in 10g and AMM in 11g; manages memory allocation to the SGA and PGA
RCBG – responsible for processing data into the server result cache
DIAG – in 11g there is a single location for all trace files, the alert log and other diagnostic files; DIAG performs diagnostic dumps and executes oradebug commands
DIA0 – responsible for hang detection and deadlock resolution
DBRM – the Database Resource Manager is responsible for setting resource plans for users and all other database resource management activities
EMNC – the Event Monitor Coordinator coordinates event management and notification activity
FBDA – the Flashback Data Archiver process is responsible for all flashback data archive actions in an 11g database
GEN0 – general task execution process which performs required tasks
SMCO – the Space Management Coordinator executes various space management tasks such as space reclamation and allocation; it uses slave processes (Wnnn) whenever required
VKTM – Virtual Keeper of Time is responsible for keeping track of the wall-clock time and is used as a reference-time counter
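You can check which of these background processes are actually running on your own instance with a quick query — a sketch:

```sql
-- background processes with a non-null process address are running
SELECT name, description
FROM   v$bgprocess
WHERE  paddr <> '00'
ORDER  BY name;
```

Running the same query against a 10g instance is a nice way to see which of these names are genuinely new in 11g.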
All of us by now know about the 11g new feature AMM. It manages both the SGA and PGA automatically when you set the MEMORY_TARGET and MEMORY_MAX_TARGET parameters.
I then wondered how Oracle does this, i.e. releasing shared memory and allocating it to private memory.
After many searches on Google, I finally found the following:
1. On Linux, Oracle 11g uses the /dev/shm mount point for its shared memory implementation
2. this is the POSIX-style SHM implementation
So all memory segments are treated as files on that mount point. If anything requires an extra segment, Oracle simply allocates a file that is free.
If you don't have /dev/shm mounted (by default it is), you cannot use the MEMORY_TARGET parameter; the same happens if the tmpfs size in Linux is too small.
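On a Linux box you can quickly confirm that /dev/shm is mounted and see how large it is; its size effectively caps what you can set MEMORY_MAX_TARGET to:

```shell
# show the tmpfs mount that backs MEMORY_TARGET on Linux
df -h /dev/shm
```

If it is too small, remounting tmpfs with a larger size= option (and making it permanent in /etc/fstab) is the usual fix.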
Please refer to the link below, where Tanel explains this in a wonderful way:
http://blog.tanelpoder.com/2007/08/21/oracle-11g-internals-part-1-automatic-memory-management/
Many of you may know about the new 11g parameter RESULT_CACHE. It supports a new SGA component called the SERVER RESULT CACHE. Let's go and see its functionality.
Normally SQL statements go through the parse, execute and fetch phases to return the required data to users. In that process, Oracle copies blocks into the buffer cache, and the requested rows are then filtered out of those blocks.
Oracle thought: why not store the result rows directly, instead of blocks? And that is how the new SGA component, the SERVER RESULT CACHE, came about.
The result cache resides in the shared pool. When this component is used in 11g, a SQL statement will
1) first check for the parsed statement in the library cache; if an already parsed statement with an execution plan is found, it moves to the result cache.
2) In the result cache, it searches for a stored result produced by the same execution plan. This matching happens using the execution plan id, so we can understand that the result cache stores the plan id along with the data (the rows of the required result)
3) The data is then returned to the user directly, without the execute and fetch phases (and without re-parsing), so you get the data in a flash.
Oracle suggests using the result cache for frequently executed statements, ideally against data that changes rarely (in earlier versions we used the KEEP cache for something similar, but that stores blocks). It also supports PL/SQL functions and gives the most advantage when a function is executed repeatedly.
The RESULT_CACHE_MAX_SIZE parameter sizes the result cache. Its default depends on how memory is managed:
1) when MEMORY_TARGET is set, it defaults to 0.25% of MEMORY_TARGET
2) when only SGA_TARGET is set (without MEMORY_TARGET), it defaults to 0.5% of SGA_TARGET
3) when neither AMM nor ASMM is used, it takes 1% of the shared pool size
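A quick way to see the result cache in action is to run a query with the RESULT_CACHE hint and then look at the cache contents — a sketch (the table name is just an example):

```sql
-- ask Oracle to cache the result rows of this query
SELECT /*+ RESULT_CACHE */ deptno, COUNT(*)
FROM   emp                    -- example table
GROUP  BY deptno;

-- inspect what is sitting in the result cache
SELECT name, status, row_count
FROM   v$result_cache_objects
WHERE  type = 'Result';
```

Run the first query twice; on the second run the rows come straight from the cache.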
Hope this small post gave you an idea of this 11g new feature.
Yesterday one of my friends got a requirement to insert a substitute value through SQL*Loader when the incoming field is empty.
You may wonder what's tough in that, right? But the column to be filled is a NOT NULL column.
Whenever you try to load an empty value into it, it throws an error at the SQL or SQL*Loader level. We can use the NVL function as a solution for this problem.
Below is a sample SQL*Loader control file which shows how to use the NVL function:
LOAD DATA
INFILE 'sqlldr_test.txt'
INTO TABLE test
truncate
FIELDS TERMINATED BY '|' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
total,
dat DATE 'YYYY-MM-DD',
gender "nvl(:gender,'N')"
)
Above you can observe that the user is loading a value for gender, but if no value is available he wants it replaced with 'N'.
Yesterday, in one post (http://pavandba.com/2011/10/16/which-process-starts-first-when-instance-is-started/), I explained the PSP0 process introduced in 10g.
Sometimes, all of a sudden, the instance crashes, and when we look at the alert log file we find the error message below:
ORA-00490: PSP process terminated with error
When the PSP process itself terminates due to an error, the whole instance crashes with the ORA-00490 error message.
On further checking, you may find more errors in the alert log file, such as:
ORA-27300: OS system dependent operation:fork failed with status: 12
ORA-27301: OS failure message: Not enough space
ORA-27302: failure occurred at: skgpspawn3
Oracle throws the above errors when the operating system hits a problem such as insufficient space in the temp or swap area, or insufficient system resources.
At the same time, PMON terminates the instance (logging its process id), because the Oracle processes have become unmanageable:
PMON: terminating instance due to error 490
Instance terminated by PMON, pid = 20094
Root Cause:
This error occurs when there is no free space available in the system swap area for spawning a new Oracle process. The process spawner PSP0 terminates (reported as ORA-00490) because it is unable to create or manage Oracle processes, and the instance is then crashed by PMON with errorstack 490 (which points to ORA-00490). The same situation can also occur due to a lack of other system resources.
Solution :
There are 2 solutions for this problem:
1. Check your swap space and increase the swap area. Due to lack of swap space, Oracle is unable to create new processes and the process spawner PSP0 cannot manage them.
2. Check the "ulimit" settings for the Oracle user. "ulimit" controls the user's shell limits; if the maximum limit is reached, PSP0 likewise becomes unable to manage other Oracle processes. Increase the "ulimit" settings for the Oracle user.
Note : Hope this helps; please share this post with your friends if you find it informative.
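On Linux/Unix, the shell limits mentioned in solution 2 can be inspected like this (the process limit `-u` is the one that matters for spawning):

```shell
# maximum number of processes the current user may create
ulimit -u
# maximum number of open file descriptors (also worth checking for Oracle)
ulimit -n
```

Run these as the oracle OS user; raising them permanently is usually done in /etc/security/limits.conf on Linux.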
A long time back, I asked this question in my poll, and many DBA enthusiasts chose the SMON option.
http://pavandba.files.wordpress.com/2011/10/1.jpg?w=100&h=300
But unfortunately, that is not the right answer (considering 10g).
The first process started when we start the instance is the PSP process, called the PROCESS SPAWNER. This process was introduced in 10g and is responsible for creating and managing the other Oracle background processes.
As the name suggests, this process spawns others; you will see the process name as PSP0 in the alert log file.
More about PSP0 will follow in my next article.
Note : Please put your comments and also share this article by clicking the share button below, so as to spread the DBA knowledge.
Many a time I have seen my DBA friends ask this question. There can be a lot of reasons whenever something is slow, but in the case of a materialized view we can use the checks below to confirm where the problem lies.
Troubleshoot the problem in the following steps:
1) check the network connectivity using the ping command; you should see no time gap between packet transfers
2) check whether the tnsping command is taking time
3) check the size of the MV log; it should be truncated after every refresh
Note: It may surprise many people if I say the MV log gets truncated after every refresh, but that is what actually happens. How Oracle manages the refresh in that case, let's discuss in another post.
4) check the size of the original table; if the MV log is bigger than the original table, then it's clear the problem is with the MV log
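To compare the MV log segment with its base table, you can query DBA_SEGMENTS; MV logs are named with an MLOG$_ prefix (the owner and table names below are the ones from this article's example):

```sql
-- compare the base table and its MV log sizes
SELECT segment_name, ROUND(bytes / 1024 / 1024) size_mb
FROM   dba_segments
WHERE  owner = 'VLRAPP'
AND    segment_name IN ('CL_ACC_HOLD_TRANS', 'MLOG$_CL_ACC_HOLD_TRANS');
```

If the MLOG$_ segment dwarfs the table, one of the causes below almost certainly applies.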
The following are the usual reasons for an MV log growing bigger than its table:
1) additional MVs were created on the same table but have stopped refreshing
2) a snapshot became invalid or was lost and was not dropped formally
Solution:
The following would be quite good solutions to apply:
1) drop the MVs which have not been used for a long time
2) drop the materialized view log and re-create it (but this will force a complete refresh, and the MV will not be accessible to users during that refresh)
Again, can we drop the MV log alone without dropping the MV? The answer is YES, and below is a sample command for the same:
DROP MATERIALIZED VIEW LOG ON VLRAPP.CL_ACC_HOLD_TRANS;
You can create the MV log again using the command below:
CREATE MATERIALIZED VIEW LOG ON "VLRAPP"."CL_ACC_HOLD_TRANS"
 PCTFREE 60 PCTUSED 30 INITRANS 1 MAXTRANS 255 NOLOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  TABLESPACE “DATA”
  WITH PRIMARY KEY EXCLUDING NEW VALUES;
To generate such commands for your own objects, use the DBMS_METADATA.GET_DDL package (for syntax, just Google it).
As said, there could be some other reasons too, but this post gives a glance at what to verify initially.
EMCTL is one service we can never be sure will run fine or not… :-)
Just kidding. Here is one such problem with the Enterprise Manager configuration, and its solution.
Problem:
When trying to configure EM for 10g, or trying to drop or recreate the repository, you may get the below error:
[oracle@issgascop218 ~]$ emca -deconfig dbcontrol db -repos drop
STARTED EMCA at Aug 24, 2010 7:51:44 AM
EM Configuration Assistant, Version 10.2.0.1.0 Production
Copyright (c) 2003, 2005, Oracle.  All rights reserved.
Enter the following information:
Database SID: PASQ
Listener port number: 1521
Password for SYS user:
Password for SYSMAN user:
Do you wish to continue? [yes(Y)/no(N)]: y
Aug 24, 2010 7:51:54 AM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at /opt/oracle/product/10.2.0/cfgtoollogs/emca/PASQ/emca_2010-08-24_07-51-44-AM.log.
Aug 24, 2010 7:51:55 AM oracle.sysman.emcp.util.DBControlUtil stopOMS
INFO: Stopping Database Control (this may take a while) …
Aug 24, 2010 7:51:57 AM oracle.sysman.emcp.EMReposConfig stopDBMSJobs
WARNING: Error initializing SQL connection. SQL operations cannot be performed
Aug 24, 2010 7:51:57 AM oracle.sysman.emcp.EMReposConfig invoke
WARNING: Unable to remove DBMS jobs.
Aug 24, 2010 7:51:57 AM oracle.sysman.emcp.EMReposConfig dropRepository
INFO: Dropping the EM repository (this may take a while) …
Aug 24, 2010 7:51:57 AM oracle.sysman.emcp.util.PlatformInterface executeCommand
WARNING: Error executing /opt/oracle/product/10.2.0/sysman/admin/emdrep/bin/RepManager -connect (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=issgascop218)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=PASQ.issgascop218.global.iss.biz))) -repos_user SYSMAN -action drop -verbose -output_file /opt/oracle/product/10.2.0/cfgtoollogs/emca/PASQ/emca_repos_drop_2010-08-24_07-51-57-AM.log
Aug 24, 2010 7:51:57 AM oracle.sysman.emcp.EMReposConfig invoke
SEVERE: Error dropping the repository
Aug 24, 2010 7:51:57 AM oracle.sysman.emcp.EMReposConfig invoke
INFO: Refer to the log file at /opt/oracle/product/10.2.0/cfgtoollogs/emca/PASQ/emca_repos_drop_<date>.log for more details.
Aug 24, 2010 7:51:57 AM oracle.sysman.emcp.EMConfig perform
SEVERE: Error dropping the repository
Refer to the log file at /opt/oracle/product/10.2.0/cfgtoollogs/emca/PASQ/emca_2010-08-24_07-51-44-AM.log for more details.
Could not complete the configuration. Refer to the log file at /opt/oracle/product/10.2.0/cfgtoollogs/emca/PASQ/emca_2010-08-24_07-51-44-AM.log for more details.
When I opened the log file, I found the below information:
[24-08-2010 07:51:57] Enter SYS user's password :
[24-08-2010 07:51:57]
[24-08-2010 07:51:57] Enter repository user password :
[24-08-2010 07:51:57]
[24-08-2010 07:51:57] Getting temporary tablespace from database…
[24-08-2010 07:51:57] Could not connect to SYS/(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=issgascop218)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=PASQ.issgascop218.global.iss.biz))): ORA-01031: insufficient privileges (DBD ERROR: OCISessionBegin)
One of the possible reasons for ORA-01031 is that the OS user is not added to either the oinstall or dba group. But in my case, it was.
After a search, I found that the reason was a missing PASSWORD FILE. I then created the password file, which solved the issue:
[oracle@issgascop218 ~]$ cd $ORACLE_HOME/dbs
[oracle@issgascop218 dbs]$ orapwd file=orapw$ORACLE_SID password=oracle entries=1 force=y
[oracle@issgascop218 dbs]$ ls -ltr
total 16096
-rw-r-----  1 oracle dba    8385 Sep 11  1998 init.ora
-rw-r-----  1 oracle dba   12920 May  3  2001 initdw.ora
-rw-rw----  1 oracle dba      24 Mar 22  2007 lkPAS
-rw-rw----  1 oracle dba    1552 Mar 22  2007 hc_PAS.dat
-rw-rw----  1 oracle dba    1552 May  9  2007 hc_PASQ.dat
-rw-rw----  1 oracle dba      24 May  9  2007 lkPASQ
-rw-r-----  1 oracle dba    2430 May  9  2007 initPASQ.ora
-rw-r-----  1 oracle dba    2560 Dec 28  2007 spfilePAS.oraold
-rw-r-----  1 oracle dba    2393 Dec 28  2007 initPAS.ora
-rw-r-----  1 oracle dba    1536 Apr 30 11:54 orapwPAS
-rw-r-----  1 oracle dba    2560 Jul  1 22:00 spfilePAS.ora
-rw-r-----  1 oracle dba    3584 Jul 12 22:00 spfilePASQ.ora
-rw-r-----  1 oracle dba 7716864 Aug 23 18:30 snapcf_PASQ.f
-rw-r-----  1 oracle dba 8601600 Aug 23 18:57 snapcf_PAS.f
-rw-r-----  1 oracle dba    1536 Aug 24 07:54 orapwPASQ
[oracle@issgascop218 dbs]$ emca -config dbcontrol db -repos recreate
The last few lines of the output of the above command:
INFO: Repository successfully created
Aug 24, 2010 8:01:56 AM oracle.sysman.emcp.util.DBControlUtil startOMS
INFO: Starting Database Control (this may take a while) …
Aug 24, 2010 8:03:32 AM oracle.sysman.emcp.EMDBPostConfig performConfiguration
INFO: Database Control started successfully
Aug 24, 2010 8:03:33 AM oracle.sysman.emcp.EMDBPostConfig performConfiguration
INFO: >>>>>>>>>>> The Database Control URL is http://issgascop218:5500/em <<<<<<<<<<<
Enterprise Manager configuration completed successfully
So, whenever you get ORA-01031 while performing some action on EM, please do check if this solution works out.
Friends, many of us know about DUAL and its usage. Let's learn a few things about it…
1) Can we drop the DUAL table?
Ans: Yes, but it will have a serious impact on database functionality, so you should never do it.
2) Can we re-create the DUAL table if it has been dropped?
Ans: Yes. Use the following steps:
SQL> DROP TABLE SYS.DUAL ;

Table dropped.

SQL> CREATE TABLE SYS.DUAL
  2  (
  3    DUMMY  VARCHAR2(1 BYTE)
  4  )
  5  TABLESPACE SYSTEM;

Table created.

SQL> CREATE PUBLIC SYNONYM DUAL FOR SYS.DUAL;

Synonym created.

SQL> GRANT SELECT ON SYS.DUAL TO PUBLIC WITH GRANT OPTION;

Grant succeeded.

SQL> INSERT INTO dual VALUES ('X');

1 row created.

SQL> SELECT * FROM dual;

D                                                                             
-                                                                             
X        
3) Can I create my own table with the same single-row behaviour as DUAL?
Ans: Yes. Use the below script, which creates a DUAL-like table with whatever name you pass in (e.g. MYDUAL).
CREATE OR REPLACE PROCEDURE replace_mydual (
   table_name_in IN VARCHAR2
)
IS
BEGIN
   -- Drop the table if it already exists; ignore "table does not exist".
   BEGIN
      EXECUTE IMMEDIATE 'DROP TABLE ' || table_name_in;
   EXCEPTION
      WHEN OTHERS THEN NULL;
   END;
   EXECUTE IMMEDIATE 'CREATE TABLE '
                     || table_name_in
                     || ' (dummy VARCHAR2(1))';
   -- Trigger that stops anyone from inserting a second row.
   EXECUTE IMMEDIATE
      'CREATE OR REPLACE TRIGGER mydual_' || table_name_in ||
      '  BEFORE INSERT
         ON ' || table_name_in || '
      DECLARE
         PRAGMA AUTONOMOUS_TRANSACTION;
         l_count PLS_INTEGER;
      BEGIN
         SELECT COUNT (*)
         INTO   l_count
         FROM   ' || table_name_in || ';
         IF l_count = 1
         THEN
            raise_application_error
            ( -20000
            , ''The ' || table_name_in || ' table can only have one row.'' );
         END IF;
      END;';
   EXECUTE IMMEDIATE 'BEGIN INSERT INTO '
                     || table_name_in
                     || ' VALUES (''X''); COMMIT; END;';
   EXECUTE IMMEDIATE 'GRANT SELECT ON '
                     || table_name_in
                     || ' TO PUBLIC';
   EXECUTE IMMEDIATE 'CREATE PUBLIC SYNONYM '
                     || table_name_in
                     || ' FOR '
                     || table_name_in;
   -- Helper that fetches a sequence NEXTVAL from the new one-row table.
   EXECUTE IMMEDIATE
      'CREATE OR REPLACE FUNCTION next_pky (seq_in IN VARCHAR2)
          RETURN PLS_INTEGER AUTHID CURRENT_USER
       IS
          retval PLS_INTEGER;
       BEGIN
          EXECUTE IMMEDIATE ''SELECT '' || seq_in
                        || ''.NEXTVAL FROM ' || table_name_in || '''
             INTO retval;
          RETURN retval;
       END next_pky;';
END replace_mydual;
/
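To use it, call the procedure with the table name you want — MYDUAL here is just an example name:

```sql
-- Create a DUAL-like one-row table called MYDUAL.
BEGIN
   replace_mydual('MYDUAL');
END;
/

-- Should return the single row 'X'; a second insert
-- would be blocked by the trigger with ORA-20000.
SELECT * FROM mydual;
```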
TOM KYTE explained this in one of his articles and, as always, it's the best…
For all of the above, ask your friend GOOGLE; lots of results will hit your door.
HAPPY LEARNING…
