Enhance your Oracle database administration expertise with our comprehensive course on upgrading to Oracle Database 12c. Tailored for DBAs, system administrators, IT managers, application developers, data architects, technical support engineers, IT consultants, and students, this course provides in-depth knowledge and practical experience to ensure a seamless and successful upgrade.
Key Highlights:
- Comprehensive Upgrade Guide: Learn a step-by-step approach for upgrading to Oracle Database 12c, covering pre-upgrade preparation, the upgrade process, and post-upgrade tasks.
- Practical Labs: Gain hands-on experience through real-world scenarios and labs.
- Best Practices: Apply industry best practices to reduce downtime and ensure data integrity during the upgrade.
- Explore New Features: Discover new features and improvements in Oracle Database 12c that can benefit your organization.
- Troubleshooting Skills: Develop the ability to troubleshoot and resolve common issues encountered during the upgrade.
- Backup and Recovery: Master essential backup and recovery procedures to safeguard data throughout the upgrade.
- Performance Optimization: Learn techniques to optimize database performance after upgrading.
Who Should Attend:
- Database Administrators (DBAs): Ensure a smooth upgrade while maintaining database performance and security.
- System Administrators: Manage the hardware and operating systems supporting the upgraded database.
- IT Managers: Lead and coordinate the upgrade process with your team.
- Application Developers: Understand how the upgrade affects applications and ensure compatibility.
- Data Architects: Utilize new features for improved database design and performance.
- Technical Support Engineers: Offer advanced support and troubleshoot upgraded environments.
- IT Consultants: Advise clients on best practices and carry out upgrades.
- Students and Learners: Acquire practical skills for a career in database management.
Course Outcomes:
By the end of this course, participants will be able to:
- Prepare for and successfully execute an Oracle Database 12c upgrade.
- Identify and resolve pre-upgrade challenges.
- Implement post-upgrade tasks to maintain a stable and optimized database.
- Leverage the new features of Oracle Database 12c.
- Strengthen their troubleshooting, backup, and recovery abilities.
- Improve database performance for enhanced efficiency and reliability.
Join us to master the Oracle Database 12c upgrade and elevate your database administration skills.
What You’ll Learn:
- Oracle Database 12c features and architecture
- Upgrade planning and preparation
- Upgrade methods and execution
- Validating and testing the upgraded database
- Performance tuning and optimization
- Troubleshooting and issue resolution
- Best practices and recommendations
- Hands-on labs and case studies
- Certification preparation
- Continuous learning resources
Prerequisites:
- Basic Oracle database administration knowledge
- SQL and PL/SQL proficiency
- Familiarity with Oracle Database 12c features
- Experience with backup and recovery processes
- Basic operating system skills
Who This Course Is For:
Database Administrators (DBAs), System Administrators, IT Managers, Application Developers, and others involved in database management.
-
1 Identify two ways to rectify the error.
Your multitenant container (CDB) contains two pluggable databases (PDB), HR_PDB and
ACCOUNTS_PDB, both of which use the CDB tablespace. The temp file is called temp01.tmp.
A user issues a query on a table on one of the PDBs and receives the following error:
ERROR at line 1:
ORA-01565: error in identifying file '/u01/app/oracle/oradata/CDB1/temp01.tmp'
ORA-27037: unable to obtain file status
-
A. Add a new temp file to the temporary tablespace and drop the temp file that produced the error.
Explanation
Adding a new temp file to the temporary tablespace and dropping the temp file that produced the error is a valid solution to rectify the ORA-01565 error. By adding a new temp file, the user can continue their query without any issues related to the missing temp file.
B. Shut down the database instance, restore the temp01.tmp file from the backup, and then restart the database.
Explanation
Shutting down the database instance, restoring the temp01.tmp file from the backup, and then restarting the database is not a recommended solution for rectifying the ORA-01565 error. This approach is more time-consuming and disruptive compared to simply adding a new temp file to the temporary tablespace. It is also unnecessary: temp files are never backed up by RMAN in the first place.
C. Take the temporary tablespace offline, recover the missing temp file by applying redo logs, and then bring the temporary tablespace online.
Explanation
Taking the temporary tablespace offline, recovering the missing temp file by applying redo logs, and then bringing the temporary tablespace online is not a suitable solution for rectifying the ORA-01565 error. No redo is ever generated for temp files, so they cannot be recovered by applying redo logs.
D. Shutdown the database instance, restore and recover the temp file from the backup, and then open the database with RESETLOGS
Explanation
Shutting down the database instance, restoring and recovering the temp file from the backup, and then opening the database with RESETLOGS is not the most efficient way to rectify the ORA-01565 error. This approach involves unnecessary steps and can lead to additional complications during the database recovery process.
Overall explanation
Answer: A
Explanation: * Because temp files cannot be backed up and because no redo is ever generated
for them, RMAN never restores or recovers temp files. RMAN does track the names of temp files,
but only so that it can automatically re-create them when needed.
* If you use RMAN in a Data Guard environment, then RMAN transparently converts primary
control files to standby control files and vice versa. RMAN automatically updates file names for
data files, online redo logs, standby redo logs, and temp files when you issue RESTORE and
RECOVER.
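For illustration, a minimal SQL sketch of option A, assuming the temporary tablespace is named TEMP; the replacement file name and size are assumptions, while the path comes from the error message:
SQL> ALTER TABLESPACE temp ADD TEMPFILE '/u01/app/oracle/oradata/CDB1/temp02.tmp' SIZE 100M;
SQL> ALTER TABLESPACE temp DROP TEMPFILE '/u01/app/oracle/oradata/CDB1/temp01.tmp';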
-
-
2 Which two statements are true about the use of the procedures listed in the v$sysaux_occupants.move_procedure column?
-
A. The procedure may be used for some components to relocate component data to the SYSAUX tablespace from its current tablespace.
Explanation
The procedure listed in the v$sysaux_occupants.move_procedure column can be used to relocate component data from its current tablespace to the SYSAUX tablespace for some components. This allows for efficient management and organization of component data within the database.
B. The procedure may be used for some components to relocate component data from the SYSAUX tablespace to another tablespace.
Explanation
The procedure listed in the v$sysaux_occupants.move_procedure column can be used to relocate component data from the SYSAUX tablespace to another tablespace for some components. This flexibility allows for better resource allocation and optimization within the database environment.
C. All the components may be moved into SYSAUX tablespace.
Explanation
Not all components can be moved into the SYSAUX tablespace using the procedure listed in the v$sysaux_occupants.move_procedure column. The relocation of components to the SYSAUX tablespace is specific to certain components and may not apply to all components within the database.
D. All the components may be moved from the SYSAUX tablespace.
Explanation
Not all components can be moved from the SYSAUX tablespace using the procedure listed in the v$sysaux_occupants.move_procedure column. The ability to move components out of the SYSAUX tablespace is limited to specific components and may not be applicable to all components within the database.
Overall explanation
Correct Answer: A, B
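To see which SYSAUX occupants support relocation, and with which procedure, you can query the view directly; a minimal sketch using documented column names:
SQL> SELECT occupant_name, schema_name, move_procedure FROM v$sysaux_occupants WHERE move_procedure IS NOT NULL;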
-
-
3 Which statement is true about Oracle Net Listener?
-
A. It acts as the listening endpoint for the Oracle database instance for all local and non-local user connections. Explanation This is not strictly true: although the listener is the listening endpoint for incoming network connections, local connections can bypass the listener entirely (for example, bequeath connections), so it does not handle all local user connections.
-
B. A single listener can service only one database instance and multiple remote client connections. Explanation Contrary to the statement, a single Oracle Net Listener can service multiple database instances and handle connections from multiple remote clients, making it a more scalable and efficient solution.
-
C. Service registration with the listener is performed by the process monitor (PMON) process of each database instance. Explanation This is the correct statement. The PMON background process of each database instance dynamically registers the instance's services with the listener, which removes the need for static service configuration in the listener.ora file.
-
D. The listener.ora configuration file must be configured with one or more listening protocol addresses to allow remote users to connect to a database instance. Explanation While the listener.ora configuration file is used to define listening protocol addresses, it is not mandatory to allow remote users to connect to a database instance. The listener can be configured to listen on multiple protocols and ports.
-
E. The listener.ora configuration file must be located in the ORACLE_HOME/network/admin directory. Explanation The listener.ora configuration file is typically located in the ORACLE_HOME/network/admin directory, but it is not a strict requirement. The file can be placed in a different directory (for example, one pointed to by TNS_ADMIN) as long as the listener can access and read it.
Overall explanation
Correct Answer: C
Explanation/Reference:
Supported services, that is, the services to which the listener forwards client requests, can be configured in the listener.ora file or this information can be dynamically registered with the listener. This dynamic registration feature is called service registration. The registration is performed by the PMON process--an instance background process--of each database instance that has the necessary configuration in the database initialization parameter file. Dynamic service registration does not require any configuration in the listener.ora file.
Incorrect:
Not B: Service registration reduces the need for the SID_LIST_listener_name parameter setting, which specifies information about the databases served by the listener, in the listener.ora file.
Note:
- Oracle Net Listener is a separate process that runs on the database server computer. It receives incoming client connection requests and manages the traffic of these requests to the database server.
- A remote listener is a listener residing on one computer that redirects connections to a database instance on another computer. Remote listeners are typically used in an Oracle Real Application Clusters (Oracle RAC) environment. You can configure registration to remote listeners, such as in the case of Oracle RAC, for dedicated server or shared server environments.
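For reference, a minimal listener.ora entry defining one listening protocol address; the host name dbhost01 is a placeholder and the port is the default:
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost01)(PORT = 1521))
    )
  )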
-
-
4 In which three ways can you re-create the lost disk group and restore the data?
You are administering a database stored in Automatic Storage Management (ASM). You use RMAN to back up the database and the MD_BACKUP command to back up the ASM metadata regularly. You lost an ASM disk group DG1 due to hardware failure.
-
A. Use the MD_RESTORE command to restore metadata for an existing disk group by passing the existing disk group name as an input parameter and use RMAN to restore the data. Explanation The nodg mode of MD_RESTORE writes metadata into a disk group that still exists. Because DG1 was lost in the hardware failure, there is no existing disk group to restore into, so this option does not apply here.
-
B. Use the MKDG command to restore the disk group with the same configuration as the backed-up disk group and data on the disk group. Explanation MKDG only creates a disk group from a supplied configuration; it cannot restore the data that was stored on the disk group, so this option is incomplete without an RMAN restore.
-
C. Use the MD_RESTORE command to restore the disk group with the changed disk group specification, failure group specification, name, and other attributes and use RMAN to restore the data. Explanation The MD_RESTORE command can be used to restore the disk group with changed specifications such as disk group specification, failure group specification, name, and other attributes. You will also need to use RMAN to restore the data after restoring the metadata.
-
D. Use the MKDG command to restore the disk group with the same configuration as the backed-up disk group name and same set of disks and failure group configuration, and use RMAN to restore the data. Explanation MKDG creates a disk group from a manually supplied XML configuration file; it does not read the MD_BACKUP metadata file, so it cannot reproduce the full backed-up configuration, such as user-defined templates and attributes.
-
E. Use the MD_RESTORE command to restore both the metadata and data for the failed disk group. Explanation MD_RESTORE in full mode re-creates the failed disk group with its complete backed-up metadata (attributes, templates, and directory structure), which is the basis for bringing the disk group and, subsequently, its contents back.
-
F. Use the MKDG command to add a new disk group DG1 with the same or different specifications for failure group and other attributes and use RMAN to restore the data. Explanation The MKDG command can be used to add a new disk group with the same or different specifications for failure group and other attributes. In this case, you can create a new disk group with the same name as the failed disk group and then use RMAN to restore the data.
Overall explanation
Correct Answer: C, E, F
Explanation/Reference:
Note:
* The md_restore command allows you to restore a disk group from the metadata created by the md_backup command.
/md_restore Command
Purpose
This command restores a disk group backup using various options that are described in this section. In restore mode, md_restore re-creates the disk group from the backup file, with all user-defined templates and the exact configuration of the backed-up disk group. There are several options when restoring the disk group:
full - re-creates the disk group with the exact configuration
nodg - restores metadata into an existing disk group provided as an input parameter
newdg - re-creates the disk group with a changed configuration (failure group, disk group name, and so on)
* The MD_BACKUP command creates a backup file containing metadata for one or more disk groups. By default all the mounted disk groups are included in the backup file which is saved in the current working directory. If the name of the backup file is not specified, ASM names the file AMBR_BACKUP_INTERMEDIATE_FILE.
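A minimal ASMCMD sketch of the backup and restore pair discussed above; the backup file path is an assumption:
ASMCMD> md_backup /tmp/dg1_backup -G DG1
ASMCMD> md_restore /tmp/dg1_backup --full -G DG1
The data itself is then brought back with a normal RMAN restore, since md_restore re-creates only the disk group metadata.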
-
-
5 Which option identifies the correct sequence to recover the SYSAUX tablespace?
The steps to recover the tablespace are as follows:
1. Mount the CDB.
2. Close all the PDBs.
3. Open the database.
4. Apply the archive redo logs.
5. Restore the data file.
6. Take the SYSAUX tablespace offline.
7. Place the SYSAUX tablespace online.
8. Open all the PDBs.
9. Open the database with RESETLOGS.
10. Execute the command SHUTDOWN ABORT.
-
A. 6, 5, 4, 7
-
B. 10, 1, 2, 5, 8
-
C. 10, 1, 2, 5, 4, 9, 8
-
D. 10, 1, 5, 8, 10
Overall explanation
Correct Answer: C
Explanation/Reference:
* Example:
While evaluating the 12c beta 3, I was not able to do the recovery while testing the "all PDB files lost" scenario. The PDB could not be closed because its SYSTEM data file was missing, so the only option to recover was:
Shutdown cdb (10)
startup mount; (1)
restore pluggable database
recover pluggable database
alter database open;
alter pluggable database name open;
Oracle support says: You should be able to close the pdb and restore/recover the system tablespace of PDB.
* Open the database with the RESETLOGS option after finishing recovery:
SQL> ALTER DATABASE OPEN RESETLOGS;
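Read loosely as commands, the chosen sequence (10, 1, 2, 5, 4, 9, 8) might look like the following sketch; the tablespace name and the use of RMAN for the restore and recovery steps are assumptions:
SQL> SHUTDOWN ABORT;
SQL> STARTUP MOUNT;
SQL> ALTER PLUGGABLE DATABASE ALL CLOSE;
RMAN> RESTORE TABLESPACE sysaux;
RMAN> RECOVER TABLESPACE sysaux;
SQL> ALTER DATABASE OPEN RESETLOGS;
SQL> ALTER PLUGGABLE DATABASE ALL OPEN;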
-
-
6 Which three are direct benefits of the multiprocess, multithreaded architecture of Oracle Database 12c when it is enabled?
-
A. Reduced logical I/O Explanation Incorrect. Logical I/O is determined by the execution plan and buffer access patterns of a statement, not by whether server processes run as operating system processes or threads.
-
B. Reduced virtual memory utilization Explanation Correct. In the multithreaded model, many background and server processes run as threads within a smaller number of operating system processes, so the instance's overall virtual memory footprint is reduced.
-
C. Improved parallel Execution performance Explanation The multiprocess, multithreaded architecture of Oracle Database 12c enables improved parallel execution performance by allowing multiple processes and threads to work simultaneously on different parts of a query, leading to faster query processing.
-
D. Improved Serial Execution performance Explanation The multiprocess, multithreaded architecture of Oracle Database 12c primarily focuses on enhancing parallel execution performance rather than serial execution performance, as it is designed to leverage multiple processes and threads for concurrent processing.
-
E. Reduced physical I/O Explanation Incorrect. Physical I/O depends on the buffer cache and the workload's data access patterns; the process and thread model does not change how much data is read from or written to disk.
-
F. Reduced CPU utilization Explanation The multiprocess, multithreaded architecture of Oracle Database 12c can lead to reduced CPU utilization by efficiently distributing processing tasks across multiple threads and processes, maximizing the utilization of available CPU resources.
Overall explanation
Correct Answer: B, C, F
Explanation/Reference:
* Multiprocess and Multithreaded Oracle Database Systems Multiprocess Oracle Database (also called multiuser Oracle Database) uses several processes to run different parts of the Oracle Database code and additional Oracle processes for the users-- either one process for each connected user or one or more processes shared by multiple users. Most databases are multiuser because a primary advantage of a database is managing data needed by multiple users simultaneously.
Each process in a database instance performs a specific job. By dividing the work of the database and applications into several processes, multiple users and applications can connect to an instance simultaneously while the system gives good performance.
* In previous releases, Oracle processes did not run as threads on UNIX and Linux systems. Starting in Oracle Database 12c, the multithreaded Oracle Database model enables Oracle processes to execute as operating system threads in separate address spaces.
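The multithreaded model is off by default; a minimal sketch of enabling it (THREADED_EXECUTION is a static parameter, so an instance restart is required, and operating system authentication no longer works for SYSDBA connections once it is set):
SQL> ALTER SYSTEM SET THREADED_EXECUTION=TRUE SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;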
-
-
7 Which three are true about the large pool for an Oracle database instance that supports shared server connections?
-
A. Allocates memory for RMAN backup and restore operations Explanation The large pool in an Oracle database instance that supports shared server connections is used to allocate memory for RMAN backup and restore operations. This helps in optimizing the performance of these operations by providing a separate memory area for them.
-
B. Allocates memory for shared and private SQL areas Explanation The large pool also allocates memory for shared and private SQL areas in an Oracle database instance that supports shared server connections. This helps in managing and optimizing the memory usage for SQL operations, improving overall performance.
-
C. Contains stack space Explanation The large pool does not contain stack space in an Oracle database instance that supports shared server connections. Stack space is typically managed separately and is not part of the large pool memory allocation.
-
D. Contains a hash area performing hash joins of tables Explanation The large pool does not contain a hash area performing hash joins of tables in an Oracle database instance that supports shared server connections. Hash joins are typically managed in the shared pool or buffer cache, not in the large pool.
-
E. Contains a cursor area for storing runtime information about cursors Explanation The large pool contains a cursor area for storing runtime information about cursors in an Oracle database instance that supports shared server connections. This helps in efficiently managing cursor-related information and optimizing cursor operations for shared server connections.
Overall explanation
Correct Answer: A, B, E
Explanation/Reference:
The large pool can provide large memory allocations for the following:
/ (B) UGA (User Global Area) for the shared server and the Oracle XA interface (used where transactions interact with multiple databases)
/ Message buffers used in the parallel execution of statements
/ (A) Buffers for Recovery Manager (RMAN) I/O slaves
Note:
* large pool
Optional area in the SGA that provides large memory allocations for backup and restore operations, I/O server processes, and session memory for the shared server and Oracle XA.
* Oracle XA
An external interface that allows global transactions to be coordinated by a transaction manager other than Oracle Database.
* UGA
User global area. Session memory that stores session variables, such as logon information, and can also contain the OLAP pool.
* Configuring the Large Pool
Unlike the shared pool, the large pool does not have an LRU list (not D). Oracle Database does not attempt to age objects out of the large pool. Consider configuring a large pool if the database instance uses any of the following Oracle Database features:
* Shared server
In a shared server architecture, the session memory for each client process is included in the shared pool.
* Parallel query
Parallel query uses shared pool memory to cache parallel execution message buffers.
* Recovery Manager
Recovery Manager (RMAN) uses the shared pool to cache I/O buffers during backup and restore operations.
For I/O server processes, backup, and restore operations, Oracle Database allocates buffers that are a few hundred kilobytes in size.
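To check how the large pool is sized and what is allocated from it, a minimal sketch using the standard parameter and the V$SGASTAT view:
SQL> SHOW PARAMETER large_pool_size
SQL> SELECT pool, name, bytes FROM v$sgastat WHERE pool = 'large pool';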
-
-
8 How can you detect the cause of the degraded performance?
You notice that the performance of your production 24/7 Oracle database has significantly degraded. Sometimes you are not able to connect to the instance because it hangs. You do not want to restart the database instance.
-
A. Enable Memory Access Mode, which reads performance data from SGA. Explanation Enabling Memory Access Mode to read performance data from the System Global Area (SGA) may provide some insights into the database performance, but it may not be the most effective method for detecting the cause of degraded performance in a 24/7 production database without restarting the instance.
-
B. Use emergency monitoring to fetch data directly from SGA for analysis. Explanation Emergency monitoring connects to a hung instance in diagnostic mode and reads performance data directly from the SGA. It is an emergency tool for when the instance cannot be accessed at all, not the standard way to analyze why performance has degraded, which is what ADDM provides.
-
C. Run Automatic Database Diagnostic Monitor (ADDM) to fetch information from the latest Automatic Workload Repository (AWR) snapshots. Explanation Running the Automatic Database Diagnostic Monitor (ADDM) to fetch information from the latest Automatic Workload Repository (AWR) snapshots is a recommended approach for detecting the cause of degraded performance in a production database. ADDM analyzes performance data and provides recommendations based on the AWR snapshots, helping to identify performance issues without the need to restart the database instance.
-
D. Use Active Session History (ASH) data and hang analysis in regular performance monitoring. Explanation Using Active Session History (ASH) data and hang analysis in regular performance monitoring can be helpful in understanding the current state of the database and identifying potential performance issues. However, it may not be the most efficient method for diagnosing the cause of degraded performance in a 24/7 production database without restarting the instance.
-
E. Run ADDM in diagnostic mode. Explanation Running ADDM in diagnostic mode allows for a more in-depth analysis of the database performance by providing detailed recommendations and insights based on the collected data. While running ADDM in diagnostic mode can be beneficial, it may not be the most efficient method for detecting the cause of degraded performance in a production database without restarting the instance.
Overall explanation
Correct Answer: C
Explanation/Reference:
* In most cases, ADDM output should be the first place that a DBA looks when notified of a performance problem.
* Performance degradation of the database occurs when your database was performing optimally in the past, such as 6 months ago, but has gradually degraded to a point where it becomes noticeable to the users. The Automatic Workload Repository (AWR) Compare Periods report enables you to compare database performance between two periods of time.
While an AWR report shows AWR data between two snapshots (or two points in time), the AWR Compare Periods report shows the difference between two periods (or two AWR reports with a total of four snapshots).
Using the AWR Compare Periods report helps you to identify detailed performance attributes and configuration settings that differ between two time periods.
Reference: Resolving Performance Degradation Over Time
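For example, an ADDM report covering the most recent AWR snapshot pair can be produced with the standard script shipped under $ORACLE_HOME/rdbms/admin:
SQL> @?/rdbms/admin/addmrpt.sql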
-
-
9 How does real-time Automatic database Diagnostic Monitor (ADDM) check performance degradation and provide solutions?
In your multitenant container database (CDB) containing pluggable databases (PDB), users complain about performance degradation.
-
A. It collects data from SGA and compares it with a preserved snapshot. Explanation Incorrect. Real-time ADDM does not compare SGA data against a preserved snapshot; it analyzes current in-memory performance data directly so that it can diagnose even a hung or slow instance.
-
B. It collects data from SGA, analyzes it, and provides a report. Explanation Real-time Automatic Database Diagnostic Monitor (ADDM) collects data from the System Global Area (SGA), analyzes it, and generates a detailed report on the performance of the multitenant container database (CDB) and its pluggable databases (PDBs). This analysis helps in identifying any performance degradation issues and provides recommendations for improvement.
-
C. It collects data from SGA and compares it with the latest snapshot. Explanation Incorrect. Real-time ADDM does not rely on a comparison with the latest AWR snapshot; it works on live data fetched from the SGA.
-
D. It collects data from both SGA and PGA, analyzes it, and provides a report. Explanation Incorrect. Real-time ADDM fetches its data from the SGA, analyzes it, and produces a report; it does not also collect PGA data.
Overall explanation
Correct Answer: B
Explanation/Reference:
Note:
* The multitenant architecture enables an Oracle database to function as a multitenant container database (CDB) that includes zero, one, or many customer-created pluggable databases (PDBs). A PDB is a portable collection of schemas, schema objects, and nonschema objects that appears to an Oracle Net client as a non- CDB. All Oracle databases before Oracle Database 12c were non-CDBs.
* The System Global Area (SGA) is a group of shared memory areas that are dedicated to an Oracle "instance" (an instance is your database programs and RAM).
* The PGA (Program or Process Global Area) is a memory area (RAM) that stores data and control information for a single process.
-
-
10 Which statement is true about the archived redo log files?
Your database is running in ARCHIVELOG mode.
The following parameters are set in your database instance:
LOG_ARCHIVE_FORMAT = arch+%t_%r.arc
LOG_ARCHIVE_DEST_1 = 'LOCATION=/disk1/archive'
DB_RECOVERY_FILE_DEST_SIZE = 50G
DB_RECOVERY_FILE_DEST = '/u01/oradata'
-
A. They are created only in the location specified by the LOG_ARCHIVE_DEST_1 parameter. Explanation The archived redo log files are created only in the location specified by the LOG_ARCHIVE_DEST_1 parameter, which is '/disk1/archive'. This parameter defines the primary destination for the archived redo log files in ARCHIVELOG mode.
-
B. They are created only in the Fast Recovery Area. Explanation The archived redo log files are not created only in the Fast Recovery Area, as the primary location for archived redo log files is specified by the LOG_ARCHIVE_DEST_1 parameter. The Fast Recovery Area is used for other recovery-related files and operations.
-
C. They are created in the location specified by the LOG_ARCHIVE_DEST_1 parameter and in the default location $ORACLE_HOME/dbs/arch. Explanation The archived redo log files are not created in the default location $ORACLE_HOME/dbs/arch. The primary location for archived redo log files is specified by the LOG_ARCHIVE_DEST_1 parameter, which is '/disk1/archive'.
-
D. They are created in the location specified by the LOG_ARCHIVE_DEST_1 parameter and the location specified by the DB_RECOVERY_FILE_DEST parameter. Explanation The archived redo log files are created in the location specified by the LOG_ARCHIVE_DEST_1 parameter, which is '/disk1/archive', and not in the location specified by the DB_RECOVERY_FILE_DEST parameter, which is '/u01/oradata'. The DB_RECOVERY_FILE_DEST parameter sets the Fast Recovery Area location, not the archived redo log location.
Overall explanation
Correct Answer: A
Explanation/Reference:
You can choose to archive redo logs to a single destination or to multiple destinations. Destinations can be local--within the local file system or an Oracle Automatic Storage Management (Oracle ASM) disk group--or remote (on a standby database). When you archive to multiple destinations, a copy of each filled redo log file is written to each destination. These redundant copies help ensure that archived logs are always available in the event of a failure at one of the destinations. To archive to only a single destination, specify that destination using the LOG_ARCHIVE_DEST initialization parameter. To archive to multiple destinations, you can choose to archive to two or more locations using the LOG_ARCHIVE_DEST_n initialization parameters, or to archive only to a primary and secondary destination using the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST initialization parameters.
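A quick way to confirm the active archiving configuration from SQL*Plus, as a minimal sketch:
SQL> ARCHIVE LOG LIST
SQL> SHOW PARAMETER log_archive_dest_1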
-
11 Which three functions are performed by the SQL Tuning Advisor?
-
A. Building and implementing SQL profiles Explanation Building and implementing SQL profiles is one of the functions performed by the SQL Tuning Advisor. SQL profiles are used to improve the performance of SQL statements by providing additional information to the query optimizer.
-
B. Recommending the optimization of materialized views Explanation Recommending the optimization of materialized views is not a function performed by the SQL Tuning Advisor. Materialized views are physical copies of the result set of a query, and their optimization is typically handled by the Database Administrator.
-
C. Checking query objects for missing and stale statistics Explanation Checking query objects for missing and stale statistics is one of the functions performed by the SQL Tuning Advisor. It analyzes the statistics of query objects to ensure that the optimizer has accurate information for generating efficient execution plans.
-
D. Recommending bitmap, function-based, and B-tree indexes Explanation Recommending bitmap, function-based, and B-tree indexes is not a function performed by the SQL Tuning Advisor. While indexes play a crucial role in query optimization, the SQL Tuning Advisor focuses more on analyzing SQL statements and providing recommendations based on their execution plans.
-
E. Recommending the restructuring of SQL queries that are using bad plans Explanation Recommending the restructuring of SQL queries that are using bad plans is one of the functions performed by the SQL Tuning Advisor. It identifies SQL queries with inefficient execution plans and suggests ways to restructure them for better performance.
Overall explanation
Correct Answer: A, C, E
Explanation/Reference:
The SQL Tuning Advisor takes one or more SQL statements as an input and invokes the Automatic Tuning Optimizer to perform SQL tuning on the statements. The output of the SQL Tuning Advisor is in the form of an advice or recommendations, along with a rationale for each recommendation and its expected benefit. The recommendation relates to collection of statistics on objects (C), creation of new indexes, restructuring of the SQL statement (E), or creation of a SQL profile (A). You can choose to accept the recommendation to complete the tuning of the SQL statements.
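A minimal sketch of driving the advisor manually through the DBMS_SQLTUNE package; the SQL_ID value is a placeholder:
SQL> VARIABLE tname VARCHAR2(64)
SQL> EXEC :tname := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => 'abcd1234efgh5'); -- placeholder SQL_ID
SQL> EXEC DBMS_SQLTUNE.EXECUTE_TUNING_TASK(:tname);
SQL> SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK(:tname) FROM dual;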
-
-
12 Which three steps should you perform to recover the control file and make the database fully operational?
Your multitenant container database (CDB) contains three pluggable databases (PDBs). You find that the control file is damaged. You plan to use RMAN to recover the control file. There are no startup triggers associated with the PDBs.
-
A. Mount the container database (CDB) and restore the control file from the control file auto backup. Explanation Incorrect. The database cannot be mounted without a control file, so the control file cannot be restored from the autobackup in the MOUNT state. The instance must first be started in NOMOUNT.
-
B. Recover and open the CDB in NORMAL mode. Explanation Incorrect. After a backup control file has been restored, the database must be opened with the RESETLOGS option; it cannot simply be opened in NORMAL mode.
-
C. Mount the CDB and then recover and open the database, with the RESETLOGS option. Explanation Correct. Once the control file has been restored, the CDB is mounted, the RECOVER command is run, and the database is opened with RESETLOGS, which is always required after restoring a backup control file.
-
D. Open all the pluggable databases. Explanation Correct. Because there are no startup triggers, the PDBs remain in the mounted state after the CDB opens, so they must be opened explicitly to make the database fully operational.
-
E. Recover each pluggable database. Explanation Incorrect. Individual recovery of each PDB is not needed; only the control file was damaged, and the RECOVER step performed at the CDB level covers the PDBs.
-
F. Start the database instance in the nomount stage and restore the control file from control file auto backup. Explanation Starting the database instance in the nomount stage and restoring the control file from the control file auto backup is a critical step in the recovery process. This ensures that the control file is correctly restored before proceeding with further recovery steps.
Overall explanation
Correct Answer: C, D, F
Explanation/Reference:
Step 1: F
Step 2: C: If all copies of the current control file are lost or damaged, then you must restore and mount a backup control file. You must then run the RECOVER command, even if no data files have been restored, and open the database with the RESETLOGS option.
Step 3: D
Note:
* RMAN and Oracle Enterprise Manager Cloud Control (Cloud Control) provide full support for backup and recovery in a multitenant environment. You can back up and recover a whole multitenant container database (CDB), root only, or one or more pluggable databases (PDBs).
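Put together as RMAN commands, steps F, C, and D might look like this minimal sketch, assuming control file autobackups are configured:
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN> ALTER DATABASE MOUNT;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;
RMAN> ALTER PLUGGABLE DATABASE ALL OPEN;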
-
-
13 Identify three valid options for adding a pluggable database (PDB) to an existing multitenant container database (CDB).
-
A. Use the CREATE PLUGGABLE DATABASE statement to create a PDB using the files from the SEED. Explanation Using the CREATE PLUGGABLE DATABASE statement to create a PDB using the files from the SEED is a valid option for adding a PDB to an existing CDB. This method allows for the creation of a new PDB based on the template provided by the SEED PDB.
-
B. Use the CREATE DATABASE . . . ENABLE PLUGGABLE DATABASE statement to provision a PDB by copying files from the SEED. Explanation Using the CREATE DATABASE . . . ENABLE PLUGGABLE DATABASE statement to provision a PDB by copying files from the SEED is not a valid option for adding a PDB to an existing CDB. This statement is used to create a new CDB with a PDB enabled, not to add a new PDB to an existing CDB.
-
C. Use the DBMS_PDB package to clone an existing PDB. Explanation Using the DBMS_PDB package to clone an existing PDB is a valid option for adding a PDB to an existing CDB. This method allows for the creation of a new PDB by cloning an existing PDB within the same CDB.
-
D. Use the DBMS_PDB package to plug an Oracle 12c non-CDB database into an existing CDB. Explanation This is a valid option: DBMS_PDB.DESCRIBE generates an XML metadata file for an Oracle 12c non-CDB, which can then be plugged into an existing CDB with CREATE PLUGGABLE DATABASE ... USING.
-
E. Use the DBMS_PDB package to plug an Oracle 11g Release 2 (11.2.0.3.0) non-CDB database into an existing CDB. Explanation This is not a valid option: a database must be at Oracle Database 12c before it can be described and plugged in as a PDB, so an 11g Release 2 non-CDB would first have to be upgraded to 12c.
Overall explanation
Correct Answer: A, C, D
Explanation/Reference:
Use the CREATE PLUGGABLE DATABASE statement to create a pluggable database (PDB).
This statement enables you to perform the following tasks:
* (A) Create a PDB by using the seed as a template Use the create_pdb_from_seed clause to create a PDB by using the seed in the multitenant container database (CDB) as a template. The files associated with the seed are copied to a new location and the copied files are then associated with the new PDB.
* (C) Create a PDB by cloning an existing PDB Use the create_pdb_clone clause to create a PDB by copying an existing PDB (the source PDB) and then plugging the copy into the CDB. The files associated with the source PDB are copied to a new location and the copied files are associated with the new PDB. This operation is called cloning a PDB.
The source PDB can be plugged in or unplugged. If plugged in, then the source PDB can be in the same CDB or in a remote CDB. If the source PDB is in a remote CDB, then a database link is used to connect to the remote CDB and copy the files.
* Create a PDB by plugging an unplugged PDB or a non-CDB into a CDB Use the create_pdb_from_xml clause to plug an unplugged PDB or a non-CDB into a CDB, using an XML metadata file.
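A minimal sketch of option A, creating a PDB from the seed; the PDB name, admin credentials, and file name mapping are assumptions:
SQL> CREATE PLUGGABLE DATABASE hr_pdb ADMIN USER pdb_admin IDENTIFIED BY secret FILE_NAME_CONVERT = ('/pdbseed/', '/hr_pdb/');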
-
-
14 Examine this command:
SQL> exec DBMS_STATS.SET_TABLE_PREFS ('SH', 'CUSTOMERS', 'PUBLISH', 'false');
Which three statements are true about the effect of this command?
-
A. Statistics collection is not done for the CUSTOMERS table when schema stats are gathered. Explanation Incorrect. Setting the PUBLISH preference to false does not disable statistics collection; statistics are still gathered for the CUSTOMERS table during a schema-level collection, but they are stored as pending instead of being published.
-
B. Statistics collection is not done for the CUSTOMERS table when database stats are gathered. Explanation Incorrect. The PUBLISH preference does not stop statistics collection at any level; when database stats are gathered, statistics for the CUSTOMERS table are still collected but are kept as pending rather than published.
-
C. Any existing statistics for the CUSTOMERS table are still available to the optimizer at parse time. Explanation Despite setting the PUBLISH preference to false for the CUSTOMERS table, any existing statistics for the table will still be available to the optimizer at parse time. This means that the optimizer can still use the existing statistics for query optimization even though new statistics are not being collected.
-
D. Statistics gathered on the CUSTOMERS table when schema stats are gathered are stored as pending statistics. Explanation Correct. With the PUBLISH preference set to false, statistics gathered for the CUSTOMERS table during schema-level collection are not published automatically; they are saved as pending statistics until they are explicitly published.
-
E. Statistics gathered on the CUSTOMERS table when database stats are gathered are stored as pending statistics. Explanation Correct. The table-level PUBLISH preference applies regardless of whether the gathering is initiated at the schema or the database level, so statistics gathered during database-level collection are also stored as pending.
Overall explanation
Correct Answer: C, D, E
Explanation/Reference:
* SET_TABLE_PREFS Procedure: This procedure is used to set the statistics preferences of the specified table in the specified schema.
* Example:
Using Pending Statistics: Assume many modifications have been made to the employees table since the last time statistics were gathered. To ensure that the cost-based optimizer is still picking the best plan, statistics should be gathered once again; however, the user is concerned that new statistics will cause the optimizer to choose bad plans when the current ones are acceptable. The user can do the following:
EXEC DBMS_STATS.SET_TABLE_PREFS('hr', 'employees', 'PUBLISH', 'false');
By setting the employees table's PUBLISH preference to FALSE, any statistics gathered from now on will not be automatically published. The newly gathered statistics will be marked as pending.
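Continuing the example, a minimal sketch of gathering, inspecting, and then publishing the pending statistics; the calls and the DBA_TAB_PENDING_STATS view are standard DBMS_STATS functionality:
SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS('SH', 'CUSTOMERS');
SQL> SELECT table_name, num_rows FROM dba_tab_pending_stats WHERE owner = 'SH';
SQL> EXEC DBMS_STATS.PUBLISH_PENDING_STATS('SH', 'CUSTOMERS');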
-
-
15 Which three are true concerning a multitenant container database with three pluggable databases?
-
A. All administration tasks must be done to a specific pluggable database. Explanation In a multitenant container database, administration tasks can be performed at either the container database level or the individual pluggable database level. It is not mandatory to perform all administration tasks on a specific pluggable database.
-
B. The pluggable databases increase patching time. Explanation Patching time is actually reduced in a multitenant environment because you can apply patches to the entire container database and all pluggable databases at once. This centralized patching approach saves time compared to patching multiple standalone databases individually.
-
C. The pluggable databases reduce administration effort. Explanation Pluggable databases can reduce administration effort because common tasks, such as backup and recovery, security, and resource management, can be managed at the container database level and automatically apply to all pluggable databases. This centralized management simplifies administration tasks.
-
D. The pluggable databases are patched together. Explanation Pluggable databases can be patched together in a multitenant environment by applying patches to the container database. The patches are then automatically applied to all pluggable databases within the container, ensuring consistency and efficiency in the patching process.
-
E. Pluggable databases are only used for database consolidation. Explanation While pluggable databases are commonly used for database consolidation in a multitenant environment, they can also be used for other purposes such as application isolation, easier database provisioning, and resource management. Pluggable databases offer flexibility in managing multiple databases within a single container database.
Overall explanation
Correct Answer: C, D, E
Explanation/Reference:
The benefits of Oracle Multitenant are brought by implementing a pure deployment choice. The following list calls out the most compelling examples.
* High consolidation density. (E) The many pluggable databases in a single multitenant container database share its memory and background processes, letting you operate many more pluggable databases on a particular platform than you can single databases that use the old architecture. This is the same benefit that schema-based consolidation brings.
* Rapid provisioning and cloning using SQL.
* New paradigms for rapid patching and upgrades. (D, not B)
The investment of time and effort to patch one multitenant container database results in patching all of its many pluggable databases. To patch a single pluggable database, you simply unplug/plug to a multitenant container database at a different Oracle Database software version.
* (C, not A) Manage many databases as one.
By consolidating existing databases as pluggable databases, administrators can manage many databases as one. For example, tasks like backup and disaster recovery are performed at the multitenant container database level.
* Dynamic resource management between pluggable databases. In Oracle Database 12c, Resource Manager is extended with specific functionality to control the competition for resources between the pluggable databases within a multitenant container database.
Note:
* Oracle Multitenant is a new option for Oracle Database 12c Enterprise Edition that helps customers reduce IT costs by simplifying consolidation, provisioning, upgrades, and more. It is supported by a new architecture that allows a multitenant container database to hold many pluggable databases. And it fully complements other options, including Oracle Real Application Clusters and Oracle Active Data Guard. An existing database can be simply adopted, with no change, as a pluggable database; and no changes are needed in the other tiers of the application.
Reference: 12c Oracle Multitenant
-
-
16 Which three statements are true concerning unplugging a pluggable database (PDB)?
-
A. The PDB must be open in read only mode. Explanation Unplugging a PDB does not require it to be open in read-only mode. The PDB can be in any state, including open, closed, or read-only, before it is unplugged.
-
B. The PDB must be closed. Explanation Before unplugging a PDB, it must be closed to ensure that all transactions are completed and the database is in a consistent state. Unplugging a PDB from a CDB requires it to be closed.
-
C. The unplugged PDB becomes a non-CDB. Explanation When a PDB is unplugged from a CDB, it does not automatically become a non-CDB. The unplugged PDB retains its pluggable nature and can be plugged into another CDB.
-
D. The unplugged PDB can be plugged into the same multitenant container database (CDB) Explanation An unplugged PDB can be plugged back into the same multitenant container database (CDB) from which it was unplugged. This allows for easy movement of PDBs between different CDBs within the same Oracle Database environment
-
E. The unplugged PDB can be plugged into another CDB. Explanation One of the key benefits of unplugging a PDB is the ability to plug it into another CDB. This enables easy migration and deployment of PDBs across different Oracle Database environments.
-
F. The PDB data files are automatically removed from disk. Explanation When a PDB is unplugged, the PDB data files are not automatically removed from disk. The data files remain intact and can be reused when the PDB is plugged back into a CDB or into another CDB.
Overall explanation
Correct Answer: B, D, E
Explanation/Reference:
B, not A: The PDB must be closed before unplugging it.
D: An unplugged PDB contains data dictionary tables, and some of the columns in these encode information in an endianness-sensitive way. There is no supported way to handle the conversion of such columns automatically. This means, quite simply, that an unplugged PDB cannot be moved across an endianness difference.
E (not F): To exploit the new unplug/plug paradigm for patching the Oracle version most effectively, the source and destination CDBs should share a filesystem so that the PDB's datafiles can remain in place.
Reference: Oracle White Paper, Oracle Multitenant
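A minimal sketch of the unplug and re-plug cycle; the PDB name and XML file path are assumptions:
SQL> ALTER PLUGGABLE DATABASE hr_pdb CLOSE IMMEDIATE;
SQL> ALTER PLUGGABLE DATABASE hr_pdb UNPLUG INTO '/u01/app/oracle/hr_pdb.xml';
SQL> DROP PLUGGABLE DATABASE hr_pdb KEEP DATAFILES;
SQL> CREATE PLUGGABLE DATABASE hr_pdb USING '/u01/app/oracle/hr_pdb.xml' NOCOPY;
The final statement can be run in the same CDB or in another CDB, matching options D and E.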
-
-
17 You wish to enable an audit policy for all database users, except SYS, SYSTEM, and SCOTT.
You issue the following statements:
- SQL> AUDIT POLICY ORA_DATABASE_PARAMETER EXCEPT SYS;
- SQL> AUDIT POLICY ORA_DATABASE_PARAMETER EXCEPT SYSTEM;
- SQL> AUDIT POLICY ORA_DATABASE_PARAMETER EXCEPT SCOTT;
For which database users is the audit policy now active?
-
A. All users except SYS Explanation Incorrect. Although the first statement excludes SYS, each subsequent AUDIT POLICY ... EXCEPT statement replaces the previous exception list, so the SYS exclusion no longer applies.
-
B. All users except SCOTT Explanation The audit policy is now active for all users except SCOTT. The statement "AUDIT POLICY ORA_DATABASE_PARAMETER EXCEPT SCOTT;" explicitly excludes the SCOTT user from the audit policy.
-
C. All users except SYS and SCOTT Explanation Incorrect. The exception lists do not accumulate: the second statement discarded the SYS exception and the third discarded the SYSTEM exception, leaving only SCOTT excluded.
-
D. All users except SYS, SYSTEM, and SCOTT Explanation Incorrect. Oracle Database keeps only the most recent EXCEPT list, so SYS and SYSTEM are audited; only SCOTT is excluded.
Overall explanation
Correct Answer: B
Explanation/Reference:
If you run multiple AUDIT statements on the same unified audit policy but specify different EXCEPT users, then Oracle Database uses the last exception user list, not any of the users from the preceding lists. This means the effect of the earlier AUDIT POLICY ... EXCEPT statements are overridden by the latest AUDIT POLICY ... EXCEPT statement.
Note:
* The ORA_DATABASE_PARAMETER policy audits commonly used Oracle Database parameter settings.
By default, this policy is not enabled.
* You can use the keyword ALL to audit all actions. The following example shows how to audit all actions on the HR.EMPLOYEES table, except actions by user pmulligan.
Example Auditing All Actions on a Table
- CREATE AUDIT POLICY all_actions_on_hr_emp_pol
- ACTIONS ALL ON HR.EMPLOYEES;
- AUDIT POLICY all_actions_on_hr_emp_pol EXCEPT pmulligan;
Reference: Oracle Database Security Guide 12c, About Enabling Unified Audit Policies
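To verify which policies are enabled and for whom, a minimal sketch against the documented unified auditing view:
SQL> SELECT user_name, policy_name, enabled_opt FROM audit_unified_enabled_policies WHERE policy_name = 'ORA_DATABASE_PARAMETER';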
-
18 Which two statements are true regarding the command?
On your Oracle 12c database, you invoked SQL*Loader to load data into the EMPLOYEES table in the HR schema by issuing the following command:
$> sqlldr hr/hr@pdb table=employees
-
A. It succeeds with default settings if the EMPLOYEES table belonging to HR is already defined in the database. Explanation This statement is true because if the EMPLOYEES table belonging to HR is already defined in the database, the SQL*Loader command will succeed with default settings, allowing data to be loaded into the table.
-
B. It fails because no SQL*Loader data file location is specified. Explanation This statement is incorrect because, in express mode, SQL*Loader defaults to a data file named employees.dat in the current directory, so the absence of an explicit data file location does not cause the command to fail.
-
C. It fails if the HR user does not have the CREATE ANY DIRECTORY privilege. Explanation This statement is true because express mode performs an external-table load by default, which requires a directory object; if a suitable directory object does not already exist, the HR user needs the CREATE ANY DIRECTORY privilege for the load to proceed.
-
D. It fails because no SQL*Loader control file location is specified. Explanation This statement is incorrect because the command runs SQL*Loader in express mode, which does not use a control file at all; SQL*Loader generates what it needs from the table definition.
Overall explanation
Correct Answer: A, C
Explanation/Reference:
* SQL*Loader is invoked when you specify the sqlldr command and, optionally, parameters that establish session characteristics. When only the TABLE parameter is supplied, as here, SQL*Loader runs in express mode: no control file is used, and the data is read by default from a file named after the table (employees.dat) in the current directory.
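For comparison, the same express-mode load with the data file named explicitly; the DATA parameter value shown is an illustration:
$> sqlldr hr/hr@pdb table=employees data=employees.dat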
-
-
19 Which technique will move the table and indexes while maintaining the highest level of availability to the application?
You are the DBA supporting an Oracle 11g Release 2 database and wish to move a table containing several DATE, CHAR, VARCHAR2, and NUMBER data types, and the table's indexes, to another tablespace.
The table does not have a primary key and is used by an OLTP application.
-
A. Oracle Data Pump. Explanation Oracle Data Pump requires the table to be exported and then re-imported, during which the table is unavailable to the OLTP application. It therefore does not provide the highest level of availability for this move.
-
B. An ALTER TABLE MOVE to move the table and ALTER INDEX REBUILD to move the indexes. Explanation Using ALTER TABLE MOVE and ALTER INDEX REBUILD may cause downtime as the table and indexes are being physically moved. This approach may not provide the highest level of availability to the application during the migration.
-
C. An ALTER TABLE MOVE to move the table and ALTER INDEX REBUILD ONLINE to move the indexes. Explanation While using ALTER TABLE MOVE and ALTER INDEX REBUILD ONLINE reduces downtime compared to the previous option, it still may not provide the highest level of availability to the application during the migration process.
-
D. Online Table Redefinition. Explanation Online Table Redefinition (DBMS_REDEFINITION) moves the table and its indexes to another tablespace while the table remains available for queries and DML throughout the operation. Because the table has no primary key, the redefinition can be performed by rowid, making this the technique with the highest availability.
-
E. Edition-Based Table Redefinition. Explanation Edition-Based Table Redefinition is used for online application upgrades and does not directly address the requirement of moving a table and indexes to another tablespace while ensuring the highest level of availability to the application.
Overall explanation
Correct Answer: D
Explanation/Reference:
* Oracle Database provides a mechanism to make table structure modifications without significantly affecting the availability of the table. The mechanism is called online table redefinition. Redefining tables online provides a substantial increase in availability compared to traditional methods of redefining tables.
* To redefine a table online:
Choose the redefinition method: by key or by rowid
* By key--Select a primary key or pseudo-primary key to use for the redefinition. Pseudo- primary keys are unique keys with all component columns having NOT NULL constraints. For this method, the versions of the tables before and after redefinition should have the same primary key columns. This is the preferred and default method of redefinition.
* By rowid--Use this method if no key is available. In this method, a hidden column named M_ROW$$ is added to the post-redefined version of the table. It is recommended that this column be dropped or marked as unused after the redefinition is complete. If COMPATIBLE is set to 10.2.0 or higher, the final phase of redefinition automatically sets this column unused. You can then use the ALTER TABLE ... DROP UNUSED COLUMNS statement to drop it.
You cannot use this method on index-organized tables.
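A minimal sketch of a rowid-based online redefinition for a table without a primary key; the schema, table, and interim table names are assumptions, the interim table is assumed to have been created beforehand in the target tablespace, and dependent objects such as indexes are carried over with COPY_TABLE_DEPENDENTS:
SQL> EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('APP', 'ORDERS', DBMS_REDEFINITION.CONS_USE_ROWID);
SQL> EXEC DBMS_REDEFINITION.START_REDEF_TABLE('APP', 'ORDERS', 'ORDERS_INTERIM', options_flag => DBMS_REDEFINITION.CONS_USE_ROWID);
SQL> VARIABLE num_errors NUMBER
SQL> EXEC DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('APP', 'ORDERS', 'ORDERS_INTERIM', num_errors => :num_errors);
SQL> EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('APP', 'ORDERS', 'ORDERS_INTERIM');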
Note:
* When you rebuild an index, you use an existing index as the data source. Creating an index in this manner enables you to change storage characteristics or move to a new tablespace. Rebuilding an index based on an existing data source removes intra-block fragmentation. Compared to dropping the index and using the CREATE INDEX statement, re-creating an existing index offers better performance.
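For instance, an existing index could be relocated while remaining available for queries; the index and tablespace names here are hypothetical:
ALTER INDEX app.sales_date_ix REBUILD ONLINE TABLESPACE new_tbs;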
Incorrect:
Not E: Edition-based redefinition enables you to upgrade the database component of an application while it is in use, thereby minimizing or eliminating down time.
-
-
20 What are two benefits of installing Grid Infrastructure software for a stand-alone server before installing and creating an Oracle database?
-
A. Effectively implements role separation Explanation Installing Grid Infrastructure software for a stand-alone server allows for effective role separation by separating the management of the Oracle software from the management of the database itself. This separation enhances security and simplifies maintenance tasks.
-
B. Enables you to take advantage of Oracle Managed Files. Explanation Installing Grid Infrastructure software before creating an Oracle database allows you to take advantage of Oracle Managed Files (OMF). OMF simplifies database administration by automatically managing file naming and placement, reducing the need for manual file management tasks.
-
C. Automatically registers the database with Oracle Restart. Explanation When Oracle Grid Infrastructure for a standalone server is installed before the database is created, the database is automatically registered with Oracle Restart, which can then monitor the instance and restart it and its components after a failure. If Grid Infrastructure is installed afterwards, the database must be registered with Oracle Restart manually.
-
D. Helps you to easily upgrade the database from a prior release. Explanation Installing Grid Infrastructure software for a stand-alone server does not directly help in easily upgrading the database from a prior release. However, having Grid Infrastructure in place can provide a more stable foundation for the database, which can potentially simplify the upgrade process.
-
E. Enables the Installation of Grid Infrastructure files on block or raw devices. Explanation This is not a benefit: installing Grid Infrastructure first does not enable storage of its files on block or raw devices, and raw device storage is no longer supported for new installations in Oracle Database 12c.
Overall explanation:
Correct Answer: A, C
Explanation/Reference:
To use Oracle ASM or Oracle Restart, you must first install Oracle Grid Infrastructure for a standalone server before you install and create the database. Otherwise, you must manually register the database with Oracle Restart.
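For reference, the manual registration mentioned above is performed with srvctl; the database name and Oracle home path below are placeholders:
srvctl add database -db orcl -oraclehome /u01/app/oracle/product/12.1.0/dbhome_1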
Note:
The Oracle Grid Infrastructure for a standalone server provides the infrastructure to include your single-instance database in an enterprise grid architecture. Oracle Database 12c combines these infrastructure products into one software installation called the Oracle Grid Infrastructure home. On a single-instance database, the Oracle Grid Infrastructure home includes Oracle Restart and Oracle Automatic Storage Management (Oracle ASM) software.
Reference: Oracle Grid Infrastructure for a Standalone Server, Oracle Database, Installation Guide, 12c
-
-
21 Identify two correct statements about multitenant architectures.
-
A. Multitenant architecture can be deployed only in a Real Application Clusters (RAC) configuration. Explanation Multitenant architecture can be deployed in both Real Application Clusters (RAC) and non-RAC configurations. RAC is not a mandatory requirement for implementing a multitenant architecture.
-
B. Multiple pluggable databases (PDBs) share certain multitenant container database (CDB) resources. Explanation Multiple pluggable databases (PDBs) share certain multitenant container database (CDB) resources such as memory and background processes. This sharing of resources allows for efficient resource utilization in a multitenant environment.
-
C. Multiple CDBs share certain PDB resources. Explanation Multiple CDBs do not share PDB resources. Each PDB is associated with a specific CDB and shares resources only within that particular CDB.
-
D. Multiple non-RAC CDB instances can mount the same PDB as long as they are on the same server. Explanation Multiple non-RAC CDB instances can mount the same PDB as long as they are on the same server. This allows for flexibility in managing and accessing PDBs across different CDB instances on the same server.
-
E. Patches are always applied at the CDB level. Explanation Patches can be applied at both the CDB level and the PDB level. While some patches may need to be applied at the CDB level, others may be specific to individual PDBs. It is not always necessary to apply patches only at the CDB level.
-
F. A PDB can have a private undo tablespace. Explanation In Oracle Database 12c Release 1 a PDB cannot have its own undo tablespace. Undo is managed at the CDB level, with a single undo tablespace per instance shared by all PDBs, so this statement is incorrect.
Overall explanation:
Correct Answer: C, D
Explanation/Reference:
Not A: Oracle Multitenant is a new option for Oracle Database 12c Enterprise Edition that helps customers reduce IT costs by simplifying consolidation, provisioning, upgrades, and more. It is supported by a new architecture that allows a container database to hold many pluggable databases. And it fully complements other options, including Oracle Real Application Clusters and Oracle Active Data Guard. An existing database can be simply adopted, with no change, as a pluggable database; and no changes are needed in the other tiers of the application.
Not E: New paradigms for rapid patching and upgrades.
The investment of time and effort to patch one multitenant container database results in patching all of its many pluggable databases. To patch a single pluggable database, you simply unplug/plug to a multitenant container database at a different Oracle Database software version.
Not F:
* Redo and undo go hand in hand, and so the CDB as a whole has a single undo tablespace per RAC instance.
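As an illustration, connected to the root container you can list undo tablespaces across all containers and see that undo does not belong to individual PDBs; this sketch assumes access to the CDB_TABLESPACES view:
SELECT con_id, tablespace_name FROM cdb_tablespaces WHERE contents = 'UNDO';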
-
-
22 Which two actions does the script perform?
You upgrade your Oracle database in a multiprocessor environment. As recommended, you execute the following script:
SQL> @utlrp.sql
-
A. Parallel compilation of only the stored PL/SQL code Explanation The script @utlrp.sql is not limited to stored PL/SQL code: it recompiles all invalid objects in the database, so the word "only" makes this option incorrect.
-
B. Sequential recompilation of only the stored PL/SQL code Explanation The script @utlrp.sql recompiles in parallel, not sequentially, and it is not limited to stored PL/SQL code.
-
C. Parallel recompilation of any stored PL/SQL code Explanation The script @utlrp.sql recompiles any invalid stored PL/SQL code in parallel, with the degree of parallelism derived from the CPU_COUNT parameter.
-
D. Sequential recompilation of any stored PL/SQL code Explanation The script @utlrp.sql does not recompile sequentially; it handles recompilation in a parallel manner.
-
E. Parallel recompilation of Java code Explanation The script @utlrp.sql recompiles all invalid objects in the database, including Java classes, and it does so in parallel.
-
F. Sequential recompilation of Java code Explanation The script @utlrp.sql does recompile invalid Java code, but in parallel rather than sequentially.
Overall explanation:
Correct Answer: C, E
Explanation/Reference:
utlrp.sql and utlprp.sql
The utlrp.sql and utlprp.sql scripts are provided by Oracle to recompile all invalid objects in the database. They are typically run after major database changes such as upgrades or patches. They are located in the $ORACLE_HOME/rdbms/admin directory and provide a wrapper on the UTL_RECOMP package. The utlrp.sql script simply calls the utlprp.sql script with a command line parameter of "0". The utlprp.sql script accepts a single integer parameter that indicates the level of parallelism as follows.
0 - The level of parallelism is derived based on the CPU_COUNT parameter.
1 - The recompilation is run serially, one object at a time.
N - The recompilation is run in parallel with "N" threads.
Both scripts must be run as the SYS user, or another user with the SYSDBA privilege, to work correctly.
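As an illustration, you can run the wrapper script or call the underlying package directly; the thread count below is an arbitrary example, and the final query simply confirms that no objects remain invalid:
SQL> @?/rdbms/admin/utlrp.sql
SQL> EXEC UTL_RECOMP.RECOMP_PARALLEL(4);
SQL> SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';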
Reference: Recompiling Invalid Schema Objects
-
-
23 Which statement is true concerning dropping a pluggable database (PDB)?
-
A. The PDB must be open in read-only mode. Explanation This is incorrect. A PDB cannot be dropped while it is open in any mode, including read-only; it must be closed or already unplugged before DROP PLUGGABLE DATABASE is issued.
-
B. The PDB must be in mount state. Explanation Being in the mount state is not an absolute requirement: a PDB that has already been unplugged can also be dropped, so this statement does not have to hold. What matters is that the PDB is not open when it is dropped.
-
C. The PDB must be unplugged. Explanation Before a PDB is dropped it is closed and unplugged; dropping the unplugged PDB then removes its remaining metadata from the CDB. Dropping with the KEEP DATAFILES clause preserves the data files so that the unplugged PDB can later be plugged in elsewhere.
-
D. The PDB data files are always removed from disk. Explanation The data files are not always removed: DROP PLUGGABLE DATABASE ... INCLUDING DATAFILES deletes them, while the default KEEP DATAFILES clause leaves them on disk, where the DBA can remove them manually if they are no longer needed.
-
E. A dropped PDB can never be plugged back into a multitenant container database (CDB). Explanation This is incorrect. If the PDB was unplugged before being dropped with KEEP DATAFILES, its XML manifest and data files remain available, and the PDB can later be plugged into the same or another CDB.
Overall explanation:
Correct Answer: C
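A typical unplug-and-drop sequence, issued from the root container, might look like this sketch; the PDB name and manifest path are placeholders:
ALTER PLUGGABLE DATABASE hr_pdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE hr_pdb UNPLUG INTO '/u01/app/oracle/hr_pdb.xml';
DROP PLUGGABLE DATABASE hr_pdb KEEP DATAFILES;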
-
-
24 Which three features work together to allow a SQL statement to have different cursors for the same statement based on different selectivity ranges?
-
A. Bind Variable Peeking Explanation Bind Variable Peeking is a feature in Oracle Database that allows the optimizer to peek at the value of bind variables at the first execution of a SQL statement and use that information to generate an optimal execution plan. This can result in different cursors being generated for the same statement based on different selectivity ranges.
-
B. SQL Plan Baselines Explanation SQL Plan Baselines are used to capture and preserve efficient execution plans for SQL statements. While they can help in stabilizing the execution plans, they do not directly contribute to having different cursors for the same statement based on different selectivity ranges.
-
C. Adaptive Cursor Sharing Explanation Adaptive Cursor Sharing is a feature in Oracle Database that allows the optimizer to generate multiple execution plans for a SQL statement based on different bind variable values. This feature enables the database to use different cursors for the same statement based on different selectivity ranges.
-
D. Bind variable used in a SQL statement Explanation Bind variables are a prerequisite for this behavior: bind peeking and adaptive cursor sharing apply only to statements that use bind variables, so the use of a bind variable in the SQL statement is one of the features that work together to produce different cursors for different selectivity ranges.
-
E. Literals in a SQL statement Explanation With literals, each distinct value produces a different SQL text and therefore a different parent cursor, so literals do not lead to multiple cursors for the same statement based on selectivity ranges.
Overall explanation:
Correct Answer: A, C, D
Explanation/Reference:
* In bind variable peeking (also known as bind peeking), the optimizer looks at the value in a bind variable when the database performs a hard parse of a statement.
When a query uses literals, the optimizer can use the literal values to find the best plan. However, when a query uses bind variables, the optimizer must select the best plan without the presence of literals in the SQL text. This task can be extremely difficult. By peeking at bind values the optimizer can determine the selectivity of a WHERE clause condition as if literals had been used, thereby improving the plan.
C: Oracle 11g/12c uses Adaptive Cursor Sharing to solve this problem by allowing the server to compare the effectiveness of execution plans between executions with different bind variable values. If it notices suboptimal plans, it allows certain bind variable values, or ranges of values, to use alternate execution plans for the same statement. This functionality requires no additional configuration.
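One way to observe this behavior (a sketch; the comment tag used to find the statement is an assumption) is to execute the statement with bind values of very different selectivities and then inspect the cursor flags in V$SQL:
SELECT sql_id, child_number, is_bind_sensitive, is_bind_aware
FROM v$sql
WHERE sql_text LIKE 'SELECT /* acs_demo */%';
A new child cursor marked bind-aware after the second or third execution is the expected signature of adaptive cursor sharing.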
-
-
25 Which method or feature should you use?
You notice a performance change in your production Oracle 12c database. You want to know which change caused this performance difference.
-
A. Compare Period ADDM report Explanation The Compare Period ADDM report in Oracle Database 12c allows you to compare two different time periods and analyze the performance differences. It provides detailed information on the performance impact of configuration changes, workload changes, or any other factors that may have affected the database performance.
-
B. AWR Compare Period report Explanation The AWR Compare Period report in Oracle Database 12c is used to compare two different time periods and identify any performance differences. It provides information on system statistics, wait events, SQL statements, and other performance-related metrics to help pinpoint the root cause of the performance change.
-
C. Active Session History (ASH) report Explanation The Active Session History (ASH) report in Oracle Database 12c provides a detailed history of active sessions and their performance metrics. While it can be useful for real-time monitoring and analysis, it may not be the most effective method for identifying the specific change that caused a performance difference over time.
-
D. Taking a new snapshot and comparing it with a preserved snapshot Explanation Taking a new snapshot and comparing it with a preserved snapshot can provide a point-in-time comparison of the database performance. However, this method may not be as comprehensive or detailed as using the Compare Period ADDM or AWR reports, which offer more in-depth analysis of performance changes.
Overall explanation:
Correct Answer: B
Explanation/Reference:
The awrddrpt.sql report is the Automated Workload Repository Compare Period Report. The awrddrpt.sql script is located in the $ORACLE_HOME/rdbms/admin directory.
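To generate the report, run the script from SQL*Plus and supply the two snapshot ranges when prompted:
SQL> @?/rdbms/admin/awrddrpt.sql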
Incorrect:
Not A: Compare Period ADDM: use this report to perform a high-level comparison of one workload replay to its capture or to another replay of the same capture. Only workload replays that contain at least 5 minutes of database time can be compared using this report.
-