[Free] 2018(Jan) EnsurePass Passguide Oracle 1z0-060 Dumps with VCE and PDF 71-80
Upgrade to Oracle Database 12c
Question No: 71
You are planning the creation of a new multitenant container database (CDB) and want to store the ROOT and SEED container data files in separate directories.
You plan to create the database using SQL statements. Which three techniques can you use to achieve this?
A. Use Oracle Managed Files (OMF).
B. Specify the SEED FILE_NAME_CONVERT clause.
C. Specify the PDB_FILE_NAME_CONVERT initialization parameter.
D. Specify the DB_FILE_NAME_CONVERT initialization parameter.
E. Specify all files in the CREATE DATABASE statement without using Oracle Managed Files (OMF).
Answer: A,B,C
Explanation: You must specify the names and locations of the seed's files in one of the following ways:
(A) Oracle Managed Files
(B) The SEED FILE_NAME_CONVERT Clause
(C) The PDB_FILE_NAME_CONVERT Initialization Parameter
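Technique B can be sketched as follows; the database name and directory paths below are illustrative assumptions, not values from the question:

```sql
-- Sketch: place ROOT files and SEED files in separate directories
-- via the SEED FILE_NAME_CONVERT clause (paths are hypothetical).
CREATE DATABASE newcdb
  USER SYS IDENTIFIED BY sys_password
  USER SYSTEM IDENTIFIED BY system_password
  ENABLE PLUGGABLE DATABASE
  SEED FILE_NAME_CONVERT = ('/u01/oradata/newcdb/',    -- ROOT file location
                            '/u01/oradata/pdbseed/');  -- SEED file location
```

With OMF (technique A) or PDB_FILE_NAME_CONVERT (technique C), the clause is omitted and the seed file locations are derived from the OMF parameters or the initialization parameter instead.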
Question No: 72
Which three statements are true concerning the multitenant architecture?
A. Each pluggable database (PDB) has its own set of background processes.
B. A PDB can have a private temp tablespace.
C. PDBs can share the SYSAUX tablespace.
D. Log switches occur only at the multitenant container database (CDB) level.
E. Different PDBs can have different default block sizes.
F. PDBs share a common SYSTEM tablespace.
G. Instance recovery is always performed at the CDB level.
Answer: B,D,G
Explanation:
B: A PDB has its own SYSTEM, SYSAUX, and TEMP tablespaces, and can also contain other user-created tablespaces. There is one default temporary tablespace for the entire CDB, but you can create additional temporary tablespaces in individual PDBs.
D: There is a single redo log and a single control file for an entire CDB. A log switch is the point at which the database stops writing to one redo log file and begins writing to another. Normally, a log switch occurs when the current redo log file is completely filled and writing must continue to the next redo log file.
G: Instance recovery is the automatic application of redo log records to uncommitted data blocks when a database instance is restarted after a failure.
Incorrect:
Not A: There is one set of background processes shared by the root and all PDBs. This gives high consolidation density: the many pluggable databases in a single container database share its memory and background processes, letting you operate many more pluggable databases on a particular platform than you can with single databases that use the old architecture.
Not C: There is a separate SYSAUX tablespace for the root and for each PDB.
Not F: There is a separate SYSTEM tablespace for the root and for each PDB.
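The per-container tablespace layout behind "Not C" and "Not F" can be verified from the root; this is a sketch that assumes a common user with access to the CDB_TABLESPACES view:

```sql
-- Each container (root CON_ID=1, seed CON_ID=2, each PDB >= 3)
-- reports its own SYSTEM and SYSAUX tablespaces.
SELECT con_id, tablespace_name
FROM   cdb_tablespaces
WHERE  tablespace_name IN ('SYSTEM', 'SYSAUX')
ORDER  BY con_id, tablespace_name;
```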
Question No: 73
Examine the parameters for your database instance:
Which three statements are true about the process of automatic optimization by using cardinality feedback?
A. The optimizer automatically changes a plan during subsequent execution of a SQL statement if there is a huge difference in optimizer estimates and execution statistics.
B. The optimizer can re-optimize a query only once using cardinality feedback.
C. The optimizer enables monitoring for cardinality feedback after the first execution of a SQL statement.
D. The optimizer does not monitor cardinality feedback if dynamic sampling and multicolumn statistics are enabled.
E. After the optimizer identifies a query as a re-optimization candidate, statistics collected by the collectors are submitted to the optimizer.
Answer: A,C,D
Explanation:
C: During the first execution of a SQL statement, an execution plan is generated as usual.
D: If multi-column statistics are not present for the relevant combination of columns, the optimizer can fall back on cardinality feedback; dynamic sampling or multi-column statistics allow the optimizer to more accurately estimate the selectivity of conjunctive predicates. OPTIMIZER_DYNAMIC_SAMPLING controls the level of dynamic sampling performed by the optimizer (range of values: 0 to 10).
Not B: Cardinality feedback, enabled by default in 11.2, is intended to improve plans for repeated executions.
Cardinality feedback was introduced in Oracle Database 11gR2. The purpose of this feature is to automatically improve plans for queries that are executed repeatedly, for which the optimizer does not estimate cardinalities in the plan properly. The optimizer may misestimate cardinalities for a variety of reasons, such as missing or inaccurate statistics, or complex predicates. Whatever the reason for the misestimate, cardinality feedback may be able to help.
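Whether a cursor has been flagged by cardinality feedback can be checked in the shared pool; a sketch, assuming a session with access to the V$SQL and V$SQL_SHARED_CURSOR views:

```sql
-- Child cursors for which cardinality feedback statistics were used
-- (USE_FEEDBACK_STATS = 'Y') to build a new plan.
SELECT s.sql_id, s.child_number, c.use_feedback_stats
FROM   v$sql s
JOIN   v$sql_shared_cursor c
       ON  c.sql_id       = s.sql_id
       AND c.child_number = s.child_number
WHERE  c.use_feedback_stats = 'Y';
```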
Question No: 74
Examine the following impdp command to import a database over the network from a pre- 12c Oracle database (source):
Which three are prerequisites for successful execution of the command?
A. The import operation must be performed by a user on the target database with the DATAPUMP_IMP_FULL_DATABASE role, and the database link must connect to a user on the source database with the DATAPUMP_EXP_FULL_DATABASE role.
B. All the user-defined tablespaces must be in read-only mode on the source database.
C. The export dump file must be created before starting the import on the target database.
D. The source and target database must be running on the same platform with the same endianness.
E. The path of data files on the target database must be the same as that on the source database.
F. The impdp operation must be performed by the same user that performed the expdp operation.
Answer: A,B,D
Explanation: In this case the impdp command is run without performing any conversion; if the endian formats were different, a conversion would have to be performed first.
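A network-mode import of this kind needs a database link on the target pointing at the source; the link name, credentials, and TNS alias below are illustrative assumptions:

```sql
-- On the target database: create the link the impdp NETWORK_LINK
-- parameter will reference (no intermediate dump file is written).
CREATE DATABASE LINK source_db
  CONNECT TO system IDENTIFIED BY manager_password
  USING 'source_tns_alias';

-- Then, from the OS shell on the target (illustrative):
--   impdp system FULL=Y NETWORK_LINK=source_db TRANSPORTABLE=ALWAYS ...
```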
Question No: 75
Which two statements are true about the RMAN validate database command?
A. It checks the database for intrablock corruptions.
B. It can detect corrupt pfiles.
C. It can detect corrupt spfiles.
D. It checks the database for interblock corruptions.
E. It can detect corrupt block change tracking files.
Answer: A,C
Explanation:
Block corruptions can be divided into interblock corruption and intrablock corruption. In intrablock corruption, the corruption occurs within the block itself and can be either physical or logical corruption. In interblock corruption, the corruption occurs between blocks and can only be logical corruption.
(Key word) The VALIDATE command checks for intrablock corruptions only. Only DBVERIFY and the ANALYZE statement detect interblock corruption.
VALIDATE Command Output includes a List of Control File and SPFILE:
File TYPE >> SPFILE or Control File.
Status >> OK if no corruption, or FAILED if block corruption is found.
Blocks Failing >> The number of blocks that fail the corruption check. These blocks are newly corrupt.
Blocks Examined >> Total number of blocks in the file.
Oracle Database Backup and Recovery User's Guide
12c Release 1 (12.1) – 16 Validating Database Files and Backups
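Corruptions found by VALIDATE are recorded in a dictionary view that can then be queried; a sketch, assuming SYSDBA-level access:

```sql
-- In RMAN:  VALIDATE DATABASE;
-- Intrablock corruptions it finds are then visible in SQL*Plus:
SELECT file#, block#, blocks, corruption_type
FROM   v$database_block_corruption;
```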
Question No: 76
In which two scenarios do you use SQL*Loader to load data?
A. Transform the data while it is being loaded into the database.
B. Use transparent parallel processing without having to split the external data first.
C. Load data into multiple tables during the same load statement.
D. Generate unique sequential key values in specified columns.
Answer: C,D
Question No: 77
A redaction policy was added to the SAL column of the SCOTT.EMP table:
All users have their default set of system privileges.
For which three situations will data not be redacted?
A. SYS sessions, regardless of the roles that are set in the session
B. SYSTEM sessions, regardless of the roles that are set in the session
C. SCOTT sessions, only if the MGR role is set in the session
D. SCOTT sessions, only if the MGR role is granted to SCOTT
E. SCOTT sessions, because he is the owner of the table
F. SYSTEM sessions, only if the MGR role is set in the session
Answer: A,B,D
Explanation:
Both users SYS and SYSTEM automatically have the EXEMPT REDACTION POLICY system privilege. (SYSTEM has the EXP_FULL_DATABASE role, which includes the EXEMPT REDACTION POLICY system privilege.) This means that the SYS and SYSTEM users can always bypass any existing Oracle Data Redaction policies, and will always be able to view data from tables (or views) that have Data Redaction policies defined on them.
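A policy of the kind described could look like the following sketch; the policy name and the role-based expression are illustrative assumptions, since the question's actual policy is shown only as an image:

```sql
-- Hypothetical redaction policy on SCOTT.EMP.SAL: redact fully
-- unless the MGR role is enabled in the querying session.
BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema => 'SCOTT',
    object_name   => 'EMP',
    column_name   => 'SAL',
    policy_name   => 'redact_sal',
    function_type => DBMS_REDACT.FULL,
    expression    =>
      'SYS_CONTEXT(''SYS_SESSION_ROLES'', ''MGR'') = ''FALSE''');
END;
/
```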
Question No: 78
Identify three scenarios in which you would recommend the use of SQL Performance Analyzer to analyze the impact on the performance of SQL statements.
A. Change in the Oracle Database version
B. Change in your network infrastructure
C. Change in the hardware configuration of the database server
D. Migration of database storage from non-ASM to ASM storage
E. Database and operating system upgrade
Answer: A,C,E
Explanation: Oracle 11g/12c makes further use of SQL tuning sets with the SQL Performance Analyzer, which compares the performance of the statements in a tuning set before and after a database change. The database change can be as major or minor as you like, such as:
(E) Database, operating system, or hardware upgrades.
(A,C) Database, operating system, or hardware configuration changes.
Database initialization parameter changes.
Schema changes, such as adding indexes or materialized views.
Refreshing optimizer statistics.
Creating or changing SQL profiles.
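The before/after comparison is driven through the DBMS_SQLPA package; the tuning-set and task names below are illustrative assumptions:

```sql
-- Minimal SQL Performance Analyzer flow (sketch): test a tuning set
-- before and after a change, then compare.
DECLARE
  tname VARCHAR2(64);
BEGIN
  tname := DBMS_SQLPA.CREATE_ANALYSIS_TASK(
             sqlset_name => 'MY_STS',
             task_name   => 'SPA_UPGRADE_TEST');
  -- Capture performance before the change:
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
    task_name      => 'SPA_UPGRADE_TEST',
    execution_type => 'TEST EXECUTE',
    execution_name => 'before_change');
  -- ...apply the change (upgrade, parameter change, etc.), repeat the
  -- TEST EXECUTE step as 'after_change', then compare both runs:
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
    task_name      => 'SPA_UPGRADE_TEST',
    execution_type => 'COMPARE PERFORMANCE');
END;
/
```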
Question No: 79
In your multitenant container database (CDB) containing pluggable databases (PDB), users complain about performance degradation.
How does real-time Automatic database Diagnostic Monitor (ADDM) check performance degradation and provide solutions?
A. It collects data from SGA and compares it with a preserved snapshot.
B. It collects data from SGA, analyzes it, and provides a report.
C. It collects data from SGA and compares it with the latest snapshot.
D. It collects data from both SGA and PGA, analyzes it, and provides a report.
Answer: B
Explanation:
Note:
The multitenant architecture enables an Oracle database to function as a multitenant container database (CDB) that includes zero, one, or many customer-created pluggable databases (PDBs). A PDB is a portable collection of schemas, schema objects, and nonschema objects that appears to an Oracle Net client as a non-CDB. All Oracle databases before Oracle Database 12c were non-CDBs.
The System Global Area (SGA) is a group of shared memory areas that are dedicated to an Oracle “instance” (an instance is your database programs and RAM).
The PGA (Program or Process Global Area) is a memory area (RAM) that stores data and control information for a single process.
Question No: 80
Your database is running in ARCHIVELOG mode.
The following parameters are set in your database instance:
LOG_ARCHIVE_FORMAT = arch %t_%r.arc
LOG_ARCHIVE_DEST_1 = 'LOCATION=/disk1/archive'
DB_RECOVERY_FILE_DEST_SIZE = 50G
DB_RECOVERY_FILE_DEST = '/u01/oradata'
Which statement is true about the archived redo log files?
A. They are created only in the location specified by the LOG_ARCHIVE_DEST_1 parameter.
B. They are created only in the Fast Recovery Area.
C. They are created in the location specified by the LOG_ARCHIVE_DEST_1 parameter and in the default location $ORACLE_HOME/dbs/arch.
D. They are created in the location specified by the LOG_ARCHIVE_DEST_1 parameter and the location specified by the DB_RECOVERY_FILE_DEST parameter.
Answer: A
Explanation: You can choose to archive redo logs to a single destination or to multiple destinations.
Destinations can be local (within the local file system or an Oracle Automatic Storage Management (Oracle ASM) disk group) or remote (on a standby database). When you archive to multiple destinations, a copy of each filled redo log file is written to each destination. These redundant copies help ensure that archived logs are always available in the event of a failure at one of the destinations.
To archive to only a single destination, specify that destination using the LOG_ARCHIVE_DEST initialization parameter. To archive to multiple destinations, you can choose to archive to two or more locations using the LOG_ARCHIVE_DEST_n initialization parameters, or to archive only to a primary and secondary destination using the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST initialization parameters.
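The behavior behind answer A can be sketched with the corresponding ALTER SYSTEM commands; the second destination is shown only to illustrate how the Fast Recovery Area would be brought into play, and is not set in the question:

```sql
-- As in the question: an explicit local destination. Once any
-- LOG_ARCHIVE_DEST_n is set, archiving goes only there by default.
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/disk1/archive';

-- Hypothetical: to additionally archive into the Fast Recovery Area,
-- a second destination must point at it explicitly.
ALTER SYSTEM SET log_archive_dest_2 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST';
```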