[Free] 2018(Jan) EnsurePass Passguide Oracle 1z0-060 Dumps with VCE and PDF 71-80


Upgrade to Oracle Database 12c

Question No: 71

You are planning the creation of a new multitenant container database (CDB) and want to store the ROOT and SEED container data files in separate directories.

You plan to create the database using SQL statements. Which three techniques can you use to achieve this?

A. Use Oracle Managed Files (OMF).

B. Specify the SEED FILE_NAME_CONVERT clause.

C. Specify the PDB_FILE_NAME_CONVERT initialization parameter.

D. Specify the DB_FILE_NAME_CONVERT initialization parameter.

E. Specify all files in the CREATE DATABASE statement without using Oracle Managed Files (OMF).

Answer: A,B,C

Explanation: You must specify the names and locations of the seed's files in one of the following ways:

  • (A) Oracle Managed Files

  • (B) The SEED FILE_NAME_CONVERT clause

  • (C) The PDB_FILE_NAME_CONVERT initialization parameter
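
For illustration, a minimal sketch of how options A and C could be prepared in the initialization parameter file before issuing CREATE DATABASE ... ENABLE PLUGGABLE DATABASE (option B is the SEED FILE_NAME_CONVERT clause of that statement). All directory paths below are hypothetical:

    # Option A: Oracle Managed Files; file names and locations are generated automatically
    DB_CREATE_FILE_DEST = '/u01/app/oracle/oradata'

    # Option C: convert ROOT data file paths into SEED data file paths
    PDB_FILE_NAME_CONVERT = '/u01/app/oracle/oradata/CDB1/', '/u01/app/oracle/oradata/CDB1/seed/'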

    Question No: 72

    Which three statements are true concerning the multitenant architecture?

A. Each pluggable database (PDB) has its own set of background processes.

B. A PDB can have a private temp tablespace.

C. PDBs can share the SYSAUX tablespace.

D. Log switches occur only at the multitenant container database (CDB) level.

E. Different PDBs can have different default block sizes.

F. PDBs share a common SYSTEM tablespace.

G. Instance recovery is always performed at the CDB level.

Answer: B,D,G

Explanation:

B: A PDB has its own SYSTEM, SYSAUX, and TEMP tablespaces, and it can also contain other user-created tablespaces. There is one default temporary tablespace for the entire CDB, but you can create additional temporary tablespaces in individual PDBs.

D: There is a single redo log and a single control file for an entire CDB. A log switch is the point at which the database stops writing to one redo log file and begins writing to another; normally, a log switch occurs when the current redo log file is completely filled and writing must continue to the next redo log file.

G: Instance recovery is the automatic application of redo log records to uncommitted data blocks when a database instance is restarted after a failure.

Incorrect:

Not A: There is one set of background processes shared by the root and all PDBs. This gives high consolidation density: the many pluggable databases in a single container database share its memory and background processes, letting you operate many more pluggable databases on a particular platform than you can with single databases that use the old architecture.

Not C: There is a separate SYSAUX tablespace for the root and for each PDB.

Not F: There is a separate SYSTEM tablespace for the root and for each PDB.
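
As an illustration of the private temporary tablespace point (answer B), a minimal sketch assuming a PDB named PDB1 and a hypothetical file path:

    ALTER SESSION SET CONTAINER = pdb1;

    -- A temporary tablespace local to this PDB
    CREATE TEMPORARY TABLESPACE pdb1_temp
      TEMPFILE '/u01/app/oracle/oradata/CDB1/pdb1/pdb1_temp01.dbf' SIZE 100M AUTOEXTEND ON;

    -- Optionally make it the default temporary tablespace for this PDB only
    ALTER DATABASE DEFAULT TEMPORARY TABLESPACE pdb1_temp;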

    Question No: 73

    Examine the parameters for your database instance:

(Exhibit image not reproduced: instance parameter settings.)

    Which three statements are true about the process of automatic optimization by using cardinality feedback?

A. The optimizer automatically changes a plan during subsequent execution of a SQL statement if there is a huge difference in optimizer estimates and execution statistics.

B. The optimizer can re-optimize a query only once using cardinality feedback.

C. The optimizer enables monitoring for cardinality feedback after the first execution of a query.

D. The optimizer does not monitor cardinality feedback if dynamic sampling and multicolumn statistics are enabled.

E. After the optimizer identifies a query as a re-optimization candidate, statistics collected by the collectors are submitted to the optimizer.

    Answer: A,C,D

Explanation:

C: During the first execution of a SQL statement, an execution plan is generated as usual.

D: If multi-column statistics are not present for the relevant combination of columns, the optimizer can fall back on cardinality feedback. Dynamic sampling or multi-column statistics allow the optimizer to estimate the selectivity of conjunctive predicates more accurately, which is why cardinality feedback is not monitored when they are available.

Not B: Cardinality feedback, enabled by default in 11.2, is intended to improve plans for repeated executions.

The relevant instance parameters here are OPTIMIZER_DYNAMIC_SAMPLING and OPTIMIZER_FEATURES_ENABLE. OPTIMIZER_DYNAMIC_SAMPLING controls the level of dynamic sampling performed by the optimizer; its range of values is 0 to 10.

Cardinality feedback was introduced in Oracle Database 11gR2. The purpose of this feature is to automatically improve plans for queries that are executed repeatedly, for which the optimizer does not estimate cardinalities in the plan properly. The optimizer may misestimate cardinalities for a variety of reasons, such as missing or inaccurate statistics, or complex predicates. Whatever the reason for the misestimate, cardinality feedback may be able to help.
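
To see whether cardinality feedback actually kicked in for a statement, one option is to check V$SQL_SHARED_CURSOR and the plan's Note section (the SQL_ID below is a placeholder):

    -- 'Y' in USE_FEEDBACK_STATS means the child cursor was built using cardinality feedback
    SELECT sql_id, child_number, use_feedback_stats
    FROM   v$sql_shared_cursor
    WHERE  sql_id = '&sql_id';

    -- The Note section of the displayed plan also indicates when cardinality feedback was used
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'TYPICAL'));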

      Question No: 74

Examine the following impdp command to import a database over the network from a pre-12c Oracle database (source):

(Exhibit image not reproduced: the impdp command.)

      Which three are prerequisites for successful execution of the command?

A. The import operation must be performed by a user on the target database with the DATAPUMP_IMP_FULL_DATABASE role, and the database link must connect to a user on the source database with the DATAPUMP_EXP_FULL_DATABASE role.

B. All the user-defined tablespaces must be in read-only mode on the source database.

C. The export dump file must be created before starting the import on the target database.

D. The source and target database must be running on the same platform with the same endianness.

E. The path of data files on the target database must be the same as that on the source database.

F. The impdp operation must be performed by the same user that performed the expdp operation.

      Answer: A,B,D

Explanation: In this case the impdp command is run without performing any conversion; if the endian format were different, the conversion would have to be performed first.
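
A hedged sketch of the kind of command being described (the connect string, database link name, directory object, and data file path are all hypothetical):

    impdp system@target12c \
      NETWORK_LINK=source_db_link \
      FULL=Y TRANSPORTABLE=ALWAYS VERSION=12 \
      TRANSPORT_DATAFILES='/u01/app/oracle/oradata/TGT/users01.dbf' \
      DIRECTORY=dp_dir LOGFILE=full_net_imp.log

Because the export phase runs on the source over the database link, the user-defined tablespaces must be read-only on the source (answer B) and the data files must be usable on the target with matching endianness, or be converted beforehand (answer D).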

      Question No: 75

      Which two statements are true about the RMAN validate database command?

A. It checks the database for intrablock corruptions.

B. It can detect corrupt pfiles.

C. It can detect corrupt spfiles.

D. It checks the database for interblock corruptions.

E. It can detect corrupt block change tracking files.

Answer: A,C

Explanation:

Block corruptions can be divided into interblock corruption and intrablock corruption. In intrablock corruption, the corruption occurs within the block itself and can be either physical or logical corruption. In interblock corruption, the corruption occurs between blocks and can only be logical corruption.

(Key point) The VALIDATE command checks for intrablock corruptions only. Only DBVERIFY and the ANALYZE statement detect interblock corruption.

VALIDATE command output includes a list of control files and SPFILEs:

File Type: SPFILE or Control File.

Status: OK if no corruption is found, or FAILED if block corruption is found.

Blocks Failing: the number of blocks that fail the corruption check. These blocks are newly corrupt.

Blocks Examined: the total number of blocks in the file.

Reference: Oracle Database Backup and Recovery User's Guide, 12c Release 1 (12.1), Chapter 16, "Validating Database Files and Backups".
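
For reference, the command and a follow-up check look like this (VALIDATE CHECK LOGICAL adds logical intrablock checks; any corrupt blocks found are recorded in V$DATABASE_BLOCK_CORRUPTION):

    RMAN> VALIDATE DATABASE;
    RMAN> VALIDATE CHECK LOGICAL DATABASE;

    SQL> SELECT * FROM v$database_block_corruption;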

      Question No: 76

In which two scenarios do you use SQL*Loader to load data?

A. Transform the data while it is being loaded into the database.

B. Use transparent parallel processing without having to split the external data first.

C. Load data into multiple tables during the same load statement.

D. Generate unique sequential key values in specified columns.

      Answer: C,D

      Explanation: http://docs.oracle.com/cd/B28359_01/server.111/b28319/ldr_concepts.htm
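
As a hedged illustration of answers C and D, a sketch of a control file (the table names, field positions, and data file name are made up) that loads two tables from one input file and generates a sequential key with the SEQUENCE keyword:

    LOAD DATA
    INFILE 'empdept.dat'
    APPEND
    INTO TABLE emp
      WHEN (1) = 'E'
      (empid   SEQUENCE(MAX,1),
       ename   POSITION(2:21)  CHAR,
       deptno  POSITION(22:23) INTEGER EXTERNAL)
    INTO TABLE dept
      WHEN (1) = 'D'
      (deptno  POSITION(2:3)   INTEGER EXTERNAL,
       dname   POSITION(4:23)  CHAR)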

      Question No: 77

      A redaction policy was added to the SAL column of the SCOTT.EMP table:

(Exhibit image not reproduced: the redaction policy definition.)

      All users have their default set of system privileges.

      For which three situations will data not be redacted?

A. SYS sessions, regardless of the roles that are set in the session

B. SYSTEM sessions, regardless of the roles that are set in the session

C. SCOTT sessions, only if the MGR role is set in the session

D. SCOTT sessions, only if the MGR role is granted to SCOTT

E. SCOTT sessions, because he is the owner of the table

F. SYSTEM sessions, only if the MGR role is set in the session

Answer: A,B,D

Explanation:

Both users SYS and SYSTEM automatically have the EXEMPT REDACTION POLICY system privilege. (SYSTEM has the EXP_FULL_DATABASE role, which includes the EXEMPT REDACTION POLICY system privilege.) This means that the SYS and SYSTEM users can always bypass any existing Oracle Data Redaction policies, and will always be able to view data from tables (or views) that have Data Redaction policies defined on them.
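
The exhibit is not reproduced above, so purely as a hedged illustration: a full-redaction policy on SCOTT.EMP.SAL of the general kind described could be created as follows (the policy name and the expression are placeholders, not the exhibit's actual values):

    BEGIN
      DBMS_REDACT.ADD_POLICY(
        object_schema => 'SCOTT',
        object_name   => 'EMP',
        column_name   => 'SAL',
        policy_name   => 'redact_emp_sal',    -- placeholder name
        function_type => DBMS_REDACT.FULL,
        expression    => 'SYS_CONTEXT(''SYS_SESSION_ROLES'',''MGR'') = ''FALSE''');  -- placeholder condition
    END;
    /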

      Question No: 78

      Identify three scenarios in which you would recommend the use of SQL Performance Analyzer to analyze impact on the performance of SQL statements.

A. Change in the Oracle Database version

B. Change in your network infrastructure

C. Change in the hardware configuration of the database server

D. Migration of database storage from non-ASM to ASM storage

E. Database and operating system upgrade

      Answer: A,C,E

      Explanation: Oracle 11g/12c makes further use of SQL tuning sets with the SQL Performance Analyzer, which compares the performance of the statements in a tuning set before and after a database change. The database change can be as major or minor as you like, such as:

    • (E) Database, operating system, or hardware upgrades.

    • (A,C) Database, operating system, or hardware configuration changes.

    • Database initialization parameter changes.

    • Schema changes, such as adding indexes or materialized views.

    • Refreshing optimizer statistics.

    • Creating or changing SQL profiles.
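
A minimal sketch of running such a before/after comparison with DBMS_SQLPA, assuming a SQL tuning set named MY_STS and a hypothetical task name:

    DECLARE
      tname  VARCHAR2(64);
      ename  VARCHAR2(64);
    BEGIN
      -- Create the analysis task from the tuning set
      tname := DBMS_SQLPA.CREATE_ANALYSIS_TASK(
                 sqlset_name => 'MY_STS',
                 task_name   => 'SPA_UPGRADE_TEST');

      -- Baseline run
      ename := DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
                 task_name      => tname,
                 execution_type => 'TEST EXECUTE',
                 execution_name => 'before_change');

      -- (In practice, the system change under test is made between these two runs.)

      -- Run after the change
      ename := DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
                 task_name      => tname,
                 execution_type => 'TEST EXECUTE',
                 execution_name => 'after_change');

      -- Compare the two executions
      ename := DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
                 task_name      => tname,
                 execution_type => 'COMPARE PERFORMANCE');
    END;
    /

The comparison report can then be retrieved with DBMS_SQLPA.REPORT_ANALYSIS_TASK.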

      Question No: 79

      In your multitenant container database (CDB) containing pluggable databases (PDB), users complain about performance degradation.

      How does real-time Automatic database Diagnostic Monitor (ADDM) check performance degradation and provide solutions?

A. It collects data from SGA and compares it with a preserved snapshot.

B. It collects data from SGA, analyzes it, and provides a report.

C. It collects data from SGA and compares it with the latest snapshot.

D. It collects data from both SGA and PGA, analyzes it, and provides a report.

Answer: B

Explanation:

    • The multitenant architecture enables an Oracle database to function as a multitenant container database (CDB) that includes zero, one, or many customer-created pluggable databases (PDBs). A PDB is a portable collection of schemas, schema objects, and nonschema objects that appears to an Oracle Net client as a non-CDB. All Oracle databases before Oracle Database 12c were non-CDBs.

    • The System Global Area (SGA) is a group of shared memory areas that are dedicated to an Oracle “instance” (an instance is your database programs and RAM).

    • The PGA (Program or Process Global Area) is a memory area (RAM) that stores data and control information for a single process.
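
For context, the two memory areas mentioned can be inspected with ordinary dictionary queries, for example:

    -- SGA components and their sizes
    SELECT name, bytes/1024/1024 AS mb FROM v$sgainfo;

    -- Aggregate PGA statistics
    SELECT name, value/1024/1024 AS mb
    FROM   v$pgastat
    WHERE  name IN ('total PGA allocated', 'total PGA inuse');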

    Question No: 80

Your database is running in ARCHIVELOG mode.

The following parameters are set in your database instance:

LOG_ARCHIVE_FORMAT = arch %t_%r.arc

DB_RECOVERY_FILE_DEST = '/u01/oradata'

    Which statement is true about the archived redo log files?

A. They are created only in the location specified by the LOG_ARCHIVE_DEST_1 parameter.

B. They are created only in the Fast Recovery Area.

C. They are created in the location specified by the LOG_ARCHIVE_DEST_1 parameter and in the default location $ORACLE_HOME/dbs/arch.

D. They are created in the location specified by the LOG_ARCHIVE_DEST_1 parameter and the location specified by the DB_RECOVERY_FILE_DEST parameter.

    Answer: A

Explanation: You can choose to archive redo logs to a single destination or to multiple destinations.

Destinations can be local (within the local file system or an Oracle Automatic Storage Management (Oracle ASM) disk group) or remote (on a standby database). When you archive to multiple destinations, a copy of each filled redo log file is written to each destination. These redundant copies help ensure that archived logs are always available in the event of a failure at one of the destinations.

To archive to only a single destination, specify that destination using the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST initialization parameters. To archive to multiple destinations, you can choose to archive to two or more locations using the LOG_ARCHIVE_DEST_n initialization parameters, or to archive only to a primary and secondary destination using the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST initialization parameters.
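
As a hedged illustration (the paths are hypothetical), setting an explicit archive destination and checking the effective configuration might look like this:

    -- Set an explicit local archive destination
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_1 = 'LOCATION=/u02/app/oracle/arch' SCOPE=BOTH;

    -- SQL*Plus: show the fast recovery area parameters and the archive configuration
    SHOW PARAMETER db_recovery_file_dest
    ARCHIVE LOG LIST

    -- Active archive destinations and their status
    SELECT dest_id, destination, status
    FROM   v$archive_dest
    WHERE  status <> 'INACTIVE';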
