
Saturday 10 August 2019

Latest Microsoft 70-767 Real Exam Study Questions - Microsoft 70-767 Dump

Question: 1

Note: This question is part of a series of questions that use the same scenario. For your convenience, the
scenario is repeated in each question. Each question presents a different goal and answer choices, but
the text of the scenario is exactly the same in each question in this series.
You have a Microsoft SQL Server data warehouse instance that supports several client applications.
The data warehouse includes the following tables: Dimension.SalesTerritory, Dimension.Customer,
Dimension.Date, Fact.Ticket, and Fact.Order. The Dimension.SalesTerritory and Dimension.Customer
tables are frequently updated. The Fact.Order table is optimized for weekly reporting, but the company
wants to change it to daily reporting. The Fact.Order table is loaded by using an ETL process. Indexes have been
added to the table over time, but the presence of these indexes slows data loading.
All data in the data warehouse is stored on a shared SAN. All tables are in a database named DB1. You
have a second database named DB2 that contains copies of production data for a development
environment. The data warehouse has grown and the cost of storage has increased. Data older than one
year is accessed infrequently and is considered historical.
You have the following requirements:
Implement table partitioning to improve the manageability of the data warehouse and to avoid the need
to repopulate all transactional data each night. Use a partitioning strategy that is as granular as possible.
Partition the Fact.Order table and retain a total of seven years of data.
Partition the Fact.Ticket table and retain seven years of data. At the end of each month, the partition
structure must apply a sliding window strategy to ensure that a new partition is available for the
upcoming month, and that the oldest month of data is archived and removed.
Optimize data loading for the Dimension.SalesTerritory, Dimension.Customer, and Dimension.Date tables.
Incrementally load all tables in the database and ensure that all incremental changes are processed.
Maximize the performance during the data loading process for the Fact.Order partition.
Ensure that historical data remains online and available for querying.
Reduce ongoing storage costs while maintaining query performance for current data.
You are not permitted to make changes to the client applications.
You need to optimize the storage for the data warehouse.
What change should you make?

A. Partition the Fact.Order table, and move historical data to new filegroups on lower-cost storage.
B. Create new tables on lower-cost storage, move the historical data to the new tables, and then shrink
the database.
C. Remove the historical data from the database to leave available space for new data.
D. Move historical data to new tables on lower-cost storage.

Answer: A

Explanation:
Partitioning the Fact.Order table and mapping the historical partitions to new filegroups on lower-cost storage keeps the historical data online and available for querying while reducing storage costs, and it requires no changes to the client applications.
Create the load staging table in the same filegroup as the partition you are loading.
Create the unload staging table in the same filegroup as the partition you are deleting.
From scenario: Data older than one year is accessed infrequently and is considered historical.

References: https://blogs.msdn.microsoft.com/sqlcat/2013/09/16/top-10-best-practices-for-building-a-large-scale-relational-data-warehouse
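
Below is a hedged sketch of what answer A could look like in T-SQL. The filegroup name FG_HISTORY and the yearly boundary dates are assumptions; only the database name DB1 and the idea of moving data older than one year to lower-cost storage come from the scenario.

-- Add a filegroup for historical data; its files would be created on the lower-cost storage volume.
ALTER DATABASE DB1 ADD FILEGROUP FG_HISTORY;

-- Example partition function: two yearly boundaries create three partitions.
CREATE PARTITION FUNCTION pf_OrderDate (date)
AS RANGE RIGHT FOR VALUES ('2018-01-01', '2019-01-01');

-- Map the two historical partitions to FG_HISTORY and the current partition to PRIMARY.
CREATE PARTITION SCHEME ps_OrderDate
AS PARTITION pf_OrderDate
TO (FG_HISTORY, FG_HISTORY, [PRIMARY]);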



Question: 2

Note: This question is part of a series of questions that use the same scenario. For your convenience, the
scenario is repeated in each question. Each question presents a different goal and answer choices, but
the text of the scenario is exactly the same in each question in this series.
You have a Microsoft SQL Server data warehouse instance that supports several client applications.
The data warehouse includes the following tables: Dimension.SalesTerritory, Dimension.Customer,
Dimension.Date, Fact.Ticket, and Fact.Order. The Dimension.SalesTerritory and Dimension.Customer
tables are frequently updated. The Fact.Order table is optimized for weekly reporting, but the company
wants to change it to daily reporting. The Fact.Order table is loaded by using an ETL process. Indexes have been
added to the table over time, but the presence of these indexes slows data loading.
All data in the data warehouse is stored on a shared SAN. All tables are in a database named DB1. You
have a second database named DB2 that contains copies of production data for a development
environment. The data warehouse has grown and the cost of storage has increased. Data older than one
year is accessed infrequently and is considered historical.
You have the following requirements:
Implement table partitioning to improve the manageability of the data warehouse and to avoid the need
to repopulate all transactional data each night. Use a partitioning strategy that is as granular as possible.
Partition the Fact.Order table and retain a total of seven years of data.
Partition the Fact.Ticket table and retain seven years of data. At the end of each month, the partition
structure must apply a sliding window strategy to ensure that a new partition is available for the
upcoming month, and that the oldest month of data is archived and removed.
Optimize data loading for the Dimension.SalesTerritory, Dimension.Customer, and Dimension.Date
tables.
Incrementally load all tables in the database and ensure that all incremental changes are processed.
Maximize the performance during the data loading process for the Fact.Order partition.
Ensure that historical data remains online and available for querying.
Reduce ongoing storage costs while maintaining query performance for current data.
You are not permitted to make changes to the client applications.
You need to implement the data partitioning strategy.
How should you partition the Fact.Order table?

A. Create 17,520 partitions.
B. Use a granularity of two days.
C. Create 2,557 partitions.
D. Create 730 partitions.

Answer: C

Explanation:
We create one partition for each day. Seven years times 365 days is 2,555 days; adding two more days to allow for leap years in the window gives 2,557 partitions.
From scenario: Partition the Fact.Order table and retain a total of seven years of data.
Maximize the performance during the data loading process for the Fact.Order partition.
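
A quick arithmetic check of the partition count, as a sketch in T-SQL (the particular seven-year window used for the DATEDIFF example is only illustrative):

-- One partition per day for seven years.
SELECT 7 * 365 AS base_days,                                         -- 2,555
       7 * 365 + 2 AS with_leap_days,                                -- 2,557, allowing for two leap days
       DATEDIFF(day, '2012-01-01', '2019-01-01') AS example_window;  -- also 2,557 days for this span
-- A boundary list of this size is normally generated with dynamic SQL rather than typed by hand.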


Question: 3

Note: This question is part of a series of questions that present the same scenario. Each question in the
series contains a unique solution that might meet the stated goals. Some question sets might have more
than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You have the following line-of-business solutions:
ERP system
Online WebStore
Partner extranet
One or more Microsoft SQL Server instances support each solution. Each solution has its own product
catalog. You have an additional server that hosts SQL Server Integration Services (SSIS) and a data
warehouse. You populate the data warehouse with data from each of the line-of-business solutions. The
data warehouse does not store primary key values from the individual source tables.
The database for each solution has a table named Products that stores product information. The
Products table in each database uses a separate and unique key for product records. Each table shares a
column named ReferenceNr between the databases. This column is used to create queries that involve
more than one solution.
You need to load data from the individual solutions into the data warehouse nightly. The following
requirements must be met:
If a change is made to the ReferenceNr column in any of the sources, set the value of IsDisabled to True
and create a new row in the Products table.
If a row is deleted in any of the sources, set the value of IsDisabled to True in the data warehouse.
Solution: Perform the following actions:
Enable Change Tracking for the Products table in the source databases.
Query the cdc.fn_cdc_get_all_changes_capture_dbo_products function from the sources for updated rows.
Set the IsDisabled column to True for rows with the old ReferenceNr value.
Create a new row in the data warehouse Products table with the new ReferenceNr value.
Does the solution meet the goal?

A. Yes
B. No

Answer: B
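
Explanation:
We must check for deleted rows, not just updated rows. In addition, the solution enables Change Tracking but then queries a Change Data Capture (CDC) function; the cdc.fn_cdc_get_all_changes functions exist only when CDC, not Change Tracking, is enabled for the table.

References: https://www.timmitchell.net/post/2016/01/18/getting-started-with-change-tracking-in-sql-server/

The T-SQL below is a hedged sketch of querying CDC changes, including deletes. It assumes CDC has already been enabled with the default capture instance dbo_Products; the ReferenceNr column comes from the scenario, everything else is illustrative.

-- Read the full LSN range currently available for the dbo_Products capture instance.
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_Products');
DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();

SELECT __$operation,     -- 1 = delete, 2 = insert, 3 = value before update, 4 = value after update
       ReferenceNr
FROM cdc.fn_cdc_get_all_changes_dbo_Products(@from_lsn, @to_lsn, N'all update old')
WHERE __$operation IN (1, 3, 4);   -- deletes plus both sides of each update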


Question: 4

Note: This question is part of a series of questions that present the same scenario. Each question in the
series contains a unique solution that might meet the stated goals. Some question sets might have more
than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You have the following line-of-business solutions:
ERP system
Online WebStore
Partner extranet
One or more Microsoft SQL Server instances support each solution. Each solution has its own product
catalog. You have an additional server that hosts SQL Server Integration Services (SSIS) and a data
warehouse. You populate the data warehouse with data from each of the line-of-business solutions. The
data warehouse does not store primary key values from the individual source tables.
The database for each solution has a table named Products that stores product information. The
Products table in each database uses a separate and unique key for product records. Each table shares a
column named ReferenceNr between the databases. This column is used to create queries that involve
more than one solution.
You need to load data from the individual solutions into the data warehouse nightly. The following
requirements must be met:
If a change is made to the ReferenceNr column in any of the sources, set the value of IsDisabled to True
and create a new row in the Products table.
If a row is deleted in any of the sources, set the value of IsDisabled to True in the data warehouse.
Solution: Perform the following actions:
Enable the Change Tracking feature for the Products table in the three source databases.
Query the CHANGETABLE function from the sources for the deleted rows.
Set the IsDisabled column to True on the data warehouse Products table for the listed rows.
Does the solution meet the goal?

A. Yes
B. No

Answer: B
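
Explanation:
We must check for updated rows, not just deleted rows.

References: https://www.timmitchell.net/post/2016/01/18/getting-started-with-change-tracking-in-sql-server/

As a hedged sketch, enabling Change Tracking (the feature the solution relies on) looks roughly like the T-SQL below. The database name SourceDB and the retention settings are assumptions; only the dbo.Products table name comes from the scenario.

-- Enable Change Tracking at the database level (retention values are illustrative).
ALTER DATABASE SourceDB
SET CHANGE_TRACKING = ON
    (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);

-- Track changes on the Products table, including which columns were updated.
ALTER TABLE dbo.Products
ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = ON);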



Question: 5

Note: This question is part of a series of questions that present the same scenario. Each question in the
series contains a unique solution that might meet the stated goals. Some question sets might have more
than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You have the following line-of-business solutions:
ERP system
Online WebStore
Partner extranet
One or more Microsoft SQL Server instances support each solution. Each solution has its own product
catalog. You have an additional server that hosts SQL Server Integration Services (SSIS) and a data
warehouse. You populate the data warehouse with data from each of the line-of-business solutions. The
data warehouse does not store primary key values from the individual source tables.
The database for each solution has a table named Products that stores product information. The
Products table in each database uses a separate and unique key for product records. Each table shares a
column named ReferenceNr between the databases. This column is used to create queries that involve
more than one solution.
You need to load data from the individual solutions into the data warehouse nightly. The following
requirements must be met:
If a change is made to the ReferenceNr column in any of the sources, set the value of IsDisabled to True
and create a new row in the Products table.
If a row is deleted in any of the sources, set the value of IsDisabled to True in the data warehouse.
Solution: Perform the following actions:
Enable Change Tracking for the Products table in the source databases.
Query the CHANGETABLE function from the sources for the updated rows.
Set the IsDisabled column to True for the listed rows that have the old ReferenceNr value.
Create a new row in the data warehouse Products table with the new ReferenceNr value.
Does the solution meet the goal?

A. Yes
B. No

Answer: B

Explanation:
We must also handle the deleted rows, not just the updated rows.

References: https://solutioncenter.apexsql.com/enable-use-sql-server-change-data-capture/
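
To illustrate how both requirements could be covered, the T-SQL below is a sketch of a CHANGETABLE query that returns updated and deleted rows since the last load. It assumes Change Tracking is enabled as in the earlier sketch and that the primary key column is named ProductID (a placeholder); ReferenceNr and IsDisabled come from the scenario.

DECLARE @last_sync_version bigint = 0;   -- version saved away at the end of the previous nightly load

SELECT ct.SYS_CHANGE_OPERATION,          -- 'I' = insert, 'U' = update, 'D' = delete
       ct.SYS_CHANGE_VERSION,
       p.ReferenceNr,                    -- NULL for deleted rows
       p.IsDisabled
FROM CHANGETABLE(CHANGES dbo.Products, @last_sync_version) AS ct
LEFT JOIN dbo.Products AS p
       ON p.ProductID = ct.ProductID     -- ProductID is a placeholder key name
WHERE ct.SYS_CHANGE_OPERATION IN ('U', 'D');
-- 'D' rows drive "set IsDisabled to True in the data warehouse";
-- 'U' rows whose ReferenceNr changed drive "disable the old row and insert a new one".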


