Pass the Snowflake SnowPro Advanced: Architect ARA-R01 Questions and Answers with CertsForce

Viewing page 4 out of 5 pages
Viewing questions 31-40
Question # 31:

What built-in Snowflake features make use of the change tracking metadata for a table? (Choose two.)

Options:

A.

The MERGE command


B.

The UPSERT command


C.

The CHANGES clause


D.

A STREAM object


E.

The CHANGE_DATA_CAPTURE command


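As background, both streams and the CHANGES clause read a table's change tracking metadata. A minimal sketch, using hypothetical table and stream names:

-- Hypothetical objects: orders table, orders_stream
ALTER TABLE orders SET CHANGE_TRACKING = TRUE;

-- A stream records changes since it was last consumed
CREATE OR REPLACE STREAM orders_stream ON TABLE orders;

-- The CHANGES clause reads the change tracking metadata over a time window
SELECT *
FROM orders
  CHANGES (INFORMATION => DEFAULT)
  AT (OFFSET => -60*5);   -- changes from the last five minutes
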
Question # 32:

Role A has the following permissions:

. USAGE on db1

. USAGE and CREATE VIEW on schema1 in db1

. SELECT on table1 in schema1

Role B has the following permissions:

. USAGE on db2

. USAGE and CREATE VIEW on schema2 in db2

. SELECT on table2 in schema2

A user has Role A set as the primary role and Role B as a secondary role.

What command will fail for this user?

Options:

A.

use database db1;

use schema schema1;

create view v1 as select * from db2.schema2.table2;


B.

use database db2;

use schema schema2;

create view v2 as select * from db1.schema1.table1;


C.

use database db2;

use schema schema2;

select * from db1.schema1.table1 union select * from table2;


D.

use database db1;

use schema schema1;

select * from db2.schema2.table2;


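For background on the scenario above, secondary roles are enabled per session and contribute privileges for most statements, but authorization to create objects comes only from the primary role. A minimal sketch of the session setup, with a hypothetical user name:

-- Hypothetical user name
ALTER USER analyst_user SET DEFAULT_ROLE = role_a,
                            DEFAULT_SECONDARY_ROLES = ('ALL');

-- Within a session:
USE ROLE role_a;            -- primary role
USE SECONDARY ROLES ALL;    -- Role B's privileges now apply to queries,
                            -- but CREATE statements are still authorized by the primary role only
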
Question # 33:

What Snowflake features should be leveraged when modeling using Data Vault?

Options:

A.

Snowflake’s support of multi-table inserts into the data model’s Data Vault tables


B.

Data needs to be pre-partitioned to obtain superior data access performance


C.

Scaling up the virtual warehouses will support parallel processing of new source loads


D.

Snowflake’s ability to hash keys so that hash key joins can run faster than integer joins


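Option A refers to Snowflake's multi-table INSERT, which can populate several Data Vault tables (for example a hub and a satellite) from a single pass over a staging table. A minimal sketch, with hypothetical table and column names:

-- Hypothetical objects: stg_customer, hub_customer, sat_customer
INSERT ALL
  INTO hub_customer (customer_hk, customer_id, load_dts, record_source)
    VALUES (customer_hk, customer_id, load_dts, record_source)
  INTO sat_customer (customer_hk, customer_name, load_dts, record_source)
    VALUES (customer_hk, customer_name, load_dts, record_source)
SELECT md5(customer_id)    AS customer_hk,
       customer_id,
       customer_name,
       current_timestamp() AS load_dts,
       'CRM'               AS record_source
FROM stg_customer;
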
Question # 34:

A company has an inbound share set up with eight tables and five secure views. The company plans to make the share part of its production data pipelines.

Which actions can the company take with the inbound share? (Choose two.)

Options:

A.

Clone a table from a share.


B.

Grant modify permissions on the share.


C.

Create a table from the shared database.


D.

Create additional views inside the shared database.


E.

Create a table stream on the shared table.


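For reference, an inbound share is consumed by creating a database from it, and the shared objects are read-only in the consumer account. A minimal sketch, with hypothetical account, share, and object names:

-- Hypothetical provider and share names
CREATE DATABASE partner_data FROM SHARE partner_org.partner_account.reviews_share;

-- Shared tables and secure views can be queried (read-only)
SELECT * FROM partner_data.public.customer_reviews;

-- Persisting shared data locally means copying it into a database the consumer owns
CREATE TABLE analytics.staging.customer_reviews_copy AS
SELECT * FROM partner_data.public.customer_reviews;
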
Question # 35:

Why might a Snowflake Architect use a star schema model rather than a 3NF model when designing a data architecture to run in Snowflake? (Select TWO).

Options:

A.

Snowflake cannot handle the joins implied in a 3NF data model.


B.

The Architect wants to remove data duplication from the data stored in Snowflake.


C.

The Architect is designing a landing zone to receive raw data into Snowflake.


D.

The BI tool needs a data model that allows users to summarize facts across different dimensions, or to drill down from the summaries.


E.

The Architect wants to present a simple flattened single view of the data to a particular group of end users.


Question # 36:

A media company needs a data pipeline that will ingest customer review data into a Snowflake table, and apply some transformations. The company also needs to use Amazon Comprehend to do sentiment analysis and make the de-identified final data set available publicly for advertising companies who use different cloud providers in different regions.

The data pipeline needs to run continuously and efficiently as new records arrive in the object storage, leveraging event notifications. Also, operational complexity, infrastructure maintenance (including platform upgrades and security), and development effort should be minimal.

Which design will meet these requirements?

Options:

A.

Ingest the data using COPY INTO and use streams and tasks to orchestrate transformations. Export the data into Amazon S3 to do model inference with Amazon Comprehend and ingest the data back into a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.


B.

Ingest the data using Snowpipe and use streams and tasks to orchestrate transformations. Create an external function to do model inference with Amazon Comprehend and write the final records to a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.


C.

Ingest the data into Snowflake using Amazon EMR and PySpark using the Snowflake Spark connector. Apply transformations using another Spark job. Develop a python program to do model inference by leveraging the Amazon Comprehend text analysis API. Then write the results to a Snowflake table and create a listing in the Snowflake Marketplace to make the data available to other companies.


D.

Ingest the data using Snowpipe and use streams and tasks to orchestrate transformations. Export the data into Amazon S3 to do model inference with Amazon Comprehend and ingest the data back into a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.


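To ground the building blocks named in these options: Snowpipe with auto-ingest reacts to object storage event notifications, a stream tracks newly loaded rows, and a task can orchestrate transformations, including calls to an external function. A heavily simplified sketch with hypothetical names (the external function and its API integration are assumed to exist already):

-- Hypothetical objects: reviews_stage, reviews_raw, reviews_scored, comprehend_sentiment()
CREATE PIPE reviews_pipe AUTO_INGEST = TRUE AS
  COPY INTO reviews_raw
  FROM @reviews_stage
  FILE_FORMAT = (TYPE = 'JSON');

CREATE STREAM reviews_stream ON TABLE reviews_raw;

CREATE TASK score_reviews
  WAREHOUSE = transform_wh
  SCHEDULE = '5 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('REVIEWS_STREAM')
AS
  INSERT INTO reviews_scored
  SELECT review_id,
         comprehend_sentiment(review_text) AS sentiment   -- assumed external function
  FROM reviews_stream;

ALTER TASK score_reviews RESUME;
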
Question # 37:

A media company needs a data pipeline that will ingest customer review data into a Snowflake table, and apply some transformations. The company also needs to use Amazon Comprehend to do sentiment analysis and make the de-identified final data set available publicly for advertising companies who use different cloud providers in different regions.

The data pipeline needs to run continuously and efficiently as new records arrive in the object storage, leveraging event notifications. Also, operational complexity, infrastructure maintenance (including platform upgrades and security), and development effort should be minimal.

Which design will meet these requirements?

Options:

A.

Ingest the data using COPY INTO and use streams and tasks to orchestrate transformations. Export the data into Amazon S3 to do model inference with Amazon Comprehend and ingest the data back into a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.


B.

Ingest the data using Snowpipe and use streams and tasks to orchestrate transformations. Create an external function to do model inference with Amazon Comprehend and write the final records to a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.


C.

Ingest the data into Snowflake using Amazon EMR and PySpark using the Snowflake Spark connector. Apply transformations using another Spark job. Develop a python program to do model inference by leveraging the Amazon Comprehend text analysis API. Then write the results to a Snowflake table and create a listing in the Snowflake Marketplace to make the data available to other companies.


D.

Ingest the data using Snowpipe and use streams and tasks to orchestrate transformations. Export the data into Amazon S3 to do model inference with Amazon Comprehend and ingest the data back into a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.


Question # 38:

Assuming all Snowflake accounts are using an Enterprise edition or higher, in which development and testing scenarios would copying of data be required, and zero-copy cloning not be suitable? (Select TWO).

Options:

A.

Developers create their own datasets to work against transformed versions of the live data.


B.

Production and development run in different databases in the same account, and Developers need to see production-like data but with specific columns masked.


C.

Data is in a production Snowflake account that needs to be provided to Developers in a separate development/testing Snowflake account in the same cloud region.


D.

Developers create their own copies of a standard test database previously created for them in the development account, for their initial development and unit testing.


E.

The release process requires pre-production testing of changes with data of production scale and complexity. For security reasons, pre-production also runs in the production account.


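As background, zero-copy cloning works only on objects within a single Snowflake account; the clone shares the source's storage until either side changes. A minimal sketch, with hypothetical names:

-- Clone an entire production database for development (same account, no data copied)
CREATE DATABASE dev_db CLONE prod_db;

-- Or clone a single table for a unit-test sandbox
CREATE TABLE dev_db.tests.orders_sandbox CLONE prod_db.sales.orders;
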
Question # 39:

In a managed access schema, what are characteristics of the roles that can manage object privileges? (Select TWO).

Options:

A.

Users with the SYSADMIN role can grant object privileges in a managed access schema.


B.

Users with the SECURITYADMIN role or higher, can grant object privileges in a managed access schema.


C.

Users who are database owners can grant object privileges in a managed access schema.


D.

Users who are schema owners can grant object privileges in a managed access schema.


E.

Users who are object owners can grant object privileges in a managed access schema.


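For reference, in a managed access schema object owners cannot grant privileges on their own objects; grants are managed centrally by the schema owner or a role with the MANAGE GRANTS privilege (such as SECURITYADMIN). A minimal sketch, with hypothetical names:

-- Create a managed access schema
CREATE SCHEMA analytics.reporting WITH MANAGED ACCESS;

-- Convert an existing schema to managed access
ALTER SCHEMA analytics.staging ENABLE MANAGED ACCESS;

-- Grants on objects in the schema are issued by the schema owner
-- or a role with MANAGE GRANTS, not by the individual object owners
GRANT SELECT ON ALL TABLES IN SCHEMA analytics.reporting TO ROLE analyst;
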
Question # 40:

How can the Snowflake context functions be used to help determine whether a user is authorized to see data that has column-level security enforced? (Select TWO).

Options:

A.

Set masking policy conditions using current_role targeting the role in use for the current session.


B.

Set masking policy conditions using is_role_in_session targeting the role in use for the current account.


C.

Set masking policy conditions using invoker_role targeting the executing role in a SQL statement.


D.

Determine if there are ownership privileges on the masking policy that would allow the use of any function.


E.

Assign the accountadmin role to the user who is executing the object.


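The context functions named in these options are typically referenced inside a masking policy body. A minimal sketch, with hypothetical policy, role, table, and column names:

CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE
    -- CURRENT_ROLE(): the role in use for the current session
    WHEN CURRENT_ROLE() IN ('HR_ANALYST') THEN val
    -- INVOKER_ROLE(): the role executing the statement (e.g. the owner of a calling view)
    WHEN INVOKER_ROLE() IN ('HR_APP_ROLE') THEN val
    ELSE '*** MASKED ***'
  END;

ALTER TABLE employees MODIFY COLUMN email SET MASKING POLICY email_mask;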