Pass the Snowflake SnowPro Advanced: Architect ARA-C01 Questions and Answers with CertsForce

Viewing page 5 of 5

Viewing questions 41-50
Question #41:

Database DB1 has schema S1, which has one table, T1.

DB1 --> S1 --> T1

The retention period of DB1 is set to 10 days.

The retention period of S1 is set to 20 days.

The retention period of T1 is set to 30 days.

The user runs the following command:

DROP DATABASE DB1;

What will the Time Travel retention period be for T1?

Options:

A.

10 days


B.

20 days


C.

30 days


D.

37 days


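For context, here is a minimal SQL sketch of the scenario, assuming the retention periods are set with the DATA_RETENTION_TIME_IN_DAYS parameter (the column definition is hypothetical, and retention beyond 1 day requires Enterprise Edition):

    -- Retention periods as described in the question.
    CREATE DATABASE DB1 DATA_RETENTION_TIME_IN_DAYS = 10;
    CREATE SCHEMA DB1.S1 DATA_RETENTION_TIME_IN_DAYS = 20;
    CREATE TABLE DB1.S1.T1 (id INT) DATA_RETENTION_TIME_IN_DAYS = 30;

    DROP DATABASE DB1;

    -- When a database is dropped, retention periods explicitly set on child
    -- schemas and tables are not honored: the children are retained for the
    -- database's own retention period.
    UNDROP DATABASE DB1;  -- possible within that window
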
Question #42:

A Snowflake Architect is designing an application and tenancy strategy for an organization where both strong legal isolation and multi-tenancy are requirements.

Which approach will meet these requirements if role-based access control (RBAC) is a viable option for isolating tenants?

Options:

A.

Create accounts for each tenant in the Snowflake organization.


B.

Create an object-per-tenant strategy if row-level security is viable for isolating tenants.


C.

Create an object-per-tenant strategy if row-level security is not viable for isolating tenants.


D.

Create a multi-tenant table strategy if row-level security is not viable for isolating tenants.


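As background, when a multi-tenant table is acceptable, row-level tenant isolation can be implemented with a row access policy. The sketch below uses hypothetical object names and a deliberately simplified role-to-tenant mapping:

    -- Hypothetical multi-tenant table.
    CREATE TABLE sales (tenant_id VARCHAR, amount NUMBER);

    -- Rows are visible only to the role whose name matches the tenant ID
    -- (a simplified one-role-per-tenant convention).
    CREATE ROW ACCESS POLICY tenant_policy AS (tenant_id VARCHAR)
      RETURNS BOOLEAN -> tenant_id = CURRENT_ROLE();

    ALTER TABLE sales ADD ROW ACCESS POLICY tenant_policy ON (tenant_id);

By contrast, strong legal isolation pushes toward separating tenants at a higher level, such as one account per tenant within the organization.
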
Question #43:

Which considerations need to be taken into account when using database cloning as a tool for data lifecycle management in a development environment? (Select TWO).

Options:

A.

Any pipes in the source are not cloned.


B.

Any pipes in the source referring to internal stages are not cloned.


C.

Any pipes in the source referring to external stages are not cloned.


D.

The clone inherits all granted privileges of all child objects in the source object, including the database.


E.

The clone inherits all granted privileges of all child objects in the source object, excluding the database.


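For reference, here is a cloning sketch (hypothetical database names) with the relevant cloning semantics noted as comments:

    -- Clone production into a development environment.
    CREATE DATABASE dev_db CLONE prod_db;

    -- Pipes that reference an internal stage are not cloned; pipes that
    -- reference an external stage are cloned.
    SHOW PIPES IN DATABASE dev_db;

    -- Child objects in the clone retain their granted privileges; the cloned
    -- database itself does not inherit the source database's grants.
    SHOW GRANTS ON DATABASE dev_db;
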
Question #44:

A company has a Snowflake account named ACCOUNTA in the AWS us-east-1 region. The company stores its marketing data in a Snowflake database named MARKET_DB. One of the company’s business partners has an account named PARTNERB in the Azure East US 2 region. For marketing purposes, the company has agreed to share the database MARKET_DB with the partner account.

Which of the following steps MUST be performed for the account PARTNERB to consume data from the MARKET_DB database?

Options:

A.

Create a new account (called AZABC123) in Azure East US 2 region. From account ACCOUNTA create a share of database MARKET_DB, create a new database out of this share locally in AWS us-east-1 region, and replicate this new database to AZABC123 account. Then set up data sharing to the PARTNERB account.


B.

From account ACCOUNTA create a share of database MARKET_DB, and create a new database out of this share locally in AWS us-east-1 region. Then make this database the provider and share it with the PARTNERB account.


C.

Create a new account (called AZABC123) in Azure East US 2 region. From account ACCOUNTA replicate the database MARKET_DB to AZABC123 and from this account set up the data sharing to the PARTNERB account.


D.

Create a share of database MARKET_DB, and create a new database out of this share locally in AWS us-east-1 region. Then replicate this database to the partner’s account PARTNERB.


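As a reference point, Snowflake's documented pattern for sharing across regions and cloud platforms is replicate-then-share. A rough sketch using the account names from the question (the organization name myorg is hypothetical, and schema/table grants are omitted for brevity):

    -- In ACCOUNTA (AWS us-east-1): enable replication to the new Azure account.
    ALTER DATABASE MARKET_DB ENABLE REPLICATION TO ACCOUNTS myorg.AZABC123;

    -- In AZABC123 (Azure East US 2): create and refresh the secondary database.
    CREATE DATABASE MARKET_DB AS REPLICA OF myorg.ACCOUNTA.MARKET_DB;
    ALTER DATABASE MARKET_DB REFRESH;

    -- Still in AZABC123: share the replica with the partner account.
    CREATE SHARE market_share;
    GRANT USAGE ON DATABASE MARKET_DB TO SHARE market_share;
    ALTER SHARE market_share ADD ACCOUNTS = PARTNERB;
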
Question #45:

A company is storing large numbers of small JSON files (ranging from 1 to 4 bytes) that are received from IoT devices and sent to a cloud storage provider. In any given hour, 100,000 files are added to the cloud storage.

What is the MOST cost-effective way to bring this data into a Snowflake table?

Options:

A.

An external table


B.

A pipe


C.

A stream


D.

A copy command at regular intervals


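For context, Snowpipe (a pipe) is Snowflake's serverless option for continuous micro-batch loading. Here is a minimal sketch with hypothetical names; a private bucket would additionally need a storage integration or credentials on the stage:

    CREATE TABLE iot_events (payload VARIANT);

    CREATE STAGE iot_stage
      URL = 's3://example-bucket/iot/'   -- hypothetical bucket
      FILE_FORMAT = (TYPE = JSON);

    -- AUTO_INGEST relies on cloud event notifications to trigger loads as
    -- new files arrive.
    CREATE PIPE iot_pipe AUTO_INGEST = TRUE AS
      COPY INTO iot_events
      FROM @iot_stage;
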
Question #46:

A company is designing its serving layer for data that is in cloud storage. Multiple terabytes of the data will be used for reporting. Some data does not have a clear use case but could be useful for experimental analysis. This experimentation data changes frequently and is sometimes wiped out and completely replaced within a few days.

The company wants to centralize access control, provide a single point of connection for the end-users, and maintain data governance.

What solution meets these requirements while MINIMIZING costs, administrative effort, and development overhead?

Options:

A.

Import the data used for reporting into a Snowflake schema with native tables. Then create external tables pointing to the cloud storage folders used for the experimentation data. Then create two different roles with grants to the different datasets to match the different user personas, and grant these roles to the corresponding users.


B.

Import all the data in cloud storage to be used for reporting into a Snowflake schema with native tables. Then create a role that has access to this schema and manage access to the data through that role.


C.

Import all the data in cloud storage to be used for reporting into a Snowflake schema with native tables. Then create two different roles with grants to the different datasets to match the different user personas, and grant these roles to the corresponding users.


D.

Import the data used for reporting into a Snowflake schema with native tables. Then create views that have SELECT commands pointing to the cloud storage files for the experimentation data. Then create two different roles to match the different user personas, and grant these roles to the corresponding users.


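For background, combining native tables for the curated reporting data with external tables over the volatile experimentation files keeps a single Snowflake entry point and centralizes governance. A sketch with hypothetical names (database/schema USAGE grants omitted):

    -- Experimentation data stays in cloud storage, exposed via an external table.
    CREATE EXTERNAL TABLE analytics.experiments.raw_events
      WITH LOCATION = @analytics.experiments.exp_stage
      FILE_FORMAT = (TYPE = PARQUET)   -- hypothetical file format
      AUTO_REFRESH = TRUE;

    -- One role per persona, with grants scoped to each dataset.
    CREATE ROLE reporting_role;
    CREATE ROLE experiment_role;
    GRANT SELECT ON ALL TABLES IN SCHEMA analytics.reporting TO ROLE reporting_role;
    GRANT SELECT ON ALL EXTERNAL TABLES IN SCHEMA analytics.experiments TO ROLE experiment_role;
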
Question #47:

A company is using Snowflake on Azure in the Netherlands. The company's analyst team also wants to analyze JSON-format data that is stored in an Amazon S3 bucket in the AWS Singapore region.

The Architect has been given the following requirements:

1. Provide access to frequently changing data

2. Keep egress costs to a minimum

3. Maintain low latency

How can these requirements be met with the LEAST amount of operational overhead?

Options:

A.

Use a materialized view on top of an external table against the S3 bucket in AWS Singapore.


B.

Use an external table against the S3 bucket in AWS Singapore and copy the data into transient tables.


C.

Copy the data between providers from S3 to Azure Blob storage to collocate, then use Snowpipe for data ingestion.


D.

Use AWS Transfer Family to replicate data between the S3 bucket in AWS Singapore and an Azure Netherlands Blob storage, then use an external table against the Blob storage.


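As context, a materialized view over an external table stores its results inside Snowflake (the Azure Netherlands deployment here), so repeat queries avoid cross-cloud reads of the S3 files. A sketch with hypothetical names (materialized views on external tables require Enterprise Edition or higher, and device_id is a hypothetical JSON field):

    CREATE STAGE sg_stage
      URL = 's3://example-sg-bucket/data/'   -- hypothetical Singapore bucket
      FILE_FORMAT = (TYPE = JSON);

    CREATE EXTERNAL TABLE sg_events
      WITH LOCATION = @sg_stage
      FILE_FORMAT = (TYPE = JSON);

    -- Refresh the metadata as files change; event notifications can automate this.
    ALTER EXTERNAL TABLE sg_events REFRESH;

    CREATE MATERIALIZED VIEW sg_events_mv AS
      SELECT value:device_id::STRING AS device_id, value AS payload
      FROM sg_events;
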
Question #48:

How does the standard virtual warehouse scaling policy work in Snowflake?

Options:

A.

It conserves credits by keeping running clusters fully loaded rather than starting additional clusters.


B.

It starts only if the system estimates that there is a query load that will keep the cluster busy for at least 6 minutes.


C.

It starts only if the system estimates that there is a query load that will keep the cluster busy for at least 2 minutes.


D.

It prevents or minimizes queuing by starting additional clusters instead of conserving credits.


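For reference, the scaling policy is set per multi-cluster warehouse (an Enterprise Edition feature). A minimal sketch with a hypothetical warehouse name:

    CREATE WAREHOUSE reporting_wh
      WAREHOUSE_SIZE = 'MEDIUM'
      MIN_CLUSTER_COUNT = 1
      MAX_CLUSTER_COUNT = 4
      SCALING_POLICY = 'STANDARD'   -- favors starting clusters to prevent queuing;
                                    -- 'ECONOMY' favors conserving credits
      AUTO_SUSPEND = 300
      AUTO_RESUME = TRUE;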