Pass the MuleSoft Certified Architect MCIA-Level-1 Questions and Answers with CertsForce

Viewing page 5 out of 9 pages
Viewing questions 41-50
Question # 41:

An external REST client periodically sends an array of records in a single POST request to a Mule application API endpoint.

The Mule application must validate each record of the request against a JSON schema before sending it to a downstream system, in the same order that it was received in the array.

Record processing will take place inside a router or scope that calls a child flow. The child flow has its own error handling defined. Any validation or communication failures should not prevent further processing of the remaining records.

To best address these requirements, what is the most idiomatic (used for its intended purpose) router or scope to use in the parent flow, and what type of error handler should be used in the child flow?

Options:

A.

First Successful router in the parent flow

On Error Continue error handler in the child flow


B.

For Each scope in the parent flow

On Error Continue error handler in the child flow


C.

Parallel For Each scope in the parent flow

On Error Propagate error handler in the child flow


D.

Until Successful router in the parent flow

On Error Propagate error handler in the child flow


Expert Solution
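For reference, the combination described in option B — a For Each scope in the parent flow iterating over the array in order, with the child flow handling its own failures via On Error Continue — can be sketched in Mule XML as follows. Flow names, config refs, and the schema path are hypothetical:

```xml
<!-- Sketch only: flow names, config refs, and schema location are illustrative -->
<flow name="process-records-parent">
  <http:listener config-ref="HTTP_Listener_config" path="/records"/>
  <!-- For Each processes records one at a time, preserving array order -->
  <foreach collection="#[payload]">
    <flow-ref name="process-single-record"/>
  </foreach>
</flow>

<flow name="process-single-record">
  <!-- Validate the current record against a JSON schema -->
  <json:validate-schema schema="schemas/record-schema.json"/>
  <http:request method="POST" config-ref="Downstream_config" path="/records"/>
  <error-handler>
    <!-- On Error Continue marks the record as handled, so the
         parent For Each moves on to the next record -->
    <on-error-continue>
      <logger level="WARN" message="#['Record failed: ' ++ (error.description default '')]"/>
    </on-error-continue>
  </error-handler>
</flow>
```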
Question # 42:

A leading eCommerce giant will use MuleSoft APIs on Runtime Fabric (RTF) to process customer orders. Some customer-sensitive information, such as credit card information, is required in request payloads or is included in response payloads in some of the APIs. Other API requests and responses are not authorized to access some of this customer-sensitive information but have been implemented to validate and transform based on the structure and format of this customer-sensitive information (such as account IDs, phone numbers, and postal codes).

What approach configures an API gateway to hide sensitive data exchanged between API consumers and API implementations, but can convert tokenized fields back to their original value for other API requests or responses, without having to recode the API implementations?

Later, the project team requires all API specifications to be augmented with an additional non-functional requirement (NFR) to protect the backend services from a high rate of requests, according to defined service-level agreements (SLAs). The NFR's SLAs are based on a new tiered subscription level ("Gold", "Silver", or "Platinum") that must be tied to a new parameter that is being added to the Accounts object in their enterprise data model.

Following MuleSoft's recommended best practices, how should the project team now convey the necessary non-functional requirement to stakeholders?

Options:

A.

Create and deploy API proxies in API Manager for the NFR, change the baseUri in each API specification to the corresponding API proxy implementation endpoint, and publish each modified API specification to Exchange


B.

Update each API specification with comments about the NFR's SLAs and publish each modified API specification to Exchange


C.

Update each API specification with a shared RAML fragment required to implement the NFR and publish the RAML fragment and each modified API specification to Exchange


D.

Create a shared RAML fragment required to implement the NFR, list each API implementation endpoint in the RAML fragment, and publish the RAML fragment to Exchange


Expert Solution
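For context, the approach in option C typically means publishing a reusable RAML fragment (for example, a trait) to Exchange and referencing it from each API specification. A minimal, hypothetical sketch of such a trait for SLA-tier-based rate limiting might look like:

```yaml
#%RAML 1.0 Trait
# Hypothetical shared fragment published to Exchange; wording is illustrative
usage: Apply to resources protected by SLA-based rate limiting
description: |
  Requests are rate limited per client according to the subscriber's
  tier ("Gold", "Silver", or "Platinum") on the Accounts object.
responses:
  429:
    description: Too many requests for the client's SLA tier
```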
Question # 43:

A Mule application contains a Batch Job with two Batch Steps (Batch_Step_1 and Batch_Step_2). A payload with 1000 records is received by the Batch Job.

How many threads are used by the Batch Job to process records, and how does each Batch Step process records within the Batch Job?

Options:

A.

Each Batch Job uses SEVERAL THREADS for the Batch Steps. Each Batch Step instance receives ONE record at a time as the payload, and RECORDS are processed IN PARALLEL within and between the two Batch Steps.


B.

Each Batch Job uses a SINGLE THREAD for all Batch Steps. Each Batch Step instance receives ONE record at a time as the payload, and RECORDS are processed IN ORDER, first through Batch_Step_1 and then through Batch_Step_2.


C.

Each Batch Job uses a SINGLE THREAD to process a configured block size of records. Each Batch Step instance receives A BLOCK OF records as the payload, and BLOCKS of records are processed IN ORDER.


D.

Each Batch Job uses SEVERAL THREADS for the Batch Steps. Each Batch Step instance receives ONE record at a time as the payload, and BATCH STEP INSTANCES execute IN PARALLEL to process records and Batch Steps in ANY order as fast as possible.


Expert Solution
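As background, a Mule 4 Batch Job splits the payload into individual records and dispatches blocks of them (default block size 100) across a pool of threads, while each Batch Step still receives one record at a time and each record moves through the steps in order. A minimal sketch, with illustrative names:

```xml
<!-- Sketch only: flow, job, and step names are illustrative -->
<flow name="batch-demo">
  <batch:job jobName="recordJob" blockSize="100">
    <batch:process-records>
      <batch:step name="Batch_Step_1">
        <logger message="#['Step 1: ' ++ write(payload)]"/>
      </batch:step>
      <batch:step name="Batch_Step_2">
        <logger message="#['Step 2: ' ++ write(payload)]"/>
      </batch:step>
    </batch:process-records>
    <batch:on-complete>
      <!-- payload here is the batch job result summary -->
      <logger message="#['Successful records: ' ++ payload.successfulRecords]"/>
    </batch:on-complete>
  </batch:job>
</flow>
```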
Question # 44:

A system API, EmployeeSAPI, is used to fetch employee data from an underlying SQL database.

The architect must design a caching strategy that queries the database only when there is an update to the employees table, and otherwise returns a cached response, in order to minimize the number of redundant transactions handled by the database.

What must the architect do to achieve the caching objective?

Options:

A.

Use an On Table Row listener on the employees table and call Invalidate Cache

Use an object store caching strategy and leave the expiration interval empty


B.

Use a Scheduler with a fixed frequency of one hour to trigger an invalidate cache flow

Use an object store caching strategy and leave the expiration interval empty


C.

Use a Scheduler with a fixed frequency of one hour to trigger an invalidate cache flow

Use an object store caching strategy and set the expiration interval to 1 hour


D.

Use an On Table Row listener on the employees table, call Invalidate Cache, and send the new employee data to the cache

Use an object store caching strategy and set the expiration interval to 1 hour


Expert Solution
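As an illustration of the event-driven pattern named in the options, a Cache scope backed by an object-store caching strategy can be paired with a Database "On Table Row" listener that invalidates the cache only when the employees table actually changes. All names and config refs below are hypothetical:

```xml
<!-- Sketch only: config refs, store names, and the watermark column are illustrative -->
<ee:object-store-caching-strategy name="employeeCache" objectStore="employeeStore"/>
<os:object-store name="employeeStore" persistent="true"/>

<flow name="get-employees">
  <http:listener config-ref="HTTP_Listener_config" path="/employees"/>
  <!-- Serves the cached response unless the cache has been invalidated -->
  <ee:cache cachingStrategy-ref="employeeCache">
    <db:select config-ref="Database_Config">
      <db:sql>SELECT * FROM employees</db:sql>
    </db:select>
  </ee:cache>
</flow>

<flow name="invalidate-on-change">
  <!-- On Table Row fires only when rows in the employees table change -->
  <db:listener config-ref="Database_Config" table="employees" watermarkColumn="last_modified"/>
  <ee:invalidate-cache cachingStrategy-ref="employeeCache"/>
</flow>
```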
Question # 45:

An organization is successfully using API-led connectivity. However, as the application network grows, all the manually performed tasks to publish, share, discover, register, apply policies to, and deploy an API are becoming repetitive, driving the organization to automate this process using an efficient CI/CD pipeline. Considering Anypoint Platform's capabilities, how should the organization approach automating its API lifecycle?

Options:

A.

Use Runtime Manager REST APIs for API management and Maven for API deployment


B.

Use Maven with a custom configuration required for the API lifecycle


C.

Use Anypoint CLI or Anypoint Platform REST APIs with a scripting language such as Groovy


D.

Use Exchange REST APIs for API management and Maven for API deployment


Expert Solution
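For background, Mule application deployment in a CI/CD pipeline is commonly driven by the Mule Maven plugin, while publishing, registration, and policy application can be scripted via Anypoint CLI or the Platform REST APIs. A hypothetical mule-maven-plugin pom.xml fragment (all values are placeholders):

```xml
<!-- pom.xml fragment; application name, environment, and credentials are placeholders -->
<plugin>
  <groupId>org.mule.tools.maven</groupId>
  <artifactId>mule-maven-plugin</artifactId>
  <version>3.8.0</version>
  <extensions>true</extensions>
  <configuration>
    <cloudHubDeployment>
      <uri>https://anypoint.mulesoft.com</uri>
      <muleVersion>4.4.0</muleVersion>
      <applicationName>orders-api</applicationName>
      <environment>Production</environment>
      <username>${anypoint.username}</username>
      <password>${anypoint.password}</password>
    </cloudHubDeployment>
  </configuration>
</plugin>
```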
Question # 46:

An API has been updated in Anypoint Exchange by its API producer from version 3.1.1 to 3.2.0 following accepted semantic versioning practices and the changes have been communicated via the API's public portal. The API endpoint does NOT change in the new version. How should the developer of an API client respond to this change?

Options:

A.

The update should be identified as a project risk and full regression testing of the functionality that uses this API should be run.


B.

The API producer should be contacted to understand the change to existing functionality.


C.

The API producer should be requested to run the old version in parallel with the new one.


D.

The API client code ONLY needs to be changed if it needs to take advantage of new features.


Expert Solution
Question # 47:

An organization has decided on a cloud migration strategy to minimize the organization's own IT resources. Currently, the organization has all of its new applications running on premises and uses an on-premises load balancer that exposes all APIs under the base URL (https://api.rutujar.com).

As part of the migration strategy, the organization is planning to migrate all of its new applications and the load balancer to CloudHub.

What is the most straightforward and cost-effective approach to Mule application deployment and load balancing that preserves the public URLs?

Options:

A.

Deploy the Mule application to Cloudhub

Create a CNAME record for the base URL (https://api.rutujar.com) in the CloudHub shared load balancer that points to the A record of the on-premises load balancer

Apply mapping rules in the SLB to map URLs to their corresponding Mule applications


B.

Deploy the Mule application to Cloudhub

Update a CNAME record for the base URL (https://api.rutujar.com) in the organization's DNS server to point to the A record of the CloudHub dedicated load balancer

Apply mapping rules in the DLB to map URLs to their corresponding Mule applications


C.

Deploy the Mule application to Cloudhub

Update a CNAME record for the base URL (https://api.rutujar.com) in the organization's DNS server to point to the A record of the CloudHub shared load balancer

Apply mapping rules in the SLB to map URLs to their corresponding Mule applications


D.

For each migrated Mule application, deploy an API proxy application to CloudHub, with all traffic to the Mule applications routed through a CloudHub dedicated load balancer (DLB)

Update a CNAME record for the base URL (https://api.rutujar.com) in the organization's DNS server to point to the A record of the CloudHub dedicated load balancer

Apply mapping rules in the DLB to map each API proxy application


Expert Solution
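For context on the CNAME pattern the options describe, the organization's DNS zone would alias the existing public hostname to the CloudHub load balancer's hostname; a dedicated load balancer, for example, is reachable under a name in the lb.anypointdns.net domain. A hypothetical zone entry:

```text
; Hypothetical zone entry; the DLB hostname is a placeholder
api.rutujar.com.   300   IN   CNAME   myorg-dlb.lb.anypointdns.net.
```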
Question # 48:

An organization is designing multiple new applications to run on CloudHub in a single Anypoint VPC; these applications must share data using a common persistent Anypoint Object Store v2 (OSv2).

Which design gives these Mule applications access to the same object store instance?

Options:

A.

A VM connector configured to directly access the persistence queue of the persistent object store


B.

An Anypoint MQ connector configured to directly access the persistent object store


C.

Object Store v2 can be shared across CloudHub applications with the configured OSv2 connector


D.

The Object Store v2 REST API configured to access the persistent object store


Expert Solution
Question # 49:

What is NOT true about a Mule domain project?

Options:

A.

It allows Mule applications to share resources


B.

It can expose multiple services within the Mule domain on the same port


C.

It is only available in Anypoint Runtime Fabric


D.

It can send events (messages) to other Mule applications using VM queues


Expert Solution
Question # 50:

A large life sciences customer plans to use the Mule Tracing module with the Mapped Diagnostic Context (MDC) logging operations to enrich logging in its Mule application and to improve tracking by providing more context in the Mule application logs. The customer also wants to improve throughput and lower the message processing latency in its Mule application flows.

After installing the Mule Tracing module in the Mule application, how should logging be performed in flows in Mule applications, and what should be changed in the log4j2.xml files?

Options:

A.

In the flows, add Mule Tracing module Set logging variable operations before any Core Logger components.

In log4j2.xml files, change the appender's pattern layout to use %MDC and then assign the appender to a Logger or Root element.


B.

In the flows, add Mule Tracing module Set logging variable operations before any Core Logger components.

In log4j2.xml files, change the appender's pattern layout to use the %MDC placeholder and then assign the appender to an AsyncLogger element.


C.

In the flows, add Mule Tracing module Set logging variable operations before any Core Logger components.

In log4j2.xml files, change the appender's pattern layout to use the %asyncLogger placeholder and then assign the appender to an AsyncLogger element.


D.

In the flows, wrap Logger components in Async scopes. In log4j2.xml files, change the appender's pattern layout to use the %asyncLogger placeholder and then assign the appender to a Logger or Root element.


Expert Solution
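As background for the options, Log4j 2 exposes MDC values through the %X (alias %MDC) pattern converter, and asynchronous logging is configured with AsyncLogger elements. A hypothetical log4j2.xml fragment (appender name and MDC key are illustrative):

```xml
<!-- Sketch only: appender name and MDC key are illustrative -->
<Configuration>
  <Appenders>
    <Console name="Console">
      <!-- %X{correlationId} (alias %MDC) prints the MDC value set by the
           Mule Tracing module's Set logging variable operation -->
      <PatternLayout pattern="%d [%t] %-5p %X{correlationId} %c - %m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <!-- AsyncLogger hands log events to a background thread,
         reducing logging latency inside flows -->
    <AsyncLogger name="org.mule" level="INFO">
      <AppenderRef ref="Console"/>
    </AsyncLogger>
    <Root level="INFO">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>
```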