Pass the Salesforce MuleSoft-Integration-Architect-I Questions and Answers with CertsForce

Question # 11:

An organization's governance process requires project teams to get formal approval from all key stakeholders for all new integration design specifications. An integration Mule application is being designed that interacts with various backend systems. The Mule application will be created using Anypoint Design Center or Anypoint Studio and will then be deployed to a customer-hosted runtime.

What key elements should be included in the integration design specification when requesting approval for this Mule application?

Options:

A.

SLAs and non-functional requirements to access the backend systems


B.

Snapshots of the Mule application's flows, including their error handling


C.

A list of current and future consumers of the Mule application and their contact details


D.

The credentials to access the backend systems and contact details for the administrator of each system


Expert Solution
Question # 12:

A popular retailer is designing a public API for its numerous business partners. Each business partner will invoke the API at the URL https://api.acme.com/partners/v1. The API implementation is estimated to require deployment to 5 CloudHub workers.

The retailer has obtained a public X.509 certificate for the name api.acme.com, signed by a reputable CA, to be used as the server certificate.

Where and how should the X.509 certificate and Mule applications be used to configure load balancing among the 5 CloudHub workers, and what DNS entries should be configured in order for the retailer to support its numerous business partners?

Options:

A.

Add the X.509 certificate to the Mule application's deployable archive, then configure a CloudHub Dedicated Load Balancer (DLB) for each of the Mule application's CloudHub workers

Create a CNAME for api.acme.com pointing to the DLB's A record


B.

Add the X.509 certificate to the CloudHub Shared Load Balancer (SLB), not to the Mule application

Create a CNAME for api.acme.com pointing to the SLB's A record


C.

Add the X.509 certificate to a CloudHub Dedicated Load Balancer (DLB), not to the Mule application

Create a CNAME for api.acme.com pointing to the DLB's A record


D.

Add the X.509 certificate to the Mule application's deployable archive, then configure the CloudHub Shared Load Balancer (SLB) for each of the Mule application's CloudHub workers

Create a CNAME for api.acme.com pointing to the SLB's A record


Expert Solution
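
To make the TLS detail concrete: a partner client sees the certificate presented by whatever terminates TLS, which in CloudHub means a custom certificate must live on a Dedicated Load Balancer rather than inside the Mule application. The following minimal Java sketch (standard library only; api.acme.com is the question's fictional host, not a live endpoint) shows how a caller could print the subject of the server certificate it actually receives:

```java
import java.net.URL;
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;
import javax.net.ssl.HttpsURLConnection;

// Illustrative only: opens a TLS connection to the API host and prints the
// server certificate's subject, which should match api.acme.com when the
// certificate is installed on the load balancer that terminates TLS.
public class CertCheck {
    public static void main(String[] args) throws Exception {
        HttpsURLConnection conn =
                (HttpsURLConnection) new URL("https://api.acme.com/partners/v1").openConnection();
        conn.connect();
        for (Certificate cert : conn.getServerCertificates()) {
            if (cert instanceof X509Certificate) {
                System.out.println(((X509Certificate) cert).getSubjectX500Principal());
            }
        }
        conn.disconnect();
    }
}
```
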
Question # 13:

A company is planning to extend its Mule APIs to the Europe region. Currently, all new applications are deployed to CloudHub in the US region following this naming convention:

{API name}-{environment}, for example Orders-SAPI-dev, Orders-SAPI-prod, etc.

Considering there is no network restriction blocking communication between APIs, what strategy should be implemented in order to run the same new APIs in the EU region of CloudHub as well, to minimize latency between the APIs and the target users and systems in Europe?

Options:

A.

Set the region property to Europe (eu-de) in API Manager for all the Mule applications

No need to change the naming convention


B.

Set the region property to Europe (eu-de) in API Manager for all the Mule applications

Change the naming convention to {API name}-{environment}-{region} and communicate this change to the consuming applications and users


C.

Set the region property to Europe (eu-de) in Runtime Manager for all the Mule applications

No need to change the naming convention


D.

Set the region property to Europe (eu-de) in Runtime Manager for all the Mule applications

Change the naming convention to {API name}-{environment}-{region} and communicate this change to the consuming applications and users


Expert Solution
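
For reference, the region-qualified naming convention from options B and D can be expressed as a trivial helper. In the sketch below, eu-de comes from the question, while us-e2 is an assumed region code for the existing US deployments:

```java
// Illustrative sketch of the {API name}-{environment}-{region} naming
// convention discussed in this question; the us-e2 region code is an assumption.
public class AppNaming {
    static String appName(String api, String environment, String region) {
        // CloudHub application names must be globally unique, which is why a
        // region suffix avoids collisions when the same API runs in US and EU.
        return api + "-" + environment + "-" + region;
    }

    public static void main(String[] args) {
        System.out.println(appName("Orders-SAPI", "prod", "us-e2")); // Orders-SAPI-prod-us-e2
        System.out.println(appName("Orders-SAPI", "prod", "eu-de")); // Orders-SAPI-prod-eu-de
    }
}
```
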
Question # 14:

An integration team follows MuleSoft’s recommended approach to full lifecycle API development.

Which activity should this team perform during the API implementation phase?

Options:

A.

Validate the API specification


B.

Use the API specification to build the MuleSoft application


C.

Design the API specification


D.

Use the API specification to monitor the MuleSoft application


Expert Solution
Question # 15:

Why would an Enterprise Architect use a single enterprise-wide canonical data model (CDM) when designing an integration solution using Anypoint Platform?

Options:

A.

To reduce dependencies when integrating multiple systems that use different data formats


B.

To automate AI-enabled API implementation generation based on normalized backend databases from separate vendors


C.

To leverage a data abstraction layer that shields existing Mule applications from non-backward-compatible changes to the model's data structure


D.

To remove the need to perform data transformation when processing message payloads in Mule applications


Expert Solution
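
To illustrate the dependency-reduction argument behind option A: with a canonical data model, each system needs only a mapping to and from the canonical shape (2N mappings for N systems) instead of up to N×(N−1) point-to-point translations. The sketch below uses hypothetical record types, not real vendor schemas:

```java
// Hypothetical canonical data model (CDM) sketch: each backend format is
// mapped once to the canonical shape, so adding a new system adds one pair
// of mappings rather than a translation to every other format.
public class CdmExample {
    // Canonical representation shared across integrations.
    record CanonicalCustomer(String id, String fullName, String email) {}

    // Two source-specific shapes (illustrative only).
    record CrmContact(String contactId, String first, String last, String mail) {}
    record ErpAccount(String acctNo, String displayName, String emailAddr) {}

    static CanonicalCustomer fromCrm(CrmContact c) {
        return new CanonicalCustomer(c.contactId(), c.first() + " " + c.last(), c.mail());
    }

    static CanonicalCustomer fromErp(ErpAccount a) {
        return new CanonicalCustomer(a.acctNo(), a.displayName(), a.emailAddr());
    }

    public static void main(String[] args) {
        System.out.println(fromCrm(new CrmContact("C1", "Ada", "Lovelace", "ada@example.com")));
        System.out.println(fromErp(new ErpAccount("A9", "Grace Hopper", "grace@example.com")));
    }
}
```
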
Question # 16:

An organization plans to use the Anypoint Platform audit logging service to log Anypoint MQ actions.

What consideration must be kept in mind when leveraging Anypoint MQ Audit Logs?

Options:

A.

Anypoint MQ Audit Logs include logs for sending, receiving, or browsing messages


B.

Anypoint MQ Audit Logs include logs for failed Anypoint MQ operations


C.

Anypoint MQ Audit Logs include logs for queue create, delete, modify, and purge operations


Expert Solution
Question # 17:

What approach configures an API gateway to hide sensitive data exchanged between API consumers and API implementations, while still being able to convert tokenized fields back to their original values for other API requests or responses, without having to recode the API implementations?

Options:

A.

Create both masking and tokenization formats and use both to apply a tokenization policy in an API gateway to mask sensitive values in message payloads with characters, and apply a corresponding detokenization policy to return the original values to other APIs


B.

Create a masking format and use it to apply a tokenization policy in an API gateway to mask sensitive values in message payloads with characters, and apply a corresponding detokenization policy to return the original values to other APIs


C.

Use a field-level encryption policy in an API gateway to replace sensitive fields in message payloads with encrypted values, and apply a corresponding field-level decryption policy to return the original values to other APIs


D.

Create a tokenization format and use it to apply a tokenization policy in an API gateway to replace sensitive fields in message payloads with similarly formatted tokenized values, and apply a corresponding detokenization policy to return the original values to other APIs


Expert Solution
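
Conceptually, tokenization replaces a sensitive value with a similarly formatted surrogate and keeps the real value in a vault so it can be restored later. The following Java sketch illustrates the idea only; the actual Anypoint tokenization policy is configured in API Manager, not hand-coded like this:

```java
import java.security.SecureRandom;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Conceptual sketch of tokenization/detokenization (not the real Anypoint
// policy): a vault maps each sensitive value to a similarly formatted token,
// and the reverse lookup restores the original for authorized callers.
public class TokenVault {
    private final Map<String, String> tokenToValue = new ConcurrentHashMap<>();
    private final SecureRandom random = new SecureRandom();

    // Replace each digit of a card number with a random digit, preserving format.
    String tokenize(String cardNumber) {
        StringBuilder token = new StringBuilder();
        for (int i = 0; i < cardNumber.length(); i++) {
            char c = cardNumber.charAt(i);
            token.append(Character.isDigit(c) ? (char) ('0' + random.nextInt(10)) : c);
        }
        tokenToValue.put(token.toString(), cardNumber);
        return token.toString();
    }

    String detokenize(String token) {
        return tokenToValue.getOrDefault(token, token);
    }

    public static void main(String[] args) {
        TokenVault vault = new TokenVault();
        String token = vault.tokenize("4111-1111-1111-1111");
        System.out.println("tokenized:   " + token);                 // same format, random digits
        System.out.println("detokenized: " + vault.detokenize(token));
    }
}
```
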
Question # 18:

An organization has previously provisioned its own AWS VPC hosting various servers. The organization now needs to use CloudHub to host a Mule application that will implement a REST API. Once deployed to CloudHub, this Mule application must be able to communicate securely with the customer-provisioned AWS VPC resources within the same region, without being interceptable on the public internet.

What Anypoint Platform features should be used to meet these network communication requirements between CloudHub and the existing customer-provisioned AWS VPC?

Options:

A.

Add a MuleSoft-hosted Anypoint VPC configured with VPC peering to the AWS VPC


B.

Configure an external identity provider (IdP) in Anypoint Platform with certificates from the customer-provisioned AWS VPC


C.

Add a default API Whitelisting policy in API Manager to automatically whitelist the customer-provisioned AWS VPC IP ranges needed by the Mule application


D.

Use VM queues in the Mule application to allow any non-Mule assets within the customer-provisioned AWS VPC to subscribe to and receive messages


Expert Solution
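
Once an Anypoint VPC is peered with the customer's AWS VPC, workloads can reach backend servers on their private addresses without traversing the public internet. The following hypothetical smoke test (placeholder IP and port) shows what such a private-connectivity check might look like:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Hypothetical smoke test: with VPC peering in place, a deployed app should
// be able to open a connection to a backend server on its private address.
// The IP and port below are placeholders, not values from the question.
public class PeeringCheck {
    public static void main(String[] args) {
        try (Socket socket = new Socket()) {
            // Private RFC 1918 address of a server inside the peered AWS VPC (example only).
            socket.connect(new InetSocketAddress("10.0.12.34", 443), 3000);
            System.out.println("Private connectivity OK");
        } catch (IOException e) {
            System.out.println("Cannot reach peered VPC resource: " + e.getMessage());
        }
    }
}
```
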
Question # 19:

An Order microservice and a Fulfillment microservice are being designed to communicate with their clients through message-based integration (and NOT through API invocations).

The Order microservice publishes an Order message (a kind of command message) containing the details of an order to be fulfilled. The intention is that Order messages are consumed by only one Mule application, the Fulfillment microservice.

The Fulfillment microservice consumes Order messages, fulfills the order described therein, and then publishes an OrderFulfilled message (a kind of event message). Each OrderFulfilled message can be consumed by any interested Mule application, and the Order microservice is one such Mule application.

What is the most appropriate choice of message broker(s) and message destination(s) in this scenario?

Options:

A.

Order messages are sent to an Anypoint MQ exchange. OrderFulfilled messages are sent to an Anypoint MQ queue. Both microservices interact with Anypoint MQ as the message broker, which must therefore scale to support the load of both microservices


B.

Order messages are sent to a JMS queue. OrderFulfilled messages are sent to a JMS topic. Both microservices interact with the same JMS provider (message broker) instance, which must therefore scale to support the load of both microservices


C.

Order messages are sent directly to the Fulfillment microservice. OrderFulfilled messages are sent directly to the Order microservice. The Order microservice interacts with one AMQP-compatible message broker and the Fulfillment microservice interacts with a different AMQP-compatible message broker, so that both message brokers can be chosen and scaled to best support the load of each microservice


D.

Order messages are sent to a JMS queue. OrderFulfilled messages are sent to a JMS topic. The Order microservice interacts with one JMS provider (message broker) and the Fulfillment microservice interacts with a different JMS provider, so that both message brokers can be chosen and scaled to best support the load of each microservice


Expert Solution
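
The queue-versus-topic distinction in this scenario maps directly onto the standard JMS API: a queue delivers each Order command to exactly one consumer, while a topic broadcasts each OrderFulfilled event to all subscribers. A minimal sketch, assuming a provider-specific ConnectionFactory is supplied elsewhere:

```java
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;
import javax.jms.Topic;

// Sketch of the command/event pattern using the standard JMS 2.0 API:
// Order commands go to a queue (one consumer takes each message), and
// OrderFulfilled events go to a topic (every subscriber gets each message).
// Obtaining the ConnectionFactory is provider-specific and assumed here.
public class OrderMessaging {
    public static void run(ConnectionFactory factory) {
        try (JMSContext ctx = factory.createContext()) {
            // Command message: exactly one consumer (Fulfillment) processes each Order.
            Queue orders = ctx.createQueue("orders");
            ctx.createProducer().send(orders, "{\"orderId\": \"O-1001\"}");

            // Event message: every interested subscriber receives each OrderFulfilled.
            Topic fulfilled = ctx.createTopic("order.fulfilled");
            ctx.createProducer().send(fulfilled, "{\"orderId\": \"O-1001\", \"status\": \"FULFILLED\"}");
        }
    }
}
```
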
Question # 20:

An application deployed to a Runtime Fabric environment with two cluster replicas is designed to periodically trigger a flow that processes a high-volume set of records from the source system and synchronizes them with a SaaS system using the Batch Job scope.

After processing 1,000 records in a periodic synchronization of 100,000 records, the replica in which the Batch job instance was started went down due to an unexpected failure in the Runtime Fabric environment.

What is the consequence of losing the replica that runs the Batch job instance?

Options:

A.

The remaining 99,000 records will be lost and left unprocessed


B.

The second replica will take over processing the remaining 99,000 records


C.

A new replacement replica will be available and will process all 100,000 records from scratch, leading to duplicate record processing


D.

A new replacement replica will be available and will take over processing the remaining 99,000 records


Expert Solution
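
Because a Batch job instance's internal record queues are local to the replica that started it, losing that replica typically means the in-flight instance does not resume elsewhere. One common mitigation, sketched below with placeholder names, is to make the periodic synchronization idempotent so that a rerun does not write duplicates to the SaaS target:

```java
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical mitigation sketch: if a failed replica forces the periodic
// sync to rerun, an idempotency check against a persisted set of
// already-synchronized record IDs prevents duplicate writes to the SaaS
// target. The in-memory set and pushToSaas call are placeholders only.
public class IdempotentSync {
    private final Set<String> syncedIds = ConcurrentHashMap.newKeySet(); // stand-in for a persistent store

    void sync(List<String> recordIds) {
        for (String id : recordIds) {
            if (syncedIds.add(id)) {   // false means this record was already processed
                pushToSaas(id);
            }
        }
    }

    private void pushToSaas(String id) {
        System.out.println("synchronizing record " + id); // placeholder for the real SaaS call
    }
}
```
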