Are you tired of looking for a source that will keep you updated on the Salesforce Certified MuleSoft Platform Architect 1 (SU24) Exam and that also offers a collection of affordable, high-quality, and incredibly easy Salesforce MuleSoft-Platform-Architect-I Practice Questions? Well then, you are in luck, because Salesforcexamdumps.com just updated them! Get ready to become Salesforce MuleSoft certified.
PDF
$100 $40
Test Engine
$140 $56
PDF + Test Engine
$180 $72
Here are the features available with the Salesforce MuleSoft-Platform-Architect-I PDF:
Salesforce MuleSoft-Platform-Architect-I is the exam you need to pass to get certified, and the certification rewards deserving candidates who earn strong results. The Salesforce MuleSoft certification validates a candidate's expertise in working with Salesforce. In this fast-paced world, a certification is the quickest way to gain your employer's approval. Try your luck at passing the Salesforce Certified MuleSoft Platform Architect 1 (SU24) Exam and become a certified professional today. Salesforcexamdumps.com is always eager to extend a helping hand by providing approved and accepted Salesforce MuleSoft-Platform-Architect-I Practice Questions. Passing the Salesforce Certified MuleSoft Platform Architect 1 (SU24) Exam will be your ticket to a better future!
Pass with Salesforce MuleSoft-Platform-Architect-I Braindumps!
Contrary to the belief that certification exams are generally hard to get through, passing the Salesforce Certified MuleSoft Platform Architect 1 (SU24) Exam is incredibly easy, provided you have access to a reliable resource such as the Salesforcexamdumps.com Salesforce MuleSoft-Platform-Architect-I PDF. We have been in this business long enough to understand where most resources go wrong. Passing a Salesforce MuleSoft certification is all about having the right information. Hence, we filled our Salesforce MuleSoft-Platform-Architect-I Dumps with all the data you need to pass. These carefully curated sets of Salesforce Certified MuleSoft Platform Architect 1 (SU24) Practice Questions target the most frequently repeated exam questions, so you know they are essential and can ensure passing results. Stop wasting your time waiting around and order your set of Salesforce MuleSoft-Platform-Architect-I Braindumps now!
We aim to provide all Salesforce MuleSoft certification exam candidates with the best resources at minimum rates. You can check out our free demo before downloading to make sure the Salesforce MuleSoft-Platform-Architect-I Practice Questions are what you want. And do not forget about the discount; we always give our customers a little extra.
Unlike other websites, Salesforcexamdumps.com prioritizes the needs of Salesforce Certified MuleSoft Platform Architect 1 (SU24) Exam candidates. Not every Salesforce exam candidate has full-time access to the internet, and it is hard to sit in front of a computer screen for too many hours. Are you one of them? We understand, and that is why we are here with our Salesforce MuleSoft solutions. The Salesforce MuleSoft-Platform-Architect-I Question Answers come in two different formats: PDF and Online Test Engine. One is for customers who like online platforms with realistic exam simulation; the other is for those who prefer keeping their material close at hand. Moreover, you can download or print the Salesforce MuleSoft-Platform-Architect-I Dumps with ease.
If you still have some queries, our team of experts is in service 24/7 to answer your questions. Just leave us a quick message in the chat box below or email us at [email protected].
Question # 1

Refer to the exhibit. An organization needs to enable access to its customer data from both a mobile app and a web application, each of which needs access to common fields as well as certain unique fields. The data is available partially in a database and partially in a 3rd-party CRM system. What APIs should be created to best fit these design requirements?

A. Option A
B. Option B
C. Option C
D. Option D

Answer: C

Explanation: Correct Answer: Separate Experience APIs for the mobile and web app, but a common Process API that invokes separate System APIs created for the database and CRM system.
As per MuleSoft's API-led connectivity:
>> Experience APIs should be built as per each consumer's needs and experience.
>> Process APIs should contain all the orchestration logic needed to achieve the business functionality.
>> System APIs should be built for each backend system to unlock its data.
Reference: https://blogs.mulesoft.com/dev/api-dev/what-is-api-led-connectivity/
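To make the layering concrete, here is a minimal, hypothetical Java sketch (not MuleSoft code; all names are invented for illustration) of how a common Process API could orchestrate two System APIs while two Experience APIs reshape its output for the mobile and web channels.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of API-led layering: two System "APIs" unlock the database
// and the CRM, one Process "API" orchestrates them, and two Experience "APIs"
// shape the result for each consumer.
public class ApiLedLayeringSketch {

    // System layer: one component per backend system.
    interface CustomerDbSystemApi { Map<String, Object> getDbRecord(String customerId); }
    interface CrmSystemApi        { Map<String, Object> getCrmRecord(String customerId); }

    // Process layer: orchestration logic shared by all consumers.
    static class CustomerProcessApi {
        private final CustomerDbSystemApi db;
        private final CrmSystemApi crm;
        CustomerProcessApi(CustomerDbSystemApi db, CrmSystemApi crm) { this.db = db; this.crm = crm; }

        Map<String, Object> getCustomer(String id) {
            Map<String, Object> merged = new HashMap<>(db.getDbRecord(id));
            merged.putAll(crm.getCrmRecord(id));          // combine both data sources
            return merged;
        }
    }

    // Experience layer: each channel keeps only the fields it needs.
    static Map<String, Object> mobileView(Map<String, Object> c) {
        return Map.of("name", c.get("name"), "phone", c.get("phone"));
    }
    static Map<String, Object> webView(Map<String, Object> c) {
        return Map.of("name", c.get("name"), "email", c.get("email"), "segment", c.get("segment"));
    }

    public static void main(String[] args) {
        CustomerProcessApi process = new CustomerProcessApi(
                id -> Map.of("name", "Ada", "email", "ada@example.com"),    // stubbed DB System API
                id -> Map.of("phone", "+1-555-0100", "segment", "GOLD"));   // stubbed CRM System API
        Map<String, Object> customer = process.getCustomer("42");
        System.out.println("mobile: " + mobileView(customer));
        System.out.println("web:    " + webView(customer));
    }
}
```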
Question # 2
What are 4 important Platform Capabilities offered by Anypoint Platform?
A. API Versioning, API Runtime Execution and Hosting, API Invocation, API Consumer Engagement
B. API Design and Development, API Runtime Execution and Hosting, API Versioning, API Deprecation
C. API Design and Development, API Runtime Execution and Hosting, API Operations and Management, API Consumer Engagement
D. API Design and Development, API Deprecation, API Versioning, API Consumer Engagement

Answer: C

Explanation: Correct Answer: API Design and Development, API Runtime Execution and Hosting, API Operations and Management, API Consumer Engagement.
>> API Design and Development - Anypoint Studio, Anypoint Design Center, Anypoint Connectors
>> API Runtime Execution and Hosting - Mule Runtimes, CloudHub, Runtime Services
>> API Operations and Management - Anypoint API Manager, Anypoint Exchange
>> API Consumer Engagement - API Contracts, Public Portals, Anypoint Exchange, API Notebooks
Question # 3
What correctly characterizes unit tests of Mule applications?
A. They test the validity of input and output of source and target systems
B. They must be run in a unit testing environment with dedicated Mule runtimes for the environment
C. They must be triggered by an external client tool or event source
D. They are typically written using MUnit to run in an embedded Mule runtime that does not require external connectivity

Answer: D

Explanation: Correct Answer: They are typically written using MUnit to run in an embedded Mule runtime that does not require external connectivity.
The following two are characteristics of integration tests but NOT unit tests:
>> They test the validity of input and output of source and target systems.
>> They must be triggered by an external client tool or event source.
It is NOT true that unit tests must be run in a unit testing environment with dedicated Mule runtimes. MuleSoft offers MUnit for writing unit tests, and they run in an embedded Mule runtime without needing any separate or dedicated runtime to execute them. They also do NOT need any external connectivity, as MUnit supports mocking via stubs.
Reference: https://dzone.com/articles/munit-framework
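MUnit tests are written in Mule's XML DSL, but the underlying idea, exercising flow logic in an embedded runtime while every outbound dependency is mocked, is the same pattern sketched below in plain Java. The names are invented and this is not MUnit syntax; it only illustrates testing without external connectivity.

```java
// Hypothetical sketch (not MUnit): unit-testing flow logic with the backend
// dependency mocked, so no external connectivity is needed at test time.
public class QuoteFlowUnitTestSketch {

    // The outbound dependency the real flow would reach over the network.
    interface QuoteBackend { String fetchQuote(String category); }

    // The "flow" logic under test.
    static String quoteOfTheDayFlow(QuoteBackend backend) {
        String quote = backend.fetchQuote("daily");
        return quote == null ? "No quote available" : quote.trim().toUpperCase();
    }

    public static void main(String[] args) {
        // Stub the backend instead of calling a real system.
        QuoteBackend stub = category -> "  stay curious  ";
        String result = quoteOfTheDayFlow(stub);

        // Assert on the flow's behaviour only.
        if (!"STAY CURIOUS".equals(result)) {
            throw new AssertionError("unexpected flow output: " + result);
        }
        System.out.println("unit test passed: " + result);
    }
}
```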
Question # 4
An organization is implementing a Quote of the Day API that caches today's quote. What scenario can use the CloudHub Object Store via the Object Store connector to persist the cache's state?
A. When there are three CloudHub deployments of the API implementation to three separate CloudHub regions that must share the cache state
B. When there are two CloudHub deployments of the API implementation by two Anypoint Platform business groups to the same CloudHub region that must share the cache state
C. When there is one deployment of the API implementation to CloudHub and another deployment to a customer-hosted Mule runtime that must share the cache state
D. When there is one CloudHub deployment of the API implementation to three CloudHub workers that must share the cache state

Answer: D

Explanation: Correct Answer: When there is one CloudHub deployment of the API implementation to three CloudHub workers that must share the cache state.
Key detail in the scenario:
>> Use the CloudHub Object Store via the Object Store connector.
Considering the above:
>> CloudHub Object Stores have a one-to-one relationship with CloudHub Mule applications.
>> We CANNOT use an application's CloudHub Object Store, via the Object Store connector, to share state among multiple Mule applications running in different regions or business groups, or on customer-hosted Mule runtimes.
>> If it is really necessary, Anypoint Platform does allow access to another application's CloudHub Object Store through the Object Store REST API, but NOT through the Object Store connector.
So the only scenario where the CloudHub Object Store can be used via the Object Store connector to persist the cache's state is when there is one CloudHub deployment of the API implementation to multiple CloudHub workers that must share the cache state.
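The real connector is configured in Mule XML, but the caching pattern the question describes, every worker of a single application reading and writing one keyed store, can be sketched in plain Java against a hypothetical key-value interface (all names below are invented for illustration).

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the "Quote of the Day" cache pattern: all workers of
// ONE application share a single keyed store; the quote is computed once per
// day and then served from the store.
public class QuoteOfTheDayCacheSketch {

    // Stand-in for a shared object store (on CloudHub this would be the
    // application's Object Store, shared across its workers).
    interface SharedObjectStore {
        String retrieve(String key);
        void store(String key, String value);
    }

    static String quoteOfTheDay(SharedObjectStore store, String today) {
        String cached = store.retrieve(today);
        if (cached != null) {
            return cached;                       // another worker already cached it
        }
        String fresh = "Quote for " + today;     // pretend this is an expensive lookup
        store.store(today, fresh);
        return fresh;
    }

    public static void main(String[] args) {
        Map<String, String> backing = new ConcurrentHashMap<>();
        SharedObjectStore store = new SharedObjectStore() {
            public String retrieve(String key)          { return backing.get(key); }
            public void store(String key, String value) { backing.put(key, value); }
        };
        System.out.println(quoteOfTheDay(store, "2024-10-11")); // computes and caches
        System.out.println(quoteOfTheDay(store, "2024-10-11")); // served from the cache
    }
}
```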
Question # 5
What is a key requirement when using an external Identity Provider for Client Management in Anypoint Platform?
A. Single sign-on is required to sign in to Anypoint Platform
B. The application network must include System APIs that interact with the Identity Provider
C. To invoke OAuth 2.0-protected APIs managed by Anypoint Platform, API clients must submit access tokens issued by that same Identity Provider
D. APIs managed by Anypoint Platform must be protected by SAML 2.0 policies

Answer: C

Explanation: Correct Answer: To invoke OAuth 2.0-protected APIs managed by Anypoint Platform, API clients must submit access tokens issued by that same Identity Provider.
>> It is NOT necessary that single sign-on be used to sign in to Anypoint Platform, because the external Identity Provider here is used for client management, not platform login.
>> It is NOT necessary that all APIs managed by Anypoint Platform be protected by SAML 2.0 policies.
>> It is NOT true that the application network must include System APIs that interact with the Identity Provider.
The only TRUE statement among the given options is: "To invoke OAuth 2.0-protected APIs managed by Anypoint Platform, API clients must submit access tokens issued by that same Identity Provider."
References:
https://docs.mulesoft.com/api-manager/2.x/external-oauth-2.0-token-validation-policy
https://blogs.mulesoft.com/dev/api-dev/api-security-ways-to-authenticate-and-authorize/
https://www.folkstalk.com/2019/11/mulesoft-integration-and-platform.html
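As an illustration only (the endpoint URL and token value below are invented), a client of an OAuth 2.0-protected API would present the access token issued by the configured Identity Provider as a Bearer token on each call:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical sketch: calling an OAuth 2.0-protected API with an access token
// previously issued by the external Identity Provider (URL and token invented).
public class OAuthProtectedApiCallSketch {
    public static void main(String[] args) throws Exception {
        String accessToken = "eyJhbGciOi...";   // token obtained from the Identity Provider

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/orders/v1/orders"))
                .header("Authorization", "Bearer " + accessToken)  // validated by the OAuth token enforcement policy
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // A 401/403 would indicate the token is missing, expired, or issued by the wrong provider.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```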
Question # 6
What CANNOT be effectively enforced using an API policy in Anypoint Platform?
A. Guarding against Denial of Service attacks
B. Maintaining tamper-proof credentials between APIs
C. Logging HTTP requests and responses
D. Backend system overloading

Answer: A

Explanation: Correct Answer: Guarding against Denial of Service attacks.
>> Backend system overloading can be handled by enforcing a "Spike Control" policy.
>> Logging HTTP requests and responses can be done by enforcing a "Message Logging" policy.
>> Credentials can be tamper-proofed using "Security" and "Compliance" policies.
However, there is currently no effective way on Anypoint Platform to guard against DoS attacks using an API policy alone.
Reference: https://help.mulesoft.com/s/article/DDos-Dos-at
Question # 7
An API experiences a high rate of client requests (TPS) with small message payloads. How can usage limits be imposed on the API based on the type of client application?
A. Use an SLA-based rate limiting policy and assign a client application to a matching SLA tier based on its type
B. Use a spike control policy that limits the number of requests for each client application type
C. Use a cross-origin resource sharing (CORS) policy to limit resource sharing between client applications, configured by the client application type
D. Use a rate limiting policy and a client ID enforcement policy, each configured by the client application type

Answer: A

Explanation: Correct Answer: Use an SLA-based rate limiting policy and assign a client application to a matching SLA tier based on its type.
>> SLA tiers come into play whenever limits must be imposed on an API based on the type of client.
Reference: https://docs.mulesoft.com/api-manager/2.x/rate-limiting-and-throttling-sla-based-policies
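The policy itself is configured declaratively in API Manager, but the underlying idea, a different request allowance per SLA tier tracked per client application, can be sketched as a simple fixed-window limiter. The tier names and limits below are invented purely for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of SLA-tier-based rate limiting: each client application
// is assigned a tier, and each tier grants a different number of requests per
// fixed one-minute window (tiers and limits invented for illustration).
public class SlaTierRateLimiterSketch {

    private static final Map<String, Integer> TIER_LIMITS_PER_MINUTE =
            Map.of("MOBILE", 60, "WEB", 300, "PARTNER", 1000);

    private final Map<String, AtomicInteger> windowCounts = new ConcurrentHashMap<>();
    private long windowStartMillis = System.currentTimeMillis();

    synchronized boolean allow(String clientId, String slaTier) {
        long now = System.currentTimeMillis();
        if (now - windowStartMillis >= 60_000) {         // start a new one-minute window
            windowCounts.clear();
            windowStartMillis = now;
        }
        int limit = TIER_LIMITS_PER_MINUTE.getOrDefault(slaTier, 10);
        int used = windowCounts.computeIfAbsent(clientId, k -> new AtomicInteger()).incrementAndGet();
        return used <= limit;                            // false would translate to HTTP 429
    }

    public static void main(String[] args) {
        SlaTierRateLimiterSketch limiter = new SlaTierRateLimiterSketch();
        for (int i = 1; i <= 62; i++) {
            if (!limiter.allow("mobile-app-123", "MOBILE")) {
                System.out.println("request " + i + " rejected (tier limit reached)");
            }
        }
    }
}
```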
Question # 8
Which layer in API-led connectivity focuses on unlocking key systems, legacy systems, data sources, etc., and exposing their functionality?
A. Experience Layer
B. Process Layer
C. System Layer

Answer: C

Explanation: Correct Answer: System Layer.
The APIs used in an API-led approach to connectivity fall into three categories:
>> System APIs - these usually access the core systems of record and provide a means of insulating the user from the complexity of, or any changes to, the underlying systems. Once built, many users can access data without any need to learn the underlying systems and can reuse these APIs in multiple projects.
>> Process APIs - these APIs interact with and shape data within a single system or across systems (breaking down data silos) and are created without a dependence on the source systems from which that data originates or on the target channels through which that data is delivered.
>> Experience APIs - these are the means by which data can be reconfigured so that it is most easily consumed by its intended audience, all from a common data source, rather than setting up separate point-to-point integrations for each channel. An Experience API is usually created with API-first design principles, with the API designed for a specific user experience in mind.
Question # 9
What is the most performant out-of-the-box solution in Anypoint Platform to track transaction state in an asynchronously executing long-running process implemented as a Mule application deployed to multiple CloudHub workers?
A. Redis distributed cache
B. java.util.WeakHashMap
C. Persistent Object Store
D. File-based storage

Answer: C

Explanation: Correct Answer: Persistent Object Store.
>> A Redis distributed cache is performant but NOT an out-of-the-box solution in Anypoint Platform.
>> File-based storage is neither performant nor an out-of-the-box solution in Anypoint Platform.
>> java.util.WeakHashMap needs a completely custom cache implementation in Java code and is limited to the JVM where it is running, which means the cached state is local to each worker and is not shared when running on multiple workers. So it is neither out-of-the-box nor worker-aware on CloudHub. https://www.baeldung.com/java-weakhashmap
>> The Persistent Object Store is an out-of-the-box Anypoint Platform capability that is performant and worker-aware across multiple CloudHub workers. https://docs.mulesoft.com/object-store/
So, Persistent Object Store is the right answer.
Question # 10
An Order API must be designed that contains significant amounts of integration logic and involves the invocation of the Product API. The power relationship between Order API and Product API is one of "Customer/Supplier", because the Product API is used heavily throughout the organization and is developed by a dedicated development team located in the office of the CTO. What strategy should be used to deal with the API data model of the Product API within the Order API?
A. Convince the development team of the Product API to adopt the API data model of the Order API such that the integration logic of the Order API can work with one consistent internal data model
B. Work with the API data types of the Product API directly when implementing the integration logic of the Order API such that the Order API uses the same (unchanged) data types as the Product API
C. Implement an anti-corruption layer in the Order API that transforms the Product API data model into internal data types of the Order API
D. Start an organization-wide data modeling initiative that will result in an Enterprise Data Model that will then be used in both the Product API and the Order API

Answer: A

Explanation: Correct Answer: Convince the development team of the Product API to adopt the API data model of the Order API such that the integration logic of the Order API can work with one consistent internal data model.
Key detail to note from the given scenario:
>> The power relationship between the Order API and the Product API is one of Customer/Supplier.
So, per the rules of such power relationships, the caller (in this case the Order API team) can request features from the called API (the Product API team), and the Product API team would need to accommodate those requests.
Question # 12
A new upstream API is being designed to offer an SLA of 500 ms median and 800 ms maximum (99th percentile) response time. The corresponding API implementation needs to sequentially invoke 3 downstream APIs of very similar complexity. The first of these downstream APIs offers the following SLA for its response time: median: 100 ms, 80th percentile: 500 ms, 95th percentile: 1000 ms. If possible, how can a timeout be set in the upstream API for the invocation of the first downstream API to meet the new upstream API's desired SLA?
A. Set a timeout of 50 ms; this times out more invocations of that API but gives additional room for retries
B. Set a timeout of 100 ms; that leaves 400 ms for the other two downstream APIs to complete
C. No timeout is possible to meet the upstream API's desired SLA; a different SLA must be negotiated with the first downstream API or an alternative API must be invoked
D. Do not set a timeout; the invocation of this API is mandatory and so we must wait until it responds

Answer: B

Explanation: Correct Answer: Set a timeout of 100 ms; that leaves 400 ms for the other two downstream APIs to complete.
Key details to take from the given scenario:
>> The upstream API's target SLA is 500 ms (median); the maximum SLA response time can be set aside.
>> This API calls 3 downstream APIs sequentially, and all are of similar complexity.
>> The first downstream API offers a median SLA of 100 ms, an 80th percentile of 500 ms, and a 95th percentile of 1000 ms.
Based on these details:
>> We can rule out the option suggesting a 50 ms timeout. If the offered median SLA is 100 ms, most calls would time out, time would be wasted on retries, and even when a retry succeeds, the remaining time would not leave enough room for the 2nd and 3rd downstream APIs to respond in time.
>> The option suggesting NOT setting a timeout because the invocation is mandatory is poor practice. If the first API does not respond within its offered median SLA of 100 ms, it will most likely respond in 500 ms (80th percentile) or 1000 ms (95th percentile). In both cases, getting a successful response does no good, because by then the upstream API's 500 ms SLA is already breached and there is no time left to call the 2nd and 3rd downstream APIs.
>> It is NOT true that no timeout can meet the upstream API's desired SLA.
Since the first downstream API offers a median SLA of 100 ms, most of the time its responses arrive within that window. Setting a timeout of 100 ms is therefore ideal for most calls, as it leaves 400 ms of the budget for the remaining two downstream API calls.
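A minimal sketch of the time-budget arithmetic the explanation walks through, subtracting each invocation's timeout from the remaining SLA budget. The 200 ms / 200 ms split for the second and third calls is an illustrative assumption, not part of the question.

```java
import java.time.Duration;

// Hypothetical sketch of the SLA time-budget reasoning: a 500 ms overall budget,
// 100 ms reserved for the first downstream call, and the remaining 400 ms split
// across the other two calls of similar complexity (split chosen for illustration).
public class SlaTimeBudgetSketch {

    public static void main(String[] args) {
        Duration upstreamSla = Duration.ofMillis(500);

        // Per-call timeouts for the three sequential downstream invocations.
        Duration[] timeouts = {
                Duration.ofMillis(100),   // downstream API 1: matches its median SLA
                Duration.ofMillis(200),   // downstream API 2 (assumed share of the remainder)
                Duration.ofMillis(200)    // downstream API 3 (assumed share of the remainder)
        };

        Duration remaining = upstreamSla;
        for (int i = 0; i < timeouts.length; i++) {
            remaining = remaining.minus(timeouts[i]);
            System.out.printf("after call %d (timeout %d ms): %d ms of budget left%n",
                    i + 1, timeouts[i].toMillis(), remaining.toMillis());
        }
        // A negative remainder would mean the chosen timeouts cannot honour the 500 ms SLA.
        System.out.println("budget respected: " + !remaining.isNegative());
    }
}
```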
Question # 13
The implementation of a Process API must change. What is a valid approach that minimizes the impact of this change on API clients?
A. Update the RAML definition of the current Process API and notify API client developers by sending them links to the updated RAML definition
B. Postpone changes until API consumers acknowledge they are ready to migrate to a new Process API or API version
C. Implement required changes to the Process API implementation so that whenever possible, the Process API's RAML definition remains unchanged
D. Implement the Process API changes in a new API implementation, and have the old API implementation return an HTTP status code 301 - Moved Permanently to inform API clients they should be calling the new API implementation

Answer: C

Explanation: Correct Answer: Implement required changes to the Process API implementation so that, whenever possible, the Process API's RAML definition remains unchanged.
Key requirement in the question:
>> An approach that minimizes the impact of this change on API clients.
Based on the above:
>> Updating the RAML definition could impact API clients if the changes require anything mandatory on the client side, so it should be avoided unless really necessary.
>> Implementing the changes as a completely different API and then redirecting clients with a 3xx status code is a disruptive design that heavily impacts API clients.
>> Organizations cannot simply postpone required changes until all API consumers acknowledge they are ready to migrate to a new Process API or API version; that is unrealistic.
The best way to handle the change is to implement the required changes in the API implementation so that, whenever possible, the API's RAML definition remains unchanged.
Question # 14
What is true about automating interactions with Anypoint Platform using tools such as Anypoint Platform REST APIs, Anypoint CLI, or the Mule Maven plugin?
A. Access to Anypoint Platform APIs and Anypoint CLI can be controlled separately through the roles and permissions in Anypoint Platform, so that specific users can get access to Anypoint CLI while others get access to the platform APIs
B. Anypoint Platform APIs can ONLY automate interactions with CloudHub, while the Mule Maven plugin is required for deployment to customer-hosted Mule runtimes
C. By default, the Anypoint CLI and Mule Maven plugin are NOT included in the Mule runtime, so they are NOT available to be used by deployed Mule applications
D. API policies can be applied to the Anypoint Platform APIs so that ONLY certain LOBs have access to specific functions

Answer: C

Explanation: Correct Answer: By default, the Anypoint CLI and Mule Maven plugin are NOT included in the Mule runtime, so they are NOT available to be used by deployed Mule applications.
>> We CANNOT apply API policies to the Anypoint Platform APIs the way we can on our own API instances, so the option suggesting this is FALSE.
>> Anypoint Platform APIs can be used to automate interactions with both CloudHub and customer-hosted Mule runtimes, not just CloudHub, so the option opposing this is FALSE.
>> The Mule Maven plugin is NOT mandatory for deployment to customer-hosted Mule runtimes; it just makes CI/CD automation smoother, so the option opposing this is FALSE.
>> There are no special roles or permissions on the platform to separately control which users may use the Anypoint CLI versus the Anypoint Platform APIs; with the appropriate general roles and permissions (API Owner, CloudHub Admin, etc.), one can use either option, so the option suggesting this is FALSE.
The only TRUE statement in the choices is that the Anypoint CLI and Mule Maven plugin are NOT included in the Mule runtime, so they are NOT available to be used by deployed Mule applications. Maven is part of Studio (or any other Maven installation used for development), and the CLI is a convenience tool, one of many ways to deploy an application to the runtime; both belong to the deployment and automation process, not to the runtime itself.
Question # 15
Say there is a legacy CRM system called CRM-Z which offers the functions below: 1. Customer creation 2. Amend details of an existing customer 3. Retrieve details of a customer 4. Suspend a customer. What is the best way to implement System APIs to expose this functionality of CRM-Z?
A. Implement a system API named customerManagement which has all the functionality wrapped in it as various operations/resources
B. Implement different system APIs named createCustomer, amendCustomer, retrieveCustomer and suspendCustomer, as they are modular and have separation of concerns
C. Implement different system APIs named createCustomerInCRMZ, amendCustomerInCRMZ, retrieveCustomerFromCRMZ and suspendCustomerInCRMZ, as they are modular and have separation of concerns

Answer: B

Explanation: Correct Answer: Implement different system APIs named createCustomer, amendCustomer, retrieveCustomer and suspendCustomer, as they are modular and have separation of concerns.
>> It is quite normal to have a single API with different verb + resource combinations. However, this fits well for an Experience API or a Process API and is not the best architecture style for System APIs. So the option with just one customerManagement API is not the best choice here.
>> The option with APIs named in the createCustomerInCRMZ format is the next closest choice with respect to modularization and maintenance, but the API names are directly coupled to the legacy system. A more forward-looking approach is to name the APIs by abstracting away the backend system names, as that allows seamless replacement or migration of any backend system at any time. So this is not the correct choice either.
>> createCustomer, amendCustomer, retrieveCustomer and suspendCustomer is the right approach and the best fit compared to the other options: the APIs are modular, their names are decoupled from the backend system, and they cover all the requirements a System API needs.
Question # 16
An organization has several APIs that accept JSON data over HTTP POST. The APIs are all publicly available and are associated with several mobile applications and web applications. The organization does NOT want to use any authentication or compliance policies for these APIs, but at the same time, is worried that some bad actor could send payloads that could somehow compromise the applications or servers running the API implementations. What out-of-the-box Anypoint Platform policy can address exposure to this threat?
A. Shut out bad actors by using HTTPS mutual authentication for all API invocations
B. Apply an IP blacklist policy to all APIs; the blacklist will include all bad actors
C. Apply a header injection and removal policy that detects the malicious data before it is used
D. Apply a JSON threat protection policy to all APIs to detect potential threat vectors

Answer: D

Explanation: Correct Answer: Apply a JSON threat protection policy to all APIs to detect potential threat vectors.
>> Usually, if APIs are designed and developed for specific, known consumers, we would IP-whitelist them to ensure that traffic only comes from those consumers.
>> However, as this scenario states that the APIs are publicly available and used by many mobile and web applications, it is NOT possible to identify and blacklist all possible bad actors.
>> So, a JSON threat protection policy is the best option to prevent malicious JSON payloads from such bad actors.
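The real policy is configured declaratively in API Manager with limits on things like nesting depth, string length and array size. The kind of structural check it performs can be sketched in plain Java; the depth limit and the naive brace-counting parser below are purely illustrative, not the policy's implementation.

```java
// Hypothetical sketch of the kind of structural limit a JSON threat protection
// policy enforces: reject payloads whose object/array nesting exceeds a maximum
// depth. (Real policies also limit string lengths, array sizes, etc.)
public class JsonDepthLimitSketch {

    static boolean withinDepthLimit(String json, int maxDepth) {
        int depth = 0;
        boolean inString = false;
        for (int i = 0; i < json.length(); i++) {
            char c = json.charAt(i);
            if (c == '"' && (i == 0 || json.charAt(i - 1) != '\\')) {
                inString = !inString;                 // ignore braces inside string literals
            } else if (!inString) {
                if (c == '{' || c == '[') {
                    depth++;
                    if (depth > maxDepth) {
                        return false;                 // would be rejected, e.g. with HTTP 400
                    }
                } else if (c == '}' || c == ']') {
                    depth--;
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        String normal = "{\"customer\":{\"name\":\"Ada\",\"tags\":[\"gold\"]}}";
        String hostile = "[".repeat(20) + "{}" + "]".repeat(20);   // deeply nested payload
        System.out.println("normal payload accepted:  " + withinDepthLimit(normal, 10));
        System.out.println("hostile payload accepted: " + withinDepthLimit(hostile, 10));
    }
}
```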
Question # 17
Refer to the exhibit.
A developer is building a client application to invoke an API deployed to the STAGING environment that is governed by a client ID enforcement policy. What is required to successfully invoke the API?
A. The client ID and secret for the Anypoint Platform account owning the API in the STAGING environment
B. The client ID and secret for the Anypoint Platform account's STAGING environment
C. The client ID and secret obtained from Anypoint Exchange for the API instance in the STAGING environment
D. A valid OAuth token obtained from Anypoint Platform and its associated client ID and secret

Answer: C

Explanation: Correct Answer: The client ID and secret obtained from Anypoint Exchange for the API instance in the STAGING environment.
>> We CANNOT use the client ID and secret of the Anypoint Platform account, or of any individual environment, to access the APIs.
>> As the policy enforced on the API in question is the "Client ID Enforcement" policy, OAuth-token-based access will not work.
The right way to access the API is to use the client ID and secret obtained from Anypoint Exchange for the API instance in the particular environment we want to work with.
References:
Managing API instance contracts on API Manager:
https://docs.mulesoft.com/api-manager/1.x/request-access-to-api-task
https://docs.mulesoft.com/exchange/to-request-access
https://docs.mulesoft.com/api-manager/2.x/policy-mule3-client-id-based-policies
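For illustration only: a client ID enforcement policy typically expects the client_id and client_secret values from the approved contract, sent as headers or query parameters (exactly where is configured in the policy). The URL and credential values in this sketch are invented.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical sketch: invoking an API protected by a Client ID Enforcement
// policy, passing the client_id/client_secret obtained for the API instance
// in the target environment (URL and credentials invented for illustration).
public class ClientIdEnforcedCallSketch {
    public static void main(String[] args) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://staging-api.example.com/customers/v1/customers/42"))
                .header("client_id", "0a1b2c3d4e5f")              // from the approved contract in Exchange
                .header("client_secret", "s3cr3t-from-exchange")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Without a valid, approved client ID/secret the policy rejects the call (401/403).
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```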
Question # 18
What is most likely NOT a characteristic of an integration test for a REST API implementation?
A. The test needs all source and/or target systems configured and accessible
B. The test runs immediately after the Mule application has been compiled and packaged
C. The test is triggered by an external HTTP request
D. The test prepares a known request payload and validates the response payload

Answer: B

Explanation: Correct Answer: The test runs immediately after the Mule application has been compiled and packaged.
>> Integration tests are the last layer of tests to add in order to be fully covered.
>> These tests run against a Mule application with its full configuration in place and are exercised from an external source, just as in production.
>> They exercise the application as a whole with the actual transports enabled, so external systems are affected when these tests run.
So, these tests do NOT run immediately after the Mule application has been compiled and packaged; it is unit tests that run at that point.
Reference: https://docs.mulesoft.com/mule-runtime/3.9/testing-strategies#integrationtesting
Question # 19
When using CloudHub with the Shared Load Balancer, what is managed EXCLUSIVELY by the API implementation (the Mule application) and NOT by Anypoint Platform?
A. The assignment of each HTTP request to a particular CloudHub worker
B. The logging configuration that enables log entries to be visible in Runtime Manager
C. The SSL certificates used by the API implementation to expose HTTPS endpoints
D. The number of DNS entries allocated to the API implementation

Answer: C

Explanation: Correct Answer: The SSL certificates used by the API implementation to expose HTTPS endpoints.
>> The assignment of each HTTP request to a particular CloudHub worker is taken care of by Anypoint Platform itself. We need not manage it explicitly in the API implementation, and in fact we CANNOT manage it in the API implementation.
>> The logging configuration that enables log entries to be visible in Runtime Manager is ALWAYS managed in the API implementation, not just when using the Shared Load Balancer, so it is not something managed EXCLUSIVELY in this scenario.
>> We DO NOT manage the number of DNS entries allocated to the API implementation inside the code; Anypoint Platform takes care of this.
It is the SSL certificates used by the API implementation to expose HTTPS endpoints that must be managed EXCLUSIVELY by the API implementation; Anypoint Platform does NOT do this when the Shared Load Balancer is used.
Question # 20
A code-centric API documentation environment should allow API consumers to investigate and execute API client source code that demonstrates invoking one or more APIs as part of representative scenarios. What is the most effective way to provide this type of code-centric API documentation environment using Anypoint Platform?
A. Enable mocking services for each of the relevant APIs and expose them via their Anypoint Exchange entry
B. Ensure the APIs are well documented through their Anypoint Exchange entries and API Consoles and share these pages with all API consumers
C. Create API Notebooks and include them in the relevant Anypoint Exchange entries
D. Make relevant APIs discoverable via an Anypoint Exchange entry

Answer: C

Explanation: Correct Answer: Create API Notebooks and include them in the relevant Anypoint Exchange entries.
>> API Notebooks are the Anypoint Platform feature that enables code-centric API documentation: https://docs.mulesoft.com/exchange/to-use-api-notebook
Question # 21
A System API is designed to retrieve data from a backend system that has scalability challenges. What API policy can best safeguard the backend system?
A. IP whitelist
B. SLA-based rate limiting
C. OAuth 2.0 token enforcement
D. Client ID enforcement

Answer: B

Explanation: Correct Answer: SLA-based rate limiting.
>> A Client ID enforcement policy addresses a "Compliance"-related NFR and does not help maintain Quality of Service (QoS). It cannot, and is not meant to, protect backend systems from scalability challenges.
>> IP whitelisting and OAuth 2.0 token enforcement address "Security"-related NFRs and again do not help maintain QoS. They cannot, and are not meant to, protect backend systems from scalability challenges.
Rate Limiting, Rate Limiting SLA, Throttling and Spike Control are the "Quality of Service (QoS)"-related policies and are meant to help protect backend systems from getting overloaded.
https://dzone.com/articles/how-to-secure-apis
Question # 22
Which of the following best fits the definition of API-led connectivity?
A. API-led connectivity is not just an architecture or technology but also a way to organize people and processes for efficient IT delivery in the organization
B. API-led connectivity is a 3-layered architecture covering Experience, Process and System layers
C. API-led connectivity is a technology which enables us to implement Experience, Process and System layer based APIs

Answer: A

Explanation: Correct Answer: API-led connectivity is not just an architecture or technology but also a way to organize people and processes for efficient IT delivery in the organization.
Reference: https://blogs.mulesoft.com/dev/api-dev/what-is-api-led-connectivity/
Question # 23
Refer to the exhibit.
What is true when using customer-hosted Mule runtimes with the MuleSoft-hosted Anypoint Platform control plane (hybrid deployment)?
A. Anypoint Runtime Manager initiates a network connection to a Mule runtime in order to deploy Mule applications
B. The MuleSoft-hosted Shared Load Balancer can be used to load balance API invocations to the Mule runtimes
C. API implementations can run successfully in customer-hosted Mule runtimes, even when they are unable to communicate with the control plane
D. Anypoint Runtime Manager automatically ensures HA in the control plane by creating a new Mule runtime instance in case of a node failure
Answer: C
Question # 24
What is typically NOT a function of the APIs created within the framework called API-led connectivity?
A. They provide an additional layer of resilience on top of the underlying backend system, thereby insulating clients from extended failure of these systems
B. They allow for innovation at the user interface level by consuming the underlying assets without being aware of how data is being extracted from backend systems
C. They reduce the dependency on the underlying backend systems by helping unlock data from backend systems in a reusable and consumable way
D. They can compose data from various sources and combine them with orchestration logic to create higher level value

Answer: A

Explanation: Correct Answer: They provide an additional layer of resilience on top of the underlying backend system, thereby insulating clients from extended failure of these systems.
In API-led connectivity:
>> Experience APIs allow for innovation at the user interface level by consuming the underlying assets without being aware of how data is being extracted from backend systems.
>> Process APIs compose data from various sources and combine them with orchestration logic to create higher level value.
>> System APIs reduce the dependency on the underlying backend systems by helping unlock data from backend systems in a reusable and consumable way.
However, these APIs never promise an additional layer of resilience on top of the underlying backend system that insulates clients from extended failure of those systems.
https://dzone.com/articles/api-led-connectivity-with-mule
Question # 25
What API policy would LEAST likely be applied to a Process API?
A. Custom circuit breaker
B. Client ID enforcement
C. Rate limiting
D. JSON threat protection

Answer: D

Explanation: Correct Answer: JSON threat protection.
Fact: technically, there are no restrictions on which policy can be applied at which layer; any policy can be applied to an API in any layer. However, context should be considered before blindly applying policies to APIs. That is why this question asks for the policy that would LEAST likely be applied to a Process API.
From the given options:
>> All policies except JSON threat protection can be applied without hesitation to APIs in the Process tier.
>> A JSON threat protection policy ideally fits Experience APIs, to prevent suspicious JSON payloads coming from external API clients. It covers a security aspect by blocking possibly malicious and harmful JSON payloads from external clients calling Experience APIs.
Since external API clients are never allowed to call Process APIs directly, and such malicious payloads are always stopped at the Experience API layer using this policy, it is LEAST likely that the same policy would be applied again to a Process layer API.
Reference: https://docs.mulesoft.com/api-manager/2.x/policy-mule3-provided-policies
Question # 26
What is true about where an API policy is defined in Anypoint Platform and how it is then applied to API instances?
A. The API policy is defined in Runtime Manager as part of the API deployment to a Mule runtime, and then ONLY applied to the specific API instance
B. The API policy is defined in API Manager for a specific API instance, and then ONLY applied to the specific API instance
C. The API policy is defined in API Manager and then automatically applied to ALL API instances
D. The API policy is defined in API Manager, and then applied to ALL API instances in the specified environment

Answer: B

Explanation: Correct Answer: The API policy is defined in API Manager for a specific API instance, and then ONLY applied to the specific API instance.
>> Once the API specifications are ready and published to Exchange, we visit API Manager and register an API instance for each API.
>> API Manager is the place where API management aspects take place, such as addressing NFRs by enforcing policies.
>> We can create multiple instances of the same API and manage them differently for different purposes.
>> One instance can have one set of API policies applied, and another instance of the same API can have a different set of policies applied for some other purpose.
>> These APIs and their instances are defined per environment, so they need to be managed separately in each environment.
>> Platform features can ensure that the same API instance configuration (SLAs, policies, etc.) gets promoted when promoting to higher environments, but this is optional; the configuration can still be changed per environment if needed.
>> Runtime Manager is the place to manage API implementations and their Mule runtimes, but NOT the APIs themselves. Though API policies get executed in Mule runtimes, we CANNOT enforce API policies in Runtime Manager; that must be done via API Manager for a chosen instance in an environment.
So, based on these facts, the right statement among the given choices is: "The API policy is defined in API Manager for a specific API instance, and then ONLY applied to the specific API instance."
Reference: https://docs.mulesoft.com/api-manager/2.x/latest-overview-concept
Question # 27
A Mule application exposes an HTTPS endpoint and is deployed to three CloudHub workers that do not use static IP addresses. The Mule application expects a high volume of client requests in short time periods. What is the most cost-effective infrastructure component that should be used to serve the high volume of client requests?
A. A customer-hosted load balancer
B. The CloudHub shared load balancer
C. An API proxy
D. Runtime Manager autoscaling

Answer: B

Explanation: Correct Answer: The CloudHub shared load balancer.
The scenario in this question can be broken down as follows:
>> There are 3 CloudHub workers, so there is already a good number of workers to handle a high volume of requests.
>> The workers are not using static IP addresses, so customer-hosted load-balancing solutions that require static IPs cannot be used.
>> We are looking for the most cost-effective component to load balance the client requests among the workers.
Based on these details:
>> Runtime Manager autoscaling is NOT cost-effective as it incurs extra cost; moreover, there are already 3 workers running, which is a good number.
>> A customer-hosted load balancer is also NOT the most cost-effective option (it requires a custom load balancer to maintain and license), and the Mule application does not have static IP addresses, which rules out custom load balancing anyway.
>> An API proxy is irrelevant here, as it plays no role in handling high volumes or load balancing.
So the only option that fits the scenario and is most cost-effective is the CloudHub shared load balancer.
Question # 28
Refer to the exhibit.
What is the best way to decompose one end-to-end business process into a collaboration of Experience, Process, and System APIs?
A. Option A
B. Option B
C. Option C
D. Option D

Answer: B

Explanation: Correct Answer: Allow System APIs to return data that is NOT currently required by the identified Process or Experience APIs.
>> All customizations for the end-user application should be handled in the Experience API only, not in the Process API.
>> We should use a tiered approach, but NOT always by creating exactly one API for each of the 3 layers. There might be a single Experience API, but there are often multiple Process APIs and System APIs; System APIs in particular will always be more than one, as they are the smallest modular APIs built in front of the end systems.
>> Process APIs can call System APIs as well as other Process APIs; there is no anti-pattern in API-led connectivity saying Process APIs should not call other Process APIs.
So the right answer in the given set of options, per API-led connectivity principles, is to allow System APIs to return data that is NOT currently required by the identified Process or Experience APIs. That way, future Process APIs can make use of that data from the System APIs, and the System layer APIs need not be touched again and again.
What our clients say about MuleSoft-Platform-Architect-I Quiz Sheets
Faraz Bobal
Oct 11, 2024
Salesforcexamdumps is the top-notch online resource for the MuleSoft-Platform-Architect-I exam. Additionally, they made a few little adjustments for the exam updates, which helped me prepare for the test and meet the requirements.
Aadil Hayre
Oct 10, 2024
I swiftly browsed the materials at Salesforcexamdumps with the assistance of specialists. I got 88% marks and landed my dream job. Thank you so much, Salesforcexamdumps.
Bishnu Patel
Oct 10, 2024
I was really worried in the past few days because I could not find any way to pass the MuleSoft-Platform-Architect-I exam. Then one of my friends suggested Salesforcexamdumps' excellent braindumps. Trust me, this website is gold. Highly recommended to you all.
Nolan Stewart
Oct 09, 2024
Salesforcexamdumps platform offered a money-back guarantee, which gave me more trust and demonstrated their legitimacy. To ensure success on the first try, I would advise everyone to purchase the Salesforce Certified MuleSoft Platform Architect 1 Exam dumps pdf.
Matteo Taylor
Oct 09, 2024
I was quite concerned about preparing for my Salesforce MuleSoft exam until I visited Salesforcexamdumps and downloaded their study guide. The content was produced by specialists with such accuracy and conciseness.
Ram sawami
Oct 08, 2024
Nobody could possibly regret purchasing anything from Salesforcexamdumps, in my opinion. With less work, I too passed my exam, and they assured me of success.
Daxton Carter
Oct 08, 2024
There are many resource suppliers available if you search for them. I trusted Salesforcexamdumps and downloaded the Salesforce MuleSoft PDF questions and answers. The essential course concepts were succinctly and concisely described.
Kobe Patterson
Oct 07, 2024
Nobody could possibly regret purchasing anything from Salesforcexamdumps, in my opinion. With less work, I passed my exam, and they assured me of 100% success with these questions.
Arpit Chaudhry
Oct 07, 2024
Shortly before my exam, I visited Salesforcexamdumps and looked over their dumps. I purchased the PDF Questions and Answers for Salesforce MuleSoft after seeing that they offered a money-back guarantee. I wholeheartedly suggest this exam material.
Theo Bates
Oct 06, 2024
Thank you Salesforcexamdumps. I passed my MuleSoft-Platform-Architect-I exam just because of you guys.