Amazon Web Services AWS Certified AI Practitioner Exam AIF-C01 Question # 34 Topic 4 Discussion

Question #: 34
Topic #: 4

A company wants to use language models to create an application for inference on edge devices. The inference must have the lowest latency possible.

Which solution will meet these requirements?


A. Deploy optimized small language models (SLMs) on edge devices.

B. Deploy optimized large language models (LLMs) on edge devices.

C. Incorporate a centralized small language model (SLM) API for asynchronous communication with edge devices.

D. Incorporate a centralized large language model (LLM) API for asynchronous communication with edge devices.
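
To make the options concrete, below is a minimal sketch of what on-device deployment (as described in options A and B) looks like in practice: a quantized small language model loaded and run entirely on the edge device, so inference never involves a network round trip. The library (llama-cpp-python), the model file name, and the prompt are illustrative assumptions, not part of the exam question.

    # Minimal sketch: on-device inference with a quantized small language model.
    # llama-cpp-python, the model file, and the parameters here are assumptions
    # chosen for illustration; any comparable on-device runtime would serve.
    from llama_cpp import Llama

    # Load a small, quantized model stored locally on the edge device.
    llm = Llama(model_path="tinyllama-1.1b-q4.gguf", n_ctx=512, n_threads=4)

    # Inference runs entirely on local hardware; latency is bounded by the
    # device's compute, with no network call to a centralized API.
    result = llm(
        "Summarize the sensor reading: temperature 72F, humidity 40%.",
        max_tokens=64,
    )
    print(result["choices"][0]["text"])

By contrast, the centralized API approaches in options C and D add a network round trip to every request, and a large language model generally needs more compute and memory than typical edge hardware provides, so both model size and placement affect latency.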

