Huawei HCIP-AI EI Developer V2.5 Exam H13-321_V2.5 Question #16 Topic 2 Discussion
Question #: 16
Topic #: 2
In 2017, the Google machine translation team proposed the Transformer in their paper "Attention Is All You Need". The Transformer consists of an encoder and a(n) --------. (Fill in the blank.)
Answer: Decoder.
Encoder: encodes the input sequence into contextualized representations.
Decoder: uses the encoder output, plus self-attention over previously generated tokens, to produce the target sequence.
Exact Extract from HCIP-AI EI Developer V2.5:
"The Transformer consists of an encoder-decoder structure, with self-attention mechanisms in both components for sequence-to-sequence learning."
[Reference: HCIP-AI EI Developer V2.5 Official Study Guide – Chapter: Transformer Overview]
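The encoder-decoder split is easy to see in code. Below is a minimal sketch (not from the study guide) using PyTorch's nn.Transformer with the model width, head count, and layer counts from the 2017 paper; the dummy tensors and shapes are purely illustrative assumptions.

```python
import torch
import torch.nn as nn

# Encoder-decoder Transformer with the dimensions used in
# "Attention Is All You Need" (d_model=512, 8 heads, 6+6 layers).
model = nn.Transformer(
    d_model=512,
    nhead=8,
    num_encoder_layers=6,
    num_decoder_layers=6,
)

# Dummy sequences, shape (sequence length, batch size, d_model):
src = torch.rand(10, 2, 512)  # input sequence fed to the encoder
tgt = torch.rand(7, 2, 512)   # shifted target sequence fed to the decoder

# Causal mask so the decoder's self-attention only sees previously
# generated tokens.
tgt_mask = model.generate_square_subsequent_mask(7)

# The encoder contextualizes `src`; the decoder combines masked
# self-attention over `tgt` with cross-attention to the encoder output.
out = model(src, tgt, tgt_mask=tgt_mask)
print(out.shape)  # torch.Size([7, 2, 512])
```

The key point the question tests is visible in the constructor: the model is built from an encoder stack and a decoder stack, both using self-attention.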