The Spark framework and the MapReduce framework are both distributed computing frameworks used in Hadoop clusters. Which of the following statements about the differences between the two frameworks are correct? (Choose all that apply.)
A.
Spark performs iterative computation mainly in memory: intermediate data is kept in memory, which gives it high computing efficiency
B.
MapReduce algorithms are diverse, and it supports combining multiple algorithms in one application
C.
Spark provides a rich set of operators and supports a variety of data transformation operations
D.
With MapReduce, every iteration of the computation must write its data to disk, and the next iteration must read it back from disk, which is inefficient
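The contrast described in options A and D can be sketched with a toy iteration in plain Python (this is an illustrative simulation, not actual Spark or MapReduce code): one loop keeps intermediate results in memory between iterations, while the other writes each intermediate result to disk and reads it back, which is the round-trip pattern MapReduce pays between chained jobs.

```python
import json
import os
import tempfile

def iterate_in_memory(data, steps):
    # Spark-style: intermediate results stay in memory between iterations.
    for _ in range(steps):
        data = [x * 2 for x in data]
    return data

def iterate_via_disk(data, steps):
    # MapReduce-style: each iteration persists its input to disk and the
    # computation reads it back before producing the next result.
    for _ in range(steps):
        fd, path = tempfile.mkstemp(suffix=".json")
        with os.fdopen(fd, "w") as f:
            json.dump(data, f)      # write intermediate result to disk
        with open(path) as f:
            data = json.load(f)     # read it back for the next round
        os.remove(path)
        data = [x * 2 for x in data]
    return data

# Both produce the same result; the disk version simply pays extra I/O
# on every iteration, which is the inefficiency option D refers to.
print(iterate_in_memory([1, 2, 3], 3))  # [8, 16, 24]
print(iterate_via_disk([1, 2, 3], 3))   # [8, 16, 24]
```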
Chosen Answer: ACD