What do first-order and second-order Markov processes have in common concerning next word prediction?
Both use WordNet to model the probability of the next word
Both are unsupervised methods
Both provide the foundation to build a trigram language model
Neither makes assumptions about the probability of the next word
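For reference, a minimal Python sketch of first- and second-order Markov next-word prediction; the toy corpus and the `build_model` / `predict_next` helpers are illustrative assumptions, not material from the course:

```python
from collections import Counter, defaultdict

# Toy corpus; in practice this would be a much larger text collection.
corpus = "the cat sat on the mat the cat ate the fish".split()

def build_model(tokens, order):
    """Count next-word frequencies conditioned on the previous `order` words."""
    counts = defaultdict(Counter)
    for i in range(len(tokens) - order):
        context = tuple(tokens[i:i + order])
        counts[context][tokens[i + order]] += 1
    return counts

def predict_next(model, context):
    """Return the most frequent next word for a given context, if seen."""
    candidates = model.get(tuple(context))
    return candidates.most_common(1)[0][0] if candidates else None

# First-order Markov process: the next word depends only on the previous word.
bigram_model = build_model(corpus, order=1)
print(predict_next(bigram_model, ["the"]))          # 'cat'

# Second-order Markov process: the next word depends on the previous two words.
trigram_model = build_model(corpus, order=2)
print(predict_next(trigram_model, ["the", "cat"]))  # 'sat'
```

The only difference between the two models is the length of the conditioning context used to estimate the next-word distribution.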