The asymptotic classification risk of nearest neighbour procedures is well understood in the case of i.i.d. training sequences. Here we generalize these results to a class of dependent models that includes hidden Markov models. When the observed patterns have Lebesgue densities, the asymptotic risk has the same form as in the i.i.d. case. For discrete distributions we show that the asymptotic risk depends on the rule used to break ties between equal distances.
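To make the role of tie-breaking concrete, here is a minimal sketch (not from the paper, and not its notation) of a 1-nearest-neighbour classifier on one-dimensional observations; the function name, the two tie rules ("random" and "first"), and the toy data are illustrative assumptions. With atomic (discrete) distributions, several training points can coincide with the query, so the chosen tie rule can change the predicted label.

```python
import random

def one_nn_classify(x, train_x, train_y, tie_rule="random", rng=None):
    """Classify x by its nearest training pattern under absolute distance.

    tie_rule: "random" -- pick uniformly among all nearest neighbours
              "first"  -- pick the nearest neighbour with the smallest index
    """
    rng = rng or random.Random(0)
    dists = [abs(x - xi) for xi in train_x]          # distances to training patterns
    d_min = min(dists)
    ties = [i for i, d in enumerate(dists) if d == d_min]  # all equally near points
    idx = rng.choice(ties) if tie_rule == "random" else ties[0]
    return train_y[idx]

# Discrete example: two training points sit exactly at the query x = 0,
# carrying different labels, so the tie rule decides the output.
train_x = [0, 0, 1, 2]
train_y = [0, 1, 1, 0]
print(one_nn_classify(0, train_x, train_y, tie_rule="first"))   # -> 0
print(one_nn_classify(0, train_x, train_y, tie_rule="random"))  # -> 0 or 1
```

In the density case such exact ties occur with probability zero, which is consistent with the asymptotic risk there not depending on the tie-breaking rule.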