When the dimension of the feature vectors and the number of mixture components in the Gaussian codebooks are increased for better recognition accuracy, the density approximation made for each HMM state with each feature vector in the observation sequence becomes a bottleneck for online operation. As mentioned earlier, the computations needed for the density approximation can be reduced considerably if a small subset of the large codebook, in which most of the K best-matching density components are expected to lie, can be extracted. This can be done by clustering the Gaussians after the HMM training [Bocchieri, 1993], but with the SOM the clustering is conveniently incorporated into the codebook training itself. The search methods suggested here that exploit the SOM structure include the topological and the tree K-best search. These methods also exploit the correlation between successive feature vectors. There are, naturally, other ways to speed up the density computation as well, for example, using a simple coarse approximation for those states that are unlikely to affect the Viterbi search result much. Methods for determining such codebooks and their approximations are studied experimentally in Publication 5 and in [Lopez-Gonzalo and Hernandez-Gomez, 1993, Komori et al., 1995].
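The topological K-best idea can be sketched as follows. Because successive feature vectors are correlated, the best-matching density components for the current frame tend to lie near the previous frame's best-matching unit on the SOM grid, so only that neighborhood needs to be evaluated. The function names, the rectangular grid layout, and the diagonal-covariance form below are illustrative assumptions, not the exact algorithm of the publications cited above:

```python
import numpy as np

def log_gauss(x, means, log_dets, inv_vars):
    # Diagonal-covariance Gaussian log-densities (up to a constant)
    # for all candidate units at once.
    d = x - means
    return -0.5 * (log_dets + np.sum(d * d * inv_vars, axis=1))

def topological_k_best(x, prev_bmu, means, log_dets, inv_vars,
                       grid_shape, radius=2, k=3):
    """Evaluate only the units in the SOM neighborhood of the previous
    frame's best-matching unit and return the K best among them."""
    rows, cols = grid_shape
    r0, c0 = divmod(prev_bmu, cols)
    # Candidate units: a (2*radius+1) x (2*radius+1) patch of the grid,
    # clipped at the map borders.
    cand = np.array([r * cols + c
                     for r in range(max(0, r0 - radius),
                                    min(rows, r0 + radius + 1))
                     for c in range(max(0, c0 - radius),
                                    min(cols, c0 + radius + 1))])
    scores = log_gauss(x, means[cand], log_dets[cand], inv_vars[cand])
    order = np.argsort(scores)[::-1][:k]
    return cand[order], scores[order]  # indices and log-densities of K best
```

With a 6 x 6 grid and radius 2, at most 25 of the 36 densities are evaluated per frame; the saving grows with the codebook size, while the K-best result agrees with the full search whenever the true best units stay inside the neighborhood.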