Thursday, August 30, 2018

Using SRILM server in sphinx4

Recently I've added support for the SRILM language model server to sphinx4, so it's now possible to use much bigger models while keeping the same memory requirements, both during the search and, more importantly, during lattice rescoring. Lattice rescoring is still in progress, so here is the idea of how to use a network language model during search.

SRILM has a number of advantages: it implements a few interesting algorithms, and even for simple tasks like trigram language model creation it's way better than cmuclmtk. At least model pruning is supported.
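
For example, to prune a model you can run something like the following (the threshold value here is just an illustration, tune it for your data):

ngram -lm your.lm -prune 1e-8 -write-lm pruned.lm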

To start, first dump the language model vocabulary, since it's required by the linguist:

ngram -lm your.lm -write-vocab your.vocab

Then start the server with:

ngram -server-port 5000 -lm your.lm
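
To check that the server answers queries, you can point another SRILM client at it (the port@host syntax follows SRILM's -use-server option; test.txt is any plain text file):

ngram -use-server 5000@localhost -ppl test.txt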

Configure the recognizer:

<component name="rescoringModel"
           type="edu.cmu.sphinx.linguist.language.ngram.NetworkLanguageModel">
    <property name="port" value="5000"/>
    <property name="location" value="your.vocab"/>
    <property name="logMath" value="logMath"/>
</component>
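
Once the component is declared you can drive the recognizer from Java as usual. Here is a minimal sketch; the file name config.xml and the component names recognizer and audioFileDataSource are assumptions that follow the standard sphinx4 demo configs:

import java.net.URL;

import edu.cmu.sphinx.frontend.util.AudioFileDataSource;
import edu.cmu.sphinx.recognizer.Recognizer;
import edu.cmu.sphinx.result.Result;
import edu.cmu.sphinx.util.props.ConfigurationManager;

public class NetworkLmDemo {
    public static void main(String[] args) throws Exception {
        // Load the XML configuration that declares the NetworkLanguageModel;
        // "config.xml" and the component names follow the demo configs
        ConfigurationManager cm = new ConfigurationManager("config.xml");

        // Allocating the recognizer also allocates the language model,
        // which opens the connection to the SRILM server
        Recognizer recognizer = (Recognizer) cm.lookup("recognizer");
        recognizer.allocate();

        // Feed an audio file to the front end
        AudioFileDataSource dataSource =
                (AudioFileDataSource) cm.lookup("audioFileDataSource");
        dataSource.setAudioFile(new URL("file:test.wav"), null);

        // Decode and print the best hypothesis
        Result result = recognizer.recognize();
        if (result != null) {
            System.out.println(result.getBestResultNoFiller());
        }
        recognizer.deallocate();
    }
}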

And start the lattice demo. You'll see the result soon.

Adjust the cache size according to the size of your model. It shouldn't be large for a simple search; typically a cache of no more than 100000 entries is enough.
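
If the cache size is exposed as a component property, it should be settable in the same XML block. The property name below is hypothetical, check the NetworkLanguageModel source for the actual one:

<component name="rescoringModel"
           type="edu.cmu.sphinx.linguist.language.ngram.NetworkLanguageModel">
    ...
    <!-- hypothetical property name, verify against the class -->
    <property name="cacheSize" value="100000"/>
</component>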

Still, using a large n-gram model directly in the search is not reasonable because of the large number of word trigrams that have to be tracked. It's more efficient to use a trigram or even a bigram model first and then make a second recognizer pass, rescoring the lattice with the bigger model. More details on rescoring in the next posts.
