2014-07-25

Resource Building

Leftover questions

  • Investigating LOUDS FST (see the encoding sketch after this list).
  • CLG embedded decoder plus online compiler.
  • DNN-GMM co-training
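
For reference on the LOUDS item: LOUDS (Level-Order Unary Degree Sequence) encodes a tree in level order, writing one '1' per child plus a terminating '0' per node, and navigates with rank/select over the bit string. A minimal Python sketch follows; the naive rank/select scans stand in for the O(1) succinct structures a real implementation would use, and how this extends to FST state graphs is exactly the open question.

 from collections import deque

 def louds_encode(children, root):
     """LOUDS bit string: a virtual super-root '10', then, per node in
     level order, one '1' per child followed by a terminating '0'."""
     bits, queue = ["1", "0"], deque([root])
     while queue:
         node = queue.popleft()
         kids = children.get(node, [])
         bits += ["1"] * len(kids) + ["0"]
         queue.extend(kids)
     return "".join(bits)

 def rank(bits, b, x):    # how many bits equal to b occur in bits[0:x]
     return bits[:x].count(b)

 def select(bits, b, k):  # 1-indexed position of the k-th bit equal to b
     pos = -1
     for _ in range(k):
         pos = bits.index(b, pos + 1)
     return pos + 1

 def first_child(bits, x):  # x = 1-indexed position of a node's '1'
     y = select(bits, "0", rank(bits, "1", x)) + 1
     return y if y <= len(bits) and bits[y - 1] == "1" else None  # None = leaf

 def parent(bits, x):
     return select(bits, "1", rank(bits, "0", x))

 tree = {"r": ["a", "b"], "a": ["c"]}
 lbs = louds_encode(tree, "r")   # -> "101101000"
 assert first_child(lbs, 1) == 3 and parent(lbs, 3) == 1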

AM development

Sparse DNN

  • On WSJ, sparse DNN performs slightly better than the non-sparse baseline when the network is large (see the pruning sketch after this list)
  • Pre-training does help DNN training (for 4, 5, and 6 hidden layers)
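
The notes do not record the sparsification recipe; below is a minimal sketch of one standard reading, magnitude-based pruning: zero the smallest weights of a layer and keep a mask so pruned connections stay at zero during further tuning. The sparsity level and layer size are illustrative assumptions.

 import numpy as np

 def prune_by_magnitude(W, sparsity=0.8):
     """Zero the smallest-|w| entries so `sparsity` of W becomes zero;
     the returned 0/1 mask keeps pruned weights at zero during tuning."""
     thresh = np.quantile(np.abs(W), sparsity)
     mask = (np.abs(W) > thresh).astype(W.dtype)
     return W * mask, mask

 rng = np.random.default_rng(0)
 W = rng.normal(size=(1024, 1024))          # one hidden layer's weights
 W, mask = prune_by_magnitude(W, sparsity=0.8)
 # after every SGD update during retraining: W *= mask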

Noise training

  • Journal paper writing ongoing


Multilingual ASR

  • With multilingual training, performance is largely retained on most known test sets;
  • However, for unknown accents, performance is not stable

Drop out & convolutional network

  • Zhiyong will study dropout (see the sketch after this list)
  • Zhiyong & Mengyuan will study convolutional networks

Denoising & Farfield ASR

  • Use a reverberation tool to generate a new set of datasets (see the simulation sketch after this list)
  • xEnt results (WER%, eval92):
               before adaptation   after adaptation
    clean            -                   -
    near           19.25               12.94
    far            59.38               40.46
  • Lasso-based reverberation cancellation produced initial estimates of the clean data (sketch below)
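
Neither the reverberation tool nor the Lasso formulation is spelled out in the notes; the sketch below shows one plausible reading of both steps: simulate far-field speech by convolving a clean signal with a room impulse response, then estimate the clean signal by L1-regularized deconvolution. The toy RIR, the sparse source, and the alpha value are all illustrative assumptions.

 import numpy as np
 from scipy.linalg import toeplitz
 from scipy.signal import fftconvolve
 from sklearn.linear_model import Lasso

 rng = np.random.default_rng(0)
 n, L = 200, 32                      # toy lengths; real audio is far longer
 x = np.zeros(n)
 x[rng.choice(n, 10, replace=False)] = rng.normal(size=10)  # sparse "clean" source
 h = np.exp(-0.2 * np.arange(L)) * rng.normal(size=L)       # toy room impulse response

 # step 1, data simulation: far-field signal = clean * RIR + noise
 y = fftconvolve(x, h) + 0.01 * rng.normal(size=n + L - 1)

 # step 2, Lasso cancellation: y ~ H x with H the (Toeplitz) convolution matrix;
 # solve min ||y - Hx||^2 + alpha*||x||_1 for a sparse estimate of the clean signal
 H = toeplitz(np.concatenate([h, np.zeros(n - 1)]), np.zeros(n))
 x_hat = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000).fit(H, y).coef_
 print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))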

VAD

  • Waiting for engineering work

Scoring

  • Refined the acoustic model with the AMIDA database; the problem was solved by training on both WSJ and AMIDA.

Embedded decoder

  • Chatting LM released
  • Training two smaller networks (500x4+600 and 400x4+500): ongoing
  • Need to upload the new client code to git
  • Build a new graph with the MPE3 AM and the chatting LM.


LM development

Domain specific LM construction

TAG LM

  • TAG is still problematic with the all-to-number tag
  • Check the randomness of the number tag (see the tagging sketch after this list).
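
For reference on the number-tag issue, a minimal sketch of the usual class-LM treatment: every number token is mapped to a single <NUM> tag before n-gram training, with the surface forms kept aside for restoration at decode time. The tag name, the regex, and whitespace tokenization are assumptions.

 import re

 NUM_RE = re.compile(r"^\d+(\.\d+)?$")

 def tag_numbers(tokens):
     """Collapse all number tokens onto one <NUM> class token."""
     tagged, surfaces = [], []
     for tok in tokens:
         if NUM_RE.match(tok):
             tagged.append("<NUM>")
             surfaces.append(tok)   # surface form kept for later restoration
         else:
             tagged.append(tok)
     return tagged, surfaces

 print(tag_numbers("room 302 costs 45.5 yuan".split()))
 # (['room', '<NUM>', 'costs', '<NUM>', 'yuan'], ['302', '45.5'])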

Chatting LM

  • Building chatting lexicon
  • First version released (80k lexicon)

Word2Vector

W2V based doc classification

  • Initial results with the variational Bayesian GMM obtained; performance is not as good as the conventional GMM (see the comparison sketch below).
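
For reference, a minimal sketch of the comparison, assuming each document is represented by the mean of its word2vec vectors and classification picks the class whose mixture gives the highest log-likelihood; scikit-learn's BayesianGaussianMixture stands in for the variational Bayesian GMM, and the toy vectors and sizes are assumptions.

 import numpy as np
 from sklearn.mixture import BayesianGaussianMixture, GaussianMixture

 def train_per_class(X, y, vb=True, k=4):
     """Fit one mixture model per class on that class's document vectors."""
     M = BayesianGaussianMixture if vb else GaussianMixture
     return {c: M(n_components=k, random_state=0).fit(X[y == c])
             for c in np.unique(y)}

 def classify(models, X):
     """Assign each document to the class with the highest log-likelihood."""
     classes = sorted(models)
     scores = np.column_stack([models[c].score_samples(X) for c in classes])
     return np.array(classes)[scores.argmax(axis=1)]

 # toy document vectors standing in for per-document word2vec means
 rng = np.random.default_rng(0)
 X = np.vstack([rng.normal(0, 1, (50, 20)), rng.normal(3, 1, (50, 20))])
 y = np.repeat([0, 1], 50)
 print((classify(train_per_class(X, y, vb=True), X) == y).mean())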


Semantic word tree

  • Version v2.0 released (filtered with the query log)
  • Please deliver to /nfs/disk/perm/data/corpora/semanticTree (Xingchao)
  • Version v3.0 ongoing: further refinement with the Baidu Baike hierarchy


NN LM

  • Character-based NNLM (6,700 characters, 7-gram), training on 500M data done (see the model sketch after this list).
  • Inconsistent WER patterns were found on the Tencent test sets;
  • probably need to use another test set for investigation.
  • Investigate MS RNN LM training
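
A minimal sketch of the kind of model described, a feed-forward 7-gram character NNLM: the six previous characters are embedded, concatenated, passed through one hidden layer, and projected to a softmax over the 6,700-character vocabulary. PyTorch and the layer sizes are illustrative assumptions, not the toolkit actually used.

 import torch
 import torch.nn as nn

 class CharNGramLM(nn.Module):
     """Feed-forward 7-gram LM: predict char t from chars t-6 .. t-1."""
     def __init__(self, vocab=6700, context=6, emb=64, hidden=512):
         super().__init__()
         self.embed = nn.Embedding(vocab, emb)
         self.hidden = nn.Linear(context * emb, hidden)
         self.out = nn.Linear(hidden, vocab)

     def forward(self, ctx):                  # ctx: (batch, 6) char ids
         e = self.embed(ctx).flatten(1)       # concatenate the 6 embeddings
         return self.out(torch.tanh(self.hidden(e)))  # logits over vocab

 model = CharNGramLM()
 ctx = torch.randint(0, 6700, (32, 6))        # toy batch of 6-char contexts
 loss = nn.functional.cross_entropy(model(ctx), torch.randint(0, 6700, (32,)))
 loss.backward()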

Speaker ID

  • Reading materials
  • Preparing to run SRE08


Translation

  • Collecting more data (Xinhua parallel text, Bible, named entities) for the second version
  • Working on text alignment (see the alignment sketch after this list)
  • Will release v2.0 today
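
"Text alignment" here presumably means sentence-aligning the parallel data; below is a minimal sketch in the spirit of Gale & Church: dynamic programming over 1-1 matches and skips with a length-ratio cost. Real aligners add 2-1/1-2 merges, a probabilistic cost, and normalization for the expected source/target length ratio; those simplifications and the gap penalty are assumptions.

 def align(src, tgt, gap=4.0):
     """Length-based sentence alignment by DP: pairing costs |len ratio - 1|,
     skipping a sentence costs `gap`; returns (i, j) pairs, None = skipped."""
     n, m, INF = len(src), len(tgt), float("inf")
     cost = [[INF] * (m + 1) for _ in range(n + 1)]
     back = [[None] * (m + 1) for _ in range(n + 1)]
     cost[0][0] = 0.0
     for i in range(n + 1):
         for j in range(m + 1):
             if cost[i][j] == INF:
                 continue
             if i < n and j < m:   # 1-1 match
                 c = cost[i][j] + abs(len(src[i]) / max(len(tgt[j]), 1) - 1.0)
                 if c < cost[i + 1][j + 1]:
                     cost[i + 1][j + 1], back[i + 1][j + 1] = c, (i, j, "1-1")
             if i < n and cost[i][j] + gap < cost[i + 1][j]:   # skip source
                 cost[i + 1][j], back[i + 1][j] = cost[i][j] + gap, (i, j, "1-0")
             if j < m and cost[i][j] + gap < cost[i][j + 1]:   # skip target
                 cost[i][j + 1], back[i][j + 1] = cost[i][j] + gap, (i, j, "0-1")
     pairs, i, j = [], n, m
     while (i, j) != (0, 0):
         pi, pj, kind = back[i][j]
         pairs.append({"1-1": (pi, pj), "1-0": (pi, None), "0-1": (None, pj)}[kind])
         i, j = pi, pj
     return pairs[::-1]

 src = ["In the beginning God created the heaven and the earth.",
        "And the earth was without form."]
 tgt = ["起初神创造天地。", "地是空虚混沌。"]
 print(align(src, tgt))   # [(0, 0), (1, 1)]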