2013-09-27
Data sharing
- LM count files still undelivered!
DNN progress
Sparse DNN
- Optimal Brain Damage-based sparsity is ongoing. Preparing the algorithm.
- An interesting investigation is to drop out 50% of the weights after each iteration, and then retrain without making the pruning sticky (sketched below).
Report on [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/文件:Chart1.png]
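A minimal sketch of the iterative pruning idea above, assuming plain NumPy weight matrices; the training step is a hypothetical placeholder, and only the "drop 50% by magnitude after each iteration, non-sticky" part is taken from the note:

 import numpy as np
 
 def train_one_iteration(W):
     # Hypothetical stand-in for one iteration of DNN training.
     return W + 0.01 * np.random.randn(*W.shape)
 
 def prune_smallest(W, fraction=0.5):
     # Zero out the given fraction of weights with the smallest magnitude.
     threshold = np.quantile(np.abs(W), fraction)
     return np.where(np.abs(W) >= threshold, W, 0.0)
 
 W = 0.01 * np.random.randn(256, 256)
 for it in range(5):
     W = train_one_iteration(W)   # update the weights
     W = prune_smallest(W, 0.5)   # drop 50% of the weights after the iteration
     # A "sticky" variant would also hold the pruned positions at zero during
     # the next iteration; here the zeroed weights are free to grow back.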
FBank features
1000 hour testing: click
Tencent exps
N/A
Noisy training
Sample noise segments randomly for each utterance. Use a Dirichlet distribution to sample the proportions of the various noise types, and a Gaussian to sample the SNR.
White noise and car noise each take 1/3 in the base distribution. The performance report is here:
The conclusions are:
1. By sampling noises, most of the noise patterns can be learned efficiently, which improves performance on noisy test data.
2. By sampling noises with high variance, performance on clean speech is largely retained.
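A minimal sketch of the sampling scheme described above, assuming NumPy; the noise-type inventory, the Dirichlet concentration, and the SNR mean/variance are illustrative assumptions, not the values used in the experiment:

 import numpy as np
 
 rng = np.random.default_rng(0)
 
 noise_types = ["white", "car", "other"]   # illustrative inventory (white and car are 1/3 each in the base)
 base = np.array([1/3, 1/3, 1/3])          # base distribution over noise types
 
 def sample_noise_condition(concentration=10.0, snr_mean=15.0, snr_std=10.0):
     # Dirichlet around the base distribution -> per-utterance noise-type proportions.
     props = rng.dirichlet(concentration * base)
     # Gaussian -> SNR (dB) at which the sampled noise is mixed into the utterance.
     snr_db = rng.normal(snr_mean, snr_std)
     return dict(zip(noise_types, props)), snr_db
 
 props, snr_db = sample_noise_condition()
 print(props, snr_db)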
Continuous LM
1. SogouQ n-gram building: 500M text data, 110k words. Two tests:
(1) Using the Tencent online1 and online2 transcriptions: online1 PPL 1651, online2 PPL 1512.
(2) Using the 70k SogouQ test set: PPL 33.
This means the SogouQ text is significantly different from the Tencent online1 and online2 sets, due to the highly different domains.
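For reference, the PPL figures are standard per-word perplexities; a minimal sketch of the computation, assuming the per-word probabilities assigned by the LM are available (the numbers below are made up for illustration):

 import math
 
 def perplexity(word_probs):
     # Per-word perplexity from the probabilities the LM assigns to each test word.
     log_probs = [math.log(p) for p in word_probs]
     return math.exp(-sum(log_probs) / len(log_probs))
 
 print(perplexity([0.1, 0.02, 0.05, 0.2]))   # made-up probabilities for a 4-word test set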
2. NN LM
Using 11k words as input and a 192-unit hidden layer. 500M text data from QA data. Tested with the online2 transcription.
(1) Take words 1-1024 from the NN LM, others predicted by the 4-gram. n-gram baseline: 402.37; NN+ngram: 122.54.
(2) Take words 1-2048 from the NN LM, others predicted by the 4-gram. n-gram baseline: 402.37; NN+ngram: 127.59.
(3) Take words 1024-2048 from the NN LM, others predicted by the 4-gram. n-gram baseline: 402.37; NN+ngram: 118.92.
Conclusion: the NN LM is much better than the n-gram, due to its smoothing capability.
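A minimal sketch of the word-range combination used in the tests above: words inside the NN LM range are scored by the NN LM, and all other words fall back to the 4-gram. The names nnlm_prob and ngram_prob are hypothetical placeholders, and the renormalisation between the two models is omitted:

 def hybrid_prob(word, history, nn_words, nnlm_prob, ngram_prob):
     # nn_words: the set of words covered by the NN LM (e.g. ranks 1-1024).
     # nnlm_prob / ngram_prob: hypothetical callables returning p(word | history).
     if word in nn_words:
         # In practice the NN LM mass over nn_words is rescaled so that the
         # combined model still sums to one; that step is omitted here.
         return nnlm_prob(word, history)
     # Everything outside the NN LM word range is predicted by the 4-gram.
     return ngram_prob(word, history)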