Difference between revisions of "2013-09-13"
From cslt Wiki
Latest revision as of 04:06, 13 September 2013 (Fri)
Data sharing
- LM count files still undelivered!
DNN progress
Sparse DNN
- Cut 50% of the weights, then continued training with learning rate 0.0025. Pruning continued until only 1.6% of the weights remain.
- The test results show little gain under noise.
- 1/8 sparsity shows no evident performance reduction, as we observed before, and is consistent with the results reported by MS.
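The pruning schedule above can be sketched as iterative magnitude-based pruning. This is a minimal numpy illustration, not the lab's actual DNN training code; the layer shapes and the `prune_by_magnitude` helper are assumptions for illustration.

```python
import numpy as np

def prune_by_magnitude(weights, keep_fraction):
    """Zero out all but the largest-magnitude weights.

    Hypothetical helper illustrating the pruning step; the real
    experiments interleave this with retraining of the DNN.
    """
    flat = np.abs(weights).ravel()
    k = max(1, int(len(flat) * keep_fraction))
    # Threshold = magnitude of the k-th largest weight.
    threshold = np.partition(flat, -k)[-k]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

# Iterative schedule: cut 50% first, then keep pruning down to 1.6%.
rng = np.random.default_rng(0)
W = rng.standard_normal((429, 1024))  # assumed input-layer shape
for keep in [0.5, 0.25, 0.125, 0.016]:
    W, mask = prune_by_magnitude(W, keep)
    # ...retrain here with a small learning rate (e.g. 0.0025) so the
    # surviving weights can compensate for the removed ones...
```

Retraining between pruning rounds is what lets the network recover; pruning straight to 1.6% in one shot would typically hurt much more.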
FBank features
- CMN shows a similar impact on MFCC & FBank. Since MFCC sums over various random channels, the mean and covariance of its dimensions are less random. This has two possible effects: on one hand, the dimensions are relatively stable, so CMVN does not contribute much; on the other hand, the estimation of mean and variance is more accurate, so CMVN leads to more reliable results. This means CMVN yields unpredictable performance improvement for MFCC & FBank, depending on the data set.
- Choose various FBank dimensions, keeping the LDA output dimension at 100. FB30 seems the best.
- Choose FBank 40, test various LDA output dimension. The results show LDA is still helpful, and dimension 200 is sufficient.
- We need to investigate non-linear discriminative approaches that are simple but lead to less information loss.
- We can also test a simple same-dimension DCT. If the performance is still worse than FB, we confirm that the problem is due to noisy channel accumulation.
- Need to investigate Gammatone filter banks. The idea is the same as FB: keep as much information as possible. It may also be possible to combine FB and GFB to pursue better performance.
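The same-dimension DCT test can be sketched as follows. The frame count, the 40-dim FBank setting, and the synthetic input are assumptions for illustration; the point is that a full-dimension DCT is orthogonal (lossless), so any remaining gap versus raw FBank cannot come from truncation.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (a plain numpy sketch, not an FFT)."""
    k = np.arange(n)[:, None]   # output coefficient index
    t = np.arange(n)[None, :]   # input channel index
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (t + 0.5) * k / n)
    C[0, :] /= np.sqrt(2.0)     # first row scaled for orthonormality
    return C

# Hypothetical 40-dim log-FBank frames (frames x banks).
rng = np.random.default_rng(0)
log_fbank = rng.standard_normal((100, 40))
C = dct_matrix(40)

# Standard MFCC keeps only the first ~13 coefficients (lossy truncation).
mfcc = (log_fbank @ C.T)[:, :13]

# Same-dimension DCT: keep all 40 coefficients. The transform loses no
# information; only the cross-channel summation (which accumulates
# channel noise) remains.
full_dct = log_fbank @ C.T
assert np.allclose(full_dct @ C, log_fbank)  # perfectly invertible
```

If the same-dimension DCT still underperforms FBank, the culprit is the cross-channel summation itself, which is exactly the noisy-channel-accumulation hypothesis above.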
Tencent exps
N/A
DNN Confidence estimation
- Lattice-based confidence shows better performance with DNN than before.
- Accumulated DNN confidence is done. The confidence values are much more reasonable.
- Prepare MLP/DNN-based confidence integration.
Noisy training
- We trained a model with a random-noise approach, which samples half of the training data and adds 15 dB white noise. We hope this random-noise learning improves performance on noisy data while keeping the model's discriminative power on clean speech.
- The results are largely consistent with our expectation: performance on noisy data was greatly improved, while performance on clean speech is not hurt much.
- Next we will try noisy training that injects noise randomly online during training.
- Car-noise training shows limited impact from car noise.
DNN confidence
- The non-acoustic lattice-based confidence is done, as is the phone-based accumulated confidence (chart).
- It looks like acoustic information does not contribute much to the lattice-based confidence, which means we need a better way to combine the acoustic and linguistic sources with models, e.g., an MLP.
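The MLP-based combination could look like the following forward pass. The 2-dim input (one acoustic and one lattice/linguistic confidence per word), the hidden size, and the random weights are all assumptions; in practice the weights would be trained on words labelled correct/incorrect.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mlp_confidence(features, W1, b1, W2, b2):
    """One-hidden-layer MLP mapping per-word confidence features
    (e.g. [acoustic posterior, lattice confidence]) to a single
    combined confidence score in (0, 1). Weights are placeholders."""
    h = np.tanh(features @ W1 + b1)   # hidden layer
    return sigmoid(h @ W2 + b2)       # combined confidence

# Hypothetical per-word inputs: [acoustic confidence, lattice confidence].
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((2, 8)), np.zeros(8)
W2, b2 = rng.standard_normal(8), 0.0
scores = mlp_confidence(np.array([[0.9, 0.7], [0.2, 0.4]]), W1, b1, W2, b2)
```

Because the sigmoid output is bounded in (0, 1), the combined score can be read directly as an estimated probability that the word is correct, which makes downstream thresholding straightforward.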