Zhiyuan Tang 2016-04-18
Latest revision as of 10:24, 24 April 2016
Last week:
1. enhancing the joint model with SWBD data, focusing on speech recognition, shows improvement [1];
2. a problem remains: when WSJ was downsampled to 8 kHz, the advantage of joint training disappeared, at least for speech recognition. (Comment added later: WSJ was downsampled to 8 kHz by mistake, so the pipeline needs to be re-run; conclusion 1 still stands. A resampling sketch is given below.)
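For reference, here is a minimal sketch of how 16 kHz WSJ wav files could be downsampled to 8 kHz consistently before joint training with SWBD. This is not the actual CSLT pipeline; the directory paths and the choice of soundfile plus scipy are illustrative assumptions.

# Hypothetical sketch: downsample 16 kHz WSJ wav files to 8 kHz so they
# match the SWBD sampling rate before joint training. Paths and library
# choices (soundfile + scipy) are assumptions, not the actual recipe.
from pathlib import Path

import soundfile as sf
from scipy.signal import resample_poly

SRC_DIR = Path("data/wsj_16k")   # hypothetical input directory
DST_DIR = Path("data/wsj_8k")    # hypothetical output directory
DST_DIR.mkdir(parents=True, exist_ok=True)

for wav_path in SRC_DIR.glob("*.wav"):
    audio, sr = sf.read(wav_path)
    if sr != 16000:
        raise ValueError(f"unexpected sampling rate {sr} in {wav_path}")
    # Polyphase resampling from 16 kHz to 8 kHz (factor 1/2),
    # with resample_poly's built-in anti-aliasing filter.
    audio_8k = resample_poly(audio, up=1, down=2)
    sf.write(DST_DIR / wav_path.name, audio_8k, 8000)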
This week:
1. find the reason why joint training failed on 8 kHz WSJ;
2. more experiments for refining the joint model, such as enhancing the enhanced model again with speaker data (a joint-training sketch is given after this list);
3. following ICASSP 16.
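As context for items 1 and 2, the following is a minimal sketch of the kind of shared-hidden-layer joint (multi-task) model the report refers to, with one output head for speech-recognition targets and one for speaker labels, and with the model trained alternately on the two objectives. The framework (PyTorch), the layer sizes, and the dummy data are illustrative assumptions, not the actual CSLT recipe.

# Hypothetical sketch of joint (multi-task) training: shared hidden layers
# with an ASR head and a speaker head. Sizes, data, and the alternating
# update scheme are assumptions for illustration only.
import torch
import torch.nn as nn

FEAT_DIM, HIDDEN, N_PHONE_STATES, N_SPEAKERS = 40, 512, 2000, 300  # assumed sizes

class JointModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared feature layers used by both tasks.
        self.shared = nn.Sequential(
            nn.Linear(FEAT_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
        )
        self.asr_head = nn.Linear(HIDDEN, N_PHONE_STATES)  # speech recognition targets
        self.spk_head = nn.Linear(HIDDEN, N_SPEAKERS)      # speaker labels

    def forward(self, x):
        h = self.shared(x)
        return self.asr_head(h), self.spk_head(h)

model = JointModel()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
ce = nn.CrossEntropyLoss()

# Dummy batches standing in for WSJ/SWBD frames (ASR) and speaker data.
asr_x, asr_y = torch.randn(32, FEAT_DIM), torch.randint(0, N_PHONE_STATES, (32,))
spk_x, spk_y = torch.randn(32, FEAT_DIM), torch.randint(0, N_SPEAKERS, (32,))

for step in range(10):
    # Alternate objectives: update with ASR data, then enhance again
    # with speaker data, as item 2 above describes.
    for x, y, head_idx in [(asr_x, asr_y, 0), (spk_x, spk_y, 1)]:
        opt.zero_grad()
        outputs = model(x)
        loss = ce(outputs[head_idx], y)
        loss.backward()
        opt.step()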