Zhiyuan Tang 2016-04-18

Latest revision as of 10:24, 24 April 2016


Last week:

1. enhancing the joint model with SWBD, focusing on speech recognition, shows improvement [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=tangzy&step=view_request&cvssid=515];

2. a problem remains: when WSJ was reduced to 8 kHz, the advantage of joint training disappeared, at least for speech recognition. (Later comment: WSJ was reduced to 8 kHz by mistake, so the pipeline needs to be rerun; conclusion 1 still stands. See the sample-rate check sketch below.)
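
The pipeline itself is not shown in this report; the following is only a minimal sanity-check sketch in Python for catching an accidental resample like the one noted above, assuming the WSJ and SWBD audio sit on disk as wav files under hypothetical data/wsj and data/swbd directories.

 import glob
 import wave
 
 def sample_rates(pattern):
     """Collect the set of sampling rates found under a glob pattern."""
     rates = set()
     for path in glob.glob(pattern, recursive=True):
         with wave.open(path, "rb") as w:
             rates.add(w.getframerate())
     return rates
 
 if __name__ == "__main__":
     # WSJ is originally 16 kHz wideband speech; SWBD is 8 kHz telephone speech.
     # Any 8000 showing up on the WSJ side would flag the accidental downsample.
     print("WSJ rates :", sample_rates("data/wsj/**/*.wav"))
     print("SWBD rates:", sample_rates("data/swbd/**/*.wav"))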


This week:

1. find the reason why joint training failed on the 8 kHz WSJ data;

2. more experiments for refining the joint model, such as enhancing the enhanced model again with speaker data (a joint-model sketch follows this list);

3. following ICASSP 16.
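
The report does not describe the joint model's internals; as a rough illustration of the multi-task idea only (shared hidden layers with separate speech-recognition and speaker heads, combined by a weighted loss), here is a sketch in PyTorch. The layer sizes, target counts, the 0.5 loss weight and the use of PyTorch are assumptions, not the setup actually used in these experiments.

 import torch
 import torch.nn as nn
 
 class JointModel(nn.Module):
     """Shared hidden layers feeding two task-specific output heads (assumed architecture)."""
     def __init__(self, feat_dim=40, hidden=512, n_senones=4000, n_speakers=300):
         super().__init__()
         # Layers shared by both tasks.
         self.shared = nn.Sequential(
             nn.Linear(feat_dim, hidden), nn.ReLU(),
             nn.Linear(hidden, hidden), nn.ReLU(),
         )
         self.asr_head = nn.Linear(hidden, n_senones)    # speech-recognition targets
         self.spk_head = nn.Linear(hidden, n_speakers)   # speaker-identity targets
 
     def forward(self, x):
         h = self.shared(x)
         return self.asr_head(h), self.spk_head(h)
 
 def joint_loss(asr_logits, spk_logits, asr_tgt, spk_tgt, spk_weight=0.5):
     # "Focusing on speech recognition" is approximated here by down-weighting
     # the speaker term; the weighting actually used is not given in the report.
     ce = nn.CrossEntropyLoss()
     return ce(asr_logits, asr_tgt) + spk_weight * ce(spk_logits, spk_tgt)
 
 # Usage sketch: one training step on a random mini-batch of frame-level features.
 model = JointModel()
 feats = torch.randn(32, 40)
 asr_tgt = torch.randint(0, 4000, (32,))
 spk_tgt = torch.randint(0, 300, (32,))
 asr_out, spk_out = model(feats)
 loss = joint_loss(asr_out, spk_out, asr_tgt, spk_tgt)
 loss.backward()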