NLP Status Report 2017-5-22 (difference between revisions)
From cslt Wiki
Revision as of 04:27, 24 May 2017

Date | People | Last Week | This Week
---|---|---|---
2017/5/22 | Jiyuan Zhang | |

Aodong LI

Last week:
* bleu of baseline = 43.87
* The 2nd translator uses as training data the concat(Chinese, machine-translated English):
  * hidden_size, emb_size, lr = 500, 310, 0.001: bleu = 43.53 (best)
  * hidden_size, emb_size, lr = 700, 510, 0.001: bleu = 45.21 (best), but most results are under 43.1
  * hidden_size, emb_size, lr = 700, 510, 0.0005: bleu = 42.19 (best)
* Double-decoder model with joint loss (final loss = 1st decoder's loss + 2nd decoder's loss): bleu = 40.11 (best). The 1st decoder's output is generally better than the 2nd decoder's output.
* The training process of the double-decoder model '''without''' joint loss is problematic.

This week:
* Replace the teacher-forcing mechanism in the training process with a beam-search mechanism.
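The joint loss of the double-decoder model (final loss = 1st decoder's loss + 2nd decoder's loss) can be sketched as follows. This is only an illustrative toy, not the report's model code: the per-step distributions, vocabulary, and targets are made up, and real training would use a framework's cross-entropy over logits.

```python
import math

def cross_entropy(step_probs, target_ids):
    """Mean negative log-likelihood of the target tokens under a
    decoder's predicted distribution at each step (one dict per step)."""
    return -sum(math.log(p[t]) for p, t in zip(step_probs, target_ids)) / len(target_ids)

# Toy per-step output distributions for the two decoders over a 3-token vocab.
dec1_probs = [{0: 0.7, 1: 0.2, 2: 0.1}, {0: 0.1, 1: 0.8, 2: 0.1}]
dec2_probs = [{0: 0.5, 1: 0.3, 2: 0.2}, {0: 0.2, 1: 0.6, 2: 0.2}]
target = [0, 1]

loss1 = cross_entropy(dec1_probs, target)
loss2 = cross_entropy(dec2_probs, target)
joint_loss = loss1 + loss2  # final loss = 1st decoder's loss + 2nd decoder's loss
```

In this toy setup the 1st decoder assigns higher probability to the targets, so loss1 < loss2, mirroring the observation that the 1st decoder's output is generally better.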
Shiyue Zhang

Shipan Ren
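The planned switch from teacher forcing to beam search means decoding with a beam instead of always feeding the gold token. A minimal generic beam-search sketch, with a hypothetical toy next-token scorer standing in for the decoder (all names and probabilities are invented for illustration):

```python
import math

def beam_search(step_fn, start_token, beam_size, max_len, eos):
    """Generic beam search: step_fn(prefix) returns {token: probability}."""
    beams = [([start_token], 0.0)]  # (sequence, cumulative log-prob)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq[-1] == eos:          # finished hypotheses carry over unchanged
                candidates.append((seq, score))
                continue
            for tok, p in step_fn(seq).items():
                candidates.append((seq + [tok], score + math.log(p)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
        if all(seq[-1] == eos for seq, _ in beams):
            break
    return beams[0][0]

# Toy deterministic next-token distributions standing in for the decoder.
def toy_step(prefix):
    table = {
        ('<s>',):           {'a': 0.6, 'b': 0.4},
        ('<s>', 'a'):       {'</s>': 0.5, 'c': 0.5},
        ('<s>', 'b'):       {'c': 0.9, '</s>': 0.1},
        ('<s>', 'a', 'c'):  {'</s>': 1.0},
        ('<s>', 'b', 'c'):  {'</s>': 1.0},
    }
    return table[tuple(prefix)]

best = beam_search(toy_step, '<s>', beam_size=2, max_len=4, eos='</s>')
# With beam_size=2 the search recovers the higher-probability 'b c' path,
# which greedy (beam_size=1) decoding would miss after picking 'a' first.
```

Here the beam keeps the locally worse prefix 'b' alive long enough for its high-probability continuation to win, which is exactly the benefit of exposing training to beam-searched rather than gold prefixes.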