"NLP Status Report 2017-6-5": Difference between revisions
From cslt Wiki
Changes in this revision (at line 21), added under Shiyue Zhang's entry:
* Trained word2vec on big data and used it directly in NMT, but this resulted in quite poor performance.
* Trained the M-NMT model and got BLEU = 36.58 (+1.34 over the NMT baseline), but found that the EOS in the memory has a big influence on the result.
Revision as of 05:52, 5 June 2017
| Date | People | Last Week | This Week |
|---|---|---|---|
| 2017/6/5 | Jiyuan Zhang | | |
| | Aodong LI | * Only make the English encoder's embedding constant -- 45.98 <br> * Only initialize the English encoder's embedding and then finetune it -- 46.06 <br> * Share the attention mechanism and then directly add them -- 46.20 <br> * Shrink the output vocab from 30000 to 20000; the best result is 31.53 <br> * Train the model with batch size 40; the best result so far is 30.63 <br> (a sketch of the constant vs. finetuned embedding switch follows the table) | |
| | Shiyue Zhang | * Trained word2vec on big data and used it directly in NMT, but this resulted in quite poor performance (a sketch of this embedding-initialization step follows the table) <br> * Trained the M-NMT model and got BLEU = 36.58 (+1.34 over the NMT baseline), but found that the EOS in the memory has a big influence on the result | |
| | Shipan Ren | | |
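The two encoder-embedding settings in Aodong LI's entry (kept constant vs. initialized and then finetuned) differ only in whether the embedding weights receive gradient updates. The sketch below is a minimal illustration of that switch, not the group's actual code; the vocabulary and embedding sizes are assumptions.

```python
# Minimal PyTorch sketch of the two encoder-embedding settings; the sizes
# below are illustrative assumptions, not the reported model configuration.
import torch.nn as nn

encoder_embedding = nn.Embedding(30000, 512)  # assumed vocab and embedding size

# Setting 1: keep the English encoder's embedding constant.
# Excluding the weight from gradient updates freezes it after initialization.
encoder_embedding.weight.requires_grad = False

# Setting 2: initialize the embedding (e.g. from pretrained vectors) and finetune it.
# Leaving requires_grad as True lets the optimizer update the weights normally.
encoder_embedding.weight.requires_grad = True

# When freezing, pass only the trainable parameters to the optimizer, e.g.:
# optimizer = torch.optim.Adam(p for p in model.parameters() if p.requires_grad)
```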
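As a rough illustration of the word2vec-to-NMT step in Shiyue Zhang's entry, the sketch below builds an NMT embedding matrix from externally trained word2vec vectors. The file name, vocabulary, and dimensions are assumptions made for the example, not the group's actual setup.

```python
# Sketch (under assumed names and sizes) of initializing an NMT embedding layer
# from word2vec vectors trained on external data.
import numpy as np
import torch
import torch.nn as nn

EMB_DIM = 300  # must match the dimension of the pretrained word2vec vectors
vocab = {"<pad>": 0, "<unk>": 1, "<eos>": 2}  # hypothetical NMT vocabulary

def load_word2vec_text(path):
    """Read a text-format word2vec file: header 'count dim', then 'word v1 v2 ...'."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        next(f)  # skip the header line
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

w2v = load_word2vec_text("word2vec.bigdata.txt")  # assumed path

# Copy pretrained vectors where available; fall back to small random vectors
# for words that are missing from the word2vec vocabulary.
weights = np.random.uniform(-0.1, 0.1, (len(vocab), EMB_DIM)).astype(np.float32)
for word, idx in vocab.items():
    if word in w2v:
        weights[idx] = w2v[word]

embedding = nn.Embedding(len(vocab), EMB_DIM, padding_idx=vocab["<pad>"])
embedding.weight.data.copy_(torch.from_numpy(weights))
```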