Difference between revisions of "NLP Status Report 2017-1-3"
From cslt Wiki
(6 intermediate revisions by 3 users not shown)
Line 2: | Line 2:
!Date !! People !! Last Week !! This Week
|-
− | rowspan="6"|
+ | rowspan="6"|2017/1/3
|Yang Feng ||
*[[nmt+mn:]] tried to improve the nmt baseline;
+ *ran into problems with the baseline; ruled out output order and file format as factors and traced the cause to the learning rate.
*read Andy's code;
*wrote code for BLEU evaluation;
− *
+ *managed to fix the nmt+mn code;
− *ran experiments
+ *ran experiments [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/50/Nmt_mn_report.pdf report]]
||
*[[nmt+mn:]] do further experiments.
|-
|Jiyuan Zhang ||
− *
+ *improved the speed of the prediction process
− *
+ *ran experiments:<br/>
+ two-style experiments of top1_memory_model<br/>
+ overfitting experiments of top1_memory_model<br/>
+ two-style experiments of average_memory_model<br/>
+ overfitting experiments of average_memory_model
||
*improve poem model
Line 35: | Line 39:
|-
|Guli ||
− *
+ * ran nmt with monolingual data
− *
+ * BLEU computation
+ * learned about TensorFlow
||
− *
+ * improve my paper
− *
+ * analyze experiment results
|-
|Peilun Xiao ||
Latest revision as of 07:02, 3 January 2017 (Tue)
(Rendered final table: rows for Yang Feng, Jiyuan Zhang, Andi Zhang, Shiyue Zhang, Guli, and Peilun Xiao; surviving cell contents are as shown in the diff above.)
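The BLEU evaluation mentioned in the table (Yang Feng's and Guli's entries) is not shown in the report itself. A minimal sketch of sentence-level BLEU in plain Python follows; this is an illustration of the metric, not the group's actual evaluation script, and the function name `bleu` and the 1e-9 smoothing floor are assumptions:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Counter of all n-grams (as tuples) in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of modified n-gram precisions
    (n = 1..max_n, uniform weights) times a brevity penalty.
    hypothesis, reference: lists of tokens."""
    if not hypothesis:
        return 0.0
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = ngrams(hypothesis, n)
        ref_ngrams = ngrams(reference, n)
        # clipped overlap: each hyp n-gram counts at most as often as in the reference
        overlap = sum((hyp_ngrams & ref_ngrams).values())
        total = max(sum(hyp_ngrams.values()), 1)
        # floor the precision to avoid log(0) for orders with no matches
        precisions.append(max(overlap / total, 1e-9))
    # brevity penalty: punish hypotheses shorter than the reference
    if len(hypothesis) >= len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / len(hypothesis))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A perfect match scores 1.0, and largely mismatched output scores near 0; production work would typically use a standard implementation (e.g. NLTK's `sentence_bleu` or multi-bleu.perl) so that scores are comparable across papers.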