Chao Xing 2015-12-7
Revision as of 01:20, 7 December 2015
Last Week:
Solution:
- None.
Plan to do:
1. Reproduce the RNN picture model with a specified image.
2. Done with Matrix Factorization (a minimal reference sketch follows this list).
3. Start to investigate word segmentation.
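For reference on item 2, here is a minimal sketch of SGD-based matrix factorization over the observed entries of a matrix. The toy data, rank, and hyperparameters are illustrative assumptions, not the model actually used in the report.
<pre>
import numpy as np

def factorize(R, rank=2, lr=0.01, reg=0.02, epochs=200):
    # Factor R (n x m) into P (n x rank) and Q (m x rank) by SGD
    # over the observed (non-zero) entries only.
    n, m = R.shape
    rng = np.random.default_rng(0)
    P = rng.normal(scale=0.1, size=(n, rank))
    Q = rng.normal(scale=0.1, size=(m, rank))
    rows, cols = R.nonzero()
    for _ in range(epochs):
        for u, i in zip(rows, cols):
            err = R[u, i] - P[u] @ Q[i]
            pu = P[u].copy()                       # keep old value for the Q update
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q

# Toy usage on a small sparse rating-style matrix (hypothetical data).
R = np.array([[5, 3, 0],
              [4, 0, 1],
              [0, 2, 4]], dtype=float)
P, Q = factorize(R)
print(np.round(P @ Q.T, 2))   # reconstruction of R from the two factors
</pre>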
This Week:
What has been done:
1. Investigated DRAW with the RNN-lstm and RNN-at-lstm variants by feeding both the same random sample (see the sketch after this list).
2. Investigated word sense vectors; tried to find a strategy that balances word frequency, word sense vectors, and neural computation.
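A minimal sketch of the comparison in item 1: a single random latent sample is drawn once and fed to both decoders, so any difference in the generated canvases comes from the decoders themselves. The random linear "decoders" below are stand-ins for the trained RNN-lstm and RNN-at-lstm DRAW models, and the latent size and canvas size are assumptions.
<pre>
import numpy as np

rng = np.random.default_rng(1234)
latent_dim, side = 100, 28                     # assumed latent size and MNIST canvas side

z = rng.standard_normal((1, latent_dim))       # the single shared random sample

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Random projections standing in for the two trained decoders; in the real
# experiment these would be the loaded RNN-lstm and RNN-at-lstm models.
W_lstm = 0.1 * rng.standard_normal((latent_dim, side * side))
W_attn = 0.1 * rng.standard_normal((latent_dim, side * side))

canvas_lstm = sigmoid(z @ W_lstm).reshape(side, side)
canvas_attn = sigmoid(z @ W_attn).reshape(side, side)

# With an identical latent input, the canvases differ only because the decoders do.
print("mean absolute pixel difference:", np.abs(canvas_lstm - canvas_attn).mean())
</pre>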
Plan to do:
1. Extend last week's MNIST results to Chinese images.
2. Give a report after reading some papers.
3. Test some word sense vector models (a toy weighting sketch follows this list).
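As one illustrative way to balance word frequency against sense vectors, the toy sketch below combines a word's sense vectors weighted by their sense counts and then shrinks the result by a frequency-dependent factor. The weighting scheme, dimensions, and counts are assumptions for illustration, not the strategy actually tested.
<pre>
import numpy as np

def combine_senses(sense_vectors, sense_counts, word_freq, alpha=1e-3):
    # Weight each sense vector by its relative count, then scale the combined
    # vector by alpha / (alpha + word_freq) so very frequent words are down-weighted.
    counts = np.asarray(sense_counts, dtype=float)
    weights = counts / counts.sum()
    word_vec = (weights[:, None] * np.asarray(sense_vectors)).sum(axis=0)
    return (alpha / (alpha + word_freq)) * word_vec

# Toy usage: a word with two senses seen 80 and 20 times, corpus frequency 1e-4.
senses = np.random.default_rng(0).standard_normal((2, 50))
vec = combine_senses(senses, [80, 20], word_freq=1e-4)
print(vec.shape)   # (50,)
</pre>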