Dongxu Zhang 2015-10-12
From cslt Wiki
Last Week:
- Simple pooling cannot improve performance (with initialization). Tuning alpha to 0.8 or 0.9 made performance worse, while 0.999 improved it; 0.99 is still running.
- Removed some low-frequency relations from the KBP data; the evaluation tool has been tuned and can now be used on this dataset.
- Tried the CBOW-like topic model. Performance is acceptable on the Reuters-8 dataset; it is now running on 20 Newsgroups.
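The low-frequency relation filtering mentioned above can be sketched as follows. This is a minimal illustration, assuming the KBP data is available as (sentence, relation) pairs and that "low frequency" means a relation label occurring fewer than some threshold number of times; the threshold and data layout are assumptions, not details from the report.

```python
from collections import Counter

def filter_low_freq_relations(examples, min_count=5):
    """Keep only examples whose relation label occurs at least
    min_count times in the dataset.

    examples: list of (sentence, relation_label) pairs.
    """
    counts = Counter(rel for _, rel in examples)
    return [(s, r) for s, r in examples if counts[r] >= min_count]

# Toy usage: one frequent relation, one rare relation.
data = [("sent", "per:title")] * 6 + [("sent", "org:founded")] * 2
kept = filter_low_freq_relations(data, min_count=5)
# Only the frequent relation's examples survive.
```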
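The report does not define what role alpha plays in the pooling; one plausible reading, given values like 0.99 and 0.999, is a decay-weighted average over time steps, where alpha close to 1 approaches plain mean pooling. The sketch below is that hypothetical interpretation only, not the method actually used.

```python
def alpha_pool(hiddens, alpha=0.99):
    """Decay-weighted pooling over a sequence of hidden vectors.

    Hypothetical reading of 'simple pooling with alpha': the last
    time step gets weight alpha**0, the one before alpha**1, and so
    on, normalized to sum to 1. With alpha = 1 this is mean pooling.

    hiddens: list of equal-length lists of floats (T x dim).
    """
    T = len(hiddens)
    dim = len(hiddens[0])
    weights = [alpha ** (T - 1 - t) for t in range(T)]
    z = sum(weights)
    pooled = [0.0] * dim
    for w, h in zip(weights, hiddens):
        for i in range(dim):
            pooled[i] += w * h[i] / z
    return pooled
```

With alpha = 1.0 every step is weighted equally, which gives a quick sanity check against mean pooling.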
This week:
- Complete the CNN code with position features, and run both the CNN and RNN on the KBP dataset.
- Wait for results with alpha = 0.99 and 0.999, and for results of the CBOW-like model.
- Compare KNN classification performance using distances computed with and without attention.
- Divide 20 Newsgroups into two parts, one for supervision and one as unsupervised data, to try one-shot document classification.
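For the "CNN with position" item, a common scheme in CNN relation extraction is to give each token its relative offsets to the two entity mentions, clipped to a maximum distance, and embed those offsets alongside the word. The sketch below shows the feature computation only; the clipping bound and the two-entity setup are assumptions, not necessarily what the report's code does.

```python
def position_features(sent_len, e1_idx, e2_idx, max_dist=30):
    """Relative-position features for each token with respect to two
    entity positions, clipped to [-max_dist, max_dist].

    Returns a list of (offset_to_e1, offset_to_e2) pairs, one per token,
    which would then be looked up in position-embedding tables.
    """
    def clip(d):
        return max(-max_dist, min(max_dist, d))
    return [(clip(i - e1_idx), clip(i - e2_idx)) for i in range(sent_len)]

# Toy usage: a 5-token sentence with entities at positions 1 and 3.
feats = position_features(5, e1_idx=1, e2_idx=3)
# feats == [(-1, -3), (0, -2), (1, -1), (2, 0), (3, 1)]
```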
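The KNN comparison above could be set up as below. This is a sketch under one assumption: that "distance depicted with attention" means a per-dimension attention weight applied inside a Euclidean distance. The weighting scheme is hypothetical; only the with/without-attention comparison comes from the report.

```python
import math

def weighted_distance(x, y, attn=None):
    """Euclidean distance between vectors x and y, optionally weighting
    each dimension by an attention vector (hypothetical scheme).
    With attn=None this is the plain, attention-free distance."""
    if attn is None:
        attn = [1.0] * len(x)
    return math.sqrt(sum(a * (xi - yi) ** 2
                         for a, xi, yi in zip(attn, x, y)))

def knn_predict(query, examples, k=3, attn=None):
    """examples: list of (vector, label). Returns the majority label
    among the k nearest neighbors under weighted_distance."""
    nearest = sorted(examples,
                     key=lambda ex: weighted_distance(query, ex[0], attn))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)
```

Running the same `knn_predict` once with `attn=None` and once with learned attention weights gives exactly the with/without comparison described in the plan.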
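The 20 Newsgroups split for the one-shot experiment might look like the sketch below: a tiny labeled set with a fixed number of documents per class, and the remainder treated as unlabeled data. The per-class count, the dict-of-lists layout, and the shuffling are all assumptions for illustration.

```python
import random

def split_for_one_shot(docs_by_class, sup_per_class=1, seed=0):
    """Split documents into a small supervised set (sup_per_class
    examples per class, with labels) and an unlabeled pool.

    docs_by_class: dict mapping class label -> list of documents.
    Returns (supervised, unlabeled) where supervised is a list of
    (doc, label) pairs and unlabeled is a list of documents.
    """
    rng = random.Random(seed)
    supervised, unlabeled = [], []
    for label, docs in docs_by_class.items():
        docs = docs[:]          # copy so the caller's lists are untouched
        rng.shuffle(docs)
        supervised += [(d, label) for d in docs[:sup_per_class]]
        unlabeled += docs[sup_per_class:]
    return supervised, unlabeled
```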