Vivi (薇薇): a poetry-writing robot
vivi 3.0 (ongoing)
Goals
- Transform modern sentences into poems
- Use extra knowledge to boost innovation
- Use reinforcement learning to improve generation quality
vivi 2.0
Basic approach
- Implemented in TensorFlow
- Attention-based LSTM/GRU sequence-to-sequence (S2S) model
- Sampled words are fed as input when generating the current sentence
- Memory augmentation (global and local); see the sketch after this list
- Local attention for theme (+)
- Local attention over previously generated lines, with couplet assignment (line number?) (+)
- N-best decoding (+)
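The page gives no code for this model, so the following is a minimal numpy sketch of the two attention reads assumed above: one over the encoder states of the previous line and one over a set of global/local memory entries, with both context vectors feeding the decoder. All names, dimensions, and values are invented for illustration; the actual system is a TensorFlow implementation, not this.

<pre>
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention(query, keys, values):
    """Dot-product attention: score each stored state against the current
    decoder state, normalise, and return the weighted sum (context)."""
    weights = softmax(keys @ query)      # one weight per stored state
    return weights @ values, weights

# Toy dimensions, all made up: hidden size 8, 4 encoder states, 6 memory slots.
rng = np.random.default_rng(0)
encoder_states = rng.normal(size=(4, 8))   # states of the previous line
memory_slots   = rng.normal(size=(6, 8))   # global/local memory entries
decoder_state  = rng.normal(size=(8,))     # current decoder hidden state

# Standard attention over the source line plus a second read from the memory;
# both context vectors are concatenated into the next decoder input.
src_context, _ = attention(decoder_state, encoder_states, encoder_states)
mem_context, _ = attention(decoder_state, memory_slots, memory_slots)
decoder_input  = np.concatenate([src_context, mem_context])
print(decoder_input.shape)   # (16,)
</pre>

N-best decoding is not shown; it would simply keep the N highest-scoring partial sentences at each step instead of a single one.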
Implementation details
- Rhyme groups with few characters are removed
- Characters seldom used as rhyme words are removed
- Low-frequency characters are removed (these filtering rules are sketched below)
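A compact sketch of the three filtering rules, assuming the vocabulary is built from a poem corpus plus a rhyme-group table. The thresholds, argument names, and the "last character of a line is the rhyme word" approximation are all assumptions for illustration, not the system's actual settings.

<pre>
from collections import Counter

def filter_vocab(poems, rhyme_groups, min_group_size=10,
                 min_rhyme_uses=3, min_char_freq=5):
    """Apply the three filtering rules listed above.

    poems        -- list of poems, each a list of lines (strings)
    rhyme_groups -- dict mapping a rhyme-group name to its set of characters
    The thresholds are invented for illustration.
    """
    char_freq  = Counter(ch for poem in poems for line in poem for ch in line)
    # Approximate "used as a rhyme word" by "appears as the last character of a line".
    rhyme_freq = Counter(line[-1] for poem in poems for line in poem if line)

    kept_groups = {name: chars for name, chars in rhyme_groups.items()
                   if len(chars) >= min_group_size}                 # rule 1
    kept_rhymes = {ch for ch, n in rhyme_freq.items()
                   if n >= min_rhyme_uses}                          # rule 2
    kept_chars  = {ch for ch, n in char_freq.items()
                   if n >= min_char_freq}                           # rule 3
    return kept_groups, kept_rhymes, kept_chars
</pre>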
Features
- Train a base model, then use the memory for fine-grained innovation
- The memory enables switching of style and form (one possible mechanism is sketched below)
- Local attention enables human-guided composition (+)
- Supports antithesis (parallel couplets) in regulated verse
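One plausible reading of "the memory enables style and form switching" is that, at each decoding step, the decoder's word distribution is interpolated with a distribution read from a memory filled with words of the target style or form. The sketch below only illustrates that idea; the function name, dimensions, and the interpolation weight lam are assumptions, not the system's actual mechanism.

<pre>
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def style_biased_probs(decoder_logits, decoder_state,
                       memory_keys, memory_word_ids, lam=0.3):
    """Mix the decoder's word distribution with a distribution read from a
    style-specific memory: each memory slot votes for the word it stores,
    weighted by its attention score. lam controls how strongly the memory
    (i.e. the target style/form) pulls the output."""
    base = softmax(decoder_logits)                  # (vocab_size,)
    attn = softmax(memory_keys @ decoder_state)     # (num_slots,)
    mem = np.zeros_like(base)
    for w, word_id in zip(attn, memory_word_ids):
        mem[word_id] += w
    return (1.0 - lam) * base + lam * mem

# Toy usage with made-up sizes: vocabulary of 6 words, 3 memory slots.
rng = np.random.default_rng(1)
probs = style_biased_probs(rng.normal(size=6), rng.normal(size=8),
                           rng.normal(size=(3, 8)), [0, 2, 5])
print(probs.sum())   # ~1.0
</pre>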
Test results
Papers
- Can Machine Generate Traditional Chinese Poetry? A Feigenbaum Test. Springer LNCS, vol. 10023, pp. 171-183. http://link.springer.com/chapter/10.1007/978-3-319-49685-6_4/fulltext.html
- Jiyuan Zhang, Yang Feng, Dong Wang, Yang Wang, Andrew Abel, Shiyue Zhang, Andi Zhang, "Flexible and Creative Chinese Poetry Generation Using Neural Memory". https://arxiv.org/abs/1705.03773
vivi 1.0
Basic approach
- Implemented in Theano
- LSTM/GRU sequence-to-sequence model with an attention mechanism
- The input is the first line of a poem; the output is all remaining lines
- Word vectors are pre-trained on a combined corpus of classical texts of multiple genres
- The user's input can be expanded at generation time (see the keyword-expansion sketch after this list)
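The last bullet says the user's input can be expanded at generation time. Below is a small, self-contained sketch of one common way to do that with pre-trained word vectors (nearest neighbours by cosine similarity); the vectors are toy values and expand_input is a hypothetical name, not the system's actual routine.

<pre>
import numpy as np

# Toy pre-trained vectors; in the real system these would come from the
# word-vector pre-training on the mixed classical corpora (values made up).
word_vectors = {
    "明月": np.array([0.9, 0.1, 0.0]),
    "清风": np.array([0.8, 0.2, 0.1]),
    "孤舟": np.array([0.1, 0.9, 0.2]),
    "落花": np.array([0.2, 0.8, 0.3]),
}

def expand_input(keywords, topn=1):
    """Expand the user's keywords with their nearest neighbours in the
    embedding space, so a short input can seed a whole poem."""
    expanded = list(keywords)
    for kw in keywords:
        if kw not in word_vectors:
            continue
        q = word_vectors[kw]
        scored = sorted(
            ((float(q @ v) / (np.linalg.norm(q) * np.linalg.norm(v)), w)
             for w, v in word_vectors.items() if w != kw),
            reverse=True)
        expanded += [w for _, w in scored[:topn] if w not in expanded]
    return expanded

print(expand_input(["明月"]))   # e.g. ['明月', '清风']
</pre>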