Difference between revisions of "ASR:2015-03-09"
From cslt Wiki
Revision as of 06:10, 9 March 2015
Speech Processing
AM development
Environment
- grid-11 often shuts down automatically; computation speed is too slow.
- buy a new 800W power supply -- Xuewei
RNN AM
- details at http://liuc.cslt.org/pages/rnnam.html
- one-state triphone-based RNN?
Mic-Array
- the technical report is done.
- reproduce the environment for Interspeech
Dropout & Maxout & rectifier
- HOLD
- need to solve the too-small learning-rate problem
- 20h small-scale sparse DNN with rectifier. -- Chao Liu
- 20h small-scale sparse DNN with Maxout/rectifier based on weight-magnitude pruning. -- Mengyuan Zhao
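The weight-magnitude pruning used for the sparse DNN above can be sketched as follows. This is a minimal NumPy illustration; the layer shape and sparsity target are placeholders, not the settings of the actual experiment.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights until `sparsity`
    fraction of the matrix is zero (weight-magnitude pruning)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # threshold = k-th smallest absolute value
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# toy example: prune 80% of a random hidden layer
rng = np.random.default_rng(0)
w = rng.normal(size=(512, 512))
w_sparse = magnitude_prune(w, 0.8)
```

In the sparse-DNN recipe, pruning like this is followed by retraining the surviving weights to recover accuracy.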
Convolutive network
- Convolutive network (DAE)
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=311
- Technical report writing, Mian Wang, Yiye Lin, Shi Yin, Mengyuan Zhao
- reproduce experiments -- Yiye
DNN-DAE (deep auto-encoder DNN)
- HOLD
- Technical report to draft. Xiangyu Zeng, Shi Yin, Mengyuan Zhao and Zhiyong Zhang
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=318
RNN-DAE (deep auto-encoder RNN)
VAD
- DAE
- Technical report done. -- Shi Yin
Speech rate training
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=268
- Technical report to draft. Xiangyu Zeng, Shi Yin
- Prepare for NCMSSC
Confidence
- HOLD
- reproduce the experiments on the Fisher dataset.
- use the Fisher DNN model to decode the all-wsj dataset
- prepare scoring for the puqiang data
Neural network visualization
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=324
- Technical report writing, Mian Wang.
Speaker ID
Text Processing
LM development
Domain specific LM
- LM2.X
- mix the sougou2T LM, with Kneser-Ney discounting (done)
- train a large LM using the 25w-dict. (hanzhenglong/wxx)
- v2.0c: filter out useless words (next week)
- set up the test set for new words (hold)
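Mixing LMs as in the sougou2T item above is usually done by linear interpolation of the component models. A minimal sketch over unigram probabilities; the mixture weight `lam` and the toy vocabularies are placeholders (in practice the weight is tuned on held-out data, e.g. with SRILM's `compute-best-mix`).

```python
def interpolate_lm(p_a: dict, p_b: dict, lam: float) -> dict:
    """Linear interpolation: p(w) = lam * p_a(w) + (1 - lam) * p_b(w).
    Words missing from one model get probability 0 from that model."""
    vocab = set(p_a) | set(p_b)
    return {w: lam * p_a.get(w, 0.0) + (1 - lam) * p_b.get(w, 0.0)
            for w in vocab}

# toy unigram models
general = {"the": 0.5, "cat": 0.3, "sat": 0.2}
domain  = {"the": 0.4, "sougou": 0.6}
mixed = interpolate_lm(general, domain, lam=0.7)
print(mixed["the"])  # 0.7*0.5 + 0.3*0.4 = 0.47
```

Because both inputs are proper distributions, the mixture also sums to 1.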
tag LM
- Tag LM (JT)
- error
- similar-word extension in FST
- add the experiment to the tag-LM paper.
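The tag-LM idea (train the LM with entity words replaced by class tags, then expand the tags back to words at decode time) can be illustrated roughly as below; the tag names and word lists are invented for the example and are not from the actual system.

```python
# hypothetical tag lexicon: surface words -> class tag
TAG_LEXICON = {"beijing": "<CITY>", "shanghai": "<CITY>",
               "monday": "<DAY>", "friday": "<DAY>"}

def tag_sentence(words):
    """Replace in-lexicon words with their class tag before LM training."""
    return [TAG_LEXICON.get(w, w) for w in words]

print(tag_sentence(["fly", "to", "beijing", "on", "monday"]))
# ['fly', 'to', '<CITY>', 'on', '<DAY>']
```

The similar-word FST extension then splices word alternatives back in at the tag positions of the decoding graph.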
RNN LM
- RNN
- discuss the RNN-LSTM LM
- LSTM+RNN
- check the lstm-rnnlm code for how to initialize and update the learning rate (hold)
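The learning-rate logic being checked in the lstm-rnnlm code typically follows the rnnlm-toolkit recipe: keep the rate fixed while validation entropy still drops fast enough, then halve it every epoch until convergence. A schematic version; the `min_improvement` factor is an assumed placeholder.

```python
def next_learning_rate(lr, prev_entropy, entropy,
                       halving, min_improvement=1.003):
    """rnnlm-style schedule: while validation entropy keeps dropping by
    at least the `min_improvement` factor, keep lr unchanged; once
    improvement stalls, halve the rate on every subsequent epoch."""
    if halving or entropy * min_improvement > prev_entropy:
        return lr / 2.0, True
    return lr, False

lr, halving = 0.1, False
# entropy still dropping clearly: rate is kept
lr, halving = next_learning_rate(lr, prev_entropy=5.00, entropy=4.80,
                                 halving=halving)
print(lr)  # 0.1
```

A follow-up epoch with only a tiny entropy gain would flip `halving` to True and cut the rate to 0.05.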
Word2Vector
W2V based doc classification
- data preparation (hold)
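A common baseline for W2V-based document classification is to average the word vectors of a document and feed the result to a linear classifier. A minimal sketch with random stand-in vectors; real word2vec embeddings would be loaded in place of `emb`.

```python
import numpy as np

def doc_vector(words, embeddings, dim):
    """Average the vectors of in-vocabulary words; zero vector if none."""
    vecs = [embeddings[w] for w in words if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# stand-in embeddings (random); a real setup loads trained word2vec vectors
rng = np.random.default_rng(1)
emb = {w: rng.normal(size=50) for w in ["stock", "market", "goal", "match"]}
v = doc_vector(["stock", "market", "rally"], emb, dim=50)
print(v.shape)  # (50,)
```

Out-of-vocabulary words ("rally" here) are simply skipped in the average.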
Knowledge vector
- paper is done, submitted to ACL
Translation
- v5.0 demo released
- cut down the dictionary and use the new segmentation tool
Sparse NN in NLP
- write a technical report (Wednesday) and give a presentation.
- ready to prepare the ACL paper
QA
improve fuzzy match
- add synonym similarity using the MERT-4 method (hold)
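Synonym-aware fuzzy matching of the kind described above (treat synonyms as equal when scoring candidate questions) can be sketched as follows. The synonym table and overlap score are illustrative only, and the MERT-4 weight tuning is not shown.

```python
# hypothetical synonym table mapping words to a canonical form
SYNONYMS = {"buy": "purchase", "cheap": "inexpensive"}

def canon(word):
    """Map a word to its canonical synonym-class representative."""
    return SYNONYMS.get(word, word)

def fuzzy_match_score(query, candidate):
    """Word-overlap (Jaccard) score after synonym normalization, in [0, 1]."""
    q = {canon(w) for w in query}
    c = {canon(w) for w in candidate}
    return len(q & c) / max(len(q | c), 1)

print(fuzzy_match_score(["buy", "cheap", "phone"],
                        ["purchase", "inexpensive", "phone"]))  # 1.0
```

In the full system, a score like this would be one feature among several, with the feature weights tuned by MERT.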
online learning
- data is ready; prepare the ACL paper
context framework
- code for the demo
- the demo is done
- the new intern will install SEMPRE