ASR:2015-01-12
From cslt Wiki
Revision as of 04:40, 12 January 2015 (Mon)
Speech Processing
AM development
Environment
- The gpu760 card on grid-14 may be faulty. To be replaced.
Sparse DNN
- details at http://liuc.cslt.org/pages/sparse.html
- Need to test on clean data
- MPE training to be continued
RNN AM
- Trying Microsoft's toolkit. (+)
- details at http://liuc.cslt.org/pages/rnnam.html
Dropout & Maxout & rectifier
- Dropout
- Change the test data to noisier data to verify the effectiveness of dropout.
- MaxOut && P-norm
- Need to solve the too-small learning-rate problem
- Add a normalization layer after the pnorm layer
- Add an L2-norm upper bound
- hold
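For reference, the two nonlinearities discussed above can be sketched in a few lines of numpy (toy shapes and rates, purely illustrative, not our training recipes): inverted dropout, and a pnorm group nonlinearity followed by a renormalization layer that caps activation scale.

```python
import numpy as np

def dropout(h, p_drop, rng, train=True):
    # inverted dropout: zero units with probability p_drop at train time
    # and scale survivors by 1/(1-p_drop), so the test-time pass is the identity
    if not train or p_drop == 0.0:
        return h
    mask = (rng.random(h.shape) >= p_drop) / (1.0 - p_drop)
    return h * mask

def pnorm(x, group_size, p=2.0):
    # pnorm nonlinearity: the p-norm of each group of inputs
    n, d = x.shape
    g = x.reshape(n, d // group_size, group_size)
    return (np.abs(g) ** p).sum(axis=2) ** (1.0 / p)

def renorm(y, target_rms=1.0):
    # normalization layer after pnorm: rescale each row to a fixed RMS,
    # which bounds activations much like an L2-norm cap
    rms = np.sqrt((y ** 2).mean(axis=1, keepdims=True))
    return y * (target_rms / np.maximum(rms, 1e-8))

x = np.array([[3.0, 4.0, 0.0, 1.0]])
y = pnorm(x, group_size=2)   # groups (3,4) and (0,1) -> [[5.0, 1.0]]
z = renorm(y)                # row rescaled to unit RMS
rng = np.random.default_rng(0)
h = dropout(np.ones((4, 6)), 0.5, rng)   # entries are 0.0 or 2.0
```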
Convolutive network
- Convolutive network (DAE)
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=311
- Feature extractor
- Combined with raw features, better performance observed.
- Technical report to draft: Yiye Lin, Shi Yin, Mengyuan Zhao and Mian Wang
- To test real-environment echo.
DNN-DAE (Deep Auto-Encoder DNN)
- Tested on XinWenLianBo music; results at
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhaomy&step=view_request&cvssid=318
- Technical report to draft: Mengyuan Zhao and Zhiyong Zhang.
- To test real-environment echo.
VAD
- Harmonics and Teager energy features done.
- Model to be trained.
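The Teager energy feature mentioned above can be sketched as follows (a toy numpy illustration on a synthetic tone; the real extractor works on framed speech):

```python
import numpy as np

def teager_energy(x):
    # discrete Teager energy operator: psi[x](n) = x(n)^2 - x(n-1) * x(n+1)
    x = np.asarray(x, dtype=float)
    e = np.zeros_like(x)
    e[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return e

# for a pure tone cos(w*n) the operator yields the constant sin(w)^2,
# i.e. it tracks amplitude and frequency, which is what makes it a VAD cue
n = np.arange(200)
e = teager_energy(np.cos(0.2 * n))
```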
Speech rate training
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=268
- Technical report to draft: Shi Yin
Confidence
- Reproduce the experiments on the Fisher dataset.
- Use the Fisher DNN model to decode the all-wsj dataset
- Preparing scoring for the puqiang data
- HOLD
Neural network visualization
Speaker ID
Language ID
- GMM-based language ID is ready.
- Delivered to Jietong
- Prepare the test-case
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=328
Voice Conversion
- Yiye is reading materials(+)
Text Processing
LM development
Domain specific LM
- LM2.1
- Mix with the sougou2T LM; KN-discount training continues
- Train a large LM using the 25w dict (hanzhenglong/wxx)
- Data pre-processing ("this week")
- new dict.
- Dongxu to help Zhenglong with the large dictionary.
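The LM mixing step above amounts to linear interpolation of per-word probabilities, with the mixture weight tuned to minimize perplexity on held-out text. A toy sketch (the probabilities and the 0.5 weight are made up for illustration):

```python
import math

def interpolate(p_domain, p_general, lam):
    # linear interpolation: p(w|h) = lam * p_domain(w|h) + (1-lam) * p_general(w|h)
    return lam * p_domain + (1.0 - lam) * p_general

def perplexity(probs):
    # perplexity of a word stream given its per-word probabilities
    return math.exp(-sum(math.log(p) for p in probs) / len(probs))

# hypothetical per-word probabilities from the two LMs on a tiny test set
domain  = [0.20, 0.05, 0.10]
general = [0.02, 0.15, 0.08]
mixed = [interpolate(d, g, 0.5) for d, g in zip(domain, general)]
ppl = perplexity(mixed)
```

In practice one would sweep the weight over held-out data and keep the value that minimizes perplexity.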
tag LM
- Tag LM
- Test adding a weight to the tag probability and hand over to hanzhenglong ("this month")
- Paper
- Paper to be submitted this week.
- Similar-word extension in FST
- Find similar words using word2vec; the word vectors are training.
- Set the weight for each word.
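Finding similar words and a weight for each from word2vec vectors can be sketched as below (hand-made 3-d vectors stand in for the trained ones; the cosine similarity is one candidate for the FST arc weight):

```python
import numpy as np

def cosine(u, v):
    # cosine similarity between two word vectors
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def similar_words(word, vectors, topn=2):
    # rank all other words by cosine similarity to `word`; the score
    # (or its negative log) can serve as the weight on the extension arc
    q = vectors[word]
    scores = {w: cosine(q, v) for w, v in vectors.items() if w != word}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:topn]

# tiny hand-made vectors; real ones would come from the word2vec training run
vecs = {
    "beijing":  np.array([0.9, 0.1, 0.0]),
    "shanghai": np.array([0.8, 0.2, 0.1]),
    "apple":    np.array([0.0, 0.1, 0.9]),
}
ranked = similar_words("beijing", vecs)
```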
RNN LM
- rnn
- Test the WER of the RNNLM on Chinese data from jietong-data
- Generate an n-gram model from the RNNLM and test perplexity on texts of different sizes.
- lstm+rnn
- Check the lstm-rnnlm code for how the learning rate is initialized and updated. (hold)
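One way to generate an n-gram model from the RNNLM is to sample a large text corpus from it and estimate smoothed n-gram counts from the samples. A toy add-one-smoothed bigram estimate with a perplexity check (a hand-made token stream stands in for sampled text):

```python
from collections import Counter
import math

def bigram_model(tokens, vocab):
    # add-one-smoothed bigram probabilities estimated from a token stream
    # (e.g. text sampled from the RNNLM)
    uni = Counter(tokens[:-1])
    bi = Counter(zip(tokens[:-1], tokens[1:]))
    V = len(vocab)
    return lambda h, w: (bi[(h, w)] + 1) / (uni[h] + V)

def perplexity(model, tokens):
    # perplexity of a token stream under the bigram model
    logp = sum(math.log(model(h, w)) for h, w in zip(tokens[:-1], tokens[1:]))
    return math.exp(-logp / (len(tokens) - 1))

train = "a b a b a c".split()      # stand-in for RNNLM-sampled text
model = bigram_model(train, set(train))
ppl = perplexity(model, "a b a b".split())
```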
Word2Vector
W2V based doc classification
- Data preparation.
Knowledge vector
- Knowledge vector
- Make a proper test set.
- Modify the object function and training process.
relation
Character to word
- Character-to-word conversion (hold)
Translation
- v5.0 demo released
- Cut the dictionary and use the new segmentation tool
QA
improve fuzzy match
- Add synonym similarity using the MERT-4 method (hold)
improve lucene search
- Add more features to improve search:
- POS, NER, TF, IDF, ...
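Of the features above, TF-IDF is the simplest to add: weight each (document, term) pair by tf(t,d) · log(N/df(t)). A small sketch (toy two-document corpus for illustration):

```python
import math
from collections import Counter

def tf_idf(docs):
    # per-document TF-IDF weights: term frequency (normalized by doc length)
    # times log inverse document frequency; usable as a ranking feature
    N = len(docs)
    df = Counter()
    for d in docs:
        df.update(set(d))          # document frequency counts each doc once
    out = []
    for d in docs:
        tf = Counter(d)
        out.append({t: (tf[t] / len(d)) * math.log(N / df[t]) for t in tf})
    return out

docs = [["speech", "recognition"], ["speech", "translation"]]
weights = tf_idf(docs)
# "speech" occurs in every document, so its IDF (and weight) is zero,
# while document-specific terms get positive weight
```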
context framework
- Code organization
query normalization
- Use NER to normalize words
- The new intern will install SEMPRE