ASR:2015-01-19
From cslt Wiki
Latest revision as of 08:48, 23 January 2015 (Fri)
Speech Processing
AM development
Environment
- The GPU760 on grid-14 may be faulty; to be replaced.
- grid-11 often shuts down automatically.
- The CPU fans on grid-2/grid-10 have been replaced.
- Add one hard disk to the cuda.q machines.
Sparse DNN
- details at http://liuc.cslt.org/pages/sparse.html
RNN AM
- Trying the Microsoft toolkit. (+)
- details at http://liuc.cslt.org/pages/rnnam.html
Dropout & Maxout & rectifier
- Dropout
- MaxOut && P-norm (+)
- Need to solve the problem of the learning rate becoming too small
- Add one normalization layer after the pnorm layer (sketch below)
- Add an L2-norm upper bound
- hold
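A minimal numpy sketch of the two items above (a normalization layer placed after the p-norm nonlinearity, and an L2-norm upper bound on the weights); the group size, target RMS and max-norm value are placeholders, not the settings used in these experiments.

```python
import numpy as np

def pnorm(x, group_size=5, p=2.0):
    """P-norm nonlinearity: split units into groups and output the p-norm of each group."""
    n, d = x.shape
    groups = x.reshape(n, d // group_size, group_size)
    return np.power(np.sum(np.abs(groups) ** p, axis=2), 1.0 / p)

def normalize_layer(y, target_rms=1.0, eps=1e-8):
    """Normalization layer after the p-norm: rescale each frame to a fixed RMS so the
    activations (and hence the effective learning rate) stay in a stable range."""
    rms = np.sqrt(np.mean(y ** 2, axis=1, keepdims=True)) + eps
    return y * (target_rms / rms)

def l2_upper_bound(W, max_norm=2.0):
    """L2-norm upper bound: clip the norm of each unit's incoming weight vector."""
    norms = np.linalg.norm(W, axis=0, keepdims=True)
    return W * np.minimum(1.0, max_norm / (norms + 1e-8))

# toy forward pass
x = np.random.randn(4, 100)            # 4 frames, 100 inputs
W = l2_upper_bound(np.random.randn(100, 100))
h = normalize_layer(pnorm(x @ W))      # shape (4, 20) with group_size=5
```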
Convolutive network
- Convolutive network (DAE)
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=311
- To test real-environment echo. (+)
- Feature extractor
- Technical report to be drafted by Yiye Lin, Shi Yin, Mengyuan Zhao and Mian Wang.
DNN-DAE (Deep Auto-Encoder DNN)
- Technical report to be drafted by Mengyuan Zhao and Zhiyong Zhang (see the DAE sketch below).
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=318
- XueWei will reproduce the experiments.
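As background for the DAE items above, a minimal sketch of a single-hidden-layer denoising auto-encoder trained to map noisy feature frames back to clean ones with an MSE loss; the feature dimension, noise model, network size and data are stand-ins, not the actual setup.

```python
import numpy as np

rng = np.random.RandomState(0)
dim, hidden, lr = 40, 128, 0.01                 # 40-dim features assumed for illustration
clean = rng.randn(1000, dim)                    # stand-in for clean training frames
noisy = clean + 0.3 * rng.randn(1000, dim)      # stand-in for noisy/reverberant frames

W1, b1 = 0.1 * rng.randn(dim, hidden), np.zeros(hidden)
W2, b2 = 0.1 * rng.randn(hidden, dim), np.zeros(dim)

for epoch in range(10):
    h = np.tanh(noisy @ W1 + b1)                # encoder
    out = h @ W2 + b2                           # linear decoder
    err = out - clean                           # gradient of the MSE loss w.r.t. the output
    gW2, gb2 = h.T @ err / len(clean), err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)            # back-prop through tanh
    gW1, gb1 = noisy.T @ dh / len(clean), dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    print(epoch, float(np.mean(err ** 2)))
```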
RNN-DAE (Deep Auto-Encoder RNN)
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=261
- HOLD
VAD
- Harmonics and Teager energy features (sketch below).
- MPE training
- Test the harmonic feature only
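The Teager energy operator mentioned above is psi[n] = x[n]^2 - x[n-1]*x[n+1]; a small sketch of turning it into a per-frame VAD feature (frame size, hop and the threshold are placeholders).

```python
import numpy as np

def teager_energy(x):
    """Teager energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def teager_frame_feature(signal, frame_len=400, hop=160):
    """Mean absolute Teager energy per frame (25 ms / 10 ms at 16 kHz)."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        feats.append(np.mean(np.abs(teager_energy(frame))))
    return np.array(feats)

# crude threshold decision per frame; a real VAD would feed this feature to a classifier
sig = np.random.randn(16000)
feat = teager_frame_feature(sig)
speech = feat > 1.5 * np.median(feat)
```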
Speech rate training
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=268
- Technical report to be drafted by Shi Yin.
- Prepare for ChinaSIP
Confidence
- Reproduce the experiments on the Fisher dataset.
- Use the Fisher DNN model to decode the all-wsj dataset.
- Preparing scoring for the puqiang data.
- HOLD
Neural network visualization
Speaker ID
Language ID
- GMM-based language ID is ready (scoring sketch below).
- Delivered to Jietong
- Prepare the test case
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=328
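A minimal sketch of how GMM-based language ID scoring typically works (one GMM per language, decide by the average frame log-likelihood); the features, languages and model sizes below are stand-ins, not the system delivered to Jietong.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
# stand-in acoustic features (e.g. MFCC/SDC frames) for two languages
train = {"zh": rng.randn(2000, 13) + 0.5, "en": rng.randn(2000, 13) - 0.5}

models = {lang: GaussianMixture(n_components=8, covariance_type="diag",
                                random_state=0).fit(feats)
          for lang, feats in train.items()}

test_utt = rng.randn(300, 13) + 0.5                 # stand-in test utterance
scores = {lang: gmm.score(test_utt) for lang, gmm in models.items()}  # mean log-likelihood
print(max(scores, key=scores.get))                  # pick the best-scoring language
```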
Voice Conversion
- Yiye is reading materials
- HOLD
Text Processing
LM development
Domain specific LM
- LM2.1
- Mix the sougou2T-lm; KN discounting continues (interpolation sketch below)
- Train a large LM using the 250k-word dictionary (hanzhenglong/wxx)
- Find the problem in the ASR results
- The model will be finished (Tuesday)
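The mixing step can be pictured as linear interpolation between the domain LM and the general sougou2T LM, with the weight chosen on held-out text; a simplified sketch with unigram stand-ins (the real models are KN-smoothed n-grams, and the probabilities below are made up).

```python
import math

def heldout_ppl(sentences, p_domain, p_general, lam, floor=1e-7):
    """Perplexity of the interpolated model lam*P_domain + (1-lam)*P_general on held-out text."""
    log_sum, count = 0.0, 0
    for sent in sentences:
        for w in sent:
            p = lam * p_domain.get(w, floor) + (1.0 - lam) * p_general.get(w, floor)
            log_sum += math.log(p)
            count += 1
    return math.exp(-log_sum / count)

# made-up unigram stand-ins for the domain LM and the general (sougou2T) LM
dom = {"今天": 0.02, "天气": 0.01}
gen = {"今天": 0.005, "天气": 0.001}
held = [["今天", "天气"]]

# pick the mixing weight that minimizes held-out perplexity
best_ppl, best_lam = min((heldout_ppl(held, dom, gen, l / 10), l / 10) for l in range(1, 10))
print(best_lam, round(best_ppl, 2))
```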
tag LM
- Tag LM
- The tag probability should be tested with an added weight (hanzhenglong) and handed over to hanzhenglong ("this month")
- Run a tag demo (this week)
- Paper
- Submit the paper this week.
- Similar-word extension in FST
- Find similar words using word2vec; the word vectors are being trained (sketch below).
- Set the weight for each word
- Set up a proper test set
- Write a draft of the paper
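A minimal sketch of the similar-word lookup behind the FST extension: cosine similarity over word2vec vectors; the toy vectors are placeholders for the model being trained. Each retrieved word could then be added to the FST with a weight derived from its similarity score.

```python
import numpy as np

def most_similar(word, vectors, topn=5):
    """Return the topn nearest words to `word` by cosine similarity."""
    v = vectors[word] / np.linalg.norm(vectors[word])
    scores = {w: float(np.dot(v, u / np.linalg.norm(u)))
              for w, u in vectors.items() if w != word}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:topn]

# toy vector table; real vectors would come from the trained word2vec model
vecs = {"北京": np.array([0.9, 0.1]),
        "上海": np.array([0.85, 0.2]),
        "苹果": np.array([0.1, 0.9])}
print(most_similar("北京", vecs, topn=2))
```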
RNN LM
- rnn
- Test the WER of the RNNLM on Chinese data from jietong-data
- Generate an n-gram model from the RNNLM and test the PPL on texts of different sizes (sketch below).
- lstm+rnn
- Check how the lstm-rnnlm code initializes and updates the learning rate. (hold)
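One common way to generate an n-gram model from an RNNLM is to sample a text corpus from it and train a count-based n-gram on the samples, then check PPL as the sampled text grows; the notes do not say which conversion is used, so this is only an illustrative sketch.

```python
import random
from collections import defaultdict

def sample_corpus(next_word_prob, vocab, n_sentences=1000, max_len=20):
    """Sample sentences from the RNNLM's next-word distribution."""
    corpus = []
    for _ in range(n_sentences):
        sent, w = [], "<s>"
        while w != "</s>" and len(sent) < max_len:
            weights = [next_word_prob(sent, v) for v in vocab]
            w = random.choices(vocab, weights=weights)[0]
            if w != "</s>":
                sent.append(w)
        corpus.append(sent)
    return corpus

def train_bigram(corpus):
    """MLE bigram estimates over the sampled corpus (smoothing omitted for brevity)."""
    counts, totals = defaultdict(float), defaultdict(float)
    for sent in corpus:
        prev = "<s>"
        for w in sent + ["</s>"]:
            counts[(prev, w)] += 1
            totals[prev] += 1
            prev = w
    return {bg: c / totals[bg[0]] for bg, c in counts.items()}

# stand-in for the RNNLM's conditional distribution (uniform here)
vocab = ["今天", "下雨", "</s>"]
rnnlm = lambda history, word: 1.0 / len(vocab)
bigram = train_bigram(sample_corpus(rnnlm, vocab, n_sentences=200))
```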
Word2Vector
W2V based doc classification
- Data preparation.
Knowledge vector
- Knowledge vector
- Make a proper test set.
- Use the text information and train the word vectors jointly.
- Modify the objective function and the training process (sketch below).
- Try to train on the whole dataset
- Result
- 0.745 -> 0.79, using YAGO for training.
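The notes do not state which embedding model is used; as one common choice for training knowledge vectors on a KB such as YAGO, here is a minimal TransE-style sketch with a margin-based ranking update (the entities, relations and hyperparameters are placeholders, and the project's modified objective may differ).

```python
import numpy as np

rng = np.random.RandomState(0)
dim, lr, margin = 50, 0.01, 1.0
entities = {"Beijing": 0, "China": 1, "Paris": 2, "France": 3}
relations = {"capital_of": 0}
E = rng.uniform(-0.1, 0.1, (len(entities), dim))
R = rng.uniform(-0.1, 0.1, (len(relations), dim))

def dist(h, r, t):
    """TransE distance ||E[h] + R[r] - E[t]||; smaller means the triple is more plausible."""
    return np.linalg.norm(E[h] + R[r] - E[t])

def sgd_step(h, r, t, t_neg):
    """Margin-based ranking update on a true triple and a tail-corrupted one."""
    if dist(h, r, t) + margin > dist(h, r, t_neg):
        g_pos = 2 * (E[h] + R[r] - E[t])        # grad of squared distance, true triple
        g_neg = 2 * (E[h] + R[r] - E[t_neg])    # grad of squared distance, corrupted triple
        E[h] -= lr * (g_pos - g_neg)
        R[r] -= lr * (g_pos - g_neg)
        E[t] += lr * g_pos
        E[t_neg] -= lr * g_neg
        norms = np.maximum(np.linalg.norm(E, axis=1, keepdims=True), 1.0)
        E[:] = E / norms                         # keep entity norms bounded by 1

triples = [("Beijing", "capital_of", "China"), ("Paris", "capital_of", "France")]
for _ in range(100):
    for h, r, t in triples:
        t_neg = rng.choice(list(entities))       # corrupt the tail with a random entity
        if t_neg != t:
            sgd_step(entities[h], relations[r], entities[t], entities[t_neg])
```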
Character to word
- Character-to-word conversion (hold)
Translation
- v5.0 demo released
- Cut down the dictionary and use the new segmentation tool
Sparse NN in NLP
- Review related papers
QA
improve fuzzy match
- Add synonym similarity using the MERT-4 method (hold)
improve Lucene search
- Add more features to improve search:
- POS, NER, tf, idf, ...
- Extract more lexical, syntactic and semantic features to improve re-ranking performance (sketch below).
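A minimal sketch of feature-based re-ranking over retrieved candidates using a few lexical features (token overlap, idf-weighted overlap, length ratio) combined linearly; the feature set and weights are placeholders, and syntactic/semantic features would be added the same way.

```python
import math
from collections import Counter

def lexical_features(query, cand, idf):
    """Simple lexical features for a (query, candidate) token-list pair."""
    q, c = Counter(query), Counter(cand)
    overlap = sum((q & c).values())
    idf_overlap = sum(idf.get(w, 0.0) for w in (q & c))
    return [overlap / max(len(query), 1),
            idf_overlap,
            len(cand) / max(len(query), 1)]

def rerank(query, candidates, idf, weights=(1.0, 0.5, 0.1)):
    """Re-rank Lucene candidates by a linear score over the features above;
    the weights are placeholders and would normally be tuned on labeled data."""
    scored = [(sum(w * f for w, f in zip(weights, lexical_features(query, cand, idf))), cand)
              for cand in candidates]
    return [cand for _, cand in sorted(scored, key=lambda x: -x[0])]

idf = {"weather": math.log(10.0), "today": math.log(2.0)}
print(rerank(["today", "weather"],
             [["today", "news"], ["today", "weather", "report"]], idf))
```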
context framework
- Code organization
- Change to a knowledge graph
query normalization
- Use NER to normalize words
- The new intern will install SEMPRE