ASR:2014-12-29
From cslt Wiki
Latest revision as of 08:48, 29 December 2014 (Monday)
Speech Processing
AM development
Environment
- Modification:
  - Reduced the clock frequency of gpu760.
  - Increased the GPU fan speed.
  - Changed the GPU sleep mode.
- The gpu760 on grid-14 may be faulty; to be replaced.
- To buy 3*2k PCs.
Sparse DNN
- details at http://liuc.cslt.org/pages/sparse.html
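Sparse DNNs are commonly obtained by zeroing small-magnitude weights. A minimal magnitude-pruning sketch; the actual method documented at the link may differ (e.g. interleaving pruning with retraining):

```python
def prune_by_magnitude(weights, sparsity):
    """Zero the smallest-magnitude `sparsity` fraction of a weight matrix.

    Ties at the threshold may leave slightly fewer zeros than requested;
    in practice pruning is interleaved with retraining to recover accuracy.
    """
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)
    threshold = flat[k] if k < len(flat) else float("inf")
    return [[0.0 if abs(w) < threshold else w for w in row] for row in weights]

# half of the weights zeroed, smallest magnitudes first
pruned = prune_by_magnitude([[0.01, -0.8], [0.5, -0.02]], sparsity=0.5)
```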
RNN AM
- Adjusting the learning rate.(+)
- Trying toolkit of Microsoft.(+)
- Trying new LSTM toolkit from Baidu
- details at http://liuc.cslt.org/pages/rnnam.html
A new nnet training scheduler
- details at http://liuc.cslt.org/pages/nnet-sched.html
- Tested on the 500h dataset (36 epochs / 8 batches): performance similar to the standard recipe.
- Test on the 36600h dataset: done.
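A typical nnet training scheduler halves the learning rate once the held-out (cross-validation) loss stops improving. A newbob-style sketch, assuming hypothetical parameter values; the actual recipe at the link above may differ:

```python
class NewbobScheduler:
    """Halve the learning rate once cross-validation improvement drops
    below a threshold; keep halving every epoch after that.
    Parameter defaults here are illustrative, not the actual recipe's."""

    def __init__(self, lr=0.008, threshold=0.005, halving=0.5):
        self.lr = lr
        self.threshold = threshold
        self.halving = halving
        self.prev_loss = float("inf")
        self.halving_started = False

    def step(self, cv_loss):
        """Call once per epoch with the cross-validation loss; returns the
        learning rate to use for the next epoch."""
        improvement = self.prev_loss - cv_loss
        if self.halving_started or improvement < self.threshold:
            self.halving_started = True
            self.lr *= self.halving
        self.prev_loss = cv_loss
        return self.lr

sched = NewbobScheduler()
lr = sched.step(1.50)  # large first improvement: initial rate is kept
```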
Dropout & Maxout & Convolutional network
- Dropout (+)
  - Find and test unknown-noise test data. (++)
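For reference, the standard inverted-dropout forward pass (a generic sketch, not the specific recipe used in these experiments):

```python
import random

def dropout(activations, p_drop, training=True, rng=random):
    """Inverted dropout: at training time each unit is zeroed with
    probability p_drop and survivors are scaled by 1/(1 - p_drop), so the
    test-time forward pass needs no extra scaling."""
    if not training or p_drop <= 0.0:
        return list(activations)
    keep = 1.0 - p_drop
    return [a / keep if rng.random() >= p_drop else 0.0 for a in activations]

# at test time the activations pass through unchanged
out = dropout([0.3, 1.2, -0.7], p_drop=0.5, training=False)
```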
- Maxout & P-norm
  - Need to solve the too-small learning rate problem:
    - Add a normalization layer after the p-norm layer.
    - Add an L2-norm upper bound.
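The two fixes listed above can be sketched as follows: a group p-norm nonlinearity followed by a layer that rescales to unit RMS, which bounds activation magnitudes so the global learning rate need not be made tiny. Group size and p = 2 are assumed defaults here, not the experiment's actual settings:

```python
import math

def pnorm(x, group_size, p=2.0):
    """Group p-norm nonlinearity: each output pools one group of inputs,
    y_j = (sum_{i in group j} |x_i|^p)^(1/p)."""
    assert len(x) % group_size == 0
    out = []
    for g in range(0, len(x), group_size):
        out.append(sum(abs(v) ** p for v in x[g:g + group_size]) ** (1.0 / p))
    return out

def normalize_rms(y, eps=1e-12):
    """Normalization layer: scale the vector to unit root-mean-square."""
    rms = math.sqrt(sum(v * v for v in y) / len(y)) + eps
    return [v / rms for v in y]

hidden = normalize_rms(pnorm([3.0, 4.0, 0.0, 1.0], group_size=2))
```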
- Convolutional network (DAE)
  - http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=311
  - To test real-environment echo.
DAE (Deep Auto-Encoder DNN)
- Tested on XinWenLianBo music; results at http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhaomy&step=view_request&cvssid=318
- To test real-environment echo.
VAD
- Harmonic and Teager-energy features under investigation. (++)
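The Teager energy operator used for these VAD features is Psi[x(n)] = x(n)^2 - x(n-1)*x(n+1); for a pure tone it is roughly proportional to (amplitude * frequency)^2, which helps separate voiced speech from noise. A minimal sketch of the feature itself (the thresholding and smoothing around it are not shown):

```python
def teager_energy(x):
    """Teager energy operator per sample:
    Psi[x(n)] = x(n)^2 - x(n-1) * x(n+1).
    Output is two samples shorter than the input (no boundary handling)."""
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]
```

For x(n) = A*cos(Omega*n) this gives approximately A^2 * sin(Omega)^2 at every sample, i.e. a constant that grows with both amplitude and frequency.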
Speech rate training
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=268
Confidence
- Reproduce the experiments on the Fisher dataset.
- Use the Fisher DNN model to decode the full WSJ dataset.
- Preparing scoring for the Puqiang data.
- HOLD
Speaker ID
- Non-stream GMM: wer 2.28%
- separate3-ivector: wer 3.54; single-ivector: wer 1.57
- separate-PLDA: wer 0.87; single-PLDA: wer 1.00
- Code ready
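The systems above score i-vectors with PLDA; the simpler baseline they are usually compared against is plain cosine scoring of two i-vectors, sketched here for reference (not the PLDA scoring actually used):

```python
import math

def cosine_score(ivec1, ivec2):
    """Cosine similarity between two i-vectors: the dot product of the
    length-normalized vectors. Higher means more likely the same speaker."""
    dot = sum(a * b for a, b in zip(ivec1, ivec2))
    n1 = math.sqrt(sum(a * a for a in ivec1))
    n2 = math.sqrt(sum(b * b for b in ivec2))
    return dot / (n1 * n2)
```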
Language ID
- The GMM-based language ID system is ready.
- Delivered to Jietong
- Prepare the test-case
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=328
- To test 10 language IDs.
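GMM-based LID trains one GMM per language and picks the language whose model gives the utterance the highest total log-likelihood. A sketch with 1-D features and diagonal components for brevity (real systems use MFCC/SDC feature vectors and large mixtures):

```python
import math

def log_gauss(x, mean, var):
    # log N(x; mean, var) for a scalar feature
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def gmm_loglik(frames, gmm):
    # gmm: list of (weight, mean, var); log-sum-exp over components per frame
    total = 0.0
    for x in frames:
        comp = [math.log(w) + log_gauss(x, m, v) for w, m, v in gmm]
        mx = max(comp)
        total += mx + math.log(sum(math.exp(c - mx) for c in comp))
    return total

def identify(frames, models):
    # models: {language: gmm}; return the best-scoring language
    return max(models, key=lambda lang: gmm_loglik(frames, models[lang]))
```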
Voice Conversion
- Yiye is reading materials. (+)
Text Processing
LM development
Domain specific LM
- LM 2.0
  - Data check for the lexicon (Jietong).
  - Merge the LM with NAME, POI, etc. (hanzhenglong/wxx)
  - Mix the sougou2T LM (KN discounting); work continues.
  - Train a large LM using the 25w (250k-entry) dict. (hanzhenglong/wxx)
  - Prune the history LM. (wxx)
- New dict:
  - Dongxu helps Zhenglong with the large dictionary.
Tag LM
- To do:
  - Test adding the weight to the tag probability (hanzhenglong); hand over to hanzhenglong. (hold)
- Paper:
  - Modify the paper (yuanb, two days); submit this week.
RNN LM
- RNN:
  - Test RNNLM WER on Chinese data from Jietong. (this week)
  - Generate an n-gram model from the RNNLM and test PPL on texts of different sizes. [1]
- LSTM+RNN:
  - Check the LSTM-RNNLM code for how the learning rate is initialized and updated. (hold)
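The PPL test above compares language models by perplexity; for reference, perplexity over a test text is the exponentiated negative mean log-probability the model assigns to each word:

```python
import math

def perplexity(log_probs):
    """PPL = exp(-(1/N) * sum_i ln p(w_i | history_i)); lower is better.
    `log_probs` are natural-log word probabilities from the LM."""
    return math.exp(-sum(log_probs) / len(log_probs))

# a model that always assigns p = 0.25 has perplexity 4
ppl = perplexity([math.log(0.25)] * 10)
```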
Word2Vector
W2V based doc classification
- Initial results with the variational Bayesian GMM obtained; performance is not as good as the conventional GMM. (hold)
- Non-linear inter-language transform (English-Spanish-Czech): word-vector model training done; the transform model is under investigation.
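The linear baseline for the inter-language transform is a matrix W minimizing sum ||W x - z||^2 over paired source/target word vectors (the non-linear version under investigation would replace W with a small network). A plain-SGD sketch; the function name and hyperparameters are illustrative:

```python
def fit_translation_matrix(src, tgt, lr=0.1, epochs=500):
    """Learn W minimizing sum ||W x - z||^2 over (x, z) pairs of
    source/target word vectors, by per-example gradient steps.
    Illustrative sketch; real word vectors would need more care (scaling,
    regularization, a proper solver)."""
    dim_in, dim_out = len(src[0]), len(tgt[0])
    W = [[0.0] * dim_in for _ in range(dim_out)]
    for _ in range(epochs):
        for x, z in zip(src, tgt):
            y = [sum(W[i][j] * x[j] for j in range(dim_in)) for i in range(dim_out)]
            for i in range(dim_out):
                err = y[i] - z[i]
                for j in range(dim_in):
                    W[i][j] -= lr * err * x[j]
    return W
```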
Knowledge vector
- Knowledge vector:
  - Make a proper test set.
  - Modify the objective function and training process.
  - Read Liu's paper.
Relation
- Implemented TransE with almost the same performance as reported in the paper (even better). [2]
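TransE models a relation as a translation in embedding space, so a true triple (h, r, t) should have small ||h + r - t||. A sketch of the score and the margin-ranking loss used in training (margin value illustrative):

```python
import math

def transe_score(h, r, t):
    """TransE dissimilarity ||h + r - t|| (L2 norm): small for true
    triples, large for corrupted ones."""
    return math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

def margin_loss(pos_score, neg_score, margin=1.0):
    """Margin ranking loss: push the corrupted triple's score at least
    `margin` above the true triple's score."""
    return max(0.0, margin + pos_score - neg_score)
```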
Character to word
- Character-to-word conversion. (hold)
  - Prepare the task: word similarity.
  - Prepare the dict.
Translation
- v5.0 demo released.
- Cut the dict and use the new segmentation tool.
QA
Improve fuzzy match
- Add synonym similarity using the MERT-4 method. (hold)
Improve Lucene search
- Add more features to improve search:
  - POS, NER, tf, idf, etc.
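Of the features listed, tf and idf are the easiest to pin down: term frequency weighted by how rare the term is across the collection. A minimal sketch (raw counts and idf = ln(N/df); Lucene's own scoring formula differs in its smoothing and normalization):

```python
import math

def tf_idf(docs):
    """docs: list of token lists. Returns, per document, a dict mapping
    each term to tf * idf, with raw term counts and idf = ln(N / df)."""
    n = len(docs)
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    return [{t: doc.count(t) * math.log(n / df[t]) for t in set(doc)}
            for doc in docs]

# a term present in every document gets weight 0
scores = tf_idf([["a", "b"], ["a", "c"]])
```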
XiaoI framework
- Context in XiaoI.
Query normalization
- Use NER to normalize words.
- The new intern will install SEMPRE.