ASR:2015-06-01: Difference between revisions
From cslt Wiki
Revision as of 01:09, 1 June 2015
Speech Processing
AM development
Environment
- grid-15 is often unavailable
- grid-14 is often unavailable
RNN AM
- details at http://liuc.cslt.org/pages/rnnam.html
- Test monophone RNN using dark knowledge --Chao Liu
- Run on WSJ with MPE training --Chao Liu
- Run bi-directional RNN --Chao Liu
- Train RNN with dark-knowledge transfer on AURORA4 --Zhiyuan
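Several items above rely on dark-knowledge (soft-target) transfer from a teacher model. A minimal sketch of the standard temperature-scaled soft targets and distillation loss, assuming the usual setup (the temperature value and function names are illustrative, not this project's actual code):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T gives softer targets."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_targets(teacher_logits, T=2.0):
    """Soft targets ("dark knowledge") produced by the teacher model."""
    return softmax(teacher_logits, T=T)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between teacher soft targets and student soft predictions."""
    p = distill_targets(teacher_logits, T=T)
    q = softmax(student_logits, T=T)
    return -np.sum(p * np.log(q + 1e-12), axis=-1).mean()
```

In practice this soft-target loss is usually interpolated with the ordinary hard-label cross-entropy.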
Mic-Array
- hold
- Change the prediction target from fbank to spectrum features
- Investigate the alpha parameter in the time domain and the frequency domain
- alpha >= 0, using data generated by the reverb toolkit
- Consider theta
- Compute EER with Kaldi
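The last item computes EER with Kaldi. As a cross-check, the equal error rate can be computed directly from raw trial scores; a minimal numpy sketch (illustrative, not Kaldi's implementation):

```python
import numpy as np

def compute_eer(target_scores, nontarget_scores):
    """Equal error rate: the point where miss rate equals false-alarm rate."""
    scores = np.concatenate([target_scores, nontarget_scores])
    labels = np.concatenate([np.ones(len(target_scores)),
                             np.zeros(len(nontarget_scores))])
    labels = labels[np.argsort(scores)]          # order labels by ascending score
    n_tar = labels.sum()
    n_non = len(labels) - n_tar
    # miss[i]: fraction of targets with score <= i-th sorted score
    miss = np.cumsum(labels) / n_tar
    # fa[i]: fraction of nontargets with score above the i-th sorted score
    fa = 1.0 - np.cumsum(1 - labels) / n_non
    idx = np.argmin(np.abs(miss - fa))           # threshold where the rates cross
    return (miss[idx] + fa[idx]) / 2.0
```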
RNN-DAE (RNN-based deep auto-encoder)
- Deliver to Mengyuan
Speaker ID
- DNN-based SID --Yiye Lin
Ivector&Dvector based ASR
- Hold --Tian Lan
- Cluster the speakers into speaker classes, then use the distance or the posterior probability as the metric
- Directly apply the dark-knowledge strategy to i-vector training
- A smaller i-vector dimension gives better performance
- Augmenting the hidden layer works better than augmenting the input layer
- Train on WSJ (test sets: dev93 + eval92)
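On "using the distance ... as the metric": a common baseline for comparing speaker vectors (i-vectors or d-vectors) is cosine scoring between an enrollment vector and a test vector. A minimal sketch (illustrative, not this project's actual scoring backend):

```python
import numpy as np

def cosine_score(enroll_vec, test_vec):
    """Cosine similarity between two speaker vectors; higher means more similar."""
    a = enroll_vec / np.linalg.norm(enroll_vec)  # length-normalize both vectors
    b = test_vec / np.linalg.norm(test_vec)
    return float(np.dot(a, b))
```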
Dark knowledge
- Ensemble: use the 100h dataset to construct different structures --Mengyuan
- Adaptation between English and Chinglish
- Try to improve the Chinglish performance as much as possible
- Unsupervised training with WSJ improves the AURORA4 model --Xiangyu Zeng
- Test on a large database with AMIDA
- Test hidden-layer knowledge transfer --Xuewei
bilingual recognition
- hold
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zxw&step=view_request&cvssid=359 --Zhiyuan Tang and Mengyuan
language vector
- Train a DNN with a language vector --Xuewei
Text Processing
RNN LM
- Character-level RNN LM (hold)
- LSTM + RNN
- Check the LSTM-RNNLM code for how it initializes and updates the learning rate (hold)
W2V based document classification
- Write a technical report on document classification using CNN --Yiqiao (done)
- Adapt the CNN to address the low-resource problem
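The CNN document classifier above convolves filters over word2vec embeddings and max-pools over time. A toy numpy sketch of one such filter (dimensions and the tanh nonlinearity are illustrative assumptions, not the report's actual architecture):

```python
import numpy as np

def conv1d_maxpool(embeddings, filt):
    """One CNN feature for a document: slide a filter over the word-embedding
    sequence and max-pool over time.
    embeddings: (seq_len, dim) word vectors; filt: (width, dim) filter."""
    width = filt.shape[0]
    n = embeddings.shape[0] - width + 1
    # activation at each window position: elementwise product summed over the window
    acts = np.array([np.sum(embeddings[i:i + width] * filt) for i in range(n)])
    return np.tanh(acts).max()  # max-over-time pooling yields one scalar feature
```

A full classifier would apply many such filters and feed the pooled features to a softmax layer.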
Translation
- Test the performance of the similar-pair method in bilingual recognition
Order representation
- Sort out the vectors and run the experiment on objective-function convergence
- Test on the classification and prediction tasks
binary vector
- Finish the Hamming-metric binary vector
- Try to finish the binary vector work
- Write the test report
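The Hamming metric referenced above simply counts differing bits between two equal-length binary vectors:

```python
def hamming_distance(a, b):
    """Number of positions at which two equal-length binary vectors differ."""
    assert len(a) == len(b), "vectors must have the same length"
    return sum(x != y for x, y in zip(a, b))
```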
Stochastic ListNet
- Finish the first draft of the EMNLP 2015 long paper
relation classifier
- Finish the draft.
plan to do
- Combine LDA with a neural network
- Modify the objective function
- Sub-sampling method to handle low-frequency words