ASR:2015-02-02
From cslt Wiki
Latest revision as of 06:52, 6 February 2015
Speech Processing
AM development
Environment
- The gpu760 card of grid-14 may still be under repair.
- grid-11 often shuts down automatically, and its computation speed is too slow.
RNN AM
- details at http://liuc.cslt.org/pages/rnnam.html
Mic-Array
- XueWei is reading papers and preparing the technical report
Dropout & Maxout & rectifier
- Need to solve the problem of the learning rate being too small.
- 20h small-scale sparse DNN with rectifier. --Chao Liu
- 20h small-scale sparse DNN with Maxout/rectifier based on weight-magnitude pruning (a sketch of the pruning idea follows this list). --Mengyuan Zhao
- Hold.
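The weight-magnitude pruning mentioned above keeps only the largest-magnitude weights of a layer and zeroes the rest. A minimal numpy sketch of that idea; the sparsity level and threshold selection are illustrative assumptions, not the actual training recipe:

<syntaxhighlight lang="python">
import numpy as np

def magnitude_prune(weights, sparsity=0.8):
    """Zero out the smallest-magnitude entries of a weight matrix.

    sparsity=0.8 keeps roughly the largest 20% of weights; the sparsity
    level used in the real experiments is not specified here.
    """
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask, mask

# toy example: prune a random 4x5 layer to about 80% sparsity
w = np.random.randn(4, 5)
w_sparse, mask = magnitude_prune(w, sparsity=0.8)
print("kept weights:", mask.sum(), "of", mask.size)
</syntaxhighlight>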
Convolutive network
- Convolutive network (DAE)
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=311
- Technical report to draft: Mian Wang, Yiye Lin, Shi Yin, Mengyuan Zhao.
DNN-DAE (Deep Auto-Encoder DNN)
- Technical report to draft: Xiangyu Zeng, Shi Yin, Mengyuan Zhao and Zhiyong Zhang.
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=318
RNN-DAE (RNN-based Deep Auto-Encoder)
VAD
- DAE
- HOLD
- Technical report -- Shi Yin
Speech rate training
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=268
- Technical report to draft. Xiangyu Zeng, Shi Yin
- Prepare for ChinaSIP
Confidence
- Reproduce the experiments on the Fisher dataset.
- Use the Fisher DNN model to decode the all-wsj dataset.
- Prepare scoring for the puqiang data.
- HOLD
Neural network visualization
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=324
- Technical report, Mian Wang.
Speaker ID
- http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=zhangzy&step=view_request&cvssid=327
Text Processing
LM development
Domain specific LM
- LM2.X
- Mix with the sougou2T LM; the KN-discounting work continues (a toy interpolation sketch follows this list).
- Train a large LM using the 25w (250k-word) dictionary. (hanzhenglong/wxx)
- v2.0a: adjusted the interpolation weights; a smaller weight for the transportation domain is better. (done)
- v2.0b: add the v1.0 vocabulary. (this week)
- v2.0c: filter out useless words. (next week)
- Set up the test set for new words. (hold)
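Mixing the domain LM with the sougou2T LM amounts to interpolating n-gram probabilities. A minimal sketch of linear interpolation over two probability tables, assuming the models have already been loaded into Python dicts; in practice this would be done with an LM toolkit, and the 0.7/0.3 weights are only placeholders:

<syntaxhighlight lang="python">
def interpolate_lm(p_domain, p_general, lam=0.7):
    """Linearly interpolate two n-gram probability tables.

    p_domain, p_general: dicts mapping an n-gram tuple to its probability.
    lam: interpolation weight for the domain model (placeholder value).
    """
    vocab = set(p_domain) | set(p_general)
    return {ng: lam * p_domain.get(ng, 0.0) + (1.0 - lam) * p_general.get(ng, 0.0)
            for ng in vocab}

# toy example with two tiny unigram tables
p_dom = {("地铁",): 0.4, ("公交",): 0.6}
p_gen = {("地铁",): 0.1, ("新闻",): 0.9}
print(interpolate_lm(p_dom, p_gen, lam=0.7))
</syntaxhighlight>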
tag LM
- Tag LM
- The code has been delivered to jietong.
- similar word extension in FST
- Write a draft of the paper.
- Result: http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=mx&step=view_request&cvssid=332
RNN LM
- RNN
- Test the WER of the RNNLM on Chinese data from jietong-data.
- Generate an n-gram model from the RNNLM and test the PPL on texts of different sizes (a PPL sketch follows this list).
- LSTM+RNN
- Check how the lstm-rnnlm code initializes and updates the learning rate. (hold)
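Testing PPL on texts of different sizes reduces to averaging the per-word log-probabilities assigned by the model. A minimal sketch assuming those probabilities are already available as a list; the values below are made up for illustration:

<syntaxhighlight lang="python">
import math

def perplexity(word_probs):
    """Perplexity from a list of per-word probabilities p(w_i | history)."""
    log_sum = sum(math.log(p) for p in word_probs)
    return math.exp(-log_sum / len(word_probs))

# toy example: probabilities an LM might assign to a 5-word sentence
probs = [0.2, 0.05, 0.1, 0.3, 0.08]
print("PPL =", perplexity(probs))
</syntaxhighlight>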
Word2Vector
W2V based doc classification
- Data preparation (a toy classification sketch follows).
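A common baseline for w2v-based document classification is to average the word vectors of a document and assign the class whose centroid is most similar. A minimal numpy sketch with toy vectors; the real system would use vectors trained by word2vec on the prepared data, and the nearest-centroid classifier is only an assumed baseline:

<syntaxhighlight lang="python">
import numpy as np

# toy word vectors standing in for a trained word2vec model
word_vec = {
    "股票": np.array([0.9, 0.1]), "基金": np.array([0.8, 0.2]),
    "足球": np.array([0.1, 0.9]), "比赛": np.array([0.2, 0.8]),
}

def doc_vector(words):
    """Average the vectors of the in-vocabulary words of a document."""
    vecs = [word_vec[w] for w in words if w in word_vec]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# class centroids built from labeled training documents
centroids = {
    "finance": doc_vector(["股票", "基金"]),
    "sports":  doc_vector(["足球", "比赛"]),
}

def classify(words):
    v = doc_vector(words)
    return max(centroids, key=lambda c: cosine(v, centroids[c]))

print(classify(["足球", "股票", "比赛"]))   # -> "sports"
</syntaxhighlight>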
Knowledge vector
- Run on the large dataset.
- Continue testing, including paragraph vectors and relations.
- result: http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=lr&step=view_request&cvssid=326
- prepare the paper.
Character to word
- Character to word conversion(hold)
Translation
- v5.0 demo released
- Cut down the dictionary and use the new segmentation tool.
Sparse NN in NLP
- Write a technical report (Wednesday) and give a report.
QA
improve fuzzy match
- Add synonym similarity using the MERT-4 method. (hold)
improve lucene search
- Add more features to improve search.
- POS, NER, tf, idf
- Result: P@1: 0.68734335 --> 0.7763158; P@5: 0.80325814 --> 0.8383459 (http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/Huilan-learning-to-rank)
- Optimize the feature-extraction and reranking code and hand it over to Rong Liu to check in (a reranking sketch follows this list).
- Using sentence vectors does not work.
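The reranking step combines the extra features (POS/NER matches, tf, idf, etc.) into one score per candidate and re-sorts the Lucene hits. A minimal sketch of a weighted linear reranker over precomputed feature dicts; the feature names, weights, and candidates are all illustrative and not the Huilan learning-to-rank implementation:

<syntaxhighlight lang="python">
# illustrative weights; in the real system these would be learned
WEIGHTS = {"lucene_score": 1.0, "tf": 0.5, "idf": 0.3, "ner_match": 2.0}

def rerank(candidates):
    """Re-sort Lucene candidates by a weighted sum of their features.

    candidates: list of dicts, each holding the feature values of one hit.
    """
    def score(feats):
        return sum(WEIGHTS.get(name, 0.0) * value for name, value in feats.items())
    return sorted(candidates, key=score, reverse=True)

# toy candidates as they might come back from the search stage
hits = [
    {"lucene_score": 1.2, "tf": 0.1, "idf": 0.4, "ner_match": 0.0},
    {"lucene_score": 0.9, "tf": 0.3, "idf": 0.5, "ner_match": 1.0},
]
for h in rerank(hits):
    print(h)
</syntaxhighlight>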
online learning
- A simple version of the online-learning component of QA.
context framework
- code for organization
- Change to a knowledge graph, and learn the D2R tool and Jena.
query normalization
- Use NER to normalize words (see the sketch below).
- The new intern will install SEMPRE.
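Using NER to normalize the query can be read as replacing recognized entity mentions with a canonical form before matching. A minimal sketch with a hand-made mention dictionary standing in for the NER output; everything here is an illustrative assumption:

<syntaxhighlight lang="python">
# toy mapping from surface mentions to canonical forms, standing in
# for what an NER / entity-linking step would produce
CANONICAL = {
    "北大": "北京大学",
    "清华": "清华大学",
    "TH": "清华大学",
}

def normalize_query(tokens):
    """Replace recognized entity mentions with their canonical form."""
    return [CANONICAL.get(tok, tok) for tok in tokens]

print(normalize_query(["清华", "在", "哪里"]))   # -> ['清华大学', '在', '哪里']
</syntaxhighlight>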