Difference between revisions of "ASR:2015-02-02"

From cslt Wiki

Revision as of 11:47, 2 February 2015 (Monday)

Speech Processing

AM development

Environment

  • The gpu760 on grid-14 may still be under repair.
  • grid-11 often shuts down automatically, and its computation speed is too slow.

RNN AM

Dropout & Maxout & rectifier

  • Need to solve the problem of the learning rate becoming too small.
  • 20h small-scale sparse DNN with rectifier. --Chao Liu
  • 20h small-scale sparse DNN with Maxout/rectifier based on weight-magnitude pruning. --Mengyuan Zhao
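The weight-magnitude pruning mentioned above can be sketched as follows. This is a minimal, generic illustration using NumPy; the actual DNN layer shapes, sparsity targets, and retraining schedule are not specified in the report, so the values below are assumptions.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude weights, keeping a (1 - sparsity) fraction.

    A toy sketch of weight-magnitude pruning: rank weights by absolute
    value and mask out the smallest ones.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)              # number of weights to drop
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold            # keep strictly larger weights
    return weights * mask

# Hypothetical 2x2 weight matrix; prune half of the entries.
W = np.array([[0.01, -0.5], [0.9, -0.02]])
W_sparse = magnitude_prune(W, 0.5)  # keeps the two largest-magnitude weights
```

In practice the network is usually retrained (or fine-tuned) after pruning to recover accuracy; the masked positions are kept at zero during that phase.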

Convolutive network

  • Convolutive network (DAE)

DNN-DAE (Deep Auto-Encoder DNN)

RNN-DAE (Deep Auto-Encoder based on RNN)

VAD

  • DAE
  • Technical report --Shi Yin

Speech rate training

Confidence

  • Reproduce the experiments on the Fisher dataset.
  • Use the Fisher DNN model to decode the all-WSJ dataset.
  • Prepare scoring for the puqiang data.
  • HOLD

Neural network visualization

Speaker ID

Language ID

Voice Conversion

  • Yiye is reading materials
  • HOLD


Text Processing

LM development

Domain specific LM

  • LM2.X
  • continue mixing the sougou2T LM with Kneser-Ney discounting
  • train a large LM using the 25w dict. (hanzhenglong/wxx)
  • v2.0a: adjusted the weights; a smaller weight for the transportation data works better. (done)
  • v2.0b: add the v1.0 vocab (this week)
  • v2.0c: filter out the useless words (next week)
  • set up the test set for new words (hold)
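The LM mixing and weight tuning above (background sougou2T LM plus a domain LM, with a tuned interpolation weight) amounts to linear interpolation of the two models' probabilities. A toy sketch, using made-up unigram tables — real mixing would operate on full n-gram models in ARPA format (e.g. with SRILM's `ngram -mix-lm`):

```python
def interpolate(p_domain, p_background, lam):
    """Linear interpolation of two LM probabilities for the same event."""
    return lam * p_domain + (1.0 - lam) * p_background

def mix_lms(lm_domain, lm_background, lam):
    """Mix two unigram LMs given as {word: prob} dicts (toy illustration)."""
    vocab = set(lm_domain) | set(lm_background)
    return {w: interpolate(lm_domain.get(w, 0.0), lm_background.get(w, 0.0), lam)
            for w in vocab}

# Hypothetical distributions; lam = 0.3 gives the domain data a small weight,
# in the spirit of the v2.0a finding that a smaller domain weight works better.
domain = {"bus": 0.6, "stop": 0.4}
background = {"bus": 0.1, "stop": 0.1, "the": 0.8}
mixed = mix_lms(domain, background, 0.3)
```

The mixture weight `lam` is normally tuned to minimize perplexity on a held-out development set rather than set by hand.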

tag LM

  • Tag LM
  • code has been handed over to jietong.
  • similar word extension in FST
  • write a draft of a paper
  • result: 16.32 -> 10.23

RNN LM

  • RNN
  • test the WER of the RNNLM on Chinese data from jietong-data
  • generate an n-gram model from the RNNLM and test the PPL with different-sized texts.
  • LSTM+RNN
  • check the LSTM-RNNLM code for how to initialize and update the learning rate. (hold)
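The PPL (perplexity) testing mentioned above can be sketched minimally. This toy version scores a token list under a unigram table; a real evaluation would use the model's own backoff and OOV handling (e.g. SRILM's `ngram -ppl`), so the `floor` value here is only a stand-in assumption for unseen words:

```python
import math

def perplexity(model, tokens, floor=1e-10):
    """Perplexity of a {word: prob} unigram model over a token list.

    PPL = exp(-(1/N) * sum(log p(w_i))): the inverse geometric mean of
    the per-token probabilities, so lower is better.
    """
    log_sum = sum(math.log(model.get(t, floor)) for t in tokens)
    return math.exp(-log_sum / len(tokens))

# Hypothetical model and test text.
model = {"a": 0.5, "b": 0.25, "c": 0.25}
ppl = perplexity(model, ["a", "b", "a", "c"])
```

On this example the per-token probabilities are 1/2, 1/4, 1/2, 1/4, giving a perplexity of (2*4*2*4)^(1/4) ≈ 2.83.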

Word2Vector

W2V based doc classification

  • data prepare.

Knowledge vector

  • run the big data
  • continue to test, including the paragraph vector and relations.
  • result: http://cslt.riit.tsinghua.edu.cn/cgi-bin/cvss/cvss_request.pl?account=lr&step=view_request&cvssid=326
  • prepare the paper.

Character to word

  • Character to word conversion(hold)

Translation

  • v5.0 demo released
  • cut down the dict and use the new segmentation tool

Sparse NN in NLP

  • write a technical report (Wednesday) and give a presentation.

QA

improve fuzzy match

  • add synonym similarity using the MERT-4 method (hold)

improve lucene search

  • add more features to improve search.
  • POS, NER, tf, idf
  • result: P@1: 0.68734335 --> 0.7763158 (1097 --> 1239 queries out of 1596); P@5: 0.80325814 --> 0.8383459 (1282 --> 1338 queries out of 1596) [http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/Huilan-learning-to-rank]
  • extract more lexical, syntactic, and semantic features to improve re-ranking performance.
  • using sentence vectors; this doesn't work.
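The reported P@1/P@5 numbers are precision-at-k: the fraction of queries with at least one correct answer in the top k results (e.g. 1239/1596 ≈ 0.776). A minimal sketch, with an assumed toy data layout since the report does not describe the actual one:

```python
def precision_at_k(results, k):
    """Fraction of queries with at least one relevant answer in the top k.

    `results` holds one entry per query: the 0/1 relevance labels of that
    query's ranked candidates (assumed toy format).
    """
    hits = sum(1 for labels in results if any(labels[:k]))
    return hits / len(results)

# Three toy queries: correct answer at rank 1, at rank 3, and not in the top 5.
queries = [[1, 0, 0], [0, 0, 1, 0, 0], [0, 0, 0, 0, 0]]
p_at_1 = precision_at_k(queries, 1)
p_at_5 = precision_at_k(queries, 5)
```

By construction P@5 >= P@1 for the same ranking, which matches the reported pairs (0.776 at 1 vs. 0.838 at 5 after re-ranking).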

context framework

  • code for organization
  • change to a knowledge graph, and learn the D2R tool and JENA

query normalization

  • use NER to normalize words
  • the new intern will install SEMPRE