Difference between revisions of "2013-04-19"
From cslt Wiki
Revision as of 04:08, 25 April 2013
Data sharing
- AM/lexicon/LM are shared.
- LM count files are still being transferred.
DNN progress
400 hour BN model
- Tencent baseline:
<pre>
700 hours of online data + 700 hours of 863 data, HLDA+MPE; 88k lexicon:
record1900: 8.4
2044:       22.4
online1:    35.6
online2:    29.6
map:        24.5
notepad:    16
general:    36
speedup:    26.8
</pre>
- bMMI
<pre>
exp/tri4b_mmi_b0.1/decode_tlm_biglm:
map:        %WER 27.54 [ 4029 / 14628, 63 ins, 533 del, 3433 sub ]
2044:       %WER 24.44 [ 5681 / 23241, 313 ins, 844 del, 4524 sub ]
notetp3:    %WER 19.81 [ 367 / 1853, 8 ins, 48 del, 311 sub ]
record1900: %WER 7.65 [ 909 / 11888, 17 ins, 377 del, 515 sub ]
general:    %WER 38.52 [ 14490 / 37619, 182 ins, 1314 del, 12994 sub ]
online1:    %WER 34.66 [ 9855 / 28433, 398 ins, 1895 del, 7562 sub ]
online2:    %WER 27.23 [ 16092 / 59101, 623 ins, 2954 del, 12515 sub ]
speedup:    %WER 27.88 [ 1465 / 5255, 32 ins, 332 del, 1101 sub ]
</pre>
- fMMI
<pre>
exp/tri4b_fmmi_indirect/decode_tlm_it7_biglm:
map:        %WER 27.69 [ 4050 / 14628, 61 ins, 538 del, 3451 sub ]
2044:       %WER 24.03 [ 5584 / 23241, 316 ins, 817 del, 4451 sub ]
notetp3:    %WER 21.75 [ 403 / 1853, 7 ins, 53 del, 343 sub ]
record1900: %WER 7.35 [ 874 / 11888, 31 ins, 347 del, 496 sub ]
general:    %WER 38.90 [ 14635 / 37619, 206 ins, 1331 del, 13098 sub ]
online1:    %WER 34.33 [ 9762 / 28433, 424 ins, 1888 del, 7450 sub ]
online2:    %WER 26.80 [ 15837 / 59101, 648 ins, 2902 del, 12287 sub ]
speedup:    %WER 26.81 [ 1409 / 5255, 35 ins, 284 del, 1090 sub ]
</pre>
- DNN-bn
<pre>
exp/tri4d_fmmi_indirect/decode_tlm_it4_biglm:
map:        %WER 23.79 [ 3480 / 14628, 58 ins, 465 del, 2957 sub ]
2044:       %WER 21.77 [ 5060 / 23241, 297 ins, 711 del, 4052 sub ]
notetp3:    %WER 15.81 [ 293 / 1853, 8 ins, 35 del, 250 sub ]
record1900: %WER 6.57 [ 781 / 11888, 18 ins, 325 del, 438 sub ]
general:    %WER 33.61 [ 12645 / 37619, 191 ins, 968 del, 11486 sub ]
online1:    %WER 31.44 [ 8940 / 28433, 311 ins, 1619 del, 7010 sub ]
online2:    %WER 24.10 [ 14245 / 59101, 523 ins, 2417 del, 11305 sub ]
speedup:    %WER 22.82 [ 1199 / 5255, 39 ins, 241 del, 919 sub ]
</pre>
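The Kaldi scoring lines above follow the pattern "%WER P [ E / N, I ins, D del, S sub ]", where E = I + D + S is the total edit count, N is the number of reference words, and P = 100 * E / N. A minimal Python sketch checking the DNN-bn "map" line (the helper name is ours, not Kaldi's):

<pre>
# Minimal sketch (Python): check a Kaldi scoring line of the form
# "%WER P [ E / N, I ins, D del, S sub ]", where WER = (I + D + S) / N.
def wer(ins, dels, subs, ref_words):
    return 100.0 * (ins + dels + subs) / ref_words

# DNN-bn "map" line above: 58 ins, 465 del, 2957 sub over 14628 reference words.
assert 58 + 465 + 2957 == 3480             # matches the bracketed error count
print("%.2f" % wer(58, 465, 2957, 14628))  # -> 23.79, matching the %WER value
</pre>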
Tencent test result
- AM: 70 hours of training data (2 days, 15 machines, 10 threads)
- LM: 88k LM
- Test case: general
- GMM-bMMI: 38.7%
- dnn-1: 28% (11-frame window, phone-based tree; see the splicing sketch below)
- dnn-2: 34% (9-frame window, state-based tree)
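An "11-frame window" means each DNN input is the current acoustic frame spliced together with 5 frames of left and right context. A minimal NumPy sketch of such splicing, with edges padded by repeating the first/last frame (the function name and padding choice are ours, not from the report):

<pre>
# Minimal sketch (Python/NumPy, assumed details): splice each frame with
# +/-context neighbors to form the DNN input vector; edge frames are padded
# by repetition.
import numpy as np

def splice(feats, context=5):
    """feats: (T, D) frames -> (T, (2*context+1)*D) spliced DNN input."""
    T = feats.shape[0]
    padded = np.vstack([np.repeat(feats[:1], context, axis=0),   # pad start
                        feats,
                        np.repeat(feats[-1:], context, axis=0)])  # pad end
    return np.hstack([padded[i:i + T] for i in range(2 * context + 1)])

frames = np.random.randn(100, 13)       # e.g. 100 frames of 13-dim features
print(splice(frames, context=5).shape)  # -> (100, 143): an 11-frame window
</pre>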
GPU & CPU merge
- Investigate the possibility of merging the GPU and CPU code, and try to find an easier way. (1 week)
L1 sparse initial training
- Starting to investigate; a sketch of one possible reading of the idea follows.
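The report gives no details here, so as a loudly labeled assumption we read this as L1-regularized training that drives weights to exact zeros. A minimal NumPy sketch of one SGD step followed by the L1 proximal operator (soft-thresholding); all names and hyperparameters are ours, not the group's recipe:

<pre>
# Minimal sketch (Python/NumPy, ASSUMED interpretation): SGD followed by an
# L1 proximal (soft-thresholding) step, which shrinks weights toward zero and
# zeroes small ones exactly -- one standard way to get a sparse DNN layer.
import numpy as np

def soft_threshold(W, thresh):
    """L1 proximal operator: shrink |w| by thresh, zeroing |w| <= thresh."""
    return np.sign(W) * np.maximum(np.abs(W) - thresh, 0.0)

def sgd_l1_step(W, grad, lr=0.01, l1=0.1):
    """One gradient step, then the proximal update (threshold = lr * l1)."""
    return soft_threshold(W - lr * grad, lr * l1)

W = 0.01 * np.random.randn(256, 256)           # toy weight matrix
W = sgd_l1_step(W, np.random.randn(*W.shape))  # dummy gradient
print("zeros after one step: %.1f%%" % (100.0 * np.mean(W == 0)))
</pre>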
Kaldi/HTK merge
- HTK2Kaldi: the tool shipped with Kaldi does not work.
- Kaldi2HTK: implementation done; testing status to be confirmed.
Embedded progress
- Large performance (speed) degradation on the embedded platform (roughly 1/60).
- Planning for a sparse DNN; a pruning sketch follows this list.
- QA LM training still fails; Mengyuan needs to do more work on this.
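The report does not say how the sparse DNN would be obtained; one common route (our assumption, not the group's stated plan) is magnitude pruning of a trained network so the embedded runtime can skip zeroed weights:

<pre>
# Minimal sketch (Python/NumPy, ASSUMED technique): magnitude pruning for a
# sparse DNN -- zero the smallest fraction of weights in a layer so an
# embedded runtime can skip those multiply-adds.
import numpy as np

def prune_by_magnitude(W, sparsity=0.8):
    """Zero the smallest `sparsity` fraction of |W|; returns a pruned copy."""
    thresh = np.percentile(np.abs(W), 100.0 * sparsity)
    return np.where(np.abs(W) >= thresh, W, 0.0)

W = np.random.randn(1024, 512)                 # toy layer weights
W_sparse = prune_by_magnitude(W, sparsity=0.8)
print("nonzero fraction: %.2f" % (np.count_nonzero(W_sparse) / W.size))  # ~0.20
</pre>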