Difference between revisions of "ASR Status Report 2017-2-20"

From cslt Wiki
{| class="wikitable"
!Date!!People!!Last Week!!This Week
|-
| rowspan="7"|2017.2.13
|Jingyi Lin
||
*
||
*
|-
|Yanqing Wang
||
*
||
*
|-
|Hang Luo
||
*
||
* Joint training of Chinese and Japanese
** To find out whether joint training works on this database
|-
|Ying Shi
||
* Joint training (speech & speaker) baseline; read some papers
||
* Visualization of joint training
** DNN with the same amount of labels
|-
|Yixiang Chen
||
*
||
* ASVspoofing
* Deep speaker embedding: two methods of improvement
|-
|Lantian Li
||
* Deep speaker
* ASVspoofing
* Write book
||
* Deep speaker embedding: 1) memory allocation, param W sharing; 2) better than cosine distance, while still worse than LDA and PLDA
* ASVspoofing: a text-dependent task
* Write book: chapters 1-3 complete, chapter 4 remaining
|-
|Zhiyuan Tang
||
* Babel data preparation, [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=tangzy&step=view_request&cvssid=595 baselines]
||
* Joint training of speech and language recognition, two languages for preliminary exploration
|}
-----------------
  
  

Revision as of 04:25, 28 February 2017

