2019-12-30
Latest revision as of 01:39, 30 December 2019 (Mon)

People | This Week | Next Week | Task Tracking (Deadline)
Dong Wang
  This Week:
  • Investigated DNF further by designing contrastive training; it does not seem to work well, and more constraints are needed to avoid divergence.
  • Investigated the true contribution of DNF; the between/within ratio appears to match EER best.
  • Investigated DNF-based PLDA, not successful yet; the performance is no better than cosine scoring.
  Next Week:
  • Keep on investigating the contribution of DNF.
  • Complete the draft of the DNF-based paper for TASLP.
Yunqi Cai
  This Week:
  • Trained and tested the DNF series models on i-vectors of SRE.
  Next Week:
  • Train and test the DNF series models on i-vectors of LRE.
Zhiyuan Tang
  This Week:
  • Denoising flow trained with only clean data.
  Next Week:
  • Denoising test.
Lantian Li
  This Week:
  • DNF-SRE (standard Kaldi IO).
  • Clean up some tools (merge into Jungle).
  Next Week:
  • DNF-SRE.
Ying Shi
Wenqiang Du
Haoran Sun
Yue Fan
Jiawen Kang
  This Week:
  • Merged the GhostVLAD code.
  • Further experiments on self-attention.
  • X-vector distribution figures.
  Next Week:
  • Prepare x-vectors for the discriminant flow.
  • Further experiments.
Ruiqi Liu
  This Week:
  • Finished the self-attention pooling experiments on CN-Celeb and VoxCeleb.
  Next Week:
  • Finish the remaining experiments.
Sitong Cheng
Zhixin Liu
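The "between/within ratio" that Dong Wang compares against EER, and the cosine baseline that DNF-based PLDA is measured against, can be sketched as below. This is a minimal illustration, not the group's actual code: the function names, the trace-form ratio, and the embedding shapes are assumptions.

```python
# Hypothetical sketch: between-class / within-class variance ratio of
# speaker embeddings (trace form), plus plain cosine trial scoring.
import numpy as np

def between_within_ratio(embeddings, labels):
    """Ratio of between-speaker to within-speaker variance.

    embeddings: (N, D) array of embeddings (e.g. x-vectors)
    labels:     (N,) speaker labels
    """
    embeddings = np.asarray(embeddings, dtype=float)
    labels = np.asarray(labels)
    global_mean = embeddings.mean(axis=0)
    between, within = 0.0, 0.0
    for spk in np.unique(labels):
        x = embeddings[labels == spk]
        mu = x.mean(axis=0)
        between += len(x) * np.sum((mu - global_mean) ** 2)
        within += np.sum((x - mu) ** 2)
    return between / within

def cosine_score(e1, e2):
    """Cosine similarity between two embeddings, used as a trial score."""
    e1, e2 = np.asarray(e1, float), np.asarray(e2, float)
    return float(e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2)))
```

A larger ratio means speaker classes are more separated relative to their internal spread, which is why it is a plausible proxy for EER.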
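The self-attention pooling in Ruiqi Liu's and Jiawen Kang's experiments replaces the plain frame average with an attention-weighted average. A minimal single-head numpy sketch, with assumed weight shapes (the actual experiments would use trained network parameters):

```python
# Hypothetical sketch of self-attention (attentive) pooling over
# frame-level features: score each frame, softmax the scores, and
# take the weighted mean as the utterance-level vector.
import numpy as np

def self_attention_pool(frames, W, v):
    """Pool (T, D) frame features into one (D,) utterance vector.

    frames: (T, D) frame-level features
    W:      (D, H) projection for the attention hidden layer
    v:      (H,)   scoring vector
    """
    h = np.tanh(frames @ W)        # (T, H) hidden representation
    scores = h @ v                 # (T,) one scalar score per frame
    scores -= scores.max()         # numerical stability for softmax
    alpha = np.exp(scores)
    alpha /= alpha.sum()           # softmax over frames
    return alpha @ frames          # weighted mean, shape (D,)
```

With zero weights the scores are uniform and the result reduces to the ordinary mean; training the weights lets the model down-weight uninformative frames.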