2024-10-14

Revision as of 10:39, 14 October 2024 (Mon)

People | This Week | Next Week | Task Tracking (Deadline)
Dong Wang
Lantian Li
Ying Shi
Zhenghai You
Junming Yuan
  • MT-Hubert experiments [1]:
    • Replaced the codebook + InfoNCE target with FC+softmax+CE / FC+sigmoid+BCE heads (see the sketch after this list).
      • Reducing the learning rate can make this work.
    • Verified feat-mask MT-Hubert with different learning rates.
    • Time-mask MT-Hubert verification (in progress).
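
A minimal PyTorch sketch of the two replacement heads named above, i.e. swapping the codebook + InfoNCE target for a plain FC layer trained with softmax+CE or sigmoid+BCE. The hidden size and pseudo-label vocabulary are assumed values, not the actual MT-Hubert configuration:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Sketch only: replace the codebook + InfoNCE target with a plain FC head.
    # hidden_dim and num_classes (pseudo-label vocabulary) are assumed values.
    hidden_dim, num_classes = 768, 504
    fc = nn.Linear(hidden_dim, num_classes)

    feats = torch.randn(8, 100, hidden_dim)            # (batch, frames, hidden)
    targets = torch.randint(0, num_classes, (8, 100))  # frame-level pseudo-labels
    logits = fc(feats)                                 # (batch, frames, classes)

    # Variant 1: FC + softmax + CE (one pseudo-label per frame)
    ce_loss = F.cross_entropy(logits.transpose(1, 2), targets)

    # Variant 2: FC + sigmoid + BCE (each class scored independently)
    onehot = F.one_hot(targets, num_classes).float()
    bce_loss = F.binary_cross_entropy_with_logits(logits, onehot)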
Chen Chen
Xiaolou Li
Zehua Liu
  • AV-HuBERT as the encoder performs very poorly (CER: 80%); the encoder-to-LLM wiring is sketched after this list.
    • Fine-tuning may improve it, but results are still poor.
  • Qwen-14B performs better (CER 47%) than Qwen-7B (CER 50%).
  • Finished the in-context learning code; training is running.
    • Results are expected soon.
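
A rough sketch of the encoder-to-LLM wiring referenced above: AV-HuBERT features are projected into the LLM embedding space and prepended to the text tokens. The module name and the 1024/5120 hidden sizes (AV-HuBERT large, Qwen-14B) are assumptions, not the project's actual code:

    import torch
    import torch.nn as nn

    # Sketch only: bridge AV-HuBERT visual features into an LLM decoder.
    class AvToLLMProjector(nn.Module):
        def __init__(self, enc_dim=1024, llm_dim=5120):
            super().__init__()
            self.proj = nn.Sequential(
                nn.Linear(enc_dim, llm_dim),
                nn.GELU(),
                nn.Linear(llm_dim, llm_dim),
            )

        def forward(self, av_feats):      # (batch, frames, enc_dim)
            return self.proj(av_feats)    # (batch, frames, llm_dim)

    # Usage: prepend the projected visual tokens to the text embeddings and
    # train the LLM with labels on the transcript tokens only (labels on the
    # visual prefix set to -100 so the loss ignores them).
    projector = AvToLLMProjector()
    visual_prefix = projector(torch.randn(2, 150, 1024))  # 2 clips, 150 frames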
Pengqi Li
  • Evaluated the reliability of TAO and LayerCAM (verification).
    • Exploring the consistency of TAO and LayerCAM results across different models and datasets (one candidate consistency metric is sketched below).
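
One illustrative way to score the consistency of two attribution maps computed for the same input and layer (TAO scores vs. LayerCAM scores) is Spearman rank correlation. The metric choice here is an assumption, not necessarily the one used in these experiments:

    import numpy as np
    from scipy.stats import spearmanr

    # Sketch only: rank correlation between two attribution maps; values near
    # 1.0 mean the methods agree on which units/regions matter most.
    def attribution_consistency(map_a: np.ndarray, map_b: np.ndarray) -> float:
        rho, _ = spearmanr(map_a.ravel(), map_b.ravel())
        return float(rho)

    # Placeholder maps; in practice these come from the two attribution methods.
    a = np.random.rand(14, 14)
    b = np.random.rand(14, 14)
    print(attribution_consistency(a, b))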
Wan Lin
Tianhao Wang
Xiaoxue Luo
  • Paper reading on sound separation.
  • AudioSep reproduction.
    • Training time is too long -> replaced the full dataset with a small one (training in progress; see the subset sketch below).
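
One standard way to realize the "small dataset" shortcut is to train on a fixed random subset of the full training set. A minimal sketch, where full_dataset stands in for the actual AudioSep training data:

    import torch
    from torch.utils.data import Subset, DataLoader

    # Sketch only: draw a fixed random subset so a training run finishes quickly.
    def make_small_loader(full_dataset, n_samples=2000, batch_size=16, seed=0):
        g = torch.Generator().manual_seed(seed)
        idx = torch.randperm(len(full_dataset), generator=g)[:n_samples]
        return DataLoader(Subset(full_dataset, idx.tolist()),
                          batch_size=batch_size, shuffle=True)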
Zhenyu Zhou
Junhui Chen
Jiaying Wang
Yu Zhang
  • SocioDojo Llama version (both constraints are sketched below):
    • News integration is adjusted to run once every 12 hours.
    • Wikipedia & Google search are disabled.
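
A sketch of how the two constraints above can be enforced: a 12-hour gate on news integration and a hard block on the banned tools. Function and tool names are illustrative assumptions, not SocioDojo's actual API:

    import time

    NEWS_INTERVAL_S = 12 * 60 * 60                  # pull news every 12 hours
    BANNED_TOOLS = {"wikipedia", "google_search"}   # illustrative tool names
    _last_news_pull = 0.0

    def maybe_pull_news(fetch_news):
        """Run fetch_news() at most once every 12 hours; otherwise return None."""
        global _last_news_pull
        now = time.time()
        if now - _last_news_pull >= NEWS_INTERVAL_S:
            _last_news_pull = now
            return fetch_news()
        return None

    def call_tool(name, tools, *args, **kwargs):
        """Dispatch a tool call, refusing the disabled ones."""
        if name in BANNED_TOOLS:
            raise PermissionError(f"tool '{name}' is disabled in this setup")
        return tools[name](*args, **kwargs)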
Wenqiang Du
  • Checked the data from past training models and updated the KWS model again (model testing).
    • Chinese, Cantonese, Minnan, Haining and Uyghur.
Yang Wei
  • Train the text-enroll KWS model with updated code (in progress).
Lily
Turi
  • Whisper model fine-tuning [2].
Yue Gu
  • Revise the TASLP paper.
  • Read several papers on accent and prosody.
Qi Qu
  • AED: classifiers retrained with a new method (suppression of negative stimuli); improvement confirmed. One possible form of the suppression objective is sketched below.
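
A sketch of one way "suppression on negative stimuli" can be realized as a retraining objective: the usual BCE term plus a penalty that drives event probabilities toward zero on negative (no-event) clips. The penalty form and the weight lam are assumptions, not the actual recipe:

    import torch
    import torch.nn.functional as F

    # Sketch only: BCE on all clips + extra suppression on negative stimuli.
    def aed_loss(logits, labels, is_negative, lam=0.5):
        bce = F.binary_cross_entropy_with_logits(logits, labels)
        probs = torch.sigmoid(logits)
        if is_negative.any():
            neg_penalty = (probs[is_negative] ** 2).mean()
        else:
            neg_penalty = logits.new_zeros(())
        return bce + lam * neg_penalty

    # Example: 8 clips x 10 event classes; the last 3 clips are negative stimuli.
    logits = torch.randn(8, 10)
    labels = torch.randint(0, 2, (8, 10)).float()
    is_negative = torch.tensor([0, 0, 0, 0, 0, 1, 1, 1], dtype=torch.bool)
    print(aed_loss(logits, labels, is_negative))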