2024-11-04

Revision as of 10:46, 4 November 2024 (Mon)

People | This Week | Next Week | Task Tracking (Deadline)
Dong Wang
  • AI Medical sector: 2 chapters done
Lantian Li
Ying Shi
  • Stop strategy for Cohort Overlap ASR
Zhenghai You
Junming Yuan
  • Paper reading
  • Preparing to reproduce Cocktail HuBERT (in progress)
Chen Chen
Xiaolou Li
  • Debugging the Chinese VTS (already in training)
  • Writing the VTS project report (main work)
Zehua Liu
  • In-context learning (when the sentence is very long, the context seems to fail); still investigating the reason (see the sketch after this list)
    • 45.30% (context < 30s) | 44.69% (context = 30s) | 46.02% (context = 120s)
  • Writing the VTS project document
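The numbers above suggest accuracy does not improve monotonically with context length. A minimal sketch of how such a context-length sweep could be run, assuming 16 kHz audio and a hypothetical decode_with_context() call standing in for the actual in-context ASR model:

    import numpy as np

    SAMPLE_RATE = 16000  # assumed sampling rate

    def truncate_context(context_wav: np.ndarray, max_seconds: float) -> np.ndarray:
        # Keep only the most recent max_seconds of context audio.
        return context_wav[-int(max_seconds * SAMPLE_RATE):]

    def sweep_context_budgets(context_wav, target_wav, decode_with_context,
                              budgets=(30, 120)):
        # decode_with_context(context, target) -> hypothesis text is a
        # hypothetical stand-in for the real in-context decoding call.
        return {s: decode_with_context(truncate_context(context_wav, s), target_wav)
                for s in budgets}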
Pengqi Li
Wan Lin
Tianhao Wang
  • Investigating some new approaches for target sound separation
  • Preparing the code for LoRA-tuned CLAP (a sketch follows below)
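For the LoRA-tuned CLAP item, a minimal sketch using Hugging Face transformers and peft; the checkpoint is a public CLAP release, but the target module names are an assumption and should be checked against model.named_modules():

    from transformers import ClapModel
    from peft import LoraConfig, get_peft_model

    model = ClapModel.from_pretrained("laion/clap-htsat-unfused")

    lora_cfg = LoraConfig(
        r=8,
        lora_alpha=16,
        lora_dropout=0.1,
        # Assumed attention projection names; verify via model.named_modules().
        target_modules=["query", "value"],
    )
    model = get_peft_model(model, lora_cfg)
    model.print_trainable_parameters()  # only the LoRA adapters train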
Xiaoxue Luo
  • Preparing the report
Zhenyu Zhou
Junhui Chen
  • NS with frame-level detection loss (see the sketch after this list)
    • Using silero-vad
    • Model is training; the EER seems to decrease faster.
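A minimal sketch of how silero-vad output can supply per-frame 0/1 labels for a frame-level detection loss; pairing the VAD labels with a BCE loss is an assumption about the setup, and the 10 ms hop is illustrative:

    import torch

    # Official torch.hub entry point for silero-vad.
    model, utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
    get_speech_timestamps, _, read_audio, _, _ = utils

    SR, HOP = 16000, 160  # 10 ms frames at 16 kHz (illustrative)

    wav = read_audio("utt.wav", sampling_rate=SR)
    segments = get_speech_timestamps(wav, model, sampling_rate=SR)

    # Expand sample-level speech segments into per-frame binary labels.
    num_frames = len(wav) // HOP
    labels = torch.zeros(num_frames)
    for seg in segments:
        labels[seg["start"] // HOP : seg["end"] // HOP] = 1.0

    # Frame-level detection loss between the model's frame logits and VAD labels.
    frame_logits = torch.randn(num_frames)  # stand-in for real frame scores
    loss = torch.nn.functional.binary_cross_entropy_with_logits(frame_logits, labels)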
Jiaying Wang
Yu Zhang
  • SocioDojo
    • With cash-ratio risk awareness and changed information sources, it seems to achieve decent risk control relative to the Nasdaq 100 index [1] (a toy sketch follows this list)
  • Some paper reading and a report at RoyalFlush; got some ideas (mainly about LLMs for time-series tasks)
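The cash-ratio mechanism is only named above; a toy sketch of one plausible reading, where the agent holds a cash buffer that scales with a risk signal (all names and thresholds here are hypothetical, not from the report):

    def target_cash_ratio(risk_score: float, floor: float = 0.2,
                          ceiling: float = 0.8) -> float:
        # Map a risk signal in [0, 1] to a cash ratio between floor and ceiling.
        risk_score = min(max(risk_score, 0.0), 1.0)
        return floor + (ceiling - floor) * risk_score

    def rebalance(portfolio_value: float, risk_score: float) -> dict:
        # Split the portfolio into cash and invested sleeves by the target ratio.
        cash = portfolio_value * target_cash_ratio(risk_score)
        return {"cash": cash, "invested": portfolio_value - cash}

    print(rebalance(100_000, risk_score=0.9))  # high risk -> 74% cash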
Wenqiang Du
Yang Wei
Lily
Turi
  • LoRA fine-tuning (results are not good)
  • Data cleaning
Yue Gu
  • Read several papers about speech tokenizers. I want to design an encoder that processes feature frames of different sizes and constructs several different codebooks, to extract personality from varying speech speed. It is still in progress (see the sketch after this list).
  • Paper writing
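A minimal sketch of the multi-frame-size, multi-codebook idea as described: pool the same feature sequence at several frame sizes and quantize each scale with its own codebook (dimensions and codebook sizes are illustrative, not design decisions from the report):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiScaleQuantizer(nn.Module):
        # One VQ codebook per temporal scale.
        def __init__(self, dim=256, codebook_size=512, scales=(1, 2, 4)):
            super().__init__()
            self.scales = scales
            self.codebooks = nn.ModuleList(
                nn.Embedding(codebook_size, dim) for _ in scales)

        def forward(self, feats):  # feats: (batch, time, dim)
            codes = []
            for scale, book in zip(self.scales, self.codebooks):
                # Average-pool frames down to this scale's frame size.
                pooled = F.avg_pool1d(feats.transpose(1, 2), scale).transpose(1, 2)
                # Nearest-neighbour lookup against this scale's codebook.
                dist = torch.cdist(pooled, book.weight.expand(pooled.size(0), -1, -1))
                codes.append(dist.argmin(dim=-1))  # (batch, time // scale)
            return codes

    tokens = MultiScaleQuantizer()(torch.randn(2, 100, 256))
    print([t.shape for t in tokens])  # [(2, 100), (2, 50), (2, 25)]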
Qi Qu
  • KWS:
    • Yi (Liangshan, Sichuan) dataset prepared for training; a test dataset still needs to be annotated.
    • Experiments on model quantization for NPU devices: i16 quantization strikes a balance between accuracy and efficiency (~2 ms per inference, compared to ~250 ms non-quantized); more calibration data is needed for further confirmation (a calibration sketch follows this list).
    • A full-featured demo (recording + feature extraction + model inference) for NPU devices is in development.
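On why more calibration data is needed for the i16 result: the quantization scale is estimated from value ranges observed on calibration samples, so too few samples can mis-estimate the range and inflate clipping error. A generic NumPy sketch of symmetric int16 quantization with max-abs calibration (not the NPU toolchain's actual flow):

    import numpy as np

    def calibrate_scale(samples: np.ndarray) -> float:
        # Symmetric int16 scale from the largest magnitude seen in calibration.
        return float(np.max(np.abs(samples))) / 32767.0

    def quantize_i16(x: np.ndarray, scale: float) -> np.ndarray:
        return np.clip(np.round(x / scale), -32768, 32767).astype(np.int16)

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        return q.astype(np.float32) * scale

    # Values outside the calibrated range get clipped, so a scale estimated
    # from too few samples shows up directly as larger reconstruction error.
    calib = np.random.randn(1000).astype(np.float32)
    scale = calibrate_scale(calib)
    x = np.random.randn(100).astype(np.float32)
    err = np.abs(dequantize(quantize_i16(x, scale), scale) - x).max()
    print(f"scale={scale:.3g}, max abs error={err:.3e}")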