People |
This Week |
Next Week |
Task Tracking (DeadLine)
|
Dong Wang
|
- Uyghur database paper, draft done.
- ICME review, almost done.
- MicroMagnetic paper, awaiting final check.
|
|
|
Lantian Li
|
|
|
|
Ying Shi
|
|
|
|
Zhenghai You
|
|
|
|
Junming Yuan
|
- MT-pretraining: double-checked experiments + extended experiments [1]
- Identified the influence of the BN layer in the 10-shot/5-shot experiments.
- Extended with a new pretrained model (trained on clean data with BCE loss).
- Report performance differences when freezing different layers during fine-tuning (after group meeting).
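The layer-freezing comparison above comes down to toggling which parameter groups receive gradients. A minimal sketch, with PyTorch-style `requires_grad` bookkeeping modeled as a plain dict (the helper name and parameter names are illustrative, not the actual experiment code):

```python
def freeze_except(params, trainable_prefixes):
    """Leave only parameters whose name starts with a trainable prefix unfrozen.

    params: dict mapping parameter name -> {"requires_grad": bool},
    mimicking what PyTorch's named_parameters() would give you.
    """
    for name, flags in params.items():
        flags["requires_grad"] = any(
            name.startswith(pref) for pref in trainable_prefixes
        )
    return params

# e.g. fine-tune only the BN layers and the classifier head
params = {
    "conv1.weight": {"requires_grad": True},
    "bn1.weight":   {"requires_grad": True},
    "fc.weight":    {"requires_grad": True},
}
freeze_except(params, ["bn", "fc"])
```

Sweeping `trainable_prefixes` over layer groups then yields one fine-tuning run per configuration to compare.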
|
|
|
Chen Chen
|
- reproduce robustness experiments [2]
|
|
|
Xiaolou Li
|
- Robustness experiments on the AVSR system
|
|
|
Zehua Liu
|
|
|
|
Pengqi Li
|
- Attention-supervised learning with Liuhuan [3]
- Confirmed the training-step code
- Performance is still not better than without supervision
- Forming hypotheses and analyzing the gap
- Jinfu and Xueying summarized previous work
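One common way to supervise attention is to add an auxiliary penalty pulling the model's attention weights toward a supervision target; a minimal sketch under that assumption (the function, the `weight` coefficient, and the use of MSE are all illustrative, not the actual setup):

```python
def attention_supervised_loss(task_loss, attn, attn_target, weight=0.1):
    """Total loss = task loss + weighted MSE between the model's
    attention weights and a supervision target (e.g. oracle saliency).

    attn, attn_target: plain lists of floats of equal length;
    weight: hypothetical trade-off coefficient.
    """
    mse = sum((a - t) ** 2 for a, t in zip(attn, attn_target)) / len(attn)
    return task_loss + weight * mse
```

If the supervised variant does not beat the baseline, sweeping `weight` (including very small values) is a cheap first check on whether the auxiliary term is over-constraining the attention.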
|
|
|
Wan Lin
|
|
|
|
Tianhao Wang
|
- SE Adapter assumption-verification experiments [4]
- Assumption: entire fine-tuning = CNN refinement + SE adaptation
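The SE part of that decomposition is the standard squeeze-and-excitation mechanism: pool each channel, pass the pooled vector through a small bottleneck MLP, and rescale the channels by the resulting gates. A pure-Python sketch (weights and shapes are illustrative placeholders, not the actual adapter):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_recalibrate(channels, w1, w2):
    """Squeeze-and-excitation channel recalibration.

    channels: per-channel globally pooled activations (the "squeeze")
    w1, w2:   weight matrices of the excitation bottleneck MLP
    """
    # FC + ReLU (bottleneck)
    hidden = [max(0.0, sum(w * c for w, c in zip(row, channels))) for row in w1]
    # FC + sigmoid -> per-channel gates in (0, 1)
    gates = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w2]
    # channel-wise rescaling
    return [g * c for g, c in zip(gates, channels)]
```

Verifying the assumption then amounts to checking whether training only these gate weights (plus light CNN refinement) recovers the performance of full fine-tuning.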
|
|
|
Zhenyu Zhou
|
|
|
|
Junhui Chen
|
|
|
|
Jiaying Wang
|
- Experiments on cohort-PIT [5]
- Result comparison against other cohort choices on the train-100 training set
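Cohort-PIT builds on permutation invariant training: the loss is evaluated under every speaker permutation and the minimum is kept, so the model is not penalized for output-channel ordering. A minimal utterance-level reference sketch (the pairwise loss is supplied by the caller; names are illustrative):

```python
from itertools import permutations

def pit_loss(est, ref, pairwise_loss):
    """Permutation invariant training loss.

    est, ref:      equal-length lists of estimated / reference signals
    pairwise_loss: loss between one estimate and one reference
    Returns the minimum mean loss over all reference permutations.
    """
    best = None
    for perm in permutations(range(len(ref))):
        total = sum(pairwise_loss(est[i], ref[p]) for i, p in enumerate(perm))
        if best is None or total < best:
            best = total
    return best / len(ref)
```

Note the permutation search is factorial in the number of speakers, which is fine for the 2-3 speaker case but needs an assignment solver beyond that.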
|
|
|
Yu Zhang
|
- financial-pipeline
- Portfolio analysis code
- Write documentation
|
- Fix some bugs found during self-checking
- Walk through the entire process with Jun Wang
|
|
Wenqiang Du
|
- Project coordination and related file archiving
- Closing of the DiTing project
|
|
|
Yang Wei
|
|
|
|
Lily
|
- Interspeech2024
- Journal paper draft preparation [6] [https://z1et6d3xtb.feishu.cn/docx/HHjVdsUfeoPPtYx9rt6c5MRmnGd?from=from_copylink]
|
|
|