People |
Last Week |
This Week |
Task Tracking (Deadline)
|
Yibo Liu
|
|
|
|
Xiuqi Jiang
|
- Made further adjustments to the code, deleting unnecessary files and uploading generated samples to the dir 'predict/results'.
- Trained and compared more models.
- Collated notes on the ML book and uploaded them to the wiki.
|
- Try to train a model that generates variable-length texts.
|
|
Jiayao Wu
|
- Read some papers about KWS and will write the chapter before this weekend.
- Went through the TDNN-F structure and counted the rows of weights to prune; will do value pruning first (a magnitude-pruning sketch follows this row).
|
- Finish the assigned chapter of the Speech book.
- Finish the value pruning experiment.
|
|
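A minimal sketch of magnitude-based ("value") pruning, assuming per-matrix pruning at a chosen ratio; the ratio, the per-matrix granularity, and the NumPy workflow are illustrative assumptions, not the exact recipe planned for the TDNN-F experiment.

```python
import numpy as np

def value_prune(weights, prune_ratio=0.5):
    """Zero out the smallest-magnitude entries of a weight matrix.

    prune_ratio is the fraction of entries to remove; the function
    returns the pruned matrix and the boolean keep-mask.
    """
    flat = np.abs(weights).ravel()
    k = int(len(flat) * prune_ratio)
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask, mask

# toy usage: prune half of a random 4x4 matrix
w = np.random.randn(4, 4)
pruned, mask = value_prune(w, prune_ratio=0.5)
print("kept", mask.sum(), "of", mask.size, "weights")
```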
Zhaodi Qi
|
- Read some papers about DID.
|
- Continue working on the language recognition task.
|
|
Jiawei Yu
|
|
|
|
Yunqi Cai
|
- Ran through the Thchs30 data and thought about the research plan.
|
- Get to know every step of the ASR pipeline and investigate the RNN LM.
|
|
Dan He
|
- Experimentally compared the size of a fully connected layer before and after TT decomposition, and compared the training loss, validation loss, and test accuracy between the two cases (a parameter-count sketch follows this row).
|
- Further compare the inference speed and the accuracy after retraining.
- Try to decompose more of the fully connected layers.
|
|
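A minimal sketch of the parameter-count comparison behind TT (tensor-train) decomposition of a fully connected layer; the 1024x1024 layer size, the mode factorization, and the rank of 8 are illustrative assumptions, not the settings used in the experiment.

```python
def tt_fc_params(in_modes, out_modes, ranks):
    """Parameter count of a TT-factorized fully connected layer.

    in_modes/out_modes factorize the input/output dimensions;
    ranks has length len(in_modes) + 1 with ranks[0] = ranks[-1] = 1.
    """
    assert len(in_modes) == len(out_modes) == len(ranks) - 1
    return sum(ranks[k] * in_modes[k] * out_modes[k] * ranks[k + 1]
               for k in range(len(in_modes)))

# example: a 1024x1024 dense layer vs a 4-core TT layer with rank 8
in_modes, out_modes = [4, 8, 8, 4], [4, 8, 8, 4]
ranks = [1, 8, 8, 8, 1]
dense = 1024 * 1024
tt = tt_fc_params(in_modes, out_modes, ranks)
print(f"dense: {dense} params, TT: {tt} params, ratio: {dense / tt:.1f}x")
```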
Yang Zhang
|
- Released and published our WeChat app (星云听).
- Cleaned up my code.
- Wrote documentation for this project (not finished yet).
|
- Continue writing the documentation.
|
|