People | Last Week | This Week | Task Tracking (DeadLine)

Yibo Liu

Last Week:
- Started reconstructing the vivi code with a better structure.

This Week:
- Build proper models for planning and post-processing, which are especially needed.

Xiuqi Jiang

Last Week:
- Designed a better code structure for further experiments.
- Improved vivi2.0 and made some adjustments to the .sh script.

This Week:
- Build the code under the new structure.

Jiayao Wu

Last Week:
- Ran experiments on node_sparseness and updated the results on cvss.
- Re-labeled some data.

This Week:
- Continue the pruning experiments.
- Get familiar with PyTorch.

Zhaodi Qi

Last Week:
- Reduced the LID model and tested the results.
- Tested test sets from different channels.
- Wrote a model based on ASR (TDNN-F) + LID (TDNN), similar to PTN, to address channel inconsistency.

This Week:
- Complete the ASR-LID model.

Jiawei Yu

Last Week:
- Wrote a TensorFlow learning document (not yet completed).
- Read some papers about attention and found some attention code on GitHub.

This Week:
- Try to run the attention code and figure out how it works.

Yunqi Cai

Last Week:
- Figured out how the BERT model creates the pretraining data and does the pretraining.
- Tried to use BERT for text sentence error correction.
- Re-labeled some ASR data.
- Tested the vivi2.0 model.

This Week:
- Construct a text sentence error correction model.

Dan He

Last Week:
- Ran experiments comparing test time and updated the results on cvss.
- Read the experiment code carefully.

This Week:
- Directly decompose the trained parameters and put them back into the network for retraining.

Yang Zhang

Last Week:
- Modified the nginx configuration again and changed the server networking structure.
- Started learning VAE and ran a test on the wolf server.

This Week:
- Continue to learn and test VAE.

Wenwei Dong