Lantian Li 2015-11-09
Weekly Summary
1. Continue work on my deep speaker embedding tasks:
1) Knowledge transfer for i-vectors (working with Zhiyuan Zhang) -- currently on hold.
2) Metric learning using a linear transform.
Experimental results are shown in CVSS 485.
The working hypothesis is that when each speaker has many utterances, LDA/PLDA performs better than MMML, whereas with fewer utterances per speaker MMML is better. More experiments are needed to verify this hypothesis (a minimal sketch of the setup is given after this list).
3. Write a paper on 'Discriminative Score Feature Selection for Speaker Verification'.
4. Read Interspeech papers on speaker recognition.
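To make the metric-learning item in task 1 concrete, below is a minimal toy sketch of metric learning with a linear transform. It is an illustration only, not the code used in the experiments: it assumes numpy, trains a projection W with a triplet-style max-margin hinge loss on squared Euclidean distances between i-vectors, and scores trial pairs by cosine similarity after projection. The function and parameter names (train_linear_metric, dim_out, margin, etc.) are hypothetical, and the exact MMML formulation reported in CVSS 485 may differ.

 import numpy as np

 def train_linear_metric(X, labels, dim_out=150, margin=0.5, lr=1e-3,
                         n_steps=5000, seed=0):
     """Toy max-margin metric learning: learn a linear map W so that,
     after projection, same-speaker i-vectors are closer (squared
     Euclidean distance) than different-speaker ones by `margin`.
     X: (n_utts, dim_in) i-vectors; labels: one speaker id per utterance."""
     rng = np.random.default_rng(seed)
     n, dim_in = X.shape
     W = 0.01 * rng.standard_normal((dim_out, dim_in))
     labels = np.asarray(labels)
     for _ in range(n_steps):
         # sample an anchor, a same-speaker positive, and a different-speaker negative
         a = rng.integers(n)
         pos = np.flatnonzero((labels == labels[a]) & (np.arange(n) != a))
         neg = np.flatnonzero(labels != labels[a])
         if pos.size == 0 or neg.size == 0:
             continue
         dp = X[a] - X[rng.choice(pos)]
         dn = X[a] - X[rng.choice(neg)]
         loss = margin + np.sum((W @ dp) ** 2) - np.sum((W @ dn) ** 2)
         if loss > 0:  # hinge loss: update W only on margin violations
             W -= lr * 2 * W @ (np.outer(dp, dp) - np.outer(dn, dn))
     return W

 def score(W, x, y):
     """Verification score: cosine similarity of the projected i-vectors."""
     u, v = W @ x, W @ y
     return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

In this sketch the linear transform plays the same role as the LDA projection in the LDA/PLDA baseline, which is what makes the comparison in the hypothesis above meaningful.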
Next Week
1. Continue task 1.
2. Complete task 3.