Xiangyu Zeng 2015-10-19
From cslt Wiki
last week:
1. Found a mistake in the adam-max sequence training: I should have used the babble training set produced by noisy training (adding babble noise into the clean set) instead of using the test_babble set directly, so I began rerunning the experiment (see the noise-mixing sketch after this list).
2. Adjusted the code of "adjust-lr of adam-max" so that the learning rate jumps back to 0.008 when a new dataset arrives (see the schedule sketch after this list).
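
For item 1, a minimal sketch of how babble noise might be mixed into a clean utterance at a chosen SNR. The function name, the NumPy-based signal representation, and the SNR handling are illustrative assumptions, not the actual recipe used in these experiments.

 import numpy as np

 def add_babble_noise(clean, babble, snr_db):
     """Mix babble noise into a clean signal at a target SNR (dB)."""
     # Repeat or trim the noise so it covers the whole utterance.
     if len(babble) < len(clean):
         reps = int(np.ceil(len(clean) / float(len(babble))))
         babble = np.tile(babble, reps)
     babble = babble[:len(clean)]
     # Scale the noise so the mixture reaches the requested SNR.
     clean_power = np.mean(clean ** 2)
     noise_power = np.mean(babble ** 2)
     scale = np.sqrt(clean_power / (noise_power * 10.0 ** (snr_db / 10.0)))
     return clean + scale * babble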
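
For item 2, a minimal sketch of the reset behaviour described above. Only the 0.008 initial learning rate comes from the report; the class name, the within-dataset decay rule, and the halving factor are assumptions for illustration.

 class ResetOnNewDatasetLR(object):
     """Learning-rate schedule that decays during training and jumps
     back to the initial value whenever a new dataset is introduced."""

     def __init__(self, initial_lr=0.008, decay=0.5):
         self.initial_lr = initial_lr
         self.decay = decay
         self.lr = initial_lr

     def new_dataset(self):
         # Reset to the initial rate when a new dataset comes.
         self.lr = self.initial_lr

     def epoch_done(self):
         # Illustrative within-dataset decay; the real rule may differ.
         self.lr *= self.decay
         return self.lr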
this week:
1. Continue the experiments that need to be rerun.
2. Find the bugs in the multi-task training with speech rate.