<?xml version="1.0"?>
<?xml-stylesheet type="text/css" href="http://cslt.org/mediawiki/skins/common/feed.css?303"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="zh-cn">
		<id>http://cslt.org/mediawiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Zhangzy</id>
		<title>cslt Wiki - User contributions [zh-cn]</title>
		<link rel="self" type="application/atom+xml" href="http://cslt.org/mediawiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Zhangzy"/>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/%E7%89%B9%E6%AE%8A:%E7%94%A8%E6%88%B7%E8%B4%A1%E7%8C%AE/Zhangzy"/>
		<updated>2026-04-14T11:06:10Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.23.3</generator>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/Sinovoice-2016-2-2</id>
		<title>Sinovoice-2016-2-2</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/Sinovoice-2016-2-2"/>
				<updated>2021-09-09T05:42:43Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：保护“Sinovoice-2016-2-2”（[编辑=CSLT users]（无限期）[移动=CSLT users]（无限期）[Read=CSLT users]（无限期））&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''警告：'''“Sinovoice-2016-2-2”指向这里，但您没有足够的权限来访问它。&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2019-02-25</id>
		<title>FreeNeb status Report 2019-02-25</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2019-02-25"/>
				<updated>2019-02-25T01:30:12Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''警告：'''“FreeNeb status Report 2019-02-25”指向这里，但您没有足够的权限来访问它。&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/TTS-project-synthesis</id>
		<title>TTS-project-synthesis</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/TTS-project-synthesis"/>
				<updated>2019-02-18T12:12:58Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy:&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Project name=&lt;br /&gt;
Text To Speech&lt;br /&gt;
&lt;br /&gt;
=Project members=&lt;br /&gt;
Dong Wang, Zhiyong Zhang&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
We are interested in flexible synthesis based on neural models. The basic idea is that since a neural model can be&lt;br /&gt;
trained under multiple conditions, we can treat speaker and emotion as conditional factors. We use a speaker vector&lt;br /&gt;
and an emotion vector as additional inputs to the model, and then train a single model that can produce the voices of&lt;br /&gt;
different speakers with different emotions.&lt;br /&gt;
&lt;br /&gt;
In the following experiments, we use a simple DNN architecture for training. The vocoder is WORLD.&lt;br /&gt;
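&lt;br /&gt;
The following is a minimal sketch (PyTorch assumed; not the actual training code) of this conditioning scheme: the speaker and emotion vectors are simply concatenated with the frame-level linguistic features. The dimensions are assumptions (the 40-dim speaker vector matches the "dvector40" naming in the sample URLs; the other sizes are placeholders).&lt;br /&gt;
&lt;pre&gt;
import torch
import torch.nn as nn

class ConditionedDNN(nn.Module):
    """DNN acoustic model conditioned on speaker and emotion vectors."""
    def __init__(self, ling_dim=425, spk_dim=40, emo_dim=4,
                 hid_dim=1024, out_dim=187):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ling_dim + spk_dim + emo_dim, hid_dim), nn.Tanh(),
            nn.Linear(hid_dim, hid_dim), nn.Tanh(),
            nn.Linear(hid_dim, out_dim),  # acoustic features (mgc/lf0/bap + deltas)
        )

    def forward(self, ling, spk_vec, emo_vec):
        # ling: (T, ling_dim); the utterance-level speaker/emotion vectors
        # are broadcast to every frame before concatenation.
        T = ling.size(0)
        cond = torch.cat([ling,
                          spk_vec.expand(T, -1),
                          emo_vec.expand(T, -1)], dim=-1)
        return self.net(cond)
&lt;/pre&gt;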
&lt;br /&gt;
=Experiments=&lt;br /&gt;
&lt;br /&gt;
==Mono-speaker==&lt;br /&gt;
&lt;br /&gt;
The first step is mono-speaker systems. We trained three systems: a female, a male and a child, each with a &lt;br /&gt;
single network. The performance is as follows.&lt;br /&gt;
&lt;br /&gt;
Synthesis text: 好雨知时节，当春乃发声，随风潜入夜，润物细无声&lt;br /&gt;
&lt;br /&gt;
*Female[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/huilian/female01/female01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
*Male[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/huilian/male01/male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
 &lt;br /&gt;
*Child[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/huilian/child01.neutral/child01-neutral_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
==Multi-speaker==&lt;br /&gt;
&lt;br /&gt;
Now we combine all the data from male, female and child to train a single model.&lt;br /&gt;
&lt;br /&gt;
===Without Speaker-vector===&lt;br /&gt;
&lt;br /&gt;
In the first experiment, the data are blindly combined, without any speaker indicator.&lt;br /&gt;
&lt;br /&gt;
*Female &amp;amp; Male[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/female01-male01/female01-male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
*Female &amp;amp; Child[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/female01-child01.neutral/female01-child.neutral_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
*Male &amp;amp; Child[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/male01-child01.neutral/male01_child01.neutral_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===With Speaker-vector===&lt;br /&gt;
&lt;br /&gt;
Now we use the speaker vector as an indicator of speaker traits.&lt;br /&gt;
&lt;br /&gt;
*Specific person&lt;br /&gt;
&lt;br /&gt;
First, we use the speaker vector to specify a particular person:&lt;br /&gt;
&lt;br /&gt;
:*Female[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/all.dvector40/female01.dvec40_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
:*Male[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/all.dvector40/male01.dvec40_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
*Interpolation between different persons&lt;br /&gt;
&lt;br /&gt;
Now let's produce interpolated voices by mixing two speakers, female and male, at different ratios (a minimal sketch follows the list below).&lt;br /&gt;
&lt;br /&gt;
:* Female &amp;amp; Male with different ratios&lt;br /&gt;
&lt;br /&gt;
::*(1) 0.0:1.0[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_0_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(2) 0.1:0.9[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_1_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(3) 0.2:0.8[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_2_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(4) 0.3:0.7[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_3_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(5) 0.4:0.6[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_4_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(6) 0.5:0.5[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_5_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(7) 0.6:0.4[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_6_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(8) 0.7:0.3[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_7_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(9) 0.8:0.2[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_8_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(10) 0.9:0.1[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_9_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(11) 1.0:0.0[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_10_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
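The sketch below (NumPy; illustrative only) shows the interpolation assumed above: synthesis uses a convex combination of the two speakers' d-vectors. The file names are hypothetical, and the same blending applies unchanged to the emotion vectors in the next section.&lt;br /&gt;
&lt;pre&gt;
import numpy as np

def interpolate_dvector(dvec_a, dvec_b, ratio_a):
    """Blend two speaker d-vectors; ratio_a=1.0 yields pure speaker A."""
    return ratio_a * dvec_a + (1.0 - ratio_a) * dvec_b

# Hypothetical file names for the stored 40-dim d-vectors.
female = np.load("female01.dvec40.npy")
male = np.load("male01.dvec40.npy")

# The 11 ratios 0.0:1.0, 0.1:0.9, ..., 1.0:0.0 used for the samples above.
for ratio in np.linspace(0.0, 1.0, 11):
    mixed = interpolate_dvector(female, male, ratio)
    # feed `mixed` to the model in place of a single speaker's d-vector
&lt;/pre&gt;
&lt;br /&gt;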
==Mono-speaker Multi-Emotion==&lt;br /&gt;
&lt;br /&gt;
An emotion vector specifies which emotion to use, and emotions can also be interpolated.&lt;br /&gt;
&lt;br /&gt;
*Specific emotion&lt;br /&gt;
:* Neutral emotion [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/x-neutral_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* Happy emotion [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/x-happy_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* Sorrow emotion [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/x-sorrow_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* Angry emotion [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/x-angry_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
*Emotion interpolation&lt;br /&gt;
:* Angry &amp;amp; neutral with different ratios&lt;br /&gt;
::*(1) 0.0:1.0 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_0_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(2) 0.1:0.9 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_1_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(3) 0.2:0.8 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_2_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(4) 0.3:0.7 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_3_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(5) 0.4:0.6 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_4_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(6) 0.5:0.5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(7) 0.6:0.4 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_6_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(8) 0.7:0.3 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_7_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(9) 0.8:0.2 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_8_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(10) 0.9:0.1 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_9_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(11) 1.0:0.0 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/x-angry_1_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
==Multi-speaker Multi-emotion==&lt;br /&gt;
&lt;br /&gt;
Finally, all the data (different speakers and different emotions) are combined. Note that only the child voice&lt;br /&gt;
has training data with different emotions. We hope the emotion factor can be learned so that we can generate&lt;br /&gt;
emotional voices for the other speakers, although they have no emotional training data.&lt;br /&gt;
&lt;br /&gt;
*Female&lt;br /&gt;
:* angry [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/female01_angry_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* happy [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/female01_happy_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* neutral [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/female01_neutral_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* sorrow [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/female01_sorrow_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
*Male&lt;br /&gt;
:* angry [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/male01_angry_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* happy [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/male01_happy_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* neutral [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/male01_neutral_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* sorrow [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/male01_sorrow_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
=MLPG Comparison=&lt;br /&gt;
We compare different implementations of MLPG as in Merlin (mlpg.py and fast_mlpg.py).&lt;br /&gt;
There are three implementations (a simplified sketch of the underlying computation follows this list):&lt;br /&gt;
:*mlpg: as mlpg.py, computing all dimensions of the delta features (lf0/bap/mgc, with dims 1/5/60 respectively)&lt;br /&gt;
:*mlpg-lossy: a lossy variant of mlpg.py that considers only the first dimension of the global covariance.&lt;br /&gt;
:*fast-mlpg: as fast_mlpg.py in Merlin.&lt;br /&gt;
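&lt;br /&gt;
As a rough sketch of what MLPG computes per feature dimension (dense NumPy solve with simplified edge handling at the first and last frames; the fast variant exploits the band structure of the same normal equations for speed):&lt;br /&gt;
&lt;pre&gt;
import numpy as np

def mlpg_1dim(mu, var, T):
    """Solve (W^T S^-1 W) c = W^T S^-1 mu for one feature dimension.

    mu, var: (3T,) stacked [static; delta; delta-delta] means and
    (diagonal) variances predicted by the network for T frames.
    """
    W = np.zeros((3 * T, T))
    for t in range(T):
        W[t, t] = 1.0                        # static window
        W[T + t, max(t - 1, 0)] += -0.5      # delta window (-0.5, 0, 0.5)
        W[T + t, min(t + 1, T - 1)] += 0.5
        W[2 * T + t, max(t - 1, 0)] += 1.0   # delta-delta window (1, -2, 1)
        W[2 * T + t, t] += -2.0
        W[2 * T + t, min(t + 1, T - 1)] += 1.0
    P = W.T / var                            # W^T S^-1 (divides columns by var)
    return np.linalg.solve(P @ W, P @ mu)    # smoothed static trajectory
&lt;/pre&gt;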
&lt;br /&gt;
&lt;br /&gt;
*Computation time (estimated)&lt;br /&gt;
-----------------------------------------------------------------&lt;br /&gt;
    alg.    |   lf0 (dim=1)   |   bap (dim=5)   |   mgc (dim=60)&lt;br /&gt;
 mlpg-lossy |     100000      |     130000      |     160000&lt;br /&gt;
    mlpg    |     130000      |     500000      |    6200000&lt;br /&gt;
 fast-mlpg  |      60000      |     300000      |    3580000&lt;br /&gt;
  avg-rate  |    1:1.3:0.6    |     1:4:2+      |    1:40:20+&lt;br /&gt;
-----------------------------------------------------------------&lt;br /&gt;
(avg-rate is the mlpg-lossy : mlpg : fast-mlpg ratio per feature stream.)&lt;br /&gt;
&lt;br /&gt;
* Synthesized waves&lt;br /&gt;
:*text&lt;br /&gt;
::*5='好雨知时节，当春乃发声，随风潜入夜，润物细无声。'&lt;br /&gt;
::*13='大熊猫最大的愿望就是拍一张自己的照片。'&lt;br /&gt;
&lt;br /&gt;
* no-mlpg&lt;br /&gt;
:*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg-no_5.wav]&lt;br /&gt;
:*13 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg-no_13.wav]&lt;br /&gt;
&lt;br /&gt;
* mlpg-lossy&lt;br /&gt;
:*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg01_5.wav]&lt;br /&gt;
:*13 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg01_13.wav]&lt;br /&gt;
&lt;br /&gt;
* mlpg&lt;br /&gt;
:*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg60_5.wav]&lt;br /&gt;
:*13 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg60_13.wav]&lt;br /&gt;
&lt;br /&gt;
* fast-mlpg&lt;br /&gt;
:*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/fast-mlpg_5.wav]&lt;br /&gt;
:*13 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/fast-mlpg_13.wav]&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/TTS-project-synthesis</id>
		<title>TTS-project-synthesis</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/TTS-project-synthesis"/>
				<updated>2019-02-18T12:11:55Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Project name=&lt;br /&gt;
Text To Speech&lt;br /&gt;
&lt;br /&gt;
=Project members=&lt;br /&gt;
Dong Wang, Zhiyong Zhang&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
We are interested in a flexible syntehsis based on neural model . The basic idea is that since the neural model can be &lt;br /&gt;
traind with multiple conditions, we can treat speaker and emotion as the conditional factors. We use the speaker vector&lt;br /&gt;
and emotion vector as addiiontal input to the model, and then train a single model that can produce sound of different&lt;br /&gt;
speakers and different emotions. &lt;br /&gt;
&lt;br /&gt;
In the following experiments, we use a simple DNN architecture to implement the training. The vocoder is WORD. &lt;br /&gt;
&lt;br /&gt;
=Experiments=&lt;br /&gt;
&lt;br /&gt;
==Mono-speaker==&lt;br /&gt;
&lt;br /&gt;
The first step is mono-speaker systems. We trained three systems: a female, a male and a child, each with a &lt;br /&gt;
single network. The performance is like the ofllowing.&lt;br /&gt;
&lt;br /&gt;
Synthesis text:好雨知时节，当春乃发声，随风潜入夜，润物细无声&lt;br /&gt;
&lt;br /&gt;
*Female[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/huilian/female01/female01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
*Male[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/huilian/male01/male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
 &lt;br /&gt;
*Child[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/huilian/child01.neutral/child01-neutral_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
==Multi-speaker==&lt;br /&gt;
&lt;br /&gt;
Now we combine all the data from male, female and child to train a single model.&lt;br /&gt;
&lt;br /&gt;
===Without Speaker-vector===&lt;br /&gt;
&lt;br /&gt;
The first experiment is that the data are blindly combined, without any indicator of speakers. &lt;br /&gt;
&lt;br /&gt;
*Female &amp;amp; Male[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/female01-male01/female01-male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
*Female &amp;amp; Child[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/female01-child01.neutral/female01-child.neutral_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
*Male &amp;amp; Child[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/male01-child01.neutral/male01_child01.neutral_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===With Speaker-vector===&lt;br /&gt;
&lt;br /&gt;
Now we use speaker vector as an indicator of the speaker trait. &lt;br /&gt;
&lt;br /&gt;
*Specific person&lt;br /&gt;
&lt;br /&gt;
Firstly, use the speaker fector to specifiy a particular person:&lt;br /&gt;
&lt;br /&gt;
:*Female[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/all.dvector40/female01.dvec40_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
:*Male[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/all.dvector40/male01.dvec40_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
*Interpolate of different person&lt;br /&gt;
&lt;br /&gt;
Now let's produce interpolated voice by interpolating two speakers: female and amle.&lt;br /&gt;
&lt;br /&gt;
:* Female &amp;amp; Male with different ratio&lt;br /&gt;
&lt;br /&gt;
::*(1) 0.0:1.0[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_0_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(2) 0.1:0.9[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_1_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(3) 0.2:0.8[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_2_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(4) 0.3:0.7[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_3_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(5) 0.4:0.6[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_4_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(6) 0.5:0.5[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_5_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(7) 0.6:0.4[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_6_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(8) 0.7:0.3[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_7_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(9) 0.8:0.2[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_8_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(10) 0.9:0.1[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_9_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(11) 1.0:0.0[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_10_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
==Mono-speaker Multi-Emotion==&lt;br /&gt;
&lt;br /&gt;
Using emotion vectors can specify which emotio to use, and the emotion can be also interpolated. &lt;br /&gt;
&lt;br /&gt;
*Specific emotion&lt;br /&gt;
:* Neutral emotion [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/x-neutral_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* Happy emotion [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/x-happy_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* Sorrow emotion [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/x-sorrow_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* Angry emotion [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/x-angry_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
*Interpolation emotion&lt;br /&gt;
:* Angry &amp;amp; neutral with different ratio&lt;br /&gt;
::*(1) 0.0:1.0 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_0_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(2) 0.1:0.9 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_1_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(3) 0.2:0.8 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_2_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(4) 0.3:0.7 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_3_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(5) 0.4:0.6 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_4_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(6) 0.5:0.5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(7) 0.6:0.4 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_6_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(8) 0.7:0.3 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_7_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(9) 0.8:0.2 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_8_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(10) 0.9:0.1 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_9_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(11) 1.0:0.0 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/x-angry_1_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
==Multi-speaker Multi-emotion==&lt;br /&gt;
&lt;br /&gt;
Finally, all the data (different speakers and different emotions) are combined together. Note that only the child voice&lt;br /&gt;
has different emotions of training data. We hope that this emotion can be learned so that we can generate voice of &lt;br /&gt;
other speakers with emotion, although they do not have any training data with emtoions. &lt;br /&gt;
&lt;br /&gt;
*Female&lt;br /&gt;
:* angry [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/female01_angry_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* happy [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/female01_happy_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* neutral [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/female01_neutral_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* sorrow [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/female01_sorrow_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
*Male&lt;br /&gt;
:* angry [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/male01_angry_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* happy [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/male01_happy_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* neutral [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/male01_neutral_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* sorrow [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/male01_sorrow_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
=MLPG Comparation=&lt;br /&gt;
We compare the different implementation of mlpg AS merlin does(mlpg.py and fast_mlpg.py).&lt;br /&gt;
There are three implementations:&lt;br /&gt;
*mlpg: As mlpg.py while compute all the dimension of delta features(including lf0/bap/mgc, the dim is 1/5/60 respectively)&lt;br /&gt;
*mlpg-lossy: Wrong implementation of mlpg.py by only considering the first dimension of global co-variance.&lt;br /&gt;
*fast-mlpg: As fast_mlpg.py in merlin.&lt;br /&gt;
&lt;br /&gt;
*Computation Time(Estimation)&lt;br /&gt;
-----------------------------------------------------------------&lt;br /&gt;
    alg.    |    lf0(dim=1)    |    bap(dim=5)   |   mgc(dim=60) &lt;br /&gt;
 mlpg-lossy |      100000      |     130000      |   160000    &lt;br /&gt;
    mlpg    |      130000      |     500000      |   6200000    &lt;br /&gt;
 fast-mlpg  |      60000       |     300000      |   3580000&lt;br /&gt;
  avg-rate  |      1:1.3:0.6   |     1:4:2+      |   1:40:20+&lt;br /&gt;
-----------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
* Synthesis waves&lt;br /&gt;
5='好雨知时节，当春乃发声，随风潜入夜，润物细无声。'&lt;br /&gt;
13='大熊猫最大的愿望就是拍一张自己的照片。'&lt;br /&gt;
&lt;br /&gt;
* no-mlpg&lt;br /&gt;
:*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg-no_5.wav]&lt;br /&gt;
:*13 :*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg-no_13.wav]&lt;br /&gt;
&lt;br /&gt;
* mlpg-lossy&lt;br /&gt;
:*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg01_5.wav]&lt;br /&gt;
:*13 :*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg01_13.wav]&lt;br /&gt;
&lt;br /&gt;
* mlpg&lt;br /&gt;
:*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg60_5.wav]&lt;br /&gt;
:*13 :*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg60_13.wav]&lt;br /&gt;
&lt;br /&gt;
* fast-mlpg&lt;br /&gt;
:*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/fast-mlpg_5.wav]&lt;br /&gt;
:*13 :*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/fast-mlpg_13.wav]&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/TTS-project-synthesis</id>
		<title>TTS-project-synthesis</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/TTS-project-synthesis"/>
				<updated>2019-02-18T12:11:29Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Project name=&lt;br /&gt;
Text To Speech&lt;br /&gt;
&lt;br /&gt;
=Project members=&lt;br /&gt;
Dong Wang, Zhiyong Zhang&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
We are interested in a flexible syntehsis based on neural model . The basic idea is that since the neural model can be &lt;br /&gt;
traind with multiple conditions, we can treat speaker and emotion as the conditional factors. We use the speaker vector&lt;br /&gt;
and emotion vector as addiiontal input to the model, and then train a single model that can produce sound of different&lt;br /&gt;
speakers and different emotions. &lt;br /&gt;
&lt;br /&gt;
In the following experiments, we use a simple DNN architecture to implement the training. The vocoder is WORD. &lt;br /&gt;
&lt;br /&gt;
=Experiments=&lt;br /&gt;
&lt;br /&gt;
==Mono-speaker==&lt;br /&gt;
&lt;br /&gt;
The first step is mono-speaker systems. We trained three systems: a female, a male and a child, each with a &lt;br /&gt;
single network. The performance is like the ofllowing.&lt;br /&gt;
&lt;br /&gt;
Synthesis text:好雨知时节，当春乃发声，随风潜入夜，润物细无声&lt;br /&gt;
&lt;br /&gt;
*Female[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/huilian/female01/female01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
*Male[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/huilian/male01/male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
 &lt;br /&gt;
*Child[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/huilian/child01.neutral/child01-neutral_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
==Multi-speaker==&lt;br /&gt;
&lt;br /&gt;
Now we combine all the data from male, female and child to train a single model.&lt;br /&gt;
&lt;br /&gt;
===Without Speaker-vector===&lt;br /&gt;
&lt;br /&gt;
The first experiment is that the data are blindly combined, without any indicator of speakers. &lt;br /&gt;
&lt;br /&gt;
*Female &amp;amp; Male[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/female01-male01/female01-male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
*Female &amp;amp; Child[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/female01-child01.neutral/female01-child.neutral_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
*Male &amp;amp; Child[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/male01-child01.neutral/male01_child01.neutral_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===With Speaker-vector===&lt;br /&gt;
&lt;br /&gt;
Now we use speaker vector as an indicator of the speaker trait. &lt;br /&gt;
&lt;br /&gt;
*Specific person&lt;br /&gt;
&lt;br /&gt;
Firstly, use the speaker fector to specifiy a particular person:&lt;br /&gt;
&lt;br /&gt;
:*Female[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/all.dvector40/female01.dvec40_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
:*Male[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/all.dvector40/male01.dvec40_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
*Interpolate of different person&lt;br /&gt;
&lt;br /&gt;
Now let's produce interpolated voice by interpolating two speakers: female and amle.&lt;br /&gt;
&lt;br /&gt;
:* Female &amp;amp; Male with different ratio&lt;br /&gt;
&lt;br /&gt;
::*(1) 0.0:1.0[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_0_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(2) 0.1:0.9[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_1_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(3) 0.2:0.8[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_2_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(4) 0.3:0.7[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_3_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(5) 0.4:0.6[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_4_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(6) 0.5:0.5[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_5_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(7) 0.6:0.4[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_6_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(8) 0.7:0.3[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_7_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(9) 0.8:0.2[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_8_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(10) 0.9:0.1[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_9_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(11) 1.0:0.0[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_10_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
==Mono-speaker Multi-Emotion==&lt;br /&gt;
&lt;br /&gt;
Using emotion vectors can specify which emotio to use, and the emotion can be also interpolated. &lt;br /&gt;
&lt;br /&gt;
*Specific emotion&lt;br /&gt;
:* Neutral emotion [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/x-neutral_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* Happy emotion [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/x-happy_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* Sorrow emotion [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/x-sorrow_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* Angry emotion [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/x-angry_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
*Interpolation emotion&lt;br /&gt;
:* Angry &amp;amp; neutral with different ratio&lt;br /&gt;
::*(1) 0.0:1.0 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_0_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(2) 0.1:0.9 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_1_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(3) 0.2:0.8 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_2_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(4) 0.3:0.7 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_3_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(5) 0.4:0.6 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_4_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(6) 0.5:0.5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(7) 0.6:0.4 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_6_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(8) 0.7:0.3 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_7_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(9) 0.8:0.2 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_8_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(10) 0.9:0.1 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_9_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(11) 1.0:0.0 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/x-angry_1_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
==Multi-speaker Multi-emotion==&lt;br /&gt;
&lt;br /&gt;
Finally, all the data (different speakers and different emotions) are combined together. Note that only the child voice&lt;br /&gt;
has different emotions of training data. We hope that this emotion can be learned so that we can generate voice of &lt;br /&gt;
other speakers with emotion, although they do not have any training data with emtoions. &lt;br /&gt;
&lt;br /&gt;
*Female&lt;br /&gt;
:* angry [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/female01_angry_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* happy [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/female01_happy_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* neutral [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/female01_neutral_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* sorrow [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/female01_sorrow_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
*Male&lt;br /&gt;
:* angry [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/male01_angry_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* happy [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/male01_happy_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* neutral [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/male01_neutral_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* sorrow [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/male01_sorrow_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
=MLPG Comparation=&lt;br /&gt;
We compare the different implementation of mlpg AS merlin does(mlpg.py and fast_mlpg.py).&lt;br /&gt;
There are three implementations:&lt;br /&gt;
mlpg: As mlpg.py while compute all the dimension of delta features(including lf0/bap/mgc, the dim is 1/5/60 respectively)&lt;br /&gt;
mlpg-lossy: Wrong implementation of mlpg.py by only considering the first dimension of global co-variance.&lt;br /&gt;
fast-mlpg: As fast_mlpg.py in merlin.&lt;br /&gt;
&lt;br /&gt;
*Computation Time(Estimation)&lt;br /&gt;
-----------------------------------------------------------------&lt;br /&gt;
    alg.    |    lf0(dim=1)    |    bap(dim=5)   |   mgc(dim=60) &lt;br /&gt;
 mlpg-lossy |      100000      |     130000      |   160000    &lt;br /&gt;
    mlpg    |      130000      |     500000      |   6200000    &lt;br /&gt;
 fast-mlpg  |      60000       |     300000      |   3580000&lt;br /&gt;
  avg-rate  |      1:1.3:0.6   |     1:4:2+      |   1:40:20+&lt;br /&gt;
-----------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
* Synthesis waves&lt;br /&gt;
5='好雨知时节，当春乃发声，随风潜入夜，润物细无声。'&lt;br /&gt;
13='大熊猫最大的愿望就是拍一张自己的照片。'&lt;br /&gt;
&lt;br /&gt;
* no-mlpg&lt;br /&gt;
:*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg-no_5.wav]&lt;br /&gt;
:*13 :*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg-no_13.wav]&lt;br /&gt;
&lt;br /&gt;
* mlpg-lossy&lt;br /&gt;
:*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg01_5.wav]&lt;br /&gt;
:*13 :*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg01_13.wav]&lt;br /&gt;
&lt;br /&gt;
* mlpg&lt;br /&gt;
:*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg60_5.wav]&lt;br /&gt;
:*13 :*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg60_13.wav]&lt;br /&gt;
&lt;br /&gt;
* fast-mlpg&lt;br /&gt;
:*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/fast-mlpg_5.wav]&lt;br /&gt;
:*13 :*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/fast-mlpg_13.wav]&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/TTS-project-synthesis</id>
		<title>TTS-project-synthesis</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/TTS-project-synthesis"/>
				<updated>2019-02-18T12:09:12Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Project name=&lt;br /&gt;
Text To Speech&lt;br /&gt;
&lt;br /&gt;
=Project members=&lt;br /&gt;
Dong Wang, Zhiyong Zhang&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
We are interested in a flexible syntehsis based on neural model . The basic idea is that since the neural model can be &lt;br /&gt;
traind with multiple conditions, we can treat speaker and emotion as the conditional factors. We use the speaker vector&lt;br /&gt;
and emotion vector as addiiontal input to the model, and then train a single model that can produce sound of different&lt;br /&gt;
speakers and different emotions. &lt;br /&gt;
&lt;br /&gt;
In the following experiments, we use a simple DNN architecture to implement the training. The vocoder is WORD. &lt;br /&gt;
&lt;br /&gt;
=Experiments=&lt;br /&gt;
&lt;br /&gt;
==Mono-speaker==&lt;br /&gt;
&lt;br /&gt;
The first step is mono-speaker systems. We trained three systems: a female, a male and a child, each with a &lt;br /&gt;
single network. The performance is like the ofllowing.&lt;br /&gt;
&lt;br /&gt;
Synthesis text:好雨知时节，当春乃发声，随风潜入夜，润物细无声&lt;br /&gt;
&lt;br /&gt;
*Female[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/huilian/female01/female01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
*Male[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/huilian/male01/male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
 &lt;br /&gt;
*Child[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/huilian/child01.neutral/child01-neutral_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
==Multi-speaker==&lt;br /&gt;
&lt;br /&gt;
Now we combine all the data from male, female and child to train a single model.&lt;br /&gt;
&lt;br /&gt;
===Without Speaker-vector===&lt;br /&gt;
&lt;br /&gt;
The first experiment is that the data are blindly combined, without any indicator of speakers. &lt;br /&gt;
&lt;br /&gt;
*Female &amp;amp; Male[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/female01-male01/female01-male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
*Female &amp;amp; Child[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/female01-child01.neutral/female01-child.neutral_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
*Male &amp;amp; Child[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/male01-child01.neutral/male01_child01.neutral_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===With Speaker-vector===&lt;br /&gt;
&lt;br /&gt;
Now we use speaker vector as an indicator of the speaker trait. &lt;br /&gt;
&lt;br /&gt;
*Specific person&lt;br /&gt;
&lt;br /&gt;
Firstly, use the speaker fector to specifiy a particular person:&lt;br /&gt;
&lt;br /&gt;
:*Female[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/all.dvector40/female01.dvec40_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
:*Male[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/all.dvector40/male01.dvec40_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
*Interpolate of different person&lt;br /&gt;
&lt;br /&gt;
Now let's produce interpolated voice by interpolating two speakers: female and amle.&lt;br /&gt;
&lt;br /&gt;
:* Female &amp;amp; Male with different ratio&lt;br /&gt;
&lt;br /&gt;
::*(1) 0.0:1.0[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_0_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(2) 0.1:0.9[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_1_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(3) 0.2:0.8[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_2_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(4) 0.3:0.7[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_3_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(5) 0.4:0.6[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_4_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(6) 0.5:0.5[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_5_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(7) 0.6:0.4[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_6_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(8) 0.7:0.3[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_7_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(9) 0.8:0.2[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_8_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(10) 0.9:0.1[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_9_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
::*(11) 1.0:0.0[http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speakers/mix/iterpolation/female01_male01/iterpolation_10_female01_male01_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
==Mono-speaker Multi-Emotion==&lt;br /&gt;
&lt;br /&gt;
Using emotion vectors can specify which emotio to use, and the emotion can be also interpolated. &lt;br /&gt;
&lt;br /&gt;
*Specific emotion&lt;br /&gt;
:* Neutral emotion [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/x-neutral_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* Happy emotion [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/x-happy_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* Sorrow emotion [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/x-sorrow_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* Angry emotion [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/x-angry_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
*Interpolation emotion&lt;br /&gt;
:* Angry &amp;amp; neutral with different ratio&lt;br /&gt;
::*(1) 0.0:1.0 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_0_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(2) 0.1:0.9 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_1_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(3) 0.2:0.8 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_2_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(4) 0.3:0.7 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_3_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(5) 0.4:0.6 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_4_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(6) 0.5:0.5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(7) 0.6:0.4 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_6_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(8) 0.7:0.3 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_7_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(9) 0.8:0.2 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_8_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(10) 0.9:0.1 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/mix-emotion-angry-neutral_1_9_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
::*(11) 1.0:0.0 [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/emotion/roobo.child/x-angry_1_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
==Multi-speaker Multi-emotion==&lt;br /&gt;
&lt;br /&gt;
Finally, all the data (different speakers and different emotions) are combined together. Note that only the child voice&lt;br /&gt;
has different emotions of training data. We hope that this emotion can be learned so that we can generate voice of &lt;br /&gt;
other speakers with emotion, although they do not have any training data with emtoions. &lt;br /&gt;
&lt;br /&gt;
*Female&lt;br /&gt;
:* angry [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/female01_angry_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* happy [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/female01_happy_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* neutral [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/female01_neutral_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* sorrow [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/female01_sorrow_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
*Male&lt;br /&gt;
:* angry [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/male01_angry_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* happy [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/male01_happy_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* neutral [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/male01_neutral_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
:* sorrow [http://zhangzy.cslt.org/categories/tts/sample-wav/mimic-wangd-front-end/multi-speaker_multi-emotion/male01_sorrow_final_5_amdurTanh_acTanh_mlpg1_postfilter1.world.wav01.wav]&lt;br /&gt;
&lt;br /&gt;
=MLPG Comparison=&lt;br /&gt;
We compare different implementations of MLPG, following Merlin (mlpg.py and fast_mlpg.py).&lt;br /&gt;
There are three implementations:&lt;br /&gt;
mlpg: as mlpg.py, but computing all dimensions of the delta features (lf0/bap/mgc, with dims 1/5/60 respectively).&lt;br /&gt;
mlpg-lossy: an incorrect variant of mlpg.py that considers only the first dimension of the global covariance.&lt;br /&gt;
fast-mlpg: as fast_mlpg.py in Merlin.&lt;br /&gt;
&lt;br /&gt;
*Computation time (estimated)&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!alg. !! lf0 (dim=1) !! bap (dim=5) !! mgc (dim=60)&lt;br /&gt;
|-&lt;br /&gt;
|mlpg-lossy || 100000 || 130000 || 160000&lt;br /&gt;
|-&lt;br /&gt;
|mlpg || 130000 || 500000 || 6200000&lt;br /&gt;
|-&lt;br /&gt;
|fast-mlpg || 60000 || 300000 || 3580000&lt;br /&gt;
|-&lt;br /&gt;
|avg-rate || 1:1.3:0.6 || 1:4:2+ || 1:40:20+&lt;br /&gt;
|}&lt;br /&gt;
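For reference, below is a minimal numpy sketch of the textbook MLPG solve (ours, not Merlin's mlpg.py; the delta windows are the common defaults and are an assumption here). Each feature dimension is solved independently, which is why the cost above grows so sharply from lf0 (dim=1) to mgc (dim=60).&lt;br /&gt;
&lt;br /&gt;
 import numpy as np&lt;br /&gt;
 &lt;br /&gt;
 # Common default windows (an assumption): static, delta, delta-delta.&lt;br /&gt;
 WINDOWS = [{0: 1.0}, {-1: -0.5, 1: 0.5}, {-1: 1.0, 0: -2.0, 1: 1.0}]&lt;br /&gt;
 &lt;br /&gt;
 def mlpg_one_dim(mu, var):&lt;br /&gt;
     # mu, var: (T, 3) means/variances of static, delta, delta-delta for&lt;br /&gt;
     # ONE feature dimension. Solve (W'P W) c = W'P m for the statics c.&lt;br /&gt;
     T = mu.shape[0]&lt;br /&gt;
     W = np.zeros((3 * T, T))&lt;br /&gt;
     for t in range(T):&lt;br /&gt;
         for k, win in enumerate(WINDOWS):&lt;br /&gt;
             for off, coef in win.items():&lt;br /&gt;
                 W[3 * t + k, min(max(t + off, 0), T - 1)] += coef&lt;br /&gt;
     p = 1.0 / var.reshape(-1)   # diagonal precision&lt;br /&gt;
     WtP = W.T * p&lt;br /&gt;
     return np.linalg.solve(WtP @ W, WtP @ mu.reshape(-1))&lt;br /&gt;
 &lt;br /&gt;
 def mlpg(means, variances):&lt;br /&gt;
     # means, variances: (T, 3, D). One independent solve per dimension,&lt;br /&gt;
     # hence mgc (D=60) costs roughly 60 solves versus one for lf0 (D=1).&lt;br /&gt;
     # (The 'mlpg-lossy' variant above mistakenly reuses dimension 0's&lt;br /&gt;
     # variances for every dimension.)&lt;br /&gt;
     D = means.shape[2]&lt;br /&gt;
     return np.stack([mlpg_one_dim(means[:, :, d], variances[:, :, d])&lt;br /&gt;
                      for d in range(D)], axis=1)&lt;br /&gt;
 &lt;br /&gt;
 # Example: 50 frames, 5-dim stream (like bap).&lt;br /&gt;
 rng = np.random.default_rng(0)&lt;br /&gt;
 c = mlpg(rng.standard_normal((50, 3, 5)), rng.random((50, 3, 5)) + 0.1)&lt;br /&gt;
 print(c.shape)  # (50, 5)&lt;br /&gt;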
&lt;br /&gt;
* Synthesized waves (5 and 13 are sentence IDs)&lt;br /&gt;
5='好雨知时节，当春乃发声，随风潜入夜，润物细无声。' ('A good rain knows its season; it comes when spring arrives; it steals in on the wind at night, moistening all things softly, soundlessly.')&lt;br /&gt;
13='大熊猫最大的愿望就是拍一张自己的照片。' ('The giant panda's biggest wish is to take a photo of itself.')&lt;br /&gt;
&lt;br /&gt;
* mlpg-lossy&lt;br /&gt;
:*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg01_5.wav]&lt;br /&gt;
:*13 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg01_13.wav]&lt;br /&gt;
&lt;br /&gt;
* mlpg&lt;br /&gt;
:*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg60_5.wav]&lt;br /&gt;
:*13 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/mlpg60_13.wav]&lt;br /&gt;
&lt;br /&gt;
* fast-mlpg&lt;br /&gt;
:*5 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/fast-mlpg_5.wav]&lt;br /&gt;
:*13 [http://zhangzy.cslt.org/categories/tts/sample-wav/mlpg-cmp/fast-mlpg_13.wav]&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2019-02-18</id>
		<title>FreeNeb status Report 2019-02-18</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2019-02-18"/>
				<updated>2019-02-18T04:18:54Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' “FreeNeb status Report 2019-02-18” points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2019-01-28</id>
		<title>FreeNeb status Report 2019-01-28</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2019-01-28"/>
				<updated>2019-01-28T05:02:45Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' “FreeNeb status Report 2019-01-28” points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2019-01-21</id>
		<title>FreeNeb status Report 2019-01-21</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2019-01-21"/>
				<updated>2019-01-21T03:08:59Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' “FreeNeb status Report 2019-01-21” points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2019-01-14</id>
		<title>FreeNeb status Report 2019-01-14</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2019-01-14"/>
				<updated>2019-01-14T03:17:51Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' “FreeNeb status Report 2019-01-14” points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2019-01-14</id>
		<title>FreeNeb status Report 2019-01-14</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2019-01-14"/>
				<updated>2019-01-14T02:43:33Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' “FreeNeb status Report 2019-01-14” points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2019-01-14</id>
		<title>FreeNeb status Report 2019-01-14</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2019-01-14"/>
				<updated>2019-01-14T02:38:45Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' “FreeNeb status Report 2019-01-14” points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2019-01-07</id>
		<title>FreeNeb status Report 2019-01-07</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2019-01-07"/>
				<updated>2019-01-07T03:25:10Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' “FreeNeb status Report 2019-01-07” points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2019-01-02</id>
		<title>FreeNeb status Report 2019-01-02</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2019-01-02"/>
				<updated>2019-01-02T02:57:29Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' “FreeNeb status Report 2019-01-02” points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-12-24</id>
		<title>FreeNeb status Report 2018-12-24</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-12-24"/>
				<updated>2018-12-24T03:05:52Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This Week:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!People !! Last Week !! This Week !! Meeting Minutes !! Task Tracking (&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;Deadline&amp;lt;/font&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
|Mengyuan Zhao ||&lt;br /&gt;
This week:&lt;br /&gt;
* Engineering&lt;br /&gt;
# Got familiar with the speech segmentation pipeline&lt;br /&gt;
||&lt;br /&gt;
Next week:&lt;br /&gt;
* Engineering&lt;br /&gt;
# Continue sorting out the demo list and bring it online.&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyong Zhang||&lt;br /&gt;
This week:&lt;br /&gt;
# Verified the new embedded board and tested its serial-port output&lt;br /&gt;
# Organized the archived English/Japanese models; English is done&lt;br /&gt;
# Deployed the State Grid speech segmentation tool (virtual-machine mode)&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
Next week:&lt;br /&gt;
# Rebuild the ASR-decoder&lt;br /&gt;
# Test the embedded speech recognition board&lt;br /&gt;
# Archive the Japanese models&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Yang Wei ||&lt;br /&gt;
This week:&lt;br /&gt;
* Located the scoring issue in the outsourced voiceprint demo&lt;br /&gt;
||&lt;br /&gt;
Next week:&lt;br /&gt;
* Finish testing the outsourced voiceprint demo&lt;br /&gt;
* Deploy and test the new asr socket server&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Zhenlong Han||&lt;br /&gt;
This week:&lt;br /&gt;
# Organized the project tooling framework&lt;br /&gt;
# Followed up on State Grid annotation&lt;br /&gt;
# Supported the Shuanghou/Jinghua project&lt;br /&gt;
# Checked the Fenyinta annotations&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
Next week:&lt;br /&gt;
# Train the State Grid model&lt;br /&gt;
# Organize the tool scripts&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Shuai Zhang||&lt;br /&gt;
This week:&lt;br /&gt;
# Replaced the model on the asr server&lt;br /&gt;
# Released the x-vector demo&lt;br /&gt;
# Updated the vad engine for new requirements&lt;br /&gt;
||&lt;br /&gt;
Next week:&lt;br /&gt;
# Fix vad engine bugs&lt;br /&gt;
# Update the engine in all demos&lt;br /&gt;
# Plan the disability-assistance project&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Yanchi Jin||&lt;br /&gt;
This week:&lt;br /&gt;
* Finished the first version of the adaptive training platform (LM training part)&lt;br /&gt;
* Supported the vpr server deployment for Tongfang in Japan&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
Next week:&lt;br /&gt;
* Continue the Fenyinta monthly plan&lt;br /&gt;
# Organize and finalize the final Chinese test set&lt;br /&gt;
# Train the second batch of 100h Chinese data&lt;br /&gt;
* State Grid&lt;br /&gt;
# Organize the 5th and 6th batches of datax annotated data&lt;br /&gt;
# Distribute the data&lt;br /&gt;
||&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Rong Liu||&lt;br /&gt;
Last week:&lt;br /&gt;
1. Landed the Jinghua contract&lt;br /&gt;
2. Requirement discussions: offline asr and language identification for Youjie Zhixin; offline asr matching for Haitian Ruisheng&lt;br /&gt;
3. State Grid project communication, pushed project closing forward; provided materials for the Uyghur speech recognition project&lt;br /&gt;
4. Finalized the requirements of the intelligent disability-assistance demo; development coordinated with Yuwei; preliminary plan settled&lt;br /&gt;
||&lt;br /&gt;
Next week:&lt;br /&gt;
1. Push forward brand sharing and the adaptive product&lt;br /&gt;
2. Discuss requirements and the cooperation model with Youjie Zhixin&lt;br /&gt;
3. Finish the intelligent disability-assistance demo&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Dong Wang||&lt;br /&gt;
This week:&lt;br /&gt;
# Settled the plans for Free宝, debt-to-equity conversion, year-end bonus allocation, and the partner-company arrangement.&lt;br /&gt;
# Finished the citation check of the Machine Learning book @Yunqi&lt;br /&gt;
# Attention system design @Lantian @Jiawei&lt;br /&gt;
# Designed VAE-based speaker feature extraction&lt;br /&gt;
||&lt;br /&gt;
Next week:&lt;br /&gt;
#. Finalize the VAE-based speaker feature extraction plan&lt;br /&gt;
#. Speech recognition handbook: finish the Speaker Adaptation and Environmental Robustness chapters&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang&lt;br /&gt;
||&lt;br /&gt;
Last week:&lt;br /&gt;
1. Explored and used PyTorch/TensorFlow tools;&lt;br /&gt;
2. Further implemented lyric generation (classical style);&lt;br /&gt;
3. Prepared the ASR technical report (delayed).&lt;br /&gt;
||&lt;br /&gt;
This week:&lt;br /&gt;
1. Organize and upload the PyTorch/TensorFlow speech recipes;&lt;br /&gt;
2. Push lyric generation further;&lt;br /&gt;
3. Technical report.&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li||&lt;br /&gt;
Last week:&lt;br /&gt;
# Finished training and comparison tests of Nnet-vad and Energy-vad&lt;br /&gt;
# Finished the Jiutian Weilian voiceprint test (embedded)&lt;br /&gt;
# Surveyed the voiceprint APIs currently released on the market&lt;br /&gt;
# Started model compression tests&lt;br /&gt;
||&lt;br /&gt;
This week:&lt;br /&gt;
# Optimize Nnet-vad&lt;br /&gt;
# Finish the model compression tests&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Yating Peng||&lt;br /&gt;
Last week:&lt;br /&gt;
* Finance: attended the training on the new individual income tax law organized by the Haidian tax office;&lt;br /&gt;
* Administration: designed and produced company business cards, summarized meeting minutes, handled renewals, reimbursements, and contract stamping;&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
This week:&lt;br /&gt;
* Confirm the venue of the annual meeting;&lt;br /&gt;
* Collect the down jackets&lt;br /&gt;
||&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Shiying||&lt;br /&gt;
Last week:&lt;br /&gt;
* Released local ASR results (am: clean chain model; graph: graph1e-5 graph1e-6 graph1e-7 graph1e-9)&lt;br /&gt;
* Clean-up of the 14000h Chinese data (expected to take more than another week)&lt;br /&gt;
* Explored small speech recognition models together with Yong (smallest model so far: 1.2M; result on 10 command words: 8.81%)&lt;br /&gt;
||&lt;br /&gt;
This week:&lt;br /&gt;
* Continue the clean-up of the 14000h Chinese data&lt;br /&gt;
* Small speech recognition model (try to keep the model under 1M)&lt;br /&gt;
||&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Wenqiang Du ||&lt;br /&gt;
Last week:&lt;br /&gt;
* Chain model training on the 8K data (9400h), expected to take at least 11 more days&lt;br /&gt;
* Japanese graph and decoding with the large and small language models&lt;br /&gt;
* Discriminative training of the Japanese chain model (troubleshooting in progress)&lt;br /&gt;
* Various tests on the 16k-8k data (results continuously updated to bugdb)&lt;br /&gt;
||&lt;br /&gt;
This week:&lt;br /&gt;
* Continue training the large model on the 8K data&lt;br /&gt;
* Validate the 16k-8k approach&lt;br /&gt;
||&lt;br /&gt;
||&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-12-17</id>
		<title>FreeNeb status Report 2018-12-17</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-12-17"/>
				<updated>2018-12-17T02:45:32Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' “FreeNeb status Report 2018-12-17” points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-12-10</id>
		<title>FreeNeb status Report 2018-12-10</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-12-10"/>
				<updated>2018-12-10T03:05:38Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' “FreeNeb status Report 2018-12-10” points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/How_to_install_driver_of_optical-network-card_and_setup_the_ip-config</id>
		<title>How to install driver of optical-network-card and setup the ip-config</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/How_to_install_driver_of_optical-network-card_and_setup_the_ip-config"/>
				<updated>2018-12-05T12:14:16Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Optical port network card plugging ==&lt;br /&gt;
# First verify the interface type of your optical network card, usually PCI-E (x8/x16).&lt;br /&gt;
# Check the motherboard for a free PCI-E (x8/x16) slot, then plug the card into the matching one.&lt;br /&gt;
# Run &amp;quot;lspci |grep net&amp;quot; to see the number of network cards.&lt;br /&gt;
# Run &amp;quot;ethtool enp4s0&amp;quot; (substitute your interface name) to check the speed of the network card: 1000 or 10000 Mb/s.&lt;br /&gt;
&lt;br /&gt;
== Driver install (ixgbe) ==&lt;br /&gt;
# First check the chip type and adapter type of the network card.&lt;br /&gt;
# Go to https://www.intel.com/content/www/us/en/support/products/36773/network-and-i-o/ethernet-products.html to find the corresponding driver.&lt;br /&gt;
# Download the driver source code and compile it on your target machine according to the README in src:&lt;br /&gt;
:# make &amp;amp;&amp;amp; make install&lt;br /&gt;
:# modinfo ./ixgbe.ko&lt;br /&gt;
:# rmmod ixgbe&lt;br /&gt;
:# insmod ./ixgbe.ko (or modprobe ixgbe)&lt;br /&gt;
    &lt;br /&gt;
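As a quick sanity check after the driver is loaded, here is a small Python sketch (ours; interface names vary per machine, and this relies on Linux sysfs) that reads each card's link speed, mirroring what ethtool reports in Mb/s:&lt;br /&gt;
&lt;br /&gt;
 import os&lt;br /&gt;
 &lt;br /&gt;
 # /sys/class/net/IFACE/speed reports the link speed in Mb/s; reading it&lt;br /&gt;
 # fails for interfaces that are down or virtual, so catch those cases.&lt;br /&gt;
 def nic_speeds():&lt;br /&gt;
     speeds = {}&lt;br /&gt;
     for iface in os.listdir('/sys/class/net'):&lt;br /&gt;
         try:&lt;br /&gt;
             with open('/sys/class/net/' + iface + '/speed') as f:&lt;br /&gt;
                 speeds[iface] = int(f.read().strip())&lt;br /&gt;
         except (OSError, ValueError):&lt;br /&gt;
             speeds[iface] = None  # interface down, or virtual&lt;br /&gt;
     return speeds&lt;br /&gt;
 &lt;br /&gt;
 for iface, mbps in sorted(nic_speeds().items()):&lt;br /&gt;
     print(iface, mbps)&lt;br /&gt;
&lt;br /&gt;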
== IP config settings ==&lt;br /&gt;
# Run &amp;quot;systemctl start NetworkManager&amp;quot; and &amp;quot;nmtui&amp;quot; to find the name of the new card.&lt;br /&gt;
# Copy the config of the existing copper network card to the optical one: &amp;quot;cp ifcfg-enp0s31f6 ifcfg-enp4s0&amp;quot;&lt;br /&gt;
# Bring down the copper card and bring up the optical one: ifdown enp0s31f6 &amp;amp;&amp;amp; ifup enp4s0&lt;br /&gt;
# Restart the network: &amp;quot;systemctl restart network&amp;quot;&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/Computing</id>
		<title>Computing</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/Computing"/>
				<updated>2018-12-05T10:42:14Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：/* FAQ */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==General info==&lt;br /&gt;
[[Grid coumputing|Grid computing]]&lt;br /&gt;
&lt;br /&gt;
[[Use CSLT cluster]]&lt;br /&gt;
&lt;br /&gt;
[[CSLT cluster queues]]&lt;br /&gt;
&lt;br /&gt;
[[CSLT cluster nodes]]&lt;br /&gt;
&lt;br /&gt;
[[CSLT Central Storage (CCS)]]&lt;br /&gt;
&lt;br /&gt;
[[ASR-publication process|CSLT Publication]]&lt;br /&gt;
&lt;br /&gt;
==FAQ==&lt;br /&gt;
[[Steps of adding a new grid node]]&lt;br /&gt;
&lt;br /&gt;
[[How to setup SGE]]&lt;br /&gt;
&lt;br /&gt;
[[How to setup your homepage]]&lt;br /&gt;
&lt;br /&gt;
[[How to use SSH tunnel to access the campus network]]&lt;br /&gt;
&lt;br /&gt;
[[How to publish tools,data,code]]&lt;br /&gt;
&lt;br /&gt;
[[How to access cvss from outside]]&lt;br /&gt;
&lt;br /&gt;
[[Several service alias you may want to known |Several service aliases you may want to know]]&lt;br /&gt;
&lt;br /&gt;
[[How to connect to cvss if the web server fails]]&lt;br /&gt;
&lt;br /&gt;
[[Using cvs]]&lt;br /&gt;
&lt;br /&gt;
[[Using neighbour hood browser|Using the neighbourhood browser]]&lt;br /&gt;
&lt;br /&gt;
[[Access outside from behind firewall using socks5]]&lt;br /&gt;
&lt;br /&gt;
[[What to do if our website can not access from outside? |What to do if our website cannot be accessed from outside?]]&lt;br /&gt;
&lt;br /&gt;
[[How to mount grid disks]]&lt;br /&gt;
&lt;br /&gt;
[[How to reboot the grid]]&lt;br /&gt;
&lt;br /&gt;
[[How to repair super blocks]]&lt;br /&gt;
&lt;br /&gt;
[[How to build a centos-7 node]]&lt;br /&gt;
&lt;br /&gt;
[[How to setup Samba on centos 7]]&lt;br /&gt;
&lt;br /&gt;
[[Centos7:  ERROR: could not insert 'nvidia': Required key not available]]&lt;br /&gt;
&lt;br /&gt;
[[Centos7:  After reboot the grid, how to reset the NIS]]&lt;br /&gt;
&lt;br /&gt;
[[Ubuntu: set domain name]]&lt;br /&gt;
&lt;br /&gt;
[[Ubuntu: set nfs server]]&lt;br /&gt;
&lt;br /&gt;
[[Centos: config mysql server]]&lt;br /&gt;
&lt;br /&gt;
[[Look at me when failing to configure service on Linux]]&lt;br /&gt;
&lt;br /&gt;
[[convert sql to csv]]&lt;br /&gt;
&lt;br /&gt;
[http://wiki.ubuntu.com.cn/Wiki%E4%BD%BF%E7%94%A8%E6%96%B9%E6%B3%95 How to edit wiki pages?]&lt;br /&gt;
&lt;br /&gt;
[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/02/Github_%E7%AE%80%E6%98%93%E6%95%99%E7%A8%8B_.pdf  Github Simple Guide]&lt;br /&gt;
&lt;br /&gt;
[[How if gird-n can not be found by ping|What to do if grid-n cannot be found by ping]]&lt;br /&gt;
&lt;br /&gt;
[[How to reset a pasword for a wiki user|How to reset a password for a wiki user]]&lt;br /&gt;
&lt;br /&gt;
[[How if a machine can not ping outside|What to do if a machine cannot ping outside]]&lt;br /&gt;
&lt;br /&gt;
[[How to solve the mistmatch between nvidia-smi and driver|How to solve the mismatch between nvidia-smi and driver]]&lt;br /&gt;
&lt;br /&gt;
[[How to install driver of optical-network-card and setup the ip-config |How to install the driver of an optical network card and set up the IP config]]&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-12-03</id>
		<title>FreeNeb status Report 2018-12-03</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-12-03"/>
				<updated>2018-12-03T01:13:18Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' “FreeNeb status Report 2018-12-03” points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-11-26</id>
		<title>FreeNeb status Report 2018-11-26</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-11-26"/>
				<updated>2018-11-26T01:25:26Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This Week:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!People !! Last Week !! This Week !! Meeting Minutes !! Task Tracking (&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;Deadline&amp;lt;/font&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
|Mengyuan Zhao ||&lt;br /&gt;
This week:&lt;br /&gt;
* Engineering&lt;br /&gt;
# Improved the nnet3-to-nnet1 conversion tool, adding support for StatisticalExtraction and StatisticalPooling&lt;br /&gt;
# local VPR engine:&lt;br /&gt;
## Implemented cmvn and PLDA scoring, but the results differ from kaldi's; further debugging is needed.&lt;br /&gt;
* Server maintenance&lt;br /&gt;
# Finished creating corpus1&lt;br /&gt;
# Helped Zhiyong repair tiger01&lt;br /&gt;
||&lt;br /&gt;
Next week:&lt;br /&gt;
* Engineering&lt;br /&gt;
# local VPR engine:&lt;br /&gt;
## Continue debugging the cmvn and plda scoring modules.&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyong Zhang||&lt;br /&gt;
This week:&lt;br /&gt;
# TTS: speaker-specific synthesis for the Straits Research Institute -- failed; adaptation is needed, will re-synthesize&lt;br /&gt;
# TTS: large-scale data training -- organizing the data&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
Next week:&lt;br /&gt;
# Rebuild the ASR-decoder&lt;br /&gt;
# TTS: speaker-specific synthesis for the Straits Research Institute&lt;br /&gt;
# TTS: large-scale data training&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Yang Wei ||&lt;br /&gt;
This week:&lt;br /&gt;
* Tested the vad engine&lt;br /&gt;
* Tested the RT of the asr engine with the tdnn-f chain model&lt;br /&gt;
||&lt;br /&gt;
Next week:&lt;br /&gt;
* Finish the vad engine tests&lt;br /&gt;
* Test the i-vector vpr engine&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Zhenlong Han||&lt;br /&gt;
This week:&lt;br /&gt;
# Organized the project tooling framework&lt;br /&gt;
# Followed up on State Grid annotation&lt;br /&gt;
# Supported the Shuanghou/Jinghua project&lt;br /&gt;
# Tested the Fenyinta Japanese recognition accuracy&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
Next week:&lt;br /&gt;
# Fenyinta project&lt;br /&gt;
# State Grid project&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Shuai Zhang||&lt;br /&gt;
This week:&lt;br /&gt;
#. Modified the vad engine for new requirements&lt;br /&gt;
#. Packaged vpr&lt;br /&gt;
#. asr service&lt;br /&gt;
||&lt;br /&gt;
Next week:&lt;br /&gt;
#. vad engine&lt;br /&gt;
#. Stress-test the asr service&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Yanchi Jin||&lt;br /&gt;
This week:&lt;br /&gt;
# Supported the roobo speech recognition project and updated the v3.6 model.&lt;br /&gt;
# Evaluated the Fenyinta standard test set&lt;br /&gt;
# Analyzed the improvement of the State Grid trained model&lt;br /&gt;
||&lt;br /&gt;
Next week:&lt;br /&gt;
# Optimize the Fenyinta Japanese recognition model&lt;br /&gt;
||&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Rong Liu||&lt;br /&gt;
Last week:&lt;br /&gt;
1. Discussed landing the Huanghuai University AI lab; progress slower than expected due to their internal issues&lt;br /&gt;
2. Miaozhen fee settlement process and follow-up cooperation model&lt;br /&gt;
3. Helped communicate and push forward the roobo, Fenyinta, and State Grid projects&lt;br /&gt;
||&lt;br /&gt;
Next week:&lt;br /&gt;
1. Continue pushing the Huanghuai University AI resources to land&lt;br /&gt;
2. Push forward the Roobo, Fenyinta, and State Grid projects&lt;br /&gt;
3. roobo patent&lt;br /&gt;
4. Requirement discussions for other early-stage projects&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Dong Wang||&lt;br /&gt;
This week:&lt;br /&gt;
#. Partly finished the intern research topic discussions&lt;br /&gt;
#. Taiwan entry permit application (failed)&lt;br /&gt;
#. Project discussions on the Japan demo, DataX progress, etc.&lt;br /&gt;
||&lt;br /&gt;
Next week:&lt;br /&gt;
#. Finish the voiceprint part of the intern topic discussions&lt;br /&gt;
#. BP discussion&lt;br /&gt;
#. Discuss the brand-sharing plan proposed by Tang&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang&lt;br /&gt;
||&lt;br /&gt;
Last week:&lt;br /&gt;
Attended APSIPA.&lt;br /&gt;
||&lt;br /&gt;
This week:&lt;br /&gt;
1. Survey and implement deep compression.&lt;br /&gt;
2. Design the pair-wise backend.&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li||&lt;br /&gt;
Last week:&lt;br /&gt;
# Supported the Miaozhen voiceprint project @zs&lt;br /&gt;
# Finished the CMN validation of i/d/x-vector&lt;br /&gt;
# Supported @zmy on x-vector engineering&lt;br /&gt;
# Started prototype designs of several voiceprint products&lt;br /&gt;
# Organized the intern study discussions&lt;br /&gt;
||&lt;br /&gt;
This week:&lt;br /&gt;
# Start serial training of the d-x-vector model&lt;br /&gt;
# Try implementing the xi-vector model&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Yating Peng||&lt;br /&gt;
Last week:&lt;br /&gt;
* Reviewed the government's 2019 science and technology programs and funds; we can apply for the 2019 Zhongguancun National Innovation Demonstration Zone R&amp;amp;D expense support fund for small tech enterprises and the international cooperation R&amp;amp;D program; looking for a reliable, cost-effective agent, hopefully settled this week;&lt;br /&gt;
* Went to the community office to issue rental invoices;&lt;br /&gt;
* Routine financial reimbursements.&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
This week:&lt;br /&gt;
* Prepare the materials for the government funding application;&lt;br /&gt;
* Improve the employee records spreadsheet;&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Shiying||&lt;br /&gt;
Last week:&lt;br /&gt;
* Attended APSIPA 2018 and gave two oral presentations&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
This week:&lt;br /&gt;
* local ASR model (an ASR model whose context is a multiple of 3)&lt;br /&gt;
* Comprehensive ASR model tests&lt;br /&gt;
||&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Wenqiang Du ||&lt;br /&gt;
Last week:&lt;br /&gt;
* Trained the roobo spoken-language scoring model&lt;br /&gt;
* Re-added the Japanese NHK news data to training&lt;br /&gt;
* Adapted the newly trained 8k model on the 16K-to-8K converted data&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
This week:&lt;br /&gt;
* Run several groups of experiments on the roobo spoken-language model&lt;br /&gt;
* Train the new 16k-to-8k model&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
||&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-11-19</id>
		<title>FreeNeb status Report 2018-11-19</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-11-19"/>
				<updated>2018-11-19T01:04:19Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This Week:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!People !! Last Week !! This Week !! Meeting Minutes !! Task Tracking (&amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;Deadline&amp;lt;/font&amp;gt;)&lt;br /&gt;
|-&lt;br /&gt;
|Mengyuan Zhao ||&lt;br /&gt;
本周:&lt;br /&gt;
* 工程化&lt;br /&gt;
# 完成ivector-based声纹识别引擎开发&lt;br /&gt;
* 服务器维护&lt;br /&gt;
# 备份了/freeneb/release目录&lt;br /&gt;
* 项目&lt;br /&gt;
# roobo口语打分&lt;br /&gt;
## 按照roobo的需求，增加了输出phone串，和phone级别打分的接口函数，更新了word级别打分的算法。&lt;br /&gt;
||&lt;br /&gt;
下周：&lt;br /&gt;
* 工程化&lt;br /&gt;
# local VPR engine:&lt;br /&gt;
## 实现StatisticalPooling component，以实现对x-vector的支持&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyong Zhang||&lt;br /&gt;
This week:&lt;br /&gt;
# TTS: synthesis of chemistry-paper abstracts&lt;br /&gt;
# Organized the models in the release directory&lt;br /&gt;
# TTS: front-end / model-training survey&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
Next week:&lt;br /&gt;
# ASR: decoder reset&lt;br /&gt;
# TTS: speaker-specific synthesis for 海峡研究院&lt;br /&gt;
# TTS: large-scale data training&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Yang Wei ||&lt;br /&gt;
This week:&lt;br /&gt;
* Partial testing of the VAD engine&lt;br /&gt;
||&lt;br /&gt;
Next week:&lt;br /&gt;
* Finish the VAD engine testing&lt;br /&gt;
* RT (real-time factor) testing of the TDNN-F chain model (see the timing sketch below)&lt;br /&gt;
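The real-time factor is processing time divided by audio duration (RTF &lt; 1 means faster than real time); a minimal timing harness, where decode_fn stands in for the engine call under test:&lt;br /&gt;
 import time
 
 def real_time_factor(decode_fn, wav, sample_rate=16000):
     start = time.perf_counter()
     decode_fn(wav)  # placeholder for the decoder being measured
     elapsed = time.perf_counter() - start
     return elapsed / (len(wav) / sample_rate)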
&lt;br /&gt;
||&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Zhenlong Han||&lt;br /&gt;
This week:&lt;br /&gt;
# Re-tested 汽车之家&lt;br /&gt;
# Finished organizing the 秒针 data; training is underway&lt;br /&gt;
# Followed up on the State Grid annotation and analyzed the training issues&lt;br /&gt;
# Supported the 双猴/京华 project&lt;br /&gt;
# Supported 马老师's local recognition project&lt;br /&gt;
||&lt;br /&gt;
Next week:&lt;br /&gt;
# Release the 秒针 model&lt;br /&gt;
# 分音塔 project&lt;br /&gt;
# State Grid project&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Shuai Zhang||&lt;br /&gt;
This week:&lt;br /&gt;
# VAD engine: changes for the updated functional requirements&lt;br /&gt;
||&lt;br /&gt;
Next week:&lt;br /&gt;
# VAD engine&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanchi Jin||&lt;br /&gt;
This week:&lt;br /&gt;
# Processed the State Grid training data.&lt;br /&gt;
# Supported the project rehearsal.&lt;br /&gt;
||&lt;br /&gt;
Next week:&lt;br /&gt;
# Organize the test sets for all projects&lt;br /&gt;
||&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Rong Liu||&lt;br /&gt;
Last week:&lt;br /&gt;
1. 黄淮学院 AI lab discussions; waiting for the detailed agreement to be finalized&lt;br /&gt;
2. Structured analysis of the State Grid data: parsed out agent (1.6k) and customer (97k) labels plus the corresponding region (distribution) tags, usable for voiceprint work&lt;br /&gt;
3. Assisted 誉为科技 with joint debugging of the Windows offline input method; basically complete&lt;br /&gt;
||&lt;br /&gt;
Next week:&lt;br /&gt;
1. Push to finalize the 黄淮学院 AI lab agreement and start the bidding process&lt;br /&gt;
2. Sign the 京华电子 contract&lt;br /&gt;
3. Survey embedded speech products&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Dong Wang||&lt;br /&gt;
This week:&lt;br /&gt;
# The ML book passed the publisher's review; preparing to sign the contract.&lt;br /&gt;
# Submitted the ICASSP papers.&lt;br /&gt;
# The 黄淮学院 AI lab is progressing smoothly.&lt;br /&gt;
# The DataX venue, startup funding, and database-collection plan are settled; voiceprint collection starts soon.&lt;br /&gt;
# DataX is collecting text and web data on behalf of FreeNeb.&lt;br /&gt;
||&lt;br /&gt;
Next week:&lt;br /&gt;
# Attend the ICASSP conference&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang&lt;br /&gt;
||&lt;br /&gt;
Last week:&lt;br /&gt;
1. Spoken-scoring delivery plan, plus generation of the phone/word reference likelihoods;&lt;br /&gt;
2. ICASSP paper writing and checking;&lt;br /&gt;
3. FreeNeb logo design and cleanup.&lt;br /&gt;
||&lt;br /&gt;
This week:&lt;br /&gt;
1. Survey and implementation of model-compression methods;&lt;br /&gt;
2. Pair-wise backend design&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li||&lt;br /&gt;
Last week:&lt;br /&gt;
# Finished the ICASSP paper&lt;br /&gt;
# Finished decoding-parameter tuning for the x-vector model (chunk_size)&lt;br /&gt;
# Finished training-parameter tuning for the d-vector model (nnet_structure, dropout, batch_size)&lt;br /&gt;
# Followed up on the 声纹明星 WeChat mini program&lt;br /&gt;
# Supported the 秒针 voiceprint project&lt;br /&gt;
||&lt;br /&gt;
This week:&lt;br /&gt;
# Start serial training of the d-x-vector model&lt;br /&gt;
# Try an implementation of the xi-vector model&lt;br /&gt;
# Read the ICASSP18 papers&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Yating Peng||&lt;br /&gt;
Last week:&lt;br /&gt;
*Compiled the October accounts, did the bookkeeping, filed taxes, and ran payroll;&lt;br /&gt;
*Office setup;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
This week:&lt;br /&gt;
*Prepare the 2019 government funding projects and find a suitable agent;&lt;br /&gt;
*Go to the community office to issue the rental invoices; continue the office setup;&lt;br /&gt;
*Routine financial reimbursements.&lt;br /&gt;
||&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|Shiying||&lt;br /&gt;
Last week:&lt;br /&gt;
* Noise training of the big Chinese model (no skip)&lt;br /&gt;
* Big Chinese model (clean, skip)&lt;br /&gt;
* Got familiar with the fnscore code&lt;br /&gt;
* Finalized the Chinese model release&lt;br /&gt;
* rnnlm&lt;br /&gt;
||&lt;br /&gt;
This week:&lt;br /&gt;
* Start the Chinese rnnlm training&lt;br /&gt;
* Continue getting familiar with the fnscore code&lt;br /&gt;
||&lt;br /&gt;
||&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Wenqiang Du ||&lt;br /&gt;
Last week:&lt;br /&gt;
* Chinese 8k training (iter=2900), 4260 in total&lt;br /&gt;
* Supported the Japanese project&lt;br /&gt;
* Organized the intern demos and documentation&lt;br /&gt;
&lt;br /&gt;
||&lt;br /&gt;
This week:&lt;br /&gt;
* Continue the 8k Chinese model training&lt;br /&gt;
* Organize the intern documentation&lt;br /&gt;
||&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-11-12</id>
		<title>FreeNeb status Report 2018-11-12</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-11-12"/>
				<updated>2018-11-12T01:36:33Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' "FreeNeb status Report 2018-11-12" points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-11-05</id>
		<title>FreeNeb status Report 2018-11-05</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-11-05"/>
				<updated>2018-11-05T01:23:57Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' "FreeNeb status Report 2018-11-05" points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-10-29</id>
		<title>FreeNeb status Report 2018-10-29</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-10-29"/>
				<updated>2018-10-29T01:30:56Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' "FreeNeb status Report 2018-10-29" points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-10-29</id>
		<title>FreeNeb status Report 2018-10-29</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-10-29"/>
				<updated>2018-10-29T01:28:02Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy: protected "FreeNeb status Report 2018-10-29" ([edit=FreeNeb users] (indefinite) [move=FreeNeb users] (indefinite) [Read=FreeNeb users] (indefinite))&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' "FreeNeb status Report 2018-10-29" points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-10-22</id>
		<title>FreeNeb status Report 2018-10-22</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-10-22"/>
				<updated>2018-10-22T01:19:16Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' "FreeNeb status Report 2018-10-22" points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-10-15</id>
		<title>FreeNeb status Report 2018-10-15</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-10-15"/>
				<updated>2018-10-15T01:46:06Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' "FreeNeb status Report 2018-10-15" points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-10-15</id>
		<title>FreeNeb status Report 2018-10-15</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-10-15"/>
				<updated>2018-10-15T01:44:29Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy: protected "FreeNeb status Report 2018-10-15" ([edit=FreeNeb users] (indefinite) [move=FreeNeb users] (indefinite) [Read=FreeNeb users] (indefinite))&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' "FreeNeb status Report 2018-10-15" points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-10-08</id>
		<title>FreeNeb status Report 2018-10-08</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-10-08"/>
				<updated>2018-10-08T01:47:33Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' "FreeNeb status Report 2018-10-08" points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-09-25</id>
		<title>FreeNeb status Report 2018-09-25</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-09-25"/>
				<updated>2018-09-25T00:58:43Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' "FreeNeb status Report 2018-09-25" points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-09-17</id>
		<title>FreeNeb status Report 2018-09-17</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-09-17"/>
				<updated>2018-09-17T01:24:43Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' "FreeNeb status Report 2018-09-17" points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/Annual-report-2015</id>
		<title>Annual-report-2015</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/Annual-report-2015"/>
				<updated>2018-09-12T02:39:34Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[媒体文件:2015年 梦想的步伐.pptx|Wang Dong: Towards the future]]&lt;br /&gt;
&lt;br /&gt;
[[媒体文件:2016-01-10 Tianyi Luo's language technology research group annual report.pdf| Luo Tianyi: Language processing team annual report]]&lt;br /&gt;
&lt;br /&gt;
[[媒体文件:20160110_asr_annual_summary.pdf|Zhang Zhiyong: 2016 ASR group Annual Summary]]&lt;br /&gt;
&lt;br /&gt;
[[媒体文件:2015年 总结 财务报告 zhangxw.ppt|Zhang Xuewei]]&lt;br /&gt;
&lt;br /&gt;
[[媒体文件:Zengxy.pptx|Zeng Xiangyu]]&lt;br /&gt;
&lt;br /&gt;
[[媒体文件:Zhaomy-summary of 2015.pptx|Zhao Mengyuan]]&lt;br /&gt;
&lt;br /&gt;
[[媒体文件:2015-Lilt's_Annual_Summary.pdf|Lantian Li: Speaker recognition]]&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-09-10</id>
		<title>FreeNeb status Report 2018-09-10</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-09-10"/>
				<updated>2018-09-10T01:45:35Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy: protected "FreeNeb status Report 2018-09-10" ([edit=FreeNeb users] (indefinite) [move=FreeNeb users] (indefinite) [Read=FreeNeb users] (indefinite))&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' "FreeNeb status Report 2018-09-10" points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-09-10</id>
		<title>FreeNeb status Report 2018-09-10</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-09-10"/>
				<updated>2018-09-10T01:45:17Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' "FreeNeb status Report 2018-09-10" points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-08-27</id>
		<title>FreeNeb status Report 2018-08-27</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-08-27"/>
				<updated>2018-09-03T01:23:40Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' "FreeNeb status Report 2018-08-27" points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-09-03</id>
		<title>FreeNeb status Report 2018-09-03</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-09-03"/>
				<updated>2018-09-03T01:23:00Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' "FreeNeb status Report 2018-09-03" points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-08-27</id>
		<title>FreeNeb status Report 2018-08-27</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-08-27"/>
				<updated>2018-09-03T01:05:34Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy: protected "FreeNeb status Report 2018-08-27" ([edit=FreeNeb users] (indefinite) [move=FreeNeb users] (indefinite) [Read=FreeNeb users] (indefinite))&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' "FreeNeb status Report 2018-08-27" points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-09-03</id>
		<title>FreeNeb status Report 2018-09-03</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-09-03"/>
				<updated>2018-09-03T01:05:18Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy: protected "FreeNeb status Report 2018-09-03" ([edit=FreeNeb users] (indefinite) [move=FreeNeb users] (indefinite) [Read=FreeNeb users] (indefinite))&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' "FreeNeb status Report 2018-09-03" points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-08-20</id>
		<title>FreeNeb status Report 2018-08-20</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_status_Report_2018-08-20"/>
				<updated>2018-08-20T05:36:50Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' "FreeNeb status Report 2018-08-20" points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_Status_Report_2016-12-19</id>
		<title>FreeNeb Status Report 2016-12-19</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_Status_Report_2016-12-19"/>
				<updated>2018-08-14T04:54:15Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy: protected "FreeNeb Status Report 2016-12-19" ([edit=FreeNeb users] (indefinite) [move=FreeNeb users] (indefinite) [Read=FreeNeb users] (indefinite))&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' "FreeNeb Status Report 2016-12-19" points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/FreeNeb_Status_Report_2016-12-26</id>
		<title>FreeNeb Status Report 2016-12-26</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/FreeNeb_Status_Report_2016-12-26"/>
				<updated>2018-08-14T04:54:08Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangzy: protected "FreeNeb Status Report 2016-12-26" ([edit=FreeNeb users] (indefinite) [move=FreeNeb users] (indefinite) [Read=FreeNeb users] (indefinite))&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Warning:''' "FreeNeb Status Report 2016-12-26" points here, but you do not have sufficient permissions to access it.&lt;/div&gt;</summary>
		<author><name>Zhangzy</name></author>	</entry>

	</feed>