<?xml version="1.0"?>
<?xml-stylesheet type="text/css" href="http://cslt.org/mediawiki/skins/common/feed.css?303"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="zh-cn">
		<id>http://cslt.org/mediawiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Zhangmiao</id>
		<title>cslt Wiki - 用户贡献 [zh-cn]</title>
		<link rel="self" type="application/atom+xml" href="http://cslt.org/mediawiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Zhangmiao"/>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/%E7%89%B9%E6%AE%8A:%E7%94%A8%E6%88%B7%E8%B4%A1%E7%8C%AE/Zhangmiao"/>
		<updated>2026-04-14T11:11:09Z</updated>
		<subtitle>用户贡献</subtitle>
		<generator>MediaWiki 1.23.3</generator>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/ASR_Status_Report_2018-1-8</id>
		<title>ASR Status Report 2018-1-8</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/ASR_Status_Report_2018-1-8"/>
				<updated>2018-01-08T05:51:11Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week !! Task Tracking&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2018.1.8&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Update the APSIPA paper: experimental results; description of the database; analysis of the results.&lt;br /&gt;
* Collected photos and made the photo book.&lt;br /&gt;
||&lt;br /&gt;
* Finish collecting photos and complete the photo book.&lt;br /&gt;
* Buy the gifts.&lt;br /&gt;
* Plan the time capsule project for the annual meeting.&lt;br /&gt;
* Plan the &amp;quot;special offer&amp;quot; project.&lt;br /&gt;
* Help make tests for the Parrot project.&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* Copyright application for voice-checker and voice-printer&lt;br /&gt;
* Finished the m2asr annual summary&lt;br /&gt;
* i-vector system for m2asr (training done; testing in progress)&lt;br /&gt;
* Contract with Deep Curious (深度好奇); no response yet&lt;br /&gt;
|| &lt;br /&gt;
* DNN i-vector LID&lt;br /&gt;
* Build my personal home page&lt;br /&gt;
* CTC for the Social Science Academy (this task is very similar to the zero-resource decoding listed in the m2asr project)&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* Commercial deep speaker model training still in progress. [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=646]&lt;br /&gt;
* Phone-aware scoring on deep speaker feature. [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=643]&lt;br /&gt;
* Phonetic speaker embedding still in progress. [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=644]&lt;br /&gt;
* Overlap training for speaker features. [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=645]&lt;br /&gt;
* Voice QR code design [Assigned to Shouyi Dai].&lt;br /&gt;
||&lt;br /&gt;
* Commercial deep speaker model training and evaluation.&lt;br /&gt;
* Phonetic speaker embedding.&lt;br /&gt;
* Overlap training for speaker features.&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* Technical description of Parrot for a patent&lt;br /&gt;
||&lt;br /&gt;
* Configuration adaptation for Parrot&lt;br /&gt;
||&lt;br /&gt;
*&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week !! Task Tracking&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2018.1.2&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* Commercial deep speaker model training. [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=646]&lt;br /&gt;
* Phone-aware scoring on deep speaker feature. [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=643]&lt;br /&gt;
* Phonetic speaker embedding. [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=644]&lt;br /&gt;
* Overlap training for speaker features. [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=645]&lt;br /&gt;
||&lt;br /&gt;
* Commercial deep speaker model training.&lt;br /&gt;
* Phone-aware scoring on deep speaker feature.&lt;br /&gt;
* Phonetic speaker embedding.&lt;br /&gt;
* Overlap training for speaker features.&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* A brief survey on oral language evaluation, both on application and algorithm&lt;br /&gt;
||&lt;br /&gt;
* A test version of Parrot&lt;br /&gt;
* Patent of Parrot&lt;br /&gt;
||&lt;br /&gt;
*&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/ASR_Status_Report_2018-1-8</id>
		<title>ASR Status Report 2018-1-8</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/ASR_Status_Report_2018-1-8"/>
				<updated>2018-01-08T05:49:49Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week !! Task Tracking&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2018.1.8&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Update the APSIPA paper: experimental results; description of the database; analysis of the results.&lt;br /&gt;
* Collected photos and made the photo book.&lt;br /&gt;
||&lt;br /&gt;
* Finish collecting photos and complete the photo book.&lt;br /&gt;
* Buy the gifts.&lt;br /&gt;
* Plan the time capsule project for the annual meeting.&lt;br /&gt;
* Plan the &amp;quot;special offer&amp;quot; project.&lt;br /&gt;
* Help make tests for the Parrot project.&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* Copyright application for voice-checker and voice-printer&lt;br /&gt;
* Finished the m2asr annual summary&lt;br /&gt;
* i-vector system for m2asr (training done; testing in progress)&lt;br /&gt;
* Contract with Deep Curious (深度好奇); no response yet&lt;br /&gt;
|| &lt;br /&gt;
* DNN i-vector LID&lt;br /&gt;
* Build my personal home page&lt;br /&gt;
* CTC for the Social Science Academy (this task is very similar to the zero-resource decoding listed in the m2asr project)&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* Commercial deep speaker model training still in progress. [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=646]&lt;br /&gt;
* Phone-aware scoring on deep speaker feature. [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=643]&lt;br /&gt;
* Phonetic speaker embedding still in progress. [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=644]&lt;br /&gt;
* Overlap training for speaker features. [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=645]&lt;br /&gt;
* Voice QR code design [Assigned to Shouyi Dai].&lt;br /&gt;
||&lt;br /&gt;
* Commercial deep speaker model training and evaluation.&lt;br /&gt;
* Phonetic speaker embedding.&lt;br /&gt;
* Overlap training for speaker features.&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* Technical description of Parrot for a patent&lt;br /&gt;
||&lt;br /&gt;
* Configuration adaptation for Parrot&lt;br /&gt;
||&lt;br /&gt;
*&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week !! Task Tracking&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2018.1.2&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* Commercial deep speaker model training. [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=646]&lt;br /&gt;
* Phone-aware scoring on deep speaker feature. [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=643]&lt;br /&gt;
* Phonetic speaker embedding. [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=644]&lt;br /&gt;
* Overlap training for speaker features. [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=645]&lt;br /&gt;
||&lt;br /&gt;
* Commercial deep speaker model training.&lt;br /&gt;
* Phone-aware scoring on deep speaker feature.&lt;br /&gt;
* Phonetic speaker embedding.&lt;br /&gt;
* Overlap training for speaker features.&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* A brief survey on oral language evaluation, both on application and algorithm&lt;br /&gt;
||&lt;br /&gt;
* A test version of Parrot&lt;br /&gt;
* Patent of Parrot&lt;br /&gt;
||&lt;br /&gt;
*&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/Public_data</id>
		<title>Public data</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/Public_data"/>
				<updated>2017-12-27T11:55:46Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==CCC data resource==&lt;br /&gt;
&lt;br /&gt;
CSLT maintains a close collaboration with the Chinese Corpus Consortium (CCC) to collect and publish databases in China. The aim of the CCC is to provide corpora for Chinese ASR, TTS, NLP, perception analysis, phonetics analysis, linguistic analysis, and other related tasks. The corpora can be speech- or text-based; read or spontaneous; wideband or narrowband; standard or dialectal Chinese; clean or with noise; or of any other kind deemed helpful for the aforesaid purposes. &lt;br /&gt;
&lt;br /&gt;
[http://www.cccforum.org visit CCC]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Trivial events database==&lt;br /&gt;
A free database involving 7 types of human trivial events: cough, laugh, &amp;quot;wei&amp;quot;, &amp;quot;hmm&amp;quot;, &amp;quot;tsk-tsk&amp;quot;, &amp;quot;ahem&amp;quot;, and sniff. The data was collected using a recording Android app.&lt;br /&gt;
&lt;br /&gt;
[https://share.weiyun.com/389a55251c59fc4f9740d5c28be380f7 download from Cloud]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Disguise database==&lt;br /&gt;
A free database of normal and disguised human speech. The data was collected using a recording Android app.&lt;br /&gt;
&lt;br /&gt;
[https://share.weiyun.com/a7355eb4321dafd2887460daa915191d download from Cloud]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Uyghur text database==&lt;br /&gt;
&lt;br /&gt;
CSLT collaborated with [http://www.xju.edu.cn/ XinJiang University] on a wide range of research, including speech recognition, information retrieval, and text processing. We published a multitude of resources to boost research on Uyghur. The text data published here is used for Uyghur text classification tasks and involves 500 health and 500 non-health documents. It was collected by Mahpirat from XJU when she visited CSLT from 2012 to 2013. &lt;br /&gt;
&lt;br /&gt;
[http://data.cslt.org/uygh/zip/data.tar.gz download] [http://pan.baidu.com/s/1hqKwE00 download from Baidu]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Sheik Cantonese lexicon==&lt;br /&gt;
&lt;br /&gt;
A free Cantonese lexicon collected from Adam Sheik's Cantonese Dict project. &lt;br /&gt;
&lt;br /&gt;
[http://data.cslt.org/cantonese/sheik/index.html check details]&lt;br /&gt;
&lt;br /&gt;
== THUYG-20 database ==&lt;br /&gt;
&lt;br /&gt;
A free speech database for constructing a full-fledged Uyghur ASR system. &lt;br /&gt;
&lt;br /&gt;
[http://data.cslt.org/thuyg20/README.html check details]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== THUYG-20 SRE database ==&lt;br /&gt;
&lt;br /&gt;
A free speech database for constructing a full-fledged Uyghur speaker recognition system. &lt;br /&gt;
&lt;br /&gt;
[http://data.cslt.org/thuyg20-sre/README.html check details]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== SUD-12 database ==&lt;br /&gt;
&lt;br /&gt;
A speech database used for short-utterance speaker recognition.&lt;br /&gt;
&lt;br /&gt;
[http://data.cslt.org/susr/SUB12/index.html check details]&lt;br /&gt;
&lt;br /&gt;
== THUCH30 database ==&lt;br /&gt;
&lt;br /&gt;
A speech database used for Chinese LVCSR. Recorded by Dong Wang many years ago.&lt;br /&gt;
&lt;br /&gt;
[http://data.cslt.org/thchs30/README.html check details]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Kazakh ASR database==&lt;br /&gt;
A speech database used for Kazakh LVCSR. &lt;br /&gt;
&lt;br /&gt;
The entire package includes the full set of speech and language resources required to establish a Kazakh speech recognition system. &lt;br /&gt;
&lt;br /&gt;
[https://share.weiyun.com/4cf4ec64e4e59f8280de8c7baecaad27  QQ weiyun share link]&lt;br /&gt;
&lt;br /&gt;
You can send an e-mail to shiying@cslt.riit.tsinghua.edu.cn to request the share password.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Tibetan ASR database==&lt;br /&gt;
A speech database used for Tibetan LVCSR.&lt;br /&gt;
&lt;br /&gt;
The entire package includes the full set of speech and language resources required to establish a Tibetan speech recognition system. &lt;br /&gt;
&lt;br /&gt;
[https://share.weiyun.com/da691bff0f7c641646ae9fb1154ffdce QQ weiyun share link]&lt;br /&gt;
&lt;br /&gt;
You can send an e-mail to shiying@cslt.riit.tsinghua.edu.cn to request the share password.&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/Public_data</id>
		<title>Public data</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/Public_data"/>
				<updated>2017-12-27T11:53:36Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==CCC data resource==&lt;br /&gt;
&lt;br /&gt;
CSLT maintains a close collaboration with the Chinese Corpus Consortium (CCC) to collect and publish databases in China. The aim of the CCC is to provide corpora for Chinese ASR, TTS, NLP, perception analysis, phonetics analysis, linguistic analysis, and other related tasks. The corpora can be speech- or text-based; read or spontaneous; wideband or narrowband; standard or dialectal Chinese; clean or with noise; or of any other kind deemed helpful for the aforesaid purposes. &lt;br /&gt;
&lt;br /&gt;
[http://www.cccforum.org visit CCC]&lt;br /&gt;
&lt;br /&gt;
==Trivial events database==&lt;br /&gt;
A free database involving 7 types of human trivial events: cough, laugh, &amp;quot;wei&amp;quot;, &amp;quot;hmm&amp;quot;, &amp;quot;tsk-tsk&amp;quot;, &amp;quot;ahem&amp;quot;, and sniff. The data was collected using a recording Android app.&lt;br /&gt;
&lt;br /&gt;
[https://share.weiyun.com/389a55251c59fc4f9740d5c28be380f7 download from Cloud]&lt;br /&gt;
&lt;br /&gt;
==Uyghur text database==&lt;br /&gt;
&lt;br /&gt;
CSLT collaborated with [http://www.xju.edu.cn/ XinJiang University] on a wide range of research, including speech recognition, information retrieval, and text processing. We published a multitude of resources to boost research on Uyghur. The text data published here is used for Uyghur text classification tasks and involves 500 health and 500 non-health documents. It was collected by Mahpirat from XJU when she visited CSLT from 2012 to 2013. &lt;br /&gt;
&lt;br /&gt;
[http://data.cslt.org/uygh/zip/data.tar.gz download] [http://pan.baidu.com/s/1hqKwE00 download from Baidu]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Sheik Cantonese lexicon==&lt;br /&gt;
&lt;br /&gt;
A free Cantonese lexicon collected from Adam Sheik's Cantonese Dict project. &lt;br /&gt;
&lt;br /&gt;
[http://data.cslt.org/cantonese/sheik/index.html check details]&lt;br /&gt;
&lt;br /&gt;
== THUYG-20 database ==&lt;br /&gt;
&lt;br /&gt;
A free speech database for constructing a full-fledged Uyghur ASR system. &lt;br /&gt;
&lt;br /&gt;
[http://data.cslt.org/thuyg20/README.html check details]&lt;br /&gt;
&lt;br /&gt;
== THUYG-20 SRE database ==&lt;br /&gt;
&lt;br /&gt;
A free speech database for constructing a full-fledged Uyghur speaker recognition system. &lt;br /&gt;
&lt;br /&gt;
[http://data.cslt.org/thuyg20-sre/README.html check details]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== SUD-12 database ==&lt;br /&gt;
&lt;br /&gt;
A speech database used for short-utterance speaker recognition.&lt;br /&gt;
&lt;br /&gt;
[http://data.cslt.org/susr/SUB12/index.html check details]&lt;br /&gt;
&lt;br /&gt;
== THUCH30 database ==&lt;br /&gt;
&lt;br /&gt;
A speech database used for Chinese LVCSR. Recorded by Dong Wang many years ago.&lt;br /&gt;
&lt;br /&gt;
[http://data.cslt.org/thchs30/README.html check details]&lt;br /&gt;
&lt;br /&gt;
==Kazakh ASR database==&lt;br /&gt;
A speech database used for Kazakh LVCSR. &lt;br /&gt;
&lt;br /&gt;
The entire package includes the full set of speech and language resources required to establish a Kazakh speech recognition system. &lt;br /&gt;
&lt;br /&gt;
[https://share.weiyun.com/4cf4ec64e4e59f8280de8c7baecaad27  QQ weiyun share link]&lt;br /&gt;
&lt;br /&gt;
You can send an e-mail to shiying@cslt.riit.tsinghua.edu.cn to request the share password.&lt;br /&gt;
&lt;br /&gt;
==Tibetan ASR database==&lt;br /&gt;
A speech database used for Tibetan LVCSR.&lt;br /&gt;
&lt;br /&gt;
The entire package includes the full set of speech and language resources required to establish a Tibetan speech recognition system. &lt;br /&gt;
&lt;br /&gt;
[https://share.weiyun.com/da691bff0f7c641646ae9fb1154ffdce QQ weiyun share link]&lt;br /&gt;
&lt;br /&gt;
You can send an e-mail to shiying@cslt.riit.tsinghua.edu.cn to request the share password.&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/Public_data</id>
		<title>Public data</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/Public_data"/>
				<updated>2017-12-27T10:20:59Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==CCC data resource==&lt;br /&gt;
&lt;br /&gt;
CSLT maintains a close collaboration with the Chinese Corpus Consortium (CCC) to collect and publish databases in China. The aim of the CCC is to provide corpora for Chinese ASR, TTS, NLP, perception analysis, phonetics analysis, linguistic analysis, and other related tasks. The corpora can be speech- or text-based; read or spontaneous; wideband or narrowband; standard or dialectal Chinese; clean or with noise; or of any other kind deemed helpful for the aforesaid purposes. &lt;br /&gt;
&lt;br /&gt;
[http://www.cccforum.org visit CCC]&lt;br /&gt;
&lt;br /&gt;
==Trivial events database==&lt;br /&gt;
A free database involving 7 types of human trivial events: cough, laugh, &amp;quot;wei&amp;quot;, &amp;quot;hmm&amp;quot;, &amp;quot;tsk-tsk&amp;quot;, &amp;quot;ahem&amp;quot;, and sniff. The data was collected using a recording Android app.&lt;br /&gt;
&lt;br /&gt;
==Uyghur text database==&lt;br /&gt;
&lt;br /&gt;
CSLT collaborated with [http://www.xju.edu.cn/ XinJiang University] on a wide range of research, including speech recognition, information retrieval, and text processing. We published a multitude of resources to boost research on Uyghur. The text data published here is used for Uyghur text classification tasks and involves 500 health and 500 non-health documents. It was collected by Mahpirat from XJU when she visited CSLT from 2012 to 2013. &lt;br /&gt;
&lt;br /&gt;
[http://data.cslt.org/uygh/zip/data.tar.gz download] [http://pan.baidu.com/s/1hqKwE00 download from Baidu]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Sheik Cantonese lexicon==&lt;br /&gt;
&lt;br /&gt;
A free Cantonese lexicon collected from Adam Sheik's Cantonese Dict project. &lt;br /&gt;
&lt;br /&gt;
[http://data.cslt.org/cantonese/sheik/index.html check details]&lt;br /&gt;
&lt;br /&gt;
== THUYG-20 database ==&lt;br /&gt;
&lt;br /&gt;
A free speech database for constructing a full-fledged Uyghur ASR system. &lt;br /&gt;
&lt;br /&gt;
[http://data.cslt.org/thuyg20/README.html check details]&lt;br /&gt;
&lt;br /&gt;
== THUYG-20 SRE database ==&lt;br /&gt;
&lt;br /&gt;
A free speech database for constructing a full-fledged Uyghur speaker recognition system. &lt;br /&gt;
&lt;br /&gt;
[http://data.cslt.org/thuyg20-sre/README.html check details]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== SUD-12 database ==&lt;br /&gt;
&lt;br /&gt;
A speech database used for short-utterance speaker recognition.&lt;br /&gt;
&lt;br /&gt;
[http://data.cslt.org/susr/SUB12/index.html check details]&lt;br /&gt;
&lt;br /&gt;
== THUCH30 database ==&lt;br /&gt;
&lt;br /&gt;
A speech database used for Chinese LVCSR. Recorded by Dong Wang many years ago.&lt;br /&gt;
&lt;br /&gt;
[http://data.cslt.org/thchs30/README.html check details]&lt;br /&gt;
&lt;br /&gt;
==Kazakh ASR database==&lt;br /&gt;
A speech database used for Kazakh LVCSR. &lt;br /&gt;
&lt;br /&gt;
The entire package includes the full set of speech and language resources required to establish a Kazakh speech recognition system. &lt;br /&gt;
&lt;br /&gt;
[https://share.weiyun.com/4cf4ec64e4e59f8280de8c7baecaad27  QQ weiyun share link]&lt;br /&gt;
&lt;br /&gt;
You can send an e-mail to shiying@cslt.riit.tsinghua.edu.cn to request the share password.&lt;br /&gt;
&lt;br /&gt;
==Tibetan ASR database==&lt;br /&gt;
A speech database used for Tibetan LVCSR.&lt;br /&gt;
&lt;br /&gt;
The entire package includes the full set of speech and language resources required to establish a Tibetan speech recognition system. &lt;br /&gt;
&lt;br /&gt;
[https://share.weiyun.com/da691bff0f7c641646ae9fb1154ffdce QQ weiyun share link]&lt;br /&gt;
&lt;br /&gt;
You can send an e-mail to shiying@cslt.riit.tsinghua.edu.cn to request the share password.&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/Public_data</id>
		<title>Public data</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/Public_data"/>
				<updated>2017-12-27T10:08:50Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==CCC data resource==&lt;br /&gt;
&lt;br /&gt;
CSLT maintains a close collaboration with the Chinese Corpus Consortium (CCC) to collect and publish databases in China. The aim of the CCC is to provide corpora for Chinese ASR, TTS, NLP, perception analysis, phonetics analysis, linguistic analysis, and other related tasks. The corpora can be speech- or text-based; read or spontaneous; wideband or narrowband; standard or dialectal Chinese; clean or with noise; or of any other kind deemed helpful for the aforesaid purposes. &lt;br /&gt;
&lt;br /&gt;
[http://www.cccforum.org visit CCC]&lt;br /&gt;
&lt;br /&gt;
==Trivial events database==&lt;br /&gt;
A free database involving 7 types of human trivial events.&lt;br /&gt;
&lt;br /&gt;
==Uyghur text database==&lt;br /&gt;
&lt;br /&gt;
CSLT collaborated with [http://www.xju.edu.cn/ XinJiang University] on a wide range of research, including speech recognition, information retrieval, and text processing. We published a multitude of resources to boost research on Uyghur. The text data published here is used for Uyghur text classification tasks and involves 500 health and 500 non-health documents. It was collected by Mahpirat from XJU when she visited CSLT from 2012 to 2013. &lt;br /&gt;
&lt;br /&gt;
[http://data.cslt.org/uygh/zip/data.tar.gz download] [http://pan.baidu.com/s/1hqKwE00 download from Baidu]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Sheik Cantonese lexicon==&lt;br /&gt;
&lt;br /&gt;
A free Cantonese lexicon collected from Adam Sheik's Cantonese Dict project. &lt;br /&gt;
&lt;br /&gt;
[http://data.cslt.org/cantonese/sheik/index.html check details]&lt;br /&gt;
&lt;br /&gt;
== THUYG-20 database ==&lt;br /&gt;
&lt;br /&gt;
A free speech database for constructing a full-fledged Uyghur ASR system. &lt;br /&gt;
&lt;br /&gt;
[http://data.cslt.org/thuyg20/README.html check details]&lt;br /&gt;
&lt;br /&gt;
== THUYG-20 SRE database ==&lt;br /&gt;
&lt;br /&gt;
A free speech database for constructing a full-fledged Uyghur speaker recognition system. &lt;br /&gt;
&lt;br /&gt;
[http://data.cslt.org/thuyg20-sre/README.html check details]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== SUD-12 database ==&lt;br /&gt;
&lt;br /&gt;
A speech database used for short-utterance speaker recognition.&lt;br /&gt;
&lt;br /&gt;
[http://data.cslt.org/susr/SUB12/index.html check details]&lt;br /&gt;
&lt;br /&gt;
== THUCH30 database ==&lt;br /&gt;
&lt;br /&gt;
A speech database used for Chinese LVCSR. Recorded by Dong Wang many years ago.&lt;br /&gt;
&lt;br /&gt;
[http://data.cslt.org/thchs30/README.html check details]&lt;br /&gt;
&lt;br /&gt;
==Kazakh ASR database==&lt;br /&gt;
A speech database used for Kazakh LVCSR. &lt;br /&gt;
&lt;br /&gt;
The entire package includes the full set of speech and language resources required to establish a Kazakh speech recognition system. &lt;br /&gt;
&lt;br /&gt;
[https://share.weiyun.com/4cf4ec64e4e59f8280de8c7baecaad27  QQ weiyun share link]&lt;br /&gt;
&lt;br /&gt;
You can send an e-mail to shiying@cslt.riit.tsinghua.edu.cn to request the share password.&lt;br /&gt;
&lt;br /&gt;
==Tibetan ASR database==&lt;br /&gt;
A speech database used for Tibetan LVCSR.&lt;br /&gt;
&lt;br /&gt;
The entire package includes the full set of speech and language resources required to establish a Tibetan speech recognition system. &lt;br /&gt;
&lt;br /&gt;
[https://share.weiyun.com/da691bff0f7c641646ae9fb1154ffdce QQ weiyun share link]&lt;br /&gt;
&lt;br /&gt;
You can send an e-mail to shiying@cslt.riit.tsinghua.edu.cn to request the share password.&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-12-25</id>
		<title>ASR Status Report 2017-12-25</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-12-25"/>
				<updated>2017-12-25T05:45:55Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week !! Task Tracking&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2017.12.25&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* Read the 16k model script&lt;br /&gt;
* Review the cough recognition code left by Xiaofei&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* Some functions for voice-printer&lt;br /&gt;
** speaker vector per utterance  [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/63/SpkerVector2.png here]&lt;br /&gt;
** speaker vector minus base speaker vector [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/6b/Spkear_vector.png here]&lt;br /&gt;
* CTC for Haibo Wang (token accuracy: 92.80% on the train set, 89.74% on the CV set); not yet evaluated on the test set&lt;br /&gt;
* QRcode&lt;br /&gt;
** speaker vector merge phone grayscale [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/f3/Speaker_factor_gray.png here]&lt;br /&gt;
** speaker vector merge phone black-and-white map  [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/9/97/1514176866%281%29.png here]&lt;br /&gt;
** speaker vector merge phone black-and-white map minus base vector  [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/4/4e/SpeakerQrCode2.png here]&lt;br /&gt;
* i-vector baseline for Kazakh-Uyghur LRE; performance is 81.85% (utterance level)&lt;br /&gt;
|| &lt;br /&gt;
* Finish the voice-checker copyright application and submit it this Wednesday&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* Complete the recipe for `VV_FACTOR`.&lt;br /&gt;
* 16K and 8K deep speaker model comparison.[http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=646]&lt;br /&gt;
||&lt;br /&gt;
* Patent for `VV_QuickMark`.&lt;br /&gt;
* Complete the demo for `VV_FACTOR`.[Assign to Shouyi Dai]&lt;br /&gt;
* Phonetic speaker embedding.&lt;br /&gt;
* Overlap training for speaker features.&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* Word-level pronunciation accuracy based on likelihood (marks each word as well pronounced, '0', or badly pronounced, '1')&lt;br /&gt;
||&lt;br /&gt;
* model adaptation&lt;br /&gt;
* If possible, an alpha version of Parrot for in-lab testing to collect data for a better configuration&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week !! Task Tracking&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2017.12.18&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* Finish the Voice-printer program&lt;br /&gt;
* Apply the software copyright of Voice-printer&lt;br /&gt;
* APSIPA 2017&lt;br /&gt;
|| &lt;br /&gt;
* Finish the software copyright of Voice-checker&lt;br /&gt;
* Baseline of a similar-language recognition system (i-vector, DNN, PTN)&lt;br /&gt;
||&lt;br /&gt;
* focus on functionality rather than UI&lt;br /&gt;
* i-vector LID first&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* Optimize the demo of `VV_Seg` and `VV_QuickMark`.&lt;br /&gt;
* Phone-aware scoring on deep speaker features. [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=643]&lt;br /&gt;
||&lt;br /&gt;
* Phone-aware scoring.&lt;br /&gt;
* Overlap training for speaker features.&lt;br /&gt;
|| &lt;br /&gt;
* test on the trivial-events dataset&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* easy-to-read interfaces for Parrot&lt;br /&gt;
||&lt;br /&gt;
* phone-level likelihood for detailed diagnosis and an alpha version of Parrot for in-lab testing&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-12-25</id>
		<title>ASR Status Report 2017-12-25</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-12-25"/>
				<updated>2017-12-25T05:43:44Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week !! Task Tracking&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2017.12.25&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* Read the 16k model scripts&lt;br /&gt;
* Review the cough recognition code left by Xiaofei&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* some functions for voice-printer&lt;br /&gt;
** speaker vector per utterance  [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/63/SpkerVector2.png here]&lt;br /&gt;
** speaker vector minus base speaker vector [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/6b/Spkear_vector.png here]&lt;br /&gt;
* CTC for Haibo Wang (token accuracy: 92.80% on the train set, 89.74% on the CV set; not yet tested on the test set)&lt;br /&gt;
* QR code&lt;br /&gt;
** speaker vector merged with phones (grayscale map) [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/f3/Speaker_factor_gray.png here]&lt;br /&gt;
** speaker vector merged with phones (black-and-white map)  [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/9/97/1514176866%281%29.png here]&lt;br /&gt;
** speaker vector merged with phones (black-and-white map), minus base vector  [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/4/4e/SpeakerQrCode2.png here]&lt;br /&gt;
* i-vector baseline for Kazakh-Uyghur LRE: performance is 81.85% (utterance level)&lt;br /&gt;
|| &lt;br /&gt;
* Finish the voice-checker copyright application and submit it this Wednesday&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* Complete the recipe for `VV_FACTOR`.&lt;br /&gt;
* 16K and 8K deep speaker model comparison.[http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=646]&lt;br /&gt;
||&lt;br /&gt;
* Patent for `VV_QuickMark`.&lt;br /&gt;
* Complete the demo for `VV_FACTOR`. [Assigned to Shouyi Dai]&lt;br /&gt;
* Phonetic speaker embedding.&lt;br /&gt;
* Overlap training for speaker features.&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* word-level pronunciation accuracy based on likelihood (label each word as well pronounced, '0', or badly pronounced, '1')&lt;br /&gt;
||&lt;br /&gt;
* model adaptation&lt;br /&gt;
* if possible, an alpha version of Parrot for in-lab testing, to collect data for better configuration &lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week !! Task Tracking&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2017.12.18&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* Finish the Voice-printer program&lt;br /&gt;
* Apply the software copyright of Voice-printer&lt;br /&gt;
* APSIPA 2017&lt;br /&gt;
|| &lt;br /&gt;
* Finish the software copyright of Voice-checker&lt;br /&gt;
* Baseline of similar-language recognition system (i-vector, DNN, PTN)&lt;br /&gt;
||&lt;br /&gt;
* focus on functionality rather than UI&lt;br /&gt;
* i-vector LID first&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* Optimize the demo of `VV_Seg` and `VV_QuickMark`.&lt;br /&gt;
* Phone-aware scoring on deep speaker features. [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=643]&lt;br /&gt;
||&lt;br /&gt;
* Phone-aware scoring.&lt;br /&gt;
* Overlap training for speaker features.&lt;br /&gt;
|| &lt;br /&gt;
* test on the trivial-events dataset&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* easy-to-read interfaces for Parrot&lt;br /&gt;
||&lt;br /&gt;
* phone-level likelihood for detailed diagnosis and an alpha version of Parrot for in-lab testing&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-12-25</id>
		<title>ASR Status Report 2017-12-25</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-12-25"/>
				<updated>2017-12-25T05:43:22Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week !! Task Tracking&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2017.12.25&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Read the 16k model scripts&lt;br /&gt;
||&lt;br /&gt;
* Review the cough recognition code left by Xiaofei&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* some functions for voice-printer&lt;br /&gt;
** speaker vector per utterance  [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/63/SpkerVector2.png here]&lt;br /&gt;
** speaker vector minus base speaker vector [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/6/6b/Spkear_vector.png here]&lt;br /&gt;
* CTC for Haibo Wang (token accuracy: 92.80% on the train set, 89.74% on the CV set; not yet tested on the test set)&lt;br /&gt;
* QR code&lt;br /&gt;
** speaker vector merged with phones (grayscale map) [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/f3/Speaker_factor_gray.png here]&lt;br /&gt;
** speaker vector merged with phones (black-and-white map)  [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/9/97/1514176866%281%29.png here]&lt;br /&gt;
** speaker vector merged with phones (black-and-white map), minus base vector  [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/4/4e/SpeakerQrCode2.png here]&lt;br /&gt;
* i-vector baseline for Kazakh-Uyghur LRE: performance is 81.85% (utterance level)&lt;br /&gt;
|| &lt;br /&gt;
* Finish the voice-checker copyright application and submit it this Wednesday&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* Complete the recipe for `VV_FACTOR`.&lt;br /&gt;
* 16K and 8K deep speaker model comparison.[http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=646]&lt;br /&gt;
||&lt;br /&gt;
* Patent for `VV_QuickMark`.&lt;br /&gt;
* Complete the demo for `VV_FACTOR`. [Assigned to Shouyi Dai]&lt;br /&gt;
* Phonetic speaker embedding.&lt;br /&gt;
* Overlap training for speaker features.&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* word-level pronunciation accuracy based on likelihood (label each word as well pronounced, '0', or badly pronounced, '1')&lt;br /&gt;
||&lt;br /&gt;
* model adaptation&lt;br /&gt;
* if possible, an alpha version of Parrot for in-lab testing, to collect data for better configuration &lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week !! Task Tracking&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2017.12.18&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* Finish the Voice-printer program&lt;br /&gt;
* Apply the software copyright of Voice-printer&lt;br /&gt;
* APSIPA 2017&lt;br /&gt;
|| &lt;br /&gt;
* Finish the software copyright of Voice-checker&lt;br /&gt;
* Baseline of similar-language recognition system (i-vector, DNN, PTN)&lt;br /&gt;
||&lt;br /&gt;
* focus on functionality rather than UI&lt;br /&gt;
* i-vector LID first&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* Optimize the demo of `VV_Seg` and `VV_QuickMark`.&lt;br /&gt;
* Phone-aware scoring on deep speaker features. [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=643]&lt;br /&gt;
||&lt;br /&gt;
* Phone-aware scoring.&lt;br /&gt;
* Overlap training for speaker features.&lt;br /&gt;
|| &lt;br /&gt;
* test on the trivial-events dataset&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* easy-to-read interfaces for Parrot&lt;br /&gt;
||&lt;br /&gt;
* phone-level likelihood for detailed diagnosis and an alpha version of Parrot for in-lab testing&lt;br /&gt;
||&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/Speaker_Recognition_on_Trivial_events</id>
		<title>Speaker Recognition on Trivial events</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/Speaker_Recognition_on_Trivial_events"/>
				<updated>2017-11-11T02:21:28Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：/* Data downloading */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Project name=&lt;br /&gt;
Speaker Recognition on Trivial events&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Project members=&lt;br /&gt;
Dong Wang, Miao Zhang, Xiaofei Kang, Lantian Li, Zhiyuan Tang&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
Trivial events are ubiquitous in human-to-human conversations, e.g., coughs, laughs and sniffs. Compared to regular&lt;br /&gt;
speech, these trivial events are usually short and unclear,&lt;br /&gt;
so they are generally regarded as not speaker discriminative and&lt;br /&gt;
are largely ignored by current speaker recognition research.&lt;br /&gt;
However, trivial events are highly valuable in particular circumstances such as forensic examination, as they&lt;br /&gt;
are less subject to intentional change and so can be used to&lt;br /&gt;
identify the genuine speaker behind disguised speech.&lt;br /&gt;
In this project, we collect a trivial-event speech database&lt;br /&gt;
and report speaker recognition results on it, from both&lt;br /&gt;
human listeners and machines. &lt;br /&gt;
We want to find out:&lt;br /&gt;
(1) which type of trivial event conveys more speaker information;&lt;br /&gt;
(2) who is more apt to identify speakers from these trivial events: humans or machines.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Speaker feature learning=&lt;br /&gt;
&lt;br /&gt;
The discovery of the short-time property of speaker traits is a key step towards speech signal factorization: the &lt;br /&gt;
speaker trait is one of the two main factors, the other being the linguistic content, which has long been&lt;br /&gt;
known to exhibit short-time patterns. &lt;br /&gt;
&lt;br /&gt;
The key idea of speaker feature learning is to discriminate between training speakers based on &lt;br /&gt;
short-time frames with deep neural networks (DNN), dating back to Ehsan et al. in 2014 [2]. As shown below, the output of the DNN &lt;br /&gt;
covers the training speakers, and the frame-level speaker features are read from the last hidden layer. The &lt;br /&gt;
basic assumption is: if the output of the last hidden layer can serve as the input feature of the &lt;br /&gt;
last affine layer (a softmax regression classifier), these features should be speaker discriminative. &lt;br /&gt;
&lt;br /&gt;
[[文件:Dnn-spk.png|500px]]&lt;br /&gt;
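The frame-level feature reading and d-vector derivation described above can be sketched as follows. This is a minimal numpy illustration, not the trained model: the dimensions, the single hidden layer, and the random weights are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: 40-dim input frames, a 16-dim last hidden layer.
FRAME_DIM, FEAT_DIM = 40, 16

# Stand-in for the trained feature net (in practice, a deep network).
W_hidden = rng.standard_normal((FRAME_DIM, FEAT_DIM)) * 0.1

def frame_features(frames):
    """Read frame-level speaker features from the last hidden layer (ReLU)."""
    return np.maximum(frames @ W_hidden, 0.0)

def d_vector(frames):
    """Utterance-level d-vector: average of the frame-level features."""
    return frame_features(frames).mean(axis=0)

def cosine_score(a, b):
    """Back-end scoring by cosine distance between two d-vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

# Usage: score two utterances of random "frames".
utt1 = rng.standard_normal((100, FRAME_DIM))
utt2 = rng.standard_normal((120, FRAME_DIM))
score = cosine_score(d_vector(utt1), d_vector(utt2))
```

With a real trained network, the same averaging-and-cosine back end is what the text calls the simple d-vector scoring.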
&lt;br /&gt;
However, the vanilla structure of Ehsan et al. performs rather poorly compared to its i-vector counterpart. One reason is &lt;br /&gt;
that the simple back-end scoring derives the utterance-level representations (called d-vectors) by plain averaging;&lt;br /&gt;
another is that the vanilla DNN structure does not model much context or pattern learning. We therefore&lt;br /&gt;
proposed a CT-DNN model that learns stronger speaker features. The structure is shown below [1]:&lt;br /&gt;
&lt;br /&gt;
[[文件:Ctdnn-spk.png|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Recently, we found that an 'all-info' training is effective for feature learning. Looking back at the DNN and CT-DNN, the features &lt;br /&gt;
read from the last hidden layer are discriminative, but not 'all discriminative', because some of the discriminative information can also reside&lt;br /&gt;
in the last affine layer. A better strategy is to let the feature generation net (feature net) learn everything about the discrimination. &lt;br /&gt;
To achieve this, we discard the parametric classifier (the last affine layer) and use the simple cosine distance to conduct the&lt;br /&gt;
classification. An iterative training scheme implements this idea: after each epoch, the frame-level speaker&lt;br /&gt;
features are averaged to derive speaker vectors, which then replace the weights of the last affine layer. Training then&lt;br /&gt;
proceeds as usual. The new structure is as follows [4]:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[文件:fullinfo-spk.png|500px]]&lt;br /&gt;
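The iterative 'all-info' scheme above can be sketched as follows. This is a toy numpy sketch on random data: the dimensions, the single-layer feature net, and the learning rate are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
FRAME_DIM, FEAT_DIM, N_SPK, N_FRAMES = 20, 8, 3, 50

# Toy data: frames with speaker labels (all values hypothetical).
X = rng.standard_normal((N_SPK * N_FRAMES, FRAME_DIM))
y = np.repeat(np.arange(N_SPK), N_FRAMES)

# Feature net: here a single linear layer stands in for the deep feature net.
W = rng.standard_normal((FRAME_DIM, FEAT_DIM)) * 0.1

def features(frames):
    return frames @ W  # feature net forward pass (simplified)

for epoch in range(5):
    F = features(X)
    # After each epoch, average frame features per speaker to derive speaker
    # vectors, and use them (not a trained affine layer) as the class weights.
    spk_vecs = np.stack([F[y == s].mean(axis=0) for s in range(N_SPK)])
    spk_vecs /= np.linalg.norm(spk_vecs, axis=1, keepdims=True) + 1e-8
    # Softmax over cosine-like logits against the fixed speaker vectors;
    # only the feature net W receives gradient updates.
    logits = F @ spk_vecs.T
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad_logits = p.copy()
    grad_logits[np.arange(len(y)), y] -= 1.0         # d(cross-entropy)/d(logits)
    grad_W = X.T @ (grad_logits @ spk_vecs) / len(y) # backprop into the feature net
    W -= 0.5 * grad_W
```

Because the classifier weights are simply the averaged speaker vectors, all the discriminative information must be carried by the feature net itself, which is the point of the 'all-info' training.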
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Database design=&lt;br /&gt;
&lt;br /&gt;
To collect the data, we designed a mobile application and&lt;br /&gt;
distributed it to people who agreed to participate. The application asked the participants to utter 6 types of trivial events&lt;br /&gt;
in a random order, with each event occurring 10 times. The random order ensures a reasonable variance among the&lt;br /&gt;
recordings of each event. The recordings were sampled at 16 kHz with&lt;br /&gt;
16-bit precision.&lt;br /&gt;
&lt;br /&gt;
We first designed CSLT-TRIVIAL-I, which involves 6 types of trivial events, i.e., cough, laugh, 'hmm', 'tsk-tsk', 'ahem' and sniff.&lt;br /&gt;
&lt;br /&gt;
[[文件:Cslt-trivial-1.png|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A speech database of normal &amp;amp; disguised speech pairs, CSLT-DISGUISE-I, was also designed.&lt;br /&gt;
&lt;br /&gt;
[[文件:Cslt-disguise-1.png|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Data downloading==&lt;br /&gt;
&lt;br /&gt;
CSLT-TRIVIAL-I and CSLT-DISGUISE-I are free for research; a license is required. Contact Dr. Dong Wang (wangdong99@mails.tsinghua.edu.cn).&lt;br /&gt;
Download link: https://github.com/CSLT-THU/TRIVIAL-EVENTS-RECOGNITION&lt;br /&gt;
&lt;br /&gt;
=Human performance=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Human listeners performed poorly on CSLT-TRIVIAL-I, and also on CSLT-DISGUISE-I, where the detection error rate (DER) was 47.47%.&lt;br /&gt;
&lt;br /&gt;
[[文件:Human-trivial-1.png|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Machine performance=&lt;br /&gt;
&lt;br /&gt;
Machine performance on CSLT-TRIVIAL-I.&lt;br /&gt;
&lt;br /&gt;
[[文件:Machine-trivial-1.png|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Machine performance on CSLT-DISGUISE-I.&lt;br /&gt;
&lt;br /&gt;
[[文件:Machine-disguise-1.png|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Research directions=&lt;br /&gt;
* Speech perception.&lt;br /&gt;
* Forensic examination.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Reference=&lt;br /&gt;
&lt;br /&gt;
[1] Lantian Li, Yixiang Chen, Ying Shi, Zhiyuan Tang, and Dong Wang, “Deep speaker feature learning for text-independent speaker verification,” Interspeech 2017.&lt;br /&gt;
&lt;br /&gt;
[2] Lantian Li, Dong Wang, Yixiang Chen, Ying Shi, and Zhiyuan Tang, http://wangd.cslt.org/public/pdf/spkfact.pdf&lt;br /&gt;
&lt;br /&gt;
[3] Dong Wang, Lantian Li, Ying Shi, Yixiang Chen, and Zhiyuan Tang, &amp;quot;Deep Factorization for Speech Signal&amp;quot;, https://arxiv.org/abs/1706.01777&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Recording_instruction.pdf</id>
		<title>文件:Recording instruction.pdf</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Recording_instruction.pdf"/>
				<updated>2017-09-26T05:32:20Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:%E5%BD%95%E9%9F%B3%E6%8C%87%E5%AF%BC.pdf</id>
		<title>文件:录音指导.pdf</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:%E5%BD%95%E9%9F%B3%E6%8C%87%E5%AF%BC.pdf"/>
				<updated>2017-09-26T05:15:09Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-9-18</id>
		<title>ASR Status Report 2017-9-18</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-9-18"/>
				<updated>2017-09-18T05:58:57Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;9&amp;quot;|2017.9.4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|Jiayin Cai&lt;br /&gt;
||&lt;br /&gt;
* Absent&lt;br /&gt;
||&lt;br /&gt;
*&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
*Test and improve the iOS app for recording audio.&lt;br /&gt;
*Finish the experiment to test the machine error rate; the result is in my CVSS request [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=kangxf&amp;amp;step=view_request&amp;amp;cvssid=629 here].&lt;br /&gt;
|| &lt;br /&gt;
*Record the audio with Zhangmiao using the money from Wang.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Finish human test website&lt;br /&gt;
* Design recording app with Kangxf&lt;br /&gt;
* T-SNE analysis&lt;br /&gt;
|| &lt;br /&gt;
* Absent for school class&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* Implementation of node-pruning.&lt;br /&gt;
* comparison of connection-pruning and node-pruning, see [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=wangyanqing&amp;amp;step=view_request&amp;amp;cvssid=634 here]&lt;br /&gt;
||&lt;br /&gt;
* continue on relationship and comparison of connection-pruning and node-pruning.&lt;br /&gt;
* Implementation of long-term dropout and experiments based on it.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* group-based softmax finished [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=shiying&amp;amp;step=view_request&amp;amp;cvssid=627 here]&lt;br /&gt;
* multi-decoding for group-based softmax (in progress)&lt;br /&gt;
|| &lt;br /&gt;
* multi-decoding for group-based softmax&lt;br /&gt;
* PTN &lt;br /&gt;
* apply LID to group-based softmax&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* Absent&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* Go on speaker segmentation tasks, see [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=615]&lt;br /&gt;
** Make some smooth tricks (Silence limits [MDR] and window-based smooth [FAR]).&lt;br /&gt;
** R.T. test.&lt;br /&gt;
* Music / Noise detection, see [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=624]&lt;br /&gt;
||&lt;br /&gt;
* Package the code for speaker segmentation.&lt;br /&gt;
* Go on music / noise detection tasks.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* Partial theoretical study of mispronunciation detection.&lt;br /&gt;
* Toolbook writing.&lt;br /&gt;
||&lt;br /&gt;
* Experiments on phonetic LID.&lt;br /&gt;
* Experiments on mispronunciation detection&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
----------------------------------------------&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;9&amp;quot;|2017.9.4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|Jiayin Cai&lt;br /&gt;
||&lt;br /&gt;
*Got phonetic features from a stronger phonetic network&lt;br /&gt;
*Finished part of the experiments using the stronger phonetic features. &lt;br /&gt;
||&lt;br /&gt;
*Will be absent for school.&lt;br /&gt;
*Will nevertheless finish the remaining experiments.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* Improve the human test website: save the test recordings, reduce the positive samples&lt;br /&gt;
* Record and cut the audio, a total of 12 groups&lt;br /&gt;
|| &lt;br /&gt;
* Continue to record audio with Zhangmiao&lt;br /&gt;
* Continue to ask people to do the human test&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Perform human test&lt;br /&gt;
* Record some other people and do the experiments again&lt;br /&gt;
|| &lt;br /&gt;
* Continue to ask people to do the human test&lt;br /&gt;
* Recording (the goal is to record 400 to 500 people) [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/cc/录音说明.pdf here]&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* Absent&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* multi-decoding ASR model with more pdfs; performance is better than before but still not good enough&lt;br /&gt;
* add separate symbols to discriminate the Kazakh and Uyghur word sets&lt;br /&gt;
* group-based softmax (in progress)&lt;br /&gt;
|| &lt;br /&gt;
* finish group-based softmax and test the performance&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* Absent&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* Go on speaker segmentation tasks, see [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=615 here]&lt;br /&gt;
** Complete the phonetic-aware speaker segmentation.&lt;br /&gt;
*** Word-level boundaries from the ASR.&lt;br /&gt;
*** Word-level d-vector and clustering.&lt;br /&gt;
||&lt;br /&gt;
* Try some smooth tricks.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* Organized the code and doc of Parrot system[http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=tangzy&amp;amp;step=view_request&amp;amp;cvssid=635]&lt;br /&gt;
||&lt;br /&gt;
* Theoretical study of pronunciation detection&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:En-tsne.pdf</id>
		<title>文件:En-tsne.pdf</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:En-tsne.pdf"/>
				<updated>2017-09-16T12:37:33Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-9-11</id>
		<title>ASR Status Report 2017-9-11</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-9-11"/>
				<updated>2017-09-11T06:41:51Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;9&amp;quot;|2017.9.4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|Jiayin Cai&lt;br /&gt;
||&lt;br /&gt;
*Got phonetic features from a stronger phonetic network&lt;br /&gt;
*Finished part of the experiments using the stronger phonetic features. &lt;br /&gt;
||&lt;br /&gt;
*Will be absent for school.&lt;br /&gt;
*Will nevertheless finish the remaining experiments.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* Improve the human test website: save the test recordings, reduce the positive samples&lt;br /&gt;
* Record and cut the audio, a total of 12 groups&lt;br /&gt;
|| &lt;br /&gt;
* Record the remaining 440 groups of audio with Zhangmiao&lt;br /&gt;
* Continue to ask people to do the human test&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Perform human test&lt;br /&gt;
* Record some other people and do the experiments again&lt;br /&gt;
|| &lt;br /&gt;
* Continue to ask people to do the human test&lt;br /&gt;
* Recording (the goal is to record 400 to 500 people)&lt;br /&gt;
  [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/cc/录音说明.pdf recording instructions]&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* Absent&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* multi-decoding ASR model with more pdfs; performance is better than before but still not good enough&lt;br /&gt;
* add separate symbols to discriminate the Kazakh and Uyghur word sets&lt;br /&gt;
* group-based softmax (in progress)&lt;br /&gt;
|| &lt;br /&gt;
* finish group-based softmax and test the performance&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* Absent&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* Go on speaker segmentation tasks, see [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=615 here]&lt;br /&gt;
** Complete the phonetic-aware speaker segmentation.&lt;br /&gt;
*** Word-level boundaries from the ASR.&lt;br /&gt;
*** Word-level d-vector and clustering.&lt;br /&gt;
||&lt;br /&gt;
* Try some smooth tricks.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
----------------------------------------------&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;9&amp;quot;|2017.9.4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|Jiayin Cai&lt;br /&gt;
||&lt;br /&gt;
*Finished the phonetic i-vector experiment.&lt;br /&gt;
||&lt;br /&gt;
*Get BN features and train an i-vector LID system.&lt;br /&gt;
*Get phonetic features from a stronger phonetic network&lt;br /&gt;
*Combine PTN and the phonetic i-vector.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* cut and annotate audio: 21 speakers, a total of 1050 sentences&lt;br /&gt;
* Finish the new speaker recognition experiment using the two recordings.&lt;br /&gt;
|| &lt;br /&gt;
* improve the human test website&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Absent&lt;br /&gt;
|| &lt;br /&gt;
* Perform the human test on 21-style speech (adding disguise)&lt;br /&gt;
* Draw spectrograms and t-SNE plots to compare with the experiment results&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* Absent.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* multi-decoding ASR model&lt;br /&gt;
* multi-decoding with fake LID [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=shiying&amp;amp;step=view_request&amp;amp;cvssid=627 here]&lt;br /&gt;
* read the TTS code&lt;br /&gt;
|| &lt;br /&gt;
* employ group-based softmax to train the multi-decoding ASR model&lt;br /&gt;
* synthesize one 'real' speech sample&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* Absent.&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* Go on speaker segmentation tasks, see [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=615 here]&lt;br /&gt;
** Dimensionality reduction.&lt;br /&gt;
** Clustering.&lt;br /&gt;
** Visualization.&lt;br /&gt;
||&lt;br /&gt;
* Phonetic-aware speaker segmentation.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* more indicators for VV scoring system, see [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/a/a1/VV_scoring.pdf].&lt;br /&gt;
||&lt;br /&gt;
* more indicators, a demo with Shuai.&lt;br /&gt;
* toolbook writing.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-9-11</id>
		<title>ASR Status Report 2017-9-11</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-9-11"/>
				<updated>2017-09-11T06:40:49Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;9&amp;quot;|2017.9.4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|Jiayin Cai&lt;br /&gt;
||&lt;br /&gt;
*Got phonetic features from a stronger phonetic network&lt;br /&gt;
*Finished part of the experiments using the stronger phonetic features. &lt;br /&gt;
||&lt;br /&gt;
*Will be absent for school.&lt;br /&gt;
*Will nevertheless finish the remaining experiments.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* improve the human test website: save the test recordings, reduce the positive samples&lt;br /&gt;
* Record and cut the audio, a total of 12 groups&lt;br /&gt;
|| &lt;br /&gt;
* Record the remaining 440 groups of audio with Zhang Miao&lt;br /&gt;
* Continue to ask people to do human test&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Perform human test&lt;br /&gt;
* Record some other people and do the experiments again&lt;br /&gt;
|| &lt;br /&gt;
* Continue to ask people to do human test&lt;br /&gt;
* Recording (the goal is to record 400 to 500 people)&lt;br /&gt;
  [[http://cslt.riit.tsinghua.edu.cn/mediawiki/images/c/cc/录音说明.pdf]]&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* Absent&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* multi-decoding ASR model with more pdfs; performance is better than before but still not good enough&lt;br /&gt;
* add a separate symbol to discriminate the Kazakh and Uyghur word sets&lt;br /&gt;
* group-based softmax (in progress)&lt;br /&gt;
|| &lt;br /&gt;
* finish group-based softmax and test the performance&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* Absent&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* Continue the speaker segmentation tasks, see [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=615 here]&lt;br /&gt;
** Complete the phonetic-aware speaker segmentation.&lt;br /&gt;
** Word-level boundaries from the ASR.&lt;br /&gt;
** Word-level d-vector and clustering.&lt;br /&gt;
||&lt;br /&gt;
* Try some smooth tricks.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
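The word-level d-vector clustering in the table above can be sketched as follows. This is a minimal illustration with toy embeddings and standard agglomerative clustering; the arrays and parameters are hypothetical stand-ins, not the actual experiment code:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Toy stand-in for word-level d-vectors from two speakers; real
# embeddings would come from the ASR/segmentation pipeline.
rng = np.random.default_rng(0)
dvecs = np.vstack([rng.normal(0.0, 0.1, size=(10, 8)),
                   rng.normal(1.0, 0.1, size=(10, 8))])

# Agglomerative clustering with average linkage on Euclidean distance;
# cutting the dendrogram into 2 clusters separates the two speakers.
tree = linkage(dvecs, method="average", metric="euclidean")
labels = fcluster(tree, t=2, criterion="maxclust")
```

With well-separated toy speakers, the first ten and last ten words receive distinct cluster labels.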
&lt;br /&gt;
----------------------------------------------&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;9&amp;quot;|2017.9.4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|Jiayin Cai&lt;br /&gt;
||&lt;br /&gt;
*Finished the phonetic i-vector experiment.&lt;br /&gt;
||&lt;br /&gt;
*get BN feature and train i-vector LID.&lt;br /&gt;
*Get phonetic feat from a stronger phonetic network&lt;br /&gt;
*combine PTN and phonetic i-vector.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* Cut and mark audio: 21 speakers, a total of 1050 sentences&lt;br /&gt;
* Finish the new speaker recognition using the two recordings.&lt;br /&gt;
|| &lt;br /&gt;
* improve the human Test website&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Absent&lt;br /&gt;
|| &lt;br /&gt;
* Perform human test on 21-style speech (adding the disguise)&lt;br /&gt;
* Draw spectra and t-SNE plots and compare them with the experiment results&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* Absent.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* multi-decoding ASR model&lt;br /&gt;
* multi decoding with fake LID [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=shiying&amp;amp;step=view_request&amp;amp;cvssid=627 here]&lt;br /&gt;
* read code about TTS&lt;br /&gt;
|| &lt;br /&gt;
* employ group softmax to train the multi-decoding ASR model&lt;br /&gt;
* synthesize one 'real' speech sample&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* Absent.&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* Continue the speaker segmentation tasks, see [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=615 here]&lt;br /&gt;
** Dimensionality reduction.&lt;br /&gt;
** Clustering.&lt;br /&gt;
** Visualization.&lt;br /&gt;
||&lt;br /&gt;
* Phonetic-aware speaker segmentation.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* more indicators for VV scoring system, see [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/a/a1/VV_scoring.pdf].&lt;br /&gt;
||&lt;br /&gt;
* more indicators, a demo with Shuai.&lt;br /&gt;
* toolbook writing.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:%E5%BD%95%E9%9F%B3%E8%AF%B4%E6%98%8E.pdf</id>
		<title>文件:录音说明.pdf</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:%E5%BD%95%E9%9F%B3%E8%AF%B4%E6%98%8E.pdf"/>
				<updated>2017-09-11T06:38:38Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-9-11</id>
		<title>ASR Status Report 2017-9-11</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-9-11"/>
				<updated>2017-09-11T06:37:02Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;9&amp;quot;|2017.9.4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|Jiayin Cai&lt;br /&gt;
||&lt;br /&gt;
*Got phonetic feat from a stronger phonetic network&lt;br /&gt;
*Finished part of the experiment using stronger phonetic feature. &lt;br /&gt;
||&lt;br /&gt;
*Will be absent for school.&lt;br /&gt;
*But I will finish the remaining experiment.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* improve the human test website: save the test recordings, reduce the positive samples&lt;br /&gt;
* Record and cut the audio, a total of 12 groups&lt;br /&gt;
|| &lt;br /&gt;
* Record the remaining 440 groups of audio with Zhang Miao&lt;br /&gt;
* Continue to ask people to do human test&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Perform human test&lt;br /&gt;
* Record some other people and do the experiments again&lt;br /&gt;
|| &lt;br /&gt;
* Continue to ask people to do human test&lt;br /&gt;
* Recording (the goal is to record 400 to 500 people)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* multi-decoding ASR model with more pdfs done; performance is better than before but still not good enough&lt;br /&gt;
* add a separate symbol to discriminate Kazakh and Uyghur&lt;br /&gt;
* group-based softmax (in progress)&lt;br /&gt;
|| &lt;br /&gt;
* finish group-based softmax and test the performance&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
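The group-based softmax mentioned above (normalizing the output layer separately per language group in the multi-decoding model) can be illustrated with a minimal sketch. The function, the group layout, and the example logits are assumptions for illustration, not the actual Kaldi recipe:

```python
import numpy as np

def group_softmax(logits, groups):
    """Softmax normalized separately within each group of output units.

    logits: 1-D array of raw network outputs.
    groups: list of (start, end) index pairs partitioning the outputs,
            e.g. one group per language in a multilingual ASR model.
    """
    out = np.empty_like(logits, dtype=float)
    for start, end in groups:
        g = logits[start:end]
        e = np.exp(g - g.max())        # subtract the max for numerical stability
        out[start:end] = e / e.sum()   # probabilities sum to 1 inside each group
    return out

# Example: 5 outputs split into a 3-unit group and a 2-unit group.
probs = group_softmax(np.array([1.0, 2.0, 3.0, 0.5, 0.5]), [(0, 3), (3, 5)])
```

Each group then behaves as its own distribution, so the decoder for one language is not penalized by logits belonging to another.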
&lt;br /&gt;
----------------------------------------------&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;9&amp;quot;|2017.9.4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|Jiayin Cai&lt;br /&gt;
||&lt;br /&gt;
*Finished the phonetic i-vector experiment.&lt;br /&gt;
||&lt;br /&gt;
*get BN feature and train i-vector LID.&lt;br /&gt;
*Get phonetic feat from a stronger phonetic network&lt;br /&gt;
*combine PTN and phonetic i-vector.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* Cut and mark audio: 21 speakers, a total of 1050 sentences&lt;br /&gt;
* Finish the new speaker recognition using the two recordings.&lt;br /&gt;
|| &lt;br /&gt;
* improve the human Test website&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Absent&lt;br /&gt;
|| &lt;br /&gt;
* Perform human test on 21-style speech (adding the disguise)&lt;br /&gt;
* Draw spectra and t-SNE plots and compare them with the experiment results&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* Absent.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* multi-decoding ASR model&lt;br /&gt;
* multi decoding with fake LID [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=shiying&amp;amp;step=view_request&amp;amp;cvssid=627 here]&lt;br /&gt;
* read code about TTS&lt;br /&gt;
|| &lt;br /&gt;
* employ group softmax to train the multi-decoding ASR model&lt;br /&gt;
* synthesize one 'real' speech sample&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* Absent.&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* Continue the speaker segmentation tasks, see [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=615 here]&lt;br /&gt;
** Dimensionality reduction.&lt;br /&gt;
** Clustering.&lt;br /&gt;
** Visualization.&lt;br /&gt;
||&lt;br /&gt;
* Phonetic-aware speaker segmentation.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* more indicators for VV scoring system, see [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/a/a1/VV_scoring.pdf].&lt;br /&gt;
||&lt;br /&gt;
* more indicators, a demo with Shuai.&lt;br /&gt;
* toolbook writing.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:%E5%BD%95%E9%9F%B3%E8%AF%B4%E6%98%8E.docx</id>
		<title>文件:录音说明.docx</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:%E5%BD%95%E9%9F%B3%E8%AF%B4%E6%98%8E.docx"/>
				<updated>2017-09-11T06:35:20Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-9-11</id>
		<title>ASR Status Report 2017-9-11</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-9-11"/>
				<updated>2017-09-11T05:23:18Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;9&amp;quot;|2017.9.4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|Jiayin Cai&lt;br /&gt;
||&lt;br /&gt;
*Got phonetic feat from a stronger phonetic network&lt;br /&gt;
*Finished part of the experiment using stronger phonetic feature. &lt;br /&gt;
||&lt;br /&gt;
*Will be absent for school.&lt;br /&gt;
*But I will finish the remaining experiment.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* improve the human test website: save the test recordings, reduce the positive samples&lt;br /&gt;
* Record and cut the audio, a total of 12 groups&lt;br /&gt;
|| &lt;br /&gt;
* Record the remaining 440 groups of audio with Zhang Miao&lt;br /&gt;
* Continue to ask people to do human test&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Perform human test&lt;br /&gt;
* Record some other people and do the experiments again&lt;br /&gt;
|| &lt;br /&gt;
* Continue to ask people to do human test&lt;br /&gt;
* Recording (the goal is to record 400 to 500 people)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
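The spectrum drawing mentioned in these reports can be sketched with a simple framed FFT. This is a toy illustration only; the frame size, hop, and 440 Hz test tone are assumptions, not the project's actual analysis settings:

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram via a framed FFT with a Hann window."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # rfft keeps the non-negative frequency bins: frame_len // 2 + 1 of them.
    return np.abs(np.fft.rfft(frames, axis=1))

# One second of a 440 Hz tone at 16 kHz; with 62.5 Hz bins the energy
# peak sits near bin 7 (440 / 62.5 is about 7.04).
tone = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000.0)
spec = spectrogram(tone)
```

The resulting (frames x bins) matrix can be rendered with any image/heatmap plotting routine.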
&lt;br /&gt;
----------------------------------------------&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;9&amp;quot;|2017.9.4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|Jiayin Cai&lt;br /&gt;
||&lt;br /&gt;
*Finished the phonetic i-vector experiment.&lt;br /&gt;
||&lt;br /&gt;
*get BN feature and train i-vector LID.&lt;br /&gt;
*Get phonetic feat from a stronger phonetic network&lt;br /&gt;
*combine PTN and phonetic i-vector.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* Cut and mark audio: 21 speakers, a total of 1050 sentences&lt;br /&gt;
* Finish the new speaker recognition using the two recordings.&lt;br /&gt;
|| &lt;br /&gt;
* improve the human Test website&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Absent&lt;br /&gt;
|| &lt;br /&gt;
* Perform human test on 21-style speech (adding the disguise)&lt;br /&gt;
* Draw spectra and t-SNE plots and compare them with the experiment results&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* Absent.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* multi-decoding ASR model&lt;br /&gt;
* multi decoding with fake LID [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=shiying&amp;amp;step=view_request&amp;amp;cvssid=627 here]&lt;br /&gt;
* read code about TTS&lt;br /&gt;
|| &lt;br /&gt;
* employ group softmax to train the multi-decoding ASR model&lt;br /&gt;
* synthesize one 'real' speech sample&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* Absent.&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* Continue the speaker segmentation tasks, see [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=615 here]&lt;br /&gt;
** Dimensionality reduction.&lt;br /&gt;
** Clustering.&lt;br /&gt;
** Visualization.&lt;br /&gt;
||&lt;br /&gt;
* Phonetic-aware speaker segmentation.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* more indicators for VV scoring system, see [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/a/a1/VV_scoring.pdf].&lt;br /&gt;
||&lt;br /&gt;
* more indicators, a demo with Shuai.&lt;br /&gt;
* toolbook writing.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-9-11</id>
		<title>ASR Status Report 2017-9-11</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-9-11"/>
				<updated>2017-09-11T04:57:29Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;9&amp;quot;|2017.9.4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|Jiayin Cai&lt;br /&gt;
||&lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Perform human test&lt;br /&gt;
* Record some other people and do the experiments again&lt;br /&gt;
|| &lt;br /&gt;
* Continue to ask people to do human test&lt;br /&gt;
* Recording (the goal is to record 400 to 500 people)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
*&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
----------------------------------------------&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;9&amp;quot;|2017.9.4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|Jiayin Cai&lt;br /&gt;
||&lt;br /&gt;
*Finished the phonetic i-vector experiment.&lt;br /&gt;
||&lt;br /&gt;
*get BN feature and train i-vector LID.&lt;br /&gt;
*Get phonetic feat from a stronger phonetic network&lt;br /&gt;
*combine PTN and phonetic i-vector.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* Cut and mark audio: 21 speakers, a total of 1050 sentences&lt;br /&gt;
* Finish the new speaker recognition using the two recordings.&lt;br /&gt;
|| &lt;br /&gt;
* improve the human Test website&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Absent&lt;br /&gt;
|| &lt;br /&gt;
* Perform human test on 21-style speech (adding the disguise)&lt;br /&gt;
* Draw spectra and t-SNE plots and compare them with the experiment results&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* Absent.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* multi-decoding ASR model&lt;br /&gt;
* multi decoding with fake LID [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=shiying&amp;amp;step=view_request&amp;amp;cvssid=627 here]&lt;br /&gt;
* read code about TTS&lt;br /&gt;
|| &lt;br /&gt;
* employ group softmax to train the multi-decoding ASR model&lt;br /&gt;
* synthesize one 'real' speech sample&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* Absent.&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* Continue the speaker segmentation tasks, see [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=lilt&amp;amp;step=view_request&amp;amp;cvssid=615 here]&lt;br /&gt;
** Dimensionality reduction.&lt;br /&gt;
** Clustering.&lt;br /&gt;
** Visualization.&lt;br /&gt;
||&lt;br /&gt;
* Phonetic-aware speaker segmentation.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* more indicators for VV scoring system, see [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/a/a1/VV_scoring.pdf].&lt;br /&gt;
||&lt;br /&gt;
* more indicators, a demo with Shuai.&lt;br /&gt;
* toolbook writing.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Spectrum_of_five_trivial_events.pdf</id>
		<title>文件:Spectrum of five trivial events.pdf</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Spectrum_of_five_trivial_events.pdf"/>
				<updated>2017-09-05T03:36:25Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/Asr-progress_2017.08</id>
		<title>Asr-progress 2017.08</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/Asr-progress_2017.08"/>
				<updated>2017-09-04T05:36:53Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：/* Time Off Table */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
===Daily Report===&lt;br /&gt;
&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Date !! Person  !! start!! leave !! hours ||status (problems/solutions)&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.1&lt;br /&gt;
|Yanqing Wang||  11:00  || 19:30   ||  8.5h    ||  start to write TRP ( connection sparseness )&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||  12h    ||  Finish experiments of 12-style speech&lt;br /&gt;
|-&lt;br /&gt;
|Xiaofei Kang || 10:00 || 23:00 || 13h ||   discuss the recording plan and decide a preliminary plan.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||   run basic PTN system&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.2&lt;br /&gt;
|Yanqing Wang ||  11:00  || 19:00   ||  8h    ||  write TRP ( connection sparseness )&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  Finish experiments of 12-style speech&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||    Complete a part of the recording work: collecting six types of sound from 13 people.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||   run basic PTN system&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.3&lt;br /&gt;
|Yanqing Wang ||  11:00  || 17:00   ||  6h    ||  write TRP ( connection sparseness )&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  Analyse the experiment results&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||  read related paper and do a speaker recognition experiment.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||   change the chunk width and run PTN &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.4&lt;br /&gt;
|Yanqing Wang ||   11:00  || 19:00   ||  8h    ||  write TRP ( connection sparseness )&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  Design the new test website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||    Finish experiments of 12-style speech with Zhang Miao, a total of 5 experiments.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||   change the chunk width and run PTN &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.5&lt;br /&gt;
|Yanqing Wang ||   12:30  || 19:00   ||  6.5h    ||  write TRP ( connection sparseness )&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  10:00  ||  21:00  ||   11h   ||  Build up the website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||   change the chunk width and run PTN &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.6&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai ||    ||    ||    ||    &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.7&lt;br /&gt;
|Yanqing Wang ||    11:00  || 19:00   ||  8h    ||  write a shell script to prune a new network according to a pruned network&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  Record work&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||   Record the audio for the speaker recognition.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||   change sample_per_iter and run PTN &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.8&lt;br /&gt;
|Yanqing Wang || 11:00  || 19:00   ||  8h    ||  use yesterday's shell script but find that its efficiency is too low&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   || Write the code to cut silence in speech &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||    continue to record the audio.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 21:00 || 12h ||   change sample_per_iter and run PTN &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.9&lt;br /&gt;
|Yanqing Wang ||    11:00  || 19:00   ||  8h    ||  learn to write a Kaldi command &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||  12h    ||  replenish the test speech&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||   read the paper about i-vector and d-vector.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||  run basic i-vector system &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.10&lt;br /&gt;
|Yanqing Wang ||    11:00  || 19:00   ||  8h    ||  write a Kaldi command to prune a new network according to a pruned network&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||  12h    ||  replace the speech in website with new one&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||  Learn the new test website from Zhang Miao &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||  run basic i-vector system &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.11&lt;br /&gt;
|Yanqing Wang ||   11:00  || 19:00   ||  8h    ||  use yesterday's command to prune a new network&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  check the linear chapter&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||  Learn PHP, and test the new test website.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||  write the code to make phonetic feature&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.12&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  11:00  ||  20:00  ||  9h    ||  check the linear chapter&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 13:00 || 22:00 || 9h ||  change the structure of 6 layer TDNN&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.13&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 13:00 || 22:00 || 9h ||   run phonetic i-vector experiment for dev_all&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.14&lt;br /&gt;
|Yanqing Wang ||    12:00  || 18:00   ||  6h    ||  exp: 97% pct prune and a contrast exp: apply its structure to a new network&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||  12h    ||  cut new recorded speech &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 13:00 || 22:00 || 9h ||   run phonetic i-vector experiment for dev_1s&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.15&lt;br /&gt;
|Yanqing Wang ||    12:00  || 18:00   ||  6h    ||  exp: 97% pct prune and a contrast exp: apply its structure to a new network&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   || 9:00   ||  21:00  ||   12h   ||  run the experiments on recorded speech&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 13:00 || 22:00 || 9h ||   write code to deal with trails.trl for test data dev_3s&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.16&lt;br /&gt;
|Yanqing Wang ||    12:00  || 20:00   ||  8h    ||  summarize the 2 exp I did yesterday&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  design the 20-style human test website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 13:00 || 22:00 || 9h ||   run phonetic i-vector experiment for dev_3s&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.17&lt;br /&gt;
|Yanqing Wang ||    12:00  || 20:00   ||  8h    ||  explore the distribution of nonlin6's output&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  write the website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||   run phonetic i-vector experiment for dev_3s&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.18&lt;br /&gt;
|Yanqing Wang ||   12:00  || 20:00   ||  8h    ||  explore the distribution of the nonlin6's output&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  || 21:00   ||  12h    ||  write the website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||   prepare the PPT for the report&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.19&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  10:00  || 20:00   ||  10h    ||  finish the website and update the results&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 10:00 || 21:00 || 11h ||   sort out the data for all the experiments; work on the PPT for the report&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.20&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 13:00 || 22:00 || 9h || work on the PPT and draw diagrams for the report&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.21&lt;br /&gt;
|Yanqing Wang ||    12:00  || 20:00   ||  8h    ||  run experiments: randomly prune a freshly initialized network and train it (group 1)&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:30 || 23:00 || 12.5h  || Improve the test website to judge before committing.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h || work on the PPT and report my experiments to my group&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.22&lt;br /&gt;
|Yanqing Wang ||    11:30  || 18:00   ||  6.5h    ||  run experiments: randomly prune a freshly initialized network and train it (groups 2-3)&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 9:30 || 21:30 || 12h  ||  Recording new audios for the speaker recognition.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai  ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.23&lt;br /&gt;
|Yanqing Wang ||   11:30  || 18:00   ||  6.5h    ||  run experiments: randomly prune a freshly initialized network and train it (groups 4-5)&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 9:30 || 21:30 || 12h  ||  continue recording new audio, 38 people in total.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai  ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.24&lt;br /&gt;
|Yanqing Wang ||   11:30  || 18:00   ||  6.5h    || write awk and shell scripts and a kaldi command in preparation for the node-sparseness task.&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 9:30 || 21:30 || 12h  ||  Organize the audio and run a sample test; read 4 papers from Lantian&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai  ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.25&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 9:30 || 21:30 || 12h  ||  run a speaker recognition experiment with the new audio.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai  ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.26&lt;br /&gt;
|Yanqing Wang ||  8:30  || 15:00   ||  6.5h    ||  survey node-sparseness, summarize this week's experiments, and request a short leave&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai  ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.27&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||   ||   ||    || &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai  ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.28&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 9:30 || 21:30 || 12h  ||  Organize the new audio into data sets and identify all 21 people who recorded twice.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai  ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.29&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 9:30 || 21:30 || 12h  ||   cut the audio of 21 speakers, 1050 sentences in total.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai  ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.30&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:30 || 23:30  ||  13h  || mark the segments of 1050 audio clips. &lt;br /&gt;
|-&lt;br /&gt;
| Jiayin Cai || 10:30 || 19:30  ||  9h  || run the i-vector experiment with delta_order=3 &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.31&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:30 || 23:30  ||  13h  ||   finish the speaker recognition experiment and get a test result.&lt;br /&gt;
|-&lt;br /&gt;
| Jiayin Cai || 10:30 || 19:30  ||  9h  || run the i-vector experiment with delta_order=3 &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Time Off Table===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Name       !! Days off&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang|| &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen||  &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang||  &lt;br /&gt;
|-&lt;br /&gt;
|Xiaofei Kang ||&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai ||&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/Asr-progress_2017.08</id>
		<title>Asr-progress 2017.08</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/Asr-progress_2017.08"/>
				<updated>2017-09-04T05:36:25Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
===Daily Report===&lt;br /&gt;
&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Date !! Person  !! start!! leave !! hours ||status (problems/solutions)&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.1&lt;br /&gt;
|Yanqing Wang||  11:00  || 19:30   ||  8.5h    ||  start to write TRP ( connection sparseness )&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||  12h    ||  Finish experiments of 12-style speech&lt;br /&gt;
|-&lt;br /&gt;
|Xiaofei Kang || 10:00 || 23:00 || 13h ||   discuss the recording plan and decide a preliminary plan.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||   run basic PTN system&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.2&lt;br /&gt;
|Yanqing Wang ||  11:00  || 19:00   ||  8h    ||  write TRP ( connection sparseness )&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  Finish experiments of 12-style speech&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||    Complete a part of the recording work: collecting six types of sound from 13 people.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||   run basic PTN system&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.3&lt;br /&gt;
|Yanqing Wang ||  11:00  || 17:00   ||  6h    ||  write TRP ( connection sparseness )&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  Analyse the experiment results&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||  read related papers and run a speaker recognition experiment.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||   change the chunk width and run PTN &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.4&lt;br /&gt;
|Yanqing Wang ||   11:00  || 19:00   ||  8h    ||  write TRP ( connection sparseness )&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  Design the new test website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||    Finish experiments of 12-style speech with Zhang Miao, 5 experiments in total.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||   change the chunk width and run PTN &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.5&lt;br /&gt;
|Yanqing Wang ||   12:30  || 19:00   ||  6.5h    ||  write TRP ( connection sparseness )&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  10:00  ||  21:00  ||   11h   ||  Build up the website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||   change the chunk width and run PTN &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.6&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai ||    ||    ||    ||    &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.7&lt;br /&gt;
|Yanqing Wang ||    11:00  || 19:00   ||  8h    ||  write a shell script to prune a new network according to a pruned network&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  Record work&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||   Record audio for speaker recognition.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||   change sample_per_iter and run PTN &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.8&lt;br /&gt;
|Yanqing Wang || 11:00  || 19:00   ||  8h    ||  use yesterday's shell script but find that its efficiency is too low&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   || Write the code to cut silence in speech &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||    continue to record the audio.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 21:00 || 12h ||   change sample_per_iter and run PTN &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.9&lt;br /&gt;
|Yanqing Wang ||    11:00  || 19:00   ||  8h    ||  learn to write a kaldi-command &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||  12h    ||  replenish the test speech&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||   read the paper about i-vector and d-vector.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||  run basic i-vector system &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.10&lt;br /&gt;
|Yanqing Wang ||    11:00  || 19:00   ||  8h    ||  write a kaldi command to prune a new network according to a pruned network&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||  12h    ||  replace the speech in website with new one&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||  Learn the new test website from Zhang Miao &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||  run basic i-vector system &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.11&lt;br /&gt;
|Yanqing Wang ||   11:00  || 19:00   ||  8h    ||  use yesterday's command to prune a new network&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  check the linear chapter&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||  Learn PHP, and test the new test website.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||  write the code to make phonetic feature&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.12&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  11:00  ||  20:00  ||  9h    ||  check the linear chapter&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 13:00 || 22:00 || 9h ||  change the structure of 6 layer TDNN&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.13&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 13:00 || 22:00 || 9h ||   run phonetic i-vector experiment for dev_all&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.14&lt;br /&gt;
|Yanqing Wang ||    12:00  || 18:00   ||  6h    ||  experiment: prune 97% of weights, plus a contrast experiment: apply its structure to a new network&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||  12h    ||  cut newly recorded speech &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 13:00 || 22:00 || 9h ||   run phonetic i-vector experiment for dev_1s&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.15&lt;br /&gt;
|Yanqing Wang ||    12:00  || 18:00   ||  6h    ||  experiment: prune 97% of weights, plus a contrast experiment: apply its structure to a new network&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   || 9:00   ||  21:00  ||   12h   ||  run the experiments on recorded speech&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 13:00 || 22:00 || 9h ||   write code to deal with trails.trl for test data dev_3s&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.16&lt;br /&gt;
|Yanqing Wang ||    12:00  || 20:00   ||  8h    ||  summarize the 2 experiments I ran yesterday&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  design the 20-style human test website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 13:00 || 22:00 || 9h ||   run phonetic i-vector experiment for dev_3s&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.17&lt;br /&gt;
|Yanqing Wang ||    12:00  || 20:00   ||  8h    ||  explore the distribution of the nonlin6's output&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  write the website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||   run phonetic i-vector experiment for dev_3s&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.18&lt;br /&gt;
|Yanqing Wang ||   12:00  || 20:00   ||  8h    ||  explore the distribution of the nonlin6's output&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  || 21:00   ||  12h    ||  write the website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||   prepare the PPT for the report&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.19&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  10:00  || 20:00   ||  10h    ||  finish the website and update the results&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 10:00 || 21:00 || 11h ||   sort out the data for all the experiments; work on the PPT for the report&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.20&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 13:00 || 22:00 || 9h || work on the PPT and draw diagrams for the report&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.21&lt;br /&gt;
|Yanqing Wang ||    12:00  || 20:00   ||  8h    ||  run experiments: randomly prune a freshly initialized network and train it (group 1)&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:30 || 23:00 || 12.5h  || Improve the test website to judge before committing.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h || work on the PPT and report my experiments to my group&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.22&lt;br /&gt;
|Yanqing Wang ||    11:30  || 18:00   ||  6.5h    ||  run experiments: randomly prune a freshly initialized network and train it (groups 2-3)&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 9:30 || 21:30 || 12h  ||  Recording new audios for the speaker recognition.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai  ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.23&lt;br /&gt;
|Yanqing Wang ||   11:30  || 18:00   ||  6.5h    ||  run experiments: randomly prune a freshly initialized network and train it (groups 4-5)&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 9:30 || 21:30 || 12h  ||  continue recording new audio, 38 people in total.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai  ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.24&lt;br /&gt;
|Yanqing Wang ||   11:30  || 18:00   ||  6.5h    || write awk and shell scripts and a kaldi command in preparation for the node-sparseness task.&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 9:30 || 21:30 || 12h  ||  Organize the audio and run a sample test; read 4 papers from Lantian&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai  ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.25&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 9:30 || 21:30 || 12h  ||  run a speaker recognition experiment with the new audio.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai  ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.26&lt;br /&gt;
|Yanqing Wang ||  8:30  || 15:00   ||  6.5h    ||  survey node-sparseness, summarize this week's experiments, and request a short leave&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai  ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.27&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||   ||   ||    || &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai  ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.28&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 9:30 || 21:30 || 12h  ||  Organize the new audio into data sets and identify all 21 people who recorded twice.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai  ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.29&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 9:30 || 21:30 || 12h  ||   cut the audio of 21 speakers, 1050 sentences in total.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai  ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.30&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:30 || 23:30  ||  13h  || mark the segments of 1050 audio clips. &lt;br /&gt;
|-&lt;br /&gt;
| Jiayin Cai || 10:30 || 19:30  ||  9h  || run the i-vector experiment with delta_order=3 &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.31&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:30 || 23:30  ||  13h  ||   finish the speaker recognition experiment and get a test result.&lt;br /&gt;
|-&lt;br /&gt;
| Jiayin Cai || 10:30 || 19:30  ||  9h  || run the i-vector experiment with delta_order=3 &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Time Off Table===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Name       !! Days off&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang|| &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen||  &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang||  &lt;br /&gt;
|-&lt;br /&gt;
|Xiaofei Kang ||&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/Asr-progress_2017.08</id>
		<title>Asr-progress 2017.08</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/Asr-progress_2017.08"/>
				<updated>2017-09-04T05:34:27Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：/* Daily Report */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
===Daily Report===&lt;br /&gt;
&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Date !! Person  !! start!! leave !! hours ||status (problems/solutions)&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.1&lt;br /&gt;
|Yanqing Wang||  11:00  || 19:30   ||  8.5h    ||  start to write TRP ( connection sparseness )&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||  12h    ||  Finish experiments of 12-style speech&lt;br /&gt;
|-&lt;br /&gt;
|Xiaofei Kang || 10:00 || 23:00 || 13h ||   discuss the recording plan and decide a preliminary plan.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||   run basic PTN system&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.2&lt;br /&gt;
|Yanqing Wang ||  11:00  || 19:00   ||  8h    ||  write TRP ( connection sparseness )&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  Finish experiments of 12-style speech&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||    Complete a part of the recording work: collecting six types of sound from 13 people.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||   run basic PTN system&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.3&lt;br /&gt;
|Yanqing Wang ||  11:00  || 17:00   ||  6h    ||  write TRP ( connection sparseness )&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  Analyse the experiment results&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||  read related papers and run a speaker recognition experiment.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||   change the chunk width and run PTN &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.4&lt;br /&gt;
|Yanqing Wang ||   11:00  || 19:00   ||  8h    ||  write TRP ( connection sparseness )&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  Design the new test website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||    Finish experiments of 12-style speech with Zhang Miao, 5 experiments in total.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||   change the chunk width and run PTN &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.5&lt;br /&gt;
|Yanqing Wang ||   12:30  || 19:00   ||  6.5h    ||  write TRP ( connection sparseness )&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  10:00  ||  21:00  ||   11h   ||  Build up the website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||   change the chunk width and run PTN &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.6&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai ||    ||    ||    ||    &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.7&lt;br /&gt;
|Yanqing Wang ||    11:00  || 19:00   ||  8h    ||  write a shell script to prune a new network according to a pruned network&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  Record work&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||   Record audio for speaker recognition.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||   change sample_per_iter and run PTN &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.8&lt;br /&gt;
|Yanqing Wang || 11:00  || 19:00   ||  8h    ||  use yesterday's shell script but find that its efficiency is too low&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   || Write the code to cut silence in speech &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||    continue to record the audio.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 21:00 || 12h ||   change sample_per_iter and run PTN &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.9&lt;br /&gt;
|Yanqing Wang ||    11:00  || 19:00   ||  8h    ||  learn to write a kaldi-command &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||  12h    ||  replenish the test speech&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||   read the paper about i-vector and d-vector.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||  run basic i-vector system &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;5&amp;quot;|2017.8.10&lt;br /&gt;
|Yanqing Wang ||    11:00  || 19:00   ||  8h    ||  write a kaldi command to prune a new network according to a pruned network&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||  12h    ||  replace the speech in website with new one&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||  Learn the new test website from zhangmiao &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||  run basic i-vector system &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.11&lt;br /&gt;
|Yanqing Wang ||   11:00  || 19:00   ||  8h    ||  use yesterday's command to prune a new network&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  check the linear chapter&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||  Learn PHP, and test the new test website.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||  write the code to make phonetic feature&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.12&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  11:00  ||  20:00  ||  9h    ||  check the linear chapter&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 13:00 || 22:00 || 9h ||  change the structure of 6 layer TDNN&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.13&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 13:00 || 22:00 || 9h ||   run phonetic i-vector experiment for dev_all&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.14&lt;br /&gt;
|Yanqing Wang ||    12:00  || 18:00   ||  6h    ||  exp: 97% prune and a contrast exp: apply its structure to a new network&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||  12h    ||  cut new recorded speech &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 13:00 || 22:00 || 9h ||   run phonetic i-vector experiment for dev_1s&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.15&lt;br /&gt;
|Yanqing Wang ||    12:00  || 18:00   ||  6h    ||  exp: 97% prune and a contrast exp: apply its structure to a new network&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   || 9:00   ||  21:00  ||   12h   ||  run the experiments on recorded speech&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 13:00 || 22:00 || 9h ||   write code to deal with trails.trl for test data dev_3s&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.16&lt;br /&gt;
|Yanqing Wang ||    12:00  || 20:00   ||  8h    ||  summarize the 2 exps I did yesterday&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  design the 20-style human test website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 13:00 || 22:00 || 9h ||   run phonetic i-vector experiment for dev_3s&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.17&lt;br /&gt;
|Yanqing Wang ||    12:00  || 20:00   ||  8h    ||  explore the distribution of nonlin6's output&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  write the website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||   run phonetic i-vector experiment for dev_3s&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.18&lt;br /&gt;
|Yanqing Wang ||   12:00  || 20:00   ||  8h    ||  explore the distribution of nonlin6's output&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  || 21:00   ||  12h    ||  write the website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h ||   prepare the ppt for report&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.19&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  10:00  || 20:00   ||  10h    ||  finish the website and update the results&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 10:00 || 21:00 || 11h ||   sort out the data for all the experiments, deal with the ppt for the report&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.20&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 13:00 || 22:00 || 9h || deal with ppt for report, draw diagrams for report&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.21&lt;br /&gt;
|Yanqing Wang ||    12:00  || 20:00   ||  8h    ||  do exps: randomly prune a just-randomly-initialized network, and train it (group 1)&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:30 || 23:00 || 12.5h  || Improve the test website to judge before committing.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai || 9:00 || 22:00 || 13h || deal with ppt for report, report my experiment for my group&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.22&lt;br /&gt;
|Yanqing Wang ||    11:30  || 18:00   ||  6.5h    ||  do exps: randomly prune a just-randomly-initialized network, and train it (groups 2-3)&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 9:30 || 21:30 || 12h  ||  Record new audio for speaker recognition.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai  ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.23&lt;br /&gt;
|Yanqing Wang ||   11:30  || 18:00   ||  6.5h    ||  do exps: randomly prune a just-randomly-initialized network, and train it (groups 4-5)&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang|| 9:30 || 21:30 || 12h  ||  continue to record new audio, a total of 38 people.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai  ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.24&lt;br /&gt;
|Yanqing Wang ||   11:30  || 18:00   ||  6.5h    || write awk and shell scripts and a kaldi-command in preparation for the node-sparseness task.&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 9:30 || 21:30 || 12h  ||  Organize the recordings and do a sample test, and study 4 papers from Lantian&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai  ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.25&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 9:30 || 21:30 || 12h  ||  do speaker recognition with the new recordings.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai  ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.26&lt;br /&gt;
|Yanqing Wang ||  8:30  || 15:00   ||  6.5h    ||  survey node-sparseness, summarize this week's exps, and ask for short-time leave&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai  ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.27&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||   ||   ||    || &lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai  ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.28&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 9:30 || 21:30 || 12h  ||  Organize the new recordings and data sets, and find all 21 people who recorded twice.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai  ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.29&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 9:30 || 21:30 || 12h  ||   cut the audio of 21 speakers, a total of 1050 sentences.&lt;br /&gt;
|-&lt;br /&gt;
|Jiayin Cai  ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.30&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:30 || 23:30  ||  13h  || mark the segments of the 1050 audio clips. &lt;br /&gt;
|-&lt;br /&gt;
| Jiayin Cai || 10:30 || 19:30  ||  9h  || do the experiment for i-vector with delta_order =3 &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.31&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:30 || 23:30  ||  13h  ||   finish the speaker recognition experiment, and get a test result.&lt;br /&gt;
|-&lt;br /&gt;
| Jiayin Cai || 10:30 || 19:30  ||  9h  || do the experiment for i-vector with delta_order =3 &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Time Off Table===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Name       !! Days off&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang|| &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen||  &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang||  &lt;br /&gt;
|-&lt;br /&gt;
|Xiaofei Kang ||&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-9-4</id>
		<title>ASR Status Report 2017-9-4</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-9-4"/>
				<updated>2017-09-04T01:30:08Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;9&amp;quot;|2017.9.4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|Jiayin Cai&lt;br /&gt;
||&lt;br /&gt;
*Finished the phonetic i-vector experiment.&lt;br /&gt;
||&lt;br /&gt;
*Get BN features and train an i-vector LID system.&lt;br /&gt;
*Get phonetic features from a stronger phonetic network&lt;br /&gt;
*Combine PTN and phonetic i-vector.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Absent&lt;br /&gt;
|| &lt;br /&gt;
* Perform human test on 21-style speech (with disguise added)&lt;br /&gt;
* Draw spectrograms and t-SNE plots to compare with experiment results&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* Absent.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* multi-decoding ASR model&lt;br /&gt;
* multi-decoding with fake LID [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=shiying&amp;amp;step=view_request&amp;amp;cvssid=627 here]&lt;br /&gt;
* read code about TTS&lt;br /&gt;
|| &lt;br /&gt;
* employ group softmax to train the multi-decoding ASR model&lt;br /&gt;
* synthesize one 'real' speech sample&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* Absent.&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* more indicators for VV scoring system, see [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/a/a1/VV_scoring.pdf].&lt;br /&gt;
||&lt;br /&gt;
* more indicators, a demo with Shuai.&lt;br /&gt;
* toolbook writing.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2017.8.21&lt;br /&gt;
&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* Record new audio from 38 people, located in /work7/tanghui/kangxf/workspaces/speaker/wavdata/V2.0 &lt;br /&gt;
* Improve the test website to judge before committing&lt;br /&gt;
|| &lt;br /&gt;
* Test the new recordings.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* pruning the connections and refining, [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=tangzy&amp;amp;step=view_request&amp;amp;cvssid=626 results]&lt;br /&gt;
||&lt;br /&gt;
* Absent. &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* check toolkit code&lt;br /&gt;
* multilingual baseline system&lt;br /&gt;
|| &lt;br /&gt;
* train a language ID model&lt;br /&gt;
* use LID to do multi-decoding&lt;br /&gt;
* some experiments for Zhiyong Zhang about TTS&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* Attend IS2017.&lt;br /&gt;
||&lt;br /&gt;
* Go on speaker segmentation tasks.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* several indicators for VV scoring system, see [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/a/a1/VV_scoring.pdf].&lt;br /&gt;
||&lt;br /&gt;
* more indicators, a demo with Shuai.&lt;br /&gt;
* toolbook writing.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-9-4</id>
		<title>ASR Status Report 2017-9-4</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-9-4"/>
				<updated>2017-09-04T01:27:25Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;9&amp;quot;|2017.9.4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|Jiayin Cai&lt;br /&gt;
||&lt;br /&gt;
*Finished the phonetic i-vector experiment.&lt;br /&gt;
||&lt;br /&gt;
*combine PTN and phonetic i-vector.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Absent&lt;br /&gt;
|| &lt;br /&gt;
* Perform human test on 21-style speech (with disguise added)&lt;br /&gt;
* Draw t-SNE plots to compare with experiment results&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* Absent.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* multi-decoding ASR model&lt;br /&gt;
* multi-decoding with fake LID [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=shiying&amp;amp;step=view_request&amp;amp;cvssid=627 here]&lt;br /&gt;
* read code about TTS&lt;br /&gt;
|| &lt;br /&gt;
* employ group softmax to train the multi-decoding ASR model&lt;br /&gt;
* synthesize one 'real' speech sample&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* Absent.&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* more indicators for VV scoring system, see [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/a/a1/VV_scoring.pdf].&lt;br /&gt;
||&lt;br /&gt;
* more indicators, a demo with Shuai.&lt;br /&gt;
* toolbook writing.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2017.8.21&lt;br /&gt;
&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* Record new audio from 38 people, located in /work7/tanghui/kangxf/workspaces/speaker/wavdata/V2.0 &lt;br /&gt;
* Improve the test website to judge before committing&lt;br /&gt;
|| &lt;br /&gt;
* Test the new recordings.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* pruning the connections and refining, [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=tangzy&amp;amp;step=view_request&amp;amp;cvssid=626 results]&lt;br /&gt;
||&lt;br /&gt;
* Absent. &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* check toolkit code&lt;br /&gt;
* multilingual baseline system&lt;br /&gt;
|| &lt;br /&gt;
* train a language ID model&lt;br /&gt;
* use LID to do multi-decoding&lt;br /&gt;
* some experiments for Zhiyong Zhang about TTS&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* Attend IS2017.&lt;br /&gt;
||&lt;br /&gt;
* Go on speaker segmentation tasks.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* several indicators for VV scoring system, see [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/a/a1/VV_scoring.pdf].&lt;br /&gt;
||&lt;br /&gt;
* more indicators, a demo with Shuai.&lt;br /&gt;
* toolbook writing.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-9-4</id>
		<title>ASR Status Report 2017-9-4</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-9-4"/>
				<updated>2017-09-04T01:19:35Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;9&amp;quot;|2017.9.4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|Jiayin Cai&lt;br /&gt;
||&lt;br /&gt;
*&lt;br /&gt;
||&lt;br /&gt;
*&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Absent&lt;br /&gt;
|| &lt;br /&gt;
* Perform human test on 21-style speech (with disguise added)&lt;br /&gt;
* Draw t-SNE plots to compare with experiment results&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* Absent.&lt;br /&gt;
||&lt;br /&gt;
*  &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* Absent.&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* more indicators for VV scoring system, see [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/a/a1/VV_scoring.pdf].&lt;br /&gt;
||&lt;br /&gt;
* more indicators, a demo with Shuai.&lt;br /&gt;
* toolbook writing.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2017.8.21&lt;br /&gt;
&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* Record new audio from 38 people, located in /work7/tanghui/kangxf/workspaces/speaker/wavdata/V2.0 &lt;br /&gt;
* Improve the test website to judge before committing&lt;br /&gt;
|| &lt;br /&gt;
* Test the new recordings.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* pruning the connections and refining, [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=tangzy&amp;amp;step=view_request&amp;amp;cvssid=626 results]&lt;br /&gt;
||&lt;br /&gt;
* Absent. &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* check toolkit code&lt;br /&gt;
* multilingual baseline system&lt;br /&gt;
|| &lt;br /&gt;
* train a language ID model&lt;br /&gt;
* use LID to do multi-decoding&lt;br /&gt;
* some experiments for Zhiyong Zhang about TTS&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* Attend IS2017.&lt;br /&gt;
||&lt;br /&gt;
* Go on speaker segmentation tasks.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* several indicators for VV scoring system, see [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/a/a1/VV_scoring.pdf].&lt;br /&gt;
||&lt;br /&gt;
* more indicators, a demo with Shuai.&lt;br /&gt;
* toolbook writing.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/Asr-progress_2017.08</id>
		<title>Asr-progress 2017.08</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/Asr-progress_2017.08"/>
				<updated>2017-09-01T08:41:40Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：/* Daily Report */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
===Daily Report===&lt;br /&gt;
&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Date !! Person  !! start!! leave !! hours ||status (problems/solutions)&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.1&lt;br /&gt;
|Yanqing Wang||  11:00  || 19:30   ||  8.5h    ||  start to write TRP ( connection sparseness )&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||  12h    ||  Finish experiments of 12-style speech&lt;br /&gt;
|-&lt;br /&gt;
|Xiaofei Kang || 10:00 || 23:00 || 13h ||   discuss the recording work and decide on a preliminary plan.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.2&lt;br /&gt;
|Yanqing Wang ||  11:00  || 19:00   ||  8h    ||  write TRP ( connection sparseness )&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  Finish experiments of 12-style speech&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||    Complete part of the recording work: collect six types of sound from 13 people.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.3&lt;br /&gt;
|Yanqing Wang ||  11:00  || 17:00   ||  6h    ||  write TRP ( connection sparseness )&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  Analyse the experiment results&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||  read related papers and do a speaker recognition experiment.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.4&lt;br /&gt;
|Yanqing Wang ||   11:00  || 19:00   ||  8h    ||  write TRP ( connection sparseness )&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  Design the new test website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||    Finish experiments of 12-style speech with Zhangmiao, a total of 5 experiments.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.5&lt;br /&gt;
|Yanqing Wang ||   12:30  || 19:00   ||  6.5h    ||  write TRP ( connection sparseness )&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  10:00  ||  21:00  ||   11h   ||  Build up the website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.6&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.7&lt;br /&gt;
|Yanqing Wang ||    11:00  || 19:00   ||  8h    ||  write a shell script to prune a new network according to a pruned network&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  Record work&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||   Record audio for speaker recognition.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.8&lt;br /&gt;
|Yanqing Wang || 11:00  || 19:00   ||  8h    ||  use yesterday's shell script but find that its efficiency is too low&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   || Write the code to cut silence in speech &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||    continue to record the audio.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.9&lt;br /&gt;
|Yanqing Wang ||    11:00  || 19:00   ||  8h    ||  learn to write a kaldi-command &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||  12h    ||  replenish the test speech&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||   read the paper about i-vector and d-vector.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.10&lt;br /&gt;
|Yanqing Wang ||    11:00  || 19:00   ||  8h    ||  write a kaldi-command to  prune a new network according to a pruned network&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||  12h    ||  replace the speech in website with new one&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||  Learn the new test website from zhangmiao &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.11&lt;br /&gt;
|Yanqing Wang ||   11:00  || 19:00   ||  8h    ||  use yesterday's command to prune a new network&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  check the linear chapter&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:00 || 23:00 || 13h ||  Learn PHP, and test the new test website.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.12&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  11:00  ||  20:00  ||  9h    ||  check the linear chapter&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.13&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.14&lt;br /&gt;
|Yanqing Wang ||    12:00  || 18:00   ||  6h    ||  exp: 97% pruning, and a contrast exp: apply its structure to a new network&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||  12h    ||  cut new recorded speech &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.15&lt;br /&gt;
|Yanqing Wang ||    12:00  || 18:00   ||  6h    ||  exp: 97% pruning, and a contrast exp: apply its structure to a new network&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   || 9:00   ||  21:00  ||   12h   ||  run the experiments on recorded speech&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.16&lt;br /&gt;
|Yanqing Wang ||    12:00  || 20:00   ||  8h    ||  summarize the 2 exps I did yesterday&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  design the 20-style human test website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.17&lt;br /&gt;
|Yanqing Wang ||    12:00  || 20:00   ||  8h    ||  explore the distribution of the nonlin6's output&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  21:00  ||   12h   ||  write the website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.18&lt;br /&gt;
|Yanqing Wang ||   12:00  || 20:00   ||  8h    ||  explore the distribution of the nonlin6's output&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  || 21:00   ||  12h    ||  write the website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.19&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  10:00  || 20:00   ||  10h    ||  finish the website and update the results&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.20&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.21&lt;br /&gt;
|Yanqing Wang ||    12:00  || 20:00   ||  8h    ||  do exps: randomly prune a just-randomly-initialized network, and train it (group 1)&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:30 || 23:00 || 12.5h  || Improve the test website to judge before committing.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.22&lt;br /&gt;
|Yanqing Wang ||    11:30  || 18:00   ||  6.5h    ||  do exps: randomly prune a just-randomly-initialized network, and train it (groups 2-3)&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 9:30 || 21:30 || 12h  ||  Record new audios for speaker recognition.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.23&lt;br /&gt;
|Yanqing Wang ||   11:30  || 18:00   ||  6.5h    ||  do exps: randomly prune a just-randomly-initialized network, and train it (groups 4-5)&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 9:30 || 21:30 || 12h  ||  continue to record new audios, 38 people in total.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.24&lt;br /&gt;
|Yanqing Wang ||   11:30  || 18:00   ||  6.5h    || write awk and shell scripts and a kaldi-command in preparation for the node-sparseness task.&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 9:30 || 21:30 || 12h  ||  Organize the audio and do a sample test, and read 4 papers from Lantian&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.25&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 9:30 || 21:30 || 12h  ||  do a speaker recognition with the new audios.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.26&lt;br /&gt;
|Yanqing Wang ||  8:30  || 15:00   ||  6.5h    ||  survey on node-sparseness, summarize the exps of this week, and ask for short-time leave&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.27&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||   ||   ||    || &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.28&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 9:30 || 21:30 || 12h  ||  organize the new audio and data sets, and find all 21 people who recorded twice.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.29&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 9:30 || 21:30 || 12h  ||   cut the audio of 21 speakers, 1050 sentences in total.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.30&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:30 || 23:30  ||  13h  || mark the segments of 1050 audios. &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.8.31&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang || 10:30 || 23:30  ||  13h  ||   finish the speaker recognition experiment, and get a test result.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Time Off Table===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Name       !! Days off&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang|| &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen||  &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang||  &lt;br /&gt;
|-&lt;br /&gt;
|Xiaofei Kang ||&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-8-21</id>
		<title>ASR Status Report 2017-8-21</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-8-21"/>
				<updated>2017-08-21T04:39:40Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2017.8.21&lt;br /&gt;
&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Prepare the data and finish experiments on the 5 sets of recorded speech.&lt;br /&gt;
* Finish the human test website (including 20 styles); many thanks to sister Shuai!&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2017.8.14&lt;br /&gt;
&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* Recorded audio from 35 people, located in /work7/zhangmiao/speaker/wavdata/data_new&lt;br /&gt;
* Learn the new test website from zhangmiao&lt;br /&gt;
|| &lt;br /&gt;
* Go home with my mom, and come back on Friday night.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Recording work&lt;br /&gt;
* Test website's data preparation&lt;br /&gt;
* check the linear chapter&lt;br /&gt;
|| &lt;br /&gt;
* Continue to record&lt;br /&gt;
* do experiments on recorded speech if possible&lt;br /&gt;
* check the NN chapter&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/50/Connection_Sparseness.pdf TRP] uploaded.&lt;br /&gt;
* explore the importance of sparseness structure:&lt;br /&gt;
** After pruning, initialize non-zero values randomly, train.&lt;br /&gt;
** train nnet with 177-dimension hidden layer.&lt;br /&gt;
** [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=wangyanqing&amp;amp;step=view_request&amp;amp;cvssid=609 result]&lt;br /&gt;
||&lt;br /&gt;
* continue exploring the values of trained nnet.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* general codeMap finished (Kazakh)&lt;br /&gt;
* crawler program delayed (most of the Kazakh websites are down; I will crawl data from overseas websites)&lt;br /&gt;
|| &lt;br /&gt;
* collect more Unicode ranges, such as Tibetan and Mongolian.&lt;br /&gt;
* crawl Kazakh data from overseas websites.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* Study English and help Lantian do some Exps.&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* Visualization and quantification for d-vector [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/e/e2/Spk_seg.pdf].&lt;br /&gt;
** phone-aware and phone-blind.&lt;br /&gt;
** within speaker variation and between speaker variation. &lt;br /&gt;
* Speaker segmentation Exps.&lt;br /&gt;
||&lt;br /&gt;
* Finish speaker segmentation Exp.&lt;br /&gt;
* Prepare IS17 presentation.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* reorganize auto-scoring system, next ???&lt;br /&gt;
* collecting material (PPT) for Kaldi toolbook.&lt;br /&gt;
||&lt;br /&gt;
* prefer to rewrite the scoring part.&lt;br /&gt;
* toolbook writing&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/Asr-progress_2017.07</id>
		<title>Asr-progress 2017.07</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/Asr-progress_2017.07"/>
				<updated>2017-08-17T08:22:12Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
===Daily Report===&lt;br /&gt;
&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Date !! Person  !! start!! leave !! hours ||status&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.1&lt;br /&gt;
|Yanqing Wang ||  11:00  || 20:00   ||   9h   ||  continue on experiments on 4 types of activation function&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  20:00   ||   11h   ||  do a meeting report on trivial events&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.2&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||  9h    ||   continue on experiments on 4 types of activation function&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  20:00   ||    11h   ||  design a human test website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.3&lt;br /&gt;
|Yanqing Wang || 11:00   || 20:00   ||  9h    ||  change the dimension to 1000 and retry the former exps&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  20:00   ||   11h   ||  design a human test website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.4&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  20:00   ||   11h   ||  design a human test website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.5&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  20:00   ||   11h   ||  design a human test website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.6&lt;br /&gt;
|Yanqing Wang || 11:00   || 20:00   ||  9h    ||  change the dimension to 1000 and retry the former exps&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  20:00   ||   11h   ||  design a human test website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.7&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   ||  continue former exps, read source code of Kaldi&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  20:00   ||   11h   ||  finish the human test website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.8&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   ||  continue former exps, read source code of Kaldi&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||     ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.9&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   ||  continue former exps, read source code of Kaldi&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||     ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.10&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  20:00   ||   11h   ||  Read the paper of Paralinguistics&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.11&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   ||  start to change the source code of Kaldi in order to implement retraining the nnet&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  20:00   ||   11h   ||  Read the paper of Paralinguistics&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.12&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  20:00   ||   11h   ||   Read the paper of Paralinguistics&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.13&lt;br /&gt;
|Yanqing Wang ||  14:00  || 20:00   ||  6h    ||   start to change the source code of Kaldi in order to implement retraining the nnet&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  20:00   ||   11h   ||  Make a plan for recording&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.14&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  20:00   ||   11h   ||  Read material from Teacher Li&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.15&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   ||  change the source code of Kaldi in order to implement retraining the nnet&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||     ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.16&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||     ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.17&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   || change the source code of Kaldi&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  20:00   ||   11h   ||  check the book of deep learning&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.18&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   || change the source code of Kaldi&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  20:00   ||   11h   ||  check the book of deep learning&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.19&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   || after changing source code, compile Kaldi and redo the former exps&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  20:00   ||  11h    ||  work out the recording plan with instruction from Teacher Li&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.20&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   || redo the former exps&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  20:00   ||   11h   ||  learned Kaldi and did experiments&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.21&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   || redo the former exps and test the conclusions&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  20:00   ||   11h   ||  joined a meeting in Chinese Academy of Social Sciences&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.22&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||     ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.23&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||     ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.24&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   || debug and find the error in the exps&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  20:00   ||   11h   ||  test performances on 12-style database&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.25&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   || redo the former exps and test the conclusions&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  20:00   ||   11h   ||  test performances on 12-style database&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.26&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   || redo the former exps&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  20:00   ||   11h   ||  test performances on 12-style database&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.27&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   || redo the former exps&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  20:00   ||  11h    ||  test performances on 12-style database&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.28&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   || conclude the exps and start to write a report&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  20:00   ||   11h   ||  test performances on 12-style database&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.29&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||     ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.30&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||     ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.31&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   || learn to use LaTeX&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  20:00   ||      ||  optimize the vad parameter to improve the performance&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Time Off Table===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Name       !! Days off&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang|| &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen||  &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang||  &lt;br /&gt;
|-&lt;br /&gt;
|Xiaofei Kang ||&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/Asr-progress_2017.07</id>
		<title>Asr-progress 2017.07</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/Asr-progress_2017.07"/>
				<updated>2017-08-17T08:21:00Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：/* Daily Report */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
===Daily Report===&lt;br /&gt;
&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Date !! Person  !! start!! leave !! hours ||status&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.1&lt;br /&gt;
|Yanqing Wang ||  11:00  || 20:00   ||   9h   ||  continue on experiments on 4 types of activation function&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||  9:00  ||  20:00   ||   11h   ||  do a meeting report on trivial events&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.2&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||  9h    ||   continue on experiments on 4 types of activation function&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||  20:00   ||    11h   ||  design a human test website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.3&lt;br /&gt;
|Yanqing Wang || 11:00   || 20:00   ||  9h    ||  change the dimension to 1000 and retry the former exps&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||  20:00   ||   11h   ||  design a human test website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.4&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||  20:00   ||   11h   ||  design a human test website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.5&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||  20:00   ||   11h   ||  design a human test website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.6&lt;br /&gt;
|Yanqing Wang || 11:00   || 20:00   ||  9h    ||  change the dimension to 1000 and retry the former exps&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||  20:00   ||   11h   ||  design a human test website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.7&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   ||  continue former exps, read source code of Kaldi&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||  20:00   ||   11h   ||  finish the human test website&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.8&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   ||  continue former exps, read source code of Kaldi&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||     ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.9&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   ||  continue former exps, read source code of Kaldi&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||     ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.10&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||  20:00   ||   11h   ||  Read the paper of Paralinguistics&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.11&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   ||  start to change the source code of Kaldi in order to implement retraining the nnet&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||  20:00   ||   11h   ||  Read the paper of Paralinguistics&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.12&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||  20:00   ||   11h   ||   Read the paper of Paralinguistics&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.13&lt;br /&gt;
|Yanqing Wang ||  14:00  || 20:00   ||  6h    ||   start to change the source code of Kaldi in order to implement retraining the nnet&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||  20:00   ||   11h   ||  Make a plan for recording&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.14&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||  20:00   ||   11h   ||  Read material from Teacher Li&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.15&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   ||  change the source code of Kaldi in order to implement retraining the nnet&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||     ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.16&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||     ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.17&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   || change the source code of Kaldi&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||  20:00   ||   11h   ||  check the book of deep learning&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.18&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   || change the source code of Kaldi&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||  20:00   ||   11h   ||  check the book of deep learning&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.19&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   || after changing source code, compile Kaldi and redo the former exps&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||  20:00   ||  11h    ||  work out the recording plan with instruction from Teacher Li&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.20&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   || redo the former exps&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||  20:00   ||   11h   ||  learned Kaldi and did experiments&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.21&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   || redo the former exps and test the conclusions&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||  20:00   ||   11h   ||  joined a meeting in Chinese Academy of Social Sciences&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.22&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||     ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.23&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||     ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.24&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   || debug and locate the errors in the exps&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||  20:00   ||   11h   ||  test performances on 12-style database&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.25&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   || redo the former exps and test the conclusions&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||  20:00   ||   11h   ||  test performances on 12-style database&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.26&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   || redo the former exps&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||  20:00   ||   11h   ||  test performances on 12-style database&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.27&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   || redo the former exps&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||  20:00   ||  11h    ||  test performances on 12-style database&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.28&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   || conclude the exps and start to write a report&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||  20:00   ||   11h   ||  test performances on 12-style database&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.29&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||     ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.30&lt;br /&gt;
|Yanqing Wang ||    ||    ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||     ||      ||  &lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;4&amp;quot;|2017.7.31&lt;br /&gt;
|Yanqing Wang ||  11:00  ||  20:00  ||   9h   || learn to use LaTeX&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen ||     ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang   ||    ||  20:00   ||      ||  optimize the vad parameter to improve the performance&lt;br /&gt;
|-&lt;br /&gt;
| Xiaofei Kang ||    ||    ||      ||   &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Time Off Table===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Name       !! Days off&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang|| &lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen||  &lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang||  &lt;br /&gt;
|-&lt;br /&gt;
|Xiaofei Kang ||&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-8-14</id>
		<title>ASR Status Report 2017-8-14</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-8-14"/>
				<updated>2017-08-14T05:24:26Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2017.8.14&lt;br /&gt;
&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Recording work&lt;br /&gt;
* Test website's data preparation&lt;br /&gt;
* check the linear chapter&lt;br /&gt;
|| &lt;br /&gt;
* Continue to record&lt;br /&gt;
* do experiments on recorded speech if possible&lt;br /&gt;
* check the NN chapter&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/50/Connection_Sparseness.pdf TRP] uploaded.&lt;br /&gt;
* explore the importance of sparseness structure:&lt;br /&gt;
** After pruning, initialize non-zero values randomly, train.&lt;br /&gt;
** train nnet with 177-dimension hidden layer.&lt;br /&gt;
** [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=wangyanqing&amp;amp;step=view_request&amp;amp;cvssid=609 result]&lt;br /&gt;
||&lt;br /&gt;
* continue exploring the values of trained nnet.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* reorganize auto-scoring system, next ???&lt;br /&gt;
* collecting material (PPT) for Kaldi toolbook.&lt;br /&gt;
||&lt;br /&gt;
* prefer to rewrite the scoring part.&lt;br /&gt;
* toolbook writing&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2017.8.7&lt;br /&gt;
&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* Finish experiments of 12-style speech with ZhangMiao. (Results are shown in ZhangMiao's CVSS)&lt;br /&gt;
* Complete a part of the recording work: collecting six types of sound from 13 people.&lt;br /&gt;
|| &lt;br /&gt;
* Finish the recording work left with ZhangMiao&lt;br /&gt;
* Build a new test website with ZhangMiao&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Finish experiments of 12-style speech with Xiaofei. (Results are shown in CVSS)&lt;br /&gt;
* Build a new test website &lt;br /&gt;
|| &lt;br /&gt;
* Recording work&lt;br /&gt;
* Improve the website by decreasing salient segments and replenishing other styles&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* retrain experiments finished&lt;br /&gt;
* TRP finished&lt;br /&gt;
||&lt;br /&gt;
* structure vs. value&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* setup server for m2asr [finished]&lt;br /&gt;
* design crawler program&lt;br /&gt;
|| &lt;br /&gt;
* finish the crawler program&lt;br /&gt;
* CodeMap for Tibetan&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* Visualization and quantification for d-vector [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/e/e2/Spk_seg.pdf].&lt;br /&gt;
** phone-aware and phone-blind.&lt;br /&gt;
** within speaker variation and between speaker variation.&lt;br /&gt;
* Lots of trifles.&lt;br /&gt;
|| &lt;br /&gt;
* Speaker segmentation task.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* Some functions of the auto-scoring system rewritten.&lt;br /&gt;
|| &lt;br /&gt;
* An app demo with Shuai Zhang. &lt;br /&gt;
* Kaldi book writing.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-8-14</id>
		<title>ASR Status Report 2017-8-14</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-8-14"/>
				<updated>2017-08-14T05:23:49Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2017.8.14&lt;br /&gt;
&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Recording work&lt;br /&gt;
* Test website's data preparation&lt;br /&gt;
* check the linear chapter&lt;br /&gt;
|| &lt;br /&gt;
* Continue to record&lt;br /&gt;
* do experiments on recorded speech if possible&lt;br /&gt;
* check the NN chapter&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/5/50/Connection_Sparseness.pdf TRP] uploaded.&lt;br /&gt;
* explore the importance of sparseness structure:&lt;br /&gt;
** After pruning, initialize non-zero values randomly, train.&lt;br /&gt;
** train nnet with 177-dimension hidden layer.&lt;br /&gt;
** [http://192.168.0.51:5555/cgi-bin/cvss/cvss_request.pl?account=wangyanqing&amp;amp;step=view_request&amp;amp;cvssid=609 result]&lt;br /&gt;
||&lt;br /&gt;
* continue exploring the values of trained nnet.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* reorganize auto-scoring system, next ???&lt;br /&gt;
* collecting material (PPT) for Kaldi toolbook.&lt;br /&gt;
||&lt;br /&gt;
* prefer to rewrite the scoring part.&lt;br /&gt;
* toolbook writing&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2017.8.7&lt;br /&gt;
&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* Finish experiments of 12-style speech with ZhangMiao. (Results are shown in ZhangMiao's CVSS)&lt;br /&gt;
* Complete a part of the recording work: collecting six types of sound from 13 people.&lt;br /&gt;
|| &lt;br /&gt;
* Finish the recording work left with ZhangMiao&lt;br /&gt;
* Build a new test website with ZhangMiao&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Finish experiments of 12-style speech with Xiaofei. (Results are shown in CVSS)&lt;br /&gt;
* Build a new test website &lt;br /&gt;
|| &lt;br /&gt;
* Recording work&lt;br /&gt;
* Improve the website by decreasing salient segments and replenishing other styles&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* retrain experiments finished&lt;br /&gt;
* TRP finished&lt;br /&gt;
||&lt;br /&gt;
* structure vs. value&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* setup server for m2asr [finished]&lt;br /&gt;
* design crawler program&lt;br /&gt;
|| &lt;br /&gt;
* finish the crawler program&lt;br /&gt;
* CodeMap for Tibetan&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* Visualization and quantification for d-vector [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/e/e2/Spk_seg.pdf].&lt;br /&gt;
** phone-aware and phone-blind.&lt;br /&gt;
** within speaker variation and between speaker variation.&lt;br /&gt;
* Lots of trifles.&lt;br /&gt;
|| &lt;br /&gt;
* Speaker segmentation task.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* Some functions of the auto-scoring system rewritten.&lt;br /&gt;
|| &lt;br /&gt;
* An app demo with Shuai Zhang. &lt;br /&gt;
* Kaldi book writing.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-8-7</id>
		<title>ASR Status Report 2017-8-7</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-8-7"/>
				<updated>2017-08-07T04:53:19Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2017.8.7&lt;br /&gt;
&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Finish experiments of 12-style speech with Xiaofei. (Results are shown in CVSS)&lt;br /&gt;
* Build a new test website &lt;br /&gt;
|| &lt;br /&gt;
* Recording work&lt;br /&gt;
* Improve the website by decreasing salient segments and replenishing other styles&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* setup server for m2asr [finished]&lt;br /&gt;
* design crawler program&lt;br /&gt;
|| &lt;br /&gt;
* finish the crawler program&lt;br /&gt;
* CodeMap for Tibetan&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* Some functions of the auto-scoring system rewritten.&lt;br /&gt;
|| &lt;br /&gt;
* An app demo with Shuai Zhang. &lt;br /&gt;
* Kaldi book writing.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2017.7.31&lt;br /&gt;
&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* Finish the Speaker Recognition experiment: mouth with candy, normal chat&lt;br /&gt;
|| &lt;br /&gt;
* Understand all the scripts of the Speaker Recognition experiment, and then learn to modify them.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* finish the experiments on five kinds of speech&lt;br /&gt;
|| &lt;br /&gt;
* optimize the vad parameter to improve the performance&lt;br /&gt;
* finish the new human test website&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* retraining task: experiments are in progress, some time needed.&lt;br /&gt;
||&lt;br /&gt;
* all experiments should be done.&lt;br /&gt;
* TRP of retraining task.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* apply mongodb and ajax on the data checking website &lt;br /&gt;
** with mongodb we no longer depend on file locks&lt;br /&gt;
** there is no need to save web state (except some cookies) after employing ajax&lt;br /&gt;
* continue to learn crawler&lt;br /&gt;
|| &lt;br /&gt;
* setup server for m2asr (use sheep02)&lt;br /&gt;
* design crawler program&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* plot t-SNE picture for 863 &amp;amp; fisher-5000 data sets&lt;br /&gt;
* find out why the performance of whisper is better than that of chat&lt;br /&gt;
|| &lt;br /&gt;
* check data and paper&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* T-sne plot for speaker segmentation preparation [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/e/e2/Spk_seg.pdf].&lt;br /&gt;
* check TASLP and NIPS paper.&lt;br /&gt;
|| &lt;br /&gt;
* deep spk recipe.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* Updated the auto-scoring system to the newest version of Kaldi. Several issues still need to be fixed. &lt;br /&gt;
* Kaldi book writing.&lt;br /&gt;
|| &lt;br /&gt;
* Initial version of auto-scoring system.&lt;br /&gt;
* Kaldi book writing.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-8-7</id>
		<title>ASR Status Report 2017-8-7</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-8-7"/>
				<updated>2017-08-07T04:52:48Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2017.8.7&lt;br /&gt;
&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Finish experiments of 12-style speech with Xiaofei. (Results are shown in CVSS)&lt;br /&gt;
* Build a new test website &lt;br /&gt;
|| &lt;br /&gt;
* Recording work&lt;br /&gt;
* Improve the website by decreasing salient segments and replenishing other styles&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* setup server for m2asr [finished]&lt;br /&gt;
* design crawler program&lt;br /&gt;
|| &lt;br /&gt;
* finish the crawler program&lt;br /&gt;
* CodeMap for Tibetan&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* Some functions of the auto-scoring system rewritten.&lt;br /&gt;
|| &lt;br /&gt;
* An app demo with Shuai Zhang. &lt;br /&gt;
* Kaldi book writing.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2017.7.31&lt;br /&gt;
&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* Finish the Speaker Recognition experiment: mouth with candy, normal chat&lt;br /&gt;
|| &lt;br /&gt;
* Understand all the scripts of the Speaker Recognition experiment, and then learn to modify them.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* finish the experiments on five kinds of speech&lt;br /&gt;
|| &lt;br /&gt;
* optimize the vad parameter to improve the performance&lt;br /&gt;
* finish the new human test website&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* retraining task: experiments are in progress, some time needed.&lt;br /&gt;
||&lt;br /&gt;
* all experiments should be done.&lt;br /&gt;
* TRP of retraining task.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* apply mongodb and ajax on the data checking website &lt;br /&gt;
** with mongodb we no longer depend on file locks&lt;br /&gt;
** there is no need to save web state (except some cookies) after employing ajax&lt;br /&gt;
* continue to learn crawler&lt;br /&gt;
|| &lt;br /&gt;
* setup server for m2asr (use sheep02)&lt;br /&gt;
* design crawler program&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* plot t-SNE picture for 863 &amp;amp; fisher-5000 data sets&lt;br /&gt;
* find out why the performance of whisper is better than that of chat&lt;br /&gt;
|| &lt;br /&gt;
* check data and paper&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* T-sne plot for speaker segmentation preparation [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/e/e2/Spk_seg.pdf].&lt;br /&gt;
* check TASLP and NIPS paper.&lt;br /&gt;
|| &lt;br /&gt;
* deep spk recipe.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* Updated the auto-scoring system to the newest version of Kaldi. Several issues still need to be fixed. &lt;br /&gt;
* Kaldi book writing.&lt;br /&gt;
|| &lt;br /&gt;
* Initial version of auto-scoring system.&lt;br /&gt;
* Kaldi book writing.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/Cslt-member-visitors</id>
		<title>Cslt-member-visitors</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/Cslt-member-visitors"/>
				<updated>2017-08-05T03:21:19Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：/* Miao Zhang (张淼) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
==Professionals==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Engineers==&lt;br /&gt;
&lt;br /&gt;
=== Yuxin Zhang (张雨心) ===&lt;br /&gt;
[[文件:Zyx.jpg|200px]]&lt;br /&gt;
* Haixia research center&lt;br /&gt;
* 2016.10 -&lt;br /&gt;
* Finance processing&lt;br /&gt;
* [[媒体文件:Agreement zyx.jpg|Data Security Agreement]]&lt;br /&gt;
&lt;br /&gt;
==Students==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Jiyuan Zhang (张记袁)===&lt;br /&gt;
[[文件:Zhangjiyuan.png|200px]]&lt;br /&gt;
* PKU&lt;br /&gt;
* 2016.4-&lt;br /&gt;
* neural generation model&lt;br /&gt;
* [[媒体文件:An overview of machine translation.pptx|Bi-weekly report]]&lt;br /&gt;
* [[媒体文件:Zhangjy_data.jpg|Data_security_agreement]]&lt;br /&gt;
&lt;br /&gt;
===Ying Shi (石颖)===&lt;br /&gt;
[[文件:Ying_shi.jpg|200px]]&lt;br /&gt;
* BJTU&lt;br /&gt;
* 2016.6.15-&lt;br /&gt;
* Speech processing&lt;br /&gt;
* [[媒体文件:Shiying_bi_weekly_report.ppt|Bi-weekly report]]&lt;br /&gt;
*[[媒体文件:DataSecurityAgreement_YingShi.jpg|DataSecurityAgreement]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Yixiang Chen (陈怿翔)===&lt;br /&gt;
[[文件:Chenyx.jpg|200px]]&lt;br /&gt;
*China University of Mining and Technology&lt;br /&gt;
* 2016.7-&lt;br /&gt;
* Speech processing&lt;br /&gt;
* [[媒体文件:Chenyx_report.pdf |Bi-weekly report]]&lt;br /&gt;
* [[媒体文件:Chenyx data.jpg|Data_security_agreement]]&lt;br /&gt;
&lt;br /&gt;
===Shiyue Zhang(张诗悦)===&lt;br /&gt;
[[文件:Zhang Shiyue.jpg|200px]]&lt;br /&gt;
* BUPT&lt;br /&gt;
* 2016.9.06-&lt;br /&gt;
* Language processing&lt;br /&gt;
* [[媒体文件:1.pic hd.jpg| Data_security_agreement]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Yang Wei (魏扬) ===&lt;br /&gt;
[[文件:Weiy_photo.jpg|200px]]&lt;br /&gt;
* BUPT&lt;br /&gt;
* 2016.10 -&lt;br /&gt;
* Speech processing&lt;br /&gt;
* [[媒体文件:Bi-monthly report weiy.pdf|Bi-monthly report]]&lt;br /&gt;
* [[媒体文件:Data agreement weiy.jpg|Data security agreement]]&lt;br /&gt;
&lt;br /&gt;
===Yanqing Wang（王延清）===&lt;br /&gt;
[[文件:wyq photo.jpeg|200px]]&lt;br /&gt;
* BUPT&lt;br /&gt;
* 2016.11-2017.2&lt;br /&gt;
* Speech processing&lt;br /&gt;
* [[媒体文件:Bi-weekly_report.pptx|Bi-weekly report]]&lt;br /&gt;
*[[媒体文件:DataSecurityAgreement wangyanqing.jpg|Data_security_agreement]]&lt;br /&gt;
&lt;br /&gt;
=== Yaodong Wang (王耀东) ===&lt;br /&gt;
[[文件:wangyd.jpg|200px]]&lt;br /&gt;
* CUFE&lt;br /&gt;
* 2016.12.22 -&lt;br /&gt;
* [[媒体文件:Bi_weekly_report.pptx |Bi-weekly report]]&lt;br /&gt;
* [[媒体文件:Data_security_Agreement.jpg|Data security agreement]]&lt;br /&gt;
* Financial processing&lt;br /&gt;
&lt;br /&gt;
=== Tongzheng Ren (任桐正) ===&lt;br /&gt;
[[文件:IcCardPicture.do2.jpeg|200px]]&lt;br /&gt;
* THU&lt;br /&gt;
* 2016.12.22 -&lt;br /&gt;
* [[媒体文件:利用LSTM预测时间序列.pptx|Bi-weekly report]]&lt;br /&gt;
* [[媒体文件:Data Security Agreement-Tongzheng Ren.jpg|Data security agreement]]&lt;br /&gt;
* Financial processing&lt;br /&gt;
&lt;br /&gt;
=== Shipan Ren (任师攀) ===&lt;br /&gt;
[[文件:Rsp.jpg|200px]]&lt;br /&gt;
* PKU&lt;br /&gt;
* 2017.05.10 -&lt;br /&gt;
* [[媒体文件:seq2seq.pptx|Bi-weekly report]]&lt;br /&gt;
* [[媒体文件:Agreement.jpg|Data security agreement]]&lt;br /&gt;
* Language processing&lt;br /&gt;
&lt;br /&gt;
=== Miao Zhang (张淼) ===&lt;br /&gt;
[[文件:miao.JPG|200px]]&lt;br /&gt;
* BUPT&lt;br /&gt;
* 2017.5.1 -&lt;br /&gt;
* [[媒体文件:Zm cough.pdf |Bi-weekly report]]&lt;br /&gt;
* [[媒体文件:Zm.JPG|Data security agreement]]&lt;br /&gt;
* Speech processing&lt;br /&gt;
&lt;br /&gt;
=== Xuejing Zhang (张学敬) ===&lt;br /&gt;
[[文件:Zhangxuejing.jpg|200px]]&lt;br /&gt;
* BISTU&lt;br /&gt;
* 2017.7.7 -&lt;br /&gt;
* [[媒体文件:Zhangxj.jpg|Data security agreement]]&lt;br /&gt;
* Language processing&lt;br /&gt;
&lt;br /&gt;
=== Xiaofei Kang (康晓非) ===&lt;br /&gt;
[[文件:xxxx.jpg|200px]]&lt;br /&gt;
* PKU&lt;br /&gt;
* 2017.7.20 -&lt;br /&gt;
* [[媒体文件:xxxx.jpg|Data security agreement]]&lt;br /&gt;
* Speech processing&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Miao.JPG</id>
		<title>文件:Miao.JPG</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Miao.JPG"/>
				<updated>2017-08-05T03:19:48Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Cough.pdf</id>
		<title>文件:Cough.pdf</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/%E6%96%87%E4%BB%B6:Cough.pdf"/>
				<updated>2017-08-01T02:56:10Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-7-31</id>
		<title>ASR Status Report 2017-7-31</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-7-31"/>
				<updated>2017-07-31T04:02:18Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2017.7.31&lt;br /&gt;
&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* finish the experiments on five kinds of speech&lt;br /&gt;
|| &lt;br /&gt;
* optimize the VAD parameter to improve performance&lt;br /&gt;
* finish the new human test website&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* retraining task: experiments are in progress, some time needed.&lt;br /&gt;
||&lt;br /&gt;
* all experiments should be done.&lt;br /&gt;
* TRP of retraining task.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* apply MongoDB and AJAX to the data checking website &lt;br /&gt;
** with MongoDB we no longer depend on file locks&lt;br /&gt;
** no need to save web state (except some cookies) after employing AJAX&lt;br /&gt;
* continue to learn about crawlers&lt;br /&gt;
|| &lt;br /&gt;
* set up a server for m2asr (using sheep02)&lt;br /&gt;
* design a crawler program&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* Updated the auto-scoring system to the newest version of Kaldi; several patches still need fixing. &lt;br /&gt;
* Kaldi book writing.&lt;br /&gt;
|| &lt;br /&gt;
* Initial version of auto-scoring system.&lt;br /&gt;
* Kaldi book writing.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2017.7.24&lt;br /&gt;
&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* Prepare the speaker recognition data set: pick out whisper speech&lt;br /&gt;
* Learn the nnet3 model, run the nnet3 experiment &lt;br /&gt;
|| &lt;br /&gt;
* Learn the Speaker Recognition model, run the Speaker Recognition experiment&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* attended a meeting at the Chinese Academy of Social Sciences&lt;br /&gt;
* worked out a recording plan&lt;br /&gt;
* learnt Kaldi and did experiments&lt;br /&gt;
|| &lt;br /&gt;
* test performances on 12 kinds of voices we have&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Hui Tang &lt;br /&gt;
|| &lt;br /&gt;
* help Jiayin configure DNN and LSTM in Kaldi&lt;br /&gt;
|| &lt;br /&gt;
* left for postgraduate life&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* change the source code of Kaldi to implement retraining ( with zero value fixed )&lt;br /&gt;
* start to write a technical report of pruning the neural network ( not finished ) &lt;br /&gt;
||&lt;br /&gt;
* finish the retraining task&lt;br /&gt;
* finish the technical report&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* data checking website&lt;br /&gt;
* learn how to write a crawler program&lt;br /&gt;
|| &lt;br /&gt;
* write a more general crawler&lt;br /&gt;
* realign Kazakh train and test data with the transfer learning model &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* use whisper audio for speaker recognition&lt;br /&gt;
* attended a meeting at the Chinese Academy of Social Sciences&lt;br /&gt;
|| &lt;br /&gt;
* test performances on 12 kinds of voices&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* deepspk on TASLP.&lt;br /&gt;
* speaker segmentation.&lt;br /&gt;
|| &lt;br /&gt;
* recipe of deepspk.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* Replaced ATLAS lib with MKL lib for compiling auto-scoring system.&lt;br /&gt;
* Kaldi book writing.&lt;br /&gt;
|| &lt;br /&gt;
* A basic demo for auto-scoring system. &lt;br /&gt;
* Kaldi book writing.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-7-24</id>
		<title>ASR Status Report 2017-7-24</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-7-24"/>
				<updated>2017-07-25T08:32:01Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;8&amp;quot;|2017.7.24&lt;br /&gt;
&lt;br /&gt;
|Xiaofei Kang&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* attended a meeting at the Chinese Academy of Social Sciences&lt;br /&gt;
* worked out a recording plan&lt;br /&gt;
* learnt Kaldi and did experiments&lt;br /&gt;
|| &lt;br /&gt;
* test performances on 12 kinds of voices we have&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Hui Tang &lt;br /&gt;
|| &lt;br /&gt;
* help Jiayin configure DNN and LSTM in Kaldi&lt;br /&gt;
|| &lt;br /&gt;
* left for postgraduate life&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
||&lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* Replaced ATLAS lib with MKL lib for compiling auto-scoring system.&lt;br /&gt;
* Kaldi book writing.&lt;br /&gt;
|| &lt;br /&gt;
* A basic demo for auto-scoring system. &lt;br /&gt;
* Kaldi book writing.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot;|2017.7.17&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Read the paralinguistic paper and material from Teacher Li&lt;br /&gt;
* work out the recording plan (delayed)&lt;br /&gt;
|| &lt;br /&gt;
* work out the recording plan with instructions from Teacher Li&lt;br /&gt;
* check the deep learning book&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Hui Tang &lt;br /&gt;
|| &lt;br /&gt;
* finish checking the speech database&lt;br /&gt;
* help Jiayin learn Kaldi by training a language identification model &lt;br /&gt;
|| &lt;br /&gt;
* determine which type of voice we need &lt;br /&gt;
* help Jiayin configure DNN and LSTM in Kaldi &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* check the former conclusions in a narrow network (not finished yet)&lt;br /&gt;
* read the source code as preparation for the retraining task.&lt;br /&gt;
||&lt;br /&gt;
* finish checking the former conclusions and try to find the applicable conditions.&lt;br /&gt;
* finish the retraining task.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* Kazakh transfer learning (WER)&lt;br /&gt;
** dark:16.50&lt;br /&gt;
** fix:18.33&lt;br /&gt;
** org:23.69&lt;br /&gt;
* data checking website &lt;br /&gt;
** major functions have been completed&lt;br /&gt;
** saving state on page refresh or close is in progress&lt;br /&gt;
|| &lt;br /&gt;
* finish the website (employ a text database)&lt;br /&gt;
* design a more powerful crawler&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* synthesize speech through the voiceprint spectrum&lt;br /&gt;
* read paralinguistics papers and the paralinguistics challenges 2009-2017 &lt;br /&gt;
|| &lt;br /&gt;
* share paralinguistics&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* Replace the old Kaldi version with the new one. (delayed) &lt;br /&gt;
* Gather Part 1: 'Speech, Speech Processing and Tools' of Kaldi book for further release. (delayed)&lt;br /&gt;
|| &lt;br /&gt;
* Replace the old Kaldi version with the new one for the auto-scoring system.&lt;br /&gt;
* Gather Part 1: 'Speech, Speech Processing and Tools' of Kaldi book for further release. &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-7-17</id>
		<title>ASR Status Report 2017-7-17</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-7-17"/>
				<updated>2017-07-17T05:59:21Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot;|2017.7.17&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Read the paralinguistic paper and material from Teacher Li&lt;br /&gt;
* work out the recording plan (delayed)&lt;br /&gt;
|| &lt;br /&gt;
* work out the recording plan with instructions from Teacher Li&lt;br /&gt;
* check the deep learning book&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Hui Tang &lt;br /&gt;
|| &lt;br /&gt;
* finish checking the speech database&lt;br /&gt;
* help Jiayin learn Kaldi by training a language identification model &lt;br /&gt;
|| &lt;br /&gt;
* determine which type of voice we need &lt;br /&gt;
* help Jiayin configure DNN and LSTM in Kaldi &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* check the former conclusions in a narrow network (not finished yet)&lt;br /&gt;
* read the source code as preparation for the retraining task.&lt;br /&gt;
||&lt;br /&gt;
* finish checking the former conclusions and try to find the applicable conditions.&lt;br /&gt;
* finish the retraining task.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* Kazakh transfer learning (WER)&lt;br /&gt;
** dark:16.50&lt;br /&gt;
** fix:18.33&lt;br /&gt;
** org:23.69&lt;br /&gt;
* data checking website &lt;br /&gt;
** major functions have been completed&lt;br /&gt;
** saving state on page refresh or close is in progress&lt;br /&gt;
|| &lt;br /&gt;
* finish the website (employ a text database)&lt;br /&gt;
* design a more powerful crawler&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* synthesize speech through the voiceprint spectrum&lt;br /&gt;
* read paralinguistics papers and the paralinguistics challenges 2009-2017 &lt;br /&gt;
|| &lt;br /&gt;
* share paralinguistics&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* Replace the old Kaldi version with the new one. (delayed) &lt;br /&gt;
* Gather Part 1: 'Speech, Speech Processing and Tools' of Kaldi book for further release. (delayed)&lt;br /&gt;
|| &lt;br /&gt;
* Replace the old Kaldi version with the new one. &lt;br /&gt;
* Gather Part 1: 'Speech, Speech Processing and Tools' of Kaldi book for further release. &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot;|2017.7.10&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* A report about trivial events&lt;br /&gt;
* Finish the test website with Tanghui&lt;br /&gt;
|| &lt;br /&gt;
* Read the paper of Paralinguistics&lt;br /&gt;
* Make a plan for recording and start to record hopefully.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Hui Tang &lt;br /&gt;
|| &lt;br /&gt;
* completed the test web site [http://192.168.0.84:8091/speech/index.php web]&lt;br /&gt;
* finished checking the subset of our speech databases (nearly 800 sentences) &lt;br /&gt;
|| &lt;br /&gt;
* finish checking the remainder of the databases (nearly 2500 sentences)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* use different activation functions for pruning [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/03/Results.pdf result]&lt;br /&gt;
||&lt;br /&gt;
* make the network narrow and test the former conclusions. &lt;br /&gt;
* change source code to retrain the neural network.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* help Zheling finish his first crawler program&lt;br /&gt;
* check Kazakh speech data &lt;br /&gt;
** train: 1346 utterances with WER larger than 20% (utterance-level WER)&lt;br /&gt;
** test: 759 utterances with WER larger than 20% (utterance-level WER)&lt;br /&gt;
* transfer learning based on th30 and wsj (performance is poor)&lt;br /&gt;
|| &lt;br /&gt;
* tools for speech data checking &lt;br /&gt;
* transfer learning based on large Chinese ASR model&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* plot for “learning deep speaker features”&lt;br /&gt;
|| &lt;br /&gt;
* comprehend Paralinguistics&lt;br /&gt;
* record voice&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* deep speaker feature&lt;br /&gt;
** segmentation is still not suitable.&lt;br /&gt;
** visualization with t-SNE seems cool.&lt;br /&gt;
* help Zhangzy decode d-vector and re-train a new deep speaker model.&lt;br /&gt;
|| &lt;br /&gt;
* more details of segmentation experiments.&lt;br /&gt;
* prepare the weekly meeting.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* Scanned the source code of auto-scoring system;&lt;br /&gt;
* A report about the research of the speech group (Thursday). &lt;br /&gt;
|| &lt;br /&gt;
* Replace the old version kaldi with new ones. &lt;br /&gt;
* Gather Part 1: 'Speech, Speech Processing and Tools' of Kaldi book for further release. &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-7-17</id>
		<title>ASR Status Report 2017-7-17</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-7-17"/>
				<updated>2017-07-17T05:58:17Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot;|2017.7.17&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* work out the recording plan (delayed)&lt;br /&gt;
* Read the paralinguistic paper and material from Teacher Li&lt;br /&gt;
|| &lt;br /&gt;
* work out the recording plan with instructions from Teacher Li&lt;br /&gt;
* check the deep learning book&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Hui Tang &lt;br /&gt;
|| &lt;br /&gt;
* finish checking the speech database&lt;br /&gt;
* help Jiayin learn Kaldi by training a language identification model &lt;br /&gt;
|| &lt;br /&gt;
* determine which type of voice we need &lt;br /&gt;
* help Jiayin configure DNN and LSTM in Kaldi &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* check the former conclusions in a narrow network (not finished yet)&lt;br /&gt;
* read the source code as preparation for the retraining task.&lt;br /&gt;
||&lt;br /&gt;
* finish checking the former conclusions and try to find the applicable conditions.&lt;br /&gt;
* finish the retraining task.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* Kazakh transfer learning (WER)&lt;br /&gt;
** dark:16.50&lt;br /&gt;
** fix:18.33&lt;br /&gt;
** org:23.69&lt;br /&gt;
* data checking website &lt;br /&gt;
** major functions have been completed&lt;br /&gt;
** saving state on page refresh or close is in progress&lt;br /&gt;
|| &lt;br /&gt;
* finish the website (employ a text database)&lt;br /&gt;
* design a more powerful crawler&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* synthesize speech through the voiceprint spectrum&lt;br /&gt;
* read paralinguistics papers and the paralinguistics challenges 2009-2017 &lt;br /&gt;
|| &lt;br /&gt;
* share paralinguistics&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* Replace the old Kaldi version with the new one. (delayed) &lt;br /&gt;
* Gather Part 1: 'Speech, Speech Processing and Tools' of Kaldi book for further release. (delayed)&lt;br /&gt;
|| &lt;br /&gt;
* Replace the old Kaldi version with the new one. &lt;br /&gt;
* Gather Part 1: 'Speech, Speech Processing and Tools' of Kaldi book for further release. &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot;|2017.7.10&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* A report about trivial events&lt;br /&gt;
* Finish the test website with Tanghui&lt;br /&gt;
|| &lt;br /&gt;
* Read the paper of Paralinguistics&lt;br /&gt;
* Make a plan for recording and start to record hopefully.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Hui Tang &lt;br /&gt;
|| &lt;br /&gt;
* completed the test web site [http://192.168.0.84:8091/speech/index.php web]&lt;br /&gt;
* finished checking the subset of our speech databases (nearly 800 sentences) &lt;br /&gt;
|| &lt;br /&gt;
* finish checking the remainder of the databases (nearly 2500 sentences)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* use different activation functions for pruning [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/03/Results.pdf result]&lt;br /&gt;
||&lt;br /&gt;
* make the network narrow and test the former conclusions. &lt;br /&gt;
* change source code to retrain the neural network.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* help Zheling finish his first crawler program&lt;br /&gt;
* check Kazakh speech data &lt;br /&gt;
** train: 1346 utterances with WER larger than 20% (utterance-level WER)&lt;br /&gt;
** test: 759 utterances with WER larger than 20% (utterance-level WER)&lt;br /&gt;
* transfer learning based on th30 and wsj (performance is poor)&lt;br /&gt;
|| &lt;br /&gt;
* tools for speech data checking &lt;br /&gt;
* transfer learning based on large Chinese ASR model&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* plot for “learning deep speaker features”&lt;br /&gt;
|| &lt;br /&gt;
* comprehend Paralinguistics&lt;br /&gt;
* record voice&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* deep speaker feature&lt;br /&gt;
** segmentation is still not suitable.&lt;br /&gt;
** visualization with t-SNE seems cool.&lt;br /&gt;
* help Zhangzy decode d-vector and re-train a new deep speaker model.&lt;br /&gt;
|| &lt;br /&gt;
* more details of segmentation experiments.&lt;br /&gt;
* prepare the weekly meeting.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* Scanned the source code of auto-scoring system;&lt;br /&gt;
* A report about the research of the speech group (Thursday). &lt;br /&gt;
|| &lt;br /&gt;
* Replace the old version kaldi with new ones. &lt;br /&gt;
* Gather Part 1: 'Speech, Speech Processing and Tools' of Kaldi book for further release. &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	<entry>
		<id>http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-7-17</id>
		<title>ASR Status Report 2017-7-17</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php/ASR_Status_Report_2017-7-17"/>
				<updated>2017-07-17T04:18:57Z</updated>
		
		<summary type="html">&lt;p&gt;Zhangmiao：&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot;|2017.7.17&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* Read the paralinguistic paper and material from Teacher Li&lt;br /&gt;
* work out the recording plan (delayed)&lt;br /&gt;
|| &lt;br /&gt;
* work out the recording plan with instructions from Teacher Li&lt;br /&gt;
* check the deep learning book&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Hui Tang &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* check the former conclusions in a narrow network (not finished yet)&lt;br /&gt;
* read the source code as preparation for the retraining task.&lt;br /&gt;
||&lt;br /&gt;
* finish checking the former conclusions and try to find the applicable conditions.&lt;br /&gt;
* finish the retraining task.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|| &lt;br /&gt;
* &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* Replace the old Kaldi version with the new one. (delayed) &lt;br /&gt;
* Gather Part 1: 'Speech, Speech Processing and Tools' of Kaldi book for further release. (delayed)&lt;br /&gt;
|| &lt;br /&gt;
* Replace the old Kaldi version with the new one. &lt;br /&gt;
* Gather Part 1: 'Speech, Speech Processing and Tools' of Kaldi book for further release. &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Date!!People !! Last Week !! This Week&lt;br /&gt;
|-&lt;br /&gt;
| rowspan=&amp;quot;7&amp;quot;|2017.7.10&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|Miao Zhang&lt;br /&gt;
|| &lt;br /&gt;
* A report about trivial events&lt;br /&gt;
* Finish the test website with Tanghui&lt;br /&gt;
|| &lt;br /&gt;
* Read the paper of Paralinguistics&lt;br /&gt;
* Make a plan for recording and start to record hopefully.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Hui Tang &lt;br /&gt;
|| &lt;br /&gt;
* completed the test web site [http://192.168.0.84:8091/speech/index.php web]&lt;br /&gt;
* finished checking the subset of our speech databases (nearly 800 sentences) &lt;br /&gt;
|| &lt;br /&gt;
* finish checking the remainder of the databases (nearly 2500 sentences)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yanqing Wang&lt;br /&gt;
|| &lt;br /&gt;
* use different activation functions for pruning [http://cslt.riit.tsinghua.edu.cn/mediawiki/images/0/03/Results.pdf result]&lt;br /&gt;
||&lt;br /&gt;
* make the network narrow and test the former conclusions. &lt;br /&gt;
* change source code to retrain the neural network.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Ying Shi  &lt;br /&gt;
|| &lt;br /&gt;
* help Zheling finish his first crawler program&lt;br /&gt;
* check Kazakh speech data &lt;br /&gt;
** train: 1346 utterances with WER larger than 20% (utterance-level WER)&lt;br /&gt;
** test: 759 utterances with WER larger than 20% (utterance-level WER)&lt;br /&gt;
* transfer learning based on th30 and wsj (performance is poor)&lt;br /&gt;
|| &lt;br /&gt;
* tools for speech data checking &lt;br /&gt;
* transfer learning based on large Chinese ASR model&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Yixiang Chen  &lt;br /&gt;
|| &lt;br /&gt;
* plot for “learning deep speaker features”&lt;br /&gt;
|| &lt;br /&gt;
* comprehend Paralinguistics&lt;br /&gt;
* record voice&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Lantian Li  &lt;br /&gt;
|| &lt;br /&gt;
* deep speaker feature&lt;br /&gt;
** segmentation is still not suitable.&lt;br /&gt;
** visualization with t-SNE seems cool.&lt;br /&gt;
* help Zhangzy decode d-vector and re-train a new deep speaker model.&lt;br /&gt;
|| &lt;br /&gt;
* more details of segmentation experiments.&lt;br /&gt;
* prepare the weekly meeting.&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Zhiyuan Tang &lt;br /&gt;
|| &lt;br /&gt;
* Scanned the source code of auto-scoring system;&lt;br /&gt;
* A report about the research of the speech group (Thursday). &lt;br /&gt;
|| &lt;br /&gt;
* Replace the old version kaldi with new ones. &lt;br /&gt;
* Gather Part 1: 'Speech, Speech Processing and Tools' of Kaldi book for further release. &lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Zhangmiao</name></author>	</entry>

	</feed>