Oriental Language Recognition (OLR) 2017 Challenge
Oriental languages exhibit interesting characteristics, and the OLR challenge series aims to boost language recognition technology for them. Following the success of OLR Challenge 2016, the new challenge in 2017 keeps the same theme but sets up more challenging tasks:
- more languages: OLR 2016 involved 7 languages; OLR 2017 involves 10.
- shorter speech segments: OLR 2017 sets up separate tasks for 1-second, 3-second and full-length segments.
The results will be published in a special session of APSIPA ASC 2017. See the AP17 special session page for more details.
Announcement
- Number of teams that registered for the challenge: 31.
- Number of teams waiting to register: 7 (38 applicants in total, 31 registered).
- Roobo agreed to sponsor OLR-2017. Teams with good results will be awarded, perhaps with cash, but more probably with a chatting robot. :)
- At the request of some participants, the result submission deadline has been changed to Oct. 10, 12:00 PM. The main reason is that some participants from China (particularly industrial ones) cannot access their computing resources during Oct. 1-7, China's national holiday.
Data
The challenge is based on two multilingual databases: AP16-OL7, which was designed for the OLR challenge 2016, and a new complementary database, AP17-OL3.
AP16-OL7 is provided by SpeechOcean (www.speechocean.com), and AP17-OL3 is provided by Tsinghua University, Northwest Minzu University and Xinjiang University under the M2ASR project supported by NSFC.
The features of AP16-OL7 include:
- Mobile channel
- 7 languages in total
- 71 hours of speech signals in total
- Transcriptions and lexica are provided
- The data profile is here
- The License for the data is here
The features of AP17-OL3 include:
- Mobile channel
- 3 languages in total
- Tibetan, provided by Prof. Guanyu Li (Northwest Minzu University)
- Uyghur and Kazak, provided by Prof. Askar Hamdulla (Xinjiang University)
- 35 hours of speech signals in total
- Transcriptions and lexica are provided
- The data profile is here
- The License for the data is here
Evaluation plan
Refer to the scripts and papers listed below.
Evaluation tools
- The Kaldi-based baseline scripts are available here
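As context for the ranking lists further down, LRE-style evaluations such as this one are commonly scored with the pairwise Cavg metric: C_avg = (1/N) * sum over target languages t of [ P_target * P_miss(t) + sum over non-target languages n of P_nontarget * P_FA(t, n) ], with P_target = 0.5 and P_nontarget = 0.5/(N-1). The sketch below only illustrates that formula; the in-memory trial format and the fixed decision threshold are assumptions, and the evaluation plan paper and baseline scripts above remain the authoritative definition.

```python
# A minimal, illustrative sketch of the pairwise Cavg metric used in
# LRE/OLR-style language recognition evaluations. The trial layout
# (true_lang, hyp_lang, score) and the fixed threshold are assumptions
# for illustration, not the challenge's official scoring tool.

from collections import defaultdict

P_TARGET = 0.5  # prior of the target hypothesis in the usual LRE cost


def cavg(trials, languages, threshold=0.0):
    """trials: iterable of (true_lang, hyp_lang, score) triples,
    one per (utterance, hypothesised language) pair."""
    n = len(languages)
    p_nontarget = (1.0 - P_TARGET) / (n - 1)

    # Count accepted and total trials for each (hypothesised, true) pair.
    accepts = defaultdict(int)
    totals = defaultdict(int)
    for true_lang, hyp_lang, score in trials:
        totals[(hyp_lang, true_lang)] += 1
        if score >= threshold:
            accepts[(hyp_lang, true_lang)] += 1

    cost = 0.0
    for target in languages:
        # Miss rate: target trials whose score fell below the threshold.
        t_total = totals[(target, target)]
        p_miss = (1.0 - accepts[(target, target)] / t_total) if t_total else 0.0
        # False-alarm rate of this target against each non-target language.
        fa_cost = 0.0
        for nontarget in languages:
            if nontarget == target:
                continue
            n_total = totals[(target, nontarget)]
            p_fa = accepts[(target, nontarget)] / n_total if n_total else 0.0
            fa_cost += p_nontarget * p_fa
        cost += P_TARGET * p_miss + fa_cost
    return cost / n
```

For instance, cavg([("zh", "zh", 1.2), ("zh", "ja", -0.4), ("ja", "ja", 0.7), ("ja", "zh", -0.9)], ["zh", "ja"]) scores a two-language toy set with the default zero threshold.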
Participation rules
- Participants from both academia and industry are welcome
- Publications based on the data provided by the challenge should cite the following paper:
Dong Wang, Lantian Li, Difei Tang, Qing Chen, AP16-OL7: a multilingual database for oriental languages and a language recognition baseline, APSIPA ASC 2016. pdf
Zhiyuan Tang, Dong Wang, Yixiang Chen, Qing Chen: AP17-OLR Challenge: Data, Plan, and Baseline, submitted to APSIPA ASC 2017. pdf
Important dates
- Jun. 20, AP17-OLR training/dev data release.
- Sep. 20, registration deadline.
- Oct. 1, test data release.
- Oct. 10, 12:00 PM, Beijing time, submission deadline (changed from Oct. 2; see the Announcement above).
- Oct. 22, 12:00 PM, Beijing time, delayed submission deadline.
- Dec. 12, 12:00 PM, Beijing time, extended submission deadline.
- APSIPA ASC 2017, results announcement.
Registration procedure
If you intend to participate in the challenge, or if you have any questions, comments or suggestions, please email the organizers:
- Dr. Dong Wang (wangdong99@mails.tsinghua.edu.cn)
- Dr. Zhiyuan Tang (tangzhiyuan12@mails.ucas.ac.cn)
- Ms. Qing Chen (chenqing@speechocean.com)
Organizers
- Dong Wang, Tsinghua University [home]
- Zhiyuan Tang, Tsinghua University [home]
- Qing Chen, SpeechOcean
Registration status
Sponsor
Challenge results
The Oriental Language Recognition (OLR) Challenge 2017, co-organized by CSLT@Tsinghua University and SpeechOcean, was completed with great success. The results were published at APSIPA ASC, Dec. 12-15, 2017, Kuala Lumpur, Malaysia (news release: OLR_Challenge_2017_微信稿.pdf).
Overview
In total, 31 teams registered for this challenge. By the extended submission deadline (2017/12/12), 19 teams submitted their results completely, 6 teams submitted partially or responded actively, and 6 teams gave no response after downloading the data. The 19 complete submissions have been ranked in two lists: one by the overall performance on all conditions, and the other by the performance on the short-utterance condition.
Ranking List on Overall Performance
For the overall performance ranking list, we present the results and details of the 19 teams that have successfully submitted their results. Note that:
- The top 6 systems outperformed the baseline system we provided.
- Submissions with a star after the team name are extended submissions, and should not be treated on a par with the regular submissions (without a star).
Ranking List on Short-Utterance Condition
For the short-utterance performance ranking list, we present the results of the 19 teams that have successfully submitted their results. Note that:
- The top 10 systems outperformed the baseline system we provided.
- Submissions with a star after the team name are extended submissions, and should not be treated on a par with the regular submissions (without a star).
Failed participants
Six teams failed to make a complete submission; they either submitted partially or explained the reason for the failure. All these teams asked to remain anonymous. Their team names are:
- Chicken Dinner
- CLR
- LonelySpoon
- CIAIC
- asrboys
- 519
Non-response participants
Six teams downloaded the data but have not responded to date. These teams are regarded as unfaithful data users.
Award
- Best Overall Performance Award: NUS-I2R-NTU (NUS, I2R, NTU joint team)
- Best Short-Utterance Performance Award: SASI (University of New South Wales, Sydney, Australia)
Ground truth
The ground truth for the test is available at http://cslt.riit.tsinghua.edu.cn/mediawiki/images/e/e0/Olr17-groundtruth.txt.
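For participants who want to sanity-check a submission offline against the ground truth, here is a small hypothetical helper. The one-pair-per-line file format ("utt_id lang_id", with the submission carrying each utterance's top-scoring language) is an assumption for illustration; verify the actual layout against the ground-truth file and the evaluation plan before relying on it.

```python
# Hypothetical helper for a quick offline check of a submission against
# the ground-truth file. It assumes both files hold one "utt_id lang_id"
# pair per line; the real file layout should be verified first.

def read_labels(path):
    """Read a whitespace-separated "utt_id lang_id" file into a dict."""
    labels = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                labels[parts[0]] = parts[1]
    return labels


def identification_accuracy(truth_path, submission_path):
    """Fraction of ground-truth utterances whose predicted language matches."""
    truth = read_labels(truth_path)
    hypothesis = read_labels(submission_path)
    correct = sum(1 for utt, lang in truth.items()
                  if hypothesis.get(utt) == lang)
    return correct / len(truth)
```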