[[文件:Bicstoken.png]]
==Special session at BICS 2016: Deep and/or Sparse Neural Models for Speech and Language Processing==
===Introduction===
Large-scale deep neural models, e.g., deep neural networks (DNNs) and recurrent neural networks (RNNs), have demonstrated significant success in solving various challenging speech and language processing (SLP) tasks, including, amongst others, speech recognition, speech synthesis, document classification, and question answering. This growing impact corroborates the neurobiological evidence for layer-wise deep processing in the human brain. On the other hand, sparse coding representations have achieved similar success in SLP, particularly in signal processing, demonstrating that sparsity is another important neurobiological characteristic.

Traditionally, deep learning and sparse coding have been studied by different research communities. This special session at BICS 2016 (http://bii.ia.ac.cn/bics-2016/index.html) aims to offer a timely opportunity for researchers in the two areas to share their complementary results and methods, and to help mutually promote the development of new theories and methodologies for hybrid deep and sparsity-based models, particularly in the field of speech and language processing.
===Scope===
* Theories and methods for deep sparse or sparse deep models
* Theories and methods for hybrid deep neural models in SLP
* Theories and methods for hybrid sparse models in SLP
* Comparative study of deep/sparse neural and Bayesian-based models
* Applications of deep and/or sparse models in SLP

===Important dates===
* Paper submission: July 20, 2016
* Acceptance notification: August 10, 2016
* Camera-ready due: September 10, 2016
===Submission and publication===
* The special session uses the same submission system as BICS 2016 (http://bii.ia.ac.cn/bics-2016/index.html).
* Accepted papers will be published in the Springer LNAI series.
* Selected papers will be published in a special issue of Cognitive Computation Journal (http://link.springer.com/journal/12559).
===Organizers===
Dong Wang, Qiang Zhou (CSLT, Tsinghua University, China)
Email: wangdong99@mails.tsinghua.edu.cn; zq-lxd@mail.tsinghua.edu.cn

Amir Hussain (Cognitive Big Data Informatics Research Lab, University of Stirling, UK)
Email: ahu@cs.stir.ac.uk