"Reading Task": Difference between revisions
From cslt Wiki
Revision as of 02:39, 24 July 2015 (Friday)
| Affiliation | Paper Name | Principal | Materials |
|---|---|---|---|
| ICML 2015 | From Word Embeddings To Document Distances | - | - |
| ICML 2015 | Weight Uncertainty in Neural Network | - | - |
| ICML 2015 | Long Short-Term Memory Over Recursive Structures | - | - |
| ICML 2015 | Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift | - | - |
| ICML 2015 | Learning Transferable Features with Deep Adaptation Networks | - | - |
| ICML 2015 | Learning Word Representations with Hierarchical Sparse Coding | - | - |
| ICML 2015 | DRAW: A Recurrent Neural Network For Image Generation | - | - |
| ICML 2015 | Unsupervised Learning of Video Representations using LSTMs | - | - |
| ICML 2015 | MADE: Masked Autoencoder for Distribution Estimation | - | - |
| ICML 2015 | Hashing for Distributed Data | - | - |
| ICML 2015 | Is Feature Selection Secure against Training Data Poisoning? | - | - |
| ICML 2015 | Mind the duality gap: safer rules for the Lasso | - | - |
| ICML 2015 | PeakSeg: constrained optimal segmentation and supervised penalty learning for peak detection in count data | - | - |
| ICML 2015 | Generalization error bounds for learning to rank: Does the length of document lists matter? | - | - |
| ICML 2015 | Classification with Low Rank and Missing Data | - | - |
| ICML 2015 | Functional Subspace Clustering with Application to Time Series | - | - |
| ICML 2015 | Abstraction Selection in Model-based Reinforcement Learning | - | - |
| ICML 2015 | Learning Local Invariant Mahalanobis Distances | - | - |
| ICML 2015 | A Stochastic PCA and SVD Algorithm with an Exponential Convergence Rate | - | - |