<?xml version="1.0"?>
<?xml-stylesheet type="text/css" href="http://cslt.org/mediawiki/skins/common/feed.css?303"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="zh-cn">
		<id>http://cslt.org/mediawiki/index.php?action=history&amp;feed=atom&amp;title=LM_optimization_with_annealing_in_Chinese</id>
		<title>LM optimization with annealing in Chinese - Revision history</title>
		<link rel="self" type="application/atom+xml" href="http://cslt.org/mediawiki/index.php?action=history&amp;feed=atom&amp;title=LM_optimization_with_annealing_in_Chinese"/>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php?title=LM_optimization_with_annealing_in_Chinese&amp;action=history"/>
		<updated>2026-04-14T22:31:57Z</updated>
		<subtitle>Revision history for this page on the wiki</subtitle>
		<generator>MediaWiki 1.23.3</generator>

	<entry>
		<id>http://cslt.org/mediawiki/index.php?title=LM_optimization_with_annealing_in_Chinese&amp;diff=185&amp;oldid=prev</id>
		<title>166.111.134.19: Created page with "There is a problem particular for Chinese when building LM.   We have known that word-based LM is better than character based LM, and we choose a word list for example ..."</title>
		<link rel="alternate" type="text/html" href="http://cslt.org/mediawiki/index.php?title=LM_optimization_with_annealing_in_Chinese&amp;diff=185&amp;oldid=prev"/>
				<updated>2012-09-13T00:55:49Z</updated>
		
		<summary type="html">&lt;p&gt;Created page with "There is a problem particular for Chinese when building LM.   We have known that word-based LM is better than character based LM, and we choose a word list for example ..."&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;There is a problem particular to Chinese when building an LM. &lt;br /&gt;
&lt;br /&gt;
We know that a word-based LM is better than a character-based LM, so we choose a word list, for example 20k words. The problem is that the Chinese word vocabulary is open while the character set is closed. If we simply delete the words outside the 20k list from the training data, we will lose information. &lt;br /&gt;
&lt;br /&gt;
A possible solution is:&lt;br /&gt;
&lt;br /&gt;
1. Segment the text into words and choose a 20k word list by frequency (some tips apply as well, e.g., substituting numbers)&lt;br /&gt;
2. For words outside the 20k list, split them into sequences of shorter words (or even characters), and then amend the word frequencies&lt;br /&gt;
3. Double-check whether the 20k word list has changed. Since words ranked beyond 20k usually do not carry many counts, this should not change things significantly&lt;br /&gt;
4. Use the splitting rules to replace the corresponding words in the training data with short word sequences&lt;br /&gt;
5. Re-train the model&lt;/div&gt;</summary>
		<author><name>166.111.134.19</name></author>	</entry>

	</feed>