weka: naive bayes
m_NumClasses            number of classes in the training data
m_NumAttributes         number of attributes
m_Instances             the training data
m_ClassDistribution     the probability distribution of the classes, i.e. P(C); its type is Estimator
m_Distributions[m_NumAttributes][m_NumClasses]   the probability distribution of each attribute within each class; also of type Estimator

1) Numeric attributes require an Estimator for a continuous distribution, e.g. NormalEstimator.
2) Nominal attributes, e.g. (young, middle-aged, old), require a discrete one, e.g. DiscreteEstimator.
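For intuition, a nominal attribute's discrete estimator boils down to Laplace-smoothed counting. A minimal sketch of that idea (the class below is illustrative, not Weka's actual DiscreteEstimator):

```java
import java.util.Arrays;

public class EstimatorChoice {
    // Illustrative discrete estimator for a nominal attribute:
    // Laplace-smoothed counts over the attribute's possible values.
    static class DiscreteCounts {
        final double[] counts;
        double sum;

        DiscreteCounts(int numValues) {
            counts = new double[numValues];
            // Laplace correction: start every count at 1 so that values
            // never seen in training still get a nonzero probability.
            Arrays.fill(counts, 1.0);
            sum = numValues;
        }

        void addValue(int value, double weight) {
            counts[value] += weight;
            sum += weight;
        }

        double getProbability(int value) {
            return counts[value] / sum;
        }
    }

    public static void main(String[] args) {
        // three nominal values, e.g. young / middle-aged / old
        DiscreteCounts age = new DiscreteCounts(3);
        age.addValue(0, 1.0);
        System.out.println(age.getProbability(0)); // (1 + 1) / (3 + 1)
    }
}
```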
Training then proceeds incrementally.

For each sample, update m_Distributions[m_NumAttributes][m_NumClasses] as well as m_ClassDistribution.
At classification time, the function distributionForInstance is called; it returns a normalized set of values giving the probability of the test sample under each class.
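A simplified sketch of what distributionForInstance computes (not Weka's actual code, which also handles missing values and numeric precision): the class prior times the product of per-attribute conditional probabilities, normalized to sum to 1. The probability values in main are made up for illustration:

```java
import java.util.Arrays;

public class DistributionSketch {
    // priors[c] = P(C = c); condProbs[c][a] = P(value of attribute a | C = c)
    static double[] distributionFor(double[] priors, double[][] condProbs) {
        double[] probs = new double[priors.length];
        double total = 0;
        for (int c = 0; c < priors.length; c++) {
            probs[c] = priors[c];
            for (double p : condProbs[c]) {
                probs[c] *= p;
            }
            total += probs[c];
        }
        // Normalize: dividing by the sum plays the role of dividing by P(D).
        for (int c = 0; c < probs.length; c++) {
            probs[c] /= total;
        }
        return probs;
    }

    public static void main(String[] args) {
        double[] dist = distributionFor(
            new double[]{0.5, 0.5},
            new double[][]{{0.9, 0.8}, {0.1, 0.2}});
        System.out.println(Arrays.toString(dist));
    }
}
```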
public void updateClassifier(Instance instance) throws Exception {
    if (!instance.classIsMissing()) {
        Enumeration enumAtts = m_Instances.enumerateAttributes();
        int attIndex = 0;
        while (enumAtts.hasMoreElements()) {
            Attribute attribute = (Attribute) enumAtts.nextElement();
            if (!instance.isMissing(attribute)) {
                // update this attribute's estimator for the instance's class
                m_Distributions[attIndex][(int) instance.classValue()]
                    .addValue(instance.value(attribute), instance.weight());
            }
            attIndex++;
        }
        // update the class prior P(C)
        m_ClassDistribution.addValue(instance.classValue(), instance.weight());
    }
}
The core is the Estimator, e.g. NormalEstimator:

// data is the attribute's value; weight is the weight of the sample that value belongs to
public void addValue(double data, double weight) {
    if (weight == 0) {
        return;
    }
    data = round(data);
    m_SumOfWeights += weight;
    m_SumOfValues += data * weight;
    m_SumOfValuesSq += data * data * weight;
    if (m_SumOfWeights > 0) {
        m_Mean = m_SumOfValues / m_SumOfWeights;
        double stdDev = Math.sqrt(Math.abs(m_SumOfValuesSq - m_Mean * m_SumOfValues)
            / m_SumOfWeights);
        // If the stdDev ~= 0, we really have no idea of scale yet,
        // so stick with the default. Otherwise...
        if (stdDev > 1e-10) {
            m_StandardDev = Math.max(m_Precision / (2 * 3), // allow at most 3 sd's within one interval
                stdDev);
        }
    }
}

/**
 * Get a probability estimate for a value.
 *
 * @param data the value to estimate the probability of
 * @return the estimated probability of the supplied value
 */
// computes the probability that this attribute takes the value data
public double getProbability(double data) {
    data = round(data);
    double zLower = (data - m_Mean - (m_Precision / 2)) / m_StandardDev;
    double zUpper = (data - m_Mean + (m_Precision / 2)) / m_StandardDev;
    double pLower = Statistics.normalProbability(zLower);
    double pUpper = Statistics.normalProbability(zUpper);
    return pUpper - pLower;
}
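The running mean/variance bookkeeping in addValue can be checked in isolation. A stripped-down version (precision rounding and the minimum-stddev guard omitted), using the same weighted sums as above:

```java
public class RunningGaussian {
    double sumOfWeights, sumOfValues, sumOfValuesSq;

    void addValue(double data, double weight) {
        sumOfWeights += weight;
        sumOfValues += data * weight;
        sumOfValuesSq += data * data * weight;
    }

    double mean() {
        return sumOfValues / sumOfWeights;
    }

    double stdDev() {
        double m = mean();
        // Same algebra as NormalEstimator: E[x^2] - mean * E[x],
        // with abs() guarding against tiny negative rounding error.
        return Math.sqrt(Math.abs(sumOfValuesSq - m * sumOfValues) / sumOfWeights);
    }

    public static void main(String[] args) {
        RunningGaussian g = new RunningGaussian();
        for (double v : new double[]{2, 4, 4, 4, 5, 5, 7, 9}) {
            g.addValue(v, 1.0);
        }
        // this sample has mean 5 and population stddev 2
        System.out.println(g.mean() + " " + g.stdDev());
    }
}
```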
Mapped onto bogofilter's Bayes classifier:
the number of classes is 2; the number of attributes is the number of tokens;
the Estimator is a simple counter.

Training:
scan each token of the message being trained on, and increment that token's count in the corresponding class ===> m_Distributions;
increment the sample count of the corresponding class ====> m_ClassDistribution
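The two training updates above can be sketched as plain counting (class and field names here are illustrative, not bogofilter's actual code):

```java
import java.util.HashMap;
import java.util.Map;

public class TokenCounter {
    static final int SPAM = 0, HAM = 1;

    // plays the role of m_Distributions: per-token counts in each class
    final Map<String, int[]> tokenCounts = new HashMap<>();
    // plays the role of m_ClassDistribution: messages seen per class
    final int[] classCounts = new int[2];

    void train(String[] tokens, int clazz) {
        for (String token : tokens) {
            tokenCounts.computeIfAbsent(token, k -> new int[2])[clazz]++;
        }
        classCounts[clazz]++; // one more training message of this class
    }

    public static void main(String[] args) {
        TokenCounter tc = new TokenCounter();
        tc.train(new String[]{"buy", "now", "buy"}, SPAM);
        tc.train(new String[]{"meeting", "now"}, HAM);
        System.out.println(tc.tokenCounts.get("buy")[SPAM]); // counted twice
    }
}
```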
Classification:
1) take each class's prior probability                           P(C)
2) multiply by the probability of the document given that class  P(D|C), where the test document D consists of a set of tokens

P(C|D) = P(C) * P(D|C) / P(D); P(D) need not be computed, since it is the same for every class, so computing P(C) * P(D|C) is enough.
Here P(D|C) = P(t1|C) * ... * P(tn|C) by conditional independence: all that is required is that P(ti|C) and P(tj|C) be independent given the class, hence "conditional" independence.
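In practice the product P(t1|C) * ... * P(tn|C) underflows for long documents, so implementations usually work with log probabilities. A sketch of the classification step under that convention (the inputs in main are illustrative numbers, not real token statistics):

```java
import java.util.Arrays;

public class PosteriorSketch {
    // logPriors[c] = log P(C = c); logCond[c][i] = log P(t_i | C = c)
    static double[] posterior(double[] logPriors, double[][] logCond) {
        double[] logP = new double[logPriors.length];
        double max = Double.NEGATIVE_INFINITY;
        for (int c = 0; c < logP.length; c++) {
            logP[c] = logPriors[c];
            for (double lp : logCond[c]) {
                logP[c] += lp; // sum of logs = product, by conditional independence
            }
            max = Math.max(max, logP[c]);
        }
        // Exponentiate relative to the max and normalize (log-sum-exp);
        // P(D) never has to be computed because it cancels in the ratio.
        double[] probs = new double[logP.length];
        double total = 0;
        for (int c = 0; c < probs.length; c++) {
            probs[c] = Math.exp(logP[c] - max);
            total += probs[c];
        }
        for (int c = 0; c < probs.length; c++) {
            probs[c] /= total;
        }
        return probs;
    }

    public static void main(String[] args) {
        double[] p = posterior(
            new double[]{Math.log(0.5), Math.log(0.5)},
            new double[][]{{Math.log(0.9), Math.log(0.8)},
                           {Math.log(0.1), Math.log(0.2)}});
        System.out.println(Arrays.toString(p));
    }
}
```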