ML / NB: Using the naive Bayes (NB) algorithm (CountVectorizer, without stop-word removal) to classify and evaluate the fetch_20newsgroups dataset (20 classes of news text)
Contents
Output
Design approach
Core code
Output
Design approach
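The design diagram from the original post is not preserved in this capture. Reconstructed from the title, the flow is: load fetch_20newsgroups, split the documents into training and test sets, turn the raw text into bag-of-words count features with CountVectorizer (default settings, so stop words are kept), fit a MultinomialNB model on the counts, then predict on the test split and report accuracy plus a per-class classification report. The following is a minimal sketch under those assumptions; the 25% test split, random_state value, and choice of classification_report are illustrative, not confirmed from the original code.

from sklearn.datasets import fetch_20newsgroups
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report

# Download all 20 newsgroups posts (roughly 18k documents, 20 classes).
news = fetch_20newsgroups(subset='all')

# Hold out 25% of the documents for evaluation (split ratio is an assumption).
X_train, X_test, y_train, y_test = train_test_split(
    news.data, news.target, test_size=0.25, random_state=33)

# Bag-of-words counts; stop_words stays at its default (None), i.e. stop words are NOT removed.
vec = CountVectorizer()
X_train_count = vec.fit_transform(X_train)
X_test_count = vec.transform(X_test)

# Multinomial naive Bayes with default Laplace smoothing (alpha=1.0).
mnb = MultinomialNB()
mnb.fit(X_train_count, y_train)
y_pred = mnb.predict(X_test_count)

print('Accuracy:', mnb.score(X_test_count, y_test))
print(classification_report(y_test, y_pred, target_names=news.target_names))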
Core code
Reference notes: https://www.cnblogs.com/yunyaniu/articles/10465701.html
class MultinomialNB, found at: sklearn.naive_bayes

class MultinomialNB(BaseDiscreteNB):
    """Naive Bayes classifier for multinomial models

    The multinomial Naive Bayes classifier is suitable for classification with
    discrete features (e.g., word counts for text classification). The
    multinomial distribution normally requires integer feature counts. However,
    in practice, fractional counts such as tf-idf may also work.

    Read more in the :ref:`User Guide <multinomial_naive_bayes>`.

    Parameters
    ----------
    alpha : float, optional (default=1.0)
        Additive (Laplace/Lidstone) smoothing parameter
        (0 for no smoothing).

    fit_prior : boolean, optional (default=True)
        Whether to learn class prior probabilities or not.
        If false, a uniform prior will be used.

    class_prior : array-like, size (n_classes,), optional (default=None)
        Prior probabilities of the classes. If specified the priors are not
        adjusted according to the data.

    Attributes
    ----------
    class_log_prior_ : array, shape (n_classes, )
        Smoothed empirical log probability for each class.

    intercept_ : property
        Mirrors ``class_log_prior_`` for interpreting MultinomialNB
        as a linear model.

    feature_log_prob_ : array, shape (n_classes, n_features)
        Empirical log probability of features given a class, ``P(x_i|y)``.

    coef_ : property
        Mirrors ``feature_log_prob_`` for interpreting MultinomialNB
        as a linear model.

    class_count_ : array, shape (n_classes,)
        Number of samples encountered for each class during fitting. This
        value is weighted by the sample weight when provided.

    feature_count_ : array, shape (n_classes, n_features)
        Number of samples encountered for each (class, feature)
        during fitting. This value is weighted by the sample weight when
        provided.

    Examples
    --------
    >>> import numpy as np
    >>> X = np.random.randint(5, size=(6, 100))
    >>> y = np.array([1, 2, 3, 4, 5, 6])
    >>> from sklearn.naive_bayes import MultinomialNB
    >>> clf = MultinomialNB()
    >>> clf.fit(X, y)
    MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True)
    >>> print(clf.predict(X[2:3]))
    [3]

    Notes
    -----
    For the rationale behind the names `coef_` and `intercept_`, i.e.
    naive Bayes as a linear classifier, see J. Rennie et al. (2003),
    Tackling the poor assumptions of naive Bayes text classifiers, ICML.

    References
    ----------
    C.D. Manning, P. Raghavan and H. Schuetze (2008). Introduction to
    Information Retrieval. Cambridge University Press, pp. 234-265.
    http://nlp.stanford.edu/IR-book/html/htmledition/naive-bayes-text-classification-1.html
    """

    def __init__(self, alpha=1.0, fit_prior=True, class_prior=None):
        self.alpha = alpha
        self.fit_prior = fit_prior
        self.class_prior = class_prior

    def _count(self, X, Y):
        """Count and smooth feature occurrences."""
        if np.any((X.data if issparse(X) else X) < 0):
            raise ValueError("Input X must be non-negative")
        self.feature_count_ += safe_sparse_dot(Y.T, X)
        self.class_count_ += Y.sum(axis=0)

    def _update_feature_log_prob(self, alpha):
        """Apply smoothing to raw counts and recompute log probabilities"""
        smoothed_fc = self.feature_count_ + alpha
        smoothed_cc = smoothed_fc.sum(axis=1)
        self.feature_log_prob_ = (np.log(smoothed_fc)
                                  - np.log(smoothed_cc.reshape(-1, 1)))

    def _joint_log_likelihood(self, X):
        """Calculate the posterior log probability of the samples X"""
        check_is_fitted(self, "classes_")
        X = check_array(X, accept_sparse='csr')
        return safe_sparse_dot(X, self.feature_log_prob_.T) + self.class_log_prior_
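In the class body above, _count accumulates the per-class term counts from the document-term matrix produced by CountVectorizer, _update_feature_log_prob applies the additive (Laplace) smoothing alpha to those counts, and _joint_log_likelihood adds the class log-priors to pick the most probable newsgroup. The smoothed per-class term probabilities can be reproduced by hand. Below is a small self-contained check on toy counts (illustrative data, not from the article):

import numpy as np
from sklearn.naive_bayes import MultinomialNB

# Toy count matrix: 4 documents, 3 vocabulary terms, 2 classes.
X = np.array([[2, 1, 0],
              [3, 0, 0],
              [0, 1, 4],
              [0, 2, 3]])
y = np.array([0, 0, 1, 1])

clf = MultinomialNB(alpha=1.0).fit(X, y)

# Recompute P(x_i | y) with Laplace smoothing, as _update_feature_log_prob does:
# (count of term i in class y + alpha) / (total count in class y + alpha * n_features)
counts = np.array([X[y == c].sum(axis=0) for c in (0, 1)])
smoothed = counts + 1.0
manual_log_prob = np.log(smoothed) - np.log(smoothed.sum(axis=1, keepdims=True))

print(np.allclose(manual_log_prob, clf.feature_log_prob_))  # True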