
1 Main Idea

1.1 Data


1.2 Training and Using the Model

Training: build the model (the tree).
Testing: use the model (the tree).
Weka demo of ID3 (end-user mode):

  • Double-click weka.jar
  • Choose Explorer
  • Load weather.arff
  • Choose trees --> ID3
  • Build the tree and inspect the result

Building a decision tree:

  • Step 1. Choose an attribute.
  • Step 2. Split the data set into several subsets according to that attribute's values.
  • Step 3.1 For a subset whose decision-attribute value is unique, build a leaf node.
  • Step 3.2 For a subset whose decision-attribute values are not unique, call this procedure recursively.

Demo: using a txt file, split a data set according to the attributes of the decision tree.
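The splitting in Step 2 can be sketched in a few lines (a minimal sketch; `partition` is a helper name introduced here, and the rows follow the weather data's layout with the decision attribute in the last column):

```python
def partition(rows, col):
    """Group the rows by the value of the col-th attribute (Step 2)."""
    groups = {}
    for row in rows:
        groups.setdefault(row[col], []).append(row)
    return groups

# Four weather instances: [Outlook, Temperature, Humidity, Windy, Play].
rows = [['Sunny', 'Hot', 'High', 'FALSE', 'N'],
        ['Sunny', 'Cool', 'Normal', 'FALSE', 'P'],
        ['Overcast', 'Hot', 'High', 'FALSE', 'P'],
        ['Rain', 'Mild', 'High', 'FALSE', 'P']]

# Splitting on Outlook (column 0) gives one subset per attribute value.
# The 'Overcast' and 'Rain' subsets are pure, so they become leaves (Step 3.1);
# the 'Sunny' subset is mixed and would be split again recursively (Step 3.2).
for value, subset in partition(rows, 0).items():
    print(value, [r[-1] for r in subset])
```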

2 Information Entropy

Question: which attribute should be used to split the data?
The information entropy of the random variable $Y$ ($Y$ is the decision variable) is
$$H(Y) = E[I(y_i)] = \sum_{i=1}^n p(y_i)\log \frac{1}{p(y_i)} = - \sum_{i=1}^n p(y_i)\log p(y_i),$$
where $0 \log 0 = 0$.
The conditional entropy of $Y$ given $X$ ($X$ is the condition variable) is
$$H(Y \mid X) = \sum_{i=1}^m p(x_i)\, H(Y \mid X = x_i) = - \sum_{i, j} p(x_i, y_j) \log p(y_j \mid x_i).$$
The information gain that $X$ brings to $Y$ is $H(Y) - H(Y \mid X)$.
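As a concrete check of these definitions, the entropy and information gain can be computed directly (a minimal sketch; `entropy` and `conditional_entropy` are helper names introduced here, applied to the Outlook column of the 14-instance weather data used later):

```python
import math
from collections import Counter

def entropy(ys):
    """H(Y) = -sum_i p(y_i) log2 p(y_i), with 0 log 0 = 0."""
    n = len(ys)
    return -sum(c / n * math.log2(c / n) for c in Counter(ys).values())

def conditional_entropy(xs, ys):
    """H(Y|X) = sum_i p(x_i) H(Y | X = x_i)."""
    n = len(xs)
    groups = {}
    for x, y in zip(xs, ys):
        groups.setdefault(x, []).append(y)
    return sum(len(g) / n * entropy(g) for g in groups.values())

# Outlook column and play decision (P/N) of the weather data set.
outlook = ['Sunny', 'Sunny', 'Overcast', 'Rain', 'Rain', 'Rain', 'Overcast',
           'Sunny', 'Sunny', 'Rain', 'Sunny', 'Overcast', 'Overcast', 'Rain']
play = ['N', 'N', 'P', 'P', 'P', 'N', 'P', 'N', 'P', 'P', 'P', 'P', 'P', 'N']

gain = entropy(play) - conditional_entropy(outlook, play)
print(round(entropy(play), 3), round(gain, 3))  # 0.94 0.247
```

These are the classic numbers for this data set: $H(Y) \approx 0.940$ and a gain of about $0.247$ for Outlook, which is why ID3 chooses Outlook as the root split.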

3 Code Walkthrough

Version 1. Using sklearn (pure library calls)
The data set used here is numeric.

from sklearn.model_selection import train_test_split
import sklearn.datasets, sklearn.tree, sklearn.metrics

def sklearnDecisionTreeTest():
    # Step 1. Load the dataset
    tempDataset = sklearn.datasets.load_breast_cancer()
    x = tempDataset.data
    y = tempDataset.target

    # Split for training and testing
    x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.2)

    # Step 2. Build the classifier
    tempClassifier = sklearn.tree.DecisionTreeClassifier(criterion = 'entropy')
    tempClassifier.fit(x_train, y_train)

    # Step 3. Test
    tempAccuracy = sklearn.metrics.accuracy_score(y_test, tempClassifier.predict(x_test))
    tempRecall = sklearn.metrics.recall_score(y_test, tempClassifier.predict(x_test))

    # Step 4. Output
    print("accuracy = {}, recall = {}".format(tempAccuracy, tempRecall))

sklearnDecisionTreeTest()

Version 2. Reimplementing the key functions by hand

  1. Information entropy
import math
import operator

# Compute the Shannon entropy of the given data set (class label in the last column)
def calcShannonEnt(paraDataSet):
    numInstances = len(paraDataSet)
    labelCounts = {}  # empty dictionary: label -> count
    for featVec in paraDataSet:
        currentLabel = featVec[-1]
        if currentLabel not in labelCounts.keys():
            labelCounts[currentLabel] = 0
        labelCounts[currentLabel] += 1
    shannonEnt = 0.0
    for key in labelCounts:
        prob = float(labelCounts[key]) / numInstances
        shannonEnt -= prob * math.log(prob, 2)  # base 2
    return shannonEnt
  2. Splitting the data set
# dataSet is the data set, axis is the feature index, value is that feature's value.
# Extract the instances whose axis-th feature equals value; the feature itself is removed.
def splitDataSet(dataSet, axis, value):
    resultDataSet = []
    for featVec in dataSet:
        if featVec[axis] == value:
            # The current attribute is no longer needed
            reducedFeatVec = featVec[:axis]
            reducedFeatVec.extend(featVec[axis + 1:])
            resultDataSet.append(reducedFeatVec)
    return resultDataSet
  3. Choosing the best feature to split on
# Choose the feature with the largest information gain (the decision attribute is excluded)
def chooseBestFeatureToSplit(dataSet):
    numFeatures = len(dataSet[0]) - 1
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain = 0.0
    bestFeature = -1
    for i in range(numFeatures):
        # Collect the values of the i-th feature into a list
        featList = [example[i] for example in dataSet]
        # Remove duplicate values
        uniqueVals = set(featList)
        newEntropy = 0.0
        for value in uniqueVals:
            subDataSet = splitDataSet(dataSet, i, value)
            prob = len(subDataSet) / float(len(dataSet))
            newEntropy += prob * calcShannonEnt(subDataSet)
        infoGain = baseEntropy - newEntropy
        if infoGain > bestInfoGain:
            bestInfoGain = infoGain
            bestFeature = i
    return bestFeature
  4. Building a leaf node
# If the remaining data has no features left, form a leaf node with the majority class
def majorityCnt(classList):
    classCount = {}
    for vote in classList:
        if vote not in classCount.keys():
            classCount[vote] = 0
        classCount[vote] += 1
    sortedClassCount = sorted(classCount.items(), key = operator.itemgetter(1), reverse = True)
    return sortedClassCount[0][0]
  5. Creating the decision tree
# Create the decision tree recursively
def createTree(dataSet, paraFeatureName):
    featureName = paraFeatureName.copy()
    classList = [example[-1] for example in dataSet]
    # Already pure
    if classList.count(classList[0]) == len(classList):
        return classList[0]
    # No more attributes
    if len(dataSet[0]) == 1:
        return majorityCnt(classList)
    bestFeat = chooseBestFeatureToSplit(dataSet)
    bestFeatureName = featureName[bestFeat]
    myTree = {bestFeatureName: {}}
    del(featureName[bestFeat])
    featvalue = [example[bestFeat] for example in dataSet]
    uniqueVals = set(featvalue)
    for value in uniqueVals:
        subfeatureName = featureName[:]
        myTree[bestFeatureName][value] = createTree(splitDataSet(dataSet, bestFeat, value), subfeatureName)
    return myTree
  6. Classifying and returning the prediction accuracy
# Classify and return the accuracy
def id3Classify(paraTree, paraTestingSet, featureNames, classValues):
    tempCorrect = 0.0
    tempTotal = len(paraTestingSet)
    tempPrediction = classValues[0]
    for featureVector in paraTestingSet:
        print("Instance: ", featureVector)
        tempTree = paraTree
        while True:
            # Find the feature this node splits on
            for feature in featureNames:
                try:
                    tempTree[feature]
                    splitFeature = feature
                    break
                except:
                    pass  # not the split feature of this node
            attributeValue = featureVector[featureNames.index(splitFeature)]
            print(splitFeature, " = ", attributeValue)
            tempPrediction = tempTree[splitFeature][attributeValue]
            if tempPrediction in classValues:
                break
            else:
                tempTree = tempPrediction
        print("Prediction = ", tempPrediction)
        if featureVector[-1] == tempPrediction:
            tempCorrect += 1
    return tempCorrect / tempTotal
  7. Test code
def mfID3Test():
    # Step 1. Load the dataset
    weatherData = [['Sunny', 'Hot', 'High', 'FALSE', 'N'],
                   ['Sunny', 'Hot', 'High', 'TRUE', 'N'],
                   ['Overcast', 'Hot', 'High', 'FALSE', 'P'],
                   ['Rain', 'Mild', 'High', 'FALSE', 'P'],
                   ['Rain', 'Cool', 'Normal', 'FALSE', 'P'],
                   ['Rain', 'Cool', 'Normal', 'TRUE', 'N'],
                   ['Overcast', 'Cool', 'Normal', 'TRUE', 'P'],
                   ['Sunny', 'Mild', 'High', 'FALSE', 'N'],
                   ['Sunny', 'Cool', 'Normal', 'FALSE', 'P'],
                   ['Rain', 'Mild', 'Normal', 'FALSE', 'P'],
                   ['Sunny', 'Mild', 'Normal', 'TRUE', 'P'],
                   ['Overcast', 'Mild', 'High', 'TRUE', 'P'],
                   ['Overcast', 'Hot', 'Normal', 'FALSE', 'P'],
                   ['Rain', 'Mild', 'High', 'TRUE', 'N']]
    featureName = ['Outlook', 'Temperature', 'Humidity', 'Windy']
    classValues = ['P', 'N']

    # Step 2. Build the tree
    tempTree = createTree(weatherData, featureName)
    print(tempTree)

    # Step 3. Classify the training set itself
    print("Before classification, feature names = ", featureName)
    tempAccuracy = id3Classify(tempTree, weatherData, featureName, classValues)
    print("The accuracy of ID3 classifier is {}".format(tempAccuracy))

def main():
    sklearnDecisionTreeTest()
    mfID3Test()

main()

4 Discussion

The model matches human reasoning;
information gain is only a heuristic;
the splits are "parallel" to the individual attribute values (axis-aligned).

Other decision trees:

  • C4.5: handles numeric data
  • CART: uses the Gini index
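For comparison with entropy, CART's Gini index can be sketched as follows (a minimal sketch; `gini` is a helper name introduced here, applied to the 9 P / 5 N class distribution of the weather data):

```python
from collections import Counter

def gini(ys):
    """Gini(Y) = 1 - sum_i p(y_i)^2; 0 for a pure set, larger when mixed."""
    n = len(ys)
    return 1.0 - sum((c / n) ** 2 for c in Counter(ys).values())

print(round(gini(['P'] * 9 + ['N'] * 5), 3))  # 0.459
print(gini(['P'] * 4))                        # 0.0 for a pure subset
```

Like entropy, the Gini index is minimized by pure subsets, so CART's splitting criterion plays the same role that information gain plays in ID3, but without computing logarithms.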
