
If this article is original, please credit the original source when reposting.

I. CPU demo test

1. Create a new virtual environment

conda create -n course_torch_openvino python=3.8

2. Activate the environment

conda activate course_torch_openvino

3. Install the CPU version of PyTorch

pip install torch torchvision torchaudio  -i https://pypi.tuna.tsinghua.edu.cn/simple
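
To confirm the CPU build installed correctly, a quick check run inside the activated environment (cuda.is_available() should print False for a CPU-only build):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"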

4. Install YOLOv5 dependencies

This walkthrough uses the YOLOv5 v5.0 release, downloaded from GitHub. Install its dependencies:

pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple

5. Run the demo

python demo.py

Full code:

import cv2
import numpy as np
import torch
import time

# model = torch.hub.load('./yolov5', 'custom', path='./weights/ppe_yolo_n.pt', source='local')  # local repo
model = torch.hub.load('./yolov5', 'custom', 'weights/poker_n.pt', source='local')
model.conf = 0.4

cap = cv2.VideoCapture(0)

fps_time = time.time()

while True:
    ret, frame = cap.read()
    frame = cv2.flip(frame, 1)
    img_cvt = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # Inference
    results = model(img_cvt)
    result_np = results.pandas().xyxy[0].to_numpy()

    # Draw each detection box and its class label
    for box in result_np:
        l, t, r, b = box[:4].astype('int')
        cv2.rectangle(frame, (l, t), (r, b), (0, 255, 0), 5)
        cv2.putText(frame, str(box[-1]), (l, t - 20), cv2.FONT_ITALIC, 1, (0, 255, 0), 2)

    # FPS counter
    now = time.time()
    fps_text = 1 / (now - fps_time)
    fps_time = now
    cv2.putText(frame, str(round(fps_text, 2)), (50, 50), cv2.FONT_ITALIC, 1, (0, 255, 0), 2)

    cv2.imshow('demo', frame)

    if cv2.waitKey(10) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

The demo runs normally.

II. Converting YOLOv5 to OpenVINO

1. Install ONNX

pip install onnx==1.11.0

2. Modify export.py

In export.py, change line 121 to:

opset_version=10
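
For context, this is roughly where the change lands. The exact surrounding arguments vary between YOLOv5 releases, so treat this as a sketch of the v5.0 export call rather than an exact diff:

# models/export.py -- ONNX export call (around line 121 in the v5.0 tag)
torch.onnx.export(model, img, f, verbose=False,
                  opset_version=10,  # lowered from the default so the Model Optimizer converts cleanly
                  input_names=['images'])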

3. Export to ONNX

Take the trained best.pt file and convert it to an ONNX file.

The conversion command is:

python export.py --weights ../weights/best.pt --img 640 --batch 1
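
Before moving on, it is worth sanity-checking the exported file. A minimal sketch using the onnx package installed above (the path is assumed to match the export command):

import onnx

# Load the exported model and run the structural checker
model = onnx.load('../weights/best.onnx')
onnx.checker.check_model(model)

# Print the start of the graph to confirm the expected 1x3x640x640 input
print(onnx.helper.printable_graph(model.graph)[:500])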

4. Convert to OpenVINO IR

Before converting, install the required packages:

pip install openvino-dev[onnx]==2021.4.0 
pip install openvino==2021.4.0

Verify the installation by running mo -h.

Next, convert the model to IR format with the following command:

mo --input_model weights/best.onnx  --model_name weights/ir_model   -s 255 --reverse_input_channels --output Conv_294,Conv_245,Conv_196

This generates three files (.xml, .bin, and .mapping); ir_model.xml is the one used below.
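
Note that the node names passed to --output (Conv_294, Conv_245, Conv_196) are specific to this particular export; for your own model, inspect the ONNX graph to find the three Conv layers feeding the detection heads. A sketch using the onnx API (opening the file in Netron works just as well):

import onnx

model = onnx.load('weights/best.onnx')

# List the final Conv nodes; the three detection heads are typically
# the last Conv layers before the graph outputs.
conv_nodes = [node.name for node in model.graph.node if node.op_type == 'Conv']
print(conv_nodes[-3:])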

5. Run

python yolov5_demo.py -i cam -m weights/ir_model.xml   -d CPU

Code:


import logging
import os
import sys
from argparse import ArgumentParser, SUPPRESS
from math import exp as exp
from time import time,sleep
import numpy as np
import cv2
from openvino.inference_engine import IENetwork, IECore

logging.basicConfig(format="[ %(levelname)s ] %(message)s", level=logging.INFO, stream=sys.stdout)
log = logging.getLogger()


def build_argparser():
    parser = ArgumentParser(add_help=False)
    args = parser.add_argument_group('Options')
    args.add_argument('-h', '--help', action='help', default=SUPPRESS, help='Show this help message and exit.')
    args.add_argument("-m", "--model", help="Required. Path to an .xml file with a trained model.",
                      required=True, type=str)
    args.add_argument("-i", "--input", help="Required. Path to an image/video file. (Specify 'cam' to work with "
                                            "camera)", required=True, type=str)
    args.add_argument("-l", "--cpu_extension",
                      help="Optional. Required for CPU custom layers. Absolute path to a shared library with "
                           "the kernels implementations.", type=str, default=None)
    args.add_argument("-d", "--device",
                      help="Optional. Specify the target device to infer on; CPU, GPU, FPGA, HDDL or MYRIAD is"
                           " acceptable. The sample will look for a suitable plugin for device specified. "
                           "Default value is CPU", default="CPU", type=str)
    args.add_argument("-t", "--prob_threshold", help="Optional. Probability threshold for detections filtering",
                      default=0.5, type=float)
    args.add_argument("-iout", "--iou_threshold", help="Optional. Intersection over union threshold for overlapping "
                                                       "detections filtering", default=0.4, type=float)
    return parser


class YoloParams:
    # ------------------------------------------- Extracting layer parameters ------------------------------------------
    # Magic numbers are copied from yolo samples
    def __init__(self, side):
        self.num = 3
        self.coords = 4
        self.classes = 80
        self.side = side
        self.anchors = [10.0, 13.0, 16.0, 30.0, 33.0, 23.0, 30.0, 61.0, 62.0, 45.0, 59.0, 119.0, 116.0, 90.0,
                        156.0, 198.0, 373.0, 326.0]


def letterbox(img, size=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True):
    # Resize image to a 32-pixel-multiple rectangle https://github.com/ultralytics/yolov3/issues/232
    shape = img.shape[:2]  # current shape [height, width]
    w, h = size

    # Scale ratio (new / old)
    r = min(h / shape[0], w / shape[1])
    if not scaleup:  # only scale down, do not scale up (for better test mAP)
        r = min(r, 1.0)

    # Compute padding
    ratio = r, r  # width, height ratios
    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
    dw, dh = w - new_unpad[0], h - new_unpad[1]  # wh padding
    if auto:  # minimum rectangle
        dw, dh = np.mod(dw, 64), np.mod(dh, 64)  # wh padding
    elif scaleFill:  # stretch
        dw, dh = 0.0, 0.0
        new_unpad = (w, h)
        ratio = w / shape[1], h / shape[0]  # width, height ratios

    dw /= 2  # divide padding into 2 sides
    dh /= 2

    if shape[::-1] != new_unpad:  # resize
        img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # add border

    top2, bottom2, left2, right2 = 0, 0, 0, 0
    if img.shape[0] != h:
        top2 = (h - img.shape[0]) // 2
        bottom2 = top2
        img = cv2.copyMakeBorder(img, top2, bottom2, left2, right2, cv2.BORDER_CONSTANT, value=color)  # add border
    elif img.shape[1] != w:
        left2 = (w - img.shape[1]) // 2
        right2 = left2
        img = cv2.copyMakeBorder(img, top2, bottom2, left2, right2, cv2.BORDER_CONSTANT, value=color)  # add border
    return img


def scale_bbox(x, y, height, width, class_id, confidence, im_h, im_w, resized_im_h=640, resized_im_w=640):
    gain = min(resized_im_w / im_w, resized_im_h / im_h)  # gain = old / new
    pad = (resized_im_w - im_w * gain) / 2, (resized_im_h - im_h * gain) / 2  # wh padding
    x = int((x - pad[0]) / gain)
    y = int((y - pad[1]) / gain)

    w = int(width / gain)
    h = int(height / gain)

    xmin = max(0, int(x - w / 2))
    ymin = max(0, int(y - h / 2))
    xmax = min(im_w, int(xmin + w))
    ymax = min(im_h, int(ymin + h))
    # Method item() used here to convert NumPy types to native types for compatibility with functions, which don't
    # support Numpy types (e.g., cv2.rectangle doesn't support int64 in color parameter)
    return dict(xmin=xmin, xmax=xmax, ymin=ymin, ymax=ymax, class_id=class_id.item(), confidence=confidence.item())


def entry_index(side, coord, classes, location, entry):
    side_power_2 = side ** 2
    n = location // side_power_2
    loc = location % side_power_2
    return int(side_power_2 * (n * (coord + classes + 1) + entry) + loc)


def parse_yolo_region(blob, resized_image_shape, original_im_shape, params, threshold):
    # ------------------------------------------ Validating output parameters ------------------------------------------
    out_blob_n, out_blob_c, out_blob_h, out_blob_w = blob.shape
    predictions = 1.0 / (1.0 + np.exp(-blob))

    # ------------------------------------------ Extracting layer parameters -------------------------------------------
    orig_im_h, orig_im_w = original_im_shape
    resized_image_h, resized_image_w = resized_image_shape
    objects = list()
    side_square = params.side * params.side

    # ------------------------------------------- Parsing YOLO Region output -------------------------------------------
    bbox_size = int(out_blob_c / params.num)  # 4 + 1 + num_classes

    index = 0
    for row, col, n in np.ndindex(params.side, params.side, params.num):
        bbox = predictions[0, n * bbox_size:(n + 1) * bbox_size, row, col]

        x, y, width, height, object_probability = bbox[:5]
        class_probabilities = bbox[5:]
        if object_probability < threshold:
            continue
        x = (2 * x - 0.5 + col) * (resized_image_w / out_blob_w)
        y = (2 * y - 0.5 + row) * (resized_image_h / out_blob_h)
        # Select the anchor set by the stride of this output layer
        if int(resized_image_w / out_blob_w) == 8 and int(resized_image_h / out_blob_h) == 8:  # 80x80
            idx = 0
        elif int(resized_image_w / out_blob_w) == 16 and int(resized_image_h / out_blob_h) == 16:  # 40x40
            idx = 1
        elif int(resized_image_w / out_blob_w) == 32 and int(resized_image_h / out_blob_h) == 32:  # 20x20
            idx = 2

        width = (2 * width) ** 2 * params.anchors[idx * 6 + 2 * n]
        height = (2 * height) ** 2 * params.anchors[idx * 6 + 2 * n + 1]
        class_id = np.argmax(class_probabilities)
        confidence = object_probability

        objects.append(scale_bbox(x=x, y=y, height=height, width=width, class_id=class_id, confidence=confidence,
                                  im_h=orig_im_h, im_w=orig_im_w, resized_im_h=resized_image_h,
                                  resized_im_w=resized_image_w))

        if index > 30:
            break
        index += 1

    return objects


def intersection_over_union(box_1, box_2):
    width_of_overlap_area = min(box_1['xmax'], box_2['xmax']) - max(box_1['xmin'], box_2['xmin'])
    height_of_overlap_area = min(box_1['ymax'], box_2['ymax']) - max(box_1['ymin'], box_2['ymin'])
    if width_of_overlap_area < 0 or height_of_overlap_area < 0:
        area_of_overlap = 0
    else:
        area_of_overlap = width_of_overlap_area * height_of_overlap_area
    box_1_area = (box_1['ymax'] - box_1['ymin']) * (box_1['xmax'] - box_1['xmin'])
    box_2_area = (box_2['ymax'] - box_2['ymin']) * (box_2['xmax'] - box_2['xmin'])
    area_of_union = box_1_area + box_2_area - area_of_overlap
    if area_of_union == 0:
        return 0
    return area_of_overlap / area_of_union


def main():
    args = build_argparser().parse_args()

    # ------------- 1. Plugin initialization for specified device and load extensions library if specified -------------
    ie = IECore()
    if args.cpu_extension and 'CPU' in args.device:
        ie.add_extension(args.cpu_extension, "CPU")

    # -------------------- 2. Reading the IR generated by the Model Optimizer (.xml and .bin files) --------------------
    model = args.model
    net = ie.read_network(model=model)

    # ---------------------------------------------- 4. Preparing inputs -----------------------------------------------
    input_blob = next(iter(net.input_info))

    # Default batch_size is 1
    net.batch_size = 1

    # Read and pre-process input images
    n, c, h, w = net.input_info[input_blob].input_data.shape

    # labels_map = [x.strip() for x in f]
    labels_map = ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat',
                  'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog',
                  'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella',
                  'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite',
                  'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle',
                  'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange',
                  'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', 'potted plant',
                  'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
                  'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors',
                  'teddy bear', 'hair drier', 'toothbrush']

    input_stream = 0 if args.input == "cam" else args.input

    is_async_mode = True
    cap = cv2.VideoCapture(input_stream)

    number_input_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    number_input_frames = 1 if number_input_frames != -1 and number_input_frames < 0 else number_input_frames

    wait_key_code = 1

    # Number of frames in picture is 1 and this will be read in cycle. Sync mode is default value for this case
    if number_input_frames != 1:
        ret, frame = cap.read()
    else:
        is_async_mode = False
        wait_key_code = 0

    # ----------------------------------------- 5. Loading model to the plugin -----------------------------------------
    exec_net = ie.load_network(network=net, num_requests=2, device_name=args.device)

    cur_request_id = 0
    next_request_id = 1
    render_time = 0
    parsing_time = 0

    # ----------------------------------------------- 6. Doing inference -----------------------------------------------
    initial_w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    initial_h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    origin_im_size = (initial_h, initial_w)
    while cap.isOpened():
        # Here is the first asynchronous point: in the Async mode, we capture frame to populate the NEXT infer request
        # in the regular mode, we capture frame to the CURRENT infer request
        if is_async_mode:
            ret, next_frame = cap.read()
        else:
            ret, frame = cap.read()
        if not ret:
            break

        if is_async_mode:
            request_id = next_request_id
            in_frame = letterbox(frame, (w, h))
        else:
            request_id = cur_request_id
            in_frame = letterbox(frame, (w, h))
        in_frame0 = in_frame
        # resize input_frame to network size
        in_frame = in_frame.transpose((2, 0, 1))  # Change data layout from HWC to CHW
        in_frame = in_frame.reshape((n, c, h, w))

        # Start inference
        start_time = time()
        exec_net.start_async(request_id=request_id, inputs={input_blob: in_frame})

        # Collecting object detection results
        objects = list()
        if exec_net.requests[cur_request_id].wait(-1) == 0:
            output = exec_net.requests[cur_request_id].output_blobs
            start_time = time()

            for layer_name, out_blob in output.items():
                layer_params = YoloParams(side=out_blob.buffer.shape[2])
                objects += parse_yolo_region(out_blob.buffer, in_frame.shape[2:],
                                             frame.shape[:-1], layer_params,
                                             args.prob_threshold)
            parsing_time = time() - start_time

        # Filtering overlapping boxes with respect to the --iou_threshold CLI parameter
        objects = sorted(objects, key=lambda obj: obj['confidence'], reverse=True)
        for i in range(len(objects)):
            if objects[i]['confidence'] == 0:
                continue
            for j in range(i + 1, len(objects)):
                if intersection_over_union(objects[i], objects[j]) > args.iou_threshold:
                    objects[j]['confidence'] = 0

        # Drawing objects with respect to the --prob_threshold CLI parameter
        objects = [obj for obj in objects if obj['confidence'] >= args.prob_threshold]

        for obj in objects:
            # Validation bbox of detected object
            if obj['xmax'] > origin_im_size[1] or obj['ymax'] > origin_im_size[0] or obj['xmin'] < 0 \
                    or obj['ymin'] < 0:
                continue
            color = (0, 255, 0)
            det_label = labels_map[obj['class_id']] if labels_map and len(labels_map) >= obj['class_id'] else \
                str(obj['class_id'])

            cv2.rectangle(frame, (obj['xmin'], obj['ymin']), (obj['xmax'], obj['ymax']), color, 2)
            cv2.putText(frame,
                        "#" + det_label + ' ' + str(round(obj['confidence'] * 100, 1)) + ' %',
                        (obj['xmin'], obj['ymin'] - 7), cv2.FONT_ITALIC, 1, color, 2)

        # Draw performance stats over frame
        async_mode_message = "Async mode: ON" if is_async_mode else "Async mode: OFF"
        cv2.putText(frame, async_mode_message, (10, int(origin_im_size[0] - 20)), cv2.FONT_ITALIC, 1,
                    (10, 10, 200), 2)
        fps_time = time() - start_time
        if fps_time != 0:
            fps = 1 / fps_time
            cv2.putText(frame, 'fps:' + str(round(fps, 2)), (50, 50), cv2.FONT_ITALIC, 1, (0, 255, 0), 2)

        cv2.imshow("DetectionResults", frame)

        if is_async_mode:
            cur_request_id, next_request_id = next_request_id, cur_request_id
            frame = next_frame

        key = cv2.waitKey(wait_key_code)

        # ESC key
        if key == 27:
            break
        # Tab key
        if key == 9:
            exec_net.requests[cur_request_id].wait()
            is_async_mode = not is_async_mode
            log.info("Switched to {} mode".format("async" if is_async_mode else "sync"))

    cv2.destroyAllWindows()


if __name__ == '__main__':
    sys.exit(main() or 0)

III. Summary

With OpenVINO acceleration on a CPU-only machine, the frame rate rose from roughly 20 FPS to over 50 FPS, which is a decent speedup. The only caveat is that my own trained model does not detect all that well.

OpenVINO also performs reasonably well on embedded boards such as the Raspberry Pi.
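
On such a board with an Intel Neural Compute Stick 2 attached, the same script should be able to target the MYRIAD plugin (untested here, but MYRIAD is one of the devices the script's -d option accepts), mirroring the CPU invocation above:

python yolov5_demo.py -i cam -m weights/ir_model.xml -d MYRIAD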

If anything here infringes on your rights, or if you need the complete code, please contact the blogger promptly.


