2022-05-30
1 Overview of ffmpeg
By Baidu Baike's definition, ffmpeg is an open-source audio/video processing library: a computer program that can record and convert digital audio and video, and turn them into streams. It includes the highly capable audio/video codec library libavcodec. ffmpeg's capture features are powerful: it can grab frames from a capture card or USB camera, record the screen, and push video over RTP to RTSP-capable streaming servers, which makes it suitable for live-streaming applications. The ffmpeg project consists of the following core components:
ffmpeg: command-line tool for format conversion, decoding, live transcoding from TV capture cards, etc.
ffserver: an HTTP multimedia live-broadcast streaming server
ffplay: a simple player that parses and decodes media using the ffmpeg libraries
libavformat: muxing and demuxing of the various audio/video container formats
libavcodec: encoding and decoding of the various audio and video formats
libavutil: common utility functions
libswscale: video scaling and color-space/pixel-format conversion
libpostproc: post-processing effects
Some common ffmpeg commands are shown below:
# Compose an image sequence into a video
ffmpeg -f image2 -i img%d.jpg myvideo.mpg
# Split a video into an image sequence
ffmpeg -i myvideo.mpg image%d.jpg
# Extract the audio from a video and save it as demo.mp3
ffmpeg -i source_video.avi -vn -ar 44100 -ac 2 -ab 192k -f mp3 demo.mp3
# Convert between video formats
ffmpeg -i video_src.mpg video_dest.avi
# Convert .avi to an animated GIF
ffmpeg -i video_dest.avi anime.gif
# Mux audio and video together
ffmpeg -i demo.wav -i video_src.avi video_dest.mpg
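Since everything later in this post is driven from Python, it can be handy to run these commands programmatically. Below is a minimal sketch using Python's subprocess module to run the audio-extraction command from the list above; the file names are the same placeholders used there:

import subprocess

# Extract the audio track of source_video.avi as a 192 kb/s MP3,
# mirroring the third command above; file names are placeholders.
cmd = [
    "ffmpeg", "-y",            # -y: overwrite the output if it exists
    "-i", "source_video.avi",  # input file
    "-vn",                     # drop the video stream
    "-ar", "44100",            # 44.1 kHz sample rate
    "-ac", "2",                # stereo
    "-ab", "192k",             # audio bitrate
    "-f", "mp3", "demo.mp3",   # force the MP3 muxer, output file
]
subprocess.run(cmd, check=True)  # raises CalledProcessError on failure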
2 Overview of OpenCV
By Baidu Baike's definition, OpenCV is an open-source, cross-platform computer vision and machine learning library released under the Apache 2.0 license. It runs on Linux, Windows, Android, and Mac OS, provides bindings for languages such as Python, Ruby, and MATLAB, and implements many general-purpose algorithms in image processing and computer vision. OpenCV is written in C++, is lightweight and efficient, and is geared toward real-time vision applications. Computer vision projects today routinely rely on OpenCV, especially through its Python bindings.
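As a quick taste of the Python bindings used later in this post, here is a minimal sketch (demo.mp4 is a placeholder file name) that reads one frame from a video, converts it to grayscale, and writes it back out:

import cv2

# Grab a single frame from a video file and save a grayscale copy.
cap = cv2.VideoCapture("demo.mp4")   # placeholder file name
ok, frame = cap.read()               # ok is False if no frame could be read
if ok:
    small = cv2.resize(frame, None, fx=0.5, fy=0.5)        # half size
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)         # OpenCV frames are BGR
    cv2.imwrite("frame_gray.jpg", gray)
cap.release()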
3 Installing the ffmpeg + OpenCV environment in the Docker image
An earlier post, 《华为Atlas 500小站Docker镜像制作》 (Building a Docker Image for the Huawei Atlas 500 Edge Station), covered how to build the Atlas 500 Docker image; the installation below builds on that image. First, log in to the Atlas 500 AI edge station as admin and switch to developer mode (develop). Before starting a container, list the available Docker images:
Euler:~ # docker images
The output lists the available images; the workload-image:v1.0 image built earlier should appear in the list. Next, create a container from this image:
docker run \
    --device=/dev/davinci0 \
    --device=/dev/davinci_manager \
    --device=/dev/hisi_hdc \
    --device /dev/devmm_svm \
    -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
    -v /home/data/miniD/driver/lib64:/home/data/miniD/driver/lib64 \
    -v /run/board_cfg.ini:/run/board_cfg.ini \
    -it workload-image:v1.0 bash
If the command succeeds, you are dropped into the container's interactive shell as root, where ordinary Linux commands can be run. In this example the container ID is e9c222179267; later start/stop operations will reference this ID. The container does not yet have a Python environment, so install Python and its dependencies first:
apt-get update
apt-get install python3.7
apt-get install python3-pip
apt-get install libtiff5-dev libjpeg8-dev zlib1g-dev libfreetype6-dev \
    liblcms2-dev libwebp-dev tcl8.6-dev tk8.6-dev python-tk
Once this finishes, the system exposes a python3.6 command; yes, python3.6 rather than 3.7, as that is what this image's package sources end up providing. Continue the installation with:
python3.6 -m pip install --upgrade pip --user \
    -i https://mirrors.huaweicloud.com/repository/pypi/simple
python3.6 -m pip install Cython numpy pillow tornado==5.1.0 protobuf \
    --user -i https://mirrors.huaweicloud.com/repository/pypi/simple
Install the vim editor:
apt-get install vim
Install the Python OpenCV bindings:
apt-get install python3-opencv
Next, install ffmpeg. It can be compiled and installed from source by following the guide below; the process is time-consuming, roughly 1 to 2 hours. Reference: https://gitee.com/ascend/samples/blob/master/cplusplus/environment/opencv_install/README_300_CN.md
Following that document, the compiled libraries are packaged into ascend_ddk.tar.gz, which brings the ffmpeg and OpenCV libraries installed in the development environment over to the runtime environment for use at run time. The packaging steps are:
mkdir $HOME/ascend_ddk
scp -r HwHiAiUser@X.X.X.X:/home/HwHiAiUser/ascend_ddk/x86 $HOME/ascend_ddk
scp -r HwHiAiUser@X.X.X.X:/usr/lib/x86_64-linux-gnu/lib* $HOME/ascend_ddk/x86/lib
The dependency files are listed below:
PyAV.tar.gz
ascend_ddk.tar.gz (the compiled libraries, packaged as above)
The PyAV source can be cloned from the following repository:
git clone https://gitee.com/mirrors/PyAV.git
Copy the files from the Atlas 500 station into the running container:
docker cp /opt/mount/docker05/ascend_ddk.tar.gz e9c222179267:/root/
docker cp /opt/mount/docker05/PyAV.tar.gz e9c222179267:/root/
Once copied, the files can be listed under /root inside the container.
Unpack and install them as follows:
tar zxvf ascend_ddk.tar.gz
pip3 install Cython
apt-get install pkg-config libxcb-shm0-dev libxcb-xfixes0-dev
# Let OpenCV (and PyAV) find the bundled ffmpeg via pkg-config
cp /root/ascend_ddk/x86/lib/pkgconfig/* /usr/share/pkgconfig

tar zxvf PyAV.tar.gz
cd PyAV
python3.6 setup.py build --ffmpeg-dir=/root/ascend_ddk/x86
python3.6 setup.py install
If the PyAV build fails with the error below, try the following fix:
Error while building PyAV:
  Could not find libavdevice with pkg-config.
  Could not find libavfilter with pkg-config.
Fix:
  Step 1. cp /root/ascend_ddk/x86/lib/pkgconfig/* /usr/share/pkgconfig/
  Step 2. export PKG_CONFIG_PATH=/usr/share/pkgconfig/
Building PyAV takes a little while; be patient. On success, the output ends with something like:
Installing pyav script to /usr/local/bin
Installed /usr/local/lib/python3.6/dist-packages/av-8.0.4.dev0-py3.6-linux-aarch64.egg
Processing dependencies for av==8.0.4.dev0
Finished processing dependencies for av==8.0.4.dev0
After a successful install, a few environment settings are still needed. First, tell the dynamic linker where the new libraries live:
vim /etc/ld.so.conf.d/ffmpeg.conf
Add the following line:
/root/ascend_ddk/x86/lib
Run the following command for the change to take effect:
ldconfig
Continue with the environment variables:
vim /etc/profile
Append the following at the end of the file:
export PATH=$PATH:/root/ascend_ddk/x86/bin
export PYTHONPATH=/home/data/miniD/driver/lib64:$PYTHONPATH
export LD_LIBRARY_PATH=/home/data/miniD/driver/lib64:$LD_LIBRARY_PATH
Run the following command for the change to take effect:
source /etc/profile
Next, verify that everything works:
cd ~
python3.6
# in the Python interpreter:
import av
import cv2
When the setup is correct, both imports return without error.
If import av fails with the error below, check that the environment variables above were configured correctly:
ImportError: libavcodec.so.58: cannot open shared object file: No such file or directory
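Beyond the bare imports, a short decode test exercises the whole ffmpeg, PyAV, and OpenCV chain. The sketch below assumes some small local video at /root/test.mp4 (a placeholder path):

import av
import cv2

# Decode the first frame with PyAV, then hand it to OpenCV.
container = av.open("/root/test.mp4")          # placeholder path
for frame in container.decode(video=0):
    img = frame.to_ndarray(format="bgr24")     # H x W x 3 uint8, BGR order
    print("decoded frame:", img.shape, img.dtype)
    cv2.imwrite("/root/first_frame.jpg", img)  # OpenCV writes the BGR array
    break                                      # one frame is enough to verify
container.close()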
If the imports (and the decode test above) run without errors, then ffmpeg, OpenCV, and PyAV are all installed correctly. Exit the current Docker container, then restart and re-attach to it:
root@e9c222179267:~/dist# exit
exit
Euler:~ # docker ps -a
CONTAINER ID   IMAGE                 COMMAND
e9c222179267   workload-image:v1.0   "bash"
Euler:~ # docker start e9c222179267
e9c222179267
Euler:~ # docker attach e9c222179267
root@e9c222179267:~#
Inside the container, check the current NPU status:
root@e9c222179267:~# npu-smi info
The command prints the NPU device status.
Below is a modified version of the official sample: it pulls a video stream from a camera over the RTSP protocol and classifies frames with a resnet50.om model to determine whether a dog appears in the video and, if so, which breed. The code is as follows:
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import cv2
import datetime
import argparse
import numpy as np
import acl
import os
from PIL import Image
from constant import ACL_MEM_MALLOC_HUGE_FIRST, \
    ACL_MEMCPY_HOST_TO_DEVICE, ACL_MEMCPY_DEVICE_TO_HOST, \
    ACL_ERROR_NONE, IMG_EXT, NPY_FLOAT32

buffer_method = {
    "in": acl.mdl.get_input_size_by_index,
    "out": acl.mdl.get_output_size_by_index
}


def check_ret(message, ret):
    if ret != ACL_ERROR_NONE:
        raise Exception("{} failed ret={}".format(message, ret))


class Net(object):
    def __init__(self, device_id, model_path):
        self.device_id = device_id      # int
        self.model_path = model_path    # string
        self.model_id = None            # pointer
        self.context = None             # pointer
        self.input_data = []
        self.output_data = []
        self.model_desc = None          # pointer when using
        self.load_input_dataset = None
        self.load_output_dataset = None
        self.init_resource()

    def __del__(self):
        print("Releasing resources stage:")
        ret = acl.mdl.unload(self.model_id)
        check_ret("acl.mdl.unload", ret)
        if self.model_desc:
            acl.mdl.destroy_desc(self.model_desc)
            self.model_desc = None
        while self.input_data:
            item = self.input_data.pop()
            ret = acl.rt.free(item["buffer"])
            check_ret("acl.rt.free", ret)
        while self.output_data:
            item = self.output_data.pop()
            ret = acl.rt.free(item["buffer"])
            check_ret("acl.rt.free", ret)
        if self.context:
            ret = acl.rt.destroy_context(self.context)
            check_ret("acl.rt.destroy_context", ret)
            self.context = None
        ret = acl.rt.reset_device(self.device_id)
        check_ret("acl.rt.reset_device", ret)
        ret = acl.finalize()
        check_ret("acl.finalize", ret)
        print('Resources released successfully.')

    def init_resource(self):
        print("init resource stage:")
        ret = acl.init()
        check_ret("acl.init", ret)
        ret = acl.rt.set_device(self.device_id)
        check_ret("acl.rt.set_device", ret)
        self.context, ret = acl.rt.create_context(self.device_id)
        check_ret("acl.rt.create_context", ret)

        # load_model
        self.model_id, ret = acl.mdl.load_from_file(self.model_path)
        check_ret("acl.mdl.load_from_file", ret)
        print("model_id:{}".format(self.model_id))

        self.model_desc = acl.mdl.create_desc()
        self._get_model_info()
        print("init resource success")

    def _get_model_info(self):
        ret = acl.mdl.get_desc(self.model_desc, self.model_id)
        check_ret("acl.mdl.get_desc", ret)
        input_size = acl.mdl.get_num_inputs(self.model_desc)
        output_size = acl.mdl.get_num_outputs(self.model_desc)
        self._gen_data_buffer(input_size, des="in")
        self._gen_data_buffer(output_size, des="out")

    def _gen_data_buffer(self, size, des):
        func = buffer_method[des]
        for i in range(size):
            # check temp_buffer dtype
            temp_buffer_size = func(self.model_desc, i)
            temp_buffer, ret = acl.rt.malloc(temp_buffer_size,
                                             ACL_MEM_MALLOC_HUGE_FIRST)
            check_ret("acl.rt.malloc", ret)

            if des == "in":
                self.input_data.append({"buffer": temp_buffer,
                                        "size": temp_buffer_size})
            elif des == "out":
                self.output_data.append({"buffer": temp_buffer,
                                         "size": temp_buffer_size})

    def _data_interaction(self, dataset, policy=ACL_MEMCPY_HOST_TO_DEVICE):
        temp_data_buffer = self.input_data \
            if policy == ACL_MEMCPY_HOST_TO_DEVICE \
            else self.output_data
        if len(dataset) == 0 and policy == ACL_MEMCPY_DEVICE_TO_HOST:
            for item in self.output_data:
                temp, ret = acl.rt.malloc_host(item["size"])
                if ret != 0:
                    raise Exception("can't malloc_host ret={}".format(ret))
                dataset.append({"size": item["size"], "buffer": temp})

        for i, item in enumerate(temp_data_buffer):
            if policy == ACL_MEMCPY_HOST_TO_DEVICE:
                ptr = acl.util.numpy_to_ptr(dataset[i])
                ret = acl.rt.memcpy(item["buffer"], item["size"],
                                    ptr, item["size"], policy)
                check_ret("acl.rt.memcpy", ret)
            else:
                ptr = dataset[i]["buffer"]
                ret = acl.rt.memcpy(ptr, item["size"],
                                    item["buffer"], item["size"], policy)
                check_ret("acl.rt.memcpy", ret)

    def _gen_dataset(self, type_str="input"):
        dataset = acl.mdl.create_dataset()
        temp_dataset = None
        if type_str == "in":
            self.load_input_dataset = dataset
            temp_dataset = self.input_data
        else:
            self.load_output_dataset = dataset
            temp_dataset = self.output_data

        for item in temp_dataset:
            data = acl.create_data_buffer(item["buffer"], item["size"])
            _, ret = acl.mdl.add_dataset_buffer(dataset, data)
            if ret != ACL_ERROR_NONE:
                ret = acl.destroy_data_buffer(data)
                check_ret("acl.destroy_data_buffer", ret)

    def _data_from_host_to_device(self, images):
        print("data interaction from host to device")
        # copy images to device
        self._data_interaction(images, ACL_MEMCPY_HOST_TO_DEVICE)
        # load input data into model
        self._gen_dataset("in")
        # load output data into model
        self._gen_dataset("out")
        print("data interaction from host to device success")

    def _data_from_device_to_host(self):
        print("data interaction from device to host")
        res = []
        # copy device to host
        self._data_interaction(res, ACL_MEMCPY_DEVICE_TO_HOST)
        print("data interaction from device to host success")
        result = self.get_result(res)
        self._print_result(result)

    def run(self, images):
        self._data_from_host_to_device(images)
        self.forward()
        self._data_from_device_to_host()

    def forward(self):
        print('execute stage:')
        ret = acl.mdl.execute(self.model_id,
                              self.load_input_dataset,
                              self.load_output_dataset)
        check_ret("acl.mdl.execute", ret)
        self._destroy_databuffer()
        print('execute stage success')

    def _print_result(self, result):
        vals = np.array(result).flatten()
        top_k = vals.argsort()[-1:-6:-1]
        print("======== top5 inference results: =============")
        for j in top_k:
            if vals[j] >= 0.5:
                print(">>>>>>>>>>>>>>>>> find dog >>>>>>>>>>>>>")
            print("[%d]: %f" % (j, vals[j]))

    def _destroy_databuffer(self):
        for dataset in [self.load_input_dataset, self.load_output_dataset]:
            if not dataset:
                continue
            number = acl.mdl.get_dataset_num_buffers(dataset)
            for i in range(number):
                data_buf = acl.mdl.get_dataset_buffer(dataset, i)
                if data_buf:
                    ret = acl.destroy_data_buffer(data_buf)
                    check_ret("acl.destroy_data_buffer", ret)
            ret = acl.mdl.destroy_dataset(dataset)
            check_ret("acl.mdl.destroy_dataset", ret)

    def get_result(self, output_data):
        result = []
        dims, ret = acl.mdl.get_cur_output_dims(self.model_desc, 0)
        check_ret("acl.mdl.get_cur_output_dims", ret)
        out_dim = dims['dims']
        for temp in output_data:
            ptr = temp["buffer"]
            # convert the raw buffer to float32 data
            data = acl.util.ptr_to_numpy(ptr, tuple(out_dim), NPY_FLOAT32)
            result.append(data)
        return result


def transfer_pic(input_path):
    input_path = os.path.abspath(input_path)
    with Image.open(input_path) as image_file:
        image_file = image_file.resize((256, 256))
        img = np.array(image_file)
    height = img.shape[0]
    width = img.shape[1]
    # center-crop the middle region of the image
    h_off = (height - 224) // 2
    w_off = (width - 224) // 2
    crop_img = img[h_off:height - h_off, w_off:width - w_off, :]
    # RGB to BGR: reverse the channel order
    img = crop_img[:, :, ::-1]
    shape = img.shape
    img = img.astype("float16")
    img[:, :, 0] -= 104
    img[:, :, 1] -= 117
    img[:, :, 2] -= 123
    img = img.reshape([1] + list(shape))
    img = img.transpose([0, 3, 1, 2])
    result = np.frombuffer(img.tobytes(), np.float16)
    return result


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--device', type=int, default=0)
    parser.add_argument('--model_path', type=str,
                        default="./model/resnet50.om")
    parser.add_argument('--images_path', type=str, default="./data")
    args = parser.parse_args()
    print("Using device id:{}\nmodel path:{}\nimages path:{}"
          .format(args.device, args.model_path, args.images_path))

    net = Net(args.device, args.model_path)
    cap = cv2.VideoCapture('rtsp://xxxxxx:554/Streaming/Channels/101')
    print(cap)
    ret, frame = cap.read()
    while ret:
        ret, frame = cap.read()
        cv2.imwrite(args.images_path + '/frame.jpg', frame)
        images_list = [os.path.join(args.images_path, img)
                       for img in os.listdir(args.images_path)
                       if os.path.splitext(img)[1] in IMG_EXT]
        for image in images_list:
            print("images:{}".format(image))
            img = transfer_pic(image)
            net.run([img])
        if cv2.waitKey(1) & 0xFF == ord('q'):
            print("===break=========")
            break
    print("cv2.destroyAllWindows()")
    cv2.destroyAllWindows()
    cap.release()
    print("*****run finish******")
Start the sample with the following command:
root@e9c222179267:~/dist# python3 ./src/main.py
Using device id:0
model path:./model/resnet50.om
images path:./data
init resource stage:
model_id:1
init resource success
For the sample project's other files and the model, refer to the official sample "Image classification based on the Caffe ResNet-50 network (synchronous inference)" at:
https://gitee.com/ascend/samples/tree/master/python/level2_simple_inference/1_classification/resnet50_imagenet_classification
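As a side note, the sample above pulls frames with cv2.VideoCapture; since PyAV is installed as well, the same RTSP stream could be read through PyAV instead, which exposes the demuxer options directly. This is a hedged sketch, not part of the official sample; the URL is the same placeholder, and rtsp_transport/stimeout are standard ffmpeg RTSP options:

import av
import cv2

# Open the RTSP stream with PyAV instead of cv2.VideoCapture.
# The URL is a placeholder; the options request TCP transport and a
# 5-second socket timeout (stimeout is given in microseconds).
container = av.open(
    "rtsp://xxxxxx:554/Streaming/Channels/101",
    options={"rtsp_transport": "tcp", "stimeout": "5000000"},
)
for frame in container.decode(video=0):
    img = frame.to_ndarray(format="bgr24")   # H x W x 3, uint8, BGR order
    # Persist the frame where the sample's inference loop expects it,
    # then hand off to transfer_pic() / net.run() as in main() above.
    cv2.imwrite("./data/frame.jpg", img)
    break  # single frame for illustration; loop continuously in real use
container.close()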