Use Queue and thread to read Vid and write new video file
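The core idea from the links below: a dedicated thread decodes frames into a thread-safe queue, so a slow writer (or encoder) never blocks the capture/decode loop, and vice versa. Here is a minimal sketch of the pattern, assuming OpenCV 3+ and C++11 threads; the FrameQueue class and the MJPG fourcc are my own illustrative choices, not taken from the linked posts:

#include <opencv2/opencv.hpp>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// Thread-safe frame queue: the reader thread pushes, the writer pops.
class FrameQueue {
public:
    void push(const cv::Mat& frame) {
        std::lock_guard<std::mutex> lock(mtx_);
        frames_.push(frame.clone()); // clone: cap.read() reuses its buffer
        cv_.notify_one();
    }
    // Blocks until a frame arrives; returns false once the reader is done
    // and the queue has been drained.
    bool pop(cv::Mat& frame) {
        std::unique_lock<std::mutex> lock(mtx_);
        cv_.wait(lock, [this] { return !frames_.empty() || done_; });
        if (frames_.empty())
            return false;
        frame = frames_.front();
        frames_.pop();
        return true;
    }
    void finish() {
        std::lock_guard<std::mutex> lock(mtx_);
        done_ = true;
        cv_.notify_all();
    }
private:
    std::queue<cv::Mat> frames_;
    std::mutex mtx_;
    std::condition_variable cv_;
    bool done_ = false;
};

int main(int argc, char** argv)
{
    if (argc < 3)
        return -1; // usage: ./requeue <input video> <output video>

    cv::VideoCapture cap(argv[1]);
    if (!cap.isOpened())
        return -1;

    double fps = cap.get(cv::CAP_PROP_FPS);
    cv::Size size((int)cap.get(cv::CAP_PROP_FRAME_WIDTH),
                  (int)cap.get(cv::CAP_PROP_FRAME_HEIGHT));
    cv::VideoWriter writer(argv[2], cv::VideoWriter::fourcc('M','J','P','G'), fps, size);
    if (!writer.isOpened())
        return -1;

    FrameQueue queue;

    // Reader thread: decode frames as fast as possible and queue them.
    std::thread reader([&] {
        cv::Mat f;
        while (cap.read(f))
            queue.push(f);
        queue.finish();
    });

    // Main thread: drain the queue and write the new video file.
    cv::Mat f;
    while (queue.pop(f))
        writer.write(f);

    reader.join();
    return 0;
}

(For long videos you would want to bound the queue size so memory doesn't grow without limit; this sketch keeps it simple.)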


Similar Posts

https://www.pyimagesearch.com/2017/02/06/faster-video-file-fps-with-cv2-videocapture-and-opencv/

http://algomuse.com/c-c/developing-a-multithreaded-real-time-video-processing-application-in-opencv

 

Ref.

https://stackoverflow.com/questions/37140643/how-to-save-two-cameras-data-but-not-influence-their-picture-acquire-speed/37146523#37146523

 

Stream a mp4 video file to V4l2loopback device.02

Read the mp4 file and stream it to Virtual Video device using opencv, linux ioctl, and gst-launch

Load a virtual video (camera) device:

jiafei427@BIGBOB:~/Workplace/v4l2loopback$ modprobe v4l2loopback
jiafei427@BIGBOB:~/Workplace/v4l2loopback$ ls /dev/video0
/dev/video0

read mp4 video file and stream to virtual video device:

#include "opencv2/opencv.hpp"
#include "iostream"
#include "sstream"
#include "string"
#include "fcntl.h"
#include "unistd.h"
#include "sys/ioctl.h"
#include "linux/videodev2.h"

using namespace cv;
using namespace std;
int main(int argc, char** argv)
{
if(argc < 2)
return -1;

// Open specified Video file.
VideoCapture cap(argv[1]);
if (!cap.isOpened()) {
cerr  8) & 255, (fourcc >> 16) & 255, (fourcc >> 24) & 255);

TickMeter t;
Mat frame;
int interval = 1000 / fps;
int waitTime = interval;
int size = 0;
size_t written = 0;

//Show Info.
cout << "FPS: \t\t\t" << fps << endl;
cout << "# frame: \t\t" << x << "\n";
cout<<"Fourcc: \t\t" << fourcc_str <<endl;
cout<<"Time to play: \t\t"<< x/fps<<" s\n";

//Open V4l2 device
int v4l2lo;
if(argc == 3)
v4l2lo = open(argv[2], O_WRONLY);
else
v4l2lo = open("/dev/video0", O_WRONLY);

if(v4l2lo < 0) {
std::cout << "Error opening v4l2l device: " << strerror(errno);
exit(-2);
}
{
struct v4l2_format v;
int t;
v.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
t = ioctl(v4l2lo, VIDIOC_G_FMT, &v);
if( t < 0 ) {
exit(t);
}
v.fmt.pix.width = width;
v.fmt.pix.height = height;
v.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB24;
vidsendsiz = width * height * 3;
v.fmt.pix.sizeimage = vidsendsiz;
t = ioctl(v4l2lo, VIDIOC_S_FMT, &v);
if( t < 0 ) {
exit(t);
}
}

if (!cap.isOpened())
{
cout frame; // get a new frame

if (frame.empty()) {
cerr << "Finished Streaming.\n";
cap.set(CAP_PROP_POS_AVI_RATIO , 0);
}

//Do we really need this?
size = frame.total() * frame.elemSize();
if (size != vidsendsiz) {
std::cout << "size != vidsendsiz " << size << " / " << vidsendsiz << std::endl;
}
//write to v4l2 device
written = write(v4l2lo, frame.data, size);
if (written < 0) {
std::cout <= 0)
break;

t.reset();
}
cap.release();

return 0;
}
Compile it and launch
$ g++ mp4ToV4l2.cpp -o mp4ToV4l2 $(pkg-config opencv4 --libs --cflags) -std=c++11
$ ./mp4ToV4l2 imageCompilationVideo.mp4
Play video (camera) with gst-launch
$ gst-launch-1.0 v4l2src device=/dev/video0 ! xvimagesink

Ref.
https://www.learnopencv.com/read-write-and-display-a-video-using-opencv-cpp-python/

Stream a mp4 video file to V4l2loopback device.01

Read the mp4 file and stream it to Virtual Video device using v4l2loopback, ffmpeg, gst-launch

Load a virtual video (camera) device:

jiafei427@BIGBOB:~/Workplace/v4l2loopback$ modprobe v4l2loopback
jiafei427@BIGBOB:~/Workplace/v4l2loopback$ ls /dev/video0
/dev/video0

read mp4 video file and stream to virtual video device:


$ ffmpeg -re -i input.mp4 -map 0:v -f v4l2 /dev/video0

Play video (camera) with gst-launch
$ gst-launch-1.0 v4l2src device=/dev/video0 ! xvimagesink

We can also burn the local time into the output stream with the drawtext filter:

ffmpeg -re -hide_banner -i LIVE_INPUT \
-vf drawtext="fontsize=90:fontcolor=white: \
              fontfile=/Windows/Fonts/arial.ttf:text='%{localtime\:%X}'" \
-f LIVE_OUTPUT

Change FPS for the virtual video:


$ sudo apt install v4l2loopback-utils

$ v4l2loopback-ctl set-fps 10 /dev/video0

$ v4l2-ctl --all -d 0

FYI.

I have the latest version of ffmpeg installed on Ubuntu 16.04, and I can output video to a virtual device using different commands.
For example:
ffmpeg -f x11grab -framerate 15 -video_size 1280x720 -i :0.0 -f v4l2 /dev/video0  -> captures the entire screen and outputs it to /dev/video0 (my virtual camera)
ffmpeg -re -i input.mp4 -map 0:v -f v4l2 /dev/video0 -> the same, with a video file as input
ffmpeg -re -i /dev/video1 -map 0:v -f v4l2 /dev/video0 -> captures from /dev/video1 (which is a real web camera) and outputs it to the virtual camera
ffmpeg -f x11grab -r 12 -s 1920x1080 -i :0.0+0,0 -vcodec rawvideo -pix_fmt yuv420p -threads 0 -f v4l2 -vf 'scale=800:600' /dev/video22

FFmpeg Cheat Sheet for 360º video

Brought to you by Headjack
FFmpeg is one of the most powerful tools for video transcoding and manipulation, but it’s fairly complex and confusing to use. That’s why I decided to create this cheat sheet which shows some of the most often used commands.
Let’s start with some basics:

  • ffmpeg calls the FFmpeg application in the command line window; it could also be the full path to the FFmpeg binary or .exe file
  • -i is followed by the path to the input video
  • -c:v sets the video codec you want to use
    Options include libx264 for H.264, libx265 for H.265/HEVC, libvpx-vp9 for VP9, and copy if you want to preserve the video codec of the input video
  • -b:v sets the video bitrate; use a number followed by M to set the value in Mbit/s, or K to set it in Kbit/s
  • -c:a sets the audio codec you want to use
    Options include aac for use in combination with H.264 and H.265/HEVC, libvorbis for VP9, and copy if you want to preserve the audio codec of the input video
  • -b:a sets the audio bitrate of the output video
  • -vf sets so-called video filters, which let you apply transformations to a video, like scale for changing the resolution and setdar for setting the aspect ratio
  • -r sets the frame rate of the output video
  • -pix_fmt sets the pixel format of the output video; it is required for some input files, so it is recommended to always set it, and yuv420p is the safe choice for playback
  • -map allows you to specify streams inside a file
  • -ss seeks to the given timestamp in the format HH:MM:SS
  • -t sets the time or duration of the output
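
Putting a few of these flags together (a made-up example, just to show how they combine): seek 10 seconds in, take a 5-second piece, scale it to 1920x1080 at 30 fps, and encode as H.264 with AAC audio:

ffmpeg -ss 00:00:10 -i input.mp4 -t 00:00:05 -vf scale=1920:1080 -r 30 -pix_fmt yuv420p -c:v libx264 -b:v 10M -c:a aac -b:a 192K output.mp4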

Get video info

ffmpeg -i input.mp4

Transcode video

The simplest example to transcode an input video to H.264:

ffmpeg -i input.mp4 -c:v libx264 output.mp4

However, a more reasonable example, which includes setting an audio codec, setting the pixel format and both a video and audio bitrate, would be:

ffmpeg -i input.mp4 -c:v libx264 -b:v 30M -pix_fmt yuv420p -c:a aac -b:a 192K output.mp4

To transcode to H.265/HEVC instead, all we do is change libx264 to libx265:

ffmpeg -i input.mp4 -c:v libx265 -b:v 15M -pix_fmt yuv420p -c:a aac -b:a 192K output.mp4

iOS 11 and macOS High Sierra now support HEVC playback, but you have to make sure you use FFmpeg 3.4 or higher, and then add -tag:v hvc1 to your encode, or else you won’t be able to play the video on your Apple device.
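
For example, the same H.265/HEVC encode as above with the Apple-friendly tag added:

ffmpeg -i input.mp4 -c:v libx265 -b:v 15M -pix_fmt yuv420p -tag:v hvc1 -c:a aac -b:a 192K output.mp4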

For VP9 we have to change both the video and the audio codec, as well as the file extension of the output video. We also added -threads 16 to make sure FFmpeg uses multi-threaded rendering to speed things up significantly:

ffmpeg -i input.mp4 -threads 16 -c:v libvpx-vp9 -b:v 15M -pix_fmt yuv420p -c:a libvorbis -b:a 192K output.webm

You may have noticed we also halved the video bitrate from 30M for H.264 to 15M for H.265/HEVC and VP9. This is because the latter ones are advanced codecs which output the same visual quality video at about half the bitrate of H.264. Sweet huh! They do take way longer to encode though and are not as widely supported as H.264 yet.
Hardware accelerated encoding

We just saw how to encode to H.264 using the libx264 codec, but the latest Zeranoe FFmpeg builds for Windows now support hardware accelerated encoding on machines with Nvidia GPUs (even older ones), which significantly speeds up the encoding process. You use this powerful feature by changing the libx264 codec to h264_nvenc:

ffmpeg -i input.mp4 -c:v h264_nvenc output.mp4

To use hardware acceleration for H.265/HEVC, use hevc_nvenc instead:

ffmpeg -i input.mp4 -c:v hevc_nvenc output.mp4

If you get any error messages, either your FFmpeg version or your GPU does not support hardware acceleration, or you are using an unsupported -pix_fmt. There is unfortunately no hardware acceleration support in FFmpeg for the VP9 codec.

We noticed one strange artefact when using h264_nvenc and hevc_nvenc in combination with scaling. For example, when we scaled a 4096×4096 video down to 3840×2160 pixels, the height of the output video showed correctly as 2160 pixels, but the stored_height was 2176 pixels for some reason, which causes issues when trying to play it back on Android 360º video players.
Resize video to UHD@30fps

At the moment, the most common playback resolution for 360º video is the UHD resolution of 3840x2160 at 30 frames per second. The commands we have to add for this are:

-vf scale=3840:2160,setdar=16:9 -r 30

Which results in something like this:

ffmpeg -i input.mp4 -vf scale=3840:2160,setdar=16:9 -r 30 -c:v libx265 -b:v 15M -pix_fmt yuv420p -c:a aac -b:a 192K output.mp4

Add, remove, extract or replace audio

Add an audio stream to a video without re-encoding:

ffmpeg -i input.mp4 -i audio.aac -c copy output.mp4

However, in most cases you will have to re-encode the audio to fit your video container:

ffmpeg -i input.mp4 -i audio.wav -c:v copy -c:a aac output.mp4

Remove an audio stream from the input video using the -an command:

ffmpeg -i input.mp4 -c:v copy -an output.mp4

Extract an audio stream from the input video using the -vn command:

ffmpeg -i input.mp4 -vn -c:a copy output.aac

Replace an audio stream in a video using the -map command:

ffmpeg -i input.mp4 -i audio.wav -map 0:0 -map 1:0 -c:v copy -c:a aac output.mp4

You could add the -shortest option to force the output video to take the length of the shortest input file, in case the input audio file and the input video file are not exactly the same length.
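
For example, the replace-audio command from above with -shortest added:

ffmpeg -i input.mp4 -i audio.wav -map 0:0 -map 1:0 -c:v copy -c:a aac -shortest output.mp4
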
Sequence to video

Many high-end video pipelines work with DPX, EXR or TIFF sequences. To turn such a sequence into a video file, specify the sequence as the input with a filename pattern, use -framerate (an input option, so it goes before -i) to set the input frame rate, and -r to set the output frame rate:

ffmpeg -framerate 59.94 -i input_%04d.dpx -c:v libx264 -b:v 30M -r 29.97 -an output.mp4

Stereo to mono

We can use video filters to cut the bottom half of a stereoscopic top-bottom video to turn it into a monoscopic video:

ffmpeg -i input.mp4 -vf crop=h=in_h/2:y=0 -c:a copy output.mp4

Cut a piece out of a video

Use -ss to set the start time in the video and -t to set the duration of the segment you want to cut:

ffmpeg -ss 00:01:32 -i input.mp4 -c:v copy -c:a copy -t 00:00:10 output.mp4

The above command seeks to 1 minute 32 seconds into the video, and then outputs the next 10 seconds. As you can see, -ss is placed before the -i option, which results in much faster (but slightly less accurate) seeking.
Concatenate two videos

Concatenation is not possible with all video formats, but it works fine for MP4 files for example. There are a couple of ways to concatenate video files, but I will only describe the way that worked for me here, which requires you to create a txt file with the paths to the files you want to concatenate.

Only if the files you want to concatenate have the exact same encoding settings can you concatenate without re-encoding:

ffmpeg -f concat -i files.txt -c copy output.mp4

In the files.txt file, list the paths to the files you want to concatenate:

file '/path/to/video1.mp4'
file '/path/to/video2.mp4'
file '/path/to/video3.mp4'

You can add -safe 0 if you are using absolute paths. If you miss some frames after concatenation, keep in mind that the concatenation happens on I-frames, so if you don’t cut at exactly the right frame, FFmpeg will discard all frames up to the nearest I-frame before concatenating.
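
For example, when files.txt uses absolute paths:

ffmpeg -f concat -safe 0 -i files.txt -c copy output.mp4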


v4l2loopback related issues

Somehow, my Ubuntu wouldn’t load the virtual video device and gave me the following error messages:


jiafei427@BIGBOB:~/Workplace/v4l2loopback$ modprobe v4l2loopback
modprobe: ERROR: could not insert 'v4l2loopback': Operation not permitted
jiafei427@BIGBOB:~/Workplace/v4l2loopback$ sudo modprobe v4l2loopback
modprobe: ERROR: could not insert 'v4l2loopback': Exec format error
jiafei427@BIGBOB:~/Workplace/v4l2loopback$ sudo modprobe v4l2loopback.ko
modprobe: FATAL: Module v4l2loopback.ko not found in directory /lib/modules/4.15.0-47-generic

Damn, that just worked a few days ago. What happened?

Even the recompiled version still had no chance to load.

So I checked the kernel messages:


$ dmesg -c

[23642.624867] v4l2loopback: version magic '4.15.0-46-generic SMP mod_unload ' should be '4.15.0-47-generic SMP mod_unload '
[23651.526915] v4l2loopback: version magic '4.15.0-46-generic SMP mod_unload ' should be '4.15.0-47-generic SMP mod_unload '
[23702.027815] v4l2loopback: version magic '4.15.0-46-generic SMP mod_unload ' should be '4.15.0-47-generic SMP mod_unload '
[23721.214258] v4l2loopback: version magic '4.15.0-46-generic SMP mod_unload ' should be '4.15.0-47-generic SMP mod_unload '
[23911.275742] v4l2loopback: version magic '4.15.0-46-generic SMP mod_unload ' should be '4.15.0-47-generic SMP mod_unload '

Seems our Mr. modprobe is having trouble: the module on disk was built for kernel 4.15.0-46, but the running kernel is 4.15.0-47.

Then I googled it, and some guy on the web said he figured it out with the insmod command:


jiafei427@BIGBOB:~/Workplace/v4l2loopback$ sudo insmod v4l2loopback.ko
jiafei427@BIGBOB:~/Workplace/v4l2loopback$ modprobe v4l2-
v4l2-common v4l2-dv-timings v4l2-flash-led-class v4l2-fwnode v4l2-mem2mem v4l2-tpg
jiafei427@BIGBOB:~/Workplace/v4l2loopback$ modprobe v4l2loopback
jiafei427@BIGBOB:~/Workplace/v4l2loopback$ ls /dev/video0
/dev/video0

What the…

that just solved my issue;;;

(In hindsight this makes sense: insmod loads the freshly rebuilt v4l2loopback.ko straight from the current directory, while modprobe was still picking up the stale module installed under /lib/modules for the old kernel.)
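
For the record, the cleaner fix for a vermagic mismatch like this is to rebuild the module against the running kernel and reinstall it, so modprobe finds a matching copy (the usual recipe, assuming the v4l2loopback source tree from above):

$ sudo apt-get install linux-headers-$(uname -r)
$ make clean && make
$ sudo make install
$ sudo depmod -a
$ sudo modprobe v4l2loopback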

Install OpenCV4 in Ubuntu 16.04

Remove the pre-installed OpenCV, because OpenCV 4 will conflict with previous versions (the following output means you don’t have OpenCV installed on your PC):


webnautes@webnautes-pc:~$ pkg-config --modversion opencv
Package opencv was not found in the pkg-config search path.
Perhaps you should add the directory containing `opencv.pc'
to the PKG_CONFIG_PATH environment variable
No package 'opencv' found

If you’ve installed opencv 2.4, you will see following

webnautes@webnautes-pc:~$ pkg-config --modversion opencv
2.4.9.1

Remove opencv related stuff to move-on:

&amp;lt;/pre&amp;gt;&amp;lt;div dir="ltr"&amp;gt;$ sudo apt-get purge &amp;amp;nbsp;libopencv* python-opencv
$ sudo apt-get autoremove&amp;lt;/div&amp;gt;&amp;lt;pre&amp;gt;

Install the stuffs needed for opencv:

$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install build-essential cmake
$ sudo apt-get install pkg-config
$ sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev
$ sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libxvidcore-dev libx264-dev libxine2-dev
$ sudo apt-get install libv4l-dev v4l-utils
$ sudo apt-get install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev
$ sudo apt-get install libqt4-dev
$ sudo apt-get install mesa-utils libgl1-mesa-dri libqt4-opengl-dev
$ sudo apt-get install libatlas-base-dev gfortran libeigen3-dev
$ sudo apt-get install python2.7-dev python3-dev python-numpy python3-numpy

OR simpler version of installation list:

$ sudo apt-get install build-essential cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
$ sudo apt-get install python3.5-dev python3-numpy libtbb2 libtbb-dev
$ sudo apt-get install libjpeg-dev libpng-dev libtiff5-dev libjasper-dev libdc1394-22-dev libeigen3-dev libtheora-dev libvorbis-dev libxvidcore-dev libx264-dev sphinx-common libtbb-dev yasm libfaac-dev libopencore-amrnb-dev libopencore-amrwb-dev libopenexr-dev libgstreamer-plugins-base1.0-dev libavutil-dev libavfilter-dev libavresample-dev

 

Get Open-CV:


$ sudo -s

$ cd /opt

/opt$ git clone https://github.com/Itseez/opencv.git

/opt$ git clone https://github.com/Itseez/opencv_contrib.git

build and install opencv:

/opt$ cd opencv
/opt/opencv$ mkdir release
/opt/opencv$ cd release
/opt/opencv/release$ cmake -D BUILD_TIFF=ON -D WITH_CUDA=OFF -D ENABLE_AVX=OFF -D WITH_OPENGL=OFF -D WITH_OPENCL=OFF -D WITH_IPP=OFF -D WITH_TBB=ON -D BUILD_TBB=ON -D WITH_EIGEN=OFF -D WITH_V4L=OFF -D WITH_VTK=OFF -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_EXTRA_MODULES_PATH=/opt/opencv_contrib/modules -D OPENCV_GENERATE_PKGCONFIG=ON /opt/opencv/
/opt/opencv/release$ make -j4
/opt/opencv/release$ make install
/opt/opencv/release$ cp ./unix-install/opencv4.pc /usr/share/pkgconfig/
/opt/opencv/release$ ldconfig
/opt/opencv/release$ exit
$ cd ~

Now to check if OpenCV is installed on a machine, run the following command:

 
$ pkg-config --modversion opencv4
4.1.0

Create a C++ program. Follow the commands:

$ mkdir cpp_test
$ cd cpp_test
$ touch main.cpp

The above commands create a folder called cpp_test with a main.cpp file inside it.
Now place any .jpeg image inside the cpp_test folder.
Your cpp_test folder will then contain two files, as follows:
.
├── sample.jpeg
└── main.cpp

Now open the main.cpp and add the following code

#include <opencv2/opencv.hpp>
#include <iostream>

int main(int argc, char** argv)
{
    cv::Mat image;
    image = cv::imread("sample.jpeg", cv::IMREAD_COLOR);
    if (!image.data) {
        std::cout << "Could not open or find the image" << std::endl;
        return -1;
    }
    cv::namedWindow("Display window", cv::WINDOW_AUTOSIZE);
    cv::imshow("Display window", image);
    cv::waitKey(0);
    return 0;
}

Now compile your code with the following command and run it

$ g++ main.cpp -o output $(pkg-config opencv4 --libs --cflags) -std=c++11
$ ./output

 

Or, you can try some of the samples from opencv sample folder:

$ g++ -o facedetect /opt/opencv/samples/cpp/facedetect.cpp $(pkg-config opencv4 --libs --cflags) -std=c++11
$ ./facedetect --cascade="/usr/local/share/opencv4/haarcascades/haarcascade_frontalface_alt.xml" --nested-cascade="/usr/local/share/opencv4/haarcascades/haarcascade_eye_tree_eyeglasses.xml" --scale=1.3

Ref.:
http://www.codebind.com/cpp-tutorial/install-opencv-ubuntu-cpp/

https://webnautes.tistory.com/1030

https://webnautes.tistory.com/818


Protocol Buffer notes

see https://github.com/protocolbuffers/protobuf/blob/master/src/README.md:

Prerequisites

$ sudo apt-get install autoconf automake libtool curl make g++ unzip
$ sudo apt-get install libprotobuf-dev protobuf-compiler

Installation

  1. From the protobuf releases page on GitHub, download protobuf-all-[VERSION].tar.gz.
  2. Extract the contents and change into the directory.
  3. ./configure
  4. make
  5. make check
  6. sudo make install
  7. sudo ldconfig # refresh shared library cache.
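
For example, with the 3.6.1 release (matching the version check below; the download URL follows the GitHub releases naming scheme):

$ wget https://github.com/protocolbuffers/protobuf/releases/download/v3.6.1/protobuf-all-3.6.1.tar.gz
$ tar xzf protobuf-all-3.6.1.tar.gz
$ cd protobuf-3.6.1
$ ./configure
$ make
$ make check
$ sudo make install
$ sudo ldconfig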

Check if it works

$ protoc --version
libprotoc 3.6.1
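
And a quick smoke test that the compiler actually generates code (person.proto is just a throwaway example):

$ cat > person.proto <<'EOF'
syntax = "proto3";
message Person {
  string name = 1;
  int32 id = 2;
}
EOF
$ protoc --cpp_out=. person.proto
$ ls person.pb.*
person.pb.cc  person.pb.h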
—-to be continued—-

v4l2loopback related notes

I was told to work on an AR (Augmented Reality) project where we needed two processes reading video frames from a single camera, and it turned out we had to create a virtual video device.

I first installed the gstreamer packages on Ubuntu 18.04: https://gstreamer.freedesktop.org/documentation/installing/on-linux.html

$ git clone https://github.com/umlaeute/v4l2loopback.git
$ cd v4l2loopback
$ make
$ sudo make install

I got a warning message like the one in https://github.com/umlaeute/v4l2loopback/issues/139 on Ubuntu 18.04 LTS (but it didn’t prevent me from loading the v4l2loopback driver).

$ sudo depmod -a

I have just one webcam on my laptop (/dev/video0) and I wanted to get two streams from the same hardware. Based on https://github.com/umlaeute/v4l2loopback/blob/master/README.md:

$ modprobe v4l2loopback devices=2

There should now be /dev/video1 and /dev/video2 created assuming /dev/video0 was the only video device.
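
To verify (expected output, assuming the webcam was the only real device):

$ ls /dev/video*
/dev/video0  /dev/video1  /dev/video2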

Now I run the following in one terminal window

gst-launch-1.0 v4l2src device=/dev/video0 ! tee name=t ! queue ! v4l2sink device=/dev/video1 t. ! queue ! v4l2sink device=/dev/video2

I open 2 more tabs

In the first tab

gst-launch-1.0 v4l2src device=/dev/video1 ! videoconvert ! ximagesink

In the second tab

gst-launch-1.0 v4l2src device=/dev/video2 ! videoconvert ! ximagesink

Now one should see 2 video streams

UPDATE

Even if I use the same /dev/video1 device multiple times, I get that many streams. Example:

In the first tab

gst-launch-1.0 v4l2src device=/dev/video1 ! videoconvert ! ximagesink

In the second tab

gst-launch-1.0 v4l2src device=/dev/video1 ! videoconvert ! ximagesink

In the third tab

gst-launch-1.0 v4l2src device=/dev/video1 ! videoconvert ! ximagesink

gives me three streams.

 

To install other related utilities:

In a terminal:

sudo apt-get install v4l-utils
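
For example, to see what the loopback devices report:

$ v4l2-ctl --list-devices
$ v4l2-ctl -d /dev/video1 --all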


————————–TO BE CONTINUED——————————

 

Ref.

https://askubuntu.com/questions/165727/is-it-possible-for-two-processes-to-access-the-webcam-at-the-same-time

http://www.techytalk.info/webcam-settings-control-ubuntu-fedora-linux-operating-system-cli/

http://blog.naver.com/chandong83/221262714615