No HDMI support via the USB-C DisplayPort. While I don't expect
to go to conferences or even classes in the next several months,
I hope this can be fixed before I do. It's a potentially important
issue for me.
/usr/bin/wf-recorder -g '0,32 960x540' -t --muxer=v4l2 --codec=rawvideo --pixelformat=yuv420p --file=/dev/video10
This writes the recording to the loopback device (/dev/video10). You will note
I'm grabbing a 960×540 rectangle, which is the top of my screen
(1920×1080) minus the Waybar. I think I'll increase it to 960×720, as
the projector to which I connect the Raspberry has a 4:3 output.
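As a quick sanity check on that geometry: keeping the 960-pixel width, a 4:3 output needs a height of exactly 720, which is where the 960×720 figure comes from:

```shell
# Height of a 4:3 frame that is 960 pixels wide: 960 * 3 / 4
echo $((960 * 3 / 4))
# prints 720
```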
After this is sent to /dev/video10, I tell ffmpeg to send it via RTP to the fixed address of the Raspberry:
/usr/bin/ffmpeg -i /dev/video10 -an -f rtp -sdp_file /tmp/video.sdp rtp://10.0.0.100:7000/
/tmp/video.sdp is created on the laptop itself; this file describes the stream's
metadata so it can be used from the client side. I cheated and copied
it over to the Raspberry, doing an ugly hardcode along the way:
user@raspi:~ $ cat video.sdp
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 10.0.0.100
t=0 0
a=tool:libavformat 58.76.100
m=video 7000 RTP/AVP 96
b=AS:200
a=rtpmap:96 MP4V-ES/90000
a=fmtp:96 profile-level-id=1
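As a sketch of how the stream can be checked from the client side, ffplay (which I use here only as an alternative to the mplayer setup below) can also play from the copied SDP file; it requires the involved protocols to be whitelisted explicitly when the input is an SDP file:

```shell
# Play the RTP stream described by the copied SDP file. The whitelist
# is needed because ffplay refuses file-referenced network protocols
# by default.
ffplay -protocol_whitelist file,udp,rtp -i video.sdp
```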
On the Raspberry, I set it up to log in automatically as an unprivileged user, and dropped the following in my user's .xsession:
setterm -blank 0 -powersave off -powerdown 0
xset s off
xset -dpms
xset s noblank
mplayer -msglevel all=1 -fs /home/usuario/video.sdp
modprobe v4l2loopback exclusive_caps=1
The option exclusive_caps
configures the module into a mode where it
initially presents a write-only interface, but once a process has opened
a file handle and started writing, it presents a read-only (capture) interface
to subsequent processes. Assuming there are no other camera devices connected
at the time of loading the module, it will create /dev/video0.
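If other cameras might already occupy /dev/video0, the loopback device's number and name can be pinned explicitly. A sketch using the module's video_nr and card_label parameters (the label is my own choice here, not anything the setup above prescribes):

```shell
# Pin the loopback node to /dev/video10 and give it a recognisable
# name, so video-conferencing apps list it as "Virtual Camera".
sudo modprobe v4l2loopback exclusive_caps=1 video_nr=10 card_label="Virtual Camera"

# Verify the node was created:
v4l2-ctl --list-devices
```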
I experimented briefly with OBS Studio, the
very versatile and feature-rich streaming tool, which confirmed that I
could use filters on the source video to fix the aspect ratio, and emit
the result to the virtual device. I don't otherwise use OBS, though, so
I achieve the same result using ffmpeg:
ffmpeg -s 720x480 -i /dev/video1 -r 30 -f v4l2 -vcodec rawvideo \
-pix_fmt yuyv422 -s 720x405 /dev/video0
The source options are to select the source video mode I want. The
codec and pixel formats are to match what is being emitted (I determined
that using ffprobe
on the camera device). The resizing is triggered by
supplying a different size to the -s
parameter. I think that is equivalent
to explicitly selecting a "scale" filter, and there might be other filters
that could be used instead (to add pillar boxes for example).
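For instance, explicit scale and pad filters can preserve the aspect ratio with black bars instead of rescaling the whole frame. A sketch (untested against this particular camera) that keeps the device's native 720x480 frame and centers a 16:9 image in it:

```shell
# Scale to 16:9, then pad back to the device's 720x480 frame,
# centering the image with black bars. This letterboxes; padding
# the width instead of the height would give pillarboxes.
ffmpeg -s 720x480 -i /dev/video1 -r 30 -f v4l2 -vcodec rawvideo \
    -pix_fmt yuyv422 -vf "scale=720:404,pad=720:480:(ow-iw)/2:(oh-ih)/2" /dev/video0
```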
This worked just as well. In Google Meet, I select the Virtual Camera,
and Google Meet is presented with only one video mode, in the correct aspect
ratio, and no configurable options for it, so it can't misbehave.
Future
I'm planning to automate the loading (and unloading) of the module and starting
the ffmpeg
process in response to the real camera device being plugged or
unplugged, using systemd events and services. (I don't leave the camera plugged
in all the time due to some bad USB behaviour I've experienced if I do so.)
If I get that working, I will write a follow-up.
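A sketch of what that automation might look like: a udev rule that pulls in a systemd service when the camera appears. The vendor/product IDs (1234/5678), file names, and unit name are all placeholders of my own, and stopping the service on unplug would need the unit additionally bound to the device, which I leave out here:

```
# /etc/udev/rules.d/99-virtualcam.rules -- IDs below are placeholders
ACTION=="add", SUBSYSTEM=="usb", ATTRS{idVendor}=="1234", \
    ATTRS{idProduct}=="5678", TAG+="systemd", \
    ENV{SYSTEMD_WANTS}="virtualcam.service"

# /etc/systemd/system/virtualcam.service
[Unit]
Description=Feed the real camera into the v4l2loopback virtual camera

[Service]
ExecStartPre=/sbin/modprobe v4l2loopback exclusive_caps=1
ExecStart=/usr/bin/ffmpeg -s 720x480 -i /dev/video1 -r 30 -f v4l2 \
    -vcodec rawvideo -pix_fmt yuyv422 -s 720x405 /dev/video0
ExecStopPost=/sbin/modprobe -r v4l2loopback
```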
This is a kernel module I cooked in a couple of days. The idea is to expose a v4l device that gets its data from user space.
I had 2 use cases in mind:
1) Educational purposes for myself (I'm really a kernel noob).
2) Streaming movies over Skype, Google Talk, etc.
The idea could be good or complete rubbish, but hey, learning can only be done through stupid ideas!
The code is highly unstable. It shouldn't oops the kernel, but I'm not responsible if it does. I've been developing and testing it inside QEMU.
Clone it from the git repository via:
git clone git@gitorious.org:vcamera/vcamera.git
Here are a few missing bits off the top of my head:
* I'm not following the kernel coding style yet ;-)
* I'm sure my locking, unlocking and concurrency handling is flawed.
* The code is a bit fragile.
* It'd be nice to implement mmap support for the character device. This should eliminate data copies.
* Perhaps expose the character device all the time and generate "fake" frames when streaming starts? The problem now is that one has to be very fast in feeding data to the module, otherwise select() on the v4l device will time out.
* Many more...
If someone finds this idea useful, please drop me a line.
Comments, use cases, ideas and tips are really welcomed!
If I see a lot of interest, I might try to push it to the kernel tree one day ;-)
Update: It seems vloopback already exists and renders my code useless. I might still do something with it, as my idea seems a bit simpler, but whatever.
Update 2: There's also v4l2loopback and its fork.