Wyze Cam V2 Support?

Hello everyone!

I am wondering if there has been any progress on this, or if you can help me. I have tried the instructions from both @SrgntBallistic and @atom6 and get stuck on both. When I try to run 'ffserver &' I get
"[1] 12580
pi@octopi:~ $ -bash: ffserver: command not found";

when I try the VLC approach, I run the './http_stream.sh' script and get the same problem: a blank line and nothing happens. I have updated the firmware on the Wyze cam and can get a stream in VLC; I just can't get it set up and running in OctoPi. I'm fairly novice at programming things like this (meaning I follow instructions fairly well but couldn't make anything up on my own). Thanks in advance for any input!

I've been closely reading this topic, as I just got OctoPrint installed today on an RPi4 and I have a Wyze V2 ready to go for this! Have there been any updates on getting this optimized?

I have been playing around tonight trying to reduce the load of streaming from my Wyze to OctoPrint on my Pi 3+, and have reduced the load by a lot.

Using the official RTSP firmware from Wyze.


cvlc -R rtsp://<rtsp user>:<rtsp password>@<rtsp ip addr>/live --sout-x264-preset fast --sout='#transcode{acodec=none,vcodec=MJPG,vb=1000,fps=0.5}:standard{mux=mpjpeg,access=http{mime=multipart/x-mixed-replace; boundary=--7b3cc56e5f51db803f790dad720ed50a},dst=:8899/videostream.cgi}' --sout-keep

Now in the OctoPrint settings, under the Webcam tab, for the Stream URL I put in: http://<octoprint ip address>:8899/videostream.cgi
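If you want to sanity-check the stream before touching OctoPrint, something along these lines should show the multipart/x-mixed-replace content type if cvlc is actually serving MJPEG (curl just reads the response headers for a few seconds and exits; adjust the port if you changed it):

curl -s --max-time 3 -D - -o /dev/null http://127.0.0.1:8899/videostream.cgi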


I made a Docker container for it. https://hub.docker.com/r/eroji/rtsp2mjpg

UPDATE: I updated the base image to Alpine. Now the image is 21MB compressed and uses 100MB less memory when running. I noticed the original Ubuntu image with its flavor of ffmpeg was leaking memory over time; the stream would still die after about a day of run time. Hopefully the Alpine one works better. Looks very steady and flat so far.
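For anyone who wants to try the container without docker-compose, a plain docker run along these lines should do it. Note that RTSP_URL and the exposed port here are only my guesses at how the image is configured; check the Docker Hub page for the actual environment variables and ports it expects:

docker run -d --name rtsp2mjpg --restart unless-stopped \
  -e RTSP_URL="rtsp://user:pass@camera-ip/live" \
  -p 8090:8090 \
  eroji/rtsp2mjpg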



I updated the GitHub with docker-compose. You can pull it down yourself and run it as a service in docker.

Just run docker-compose up -d after cloning the repo. This will include an nginx proxy that listens on port 80, so you can get to the live stream at http://<ip>/live.mjpg and a still snapshot at http://<ip>/still.jpg.
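If you've never touched Docker before, the whole sequence on a Debian/Ubuntu box boils down to roughly this (the clone URL is just my guess at the repo path, so substitute the real one; the get.docker.com script is the quick-and-dirty install route):

curl -fsSL https://get.docker.com | sh            # install docker
sudo usermod -aG docker $USER                     # optional: run docker without sudo (log out/in after)
sudo apt install docker-compose                   # or: pip3 install docker-compose
git clone https://github.com/eroji/rtsp2mjpg.git  # placeholder URL, use the actual repo
cd rtsp2mjpg
docker-compose up -d                              # starts the transcoder and the nginx proxy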

I played with the ffmpeg flags quite a bit and got it relatively stable. However, every so often the ffmpeg process would still die with an error like this:

[rtsp @ 0x7ffbe78c3460] CSeq 11 expected, 0 received.

My ffmpeg command is currently:

/usr/bin/ffmpeg -hide_banner -loglevel info -rtsp_transport tcp -nostats -use_wallclock_as_timestamps 1 -i rtsp://user:pass@mycamera/live -async 1 -vsync 1 http://127.0.0.1:8090/feed.ffm

If anyone here is an ffmpeg expert, feel free to comment on how I can improve it to make it completely stable!
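For context, the ffmpeg command above only pushes decoded frames to ffserver; the actual MJPEG encoding parameters live in ffserver.conf. The config in the repo will differ, but the relevant part looks roughly like this, and VideoSize/VideoFrameRate are the knobs to tune if the load is too high (the values below are illustrative only):

HTTPPort 8090
<Feed feed.ffm>
  File /tmp/feed.ffm
  FileMaxSize 5M
</Feed>
<Stream live.mjpg>
  Feed feed.ffm
  Format mpjpeg
  VideoFrameRate 5
  VideoSize 1280x720
  VideoBitRate 2048
  NoAudio
</Stream>
<Stream still.jpg>
  Feed feed.ffm
  Format jpeg
  VideoFrameRate 1
  VideoSize 1280x720
  NoAudio
</Stream>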

I've never used docker. This is meant to run on a Raspberry Pi? Could you point me in the direction of some stuff to get up to speed? Thanks

I might give this a try. Are you kicking it off in a bashrc file or manually running it?

If you use docker-compose, it will create a persistent instance of the docker containers. Now, I didn't test it on a Pi, so the base image may need to be changed to support the ARM platform. Let me know if that's the case and I will update it to include an ARM alternative.

In theory you could use the WebcamStreamer plugin to do this by modifying the advanced setup section to match your ffmpeg command, I suppose.

In either case, if you do that or try to run your docker image (once made ARM compatible) on an OctoPi instance, you'd have to disable the webcamd service in order to release the lock on the video device. And if your docker image is using port 80, I'm not sure how that would play out, since haproxy tunnels port 80 by default to OctoPrint's port 5000.
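For reference, freeing the camera device on a stock OctoPi image should just be a matter of stopping the bundled webcam service (haproxy is a separate question if your container also wants port 80):

sudo systemctl stop webcamd
sudo systemctl disable webcamd    # keep it from coming back on reboot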

Looks like the docker-compose.yaml file is missing from the repo...

Give me a few, I'll push it. I'm trying to test-build the armv7 image and commit that as well. No idea how well it will work though, as I've never tried to make an image for the Pi before. Even if it works, I doubt a Pi 3B+ would be able to handle the load. Maybe a 4B could...

This is the one that webcam streamer uses, in case it helps...it would just be missing ffserver.

Everything is committed, and I built and pushed the armhf image to Docker Hub as well. I tried it with the default ffmpeg/ffserver flags and config and it's just way too heavy for a Pi 3B+. Some tuning will be required; I'll let others have at it. I also made an armhf version of docker-compose, but seeing as the main container is already so taxing, I got rid of the nginx proxy.

So far, the best I can do with ffmpeg flags is the following. This is on an x64 system with enough resources. However, the ffmpeg process still dies occasionally.

/usr/bin/ffmpeg -hide_banner -loglevel info -rtsp_transport tcp -nostats -use_wallclock_as_timestamps 1 -i rtsp://user:pass@mycamera/live -async 1 -vsync 1 http://127.0.0.1:8090/feed.ffm

Interesting, I looked at his Dockerfile and it does appear to be something I can base mine on. I'll give it a try.

PS: Building an image on a Pi 3 is absurdly slow.

The eroji/rtsp2mjpg:armhf image is now updated with ffmpeg/ffserver compiled from source (version 3.4.7, the last version with ffserver built in) with the OMX driver. I also updated all the related files, including docker-compose.yaml. I gave it a try running on my Pi 3B+ and it went a few more seconds before ffmpeg went south. So feel free to tweak the ffmpeg flags until it can sustain the stream. You may also need to change the ffserver.conf, in which case just create one in the local directory and mount it in by adding the mount to the docker-compose file.
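The override would look something like this in docker-compose.yaml; the service name and the path inside the container are guesses on my part, so check the repo's compose file and Dockerfile for the real ones:

services:
  rtsp2mjpg:
    volumes:
      - ./ffserver.conf:/etc/ffserver.conf    # container-side path is a placeholder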

UPDATE: I think I found the root cause of ffmpeg dying. None of my other wifi devices do this...

64 bytes from 10.64.20.21: icmp_seq=54 ttl=63 time=1.29 ms
64 bytes from 10.64.20.21: icmp_seq=55 ttl=63 time=1.17 ms
64 bytes from 10.64.20.21: icmp_seq=56 ttl=63 time=321 ms
64 bytes from 10.64.20.21: icmp_seq=57 ttl=63 time=308 ms
64 bytes from 10.64.20.21: icmp_seq=58 ttl=63 time=1.15 ms
64 bytes from 10.64.20.21: icmp_seq=59 ttl=63 time=3.95 ms
64 bytes from 10.64.20.21: icmp_seq=60 ttl=63 time=63.3 ms
64 bytes from 10.64.20.21: icmp_seq=61 ttl=63 time=195 ms
64 bytes from 10.64.20.21: icmp_seq=62 ttl=63 time=5.48 ms
64 bytes from 10.64.20.21: icmp_seq=63 ttl=63 time=9.67 ms
64 bytes from 10.64.20.21: icmp_seq=64 ttl=63 time=1.25 ms
64 bytes from 10.64.20.21: icmp_seq=65 ttl=63 time=4.00 ms
64 bytes from 10.64.20.21: icmp_seq=66 ttl=63 time=3.96 ms
64 bytes from 10.64.20.21: icmp_seq=67 ttl=63 time=5.30 ms
64 bytes from 10.64.20.21: icmp_seq=68 ttl=63 time=1.69 ms
64 bytes from 10.64.20.21: icmp_seq=69 ttl=63 time=1.22 ms
64 bytes from 10.64.20.21: icmp_seq=70 ttl=63 time=119 ms
64 bytes from 10.64.20.21: icmp_seq=71 ttl=63 time=1.35 ms

UPDATE #2: After digging around for two days, I finally tracked the issue down to my PoE switch that all my wireless APs are connected to. The uplink cable on the switch had been throwing errors for probably more than a year, which was causing random packet loss. It wasn't until I actually tried to do something latency-sensitive like real-time RTSP streaming over wifi that I finally caught the problem. Anyway, this stream is solid now. No more random dying and restarting.

Does this make the Wyze cameras usable with Octolapse? I'm a little in the dark on that part. I figure it should, since Octolapse is just grabbing whatever the webcam stream is kicking out, but I would appreciate a slightly more informed second opinion.

I've got this nearly working using two different methods. Each currently has some advantages and drawbacks, but neither is working the way I'd hope just yet.

Here's a WIP guide I'm building for running multiple OctoPrint instances on a single server, which includes the ffmpeg restreaming method I started out with.

This method has two drawbacks:

  • The CPU power required is pretty high unless you get HEVC encoding working, which I haven't managed yet
  • The OctoPrint instance will only load the first frame of the MJPEG stream; manually refreshing is required to get any further frames

The other method I'm testing now is using VLC.

I'm still working on the systemd service units for this, but after installing the vlc and vlc-bin packages you can run this command in a screen session to test it out:

cvlc -R rtsp://<camera_username>:<camera_pass>@<camera_hostname>/live --sout-x264-preset fast --sout="#transcode{acodec=none,vcodec=MJPG,vb=10000,fps=5}:standard{mux=mpjpeg,access=http{mime=multipart/x-mixed-replace; boundary=--7b3cc56e5f51db803f790dad720ed50a},dst=:8990/videostream.mjpeg}" --sout-keep

and then putting this into the OctoPrint config:

http://<octoprint_hostname>:8990/videostream.mjpeg

Good news is that this stream does work as expected.
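The systemd unit I have in mind for this is pretty simple, something along these lines (the user, the unit name, and the escaping of the --sout string may need adjusting for your setup):

# /etc/systemd/system/wyze-restream.service
[Unit]
Description=Wyze RTSP to MJPEG restream via VLC
After=network-online.target

[Service]
User=pi
ExecStart=/usr/bin/cvlc -R rtsp://<camera_username>:<camera_pass>@<camera_hostname>/live --sout-x264-preset fast --sout='#transcode{acodec=none,vcodec=MJPG,vb=10000,fps=5}:standard{mux=mpjpeg,access=http{mime=multipart/x-mixed-replace; boundary=--7b3cc56e5f51db803f790dad720ed50a},dst=:8990/videostream.mjpeg}' --sout-keep
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

After saving it, sudo systemctl daemon-reload && sudo systemctl enable --now wyze-restream should keep the stream running across reboots.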

The issue I'm having with this method is that I cannot for the life of me get nginx's reverse proxy to work. Here's the config I set up for it:

location /webcam_printer0/ {
    proxy_pass http://127.0.0.1:8990/videostream.mjpeg;
}

When I try to access the new URL, nginx reports a 404.
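One thing that might be worth trying, in case the 404 is a path-matching issue rather than something on the VLC side: drop the file name from proxy_pass and turn off proxy buffering (nginx buffers responses by default, which tends to break never-ending MJPEG streams anyway). This is only a guess at the cause:

location /webcam_printer0/ {
    proxy_pass http://127.0.0.1:8990/;    # then request /webcam_printer0/videostream.mjpeg
    proxy_buffering off;
}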

Once I can get either of these methods working properly, I plan to build a plugin for Octoprint to make it easy and accessible to everyone.

I'm also hoping to incorporate Intel's QuickSync (HEVC) and have been poking around with the concepts in this gist.


A plugin would be great. I just got a Wyze V2 and thought the integration would be much easier. I've been mindlessly copying and pasting the Linux lines and following the instructions, but my lack of experience is really shining through. Excited to see what comes.


I followed your route, but decided to run ffserver & ffmpeg on my beefier server, not on my OctoPi (an old Pi Zero). But, in order to do this and not run ffmpeg and stream stuff all the time, I connected my camera to the same Tasmota outlet that my printer is on, and set up a monit script to determine when the camera was on and only stream while it was. Here's my setup.

ffserver.conf, wyze.conf, start_ffmpeg.sh, stop_ffmpeg.sh are all here: https://gist.github.com/bdwilson

I start ffserver on boot via an /etc/rc.local script; it always runs. Streaming only runs while the camera is powered on (i.e. when the printer is powered on). I use these URLs to my "beefy" server (Ubuntu 18.04):

mjpeg: http://server_ip:8090/camera.mjpeg
still: http://server_ip:8090/static-camera.jpg

ffmpeg takes about 18% of an old quad-core Intel i5, but it only uses that while I'm printing. The Tasmota plugin combined with the shutdownprinter plugin takes care of turning things off when printing is done, and monit knows to stop streaming when the camera is off. The Pi Zero is still good enough to do the timelapse work.
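The monit piece is basically a host check against the camera's IP that kicks off the start/stop scripts when the camera appears or disappears from the network. Roughly like this -- the address and script paths are placeholders, the real scripts are in the gist above:

check host wyzecam with address 192.168.1.50
    if failed icmp type echo count 3 with timeout 5 seconds
        then exec "/usr/local/bin/stop_ffmpeg.sh"
        else if succeeded then exec "/usr/local/bin/start_ffmpeg.sh"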


I know this can be done better if I can manage to get VAAPI or one of the other Intel QuickSync drivers to play nice with ffmpeg, but so far I've been unsuccessful.

I'm running this on a 7th-gen i3-7100, which I know has QuickSync hardware, and I can get the transcodes to work as expected, but for some reason I just can't for the life of me get those two features to work together:

Working transcode command:

/usr/bin/ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -i "rtsp://username:password@cameraIP/live" http://localhost:8090/cr10.ffm

but this doesn't utilize the actual hardware acceleration. This command "should", but it errors:

/usr/bin/ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i "rtsp://username:password@cameraIP/live" http://localhost:8090/cr10.ffm

pyr0ball@octoprint-nuc:~$ /usr/bin/ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i "rtsp://username:password@cameraIP/live" http://localhost:8090/cr10.ffm
ffmpeg version 3.4.6-0ubuntu0.18.04.1 Copyright (c) 2000-2019 the FFmpeg developers
  built with gcc 7 (Ubuntu 7.3.0-16ubuntu3)
  configuration: --prefix=/usr --extra-version=0ubuntu0.18.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
  libavutil      55. 78.100 / 55. 78.100
  libavcodec     57.107.100 / 57.107.100
  libavformat    57. 83.100 / 57. 83.100
  libavdevice    57. 10.100 / 57. 10.100
  libavfilter     6.107.100 /  6.107.100
  libavresample   3.  7.  0 /  3.  7.  0
  libswscale      4.  8.100 /  4.  8.100
  libswresample   2.  9.100 /  2.  9.100
  libpostproc    54.  7.100 / 54.  7.100
Guessed Channel Layout for Input Stream #0.1 : mono
Input #0, rtsp, from 'rtsp://username:password@cameraIP/live':
  Metadata:
    title           : Session streamed by "wyze"
    comment         : live
  Duration: N/A, start: 0.000125, bitrate: N/A
    Stream #0:0: Video: h264 (Main), yuv420p(progressive), 1920x1080, 15 fps, 15 tbr, 90k tbn, 30 tbc
    Stream #0:1: Audio: pcm_alaw, 8000 Hz, mono, s16, 64 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> mjpeg (native))
  Stream #0:0 -> #0:1 (h264 (native) -> mjpeg (native))
Press [q] to stop, [?] for help
Impossible to convert between the formats supported by the filter 'Parsed_null_0' and the filter 'auto_scaler_0'
Error reinitializing filters!
Failed to inject frame into filter network: Function not implemented
Error while processing the decoded data for stream #0:0
Conversion failed!
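If it helps anyone poking at the same thing: that "Impossible to convert between the formats" error usually means the decoded frames are still sitting in VAAPI GPU memory while the software mjpeg encoder wants them in system memory. One pattern that often gets around it, untested on this exact setup, is to explicitly download the frames back into system memory with a filter:

/usr/bin/ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi \
  -i "rtsp://username:password@cameraIP/live" \
  -vf 'hwdownload,format=nv12' \
  http://localhost:8090/cr10.ffm

That still leaves the MJPEG encode on the CPU, but at least the H.264 decode happens on the GPU.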