HQ Image/Camera Service Request

Camera model
Raspberry Pi Camera v2.1

What is the problem?
Low-quality JPEG snapshots

What did you already try to solve it?
Adjust focus, set resolution and frame rate in octopi.txt
I have the configuration file, focus dialed in and webcam settings configured, but I would like a higher-quality source image file from the Raspberry Pi camera. Even at 4K, a single video frame is still lower quality than a single camera JPEG image at 3280 × 2464 pixels.
I would like the camera port, not the video port, to send an HQ image to the snapshot folder.
If I understand correctly, at the moment the timelapse feature sends single frames captured from the video port to the snapshot folder, ready for rendering.
I would like to keep the same workflow as is, but have the camera port activate instead of the video port for the JPEG asset, much the same as the configuration for the external camera setup.
When I connect my Sony a6300 and configure OctoPrint for an external camera, I get 24 MP (3840x2160) single JPEG images transferred to the snapshot folder, ready to render. Is it possible for someone to make the Raspberry Pi camera do this? I would be happy to fund a donation if somebody can make this happen.

Logs (/var/log/webcamd.log, syslog, dmesg, ... no logs, no support)

Additional information about your setup (OctoPrint version, OctoPi version, ...)

Pi 4B 8 GB
Raspberry Pi Camera v2.1
HyperPixel 4
OctoPi 0.17
OctoPrint 1.4.2
OctoDash and Octolapse plugins

Anyone please?

Can you share a sample image? Then at least we can see what you mean by "low quality image". Is it the resolution, the compression quality or the focus?

Please explain what you mean by the "camera port" and the "video port".

If you use the standard timelapse functionality in OctoPrint on an OctoPi install, both the snapshot and the streaming video are operated by the same program, which is geared towards the streaming video part. In any case, the resolution of the stream and the resolution of the snapshot will be the same. If you set a resolution that is not available for video, a lower resolution will be picked instead.
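For reference, on a stock OctoPi install both the live stream and the snapshot OctoPrint fetches for timelapses are served by the same mjpg-streamer process, just via different URL parameters (the `octopi.local` hostname here is an assumption about a default install):

```shell
# Both endpoints are served by the same mjpg-streamer instance, so the
# snapshot can never exceed the configured stream resolution.
curl -o snapshot.jpg "http://octopi.local/webcam/?action=snapshot"
# The live stream comes from the same process:
#   http://octopi.local/webcam/?action=stream
```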

Thanks @fieldifview, I'll see if I kept some of the stills pre-render; if not, I'll hook the Sony back up and post some.
Basically a video frame at 1080p is roughly equivalent to a 2 MP still image; for comparison, a 4K video frame is about 9 MP. I can only stream at 1080p in Octolapse, but 720p streams more smoothly and results in 0.9 MP still images. This, coupled with a typical shutter speed of 1/60 s, gives only low-resolution, motion-blurred images from the Raspberry Pi camera, even though I dialed in focus using a lens calibration image.
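The pixel-count comparison above is easy to check directly; a quick sketch (note the "9 MP" figure for 4K comes out closer to 8.3 MP):

```python
# Rough megapixel equivalents of common video frame sizes versus the
# Pi Camera v2.1's full-sensor still resolution (3280x2464).
def megapixels(width, height):
    """Return the pixel count of a frame in megapixels."""
    return width * height / 1_000_000

print(f"720p frame:  {megapixels(1280, 720):.1f} MP")   # ~0.9 MP
print(f"1080p frame: {megapixels(1920, 1080):.1f} MP")  # ~2.1 MP
print(f"4K frame:    {megapixels(3840, 2160):.1f} MP")  # ~8.3 MP
print(f"v2.1 still:  {megapixels(3280, 2464):.1f} MP")  # ~8.1 MP
```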
With a single image from the camera port the resolution is 8 MP and uses the full sensor (I can't remember the exact mode), at a faster shutter speed, which results in a larger and sharper image without any motion blur. The best description of the camera modes I've found is in the picamera documentation: https://picamera.readthedocs.io/en/release-1.10/fov.html#camera-modes
It describes three ports: still, video and preview. We can set the camera modes using raspistill and raspivid, and as I understand it the preview port always runs but needs to be connected to an output. mjpg-streamer uses the video port and takes still frames from the video stream when triggered; if I set 1280x720 at 30 fps, I actually get a set of JPEG captures equal to 0.9 MP each. I can't stream 4K for 9 MP-equivalent stills, so I would like to use the still-image camera mode and have 8 MP JPEG images to make the timelapse from; 1 fps will work fine for preview.
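For what it's worth, a full-sensor still of the kind described above can be grabbed from the stills port with raspistill, assuming nothing else (such as mjpg-streamer) is currently holding the camera; the output path is just an example:

```shell
# Full-resolution (3280x2464) still from the Pi Camera v2.1 stills port.
# -q sets JPEG quality; -t 2000 gives the sensor 2 s to settle exposure.
raspistill -w 3280 -h 2464 -q 90 -t 2000 -o /tmp/snapshot.jpg
```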

As far as I know, only one of the 3 "ports" (a confusing term) can be used at the same time. Since mjpg-streamer is focused on streaming MJPEG, it uses the video "port", so the resolution and data rate are limited to what is configured for that port. The only way to use the stills port is not to use the video port, i.e. not to use the same camera for streaming video at the same time.
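One workaround along those lines, if a plugin lets you point snapshots at an external script (Octolapse has options in this direction), is to stop the streamer, capture from the stills port, then restart the streamer. This is only a sketch; the `webcamd` service name and the resolution assume a stock OctoPi install, and the live stream will drop out briefly on every snapshot:

```shell
#!/bin/sh
# Sketch of a snapshot script: release the camera from mjpg-streamer,
# grab a full-resolution still, then resume the stream.
SNAPSHOT="${1:-/tmp/snapshot.jpg}"

sudo systemctl stop webcamd                               # free the camera
raspistill -w 3280 -h 2464 -q 90 -t 2000 -o "$SNAPSHOT"   # stills-port capture
sudo systemctl start webcamd                              # resume streaming
```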