-
Programming language/frameworks integration
This is a thread to discuss the integration of various higher-level languages and frameworks, such as Pigweed, C++, MicroPython, Rust, WebASM, Arduino...
-
Dynamic pipelines: media handling
An essential feature of media systems is the ability to receive a stream from another system: from the network, a stored file, Bluetooth, etc. Multimedia means that streams will not always contain content known in advance when they are received, and handling the format depends on a complex matrix of available hardware and software support. To simplify this matrix, as well as to make media handling possible in the first place, a pipeline such as libZMP becomes necessary. Such pipelines exist in any system able to handle media, often boiling down to FFmpeg and GStreamer. This is very well described in the GStreamer documentation about dynamic pipelines.
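The core of a dynamic pipeline is that elements are chosen and linked only once the stream's format is discovered at runtime. A minimal sketch of that idea in plain C (this is not the libZMP API; the `element` struct, the decoder names, and the sample-doubling "decoder" are all invented for illustration):

```c
#include <assert.h>
#include <string.h>

/* One pipeline element: a name, a process() hook, and a link to the next. */
struct element {
    const char *name;
    int (*process)(int sample);
    struct element *next;
};

static int decode_pcm(int s)   { return s; }      /* passthrough */
static int decode_adpcm(int s) { return s * 2; }  /* stand-in for real decoding */

/* Pick the decoder only once the stream format is known at runtime. */
static struct element *make_decoder(const char *fmt)
{
    static struct element pcm   = { "pcm",   decode_pcm,   NULL };
    static struct element adpcm = { "adpcm", decode_adpcm, NULL };
    return strcmp(fmt, "adpcm") == 0 ? &adpcm : &pcm;
}

/* Push one sample through every linked element in order. */
static int run_pipeline(struct element *head, int sample)
{
    for (struct element *e = head; e != NULL; e = e->next)
        sample = e->process(sample);
    return sample;
}
```

A real pipeline engine adds buffering, negotiation, and error propagation on top, but the runtime "probe the format, then build the chain" step is the part that cannot be done with a statically compiled call graph.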
-
Pipewire-style hardware abstraction
One particular use case for pipeline engines is the ability to provide standard sources/sinks representing the underlying hardware, like a mixing table. Each piece of underlying hardware has an identifier. Each identifier is bound to one or more roles (such as "the local speaker" or "the local microphone"). Configuration and applications can then match sources to sinks (e.g. connect this source and this sink if the user did not enable mute).
All available sources and sinks and their capabilities are exposed on the same grid. -- qpwgraph screenshot showing a PipeWire graph as an example -- The local hardware and applications, as well as the "middleware", are all represented on the same grid, allowing arbitrary "A to B" connections at whatever performance the system permits.
Example: screen sharing
While this may seem like an advanced feature reserved for desktop users, that is not true: MCUs are perfectly capable of piping the LVGL output back into an H.264 hardware encoder, and further packing it into an srtp:// stream sent over WiFi. The reason it might appear less doable on an MCU is only that, so far, MCUs did not have an ecosystem allowing them to handle this sort of integration. An LVGL screen-sharing endpoint shows up as an application source tagged as "display loopback", and applications wanting to implement screen sharing can then select such a libZMP stream and add an extra layer of hardware encoding as configured. Which hardware this runs on becomes an application detail, up to the system integrator to select compatible hardware (i.e. a video scaler like DMA2D, an H.264/MJPEG encoder...). The key element is to flatten all the APIs onto the same level, allowing routing to be done at runtime, which is what libZMP provides as a first brick.
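The "each identifier is bound to a role" policy layer can be pictured as a small lookup table. A sketch in C (the endpoint table, the identifiers like `i2s0`, and the role strings are all hypothetical, not libZMP or PipeWire names):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical endpoint table: each hardware identifier is bound to a role. */
struct endpoint {
    const char *id;    /* hardware identifier, e.g. "i2s0" */
    const char *role;  /* semantic role, e.g. "local-speaker" */
};

static const struct endpoint endpoints[] = {
    { "i2s0",  "local-speaker" },
    { "dmic0", "local-microphone" },
    { "lvgl0", "display-loopback" },
};

/* Resolve a role to a concrete endpoint, as a configuration layer might,
 * so applications never hard-code the hardware identifier itself. */
static const char *endpoint_for_role(const char *role)
{
    for (size_t i = 0; i < sizeof(endpoints) / sizeof(endpoints[0]); i++)
        if (strcmp(endpoints[i].role, role) == 0)
            return endpoints[i].id;
    return NULL; /* no hardware fulfils this role on this board */
}
```

The point of the indirection is that an application asks for "local-speaker" and the system integrator decides which physical endpoint that maps to, per board.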
-
Libcamera-style integration
A particular pipeline engine found on Linux is libcamera, which does not use GStreamer or PipeWire, but instead integrates with them. Libcamera was built out of the need to handle "complex cameras": video hardware where the image data decoding and tuning is left to the host processor rather than a dedicated ISP chip or the sensor itself. This means that microprocessors, and lately microcontrollers, inherit the duty of handling the Bayer data from sensors and performing all the subsequent image tuning. Like libcamera, libZMP can be used as a toolkit for interconnecting such elements and providing the system with a simplified, rich "camera pipeline endpoint". For instance, video conferencing on Linux-based laptops goes through PipeWire exposing a libcamera source to the web browser sink. Reminder: raw images from "good" image sensors tend to look like this before correction (here is an extreme example): As features are added to camera (and audio) "default pipelines" or "hardware-specific pipelines", this allows progressively integrating better and better features, such as improved calibration for a particular sensor, or fish-eye correction... All such features can be integrated without requiring any modification to the application, as long as a pipeline engine is present and provides a "Camera 0" output endpoint.
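To make "handling the Bayer data" concrete, here is a toy sketch of the very first step such a pipeline performs. It collapses one RGGB 2x2 quad into a single RGB pixel; real ISP code interpolates per pixel and then applies white balance, lens shading, etc., so this only illustrates the kind of work that moves onto the MCU with complex cameras:

```c
#include <assert.h>
#include <stdint.h>

struct rgb { uint8_t r, g, b; };

/* Toy debayer step: one RGGB quad -> one RGB pixel.
 * The two green samples of the quad are simply averaged. */
static struct rgb debayer_quad(uint8_t r, uint8_t g1, uint8_t g2, uint8_t b)
{
    struct rgb out = { r, (uint8_t)((g1 + g2) / 2), b };
    return out;
}
```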
-
Audio subsystem for Zephyr and Bluetooth
Lacking a dedicated audio subsystem that unifies all the different audio driver sources, application developers are left packing everything together manually and fixing the same bugs repeatedly on every new project. Having a uniform source/sink strategy helps reduce the API churn that every developer has to go through to play/record/encode/decode/transmit live media. MCUs and DSPs are often used for audio processing, but little existing infrastructure helps build streams, wrap all the runtime decoding, or solve the synchronization and timing issues at the platform level. A multimedia pipeline does not imply large runtime overhead: the abstraction is light, and not necessarily slower than some bare-metal applications. It can help build more complex multi-channel applications and wrap audio encoding/decoding. Audio over Bluetooth faces the same challenges as video over the network, and scaling the sampling resolution and compression level can help fit the incoming data into the available bandwidth. Doing so already requires advanced pipeline control to detect congestion and propagate it back across the pipeline, down to audio hardware reconfiguration.
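The "scale compression to fit bandwidth" policy at the end of that congestion feedback loop can be sketched very simply: given a measured link budget, pick the highest entry of a bitrate ladder that fits, and degrade rather than drop out. The ladder values and the function are invented for illustration, not an existing API:

```c
#include <assert.h>

/* Hypothetical bitrate ladder, highest quality first (kbit/s). */
static const int bitrates_kbps[] = { 320, 128, 64, 24 };
#define N_RATES (int)(sizeof(bitrates_kbps) / sizeof(bitrates_kbps[0]))

/* Pick the highest bitrate that fits the measured link budget.
 * When even the lowest does not fit, keep the lowest anyway:
 * degraded audio beats a dropped stream. */
static int pick_bitrate(int available_kbps)
{
    for (int i = 0; i < N_RATES; i++)
        if (bitrates_kbps[i] <= available_kbps)
            return bitrates_kbps[i];
    return bitrates_kbps[N_RATES - 1];
}
```

The hard part the text alludes to is not this selection, but detecting congestion reliably and propagating the new rate back through every pipeline element down to the hardware configuration.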
-
Video to display
In some cases, an embedded system needs to pipe a camera straight to a display: e.g. blind-spot cameras on vehicles (from airplanes to cars to add-ons for buses/trucks), or a temporary preview of a camera used for other purposes (e.g. to check that it works). This seems like a very basic example, but even the most minimal example has a lot of complexity to solve, in hardware-specific ways, as the current Zephyr video capture sample shows. Video pipelines help absorb all this complexity and take it away from every sample, and beyond that, away from the application developer.
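Stripped of all the hardware-specific setup, the control flow of such a preview is just a capture-to-display copy loop. A sketch in plain C, with both drivers reduced to hypothetical get-frame/put-frame callbacks (real Zephyr code would use the video and display driver APIs, and all the buffer negotiation that the text calls "complexity" would live behind these two callbacks):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct frame { const uint8_t *data; size_t size; };

typedef int (*get_frame_fn)(struct frame *out);       /* 0 on success */
typedef int (*put_frame_fn)(const struct frame *in);  /* 0 on success */

/* Copy up to nframes frames from capture to display;
 * returns how many frames were actually shown. */
static int pipe_frames(get_frame_fn get, put_frame_fn put, int nframes)
{
    int shown = 0;
    for (int i = 0; i < nframes; i++) {
        struct frame f;
        if (get(&f) != 0)
            break;          /* capture error: stop the preview */
        if (put(&f) == 0)
            shown++;
    }
    return shown;
}

/* Stubs standing in for camera and display drivers, for demonstration. */
static int fake_get(struct frame *out)
{
    static const uint8_t px[4] = { 0 };
    out->data = px;
    out->size = sizeof(px);
    return 0;
}

static int fake_put(const struct frame *in)
{
    return in->size ? 0 : -1;
}
```

A pipeline engine's job is to let the application express only this loop (or less), while format negotiation, scaling, and buffer ownership are handled by the elements in between.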
-
I am not sure I understand the first phrases of the PR description. Could you elaborate?



-
My motivation for this is to eliminate the DDR3 RAM chip of Linux-based cameras by using Zephyr instead of Linux (and eventually an MCU instead of an MPU), as well as to shrink video systems along various metrics.
In the Linux world, pipelines are provided by many different frameworks. In Zephyr, because we start ex nihilo, it is possible to use libZMP as a single pipeline framework on top of which to build many things.
None of the below is currently supported out of the box, and at the same time, everything becomes doable once libZMP is merged. You would need to either be patient or reach out about how to accelerate things and contribute. :)
Note that libZMP is not my project, and I am only trying to put it in context.
The official issue is #98514