Streamer Concepts

The KreaTV TV Application Platform handles all processing of media streams in Streamer processes. Each Streamer process can handle one media stream at a time. By executing multiple Streamer processes concurrently, the platform can handle several media streams in parallel. Each Streamer process executes independently, but all Streamers executing in the platform are controlled by the Media Manager, which coordinates the use of limited resources such as video and audio hardware decoders and output to the display.

The Media Manager may start a new Streamer process to meet an increasing demand from applications. In the same way, the Media Manager may terminate Streamer processes when they are no longer needed.
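The lifecycle described above can be sketched as follows. This is a minimal illustrative model, not the real Media Manager API; the class and method names are assumptions made for the example.

```python
# Illustrative sketch (assumed names, not the platform API): a manager
# that starts one Streamer per media stream on demand and terminates
# Streamers when their stream is no longer needed.

class Streamer:
    """Handles exactly one media stream at a time."""
    def __init__(self, stream_id):
        self.stream_id = stream_id
        self.running = True

    def stop(self):
        self.running = False


class MediaManager:
    """Starts and terminates Streamers to match application demand."""
    def __init__(self):
        self.streamers = {}

    def open_stream(self, stream_id):
        # One Streamer per stream; started on demand.
        streamer = Streamer(stream_id)
        self.streamers[stream_id] = streamer
        return streamer

    def close_stream(self, stream_id):
        # Terminate the Streamer once it is no longer needed.
        self.streamers.pop(stream_id).stop()


manager = MediaManager()
manager.open_stream("live-tv")
manager.open_stream("pip")
assert len(manager.streamers) == 2   # two streams handled in parallel
manager.close_stream("pip")
assert len(manager.streamers) == 1
```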

Pipeline

The Streamer handles components called elements, which are capable of processing stream data. The Streamer core, the part of the Streamer that is not made up of elements, organizes these elements in a pipeline. The pipeline has a source element at one end and one or, more commonly, several sink elements at the other end. In between these endpoints, the pipeline may contain any number of intermediary elements.

All elements except source elements have one input pad, and all elements except sink elements have one or several output pads. The Streamer logically connects the element's output pads to input pads of other elements, forming the chain of elements which constitutes the pipeline. The stream data which enters the pipeline in the source element is pushed through element by element in the pipeline and eventually exits the pipeline. Most of the stream data will exit the pipeline in one of the sink elements, but it is possible for any element to release stream data.
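The chaining of elements through pads can be sketched as a minimal model. The class names and the `link`/`push` methods below are assumptions for illustration only; they are not the platform's element API.

```python
# Minimal model (assumed names, not the platform API) of a pipeline:
# a source element pushes stream data through intermediary elements
# until it exits at a sink element.

class Element:
    def __init__(self):
        self.output_pad = None          # downstream element, if any

    def link(self, downstream):
        # Connect this element's output pad to the next element's input pad.
        self.output_pad = downstream
        return downstream

    def push(self, data):
        # Default behaviour: pass stream data through unchanged.
        if self.output_pad is not None:
            self.output_pad.push(data)


class Source(Element):
    def start(self, data):
        # Stream data enters the pipeline here.
        self.push(data)


class Sink(Element):
    def __init__(self):
        super().__init__()
        self.received = []

    def push(self, data):
        # Most stream data exits the pipeline here.
        self.received.append(data)


source, middle, sink = Source(), Element(), Sink()
source.link(middle).link(sink)
source.start(b"stream-data")
assert sink.received == [b"stream-data"]
```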

An example pipeline with the most fundamental elements. A real pipeline would contain several other elements.

Hardware Acceleration

Some Streamer elements are hardware accelerated. The main difference from the non-accelerated elements is that the video and audio data can be handled entirely below the Hardware Abstraction Layer (HAL); the elements within the Streamer pipeline are then used to control the underlying hardware components. This technique is used, for example, by the HW Multicast Source element and the HW DVR element.

Hardware-accelerated streamer elements

In the hardware-accelerated case, the source element no longer streams the video and audio to the Framing element; instead, it connects through the HAL to a socket. The video and audio data is passed directly to the Demuxer in hardware, which passes PAT and PMT information up to TS Root. TS Root then acts as a new source for a reduced data stream. The video and audio data from the demuxer is passed on directly to the decoding hardware. Performance is increased by handling the video and audio entirely in hardware. (To disable the HW acceleration, remove libtsmulticasthwelement.so from /usr/bin/streamer and compare the pipelines.)

Stream Segments

The data stream pushed through the pipeline is divided into segments. A segment denotes a continuous stream section of arbitrary length. Every piece of data in the stream is associated with exactly one segment. An element can split a segment into several smaller segments covering the same continuous section of the stream as the original, but the reverse operation is only possible under special circumstances; that is, two segments cannot be merged to form one larger segment unless the element immediately follows the source element. The postpone pad used for merging segments is described in the Element Data Flow section.
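The splitting invariant can be sketched in a few lines. The `Segment` class and `split` method below are illustrative assumptions, not the platform's segment API.

```python
# Sketch of the segment invariant: a segment covering a continuous
# stream section may be split into smaller segments that together
# cover exactly the same section. Names are illustrative.

class Segment:
    def __init__(self, start, length):
        self.start = start
        self.length = length

    def split(self, offset):
        # Split one segment into two; together they cover the
        # same continuous section as the original.
        assert 0 < offset < self.length
        return (Segment(self.start, offset),
                Segment(self.start + offset, self.length - offset))


seg = Segment(start=0, length=1880)
first, second = seg.split(940)
assert (first.start, first.length) == (0, 940)
assert (second.start, second.length) == (940, 940)
# Together the pieces cover the original section:
assert first.length + second.length == seg.length
```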

Segments are strictly ordered and must be processed in the order given by the sections they cover in the stream. Segments cannot switch places with each other, and it is not possible to expand the stream or insert new data into it. On the other hand, the data in the stream may change and even move around. The typical example is a conditional access element which descrambles the stream: it has to read the scrambled stream, descramble the data and write it back to the stream for additional processing by other elements further down the pipeline. In this case the segment remains the same, but the data changes in the corresponding section of the stream.
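The descrambling example can be illustrated with an in-place rewrite. XOR stands in here for the real conditional-access descrambling; the function name is an assumption for the example.

```python
# Illustration of "data may change but segments may not": a
# descrambler-like element rewrites the bytes of a stream section in
# place, while the covering segment stays exactly the same. A simple
# XOR stands in for real conditional-access descrambling.

def descramble_in_place(stream: bytearray, start: int, length: int, key: int):
    # Rewrite the scrambled section in place; no data is
    # inserted, removed or reordered.
    for i in range(start, start + length):
        stream[i] ^= key

stream = bytearray(b"\x10\x11\x12\x13")
descramble_in_place(stream, start=0, length=4, key=0xFF)
assert stream == bytearray(b"\xef\xee\xed\xec")
assert len(stream) == 4   # the section (and its segment) is unchanged in size
```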

Since an element may have multiple output pads but only a single input pad, the pipeline will fork at such an element. However, every section of the stream may travel through the pipeline on one path only, effectively forcing the element with multiple output pads to select exactly one output for each segment. If one part of a segment, which an element has received on its input pad, must be sent to one output pad and the rest of the segment must be sent to another output pad, the element must split the segment in two and send each piece to the appropriate output pad.
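The split-then-route rule can be sketched as follows. The routing criterion (a position boundary) and the function name are assumptions made for the example; real elements route on content, not on a fixed offset.

```python
# Sketch of an element with two output pads: each segment goes to
# exactly one pad, so a segment straddling the routing boundary is
# split first. Segments are modelled as (start, length) pairs.

def route(segments, boundary):
    """Send sections before `boundary` to pad 0, the rest to pad 1."""
    pads = ([], [])
    for start, length in segments:
        end = start + length
        if end <= boundary:
            pads[0].append((start, length))
        elif start >= boundary:
            pads[1].append((start, length))
        else:
            # Straddles the boundary: split the segment in two and
            # send each piece to the appropriate output pad.
            pads[0].append((start, boundary - start))
            pads[1].append((boundary, end - boundary))
    return pads

pads = route([(0, 100), (100, 100)], boundary=150)
assert pads[0] == [(0, 100), (100, 50)]
assert pads[1] == [(150, 50)]
```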

Frames and Focus

As stated previously, a segment covers a continuous section of the stream. This section is divided into a series of frames, all of the same length; the segment size is thus a multiple of the frame size. The frames in a segment have a focus, which is the part of each frame that carries the data of interest. The focus is used to exclude information associated with lower levels of a protocol as decoding proceeds in the pipeline; for example, each frame might be an MPEG-2 Transport Stream packet and the focus might be on the payload of each packet.
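The MPEG-2 Transport Stream example can be made concrete with standard TS sizes (188-byte packets with a 4-byte header). The helper functions themselves are illustrative assumptions, not platform APIs.

```python
# The frame/focus idea for the MPEG-2 Transport Stream case: each
# frame is one 188-byte TS packet, and the focus excludes the 4-byte
# packet header so only the payload is seen downstream. The sizes are
# standard TS values; the helpers are an illustrative sketch.

FRAME_SIZE = 188        # one TS packet
FOCUS_OFFSET = 4        # skip the TS packet header
FOCUS_LENGTH = 184      # payload only

def frames_in_segment(segment_length):
    # The segment size must be a multiple of the frame size.
    assert segment_length % FRAME_SIZE == 0
    return segment_length // FRAME_SIZE

def focus_of_frame(frame: bytes) -> bytes:
    # The focus is the part of the frame carrying the data of interest.
    return frame[FOCUS_OFFSET:FOCUS_OFFSET + FOCUS_LENGTH]

assert frames_in_segment(3 * FRAME_SIZE) == 3   # a segment of three frames
packet = bytes([0x47]) + bytes(187)             # sync byte + dummy rest
assert len(focus_of_frame(packet)) == 184
```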

A segment divided into three frames.

Communication Between Elements

The stream itself carries the vast majority of information passed between elements, but it is not the only way in which elements may communicate. In general, elements participating in a pipeline are not aware of each other's presence. When an element receives a segment on its input pad, it does not know which of the other elements sent that segment, nor does it know which element will receive a segment that it sends to one of its output pads. Since elements have limited knowledge about their environment, general communication mechanisms are provided by which elements may send and receive tagged messages visible to all elements.

Stream Metadata

Just as an element can send a stream segment to one of its output pads, the element may also send an arbitrary message to a pad. This message is called stream metadata and usually contains additional information about the data in the stream or information that is available in the stream but provided as metadata in a more comprehensible form.

The metadata is associated with the segment in the stream at which it is sent, and follows that segment as it is processed by the next element in the pipeline. The motivation for this is that the metadata describes the current state of the stream, and the state may change from one segment to another, so consequently the metadata must be associated with the actual position of change. That way, the metadata may carry information that is associated with a specific position in the stream, for example when the program definition carried in the stream changes.
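The association between metadata and its stream position can be sketched briefly. The structure and field names below (including the message contents) are assumptions for illustration, not the real metadata format.

```python
# Sketch (assumed structure, not the real API): metadata is attached
# at the segment where it is sent and follows that segment through
# the pipeline, so downstream elements see the change at the exact
# stream position where it occurred.

class Segment:
    def __init__(self, start, length):
        self.start, self.length = start, length
        self.metadata = []              # travels with this segment

def send_metadata(segment, message):
    # Associate the message with the segment at which it is sent.
    segment.metadata.append(message)

seg = Segment(start=940, length=188)
send_metadata(seg, {"type": "program-definition-changed", "pmt_pid": 0x100})
assert seg.metadata[0]["type"] == "program-definition-changed"
```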

Blackboard

All elements have permission to read and write on a global blackboard. The blackboard accepts parameter values with arbitrary length and content, but each parameter must be tagged by a name. If an element writes a parameter with a name that already exists on the blackboard, the new value will replace the previous one.
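The blackboard's write semantics amount to a name-keyed store where a new write replaces the old value. The class and parameter names below are illustrative assumptions, not the platform API.

```python
# The blackboard's write semantics, sketched with a plain dict:
# parameters are tagged by name, values are arbitrary, and a write
# with an existing name replaces the previous value.

class Blackboard:
    def __init__(self):
        self._params = {}

    def write(self, name, value):
        self._params[name] = value      # replaces any previous value

    def read(self, name):
        return self._params.get(name)

bb = Blackboard()
bb.write("current-service", "SVT1")
bb.write("current-service", "SVT2")     # overwrites the first write
assert bb.read("current-service") == "SVT2"
```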

The parameters on the blackboard are supposed to reflect the state of the stream as experienced by the user, i.e. the value valid for each parameter at the position in the stream that is currently shown to the user on the TV set. Since a considerable amount of the stream may be buffered in the Streamer, the blackboard is supposed to reflect the state of the stream at the end of that buffer. To accomplish this, stream metadata published by an element is written to the blackboard only when the stream has been consumed up to the position of the metadata, i.e. when all segments up to that point in the buffer have been released.
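This deferred-publication pattern can be sketched as a pending queue keyed by stream position. The class, method names and the parameter values are assumptions for the example, not the real mechanism.

```python
# Sketch of deferred publication: metadata published by an element is
# queued with its stream position and only written to the blackboard
# once the stream has been consumed up to that position, i.e. once
# the buffered segments before it have been released.

class DeferredBlackboard:
    def __init__(self):
        self.params = {}
        self._pending = []              # (position, name, value)

    def publish(self, position, name, value):
        # Queued, not yet visible: the user has not reached it yet.
        self._pending.append((position, name, value))

    def consumed_up_to(self, position):
        # Flush metadata whose stream position has now been reached.
        still_pending = []
        for pos, name, value in self._pending:
            if pos <= position:
                self.params[name] = value
            else:
                still_pending.append((pos, name, value))
        self._pending = still_pending

bb = DeferredBlackboard()
bb.publish(2000, "current-program", "News")
bb.consumed_up_to(1500)
assert "current-program" not in bb.params   # still buffered
bb.consumed_up_to(2000)
assert bb.params["current-program"] == "News"
```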