posted by Christian F.K. Schaller on Tue 13th Jan 2004 19:28 UTC
About three years ago I was looking around for something to add multimedia capabilities to my GNOME desktop. At that point in time there wasn't really much around. I think the most advanced video player for Linux in those days was XAnim, which was neither moving quickly nor free software, except in the "free beer" sense. Projects like Xine and MPlayer had either just started up or did not exist yet.


Also, my interest wasn't purely in playing back media files on my own machine; I also wanted to see if there was something out there I could help push forward to give Linux developers and users something competitive with Microsoft's DirectMedia and Apple's QuickTime. That meant I was looking for something that would also let developers do more advanced things with relative ease.

Anyway, I started looking around and discovered GStreamer. I guess what pulled me in were the screenshots of the pipeline editor, which gave me the clear feeling that this was more than just a playback application: it was something that could be used for a much wider range of applications. Based on that I decided to do an interview with the developers for the news website I was involved with at the time (now gone). I guess I never left after doing that :)

The Basics

The core concept in GStreamer is that of a pipeline system which your media streams through. This means you have one or more sources, which can be anything: a file, a URL or a hardware device. Depending on how you construct your pipeline, lots of things can then happen to that media stream before it ends up in one or more sinks at the other end. The sinks can be the same kinds of things as the sources: a web stream, a file or a hardware device, all depending on what plugins and elements you have installed.
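The source → elements → sink idea can be sketched in a few lines of Python. This is a toy model of the concept, not GStreamer's real API; the element names here are made up for illustration.

```python
# Toy model of the pipeline idea: a source produces chunks of a
# stream, elements in the middle transform them, and a sink consumes
# whatever reaches the end. NOT the real GStreamer API.

def file_source(chunks):
    """Pretend source element: yields chunks of a media stream."""
    for chunk in chunks:
        yield chunk

def volume_filter(stream, gain):
    """Pretend filter element: scales every sample it passes along."""
    for chunk in stream:
        yield [sample * gain for sample in chunk]

def collect_sink(stream):
    """Pretend sink element: gathers everything that arrives."""
    out = []
    for chunk in stream:
        out.extend(chunk)
    return out

# Wire up the pipeline: source -> volume filter -> sink.
pipeline = volume_filter(file_source([[1, 2], [3, 4]]), gain=2)
print(collect_sink(pipeline))  # [2, 4, 6, 8]
```

Because each stage only pulls data from the stage before it, swapping the file source for a network source, or the collecting sink for a sound-card sink, would not change the stages in between.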

So what can happen in the pipeline? Well, the possibilities are almost endless. GStreamer has several different classes of elements. There is the basic stuff: elements for decoding different formats, demuxers for splitting the audio and video into separate streams, and muxers and encoders for merging the streams back together and encoding them in a format of your choice. Then there is a class of filter elements. Filters can do anything from technical transformations, like colorspace conversion or stereo-to-mono downmixing and vice versa, to video effects, like making the image look old or psychedelic, or as if a bug is looking at it.

There is no limit to the number of elements you can put in a pipeline, except the limitations your hardware imposes. For instance, an application that needs to work in real time is constrained in what it can do: if you do too much, your machine will simply not be able to keep up with the computations. The GStreamer architecture is designed, however, so that the pipeline system itself adds no latency, which is a prerequisite for many types of applications that demand low latency.

GStreamer also contains an advanced system for negotiating capabilities. This means that GStreamer itself can assemble a chain of elements that takes the input it gets and transforms it into a form the output device supports. So if your output device uses the I420 colorspace but the incoming video stream is RGBA, GStreamer can automatically assemble a pipeline that converts the colorspace for you. Since GStreamer handles these things itself, a developer writing a media application doesn't need to learn about colorspaces, bitrates and sound card clock rates; GStreamer provides an easy-to-use API that lets you focus on your actual application instead of worrying about what happens at the lower levels.
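The essence of that negotiation can be sketched as follows. This is a deliberately simplified model in which each element only advertises a set of colorspaces; real GStreamer capabilities are far richer, and the converter element name here is invented for the example.

```python
# Toy sketch of capability negotiation between two linked elements.
# Each side advertises the colorspaces it supports; if the sets
# overlap we link directly, otherwise a converter must be inserted.
# Simplified illustration only, not GStreamer's actual caps system.

def negotiate(upstream_caps, downstream_caps):
    """Return (chosen format, list of extra elements to insert)."""
    common = upstream_caps & downstream_caps
    if common:
        return sorted(common)[0], []  # direct link is possible
    # No shared format: fall back to what downstream wants and
    # insert a (hypothetical) converter element in between.
    return sorted(downstream_caps)[0], ["colorspace-converter"]

# An RGBA video stream feeding an I420-only output device:
fmt, extra = negotiate({"RGBA"}, {"I420"})
print(fmt, extra)  # I420 ['colorspace-converter']

# Two elements that already share a format link directly:
print(negotiate({"I420", "RGBA"}, {"I420"}))  # ('I420', [])
```

The point of doing this inside the framework is exactly what the paragraph above describes: the application developer never has to write the conversion logic, or even know it happened.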

Developing GStreamer has been a lot harder than I think was anticipated when the project started out. Unlike many other free software projects, GStreamer was not a simple re-implementation of something that had been done before. I guess we did what Steve Ballmer claims free software never does: we innovated. The basic design and idea came from a research project at Portland University in which GStreamer project founder Erik Walthinsen participated; it was loosely modeled on DirectShow. That research gave us the basics, but when you take something out of the lab and place it in the real world, many new issues tend to arise quickly. So over the last three years we have had many rewrites of core modules, as the original design needed extending to let people use GStreamer for more and more varied tasks, and as real-world concerns such as support for legacy formats reared their heads.

It is important to know that GStreamer has always been focused on two things: keeping the core media-agnostic and keeping it GUI-independent. In fact, many of the first commercial users of GStreamer used it on the server, for things like audio format conversion at a telecom, recording, storing and archiving clips of live news at a radio station, and similar applications.

