Linux on board

Accessing the Nokia N800 camera

Build an application to access the Webcam


Content series:

This content is part # of # in the series: Linux on board

Stay tuned for additional content in this series.

The first installment in this three-part series showed you what's in the Nokia N800 Linux® installation, listed its technical specs and physical parameters, and explained how to set up and test the build environment.


Our starting point for this installment is a camera application described in a tutorial on the maemo site. Rather than duplicate that tutorial's effort and explanations here, I'll simply point you to the maemo article and suggest you familiarize yourself with the information contained in it: "How to use the Camera API."

Although you can run the camera application on the N800 without shell access, I found it immensely useful to load a terminal program on the N800. I also installed an ssh server so I could use a real keyboard while testing. Just a reminder: If you install an ssh server on the N800 (like the 770 before it), the system will allow root logins over ssh (password "rootme"). It should go without saying that you should change this, but in a world where people click on attachments to see what they do, I'll say it anyway: Change the root password.

Meet GStreamer

GStreamer is an open source multimedia framework—behind this buzzphrase, you'll find that it provides glue code to connect streams of various media (such as audio or video). That alone wouldn't be so interesting; the bonus is that it includes support for a broad variety of tasks. The GStreamer libraries are installed on the N800 by default, but not the command-line utilities. If you have a root shell, you can remedy this and follow along with a few more commands (see the Getting more software sidebar). Just install the gstreamer-tools package from the maemo repository and you'll have the command-line front ends.

If you're following along, pop your camera out. If you haven't already, you should disable the "start when camera opened" feature (look in the Tools menu) of the video chat application.

The big thing GStreamer likes to do is set up pipelines. Like the pipelines used in UNIX® shells to assemble little widgets into full applications, GStreamer pipelines combine components. While I'll be showing a C program that uses the GStreamer libraries shortly, there's also a command-line tool called gst-launch; it's in the gstreamer-tools package, which you can install with apt-get (but not with the Application Manager, which shows only a filtered subset of available items). You also need to have some repositories added to your apt-get configuration; see the Getting more software sidebar.

Listing 1. Installing the gstreamer-tools
# apt-get install gstreamer-tools
Reading package lists... Done
Building dependency tree... Done
The following extra packages will be installed:
The following NEW packages will be installed:
  gstreamer-tools gstreamer0.10-tools
0 upgraded, 2 newly installed, 0 to remove and 9 not upgraded.
Need to get 41.1kB of archives.
After unpacking 164kB of additional disk space will be used.
Do you want to continue [Y/n]?
WARNING: The following packages cannot be authenticated!
  gstreamer0.10-tools gstreamer-tools
Install these packages without verification [y/N]? y
Setting up gstreamer0.10-tools (0.10.9-osso11) ...
Setting up gstreamer-tools (0.10.9-osso11) ...

Note that the packages aren't "authenticated"—harmless enough in this case. Here's a simple command line invocation:

$ gst-launch v4l2src ! xvimagesink

Yes, that's an exclamation mark, not a pipe. If you're following along, you now have live video from your camera on screen. It's that easy to get started.

The gst-launch program is a simple wrapper that opens specified plug-ins and joins them together. (In GStreamer, any widget it can connect is called a plug-in; plug-ins are identified by name and are loaded dynamically at runtime.) The exclamation mark is used the same way as a pipe in shell commands. Each plug-in is called an element in this context.

To follow most of what GStreamer does, you need only two concepts:

  • src: A source is anything that produces media.
  • sink: A sink is anything that consumes media.

In our present situation, the v4l2src plug-in is the video4linux2 source; its default behavior is to find the first supported video device known to the kernel and stream video off it. The xvimagesink plug-in displays a stream of video on a graphical display.

Some elements are both sources and sinks. Each plug-in can define pads or connectors; a given pad is either a source or a sink. Many plug-ins support one of each and work by converting inputs to produce outputs.

A number of special elements exist. For example, there is a special element called fakesink, which does nothing but consume data. Why would you need something like this? Because an element with an output that isn't connected will simply sit there waiting for someone to consume its output.

Some elements can gain additional pads during use. Most notably, the tee element allows you to connect more than one sink to it, splitting a stream up and providing multiple copies. This could be useful if you want to perform two operations on the stream, each requiring different and incompatible filters.

You could design a Webcam application using GStreamer in several ways. The basic idea is that you connect a video source to an image sink (like the screen) and somewhere along the way steal a copy of the image to encode as a JPEG. By default, the video coming in from the camera is not in an RGB format. This is where GStreamer's framework really starts to show off; the automatic negotiation of capabilities makes it a breeze to request data in a particular format or scale without necessarily performing an explicit conversion.

Improving performance

Let's do a quick review of the existing material. Our starting point was the example camera program from the maemo site (Related topics). This program works out of the box, although it is a little slow in spots. The basic design is to start with the video4linux source, filter it into RGB, and then send one copy to the screen and one to an image sink that can occasionally save JPEGs.

In fact, the color space conversion is expensive and unnecessary except when saving an actual JPEG file. The standard xvimagesink element displays frames encoded in the camera's YUV format just fine.

The first change you might try is to move the colorspace filter onto the path that leads to the JPEG conversion. The original code feeds the output of the filter into a tee element, which then goes both to an image sink (used to save JPEG files when needed) and to the screen. This requires every single frame to get converted.

Once the colorspace filter is on the other side of the tee, it's still run on every frame, but now it's possible to change this. The GStreamer library has a concept of probe functions used to modify the behavior of elements. Probe functions get called on every frame that reaches a given pad. If the probe function returns TRUE, the frame is passed down the stream, but if it returns FALSE, the frame is dropped. The example code works by registering a special save-as-JPEG probe function whenever the user pushes the button. The probe function then saves a frame and deregisters itself.

However, another option is available: Register a probe that normally returns FALSE and attach it before the color space conversion. Then just arrange for that probe to return TRUE if the user clicks the "take photo" button. This means that the rest of the time, there's simply no data in the stream the colorspace filter works on.

At this point, the change to the pipeline setup is simple enough: A call to add a probe to one of the output pads of the tee filter. The probe is what's called a buffer probe—it gets called only on actual blocks of data such as frames of video. Out-of-band information isn't affected.

Here's the code to add the callback:

Listing 2. Probing the data stream
pad = gst_element_get_static_pad(queue, "src");
if (pad) {
  gst_pad_add_buffer_probe(pad, G_CALLBACK(only_when_saving), appdata);
  gst_object_unref(pad);
} else {
  fprintf(stderr, "couldn't get source pad.\n");
}

The only_when_saving function is a standard probe callback. In our case, it's particularly simple because it needs to check only a single boolean value.

Listing 3. Should I stay or should I go?
/* This callback indicates whether or not we want a picture */
static gboolean only_when_saving(
		GstElement *element,
		GstBuffer *buffer, AppData *appdata)
{
	if (appdata->save_next_frame) {
		appdata->save_next_frame = FALSE;
		return TRUE;
	} else {
		return FALSE;
	}
}

If the save_next_frame value has been set, this function clears it but returns TRUE, causing the frame to pass on through the rest of the pipeline. Otherwise, this function returns FALSE, causing the frame to be dropped.

This should produce a noticeable performance improvement, but on my system it still had a lot of stuttering. The original example used queue objects for both streams. Removing the queue from the screen stream eliminates the stutter, except when saving images. The queue on the other stream dramatically improves perceived performance by ensuring that JPEG conversion happens in a separate thread from the main display. This also allows you to remove the more complicated register/deregister code from the save-as-JPEG probe and the button press callback, replacing them with simple writes to the save_next_frame boolean. (For this implementation, I didn't worry about the race condition.)

Saving JPEG files

I originally used the jpegenc plug-in for GStreamer. This plug-in element does exactly what it sounds like: it converts image frames to JPEG. Unfortunately, it's not included in the basic N800 distribution. Worse, the package it should be in (the plugins-good distribution for GStreamer) was distributed in a stripped-down form for the N800 that excluded this one plug-in.

There's a simple enough manual workaround for testing purposes: download the plug-in sources under Scratchbox, configure and make them, then copy the plug-in's shared library to the N800. Still a lot of work. The example_camera program went with gdk_pixbuf instead, which is much, much easier to "install" because it's already there.

However, gdk_pixbuf seemed a bit of overkill and I've always been fond of the standard libjpeg library, also included with the N800. There's a sinister reason to go with this library, and that reason will become clear in the next article. For now, I'll just show you the fairly simple code used to compress a file using libjpeg:

Listing 4. Creating a JPEG file
/* Creates a jpeg file from the buffer's raw image data */
static gboolean create_jpeg(unsigned char *data)
{
	FILE *out;
	struct jpeg_compress_struct comp;
	struct jpeg_error_mgr error;
	int i;

	out = fopen("test.jpg", "wb");
	if (!out)
		return FALSE;

	comp.err = jpeg_std_error(&error);
	jpeg_create_compress(&comp);
	jpeg_stdio_dest(&comp, out);
	comp.image_width = 640;
	comp.image_height = 480;
	comp.input_components = 3;
	comp.in_color_space = JCS_RGB;
	jpeg_set_defaults(&comp);
	jpeg_set_quality(&comp, 90, TRUE);
	jpeg_start_compress(&comp, TRUE);
	for (i = 0; i < 480; ++i) {
		/* write one 640-pixel RGB scanline at a time */
		jpeg_write_scanlines(&comp, &data, 1);
		data += (640 * 3);
	}
	jpeg_finish_compress(&comp);
	jpeg_destroy_compress(&comp);
	fclose(out);
	return TRUE;
}

And that's it; our file gets saved as test.jpg.

Next step

Now that the camera application is doing what it needs to do—capturing single frames of video and converting them to a popular format—the next step is to integrate it with the rest of the system and upload files. The third and final installment in this N800 series shows you how.


Related topics

