Data visualization with Processing, Part 2: Intermediate data visualization using interfaces, objects, images, and applications

Part 1 of this "Data visualization with Processing" series introduces the Processing language and development environment and demonstrated the language's basic graphical capabilities. This second article explores Processing's more advanced features, including UIs and object-oriented programming. Learn about image processing and how to convert your Processing application into a Java™ applet suitable for the web, and explore an optimization algorithm that lends itself well to visualization.


M. Tim Jones, Consultant Engineer, Independent Author

M. Tim Jones is an embedded firmware architect and the author of Artificial Intelligence: A Systems Approach, GNU/Linux Application Programming, AI Application Programming, and BSD Sockets Programming from a Multi-language Perspective. His engineering background ranges from the development of kernels for geosynchronous spacecraft to embedded systems architecture and networking protocols development. He is a senior architect for Emulex Corp. in Longmont, Colo.

11 January 2011

Also available in Japanese and Portuguese

In Part 1 of this series, you saw how powerful Processing is as a visualization language and environment. This installment continues the exploration of Processing, beginning with a review of its user interaction features.

Keyboard and mouse

Processing not only creates a simple way to visualize data but also supports user input from the mouse and keyboard. Processing makes this possible through a set of functions and callbacks to notify a Processing program when the user has provided input.

Keyboard events

Processing provides a small set of keyboard functions to notify a Processing application that a key has been pressed or released. You can also parse that input further, in the event that the user presses special characters.

Use the keyPressed function to indicate a key-press event. You can define this function in your application, and it will be called when a key-press event occurs. Then use a special variable called key within the callback function to identify the actual key the user pressed. Similarly, when a key is released, you can catch this event using the keyReleased function. Note that both functions yield the same information, but allow you to define when to trigger your action.

Listing 1 shows the keyPressed and keyReleased functions. Within either function, the program can parse two types of user keystrokes: ASCII characters, which are uncoded, and non-ASCII characters (such as the arrow keys), which require coding. For coded characters, Processing sets the key variable to the CODED token to indicate that another special variable called keyCode must be inspected. Therefore, if the key is not CODED, the key variable contains the keystroke. If the key is CODED, the keyCode variable contains the actual character: UP, DOWN, LEFT, RIGHT, ALT, CONTROL, or SHIFT.

Listing 1. The keyPressed and keyReleased callbacks
void keyPressed() {
  if (key == CODED) {
    if (keyCode == DOWN) println("Key pressed: Down arrow");
    if (keyCode == SHIFT) println("Key pressed: Shift key");
  } else {
    println("Key pressed: " + key );
  }
}

void keyReleased() {
  if (key == CODED) {
    if (keyCode == DOWN) println("Key released: Down arrow");
    if (keyCode == SHIFT) println("Key released: Shift key");
  } else {
    println("Key released: " + key );
  }
}

There's also a special keyPressed variable you can use within a Processing function. The keyPressed variable is a Boolean indicating that a key is being pressed (true) or that no key is pressed (false). You can use this variable to manage key events within a Processing application (outside of the typical callback structure).

Mouse events

Mouse events follow a similar structure to keyboard events, except that there are multiple functions to support the variety of mouse-based events that can occur. For mouse events, four basic callbacks can be defined:

  • mousePressed
  • mouseReleased
  • mouseMoved
  • mouseDragged

The mousePressed callback function is called when the user presses a mouse button. Within the callback function, you can identify the particular mouse button through the mouseButton variable (LEFT, CENTER, or RIGHT). The mouseReleased callback function is called when the mouse button is released. A press followed by a release also triggers mouseClicked (another callback function you can use). The mouseMoved function is called every time the mouse moves while no mouse button is pressed. Finally, the mouseDragged function is called when the mouse is moved while a mouse button is held down.

A number of special variables are available for use, as well. First, the mouseX and mouseY variables contain the current location of the mouse. You can capture the previous position of the mouse (from the prior frame) using the pmouseX and pmouseY variables. You can use the mousePressed variable within a Processing application to detect whether a mouse button is currently pressed. Combine this variable with mouseButton to identify the button currently pressed. Listing 2 shows these callbacks and special variables.

Listing 2. Demonstrating basic mouse event callbacks
int curx, cury;

void setup() {
  size(100, 100);
  curx = cury = 0;
}

void mousePressed() {
  println( "Mouse Pressed at " + mouseX + " " + mouseY );
  if (mousePressed && (mouseButton == LEFT)) {
    curx = mouseX;
    cury = mouseY;
  }
}

void mouseReleased() {
  println( "Mouse Released at " + mouseX + " " + mouseY );
}

void mouseMoved() {
  println( "Mouse moved, now at " + mouseX + " " + mouseY );
}

void mouseDragged() {
  println( "Mouse moved from " + curx + " " + cury + 
           " to " + mouseX + " " + mouseY );
}

These mouse and keyboard event functions provide the basis for building UIs. After placing objects on the display, you can use the mouse event functions to identify whether buttons were pressed (that is, whether the mouse cursor was within the region of the object when the mouse button was pressed).
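For example, a button can be modeled as a rectangle, and a press counts as a click on that button only when the cursor coordinates (Processing's mouseX and mouseY) fall inside its bounds. The hit test itself is plain bounds checking, sketched here in Java (the Button class and contains method are illustrative names, not part of the Processing API):

```java
// Minimal hit test for a rectangular button: a mouse press counts as a
// click only when the cursor lies inside the button's bounds.
public class Button {
    final int x, y, w, h;   // upper-left corner plus width and height

    Button(int x, int y, int w, int h) {
        this.x = x; this.y = y; this.w = w; this.h = h;
    }

    // True when (mx, my) -- for example, mouseX/mouseY at press time --
    // lies inside the button's rectangle.
    boolean contains(int mx, int my) {
        return mx >= x && mx < x + w && my >= y && my < y + h;
    }

    public static void main(String[] args) {
        Button b = new Button(10, 10, 80, 24);
        System.out.println(b.contains(50, 20));   // inside: true
        System.out.println(b.contains(5, 20));    // left of the button: false
    }
}
```

In a sketch, you would call a test like this from the mousePressed callback for each on-screen object.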

Object-oriented programming

Processing permits the use of object-oriented programming (OOP) techniques to make Processing applications simpler to develop and more maintainable. Processing itself is object-oriented, but permits the development of applications that ignore object concepts. OOP is important because it provides advantages like information hiding, modularity, and encapsulation.

Processing, like other object-oriented languages, uses the concept of a class to define an object template. An object created from a class maintains a set of data and associated operations that can be performed on that data. Let's begin with the development of a simple class, then extend that example to incorporate multiple objects of that class.

The class in Processing defines a set of data and functions (or methods) that apply to that data. The data in this example is a circle that has coordinates in the 2-D space (x and y) and a diameter. You can initialize the circle with the x and y coordinates and a diameter of 1, which in this example simply says that the circle is now used. You provide this information using the init function. Over time, the circle grows. If the diameter is greater than zero (indicating that it's a used object), then you increment the size of the circle by incrementing its diameter. This incrementing is provided by the spread function. Finally, the show function exposes the circle in the display. As long as the diameter is greater than zero (a valid circle), you create the circle using the ellipse function. Once the circle has grown to a particular size, you cancel it by setting its diameter to zero. This sample class is shown in Listing 3.

Listing 3. Sample class of a spreading drop
class Drop {

  int x, y;         // Coordinate (center of circle)
  int diameter;     // Diameter of circle (unused == 0).

  void init( int ix, int iy ) {
    x = ix;
    y = iy;
    diameter = 1;
  }

  void spread() {
    if (diameter > 0) diameter += 1;
  }

  void show() {
    if (diameter > 0) {
      ellipse( x, y, diameter, diameter );
      if (diameter > 500) diameter = 0;
    }
  }
}

Now let's look at how to use the Drop class to build some graphics that employ user input. Listing 4 presents the application that uses the Drop class. The first step is to create an array of drops (called drops). Follow this with a few definitions (number of drops and a working index in the drops array). In the setup function, you create your display window and initialize the drops array (all diameters are zero, or unused). The draw function is quite simple, as the core functionality of the drop is within the class itself (spread and show, from Listing 3). Finally, add the UI portion, which allows the user to define where the drop begins. The mousePressed callback function initializes the drop with the current mouse position (which now has a diameter and is used), then increments the current drop index.

Listing 4. Application to build multiple user-defined drops
Drop[] drops;
int numDrops = 30;
int curDrop = 0;

void setup() {
  size(400, 400);
  drops = new Drop[numDrops];
  for (int i = 0 ; i < numDrops ; i++) {
    drops[i] = new Drop();
    drops[i].diameter = 0;
  }
}

void draw() {
  for (int i = 0 ; i < numDrops ; i++) {
    drops[i].spread();
    drops[i].show();
  }
}

void mousePressed() {
  drops[curDrop].init( mouseX, mouseY );
  if (++curDrop == numDrops) curDrop = 0;
}

You can see the output of the application from Listing 3 and Listing 4 in Figure 1. The mouse was clicked a number of times, resulting in spreading drops, as shown.

Figure 1. Display window of the application in Listings 3 and 4
A black background with an abstract drawing of overlapping white circles of varying size giving a cloud-like shape

Image processing

Processing provides useful and interesting capabilities for image processing. This section explores image filtering, blending, and support for user-defined image processing using pixels.

Image filtering

Processing provides canned image-processing capabilities through the filter function. This function applies a filtering mode directly to the display window. Listing 5 shows the use of a filter within a simple Processing application; Figure 2 shows the various types of output. Note that Listing 5 performs only the BLUR filter, but other possibilities are shown in Figure 2 (each with its code counterpart).

As filter operates on the contents of the display window, you simply need to provide the filter mode (the type of filter operation to perform) and quality (that is, the argument for the filter mode). Listing 5 begins with a declaration of the PImage type, which is the data type for storing images. Next, within the setup function, you load your particular image into the PImage data type (img1). With the image loaded, you know the size of the image, which you use to set up the size of the window (using the width and height attributes of the PImage instance). Within the draw function, you display the image with a call to image. The image function requests that the image be displayed to the display window, including the x and y coordinates for the upper-left corner of the image. (Note that it's also possible here to specify the width and height of the image.) Finally, you perform your particular filter — in this case, the BLUR mode. See Figure 2 for other filter options and their results as compared to the original image.

Listing 5. A simple filter application
PImage img1;

void setup() {
  img1 = loadImage("alaska1.png");
  size(img1.width, img1.height);
}

void draw() {
  image(img1, 0, 0);
  filter(BLUR, 2);
}

As shown in Figure 2, Processing provides some canned image-processing operations commonly found in image manipulation applications. But you can also manipulate images on a pixel-by-pixel basis.

Figure 2. Examples of filter operations
Montage of photos shows the results of various kinds of filtering such as BLUR, GRAY and INVERT

Not shown here are other possible filter operations, including:

  • OPAQUE — Sets the alpha channel to opaque
  • ERODE — Reduces the light areas based on the provided quality parameter
  • DILATE — Increases the light areas based on the provided quality parameter

The Resources section provides information about additional filter modes.

Image blending

Images can be blended, which occurs a pixel at a time for each image (or region of an image, if desired). This functionality mimics some of the features found in Adobe® Illustrator® and Photoshop®.

Listing 6 shows the ADD blending operation on two images (shown in Figure 3). As shown, two images are loaded with loadImage, then img2 is blended with img1 using the ADD mode. In the blend call, you specify the source image to blend (img2) into the destination (img1). The next four parameters are the source image's x and y coordinates and its width and height. The next four parameters are the destination's upper-left corner and the destination region's width and height. Finally, you define the mode parameter. In this case, you request an ADD blend, which implements the operation dest_img_pixel += src_img_pixel*factor (capped at 255).

Listing 6. Blending images
void setup() {
  size(237, 178);
}

void draw() {
  PImage img1 = loadImage("alaska1.png");
  PImage img2 = loadImage("alaska2.png");
  img1.blend( img2, 0, 0, 237, 178, 0, 0, 237, 178, ADD );
  image(img1, 0, 0);
}

Other operations include BLEND (no ceiling), SUBTRACT, DARKEST (take the darkest pixel), LIGHTEST (take the lightest pixel), MULTIPLY (which darkens the image), and numerous others. The Resources section provides links to additional blend-mode information.
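Per channel, these blend modes reduce to simple arithmetic on 8-bit values. Here is a sketch of the ADD and LIGHTEST rules in plain Java; the factor parameter mirrors the formula given above, and the class and method names are illustrative, not part of the Processing API:

```java
// Per-channel blend arithmetic for two of Processing's blend modes,
// applied to single 8-bit channel values (0..255).
public class BlendMath {
    // ADD: dest += src * factor, capped at 255.
    static int add(int dest, int src, float factor) {
        return Math.min(255, dest + Math.round(src * factor));
    }

    // LIGHTEST: keep whichever channel value is lighter (larger).
    static int lightest(int dest, int src) {
        return Math.max(dest, src);
    }

    public static void main(String[] args) {
        System.out.println(add(200, 100, 1.0f));   // 300 caps at 255
        System.out.println(add(50, 100, 0.5f));    // 50 + 50 = 100
        System.out.println(lightest(30, 200));     // 200
    }
}
```

Processing applies the chosen rule to every pixel (and every channel) in the blended region.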

Figure 3. Image output of the blend operation
Montage shows two original images of a seagull and a natural bay with a third photo of the two blended together

Pixels array

The final image-processing technique takes a more manual approach. In this mode, you can manipulate each pixel individually. The display window is made up of a 1-D array of color types. After displaying an image (as shown in Listing 7 with the background function), you can access the pixels of the display window through the pixels array. The loadPixels function loads the display window into the pixels array, while the updatePixels function updates the display window from the pixels array.

Listing 7. Image manipulation with the pixel map
void setup() {
  size(237, 178);
}

void draw() {
  PImage img = loadImage("alaska2.png");
  background(img);
  loadPixels();
  for (int i = 0 ; i < img.width*img.height ; i++) {
    color p = pixels[i];
    float r = red(p)/2;
    float g = green(p);
    float b = blue(p);
    pixels[i] = color(r, g, b);
  }
  updatePixels();
}

While the display window is in the pixels array, you can manipulate it using a variety of means. Listing 7 shows how to modify the display by first creating a color type instance from each pixel (variable p). You can break this variable down further into the individual colors using the red, green, and blue functions. In this example, you halve the red component of the image in the display window, then store the pixel back using the color function, which takes the individual colors and reconstructs the pixel. The before and after of Listing 7 are shown in Figure 4.
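Under the hood, Processing stores each color as a 32-bit integer packed as alpha, red, green, and blue bytes (0xAARRGGBB), so red, green, blue, and color amount to shifts and masks. The red-halving step from Listing 7 can be sketched in plain Java (the class and helper names are illustrative):

```java
// A Processing color is a 32-bit int packed as 0xAARRGGBB; the
// red()/green()/blue() accessors and color() reduce to shifts and masks.
public class PixelMath {
    static int red(int c)   { return (c >> 16) & 0xFF; }
    static int green(int c) { return (c >> 8) & 0xFF; }
    static int blue(int c)  { return c & 0xFF; }

    // Rebuild an opaque color from its components.
    static int color(int r, int g, int b) {
        return 0xFF000000 | (r << 16) | (g << 8) | b;
    }

    // The manipulation from Listing 7: halve the red channel only.
    static int halveRed(int c) {
        return color(red(c) / 2, green(c), blue(c));
    }

    public static void main(String[] args) {
        int p = color(200, 80, 40);
        System.out.printf("0x%08X%n", halveRed(p)); // red 200 -> 100
    }
}
```

Reducing red while leaving green and blue alone is why the result in Figure 4 shifts toward blue.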

Figure 4. Manual image manipulation with pixels
Montage shows the original photo of a seagull with a bluer picture after the red has been reduced

Particle swarms

Let's look at an application that demonstrates some of the features of Processing — in particular, OOP. This example comes from numerical optimization and machine learning.

Particle swarm optimization is an optimization technique inspired by nature. It uses a population of candidate solutions (particles) whose movement is guided by the best solutions found so far in the search space (both each particle's personal best and the swarm's global best). Particle Swarm Optimization (PSO) is simple and provides interesting visual representations of a search space, making it ideal for exploring a data visualization language (see Resources for more information). The swarm moves across a 2-D space looking for the global optimum.

Optimization techniques

Particle swarm optimization is a relatively new optimization method that is useful for function optimization and lends itself to visualization. A number of other nature-inspired methods exist with similar capabilities, such as ant colony optimization, which uses swarms of ants to solve path-finding problems, and genetic algorithms, which use populations of candidate solutions for general problem solving.

PSO implementation

The Processing implementation of PSO is made up of two classes. The first is the Particle class, which implements an individual particle. Per the PSO algorithm, each particle maintains its current location, velocity, current and best fitness, and best personal solution (see Listing 8). The Particle class provides a number of methods to support the PSO, including a constructor (which randomly places the particle in the search space), a function to calculate the fitness (in this case, of the sombrero function, where fitness is z), an update function (to move the particle based on its current velocity vector), and a show function that displays the particle in the search space. Three other helper functions expose elements of the particle to the user (fitness and x and y location).
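The sombrero fitness surface depends only on the distance r from the center of the search space: fitness = sin(r)/r, which peaks at 1 at the center and ripples outward. A quick plain-Java sketch of the function (with an explicit guard for r = 0, where the limit of sin(r)/r is 1; the Processing listing leaves that case implicit):

```java
// The sombrero fitness surface: height depends only on the distance r
// from the optimum at the origin, fitness = sin(r)/r (peak value 1).
public class Sombrero {
    static double fitness(double x, double y) {
        double r = Math.sqrt(x * x + y * y);
        return (r == 0.0) ? 1.0 : Math.sin(r) / r;  // limit as r -> 0 is 1
    }

    public static void main(String[] args) {
        System.out.println(fitness(0, 0));                  // global optimum
        System.out.println(fitness(1, 0) > fitness(2, 0));  // falls off near peak
    }
}
```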

Listing 8. The Particle class for the PSO
class Particle {

  float locX, locY;
  float velX = 0.0, velY = 0.0;
  float fitness = 0.0;
  float bestFitness = -10.0;
  float pbestX = 0.0, pbestY = 0.0; // Best particle solution
  float vMax = 10.0; // Max velocity
  float dt = 0.1;    // Used to constrain changes to each particle

  Particle() {
    locX = random( dimension );
    locY = random( dimension );
  }

  void calculateFitness() {
    // Clip the particles
    if ((locX < 0) || (locX > dimension) || 
        (locY < 0) || (locY > dimension)) fitness = 0;
    else {
      // Calculate fitness based on the sombrero function.
      float x = locX - (dimension / 2);
      float y = locY - (dimension / 2);
      float r = sqrt( (x*x) + (y*y) );
      fitness = (sin(r)/r);
    }

    // Maintain the best particle solution
    if (fitness > bestFitness) {
      pbestX = locX; pbestY = locY;
      bestFitness = fitness;
    }
  }

  void update( float gbestX, float gbestY, float c1, float c2 ) {
    // Calculate particle.x velocity and new location
    velX = velX + (c1 * random(1) * (gbestX - locX)) + 
                  (c2 * random(1) * (pbestX - locX));
    if (velX > vMax) velX = vMax;
    if (velX < -vMax) velX = -vMax;
    locX = locX + velX*dt;

    // Calculate particle.y velocity and new location
    velY = velY + (c1 * random(1) * (gbestY - locY)) + 
                  (c2 * random(1) * (pbestY - locY));
    if (velY > vMax) velY = vMax;
    if (velY < -vMax) velY = -vMax;
    locY = locY + velY*dt;
  }

  void show() {
    point( (int)locX, (int)locY );
  }

  float pFitness() {
    return fitness;
  }

  float xLocation() {
    return locX;
  }

  float yLocation() {
    return locY;
  }
}


Next is the class for the swarm (see Listing 9), which maintains an array of Particle objects (created and initialized in the swarm constructor), the current global best solution (x and y coordinates), and two learning factors. The learning factors control whether a particle swarms (searches) around its personal best solution (c2) or around the global best solution (c1). Each factor indicates the influence that the corresponding best solution has on the particle.

The run method performs a step in the PSO simulation. First, it calculates the fitness of each particle in the swarm. Then, it finds the global best solution. With this information, it calls update to move a particle and show to display it for each particle in the swarm.
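The per-axis velocity rule at the heart of the particle's update method can be isolated as a small pure function and checked numerically. This plain-Java sketch passes the two random factors in explicitly so the behavior is deterministic; the function name and signature are illustrative, not part of the article's Processing code:

```java
// One axis of the PSO velocity update: accelerate toward the global and
// personal best solutions, then clamp the result to [-vMax, vMax].
public class PsoUpdate {
    static float updateVelocity(float vel, float loc, float gbest, float pbest,
                                float c1, float c2, float r1, float r2,
                                float vMax) {
        vel += (c1 * r1 * (gbest - loc)) + (c2 * r2 * (pbest - loc));
        if (vel > vMax) vel = vMax;
        if (vel < -vMax) vel = -vMax;
        return vel;
    }

    public static void main(String[] args) {
        // A particle far to the left of both bests accelerates right,
        // but the clamp keeps the per-step velocity bounded.
        float v = updateVelocity(0f, 0f, 250f, 250f, 0.1f, 2.0f, 1f, 1f, 10f);
        System.out.println(v);  // clamped to vMax = 10
    }
}
```

The clamp (vMax) and the small time step (dt, in Listing 8) are what keep the swarm from overshooting wildly and give the smooth trails seen later in Figure 6.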

Listing 9. Swarm class for the PSO
class Swarm {

  float gbestX = 0.0, gbestY = 0.0;   // Global best solution
  float c1 = 0.1, c2 = 2.0;           // Learning factors

  Particle swarm[];

  Swarm() {
    swarm = new Particle[numParticles];
    for (int i = 0 ; i < numParticles ; i++) {
      swarm[i] = new Particle();
    }
  }

  void run() {
    // Calculate each particle's fitness
    for (int i = 0 ; i < numParticles ; i++) {
      swarm[i].calculateFitness();
    }

    // Find the swarm's global best solution
    findGlobalBest();

    // Update each particle and display it.
    for (int i = 0 ; i < numParticles ; i++) {
      swarm[i].update( gbestX, gbestY, c1, c2 );
      swarm[i].show();
    }
  }

  void findGlobalBest() {
    float fitness = -10.0;
    for (int i = 0 ; i < numParticles ; i++) {
      if (swarm[i].pFitness() > fitness) {
        gbestX = swarm[i].xLocation(); gbestY = swarm[i].yLocation();
        fitness = swarm[i].pFitness();
      }
    }
  }

  void showGlobalBest() {
    println("Best Particle Result: " + gbestX + " " + gbestY);
  }
}

Finally, the user application that uses the classes defined here is shown in Listing 10. This application defines some of the configurable items for the PSO, such as the number of particles, the size of the display window (dimension), and the swarm itself. The setup function prepares the window and colors, and draw performs the invocation of the swarm, emitting the global best solution every 10 iterations. As this simulation uses the sombrero function for optimization, the optimum is the center of the display.

Listing 10. Application to drive the PSO
// Particle Swarm Optimization
int numParticles = 200;
int iteration = 0;
float dimension = 500;
Swarm mySwarm = new Swarm();

void setup() {
  size( int(dimension), int(dimension) );
}

void draw() {
  background(255); // remove for trails
  mySwarm.run();
  if ((iteration++ % 10) == 0) mySwarm.showGlobalBest();
}

The following two figures illustrate the output of the PSO simulation in Processing. Figure 5 shows a time lapse of the PSO, while Figure 6 shows the PSO with trails, which identifies the path of the particles toward the optimum.

Figure 5. Time lapse of the PSO simulation
Figure shows a collection of pictures with time markers, starting as a series of scattered dots that over time collect into the center

From the trails shown in Figure 6, it's easy to see the path of the particles as they converge toward the optimal solution at the center. You can see loops in some of the paths, indicating that the particle is swarming its best local solution before continuing toward the global optimum.

Figure 6. Trails of the PSO simulation
A white background with a series of black scribbles emanating from the center in a burst pattern

Application conversion

Recall from Part 1 that Processing code is converted to the Java language before execution, which makes it easy to convert a Processing application into a Java applet or application. To perform this conversion, click File in the Processing Development Environment (PDE), and then click Export to export an applet or Export Application to export a Java application. The sketchbook directory will then contain the code and associated files for this operation. Listing 11 shows the exported applet (the applet subdirectory), the exported application (three application directories for the particular target), and the source itself (pso.pde).

Listing 11. The Processing Sketchbook subdirectory after export
mtj@ubuntu:~/sketchbook/pso$ ls
applet  application.linux  application.macosx  pso.pde

In the applet subdirectory, you can find the original processing source, the converted Java source, the JAR, and a sample index.html file to view the result.

Going further

This second installment explored UIs in the context of mouse and keyboard events, looked at Processing's approach to OOP, and explored a number of additional Processing applications. Part 3 looks at the 3-D capabilities of Processing and develops a visualization application that uses networking for data collection.



Resources

  • To learn more about Processing, download the latest version, and find interesting examples and tutorials. Once you've developed your application, be sure to share it.
  • Be sure to read about filter modes and blend options.
  • PSO is a relatively new optimization technique that uses swarms of particles to solve optimization problems. You can learn more about PSO and related techniques, such as termite swarms and ant colonies.


