May 27, 2015


Halftoning in the non-photorealistic rendering space is a way to simulate continuous tones through the use of dots that vary in size and/or spacing. Many halftoning techniques use some form of algorithm to change a selected pixel or set of pixels into a combination of two colors (usually contrasting ones, such as black and white). Stippling, on the other hand, starts with a set of dots. The dots are manually moved to locations on the image depending on the original color of the input image.

There are many ways to stipple an image, each with its own pros and cons. The one that I implemented is stippling using the weighted Voronoi method. The paper that motivated this project can be found here. The paper provides a more mathematical outline of how it works. The awesome thing about this technique is that it generates an evenly spaced distribution of stipples, with closer spacing in darker areas, to create the stippled image effect. In summary, the technique works somewhat like this on a computer:

  1. Convert the given image that we want to stipple into grayscale so that every pixel can be mapped to a value in the range 0 to 1.
  2. Given a number of stipples and an image to stipple, create randomly distributed dots using some sampling algorithm. This can be as simple as distributing the points uniformly at random anywhere in the image, or as involved as rejection sampling or stratified sampling.
  3. Using the stipple locations as points on a plane, create a Voronoi diagram for the image.
  4. For each stipple, move it to the weighted centroid of its Voronoi region. What is a weighted centroid? Well, if we move every stipple to its actual centroid, the points slowly converge (to a certain threshold; it hasn't actually been proven that the process fully converges yet) to an evenly spaced layout. This is what the NPR community loves to call Lloyd's algorithm, named after Stuart P. Lloyd. The paper tweaks Lloyd's method and adds weights to the calculation. Darker pixels (those closer to the value of 0) get more weight. Hence, the weighted centroid orients itself towards darker pixels, producing the stipple effect.
  5. Repeat steps 3 and 4 in a loop until the stipple movement falls below a certain threshold.
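To make the weighted relaxation step concrete, here's a minimal sketch in plain Java (Processing itself is Java-based). It assumes `darkness[y][x]` holds 1 minus the normalized brightness, and it finds each pixel's Voronoi region by brute-force nearest-site search rather than the GPU trick my implementation uses; the names are mine, not from the paper.

```java
// One weighted Lloyd iteration over a grayscale image:
// each stipple moves to the darkness-weighted centroid of its Voronoi region.
class WeightedLloyd {
    static double[][] relaxOnce(double[][] stipples, double[][] darkness) {
        int h = darkness.length, w = darkness[0].length, n = stipples.length;
        double[] sumX = new double[n], sumY = new double[n], sumW = new double[n];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                // Assign this pixel to its nearest stipple (its Voronoi region),
                // by brute force for clarity.
                int best = 0;
                double bestD = Double.MAX_VALUE;
                for (int i = 0; i < n; i++) {
                    double dx = stipples[i][0] - x, dy = stipples[i][1] - y;
                    double d = dx * dx + dy * dy;
                    if (d < bestD) { bestD = d; best = i; }
                }
                double wgt = darkness[y][x]; // darker pixels pull harder
                sumX[best] += x * wgt;
                sumY[best] += y * wgt;
                sumW[best] += wgt;
            }
        }
        double[][] moved = new double[n][2];
        for (int i = 0; i < n; i++) {
            if (sumW[i] > 0) {               // weighted centroid of the region
                moved[i][0] = sumX[i] / sumW[i];
                moved[i][1] = sumY[i] / sumW[i];
            } else {                         // empty region: stay put
                moved[i] = stipples[i].clone();
            }
        }
        return moved;
    }
}
```

In the real loop, step 5 just calls this repeatedly until the largest per-stipple move drops below a threshold.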


The goal of the project was to understand one or two different ways of doing stippling for non-photorealistic purposes. Ideally, we'd like to extend the research further. However, given the amount of time for this project, the more realistic goal was to see the technique in action and make some improvements. Hence, in order to focus on the technique and algorithm, I decided to use Processing, a development environment and programming language designed specifically for visual displays of 2D and 3D art. Processing has libraries that allow easy reading of external images, drawing, and outputting to PDF and other file formats.

In terms of the implementation of each step, below is a general description. The numbers correspond to the numbers in the overview section above.

  1. Processing has a filter() function that can turn any image into a grayscale image. The pixels of the image can also be queried using brightness(), which returns a brightness value between 0 and 255 that can easily be mapped to 0-1.
  2. The default implementation of this paper uses simple random sampling to distribute the stipples around the image. However, stratified sampling can also be toggled on in my implementation. In statistics, the idea of stratified sampling is to sample each subpopulation independently, so that each one contributes a controlled number of samples. In this case, I can split the pixels up into different brightness levels. From there, I can put more stipples in the darker areas from the beginning.
  3. In the paper, a trick is mentioned for producing Voronoi regions. Instead of using the CPU to calculate the regions, we can use the GPU and its depth buffer to do the work for us. For each stipple, we draw a cone pointing away from us, far enough away to cover all regions along the plane. Then, we can use an orthographic projection to display a good approximation of the Voronoi regions. Usually, with APIs like OpenGL, the GPU renders the pixels and uses the depth buffer to figure out what's in front. Then, we'd need a way to read the pixels back from the frame buffer to the CPU. Luckily, Processing lets you access the pixels and has a depth buffer too! My implementation uses this trick to draw the Voronoi diagram onto another image buffer that is used later for the stipple location calculations.
  4. In this step, I follow the weighted Voronoi method to calculate the stipple locations.
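Step 2's stratified sampling could be sketched like this in plain Java. Bucketing pixels into darkness strata and weighting each stratum by its midpoint darkness times its pixel count are my own choices for illustration, not necessarily what my Processing sketch does:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Stratified initial sampling: bucket pixels by darkness level, then hand
// out proportionally more stipples to the darker buckets.
class StratifiedSampler {
    static List<int[]> sample(double[][] darkness, int nStipples, int nStrata, long seed) {
        int h = darkness.length, w = darkness[0].length;
        // Group pixel coordinates into strata by darkness level.
        List<List<int[]>> strata = new ArrayList<>();
        for (int s = 0; s < nStrata; s++) strata.add(new ArrayList<>());
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                int s = Math.min((int) (darkness[y][x] * nStrata), nStrata - 1);
                strata.get(s).add(new int[]{x, y});
            }
        // Weight each stratum by its midpoint darkness times its pixel count.
        double[] weight = new double[nStrata];
        double total = 0;
        for (int s = 0; s < nStrata; s++) {
            weight[s] = strata.get(s).size() * ((s + 0.5) / nStrata);
            total += weight[s];
        }
        // Draw each stratum's quota of stipples uniformly from its own pixels.
        // (Rounding means the final count can differ slightly from nStipples.)
        Random rng = new Random(seed);
        List<int[]> stipples = new ArrayList<>();
        for (int s = 0; s < nStrata; s++) {
            if (strata.get(s).isEmpty()) continue;
            int quota = (int) Math.round(nStipples * weight[s] / total);
            for (int i = 0; i < quota; i++) {
                int[] p = strata.get(s).get(rng.nextInt(strata.get(s).size()));
                stipples.add(p.clone());
            }
        }
        return stipples;
    }
}
```

The effect is that dark areas start out with more stipples, so the relaxation has less rearranging to do.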


After implementing the technique, I really wanted to see what would happen if different stipples were added in different locations, and to have a tool to affect stipples in certain areas. The difference between images produced with simple random sampling and with stratified sampling was very noticeable. Hence, I wanted more control over how my stipples look. It turns out other research has been done on interactive stippling techniques, which can be found here. That paper describes a stipple modification tool that allows users to manually make their own stippled drawings. It mentions several brush modification tools, which I have modified to work with the weighted Voronoi technique:

  1. Edit brushes. These add or remove points. In my implementation, users add points by drag-selecting the areas where they want them.
  2. Edit brushes with fixed dots. These add points that aren't affected by later movements.
  3. Relaxation brush. This brush allows the user to relax the positions of the stipple dots. In the weighted Voronoi technique, relaxation always occurs anyway. Hence, instead of performing a general relaxation, I use this tool to relax the fixed dots created by the previous brush.
  4. Jitter brush. This brush lets users jitter stipples by a certain offset, given as a percentage of the average point-to-point distance.
  5. Shape brush. This brush allows users to modify the parameters of the dots themselves. Two approaches were mentioned in the paper: one modifies the size, and the other deforms the circular shape into another shape. I decided to go with size modification since I felt it would produce the most visible difference in the stipple simulations.
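As a rough illustration, the jitter brush boils down to something like the following plain Java sketch. The average-spacing parameter and the uniform per-axis offset are assumptions for illustration, not the exact formula my tool uses:

```java
import java.util.Random;

// Jitter brush: every stipple within the brush radius is offset by a random
// amount, scaled as a percentage of the average point-to-point spacing.
class JitterBrush {
    static void jitter(double[][] stipples, double mx, double my, double radius,
                       double percent, double avgSpacing, Random rng) {
        double maxOffset = avgSpacing * percent;
        for (double[] p : stipples) {
            double dx = p[0] - mx, dy = p[1] - my;
            if (dx * dx + dy * dy <= radius * radius) {
                // Uniform random offset in [-maxOffset, maxOffset] per axis.
                p[0] += (rng.nextDouble() * 2 - 1) * maxOffset;
                p[1] += (rng.nextDouble() * 2 - 1) * maxOffset;
            }
        }
    }
}
```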

In order to make my extension work, I needed to modify a few things in the implementation. The core algorithm actually stayed the same. The main component that changed was the list that keeps the stipple locations. Adding and removing stipples meant either appending new stipples at the mouse location to the list, or finding the stipples in and around the mouse location and removing them. A new property was added to stipples for the fixed dots, so that when we calculate the weighted Voronoi relaxation, the fixed points are not included in the Voronoi region rendering. The relaxation brush takes these fixed points and relaxes them relative to all other points, including normal points. The jitter brush takes the points around the mouse and updates their locations. The jitter brush didn't seem to be very useful, since the weighted Voronoi technique rearranges the stipples back into an evenly spaced distribution after they've been jittered. Hence, the only difference is that stipples end up at different locations, but with the same spacing. Finally, the shape brush only modifies the stipple's size property, so when drawing, the stipples appear bigger or smaller. Below is a video showing the interaction:
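The data-structure change described above might look something like this in plain Java. The class and field names (`fixed`, `size`, `removeNear`) are mine, not lifted from the actual sketch:

```java
import java.util.ArrayList;
import java.util.List;

// A stipple carries its position plus the two properties the extension
// added: a `fixed` flag (excluded from relaxation) and a per-stipple size.
class Stipple {
    double x, y;
    double size = 1.0;     // radius multiplier used when drawing
    boolean fixed = false; // fixed stipples skip the Voronoi relaxation

    Stipple(double x, double y) { this.x = x; this.y = y; }
}

class StippleList {
    final List<Stipple> stipples = new ArrayList<>();

    void add(double x, double y, boolean fixed) {
        Stipple s = new Stipple(x, y);
        s.fixed = fixed;
        stipples.add(s);
    }

    // Remove every stipple within `radius` of the mouse (the erase brush).
    void removeNear(double mx, double my, double radius) {
        stipples.removeIf(s -> {
            double dx = s.x - mx, dy = s.y - my;
            return dx * dx + dy * dy <= radius * radius;
        });
    }

    // Only the movable stipples participate in the Voronoi relaxation.
    List<Stipple> movable() {
        List<Stipple> out = new ArrayList<>();
        for (Stipple s : stipples) if (!s.fixed) out.add(s);
        return out;
    }
}
```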

Overall, the extension to the project was a success. I was able to tell the difference between stipples created by the regular weighted Voronoi technique and stipples created with the technique plus the extension, as seen below. I was quite disappointed with the jitter brush, though it may be useful for other techniques that do not perform relaxation. It does look cool interactively, almost like a miniature fluid simulation! :)

User Interface

This is the section that I wish I had more time to polish, since the main focus of the project was on the technique rather than on user interaction with it. Users can specify an input file before running the program. Later, I may want to add an upload section to the main program. The extension was mostly interactive. There is currently no visible user interface. However, there are hidden keys (yay, best design ever! not really though) that users can press to get the different brushes. Below are the keyboard shortcuts for brush input. Each key puts the user in a state, much like a brush in Photoshop, that they can use to interact with the stippled drawing.

  • 'n' - normal state
  • 'a' - edit add normal stipple
  • 'f' - edit add fixed stipple
  • 'e' - edit remove stipple
  • 'r' - relax brush
  • 'j' - jitter brush
  • '2' - size brush enlarge stipple size
  • '1' - size brush reduce stipple size
  • 'i' - toggle show original image underneath
  • '+' - increase brush size
  • '-' - decrease brush size
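For illustration, the shortcut handling above could be sketched as a plain Java state holder; in the actual Processing sketch this logic would live in keyPressed(). The enum names and the brush-size step are my assumptions:

```java
// Maps the keyboard shortcuts listed above onto brush states.
class BrushState {
    enum Mode { NORMAL, ADD, ADD_FIXED, REMOVE, RELAX, JITTER, ENLARGE, REDUCE }

    Mode mode = Mode.NORMAL;
    double brushSize = 20.0;      // assumed default, in pixels
    boolean showOriginal = false; // 'i' toggles the underlying image

    void keyPressed(char key) {
        switch (key) {
            case 'n': mode = Mode.NORMAL; break;
            case 'a': mode = Mode.ADD; break;        // add normal stipple
            case 'f': mode = Mode.ADD_FIXED; break;  // add fixed stipple
            case 'e': mode = Mode.REMOVE; break;     // remove stipple
            case 'r': mode = Mode.RELAX; break;
            case 'j': mode = Mode.JITTER; break;
            case '2': mode = Mode.ENLARGE; break;    // enlarge stipple size
            case '1': mode = Mode.REDUCE; break;     // reduce stipple size
            case 'i': showOriginal = !showOriginal; break;
            case '+': brushSize += 5; break;
            case '-': brushSize = Math.max(5, brushSize - 5); break;
        }
    }
}
```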

Sample output