A Note on Graph Theory Applications in Image Processing: Flood Fill, Image Segmentation, Palette Generation

Aucoeur Ngo
Geek Culture


I recently took a CS course that covered graph theory, data structures and algorithms.

We covered a lot of the real-life problems that graphs can model and help solve, like social networks, map/destination routing, and scheduling, and it got me wondering what other areas solve problems with graphs and graph algorithms that I had never considered before (it turns out: a lot of areas).

One area that really piqued my interest was how graph algorithms are utilized in Image Processing.

Surprising no one, Image Processing is an incredibly extensive topic, chock-full of fascinating and crazy-optimized algorithms, but heck, here are three topics that blow my mind, paired with the most basic version of their corresponding algorithms that I learned from class:

  1. Flood/Bucket Fill with Depth-First Search
  2. Image Segmentation/Edge Detection using Minimum Spanning Trees & Kruskal’s Algorithm
  3. Color Quantization via Union Find & Connected Components

Flood Fill

If you’ve ever used any sort of graphic editing software, from Photoshop to MS Paint, you’re probably familiar with the Bucket tool. Flood Fill is the algorithm behind it: starting from the clicked pixel, it finds every connected pixel that shares the start pixel’s color and changes them all to the target color.

The flood fill algorithm can be modeled as a graph traversal: represent the image as a matrix and treat every pixel as a vertex connected to the pixels around it, above, below, to its left and right, and, in the 8-directional case, to the diagonal neighbors at its corners.
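As a rough sketch of that idea (assuming the image is stored as a 2-D list of color values and sticking to 4-directional connectivity), an iterative depth-first search version might look like this:

```python
def flood_fill(image, start_row, start_col, target_color):
    """Fill the region connected to (start_row, start_col) with target_color.

    Treats the image as a graph: each pixel is a vertex connected to its
    4-directional neighbors, and we run an iterative depth-first search
    from the start pixel over same-colored neighbors.
    """
    rows, cols = len(image), len(image[0])
    source_color = image[start_row][start_col]
    if source_color == target_color:
        return image  # nothing to do, and avoids looping forever

    stack = [(start_row, start_col)]  # DFS uses a stack (LIFO)
    while stack:
        r, c = stack.pop()
        if 0 <= r < rows and 0 <= c < cols and image[r][c] == source_color:
            image[r][c] = target_color
            # Push the 4-connected neighbors: up, down, left, right.
            stack.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return image
```

Swapping the stack for a queue turns this into breadth-first search, and adding the four diagonal offsets gives the 8-directional variant.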

Image Segmentation

http://people.cs.uchicago.edu/~pff/papers/seg-ijcv.pdf

In 2004, Felzenszwalb and Huttenlocher introduced a segmentation method based on Kruskal’s Minimum Spanning Tree algorithm. Edges are considered in increasing order of weight, and the regions containing their endpoint pixels are merged if this doesn’t create a cycle and if the pixels are ‘similar’ enough to the existing regions’ pixels, as judged by a heuristic that compares the edge weight to a per-segment threshold. The algorithm outputs multiple disjoint MSTs, i.e., a forest in which each tree corresponds to a segment.

In the case of image segmentation, the vertices are pixels and the weight of an edge is some measure of the dissimilarity between the two pixels it connects.

https://sandipanweb.wordpress.com/2018/02/25/graph-based-image-segmentation-in-python/
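To make the Kruskal connection concrete, here’s a heavily simplified sketch. The single fixed threshold below is a stand-in for the paper’s actual merge criterion, and building the edges (which pixels get edges, how weights are computed) is left out:

```python
def segment(num_pixels, edges, threshold):
    """Simplified MST-style segmentation.

    edges: list of (weight, pixel_a, pixel_b) tuples, where weight is the
    dissimilarity between the two pixels.
    Returns a list mapping each pixel index to its segment's root id.
    """
    parent = list(range(num_pixels))  # union-find forest: one tree per segment

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # Kruskal: consider edges in increasing order of weight.
    for weight, a, b in sorted(edges):
        root_a, root_b = find(a), find(b)
        # Merge only if the pixels lie in different segments (no cycle)
        # and are "similar enough" according to the heuristic.
        if root_a != root_b and weight <= threshold:
            parent[root_b] = root_a

    # Each remaining root identifies one tree in the forest, i.e. one segment.
    return [find(p) for p in range(num_pixels)]
```

In the actual paper the threshold adapts to each region (roughly, the region’s internal difference plus k divided by its size) rather than being one constant, which is what lets the method keep detail in low-variability areas while ignoring noise in high-variability ones.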

Color Quantization

A graph, containing vertices and connecting edges, is constructed from relevant input data.

The vertices contain information required by the comparison heuristic, while the edges indicate connected ‘neighbors’. An algorithm traverses the graph, labeling the vertices based on the connectivity and relative values of their neighbors. Connectivity is determined by the medium; image graphs, for example, can be 4-connected neighborhood or 8-connected neighborhood.

Following the labeling stage, the graph may be partitioned into subsets, after which the original information can be recovered and processed.

via https://en.wikipedia.org/wiki/Color_quantization
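As a toy illustration of that generic pipeline (build the graph, traverse it, label by connectivity, partition into subsets), here’s a BFS-based labeling pass, assuming a small image stored as a 2-D list, 4-connectivity, and “neighbors are connected if they hold the exact same value” as the comparison heuristic:

```python
from collections import deque

def label_components(image):
    """Label 4-connected regions of identical color with an integer id."""
    rows, cols = len(image), len(image[0])
    labels = [[None] * cols for _ in range(rows)]
    next_label = 0

    for r in range(rows):
        for c in range(cols):
            if labels[r][c] is not None:
                continue
            # Start a new region and flood it with the current label via BFS.
            queue = deque([(r, c)])
            labels[r][c] = next_label
            while queue:
                cr, cc = queue.popleft()
                for nr, nc in ((cr - 1, cc), (cr + 1, cc), (cr, cc - 1), (cr, cc + 1)):
                    if (0 <= nr < rows and 0 <= nc < cols
                            and labels[nr][nc] is None
                            and image[nr][nc] == image[cr][cc]):
                        labels[nr][nc] = next_label
                        queue.append((nr, nc))
            next_label += 1
    return labels
```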

Visual borrowed from Matt DesLauriers’ ATCQ implementation (https://github.com/mattdesl/atcq), but IMO an excellent illustration of what Color Quantization does in general

The Union-Find data structure is well known in the image processing community because of its use in efficient connected component labeling algorithms.

“The goal of this is to produce a palette of disparate colors, by iteratively trimming away very close colors (deleting the least-weighted color) until you end up with the target count.” — Matt DesLauriers
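As a toy sketch of just the idea in that quote (definitely not DesLauriers’ actual ATCQ implementation), assuming the palette is a list of (color, weight) pairs where the weight might be something like a pixel count:

```python
def squared_distance(c1, c2):
    """Squared Euclidean distance between two RGB tuples."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2))

def trim_palette(colors, target_count):
    """Repeatedly drop the least-weighted color of the closest pair.

    colors: list of (rgb_tuple, weight) pairs. Returns a trimmed copy.
    """
    colors = list(colors)
    while len(colors) > target_count:
        # Find the pair of colors that sit closest together (most redundant).
        i, j = min(
            ((a, b) for a in range(len(colors)) for b in range(a + 1, len(colors))),
            key=lambda pair: squared_distance(colors[pair[0]][0], colors[pair[1]][0]),
        )
        # Delete whichever of the two carries less weight.
        loser = i if colors[i][1] < colors[j][1] else j
        del colors[loser]
    return colors
```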

Generic algorithms for union-find functions and getting connected components
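A minimal, generic version of those two pieces, a disjoint-set (union-find) structure with find and union plus a helper that collects connected components from a list of edges, might look like this:

```python
class UnionFind:
    """Disjoint-set data structure with path compression and union by size."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        # Walk up to the root, then compress the path behind us.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False  # already in the same component
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra  # attach the smaller tree under the larger one
        self.size[ra] += self.size[rb]
        return True


def connected_components(n, edges):
    """Group vertices 0..n-1 into components, given a list of (a, b) edges."""
    uf = UnionFind(n)
    for a, b in edges:
        uf.union(a, b)
    components = {}
    for v in range(n):
        components.setdefault(uf.find(v), []).append(v)
    return list(components.values())
```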
