Jason Israel and Michael Kamm
We would like to implement a parallel example-based style transfer algorithm. We are going to do this on the NVIDIA GPUs in the lab. We are looking to achieve speedup relative to OpenCV's FLANN (Fast Library for Approximate Nearest Neighbors).
Example-based style transfer is the process of transferring the "style" of one image to another. For example, if we have an image of a watercolor painting and an image of a real-world photograph, we'd like to transfer the "style" of the painting to the photograph while preserving the photograph's structure (images below taken from http://www.mrl.nyu.edu/projects/image-analogies/watercolor.html).
[Source Image] [Unfiltered Destination Image] [Filtered Destination Image]
An early pixel-based solution to example-based style transfer uses "Image Analogies". An image analogy takes an unfiltered image (A), a styled version of that image (A'), and a third image (B), and applies the style to B to create the result (B'). In the form of an analogy, this is A : A' :: B : B'.
A simple description of the Image Analogy algorithm is as follows:
Compute Gaussian pyramids for A, A', B
for each level l, from coarsest to finest:
    for each pixel q of B'_l (scan-line order):
        p = BestMatch(A, A', B, B', l, q)
        B'_l(q) = A'_l(p)
return B'
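The pyramid construction in the first step can be sketched in plain C++. This is only an illustration under assumed conventions: a hypothetical flat, row-major grayscale `Image` struct, and a 2x2 box average standing in for the Gaussian kernel to keep the sketch short.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Grayscale image as a flat row-major float buffer (hypothetical layout).
struct Image {
    int w, h;
    std::vector<float> px;
    float at(int x, int y) const { return px[static_cast<std::size_t>(y) * w + x]; }
};

// One pyramid level: 2x downsample with a simple 2x2 box average.
// (The real algorithm uses a Gaussian kernel; a box filter keeps this short.)
Image downsample(const Image& in) {
    Image out{in.w / 2, in.h / 2, {}};
    out.px.resize(static_cast<std::size_t>(out.w) * out.h);
    for (int y = 0; y < out.h; ++y)
        for (int x = 0; x < out.w; ++x)
            out.px[static_cast<std::size_t>(y) * out.w + x] =
                0.25f * (in.at(2 * x, 2 * y)     + in.at(2 * x + 1, 2 * y) +
                         in.at(2 * x, 2 * y + 1) + in.at(2 * x + 1, 2 * y + 1));
    return out;
}

// Pyramid with levels[0] finest and back() coarsest; the synthesis loop
// then walks it from back to front (coarsest to finest).
std::vector<Image> gaussianPyramid(Image base, int levels) {
    std::vector<Image> pyr{std::move(base)};
    for (int l = 1; l < levels; ++l) pyr.push_back(downsample(pyr.back()));
    return pyr;
}
```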
BestMatch tries to find the best pixel p in A such that the neighborhood around A(p) is similar to the neighborhood around B(q). We also want pixel p to give a coherent result with the already-synthesized neighborhood of B'(q). The original algorithm uses a serial approximate nearest-neighbor search to find the best pixel.
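The exhaustive form of this search can be sketched as a neighborhood sum-of-squared-differences scan. This is a simplified stand-in, not the full algorithm: the coherence term and the A'/B' feature channels are omitted, borders are skipped rather than clamped, and the flat buffer layout is an assumption.

```cpp
#include <limits>
#include <utility>
#include <vector>

constexpr int N = 5;      // neighborhood is N x N
constexpr int R = N / 2;  // neighborhood radius

// Sum of squared differences between the neighborhood around (px, py) in a
// and the neighborhood around (qx, qy) in b (row-major float buffers).
float neighborhoodSSD(const std::vector<float>& a, int aw,
                      const std::vector<float>& b, int bw,
                      int px, int py, int qx, int qy) {
    float ssd = 0.0f;
    for (int dy = -R; dy <= R; ++dy)
        for (int dx = -R; dx <= R; ++dx) {
            float d = a[(py + dy) * aw + (px + dx)] -
                      b[(qy + dy) * bw + (qx + dx)];
            ssd += d * d;
        }
    return ssd;
}

// Returns (px, py) of the best-matching pixel in A for pixel q of B,
// scanning every interior candidate (the brute-force baseline).
std::pair<int, int> bestMatchBruteForce(const std::vector<float>& A, int aw, int ah,
                                        const std::vector<float>& B, int bw,
                                        int qx, int qy) {
    float best = std::numeric_limits<float>::max();
    std::pair<int, int> bestP{R, R};
    for (int py = R; py < ah - R; ++py)       // skip borders for simplicity
        for (int px = R; px < aw - R; ++px) {
            float d = neighborhoodSSD(A, aw, B, bw, px, py, qx, qy);
            if (d < best) { best = d; bestP = {px, py}; }
        }
    return bestP;
}
```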
We see two places in this algorithm where parallelism can be used. The first is the approximate nearest-neighbor search itself. For each pixel q in B', we are trying to find the pixel p in A whose neighborhood is most similar to the neighborhood of q in B. The distance computations from the neighborhood around B(q) to all the neighborhoods in A are independent of one another, so we will definitely try to parallelize this step.
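Because the per-candidate distances are independent, the argmin over candidates can be split across workers with a local-best-then-reduce pattern. The sketch below uses `std::thread` as a CPU stand-in for the GPU reduction (the actual project targets CUDA), and takes the distances as a precomputed array rather than computing them from neighborhoods.

```cpp
#include <algorithm>
#include <thread>
#include <vector>

// Parallel argmin over per-candidate distances: each thread scans a chunk
// and records its local best index; a final serial pass reduces the locals.
// `distances` stands in for the neighborhood SSD of every candidate pixel.
int parallelArgmin(const std::vector<float>& distances, int nthreads) {
    std::vector<int> localBest(nthreads, 0);
    std::vector<std::thread> workers;
    int n = static_cast<int>(distances.size());
    int chunk = (n + nthreads - 1) / nthreads;
    for (int t = 0; t < nthreads; ++t)
        workers.emplace_back([&, t] {
            int lo = t * chunk;
            int hi = std::min(lo + chunk, n);
            if (lo >= hi) return;             // empty chunk: keep default 0
            int best = lo;
            for (int i = lo; i < hi; ++i)
                if (distances[i] < distances[best]) best = i;
            localBest[t] = best;
        });
    for (auto& w : workers) w.join();
    int best = localBest[0];
    for (int b : localBest)
        if (distances[b] < distances[best]) best = b;
    return best;
}
```

On the GPU the same shape becomes one thread per candidate followed by a block-level min-reduction, but the independence argument is identical.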
Another way we may be able to speed up this program is by parallelizing across all pixels q in B'. Doing this naively, however, may lead to very incoherent results (the structure of the original image is lost).
There are also patch-based solutions to the image style transfer problem.
There are two major challenges in this project:
1) We need to improve the speed of the approximate nearest-neighbor search.
2) We would like to parallelize across the pixels in the image while maintaining coherence in the result.
We started off with a serial, purely exhaustive brute-force solution. We quickly found out that a pure brute-force solution is not adequate for this problem on anything but a small scale. However, this solution is always exactly correct, barring some minor sub-optimal choices by the initial k-coherence algorithm. On the small images shown here, the brute-force algorithm required about 15 minutes to complete.
Our next version avoided brute-force as much as possible, by taking advantage of OpenCV's FLANN library.
This achieved much faster results, and we could start to run our algorithm on larger images with artistic filters. These images took about 15 seconds to render. A pure brute-force solution would have taken several hours to complete.
This algorithm breaks away from the inherent dependencies of neighborhoods in the interest of parallelism. This comes at a cost of correctness, but acceptable correctness is achieved through the use of several correction passes, where the neighborhoods are updated between passes.
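The pass structure can be sketched as a double-buffered loop: every pixel's assignment is recomputed from a snapshot of the previous pass, so no pixel depends on a neighbor written in the same pass. `reassign` is a hypothetical stand-in for the per-pixel best-match lookup, and the single-threaded loop marks where the parallel kernel would go.

```cpp
#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

using Field = std::vector<int>;  // per-pixel source index into A'

// Runs `passes` correction passes. Each pass builds `next` entirely from
// `cur`, so every iteration of the inner loop is independent and could be
// one GPU thread; coherence improves as the passes converge.
Field correctionPasses(Field init, int passes,
                       const std::function<int(const Field&, int)>& reassign) {
    Field cur = std::move(init);
    for (int pass = 0; pass < passes; ++pass) {
        Field next(cur.size());
        for (std::size_t q = 0; q < cur.size(); ++q)   // embarrassingly parallel
            next[q] = reassign(cur, static_cast<int>(q));
        cur = std::move(next);
    }
    return cur;
}
```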
More details on the CUDA execution can be found in the Final Paper at the top of the page.
The basis of our algorithm comes from "Image Analogies" http://www.mrl.nyu.edu/publications/image-analogies/analogies-300dpi.pdf.
We have also found a paper detailing a CUDA specific approach to image analogies. http://download.springer.com/static/pdf/45/art%253A10.1007%252Fs00371-012-0701-4.pdf?auth66=1396815636_90dc75f6739fe23fb6a179e7de74a43e&ext=.pdf.
Another approach to style transfer is by using stitching patches of the source image to create the destination. Here is a paper describing that approach: http://mmlab.siat.ac.cn/sfchen/Publications/TMM13-style.pdf
We also found a couple of papers on texture synthesis on the GPU:
We are choosing to use the NVIDIA GPUs in the lab. Like the second assignment, this project involves math-intensive image manipulation. We plan to use C++ and CUDA, with OpenCV for image preprocessing.