So there is a process where you can look at the gradient map of an image, meaning the pointwise derivatives of the image (the relative rate of change, which is essentially edge detection), and, based on the values of the gradients, find a path through the least noticeable pixels. This has huge implications; the first example was image blending. Take two images you want to blend together, get a rough idea of where you want them to overlap, and compute a gradient map of the overlapping region. Then, applying this process of finding the "least noticeable" path, you can simply cut both images along that path and piece the two together, with no need for blurring. The easiest way to visualize this is to imagine blending two images of bricks. If you just blurred along a straight edge, you would get fuzzy bricks in one region of the picture. With this process you would instead cut along the mortar lines and join the two images there, minimizing the visible effect of the seam. If people are having a hard time visualizing this, I can post a good example from lecture.
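Here is a rough sketch of the idea in Python, using dynamic programming to find the cheapest top-to-bottom path through a gradient-magnitude map. The function names and the simple finite-difference energy are my own choices for illustration, not necessarily what the lecture used:

```python
import numpy as np

def gradient_energy(img):
    # pointwise gradient magnitude via finite differences (crude edge detector)
    gy, gx = np.gradient(img.astype(float))
    return np.abs(gx) + np.abs(gy)

def min_seam(energy):
    # cumulative minimum-cost path from the top row to the bottom row;
    # each step may move down-left, straight down, or down-right
    h, w = energy.shape
    cost = energy.copy()
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            cost[i, j] += cost[i - 1, lo:hi].min()
    # backtrack from the cheapest pixel in the bottom row
    seam = [int(np.argmin(cost[-1]))]
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam.append(lo + int(np.argmin(cost[i, lo:hi])))
    return seam[::-1]  # one column index per row: the "least noticeable" path
```

Cutting both images along this seam and concatenating the halves is what avoids the fuzzy-brick look: the cut follows low-gradient regions (mortar) rather than crossing high-contrast edges (bricks).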
So another application is "Content-Aware Image Resizing," also known as seam carving. I don't really want to try to explain it in detail, so I will do the same thing my professor did: check out this video:
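In case the video ever goes down, the gist is that you shrink an image by repeatedly deleting a least-noticeable seam instead of cropping or scaling. A minimal sketch of one such step, assuming a grayscale image as a 2-D array and a seam given as one column index per row (the helper name is mine, not from the lecture):

```python
import numpy as np

def remove_vertical_seam(img, seam):
    # drop one pixel per row along the seam, making the image 1 px narrower
    h, w = img.shape
    keep = np.ones((h, w), dtype=bool)
    keep[np.arange(h), seam] = False  # mark the seam pixels for removal
    return img[keep].reshape(h, w - 1)
```

Repeating this, recomputing the gradient map each time, narrows the image while the important high-gradient content stays untouched.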
So cool! Apparently the undergrad students are going to be implementing this, while the grad students are going to be taking it a step further... I don't know what that is going to look like.