I would like to create a base table of images and then compare any new image against it to determine whether the new image is an exact (or close) duplicate of one in the base.
For example: if you want to reduce storing the same image hundreds of times, you could store one copy of it and provide reference links to it. When a new image is entered, you want to compare it to the existing images to make sure it's not a duplicate... ideas?
One idea of mine was to shrink the image down to a small thumbnail and then randomly pick 100 pixel locations and compare them.
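A rough sketch of that idea in Python with Pillow (the thumbnail size, sample count, and tolerance below are arbitrary choices, not part of the original proposal):

import random
from PIL import Image

def thumb_signature(path, size=(32, 32), n=100, seed=42):
    # Shrink to a small thumbnail and sample n pseudo-random pixels.
    # A fixed seed means every image is sampled at the same positions.
    img = Image.open(path).convert('RGB').resize(size)
    rng = random.Random(seed)
    points = [(rng.randrange(size[0]), rng.randrange(size[1])) for _ in range(n)]
    return [img.getpixel(p) for p in points]

def close_enough(sig_a, sig_b, tol=10):
    # Consider two signatures a match if every sampled channel is within tol.
    return all(abs(ca - cb) <= tol
               for pa, pb in zip(sig_a, sig_b)
               for ca, cb in zip(pa, pb))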
Current answer
As cartman pointed out, you can use any kind of hash value for finding exact duplicates.
One starting point for finding close images could be here. This is a tool used by CG companies to check whether revamped images still show essentially the same scene.
Other answers
I have an idea which can work, and it is most likely to be very fast. You can sub-sample an image to, say, 80x60 resolution or comparable, and convert it to grayscale (converting after subsampling is faster). Process both images you want to compare this way. Then run the normalised sum of squared differences between the two images (the query image and each one from the db), or even better Normalised Cross Correlation, which gives a response close to 1 if the images are similar. Then, if the images are similar, you can proceed to more sophisticated techniques to verify that they are the same image. Obviously this algorithm is linear in the number of images in your database, but even so it is going to be very fast, up to 10,000 images per second on modern hardware. If you need invariance to rotation, a dominant gradient can be computed for the small image and the whole coordinate system rotated to a canonical orientation; this, though, will be slower. And no, there is no invariance to scale here.
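A minimal sketch of this approach in Python with OpenCV and NumPy (the file names and the 0.95 threshold are placeholder assumptions, not values from the answer):

import cv2
import numpy as np

def tiny_gray(path, size=(80, 60)):
    # Subsample first, then convert to grayscale, as described above.
    img = cv2.resize(cv2.imread(path), size, interpolation=cv2.INTER_AREA)
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float64)

def ncc(a, b):
    # Zero-mean normalised cross-correlation; close to 1.0 for similar images.
    a, b = a - a.mean(), b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

if ncc(tiny_gray('query.jpg'), tiny_gray('candidate.jpg')) > 0.95:
    print('likely a near-duplicate; run a finer check')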
If you want something more general, or to use big databases (millions of images), then you need to look into image retrieval theory (loads of papers have appeared in the last 5 years). There are some pointers in the other answers. But it might be overkill, and the suggested histogram approach will do the job, though I think a combination of several different fast approaches would be even better.
I think it's worth adding to this the pHash solution I built, which we've been using for a while: Image::PHash. It is a Perl module, but the main parts are written in C. It is several times faster than phash.org and has a few extra features for its DCT-based pHash.
We already had tens of millions of images indexed in a MySQL database, so I wanted something fast and also a way to use MySQL indexes (which don't work with Hamming distance), which led me to use "reduced" hashes for direct matching; the module documentation discusses this.
It is quite simple to use:
use Image::PHash;

# Compute a perceptual hash for each image.
my $iph1 = Image::PHash->new('file1.jpg');
my $p1   = $iph1->pHash();

my $iph2 = Image::PHash->new('file2.jpg');
my $p2   = $iph2->pHash();

# Difference between the two hashes; smaller means more similar.
my $diff = Image::PHash::diff($p1, $p2);
Below are three approaches to solving this problem (and there are many others).
The first is a standard approach in computer vision, keypoint matching. This may require some background knowledge to implement, and can be slow. The second method uses only elementary image processing, is potentially faster than the first, and is straightforward to implement. However, what it gains in understandability, it lacks in robustness -- matching fails on scaled, rotated, or discolored images. The third method is both fast and robust, but is potentially the hardest to implement.
Keypoint Matching
Better than picking 100 random points is picking 100 important points. Certain parts of an image have more information than others (particularly at edges and corners), and these are the ones you'll want to use for smart image matching. Google "keypoint extraction" and "keypoint matching" and you'll find quite a few academic papers on the subject. These days, SIFT keypoints are arguably the most popular, since they can match images under different scales, rotations, and lighting. Some SIFT implementations can be found here.
One drawback of keypoint matching is the running time of a naive implementation: O(n^2 m), where n is the number of keypoints in each image and m is the number of images in the database. Some clever algorithms can find the closest match faster, such as quadtrees or binary space partitioning.
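For illustration, a short OpenCV sketch of SIFT matching with Lowe's ratio test (a generic recipe under my own assumptions, not the answer author's code; file names are placeholders):

import cv2

# Load both images as grayscale and extract SIFT keypoints/descriptors.
img1 = cv2.imread('a.jpg', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('b.jpg', cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force match and keep pairs that pass the ratio test.
matcher = cv2.BFMatcher()
pairs = matcher.knnMatch(des1, des2, k=2)
good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]

# More surviving matches suggests the images show the same content.
print(len(good), 'good matches')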
Alternative: Histogram Method
Another less robust but potentially faster solution is to build feature histograms for each image, and choose the image with the histogram closest to the input image's histogram. I implemented this as an undergrad, and we used 3 color histograms (red, green, and blue) and two texture histograms, direction and scale. I'll give the details below, but I should note that this only worked well for matching images VERY similar to the database images. Re-scaled, rotated, or discolored images can fail with this method, but small changes like cropping won't break the algorithm.
Computing the color histograms is straightforward -- just pick the range for your histogram buckets, and for each range, tally the number of pixels with a color in that range. For example, consider the "green" histogram, and suppose we choose 4 buckets for our histogram: 0-63, 64-127, 128-191, and 192-255. Then for each pixel, we look at the green value, and add a tally to the appropriate bucket. When we're done tallying, we divide each bucket total by the number of pixels in the entire image to get a normalized histogram for the green channel.
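For example, the green-channel histogram with those four buckets could look like this (a NumPy sketch; `rgb` is assumed to be an H x W x 3 uint8 array):

import numpy as np

def green_histogram(rgb, buckets=4):
    # Bin edges land at 0, 64, 128, 192, 256, matching the buckets above.
    green = rgb[:, :, 1]
    counts, _ = np.histogram(green, bins=buckets, range=(0, 256))
    return counts / green.size  # normalize by the total pixel count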
For the texture direction histogram, we started by performing edge detection on the image. Each edge point has a normal vector pointing in the direction perpendicular to the edge. We quantized the normal vector's angle into one of 6 buckets between 0 and PI (since edges have 180-degree symmetry, we converted angles between -PI and 0 to be between 0 and PI). After tallying up the number of edge points in each direction, we have an un-normalized histogram representing texture direction, which we normalized by dividing each bucket by the total number of edge points in the image.
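Roughly, in OpenCV/NumPy terms (a sketch; the gradient-magnitude threshold here is a crude stand-in for a real edge detector):

import cv2
import numpy as np

def direction_histogram(gray, buckets=6):
    # Gradient components; the gradient points normal to the edge.
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    mag = np.hypot(gx, gy)
    edges = mag > 2 * mag.mean()                 # crude edge test (assumption)
    angles = np.arctan2(gy, gx)[edges] % np.pi   # fold angles into [0, pi)
    counts, _ = np.histogram(angles, bins=buckets, range=(0, np.pi))
    return counts / max(counts.sum(), 1)         # normalize by edge-point count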
To compute the texture scale histogram, for each edge point we measured the distance to the next-closest edge point with the same direction. For example, if edge point A has a direction of 45 degrees, the algorithm walks in that direction until it finds another edge point with a direction of 45 degrees (or within a reasonable deviation). After computing this distance for each edge point, we dump the values into a histogram and normalize it by dividing by the total number of edge points.
Now you have 5 histograms for each image. To compare two images, take the absolute value of the difference between each histogram bucket and then sum these values. For example, to compare images A and B, we would compute
|A.green_histogram.bucket_1 - B.green_histogram.bucket_1|
for each bucket in the green histogram, repeat for the other histograms, and then sum up all the results. The smaller the result, the better the match. Repeat for all images in the database, and the match with the smallest result wins. You would probably want a threshold above which the algorithm concludes that no match was found.
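In code, the whole comparison reduces to an L1 distance (a sketch, assuming each image's five histograms are kept as NumPy arrays):

import numpy as np

def histogram_distance(hists_a, hists_b):
    # L1 distance: sum of absolute bucket differences over all five histograms.
    return float(sum(np.abs(a - b).sum() for a, b in zip(hists_a, hists_b)))

# best = min(db, key=lambda img: histogram_distance(query_hists, img.hists))
# Reject the best match if its distance still exceeds your chosen threshold.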
Third Option: Keypoints + Decision Trees
A third approach that is probably much faster than the other two is using semantic texton forests (PDF). This involves extracting simple keypoints and using a collection of decision trees to classify the image. This is faster than simple SIFT keypoint matching, because it avoids the costly matching process, and keypoints are much simpler than SIFT, so keypoint extraction is much faster. However, it preserves the SIFT method's invariance to rotation, scale, and lighting, an important feature that the histogram method lacked.
Update:
My mistake -- the Semantic Texton Forests paper isn't specifically about image matching, but rather region labeling. The original paper that does matching is Keypoint Recognition Using Randomized Trees. Also, the papers below continue to develop the ideas and represent the state of the art (c. 2010):
Fast Keypoint Recognition Using Random Ferns - faster and more scalable than Lepetit '06
BRIEF: Binary Robust Independent Elementary Features - less robust but very fast; I think the goal here is real-time matching on smartphones and other handhelds
A few years ago I wrote a very simple image comparison solution in PHP. It calculates a simple hash for each image and then finds the difference. It works very nicely for cropped, or cropped and shifted, versions of the same image.
First, I resize the image to a small size, like 24x24 or 36x36. Then I take each column of pixels and find the average R, G, B values for that column.
After each column has its own three numbers, I do two passes: the first over the odd columns and the second over the even ones. The first pass sums all the processed columns and then divides by their count: ([1] + [3] + [5] + ... + [N-1]) / (N/2). The second pass works in a different manner, with alternating signs: ([2] - [4] + [6] - [8] ...) / (N/2).
So now I have two numbers. As I found out in my experiments, the first one is the major one: if it is far from the other image's value, the images are not similar at all from a human point of view.
So, the first one represents the average brightness of the image (again, you can pay most attention to the green channel, then the red one, etc., but the default R->G->B order works just fine). The second number can be compared if the first numbers of the two images are very close, and it in fact represents the overall contrast of the image: if we have some black/white pattern or any high-contrast scene (lighted buildings in a city at night, for example) and we are lucky, we will get huge numbers here, as the positive members of the sum will be mostly bright and the negative ones mostly dark, or vice versa. As I want my values to always be positive, I divide by 2 and shift by +127 here.
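Since the original PHP code is lost (see below), here is a hypothetical Python reconstruction of the described hash; the function name, the per-channel treatment, and the exact column indexing are my guesses at the author's description, not their code:

from PIL import Image
import numpy as np

def two_number_hash(path, size=24):
    # Hypothetical reconstruction of the lost PHP hash; details are guesses.
    img = Image.open(path).convert('RGB').resize((size, size))
    cols = np.asarray(img, dtype=np.float64).mean(axis=0)  # per-column R,G,B means

    # First pass: average of the odd columns [1], [3], [5], ...
    first = cols[0::2].sum(axis=0) / (size / 2)

    # Second pass: alternating signs over the even columns [2], [4], [6], ...
    even = cols[1::2]
    signs = np.where(np.arange(len(even)) % 2 == 0, 1.0, -1.0)
    second = (even * signs[:, None]).sum(axis=0) / (size / 2)
    second = second / 2 + 127   # divide by 2 and shift to keep it positive

    return first, second        # one (R, G, B) triple per pass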
I wrote the code in PHP back in 2017, and it seems I've since lost it. But I still have screenshots:
The same image:
Black-and-white version:
Cropped version:
Another image, a translated version:
Same gamut as the fourth one, but a different scene:
I tuned the difference thresholds, and the results were pretty good. But as you can see, this simple algorithm cannot do anything good with simple scene translations.
On the other hand, I can note that a modification could make cropped copies at 75-80% scale from each of the two images (4 at the corners, or 8 at the corners and the middles of the edges), and then compare the cropped variants with the other full image in the same way; if one of the similarity scores is significantly better, then use its value instead of the default one.
If you have a large number of images, look into Bloom filters, which use multiple hashes for a probabilistic but efficient result. If the number of images is not large, then a cryptographic hash like md5 should be sufficient.
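A minimal sketch of the exact-duplicate case with Python's standard hashlib (the dedup-store logic around it is illustrative):

import hashlib

def file_md5(path):
    # Hash the file bytes in chunks; identical files give identical digests.
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(1 << 16), b''):
            h.update(chunk)
    return h.hexdigest()

seen = {}  # digest -> path of the stored copy

def store_or_reference(path):
    digest = file_md5(path)
    if digest in seen:
        return seen[digest]  # exact duplicate: link to the existing copy
    seen[digest] = path      # first time we see this content: store it
    return path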