I'm trying to create a fast 2D point-in-polygon algorithm for use in hit testing (e.g. Polygon.contains(p: Point)). Suggestions for effective techniques would be appreciated.
Current answer
In most cases this is a faster algorithm than all of the other answers here.
It's new and elegant. We spend O(n * log(n)) time building a table which then lets us test whether a point is inside the polygon in O(log(n) + k) time.
Unlike ray casting or angle summation, a scanbeam table makes repeated checks against the same polygon much faster. We must pre-build a scanbeam active edge table, which is what most of the code below is doing.
We calculate the scanbeam and the active edges for each position in the y-direction. We make a list of points sorted by their y-component and iterate through it, tracking two events per edge, Start-Y and End-Y, while maintaining the set of active edges. We process the events in order, and for each scanbeam we record the y-value of the event and the active edges at that event, but only when the event's y differs from the previous one (so that everything at the event point is processed before we mark it in our table).
We get our table:
[]
p6p5, p6p7
p6p5, p6p7, p2p3, p2p1
p6p7, p5p4, p2p3, p3p1
p7p8, p5p4, p2p3, p2p1
p7p8, p5p4, p3p4, p2p1
p7p8, p2p1
p7p8, p1p0
p8p0, p1p0
[]
After building that table, the code that actually does the work is only a few lines.
Note: the code here uses complex values as points, so .real is .x and .imag is .y.
def point_in_scantable(actives_table, events, xi, point):
    beam = bisect(events, point.imag) - 1  # Binary search in sorted array.
    actives_at_y = actives_table[beam]
    total = sum([point.real > xi(e, point.imag) for e in actives_at_y])
    return bool(total % 2)
We binary search into the events to find the actives_at_y for our particular value. For each active edge at that y, we calculate the segment's x value at our point's particular y, and add 1 each time the point's x component is greater than that x-intercept. Then we take the total mod 2. (This is the even-odd fill rule; you can easily adapt it to any other fill rule.)
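For instance, here is a sketch (my own, not part of the original answer) of the same lookup adapted to the nonzero winding rule; it additionally takes the polygon so each edge's direction can be recovered from the vertex order:

def point_in_scantable_nonzero(actives_table, events, xi, polygon, point):
    beam = bisect(events, point.imag) - 1
    winding = 0
    for e in actives_table[beam]:
        if point.real > xi(e, point.imag):  # edge lies to the left of the point
            winding += 1 if polygon[e].imag > polygon[e - 1].imag else -1
    return winding != 0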
The full code:
from bisect import bisect


def build_edge_list(polygon):
    edge_list = []
    for i in range(1, len(polygon)):
        # Each edge produces two events; the index is positive at the lower
        # endpoint (start) and bitwise-inverted (~i) at the upper endpoint (end).
        if (polygon[i].imag, polygon[i].real) < (polygon[i - 1].imag, polygon[i - 1].real):
            edge_list.append((polygon[i], i))
            edge_list.append((polygon[i - 1], ~i))
        else:
            edge_list.append((polygon[i], ~i))
            edge_list.append((polygon[i - 1], i))

    def sort_key(e):
        return e[0].imag, e[0].real, ~e[1]

    edge_list.sort(key=sort_key)
    return edge_list


def build_scanbeam(edge_list):
    actives_table = []
    events = []
    y = -float("inf")
    actives = []
    for pt, index in edge_list:
        if y != pt.imag:
            # The y-value changed: snapshot the active edges for the beam below.
            actives_table.append(list(actives))
            events.append(y)
        if index >= 0:
            actives.append(index)
        else:
            actives.remove(~index)
        y = pt.imag
    return actives_table, events


def point_in_polygon(polygon, point):
    def x_intercept(e, y):
        pt0 = polygon[e - 1]
        pt1 = polygon[e]
        if pt1.real - pt0.real == 0:
            return pt0.real
        m = (pt1.imag - pt0.imag) / (pt1.real - pt0.real)
        b = pt0.imag - (m * pt0.real)
        return (y - b) / m

    edge_list = build_edge_list(polygon)
    actives_table, events = build_scanbeam(edge_list)
    try:
        if len(point):
            # A sequence of points: test each one against the same table.
            return [point_in_scantable(actives_table, events, x_intercept, p) for p in point]
    except TypeError:
        return point_in_scantable(actives_table, events, x_intercept, point)


def point_in_scantable(actives_table, events, xi, point):
    beam = bisect(events, point.imag) - 1  # Binary search in sorted array.
    actives_at_y = actives_table[beam]
    total = sum([point.real > xi(e, point.imag) for e in actives_at_y])
    return bool(total % 2)
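Here is a minimal usage sketch (hypothetical data, not from the original answer): a unit square given as complex points, closed by repeating the first vertex at the end.

square = [0 + 0j, 1 + 0j, 1 + 1j, 0 + 1j, 0 + 0j]   # .real = x, .imag = y

# The precomputed structures can also be inspected directly:
table, events = build_scanbeam(build_edge_list(square))
# events -> [-inf, 0], table -> [[], [4, 2]]  (edge indices into `square`)

print(point_in_polygon(square, 0.5 + 0.5j))                # True  (inside)
print(point_in_polygon(square, 1.5 + 0.5j))                # False (outside)
print(point_in_polygon(square, [0.5 + 0.5j, 1.5 + 0.5j]))  # [True, False]

Note that point_in_polygon as written rebuilds the table on every call; to benefit from the precomputation when checking many points against the same polygon, pass them all as a list (as in the last line) or build the table once and call point_in_scantable directly.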
If we disregard the O(n*log(n)) time it takes to build the scanbeam table, the actual lookup is O(log(n) + k), where n is the number of segments in the polygon and k is the typical number of active edges at that scanline. The other ray-casting answers take O(n) per query: every time we check a point they iterate over the entire polygon. So even with this clearly sub-optimal implementation, it handily beats all the others.
There are a few performance tricks that could be done. For example, we can lower the query time to O(log(n) + log(k)). To do this we would implement Bentley-Ottmann into the sweep line and, rather than processing the intersections as separate events, split the lines at the intersections. We would then also sort the active edges by their x-intercepts, taking care to order them correctly within the scanbeam even when they start at the same initial point (you need to look at the slopes, or just compare midpoints of the segments). Since no intersections occur within a scanbeam, we end up with a scanbeam table of sorted, intersection-free actives lists, which means we can binary search into the active edge list as well. Though that sounds like a lot of work for a value of k which is typically going to be 2 or maybe 4.
Also, since this basically reduces to a lookup table plus some minimal x-intercept calculations, it is much more amenable to the GPU. You no longer need to loop over the polygon, so you can batch the point queries with something like numpy and do all the checks at once, which improves performance.
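To make that concrete, here is a minimal numpy sketch of a vectorised lookup (the packing helpers and their names are my own, not part of the code above):

import numpy as np

def pack_scantable_numpy(polygon, actives_table, events):
    # Pack the actives table into NaN-padded arrays so many points can be
    # tested at once (a sketch; assumes the tables built by the code above).
    k_max = max((len(a) for a in actives_table), default=1)
    shape = (len(actives_table), k_max)
    x0 = np.full(shape, np.nan)         # first vertex x of each active edge
    y0 = np.full(shape, np.nan)         # first vertex y of each active edge
    inv_slope = np.full(shape, np.nan)  # dx / dy of each active edge
    for row, actives in enumerate(actives_table):
        for col, e in enumerate(actives):
            p0, p1 = polygon[e - 1], polygon[e]
            dy = p1.imag - p0.imag      # recorded active edges are never horizontal
            x0[row, col] = p0.real
            y0[row, col] = p0.imag
            inv_slope[row, col] = (p1.real - p0.real) / dy if dy else 0.0
    return np.asarray(events), x0, y0, inv_slope

def points_in_polygon_numpy(packed, xs, ys):
    # Vectorised even-odd test: one searchsorted plus one broadcasted comparison.
    events, x0, y0, inv_slope = packed
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    beams = np.searchsorted(events, ys, side="right") - 1
    # x-intercept of every active edge of each point's beam, at that point's y.
    xi = x0[beams] + (ys[:, None] - y0[beams]) * inv_slope[beams]
    crossings = np.count_nonzero(xs[:, None] > xi, axis=1)  # NaN pads compare False
    return (crossings % 2).astype(bool)

# Usage, continuing the square example above:
#   packed = pack_scantable_numpy(square, *build_scanbeam(build_edge_list(square)))
#   points_in_polygon_numpy(packed, xs=[0.5, 1.5], ys=[0.5, 0.5])  # -> [True, False]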
Other answers
The trivial solution would be to divide the polygon into triangles and hit test the triangles as explained here.
If your polygon is CONVEX, there may be a better approach. Look at the polygon as a collection of infinite lines, each dividing space in two. For each point it is easy to tell which side of a line it is on. If the point is on the same side of all the lines, then it is inside the polygon.
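A minimal sketch of that half-plane idea in Python (my own code, assuming the vertices are given in counter-clockwise order; a point exactly on an edge counts as inside here):

def point_in_convex_polygon(vertices, px, py):
    # vertices: list of (x, y) tuples in counter-clockwise order.
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        # Cross product of the edge vector and the edge-to-point vector:
        # negative means the point is to the right of this directed edge, i.e. outside.
        if (x1 - x0) * (py - y0) - (y1 - y0) * (px - x0) < 0:
            return False
    return True  # on the interior side of every edge

# e.g. point_in_convex_polygon([(0, 0), (2, 0), (2, 2), (0, 2)], 1, 1)  -> True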
This seems to work in R (apologies for the ugliness, would love to see a better version!):
pnpoly <- function(nvert, vertx, verty, testx, testy) {
  c <- FALSE
  j <- nvert
  for (i in 1:nvert) {
    if (((verty[i] > testy) != (verty[j] > testy)) &&
        (testx < (vertx[j] - vertx[i]) * (testy - verty[i]) / (verty[j] - verty[i]) + vertx[i])) {
      c <- !c
    }
    j <- i
  }
  return(c)
}
VBA version:
Note: remember that if your polygon is an area within a map, latitude/longitude are Y/X values rather than X/Y (latitude = Y, longitude = X) because, as I understand it, of historical implications from back when longitude was not yet a measurement.
Class Module: CPoint
Private pXValue As Double
Private pYValue As Double

'''''X Value Property'''''
Public Property Get X() As Double
    X = pXValue
End Property
Public Property Let X(Value As Double)
    pXValue = Value
End Property

'''''Y Value Property'''''
Public Property Get Y() As Double
    Y = pYValue
End Property
Public Property Let Y(Value As Double)
    pYValue = Value
End Property
Module:
Public Function isPointInPolygon(p As CPoint, polygon() As CPoint) As Boolean

    Dim i As Integer
    Dim j As Integer
    Dim q As Object
    Dim minX As Double
    Dim maxX As Double
    Dim minY As Double
    Dim maxY As Double

    minX = polygon(0).X
    maxX = polygon(0).X
    minY = polygon(0).Y
    maxY = polygon(0).Y

    For i = 1 To UBound(polygon)
        Set q = polygon(i)
        minX = vbMin(q.X, minX)
        maxX = vbMax(q.X, maxX)
        minY = vbMin(q.Y, minY)
        maxY = vbMax(q.Y, maxY)
    Next i

    ' Quick rejection: a point outside the bounding box cannot be in the polygon.
    If p.X < minX Or p.X > maxX Or p.Y < minY Or p.Y > maxY Then
        isPointInPolygon = False
        Exit Function
    End If

    ' SOURCE: http://www.ecse.rpi.edu/Homepages/wrf/Research/Short_Notes/pnpoly.html
    isPointInPolygon = False
    i = 0
    j = UBound(polygon)

    Do While i < UBound(polygon) + 1
        If (polygon(i).Y > p.Y) Then
            If (polygon(j).Y < p.Y) Then
                If p.X < (polygon(j).X - polygon(i).X) * (p.Y - polygon(i).Y) / (polygon(j).Y - polygon(i).Y) + polygon(i).X Then
                    isPointInPolygon = True
                    Exit Function
                End If
            End If
        ElseIf (polygon(i).Y < p.Y) Then
            If (polygon(j).Y > p.Y) Then
                If p.X < (polygon(j).X - polygon(i).X) * (p.Y - polygon(i).Y) / (polygon(j).Y - polygon(i).Y) + polygon(i).X Then
                    isPointInPolygon = True
                    Exit Function
                End If
            End If
        End If
        j = i
        i = i + 1
    Loop

End Function

Function vbMax(n1, n2) As Double
    vbMax = IIf(n1 > n2, n1, n2)
End Function

Function vbMin(n1, n2) As Double
    vbMin = IIf(n1 > n2, n2, n1)
End Function
Sub TestPointInPolygon()

    Dim i As Integer
    Dim InPolygon As Boolean

    ' MARKER Object
    Dim p As CPoint
    Set p = New CPoint
    p.X = <ENTER X VALUE HERE>
    p.Y = <ENTER Y VALUE HERE>

    ' POLYGON OBJECT
    Dim polygon() As CPoint
    ReDim polygon(<ENTER VALUE HERE>) 'Amount of vertices in polygon - 1
    For i = 0 To <ENTER VALUE HERE> 'Same value as above
        Set polygon(i) = New CPoint
        polygon(i).X = <ASSIGN X VALUE HERE> 'Source a list of values that can be looped through
        polygon(i).Y = <ASSIGN Y VALUE HERE> 'Source a list of values that can be looped through
    Next i

    InPolygon = isPointInPolygon(p, polygon)
    MsgBox InPolygon

End Sub
Like David Segonds' answer suggests, I use an approach of angle summation derived from my concave polygon drawing algorithm. It relies on adding up the approximate angles of subtriangles around the point to obtain a weight. A weight around 1.0 means the point is inside the polygon, a weight around 0.0 means it is outside, a weight around -1.0 is what happens when the point is inside but the polygon's vertices are in reverse order (as with one of the halves of a bowtie-shaped tetragon), and a weight of NAN means the point lies exactly on an edge. The reason it's not slow is that the angles don't need to be estimated accurately at all. Holes can be handled by treating them as separate polygons and subtracting their weights.
typedef struct { double x, y; } xy_t;

xy_t sub_xy(xy_t a, xy_t b)
{
    a.x -= b.x;
    a.y -= b.y;
    return a;
}

double calc_sharp_subtriangle_pixel_weight(xy_t p0, xy_t p1)
{
    xy_t rot, r0, r1;
    double weight;

    // Rotate points (unnormalised)
    rot = sub_xy(p1, p0);
    r0.x = rot.x*p0.y - rot.y*p0.x;
    r0.y = rot.x*p0.x + rot.y*p0.y;
    r1.y = rot.x*p1.x + rot.y*p1.y;

    // Calc weight
    weight = subtriangle_angle_approx(r1.y, r0.x) - subtriangle_angle_approx(r0.y, r0.x);

    return weight;
}

double calc_sharp_polygon_pixel_weight(xy_t p, xy_t *corner, int corner_count)
{
    int i;
    xy_t p0, p1;
    double weight = 0.;

    p0 = sub_xy(corner[corner_count-1], p);
    for (i=0; i < corner_count; i++)
    {
        // Transform corner coordinates
        p1 = sub_xy(corner[i], p);

        // Calculate weight for each subtriangle
        weight += calc_sharp_subtriangle_pixel_weight(p0, p1);

        p0 = p1;
    }

    return weight;
}
So for every segment of the polygon, a subtriangle is formed with the tested point, then each subtriangle is rotated so that its approximate angle can be computed and added to the weight.
The call to subtriangle_angle_approx(y, x) could be replaced with atan2(y, x) / (2.*pi), but a very rough approximation is precise enough:
#include <math.h>

// Helpers assumed by the original code (these definitions are my own):
static void swap_double(double *a, double *b)
{
    double t = *a; *a = *b; *b = t;
}

static double sign(double x)
{
    return (x > 0.) - (x < 0.);
}

double subtriangle_angle_approx(double y, double x)
{
    double angle, d;
    int obtuse;

    if (x == 0.)
        return NAN;

    obtuse = fabs(y) > fabs(x);
    if (obtuse)
        swap_double(&y, &x);

    // Core of the approximation, a very loosely approximate atan(y/x) / (2.*pi) over ]-1 , 1[
    d = y / x;
    angle = 0.13185 * d;

    if (obtuse)
        angle = sign(d)*0.25 - angle;

    return angle;
}
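For completeness, here is a rough sketch (in Python, with my own naming and thresholds) of how the returned weight could be interpreted, following the description above:

import math

def classify_weight(weight):
    # weight near +/-1.0 -> inside (negative means reversed winding),
    # weight near 0.0    -> outside, NaN -> exactly on an edge.
    if math.isnan(weight):
        return "edge"
    return "inside" if abs(weight) > 0.5 else "outside"

# Holes: compute the outer polygon's weight, subtract each hole's weight,
# then classify the combined total the same way.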