Let's assume we have a dataset which might be given approximately by

import numpy as np
x = np.linspace(0,2*np.pi,100)
y = np.sin(x) + np.random.random(100) * 0.2

Therefore we have a variation of about 20% of the dataset. My first idea was to use the UnivariateSpline function of scipy, but the problem is that it does not handle the small noise well. If you consider the frequencies, the background is much smaller than the signal, so a spline of only the frequencies below a cutoff might be an idea, but that would involve a back-and-forth Fourier transformation, which might result in bad behaviour. Another way would be a moving average, but this would also need the right choice of the delay.
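For reference, the spline attempt would look roughly like the sketch below (continuing from the x and y above); the smoothing factor s is just a guess, and choosing it well is exactly the part I find difficult:

from scipy.interpolate import UnivariateSpline

# s trades off closeness to the data against smoothness of the spline;
# the value here is only a guess and would need tuning
spl = UnivariateSpline(x, y, s=4)
y_spline = spl(x)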

Are there any hints, books, or links on how to tackle this problem?


Current answer

I favor a Savitzky-Golay filter. It uses least squares to regress a small window of your data onto a polynomial, then uses the polynomial to estimate the point at the center of the window. Finally, the window is shifted forward by one data point and the process repeats. This continues until every point has been optimally adjusted relative to its neighbors. It works great even with noisy samples from non-periodic and non-linear sources.

Here is a thorough cookbook example. See the code below for an idea of how easy it is to use. Note: I left out the code for defining the savitzky_golay() function because you can copy/paste it from the cookbook example linked above.

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0,2*np.pi,100)
y = np.sin(x) + np.random.random(100) * 0.2
yhat = savitzky_golay(y, 51, 3) # window size 51, polynomial order 3

plt.plot(x,y)
plt.plot(x,yhat, color='red')
plt.show()

Update: It has come to my attention that the cookbook example I linked to has been taken down. Fortunately, as pointed out by @dodohjk (thanks @bicarlsen for the updated link), the Savitzky-Golay filter has been incorporated into the SciPy library. To adapt the above code to use the SciPy version, type:

from scipy.signal import savgol_filter
yhat = savgol_filter(y, 51, 3) # window size 51, polynomial order 3

Other answers

If you are interested in a "smooth" version of a signal that is periodic (like your example), then an FFT is the right way to go. Take the Fourier transform and subtract out the low-contributing frequencies:

import numpy as np
import scipy.fftpack

N = 100
x = np.linspace(0,2*np.pi,N)
y = np.sin(x) + np.random.random(N) * 0.2

w = scipy.fftpack.rfft(y)
f = scipy.fftpack.rfftfreq(N, x[1]-x[0])
spectrum = w**2

cutoff_idx = spectrum < (spectrum.max()/5)
w2 = w.copy()
w2[cutoff_idx] = 0

y2 = scipy.fftpack.irfft(w2)

Even if your signal is not completely periodic, this will do a great job of subtracting out white noise. There are many types of filters to use (high-pass, low-pass, etc.); which one is appropriate depends on what you are looking for.
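For example, a simple low-pass filter from scipy.signal could look like the sketch below; the Butterworth order and the cutoff frequency are assumptions you would have to tune for your data:

import numpy as np
from scipy.signal import butter, filtfilt

x = np.linspace(0, 2*np.pi, 100)
y = np.sin(x) + np.random.random(100) * 0.2

fs = 1 / (x[1] - x[0])  # sample rate of the x grid
# 4th-order low-pass Butterworth; the cutoff of 1.5 (same units as fs) is only a guess
b, a = butter(4, 1.5, btype='low', fs=fs)
y_filtered = filtfilt(b, a, y)  # zero-phase filtering, so the result is not shifted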

For a project of mine, I needed to create intervals for time-series modeling. To make the procedure more efficient, I created tsmoothie: a python library for time-series smoothing and outlier detection in a vectorized way.

It provides different smoothing algorithms together with the possibility to compute intervals.

Here I use a ConvolutionSmoother, but you can also test the other smoothers.

import numpy as np
import matplotlib.pyplot as plt
from tsmoothie.smoother import *

x = np.linspace(0,2*np.pi,100)
y = np.sin(x) + np.random.random(100) * 0.2

# operate smoothing
smoother = ConvolutionSmoother(window_len=5, window_type='ones')
smoother.smooth(y)

# generate intervals
low, up = smoother.get_intervals('sigma_interval', n_sigma=2)

# plot the smoothed timeseries with intervals
plt.figure(figsize=(11,6))
plt.plot(smoother.smooth_data[0], linewidth=3, color='blue')
plt.plot(smoother.data[0], '.k')
plt.fill_between(range(len(smoother.data[0])), low[0], up[0], alpha=0.3)

I also point out that tsmoothie can carry out the smoothing of multiple time series in a vectorized way, as sketched below.
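A minimal sketch of that, assuming ConvolutionSmoother accepts a 2D array with one series per row (which is how the library's examples present it):

import numpy as np
from tsmoothie.smoother import ConvolutionSmoother

# three noisy sine series stacked as rows: shape (n_series, n_timesteps)
x = np.linspace(0, 2*np.pi, 100)
data = np.vstack([np.sin(x) + np.random.random(100) * 0.2 for _ in range(3)])

smoother = ConvolutionSmoother(window_len=5, window_type='ones')
smoother.smooth(data)            # all series are smoothed in one call
smoothed = smoother.smooth_data  # assumed shape: (3, 100), one row per series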

This question is already thoroughly answered, so I think a runtime analysis of the proposed methods would be of interest (it was for me, anyway). I will also look at the behavior of the methods at the center and the edges of the noisy dataset.

TL;DR

                    | runtime in s | runtime in s
method              | python list  | numpy array
--------------------|--------------|------------
kernel regression   | 23.93405     | 22.75967 
lowess              |  0.61351     |  0.61524 
naive average       |  0.02485     |  0.02326 
others*             |  0.00150     |  0.00150 
fft                 |  0.00021     |  0.00021 
numpy convolve      |  0.00017     |  0.00015 

*savgol with different fit functions and some numpy methods

Kernel regression scales badly and Lowess is a bit faster, but both produce smooth curves. Savgol is a middle ground on speed and can produce both jumpy and smooth outputs, depending on the order of the polynomial. FFT is extremely fast, but only works on periodic data.

The moving average methods with numpy are faster, but obviously produce a graph with steps in it.

Setup

I generated 1000 data points in the shape of a sine curve:

import numpy as np

size = 1000
x = np.linspace(0, 4 * np.pi, size)
y = np.sin(x) + np.random.random(size) * 0.2
data = {"x": x, "y": y}

I passed them into a function to measure the runtime and plot the resulting fit:

from time import time
import matplotlib.pyplot as plt

def test_func(f, label):  # f: function handle to one of the smoothing methods
    start = time()
    for i in range(5):
        arr = f(data["y"], 20)
    print(f"{label:26s} - time: {time() - start:8.5f} ")
    plt.plot(data["x"], arr, "-", label=label)

I tested a number of different smoothing functions. arr is the array of y values to be smoothed and span the smoothing parameter. The lower the span, the closer the fit to the original data; the higher the span, the smoother the resulting curve.

# imports used by the smoothing functions below
import statsmodels.api as sm
from statsmodels.nonparametric.kernel_regression import KernelReg
from scipy.signal import savgol_filter
from scipy import fftpack

def smooth_data_convolve_my_average(arr, span):
    re = np.convolve(arr, np.ones(span * 2 + 1) / (span * 2 + 1), mode="same")

    # The "my_average" part: shrinks the averaging window on the side that 
    # reaches beyond the data, keeps the other side the same size as given 
    # by "span"
    re[0] = np.average(arr[:span])
    for i in range(1, span + 1):
        re[i] = np.average(arr[:i + span])
        re[-i] = np.average(arr[-i - span:])
    return re

def smooth_data_np_average(arr, span):  # my original, naive approach
    return [np.average(arr[val - span:val + span + 1]) for val in range(len(arr))]

def smooth_data_np_convolve(arr, span):
    return np.convolve(arr, np.ones(span * 2 + 1) / (span * 2 + 1), mode="same")

def smooth_data_np_cumsum_my_average(arr, span):
    cumsum_vec = np.cumsum(arr)
    moving_average = (cumsum_vec[2 * span:] - cumsum_vec[:-2 * span]) / (2 * span)

    # The "my_average" part again. Slightly different to before, because the
    # moving average from cumsum is shorter than the input and needs to be padded
    front, back = [np.average(arr[:span])], []
    for i in range(1, span):
        front.append(np.average(arr[:i + span]))
        back.insert(0, np.average(arr[-i - span:]))
    back.insert(0, np.average(arr[-2 * span:]))
    return np.concatenate((front, moving_average, back))

def smooth_data_lowess(arr, span):
    x = np.linspace(0, 1, len(arr))
    return sm.nonparametric.lowess(arr, x, frac=(5*span / len(arr)), return_sorted=False)

def smooth_data_kernel_regression(arr, span):
    # "span" smoothing parameter is ignored. If you know how to 
    # incorporate that with kernel regression, please comment below.
    kr = KernelReg(arr, np.linspace(0, 1, len(arr)), 'c')
    return kr.fit()[0]

def smooth_data_savgol_0(arr, span):  
    return savgol_filter(arr, span * 2 + 1, 0)

def smooth_data_savgol_1(arr, span):  
    return savgol_filter(arr, span * 2 + 1, 1)

def smooth_data_savgol_2(arr, span):  
    return savgol_filter(arr, span * 2 + 1, 2)

def smooth_data_fft(arr, span):  # the scaling of "span" is open to suggestions
    w = fftpack.rfft(arr)
    spectrum = w ** 2
    cutoff_idx = spectrum < (spectrum.max() * (1 - np.exp(-span / 2000)))
    w[cutoff_idx] = 0
    return fftpack.irfft(w)

Results

Speed

Runtimes over 1000 elements, tested on a python list as well as a numpy array holding the values.

method              | python list | numpy array
--------------------|-------------|------------
kernel regression   | 23.93405 s  | 22.75967 s
lowess              |  0.61351 s  |  0.61524 s
numpy average       |  0.02485 s  |  0.02326 s
savgol 2            |  0.00186 s  |  0.00196 s
savgol 1            |  0.00157 s  |  0.00161 s
savgol 0            |  0.00155 s  |  0.00151 s
numpy convolve + me |  0.00121 s  |  0.00115 s
numpy cumsum + me   |  0.00114 s  |  0.00105 s
fft                 |  0.00021 s  |  0.00021 s
numpy convolve      |  0.00017 s  |  0.00015 s

Especially kernel regression is very slow to compute over 1k elements, and lowess also fails when the dataset becomes much larger. numpy convolve and fft are especially fast. I did not investigate the runtime behavior (O(n)) with increasing or decreasing sample size.

Behavior at the edges

I will split this part in two, to keep the images understandable.

Numpy based methods + savgol 0:

These methods calculate an average of the data, so the graph is not smoothed. They all (with the exception of numpy.cumsum) produce the same graph as long as the window used to calculate the average does not touch the edge of the data. The discrepancy of numpy.cumsum is most likely due to an "off by one" error in the window size.

There is different behaviour at the edges, where the methods have to work with less data:

savgol 0: continues with a constant to the edge of the data (savgol 1 and savgol 2 end with a line and a parabola respectively)
numpy average: stops when the window reaches the left side of the data and fills those places in the array with NaN; same behaviour as the my_average method on the right side
numpy convolve: follows the data pretty accurately. I suspect the window size is reduced symmetrically when one side of the window reaches the edge of the data
my_average/me: my own method that I implemented, because I was not satisfied with the other ones. It simply shrinks the part of the window that reaches beyond the data down to the edge of the data, but keeps the window on the other side at the original size given by span

Complicated methods:

These methods all end with a nice fit to the data: savgol 1 ends with a line, savgol 2 with a parabola.

Behavior of the curves

Showing the behavior of the different methods in the middle of the data.

The different savgol and average filters produce a rough line; lowess, fft and kernel regression produce a smooth fit. lowess appears to cut corners when the data changes.

Motivation

I have a Raspberry Pi logging data for fun, and the visualization proved to be a small challenge. All data points, except RAM usage and ethernet traffic, are only recorded in discrete steps and/or are inherently noisy. For example, the temperature sensor only outputs whole degrees, but differs by up to two degrees between consecutive measurements. No useful information can be gained from such a scatter plot. To visualize the data I therefore needed some method that is not too computationally expensive and produces a moving average. I also wanted nice behavior at the edges of the data, as this especially impacts the latest info when looking at live data. I settled on the numpy convolve method with my_average to improve the edge behavior.

A clear definition of smoothing of a 1D signal from the SciPy Cookbook shows you how it works.

Shortcut:

import numpy

def smooth(x,window_len=11,window='hanning'):
    """smooth the data using a window with requested size.

    This method is based on the convolution of a scaled window with the signal.
    The signal is prepared by introducing reflected copies of the signal 
    (with the window size) in both ends so that transient parts are minimized
    in the beginning and end part of the output signal.

    input:
        x: the input signal 
        window_len: the dimension of the smoothing window; should be an odd integer
        window: the type of window from 'flat', 'hanning', 'hamming', 'bartlett', 'blackman'
            flat window will produce a moving average smoothing.

    output:
        the smoothed signal

    example:

    t=linspace(-2,2,50)
    x=sin(t)+randn(len(t))*0.1
    y=smooth(x)

    see also: 

    numpy.hanning, numpy.hamming, numpy.bartlett, numpy.blackman, numpy.convolve
    scipy.signal.lfilter

    TODO: the window parameter could be the window itself if an array instead of a string
    NOTE: length(output) != length(input), to correct this: return y[(window_len/2-1):-(window_len/2)] instead of just y.
    """

    if x.ndim != 1:
        raise ValueError, "smooth only accepts 1 dimension arrays."

    if x.size < window_len:
        raise ValueError, "Input vector needs to be bigger than window size."


    if window_len<3:
        return x


    if window not in ['flat', 'hanning', 'hamming', 'bartlett', 'blackman']:
        raise ValueError("Window should be one of 'flat', 'hanning', 'hamming', 'bartlett', 'blackman'")


    s=numpy.r_[x[window_len-1:0:-1],x,x[-2:-window_len-1:-1]]
    #print(len(s))
    if window == 'flat': #moving average
        w=numpy.ones(window_len,'d')
    else:
        w = getattr(numpy, window)(window_len)  # look up the window function by name, e.g. numpy.hanning

    y=numpy.convolve(w/w.sum(),s,mode='valid')
    return y




from numpy import *
from pylab import *

def smooth_demo():

    t=linspace(-4,4,100)
    x=sin(t)
    xn=x+randn(len(t))*0.1
    y=smooth(x)

    ws=31

    subplot(211)
    plot(ones(ws))

    windows=['flat', 'hanning', 'hamming', 'bartlett', 'blackman']

    # hold(True) was needed here with older matplotlib/pylab; it has since been removed
    for w in windows[1:]:
        eval('plot('+w+'(ws) )')

    axis([0,30,0,1.1])

    legend(windows)
    title("The smoothing windows")
    subplot(212)
    plot(x)
    plot(xn)
    for w in windows:
        plot(smooth(xn,10,w))
    l=['original signal', 'signal with noise']
    l.extend(windows)

    legend(l)
    title("Smoothing a noisy signal")
    show()


if __name__=='__main__':
    smooth_demo()
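For completeness, a small usage sketch of the smooth() function defined above; per the NOTE in the docstring, the reflected padding makes the output window_len - 1 samples longer than the input, so it is trimmed here before plotting:

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(-4, 4, 100)
xn = np.sin(t) + np.random.randn(len(t)) * 0.1

window_len = 11
y_s = smooth(xn, window_len, 'hanning')
half = (window_len - 1) // 2
y_s = y_s[half:-half]  # trim the extra samples so y_s lines up with t again

plt.plot(t, xn, '.k', label='noisy signal')
plt.plot(t, y_s, 'r', label='smoothed')
plt.legend()
plt.show()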

Fitting a moving average to your data would smooth out the noise; see this answer for how to do that.
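For example, a centered rolling mean with pandas might look like the following sketch; the window size of 11 is an assumption that needs tuning:

import numpy as np
import pandas as pd

x = np.linspace(0, 2*np.pi, 100)
y = np.sin(x) + np.random.random(100) * 0.2

# centered moving average; min_periods=1 keeps values at the edges instead of NaN
y_smooth = pd.Series(y).rolling(window=11, center=True, min_periods=1).mean().to_numpy()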

If you would like to use LOWESS to fit your data (it's similar to a moving average but more sophisticated), you can do that using the statsmodels library:

import numpy as np
import pylab as plt
import statsmodels.api as sm

x = np.linspace(0,2*np.pi,100)
y = np.sin(x) + np.random.random(100) * 0.2
lowess = sm.nonparametric.lowess(y, x, frac=0.1)

plt.plot(x, y, '+')
plt.plot(lowess[:, 0], lowess[:, 1])
plt.show()

Finally, if you know the functional form of your signal, you could fit a curve to your data, which would probably be the best thing to do.
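For the dataset from the question, where a sine is a reasonable functional form, a sketch with scipy.optimize.curve_fit could look like this; the model and the initial guess p0 are assumptions:

import numpy as np
from scipy.optimize import curve_fit

x = np.linspace(0, 2*np.pi, 100)
y = np.sin(x) + np.random.random(100) * 0.2

def model(x, a, b, c, d):
    # assumed form: amplitude, frequency, phase and vertical offset
    return a * np.sin(b * x + c) + d

popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0, 0.0, 0.1])
y_fit = model(x, *popt)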