I have a SQL Server table with roughly 50,000 rows, and I want to select about 5,000 of them at random. I've thought of a complicated way: create a temp table with a "random number" column, copy my table into it, loop through the temp table updating each row with RAND(), and then select the rows where the random-number column is < 0.1. I'm looking for a simpler way to do it, in a single statement if possible.

This article suggests using the NEWID() function. That looks promising, but I can't see how I could reliably select a given percentage of rows.

Has anybody ever done this? Any ideas?


Current answer

Simply order the table by a random number and use TOP to obtain the first 5,000 rows.

SELECT TOP 5000 * FROM [Table] ORDER BY newid();

Update

Just tried it, and a single newid() call is sufficient; no need for all the casts and all the math.
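
If you want a percentage of rows rather than a fixed count (the question asks for "about 5,000 of 50,000", i.e. roughly 10%), T-SQL's TOP also accepts a PERCENT form. A minimal sketch, assuming the same [Table]:

```sql
-- Roughly 10 percent of the rows, chosen at random.
-- TOP n PERCENT rounds the row count up to the next whole row.
SELECT TOP 10 PERCENT *
FROM [Table]
ORDER BY NEWID();
```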

Other answers

-- Selects roughly 55 percent of the rows at random.
-- Note: random() and LIMIT are PostgreSQL/SQLite syntax, not SQL Server.
select * from table
where id in (
    select id from table
    order by random()
    limit ((select count(*) from table) * 55 / 100)
)

This worked for me (note that RANDOM() and LIMIT are PostgreSQL/SQLite syntax; on SQL Server use NEWID() and TOP instead):

SELECT * FROM table_name
ORDER BY RANDOM()
LIMIT [number]

Try this:

SELECT TOP 10 Field1, ..., FieldN
FROM Table1
ORDER BY NEWID()

If you (unlike the OP) need a specific number of records (which makes the CHECKSUM approach difficult) and want a more random sample than TABLESAMPLE provides by itself, while also getting better speed than CHECKSUM, you can merge the TABLESAMPLE and NEWID() methods, like this:

DECLARE @sampleCount int = 50
SET STATISTICS TIME ON

SELECT TOP (@sampleCount) * 
FROM [yourtable] TABLESAMPLE(10 PERCENT)
ORDER BY NEWID()

SET STATISTICS TIME OFF

In my case this is the most straightforward compromise between randomness (it isn't truly random, I know) and speed. Vary the TABLESAMPLE percentage (or row count) as appropriate: the higher the percentage, the more random the sample, but expect a linear drop-off in speed. (Note that TABLESAMPLE does not accept a variable.)
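
Since TABLESAMPLE will not accept a variable, one workaround is to build the statement with dynamic SQL. A sketch under that assumption (the @tablePercent name is illustrative, not from the original answer):

```sql
DECLARE @sampleCount int = 50;
DECLARE @tablePercent int = 10;  -- illustrative: the TABLESAMPLE pre-filter percentage

-- TABLESAMPLE requires a literal, so splice the percentage into the text;
-- @sampleCount can still be passed as a proper parameter.
DECLARE @sql nvarchar(max) = N'
    SELECT TOP (@sampleCount) *
    FROM [yourtable] TABLESAMPLE(' + CAST(@tablePercent AS nvarchar(10)) + N' PERCENT)
    ORDER BY NEWID();';

EXEC sp_executesql @sql, N'@sampleCount int', @sampleCount = @sampleCount;
```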

This is an updated and improved form of sampling. It is based on the same concept as some other answers that use CHECKSUM / BINARY_CHECKSUM and modulus.

Reasons to use an implementation similar to this one, rather than the other answers:

- It is relatively fast over huge data sets and can be efficiently used in/with derived queries. Millions of pre-filtered rows can be sampled in seconds with no tempdb usage and, if aligned with the rest of the query, the overhead is often minimal.
- It does not suffer from CHECKSUM(*) / BINARY_CHECKSUM(*) issues with runs of data. When using the CHECKSUM(*) approach, the rows can be selected in "chunks" and not be "random" at all! This is because CHECKSUM prefers speed over distribution.
- It results in a stable/repeatable row selection and can be trivially changed to produce different rows on subsequent query executions. Approaches that use NEWID(), such as CHECKSUM(NEWID()) % 100, can never be stable/repeatable.
- It allows for increased sample precision and reduces introduced statistical errors. The sampling precision can also be tweaked. CHECKSUM only returns an int value.
- It does not use ORDER BY NEWID(), as ordering can become a significant bottleneck with large input sets. Avoiding the sort also reduces memory and tempdb usage.
- It does not use TABLESAMPLE and thus works with a WHERE pre-filter.

Cons / limitations:

- Slightly slower execution times than CHECKSUM(*). Using HASHBYTES, as shown below, adds about 3/4 of a second of overhead per million rows. This is with my data, on my database instance: YMMV. This overhead can be eliminated by using a persisted computed column of the resulting 'well distributed' bigint value from HASHBYTES.
- Unlike the basic SELECT TOP n .. ORDER BY NEWID(), this is not guaranteed to return "exactly N" rows. Instead, it returns a percentage of rows, where that percentage is pre-determined. For very small sample sizes this could result in 0 rows selected. This limitation is shared with the CHECKSUM(*) approaches.

Here is the gist:

-- Allow a sampling precision [0, 100.0000].
declare @sample_percent decimal(7, 4) = 12.3456

select
    t.*
from t
where 1=1
    and t.Name = 'Mr. No Questionable Checksum Usages'
    and ( -- sample
        @sample_percent = 100
        or abs(
            -- Choose appropriate identity column(s) for hashbytes input.
            -- For demonstration it is assumed to be a UNIQUEIDENTIFIER rowguid column.
            convert(bigint, hashbytes('SHA1', convert(varbinary(32), t.rowguid)))
        ) % (1000 * 100) < (1000 * @sample_percent)
    )

Notes:

- While SHA1 is technically deprecated since SQL Server 2016, it is both sufficient for the task and slightly faster than either MD5 or SHA2_256. Use a different hashing function as relevant. If the table already contains a hashed column (with a good distribution), that could potentially be used as well.
- Conversion to bigint is critical as it allows 2^63 bits of 'random space' to which to apply the modulus operator; this is much more than the 2^31 range from the CHECKSUM result. This reduces the modulus error at the limit, especially as the precision is increased.
- The sampling precision can be changed as long as the modulus operand and sample percent are multiplied appropriately. In this case, that is 1000 * to account for the 4 digits of precision allowed in @sample_percent.
- You can multiply the bigint value by RAND() to return a different row sample each run. This effectively changes the permutation of the fixed hash values.
- If @sample_percent is 100, the query planner can eliminate the slower calculation code entirely. Remember 'parameter sniffing' rules. This allows the code to be left in the query regardless of enabling sampling.
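
One way the "multiply the bigint value by RAND()" note could be read as code; a sketch, not part of the original answer (the @run_seed name is illustrative, and the conversion back to bigint is needed because T-SQL's modulo operator does not accept float operands):

```sql
declare @sample_percent decimal(7, 4) = 12.3456
declare @run_seed int = 20240101  -- illustrative: vary this per run

-- RAND(@run_seed) is evaluated once per query, so every hash is scaled
-- by the same factor; changing @run_seed permutes which rows fall
-- under the modulus threshold.
select t.*
from t
where abs(
        convert(bigint,
            convert(bigint, hashbytes('SHA1', convert(varbinary(32), t.rowguid)))
            * rand(@run_seed))
    ) % (1000 * 100) < (1000 * @sample_percent)
```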


Computing @sample_percent, with lower/upper limits, and adding a TOP "hint" to the query, which may be useful when the sample is used in a derived-table context:

-- Approximate max-sample and min-sample ranges.
-- The minimum sample percent should be non-zero within the precision.
declare @max_sample_size int = 3333333
declare @min_sample_percent decimal(7,4) = 0.3333
declare @sample_percent decimal(7,4) -- [0, 100.0000]
declare @sample_size int

-- Get initial count for determining sample percentages.
-- Remember to match the filter conditions with the usage site!
declare @rows int
select @rows = count(1)
    from t
    where 1=1
        and t.Name = 'Mr. No Questionable Checksum Usages'

-- Calculate sample percent and back-calculate actual sample size.
if @rows <= @max_sample_size begin
    set @sample_percent = 100
end else begin
    set @sample_percent = convert(float, 100) * @max_sample_size / @rows
    if @sample_percent < @min_sample_percent
        set @sample_percent = @min_sample_percent
end
set @sample_size = ceiling(@rows * @sample_percent / 100)

select *
from ..
join (
    -- Not a precise value: if limiting exactly at, can introduce more bias.
    -- Using 'option optimize for' avoids this while requiring dynamic SQL.
    select top (@sample_size + convert(int, @sample_percent + 5)) t.*
    from t
    where 1=1
        and t.Name = 'Mr. No Questionable Checksum Usages'
        and ( -- sample
            @sample_percent = 100
            or abs(
                convert(bigint, hashbytes('SHA1', convert(varbinary(32), t.rowguid)))
            ) % (1000 * 100) < (1000 * @sample_percent)
        )
) sampled
on ..