I want to select rows at random in PostgreSQL. I tried this:
select * from table where random() < 0.01;
But others suggested:
select * from table order by random() limit 1000;
I have a very large table, with 500 million rows, and I want it to be fast.
Which approach is better? What are the differences? What is the best way to select random rows?
Current answer
My experience tells me:
offset floor(random() * N) limit 1 is not faster than order by random() limit 1.
I had thought the offset approach would be faster, because it saves Postgres the time spent sorting. It turns out it is not.
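For concreteness, a minimal sketch of the two forms being compared; the table name mytable and the count subquery are placeholders I am assuming, not taken from the original answer:

-- The OFFSET variant (floor() yields a double, so cast to int for OFFSET):
SELECT * FROM mytable
OFFSET floor(random() * (SELECT count(*) FROM mytable))::int
LIMIT 1;

-- The ORDER BY variant it was benchmarked against:
SELECT * FROM mytable ORDER BY random() LIMIT 1;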
Other answers
Starting with PostgreSQL 9.5, there is new syntax dedicated to getting random elements from a table:
SELECT * FROM mytable TABLESAMPLE SYSTEM (5);
This example will give you 5% of the elements of mytable.
See the documentation for further explanation: http://www.postgresql.org/docs/current/static/sql-select.html
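As a side note from that same SELECT documentation: SYSTEM sampling accepts a REPEATABLE seed, and a row-level BERNOULLI method also exists. A short sketch, assuming the same mytable:

-- Same 5% sample, reproducible across runs for a given seed:
SELECT * FROM mytable TABLESAMPLE SYSTEM (5) REPEATABLE (42);
-- BERNOULLI scans the whole table but samples individual rows,
-- trading speed for a more uniform sample than block-level SYSTEM:
SELECT * FROM mytable TABLESAMPLE BERNOULLI (5);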
select * from table order by random() limit 1000;
If you know how many rows you want, check out tsm_system_rows.
tsm_system_rows
module provides the table sampling method SYSTEM_ROWS, which can be used in the TABLESAMPLE clause of a SELECT command. This table sampling method accepts a single integer argument that is the maximum number of rows to read. The resulting sample will always contain exactly that many rows, unless the table does not contain enough rows, in which case the whole table is selected. Like the built-in SYSTEM sampling method, SYSTEM_ROWS performs block-level sampling, so that the sample is not completely random but may be subject to clustering effects, especially if only a small number of rows are requested.
First install the extension:
CREATE EXTENSION tsm_system_rows;
Then your query:
SELECT *
FROM table
TABLESAMPLE SYSTEM_ROWS(1000);
A variation of the materialized view "Possible alternative" outlined by Erwin Brandstetter is possible.
Say, for example, that you do not want duplicates among the randomized values that are returned. An example use case is generating short codes that can be used only once.
The primary table containing your (non-randomized) set of values must have some expression that determines which rows are "used" and which are not; here I will keep it simple by just creating a boolean column with the name used.
Assume this is the input table (additional columns may be added, as they do not affect the solution):
id_values  id  |  used
           ----+--------
            1  |  FALSE
            2  |  FALSE
            3  |  FALSE
            4  |  FALSE
            5  |  FALSE
           ...
Populate the id_values table as needed. Then, as described by Erwin, create a materialized view that randomizes the id_values table once:
CREATE MATERIALIZED VIEW id_values_randomized AS
SELECT id
FROM id_values
ORDER BY random();
Note that the materialized view does not contain the used column, because that would quickly become out-of-date. Nor does the view need to contain other columns that may be in the id_values table.
In order to obtain (and "use") random values, use UPDATE-RETURNING on id_values, selecting from id_values_randomized with a join, and applying the desired criteria to obtain only relevant possibilities. For example:
UPDATE id_values
SET used = TRUE
WHERE id_values.id IN
(SELECT i.id
FROM id_values_randomized r INNER JOIN id_values i ON i.id = r.id
WHERE (NOT i.used)
LIMIT 1)
RETURNING id;
Change LIMIT as necessary -- if you need more than one random value at a time, change LIMIT to n, where n is the number of values needed.
With the proper indexes on id_values, I believe the UPDATE-RETURNING should execute very quickly with little load. It returns randomized values in one database round-trip. The criteria for "eligible" rows can be as complex as required. New rows can be added to the id_values table at any time, and they become accessible to the application as soon as the materialized view is refreshed (which can likely be run at an off-peak time). Creation and refresh of the materialized view will be slow, but it only needs to be executed when new ids added to the id_values table need to be made available.
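As a sketch of that refresh step: the unique index is my assumption here, since it is what REFRESH ... CONCURRENTLY requires in order to avoid blocking readers.

-- One-time: a unique index is required for CONCURRENTLY.
CREATE UNIQUE INDEX ON id_values_randomized (id);
-- Nightly/off-peak: re-randomize without locking out readers.
REFRESH MATERIALIZED VIEW CONCURRENTLY id_values_randomized;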
Postgresql order by random(), select rows in random order:
This is slow because it orders the whole table to guarantee that every row gets an exactly equal chance of being chosen. A full table scan is unavoidable for perfect randomness.
select your_columns from your_table ORDER BY random()
Postgresql order by random() with distinct:
select * from
(select distinct your_columns from your_table) table_alias
ORDER BY random()
Postgresql order by random() limit one row:
This is also slow, because it has to table-scan to make sure every row that might be chosen has an equal chance of being chosen, right this instant:
select your_columns from your_table ORDER BY random() limit 1
Constant-time select of N random rows, with a periodic table scan:
If your table is huge, the table scans above can take up to 5 minutes to finish.
To go faster, you can schedule a behind-the-scenes nightly table-scan re-index that guarantees a perfectly random selection at O(1) constant-time speed, except during the nightly re-indexing table scan itself, when you must wait for maintenance to finish before you can receive another random row.
--Create a demo table with lots of random nonuniform data, big_data
--is your huge table you want to get random rows from in constant time.
drop table if exists big_data;
CREATE TABLE big_data (id serial unique, some_data text );
CREATE INDEX ON big_data (id);
--Fill it with ten million rows to simulate your beautiful data:
INSERT INTO big_data (some_data) SELECT md5(random()::text) AS some_data
FROM generate_series(1,10000000);
--This delete statement puts holes in your index
--making it NONuniformly distributed
DELETE FROM big_data WHERE id IN (2, 4, 6, 7, 8);
--Do the nightly maintenance task on a schedule at 1AM.
drop table if exists big_data_mapper;
CREATE TABLE big_data_mapper (id serial, big_data_id int);
CREATE INDEX ON big_data_mapper (id);
CREATE INDEX ON big_data_mapper (big_data_id);
INSERT INTO big_data_mapper(big_data_id) SELECT id FROM big_data ORDER BY id;
--We have to use a function because the big_data_mapper might be out-of-date
--in between nightly tasks, so to solve the problem of a missing row,
--you try again until you succeed. In the event the big_data_mapper
--is broken, it tries 25 times then gives up and returns -1.
CREATE or replace FUNCTION get_random_big_data_id()
RETURNS int language plpgsql AS $$
declare
response int;
BEGIN
--Loop is required because big_data_mapper could be old
--Keep rolling the dice until you find one that hits.
    for counter in 1..25 loop
        SELECT big_data_id
        FROM big_data_mapper
        OFFSET floor(random() * (
            select max(id) biggest_value from big_data_mapper
        ))
        LIMIT 1
        INTO response;
        if response is not null then
            return response;
        end if;
    end loop;
return -1;
END;
$$;
--get a random big_data id in constant time:
select get_random_big_data_id();
--Get 1 random row from big_data table in constant time:
select * from big_data where id in (
select get_random_big_data_id() from big_data limit 1
);
┌─────────┬──────────────────────────────────┐
│ id │ some_data │
├─────────┼──────────────────────────────────┤
│ 8732674 │ f8d75be30eff0a973923c413eaf57ac0 │
└─────────┴──────────────────────────────────┘
--Get 3 random rows from big_data in constant time:
select * from big_data where id in (
select get_random_big_data_id() from big_data limit 3
);
┌─────────┬──────────────────────────────────┐
│ id │ some_data │
├─────────┼──────────────────────────────────┤
│ 2722848 │ fab6a7d76d9637af89b155f2e614fc96 │
│ 8732674 │ f8d75be30eff0a973923c413eaf57ac0 │
│ 9475611 │ 36ac3eeb6b3e171cacd475e7f9dade56 │
└─────────┴──────────────────────────────────┘
--Test what happens when big_data_mapper stops receiving
--nightly reindexing.
delete from big_data_mapper where 1=1;
select get_random_big_data_id(); --It tries 25 times, and returns -1
--which means wait N minutes and try again.
Adapted from: https://www.gab.lc/articles/bigdata_postgresql_order_by_random
Or, if all of the above is too much work:
A simpler good-'nuff solution for constant-time random row selection is to add a new column to your big table called big_data.mapper_int, made NOT NULL with a unique index. Every night, reset the column to unique integers between 1 and max(n). To get a random row, choose a random integer between 1 and max(mapper_int) and return the row whose mapper_int equals it. If there is no row with that value (because the row has changed since the re-index), choose another random integer. If a row is added to big_data, populate its mapper_int with max(mapper_int) + 1. A sketch of this follows.
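A rough sketch of that column approach, reusing the big_data schema from above; every statement here is my illustration of the description, not code from the original article:

-- One-time setup: add the mapper column and its unique index
-- (added nullable first, since existing rows have no value yet).
ALTER TABLE big_data ADD COLUMN mapper_int int;
CREATE UNIQUE INDEX ON big_data (mapper_int);

-- Nightly reset: renumber all rows 1..n in a single pass.
WITH numbered AS (
    SELECT id, row_number() OVER (ORDER BY id) AS rn FROM big_data
)
UPDATE big_data b SET mapper_int = n.rn
FROM numbered n WHERE b.id = n.id;

-- Constant-time pick: roll an integer in [1, max(mapper_int)];
-- if no row matches (data changed since the re-index), roll again.
SELECT * FROM big_data
WHERE mapper_int = 1 + floor(random() * (SELECT max(mapper_int) FROM big_data))::int;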
Add a column called r with type serial. Index r.
Suppose we have 200,000 rows; we generate a random number n, where 0 < n <= 200,000.
Select the rows with r > n, sort them ascending, and pick the smallest one.
Code:
select * from YOUR_TABLE
where r > (
select (
select reltuples::bigint AS estimate
from pg_class
where oid = 'public.YOUR_TABLE'::regclass) * random()
)
order by r asc limit 1;
The code is self-explanatory: the subquery in the middle is a fast estimate of the table row count, taken from https://stackoverflow.com/a/7945274/1271094.
At the application level, you need to execute the statement again if n > the number of rows, or if you need to select multiple rows.
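A hedged sketch of that retry, moved server-side into plpgsql so one call suffices; the function name, the attempt cap of 3, and the NULL fallback are my assumptions (YOUR_TABLE remains the placeholder from above):

CREATE OR REPLACE FUNCTION pick_random_row()
RETURNS YOUR_TABLE LANGUAGE plpgsql AS $$
DECLARE
    result YOUR_TABLE;
BEGIN
    -- Retry a few times: the reltuples estimate can exceed the true
    -- max(r), in which case the query finds no row.
    FOR attempt IN 1..3 LOOP
        SELECT * INTO result
        FROM YOUR_TABLE
        WHERE r > (SELECT reltuples::bigint
                   FROM pg_class
                   WHERE oid = 'public.YOUR_TABLE'::regclass) * random()
        ORDER BY r ASC LIMIT 1;
        IF FOUND THEN
            RETURN result;
        END IF;
    END LOOP;
    RETURN NULL;  -- give up; the caller should retry later
END;
$$;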