I have a table with the following fields:

id (Unique)
url (Unique)
title
company
site_id

Now, I need to remove the rows that have the same title, company and site_id. One way to do it is to use the SQL below together with a server-side (PHP) script:

SELECT title, site_id, location, id, count( * ) 
FROM jobs
GROUP BY site_id, company, title, location
HAVING count( * ) >1

After running this query, the duplicates can be removed using a server-side script.

However, I want to know whether this can be done using only SQL queries.


I have this query snippet for SQL Server, but I think it can be used in other DBMSs with small changes:

DELETE
FROM Table
WHERE Table.idTable IN  (  
    SELECT MAX(idTable)
    FROM Table
    GROUP BY field1, field2, field3
    HAVING COUNT(*) > 1)

I forgot to mention that this query won't delete the row with the lowest id among the duplicated rows. If that works for you, try this query:

DELETE
FROM jobs
WHERE jobs.id IN  (  
    SELECT MAX(id)
    FROM jobs
    GROUP BY site_id, company, title, location
    HAVING COUNT(*) > 1)

A very easy way to do this is to add a UNIQUE index on the 3 columns. When you write the ALTER statement, include the IGNORE keyword. Like so:

ALTER IGNORE TABLE jobs
ADD UNIQUE INDEX idx_name (site_id, title, company);

This will drop all the duplicate rows. As an added benefit, future INSERTs that are duplicates will error out. As always, you may want to take a backup before running something like this…
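If you just want a quick safety net inside the database before trying it, one simple option (a sketch using the jobs table from the question) is to copy the table first:

-- one-off backup copy of the data (CREATE TABLE ... AS SELECT copies rows, not indexes)
CREATE TABLE jobs_backup AS SELECT * FROM jobs;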

Edit: this no longer works in MySQL 5.7+

This feature was deprecated in MySQL 5.6 and removed in MySQL 5.7, so it no longer works.


MySQL has restrictions on referencing the table you are deleting from. You can work around that with a temporary table, like:

create temporary table tmpTable (id int);

insert  into tmpTable
        (id)
select  id
from    YourTable yt
where   exists
        (
        select  *
        from    YourTable yt2
        where   yt2.title = yt.title
                and yt2.company = yt.company
                and yt2.site_id = yt.site_id
                and yt2.id > yt.id
        );

delete  
from    YourTable
where   ID in (select id from tmpTable);

Following up on Kostanos' suggestion in the comments: the only slow query above is the DELETE, for cases where the database is very large. This query could be faster:

DELETE FROM YourTable USING YourTable, tmpTable WHERE YourTable.id=tmpTable.id

If the IGNORE statement doesn't work, as in my case, you can use the statements below:

CREATE TABLE your_table_deduped LIKE your_table;


INSERT your_table_deduped
SELECT *
FROM your_table
GROUP BY index1_id,
         index2_id;

RENAME TABLE your_table TO your_table_with_dupes;

RENAME TABLE your_table_deduped TO your_table;

#OPTIONAL
ALTER TABLE `your_table` ADD UNIQUE `unique_index` (`index1_id`, `index2_id`);

#OPTIONAL
DROP TABLE your_table_with_dupes;

I wanted to be more specific about which records I deleted, so here is my solution:

delete
from jobs c1
where not c1.location = 'Paris'
and  c1.site_id > 64218
and exists 
(  
select * from jobs c2 
where c2.site_id = c1.site_id
and   c2.company = c1.company
and   c2.location = c1.location
and   c2.title = c1.title
and   c2.site_id > 63412
and   c2.site_id < 64219
)

You can easily delete the duplicate records with this code:

$qry = mysql_query("SELECT * FROM cities");
while ($qry_row = mysql_fetch_array($qry))
{
    $qry2 = mysql_query("SELECT * FROM cities2 WHERE city = '".$qry_row['city']."'");

    if (mysql_num_rows($qry2) > 1) {
        // collect every duplicate of this city
        $city_arry = array();
        while ($row = mysql_fetch_array($qry2)) {
            $city_arry[] = $row;
        }

        // keep the first row and delete the rest
        $total = sizeof($city_arry) - 1;
        for ($i = 1; $i <= $total; $i++) {
            mysql_query("DELETE FROM cities2 WHERE town_id = '".$city_arry[$i][0]."'");
        }
    }
}

I had to do this with TEXT fields and ran into the 100-byte limit on the index.

I solved it by adding a column containing an MD5 hash of the fields, and then running the ALTER on that column.

ALTER TABLE `table` ADD `merged` VARCHAR(40) NOT NULL;
UPDATE `table` SET `merged` = MD5(CONCAT(`col1`, `col2`, `col3`));
ALTER IGNORE TABLE `table` ADD UNIQUE INDEX idx_name (`merged`);

If you don't want to alter the column properties, you can use the query below.

Since you have a column with unique ids (e.g., an auto_increment column), you can use it to remove the duplicates:

DELETE `a`
FROM
    `jobs` AS `a`,
    `jobs` AS `b`
WHERE
    -- IMPORTANT: Ensures one version remains
    -- Change "ID" to your unique column's name
    `a`.`ID` < `b`.`ID`

    -- Any duplicates you want to check for
    AND (`a`.`title` = `b`.`title` OR `a`.`title` IS NULL AND `b`.`title` IS NULL)
    AND (`a`.`company` = `b`.`company` OR `a`.`company` IS NULL AND `b`.`company` IS NULL)
    AND (`a`.`site_id` = `b`.`site_id` OR `a`.`site_id` IS NULL AND `b`.`site_id` IS NULL);

In MySQL, you can simplify it with the NULL-safe equality operator (a.k.a. the "spaceship operator"):

DELETE `a`
FROM
    `jobs` AS `a`,
    `jobs` AS `b`
WHERE
    -- IMPORTANT: Ensures one version remains
    -- Change "ID" to your unique column's name
    `a`.`ID` < `b`.`ID`

    -- Any duplicates you want to check for
    AND `a`.`title` <=> `b`.`title`
    AND `a`.`company` <=> `b`.`company`
    AND `a`.`site_id` <=> `b`.`site_id`;

This solution will move the duplicates into one table and the unique rows into another.

-- speed up creating uniques table if dealing with many rows
CREATE INDEX temp_idx ON jobs(site_id, company, title, location);

-- create the table with unique rows
INSERT jobs_uniques SELECT * FROM
    (
    SELECT * 
    FROM jobs
    GROUP BY site_id, company, title, location
    HAVING count(1) > 1
    UNION
    SELECT *
    FROM jobs
    GROUP BY site_id, company, title, location
    HAVING count(1) = 1
) x

-- create the table with duplicate rows
INSERT jobs_dupes 
SELECT * 
FROM jobs
WHERE id NOT IN
(SELECT id FROM jobs_uniques)

-- confirm the difference between uniques and dupes tables
SELECT COUNT(1)
AS jobs, 
(SELECT COUNT(1) FROM jobs_dupes) + (SELECT COUNT(1) FROM jobs_uniques)
AS sum
FROM jobs

And yet another solution:

DELETE t1 FROM my_table t1, my_table t2 WHERE t1.id < t2.id AND t1.my_field = t2.my_field AND t1.my_field_2 = t2.my_field_2 AND ...

A faster way is to insert the distinct rows into a temporary table. Using a DELETE, it took me a few hours to remove duplicates from a table of 8 million rows. Using INSERT and DISTINCT, it took only 13 minutes:

CREATE TABLE tempTableName LIKE tableName;  
CREATE INDEX ix_all_id ON tableName(cellId,attributeId,entityRowId,value);  
INSERT INTO tempTableName(cellId,attributeId,entityRowId,value) SELECT DISTINCT cellId,attributeId,entityRowId,value FROM tableName;  
TRUNCATE TABLE tableName;
INSERT INTO tableName SELECT * FROM tempTableName; 
DROP TABLE tempTableName;  

Simple and fast for all cases:

CREATE TEMPORARY TABLE IF NOT EXISTS _temp_duplicates AS (SELECT dub.id FROM table_with_duplications dub GROUP BY dub.field_must_be_uniq_1, dub.field_must_be_uniq_2 HAVING COUNT(*)  > 1);

DELETE FROM table_with_duplications WHERE id IN (SELECT id FROM _temp_duplicates);

I keep ending up on this page any time I google "remove duplicates from mysql", but the IGNORE solutions don't work for me because I have an InnoDB table.

This code works better every time:

CREATE TABLE tableToclean_temp LIKE tableToclean;
ALTER TABLE tableToclean_temp ADD UNIQUE INDEX (fontsinuse_id);
INSERT IGNORE INTO tableToclean_temp SELECT * FROM tableToclean;
DROP TABLE tableToclean;
RENAME TABLE tableToclean_temp TO tableToclean;

tableToclean = the name of the table that needs to be cleaned up

tableToclean_temp = a temporary table that is created and then dropped


Deleting duplicates on MySQL tables is a common issue that is generally the result of a missing constraint to avoid those duplicates beforehand. But this common issue usually comes with specific needs... that do require specific approaches. The approach should be different depending on, for example, the size of the data, the duplicated entry that should be kept (generally the first or the last one), whether there are indexes to be kept, or whether we want to perform any additional action on the duplicated data.

MySQL itself also has some particularities, such as not being able to reference the target table in a FROM clause when updating or deleting from it (it raises MySQL error #1093). This limitation can be overcome by using an inner query with a temporary table (as suggested in some of the approaches above). But such inner queries do not perform well with big data sources.
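To illustrate the limitation, here is a minimal sketch using the jobs table from the question and keeping the lowest id per group: the first statement triggers error #1093, the second avoids it by wrapping the subquery in a derived table that MySQL materializes first.

-- raises error #1093: the target table is referenced directly in the subquery
DELETE FROM jobs
WHERE id NOT IN (SELECT MIN(id) FROM jobs GROUP BY site_id, company, title);

-- works: the derived table (x) is materialized before the DELETE runs
DELETE FROM jobs
WHERE id NOT IN (
    SELECT keep_id FROM (
        SELECT MIN(id) AS keep_id FROM jobs GROUP BY site_id, company, title
    ) AS x
);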

However, there is a better way to remove duplicates, one that is both efficient and reliable, and that can easily be adapted to different needs.

The general idea is to create a new temporary table, usually adding a unique constraint to avoid further duplicates, and to INSERT the data from the former table into the new one while taking care of the duplicates. This approach relies on simple MySQL INSERT queries, creates a new constraint to prevent further duplicates, and skips the need for inner queries searching for duplicates and for temporary tables held in memory (so it also suits big data sources).

This is how it can be achieved. Suppose we have a table employee with the following columns:

employee (id, first_name, last_name, start_date, ssn)

In order to delete the rows with a duplicated ssn column, keeping only the first entry found, the following procedure can be used:

-- create a new tmp_employee table
CREATE TABLE tmp_employee LIKE employee;

-- add a unique constraint
ALTER TABLE tmp_employee ADD UNIQUE(ssn);

-- scan over the employee table to insert employee entries
INSERT IGNORE INTO tmp_employee SELECT * FROM employee ORDER BY id;

-- rename tables
RENAME TABLE employee TO backup_employee, tmp_employee TO employee;

Technical explanation

Line #1 creates a new tmp_employee table with exactly the same structure as the employee table.
Line #2 adds a UNIQUE constraint to the new tmp_employee table to avoid any further duplicates.
Line #3 scans over the original employee table by id, inserting new employee entries into the new tmp_employee table, while ignoring duplicated entries.
Line #4 renames the tables, so that the new employee table holds all the entries without duplicates, and a backup copy of the former data is kept in the backup_employee table.

Using this approach, 1.6 million records were reduced to 6 thousand in less than 200 seconds.

Chetan, following this process, you could quickly and easily remove all the duplicates and create a UNIQUE constraint by running:

CREATE TABLE tmp_jobs LIKE jobs;

ALTER TABLE tmp_jobs ADD UNIQUE(site_id, title, company);

INSERT IGNORE INTO tmp_jobs SELECT * FROM jobs ORDER BY id;

RENAME TABLE jobs TO backup_jobs, tmp_jobs TO jobs;

Of course, this procedure can be further modified to adapt to different needs when deleting the duplicates. Some examples follow.

✔ Variation that keeps the last entry instead of the first one

Sometimes we need to keep the last duplicated entry instead of the first one.

CREATE TABLE tmp_employee LIKE employee;

ALTER TABLE tmp_employee ADD UNIQUE(ssn);

INSERT IGNORE INTO tmp_employee SELECT * FROM employee ORDER BY id DESC;

RENAME TABLE employee TO backup_employee, tmp_employee TO employee;

On line 3, the ORDER BY id DESC clause gives priority to the last id over the rest.

✔ Variation that performs some task on the duplicates, e.g. keeping a count of the duplicates found

Sometimes we need to perform some further processing on the duplicated entries that are found (e.g. keeping a count of the duplicates).

CREATE TABLE tmp_employee LIKE employee;

ALTER TABLE tmp_employee ADD UNIQUE(ssn);

ALTER TABLE tmp_employee ADD COLUMN n_duplicates INT DEFAULT 0;

INSERT INTO tmp_employee (id, first_name, last_name, start_date, ssn) SELECT * FROM employee ORDER BY id ON DUPLICATE KEY UPDATE n_duplicates=n_duplicates+1;

RENAME TABLE employee TO backup_employee, tmp_employee TO employee;

On line 3, a new column n_duplicates is created.
On line 4, an INSERT INTO … ON DUPLICATE KEY UPDATE query is used to perform an additional update when a duplicate is found (in this case, incrementing a counter).
The INSERT … ON DUPLICATE KEY UPDATE query can be used to perform different kinds of updates for the duplicates found.
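For example, here is a small sketch of one such variation (the last_duplicate_id column is hypothetical, not part of the original procedure): besides counting the duplicates, it also records the id of the most recent duplicate seen for each ssn.

ALTER TABLE tmp_employee ADD COLUMN last_duplicate_id INT DEFAULT NULL;

INSERT INTO tmp_employee (id, first_name, last_name, start_date, ssn)
SELECT * FROM employee ORDER BY id
ON DUPLICATE KEY UPDATE
    n_duplicates = n_duplicates + 1,
    -- VALUES(id) refers to the id of the row that failed to insert
    last_duplicate_id = VALUES(id);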

✔ Variation that regenerates the auto-increment id field

Sometimes we use an auto-increment field, and in order to keep the index as compact as possible, we can take advantage of deleting the duplicates to regenerate the auto-increment field in the new temporary table.

CREATE TABLE tmp_employee LIKE employee;

ALTER TABLE tmp_employee ADD UNIQUE(ssn);

INSERT IGNORE INTO tmp_employee (first_name, last_name, start_date, ssn) SELECT first_name, last_name, start_date, ssn FROM employee ORDER BY id;

RENAME TABLE employee TO backup_employee, tmp_employee TO employee;

On line 3, instead of selecting all the fields in the table, the id field is skipped so that the DB engine generates a new one automatically.

✔ Further variations

Many further modifications are possible depending on the desired behaviour. As an example, the following queries use a second temporary table to, besides 1) keeping the last entry instead of the first one and 2) incrementing a counter for the duplicates found, also 3) regenerate the auto-increment id field while keeping the entry order as it was in the former data.

CREATE TABLE tmp_employee LIKE employee;

ALTER TABLE tmp_employee ADD UNIQUE(ssn);

ALTER TABLE tmp_employee ADD COLUMN n_duplicates INT DEFAULT 0;

INSERT INTO tmp_employee (id, first_name, last_name, start_date, ssn) SELECT * FROM employee ORDER BY id DESC ON DUPLICATE KEY UPDATE n_duplicates=n_duplicates+1;

CREATE TABLE tmp_employee2 LIKE tmp_employee;

INSERT INTO tmp_employee2 (first_name, last_name, start_date, ssn) SELECT first_name, last_name, start_date, ssn FROM tmp_employee ORDER BY id;

DROP TABLE tmp_employee;

RENAME TABLE employee TO backup_employee, tmp_employee2 TO employee;

A solution that is simple to understand and works without a primary key:

1. Add a new boolean column:
   alter table mytable add tokeep boolean;
2. Add a constraint on the duplicated columns AND the new column:
   alter table mytable add constraint preventdupe unique (mycol1, mycol2, tokeep);
3. Set the boolean column to true. This will succeed only on one of the duplicated rows because of the new constraint:
   update ignore mytable set tokeep = true;
4. Delete the rows that have not been marked as tokeep:
   delete from mytable where tokeep is null;
5. Drop the added column:
   alter table mytable drop tokeep;

I suggest keeping the constraint you added, so that new duplicates are prevented in the future.


Deleting duplicate rows with the DELETE JOIN statement: MySQL provides the DELETE JOIN statement, which you can use to remove duplicate rows quickly.

The following statement deletes duplicate rows and keeps the highest id:

DELETE t1 FROM contacts t1
INNER JOIN contacts t2
WHERE
    t1.id < t2.id AND t1.email = t2.email;

If you have a big table with a huge number of records, the above solutions won't work or will take too much time. Then we have a different solution:

-- Create temporary table

CREATE TABLE temp_table LIKE table1;

-- Add constraint
ALTER TABLE temp_table ADD UNIQUE(title, company,site_id);

-- Copy data
INSERT IGNORE INTO temp_table SELECT * FROM table1;

-- Rename and drop
RENAME TABLE table1 TO old_table1, temp_table TO table1;
DROP TABLE old_table1;

I found a simple way (it keeps the newest row):

DELETE t1 FROM table_name t1 INNER JOIN table_name t2 
WHERE t1.primary_id < t2.primary_id 
AND t1.check_duplicate_col_1 = t2.check_duplicate_col_1 
AND t1.check_duplicate_col_2 = t2.check_duplicate_col_2
...

As of version 8.0 (2018), MySQL finally supports window functions.

Window functions are both handy and efficient. Here is a solution that demonstrates how to use them to solve this task.

In a subquery, ROW_NUMBER() can be used to assign a position to every record in the table within its column1 / column2 group, ordered by id. If there are no duplicates, the record gets row number 1. If duplicates exist, they are numbered in ascending order, starting from 1.

Once the records in the subquery are numbered properly, the outer query simply deletes all records whose row number is not 1.

The query:

DELETE FROM tablename
WHERE id IN (
    SELECT id
    FROM (
        SELECT 
            id, 
            ROW_NUMBER() OVER(PARTITION BY column1, column2 ORDER BY id) rn
        FROM tablename
    ) t
    WHERE rn > 1
)
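Before running the DELETE, you may want to preview which rows would be removed; here is a small sketch of such a check (using the same hypothetical column1 / column2 names as above):

SELECT id, column1, column2, rn
FROM (
    SELECT
        id,
        column1,
        column2,
        ROW_NUMBER() OVER(PARTITION BY column1, column2 ORDER BY id) rn
    FROM tablename
) t
WHERE rn > 1;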

To deduplicate records on columns that should be unique, e.g. where COL1, COL2, COL3 should not be duplicated (suppose we missed defining a unique constraint on these 3 columns in the table structure, and multiple duplicates have been inserted into the table):

DROP TABLE TABLE_NAME_copy;
CREATE TABLE TABLE_NAME_copy LIKE TABLE_NAME;
INSERT INTO TABLE_NAME_copy
SELECT * FROM TABLE_NAME
GROUP BY COLUMN1, COLUMN2, COLUMN3; 
DROP TABLE TABLE_NAME;
ALTER TABLE TABLE_NAME_copy RENAME TO TABLE_NAME;

Hope this helps.


Delete the duplicate records from the table:

delete from job s 
where rowid < any 
(select rowid from job k 
where s.site_id = k.site_id and 
s.title = k.title and 
s.company = k.company);

or

delete from job s 
where rowid not in 
(select max(rowid) from job k 
where s.site_id = k.site_id and
s.title = k.title and 
s.company = k.company);

Below is the approach I used, and it works:

create table temp_table like my_table;

t_id is the unique column.

insert into temp_table (id) select id from my_table GROUP by t_id;
delete from my_table where id not in (select id from temp_table);
drop table temp_table;

This deletes duplicate rows that have the same title, company and site_id values. The last duplicate will be kept and the rest will be deleted (if you want to keep the first one and delete the others, change the id comparison to greater than: t1.id > t2.id).

DELETE t1 FROM tablename t1
INNER JOIN tablename t2 
WHERE 
    t1.id < t2.id AND
    t1.title = t2.title AND
    t1.company=t2.company AND
    t1.site_ID=t2.site_ID;

I had a table where I forgot to add a primary key to the id column, although it did have auto_increment on id. Then one day, something replayed the MySQL bin log on the database and inserted some duplicate rows.

I removed the duplicate rows by:

1. Selecting the unique duplicated rows and exporting them:
   select T1.* from table_name T1 inner join (select count(*) as c, id from table_name group by id) T2 on T1.id = T2.id where T2.c > 1 group by T1.id;
2. Deleting the duplicated rows by id.
3. Inserting the rows back from the exported data.
4. Then adding the primary key on id.
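A minimal sketch of steps 2–4 (the table name is the same placeholder as above; the exported rows would be loaded back in between):

-- step 2: remove every copy of the duplicated ids
DELETE FROM table_name
WHERE id IN (
    SELECT id FROM (
        SELECT id FROM table_name GROUP BY id HAVING COUNT(*) > 1
    ) AS dup
);

-- step 3: re-insert one copy of each exported row here (e.g. via LOAD DATA or INSERT statements)

-- step 4: protect the column going forward
ALTER TABLE table_name ADD PRIMARY KEY (id);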


Delete the duplicate rows using a DELETE JOIN statement:

DELETE t1 FROM table_name t1
JOIN table_name t2
WHERE
    t1.id < t2.id AND
    t1.title = t2.title AND t1.company = t2.company AND t1.site_id = t2.site_id;

This is perfect if you are trying to delete one of the duplicates and keep the other. Note that without the subquery you would get error #1093:

DELETE FROM table_name
WHERE id IN (
    SELECT * FROM (SELECT n.id FROM table_name n
    WHERE n.column2 != "value"
    GROUP BY n.column HAVING COUNT(n.column ) > 1) x
)