
I'm using a MEMORY table. It has several ids and a counter, all stored as integers. My code increments the counter by 1 if the row exists, or creates a row with counter=1 if not.

The query I use is:

INSERT INTO linked_mem
    ( id1, id2, id31, id4, cnt)
    VALUES (31316, 0, 557158967, 261470594, 1)
ON DUPLICATE KEY UPDATE cnt = cnt+1

Occasionally (about 5% of inserts) I get "Duplicate entry '[key numbers]' for key 1".

What could be the problem? Isn't the ON DUPLICATE KEY UPDATE part supposed to handle the duplicate key?

Update: adding the CREATE TABLE statement of the real table

CREATE TABLE `linked_mem` (
  `li_sid` int(10) unsigned NOT NULL default '0',
  `li_id1` int(10) unsigned NOT NULL default '0',
  `li_cid1` int(10) unsigned NOT NULL default '0',
  `li_id2` int(10) unsigned NOT NULL default '0',
  `li_cid2` int(10) unsigned NOT NULL default '0',
  `cnt` int(10) unsigned NOT NULL default '1',
  `li_filter` int(10) unsigned NOT NULL default '0',
  `li_group` int(10) unsigned NOT NULL default '0',
  PRIMARY KEY  (`li_id1`,`li_sid`,`li_cid1`,`li_cid2`,`li_group`,`cnt`,`li_id2`)
) ENGINE=MEMORY DEFAULT CHARSET=latin1

This can happen if the UPDATE clause changes a column that is part of a UNIQUE or PRIMARY KEY constraint, and the update itself causes a second key violation.

Update:

From your table structure I can see that it's exactly your case.

That's what happens:

INSERT INTO linked_mem
        ( id1, id2, id31, id4, cnt)
        VALUES (31316, 0, 557158967, 261470594, 1)
-- inserts a new row
INSERT INTO linked_mem
        ( id1, id2, id31, id4, cnt)
        VALUES (31316, 0, 557158967, 261470594, 1)
-- updates `cnt` to 2
INSERT INTO linked_mem
        ( id1, id2, id31, id4, cnt)
        VALUES (31316, 0, 557158967, 261470594, 1)
-- tries again to update `cnt` to 2 and fails

Remove cnt from your PRIMARY KEY definition.
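A runnable sketch of the fix, assuming it is ported to Python's built-in sqlite3 (the question uses MySQL, whose `ON DUPLICATE KEY UPDATE` sqlite spells as `ON CONFLICT ... DO UPDATE`; the simplified `id1`..`id4` column names follow the question, not the real `li_*` names). With `cnt` out of the primary key, the same upsert can run any number of times:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE linked_mem (
        id1 INTEGER NOT NULL, id2 INTEGER NOT NULL,
        id3 INTEGER NOT NULL, id4 INTEGER NOT NULL,
        cnt INTEGER NOT NULL DEFAULT 1,
        PRIMARY KEY (id1, id2, id3, id4)   -- cnt is NOT part of the key
    )
""")

upsert = """
    INSERT INTO linked_mem (id1, id2, id3, id4, cnt)
    VALUES (31316, 0, 557158967, 261470594, 1)
    ON CONFLICT (id1, id2, id3, id4) DO UPDATE SET cnt = cnt + 1
"""
for _ in range(3):   # same statement three times: one insert, then two updates
    con.execute(upsert)

print(con.execute("SELECT cnt FROM linked_mem").fetchone()[0])  # → 3
```

The row is inserted once and then only its counter changes, so no second key violation is possible.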

Are you sure that the primary key is correct? With this primary key, a row is also identified by the value of cnt, which is supposed to be a counter.

I've not tested this, but I think the following query will give the error if you start with an empty table.

INSERT INTO linked_mem
    ( id1, id2, id31, id4, cnt)
VALUES 
    (1, 1, 1, 1, 1),                  -- (1, 1, 1, 1, 1) added
    (1, 1, 1, 1, 1),                  -- (1, 1, 1, 1, 1) -> (1, 1, 1, 1, 2)
    (1, 1, 1, 1, 1),                  -- (1, 1, 1, 1, 1) added (now two rows)
    (1, 1, 1, 1, 1)                   -- error
ON DUPLICATE KEY UPDATE cnt = cnt+1

At the fourth row, the new (1, 1, 1, 1, 1) would be updated to (1, 1, 1, 1, 2), but that row already exists.
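The sequence above can be reproduced end to end; a minimal sketch, assuming a port to Python's sqlite3 (whose `ON CONFLICT ... DO UPDATE` is the sqlite spelling of MySQL's `ON DUPLICATE KEY UPDATE`), with `cnt` deliberately kept in the primary key as in the asker's table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE linked_mem (
        id1 INTEGER, id2 INTEGER, id3 INTEGER, id4 INTEGER,
        cnt INTEGER NOT NULL DEFAULT 1,
        PRIMARY KEY (id1, id2, id3, id4, cnt)   -- cnt wrongly part of the key
    )
""")

upsert = """
    INSERT INTO linked_mem (id1, id2, id3, id4, cnt)
    VALUES (1, 1, 1, 1, 1)
    ON CONFLICT (id1, id2, id3, id4, cnt) DO UPDATE SET cnt = cnt + 1
"""
con.execute(upsert)      # inserts (1, 1, 1, 1, 1)
con.execute(upsert)      # conflict: updates that row to (1, 1, 1, 1, 2)
con.execute(upsert)      # no conflict with (..., 2): a second (1, 1, 1, 1, 1) row!
try:
    con.execute(upsert)  # conflicts with (..., 1); updating it to (..., 2) collides
except sqlite3.IntegrityError as e:
    print("duplicate entry:", e)
```

Because `cnt` participates in the key, the third upsert sees no conflict and silently creates a second row, and the fourth cannot update without colliding with the first.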

It's hard to tell with your uninformative column names, but that primary key is so wide that it looks pretty darn useless. What is it actually doing, and why was that set of columns chosen? Is there a better choice? I'd guess the other posters are right, and your update is violating the implicit uniqueness constraint of the PK; with the count being one of your PK columns, that's pretty much to be expected.

I had the same problem.

I was using ON DUPLICATE KEY UPDATE and still getting the error: Duplicate entry.

It was caused by another column having a unique key index and one row having an empty value in it.
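A hypothetical sketch of this situation in Python's sqlite3 (table and column names `users`, `email`, `hits` are invented for illustration). Note the semantics differ from MySQL, where ON DUPLICATE KEY UPDATE watches every unique key; sqlite's `ON CONFLICT` watches only the index you name. The shared point is that a second unique index can still reject the row even though an upsert clause is present:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE users (
        id    INTEGER PRIMARY KEY,
        email TEXT UNIQUE,            -- the "other" unique index
        hits  INTEGER NOT NULL DEFAULT 1
    )
""")
con.execute("INSERT INTO users (id, email) VALUES (1, '')")  # row with empty value

try:
    # New id, so ON CONFLICT(id) never fires -- but email '' already exists.
    con.execute("""
        INSERT INTO users (id, email) VALUES (2, '')
        ON CONFLICT (id) DO UPDATE SET hits = hits + 1
    """)
except sqlite3.IntegrityError as e:
    print(e)   # UNIQUE constraint failed: users.email
```

The fix in my case was to clean up the rows sharing the empty value (or drop the unique index) so the second constraint could no longer collide.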

There may be 3 reasons for a duplicate key exception:

  • You are inserting an int into a string column, or vice versa

  • There is a spelling mistake between the column name in your query and the column name in the table

  • You are trying to update the primary key, or setting the PK during an update
