
How can I change the limit

Row size too large (> 8126). Changing some columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED may help. In current row format, BLOB prefix of 768 bytes is stored inline.

Table:

id  int(11) No       
name    text    No       
date    date    No       
time    time    No       
schedule    int(11) No       
category    int(11) No       
top_a   varchar(255)    No       
top_b   varchar(255)    No       
top_c   varchar(255)    No       
top_d   varchar(255)    No       
top_e   varchar(255)    No       
top_f   varchar(255)    No       
top_g   varchar(255)    No       
top_h   varchar(255)    No       
top_i   varchar(255)    No       
top_j   varchar(255)    No       
top_title_a varchar(255)    No       
top_title_b varchar(255)    No       
top_title_c varchar(255)    No       
top_title_d varchar(255)    No       
top_title_e varchar(255)    No       
top_title_f varchar(255)    No       
top_title_g varchar(255)    No       
top_title_h varchar(255)    No       
top_title_i varchar(255)    No       
top_title_j varchar(255)    No       
top_desc_a  text    No       
top_desc_b  text    No       
top_desc_c  text    No       
top_desc_d  text    No       
top_desc_e  text    No       
top_desc_f  text    No       
top_desc_g  text    No       
top_desc_h  text    No       
top_desc_i  text    No       
top_desc_j  text    No       
status  int(11) No       
admin_id    int(11) No 
Can't update, error "Row size too large (> 8126). Changing some columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED may help. In current row format, BLOB prefix of 768 bytes is stored inline." – Lasha Kurt, Mar 23, 2013 at 10:25

If this data is in another table, consider using a foreign key instead; this will dramatically decrease your row size and normalize your database schema. – didierc, Mar 23, 2013 at 10:35
  

You may want to take a look at this article, which explains a lot about MySQL row sizes. It's important to note that even if you use TEXT or BLOB fields, your row size can still be over 8K (the limit for InnoDB), because the first 768 bytes of each such column are stored inline in the page.

The simplest way to fix this is to use the Barracuda file format with InnoDB. That gets rid of the problem almost entirely, because only a 20-byte pointer to the text data is stored in the row instead of the first 768 bytes.

There is a possibility that the above still does not resolve your issue. There is a known (and verified) bug with the InnoDB engine, and a temporary fix for now is to fall back to the MyISAM engine for temporary storage. So, in your my.cnf file:

internal_tmp_disk_storage_engine=MyISAM
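The Barracuda switch described above is usually a three-part change: two settings in my.cnf plus a table rebuild. A sketch, assuming MySQL 5.5–5.7 or MariaDB ≤ 10.2 (`my_table` is a placeholder; in MySQL 8.0, innodb_file_format no longer exists and file-per-table plus DYNAMIC are already the defaults):

```sql
-- In my.cnf, under [mysqld]:
--   innodb_file_per_table = 1
--   innodb_file_format    = Barracuda

-- After restarting the server, rebuild the affected table so it
-- picks up the new row format:
ALTER TABLE my_table ROW_FORMAT=DYNAMIC;

-- Verify that the change took effect (check the Row_format column):
SHOW TABLE STATUS LIKE 'my_table';
```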
+1 -> The innodb_file_per_table part is crucial and goes hand-in-hand with changing the format to Barracuda. Without it, MySQL will silently continue to use Antelope. – Nick, Sep 16, 2014 at 18:59

@Eduardo I'd recommend backing up the data first. But yes, you can do this on production too! – hjpotter92, Sep 3, 2016 at 9:18

This isn't guaranteed to fix the problem. See this post for an example script that adds these settings but is still able to reproduce the error. – Cerin, Mar 15, 2017 at 0:08

Seems to me this answer is not relevant for modern versions of MySQL. At least not based on dba.stackexchange.com/a/125893/79826. – Corin, Jan 15, 2021 at 16:15

I ran into this problem recently and solved it a different way. If you are running MySQL version 5.6.20, there is a known bug in the system. See the MySQL docs:

Important Due to Bug #69477, redo log writes for large, externally stored BLOB fields could overwrite the most recent checkpoint. To address this bug, a patch introduced in MySQL 5.6.20 limits the size of redo log BLOB writes to 10% of the redo log file size. As a result of this limit, innodb_log_file_size should be set to a value greater than 10 times the largest BLOB data size found in the rows of your tables plus the length of other variable length fields (VARCHAR, VARBINARY, and TEXT type fields).

In my situation the offending BLOB data was around 16 MB. Thus, I solved it by adding a line to my.cnf that ensured innodb_log_file_size was at least 10× that amount, and then some:

innodb_log_file_size = 256M
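To pick a safe value, you can measure the largest BLOB currently stored. A sketch, where `my_table` and `my_blob_col` are placeholders for your own table and column names:

```sql
-- Largest stored value in the BLOB column, in megabytes; per the docs
-- quoted above, innodb_log_file_size should exceed 10x this figure:
SELECT MAX(LENGTH(my_blob_col)) / 1024 / 1024 AS largest_blob_mb
FROM my_table;
```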

Spot on. My "Row size too large" error on a LONGBLOB field disappeared as soon as I increased the innodb_log_file_size parameter. Also was running 5.6.20. – adamup, Sep 14, 2014 at 14:06

This did not solve my issue either. I'm considering replacing many of my VARCHAR fields with TEXTs, as I've seen on similar threads. – PDoria, Oct 1, 2017 at 15:11

innodb_strict_mode=0 -> no spaces. Otherwise, restarting MariaDB 10.2 results in an error :-P – Pathros, Mar 26, 2018 at 4:30

According to the documentation, disabling strict mode only prevents errors and warnings from appearing, so it doesn't seem to solve the problem! download.nust.na/pub6/mysql/doc/innodb-plugin/1.1/en/… – João Teixeira, Jan 10, 2020 at 12:32

If you can switch the ENGINE and use MyISAM instead of InnoDB, that should help:

ENGINE=MyISAM

There are two caveats with MyISAM (arguably more):

  • You can't use transactions.
  • You can't use foreign key constraints.
Fine if you don't mind going without transactions and foreign key constraints. But be aware that you lose these when switching to MyISAM. – wobbily_col, Aug 10, 2016 at 14:45

This advice is insane -- at least phrase it with all the caveats! Transactions and foreign key constraints are the least of the problems with this format! – Hassan Syed, Oct 6, 2016 at 19:13

Sorry, downvoted this answer. Please update to mention all the caveats involved. Anyone just following this advice switches to a format that I would never use on any production system. – Remco Wendt, Jan 18, 2017 at 13:11

Worked for me. In my case I had over 300 fields, each about 10 characters long. I do not have any relationships in this table, so switching to MyISAM was the fix. – Robert Saylor, Feb 28, 2019 at 12:28

    The DYNAMIC row format maintains the efficiency of storing the entire row in the index node if it fits (as do the COMPACT and REDUNDANT formats), but this new format avoids the problem of filling B-tree nodes with a large number of data bytes of long columns. The DYNAMIC format is based on the idea that if a portion of a long data value is stored off-page, it is usually most efficient to store all of the value off-page. With DYNAMIC format, shorter columns are likely to remain in the B-tree node, minimizing the number of overflow pages needed for any given row.

    ... but you need to have a different file format, so you are already on Barracuda? Because I get an error trying that command, saying Warnings from last query: InnoDB: ROW_FORMAT=DYNAMIC requires innodb_file_format > Antelope – Ted, Aug 6, 2018 at 15:36

    When I let HeidiSQL run the same command via its built-in GUI, it somehow succeeded, but it didn't help at all; I get the same error, Row size too large (> 8126). – Ted, Aug 6, 2018 at 15:38

    Try setting innodb_file_format = Barracuda in my.ini, then try ALTER TABLE my_table ROW_FORMAT=DYNAMIC; again. – multimediaxp, Aug 7, 2018 at 18:23

    Yeah, I think that's what I did in the end, and I think that resolved it. But I also changed a lot of columns to TEXT at the same time in desperation =) Thanks though, I actually think that was it. – Ted, Aug 7, 2018 at 18:59

    Good! If you think this is a good answer, give it an upvote and accept it as the valid response, thanks! – multimediaxp, Aug 16, 2018 at 0:24

    I would like to share an awesome answer; it might be helpful. Credits to Bill Karwin, see here: https://dba.stackexchange.com/questions/6598/innodb-create-table-error-row-size-too-large

    The limits vary by InnoDB file format. At present there are two formats, called Antelope and Barracuda.

    The central tablespace file (ibdata1) is always in Antelope format. If you use file-per-table, you can make the individual files use Barracuda format by setting innodb_file_format=Barracuda in my.cnf.

    Basic points:

  • One 16KB page of InnoDB data must hold at least two rows of data. Plus each page has a header and a footer containing page checksums and log sequence number and so on. That's where you get your limit of a bit less than 8KB per row.

  • Fixed-size data types like INTEGER, DATE, FLOAT, CHAR are stored on this primary data page and count toward the row size limit.

  • Variable-sized data types like VARCHAR, TEXT, BLOB are stored on overflow pages, so they don't count fully toward the row size limit. In Antelope, up to 768 bytes of such columns are stored on the primary data page in addition to being stored on the overflow page. Barracuda supports a dynamic row format, so it may store only a 20-byte pointer on the primary data page.

  • Variable-size data types are also prefixed with 1 or more bytes to encode the length. And InnoDB row format also has an array of field offsets. So there's an internal structure more or less documented in their wiki.

  • Barracuda also supports a ROW_FORMAT=COMPRESSED to gain further storage efficiency for overflow data.

    I also have to comment that I've never seen a well-designed table exceed the row size limit. It's a strong "code smell" that you're violating the repeating groups condition of First Normal Form.
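    Either way, it is worth checking which row format a table is actually using before and after any change. A quick sanity check (`my_table` is a placeholder):

    ```sql
    -- The Row_format column shows Compact, Redundant, Dynamic, or Compressed:
    SHOW TABLE STATUS LIKE 'my_table';

    -- The same information via information_schema, filtered to the
    -- current database:
    SELECT TABLE_NAME, ENGINE, ROW_FORMAT
    FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = DATABASE()
      AND TABLE_NAME = 'my_table';
    ```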

    You need to make some changes to the my.ini file.

    Add this under [mysqld]

    innodb_strict_mode=0
    

    Update These two lines

    innodb_log_file_size=256M
    innodb_log_buffer_size=256M
    

    innodb_strict_mode: When it is enabled, certain InnoDB warnings become errors instead.

    Reference: https://mariadb.com/kb/en/innodb-strict-mode/

    innodb_log_file_size and innodb_log_buffer_size need to be increased in size.

    It solved my issue on my local machine. I have an InnoDB database, and the table has 119 fields; I got this issue when I tried to create a new, 120th field. Your solution solved my issue for now. Thanks Soubhagya. – Kamlesh, Mar 18 at 12:14

    After spending hours I have found the solution: just run the following SQL in your MySQL admin to convert the table to MyISAM:

    USE db_name;
    ALTER TABLE table_name ENGINE=MYISAM;
    You are going to lose all the advantages InnoDB has over MyISAM, like transactions and constraints, except that MyISAM is maybe faster (you can still increase the key cache to overcome that). – Roland, Jun 19, 2022 at 20:49
    

    The maximum row size for an InnoDB table, which applies to data stored locally within a database page, is slightly less than half a page for 4KB, 8KB, 16KB, and 32KB innodb_page_size settings.

    For 16 KB pages (the default), we can calculate:

    8126 bytes (slightly less than half a page) / 767 bytes (overflow threshold per field) ≈ 10.59 fields of 767 bytes maximum
    

    Basically, you could max out a row with:

  • 11 VARCHAR fields of > 767 characters (latin1 = 1 byte per char), or
  • 11 VARCHAR fields of > 255 characters (utf8 on MySQL = 3 bytes per char).
  • Remember, a column only overflows to an overflow page if it is > 767 bytes. Too many columns just under 767 bytes will still bust the limit (pass beyond the maximum row size). This is unusual with latin1 but very possible with utf8 if the developers aren't careful.

    For this case, I think you could possibly bump the innodb_page_size to 32kb.

    in my.cnf:

    innodb_page_size=32K
    

    References:

  • https://dev.mysql.com/doc/refman/8.0/en/column-count-limit.html#row-size-limits
  • https://medium.com/@adamhooper/in-mysql-never-use-utf8-use-utf8mb4-11761243e434
  • https://mariadb.com/kb/en/library/innodb-compact-row-format/#overflow-pages-with-the-compact-row-format
  • Innodb page size setting
  • Very good info. I would suggest adding this link for a first read; it explains the column size limit, especially discussing the utf8mb4 character set: dev.mysql.com/doc/refman/5.7/en/charset-unicode-conversion.html – Abdul Rehman, Jul 16, 2021 at 7:59

    I am using MySQL 5.6 on AWS RDS. I updated the following in the parameter group:

    innodb_file_per_table=1
    innodb_file_format = Barracuda
    

    I had to reboot DB instance for parameter group changes to be in effect.

    Also, ROW_FORMAT=COMPRESSED was not supported, so I used DYNAMIC as below and it worked fine:

    ALTER TABLE nombre_tabla ENGINE=InnoDB ROW_FORMAT=DYNAMIC KEY_BLOCK_SIZE=8
    

    I ran into this issue when I was trying to restore a backed-up MySQL database from a different server. What solved it for me was adding certain settings to my.cnf (as in the answers above) and additionally changing the SQL backup file:

    Step 1: add or edit the following lines in my.cnf:

    innodb_page_size=32K
    innodb_file_format=Barracuda
    innodb_file_per_table=1
    

    Step 2: add ROW_FORMAT=DYNAMIC to the CREATE TABLE statement in the SQL backup file for the table that is causing this error:

    DROP TABLE IF EXISTS `problematic_table`;
    /*!40101 SET @saved_cs_client = @@character_set_client */;
    /*!40101 SET character_set_client = utf8 */;
    CREATE TABLE `problematic_table` (
    `id` bigint(20) NOT NULL AUTO_INCREMENT,
    PRIMARY KEY (`id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=3 ROW_FORMAT=DYNAMIC;
    

    The important change above is ROW_FORMAT=DYNAMIC (it was not included in the original SQL backup file).

    source that helped me to resolve this issue: MariaDB and InnoDB MySQL Row size too large

    The lines described in step 1 should not have any spaces. The line innodb_page_size=32K didn't work for me, since it caused an error restarting MariaDB 10.2. Instead, I added the line innodb_strict_mode=0, as described here. – Pathros, Mar 16, 2018 at 17:12

    Thank you for the feedback, Pathros; I updated my answer and removed the spaces from step 1. – matyas, Mar 24, 2018 at 13:10

    Note for MySQL <= 5.7.5: 32K is not a valid option for innodb_page_size. See: dev.mysql.com/doc/refman/5.6/en/… – Dub Nazar, Dec 11, 2018 at 21:25

    Also, you must recreate your whole MySQL server from scratch, as changing innodb_page_size requires ibdata* recreation, which is a destructive operation. Watch your syslog for more information. – Matija Nalis, May 30, 2019 at 17:52

    The other answers address the question asked. I will address the underlying cause: poor schema design.

    Do not splay an array across columns. Here you have 3*10 columns that should be turned into 10 rows of 3 columns in a new table (plus id, etc)

    Your Main table would have only

    id  int(11) No       
    name    text    No       
    date    date    No       
    time    time    No       
    schedule    int(11) No       
    category    int(11) No       
    status  int(11) No       
    admin_id    int(11) No 
    

    Your extra table (Top) would have

    id  int(11) No          -- for joining to Main
    seq TINYINT UNSIGNED    -- containing 1..10
    img   varchar(255)    No       
    title varchar(255)    No       
    desc  text    No    
    PRIMARY KEY(id, seq)    -- so you can easily find the 10 top_titles
    

    There would be 10 (or fewer? or more?) rows in Top for each id.

    This eliminates your original problem, and cleans up the schema. (This is not "normalization", as debated in some of the Comments.)

    Do not switch to MyISAM; it is going away.
    Don't worry about ROW_FORMAT.

    You will need to change your code to do the JOIN and to handle multiple rows instead of multiple columns.
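    A minimal sketch of that redesign in SQL, using the column lists from this answer (table and constraint names are illustrative):

    ```sql
    -- Main keeps only the non-repeating columns:
    CREATE TABLE Main (
        id        INT NOT NULL AUTO_INCREMENT,
        name      TEXT NOT NULL,
        date      DATE NOT NULL,
        time      TIME NOT NULL,
        schedule  INT NOT NULL,
        category  INT NOT NULL,
        status    INT NOT NULL,
        admin_id  INT NOT NULL,
        PRIMARY KEY (id)
    ) ENGINE=InnoDB;

    -- Top holds one row per (id, seq) instead of 30 splayed columns:
    CREATE TABLE Top (
        id     INT NOT NULL,               -- joins to Main.id
        seq    TINYINT UNSIGNED NOT NULL,  -- 1..10
        img    VARCHAR(255) NOT NULL,
        title  VARCHAR(255) NOT NULL,
        `desc` TEXT NOT NULL,              -- DESC is a reserved word, hence backticks
        PRIMARY KEY (id, seq)
    ) ENGINE=InnoDB;

    -- Fetching one entry with its top items then becomes a JOIN:
    SELECT m.name, t.seq, t.title, t.`desc`
    FROM Main AS m
    JOIN Top  AS t ON t.id = m.id
    WHERE m.id = 1
    ORDER BY t.seq;
    ```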

    I keep running into this issue when I destroy my Laravel Homestead (Vagrant) box and start fresh.

  • SSH into the box from the command line: homestead ssh

  • Open the my.cnf file: sudo vi /etc/mysql/my.cnf

  • Add the lines below to the bottom of the file (below the !includedir):

    [mysqld]
    innodb_log_file_size=512M
    innodb_strict_mode=0

  • Save the changes to my.cnf, then restart MySQL: sudo service mysql restart

    I also encountered the same problem. I solved it by executing the following SQL (note the TABLE keyword; ${table} stands for your table name):

    ALTER TABLE ${table} ROW_FORMAT=COMPRESSED;
    

    But I think you should know about row storage.
    There are two kinds of columns: variable-length columns (such as the VARCHAR, VARBINARY, BLOB, and TEXT types) and fixed-length columns. They are stored in different types of pages.

    Variable-length columns are an exception to this rule. Columns such as BLOB and VARCHAR that are too long to fit on a B-tree page are stored on separately allocated disk pages called overflow pages. We call such columns off-page columns. The values of these columns are stored in singly-linked lists of overflow pages, and each such column has its own list of one or more overflow pages. In some cases, all or a prefix of the long column value is stored in the B-tree, to avoid wasting storage and eliminating the need to read a separate page.

    and when purpose of setting ROW_FORMAT is

    When a table is created with ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED, InnoDB can store long variable-length column values (for VARCHAR, VARBINARY, and BLOB and TEXT types) fully off-page, with the clustered index record containing only a 20-byte pointer to the overflow page.

    Wanna know more about DYNAMIC and COMPRESSED Row Formats

    If this occurs on a SELECT with many columns, the cause can be that MySQL is creating a temporary table. If this table is too large to fit in memory, MySQL uses its default on-disk temp table engine, which is InnoDB, to store it on disk. In this case the InnoDB size limits apply.

    You then have four options:

  • change the InnoDB row size limit as described in another answer, which requires reinitialization of the server;
  • change your query to include fewer columns, or avoid causing it to create a temporary table (e.g. by removing ORDER BY and LIMIT clauses);
  • change max_heap_table_size to be large enough that the result fits in memory and does not need to be written to disk;
  • change the default on-disk temp table engine to MyISAM; this is what I did. In my.cnf:

    internal_tmp_disk_storage_engine=MYISAM
    

    Restart MySQL, and the query works.
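    The third option can be tried per session before touching my.cnf. A sketch with illustrative sizes (note that the in-memory temp table limit is the smaller of tmp_table_size and max_heap_table_size, so both must be raised):

    ```sql
    SET SESSION tmp_table_size      = 256 * 1024 * 1024;
    SET SESSION max_heap_table_size = 256 * 1024 * 1024;

    -- Re-run the offending SELECT, then check whether it still
    -- spilled to disk:
    SHOW SESSION STATUS LIKE 'Created_tmp_disk_tables';
    ```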

    Here is a simple tip for anyone interested:

    After upgrade from Debian 9 to Debian 10 with 10.3.17-MariaDB, I have some errors from Joomla databases:

    [Warning] InnoDB: Cannot add field field in table database.table because after adding it, the row size is 8742 which is greater than maximum allowed size (8126) for a record on index leaf page.

    Just in case, I set innodb_default_row_format = DYNAMIC in /etc/mysql/mariadb.conf.d/50-server.cnf (it was the default anyway).

    Then I used phpMyAdmin to run "Optimize table" for all the tables in the Joomla database. I think the table recreation that phpMyAdmin does in the process helped. If you happen to have phpMyAdmin installed, it is just a few clicks.
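    Without phpMyAdmin, the same rebuild can be done directly in SQL (`my_table` is a placeholder; for InnoDB, OPTIMIZE TABLE is implemented as a full table recreation, which is what picks up the new default row format):

    ```sql
    -- Make new tables default to DYNAMIC (MariaDB 10.2+ / MySQL 5.7+):
    SET GLOBAL innodb_default_row_format = 'DYNAMIC';

    -- Rebuild an existing table; for InnoDB, this recreates the table
    -- and rebuilds its indexes:
    OPTIMIZE TABLE my_table;

    -- Equivalent explicit rebuild:
    ALTER TABLE my_table ROW_FORMAT=DYNAMIC;
    ```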

    If you are a Joomla user, come join us at Joomla Stack Exchange -- we can always use new contributors. – mickmackusa, Sep 30, 2019 at 13:36

    I tried this and got this error: "ERROR: Table storage engine 'InnoDB' does not support the create option 'KEY_BLOCK_SIZE'" – ttemple, Jun 7, 2022 at 16:24

    For MariaDB, this issue is solved by increasing the value from 16K to 32K in the file my.ini:

    innodb_page_size=32K
    

    which in my case is located at D:\wamp\bin\mariadb\mariadb10.6.5.

    Note that this answer was published later than this one: stackoverflow.com/a/58654840/3451846, which is somewhat more complete as far as I can see. – Valerio Bozz, Jan 24 at 18:35