Controlling the cache size when using the database as a cache backend in Magento

If you have been using the database as your cache storage mechanism (e.g. via the <slow_backend> setting) and you have a large number of SKUs, you will start to see the database grow considerably in size.

A customer of ours with around 160k SKUs had a "core_cache" table that had grown to just under 10 GB and around 960k rows - causing difficulties during some operations due to table locks. Whilst it is safe to empty the table, truncating a 10 GB table takes quite a long time, and all the while it is locked, causing connections to queue up and eventually max out the connections available to your MySQL user (or in total) - not good.
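To see whether your own installation is heading the same way, the table statistics can be queried from information_schema (bear in mind that InnoDB row counts and sizes are estimates, so treat the figures as a guide rather than exact values):

SELECT table_name,
       table_rows,
       ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
FROM information_schema.tables
WHERE table_schema = DATABASE()
  AND table_name IN ('core_cache', 'core_cache_tag');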

To make it a little more manageable, we quickly added a little intelligence to the table so that it can "self-rotate". This could be applied in the PHP itself, but for now it was quicker to address the issue directly in the DB.

Moving the old tables out of the way was essential to begin with:

CREATE TABLE `new_core_cache` LIKE `core_cache`;
RENAME TABLE `core_cache` TO `old_core_cache`, `new_core_cache` TO `core_cache`;

CREATE TABLE `new_core_cache_tag` LIKE `core_cache_tag`;
RENAME TABLE `core_cache_tag` TO `old_core_cache_tag`, `new_core_cache_tag` TO `core_cache_tag`;

ALTER TABLE `core_cache_tag` ADD FOREIGN KEY (`cache_id`) REFERENCES `core_cache` (`id`) ON DELETE CASCADE ON UPDATE CASCADE;

This then left us with the two old tables, which could be safely dropped in an instant:

DROP TABLE `old_core_cache_tag`;
DROP TABLE `old_core_cache`;

Looking at the number of rows and volume of data, a safe average was 1 GB of space consumed for every 100,000 records stored. To make the DB a little more manageable, we wanted to add a 100,000-row ceiling to the table (we could also try to cap by size, but InnoDB cannot report an accurate size). The cap was put in place using a MySQL trigger: whenever the table exceeds 100,000 records, it rotates off the rows due to expire soonest to make room for new ones.

DELIMITER $$
CREATE TRIGGER `after_insert_core_cache_tag` AFTER INSERT ON `core_cache_tag`
FOR EACH ROW
BEGIN
  DECLARE num_rows INT;
  SELECT COUNT(id) INTO num_rows FROM core_cache;
  IF num_rows > 100000 THEN
    DELETE FROM core_cache
    WHERE expire_time > 0
    ORDER BY expire_time ASC
    LIMIT 1000;
  END IF;
END$$
DELIMITER ;

There are probably more efficient methods than this, but as a temporary fix it should keep things in check.
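Once the trigger is in place, a quick sanity check is to confirm it is registered and then watch the row count hover around the ceiling:

SHOW TRIGGERS LIKE 'core_cache_tag';
SELECT COUNT(*) FROM core_cache;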

It may also be worth commenting out or modifying any thread_stack definitions in your MySQL configuration if you run into errors when inserting rows.
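For reference, the setting lives under the [mysqld] section of my.cnf; the value below is only an illustration (defaults are typically 192K-256K depending on the build), so tune it to your own server:

[mysqld]
# Raise the per-thread stack if INSERTs start failing with
# "Thread stack overrun" errors once the trigger is active
thread_stack = 512K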

  • Great tip. We had this problem ourselves and I created the trigger you suggested. It works like a charm. However, there is a lot more content stored in the core_cache_tag table which is not affected by the trigger.

    What do you think about doing the same thing for the core_cache_tag table, or would that corrupt the database?



  • Kiran Bhimanwar

    I know this is an old post, but I just wanted to confirm which Magento version you used to test this. It seems we cannot set a foreign key on the core_cache_tag table, as Magento inserts some entries into it that are not in the core_cache table, so after applying the foreign key it gives an error which says

    "SQLSTATE[23000]: Integrity constraint violation: 1452 Cannot add or update a child row: a foreign key constraint fails (`core_cache_tag`, CONSTRAINT `core_cache_tag_ibfk_1` FOREIGN KEY (`cache_id`) REFERENCES `core_cache` (`id`) ON DELETE CASCADE ON UPDATE CASCADE)"

    Do you have any idea?

    • sonassi

      The solution is never to use the DB as a cache store. There are much better alternatives.

      • Kiran Bhimanwar

        Hi Sonassi,

        Thank you for the reply. But just to let you know, we are using Memcache as the fast cache and the database as the slow cache. Magento allows only two types of slow cache: database and files. We cannot use files as the slow cache because we are using a load balancer, so we chose the DB as the slow cache.

        • sonassi

          @kiranbhimanwar:disqus Ideally, you shouldn't be using two level caching with Memcache at all. ESPECIALLY with the DB. MySQL isn't a suitable backend for this use and Magento's implementation of 2 level caching with Memcache was a failure. You should be using Redis or another suitable alternative if you need 2 level caching.

          Any competent Magento host would be able to guide you with this though - you shouldn't be left on your own to figure out your store's server-side architecture. Have you considered a specialist provider?

          • Kiran Bhimanwar

            Hello Sonassi,

            Thank you for the updates. Yes, I am aware of Redis, and Magento has been promoting Redis since 1.13. However, PHP does not have built-in libraries for Redis support. We need to rely on third-party libraries, which I believe are not stable yet.

            I do not want to modify the whole process to introduce a new cache layer. I just wanted to fix the issue with the size of the core_cache_tag table, and I am not sure what purpose Magento has for this table. At this point I think removing the slow backend cache is the simpler solution.

            Thank you for your help though.

          • sonassi

            The whole FOSS world is reliant on 3rd parties. Nginx, Apache, PHP, Varnish - all the software you develop with is built by the world at large, not a single organisation. To rule out a product because it isn't served through the PECL library is an odd justification.

            Not to mention that Magento EE's support of Redis is wholly copied from Colin Mollenhour's Cm_Redis implementation - they haven't developed it themselves, a third party did.

            Given the choice between the Magento core implementation of Zend's two-level caching or the alternatives, the alternatives will always win out. Magento/Memcache is broken with 2-level caching, Magento/Files suffers from catastrophic cache swelling failure, and Magento/DB has no control/clean-up, causes needless locking and creates more problems than any other cache back-end. Magento/Redis - just works.

            When we speak, it's from 5 years of building and hosting *large scale* Magento stores; not just advice on a whim.

          • Kiran Bhimanwar

            Hello Sonassi,

            I agree with you that the world is reliant on 3rd parties. But the point I want to make here is that the third-party extension phpredis is very buggy.

            I hope you have read Colin Mollenhour's blog on Redis. He is the one who says the phpredis module is buggy. Please see the comment below from him:

            "Driver bugs.. AFAIK the Cm_Cache_Backend_Redis plays well with db numbers besides 0 but the phpredis driver is actually pretty buggy and especially old versions have problems where if the connection is lost the driver will reconnect but when it reconnects it will not select the last used database and just default to 0. So for example if your FPC is 0 and the sessions are in 1 and the connection reconnects you may end up with sessions in your FPC database. These are silly bugs to have but the safest thing to do is run a separate process for each and always use db 0."

            This is the reason I was trying to explain why I don't want to rely on a 3rd party.

            Moreover, the site I am working on is on version 1.10, and changing the cache engine and implementing a new cache is not just a day's job. We will have to do QA. Switching to a new technology is good, but you cannot switch like that when you encounter an issue. First we have to fix the error we are currently getting with the DB as the slow cache, and then we will plan to upgrade to a better cache engine.

            Thanks for your help so far.

          • sonassi

            Kiran, implementation of a new cache back end isn't even minutes' work, let alone days. You've picked one very specific case, which isn't even valid now - nor a particular blocking issue to begin with. As Colin suggested, you have many workarounds for a bug (that doesn't even exist any more).

            All software has bugs; the latest Memcache stable in the PECL repo suffers from a number of major bugs, forcing users to use the beta.

            As the UK's largest specialist Magento hosting provider, we are very aware of the options available and the limitations of each. When we speak, it's with 5 years' experience of building and hosting Magento stores - and as the authors of the most high-performance, most scalable Magento operating system available.

            I'm not giving you an unqualified opinion, I'm trying to impart knowledge to prevent you having further problems with your store. Using the DB as a second level cache will inevitably lead to catastrophic performance problems and the ultimate unreliability of your store.

            Out of all the caching methods available for Magento, single-level Memcache and two-level Redis are the **only** recommendations we would provide.