flash760

inodes limit exceeded


The cache seems to have levelled off at around 35,000 objects at the moment; I'll keep an eye on it and report back later.



Just now, havenswift-hosting said:

@Noodleman can you open a GitHub issue and document the changes you made so these can get included in the next release?

Don't worry I'm on this. Just waiting for an update. 

5 minutes ago, Al Brookbanks said:

Don't worry I'm on this. Just waiting for an update. 

The cache got cleared by the admin today before I had a chance to review the overall totals. Since yesterday it managed to write 34 MB of data to the log file, which came to roughly 200,000 items written to cache.

Using a crude "pick a random line and search for it" technique, I can see that cached content is being duplicated, but I can't rule out that this was due to the clear-cache button being used at the time.

I'm going to need more time to monitor and review. 

 

 


OK, thanks so much. Looking at the logic of the code, it does appear to write everything, so it looks to me like we have found the problem.

23 hours ago, Noodleman said:

 I've also wrapped the log write with the same validation.

@Al Brookbanks the changes you have made fix the issue with database writes, but what about this one?

2 minutes ago, havenswift-hosting said:

@Al Brookbanks the changes you have made fix the issue with database writes, but what about this one?

That's purely for debugging on my test instance :)


I'm still not 100% convinced this is the only issue; either that, or I have simply vastly underestimated the amount of content the cache will generate.

From checking the cache directory this evening, I can see almost 60,000 files. I've sorted these by date/time and the earliest timestamp is 09:04 AM today, so we can conclude that the cache was cleared around that time.
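For reference, a quick way to reproduce that check is a short PHP script along these lines (the cache path and the *.cache pattern are placeholders for this sketch, not CubeCart's guaranteed layout; point it at your store's actual /cache/ directory):

<?php
// Count the cache files and report the earliest modification time.
// The path and '*.cache' pattern are placeholders for this sketch.
$files = glob('/path/to/store/cache/*.cache');
if (empty($files)) {
    exit("No cache files found\n");
}
$times = array_map('filemtime', $files);
printf("%d files, earliest written %s\n", count($files), date('Y-m-d H:i:s', min($times)));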

I've searched for duplicate queries by picking random lines from the log file and searching for the same string (thank you, Notepad++, for being amazing).

I'm finding duplicates in the log with timestamps after the cache-clear time. Here is an example:

[Screenshot: duplicate entries for the same query, timestamped after the cache clear]

It's possible this is legitimate, but it raises the question: shouldn't this only be cached once? It's being re-cached.

I'm assuming this is because the cached object expired and so was re-cached. However, in this situation, do the OLD files associated with the old cached object get removed when the new cached item is created?

 


A cached SQL result has a lifetime of 24 hours.

Of the 60K files in /cache/, does that count match the number of entries in your log file? (A stupid question, I know, but I'm just ironing out a bit of confusion.)

Is it possible to have your editor convert a selected string to MD5? If CubeCart comes up with a different MD5 for two otherwise identical-looking queries (used to get the cached file's filename), then the queries can't be identical.
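If the editor can't do that, a minimal PHP sketch along these lines (the queries are placeholders, not taken from the log) will show whether two seemingly identical queries map to the same cache filename, since the cache key is built from 'sql.' plus the MD5 of the query string:

<?php
// Compare two query strings the way the cache key is derived:
// 'sql.' followed by the MD5 of the raw query. Placeholder queries below.
$query_a = "SELECT * FROM `CubeCart_inventory` WHERE `product_id` = 1;";
$query_b = "SELECT * FROM `CubeCart_inventory` WHERE `product_id` = 1;";

echo 'sql.' . md5($query_a) . "\n";
echo 'sql.' . md5($query_b) . "\n";
// Matching hashes mean one shared cache file; any invisible difference
// (whitespace, case, parameter values) produces a different hash and file.
echo (md5($query_a) === md5($query_b)) ? "Hashes match\n" : "Hashes differ\n";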


In your log file, you may see two or more identical queries. If your log file is being sourced from https://github.com/cubecart/v6/blob/6.2.0-branch/classes/db/mysqli.class.php#L224, then you are logging too early: that point only records the intent to write to the cache. In DB->_writeCache(), a call is made to Cache->write(), and in Cache->write(), PHP's file_put_contents() will overwrite an existing filename with the new contents.
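To illustrate (a standalone sketch, not CubeCart code): repeated writes to the same cache key leave exactly one file behind, so duplicate writes cost I/O rather than inodes:

<?php
// Standalone illustration: writing the same cache key repeatedly produces
// one file that is overwritten each time. The directory and '.cache'
// extension are placeholders for this demo.
$dir = sys_get_temp_dir() . '/cache_demo';
if (!is_dir($dir)) {
    mkdir($dir);
}
$key  = 'sql.' . md5("SELECT 1;");   // same query => same key => same filename
$file = $dir . '/' . $key . '.cache';

for ($i = 0; $i < 5; $i++) {
    file_put_contents($file, serialize(array('time' => time(), 'data' => $i)));
}

// Five writes, but still only one file for this key.
echo count(glob($dir . '/' . $key . '*')) . " file(s) for this key\n";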

14 hours ago, Noodleman said:

I'm finding duplicates in the log with timestamps after the cache-clear time. Here is an example:

[Screenshot: duplicate entries for the same query, timestamped after the cache clear]

It's possible this is legitimate, but it raises the question: shouldn't this only be cached once? It's being re-cached.

I'm assuming this is because the cached object expired and so was re-cached. However, in this situation, do the OLD files associated with the old cached object get removed when the new cached item is created?

That certainly appears to be exactly the same SQL query, in which case CubeCart should use the first cached entry and not re-create it. Regardless of the number of cache entries being created, this is very inefficient use of caching.

11 hours ago, bsmither said:

A cached SQL result has a lifetime of 24 hours.

Why would the cache be limited to 24 hours? I'm not sure there is any need for a fixed lifetime on cached entries; they should be persistent, especially as in a lot of stores the cache is cleared by the admin user fairly regularly anyway.
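For reference, here is a minimal sketch of how a timestamp-based expiry check typically works against the 'time' and 'expire' meta fields shown later in this thread; it is an illustration, not CubeCart's exact read path:

<?php
// Illustration of a timestamp-based expiry check using 'time' and 'expire'
// meta fields (see the file.class.php excerpt later in the thread).
function cache_is_fresh(array $meta)
{
    $age = time() - $meta['time'];   // seconds since the entry was written
    return $age < $meta['expire'];   // stale entries get re-queried and re-cached
}

$meta = array('time' => time() - 90000, 'expire' => 86400); // written ~25 hours ago, 24-hour lifetime
var_dump(cache_is_fresh($meta));     // bool(false) => the SQL runs again and the file is rewritten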

12 hours ago, bsmither said:

A cached SQL result has a lifetime of 24 hours.

Of the 60K files in /cache/, does that count match the number of entries in your log file? (A stupid question, I know, but I'm just ironing out a bit of confusion.)

Is it possible to have your editor convert a selected string to MD5? If CubeCart comes up with a different MD5 for two otherwise identical-looking queries (used to get the cached file's filename), then the queries can't be identical.

 

I'll make some changes and report back :)

I'll move the logging location and then also capture an MD5 hash of the query string. 


Guys, thanks for looking into this. I'm really tied up at the moment, but I'll get involved when I can.


"A cached query is limited to 24 hours."

In the main, static CubeCart data is permanent until knowingly and intentionally changed. But out there in the far reaches of edge cases, there could exist scenarios where database data would be changed without CubeCart's knowledge/control. (Somewhat like making changes to a hosting account's operating environment without telling the account holder.)

So, as some would argue, having the (somewhat) freshest whatever is a good thing.


I'll see if I can find an hour or two this afternoon to test the cache read/writes. Hopefully we can optimise it. 


UPDATE: I forgot to actually push the "submit" button on this post yesterday... DOH.

 

Reporting back: something is still not right, for sure. I've been writing data to the log all day; the cache has been cleared at least twice, but the number of actual writes doesn't add up. In 9 hours, 377,000 new file writes.

But I am seeing a lot of duplicate hashes being written. An example:

[Screenshot: log excerpt showing the same query hash written repeatedly]

Even if caching WAS working correctly and the cache was cleared, we should NOT have written the same item to the cache 8,442 times since this morning.

Most things in the log appear to be duplicated many thousands of times. Even assuming the overwriting of cache files works correctly, this is wrong and adds a lot of unnecessary I/O for the file cache.

Here is my log amendment for reference:

modified _writeCache function:

    protected function _writeCache($data, $query) {
        $query_hash = md5($query);
        if (isset($GLOBALS['cache']) && is_object($GLOBALS['cache'])) {

            // Debug logging: append a timestamp and the query hash for every cache write attempt
            $fp = fopen('query_log.txt', 'a+');
            fwrite($fp, time() . " ### " . $query_hash . "\r\n");
            fclose($fp);

            return $GLOBALS['cache']->write($data, 'sql.'.$query_hash);
        }
        return false;
    }

Maybe I did something wrong... but initial results suggest the cache is being written to more than it should be.

I'll need to check the write function to see if it does a check first; I won't have time until this evening.
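For reference, a throwaway helper to tally how many times each hash appears in a query_log.txt of the format written above (not part of CubeCart, just a quick analysis script):

<?php
// Count occurrences of each query hash in query_log.txt, where each line is
// "timestamp ### hash" as written by the modified _writeCache() above.
$counts = array();
foreach (file('query_log.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
    $parts = explode(' ### ', $line);
    if (count($parts) >= 2) {
        $hash = trim($parts[1]);
        $counts[$hash] = isset($counts[$hash]) ? $counts[$hash] + 1 : 1;
    }
}
arsort($counts); // most frequently written hashes first
foreach (array_slice($counts, 0, 10, true) as $hash => $n) {
    echo $n . "\t" . $hash . "\n";
}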

 

 

Edited by Noodleman


Issue #2051 suggested an edit. I made that change and it did calm the writes. I have the suspicion that, if you attempted this edit, it didn't stick.

Also, your log is recording the call to Cache->write(), so recall that even though you have 8,442 actual writes, there will only be one file with that name. You will find, however, that each write leaves an updated 'time' value.

I changed the following file to capture what this cache file represents:

file.class.php, function write();

		//Create the meta data for the file
		$meta = array(
			'time'  => time(),
			'expire' => (!empty($expire) && is_numeric($expire)) ? $expire : $this->_expire,
			'server_uri' => htmlentities($_SERVER['REQUEST_URI'], ENT_QUOTES, "UTF-8"),
			'query' => ($this->_empties_id!==$id) ? htmlentities(Database::getInstance()->getQuery(), ENT_QUOTES, "UTF-8") : "empties",
		);

You might add to your logging code:

$fp = fopen('query_log.txt', 'a+');
fwrite($fp, time() . " ### " . $query_hash . " ### " . htmlentities($_SERVER['REQUEST_URI'], ENT_QUOTES, "UTF-8") . "\r\n");
fclose($fp);

 

