
Noodleman

Member
  • Content Count: 683
  • Joined
  • Last visited
  • Days Won: 25

Everything posted by Noodleman

  1. That's how it is meant to work.
  2. Rebuild the sitemap, then download the compressed sitemap file from your store and validate its content. It's possible you hit a memory limit or timeout error when building the sitemap, so if the map is empty, cross-reference the PHP error logs for related information.
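     If it helps, a quick way to sanity-check the generated file is a throwaway script like the one below (rough sketch only; the sitemap.xml.gz filename is an assumption, adjust it to whatever your store actually produced):

         <?php
         $raw = file_get_contents('sitemap.xml.gz');
         if ($raw === false) die("Could not read sitemap.xml.gz\n");
         $xml_string = gzdecode($raw);                   // false if the gzip is broken/incomplete
         if ($xml_string === false) die("Not valid gzip - the build probably died part way through\n");
         $xml = simplexml_load_string($xml_string);      // false if the XML is truncated or invalid
         if ($xml === false) die("Sitemap XML is invalid or truncated\n");
         echo "Sitemap contains " . count($xml->url) . " URL entries\n";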
  3. Try this: https://www.cubecart.com/extensions/plugins/price-list-plugin
  4. UPDATE: I forgot to actually push the "submit" button on this post yesterday... DOH. Reporting back: something is still not right for sure. I've been writing data to the log all day and the cache has been cleared at least twice, but the number of actual writes doesn't add up. In 9 hours, 377,000 new file writes. I am also seeing a lot of duplicate hashes being written; an example of this is: even if caching WAS working correctly and the cache was cleared, we should NOT have written the same item to cache 8,442 times since this morning. Most things in the log appear to be duplicated many thousands of times. Assuming the overwriting of cache is working correctly, this is wrong and will add to the IO load of the file cache. Here is my log amendment for reference, the modified _writeCache function:

         protected function _writeCache($data, $query) {
             $query_hash = md5($query);
             if (isset($GLOBALS['cache']) && is_object($GLOBALS['cache'])) {
                 $fp = fopen('query_log.txt', 'a+');
                 fwrite($fp, time() . " ### " . $query_hash . "\r\n");
                 fclose($fp);
                 return $GLOBALS['cache']->write($data, 'sql.'.$query_hash);
             }
             return false;
         }

     Maybe I did something wrong, but the initial results suggest cache is being written more than it should. I'll need to check the write function to see if it does a check first; I won't have time until this evening.
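     For reference, this is the kind of guard I want to confirm is in the write path. Rough sketch only: the exists() call is my assumption, I haven't looked at the cache controller yet, so the real method name may well differ.

         protected function _writeCache($data, $query) {
             $query_hash = md5($query);
             if (isset($GLOBALS['cache']) && is_object($GLOBALS['cache'])) {
                 // Assumed guard: only hit the disk when nothing is cached for this hash yet.
                 if (!$GLOBALS['cache']->exists('sql.'.$query_hash)) {
                     return $GLOBALS['cache']->write($data, 'sql.'.$query_hash);
                 }
                 return true; // already cached, skip the write
             }
             return false;
         }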
  5. I'll make some changes and report back. I'll move the logging location and also capture an MD5 hash of the query string.
  6. I'm still not 100% convinced this is the only issue, or I have simply vastly underestimated the amount of content the cache will generate. From checking the cache directory this evening, I can see almost 60,000 files. I've sorted these by date/time and the earliest timestamp is 09:04 AM today, so we can conclude that the cache was cleared around that time. I've randomly searched for duplicate queries, simply by picking some lines from the log file at random and searching for the same string (thank you Notepad++ for being amazing), and I'm finding duplicates in the log with timestamps after the cache clear time. Here is an example: It's possible this is legit, but it raises the question: shouldn't this only be cached once? It's being re-cached. I'm assuming this is because the cached object expired and was therefore re-cached; however, in this situation, do the OLD files associated with the old cached object get removed when the new cached item is created?
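     Rather than eyeballing the log in Notepad++, a quick way to tally the duplicates is something like this (rough sketch; it assumes the "time ### hash" format from my modified _writeCache above and the 09:04 clear time):

         <?php
         $counts = array();
         foreach (file('query_log.txt', FILE_SKIP_EMPTY_LINES) as $line) {
             $line = rtrim($line);
             if (strpos($line, ' ### ') === false) continue;   // skip anything malformed
             list($time, $hash) = explode(' ### ', $line);
             if ((int)$time >= strtotime('today 09:04')) {     // only writes after the cache clear
                 $counts[$hash] = isset($counts[$hash]) ? $counts[$hash] + 1 : 1;
             }
         }
         arsort($counts);                                      // most duplicated hashes first
         print_r(array_slice($counts, 0, 20, true));           // top 20 offenders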
  7. That's purely for debug on my test instance
  8. The cache got cleared by the admin today before I had a chance to review the overall totals. Since yesterday it managed to write 34 MB of data to the log file, which came to (rounded) 200,000 items written to cache. From a crude "pick a random line and search for it" technique, I can see that cached content is being duplicated, BUT I can't rule out that this was due to the clear cache button being used at the time. I'm going to need more time to monitor and review.
  9. The cache seems to have levelled off at around 35,000 objects at the moment; I'll keep an eye on it and report back later.
  10. That's definitely helped. I've also wrapped the log write with the same validation, and I'm seeing a lot less cache content being created. I'll monitor for a while and update later. Thanks Al
  11. Just set this up and I immediately see a problem coming from some modules: SQL queries which have specifically been flagged NOT to cache are being written to cache.

         $GLOBALS['db']->query($sql, false, 0, false);

     Based on around 5 minutes of collecting data, I'm already seeing 3,500 of these non-cached queries being written to cache.
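     To illustrate what I'd expect to happen: one of those false flags is (as I understand it) the "don't cache this" switch, so the write path should honour it. A rough sketch of the behaviour I mean, not the actual CubeCart database class, and the parameter names are my guesses:

         // Sketch only: a query() that skips the cache write when the caller opted out.
         public function query($sql, $fetch = true, $page = 0, $cache = true) {
             $result = $this->_execute($sql);   // placeholder for the real query execution
             if ($cache) {
                 $this->_writeCache($result, $sql);
             }
             return $result;
         }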
  12. That seems reasonable. I'll probably go with a cache log. Would you mind giving a couple of pointers on the correct location for it? It would save me some trial and error.
  13. I'm sorting out the same problem this morning for somebody else. The cache has grown so large that CubeCart can't handle the deletion; it's throwing an out of memory error. To fix it, I've manually cleared the /cache/ directory on the server (I have full access to the server via SSH).
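     For anyone in the same spot without SSH access, something along these lines can clear the directory from a throwaway PHP script without loading the whole file list into memory. Rough sketch only; the path and the .cache extension are assumptions, so check what your install actually uses before running anything destructive.

         <?php
         $dir = __DIR__ . '/cache';                      // assumed path to the store's cache directory
         foreach (new DirectoryIterator($dir) as $file) {
             // Only remove files with the assumed .cache extension; adjust to match your install.
             if ($file->isFile() && $file->getExtension() === 'cache') {
                 unlink($file->getPathname());
             }
         }
         echo "Done\n";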
  14. I'm still of the opinion something isn't correct... a mid-size store with 3,500 products across an estimated 100 categories generated almost 700,000 cached objects in 48 hours. This makes the "clear cache" button useless, as it just can't clear this many files before throwing an error.
  15. Yes, but question 2 was whether it's responsible for the error, and as per my post... no. Disable the module and try to reproduce the issue again.
  16. It's a failure in the module SFWS_MailChimp. From the error, it's checking if the user is logged in but isn't able to find the required information to function. The error has been thrown from that module, so the developer needs to review it in more detail using your steps to reproduce. I've triggered these a few times in my own code, so I have seen it before. Without knowing the code being called, I can't give any more details.
  17. If you're technical, it's quite simple: add more columns to the CubeCart_inventory table to hold the data you require. The new fields are then available to your store theme as part of the Smarty $PRODUCT array. Then you just need to add those fields to the admin template (create a custom version). This can all be wrapped in a module to add the fields automatically.
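     Rough example of the first step (the column name is just an example, and adjust the table name if your store uses a non-standard prefix):

         // Example only: add a custom field to the inventory table.
         $GLOBALS['db']->query("ALTER TABLE `CubeCart_inventory` ADD COLUMN `custom_spec` VARCHAR(255) NULL");

     Once the column exists, {$PRODUCT.custom_spec} can be output in the product template, and a matching input needs adding to your custom admin template.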
  18. So I just did a basic test on my site, and it makes me think that the cache is per session (again, just a theory). If I go to my homepage in Chrome, the cache size increases by an additional 11 SQL cache files. If I reload the page, no more files appear. I then load the same page in Edge, and it goes and builds another set of 11 cache files. The only difference is the browser, and thus the session. The content is the same and should be using the cache built from the previous browser load. Over time, this will build up excessive cached items. Shouldn't the existing cached items be reused? This appears to only affect the SQL cache; the page cache is working OK.
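     To show why I think it's session related (illustration only, not an actual CubeCart query): if anything per-visitor ends up in the SQL text, the md5 used for the cache key can never match between two browsers.

         <?php
         // Two "identical" page loads from different sessions produce different cache keys
         // if the session id (or anything else per-visitor) is baked into the query string.
         $sql_chrome = "SELECT data FROM sessions WHERE session_id = 'abc123'";
         $sql_edge   = "SELECT data FROM sessions WHERE session_id = 'def456'";
         var_dump(md5($sql_chrome) === md5($sql_edge));   // bool(false) -> two separate cache files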
  19. I suspect that the frequent clearing of cache in earlier versions masked an issue, and the recent change has made that issue more noticeable. I've not dug into it and it's just a gut feeling / theory, but it ties in with some reports I've had as well.
  20. I had a similar issue recently, but it was at the OS level. Use some form of memory caching instead and you'll be fine. Alternatively, increase the configured inode limit; some hosting providers cap the number and won't allow it to be changed, in which case grab a VPS or managed VPS and configure it as needed. https://support.cubecart.com/Knowledgebase/Article/View/235/41/how-do-i-enable-apc-memcached-redis-or-xcache On the flip side, I have noted that there does appear to be a much larger cache being built than I would expect, which makes me suspect some kind of cache system issue. The longer you leave the cache uncleared, the worse it gets. I suspect new cache files are created in place of older ones, yet the older ones never clear, so over time things build up to excessive levels if not cleared frequently. I would assume that if the same page with the same content is cached multiple times, it should re-use the same cache file; I don't think it does this.
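     Before switching over, it's worth confirming which backend the host actually has available; a quick check from the CLI or a throwaway script will tell you:

         <?php
         // List which of the supported memory cache extensions are loaded on this server.
         foreach (array('apc', 'apcu', 'memcached', 'redis', 'xcache') as $ext) {
             echo $ext . ': ' . (extension_loaded($ext) ? 'available' : 'not installed') . "\n";
         }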
  21. No, it's your choice based on your business model. What does your business require?
  22. Not that I can recall, but custom ones can be added; perhaps that's where it came from.
  23. Check out robots.txt, it's what you will want.