Ticket Hash: 0f4e961888a81913a271000a28152e4d19b570f1
Title: Slow Memory Leak
Status: Closed
Type: Code_Defect
Severity: Important
Priority: High
Subsystem: Resource_Cleanup
Resolution: Unable_To_Reproduce
Last Modified: 2012-01-21 04:37:37
Version Found In: 1.0.74.0 to 1.0.77.0
Description:
I've got a long-running application which uses SQLite for its configuration database and for logging.

A new SQLite instance is opened at the beginning of the application; the application then loops for eternity, doing some pretty basic stuff and writing the output to the SQLite database. This is an extremely slow memory leak: over one day where it's forced to do more work than it does on the server, the private bytes have climbed from 30 MB to 40 MB, while the bytes in all heaps have remained constant. A server which does quite a lot of work with this app takes just under one week to go from less than 30 MB to more than 90 MB. We noticed this because the service eventually started throwing errors (but didn't completely crash) after it had grown to approximately 1 GB in size. So when trying to replicate this, you're going to have to run it for a long time before any pattern becomes evident.

Here is an example of the write method used:

var insert = new Dictionary<string, object>();
insert.Add("type", description);
insert.Add("location", location);
insert.Add("message", message);
this.Sqlite.Insert("logs", insert);

This insert automatically adds a timestamp to the record and auto-increments an id, too. (Not sure if that's relevant, but that's all done automatically in SQLite.)

The only reading that it does is to check that the table exists, for example:

this.Sqlite.ExecuteNonQuery("SELECT type, location, message FROM logs LIMIT 1;");

One of these instances writes out a byte[] which can contain up to something like 5 KB, but usually only contains hundreds of bytes. The rest of the inserted values are either integers or strings. The reason SQLite was used was the analysis we often need to do on the output, while keeping it simple and portable. However, we're now considering moving it to a proper SQL server.

This bug was introduced sometime between 1.0.66.0 (originally hosted on phxsoftware.com) and 1.0.74.0 (which is hosted by you, and released on July 4, 2011).
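Since the private bytes grow while the bytes in all heaps stay flat, one plausible suspect is undisposed native resources: in System.Data.SQLite, SQLiteCommand (and SQLiteDataReader) wrap native statement handles that are only released promptly when disposed. As a purely illustrative sketch — the this.Sqlite wrapper isn't shown in the report, so the helper below writes the same insert directly against System.Data.SQLite, and the InsertLog name and connection parameter are assumptions:

```csharp
using System.Data.SQLite;

// Illustrative sketch only: the "logs" insert from the report written directly
// against System.Data.SQLite, with explicit disposal of the command object.
// Table and column names come from the report; everything else is assumed.
static void InsertLog(SQLiteConnection connection, string description,
                      string location, string message)
{
    // `using` disposes the SQLiteCommand deterministically, releasing its
    // native statement handle instead of leaving it to the finalizer queue.
    using (var command = new SQLiteCommand(
        "INSERT INTO logs (type, location, message) VALUES (@type, @location, @message);",
        connection))
    {
        command.Parameters.AddWithValue("@type", description);
        command.Parameters.AddWithValue("@location", location);
        command.Parameters.AddWithValue("@message", message);
        command.ExecuteNonQuery();
    }
}
```

If a wrapper builds commands without disposing them, each call can hold a small amount of unmanaged memory until finalization, which would match the slow private-bytes growth with a constant managed heap described above.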
I have been testing this with the current version (1.0.77.0) for the past week, and it has the same problem. Should I be calling some flush procedure? Have I structured my insert incorrectly? Should I be creating and destroying these connections regularly, instead of making one and keeping it open? Thanks.

mistachkin added on 2011-12-07 22:19:50 UTC:

mistachkin added on 2011-12-16 05:05:50 UTC:
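On the connection-lifetime question raised in the report: keeping one connection open for the process lifetime is a valid pattern, but as an alternative sketch, System.Data.SQLite's connection pooling (the Pooling connection-string keyword) makes open-per-operation cheap, because Dispose returns the native connection to the pool rather than closing the database file. The "app.db" file name and the inserted values here are illustrative, not from the report:

```csharp
using System.Data.SQLite;

// Alternative sketch: open the connection per write and let the pool reuse
// the underlying native handle. "app.db" and the values are illustrative.
using (var connection = new SQLiteConnection("Data Source=app.db;Pooling=True;"))
{
    connection.Open();
    using (var command = new SQLiteCommand(
        "INSERT INTO logs (type, location, message) VALUES (@type, @location, @message);",
        connection))
    {
        command.Parameters.AddWithValue("@type", "info");
        command.Parameters.AddWithValue("@location", "startup");
        command.Parameters.AddWithValue("@message", "example");
        command.ExecuteNonQuery();
    }
} // Dispose returns the pooled connection instead of closing the file.
```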