Our Cassandra 1.0.9 production cluster is having a lot of out-of-memory crashes.
The JVM heap is configured to its highest recommended value (8 GB, according to the docs).
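For reference, this is roughly how the heap is sized in our `cassandra-env.sh` (the exact values shown here are illustrative):

```shell
# cassandra-env.sh (illustrative fragment) — pins the JVM heap at 8 GB,
# the maximum the docs recommend for our version.
MAX_HEAP_SIZE="8G"
# Young-generation size; sized separately from the total heap.
HEAP_NEWSIZE="800M"
```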
These failures occur when the application server writes a massive amount of data at once and then reads it back.
We're collecting column-family and other general Cassandra statistics, and the odd thing is that when the JVM reports it has hit the 8 GB limit, cfstats shows the total memory consumed by memtables is less than 1 GB.
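To be concrete, we sum the per-column-family memtable sizes roughly like this (the host is a placeholder; `Memtable Data Size` is the label cfstats prints in our version):

```shell
# Sum the reported "Memtable Data Size" (bytes) across all column families.
nodetool -h localhost cfstats | awk '/Memtable Data Size/ {sum += $NF} END {print sum}'
```

Even summed this way, the total stays well under 1 GB while the heap sits at its 8 GB limit.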
The question is, what consumes the other 7GB?
What other memory structures live inside the heap?
Could it be that it's just a matter of garbage-collection tuning?
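If it is GC-related, one thing we could do is enable GC logging via `JVM_OPTS` in `cassandra-env.sh` to see what the collector is doing before the crash (the log path here is just an example):

```shell
# cassandra-env.sh (illustrative) — standard HotSpot GC logging flags for
# the Java 6/7 JVMs that Cassandra 1.0.x runs on.
JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
```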
Thanks in advance,