Choosing which tables to store in memory
This section describes a strategy to guide you in choosing which tables to put in memory.
The main principle to consider is the density of access to the table. The higher the frequency of access, the higher the access “density”. Similarly, the larger the table, the lower the access density for a given number of accesses per second.
The access density is measured in units of accesses per megabyte per second, which is shown here as rows/MB/s. For simplicity, one access per row is assumed.
Example 1
If you have a 1 megabyte table, and you access 300 rows in a 10-second period, the density is 30 rows/MB/s = 300 rows / 1 MB / 10 seconds.
Example 2
If you have a 500 KB table and you access 300 rows per second, the access density is 600 rows/MB/s = 300 rows / 0.5 MB / second.
The table in the second example has a higher access density than the first, so if you can fit only one of these tables into memory, you should choose the second one.
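If you want to check the arithmetic, the following minimal sketch (in Python; the function name is ours, and the figures are taken directly from the two examples above) computes access density as rows accessed, divided by table size in MB, divided by the measurement interval in seconds:

    def access_density(rows_accessed, table_size_mb, interval_seconds):
        """Access density in rows/MB/s: rows accessed, per MB of table, per second."""
        return rows_accessed / table_size_mb / interval_seconds

    # Example 1: 1 MB table, 300 rows accessed over a 10-second period
    print(access_density(300, 1.0, 10))   # 30.0 rows/MB/s

    # Example 2: 500 KB (0.5 MB) table, 300 rows accessed per second
    print(access_density(300, 0.5, 1))    # 600.0 rows/MB/s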
You may want to take into account the number of bytes accessed each time. This is typically the average row size, although it may be different if you are using binary large objects, or if the server can find all the information that it needs by reading just an index rather than the entire table.
Because the server normally reads data from the disk in multiples of a “block” (where a block is typically 8 KB), factoring in the number of bytes per access or per row gives you only slightly more precise figures than the simpler formula. Whether you read a 10-byte row or a 2000-byte row, the server does approximately the same amount of work.
When estimating the size of a table, you must also take into account the size of any indexes on that table. Each index that you add increases the amount of data stored for that table. Furthermore, when you add a foreign key constraint to a table, the server creates an appropriate index (if one does not already exist) to speed up certain types of lookup operations on that table. When you calculate the size of your table in memory, you must therefore include the table itself, all its indexes, and all its BLOBs.
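A rough way to estimate that total footprint is to add together the base row data, an estimate for each index, and the BLOB data. The sketch below is illustrative only; the per-row, per-index-entry, and per-row BLOB figures are assumptions that you would replace with estimates for your own schema, and the helper function is not part of any solidDB interface:

    def estimated_footprint_mb(row_count, avg_row_bytes,
                               index_entry_bytes=(), avg_blob_bytes=0):
        """Rough in-memory footprint: base rows + every index + BLOB data, in MB."""
        table_bytes = row_count * avg_row_bytes
        index_bytes = sum(row_count * entry for entry in index_entry_bytes)
        blob_bytes = row_count * avg_blob_bytes
        return (table_bytes + index_bytes + blob_bytes) / (1024 * 1024)

    # Hypothetical table: 100,000 rows of about 200 bytes each, two indexes at
    # roughly 40 bytes per entry (including one created for a foreign key),
    # and an average of 1 KB of BLOB data per row.
    print(round(estimated_footprint_mb(100_000, 200, (40, 40), 1024), 1))   # ~124.4 MB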
Once you have calculated the access density of all your tables, rank them from highest to lowest. Starting with the table that has the highest density, work your way down the list, designating tables as in-memory tables until you have used up all of the available physical memory.
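That ranking step amounts to a greedy selection: sort the tables by access density and keep designating them as in-memory until the memory budget is exhausted. The following is a minimal sketch that assumes each table's density and total footprint (table plus indexes plus BLOBs) have already been estimated; the table names and figures are purely illustrative:

    def choose_in_memory_tables(tables, available_mb):
        """Greedy selection: highest access density first, skipping tables that
        no longer fit, until the memory budget is used up.

        tables: list of (name, density_rows_per_mb_per_s, footprint_mb) tuples.
        """
        chosen, used_mb = [], 0.0
        for name, _density, footprint_mb in sorted(tables, key=lambda t: t[1], reverse=True):
            if used_mb + footprint_mb <= available_mb:
                chosen.append(name)
                used_mb += footprint_mb
        return chosen

    # Illustrative figures: (name, density in rows/MB/s, footprint in MB)
    tables = [("orders", 600, 0.5), ("customers", 30, 1.0), ("audit_log", 2, 200.0)]
    print(choose_in_memory_tables(tables, available_mb=2.0))   # ['orders', 'customers']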
This description is simplified: it assumes that you have perfect information and that you can change a table from disk-based to in-memory (or vice versa) at any time. In practice, you may not know the total amount of free memory in your computer, and you might accidentally designate more in-memory tables than physical memory can hold. The result may be that tables are swapped to disk, which can substantially reduce performance. You also may not know how frequently each table is accessed until it contains a substantial amount of data, yet the solidDB® server requires that you designate a table as in-memory or disk-based when you create it, before you have put any data into it. Your calculations therefore have to be based on estimates of how much each table is used, how large each table will be, and how much memory is free. The approach also assumes that the average access density does not change over time.
It further assumes that you are not planning to add more tables in the future and that your tables do not grow in size. In a typical situation, you should not use up all of the memory that you have: leave enough space to allow for the likely growth of your tables, plus a margin for error, so that you do not run out of memory.
Note: Since virtual memory may be swapped to disk frequently, using virtual memory negates the advantage of in-memory tables. Always make sure that the entire DBMS process fits into the physical memory of the computer.