Deciding which tables to designate as in‑memory tables
Ideally, your computer has enough memory to store all of your tables in memory, which gives the best possible performance for database transactions. However, in practice, you probably have to choose a subset of tables to store in memory, with the remaining tables being stored on disk.
If you cannot fit all tables in memory, try to put the most frequently used data in memory. In principle, small, frequently used tables should go into memory, and large, rarely used tables can be left on disk. For the other combinations, such as large tables that are heavily used or small tables that are not, the choice should depend on the "density" of access to a table. The tables with the greatest access density should be in-memory tables.
Access density is measured in accesses per megabyte per second, shown in the following examples as rows per MB per second. For simplicity, one access per row is assumed.
Example 1: If you have a 1 MB table, and you access 300 rows in a 10-second period, the density is 30 rows/MB/s (300 rows / 1 MB / 10 seconds).
Example 2: If you have a 500 KB table and you access 300 rows per second, the access density is 600 rows/MB/s (300 rows / 0.5 MB / second).
The table in the second example has a higher access density than the first one, and if you can only fit one of these tables into memory, you should put the second table into memory.
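The two examples above can be expressed as a small calculation. This is a minimal sketch (the function name and structure are illustrative, not part of solidDB):

```python
def access_density(rows_accessed, table_size_mb, period_seconds):
    """Access density in rows per MB per second (one access per row assumed)."""
    return rows_accessed / table_size_mb / period_seconds

# Example 1: 1 MB table, 300 rows accessed over a 10-second period
d1 = access_density(300, 1.0, 10)   # 30.0 rows/MB/s
# Example 2: 500 KB (0.5 MB) table, 300 rows accessed per second
d2 = access_density(300, 0.5, 1)    # 600.0 rows/MB/s

# The second table has the higher density, so it is the better
# candidate for memory if only one of the two fits.
assert d2 > d1
```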
You might want to take into account the number of bytes accessed each time. This is typically the average row size, although it might be different if you are using binary large objects (BLOBs), or if the server can find all the information that it needs by reading just an index rather than the entire table.
Because the server normally reads data from disk in multiples of a "block" (where a block is typically 8 KB), the server does approximately the same amount of work whether you read a 10-byte row or a 2000-byte row.
When you calculate the size of your table in memory, you must take into account the table, all its indexes, and all its BLOBs.
Each time that you add an index, you add more data that is stored about that table. Furthermore, when you add a foreign key constraint to a table, the server creates an appropriate index (if one does not already exist) to speed up certain types of lookup operations on that table.
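A rough footprint estimate therefore has to add up the base rows, every index, and all BLOB data. The following sketch uses hypothetical sizes for illustration; the function and the figures are assumptions, not solidDB measurements:

```python
def estimated_footprint_mb(row_count, avg_row_bytes, index_sizes_bytes, blob_bytes):
    """Rough in-memory footprint: base rows + all indexes + all BLOBs, in MB."""
    total_bytes = row_count * avg_row_bytes + sum(index_sizes_bytes) + blob_bytes
    return total_bytes / (1024 * 1024)

# Hypothetical table: 100,000 rows averaging 200 bytes each,
# two indexes of roughly 4 MB and 6 MB, and 10 MB of BLOB data.
size_mb = estimated_footprint_mb(
    row_count=100_000,
    avg_row_bytes=200,
    index_sizes_bytes=[4_000_000, 6_000_000],
    blob_bytes=10 * 1024 * 1024,
)
# size_mb is roughly 38.6 MB -- about double the ~19 MB of raw row data,
# which is why ignoring indexes and BLOBs badly understates memory needs.
```

Remember that adding an index later (including the index the server creates for a foreign key constraint) increases this footprint.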
You are unlikely to have perfect information, so your calculations will probably be based on estimates. Consider the following points when deciding which tables should be in memory:
- If you designate more in-memory tables than the computer's physical memory can hold, table data will be swapped to disk, which can substantially reduce performance.
- Although you must designate a table as in-memory or disk-based when you create it, you might not know how frequently each table is accessed until it holds a substantial amount of data. Switching a populated table from disk-based to in-memory requires creating a new table; see Switching a table between in-memory and disk-based.
- Average access density might change over time.
- You might add more tables in the future, or your tables might grow in size.
In a typical situation, you should not use up all the memory that you have. Leave enough space to accommodate future growth in your tables, and keep a margin for error so that you do not run out of memory.
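Putting the guidelines together, the selection process amounts to ranking tables by access density and filling the memory budget while holding back a safety margin. This greedy sketch is an illustration of that reasoning, not a solidDB feature; all names and numbers are hypothetical:

```python
def choose_in_memory_tables(tables, memory_budget_mb, safety_margin=0.2):
    """Greedy sketch: pick tables in descending order of access density
    until the budget (minus a growth/safety margin) is used up.

    tables: list of (name, size_mb, access_density) tuples.
    """
    usable_mb = memory_budget_mb * (1 - safety_margin)
    chosen, used_mb = [], 0.0
    for name, size_mb, _density in sorted(tables, key=lambda t: t[2], reverse=True):
        if used_mb + size_mb <= usable_mb:
            chosen.append(name)
            used_mb += size_mb
    return chosen

# Hypothetical workload: with a 20 MB budget and a 20% margin (16 MB usable),
# the two densest tables fit and the large, rarely used log table stays on disk.
chosen = choose_in_memory_tables(
    [("orders", 10, 600), ("log", 50, 5), ("users", 5, 200)],
    memory_budget_mb=20,
)
```

Note that greedy selection by density is only a heuristic; with very uneven table sizes you may want to compare alternative combinations by hand.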
Note: Because virtual memory can be swapped to disk frequently, using virtual memory negates the advantage of in-memory tables. Always make sure that the entire DBMS process fits into the physical memory of the computer.
See also: Choosing the type of in-memory table