Logging component
This section provides detailed documentation of the properties and methods of the Logging component, which is used by the UNICOM Intelligence software to create and write logs. Additional overview material and examples of using the Logging component are not yet available.
The Logging component can write to ASCII or Unicode files, as well as to database targets supported by OLE DB. The component can be used in two models: global or local. The choice depends on whether you need to share your logs with other processes, and on performance considerations. Switching between the two models is seamless.
Global model
Use this mode when you want to share and synchronize with other clients and processes. The global model stores loggers within groups. All members of one group share the same log settings and resources. This means that the written entry sequence and the notification sequence are guaranteed to be FIFO (First-In-First-Out) within the group. However, this is not true across groups.
There is greater overhead in the global model, and you have to share resources with other objects and clients. If performance or delays are critical, for example when debugging threading or synchronization issues, you may prefer the local model.
Local model
Use the local model when speed is more important than sharing your logs. The local model logs to file only, because the overhead of writing database records is incompatible with the speed requirements of local logging. In this model, there is no way for another process, such as a global log browser, to hook into the process and retrieve the connection point that exposes the logs. Instead, in-process log components can enlist with the global LogAgent interface, stating their identity and context. If you create the log using the CreateLog or CreateLogEx method with the PublishLogs parameter set to TRUE, all log entries are propagated to the agent, which allows other processes to see your logs. Creating the log using the CreateLogEx method with the PublishLogs parameter set to FALSE enlists the component with the agent but does not propagate logs to it.
One of the main reasons for using the local model is its high performance. Creating the log as a public log, and thereby propagating all the log entries to the agent (another process), requires marshaling and some CPU time to transfer the actual data. Using the LogAgent therefore reduces the performance of the log component.
Optimizing performance
Performance tests typically reveal a performance loss of around 50% when entries are made public. If performance still isn't satisfactory, there are some other things you can do:
Do not make your logs public.
Filter your logs, to show only the relevant messages.
Register your logs and use the LogById methods.
Customize the log entries to contain the minimum amount of information.
Switching between models
There are some differences in the way you create and initialize local and global logs, but that is really the only difference. Once you have created your log and initialized it with your settings, all logging calls are identical independent of your choice of model and features. This makes it easy to switch models and features at run time as well as design time. This also facilitates having multiple creation and initialization scenarios without having to worry about logging methods throughout your code.
When choosing which methods to use to create and initialize the log components, note that all methods containing the word "Group" are targeted at the global model and the others at the local model. This does not mean that a particular method (such as CreateLogInGroup) will not work and create a local logger. It will work, but the CreateLogInGroup method is targeted at the global model and therefore declares parameters that are most useful in that model.
Filtering logs
You can filter out log entries, for example to reduce clutter or to see only critical errors. You do this by first setting the filter and then activating it, if it is not already active. The filter is a set of binary flags that you can set on or off. The following table provides some examples.
Filter                                                      Binary value          Description
No filter (initial/default value)                           0000 0000 0000 0000   All logs are written.
LCF_ELEVEL_ALL ^ LCF_ELEVEL_TRACE                           1111 1111 1110 1111   Only trace logs are written.
LCF_ELEVEL_ALL ^ (LCF_ELEVEL_ERROR | LCF_ELEVEL_WARNING)    1111 1111 1111 0011   Only error and warning logs are written.
The filter takes effect when you use LogById with an ID that was registered with Levels, or when you use LogThisEx and explicitly state the Levels. See the method reference for further details.
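The filter arithmetic in the table can be sketched in Python. The numeric flag values below are assumptions inferred from the 16-bit masks shown above (the real constants may differ); only the resulting masks and the written/suppressed behavior come from the table:

```python
# Assumed flag values, inferred from the 16-bit masks in the table above.
LCF_ELEVEL_ALL     = 0xFFFF  # every level bit set
LCF_ELEVEL_TRACE   = 0x0010  # hypothetical: bit 4
LCF_ELEVEL_ERROR   = 0x0008  # hypothetical: bit 3
LCF_ELEVEL_WARNING = 0x0004  # hypothetical: bit 2

def is_written(filter_mask, level_flag):
    """An entry is written when its level bit is NOT set in the filter."""
    return (filter_mask & level_flag) == 0

# No filter (0): everything is written.
assert is_written(0x0000, LCF_ELEVEL_ERROR)

# Only trace logs: set every bit except the trace bit.
only_trace = LCF_ELEVEL_ALL ^ LCF_ELEVEL_TRACE            # 1111 1111 1110 1111

# Only errors and warnings: clear both bits (note the parentheses).
errors_warnings = LCF_ELEVEL_ALL ^ (LCF_ELEVEL_ERROR | LCF_ELEVEL_WARNING)
```

With these assumed values, only_trace is 0xFFEF and errors_warnings is 0xFFF3, matching the binary values in the table.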
Registering logs
Registering your logs has two advantages. First, you can gather all your logs in one place, for example in your InitialUpdate procedure, so there is only one place to maintain the log. Second, registering the logs can increase performance, because the front end does not have to transfer all the data; it is already stored in the backend. Attempting to register an ID that is already registered causes an exception, and trying to use LogById on an unregistered ID also causes an exception.
When you store a log entry using RegisterLog, the ID is stored along with the entry in the backend.
RegisterLogEx does the same, but additionally stores the ID along with the levels in the front end.
Every time you call one of the LogById methods, the entry and, if available, the levels are looked up and used in the call.
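The register-once, look-up-by-ID contract described above can be mocked in Python. This is an illustrative sketch, not the component's API; the class and method names are invented:

```python
class LogRegistry:
    """Toy model of the RegisterLog / LogById contract described above."""

    def __init__(self):
        self._entries = {}  # backend store: id -> (entry, levels)

    def register_log(self, log_id, entry, levels=None):
        # Registering an ID that is already registered raises an exception.
        if log_id in self._entries:
            raise ValueError(f"ID {log_id} is already registered")
        self._entries[log_id] = (entry, levels)

    def log_by_id(self, log_id):
        # Using LogById with an unregistered ID also raises an exception.
        if log_id not in self._entries:
            raise ValueError(f"ID {log_id} is not registered")
        # The entry and, if available, the levels are looked up and used.
        return self._entries[log_id]
```

Because the entry text lives in the registry (the "backend"), a log call only has to pass the ID, which is the performance gain the section describes.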
Scrubbing
Setting the scrub settings of the log component prevents the hard disk from becoming crowded with log files. You can define the maximum number of log files that can be in use, the maximum size of each file, and the maximum number of records in each log file. You can change these settings after calling CreateLog. However, after the first call to a logging method (such as LogThis), you can no longer change the scrub settings.
If you log to a database, there are no methods to set or execute scrubbing. Instead, the log database contains queries and stored procedures to do the cleanup. You can add queries and stored procedures of your own if you want.
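A minimal sketch of the per-file scrub limits, in Python. The function and parameter names are invented for illustration; the real component applies these limits internally:

```python
def exceeds_scrub_limits(file_size, record_count,
                         max_size=None, max_records=None):
    """Return True when a log file exceeds any configured scrub limit.

    Models the two per-file limits described above: maximum file size
    and maximum number of records. A limit of None means "no limit".
    """
    if max_size is not None and file_size >= max_size:
        return True
    if max_records is not None and record_count >= max_records:
        return True
    return False
```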
Custom logging
If the standard log output doesn't meet your requirements, you can create your own custom log entries. Defining a custom format is easy and only requires a change to the CreateLog part; after that, you use the normal LogThis to do the logging, which allows seamless switching between standard and custom logs. Using the CreateCustomLog or CreateCustomLogEx method, you can specify a custom field separator, specify standard fields to include or exclude, and specify any number of fields (of any type) to follow. However, it is your responsibility to construct the custom log entry.
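The idea of a custom separator and a chosen set of fields can be sketched as follows. This is a hypothetical helper: the real component only declares the separator and fields, and constructing the entry text is your responsibility:

```python
def build_custom_entry(fields, separator=";", include=None):
    """Join selected fields into one custom log entry string.

    fields  -- ordered mapping of field name to value
    include -- optional list of field names to keep (others are excluded)
    """
    if include is not None:
        fields = {name: fields[name] for name in include}
    return separator.join(str(value) for value in fields.values())
```

For example, fields of time, level, and text with a "|" separator produce one delimited line per entry, and passing include=["text"] drops the standard fields.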
Rich error information
The log components provide rich error information. Performing illegal operations causes the log components to throw an exception. You must handle these exceptions to prevent program execution from halting.
Fault tolerance
The log components are fault tolerant in their handling of the output target. Whenever a database session or connection cannot be created or accessed, the backend automatically switches to the emergency backup option of writing to file. The backend does the same if a write operation to the database fails. In an emergency backup situation, the log file header contains (EMERGENCY BACKUP - DATABASE CONNECTION WAS LOST). There is no other fault tolerance built into the log components.
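The emergency backup behavior can be sketched like this. The writer callables are stand-ins supplied by the caller; only the fallback rule and the header text come from the description above:

```python
def write_with_backup(entry, db_write, file_write):
    """Write to the database target; on any failure, fall back to file.

    db_write and file_write are stand-in callables for the two targets.
    """
    try:
        db_write(entry)
        return "database"
    except Exception:
        # Emergency backup: the log file header carries this marker.
        file_write("(EMERGENCY BACKUP - DATABASE CONNECTION WAS LOST)")
        file_write(entry)
        return "file"
```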
Loggers are unique
The logger front-end is always present regardless of the model and features you are using. When the front-end connects to the backend (and sometimes also the LogAgent), the front-end Dispatch pointer is passed and possibly marshaled. The pointer is prepended with an internal serial number to make the identifier unique, and the identifier is stored in both the front-end and the backend. Whenever the backend or LogAgent receives a call, this identifier is passed along, so the backend or LogAgent always knows the origin of any action within the framework.
Synchronization
All log components are free-threaded. Synchronization is done using critical sections. If you are running the Logging components from a multithreaded apartment (MTA), be careful not to suspend a thread that is currently logging. Suspending the thread may leave it owning a critical section in the middle of a call to a logging method such as LogThis, which means that no other thread can log until the suspended thread releases the critical section. This is not a special consideration for the Logging components; it applies to multithreaded development in general.
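The critical-section discipline can be illustrated with a plain lock in Python (an analogue, not the component's implementation): every logging call runs inside one shared lock, so concurrent threads never corrupt the log, and a thread suspended while holding the lock would stall all the others:

```python
import threading

class ThreadSafeLog:
    """Each log call holds one shared lock, like a critical section."""

    def __init__(self):
        self._lock = threading.Lock()
        self.entries = []

    def log_this(self, text):
        with self._lock:  # the "critical section"
            self.entries.append(text)

log = ThreadSafeLog()

def worker(name, count=100):
    for n in range(count):
        log.log_this(f"{name}-{n}")

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# All 400 entries are recorded; a thread suspended inside log_this
# would block every other thread at the same lock.
```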
Using the component
To use the Logging component in Visual C++, import LogFront.dll. To use the Logging component in Visual Basic, add SPSS MR LogFront Type Library to the project references.
Known problem
A known problem in the Logging component means that you may encounter a "User breakpoint" when using the Visual C++ debugger on the first call to LogThis(). It is safe to ignore this. (Hitting F5 twice will enable you to continue debugging.)
Requirements
UNICOM Intelligence Data Model
See
Logging component: Object model
See also
Using the Logging component in mrScriptBasic
UNICOM Intelligence Data Model reference