This article assumes that you are familiar (at least in general terms) with the concepts of address space, memory page, RESERVE, and COMMIT.
Contents
- Memory architecture in Delphi applications
- What is the problem with a memory manager?
- How do you find memory bugs in Delphi applications?
- What is the VirtualMM debugging memory manager?
- Where can I download VirtualMM?
- How do I install VirtualMM?
- How do I add (connect) VirtualMM to my project?
- How do I configure VirtualMM?
- What problems does VirtualMM solve?
- When should I use VirtualMM?
- When should I NOT use VirtualMM?
- Using VirtualMM with EurekaLog
- What should I do if my application crashes with "Out Of Memory" when using VirtualMM?
- Notes
Memory architecture in Delphi applications
To understand the purpose (essence) of the VirtualMM debugging memory manager, we first need to refresh our knowledge on the basics of memory management (*) in Delphi applications.How do Delphi applications allocate memory?
To recap: a Delphi application has a CPU stack for local variables, a data section for global variables, and dynamic memory (the heap) for any variable-sized data. Although Delphi has a variety of dynamic data (objects, arrays, strings, interfaces, etc.), all of it is ultimately allocated through the GetMem function in one way or another. For example, creating a string calls the GetMem function, passing it the character size of the string plus the size of the string's service header. Creating an object calls the GetMem function, passing it the class's InstanceSize value. And so on. In other words, every dynamic allocation in Delphi is built on top of the GetMem function.
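For illustration, here is a hedged sketch of how typical dynamic data ends up in GetMem (the comments describe the conceptual behavior, not the RTL's exact code):
var
  Obj: TObject;
  S: string;
  P: Pointer;
begin
  Obj := TObject.Create;  // internally ends up in GetMem(TObject.InstanceSize)
  SetLength(S, 100);      // internally ends up in GetMem(100 * SizeOf(Char) + header size)
  GetMem(P, 256);         // the same base function, called explicitly
  FreeMem(P);
  S := '';                // releases the string memory via FreeMem
  Obj.Free;               // releases the object memory via FreeMem
end;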
How does System allocate memory?
Just as Delphi has a base memory allocation function (GetMem) that handles all memory allocations, Windows also has its own base memory allocation function: VirtualAlloc. It actually has several variants, but for simplicity we'll ignore them: for the purposes of this article, we'll refer to all functions of the "VirtualAlloc family" simply as "the VirtualAlloc function".
What is a memory manager?
If you don't know anything about memory in Delphi and Windows applications, you might assume that Delphi's GetMem function simply calls the VirtualAlloc function in Windows, i.e. that they're essentially the same function. But that's not true: in fact, the GetMem function doesn't call the VirtualAlloc function directly. Instead, GetMem calls Delphi's memory manager, and the memory manager then calls the VirtualAlloc function.
Why do Delphi applications use a memory manager?
Why do we need a memory manager at all? Why can't we use the operating system's memory management functions directly? That is, why can't the GetMem function simply call the VirtualAlloc function? The problem is that the VirtualAlloc function allocates memory with a granularity of 64 KB (**). This means that you can't allocate a block smaller than 64 KB. So, if you allocate 8 bytes for a TObject using the VirtualAlloc function, the VirtualAlloc function will take a full 64 KB out of your address space instead of 8 bytes. And these 64 KB cannot be reused (to allocate another block of memory via the VirtualAlloc function) until you release the created TObject and return the allocated memory. That is, if you create 100 objects of, say, 20 bytes each (very simple objects inherited directly from TObject), then instead of two kilobytes (20 bytes * 100 = 2 KB) you're now occupying about 6.25 MB of address space (64 KB * 100 = 6,400 KB) - several orders of magnitude more!
This is precisely the problem the memory manager solves: it allocates one large chunk of memory using the VirtualAlloc function (for example, 1 MB), and then places several smaller memory allocations (which come from the GetMem function) into this block. Thus the memory manager is able to accommodate approximately 50,000 20-byte objects within a single 1 MB block.
Further in the text, I will refer to allocated/free memory (without quotation marks) when I mean memory that was actually allocated and freed through the VirtualAlloc/VirtualFree family of functions. I will refer to "allocated"/"free" memory (in quotation marks) when I mean memory that was allocated (via the VirtualAlloc function), but only logically marked as allocated or free through the GetMem/FreeMem functions.
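If you want to check the actual allocation granularity (the 64 KB value mentioned above) and the memory page size (the 4 KB value that will come up later in this article) on a given system, you can query them with GetSystemInfo. A minimal console sketch (unit name as in recent Delphi versions; older versions use the Windows unit):
program GranularityDemo;
{$APPTYPE CONSOLE}
uses
  Winapi.Windows;
var
  SI: TSystemInfo;
begin
  GetSystemInfo(SI);
  Writeln('Allocation granularity: ', SI.dwAllocationGranularity, ' bytes'); // typically 65536 (64 KB)
  Writeln('Page size: ', SI.dwPageSize, ' bytes');                           // typically 4096 (4 KB)
end.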
What is the problem with a memory manager?
Let's say you allocated memory for an object using the VirtualAlloc function, worked with the object, and then freed it (using the VirtualFree function). If you now mistakenly try to do something with the (already freed) object, you'll get an Access Violation exception, because you're accessing inaccessible memory (memory that is not allocated). For example:
var
P: Pointer;
begin
P := VirtualAlloc({...}); // allocate memory for P
P^ := {...}; // do something (work) with P
VirtualFree(P); // finished working, free the memory
// BUG: accessing memory that has already been freed
P^ := {...}; // this line will ALWAYS throw an Access Violation exception
end;
That's good: memory bugs are immediately visible, and we detect them right where they occur.
Will anything change if we don't use the VirtualAlloc function, but use the memory manager (the GetMem function) instead? Yes, of course it will:
GetMem(P, {...}); // actually: doesn't allocate memory
FreeMem(P); // actually: doesn't free memory
After all, memory manager functions don't actually allocate or free memory. Instead, GetMem returns a pointer into the middle of some (already allocated) memory block and marks this area as "allocated", and FreeMem later simply marks it as "free" again. In other words:
var
P: Pointer;
begin
GetMem(P, {...}); // "allocate" memory for P
P^ := {...}; // do something (work) with P
FreeMem(P); // finished working, "free" the memory
// BUG: accessing already freed memory
P^ := {...}; // this line executes successfully because this memory is still allocated
end;
Since the memory isn't actually freed (it is only logically marked as "free"), we can still successfully access it even after it has been logically freed. This is the problem with using a memory manager: a previously obvious memory bug is now hidden.
How do you find memory bugs in Delphi applications?
There are so-called debugging memory managers for Delphi (***). Unlike a regular memory manager, the purpose of a debugging memory manager is to help you diagnose memory problems. How do they do this?
Write after delete
If we look at the example above, the problem is that a memory block that is logically "free" changes its contents:
FreeMem(P); // the memory block is now "free"
P^ := {...}; // the "free" memory block has changed
How can a memory manager detect this? For example, a debugging memory manager may fill a "free" block with some known pattern (for example, the $CC byte). Then, if the memory manager later allocates a new block of memory and sees that the "free" memory has changed (it doesn't contain the $CC byte at some location), there was a write to "freed" memory. For example:
var
P, B: Pointer;
begin
GetMem(P, {...});
P^ := {...};
FreeMem(P);
// BUG: accessing already freed memory
P^ := {...}; // this line executes successfully
GetMem(B, {...}); // will raise a "memory corruption" error,
// because the memory manager will see that the memory previously occupied by P has been modified
end;
As you can see, although the memory bug can be detected, it won't be detected at the moment it occurs, but much later. This greatly complicates memory diagnostics. This is an inherent limitation of such a debugging memory manager.
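To make the pattern idea concrete, here is a minimal, hypothetical sketch of the fill-and-verify logic a debugging memory manager might use (written for recent Delphi versions; the function names and the $CC value are illustrative, not VirtualMM's or FastMM's actual code):
const
  FreedPattern = $CC; // illustrative debug fill value

// Called by a hypothetical debugging memory manager when a block is "freed".
procedure FillFreedBlock(P: Pointer; Size: NativeInt);
begin
  FillChar(P^, Size, FreedPattern);
end;

// Called later (for example, when the block is about to be reused) to verify
// that nobody wrote into the "freed" block in the meantime.
function FreedBlockIsIntact(P: PByte; Size: NativeInt): Boolean;
var
  I: NativeInt;
begin
  Result := True;
  for I := 0 to Size - 1 do
    if P[I] <> FreedPattern then
    begin
      Result := False; // someone wrote to "freed" memory: a memory bug
      Break;
    end;
end;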
Buffer overflow
Another common bug is writing beyond the bounds of allocated memory. For example:
var
Buffer: PInteger;
X, Count: Integer;
begin
GetMem(Buffer, Count * SizeOf(Integer)); // "allocate" memory for Count Integers
for X := 0 to Count do // BUG: should be Count - 1
begin
Buffer^ := 0; // will write data outside the "allocated" buffer on the last step
Inc(Buffer);
end;
end;
Here we allocate memory for Count elements, while zeroing out Count + 1 elements (one more than necessary). This means we'll be writing to "free" memory located right behind the memory block we've "allocated". The problem is that sometimes this memory isn't "free" but is "allocated" for other data. In that case we'll corrupt some other memory block located immediately after our block. This bug is called a buffer overflow.
How can a debugging memory manager help us with this bug? For example, it might "allocate" more memory than you requested. Let's say you request a 20-byte memory block, and the memory manager "allocates" 28 bytes, reserving 4 bytes on each side for a guard pattern (for example, the $CC bytes). If anything in this reserved area has changed when the memory is "freed", the memory has been overwritten. For example:
var
Buffer, P: PInteger;
X, Count: Integer;
begin
GetMem(Buffer, Count * SizeOf(Integer)); // "allocate" memory for Count Integers
P := Buffer; // keep the original pointer in Buffer so that it can be freed later
for X := 0 to Count do // BUG: should be Count - 1
begin
P^ := 0; // here: buffer overflow when X = Count
Inc(P);
end;
{ ... do something else with Buffer }
FreeMem(Buffer); // will trigger a buffer overflow error,
// because the memory manager will see that the memory immediately after our block has been modified.
end;
And again, we see the same problem: the bug will be detected long after it occurs.
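As an illustration of the guard-byte idea, here is a hypothetical sketch (written for recent Delphi versions; the names DebugGetMem/DebugFreeMem, the 4-byte guard size and the $CC value are invented for this example, and a real debugging memory manager stores the block size in a header instead of asking the caller for it):
const
  GuardSize = 4;   // guard bytes on each side (illustrative value)
  GuardByte = $CC;

// "Allocate": request extra room and surround the user block with guard bytes.
function DebugGetMem(Size: NativeInt): Pointer;
var
  Raw: PByte;
begin
  GetMem(Raw, Size + 2 * GuardSize);
  FillChar(Raw^, GuardSize, GuardByte);                  // leading guard
  FillChar(Raw[GuardSize + Size], GuardSize, GuardByte); // trailing guard
  Result := Raw + GuardSize;                             // hand out the middle part
end;

// "Free": verify the guards before releasing the block.
procedure DebugFreeMem(P: Pointer; Size: NativeInt);
var
  Raw: PByte;
  I: Integer;
begin
  Raw := PByte(P) - GuardSize;
  for I := 0 to GuardSize - 1 do
  begin
    Assert(Raw[I] = GuardByte, 'Block Underflow');                   // a write before the block
    Assert(Raw[GuardSize + Size + I] = GuardByte, 'Block Overflow'); // a write after the block
  end;
  FreeMem(Raw);
end;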
Calling methods of a deleted object
Memory isn't just for writing: quite often, a memory block holds an object. What happens if we try to call a method of an already "freed" object? Well, a method can be either regular (static) or dynamic (virtual).
Calling a static method of a deleted object
A static method is essentially no different from a regular function: it has a fixed address. Therefore, calling a static method doesn't depend on the object's data, and it doesn't matter whether the object is "allocated" or "deallocated". For example:
var
  L: TList;
  I: Integer;
  P: Pointer;
begin
  L := TList.Create;  // "create" the object
  L.Free;             // "free" the object
  I := L.IndexOf(P);  // will execute successfully and return the correct result
end;
Since the call of the IndexOf method does not use the data of the L object, the memory manager has no influence on the method call: even if the memory manager actually frees the object, the method will still be called and code execution will continue. Therefore, a debugging memory manager is not able to detect such a problem.
Note that we are only talking about the call (invocation) of the method. How this method will execute with an already deleted object is a separate question; we'll look at it later.
Calling a virtual method of a deleted object
To call a virtual method, the code must read the method's address from the object's data. If the object is "freed" and its data hasn't changed since it was "freed", then nothing will prevent the virtual method from being called. For example:
var
  S: TStringList;
  I: Integer;
begin
  S := TStringList.Create;  // "create" the object
  // Working with S
  S.Free;                   // "free" the object
  I := S.Count;             // will execute successfully and return the correct result
end;
Here, the address of the virtual method GetCount is obtained through the data of the S object. When object S is "deleted", the memory it occupied remains accessible, so the method call succeeds.
How can a debugging memory manager catch this problem? Well, for example, it could write different virtual method addresses into the memory previously occupied by the S object. If someone then tries to call a virtual method of the "deleted" object, it will actually call a debug memory manager routine, which will raise an error:
var
  S: TStringList;
  I: Integer;
begin
  S := TStringList.Create;  // "create" the object
  // Working with S
  S.Free;                   // "free" the object
  I := S.Count;             // will raise a "method called on a deleted object" error
end;
Note: in this case, the debug memory manager was able to catch the error immediately when it occurred, rather than later, as in the other examples above.
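A minimal, hypothetical sketch of this trick (conceptually similar to what FastMM does in its full debug mode; the names below are invented and real implementations are more elaborate):
type
  // A class whose virtual methods all report the problem. A real implementation
  // provides many virtual method slots, so that whatever slot index the original
  // class would have used, the call still lands in an error routine.
  TFreedObject = class
  public
    procedure VirtualMethodError; virtual;
  end;

procedure TFreedObject.VirtualMethodError;
begin
  // assumes System.SysUtils is in scope for Exception
  raise Exception.Create('Virtual method called on a freed object');
end;

// Called by the hypothetical debugging memory manager when an object is "freed":
// overwrite the instance's class (VMT) pointer, which is the first pointer-sized
// field of every Delphi object.
procedure MarkObjectAsFreed(Instance: Pointer);
begin
  PPointer(Instance)^ := Pointer(TFreedObject);
end;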
Read after delete
Memory can not only be written to, it can also be read. For example:
var
  L: TList;
  I: Integer;
begin
  L := TList.Create;  // "create" the object
  L.Free;             // "free" the object
  I := L.Count;       // will execute successfully and return the correct result
end;
If the object's memory had actually been freed (via the VirtualFree function), any attempt to read or write the object's data (fields) would immediately raise an Access Violation exception. But since the object is only "freed" (via the FreeMem function), its data is still accessible, so attempts to read and write (modify) the object's data (fields) will succeed.
How can a debugging memory manager help with this bug? Well, it can't help much. Yes, it can fill the "freed" object with some debugging pattern (the $CC byte, for example), but this won't prevent code from reading and writing the object's data:
var
  L: TList;
  I: Integer;
begin
  L := TList.Create;  // "create" the object
  L.Free;             // "free" the object
  I := L.Count;       // will succeed, but return an incorrect result ($CCCCCCCC, i.e. -858993460)
end;
The only chance here is to hope that the data read is so wrong that it ultimately causes some other exception. This usually happens when the address of something is read from the object.
Therefore, a debugging memory manager may or may not help in detecting this bug.
Memory reuse
Let's look at this example:
var
  S1, S2: TStringList;
  I: Integer;
begin
  S1 := TStringList.Create;  // "create" the object
  // Working with S1
  S1.Free;                   // "free" the object
  S2 := TStringList.Create;  // "create" the object
  I := S1.Count;             // logically a bug, but will always succeed, since S1 = S2
end;
Here we "delete" the object, but immediately "create" an identical one. The addresses of both objects will be the same (i.e. S1 = S2), since the new object will be allocated in place of the old one. Then we access the first, "deleted" object. Technically, S1.Count is the same as S2.Count. Therefore, although this is a logical error in the code, such code will execute without errors, working with S2 instead of S1.
A debugging memory manager cannot detect this problem, since from its point of view the accessed memory is legitimately "allocated" again.
What is the VirtualMM debugging memory manager?
How can we eliminate the shortcomings of Delphi's debugging memory managers described above? To do that, we need to understand the source of these issues: they arise from software memory management, which means that all memory problems have to be detected by user-mode code.
Unlike Delphi's memory managers, the system (VirtualAlloc) relies on hardware memory management, which means that memory problems are detected by the CPU itself. Therefore, one way to address the shortcomings of Delphi's debugging memory managers is to switch from software memory management to hardware memory management. This is precisely what the VirtualMM debugging memory manager does: roughly speaking, it replaces the GetMem function with the VirtualAlloc function. Its name is derived from "Virtual" (referring to the VirtualAlloc function) and "MM" (Memory Manager).
VirtualMM is distributed as Pascal source code. It supports IDEs from Delphi 6 up to the latest available version (RAD Studio 13 Florence at the time of writing). Earlier versions of Delphi (5 and below) are not supported due to compiler limitations.
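To make the "GetMem becomes VirtualAlloc" idea concrete, here is a heavily simplified, hypothetical sketch of such a memory manager. This is NOT VirtualMM's actual source: there are no guard pages, no small blocks and no alignment handling here, the unit and function names are invented, and the TMemoryManagerEx signatures are those of recent Delphi versions:
unit NaiveVirtualMM;

interface

implementation

uses
  Winapi.Windows;

function NaiveGetMem(Size: NativeInt): Pointer;
begin
  // Each allocation gets its own region straight from the OS.
  Result := VirtualAlloc(nil, Size, MEM_RESERVE or MEM_COMMIT, PAGE_READWRITE);
end;

function NaiveFreeMem(P: Pointer): Integer;
begin
  // Really release the memory: any later access raises an Access Violation.
  if VirtualFree(P, 0, MEM_RELEASE) then
    Result := 0  // success
  else
    Result := 1; // error
end;

function NaiveReallocMem(P: Pointer; Size: NativeInt): Pointer;
var
  Info: TMemoryBasicInformation;
  CopySize: NativeInt;
begin
  Result := NaiveGetMem(Size);
  // The old block is a region of its own, so VirtualQuery tells us how much we may copy.
  if (Result <> nil) and (VirtualQuery(P, Info, SizeOf(Info)) = SizeOf(Info)) then
  begin
    CopySize := NativeInt(Info.RegionSize);
    if CopySize > Size then
      CopySize := Size;
    Move(P^, Result^, CopySize);
  end;
  NaiveFreeMem(P);
end;

function NaiveAllocMem(Size: NativeInt): Pointer;
begin
  Result := NaiveGetMem(Size); // freshly committed pages are already zero-filled
end;

function NaiveRegisterLeak(P: Pointer): Boolean;
begin
  Result := False; // leak registration is not supported in this sketch
end;

var
  MM: TMemoryManagerEx;

initialization
  // Like VirtualMM, such a unit must be the first unit in the project,
  // so that no memory has been allocated through the built-in manager yet.
  MM.GetMem := NaiveGetMem;
  MM.FreeMem := NaiveFreeMem;
  MM.ReallocMem := NaiveReallocMem;
  MM.AllocMem := NaiveAllocMem;
  MM.RegisterExpectedMemoryLeak := NaiveRegisterLeak;
  MM.UnregisterExpectedMemoryLeak := NaiveRegisterLeak;
  SetMemoryManager(MM);

end.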
It's important to understand that VirtualMM runs noticeably slower than memory managers that use software allocation: user code executes quickly on the CPU, but switching to kernel mode (which is necessary for hardware memory management) is very slow. Therefore, if your program constantly allocates and frees memory, be prepared for a noticeable slowdown.
Recall that the need for memory managers originally arose in Delphi due to the system's 64 KB memory allocation granularity. How does VirtualMM solve this problem?
- If your application is 64-bit, then there is almost no problem: the 64-bit address space is 8 TB, which allows you to allocate no more than 134,217,728 memory blocks (ranging from 1 byte to 64 KB in size).
- If your application is 32-bit, things are much more complicated: the 32-bit address space is 2 GB (4 GB max), which allows you to allocate only 32,768 memory blocks maximum (ranging from 1 byte to 64 KB in size) – which is terribly small.
Therefore, if the requested block size is smaller than the page size of 4 KB (****), VirtualMM will classify it as a "small block". These blocks are grouped together in a single pool.
It's important to understand that since all "small blocks" are grouped in a single memory area, there will be no inaccessible memory pages between them, unlike all other (regular, large) memory blocks. This means VirtualMM won't be able to catch buffer overflows in "small blocks" in hardware. However, VirtualMM adds guard values before and after each "small block" to catch buffer overflows in software, just like regular debugging memory managers do.
It also means that the memory consumption of your application will increase significantly when using VirtualMM: this memory is spent on guard pages at the edges of allocated memory blocks, as well as on rounding block sizes up to the allocation granularity. So if you want to debug your application with VirtualMM, you need to ensure that any memory issue surfaces as soon as possible: the sooner the problem is detected, the less likely your application is to crash with an out-of-memory error first.
Where can I download VirtualMM?
The VirtualMM debugging memory manager is included with EurekaLog. You can find it in the \Extras subfolder of your EurekaLog installation. If you don't have EurekaLog, you can download VirtualMM separately from the EurekaLog website.
It should be noted that the concepts behind the VirtualMM debugging memory manager were first implemented in another debugging memory manager: SafeMM, presented by Mark Eddington at the DelphiLive conference. SafeMM was distributed as part of the DelphiLive conference materials, so it could not be downloaded directly. The conference materials are no longer available, but its source code was posted on Code Central and later adapted to newer (at the time) Delphi versions. However, SafeMM was written as a proof of concept and was not supported or developed further. Therefore, if you are looking for a place to download SafeMM, you can download VirtualMM instead: VirtualMM has more features.
How do I install VirtualMM?
VirtualMM does not have a dedicated installer and is distributed:
- Either with EurekaLog. In this case, "installing VirtualMM" simply means "installing EurekaLog". After installation, VirtualMM can be found in the \Extras subfolder of the EurekaLog installation folder. This folder will already be included in the search paths of the installed IDEs; nothing else is required;
- Or as a ZIP archive with Delphi source code files. To install VirtualMM in this form, simply unzip the archive to any folder and add that folder to the search path of the project in which you want to use VirtualMM:

How do I add (connect) VirtualMM to a project?
Simply add VirtualMM as the first unit in your project's .dpr file, for example:
program Project1;
uses
VirtualMM, // - added
Vcl.Forms,
Unit1 in 'Unit1.pas' {Form1};
{$R *.res}
begin
Application.Initialize;
Application.MainFormOnTaskbar := True;
Application.CreateForm(TForm1, Form1);
Application.Run;
end.
It's very important to specify VirtualMM first in the uses clause. If you don't, some other code will be initialized before it, and that code will most likely allocate memory through the built-in memory manager. When VirtualMM's initialization finally runs, it won't be able to install itself as the memory manager, since memory has already been allocated through the built-in one.
If you get an error like this when compiling your project:
[dcc32 Fatal Error] Project1.dpr(4): F2613 Unit 'VirtualMM' not found
it means you haven't added the VirtualMM source code folder to your project's (or IDE's) search path. See the "How do I install VirtualMM?" section above.
The following warning will be displayed in the IDE's "Messages" output when the project is built:
[DCC Warning] VirtualMMOptions.inc(51): W1054 WARNING: VirtualMM is ON, do not use this build on production
How do I configure VirtualMM?
VirtualMM consists of three files:
- VirtualMM.pas - the main source code. You include this unit in the project (see above). This file does not need to be edited.
- VirtualMMDefs.inc - contains conditional symbols required for proper compilation in all supported IDEs (from Delphi 6 to the latest available IDE, which at the time of writing is RAD Studio 13 Florence). This file does not need to be edited.
- VirtualMMOptions.inc - contains user options that allow you to change the behavior of the memory manager. This is the file you need to edit to configure VirtualMM.
An option is enabled if its conditional symbol is defined in VirtualMMOptions.inc, for example:
{$DEFINE USE_SMALL_BLOCKS}
The option is disabled if you comment its line, for example:
// {$DEFINE USE_SMALL_BLOCKS}
Specifically, VirtualMM supports the following options:
- USE_SMALL_BLOCKS (enabled by default): enables support for "small blocks", as discussed above. Enabling this option allows you to conserve memory, which is essential for 32-bit applications. 64-bit applications have a huge address space, so running out of blocks is less of a problem. It's worth noting that "small blocks" are a compromise, a workaround: not all types of checks can be implemented for "small blocks", as discussed above. Usage recommendation: enable this option for 32-bit applications, disable it for 64-bit applications; re-enable it if your 64-bit application starts crashing with out-of-memory errors.
- PROTECT_OVERFLOW (enabled by default): instructs VirtualMM to protect against buffer overflows. All allocated memory will be aligned so that inaccessible memory begins right after the end of the allocated block. Thus, writing beyond the end of the allocated buffer will immediately throw an Access Violation exception. Disable this option only for special cases (see below). Only one of the PROTECT_* options can be enabled at a time.
- PROTECT_UNDERFLOW: instructs VirtualMM to protect against buffer underflows. All allocated memory will be aligned so that inaccessible memory is located immediately before the beginning of the allocated block. Thus, writing before the allocated buffer will immediately throw an Access Violation exception. Enable this option only to detect problems with writing before the block; such problems are quite rare - usually the opposite occurs. Only one of the PROTECT_* options can be enabled at a time.
- PROTECT_RANDOM: instructs VirtualMM to randomly choose the PROTECT_OVERFLOW or PROTECT_UNDERFLOW behavior on each memory allocation. This is rarely needed - only if you have multiple buffer overflow/underflow issues (on both sides of memory blocks). It is probably better to look for problems one at a time: first PROTECT_UNDERFLOW, then PROTECT_OVERFLOW (or vice versa). Only one of the PROTECT_* options can be enabled at a time.
- ALLOCATE_TOP_DOWN (enabled by default): tells VirtualMM to allocate memory from the top of the address space down. Enabling this option slows performance slightly, but allows you to catch Integer/Pointer conversion bugs faster. Typically, you should leave this option enabled. Note: a 32-bit application must be marked as high-address aware (the IMAGE_FILE_LARGE_ADDRESS_AWARE flag must be set) for this option to be useful.
- CATCH_USE_AFTER_FREE: a special option that allows you to find use-after-free bugs in an environment where memory is frequently reused. We will discuss it in more detail below. Enabling this option tells VirtualMM never to free memory (*). As you can imagine, this leads to catastrophic memory usage growth, especially if you frequently "allocate" and "free" memory. For this reason, this option is useless in 32-bit applications: a 32-bit application will crash with an "out of memory" error before it even gets to the problem. Enable this option only in 64-bit applications and only to detect use-after-free issues that cannot be detected otherwise.
- NeverUninstall: tells VirtualMM not to uninstall itself when the application exits. This option is only needed for compatibility with some older IDEs that have a bug: an attempt to free memory after the application has already "terminated".
- VirtualMMAlign (16 by default): specifies the alignment of all allocated memory, in other words, the allocation granularity of VirtualMM itself. This value must be a multiple of 8 bytes for 32-bit applications and 16 bytes for 64-bit applications: i.e. 8 (32-bit only), 16, 24 (32-bit only), 32, 40 (32-bit only), 48, etc. Larger values result in increased memory usage, since some memory is wasted filling the gaps between blocks. Furthermore, large values also significantly worsen buffer overflow detection (the PROTECT_OVERFLOW option) - see the discussion below. We recommend setting this to the minimum value your application can handle. This option also has two special values: 1 to disable alignment completely (formally prohibited in Delphi, as this may crash the application) and 0 to use dynamic alignment obtained from the System.GetMinimumBlockAlignment function.
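For illustration, a VirtualMMOptions.inc configured for a 64-bit application might contain something like the following (only the $DEFINE-style options are shown; the exact contents and layout of the real file may differ):
// VirtualMMOptions.inc - example configuration for a 64-bit application
// {$DEFINE USE_SMALL_BLOCKS}      // disabled: the 64-bit address space is large enough
{$DEFINE PROTECT_OVERFLOW}         // catch writes past the end of a block in hardware
// {$DEFINE PROTECT_UNDERFLOW}     // disabled: only one PROTECT_* option may be enabled
// {$DEFINE PROTECT_RANDOM}        // disabled: only one PROTECT_* option may be enabled
{$DEFINE ALLOCATE_TOP_DOWN}        // helps to catch Integer/Pointer conversion bugs
// {$DEFINE CATCH_USE_AFTER_FREE}  // enable only as a last resort (memory usage explodes)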
What problems does VirtualMM solve?
Let's look at how VirtualMM can help us diagnose the memory bugs we discussed above.
Referencing already freed memory
We had the following examples:
var
P: Pointer;
begin
GetMem(P, {...}); // "allocate" memory for P
P^ := {...}; // do something (work) with P
FreeMem(P); // finished working, "free" the memory
// BUG: accessing already freed memory
P^ := {...}; // this line will ALWAYS throw an Access Violation exception with VirtualMM
end;
and
var
  L: TList;
  I: Integer;
begin
  L := TList.Create;  // "create" the object
  L.Free;             // "free" the object
  I := L.Count;       // this line will ALWAYS raise an Access Violation exception with VirtualMM
end;
Since VirtualMM actually frees memory when it "frees" it, the memory becomes inaccessible after it's "deleted", so any attempt to access such memory will result in an Access Violation exception, whether it's a write or (more interestingly) a read. Note that VirtualMM lets you detect memory bugs immediately, right where they occur, rather than much later, as would happen with a regular debugging memory manager. Furthermore, VirtualMM also catches attempts to read from already deleted memory, whereas typical debugging memory managers usually can't help with this bug.
Buffer overflow
We had this example:
var
Buffer: PInteger;
X, Count: Integer;
begin
GetMem(Buffer, Count * SizeOf(Integer)); // "allocate" memory for Count Integers
for X := 0 to Count do // BUG: should be Count - 1
begin
Buffer^ := 0; // the last step will raise an Access Violation exception with VirtualMM in PROTECT_OVERFLOW mode
Inc(Buffer);
end;
end;
If the USE_SMALL_BLOCKS option is disabled
If the PROTECT_OVERFLOW option is enabled in VirtualMM (or was selected by the PROTECT_RANDOM option), VirtualMM will place one inaccessible memory page immediately after the memory block, so an attempt to read or write beyond the block will throw an Access Violation exception.
Of course, if the PROTECT_UNDERFLOW option is enabled instead (or was selected by the PROTECT_RANDOM option), VirtualMM will place one inaccessible page immediately before the memory block. Since the memory page granularity is 4 KB, the end of the allocated block will most likely not fall exactly on a page boundary. This means the code above will execute successfully, without raising an Access Violation exception; the exception will only occur if you continue "stepping" forward and reach the end of the current page.
There's one subtlety here. Delphi has a convention that any memory manager, whether standard or custom, must return memory aligned to at least 8 bytes. More is possible (for example, 32), but not less than 8. For example, Delphi 7 aligns memory to 8 bytes, while RAD Studio 13 Florence aligns it to 16 bytes. VirtualMM, however, aligns memory to whatever you specify: it could be 8, 16, or 32, but the default is 16 (see the VirtualMMAlign option above).
Why are we saying this? Obviously, if you align the start of a block to a certain boundary, the end of that block ends up at a "random" offset. For example, if you align to 8 bytes and allocate a 1-byte block, it could be allocated at, say, 16K - 8; the memory block would then span addresses from 16K - 8 to 16K - 7 (exactly 1 byte), which is 7 bytes short of the nearest page boundary (16K). This means these seven bytes of "padding" have the same hardware protection attributes as the allocated memory block, i.e., they are readable and writable. In other words, a buffer overflow of up to 7 bytes cannot be detected immediately. However, a buffer overflow of 8 or more bytes will touch the next (inaccessible) memory page, which will trigger an Access Violation exception.
For this reason, it is important to use the smallest possible memory allocation granularity.
However, VirtualMM always places guard values before and after a memory block (when the aforementioned padding exists), so although an attempt to read immediately after a block will always succeed, an attempt to write just after the block will still be detected when the memory block is freed. In this respect, VirtualMM behaves like a regular debugging memory manager: an EAssertionFailed exception will be raised with the message ReleaseLargeBlock: Block Overflow (or "Block Underflow"). In general, any assert exception from VirtualMM indicates a memory bug: it means that someone has violated (overwritten) protective or control structures (headers) of memory blocks, i.e., it is a clear case of writing to an invalid address. Just in case, we'll clarify again that we're only talking about accesses within the memory block alignment padding; access attempts outside the alignment padding are detected immediately (by hardware).
If you can't find a buffer overflow problem, you can try using formally unsupported align values, such as 1 (i.e., no alignment at all), and hope that your application can run in this mode. In this case, there will be no alignment padding, and therefore hardware protection will operate immediately at the memory block boundary.
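A small sketch of what this means in practice, assuming the default VirtualMMAlign = 16, PROTECT_OVERFLOW enabled and USE_SMALL_BLOCKS disabled (the exact offsets are illustrative and depend on the configured alignment; PByte indexing assumes a recent Delphi version):
var
  P: PByte;
begin
  GetMem(P, 20);  // 20 bytes requested; with 16-byte alignment the block is padded to 32 bytes,
                  // and an inaccessible guard page starts right after the padded end
  P[19] := 0;     // OK: last byte of the requested block
  P[20] := 0;     // BUG, but lands in the 12 bytes of alignment padding: no immediate
                  // Access Violation; the damaged guard value is reported later, at FreeMem
  P[32] := 0;     // BUG: first byte of the guard page - immediate Access Violation
                  // (execution would stop here, before ever reaching FreeMem)
  FreeMem(P);     // without the P[32] line, this call reports the P[20] write
                  // as EAssertionFailed with a 'Block Overflow' message
end;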
If the USE_SMALL_BLOCKS option is enabled
If the PROTECT_OVERFLOW option is enabled in VirtualMM (or was selected by the PROTECT_RANDOM option) and the block size is less than the page size (4 KB), VirtualMM will place a guard value immediately after the memory block, so that although an attempt to read beyond the block will always succeed, an attempt to write beyond the block will be detected when the memory block is "freed". Thus, VirtualMM will behave like a regular debugging memory manager: an EAssertionFailed exception will be raised with the message ReleaseSmallBlock: Block Overflow (or "Block Underflow").
If the block size is larger than the page size (4 KB), VirtualMM will behave as described in the "If the USE_SMALL_BLOCKS option is disabled" case above, i.e., it will allocate guard pages and detect a read/write beyond the block immediately (with the above-mentioned adjustment for the memory block alignment padding).
Calling a static method of a deleted object
Since calling a static method does not depend on the object's data, VirtualMM cannot help with the call itself. However, since the called static method most likely does something with the object (otherwise it would be a plain function, not a method), the first attempt to read or write the object's data inside the method will throw an Access Violation exception. For example:
var
  L: TList;
  I: Integer;
  P: Pointer;
begin
  L := TList.Create;  // "create" the object
  L.Free;             // "free" the object
  I := L.IndexOf(P);  // will always raise an Access Violation inside the method with VirtualMM
end;
Therefore, VirtualMM will help detect the bug as close to its origin as possible.
Calling a virtual method of a deleted object
Calling virtual methods is even simpler: the object's memory is no longer accessible after the object is "deleted", so calling a virtual method will always raise an Access Violation when attempting to read the method's address from the object. For example:
var
  S: TStringList;
  I: Integer;
begin
  S := TStringList.Create;  // "create" the object
  // Working with S
  S.Free;                   // "free" the object
  I := S.Count;             // will always raise an Access Violation when calling a virtual method with VirtualMM
end;
Memory reuse
We had this code:
var
  S1, S2: TStringList;
  I: Integer;
begin
  S1 := TStringList.Create;  // "create" the object
  // Working with S1
  S1.Free;                   // "free" the object
  S2 := TStringList.Create;  // "create" the object
  I := S1.Count;             // logically a bug, but will always succeed since S1 = S2
end;
Can VirtualMM do anything in this case? Yes, it can, but only in 64-bit applications.
To do this, you need to enable the
CATCH_USE_AFTER_FREE option. Enabling this option prevents VirtualMM from ever freeing memory when "freeing" it. This means that if we immediately "create" a second object (S2 in the example above), it will never be allocated at the same address as the first object (S1 in the example above), since the memory of the first object stays occupied forever. In other words, S1 is no longer equal to S2, and therefore calling S1.Count will throw an Access Violation exception, since the memory of S1 is inaccessible.
As you can imagine, never freeing memory is a very aggressive strategy that can only work if you have a HUGE amount of free memory. In particular, enabling the CATCH_USE_AFTER_FREE option in 32-bit applications is pointless, since a 32-bit application will crash almost immediately with an out-of-memory error, before reaching the code that contains the bug. But even in 64-bit applications, it makes sense to ensure that the memory bug you're trying to catch occurs as early as possible.
var
  S1, S2: TStringList;
  I: Integer;
begin
  S1 := TStringList.Create;  // "create" the object
  // Working with S1
  S1.Free;                   // "free" the object
  S2 := TStringList.Create;  // "create" the object
  I := S1.Count;             // will always raise an Access Violation when calling a virtual method
                             // with VirtualMM and the CATCH_USE_AFTER_FREE option enabled
end;
Recommendation for this option: keep it disabled. If you can find a bug without enabling this option, do so. Enable this option only in 64-bit applications and only if you can't find the bug otherwise.
When should I use VirtualMM?
Since VirtualMM is a rather special debugging memory manager, it should only be used in extreme cases, when you can't find a memory bug otherwise. Most often these will be situations where you mistakenly read from "freed" memory; less often (but still quite frequently) these will be situations where you write to "freed" memory. The main reason for choosing VirtualMM in these cases is that ordinary debugging memory managers detect the problem too late (well after it occurs), while VirtualMM can do so immediately.
When should I NOT use VirtualMM?
Since VirtualMM is a rather special debugging memory manager, it should never be used in a release (production) build, for the following reasons:
- Slow operation due to the need to make a kernel call for each memory "allocation" and "freeing";
- High memory consumption:
- Rounding up to the page size (4 KB);
- Allocation of guard pages (buffer overflow/underflow protection);
- Memory usage only grows and never shrinks when the CATCH_USE_AFTER_FREE option is enabled.
This is why every build with VirtualMM produces the compiler warning shown above:
[DCC Warning] VirtualMMOptions.inc(51): W1054 WARNING: VirtualMM is ON, do not use this build on production
This is done intentionally, to prevent you from accidentally shipping a production build with VirtualMM.
Using VirtualMM with EurekaLog
Although VirtualMM ships with EurekaLog, it does not depend on EurekaLog and does not share source code with it: it is a standalone tool that can also be downloaded separately.
EurekaLog includes a memory manager filter (add-on) that performs several memory checks. These checks largely overlap with VirtualMM, so while the two can be used together, there is little point in doing so. After all, the point of VirtualMM is to place hardware protection right at memory block boundaries, and by enabling memory checks in EurekaLog you push those boundaries outward to accommodate EurekaLog's debug data, which, as you might guess, impairs VirtualMM's ability to report memory bugs immediately (as soon as they occur). Since this ability is the primary reason to use VirtualMM, it shouldn't be interfered with. That is: if you use VirtualMM in a project with EurekaLog, all memory checks in EurekaLog must be disabled (the "Enable extended memory manager" option must be disabled, and the "When memory is released" option must be set to "Do nothing").
On the other hand, note that VirtualMM is not a replacement for memory checks in EurekaLog:
- First, EurekaLog has a memory leak detection feature, which VirtualMM lacks;
- Second, EurekaLog can report memory issues in a more accessible way (for example, two call stacks will be shown for memory double free errors);
- Third, EurekaLog is designed for reporting from user machines (release/production), while VirtualMM can only be used for development.
Note that VirtualMM never reports memory bugs with a specific, descriptive exception class: it will always be either EAccessViolation or EAssertionFailed - unlike EurekaLog, which always tries to report precise information, for example, EUseAfterFree, EBufferOverflowError or EDoubleFreeError. Indeed, the whole point of using VirtualMM is precisely the hardware protection (i.e. the EAccessViolation exception). At the same time, in some cases EAccessViolation/EAssertionFailed exceptions will be raised within the code of VirtualMM itself (i.e. inside the memory manager), for example when an invalid or corrupted memory block is passed to it. The problem here is that the Delphi debugger is not always able to correctly build the call stack when you are inside a memory manager function. For example, if VirtualMM detects a buffer overflow when freeing a block of memory, it will throw an EAssertionFailed exception with 'Block Overflow', but the IDE may show an incomplete or truncated call stack, like this:
KERNELBASE.RaiseException
@Assert
VirtualMM.ReleaseLargeBlock or VirtualMM.ReleaseSmallBlock
VirtualMM.VirtualFreeMem
@FreeMem
- and here the call stack either ends or contains functions much further up the stack, while the one that directly calls FreeMem is missing.
Yes, if you're proficient in debugging, you can build the call stack manually. But not everyone can do this, and it's quite a complex operation. Or you can use EurekaLog:
EAccessViolation/EAssertionFailed are regular exceptions that will be caught by EurekaLog, and if you don't suppress them with code like:
try
  FreeMem(P);
except
  // Do nothing
end;
then they will be processed by EurekaLog, which will be able to show an exception report with the full call stack (unless you have changed the stack tracing method to a frame-based method).
So, the recommended workflow is:
- DEBUG/Development stage:
- You turn off all memory checking in EurekaLog (or disable EurekaLog altogether, although this is not recommended for the reason stated above);
- You add VirtualMM to the project and configure it;
- You search for and fix memory bugs in the application (e.g., stress/load testing).
- RELEASE/Production stage:
- You remove VirtualMM from the project;
- You enable and configure memory checks in EurekaLog;
- You test the application (test exceptions and memory bugs);
- You deploy the application;
- You collect bug reports and fix any bugs found. If you find a memory bug that you can't diagnose, you go back to using VirtualMM (during debugging).
What should I do if my application crashes with "Out Of Memory" when using VirtualMM?
As we've said many times before, using VirtualMM leads to increased memory usage. So what can you do if your application throws an EOutOfMemory exception?
Follow these steps from top to bottom until the error disappears:
- Convert the application to 64-bit. This is the most reliable method;
- Rearrange the code so that the memory bug occurs as early as possible. Remove all non-essential code;
- Disable CATCH_USE_AFTER_FREE;
- Enable USE_SMALL_BLOCKS;
- Revise your application logic to allocate fewer memory blocks. For example, use a single dynamic array instead of allocating many small blocks one by one (see the sketch below).
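A hypothetical illustration of the last point (TItem and Count are placeholders invented for this example):
type
  PItem = ^TItem;
  TItem = record
    Value: Integer;
  end;
var
  Singles: array of PItem; // an array of many tiny, individually allocated blocks
  Pool: array of TItem;    // one larger, contiguous block
  I, Count: Integer;
begin
  Count := 100000;

  // Many small allocations: under VirtualMM each of them costs at least one data
  // page plus a guard page, which adds up very quickly.
  SetLength(Singles, Count);
  for I := 0 to Count - 1 do
    New(Singles[I]);
  for I := 0 to Count - 1 do
    Dispose(Singles[I]);

  // One larger allocation: a single block that VirtualMM handles far more cheaply.
  SetLength(Pool, Count);
end;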
Notes
(*) When we talk about memory, we greatly simplify things. By "memory" we mean the application's address space. For simplicity, we don't distinguish between reserved (RESERVE) and committed (COMMIT) memory. For example, the words "allocated memory" can mean either "reserved memory" or "committed memory", depending on the context.
(**) 64 KB is the default value on many systems. However, this value may be different on your specific system: it is obtained from the dwAllocationGranularity field of the SYSTEM_INFO structure. Throughout the text we talk about 64 KB, but you should understand that it is not a constant.
(***) Since VirtualMM is itself a debugging memory manager, from here on, by "debugging memory managers" we mean only "typical"/"regular"/"all other" debugging memory managers, not including VirtualMM. To avoid having to write "typical debugging memory managers other than VirtualMM" every time, we simply write "debugging memory managers" when referring to all managers other than VirtualMM.
(****) 4 KB is the default value on many systems. However, this value may be different on your specific system: it is obtained from the dwPageSize field of the SYSTEM_INFO structure. Throughout the text we talk about 4 KB, but you should understand that it is not a constant.
