Running out of memory
Describes remedies for limited memory.
A very common difficulty with MIPs is running out of memory. This problem almost always occurs when the branch & cut tree becomes so large that insufficient memory remains to solve a continuous LP, QP, or QCP subproblem. (In the rare case that the dimensions of a very large model are themselves the main contributor to memory consumption, you can try adjusting the memory emphasis parameter, as described in Lack of memory.) As memory gets tight, you may observe warning messages from CPLEX as it attempts various operations in spite of limited memory. In such a situation, if CPLEX does not find a solution shortly, it terminates the process with an error message.
The information about a tree that CPLEX accumulates in memory can be substantial. In particular, CPLEX saves a basis for every unexplored node. Furthermore, when CPLEX uses the best-bound or best-estimate strategies of node selection, the list of unexplored nodes itself can become very long for large or difficult problems. How large the unexplored node list can grow depends on the actual amount of memory available, the size of the problem, and the algorithm selected.
A less frequent cause of memory consumption is the generation of cutting planes. Gomory fractional cuts and, in rare instances, mixed integer rounding cuts are the ones most likely to be dense and thus to use significant memory at default automatic settings. You can try turning off these cuts, any of the cuts listed as being generated for your model (in the cuts summary at the end of the node log), or simply all cuts, through the parameter settings discussed in the section on cuts in this manual. Doing so carries the risk of making the model harder to solve, merely delaying the eventual exhaustion of available memory during branching. Because cut generation is rarely the main cause of memory consumption, apply these recommendations only as a last resort, after less drastic measures have failed.
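As an illustration, these cut families can be turned off from the Interactive Optimizer; the parameter names below match recent CPLEX versions, so check the parameters reference for your own version:

```
set mip cuts gomory -1
set mip cuts mircut -1
```

The command set mip cuts all -1 turns off every cut class at once, which is the most drastic variant of this remedy.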
Certainly, if you increase the amount of available memory, you extend the problem-solving capability of CPLEX. Unfortunately, when a problem fails because of insufficient memory, it is difficult to project how much further the process needed to go and how much more memory is needed to solve the problem. For these reasons, the following suggestions aim at avoiding memory failure whenever possible and recovering gracefully otherwise.
Reset the tree memory parameter
To avoid a failure due to running out of memory, set the working memory parameter, WorkMem, to a value (in megabytes) significantly lower than the available memory on your computer. This instructs CPLEX to begin compressing the storage of nodes before it consumes all available memory. See the related topic Use node files for storage for other choices of what should happen when WorkMem is exceeded; that topic explains how to tell CPLEX to use disk for working storage.
Because the storage of nodes can require a lot of space, it may also be advisable to set a limit on the size of the entire tree being stored, so that working storage does not fill your whole disk. The call to the MIP optimizer stops after the size of the tree exceeds the value of TreLim, the tree limit parameter. At default settings, the limit is infinity (1e+75), but you can set it to a lower value (in megabytes).
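For example, from the Interactive Optimizer (the values shown are illustrative; choose them to fit your machine):

```
set workmem 512
set mip limits treememory 10000
```

With these settings, CPLEX begins compressing node storage once working memory use reaches about 512 MB, and it stops the MIP optimizer once the tree itself reaches 10000 MB.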
Use node files for storage
As noted in Using node files, CPLEX offers a node-file storage feature to store some parts of the branch & cut tree in files as it progresses through its search. This feature allows CPLEX to explore more nodes within a smaller amount of computer memory. It also includes several options to reduce the use of physical memory. Importantly, it entails only a very small increase in runtime. In terms of performance, node-file storage is a much better option than relying on generic swap space managed by your operating system.
This feature is especially helpful when you are using steepest-edge pricing as the subproblem simplex pricing strategy because pricing information itself consumes a great deal of memory.
Try node files whenever the MIP optimizer terminates with the condition "out of memory" and the node count is greater than zero. The message in such a situation looks like the following sample output.
Clique cuts applied: 30
CPLEX Error 1001: Out of memory.
Consider using CPLEX node files to reduce memory usage.
MIP-Error termination, integer feasible: Objective = 5.6297000000e+04
Current MIP best bound = 5.5731783224e+04 (gap = 565.217, 1.00%)
Several parameters control the use of node files. They are:
- memory available for working storage: WorkMem in Concert Technology or CPX_PARAM_WORKMEM in the Callable Library
- node storage file switch: NodeFileInd in Concert Technology or CPX_PARAM_NODEFILEIND in the Callable Library
- tree memory limit: TreLim in Concert Technology or CPX_PARAM_TRELIM in the Callable Library
- directory for working files: WorkDir in Concert Technology or CPX_PARAM_WORKDIR in the Callable Library
CPLEX uses node-file storage most effectively when the amount of working memory is reasonably large, so that it does not have to create node files too frequently. The default value of the WorkMem parameter is 2048 megabytes (that is, 2 gigabytes). Setting it to higher values, even on a machine with very large memory, can be expected to yield only marginally improved efficiency. However, if your machine has less than 2 gigabytes of RAM, it is advisable to reduce this setting by at least 128 MB to avoid defeating the purpose of node files, a situation that would occur if your application inadvertently triggered the swap space of your operating system.
When the size of the tree exceeds the value of WorkMem, and if the tree-memory limit has not been exceeded, then what happens next is decided by the setting of the node storage file switch (NodeFileInd in Concert Technology or CPX_PARAM_NODEFILEIND in the Callable Library). If that parameter is set to zero, then optimization proceeds with the tree stored in memory until CPLEX reaches the tree memory limit (TreLim in Concert Technology or CPX_PARAM_TRELIM in the Callable Library). If the node file indicator is set to 1 (the default), then a fast compression algorithm is used on the nodes to try to conserve memory, without resorting to writing the node files to disk. If the parameter is set to 2, then node files are written to disk. If the parameter is set to 3, then nodes are both compressed (as in option 1) and written to disk (as in option 2). Table 1 summarizes these different options.

| Value | Meaning | Comments |
|---|---|---|
| 0 | no node files | optimization continues |
| 1 | node file in memory and compressed | optimization continues (default) |
| 2 | node file on disk | files created in temporary directory |
| 3 | node file on disk and compressed | files created in temporary directory |
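For example, the following Interactive Optimizer fragment selects setting 3 and bounds the resulting disk usage (the values and the directory path are illustrative, and the directory must already exist):

```
set workmem 512
set mip strategy file 3
set mip limits treememory 10000
set workdir /tmp/cplexwork
```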
Among the memory conservation tactics employed by CPLEX when the memory emphasis parameter has been set, the maximum setting for the node file indicator is automatically chosen, so that node-file storage will go to disk. You may still wish to adjust the working memory or tree limit parameters to fit the capabilities of your computer.
In cases where node files are written to disk, CPLEX creates a temporary subdirectory under the directory specified by the directory for working files parameter (WorkDir in Concert Technology or CPX_PARAM_WORKDIR in the Callable Library). The directory named by this parameter must exist before CPLEX attempts to create node files. By default, the value of this parameter is “.”, which means the current working directory.
CPLEX creates the temporary directory by means of system calls. If the system environment variable is set (on Windows platforms, the environment variable TMP; on UNIX platforms, the environment variable TMPDIR), then the system ignores the CPLEX node-file directory parameter and creates the temporary node-file directory in the location indicated by its system environment variable. Furthermore, if the directory specified in the CPLEX node-file directory parameter is invalid (for example, if it contains illegal characters, or if the directory does not allow write access), then the system chooses a location according to its own logic.
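For example, on a UNIX system you can redirect the temporary node files before starting CPLEX by exporting TMPDIR (the path is illustrative):

```shell
# Point node-file storage at a scratch file system with ample space.
export TMPDIR=/scratch/cplex-temp
echo "$TMPDIR"
```

Because the environment variable takes precedence, this overrides whatever was set in the working-files directory parameter.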
The temporary directory created for node-file storage will have a name prefixed by cpx. The files within it will also have names prefixed by cpx.
CPLEX automatically removes the files and their temporary directory when it frees the branch & cut tree:
- in the Interactive Optimizer:
  - at problem modification;
  - at normal termination;
- from Concert Technology:
  - when you call env.end;
  - when you modify the extracted model;
- from the Callable Library:
  - when you call a problem modification routine;
  - when you call CPXfreeprob.
If a program terminates abnormally, the files are not removed.
Node files can grow very large. When you choose setting 2 or 3 of the node storage file switch (NodeFileInd, CPX_PARAM_NODEFILEIND), use the tree memory limit parameter (TreLim, CPX_PARAM_TRELIM) to limit the size of the tree so that it does not exceed available disk space.
It is usually better to let CPLEX terminate the run gracefully, with
whatever current feasible solution has been found, than to trigger
an error message or even a program abort.
When CPLEX uses node-file storage, the sequence of nodes processed may differ from the sequence in which nodes are processed without node-file storage. Nodes in node-file storage are not accessible to user-written callback routines.
Change algorithms
The best approach to reduce memory use is to modify the solution process. Here are some ways to do so:
- Switch the node selection strategy to best estimate, or more drastically to depth-first, as explained in Table 1. Depth-first search rarely generates a long, memory-consuming list of unexplored nodes since CPLEX dives deeply into the tree instead of jumping around. A narrowly focused search, like depth-first, also often results in faster processing times for individual nodes. However, overall solution time is generally much worse than with best-bound node selection because each branch is searched exhaustively to its deepest level before it is fathomed in favor of better branches.
- Another memory-conserving strategy is to use strong branching for variable selection; that is, set the MIP variable selection strategy parameter (VarSel, CPX_PARAM_VARSEL) to the value 3. Strong branching requires substantial computational effort at each node to decide the best branching variable. As a result, it generates fewer nodes and thus makes less overall demand on memory. Often, strong branching is faster as well.
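For example, from the Interactive Optimizer (in recent CPLEX versions, nodeselect value 2 selects best estimate, value 0 selects depth-first, and variableselect value 3 selects strong branching; check the parameters reference for your version):

```
set mip strategy nodeselect 2
set mip strategy variableselect 3
```

To switch to depth-first search instead, use set mip strategy nodeselect 0.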