Title: | Bayesian Optimization and Model-Based Optimization of Expensive Black-Box Functions |
Version: | 1.1.5.1 |
Description: | Flexible and comprehensive R toolbox for model-based optimization ('MBO'), also known as Bayesian optimization. It implements the Efficient Global Optimization algorithm and is designed for both single- and multi-objective optimization with mixed continuous, categorical and conditional parameters. The machine learning toolbox 'mlr' provides dozens of regression learners to model the performance of the target algorithm with respect to the parameter settings. 'mlrMBO' provides many different infill criteria to guide the search process. Additional features include multi-point batch proposal, parallel execution as well as visualization and sophisticated logging mechanisms, which is especially useful for teaching and understanding of algorithm behavior. 'mlrMBO' is implemented in a modular fashion, such that single components can be easily replaced or adapted by the user for specific use cases. |
License: | BSD_2_clause + file LICENSE |
URL: | https://github.com/mlr-org/mlrMBO |
BugReports: | https://github.com/mlr-org/mlrMBO/issues |
Depends: | mlr (≥ 2.10), ParamHelpers (≥ 1.10), smoof (≥ 1.5.1) |
Imports: | backports (≥ 1.1.0), BBmisc (≥ 1.11), checkmate (≥ 1.8.2), data.table, lhs, parallelMap (≥ 1.3) |
Suggests: | cmaesr (≥ 1.0.3), ggplot2, DiceKriging, earth, emoa, GGally, gridExtra, kernlab, kknn, knitr, mco, nnet, party, randomForest, reshape2, rmarkdown, rgenoud, rpart, testthat, covr |
Encoding: | UTF-8 |
ByteCompile: | yes |
RoxygenNote: | 7.1.1 |
VignetteBuilder: | knitr |
NeedsCompilation: | yes |
Packaged: | 2022-07-04 07:35:16 UTC; ripley |
Author: | Bernd Bischl |
Maintainer: | Jakob Richter <code@jakob-r.de> |
Repository: | CRAN |
Date/Publication: | 2022-07-04 08:50:50 UTC |
Multi-Objective result object.
Description
pareto.front [matrix]: Pareto front of all evaluated points.
pareto.set [list of lists]: Pareto set of all evaluated points.
pareto.inds [numeric]: Indices of the Pareto-optimal points in the opt.path.
opt.path [OptPath]: Optimization path. Includes all evaluated points and additional information as documented in mbo_OptPath. You can convert it via as.data.frame.
final.state [character]: The final termination state. Gives information why the optimization ended.
models [list of WrappedModel]: List of saved regression models.
control [MBOControl]: Control object used in the optimization.
Single-Objective result object.
Description
x [list]: Named list of proposed optimal parameters.
y [numeric(1)]: Value of the objective function at x, either from evaluations during optimization or from requested final evaluations, if those were greater than 0.
best.ind [numeric(1)]: Index of x in the opt.path.
opt.path [OptPath]: Optimization path. Includes all evaluated points and additional information as documented in mbo_OptPath. You can convert it via as.data.frame.
resample.results [list of ResampleResult]: List of the desired resample.results if resample.at is set in makeMBOControl.
final.state [character]: The final termination state. Gives information why the optimization ended. Possible values are:
- term.iter
Maximal number of iterations reached.
- term.time
Maximal running time exceeded.
- term.exectime
Maximal execution time of function evaluations reached.
- term.yval
Target function value reached.
- term.fevals
Maximal number of function evaluations reached.
- term.custom
Terminated due to a custom, user-defined termination condition.
models [list of WrappedModel]: List of saved regression models if store.model.at is set in makeMBOControl. By default it contains the model generated after the last iteration.
control [MBOControl]: Control object used in the optimization.
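For orientation, a minimal sketch of how these fields are typically accessed; res is assumed to be a result object returned by mbo().
print(res$x)           # named list of proposed optimal parameters
print(res$y)           # objective value at x
print(res$final.state) # why the optimization ended, e.g. "term.iter"
head(as.data.frame(res$opt.path))  # all evaluated points as a data.frame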
OptProblem object.
Description
The OptProblem contains all the constant values that define an optimization problem within our MBO steps. It is an environment and is always pointed to by the OptState.
OptResult object.
Description
The OptResult stores all entities which are not needed while optimizing but are needed to build the final result.
It can contain fitted surrogate models saved at certain time points as well as resample objects.
When the optimization has ended it will contain the MBOResult.
OptState object.
Description
The OptState is the central component of the MBO iterations.
This environment contains all information needed during optimization in MBO.
It also links to the OptProblem and to the OptResult.
Error handling for mlrMBO
Description
There are multiple types of errors that can occur during one optimization process. mlrMBO tries to handle most of them as smartly as possible. The following failures can occur:
1. The target function returns NA(s) or NaN(s) (plural for the multi-objective case).
2. The target function stops with an error.
3. The target function does not return at all (infinite or very long execution time).
4. The target function crashes the whole R process.
5. The surrogate machine learning model might crash. Kriging quite often runs into numerical problems.
6. The proposal mechanism, in multi-point or single-point mode, produces a point which is either close to another candidate point in the same iteration or to an already visited point from a previous iteration.
7. The MBO process itself exits, stops, or crashes, for example because it hit a walltime.
Mechanism I - Objective value imputation
Issues 1-4 all have in common that the optimizer does not obtain a useful
objective value. 3-4 are problematic, because we completely lose control of the R process.
We are currently only able to handle them, if you are parallelizing your optimization
via parallelMap
and use the BatchJobs mode.
In this case, you can specify a walltime (handles 3) and the function evaluation is performed
in a separate R process (handles 4). A later path might be to allow function evaluation in
a separate process in general, with a capping time. If you really need this now, you can always
do this yourself.
Now back to the problem of invalid objective values. By default, the mbo function stops with an error
(if it still has control of the process). But in many cases you still want the algorithm to continue.
Hence, mbo allows imputation of bad values via the control option impute.y.fun.
Logging: All error messages are logged into the optimization path opt.path
if problems occur.
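A minimal sketch of such an imputation function; treating failures pessimistically with the worst observed y plus a margin is an assumption of this example, and it assumes minimization.
library(mlrMBO)
ctrl = makeMBOControl(
  impute.y.fun = function(x, y, opt.path, ...) {
    # impute a pessimistic value whenever the target function returned
    # NA/NaN or stopped with an error
    max(getOptPathY(opt.path), na.rm = TRUE) + 1
  }
)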
Mechanism II - mlr's on.learner.error
If your surrogate learner crashes you can set on.surrogate.error in makeMBOControl to "quiet" or "warn".
This sets mlr's on.learner.error for the surrogate.
It prevents MBO from crashing completely (issue 5) if the surrogate learner produces an error.
As a last resort a FailureModel is returned instead of the surrogate.
Subsequently a random point (or multiple ones) is proposed for the current iteration,
in the hope that the model can be fitted again in the next iteration.
Logging: The entry "model.error" is set in the opt.path.
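A one-line sketch of this mechanism:
# warn instead of stopping if the surrogate learner errors; a FailureModel
# is returned and random points are proposed for the current iteration
ctrl = makeMBOControl(on.surrogate.error = "warn")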
Mechanism III - Filtering of proposed points which are too close
Issue 6 is solved by filtering points that are too close to other proposed points or to points already
proposed in preceding iterations. Filtering in this context means replacing the proposed points by
randomly generated new points. The heuristic is (de)activated via the logical
filter.proposed.points parameter of the setMBOControlInfill function;
closeness of two points is determined via the filter.proposed.points.tol parameter.
Logging: The logical entry “filtered.point” is set in the opt.path indicating whether the corresponding point was filtered.
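A short sketch of activating the filter; the tolerance value is an arbitrary example.
ctrl = setMBOControlInfill(ctrl,
  filter.proposed.points = TRUE,     # replace near-duplicate proposals
  filter.proposed.points.tol = 1e-4  # distance below which points count as too close
)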
Mechanism IV - Continue optimization process
The mechanism is a save-state-then-continue mechanism that allows you to continue
your optimization after your system or the optimization process crashed for
some reason (issue 7). The mbo function has the option to save the
current state to disk after certain iterations of the main loop via the control
option save.on.disk.at of makeMBOControl.
Note that this saving mechanism is disabled by default.
Here you can specify after which iterations you want the current state to be
saved (option save.on.disk.at). Notice that 0 denotes saving the initial
design and iters + 1 denotes saving the final results.
With mboContinue you can continue the optimization from the last
saved state. This function only requires the path of the saved state.
You will get a warning if you turn on saving in general but not for the final result,
as that combination rarely makes sense. save.file.path defines the path of the RData file where
the state is stored. It is overwritten (i.e., extended) in each saving iteration.
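A short sketch of the save-and-continue workflow; the file path and iteration count are arbitrary examples.
ctrl = makeMBOControl(
  save.on.disk.at = 0:6,  # 0 = initial design, iters + 1 = final result
  save.file.path = file.path(getwd(), "mlrMBO_run.RData")
)
ctrl = setMBOControlTermination(ctrl, iters = 5L)
# res = mbo(obj.fun, control = ctrl)
# after a crash, resume from the save file:
# res = mboContinue(file.path(getwd(), "mlrMBO_run.RData"))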
Perform an mbo run on a test function and visualize what happens.
Description
Usually used for 1D or 2D examples,
useful for figuring out how stuff works and for teaching purposes.
Currently only parameter spaces with numerical parameters are supported.
For visualization, run plotExampleRun
on the resulting object.
What is displayed is documented here: plotExampleRun
.
Rendering the plots without displaying them is possible via the function
renderExampleRunPlot
.
Please note the following things:
- The true objective function (and later everything which is predicted from our surrogate model) is evaluated on a regularly spaced grid. These evaluations are stored in the result object. You can control the resolution of this grid via points.per.dim. Parallelization of these evaluations is possible with the R package parallelMap at the level mlrMBO.feval.
- In every iteration the fitted, approximating surrogate model is stored in the result object (via store.model.at in control) so we can later visualize it quickly.
- The global optimum of the function (if defined) is extracted from the passed smoof function.
- If the passed objective function fun does not provide the true, unnoisy objective function, some features will not be displayed (for example the gap between the best point so far and the global optimum).
Usage
exampleRun(
fun,
design = NULL,
learner = NULL,
control,
points.per.dim = 50,
noisy.evals = 10,
show.info = getOption("mlrMBO.show.info", TRUE)
)
Arguments
fun, design, learner, control, points.per.dim, noisy.evals, show.info
Value
[MBOExampleRun]
Perform an MBO run on a multi-objective test function and visualize what happens.
Description
Only available for 2D -> 2D examples,
useful for figuring out how stuff works and for teaching purposes.
Currently only parameter spaces with numerical parameters are supported.
For visualization, run plotExampleRun
on the resulting object.
What is displayed is documented here: plotExampleRun
.
Usage
exampleRunMultiObj(
fun,
design = NULL,
learner,
control,
points.per.dim = 50,
show.info = getOption("mlrMBO.show.info", TRUE),
nsga2.args = list(),
...
)
Arguments
fun, design, learner, control, points.per.dim, show.info, nsga2.args, ... [any]
Value
[MBOExampleRunMultiObj]
Note
If the passed objective function has no associated reference point, max(y_i) + 1 of the nsga2 front is used.
Finalizes the SMBO Optimization
Description
Returns the common mlrMBO result object.
Usage
finalizeSMBO(opt.state)
Arguments
opt.state [OptState]
Value
[MBOSingleObjResult | MBOMultiObjResult]
Helper function which returns the (estimated) global optimum.
Description
Helper function which returns the (estimated) global optimum.
Usage
getGlobalOpt(run)
Arguments
run [MBOExampleRun]
Value
[numeric(1)]. (Estimated) global optimum.
Get properties of MBO infill criterion.
Description
Returns properties of an infill criterion, e.g., name or id.
Usage
getMBOInfillCritParams(x)
getMBOInfillCritParam(x, par.name)
getMBOInfillCritName(x)
getMBOInfillCritId(x)
hasRequiresInfillCritStandardError(x)
getMBOInfillCritComponents(x)
Arguments
x [MBOInfillCrit]
par.name [character(1)]
Get names of supported infill-criteria optimizers.
Description
Returns the names of supported infill-criteria optimizers.
Usage
getSupportedInfillOptFunctions()
Value
[character]
Get names of supported multi-point infill-criteria optimizers.
Description
Returns all names of supported multi-point infill-criteria optimizers.
Usage
getSupportedMultipointInfillOptFunctions()
Value
[character]
Infill criteria.
Description
mlrMBO contains the most popular infill criteria, e.g., expected improvement, (lower) confidence bound, etc. Moreover, custom infill criteria may be generated with the makeMBOInfillCrit function.
Usage
makeMBOInfillCritMeanResponse()
makeMBOInfillCritStandardError()
makeMBOInfillCritEI(se.threshold = 1e-06)
makeMBOInfillCritCB(cb.lambda = NULL)
makeMBOInfillCritAEI(aei.use.nugget = FALSE, se.threshold = 1e-06)
makeMBOInfillCritEQI(eqi.beta = 0.75, se.threshold = 1e-06)
makeMBOInfillCritDIB(cb.lambda = 1, sms.eps = NULL)
makeMBOInfillCritAdaCB(cb.lambda.start = NULL, cb.lambda.end = NULL)
Arguments
se.threshold, cb.lambda, aei.use.nugget, eqi.beta, sms.eps, cb.lambda.start, cb.lambda.end
Details
In the multi-objective case we recommend to set cb.lambda to q(0.5 * pi_CB^(1/n)), where q is the quantile function of the standard normal distribution, pi_CB is the probability of improvement value and n is the number of objectives of the considered problem.
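A hedged illustration of this recommendation; the probability-of-improvement value pi.cb = 0.5 and n = 2 objectives are assumptions of the example.
n.obj = 2
pi.cb = 0.5
cb.lambda = qnorm(0.5 * pi.cb^(1 / n.obj))  # q is the standard normal quantile function qnorm
# ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritCB(cb.lambda = cb.lambda))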
Initialize an MBO infill criterion.
Description
Some infill criteria have parameters that are dependent on values in the parameter set, design,
used learner or other control settings.
To actually set these default values, this function is called, which returns a fully initialized [MBOInfillCrit].
This function is mainly for internal use. If a custom infill criterion is created, it may be
required to create a separate method initCrit.InfillCritID, where ID is the
id of the custom MBOInfillCrit.
Usage
initCrit(crit, fun, design, learner, control)
Arguments
crit [MBOInfillCrit]
fun
design: Sampling plan.
learner
control [MBOControl]
Value
[MBOInfillCrit], fully initialized.
Initialize a manual sequential MBO run.
Description
When you want to run a human-in-the-loop MBO run you need to initialize it first.
Usage
initSMBO(
par.set,
design,
learner = NULL,
control,
minimize = rep(TRUE, control$n.objectives),
noisy = FALSE,
show.info = getOption("mlrMBO.show.info", TRUE)
)
Arguments
par.set, design, learner, control, minimize, noisy, show.info
Value
[OptState]
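A compact sketch of the human-in-the-loop workflow (propose, evaluate by hand, update, finalize); the quadratic toy objective is an assumption of the example.
library(mlrMBO)
ps = makeParamSet(makeNumericParam("x", lower = -3, upper = 3))
des = generateDesign(n = 5L, par.set = ps)
des$y = apply(des, 1, function(x) x^2)  # evaluate the initial design ourselves
ctrl = makeMBOControl()
ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritEI())
opt.state = initSMBO(par.set = ps, design = des, control = ctrl, minimize = TRUE)
prop = proposePoints(opt.state)  # candidate point(s) to evaluate next
y = prop$prop.points$x^2         # evaluate the target function by hand
updateSMBO(opt.state, x = prop$prop.points, y = y)
res = finalizeSMBO(opt.state)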
Set MBO options.
Description
Creates a control object for MBO optimization.
Usage
makeMBOControl(
n.objectives = 1L,
propose.points = 1L,
final.method = "best.true.y",
final.evals = 0L,
y.name = "y",
impute.y.fun = NULL,
trafo.y.fun = NULL,
suppress.eval.errors = TRUE,
save.on.disk.at = integer(0L),
save.on.disk.at.time = Inf,
save.file.path = file.path(getwd(), "mlrMBO_run.RData"),
store.model.at = NULL,
resample.at = integer(0),
resample.desc = makeResampleDesc("CV", iters = 10),
resample.measures = list(mse),
output.num.format = "%.3g",
on.surrogate.error = "stop"
)
Arguments
n.objectives, propose.points, final.method, final.evals, y.name, impute.y.fun, trafo.y.fun, suppress.eval.errors, save.on.disk.at, save.on.disk.at.time, save.file.path, store.model.at, resample.at, resample.desc, resample.measures, output.num.format, on.surrogate.error
Value
[MBOControl].
See Also
Other MBOControl: setMBOControlInfill(), setMBOControlMultiObj(), setMBOControlMultiPoint(), setMBOControlTermination()
Create an infill criterion.
Description
The infill criterion guides the model based search process. The most prominent infill criteria, e.g., expected improvement, lower confidence bound and others, are already implemented in mlrMBO. Moreover, the package allows for the creation of custom infill criteria.
Usage
makeMBOInfillCrit(
fun,
name,
id,
opt.direction = "minimize",
components = character(0L),
params = list(),
requires.se = FALSE
)
Arguments
fun, name, id, opt.direction, components, params, requires.se
Important: Internally, the criterion function fun will be minimized, so the proposals will be where this function is low.
Value
[MBOInfillCrit].
Predefined standard infill criteria
- crit.ei
Expected Improvement
- crit.mr
Mean response
- crit.se
Standard error
- crit.cb
Confidence bound with lambda automatically chosen, see
infillcrits
- crit.cb1
Confidence bound with lambda=1
- crit.cb2
Confidence bound with lambda=2
- crit.aei
Augmented expected improvement
- crit.eqi
Expected quantile improvement
- crit.dib1
Direct indicator-based with lambda=1
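For example, a predefined criterion can be plugged into the infill settings directly; ctrl is assumed to be an existing MBOControl object.
# use the predefined confidence bound criterion with lambda = 2
ctrl = setMBOControlInfill(ctrl, crit = crit.cb2)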
Generate default learner.
Description
This is a helper function that generates a default surrogate, based on properties of the objective function and the selected infill criterion.
For numeric-only (including integer) parameter spaces without any dependencies:
- A Kriging model "regr.km" with kernel "matern3_2" is created.
- If the objective function is deterministic we add a small nugget effect (10^-8 * Var(y), where y is the vector of observed outcomes in the current design) to increase numerical stability and to hopefully prevent crashes of DiceKriging.
- If the objective function is noisy the nugget effect will be estimated with nugget.estim = TRUE (but you can override this in ...). Also jitter is set to TRUE to circumvent a problem with DiceKriging where already trained input values produce exactly the trained output. For further information check the $note slot of the created learner.
- Instead of the default "BFGS" optimization method we use rgenoud ("gen"), a hybrid algorithm that combines global search based on genetic algorithms with gradient-based local search. This may improve the model fit and will less frequently produce a constant surrogate model. You can also override this setting in ....
For mixed numeric-categorical parameter spaces, or spaces with conditional parameters:
- A random regression forest "regr.randomForest" with 500 trees is created.
- The standard error of a prediction (if required by the infill criterion) is estimated by jackknife-after-bootstrap, i.e., the se.method = "jackknife" option of the "regr.randomForest" learner.
- If dependencies are present in the parameter space, inactive conditional parameters are represented by missing NA values in the training design data.frame. We handle those with an imputation method added to the random forest: an inactive (missing) numeric value is imputed by 2 times the maximum of the observed values, and an inactive categorical value is imputed by the special class label "__miss__". Both techniques make sense for tree-based methods and are usually hard to beat, see Ding et al. (2010).
Usage
makeMBOLearner(control, fun, config = list(), ...)
Arguments
control [MBOControl], fun, config, ... [any]
Value
[Learner]
References
Ding, Yufeng, and Jeffrey S. Simonoff. An investigation of missing data methods for classification trees applied to binary response data. Journal of Machine Learning Research 11.Jan (2010): 131-170.
Create a transformation function for MBOExampleRun.
Description
Creates a transformation function for MBOExampleRun.
Usage
makeMBOTrafoFunction(name, fun)
Arguments
name, fun
Value
Object of type MBOTrafoFunction.
Optimizes a function with sequential model based optimization.
Description
See mbo_parallel for all parallelization options.
Usage
mbo(
fun,
design = NULL,
learner = NULL,
control = NULL,
show.info = getOption("mlrMBO.show.info", TRUE),
more.args = list()
)
Arguments
fun, design, learner, control, show.info, more.args [list]
Value
[MBOSingleObjResult | MBOMultiObjResult]
Examples
# simple 2d objective function
obj.fun = makeSingleObjectiveFunction(
fn = function(x) x[1]^2 + sin(x[2]),
par.set = makeNumericParamSet(id = "x", lower = -1, upper = 1, len = 2)
)
# create base control object
ctrl = makeMBOControl()
# do three MBO iterations
ctrl = setMBOControlTermination(ctrl, iters = 3L)
# use 500 points in the focussearch (should be sufficient for 2d)
ctrl = setMBOControlInfill(ctrl, opt.focussearch.points = 500)
# create initial design
des = generateDesign(n = 5L, getParamSet(obj.fun), fun = lhs::maximinLHS)
# start mbo
res = mbo(obj.fun, design = des, control = ctrl)
print(res)
## Not run:
plot(res)
## End(Not run)
Continues an mbo run from a save-file.
Description
Useful if your optimization is likely to crash, so you can continue from a save point and will not lose too much information and runtime.
Usage
mboContinue(opt.state)
Arguments
opt.state
Value
See mbo.
Finalizes an mbo run from a save-file.
Description
Useful if your optimization did not terminate but you want a result nonetheless.
Usage
mboFinalize(file)
Arguments
file
Value
See mbo.
OptPath in mlrMBO
Description
In mlrMBO the OptPath contains extra information in addition to the information documented in OptPath.
The extras are:
- train.time
Time to train the model(s) that produced the points. Only the first slot of the vector is used (if we have multiple points), rest are NA.
- propose.time
Time needed to propose the point. If we have individual timings from the proposal mechanism, we have one different value per point here. If all were generated in one go, we only have one timing, we store it in the slot for the first point, rest are NA.
- errors.model
Possible error messages. If the point-producing model(s) crashed, the message is replicated for all n points; if only one error message was passed we store it for the first point, the rest are NA.
- prop.type
Type of point proposal. Possible values are
- initdesign
Points actually not proposed, but in the initial design.
- infill_x
Here x is a placeholder for the selected infill criterion, e.g., infill_ei for expected improvement.
- random_interleave
Uniformly sampled points added additionally to the proposed points.
- random_filtered
If filtering of proposed points located too close to each other is active, these are replaced by random points.
- final_eval
If
final.evals
is set inmakeMBOControl
: Final evaluations of the proposed solution to reduce noise in y.
- parego.weight
Weight vector sampled for multi-point ParEGO
- ...
Depending on the chosen infill criterion there will be additional columns, e.g., se and mean for expected improvement.
Moreover, the user may pass additional “user extras” by appending a named list of scalar values to the return value of the objective function.
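A minimal sketch of inspecting these extras after a run; res is assumed to be a result returned by mbo().
df = as.data.frame(res$opt.path)
# proposal types and timings recorded per evaluated point
head(df[, c("prop.type", "train.time", "propose.time")])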
Parallelization in mlrMBO
Description
In mlrMBO you can parallelize the tuning on two different levels to speed up computation:
- mlrMBO.feval: Multiple evaluations of the target function.
- mlrMBO.propose.points: Optimization of the infill criteria if multiple are used (e.g., ParEGO and parallel LCB).
Internally the evaluation of the target function is realized with the R package parallelMap.
See the mlrMBO tutorial and the Github project pages of parallelMap for instructions on how to set up parallelization.
The different levels of parallelization can be specified in parallelStart*
.
Details for the levels mentioned above are given below:
- Evaluation of the objective function can be parallelized in cases where multiple points are to be evaluated at once. These are: evaluation of the initial design, multiple proposed points per iteration, and evaluation of the target function in exampleRun. (Level: mlrMBO.feval)
- Model fitting / point proposal, in some cases where independent, expensive operations are performed. (Level: mlrMBO.propose.points)
Details regarding the latter:
- single-objective MBO with LCB multi-point
Parallel optimization of LCBs for the lambda-values.
- Multi-objective MBO with ParEGO
Parallel optimization of scalarization functions.
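A minimal sketch of enabling the mlrMBO.feval level on a multicore machine; 4 CPUs is an arbitrary choice.
library(parallelMap)
parallelStartMulticore(cpus = 4L, level = "mlrMBO.feval")
# res = mbo(obj.fun, design = des, control = ctrl)
parallelStop()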
mlrMBO examples
Description
Different scenarios of the usage of mlrMBO and visualizations.
Examples
#####################################################
###
### optimizing a simple sin(x) with mbo / EI
###
#####################################################
## Not run:
library(ggplot2)
library(mlrMBO)
configureMlr(show.learner.output = FALSE)
set.seed(1)
obj.fun = makeSingleObjectiveFunction(
name = "Sine",
fn = function(x) sin(x),
par.set = makeNumericParamSet(lower = 3, upper = 13, len = 1),
global.opt.value = -1
)
ctrl = makeMBOControl(propose.points = 1)
ctrl = setMBOControlTermination(ctrl, iters = 10L)
ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritEI(),
opt = "focussearch", opt.focussearch.points = 500L)
lrn = makeMBOLearner(ctrl, obj.fun)
design = generateDesign(6L, getParamSet(obj.fun), fun = lhs::maximinLHS)
run = exampleRun(obj.fun, design = design, learner = lrn,
control = ctrl, points.per.dim = 100, show.info = TRUE)
plotExampleRun(run, densregion = TRUE, gg.objects = list(theme_bw()))
## End(Not run)
#####################################################
###
### optimizing branin in 2D with mbo / EI #####
###
#####################################################
## Not run:
library(mlrMBO)
library(ggplot2)
set.seed(1)
configureMlr(show.learner.output = FALSE)
obj.fun = makeBraninFunction()
ctrl = makeMBOControl(propose.points = 1L)
ctrl = setMBOControlTermination(ctrl, iters = 10L)
ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritEI(),
opt = "focussearch", opt.focussearch.points = 2000L)
lrn = makeMBOLearner(ctrl, obj.fun)
design = generateDesign(10L, getParamSet(obj.fun), fun = lhs::maximinLHS)
run = exampleRun(obj.fun, design = design, learner = lrn, control = ctrl,
points.per.dim = 50L, show.info = TRUE)
print(run)
plotExampleRun(run, gg.objects = list(theme_bw()))
## End(Not run)
#####################################################
###
### optimizing a simple sin(x) with multipoint proposal
###
#####################################################
## Not run:
library(mlrMBO)
library(ggplot2)
set.seed(1)
configureMlr(show.learner.output = FALSE)
obj.fun = makeSingleObjectiveFunction(
name = "Sine",
fn = function(x) sin(x),
par.set = makeNumericParamSet(lower = 3, upper = 13, len = 1L),
global.opt.value = -1
)
ctrl = makeMBOControl(propose.points = 2L)
ctrl = setMBOControlTermination(ctrl, iters = 10L)
ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritMeanResponse())
ctrl = setMBOControlMultiPoint(
ctrl,
method = "moimbo",
moimbo.objective = "ei.dist",
moimbo.dist = "nearest.neighbor",
moimbo.maxit = 200L
)
lrn = makeMBOLearner(ctrl, obj.fun)
design = generateDesign(4L, getParamSet(obj.fun), fun = lhs::maximinLHS)
run = exampleRun(obj.fun, design = design, learner = lrn,
control = ctrl, points.per.dim = 100, show.info = TRUE)
print(run)
plotExampleRun(run, densregion = TRUE, gg.objects = list(theme_bw()))
## End(Not run)
#####################################################
###
### optimizing branin in 2D with multipoint proposal #####
###
#####################################################
## Not run:
library(mlrMBO)
library(ggplot2)
set.seed(2)
configureMlr(show.learner.output = FALSE)
obj.fun = makeBraninFunction()
ctrl = makeMBOControl(propose.points = 5L)
ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritMeanResponse())
ctrl = setMBOControlTermination(ctrl, iters = 10L)
ctrl = setMBOControlMultiPoint(ctrl,
method = "moimbo",
moimbo.objective = "ei.dist",
moimbo.dist = "nearest.neighbor",
moimbo.maxit = 200L
)
lrn = makeLearner("regr.km", predict.type = "se")
design = generateDesign(10L, getParamSet(obj.fun), fun = lhs::maximinLHS)
run = exampleRun(obj.fun, design = design, learner = lrn, control = ctrl,
points.per.dim = 50L, show.info = TRUE)
print(run)
plotExampleRun(run, gg.objects = list(theme_bw()))
## End(Not run)
#####################################################
###
### optimizing a simple noisy sin(x) with mbo / EI
###
#####################################################
## Not run:
library(mlrMBO)
library(ggplot2)
set.seed(1)
configureMlr(show.learner.output = FALSE)
# function with noise
obj.fun = makeSingleObjectiveFunction(
name = "Some noisy function",
fn = function(x) sin(x) + rnorm(1, 0, 0.1),
par.set = makeNumericParamSet(lower = 3, upper = 13, len = 1L),
noisy = TRUE,
global.opt.value = -1,
fn.mean = function(x) sin(x)
)
ctrl = makeMBOControl(
propose.points = 1L,
final.method = "best.predicted",
final.evals = 10L
)
ctrl = setMBOControlTermination(ctrl, iters = 5L)
ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritEI(),
opt = "focussearch", opt.focussearch.points = 500L)
lrn = makeMBOLearner(ctrl, obj.fun)
design = generateDesign(6L, getParamSet(obj.fun), fun = lhs::maximinLHS)
run = exampleRun(obj.fun, design = design, learner = lrn,
control = ctrl, points.per.dim = 200L, noisy.evals = 50L,
show.info = TRUE)
print(run)
plotExampleRun(run, densregion = TRUE, gg.objects = list(theme_bw()))
## End(Not run)
#####################################################
###
### optimizing 1D fun with 3 categorical levels and
### noisy output with random forest
###
#####################################################
## Not run:
library(mlrMBO)
library(ggplot2)
set.seed(1)
configureMlr(show.learner.output = FALSE)
obj.fun = makeSingleObjectiveFunction(
name = "Mixed decision space function",
fn = function(x) {
if (x$foo == "a") {
return(5 + x$bar^2 + rnorm(1))
} else if (x$foo == "b") {
return(4 + x$bar^2 + rnorm(1, sd = 0.5))
} else {
return(3 + x$bar^2 + rnorm(1, sd = 1))
}
},
par.set = makeParamSet(
makeDiscreteParam("foo", values = letters[1:3]),
makeNumericParam("bar", lower = -5, upper = 5)
),
has.simple.signature = FALSE, # function expects a named list of parameter values
noisy = TRUE
)
ctrl = makeMBOControl()
ctrl = setMBOControlTermination(ctrl, iters = 10L)
# we can basically do an exhaustive search in 3 values
ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritEI(),
opt.restarts = 1L, opt.focussearch.points = 3L, opt.focussearch.maxit = 1L)
design = generateDesign(20L, getParamSet(obj.fun), fun = lhs::maximinLHS)
lrn = makeMBOLearner(ctrl, obj.fun)
run = exampleRun(obj.fun, design = design, learner = lrn, control = ctrl,
points.per.dim = 50L, show.info = TRUE)
print(run)
plotExampleRun(run, densregion = TRUE, gg.objects = list(theme_bw()))
## End(Not run)
#####################################################
###
### optimizing mixed space function
###
#####################################################
## Not run:
library(mlrMBO)
library(ggplot2)
set.seed(1)
configureMlr(show.learner.output = FALSE)
obj.fun = makeSingleObjectiveFunction(
name = "Mixed functions",
fn = function(x) {
if (x$cat == "a")
x$num^2
else
x$num^2 + 3
},
par.set = makeParamSet(
makeDiscreteParam("cat", values = c("a", "b")),
makeNumericParam("num", lower = -5, upper = 5)
),
has.simple.signature = FALSE,
global.opt.value = -1
)
ctrl = makeMBOControl(propose.points = 1L)
ctrl = setMBOControlTermination(ctrl, iters = 10L)
ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritEI(),
opt = "focussearch", opt.focussearch.points = 500L)
lrn = makeMBOLearner(ctrl, obj.fun)
design = generateDesign(4L, getParamSet(obj.fun), fun = lhs::maximinLHS)
run = exampleRun(obj.fun, design = design, learner = lrn,
control = ctrl, points.per.dim = 100L, show.info = TRUE)
print(run)
plotExampleRun(run, densregion = TRUE, gg.objects = list(theme_bw()))
## End(Not run)
#####################################################
###
### optimizing multi-objective function
###
#####################################################
## Not run:
library(mlrMBO)
library(ggplot2)
set.seed(1)
configureMlr(show.learner.output = FALSE)
obj.fun = makeZDT1Function(dimensions = 2L)
ctrl = makeMBOControl(n.objectives = 2L, propose.points = 2L, save.on.disk.at = integer(0L))
ctrl = setMBOControlTermination(ctrl, iters = 5L)
ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritDIB(),
opt.focussearch.points = 10000L)
ctrl = setMBOControlMultiObj(ctrl, parego.s = 100)
learner = makeMBOLearner(ctrl, obj.fun)
design = generateDesign(5L, getParamSet(obj.fun), fun = lhs::maximinLHS)
run = exampleRunMultiObj(obj.fun, design = design, learner = learner, ctrl, points.per.dim = 50L,
show.info = TRUE, nsga2.args = list())
plotExampleRun(run, gg.objects = list(theme_bw()))
## End(Not run)
#####################################################
###
### optimizing multi objective function and plots
###
#####################################################
## Not run:
library(mlrMBO)
library(ggplot2)
set.seed(1)
configureMlr(show.learner.output = FALSE)
obj.fun = makeDTLZ1Function(dimensions = 5L, n.objectives = 2L)
ctrl = makeMBOControl(n.objectives = 2L,
propose.points = 2L)
ctrl = setMBOControlTermination(ctrl, iters = 10L)
ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritEI(), opt.focussearch.points = 1000L,
opt.focussearch.maxit = 3L)
ctrl = setMBOControlMultiObj(ctrl, method = "parego")
lrn = makeMBOLearner(ctrl, obj.fun)
design = generateDesign(8L, getParamSet(obj.fun), fun = lhs::maximinLHS)
res = mbo(obj.fun, design = design, learner = lrn, control = ctrl, show.info = TRUE)
plot(res)
## End(Not run)
Generate ggplot2 Object
Description
Plots the values of the infill criterion for a 1- and 2-dimensional numerical search space for a given OptState
.
Usage
## S3 method for class 'OptState'
plot(x, scale.panels = FALSE, points.per.dim = 100, ...)
Arguments
x [OptState], scale.panels, points.per.dim, ... [any]
Renders plots for exampleRun objects and displays them.
Description
The graphical output depends on the target function at hand.
- For 1D numeric functions the upper plot shows the true function (if known), the model and the (infill) points. The lower plot shows the infill criterion.
- For 2D mixed target functions only one plot is displayed.
- For 2D numeric-only target functions up to four plots are presented: a levelplot of the true function landscape (with [infill] points), a levelplot of the model landscape (with [infill] points), a levelplot of the infill criterion, and a levelplot of the standard error (only if the learner supports standard-error estimation).
- For bi-criteria target functions the upper plot shows the target space and the lower plot displays the x-space.
Usage
plotExampleRun(
object,
iters,
pause = interactive(),
densregion = TRUE,
se.factor = 1,
single.prop.point.plots = FALSE,
xlim = NULL,
ylim = NULL,
point.size = 3,
line.size = 1,
trafo = NULL,
colors = c("red", "blue", "green"),
gg.objects = list(),
...
)
Arguments
object, iters, pause, densregion, se.factor, single.prop.point.plots, xlim, ylim, point.size, line.size, trafo, colors, gg.objects, ... [any]
Value
Nothing.
MBO Result Plotting
Description
Plots any MBO result objects. Plots for X-space, Y-space and any column in the optimization path are available. This function uses plotOptPath from the package ParamHelpers.
Usage
## S3 method for class 'MBOSingleObjResult'
plot(x, iters = NULL, pause = interactive(), ...)
## S3 method for class 'MBOMultiObjResult'
plot(x, iters = NULL, pause = interactive(), ...)
Arguments
x, iters, pause, ... (additional parameters for plotOptPath)
Print mbo control object.
Description
Print mbo control object.
Usage
## S3 method for class 'MBOControl'
print(x, ...)
Arguments
x [MBOControl], ... [any]
Propose candidates for the objective function
Description
Propose points for the objective function that should be evaluated according to the infill criterion and the recent evaluations.
Usage
proposePoints(opt.state)
Arguments
opt.state [OptState]
Renders plots for exampleRun objects, either in 1D or 2D, or exampleRunMultiObj objects.
Description
The graphical output depends on the target function at hand.
- For 1D numeric functions the upper plot shows the true function (if known), the model and the (infill) points. The lower plot shows the infill criterion.
- For 2D mixed target functions only one plot is displayed.
- For 2D numeric-only target functions up to four plots are presented: a levelplot of the true function landscape (with [infill] points), a levelplot of the model landscape (with [infill] points), a levelplot of the infill criterion, and a levelplot of the standard error (only if the learner supports standard-error estimation).
- For bi-criteria target functions the upper plot shows the target space and the lower plot displays the x-space.
Usage
renderExampleRunPlot(
object,
iter,
densregion = TRUE,
se.factor = 1,
single.prop.point.plots = FALSE,
xlim = NULL,
ylim = NULL,
point.size = 3,
line.size = 1,
trafo = NULL,
colors = c("red", "blue", "green"),
...
)
Arguments
object, iter, densregion, se.factor, single.prop.point.plots, xlim, ylim, point.size, line.size, trafo, colors, ... [any]
Value
[list]. List containing separate ggplot objects. The number of plots depends on the type of MBO problem. See the description for details.
Extends mbo control object with infill criteria and infill optimizer options.
Description
Please note that internally all infill criteria are minimized. So for some of them, we internally compute their negated version, e.g., for EI or also for CB when the objective is to be maximized. In the latter case mlrMBO actually computes the negative upper confidence bound and minimizes that.
Usage
setMBOControlInfill(
control,
crit = NULL,
interleave.random.points = 0L,
filter.proposed.points = NULL,
filter.proposed.points.tol = NULL,
opt = "focussearch",
opt.restarts = NULL,
opt.focussearch.maxit = NULL,
opt.focussearch.points = NULL,
opt.cmaes.control = NULL,
opt.ea.maxit = NULL,
opt.ea.mu = NULL,
opt.ea.sbx.eta = NULL,
opt.ea.sbx.p = NULL,
opt.ea.pm.eta = NULL,
opt.ea.pm.p = NULL,
opt.ea.lambda = NULL,
opt.nsga2.popsize = NULL,
opt.nsga2.generations = NULL,
opt.nsga2.cprob = NULL,
opt.nsga2.cdist = NULL,
opt.nsga2.mprob = NULL,
opt.nsga2.mdist = NULL
)
Arguments
control, crit, interleave.random.points, filter.proposed.points, filter.proposed.points.tol, opt, opt.restarts, opt.focussearch.maxit, opt.focussearch.points, opt.cmaes.control, opt.ea.maxit, opt.ea.mu, opt.ea.sbx.eta, opt.ea.sbx.p, opt.ea.pm.eta, opt.ea.pm.p, opt.ea.lambda, opt.nsga2.popsize, opt.nsga2.generations, opt.nsga2.cprob, opt.nsga2.cdist, opt.nsga2.mprob, opt.nsga2.mdist
Value
[MBOControl].
See Also
Other MBOControl: makeMBOControl(), setMBOControlMultiObj(), setMBOControlMultiPoint(), setMBOControlTermination()
Set multi-objective options.
Description
Extends MBO control object with multi-objective specific options.
Usage
setMBOControlMultiObj(
control,
method = NULL,
ref.point.method = NULL,
ref.point.offset = NULL,
ref.point.val = NULL,
parego.s = NULL,
parego.rho = NULL,
parego.use.margin.points = NULL,
parego.sample.more.weights = NULL,
parego.normalize = NULL,
dib.indicator = NULL,
mspot.select.crit = NULL
)
Arguments
control, method, ref.point.method, ref.point.offset, ref.point.val, parego.s, parego.rho, parego.use.margin.points, parego.sample.more.weights, parego.normalize, dib.indicator, mspot.select.crit
Value
[MBOControl].
References
For more information on the implemented multi-objective procedures the following sources might be helpful: Knowles, J.: ParEGO: A hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems. IEEE Transactions on Evolutionary Computation, 10 (2006) 1, pp. 50-66
Wagner, T.; Emmerich, M.; Deutz, A.; Ponweiser, W.: On Expected-Improvement Criteria for Model-Based Multi-Objective Optimization. In: Proc. 11th Int. Conf. Parallel Problem Solving From Nature (PPSN XI) - Part I, Krakow, Poland, Schaefer, R.; Cotta, C.; Kolodziej, J.; Rudolph, G. (eds.), no. 6238 in Lecture Notes in Computer Science, Springer, Berlin, 2010, ISBN 978-3-642-15843-8, pp. 718-727, doi:10.1007/978-3-642-15844-5_72
Wagner, T.: Planning and Multi-Objective Optimization of Manufacturing Processes by Means of Empirical Surrogate Models. No. 71 in Schriftenreihe des ISF, Vulkan Verlag, Essen, 2013, ISBN 978-3-8027-8775-1
Zaefferer, M.; Bartz-Beielstein, T.; Naujoks, B.; Wagner, T.; Emmerich, M.: A Case Study on Multi-Criteria Optimization of an Event Detection Software under Limited Budgets. In: Proc. 7th International Conf. Evolutionary Multi-Criterion Optimization (EMO 2013), March 19-22, Sheffield, UK, R. Purshouse; P. J. Fleming; C. M. Fonseca; S. Greco; J. Shaw, eds., 2013, vol. 7811 of Lecture Notes in Computer Science, ISBN 978-3-642-37139-4, pp. 756-770, doi:10.1007/978-3-642-37140-0_56
Jeong, S.; Obayashi, S.: Efficient global optimization (EGO) for Multi-Objective Problem and Data Mining. In: Proc. IEEE Congress on Evolutionary Computation (CEC 2005), Edinburgh, UK, Corne, D.; et al. (eds.), IEEE, 2005, ISBN 0-7803-9363-5, pp. 2138-2145
See Also
Other MBOControl: makeMBOControl(), setMBOControlInfill(), setMBOControlMultiPoint(), setMBOControlTermination()
Set multipoint proposal options.
Description
Extends an MBO control object with options for multipoint proposal.
Usage
setMBOControlMultiPoint(
control,
method = NULL,
cl.lie = NULL,
moimbo.objective = NULL,
moimbo.dist = NULL,
moimbo.selection = NULL,
moimbo.maxit = NULL,
moimbo.sbx.eta = NULL,
moimbo.sbx.p = NULL,
moimbo.pm.eta = NULL,
moimbo.pm.p = NULL
)
Arguments
control, method, cl.lie, moimbo.objective, moimbo.dist, moimbo.selection, moimbo.maxit, moimbo.sbx.eta, moimbo.sbx.p, moimbo.pm.eta, moimbo.pm.p
Value
[MBOControl].
See Also
Other MBOControl: makeMBOControl(), setMBOControlInfill(), setMBOControlMultiObj(), setMBOControlTermination()
Set termination options.
Description
Extends an MBO control object with termination options.
Usage
setMBOControlTermination(
control,
iters = NULL,
time.budget = NULL,
exec.time.budget = NULL,
target.fun.value = NULL,
max.evals = NULL,
more.termination.conds = list(),
use.for.adaptive.infill = NULL
)
Arguments
control, iters, time.budget, exec.time.budget, target.fun.value, max.evals, more.termination.conds, use.for.adaptive.infill
Value
[MBOControl].
See Also
Other MBOControl: makeMBOControl(), setMBOControlInfill(), setMBOControlMultiObj(), setMBOControlMultiPoint()
Examples
fn = smoof::makeSphereFunction(1L)
ctrl = makeMBOControl()
# custom termination condition (stop if target function value reached)
# We neglect the optimization direction (min/max) in this example.
yTargetValueTerminator = function(y.val) {
force(y.val)
function(opt.state) {
opt.path = opt.state$opt.path
current.best = getOptPathEl(opt.path, getOptPathBestIndex(opt.path))$y
term = (current.best <= y.val)
message = if (!term) NA_character_ else sprintf("Target function value %f reached.", y.val)
return(list(term = term, message = message))
}
}
# assign custom termination condition
ctrl = setMBOControlTermination(ctrl, more.termination.conds = list(yTargetValueTerminator(0.05)))
res = mbo(fn, control = ctrl)
print(res)
Transformation methods.
Description
- trafoLog: Logarithm (base given by the base argument).
- trafoSqrt: Square root.
If negative values occur and the trafo function can handle only positive values,
a shift of the form x - min(x) + 1 is performed prior to the transformation if the
argument handle.violations is set to "warn", which is the default value.
Usage
trafoLog(base = 10, handle.violations = "warn")
trafoSqrt(handle.violations = "warn")
Arguments
base, handle.violations
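A hedged usage sketch; run is assumed to be an exampleRun result, and the list name "y" follows the convention that the transformation applies to the objective values.
# plot with a log10-transformed y-axis; negative values trigger the shift
# described above when handle.violations = "warn"
plotExampleRun(run, trafo = list(y = trafoLog(base = 10)))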
Updates SMBO with the new observations
Description
After a function evaluation you want to update the OptState
to get new proposals.
Usage
updateSMBO(opt.state, x, y)
Arguments
opt.state [OptState], x, y
Value
[OptState]