MLJ is a Julia framework for combining and tuning machine learning models. MLJFlow is a package extending MLJ so that MLflow can be used as a backend for model tracking and experiment management. Specifically, MLJFlow makes MLflow usable from MLJ with close to zero preparation, via function extensions that automate the MLflow cycle (create experiment, create run, log metrics, log parameters, log artifacts, etc.).
This project is part of the GSoC 2023 program; the proposal description can be found here. The work is divided across three repositories: MLJ.jl, MLFlowClient.jl and this one.
- MLflow cycle automation (create experiment, create run, log metrics, log parameters, log artifacts, etc.)
- Provides a wrapper `Logger` for MLFlowClient.jl clients and associated metadata; instances of this type are valid "loggers", which can be passed to MLJ functions supporting the `logger` keyword argument.
- Provides MLflow integration with MLJ's `evaluate!`/`evaluate` methods (model performance evaluation)
- Extends MLJ's `MLJ.save` method, to save trained machines as retrievable MLflow client artifacts
- Provides MLflow integration with MLJ's `TunedModel` wrapper (to log hyper-parameter tuning workflows)
- Provides MLflow integration with MLJ's `IteratedModel` wrapper (to log controlled iteration of gradient tree boosters, neural networks, and other iterative models)
- Plays well with composite models (pipelines, stacks, etc.)
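To illustrate the last point: a composite model is logged just like an atomic one. The following sketch is illustrative only (it is not taken from the MLJFlow documentation) and assumes a running MLflow service as well as the `logger`, `X`, `y` and `model` objects constructed in the examples below:

```julia
using MLJ

# build a simple composite model: standardize the features, then classify;
# `Standardizer` is re-exported by MLJ, and `|>` builds a linear pipeline
pipe = Standardizer() |> model

# evaluating the pipeline logs a run, exactly as for `model` on its own
evaluate(pipe, X, y; resampling=CV(nfolds=5), measures=[LogLoss()], logger=logger)
```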
The example below assumes the user is familiar with basic MLflow concepts. We suppose an
MLflow API service is running on a local server, with address "http://127.0.0.1:5000". (In a
shell/console, run mlflow server to launch an mlflow service on a local server.)
Refer to the MLflow documentation for necessary background.
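For convenience, here is one way to launch such a service from the shell; `mlflow server` binds to 127.0.0.1:5000 by default, so the explicit flags below only restate the defaults (adjust them if your setup differs):

```shell
# requires the mlflow Python package (e.g. `pip install mlflow`)
mlflow server --host 127.0.0.1 --port 5000
```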
Important. For the examples that follow, we assume MLJ, MLJDecisionTreeClassifier
and MLFlowClient are in the user's active Julia package environment.
```julia
using MLJ # requires MLJ.jl version 0.19.3 or higher
```

We first define a logger, providing the API address of our running MLflow instance. The experiment name and artifact location are optional:
```julia
logger = MLJFlow.Logger(
    "http://127.0.0.1:5000/api";
    experiment_name="test",
    artifact_location="./mlj-test",
)
```

Next, grab some synthetic data and choose an MLJ model:
```julia
X, y = make_moons(100) # a table and a vector with 100 rows
DecisionTreeClassifier = @load DecisionTreeClassifier pkg=DecisionTree
model = DecisionTreeClassifier(max_depth=4)
```

Now we call `evaluate` as usual, but provide the logger as a keyword argument:
```julia
evaluate(
    model,
    X,
    y,
    resampling=CV(nfolds=5),
    measures=[LogLoss(), Accuracy()],
    logger=logger,
)
```

Navigate to "http://127.0.0.1:5000" in your browser and select the experiment matching the name chosen above ("test"). Select the single run displayed to see the logged results of the performance evaluation.
Continuing with the previous example:
```julia
r = range(model, :max_depth, lower=1, upper=5)
tmodel = TunedModel(
    model;
    tuning=Grid(),
    range=r,
    resampling=CV(nfolds=9),
    measures=[LogLoss(), Accuracy()],
    logger=logger,
)
mach = machine(tmodel, X, y) |> fit!
```

Return to the browser page (refreshing if necessary) and you will find five more performance evaluations logged, one for each value of `max_depth` evaluated in tuning.
Let's train the model on all data and save the trained machine as an MLflow artifact:
```julia
mach = machine(model, X, y) |> fit!
run = MLJ.save(logger, mach)
```

Notice that in this case `MLJ.save` returns a run (an instance of `MLFlowRun` from MLFlowClient.jl).
To retrieve an artifact we need to use the MLFlowClient.jl API, and for that we need to
know the MLflow service that our logger wraps:
```julia
service = MLJFlow.service(logger)
```

And we reconstruct our trained machine thus:
```julia
using MLFlowClient
artifacts = MLFlowClient.listartifacts(service, run)
mach2 = machine(artifacts[1].filepath)
```

We can predict using the deserialized machine:
```julia
predict(mach2, X)
```

Set `logger` as the global logging target by running `default_logger(logger)`. Then, unless explicitly overridden, all loggable workflows will log to `logger`. In particular, to suppress logging, you will need to specify `logger=nothing` in your calls.
So, for example, if we run the following setup:

```julia
using MLJ

# using a new experiment name here:
logger = MLJFlow.Logger(
    "http://127.0.0.1:5000/api";
    experiment_name="test global logging",
    artifact_location="./mlj-test",
)
default_logger(logger)

X, y = make_moons(100) # a table and a vector with 100 rows
DecisionTreeClassifier = @load DecisionTreeClassifier pkg=DecisionTree
model = DecisionTreeClassifier()
```

then the following is automatically logged:
```julia
evaluate(model, X, y)
```

But the following is not logged:

```julia
evaluate(model, X, y; logger=nothing)
```

To save a machine when a default logger is set, one can use the following syntax:
```julia
mach = machine(model, X, y) |> fit!
MLJ.save(mach)
```

Retrieve the saved machine as described earlier.
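Concretely, retrieval might look like the following sketch. Here we assume that `default_logger()` called with no arguments returns the current default logger, and that `MLJ.save(mach)` returns a run, as it did when a logger was passed explicitly:

```julia
using MLFlowClient

run = MLJ.save(mach)                         # an MLFlowRun, as before
service = MLJFlow.service(default_logger())  # unwrap the MLflow service
artifacts = MLFlowClient.listartifacts(service, run)
mach2 = machine(artifacts[1].filepath)       # deserialize the machine
```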