DecisionFocusedLearningBenchmarks.jl


Warning

This package is currently under active development. The API may change in future releases. Please refer to the documentation for the latest updates.

What is Decision-Focused Learning?

Decision-Focused Learning (DFL) is a paradigm that integrates machine learning prediction with combinatorial optimization to make better decisions under uncertainty. Unlike traditional "predict-then-optimize" approaches that optimize prediction accuracy independently of downstream decision quality, DFL directly optimizes end-to-end decision performance.

A typical DFL algorithm involves training a parametrized policy that combines a statistical predictor with an optimization component:

$$x \;\longrightarrow\; \boxed{\,\text{Statistical model } \varphi_w\,} \;\xrightarrow{\theta}\; \boxed{\,\text{CO algorithm } f\,} \;\longrightarrow\; y$$

Where:

  • Statistical model $\varphi_w$: machine learning predictor (e.g., neural network)
  • CO algorithm $f$: combinatorial optimization solver
  • Instance $x$: input data (e.g., features, context)
  • Parameters $\theta$: predicted parameters for the optimization problem solved by $f$
  • Solution $y$: output decision/solution
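The pipeline above can be sketched in a few lines of plain Julia. The names `predict`, `one_hot_argmax`, and `policy` are illustrative stand-ins for this sketch, not this package's API:

```julia
# Statistical model φ_w: here a linear predictor mapping features x to parameters θ.
predict(W, x) = W * x

# CO algorithm f: the argmax problem, returning a one-hot solution y.
function one_hot_argmax(θ)
    y = zeros(length(θ))
    y[argmax(θ)] = 1.0
    return y
end

# End-to-end policy: instance x ↦ decision y.
policy(W, x) = one_hot_argmax(predict(W, x))

W = [1.0 0.0; 0.0 -1.0]
x = [2.0, 3.0]
policy(W, x)  # one-hot vector selecting the largest predicted parameter
```

Training the policy end-to-end means adjusting the model weights `W` so that the resulting decisions `y` score well, rather than so that the intermediate predictions `θ` are accurate.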

Package Overview

DecisionFocusedLearningBenchmarks.jl provides a comprehensive collection of benchmark problems for evaluating decision-focused learning algorithms. The package offers:

  • Standardized benchmark problems spanning diverse application domains
  • Common interfaces for creating datasets, statistical models, and optimization algorithms
  • Ready-to-use DFL policies compatible with InferOpt.jl and the whole JuliaDecisionFocusedLearning ecosystem
  • Evaluation tools for comparing algorithm performance

Benchmark Categories

The package organizes benchmarks into three main categories based on their problem structure:

Static Benchmarks (AbstractBenchmark)

Single-stage optimization problems with no randomness involved.

Stochastic Benchmarks (AbstractStochasticBenchmark)

Single-stage optimization problems under uncertainty.

Dynamic Benchmarks (AbstractDynamicBenchmark)

Multi-stage sequential decision-making problems.
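In Julia, such a categorization naturally maps onto an abstract type hierarchy that generic code can dispatch on. A hedged sketch: the abstract type names come from the categories above, but the subtyping relations and the `is_multistage` helper are assumptions for illustration, not the package's actual hierarchy:

```julia
# Hypothetical hierarchy mirroring the three benchmark categories.
abstract type AbstractBenchmark end
abstract type AbstractStochasticBenchmark <: AbstractBenchmark end
abstract type AbstractDynamicBenchmark <: AbstractBenchmark end

# Generic code can then specialize behavior by category, e.g. only
# dynamic benchmarks involve multi-stage decision-making.
is_multistage(::AbstractBenchmark) = false
is_multistage(::AbstractDynamicBenchmark) = true
```

This pattern lets evaluation utilities share one code path across benchmarks while overriding behavior only where a category genuinely differs.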

Getting Started

In a few lines of code, you can create a benchmark instance, generate a dataset, initialize the learning components, and evaluate performance; the same syntax works across all benchmarks:

using DecisionFocusedLearningBenchmarks

# Create a benchmark instance for the argmax problem
benchmark = ArgmaxBenchmark()

# Generate training data
dataset = generate_dataset(benchmark, 100)

# Initialize policy components
model = generate_statistical_model(benchmark)
maximizer = generate_maximizer(benchmark)

# Train the model with the DFL algorithm of your choice
# ... your training code here ...

# Evaluate performance
gap = compute_gap(benchmark, dataset, model, maximizer)

The only component you need to customize is the training algorithm itself.
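Many DFL training algorithms, for example those in InferOpt.jl, replace the piecewise-constant maximizer with a smoothed, perturbation-based surrogate so that useful gradient signal can flow through the optimization layer. A minimal sketch of that idea in plain Julia, where the function names, `ε`, and `nsamples` are all illustrative choices rather than any package's API:

```julia
using Random, Statistics

one_hot_argmax(θ) = (y = zeros(length(θ)); y[argmax(θ)] = 1.0; y)

# Monte-Carlo estimate of the perturbed maximizer E[f(θ + εZ)], Z ~ N(0, I).
# Averaging over Gaussian perturbations smooths the piecewise-constant argmax
# into a differentiable-in-expectation map from θ to a probability vector.
function perturbed_argmax(θ; ε=1.0, nsamples=100, rng=Random.default_rng())
    return mean(one_hot_argmax(θ .+ ε .* randn(rng, length(θ))) for _ in 1:nsamples)
end
```

A training loop would then compare `perturbed_argmax(model(x))` against target solutions under a suitable loss and update the model weights by gradient descent.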

Related Packages

This package is part of the JuliaDecisionFocusedLearning organization and is built to be compatible with the other packages in the ecosystem.
