Bandit_simulations

Simulations and analysis of bandit algorithms for online learning

This repo is part of my effort to learn more about optimisation for online learning algorithms, which are heavily centred on bandit theory. Based on what I understand, there are different types of bandit problems:

  • Multi-armed bandits: the arms are indistinguishable except for their underlying reward distributions. The objective is to identify the arm with the highest expected reward via online learning, which is a classic explore-versus-exploit problem (see the epsilon-greedy sketch after this list).
  • Contextual bandits: bandits with features (aka context) that interact differently with different actions; different contexts call for different actions to earn a reward. This can be perceived as a classification problem: given the input features, aka context, what is the right classification of "actions" that will return high accuracy/reward?
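To make the explore-versus-exploit trade-off concrete, here is a minimal epsilon-greedy sketch. It is illustrative only, not code from this repo, and the arm success rates are made up:

```python
import random

def epsilon_greedy(true_probs, epsilon=0.1, n_rounds=10_000):
    """Minimal epsilon-greedy bandit: with probability epsilon pull a
    random arm (explore), otherwise pull the best arm so far (exploit)."""
    n_arms = len(true_probs)
    counts = [0] * n_arms    # pulls per arm
    values = [0.0] * n_arms  # running mean reward per arm
    for _ in range(n_rounds):
        if random.random() < epsilon:
            arm = random.randrange(n_arms)                     # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        reward = 1.0 if random.random() < true_probs[arm] else 0.0  # Bernoulli arm
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]    # incremental mean
    return values, counts

# Three simulated arms with hidden success rates; the estimates should
# concentrate pulls on the 0.8 arm.
values, counts = epsilon_greedy([0.2, 0.5, 0.8])
print(values, counts)
```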

This repo is segmented into Python and R portions.

  • Python:
    • Phase 1 (MAB analysis): Implementations of several Multi-Armed Bandit algorithms for experimentation.
    • Phase 2 (CB analysis): Implementation of contextual bandit algorithms, starting with LinUCB Disjoint and LinUCB Hybrid, based on the paper A Contextual-Bandit Approach to Personalized News Article Recommendation (a minimal LinUCB Disjoint sketch follows this list).
    • Phase 3 (CB analysis): Utilise the vowpal wabbit package for online learning in contextual bandit simulations.
  • R:
    • Phase 4 (MAB & CB analysis): Using the R package contextual, which provides a comprehensive ecosystem of bandit algorithms and policies.
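As a taste of Phase 2, below is a minimal sketch of the LinUCB Disjoint policy from the paper above: one ridge-regression model per arm, plus an upper-confidence bonus on each score. This is not this repo's implementation; the context dimension and reward signal are placeholders:

```python
import numpy as np

class LinUCBDisjoint:
    """Minimal LinUCB (disjoint) sketch: a separate ridge-regression model
    per arm, scored with an upper-confidence bonus alpha * sqrt(x' A^-1 x)."""
    def __init__(self, n_arms, d, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(d) for _ in range(n_arms)]    # per-arm design matrices
        self.b = [np.zeros(d) for _ in range(n_arms)]  # per-arm reward vectors

    def choose(self, x):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b  # ridge estimate of the arm's coefficients
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Illustrative usage: 3 arms, 5-dimensional contexts, a stand-in reward.
rng = np.random.default_rng(0)
policy = LinUCBDisjoint(n_arms=3, d=5, alpha=1.0)
for _ in range(100):
    x = rng.normal(size=5)
    arm = policy.choose(x)
    reward = float(rng.random() < 0.5)  # replace with a real reward signal
    policy.update(arm, x, reward)
```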

Analysis and Code Implementation

Phase 1 MAB analysis includes:

Phase 2 CB analysis (currently ongoing):

Special Mention

A portion of the MAB code is based on the book "Bandit Algorithms for Website Optimization" by John Myles White.

Microsoft's vowpal wabbit package for Python can be found in this GitHub repo.
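For Phase 3, examples are fed to vowpal wabbit in its native contextual-bandit text format, action:cost:probability | features. Here is a minimal sketch, assuming the vowpalwabbit Python bindings with the newer Workspace API (older releases expose pyvw.vw instead); the feature names are illustrative:

```python
import vowpalwabbit

# --cb 2: contextual bandit with 2 possible actions.
vw = vowpalwabbit.Workspace("--cb 2 --quiet")

# Each line: chosen_action:observed_cost:logged_probability | context features
train_examples = [
    "1:0.0:0.5 | user_age:25 hour:9",   # action 1, cost 0 (got a reward)
    "2:1.0:0.5 | user_age:25 hour:22",  # action 2, cost 1 (no reward)
]
for ex in train_examples:
    vw.learn(ex)

# Predicting on a fresh context returns the action the learned policy picks.
print(vw.predict("| user_age:30 hour:10"))
vw.finish()
```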

The R package contextual can be found in this GitHub repo.
