Safe optimization in the Service API #2563

Open

Abrikosoff opened this issue Jul 5, 2024 · 5 comments


Abrikosoff commented Jul 5, 2024

Hi Ax Team,

I am trying to implement a Service API version of the safe optimization idea floated by @Balandat here; so far I've come up with a snippet of the form

from typing import Callable, List

import torch


def estimate_probabilities_of_satisfaction(
    model, points: torch.Tensor, constraints: List[Callable]
):
    """
    Estimate the probability of satisfying the given nonlinear inequality
    constraints g(x) >= 0.

    Args:
        model (Model): A trained BoTorch model.
        points (torch.Tensor): Points at which to estimate constraint
            satisfaction, shape (n, d).
        constraints (List[Callable]): Constraint functions; each is applied to
            the posterior mean of shape (n, m) and returns a tensor of shape (n,).

    Returns:
        torch.Tensor: Probabilities of satisfying all constraints, shape (n,).
    """
    posterior = model.posterior(X=points)
    # Detach so the posterior mean is a leaf tensor we can differentiate
    # (calling requires_grad_() on a non-leaf tensor raises an error).
    mus = posterior.mean.detach().requires_grad_(True)
    sigma2s = posterior.variance.detach()

    probs = torch.ones(points.shape[0], device=points.device)

    for constraint in constraints:
        # Mean of g(x) under the first-order (delta-method) approximation.
        g_mus = constraint(mus)

        # Gradients of g with respect to the posterior mean.
        grads = torch.autograd.grad(g_mus.sum(), mus)[0]

        # Variance of g(x) via the delta method: grad^T Sigma grad, where
        # Sigma is the diagonal of marginal variances from the posterior.
        g_vars = torch.sum(grads**2 * sigma2s, dim=-1)

        # Normal approximation to the distribution of g(x).
        dist = torch.distributions.normal.Normal(g_mus.detach(), g_vars.sqrt())

        # Probability of g(x) >= 0.
        prob_constraint = 1 - dist.cdf(torch.zeros_like(g_mus))

        # Update the overall probability, treating constraints as independent.
        probs *= prob_constraint

    return probs

def probs_constraint(
    gamma: float,
    model,
    X: torch.Tensor,
    constraints: List[Callable],
):
    # Feasible iff P(all constraints satisfied) >= gamma, following the
    # g(x) >= 0 convention used for nonlinear inequality constraints.
    return estimate_probabilities_of_satisfaction(model, X, constraints) - gamma

But here I am stuck: I'm not sure how to retrieve the currently fitted model, since I'm planning to pass probs_constraint as a nonlinear_inequality_constraint in a GenerationStrategy. Any ideas?


sdaulton commented Jul 8, 2024

Hi @Abrikosoff,

I am guessing that @Balandat intended for the constraint model to be pretrained outside of Ax. If you wanted to use nonlinear constraints with SciPy, you could implement a custom Acquisition that has a different optimize function, one that constructs the right nonlinear constraint from the fitted model.
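
For concreteness, here's a rough sketch of that first option, reusing probs_constraint from the snippet above. The import path, the surrogate attribute, the optimizer-option key, and the optimize signature all vary across Ax/BoTorch versions, so treat every name here as an assumption to check against your install:

from ax.models.torch.botorch_modular.acquisition import Acquisition


class SafeOptimizationAcquisition(Acquisition):
    """Hypothetical sketch: build the probabilistic constraint from the
    fitted surrogate, then delegate to the stock optimizer."""

    # Hypothetical experiment-specific configuration.
    gamma = 0.95
    safety_constraints = []  # e.g. [lambda mus: mus[..., 0]] for "outcome 0 >= 0"

    def optimize(self, *args, optimizer_options=None, **kwargs):
        # The fitted BoTorch model lives on the surrogate (the attribute
        # name differs between Ax versions: `surrogate` vs. `surrogates`).
        model = self.surrogate.model

        def ineq(x):
            # Optimizers call nonlinear constraints with a single 1-d point,
            # hence the unsqueeze/squeeze; feasible iff the value is >= 0.
            return probs_constraint(
                self.gamma, model, x.unsqueeze(0), self.safety_constraints
            ).squeeze(0)

        optimizer_options = dict(optimizer_options or {})
        # Newer BoTorch versions expect (callable, is_intrapoint) tuples here.
        optimizer_options["nonlinear_inequality_constraints"] = [ineq]
        return super().optimize(*args, optimizer_options=optimizer_options, **kwargs)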

An alternative would be to create a new acquisition function that constructs and uses the probabilistic constraint, e.g. EI weighted by the probability that the probabilistic constraint is satisfied. One way to do this would be to make a subclass of (Log)EI that creates the necessary constraint within its input constructor, similar to this.

You could then use this acquisition function in a GenerationStrategy. Parts 3b and 5 of this tutorial show how to do this.
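
A minimal sketch of this second option, assuming BoTorch's acqf_input_constructor registry and the stock construct_inputs_qLogEI; the constraint_outcome_indices kwarg is made up for illustration:

from botorch.acquisition.input_constructors import (
    acqf_input_constructor,
    construct_inputs_qLogEI,
)
from botorch.acquisition.logei import qLogExpectedImprovement


class qLogEIWithProbConstraints(qLogExpectedImprovement):
    """Subclass only so a dedicated input constructor can be registered."""


@acqf_input_constructor(qLogEIWithProbConstraints)
def construct_inputs_qlogei_prob_constraints(
    model, training_data, constraint_outcome_indices=None, **kwargs
):
    # Reuse the stock constructor for best_f, objective, sampler, etc.,
    # passing the remaining kwargs straight through.
    inputs = construct_inputs_qLogEI(
        model=model, training_data=training_data, **kwargs
    )
    # qLogEI weights EI by the feasibility of sample-level constraints,
    # where callable(samples) <= 0 means feasible; negate each constrained
    # outcome to encode "outcome >= 0".
    inputs["constraints"] = [
        (lambda Z, i=i: -Z[..., i]) for i in (constraint_outcome_indices or [])
    ]
    return inputs

A GenerationStep could then point at it via model_kwargs={"botorch_acqf_class": qLogEIWithProbConstraints}, as in the tutorial linked above.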


Abrikosoff commented Jul 10, 2024


Hi Sam, thanks a lot for the reply! Currently I'm defining nonlinear constraints and passing them to a GenerationStrategy, something like the following:

local_nchoosek_strategy = GenerationStrategy(
    steps=[
        GenerationStep(
            model=Models.SOBOL,
            num_trials=num_sobol_trials_for_nchoosek,  # https://github.com/facebook/Ax/issues/922
            min_trials_observed=min_trials_observed,
            max_parallelism=max_parallelism,
            model_kwargs=model_kwargs,
        ),
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            num_trials=-1,
            model_gen_kwargs={
                "model_gen_options": {
                    "optimizer_kwargs": {
                        "nonlinear_inequality_constraints": [_ineq_constraint],
                        "batch_initial_conditions": batch_initial_conditions,
                    }
                }
            },
        ),
    ]
)

which I can then pass to my AxClient. My initial idea was to pass estimate_probabilities_of_satisfaction along with _ineq_constraint, which would let me do this relatively simply in the Service API. I guess what you mean is that there is no good way to do this when the trained model is required as one of the inputs (as in this case)?

sdaulton commented

Yes, that's right. If you need a model trained by Ax on data collected during the experiment, I would recommend going with one of the two approaches I mentioned, since then you would have access to the trained model.


Abrikosoff commented Jul 12, 2024


Hi Sam (@sdaulton), once again thanks for your reply! I'm preparing to try your alternative suggestion (subclassing LogEI), and I have a few related questions:

  1. The complete procedure, I think, is to define an input constructor for a subclass of qLogEI, which means defining a function akin to construct_inputs_qLogEISpecialConstraints (for want of a better name) with the necessary inputs, and passing this to model_kwargs via the botorch_acqf_class keyword in the GenerationStep of a GenerationStrategy? Is that more or less correct?

  2. If the above is correct: looking at your linked code snippet, I see there's a kwarg entry called constraints; should I pass my nonlinear constraints here? I'm confused because, from the docstring, the constraints here seem to assume g(x) < 0, which is a bit different from the usual nonlinear_inequality_constraints kwarg (which assumes g(x) > 0) that one passes to model_gen_kwargs. In addition, if I pass my nonlinear constraints here, do I need to pass them again to model_gen_kwargs in the GenerationStep?

  3. And if both of the above are clarified: is the model from the Model keyword in the input constructor the model I can use in the probability constraint?

Once again, thanks a lot for taking time out to help!

lena-kashtelyan added the question (Further information is requested) label on Jul 31, 2024
sdaulton commented

You'd want to construct a subclass of qLogEI that takes in, for example, a list of high_prob_constraints that would be treated differently from the typical constraints. This AF would have to handle the high_prob_constraints in a reasonable way (e.g., weighting the AF value by the probability that those constraints are satisfied). Since we wouldn't use SciPy's nonlinear-constraint functionality here, all constraints would be of the form g(x) < 0. Then you'd need to define a new input constructor for this new acquisition function. This input constructor would need to separate the constraints that should be handled in the standard way (weighting by the probability of feasibility) from the constraints that need the new treatment.
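
For illustration, a minimal sketch of such a subclass, reusing estimate_probabilities_of_satisfaction from the first comment; the high_prob_constraints kwarg and the q = 1 assumption are illustrative, not Ax/BoTorch API:

import torch
from botorch.acquisition.logei import qLogExpectedImprovement


class qLogEIWithHighProbConstraints(qLogExpectedImprovement):
    """Hypothetical: add log P(high-prob constraints satisfied) to log-EI."""

    def __init__(self, *args, high_prob_constraints=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.high_prob_constraints = high_prob_constraints or []

    def forward(self, X: torch.Tensor) -> torch.Tensor:
        log_ei = super().forward(X)  # shape: batch
        if not self.high_prob_constraints:
            return log_ei
        # Assumes q = 1, so X is batch x 1 x d; collapse the q dimension.
        # NOTE: for gradient-based acqf optimization the probability term
        # must stay differentiable w.r.t. X; the delta-method sketch above
        # detaches the posterior mean, so it would need adapting for that.
        probs = estimate_probabilities_of_satisfaction(
            self.model, X.squeeze(-2), self.high_prob_constraints
        )
        return log_ei + torch.log(probs.clamp_min(1e-10))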

> passing this to model_kwargs via the botorch_acqf_class keyword in the GenerationStep of a GenerationStrategy

Yes, that's right.

> In addition, if I pass my nonlinear constraints here, do I need to pass them again to model_gen_kwargs in the GenerationStep?

No.

> is the model from the Model keyword in the input constructor the model I can use in the probability constraint?

Yes, but you'd need to specify which outcomes to use for the probabilistic constraints. model will be the model for all outcomes, so when computing the constraint, you'd need to use the outcome relevant to that constraint.
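
For illustration, a hedged sketch of "using the relevant outcome" inside a sample-level constraint; the outcome index and the s(x) >= 0 convention are assumptions:

import torch

# Hypothetical: outcome 1 of the multi-output model is the constrained
# metric s(x), and the probabilistic constraint is "s(x) >= 0".
SAFETY_OUTCOME_IDX = 1


def safety_constraint(samples: torch.Tensor) -> torch.Tensor:
    # `samples` has shape sample_shape x batch_shape x q x m over all m
    # outcomes; slice out the one this constraint refers to. Sample-level
    # constraints treat values <= 0 as feasible, so negate to encode s >= 0.
    return -samples[..., SAFETY_OUTCOME_IDX]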
