[GENERAL SUPPORT]: SEBO with parameter constraints #2790

Open
souravdey94 opened this issue Sep 26, 2024 · 6 comments
Labels: question (Further information is requested)

@souravdey94 (Author)

Question

I am trying to predict chemical reaction rates in different solvent combinations. I want to use SEBO because the parameter space can contain up to 30 solvents and in most cases there are only 3 to 4 important solvents. Since it is a composition problem, I need to use parameter constraints, but SEBO with parameter constraints is not implemented in Ax. Can you suggest a workaround?

I have added a code snippet of the generation strategy and experiment setup.

Please provide any relevant code snippet if applicable.

# Imports assumed for this snippet (not shown in the original post)
import torch
from ax.service.ax_client import AxClient, ObjectiveProperties
from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models
from ax.models.torch.botorch_modular.sebo import SEBOAcquisition
from ax.models.torch.botorch_modular.surrogate import Surrogate
from botorch.acquisition.multi_objective.monte_carlo import (
    qNoisyExpectedHypervolumeImprovement,
)
from botorch.models.fully_bayesian import SaasFullyBayesianSingleTaskGP

length = len(solvent_names_minus1)
print("length", length)

torch.manual_seed(12345)  # To always get the same Sobol points
tkwargs = {
    "dtype": torch.double,
    "device": torch.device("cuda" if torch.cuda.is_available() else "cpu"),
}

# Sparsity target: the all-zeros composition
target_point = torch.tensor([0 for _ in range(length)], **tkwargs)
print("target_point", target_point)

SURROGATE_CLASS = SaasFullyBayesianSingleTaskGP

gs = GenerationStrategy(
    name="SEBO_L0",
    steps=[
        GenerationStep(  # BayesOpt step
            model=Models.BOTORCH_MODULAR,
            # No limit on how many generator runs will be produced
            num_trials=-1,
            model_kwargs={  # Kwargs to pass to `BoTorchModel.__init__`
                "surrogate": Surrogate(botorch_model_class=SURROGATE_CLASS),
                "acquisition_class": SEBOAcquisition,
                "botorch_acqf_class": qNoisyExpectedHypervolumeImprovement,
                "acquisition_options": {
                    "penalty": "L0_norm",  # can be "L0_norm" or "L1_norm"
                    "target_point": target_point,
                    "sparsity_threshold": length,
                },
            },
        )
    ],
)

# The client is presumably constructed with this strategy
# (not shown in the original post)
ax_client = AxClient(generation_strategy=gs)

ax_client.create_experiment(
    name="solventproject",
    parameters=[
        {
            "name": solvent_names_minus1[i],
            "type": "range",
            "bounds": [float(range_min_minus1[i]), float(range_max_minus1[i])],
            "value_type": "float",  # Optional, defaults to inference from type of "bounds".
            "log_scale": False,  # Optional, defaults to False.
        }
        for i in range(len(solvent_names_minus1))
    ],
    objectives={"blend_score": ObjectiveProperties(minimize=False)},
    # sum_str is a linear constraint string defined elsewhere in the author's code
    parameter_constraints=[sum_str],  # Optional.
    outcome_constraints=["lnorm <= 0.00"],  # Optional.
)

souravdey94 added the question label Sep 26, 2024
@sdaulton (Contributor)

“in most cases there are only 3 to 4 important solvents”

Is it bad if the suggested arms include more than 3-4 solvents, or is this just prior knowledge you want to include? Note: using a SAAS model already encodes the prior that only a few parameters are relevant, so unless you specifically want to avoid generating arms that change many parameters, sparse BO is probably not needed.
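
For the SAAS-only route described above, a minimal sketch (not an official recommendation) is below; it reuses the imports and names from the snippet in the question and simply drops the SEBO-specific acquisition options, so parameter constraints work out of the box:

gs_saas = GenerationStrategy(
    name="SAAS",
    steps=[
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            num_trials=-1,
            model_kwargs={
                # The SAAS prior itself encodes that only a few
                # parameters are relevant
                "surrogate": Surrogate(
                    botorch_model_class=SaasFullyBayesianSingleTaskGP
                ),
            },
        )
    ],
)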

Regarding sparse BO: it looks like optimizing the L0 objective using homotopy does not support parameter constraints. There isn't a fundamental reason why one couldn't, though. Some options would be:

  1. Use L1_norm instead of L0_norm. This may not lead to the most sparse results, but can be used out of the box (see the sketch after this list). The relevant branch in Ax's SEBO optimizer:

     if self.penalty_name == "L0_norm":
         if inequality_constraints is not None:
             raise NotImplementedError(
                 "Homotopy does not support optimization with inequality "
                 + "constraints. Use L1 penalty norm instead."
             )
         candidates, expected_acquisition_value, weights = (
             self._optimize_with_homotopy(
                 n=n,
                 search_space_digest=search_space_digest,
                 fixed_features=fixed_features,
                 rounding_func=rounding_func,
                 optimizer_options=optimizer_options,
             )
         )
     else:
         # if L1 norm use standard moo-opt
         candidates, expected_acquisition_value, weights = super().optimize(
             n=n,
             search_space_digest=search_space_digest,
             inequality_constraints=inequality_constraints,
             fixed_features=fixed_features,
             rounding_func=rounding_func,
             optimizer_options=optimizer_options,
         )
  2. Implement support for parameter constraints in _optimize_with_homotopy.
  3. Allow setting a fixed parameter value in the differentiable relaxation for the L0 norm, and optimize without homotopy. This would require adding another argument that distinguishes which norm to use from how to optimize it, since these are currently coupled.
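
A concrete sketch of option 1, reusing target_point and length from the snippet in the question. Switching the penalty name is the only change relative to the original strategy; with "L1_norm", SEBO takes the standard optimizer path shown above, which accepts inequality constraints:

gs_l1 = GenerationStrategy(
    name="SEBO_L1",
    steps=[
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            num_trials=-1,
            model_kwargs={
                "surrogate": Surrogate(
                    botorch_model_class=SaasFullyBayesianSingleTaskGP
                ),
                "acquisition_class": SEBOAcquisition,
                "botorch_acqf_class": qNoisyExpectedHypervolumeImprovement,
                "acquisition_options": {
                    "penalty": "L1_norm",  # the only change vs. the L0 setup
                    "target_point": target_point,
                    "sparsity_threshold": length,
                },
            },
        )
    ],
)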

@Balandat changed the title to [GENERAL SUPPORT]: SEBO with parameter constraints Sep 26, 2024
@CompRhys (Contributor)

https://github.com/pytorch/botorch/pull/2588/files extends _optimize_with_homotopy to include the constraints typically available in optimize_acqf, which I believe addresses suggestion 2 here.

@souravdey94 (Author)

“https://github.com/pytorch/botorch/pull/2588/files extends _optimize_with_homotopy to include the constraints typically available in optimize_acqf, which I believe addresses suggestion 2 here.”

Has it been implemented in BoTorch?
I am currently using outcome constraints to learn the composition constraints.
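
For context, a hypothetical sketch of that outcome-constraint workaround: report the composition-constraint violation as a metric (the lnorm outcome in the snippet above) and constrain it to be non-positive. The definition of lnorm and the helper run_reaction_and_score are assumptions for illustration, not taken from the thread:

def evaluate(parameters: dict) -> dict:
    # Assumed: lnorm measures violation of the composition (simplex) constraint
    fractions = [parameters[name] for name in solvent_names_minus1]
    violation = abs(sum(fractions) - 1.0)
    score = run_reaction_and_score(fractions)  # hypothetical experiment call
    # The outcome constraint "lnorm <= 0.00" then steers the optimizer
    # toward feasible compositions
    return {"blend_score": score, "lnorm": violation}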

facebook-github-bot pushed a commit that referenced this issue Dec 14, 2024
Summary:
Following pytorch/botorch#2588 being merged there is now no reason to stop people using inequality constraints with SEBO. This PR removes the ValueError check and adjusts the tests accordingly.

Connected to #2790

Pull Request resolved: #2938

Reviewed By: Balandat

Differential Revision: D64835824

Pulled By: saitcakmak

fbshipit-source-id: d12a80851bb815497bae42f617c82a93cd88b3bf
@CompRhys (Contributor)

@souravdey94 This is now merged with #2938! Hope you can also make use of it.

@souravdey94 (Author)

@CompRhys Thanks for the implementation. Is it already available with the latest Ax version?

@CompRhys (Contributor)

No, you'll have to install from git:

pip install git+https://github.com/facebook/Ax.git
pip install git+https://github.com/pytorch/botorch.git

You need dev versions of both Ax and BoTorch.
