fix: caching lookup to behave correctly when inputs/output mapping #450
base: master
Conversation
} else {
  // Does this make sense for both types of allocators?
  ort_tensor_key_t ort_tensor_key{input_name};
  auto it = ort_ov_tensor_map.find(ort_tensor_key);
When are we first inserting the values into the map? if (it != ort_ov_tensor_map.end() || it->second.ort_ptr != tensor.GetTensorRawData())
// Does this make sense for both types of allocators?
ort_tensor_key_t ort_tensor_key{input_name};
auto it = ort_ov_tensor_map.find(ort_tensor_key);
if (it != ort_ov_tensor_map.end() || it->second.ort_ptr != tensor.GetTensorRawData()) {
The first condition will always be false. It should be it == ort_ov_tensor_map.end(); as written, a cache miss would also dereference an invalid iterator in the second operand.
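To make the reviewer's point concrete, here is a minimal, self-contained sketch of the intended cache-lookup ordering. The type and function names (OVTensorData, needs_rebind) are hypothetical stand-ins for the real ORT/OpenVINO types; the point is only that the end() check must come first and use ==, so a cache miss never dereferences the iterator:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Hypothetical stand-in for the cached entry in ort_ov_tensor_map.
struct OVTensorData {
  const void* ort_ptr = nullptr;  // raw ORT buffer the cached OV tensor wraps
};

using ort_tensor_key_t = std::string;

// Returns true when the caller must (re)create the wrapped tensor:
// either the key is missing, or the underlying ORT buffer has moved.
bool needs_rebind(std::unordered_map<ort_tensor_key_t, OVTensorData>& cache,
                  const ort_tensor_key_t& key, const void* ort_raw_data) {
  auto it = cache.find(key);
  // `it == cache.end()` must be the first operand; with `it != ... ||`,
  // a cache miss would evaluate `it->second` on an invalid iterator.
  if (it == cache.end() || it->second.ort_ptr != ort_raw_data) {
    cache[key] = OVTensorData{ort_raw_data};
    return true;  // create a fresh tensor view over ort_raw_data
  }
  return false;  // cached tensor still aliases the same buffer
}
```

On a miss the short-circuit || skips the dereference entirely, which is exactly what the condition under review fails to guarantee.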
ov_tensor_data.tensor_ptr = std::make_shared<ov::Tensor>(input.get_element_type(), input.get_shape(),
                                                         (void*)tensor.GetTensorRawData());
ov_tensor_data.ort_ptr = tensor.GetTensorRawData();
ort_ov_tensor_map[ort_tensor_key] = ov_tensor_data;
Do we even need to specify the ORT_NPU_ALLOCATOR anymore?
Description
Motivation and Context