Semantic similarity #13
base: main
Conversation
anmarques left a comment
Overall it looks good. It needs some small fixes.
    config_semantic_similarity_args.update(semantic_similarity_args)
    self.semantic_similarity_args = config_semantic_similarity_args

    self.num_samples_per_dataset = config_kwargs.pop("num_samples_per_dataset", num_samples_per_dataset)
This implementation allows arguments set in the config to overwrite arguments passed in by the user. This should not be the case. Config arguments should follow one of two options:
- They can be overridden by the user. In this case the user-provided value should be used.
- They can't be overridden by the user. If the user provides a value for an argument that can't be overridden, argument parsing should raise an error.
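A minimal sketch of the precedence rule described above, assuming dict-based arguments; the `LOCKED_KEYS` set and `resolve_args` helper are hypothetical illustrations, not part of the repo:

```python
# Hypothetical helper illustrating the reviewer's two options.
# Keys in LOCKED_KEYS are fixed by the config; everything else may be
# overridden by the user, in which case the user-provided value wins.
LOCKED_KEYS = {"model_id"}  # illustrative choice of a locked argument

def resolve_args(config_args: dict, user_args: dict) -> dict:
    resolved = dict(config_args)
    for key, value in user_args.items():
        if key in LOCKED_KEYS and key in config_args and config_args[key] != value:
            raise ValueError(f"'{key}' is set by the config and cannot be overridden")
        resolved[key] = value  # user value takes precedence over the config default
    return resolved
```

Under this rule, `config_kwargs.pop("num_samples_per_dataset", num_samples_per_dataset)` is backwards: it prefers the config value and only falls back to the user's argument when the config omits it.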
Fair point, will fix.
src/automation/tasks/scripts/semantic_similarity_generate_script.py (outdated, resolved)
    try:
        print(">>> Initializing vLLM...")
        llm = LLM(
Why are we using the `LLM` class instead of `vllm serve`?
The main branch has an old `src/automation/vllm/server.py` file with the class `VLLMServer`, but other branches use `start_vllm_server`.
Also, shouldn't the output of the `LLM` class be identical to the `vllm serve` API endpoint?
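For context, a minimal sketch of the in-process entry point under discussion; the model name is a placeholder:

```python
from vllm import LLM, SamplingParams

# In-process (offline) inference via the LLM class, as in the diff above.
llm = LLM(model="facebook/opt-125m")  # placeholder model
params = SamplingParams(temperature=0.0, max_tokens=64)
outputs = llm.generate(["What is semantic similarity?"], params)
print(outputs[0].outputs[0].text)
```

`vllm serve <model>` runs the same engine behind an OpenAI-compatible HTTP endpoint, so with identical sampling parameters the generations should match in principle, though server-side defaults (e.g. chat templating) can differ from raw `generate` calls.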