Changes to TRT-LLM download tool for multigpu distributed case #3830
base: main
Conversation
There are some changes that do not conform to Python style guidelines:
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/utils.py	2025-09-22 06:35:28.523784+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/utils.py	2025-09-22 06:36:00.657186+00:00
@@ -863,6 +863,6 @@
    return False
def is_thor() -> bool:
    if torch.cuda.get_device_capability() in [(11, 0)]:
-        return True
\ No newline at end of file
+        return True
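For context, the helper after the lint fix would read roughly as below. This is a minimal sketch: the lint change only adds the missing trailing newline, and the fall-through return False is an assumption since the hunk shows only the True branch.

import torch

def is_thor() -> bool:
    # Thor-class GPUs report SM capability (11, 0); this mirrors the check in the hunk above.
    # The final fall-through return is an assumption, not shown in the diff.
    if torch.cuda.get_device_capability() in [(11, 0)]:
        return True
    return False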
There are some changes that do not conform to Python style guidelines:
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/distributed/utils.py	2025-09-25 19:33:28.176615+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/distributed/utils.py	2025-09-25 19:34:02.325958+00:00
@@ -100,11 +100,10 @@
        return True
    except Exception as e:
        logger.warning(f"Failed to detect CUDA version: {e}")
        return False
-
    return True
def _cache_root() -> Path:
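For context, a minimal sketch of what the surrounding helpers could look like. The function name _supported_cuda_version, the version threshold, and the cache location are assumptions; the hunk only shows the exception path and the signature of _cache_root.

import logging
from pathlib import Path

import torch

logger = logging.getLogger(__name__)

def _supported_cuda_version() -> bool:  # hypothetical name
    try:
        cuda_version = torch.version.cuda  # e.g. "13.0"; None on CPU-only builds
        if cuda_version is None:
            return False
        major = int(cuda_version.split(".")[0])
        return major >= 12  # assumed threshold for the TRT-LLM wheel
    except Exception as e:
        logger.warning(f"Failed to detect CUDA version: {e}")
        return False

def _cache_root() -> Path:
    # Hypothetical location for the downloaded TRT-LLM wheel and its lock file.
    return Path.home() / ".cache" / "torch_tensorrt"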
There are some changes that do not conform to Python style guidelines:
--- /home/runner/work/TensorRT/TensorRT/tests/py/dynamo/distributed/test_nccl_ops.py	2025-10-13 18:42:31.890493+00:00
+++ /home/runner/work/TensorRT/TensorRT/tests/py/dynamo/distributed/test_nccl_ops.py	2025-10-13 18:43:14.026641+00:00
@@ -23,10 +23,11 @@
if not dist.is_initialized():
    dist.init_process_group(
        backend="nccl",
        init_method="env://",
    )
+
class DistributedGatherModel(nn.Module):
    def __init__(self, input_dim, world_size, group_name):
        super().__init__()
        self.fc = nn.Linear(input_dim, input_dim)
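For reference, a sketch of the test fixture around this hunk, assuming the usual env:// launch where RANK, WORLD_SIZE, MASTER_ADDR, and MASTER_PORT are set by the launcher. The forward pass and the use of all_gather_tensor with the group name are assumptions; the hunk only shows the constructor.

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed._functional_collectives import all_gather_tensor

if not dist.is_initialized():
    dist.init_process_group(backend="nccl", init_method="env://")

class DistributedGatherModel(nn.Module):
    def __init__(self, input_dim, world_size, group_name):
        super().__init__()
        self.fc = nn.Linear(input_dim, input_dim)
        self.world_size = world_size
        self.group_name = group_name

    def forward(self, x):
        # Assumed body: gather the linear output from every rank along dim 0
        # so the NCCL all_gather path is exercised when the model is compiled.
        out = self.fc(x)
        return all_gather_tensor(out, 0, self.group_name)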
def check_tensor_parallel_device_number(world_size: int) -> None:
    if world_size % 2 != 0:
        raise ValueError(
Why would it matter what the examples need? This is supposed to be user-facing code.
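For illustration, the complete check presumably looks something like the sketch below; the error message text is an assumption, since the snippet above is truncated at raise ValueError(.

def check_tensor_parallel_device_number(world_size: int) -> None:
    if world_size % 2 != 0:
        # The exact message is assumed; only the condition is visible in the review snippet.
        raise ValueError(
            f"Tensor parallelism in the examples expects an even number of GPUs, "
            f"got world_size={world_size}"
        )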
def initialize_logger(rank: int, logger_file_name: str) -> logging.Logger:
    logger = logging.getLogger()
Where is this logger used? Typically this is the user's responsibility; we should not be creating the actual logger handlers in our library. We can do this in the examples code, though.
OK, this was meant for the case where we want separate log outputs for the two different GPUs. But I agree these can all be moved to the user-facing code.
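If this does move into the examples, a per-rank logger could be set up in user code roughly as below; the file-naming pattern and log format are assumptions.

import logging

def initialize_logger(rank: int, logger_file_name: str) -> logging.Logger:
    # One log file per rank so the outputs of the two GPUs do not interleave.
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)
    handler = logging.FileHandler(f"{logger_file_name}_rank{rank}.log", mode="w")
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger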
…T-LLM wheel by using lock file
    
TRT-LLM installation tool for distributed