MPI Comm cleanup update #124
base: nest-gpu-2.0-mpi-comm
Conversation
…on rule: removing obsolete files
…gging output still to be removed
…ved unnecessary debugging output
… HPC benchmark with this connection rule
… remote connection creation time
…rce nodes are actually used in the connections
…on of remote connections
… fast building of the maps when the source neurons of a remote connection command are in a sequence of contiguous integers
…rce neurons of a remote connection command are in a sequence of contiguous integers in target host (RemoteConnectSource)
…n the source neurons of a remote connection command are in a sequence of contiguous integers in source host (RemoteConnectTarget)
…en the source neurons of a remote connection command are in a sequence of contiguous integers in source host (RemoteConnectTarget)
…GPU memory allocation, with additional timers for remote connection creation
…onnectDistributedFixedIndegree
… bit-packing compression
…ber of MPI processes with the command SetNHosts(n_hosts)
… should be memorized
…rs needed only for specific choices of the algorithm
…allocating arrays useful for output spike buffer only using the number of local nodes rather than local+image nodes
…mote node sequences to local image nodes
…from GPU to CPU memory optimization level 0
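As a rough illustration of the contiguous-sequence optimization mentioned in the commits above: when the remote source nodes of a connection command form a block of contiguous integers, the map from remote nodes to local image nodes can be reduced to a single offset instead of one entry per node. The sketch below is hypothetical (the type and member names are not taken from the NEST GPU sources); it only shows the idea.

```cpp
// Minimal sketch, assuming the remote source nodes form one contiguous
// integer block. Names here are illustrative, not the actual NEST GPU code.
#include <cstdio>

struct ContiguousRemoteMap {
  int first_remote;  // first remote node index of the contiguous block
  int first_image;   // local image node assigned to first_remote
  int n_nodes;       // length of the contiguous block

  // O(1) lookup: no per-node map entry is needed for the block
  int ImageNode(int remote_node) const {
    return first_image + (remote_node - first_remote);
  }
};

int main() {
  // Remote nodes 1000..1999 mapped to local image nodes 50..1049
  ContiguousRemoteMap map{1000, 50, 1000};
  std::printf("remote 1234 -> image %d\n", map.ImageNode(1234));
  return 0;
}
```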
Thanks for this tremendous work! Could you please also remove the raster plots and the data unless they are needed for tests? The Python examples and tests should also be revised.
Thank you for this tremendous work! 👍
Ping @lucapontisso @gmtiddia
Thanks, Jose, for the huge work!
I've just had a look at the code and it is OK for me. Thanks a lot for the huge work!
Thank you for all this work, Jose! However, I have to ask you one more thing, for practical reasons. The Python tests, previously in the folder python/test and usually launched with the bash scripts test_all.sh and test_mpi.sh, no longer work, because the data files for the test folder have been moved. I know that this is a temporary solution, since as soon as possible they should be handled in a similar way to the NEST (CPU) tests; however, until we have that solution it would be better to keep them working in the old way, because they are run after every change to the code to check that everything is working properly. For the same reason, I ask you to put back all the files that were in the folder python/hpc_benchmark/test/, i.e. in the subfolders data_check, data_check_dfi, test_hpc_benchmark_hg, test_hpc_benchmark_p2p, test_hpc_benchmark_wg, test_hpc_benchmark_wg_dfi, and the files in the Potjans_2014 folder.
This is an updated version of the latest MPI comm branch from Bruno's fork.
I have cleaned up unrelated benchmarking scripts, as well as old automake-related scripts and deprecated C++ examples.
I also bumped the version to 2.0.0.