
Conversation

Collaborator

@JoseJVS JoseJVS commented Oct 15, 2025

This is an updated version of the latest MPI comm branch from Bruno's fork.
I have cleaned up unrelated benchmarking scripts, as well as old Automake-related scripts and deprecated C++ examples.
I also bumped the version to 2.0.0.

golosio added 30 commits March 4, 2025 01:05
…rce nodes are actually used in the connections
… fast building of the maps when the source neurons of a remote connection command are in a sequence of contiguous integers
…rce neurons of a remote connection command are in a sequence of contiguous integers in target host (RemoteConnectSource)
…n the source neurons of a remote connection command are in a sequence of contiguous integers in source host (RemoteConnectTarget)
…en the source neurons of a remote connection command are in a sequence of contiguous integers in source host (RemoteConnectTarget)
…GPU memory allocation, with additional timers for remote connection creation
…ber of MPI processes with the command SetNHosts(n_hosts)
…rs needed only for specific choices of the algorithm
…allocating arrays useful for output spike buffer only using the number of local nodes rather than local+image nodes
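Several of the commits above concern fast map building when the source neurons of a remote connection command form a sequence of contiguous integers. As a purely illustrative sketch (not the actual implementation in this PR, and the function name and return format are hypothetical), the underlying contiguity check could look like this in Python:

```python
def as_contiguous_range(source_ids):
    """Return (first, count) if the IDs form a contiguous integer
    sequence, else None.

    When the source neurons of a remote connection command are
    contiguous, the source-to-remote map can be represented by a
    single offset and length instead of an explicit per-neuron map,
    which makes building the map much faster.
    """
    ids = sorted(source_ids)
    if not ids:
        return None
    first, last = ids[0], ids[-1]
    # Contiguous iff the span matches the count and there are no duplicates.
    if last - first + 1 == len(ids) and len(set(ids)) == len(ids):
        return (first, len(ids))
    return None
```

For example, `as_contiguous_range([3, 4, 5, 6])` yields `(3, 4)`, while a gapped set such as `[3, 5, 6]` falls back to `None` and would require an explicit map.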
Collaborator

@jhnnsnk jhnnsnk left a comment

Thanks for this tremendous work! Could you please also remove the raster plots and the data, unless they are needed for the tests? The Python examples and tests should also be revised.

Collaborator

@jhnnsnk jhnnsnk left a comment


Thank you for this tremendous work! 👍

Collaborator Author

JoseJVS commented Oct 31, 2025

Ping @lucapontisso @gmtiddia

@lucapontisso
Collaborator

> Ping @lucapontisso @gmtiddia

Thanks, Jose, for the huge work!
Sorry for the late reply; I will conclude the review over the weekend.

Collaborator

gmtiddia commented Oct 31, 2025

I've just had a look at the code; it is OK for me. Thanks a lot for the huge work!

Collaborator

golosio commented Oct 31, 2025

Thank you for all this work, Jose! However, I have to ask you one more thing, for practical reasons. The Python tests, previously in the folder python/test and usually launched with the bash scripts test_all.sh and test_mpi.sh, no longer work, because the data files for the test folder have been moved. I know that this is a temporary situation, because as soon as possible they should be handled in a way similar to the NEST (CPU) tests; however, until we have that solution, it would be better to keep them working in the old way, because they are run after every change to the code to check that everything is working properly. For the same reason, I ask you to put back all the files that were in the folder python/hpc_benchmark/test/, i.e. in the subfolders data_check, data_check_dfi, test_hpc_benchmark_hg, test_hpc_benchmark_p2p, test_hpc_benchmark_wg, and test_hpc_benchmark_wg_dfi, and the files in the Potjans_2014 folder.


5 participants