Search found 32 matches
- Sat 07 Aug 2021, 22:54
- Forum: Usage
- Topic: Problem with MPI
- Replies: 4
- Views: 3988
Re: Problem with MPI
Hello, I have already recompiled openEMS, but you must first have the MPI libraries pre-installed (see topic: https://openems.de/forum/viewtopic.php?f=3&t=335). The benefit of MPI is real on a server made up of NUMA nodes: on my Opteron server with 8 NUMA nodes I was able to divide by 6 the comput...
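A minimal sketch of launching such an MPI run from the Matlab interface (the SetupMPI helper and the Settings.MPI fields are assumed from the openEMS Matlab MPI interface of that era; paths, split counts, and process counts are placeholders):

% sketch only, not verified against a current build
FDTD = SetupMPI(FDTD, 'SplitN_X', 2, 'SplitN_Z', 4);  % split the FDTD domain into 8 chunks
Settings.MPI.Binary = '/opt/openEMS/bin/openEMS';     % MPI-enabled binary (placeholder path)
Settings.MPI.NrProc = 8;                              % e.g. one process per NUMA node
RunOpenEMS(Sim_Path, Sim_File, '', Settings);         % standard openEMS run call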
- Tue 07 Jan 2020, 16:33
- Forum: Usage
- Topic: Problem with local MPI
- Replies: 43
- Views: 74749
Re: Problem with local MPI
I identified two potential problems: 1) The versions of "CalcNF2FF.m" are not the same in version 0.33 (MPI build) and version 0.35: in version 0.33 there was an option to add to the argument list (..., "MPI", true). Is this option still needed in version 0.35 (built --with-MPI)? See respective file...
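For reference, a sketch of the two calls being compared (f_res and the angle ranges are placeholders; the ("MPI", true) pair is the version-0.33 option mentioned above):

% sketch only: nf2ff created earlier with CreateNF2FFBox, simulation already run
theta = (0:2:180) * pi/180;
phi   = (0:5:360) * pi/180;
% openEMS 0.33 (MPI build): extra option in the argument list
nf2ff = CalcNF2FF(nf2ff, Sim_Path, f_res, theta, phi, 'MPI', true);
% openEMS 0.35 (built --with-MPI): same call without the option
nf2ff = CalcNF2FF(nf2ff, Sim_Path, f_res, theta, phi);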
- Fri 20 Dec 2019, 16:17
- Forum: Usage
- Topic: Problem with local MPI
- Replies: 43
- Views: 74749
Re: Problem with local MPI
Hello Thorsten, in earlier (rather old) discussions you had managed to create an MPI version of openEMS (version 0.33) that parallelized the calculation of the radiation patterns (see https://openems.de/forum/viewtopic.php?f=3&t=335&start=30#p2002). Recently, I tried to rec...
- Tue 16 Jul 2019, 17:05
- Forum: Usage
- Topic: Excitation of higher modes (other than TE11) in cylindrical waveguides
- Replies: 1
- Views: 4349
Excitation of higher modes (other than TE11) in cylindrical waveguides
Hello, examining the "AddCircWaveguidePort.m" Matlab function, it seems that only the TE11 mode can be used: the expressions for "func_Er, func_Ea, func_Hr, func_Ha" do not match those given in Table 3.5 of D. Pozar's "Microwave Engineering" when n or m > 1. Could...
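For reference, the general TE_nm fields in a circular waveguide of radius a, as given (quoted from memory) in Pozar's Table 3.5, with k_c = p'_{nm}/a:

H_z    = (A \sin n\phi + B \cos n\phi) \, J_n(k_c \rho) \, e^{-j\beta z}
E_\rho = \frac{-j\omega\mu n}{k_c^2 \rho} (A \cos n\phi - B \sin n\phi) \, J_n(k_c \rho) \, e^{-j\beta z}
E_\phi = \frac{j\omega\mu}{k_c} (A \sin n\phi + B \cos n\phi) \, J_n'(k_c \rho) \, e^{-j\beta z}
H_\rho = \frac{-j\beta}{k_c} (A \sin n\phi + B \cos n\phi) \, J_n'(k_c \rho) \, e^{-j\beta z}
H_\phi = \frac{-j\beta n}{k_c^2 \rho} (A \cos n\phi - B \sin n\phi) \, J_n(k_c \rho) \, e^{-j\beta z}

TE11 is the special case n = m = 1 with p'_{11} ≈ 1.841; for n or m > 1 the implemented expressions would need the n-dependent angular factors and J_n / J_n' terms above.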
- Mon 27 Mar 2017, 08:49
- Forum: Usage
- Topic: Using DetectEdges with ImportSTL
- Replies: 3
- Views: 9848
Re: Using DetectEdges with ImportSTL
Hi,
You can use AEG Mesher (https://bitbucket.org/uoyaeg/aegmesher/ ... Jet/Jet.md) and its function "meshCreateLines" to generate mesh lines from an STL object; a rough sketch follows below.
Regards.
Pascal
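A rough sketch of that workflow (the meshCreateLines call and the returned field names are assumptions, not the verified AEG Mesher API; see the Jet example (Jet/Jet.md) in the repository for the real calling sequence; ImportSTL and DefineRectGrid are the standard CSXCAD/openEMS calls):

% sketch under assumptions: only the CSXCAD calls below are standard
CSX = ImportSTL(CSX, 'metal', 10, 'model.stl');       % import the STL geometry
lines = meshCreateLines('model.stl');                 % HYPOTHETICAL signature, see Jet/Jet.md
mesh.x = lines.x; mesh.y = lines.y; mesh.z = lines.z; % assumed field names
CSX = DefineRectGrid(CSX, 1e-3, mesh);                % feed the generated lines to openEMS (mm units)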
- Mon 20 Feb 2017, 17:52
- Forum: Usage
- Topic: DetectEdge() on STL-imported objects
- Replies: 3
- Views: 9936
Re: DetectEdge() on STL-imported objects
Hello,
Maybe this mesher could help you:
AEG Mesher: An Open Source Structured Mesh Generator for FDTD Simulations.
https://bitbucket.org/uoyaeg/aegmesher/overview
- Tue 23 Feb 2016, 18:40
- Forum: Usage
- Topic: Problem with local MPI
- Replies: 43
- Views: 74749
Re: Problem with local MPI
Hi, I finally solved the affinity problem by using another pthreads wrapper for both the operator creation and the engine. I abandoned the utility "likwid-pin" and replaced it with http://www.poempelfox.de/workstuff/pthread-overload.c (see the sketch below). The startup script is attached, and I can also modify the memo...
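A hedged sketch of such a launch from Octave/Matlab (the wrapper's control variable name and core list are assumptions; check the source of pthread-overload.c for the actual names):

% compile the wrapper first, e.g.:
%   gcc -shared -fPIC -o pthread-overload.so pthread-overload.c -ldl -lpthread
setenv('LD_PRELOAD', '/path/to/pthread-overload.so'); % intercept pthread_create in child processes
setenv('PIN', '0,1,2,3,4,5');                         % ASSUMED variable name for the core list
system('mpirun -n 8 openEMS sim.xml');                % placeholder invocation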
- Tue 16 Feb 2016, 19:38
- Forum: Usage
- Topic: Problem with local MPI
- Replies: 43
- Views: 74749
Re: Problem with local MPI
Hi, I tried to test MPICH but I still have some problems with likwid-pin. The reason is that, for each MPI process, the n threads initially created are destroyed after a few seconds and then recreated; is that correct? likwid-pin distributes the threads across the cores correctly at the beginning, but wh...
- Wed 10 Feb 2016, 18:54
- Forum: Usage
- Topic: Problem with local MPI
- Replies: 43
- Views: 74749
Re: Problem with local MPI
Hi, 1) Concerning the calculation of the radiation pattern: yes, it seems much better with the latest version of CalcNF2FF.m, at least at first glance! 2) Concerning the 100% CPU load of the main threads: no, the problem is the same with or without nf2ff dumps. Do you think it could be lin...
- Tue 09 Feb 2016, 19:19
- Forum: Usage
- Topic: Problem with local MPI
- Replies: 43
- Views: 74749
Re: Problem with local MPI
I forgot to mention that the best score on my 48 cores (quad-socket Opteron 6176, 12 cores per socket) is obtained with 8 MPI processes and 6 threads per process:
Final score: 8 × 75 MCells/s, equivalent to 600 MCells/s.
For comparison, with multithreading alone: 350 MCells/s max.
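In other words: 8 processes × 75 MCells/s ≈ 600 MCells/s, roughly a 1.7× speedup over the 350 MCells/s obtained with multithreading alone.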