From: Alexander Brugh (abrugh_at_lanl_dot_gov)
Date: Tue May 15 2007 - 09:47:43 PDT
I'm having some trouble running a program on my machines when I scale up
the size of some global arrays. I've done what I consider to be a fair
amount of googling for this problem, but if I overlooked a posted
solution, please feel free to point it out.

I'm compiling with upcc v. 2.4.0 on a couple of different machines:

 - Dual G5 tower with 2.5GB of RAM, OS X 10.4
 - Dual Core 2 Duo laptop with 2GB of RAM, Ubuntu Linux

Eventually I'll be trying to run this program on a real cluster, which
also has upcc v. 2.4.0.

When I run on my development machines, I get the following error:

  UPCR: UPC threads 0..1 of 2 on hollywood (process 0 of 1, pid=5138)
  UPC Runtime error: out of shared memory
   Local shared memory in use:  578 MB per-thread, 1156 MB total
   Global shared memory in use:  15 MB per-thread,   30 MB total
   Total shared memory limit:   600 MB per-thread, 1200 MB total
  upc_global_alloc unable to service request from thread 1 for 8003584 more bytes

This was compiled with the following:

  upcc -pthreads -T 2 --shared-heap=600 upctest.c

I set the shared heap to 600 in a failed attempt to get more global
shared memory. A friend of mine suggested I try oversubscribing the
number of threads, on the theory that I could get 15MB per thread. It
wouldn't have been a fast solution, but it might have worked; instead,
the 30MB of global shared memory is just divided more thinly among the
threads. I can't seem to find a compiler option or upcrun option to get
more global shared memory.

This is my first shot at writing UPC, so if there's an obvious mistake
staring me in the face, I'd appreciate the help in seeing it.

Thanks,
Alex Brugh
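
P.S. I haven't attached the real upctest.c, but the allocation that
triggers the error is essentially the pattern below. This is only an
illustrative reduction (the file name, variable names, and loop are
made up; the request size is taken from the error message above), not
my actual code:

  /* sketch of the failing allocation pattern -- hypothetical, not the
   * real upctest.c */
  #include <upc.h>
  #include <stdio.h>

  #define CHUNK_BYTES 8003584UL   /* size taken from the runtime error */

  int main(void)
  {
      /* upc_global_alloc asks for CHUNK_BYTES on each of THREADS threads;
       * judging by the error output, this request is charged against the
       * "Global shared memory" portion of the shared heap, which is much
       * smaller than the 600 MB per-thread limit I set. */
      shared char *buf = (shared char *)upc_global_alloc(THREADS, CHUNK_BYTES);

      if (buf == NULL)
          printf("thread %d: upc_global_alloc failed\n", MYTHREAD);

      upc_barrier;
      return 0;
  }

The sketch is compiled the same way as the real program, i.e.
upcc -pthreads -T 2 --shared-heap=600 upctest.c.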