From: Eric Frederich (eric.frederich_at_gmail_dot_com)
Date: Fri Nov 18 2005 - 13:45:33 PST
Okay, so as for the remote nodes: as long as they have ssh running, have access to the executable and data, and have the same version of the C libraries installed, they should be good to go, and they don't need any UPC installed on them? These C files that result from running upcc do not link against any kind of UPC library? I appreciate your fast responses.

As for me being new to parallel processing, I do have a bachelor's degree in Computer Science. I was not trained in any form of parallel processing during my education, except very primitive forms of semaphores in my operating systems class. I decided to take a course in parallel processing since that is where I see things going, especially now with dual-core and hyperthreading processors. As a result of taking the class I have played around with MPI a little bit. I just saw that Sun has come out with an 8-core processor capable of running 4 concurrent threads per core, for a total of 32 threads. I just want to be able to make use of the new technology as it comes out.

I am basically trying to get UPC up and running and do some basic benchmarking. I was thinking of writing a good solution for an embarrassingly parallel problem, and then writing a poor solution for the same problem. The poor solution would be the same program, except that it would work on data for which it did not have affinity. Something like the following:

good solution:
    for (i = MYTHREAD; i < n; i += THREADS) { ... }

bad solution:
    for (i = MYTHREAD + 1; i < n; i += THREADS) { ... }
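Spelled out a little more, the comparison I have in mind would look something like the sketch below (the array names, their size, and the vector-add body are all made up for illustration, and I haven't actually compiled this yet):

    #include <upc.h>

    #define NPER 1000000    /* elements per thread -- arbitrary */

    shared double a[NPER*THREADS], b[NPER*THREADS], c[NPER*THREADS];

    int main(void)
    {
        int i;

        /* good: each thread touches only elements it has affinity to
         * (with the default cyclic layout, element i lives on thread
         * i % THREADS) */
        for (i = MYTHREAD; i < NPER*THREADS; i += THREADS)
            c[i] = a[i] + b[i];

        upc_barrier;

        /* bad: same amount of work, but each thread is shifted onto its
         * neighbor's elements, so essentially every access is remote
         * (element 0 gets skipped, which shouldn't matter for timing) */
        for (i = MYTHREAD + 1; i < NPER*THREADS; i += THREADS)
            c[i] = a[i] + b[i];

        upc_barrier;
        return 0;
    }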
I have read through a little bit of a book on UPC and find it quite interesting. I really like the idea of not having to set up send and receive buffers and explicitly send data from one processing node to another. I hope that UPC matures past its "researchy"-ness, as you put it, because I find it quite nice (on paper, at least, so far). Hopefully I have enough information to get something up and running this weekend on my network at home.

Let's say I do get something up and running. If I have a node that has hyperthreading, is there a way to tell UPC to send it twice as many processes as the other nodes? Would just listing that node twice in the list of hosts work?

Thanks,
~Eric

On 11/18/05, jcduell_at_lbl_dot_gov <jcduell_at_lbl_dot_gov> wrote:
>
> On Fri, Nov 18, 2005 at 10:14:18AM -0500, Eric Frederich wrote:
>
> > Does the user running the UPC program have to have an account with the
> > same user name and password on all of the nodes it will be executed
> > on?
>
> Yes, unless you've set up ssh to work otherwise (you can set up your
> $HOME/.ssh/config to tell ssh what username to log in as for a given
> machine).
>
> > You mentioned that I need NFS because every node needs to be able to
> > run the executable. Does this executable just need to be placed in the
> > same location on each machine and could I do this with a samba share?
>
> Yes, the executable needs to be in the same place on each machine, and
> all of the standard C libraries it needs must also be present. Also,
> any files that the application opens (input or output data) need to be
> present on all nodes.
>
> You should be able to do this with a samba share, if you 'mount' the
> share the way you would NFS or any other filesystem (i.e. you add it to
> /etc/fstab as a smbfs-type filesystem, and then 'mount' it). I don't
> think anyone else has tried it, but there's no reason it shouldn't work.
>
> > I am new to parallel processing.
>
> You should know that UPC is still a fairly "researchy" language, and
> that the vast majority of actual parallel programs in the world are
> written using other systems, such as MPI or OpenMP. I'm not sure what
> you're aiming to do with UPC, but you might also want to look into
> those methods for parallel programming as well.
>
> --
> Jason Duell                  Future Technologies Group
> <jcduell_at_lbl_dot_gov>     Computational Research Division
> Tel: +1-510-495-2354         Lawrence Berkeley National Laboratory

--
------------------------
Eric L. Frederich
321-246-1854
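P.S. For my own notes, here is roughly what I am planning to try on my machines, based on your earlier answers. All of the host names, paths, and account names below are placeholders, and I have not yet checked exactly how the launcher wants the host list specified, so this is a sketch rather than a recipe.

To have ssh log in with the right user name on each node, an entry in $HOME/.ssh/config on the machine I launch from:

    Host node1 node2 node3
        User eric

To get the executable and data files visible at the same path everywhere through a samba share, an /etc/fstab line on each node along the lines of:

    //fileserver/upc  /home/eric/upc  smbfs  credentials=/home/eric/.smbcreds,uid=eric  0  0

And my guess at "listing the hyperthreaded node twice" so that it gets two UPC threads, assuming the udp conduit takes a space-separated host list in an environment variable (I still need to confirm the exact variable name and the upcrun options in the conduit's README):

    export UPC_NODES="ht-node ht-node node2 node3"
    upcrun -n 4 ./a.out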