shared data and operator

From: Tahar Amari (amari_at_cpht.polytechnique.fr)
Date: Sat Feb 14 2009 - 10:37:28 PST

    Hello,
    
    A naive question :
    
      I am trying to understand the manual for a typical application.
    Suppose that I distribute my data as shared arrays, one block per
    thread (the data being, for example, a 2D grid with a scalar value
    defined at the center of each cell).
    
    Then I perform some operation that needs information from neighbouring
    blocks (typically, computing a Laplacian in each block by finite
    differences). This operation will access some values close to the
    "interface" of the block, i.e. values with no "affinity" to the local
    block; a sketch of what I mean is below.
    
    With MPI we need to explicitly communicate (send and receive) exactly
    this data, known as "ghost" cells, and only then apply the operator.
    
    With UPC, if I do not do anything special (unlike in MPI), is it
    possible to do this?
    
    If yes, I guess that UPC will do the communication implicitly, and
    since nothing tells it in advance which data to communicate, is this
    where the penalty will be large?
    
    
    Has anyone tested a simple Laplacian on a square grid, with plain
    shared data, and measured how much we lose (or not) compared to the
    method in which one does almost as much work as with MPI? I sketch
    that second approach below as well.
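    For comparison, the "almost as much effort as MPI" version I have in
    mind would pull the ghost rows into private buffers with bulk
    transfers (upc_memget) and then apply the stencil purely locally,
    roughly as below. Again this is only a sketch under the same
    assumptions (static THREADS, N divisible by THREADS, invented names).

    #include <upc.h>

    #define N    512                   /* hypothetical grid size    */
    #define ROWS (N / THREADS)         /* rows owned by each thread */

    shared [*] double u[N][N];         /* same row-block layout as above */

    /* Private copies: my rows plus one ghost row above and below. */
    double loc[ROWS + 2][N];
    double loc_lap[ROWS + 2][N];

    void laplacian_with_ghosts(void)
    {
        int first = MYTHREAD * ROWS;   /* global index of my first row */
        int i, j, ilo, ihi;

        upc_barrier;                   /* u must be fully written first */

        /* Copy my own rows into private memory (no communication). */
        for (i = 0; i < ROWS; i++)
            upc_memget(&loc[i + 1][0], &u[first + i][0], N * sizeof(double));

        /* Halo exchange: one bulk transfer per neighbouring thread,
           the UPC analogue of exchanging MPI ghost cells. */
        if (MYTHREAD > 0)
            upc_memget(&loc[0][0], &u[first - 1][0], N * sizeof(double));
        if (MYTHREAD < THREADS - 1)
            upc_memget(&loc[ROWS + 1][0], &u[first + ROWS][0],
                       N * sizeof(double));

        /* Stencil now touches only private memory; skip the global
           boundary rows of the grid. */
        ilo = (MYTHREAD == 0) ? 2 : 1;
        ihi = (MYTHREAD == THREADS - 1) ? ROWS - 1 : ROWS;
        for (i = ilo; i <= ihi; i++)
            for (j = 1; j < N - 1; j++)
                loc_lap[i][j] = loc[i-1][j] + loc[i+1][j]
                              + loc[i][j-1] + loc[i][j+1]
                              - 4.0 * loc[i][j];
    }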
    
    Many thanks
    
    Tahar
    
    
    --------------------------------------------
    T. Amari
    Centre de Physique Theorique
    Ecole Polytechnique
    91128 Palaiseau Cedex France
    tel : 33 1 69 33 42 52
    fax: 33 1 69 33 30 08
    email: <mailto:[email protected]>
    URL : http://www.cpht.polytechnique.fr/cpht/amari
    