BEGIN_DEFAULT_CONFIG
Flags:
Files: $TESTNAME$.upc
DynamicThreads: 0
StaticThreads: $DEFAULT$
CompileResult: pass
PassExpr: passed
FailExpr: failed
ExitCode: 0
BuildCmd: make,notrans
AppArgs:
TimeLimit: 300
SaveOutput: 0
END_DEFAULT_CONFIG
# ------------------------------------------------------------
TestName: guppie
BenchmarkResult: rate\s*=\s*(\S+)\s*(Mup/s)

TestName: guppie-async
BenchmarkResult: rate\s*=\s*(\S+)\s*(Mup/s)
RequireFeature: upc_nb
MakeFlags: (_threads > 4) && (_threads <= 8) ; BENCHMARK_FLAGS=-DLTABSIZE=20L
MakeFlags: (_threads > 8) && (_threads <= 16) ; BENCHMARK_FLAGS=-DLTABSIZE=21L
MakeFlags: (_threads > 16) ; BENCHMARK_FLAGS=-DLTABSIZE=22L

TestName: guppie-async-pipeline
BenchmarkResult: rate\s*=\s*(\S+)\s*(Mup/s)
RequireFeature: upc_nb
MakeFlags: (_threads > 4) && (_threads <= 8) ; BENCHMARK_FLAGS=-DLTABSIZE=20L
MakeFlags: (_threads > 8) && (_threads <= 16) ; BENCHMARK_FLAGS=-DLTABSIZE=21L
MakeFlags: (_threads > 16) ; BENCHMARK_FLAGS=-DLTABSIZE=22L

# Notes on LTABSIZE:
# The HPCChallenge RandomAccess benchmark design calls for a table size of
# approximately half of physical memory, and the verification error thresholds
# are tuned with that in mind. However, the time/space resource requirements of
# such runs are undesirable for a generalized test suite, so this harness
# defaults to running unrealistically small table sizes for testing expediency.
# Running tiny tables at high concurrency with asynchronous updates inflates the
# number of conflicting random updates, leading to verification failures when
# the error rate exceeds the acceptance threshold (currently 10% of final
# element values). The MakeFlags above raise the table size just enough to
# prevent these failures in practice.
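#
# For reference, the -DLTABSIZE override passed via BENCHMARK_FLAGS is assumed
# to feed a definition of the following form in the benchmark source. This is a
# minimal sketch based on the usual GUPS/guppie pattern; the #ifndef guard and
# the TABSIZE macro name are assumptions, not necessarily the exact code in
# this suite:
#
#   #ifndef LTABSIZE
#   #define LTABSIZE 25L                 /* log2 of the number of table entries */
#   #endif
#   #define TABSIZE (1L << LTABSIZE)     /* shared update table size, in entries */
#
# Under that assumption, LTABSIZE=20L..22L keeps the table between roughly 1M
# and 4M entries, growing with thread count so that concurrent asynchronous
# updates rarely collide on the same element.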