Computer Science – Performance
Scientific paper
2004-04-21
Invited presentation at the Workshop On Software Performance and Reliability (WOPR2) Menlo Park, California, April 15-17 2004
Benchmarking, by which I mean driving any computer system with a controlled workload, is the ultimate in performance testing and simulation. Aside from being a form of institutionalized cheating, it also offers countless opportunities for systematic mistakes in the way the workloads are applied and the resulting measurements interpreted. Right test, wrong conclusion is a ubiquitous mistake that happens because test engineers tend to treat data as divine. Such reverence is not only misplaced, it is also a sure ticket to production hell when the application finally goes live. I demonstrate how such mistakes can be avoided by means of two war stories that are real WOPRs: (a) how to resolve benchmark flaws over the psychic hotline, and (b) how benchmarks can go flat with too much Java juice. In each case I present simple performance models and show how they can be applied to correctly assess benchmark data.
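One simple performance model of the kind the abstract alludes to is Little's law, N = X * R, which any self-consistent set of benchmark measurements must satisfy. The sketch below is a hypothetical illustration (the function name and all numbers are invented, not taken from the paper) of using it as a sanity check on load-generator output:

```python
# Hypothetical sanity check of benchmark measurements using Little's law
# (N = X * R): the mean number of requests in the system must equal
# throughput times mean response time. A large discrepancy suggests the
# workload or the measurement, not the system under test, is at fault.

def littles_law_residual(concurrency, throughput, response_time):
    """Relative discrepancy between reported concurrency and X * R."""
    predicted = throughput * response_time
    return abs(concurrency - predicted) / concurrency

# Numbers as a (fictitious) load generator might report them:
# 100 virtual users, 480 requests/sec, 0.2 s mean response time.
resid = littles_law_residual(concurrency=100, throughput=480.0, response_time=0.2)
print(f"Little's law residual: {resid:.1%}")
```

A residual of a few percent is ordinarily measurement noise; tens of percent would indicate the kind of benchmark blunder the talk describes, such as think-time misconfiguration or a saturated load generator.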