Measuring number of floating point operations

Status
Not open for further replies.

Manolo

Posts: 23   +0
I'm interested in knowing how many floating point operations a piece of software executes. I don't know if this is even possible. I don't think that estimating the computer's performance in FLOPS and then multiplying by the total execution time is very reliable, because of disk access, memory access, etc., but it's definitely worth a try.

I ran one of the benchmarks I found here and it reports that my P4 2.8 GHz executes about 450 MFLOP/s. By what percentage should I decrease that number to account for real-life software execution?
 
1) FLOPS: FLoating point Operations Per Second.

Most commercial software runs very little FP math. FP instructions are among the slowest in the CPU, so they reflect worst-case performance. Fixed-point math runs significantly faster, and of course memory-to-memory operations are limited by the bus speed.

2) You are correct about HD speeds, but normally we segregate CPU benchmarks from I/O benchmarks because of the bus/DMA limitation issue.

If you look at the processor ratings {xx MHz, yy GHz} and the single- vs dual-core variations, you can see it is very difficult to convert benchmark scores into CPU utilization numbers.

Those running overclocked processors know there are variations in the timings that affect this too. Memory timings are given as a series of numbers, for instance 2-3-2-6-T1, 3-4-4-8 or 2-2-2-5. These numbers indicate how many clock cycles it takes the memory to perform a certain operation; the smaller the number, the faster the memory.
 
It's for a program I wrote in C that will be executed on some random hardware. Ideally, I'd like to type something like
Code:
MeasureFLOP program.exe -units:MFLOP
and get the millions of floating point operations executed by program.exe. The coarse approach is to say avgMFLOPS * ExecTime = NumberOps
 