Looking back at my last post, I realize I could have done things in a much more robust (and frankly much easier) way than running the test multiple times manually and looking for patterns. While the conclusions from that post still hold, I’d like to present a more rigorous set of results, along with the method I used to produce them.
That method boils down to ‘use a couple of bash scripts to run each test many times and average the results’. First, I wrote a script I could use to time any program an arbitrary number of times:
#!/usr/bin/env bash
# Usage: timescript <program> <trials>
TOTAL=0
for (( x = 0; x < $2; x++ )); do
    # Capture the 'real' (wall-clock) line from `time` and strip it down
    # to the seconds value
    TEMP=$( ( time $1 ) 2>&1 | grep real | grep -o '[0-9]\.[0-9]*' )
    TOTAL=$(bc <<< "scale=3;$TOTAL+$TEMP")
done
RESULT=$(bc <<< "scale=3;$TOTAL/$2")
echo "The average time for $1 across $2 trials is: $RESULT seconds"
Then I made a simple wrapper script to run this on each of my different volume scalers:
#!/usr/bin/env bash
TRIALS=$1
bash timescript ./vol_noscale $TRIALS
bash timescript ./vol1 $TRIALS
bash timescript ./vol2 $TRIALS
bash timescript ./vol3 $TRIALS
bash timescript ./vol_inline $TRIALS
bash timescript ./vol_intrinsics $TRIALS
Then I ran this script inside tmux (just in case of a disconnection) and waited for it to spit out the results:
The average time for ./vol_noscale across 100 trials is: 5.084 seconds
The average time for ./vol1 across 100 trials is: 5.228 seconds
The average time for ./vol2 across 100 trials is: 6.409 seconds
The average time for ./vol3 across 100 trials is: 5.180 seconds
The average time for ./vol_inline across 100 trials is: 5.166 seconds
The average time for ./vol_intrinsics across 100 trials is: 5.166 seconds
And there you have it.