With my test file the difference was just 0.02 dB. I thought the ReplayGain algorithm was a standard, so that all implementations would produce the same results. Different decoder output would explain it; I didn't know that different decoders can produce different output.
Here's an interesting decoder test from the not-so-distant past: http://mp3decoders.mp3-tech.org/intro.html.
To see how current decoders behave, I compared the decoded output of foobar2000 and dBpoweramp. I converted two short MP3 files to 32-bit float WAV. The resulting files do not contain identical audio data:
Differences found in 2 out of 2 track pairs.
Comparing:
"F:\Test\m1_fb2k.wav"
"F:\Test\m1_dbpa.wav"
Differences found: 804889 sample(s), starting at 0.0000000 second(s), peak: 0.0000006 at 2.2336961 second(s), 2ch
Comparing:
"F:\Test\m2_fb2k.wav"
"F:\Test\m2_dbpa.wav"
Differences found: 805515 sample(s), starting at 0.0000000 second(s), peak: 0.0000010 at 8.2026757 second(s), 2ch
The combined duration of one file pair is 882000 samples (10 + 10 s at 44.1 kHz), so roughly 90% of the samples differ. However, these differences must be very small, because this time they don't show up in a ReplayGain scan: the peak and dB values are identical for both decoders.
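The kind of comparison shown in the log above can be sketched in a few lines. This is a hypothetical helper, not the tool I actually used, and it assumes both decodes are already loaded as flat lists of float samples at the same rate:

```python
def compare_samples(a, b, sample_rate=44100):
    """Compare two decoded sample streams; report differing samples,
    the time of the first difference, and the peak difference."""
    diffs = [(i, abs(x - y)) for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if not diffs:
        return None  # streams are bit-identical
    peak_i, peak = max(diffs, key=lambda d: d[1])
    return {
        "count": len(diffs),
        "first_s": diffs[0][0] / sample_rate,
        "peak": peak,
        "peak_s": peak_i / sample_rate,
    }

# Tiny demonstration with synthetic data (not real decoder output):
a = [0.0, 0.25, 0.5, 0.75]
b = [0.0, 0.25 + 5e-7, 0.5, 0.75 - 1e-6]
print(compare_samples(a, b))
```

With real decoder output the differences land in the last bits of the 32-bit float samples, which is why they are audible to a bit-compare tool but invisible to a ReplayGain scan.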
Another possible factor is gapless decoding. A gaplessly decoded file always has a slightly shorter duration, and that may affect the result, at least in theory. MP3Gain probably does not read the LAME info tag, so it would not apply gapless trimming.
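For reference, the LAME info tag stores the encoder delay and padding, and a gapless-aware decoder trims both, while a naive decoder simply outputs whole 1152-sample MP3 frames. A rough sketch of the resulting length difference (the delay and padding values below are illustrative, not read from my test files):

```python
FRAME = 1152  # samples per MPEG-1 Layer III frame

def naive_length(frames):
    # A non-gapless decoder outputs every sample of every frame.
    return frames * FRAME

def gapless_length(frames, encoder_delay, encoder_padding):
    # A gapless decoder trims the delay and padding declared in the LAME tag.
    return frames * FRAME - encoder_delay - encoder_padding

# Illustrative numbers: a 10 s track at 44.1 kHz is 441000 samples.
total = 441000
delay = 576                                # hypothetical encoder delay
frames = -(-(total + delay) // FRAME)      # round up to whole frames
padding = frames * FRAME - total - delay   # filler added by the encoder
print(naive_length(frames), gapless_length(frames, delay, padding))
```

The naive decode is a few hundred samples longer than the original material, and those extra (mostly silent or transient) samples are part of what a scanner analyzes.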
Regarding a possible difference in the analyzer code: the same algorithm can be implemented in different ways, so the analysis results may differ slightly, but I am not an expert in this field.
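One concrete reason two implementations of the same analysis can disagree in the last digits: floating-point addition is not associative, so summing the same squared samples in a different order can change the result slightly. A minimal Python illustration, with exaggerated values so the effect is visible:

```python
a = [1e16, 1.0, -1e16]

s1 = (a[0] + a[1]) + a[2]  # the 1.0 is lost when added to 1e16 first
s2 = (a[0] + a[2]) + a[1]  # the big terms cancel first, so 1.0 survives

print(s1, s2)  # same numbers, different summation order, different result
```

In a real scanner the discrepancy would be far smaller than this, but it is enough to move a gain value by a hundredth of a dB.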