Addition: It seems the map list is the one with the (roughly) correct ratings. I gave Copper a five-star rating (deserved, I was entertained), and the map list gained about 0.03 points, while the rating on the review page stayed static.
Bayesian average: it seems to produce deeper bottoms and higher peaks.
Anyway, when I was wondering how the ratings relate between the map's own page and the list, I concluded there must be some offset or ballast lowering the list rating when a map has received few votes; this is especially visible with fresh maps. In other words, if the map's page shows a perfect five with very few votes in the pool, the list would show a rating of, say, three, as if the low vote count were itself a strong negative vote. Apparently it was something else contributing to my impression: the difference between the two ways of computing averages, as 'Spirit' explained.
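For what it's worth, the "ballast" effect described above is exactly what a Bayesian average does: ratings from maps with few votes get pulled toward a site-wide prior mean. Here's a minimal sketch in Python; the prior mean (3.0) and prior weight (10 phantom votes) are illustrative assumptions, not the site's actual parameters.

```python
def bayesian_average(votes, prior_mean=3.0, prior_weight=10):
    """Blend a map's own votes with a site-wide prior.

    With few votes the result stays near prior_mean; as real votes
    accumulate, it converges to the plain arithmetic mean.
    prior_mean and prior_weight are assumed values for illustration.
    """
    n = len(votes)
    return (prior_weight * prior_mean + sum(votes)) / (prior_weight + n)

# A fresh map with two perfect five-star votes:
print(bayesian_average([5, 5]))       # pulled down toward 3.0
# The same map after fifty five-star votes:
print(bayesian_average([5] * 50))     # much closer to the raw 5.0
```

With two five-star votes the result is about 3.33 rather than 5.00, which matches the impression that a low vote count acts like a negative vote of its own.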
By the way, very good question, @‘GenericJohnDoe’.