Thinklab takes a bold and unprecedented approach to rewarding scientific contribution. The mechanisms of this system are in their infancy and would benefit from an analysis of existing rating data. And the meta project is an ideal forum to have this discussion.
However, rater identities are currently private. Since there are presently few ratings, a diligent party could easily unmask a rater's identity. Nonetheless, I think it's important that privacy be maintained. One potential workaround to enable analytics is to let raters choose to make their identity public. Fundamentally, it's a user's own prerogative to publicize their ratings.
Since any meaningful analysis of the current rating database will compromise rater identities, I think consented transparency is a must. Therefore, if @larsjuhljensen is willing to make his ratings to date public, so am I, and we could proceed with the aforementioned analysis.
Of course, the desire to remain private is a no-questions-asked affair. However, if all concerned parties (@larsjuhljensen, @jspauld, and I) are okay with the proposed analysis, I will happily proceed.
Finally, I think there is longer-term potential for optional public rating. Perhaps this could be implemented as a non-default option on the rating bar. Public rater identities wouldn't need to be immediately visible; rather, the data would be available when analysis comes knocking. In many instances, I think it will be instructive to know who rated what, and how. And leaving that choice up to the rater should avoid any nasty consequences.
This is something that probably should have been suggested privately. I think suggesting it publicly puts unfair pressure on @larsjuhljensen. In any case, I'm not sure it's something we should do right now — I think this would create a mistaken impression that rating data is public information. Of course, I'm not saying your proposed analysis wouldn't be interesting or potentially useful. And I'm also not saying that we wouldn't do such analysis internally at ThinkLab.
Finally, I think there is a longer term potential for optional public rating.
At this point I'm skeptical that this is a good idea. Even if people choose to make their ratings public, I imagine this could still easily create bias. For example, will people feel comfortable giving a colleague or respected scientist a low score? And if someone sees they've been rated high or low, will they feel compelled to return the favor or non-favor?
How about we revisit this idea at some point in the future when there is a lot more rating data? Perhaps you can suggest some analyses that we can do internally, and we can publish the results in a blog post.