The confidence score S for a snippet is the sum of observations o (+1 for a confirmation, −1 for a rejection), each weighted by the relative skill level w (0–1) of the participant. We can include a multiplier m for negative observations (e.g., m = 2) to convey the extra effort involved in making a negative call, and assess this sum against a threshold T (e.g., T = 10) to ultimately mark a snippet as confirmed (i.e., S ≥ T).
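A minimal sketch of that scoring rule, assuming observations arrive as (o, w) pairs; the constant names and example values are illustrative, not from the original design:

```python
NEGATIVE_MULTIPLIER = 2   # m: extra weight on rejection calls
THRESHOLD = 10            # T: score at which a snippet is marked confirmed

def confidence_score(observations):
    """observations: list of (o, w) pairs, where o is +1 (confirm) or
    -1 (reject) and w is the participant's skill level in [0, 1]."""
    score = 0.0
    for o, w in observations:
        multiplier = NEGATIVE_MULTIPLIER if o < 0 else 1
        score += o * w * multiplier
    return score

def is_confirmed(observations):
    return confidence_score(observations) >= THRESHOLD

# Example: three confirmations and one rejection; the rejection counts
# double, so the net score is 0.9 + 0.8 + 0.7 - (2 * 0.5) = 1.4
obs = [(+1, 0.9), (+1, 0.8), (+1, 0.7), (-1, 0.5)]
```

Note that a single skilled rejection can offset two confirmations, which is the intended effect of the multiplier.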
Once you have accumulated a decent amount of data, and assuming you've marked a sufficient number of pathways as officially confirmed, I expect you could use machine learning to optimize the confidence score equations.
You could start by optimizing the equation for how much trust the system should place in any given user. You might look at how new the user is, how many of their edits turned out to be correct, and the difficulty of those edits. You would then probably want to optimize the equation for confidence in a particular snippet, as well as confidence in an entire pathway being correct.
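One way to sketch the user-trust step is a small logistic model fit on edit history. Everything here is an assumption for illustration: the feature set (tenure, past accuracy, average edit difficulty), the toy data, and the training hyperparameters; a real implementation would likely use an off-the-shelf library rather than hand-rolled gradient descent.

```python
import math

def trust(features, weights, bias):
    """Squash a weighted feature sum into (0, 1), usable as the
    skill weight w in the confidence score."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))

def fit(examples, labels, lr=0.1, epochs=500):
    """Fit the trust model by stochastic gradient descent on log loss.
    examples: feature vectors per user; labels: 1 if the user's past
    edits turned out to be correct, 0 otherwise."""
    n = len(examples[0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            p = trust(x, weights, bias)
            err = p - y  # gradient of log loss w.r.t. the logit
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
            bias -= lr * err
    return weights, bias

# Toy history: (tenure in years, past accuracy, mean edit difficulty)
X = [(0.1, 0.5, 0.2), (2.0, 0.95, 0.8), (1.5, 0.9, 0.6), (0.2, 0.4, 0.9)]
y = [0, 1, 1, 0]
weights, bias = fit(X, y)
```

The same shape of model could then be reused at the snippet and pathway levels, with features drawn from the accumulated observations instead of user history.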
Possibly. Given that this was for a 2-year grant, however, I think dedicating machine learning to tuning a simple scoring mechanism would be outside the scope of the project.