While there is a section of the proposal addressing this, I feel the scope of the challenge could be clarified.
Here's what I'd like to know:
1. How many total pathways do you expect users to model throughout the lifetime of the platform? (presuming the project is successful and users do in fact use it)
2. How many man-hours do you expect it to take, on average, to model each pathway?
With these two pieces of data we can think clearly about whether it's worth the time investment of creating a crowdsourcing platform and gamifying the experience. As the proposal mentioned, you could just pay people to do this through Amazon Mechanical Turk.
Now I think there's likely a lot of value in getting real science do-gooders involved, creating a platform that may be reusable, etc. However, I think there should be more clarity about the scope of what you're proposing.
This funding mechanism is explicitly billed as "exploratory." It's two years of initialization funding to see if there is something worth really investing in, especially in terms of gamification. So, I don't have real answers to those questions because it's never been done before... the platform doesn't exist in any form currently.
My goal with this project was to build and organize biomedical knowledge. So I focused on the fact that there is knowledge out there waiting to be curated and on a few ways to go about curating it, with gamification as the primary approach and some backup plans.
Based on the NIH reviewer statements, however, there was no critique of the biomedical aims and their value, nor of the approach toward those aims; the critique was rather that the approach itself wasn't a new way to crowdsource and that the aims didn't advance crowdsourcing science itself.