Proposal:
Pathways4Life: Crowdsourcing Pathway Modeling from Published Figures [pathways4life]

Are there significant differences in difficulty between pathway modeling tasks?


A significant part of the proposal seems to be gamification based on users leveling up to tackle more challenging pathways while scoring more points.

This is great, but it presumes that there are in fact noticeable differences in difficulty between tasks. As someone who is not familiar with modeling pathways, I wonder what makes one harder than another. Aren't users essentially just copying the nodes, arrows, and relationships they see in the image? Couldn't most users complete all of them with relative ease? It might be worth addressing this.
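To make my question concrete, here is a minimal sketch of the kind of difficulty-gated assignment the proposal seems to describe. Everything here is hypothetical: the names (`PathwayTask`, `eligible_tasks`, `points_for`), the 1-5 difficulty scale, and the quadratic scoring are my own illustration, not anything specified in the proposal. My question is whether the `difficulty` field would actually vary enough to matter:

```python
from dataclasses import dataclass

@dataclass
class PathwayTask:
    image_id: str
    difficulty: int  # hypothetical scale: 1 (easy) .. 5 (expert), assigned during triage

def eligible_tasks(tasks, user_level):
    """A user may attempt any task at or below their current level."""
    return [t for t in tasks if t.difficulty <= user_level]

def points_for(task):
    """Harder pathways score more; the quadratic bonus is an arbitrary choice."""
    return 10 * task.difficulty ** 2

tasks = [PathwayTask("fig_001", 1), PathwayTask("fig_002", 4)]
print([(t.image_id, points_for(t)) for t in eligible_tasks(tasks, user_level=2)])
# [('fig_001', 10)]
```

If nearly every figure ends up with the same difficulty score, the gating and the point spread collapse into a flat game.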

The difficulty matrix figure specifies only a set of pathways that are easy for humans (90% of them) and a set of hard ones (9%).

I was surprised by this as well. In our preliminary look at 40k images, we found the difficulty range to be huge. I tried to express this in the matrix figure, but really didn't have the space to include a sampling of images at sufficient resolution. There's a wide range, with a spectrum of levels in between.

The easy ones are really easy. The harder ones would actually be hard for an expert biologist. In fact, I suspect there are many that are NOT possible to model. We would filter out as many of these as we could ahead of time, but we could also detect them via "skip" events and lack of consensus. It comes down to the fact that researchers literally make up their own unique formats and conventions almost every time a pathway drawing is made outside of a dedicated modeling tool. It's a big problem.
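As a rough illustration of that detection idea, here is a minimal sketch; the thresholds, field names, and the shape of the consensus score are assumptions for the sake of the example, not part of our design:

```python
# Hypothetical sketch: flag images that users skip too often, or on which
# submitted models fail to converge, as candidates for expert review/removal.
def flag_unmodelable(stats, skip_threshold=0.5, consensus_threshold=0.3, min_attempts=5):
    """
    stats: dict mapping image_id -> {"attempts": int, "skips": int, "consensus": float},
    where "consensus" is an assumed 0..1 agreement score across submitted models
    (e.g., fraction of nodes/edges shared by a majority of submissions).
    """
    flagged = []
    for image_id, s in stats.items():
        if s["attempts"] < min_attempts:
            continue  # not enough signal yet
        skip_rate = s["skips"] / (s["attempts"] + s["skips"])
        if skip_rate > skip_threshold or s["consensus"] < consensus_threshold:
            flagged.append(image_id)
    return flagged

stats = {
    "fig_101": {"attempts": 12, "skips": 20, "consensus": 0.8},   # skipped constantly
    "fig_102": {"attempts": 10, "skips": 1,  "consensus": 0.15},  # no agreement
    "fig_103": {"attempts": 10, "skips": 1,  "consensus": 0.9},   # fine
}
print(flag_unmodelable(stats))  # ['fig_101', 'fig_102']
```

In practice the consensus score would come from comparing submitted node and edge sets across users; the point is simply that both signals are cheap to compute from activity logs, so unmodelable figures surface on their own.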

 