Project:
Thinklab Meta [meta]

ThinkLab as a vetting system for traditional grants


Most (90%) proposals submitted to the NIH and elsewhere are not funded. This is an enormous waste of time and energy. Can ThinkLab be used to improve these large grants before they are submitted, to increase their chances of getting funded, perhaps to filter out those that have no chance earlier in the process, or to redirect the applicant to a more appropriate funder (such as a ThinkLab sponsor)? It seems the system as it stands could be used for that very important purpose. What other new mechanisms would be needed to make that specific process more effective?

As an example, here is what a twice-failed R21 NIH grant looks like, along with the rejection letter (called the Summary Statement). http://sulab.org/2014/03/nih-grant-proposal-for-sale/ Could ThinkLab have helped this one? Is it salvageable? Would it make sense to resubmit it for this new call? http://grants.nih.gov/grants/guide/rfa-files/RFA-CA-15-006.html

Thanks for the great idea, Ben. This was also suggested to me in an offline conversation with @newyorklenny.

I think this very well could be the idea that really helps ThinkLab turn the corner! It seems like it might really hit a pain point. And the great thing is, anybody who sees the benefit of ThinkLab in improving their proposal can easily decide to continue their research as an open project, as @dhimmel is doing.

That's what I'm thinking, but I'd really appreciate getting feedback from others here. What do people think? Will researchers openly post their proposals in order to get feedback and increase their odds of getting funded?

Lenny, you're welcome to reiterate what you've said. Would also love to hear from @jonathaneisen and @hollyganz (who I worked with to put up the first ThinkLab proposal). What do you guys think?

I think in general it is a good idea. Though I note that, in my experience, most people put together grant proposals at the last minute, so this might only help after one round of rejection.

The last-minute point is totally valid. Actually, one of my concerns about putting something up here is that the attention required to develop and monitor it would take too much time away from meeting the next deadline (e.g. the letter of intent for that crowdsourcing call is one week away). Ideas:

(1) Posting less-than-fully-baked concepts here should be encouraged. Daniel's proposal was well thought out and pretty complete when I saw it, very close to a state that would be ready for a typical grant. It would be cool if you supported "1-pagers" to gauge community feeling for an idea and to quickly get feedback on major direction changes, basically much like the letter of intent process.

(2) If you do this, go whole-hog on a pattern for it. Support the whole process: idea, opportunity discovery, letter of intent, drafting, letters of support, personal statement... Get them all out in the open here. Importantly, add a timeline feature so that people can see the deadlines that need to be met.

(3) Catalogue success rates. Follow up with people who try this to see how it impacted them. Many people will hesitate to share early-stage ideas openly; you will need evidence that doing so really does produce better outcomes for their own interests before this gains a lot of momentum.

(4) Phil Bourne (bioinformatics director at NIH) would likely be very supportive of something like this. It fits his philosophy very well, from what I can tell.

(5) I've had discussions about something like this with Jonathan Wren. Would be good to get his feedback as well.

Thanks Ben, this is great.

  1. Very much agree on the "1-pagers". This will be easy to do. Although I suspect that, in the beginning, the primary interest may come from people who have already had a proposal rejected?

  2. I don't personally have experience with submitting NIH/NSF proposals, so I'll be relying on learning everything I can from others. Would you mind going into more detail here? Are you essentially suggesting that ThinkLab be not just a place to get feedback and improve proposals, but basically an entire front-end to the NIH proposal process?

  3. Agree

  4. I'm in touch with Daniel Mietchen (@daniel_mietchen) who works with Phil now. I believe Daniel has spent considerable time thinking about this idea of opening up research proposals. In fact, he's put up a proposal for exactly that. There are a lot of great ideas in there. And the fact that we can benefit from them now actually demonstrates part of the value of open proposals!

  5. Thanks, I've emailed him :)

Another person to contact is Titus Brown at UC Davis, who has been posting his grant proposals on his blog and getting feedback on them there.

Titus Brown's blog: http://ivory.idyll.org/blog/

The key, as usual, is where the money is, and this seems to be a problem with the current content on ThinkLab: there is not nearly enough of it. The "small" award I am looking to apply for right now is $400,000. That is the underlying reason for the suggestion to orient a ThinkLab process around getting participants access to the large pots of research money that do exist (at the NIH and similar places). Increasing the chances of gaining such funding is definitely a value proposition for ThinkLab participation. If you achieve a flow of researchers here that way, it should increase your chances of diverting them towards other resources as they arise.

As a very junior grant writer, I find the structures provided by this book extremely helpful in preparing my applications. It goes into great detail about the various kinds of NIH grants and the elements they are composed of. If you are keen on this direction, it's worth a look. If ThinkLab took something like the process suggested in that book and translated it into software modules imbued with the collaborative features of ThinkLab, it would be very interesting. I would be happy to test it. Our lab also has a habit of posting our proposals, e.g. http://sulab.org/category/GeneWikiRenewal/, though it does seem time to start opening things up earlier.

Note that you wouldn't need to have things as tightly laid out as they do in that (200+ page) book. Just having the core structures defined and attached to timelines would be great... Though the more utility you put in, the more attractive it becomes.

I agree it would be good to have mechanisms to develop research ideas into grant proposals in the open, and that an iterative approach (perhaps starting with a one-pager or even shorter, as per http://beta.briefideas.org/ ) seems most promising, especially since feedback is badly broken in many funding schemes. ThinkLab can be of help here, keeping in mind that features like automated generation of PERT and Gantt charts as well as budget overviews are amongst the major reasons why people often resort to dedicated grant-writing environments.

Yes, timing (especially distance to deadlines) is crucial and will likely have a strong influence on the amount of attention people can devote to such open drafting. On the other hand, by drafting in the open, you can get help that would otherwise be hard to get: our H2020 proposal at http://dx.doi.org/10.5281/zenodo.13906 was written in five weeks around Christmas, with significant contributions from multiple people outside the consortium (which itself was formed only on the basis of the published idea; with most of the partners, I had had no prior interaction).

As for my work with Phil at NIH, my notes are at https://github.com/Daniel-Mietchen/datascience , and exploring openness in funding contexts tops the list. I would thus welcome it very much if you (or others) were to step forward to draft NIH proposals in the open and to keep track of any red tape that stands in the way. I will work on trimming the red tape and combine that with creating incentives for sharing proposals. We try to be attentive to demands from the community, so suggestions, pull requests etc. are most welcome.

Independent of these NIH activities, I am pursuing the idea of publishing research proposals and associated materials more formally, as sketched out in the News Challenge proposal that Jesse mentioned above as well as in http://dx.doi.org/10.1371/journal.pbio.1002027 . Nothing more specific to show here yet, but I expect this to be operational before the end of the year. Ping me if you would like to be involved.

  • Jesse Spaulding: Thanks for your comments and links Daniel! We will certainly publish any suggestions we have for the NIH as far as removing any red tape or supporting open science. The community is lucky to have you there at NIH! Please keep us posted on any new initiatives!

The idea I proposed to Ben was essentially "gamifying" grant reviews. It would basically work something like this:

1) A grant writer would submit key concepts (overview/aims) for review, fitting them into one page maximum to keep it brief.
2) Competitors would try to identify both strong and weak points of the grant by highlighting text and categorizing each as a minor/major weak or strong point.
3) Points are scored by guessing, as accurately as possible, which areas are considered by others as strong and weak points (which likely reflects how the real grant reviewers will feel).
4) The more text highlighted, the fewer points a reviewer is likely to get, as the score would be a function of how specifically their own weak points aligned with others', with what frequency, and with what severity. I haven't worked out the cost function yet, but have a general idea in mind (see the sketch after this list).
5) Badges mark specific milestones, and leaderboards track the people best able to guess points of criticism/praise.
6) There are 3 points of value in this:
a) For grant writers: Obviously, they get strategic feedback before the grant actually gets sent to real reviewers.
b) For casual reviewers: They will write their own grants and would like to think they can identify strong/weak areas in their own work. If they're not good at it, then the game offers them a way to get better.
c) For serious reviewers: People on the leaderboard become eligible to accept pay-for-review offers to review the entire grant. They can set their price - if it's too high, then they price themselves out of the market and get no offers. An average fee (based on past fees) will display at the top of the leaderboard to guide both writers & reviewers.
7) Submitting grants can be done either by pay-to-play or by earning enough points as a reviewer. If you don't want to help the system by reviewing, then you need to ante up some cash.
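
To make (3) and (4) concrete, here is a toy sketch of what such a cost function might look like. Everything in it (the `Highlight` structure, the overlap test, the length normalization) is a placeholder assumption, not a worked-out design:

```python
from dataclasses import dataclass

@dataclass
class Highlight:
    start: int      # character offset where the highlight begins
    end: int        # character offset where it ends
    polarity: str   # "strong" or "weak"
    severity: str   # "minor" or "major"

    @property
    def length(self) -> int:
        return self.end - self.start

    def overlaps(self, other: "Highlight") -> bool:
        return self.start < other.end and other.start < self.end

def consensus_weight(h: Highlight, others: list[Highlight]) -> float:
    """Fraction of the other reviewers' highlights that overlap h and
    agree on polarity and severity (a stand-in for crowd consensus)."""
    if not others:
        return 0.0
    agreeing = [o for o in others
                if h.overlaps(o)
                and o.polarity == h.polarity
                and o.severity == h.severity]
    return len(agreeing) / len(others)

def reviewer_score(mine: list[Highlight], others: list[Highlight]) -> float:
    """Agreement weighted by span length and normalized by the total
    amount highlighted, so blanket highlighting dilutes the score."""
    total = sum(h.length for h in mine)
    if total == 0:
        return 0.0
    return sum(consensus_weight(h, others) * h.length for h in mine) / total
```

The normalization in `reviewer_score` is what implements (4): highlighting everything dilutes agreement rather than boosting it.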

Surely, there will be kinks to iron out, but what do you guys think of this idea?

@jonathanwren the idea sounds genius to me. Presumably you wouldn't be posting it if you didn't want someone to take it and implement it? A few questions:

  1. For an initial one-page proposal, how useful would it be to see a heat map of the perceived strong and weak points within a proposal? Presumably you would also want to know why something is seen as weak?

  2. Do you see a reason to allow free-form highlighting? Why not just do this sentence by sentence? Perhaps free-form would be more fun?

  3. What percentage of researchers do you think would pay for an open peer review (the serious review)? How much would they pay? Where is the money coming from? And how much time would we expect the average reviewer to spend reviewing?

@jspauld,

1) Although it is definitely what people want to know, I think trying to synthesize the "why" of it from multiple responses will increase the complexity. Presumably, the sentence itself would tell the writer what people may have a problem with, but I guess allowing (rather than requiring) commentary would be a plus.

2) Free-form, yes - people choose whatever they want to highlight. Single word, phrase, sentence, etc.

3) I think that's going to depend on what they get out of it. The nice thing about the proposed system is they can evaluate how many grants the leaders have reviewed and how well they did. In other words, they don't have to guess as to whether or not it would be useful - they have data for that. I don't know how much they would pay. I don't know how much time reviewers would spend, but the leaderboards and badges would distinguish the serious from the casual reviewers.

The more time someone has to spend on the grant review, the less incentivized they will be. The incentive structure has to be set up so that, if they do spend more time, they gain something: points, status, recognition, ranking, etc. There's no need to worry about individual reviewers; they will come in all forms, sizes, and degrees of interest, and the ranking system will segregate them accordingly.

Working on some copy for the homepage. This wouldn't be specifically for @jonathanwren's idea, but I wouldn't expect the selling points to be too different anyway.


Crowdsourced feedback for NIH grant proposals

(and any other research proposal)

Benefits

  • Receive $1000 to support crowdsourced feedback on your proposal
  • Improve your odds of NIH/NSF funding
  • Establish provenance over your research ideas
  • Increase the visibility and impact of your work
  • Connect with new potential collaborators

This seems pretty compelling to me. What does everyone think? Keep in mind I'm trying to create selling points for a potential proposal poster. (Not necessarily selling the benefits to science/society, of which I believe there are many.) Am I missing anything?

  • Jonathan Eisen: I think it is a bit risky to say "Improve your odds of NIH/NSF funding" ... seems possible but not proven

  • Jesse Spaulding: I'm thinking this is a big selling point though. Someone who doesn't believe it probably wouldn't post their proposal anyway. So unless this is really a turn-off, I would think it's worth taking the risk and leaving it in.

NIH is already showing specific interest in such matters:
http://grants.nih.gov/grants/guide/rfa-files/RFA-CA-15-006.html

I think others following this thread are well aware of that.

Hypothes.is gives a nice mechanism for markup of individual sentences or parts. I'd also be happy to pilot something like this with an R01 (either A0 or A1) at some point if you want a test case for either a new submission or a revision of a larger grant. I'd like a system that is web-based but that can output to an NIH style, that shows the grant as it would exist (e.g. with page marks, etc.), that allows inline commenting, and that works for the component documents that make up a grant.

I've used ShareLaTeX, but it didn't previously work with GitHub for version control, though I think that was one of their requested features. Google Docs is OK, but I'd like something that has the idea of a "commit" instead of live editing.

@caseygreene thanks for expressing an interest in this.

We're currently preparing a new version of the site with a focus on crowdsourced feedback for grant proposals so I would love to have you try it out!

I just want to clarify what you see as the primary benefits and the reason for your interest in piloting this. From my perspective the primary benefit is that we are creating an incentive for people to review your proposal. (By having feedback posted publicly and recognizing and paying people based on the value of their feedback.)

Are we on the same page there? Beyond this, what would be the requirements for trying this? You mentioned:

  • Output to NIH style — You want to be able to output to PDF documents that can be directly attached to your proposal, correct? This should be doable.
  • Inline commenting — Is your interest in this that you want people to be able to make comments inline, or to view comments inline? Do you basically want Google Docs' inline commenting feature reproduced? Making comments inline (by selecting text to quote) should be quite doable and seems like a good idea; however, viewing comments inline could be quite tricky. Consider that this discussion thread (that you're reading) might be a discussion thread on your proposal.
  • The idea of a "commit" — Could you describe what you're looking for here and why? Currently, ThinkLab creates a commit with each "save" of the proposal. There's no integration with GitHub but there is a revision history page.

Any further information would be very helpful. Thanks!

@jspauld — here are my current thoughts:

Output: Yes. PDF documents with appropriate margins, size 11 Arial, and figures + legends that meet the requirements of an NIH grant. It could also just output to plain text that could be incorporated quickly into a word processor template, though if it doesn't handle the output itself, accurate length estimates ("~11 formatted pages") are critical.
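
On the length-estimate point, even a crude word-count heuristic might do as a first pass. A minimal sketch, where the words-per-page constant is an assumed density for single-spaced 11 pt Arial that would need calibrating against real formatted grants:

```python
# Assumed density for single-spaced 11 pt Arial with standard NIH
# margins; a guess to be calibrated against real formatted proposals.
WORDS_PER_PAGE = 600

def estimated_pages(text: str, words_per_page: int = WORDS_PER_PAGE) -> float:
    """Rough formatted-page estimate from a plain-text draft."""
    return len(text.split()) / words_per_page

# e.g. f"~{estimated_pages(draft_text):.0f} formatted pages"
```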

Inline commenting: I'd like to see people be able to make comments inline. On the view front, that could be configured on a proposal-by-proposal or even comment-by-comment basis. Something double-opt-in for public comments would be great (the author, either before or after the comment is made, clicks a button to flag it as visible, and the comment contributor can mark a comment as allowed to be made visible; see the sketch after the exchange below). I can definitely imagine cases where someone comments:

CX: Commenter X
A: Author

C1: Have you thought about XYZ?
A: Yes, due to ABC we didn't talk about them here but we could add Z, would that be more clear?
C1: I think you really need X and Y as well.
C2: What about Q which shows that Z depends on X and Y?
A: @C2 @C1 — Thanks! This really helped me clarify this. Take a look at the next commit and let me know if you think it helped.
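
The double-opt-in rule sketched above boils down to a simple conjunction. A minimal illustration, with all field names assumed:

```python
from dataclasses import dataclass

@dataclass
class InlineComment:
    body: str
    author_approved: bool = False     # proposal author opts in
    commenter_approved: bool = False  # comment contributor opts in

    @property
    def publicly_visible(self) -> bool:
        # Double-opt-in: the comment goes public only once both the
        # author and the commenter have flagged it as shareable.
        return self.author_approved and self.commenter_approved
```

Under this rule, an exchange like the one above stays private until both sides opt in.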

I like the way that Medium does inline comments (essentially on a paragraph-by-paragraph basis):
https://medium.com/@greenescientist/why-do-you-want-to-be-a-scientist-e4a94a93af78

The idea of a commit: essentially labeled savepoints. More like a tag, if you're already versioning each save. I think that by default it'd be nice to have only tagged versions be visible (or even only specific tagged versions). There are times when feedback is helpful, and other times when I'm trying to figure out how best to structure something, and feedback at that point can just be more time-consuming.
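
If each save is already versioned, the tag idea might amount to an optional label plus a visibility filter over existing revisions. A minimal sketch, with all names assumed:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Revision:
    number: int
    content: str
    tag: Optional[str] = None  # e.g. "draft for co-PIs"; None = untagged autosave

@dataclass
class Proposal:
    revisions: list[Revision] = field(default_factory=list)

    def save(self, content: str, tag: Optional[str] = None) -> Revision:
        """Every save creates a revision; a tag marks it as a savepoint."""
        rev = Revision(len(self.revisions) + 1, content, tag)
        self.revisions.append(rev)
        return rev

    def visible_revisions(self) -> list[Revision]:
        """By default, reviewers would see only tagged savepoints, keeping
        untagged work-in-progress saves out of sight."""
        return [r for r in self.revisions if r.tag is not None]
```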

This might be something that I'd have to try, see how it works, and then see how my opinion changes during the process.

Thanks for your feedback, everyone. We've announced this feature here: Thinklab open grant proposal review has arrived!

 
Status: Completed
Labels: Biz Dev, Feature Suggestion
Cite this as:
Benjamin Good, Jesse Spaulding, Jonathan Eisen, Jack Park, Daniel Mietchen, Jonathan Wren, Casey Greene (2015) ThinkLab as a vetting system for traditional grants. Thinklab. doi:10.15363/thinklab.d58
License: Creative Commons
