
Panel Session: Why Strong Visuals Secure Grants

Can you quickly capture reviewers’ attention and convey the impact of your proposal? Should the Specific Aims page include a figure? We’ll discuss these and other key ways to make your applications stand out during the review process.

Panel - Dr. Danielle Matsushima, Dr. Ariella Shikanov, and Dr. Peter J. Turnbaugh

Join the BioRender Community
https://biorender.com

Overview

Welcome Ryan Marien, who is one of our co-founders at BioRender. He leads a lot of really important functions here, we'll call him the head of business, and he was part of founding BioRender itself four years ago. So welcome, and you'll get to meet him through the panelists today.

We also have Dr. Danielle Matsushima, who comes with a world of expertise working with faculty to create very clear grant proposals. She herself has read quite a few, probably in the hundreds or thousands, I don't even know the number, but she's seen funded and unfunded summary statements and grants. So she comes with a wealth of knowledge. She also joined our Visualize event, which was one of our most well-attended events.

Dr. Shikanov is also one who writes R01s as well as foundation grants and many others. She's very passionate about mentoring grad students and postdocs in learning how to write grants and create great visuals.

Of course, Dr. Turnbaugh, who you've just heard from, is a seasoned grant writer and reviewer and a standing NIH study section member. And as I mentioned, he has over a hundred thousand citations and an H-index of 57.

I think without further ado, I'd love to open it up for Ryan to take over.

Thank you, Shiz, and thank you, everyone, for joining. Really, really exciting. The first two talks were incredibly inspirational. Danielle and Peter, I really learned a ton from those, and I hope everyone else in the audience did too. And I actually want to pull on one of Peter's slides, where you referenced how you were creating a grant for that innovator award and thinking that it got better each time, while the reviewers were feeling the opposite.

So one of the things I think might be really helpful for everyone: at BioRender, we find it really important to always put ourselves in what we call the end user's shoes and live their life as much as possible when they're using BioRender. In this case, if you're writing a grant, that means putting yourself in the reviewer's shoes as much as possible.

So maybe to help set the stage, could each of you, Peter, Ariella, and Danielle, share with our audience a little bit about your routine when you go to review, read, and assess grants? Are you sitting at home or at the office? Do you have a big stack? Do you review them one at a time, or do you go through a little bit of each one? Walk us through what it looks like when you're doing it and what's on your mind, so that everyone else in the audience can try to put themselves in your shoes as you get into tips and tricks afterwards.

Maybe we want to start off with Peter. You just finished one, so why don't we start with you, and then we can go to Danielle and then to Ariella?

Sure. I'm really not a procrastinator, and I think the answer is going to depend on what sort of personality type you're talking about. Right now, I'm a standing member of a study section, so we meet three times a year. I'll typically get five to ten grants to review, and I dive in as soon as I can. I'll dedicate a day or a few days to reviewing the grants. I start in the morning when I'm fresh and have a coffee, and then I'm typically at my computer, editing in Adobe, making comments all over the grant and trying to digest the content.

In terms of what I focus on, it really matches the slides I just showed you. I spend the most time on the couple-sentence rationale and the abstract. I spend a lot of time on the aims page and the research plan, and then the rest of the grant I just skim through as fast as I can. NIH grants are hundreds of pages long, so there's really no way you can read every sentence of every grant.

That's such an interesting thing to think about. A lot of what people write might actually not get read, so you've got to be very clear about where you want people to spend their time.

Danielle: Hi, everyone. So, I'm not a reviewer on a study section, but what I do do is work with faculty to make sure their grant is written in a clear and compelling way, so that reviewers on study section can digest the information more quickly and it's easy for them to understand. I could be at the office, or I could be at home, sitting on my couch late at night after my kids have gone to bed, but I'm really looking at it from the perspective of someone who's scientifically trained but an intelligent layperson to your field: do I understand what you're trying to say and the point you're getting across? Similar to what Peter said, I'm focusing on the specific aims page first, since a lot of reviewers look at that as a roadmap for what should be in the application, and making sure the writer is selling their work and what they're proposing to do and highlighting the impact of that work.

Got it, awesome.

And are you going through one grant at a time, or do you review a lot of them briefly and get a summary before going back and making deeper assessments?

For us, we tend to have a queue around grant time, and we do one at a time. We want to give each of our faculty members the appropriate time for their grant. Some grants take a few hours to review if they're well written and their concepts are laid out in a logical way, and some grants take days to a week if it's a poorly written grant that's hard to sift through. We have a team of four people, and around grant season, which is now, we divide up the grants and go through them one at a time.

Wow, days to a week is a lot of time to go through something, so I can imagine the feeling you might have there: something that's really clear and smooth and takes hours feels really good compared to something you're struggling through over days.

Yeah, and we're trying to make recommendations to get it to the point where, when it reaches the reviewers' hands, they're only spending a few hours going through the grant. That's our goal.

That is the goal. Okay, so the goal is to get people to read things fast. That's a clear one that I think I've picked up today.

Ariella, why don't you walk us through your routine? What do you do as you're getting ready to review a section of grants?

Sure. Hello, everybody. I review for NIH and also for NSF, the National Science Foundation in the United States. It took me a minute to figure out my routine, so I asked a lot of senior people, and what works for me is to read. Say I have 10 grants to review: I read through all the specific aims pages, then I rank them, and then I know what my pile looks like. I start with the highest-ranked grants, which is probably the top 30-40%, and I leave the worst ones to the end, basically giving them a chance.

Got it. That's really interesting. So one thing I've taken away is that everyone has a slightly different routine, and when you're writing, you don't necessarily know what routine the reviewers of your grant will follow, so you almost have to think about how to set it up to cater to a broad audience. I was really interested when you mentioned how you stack rank the grants based on the aims page. How often do you see a grant that was ranked low end up moving up the pile afterward, say from the bottom 30 percent up to the top 50, and actually get funded?

Not too often, but it happens, because ideally you want to give a grant a chance, and sometimes it's just the specific aims page: they just didn't communicate it well. So sometimes the grant can save itself; when it happens, it's maybe 25-30% of the time.

But it's not too often; usually, the aims page is very reflective of the quality of the grant.

That's so interesting. It really hammers home the point you made earlier, Peter, that you should spend most of your time on the specific aims page and then build the rest of your application after that. Peter, how does that compare to how you think of the aims page? Obviously, it's really important to you. I guess my follow-up question would be: is there a point when you're reviewing a grant, and it sounds like for Ariella the aims page is really it, where you've made an initial assessment of whether this grant is going to get funded or is unlikely to, and it would have to pull itself up from the bottom?

Just to clarify, I don't want to cut you off, but I do read the whole grant, and in probably 70% of the cases, the specific aims page reflects the rest of the grant. But yes, I do read the whole grant.

So, just to make sure I got it: it's sort of like you've made an initial assessment based on the aims page?

Exactly. And then I keep reading, and it's easier for me to see the whole picture of what we'll be reviewing at the panel.

So it's almost like, if the aims page isn't good, they've created a really steep hill for themselves to climb, and they can't get there, and the other reviewers can't get there either.

Right, absolutely. In general, you don't even need to go to the aims. If you start with the abstract, you know; the quality of each section is pretty highly correlated, in my experience. I would describe it as: you read somebody's abstract, and you just have this sinking feeling in your gut. You're like, "Oh, no. I'm gonna be stuck here for eight hours."

So there really is a judging a book by its cover; you're certainly judging it by your early impression of it. And it sounds like the abstract is the page that does it for you, Peter, more so than the aims page: by that point you've already made an assessment of how likely it is to get funded.

Yeah, I think most of the time, if the abstract is really good, the research plan is really solid and the other parts are typically really good, and vice versa. I haven't had a lot of grants where the abstract was amazing and then the research plan was bad. I would say one of the things, and this is actually the part I like most about study section, is that you start by reviewing the grant on your own, and you can have a really firm opinion. But then the fun part is you go and compare scores between the three reviewers that read it, and that's where the surprises come in. I might think a grant is terrible, but then the next person thinks it's perfect, you know? And then I get to talk about why I'm wrong, or why the other person was wrong.

Interesting. How often does that happen? If I'm applying, like you said, I might end up with one person who loves it and one who doesn't. How often do you see that scenario, and what is the thing that decides the final outcome in those study sections and gets you to a decision?

It happens frequently, to me. I think there are sort of two cases. Sometimes the three reviewers have basically exactly the same score, so we're totally in sync. Those are actually really boring grants to talk about; arguably, they should not be discussed, and we should just give them the score. But fairly frequently, the reviewers have really big differences in opinion, and then, unfortunately, it does sort of come down to who's most convincing in the room, because you basically try to hash it out as the three people that read it, and then the broader study section chimes in as well.

Got it.

Do you feel this too? Do you have a similar experience? Do you see it happen quite frequently, where study sections have multiple divergent opinions?

I do agree with Peter. That moment when you open everybody else's scores is either a validation point or, like, oh my God, what did I do wrong? Probably most of the time it's agreeable, sometimes there's a small deviation, and then sometimes it's like, really? I gave it an excellent score and somebody absolutely hated it. Then you go back and reread and make sure you didn't miss anything.

That's so interesting. So this is maybe a slight continuation, but I want to ask a question, and Danielle, you're going to provide a really good insight on this one too: when you see an application that's clearly not great yet, what is the biggest tweak that takes it from not ready, where you're feeling like, oh no, I'm not going to give you a good score, to something that's going to get a really good score, or could get a really good score?

It definitely depends on the grant, and it could be a variety of different things, but my suggestion, since obviously the PI is working on the science, is this: if we're two weeks out and you haven't made any of your figures, that could be a red flag, so really make sure you are putting your figures in. The other thing is, if I'm getting a research plan to review, I do the "so what" test: with anything they write down, are they relating it back to the bigger picture, the larger research question, and the impact? Even a somewhat sparse research plan can be made easy for reviewers to understand, just by answering "so what?" after everything. And then: spell check. A lot of our faculty at Columbia are very picky and get distracted by details, so if you have typos and grammar mistakes, they will be biased against your grant when they're reviewing it.

That's an interesting one, because I know you've talked about that same bias, even around using certain words. What are some of the things, Peter and Ariella, that might be biases for you? I know you've mentioned some of the words, but maybe one of you can start us off: what are some of the biases you look for when you're reviewing a grant? Is it just spelling and grammar, certain words? Is there an approach to the aims page that makes you put it at the bottom of the pile?

Yeah, I mean, one thing that we get a lot, and maybe this is true for a lot of areas: we get a lot of people who want to work on the microbiome but maybe have never done anything in the field. So there are sort of these flags, like, for example, writing "flora," and when somebody says that, it shows me that you don't actually know the terminology of the field, and you haven't really run your grant by somebody who works on the microbiome. Those are common red flags that we see a lot. And related to that, I think a lot of people make the mistake of saying, "I want to work on the microbiome, so here are aims related to the microbiome." What they don't explain to the reader is why the microbiome is the answer. You need to have a reason, from data or the literature or something, that told you the microbiome is the answer. It can't just be intuition.

Got it! So it's not, "I want to focus on the microbiome, this seems like a cool place to go do some research because we could test it out," with no real validation. It's, "Hey, this is the most obvious place to go, through either preliminary data or some really strong logic." Got it. That's awesome. Thank you. Really cool insight.

And Ariella, what are the pet peeves you see when you're stack ranking your grants, especially on those aims pages, that make you cringe and not want to read the grant right away, or assume it's not going to go in the direction you're hoping?

So, for the specific aims page and for the whole grant, there are two different pet peeves. For the specific aims page: the aims shouldn't depend on each other, but they also shouldn't be disconnected. There should be some sort of flow and connection. Usually, if the aims are disconnected, or they really depend on one another, it means the grant is just not going to work, because if everything depends on aim one, what happens if aim one doesn't work? That's for the aims page. Then, for the whole grant: unnecessary repetition. I don't understand why people repeat things over and over again. I'm like, "I got it! So let's talk about something else." It happens a lot, and it's just bad grantsmanship. It's not necessarily that the person doesn't have something to say; it's just bad grantsmanship, and maybe this is something that Danielle can look for, because unnecessary repetition is really annoying and kind of a waste of space. Also, if people are not familiar with other people's work and say things, like Peter just mentioned, that really don't justify what they're writing about, that's also a sign it shouldn't be getting funded.

So it sounds like, and we've heard some repetition on this one a few times today, the aims can't be overly connected or dependent on each other, because then there's a single point of failure in the application. But they also can't be so disconnected that it's like you're shooting in the dark. You're looking for a certain level of "this person knows what they're talking about, but they're not betting on only one thing; they have a few approaches, so if one of them works out, the funding is worth it." And Danielle, I think you've talked about the second one before, having things repeat themselves over and over again. It's not a science paper; I think you've mentioned this is something you can just say once, and that's probably enough. I think that's something I've heard you say before too, but maybe you can elaborate.

Sure. I mean, I think by using formatting to highlight the key and relevant things that reviewers are trying to pick up on, you can get away from having to be super repetitive. People are reading quickly, so you want to be a smidge repetitive, but not enough to hit the reviewer over the head, so there's a balance you're trying to strike. I think, as Peter said, you want them to feel like they're getting a lot of information in a very short period of time, versus having to spend more time, which could aggravate a reader who has a lot of grants to review.

Yeah, that reminds me of one of the things somebody else recommended to me that has really changed my life. I had always thought about grants as having three aims, but two aims is so much easier, and there is no real reason to have a third aim. It's not going to make people like your grant more; it makes them hate it, because they have to read another aim. The really cool thing about two aims is that your aims can sort of be opposites of each other. You can design one aim to be top down, where I'm going to start with a complex system and distill the mechanism, and the other can be bottom up, and that's very satisfying.

That's so interesting.

Yeah, I would definitely recommend this: getting rid of your third aim.

Peter, I mean, we could talk about this for an hour, but I love grantsmanship. I prefer to write two-aim grants with students, because the first aim can be exploratory and then the second one can be validation. However, a caution: I once wrote a two-aim proposal in a specific aims page and sent it to my program director, and she responded to me immediately, "Are you proposing a three-year R01?" So there's a certain bias; it's really important to communicate that it's a five-year R01 and really show there's enough science for five years, or maybe ask for only three years of funding. So it's important to talk to your program officials; that's the take-home message. You need to know what they expect.

Yeah, I'm personally curious what percentage of grants have two aims versus, say, three. Any gut check or idea?

Most grants have three aims, I agree, unless it's a fellowship for a graduate student. R01s sometimes have two aims and it's fine, but other, bigger grants are expected to have three aims, which is very difficult to come up with, but maybe that's why it takes so long to write a good grant.

I feel another session coming up one day: a debate between two and three aims. But yeah, it sounds like there are a lot of benefits to doing two well. I want to move over to maybe a second topic. It has been super cool to hear this behind-the-scenes debate from the reviewer's perspective on what you find exciting and what you find pet-peeve-worthy. Danielle, you mentioned earlier that one of the biggest ways you see people go from not great to exceptional is adding in figures. And Peter, you said you even like having one on every page. So maybe an overall question we could address is: which sections really benefit from having a figure? If someone's thinking, "Hey, I want to add grant figures to my application," which sections should I focus on adding them to? Maybe we'll start off with Ariella, getting to your passion on the aims there.

So, I actually think that adding a figure on the aims page is something more recent; it's happened in the past maybe 10 years. I hadn't seen it before, but now it's more acceptable, and I love it because of how many words it can save you. If you have an interesting mechanism, it would benefit from a figure, if it would just be easier to look at the image. I also like seeing images where people show how all three aims integrate, or how one leads to another, or, if you have some unique, special technology that you're developing or using that's not very familiar to people, I think that would also benefit from a nice flow chart.

Are there any sections where you don't like to have figures?

I mean, there is such a thing as too many images, right? I think innovation doesn't require a figure or a cartoon, but all the rest always benefit. For example, if you have some interesting data, I always like when people add a schematic to explain the experimental design, so it's easy to follow. Obviously, do not overload, but if there is space and you feel it would better explain your experimental design, or the results I'm looking at, I would add a small schematic to explain what you've done.

That's so interesting.

I know it's been a hot topic we've heard before: do you include a figure on the aims page? As a follow-up question, maybe I'll pass it to Peter, because I think in one of your examples, it was Cecilia who did have her aims broken down in figure format. So how about you? Where are the places you like to see figures in an application, and maybe where are the places you don't?

Yeah, I agree. I think the research plan for sure, in the significance and approach sections. For innovation, you don't want to use too much space, so you probably don't want to add a bunch of figures there. For the aims page, it gets really hard because there's so much you have to have on that page. I'd rather have a very well-written aims page with a lot of white space and no figure than something where they've really crammed the text in to make room for a figure. So it really depends on how many words you need to describe things; I guess if you've reduced it to only two aims, maybe you've got a better chance of putting a figure in.

Another advantage of the two aims; it's going to be a recurring theme. Maybe I'll ask you, and then Danielle can help finish this point up: is there a certain type of figure that works well on an aims page versus ones that don't?

Keep it really conceptual. It's not recommended to show a lot of detail in the figure; the approach section is where the details belong. Keeping it at a really high level is ideal: little diagrams that tell what the whole grant's goal is are great. There's no need to show a bunch of little details.

The advice we give to researchers at Columbia is that basic scientists tend not to include figures on the aims page, while clinical, translational, and epidemiology researchers sometimes put one in, and it tends to be more of a conceptual model. If basic scientists include a figure, it's seen mainly on the background and significance pages. This is because the aims page is limited to one page, and people are trying to get their points across in that one page.

Some pet peeves for figures are when people use large figures with many panels, A through who knows what. It takes a lot to get through them, and sometimes you're hitting the reader over the head with too much information. It's not like a manuscript where you have to demonstrate something over and over again; maybe pick the key experiment and show that, and say "data not shown" for the other experiments you might be doing. Professionalism is essential, and general design principles should be observed. Is there enough spacing? Have you taken away the extra lines? Is the color scheme nice? It's not a big pet peeve, but it's pleasing when a figure looks polished and presentable. Readability is also important: can you read all the axes and labels? Can you understand what's happening in the figure quickly? The figure legend should have a title stating the takeaway, and then describe what's going on in every panel you're discussing. The best figures tell a little story: you have an experimental diagram, then say a flow or IHC panel where you can visualize the effect in maybe two conditions, and then a quantification. It's a clear depiction of the point someone's trying to make.

That's awesome, thank you. It sounds like a similar theme: less is more. Focus your figure on something, versus trying to explain too much information or being too repetitive.

Awesome, thank you. Peter, what makes a figure great for you, and what are your one or two pet peeves?

Yeah, I totally agree with the points that were made. I have a couple of pet peeves that will maybe be surprising for people. One thing I see a lot is that people will just overload the data. They'll put in a lot of data that is not related to the aims at all, experiments that person has done that actually do not support the aim. Really, the goal of figures in a grant is either to show a diagram that explains what you want to do, or to provide evidence. Think of yourself like a detective: you really want to convince the reader that you're solving the case and that you have figured out that your hypothesis might be right. It's really only essential data that helps your case. And related to that, I guess my second pet peeve is that, because there's all this pressure to have preliminary data in grants, people surprisingly often will put in figures that actually argue against their own hypothesis. It's really a fatal flaw, I think, because it shows that the writer has not really thought about what their data shows, that maybe the whole premise of the grant is wrong. It really can take a very strong grant and turn it into a very weak grant.

Oh wow, and reviewers are going to catch those errors.

Yeah, I think, like the talk before mine, Danielle's, those figures looked like you could put them in a textbook; they're just beautiful. I think a lot of scientists don't like this conceptually, but it really does come down to aesthetics. It's like if you look at a painting on the wall: what looks nice to your eye? That's kind of what we're looking for. If you have a graph that just came out of Excel and has really blurry text, that just doesn't look good.

Got it, so just having things that look pretty makes a big difference.

Well, I'm sure Shiz will have lots of tips for that in our last session, but I totally agree: you judge something quickly, and that first impression counts for a lot, and probably for how much of the rest of the story you're going to read, so make that first impression count. I know I'm guilty of that myself. And then, Ariella, maybe just take us home on the figure side. What are a couple of pet peeves, and one or two things that make something really stand out for you?

In terms of figures, I totally agree with Danielle; I feel exactly the same way. I prefer, especially in the approach section, to have smaller figures dispersed throughout the text that help me understand what I'm reading, especially if it's an experimental design. I do not like large figures. And in terms of aesthetics and design defects: everything ideally should be aligned and should follow how the eye goes. People say it's better to go left to right instead of top down, so these are all important things. Make sure there are no mistakes in your figures, because it's the worst if you look at a figure and the panels are not labeled correctly, and I have to go back and look at what figure 5J is, and it's not really 5J, it's 7A. This can kill a grant, so it's very important that there are no mistakes.

Kind of the "you said" take-home message: I do encourage my students to read their grants and papers aloud—not to somebody else, just sit in your room and read it aloud. You will find so many mistakes and repetitions and unnecessary stuff that is going to improve the grant. So just take your time and slowly read it aloud and look at the figures.

Interesting, that's really good advice. It sounds like similar advice: be your own reviewer a little bit. And then also get other people to review your grant as well, but make sure you've had a chance to go through it yourself: does it sound logical when you read it aloud?

Awesome, well thank you so much everyone, this has been incredible. I've really, really enjoyed myself. It sounds like everyone who's been listening has too. 
