
What I do

I have been meaning to do this post for a while, but finally got down to it after reading an excellent post describing substantive editing, and another post on the importance of building off your unique set of skills and background. For some reason, I’ve often struggled to describe exactly what I do, beyond saying I am a biomedical editor. There’s nothing wrong with that, but it fails to capture the nuance. I could get really philosophical about what it means to be an editor, but Rich Adin over at An American Editor has done a far better job at this than I ever could, so I will direct you to his excellent posts here and here. His characterization of the twin pillars of editing – mechanics and thinking – really struck me.

So back to my purpose. How do I describe what I do? Of course, what I do differs from client to client, so I will focus on what I feel I do best – scientific grant and manuscript editing. There are really several different levels of editing going on here.

1) Basic editing: Checking spelling and grammar, fixing awkward sentence structures, simplifying the language, and checking verb tenses and parallelism. This is a part of the “mechanics” pillar of editing.

2) Substantive editing: Checking the flow and logic. How is the story being told? Is it clear what has been done and what needs to be done? Is it clear what hypothesis is being tested and why? Are the right headers being used to help the reader through the document? Obviously this is part of the “thinking” pillar.

My goal here is to make the paper or grant proposal something that the reader wants to read, that the reader actually enjoys reading, rather than just skimming the abstract and figure legends. As a graduate student, I read many, many, many papers, but there were only a few that sucked me in because they were just written so well. There was a clear explanation up front about why the research was being done, each experiment in the results section flowed logically into the next, and the discussion put all the data in perspective, in the context of what is already known. I almost felt like I could see the authors at the bench, working through the data, designing the next experiment. Those are the papers I want my clients to publish. The same thing goes for grants – if anything is confusing or convoluted, the grant reviewer is going to pass it over. I edit grant proposals so that they are as clear and logical as possible, so that the reviewer immediately understands and appreciates the problem that needs to be solved, sees that the proposed aims will answer a clear set of relevant questions, and agrees that the methods used and people involved are up to the challenge.

3) Content editing: Another one for the “thinking” pillar. This is where my unique background fits in, where I can look at the experiments and the results and think about what conclusions are being drawn and whether they are being communicated accurately. Is the experimental design sound? What are the caveats? Are the statistics appropriate? Are the proposed experiments going to test the stated hypothesis? Do the data support or refute the stated hypothesis? How do the data compare to what’s been done before? What are the implications for future basic or translational research? For research grants, I also draw on my background in promotional writing – will the science as presented convince the reviewer that this is a worthy problem (or gap in knowledge) to tackle and that the proposed experiments will provide the data needed to solve the problem?

4) Formatting: This also falls under mechanics – it’s making sure that the manuscript follows the journal’s editorial rules, and that the grant proposal follows the structure stipulated in the funding announcement. There are a ton of rules and regulations for preparing federal grants, and even more embedded in specific funding announcements.

So that is what I do…and now I can just direct friends and family here when they ask (hi mom!).


Descriptive vs. Experimental Research

Because I have this handy soapbox, I’m gonna use it. Here’s the thing. There is descriptive research and there is experimental research. Descriptive research on its own is not enough. You’ve got to get in there, change something, and see what happens. Just reporting on what you see under the microscope or on a blot is NOT hypothesis-driven science. Descriptive science is a starting point: it sets the baseline, the control state, what is known. Experimental research tests a hypothesis, which means altering a variable in the known system and seeing what happens – the result will lead you to reject or fail to reject (not prove or disprove) your hypothesis. Of course you’ll repeat the experiment in exactly the same way several times so you can be confident your results hold up statistically. But then you’ll need to try changing something else, repeat, repeat, repeat, and so on.
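If it helps to see that reject-or-fail-to-reject logic spelled out, here is a minimal sketch in Python (the numbers are invented, and I’m assuming a simple two-group design where a two-sample t-test is appropriate): measure the baseline several times, change one variable, measure again, and let the statistics tell you whether to reject the null hypothesis of “no difference.”

```python
# A toy illustration of descriptive vs. experimental data (all numbers invented).
# Assumes a simple two-group design where a two-sample t-test is appropriate.
from scipy import stats

# Descriptive: repeated measurements of the known, unperturbed system (the baseline).
control = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8]

# Experimental: the same measurement after changing one variable (say, knocking down a gene).
treated = [5.0, 5.4, 4.9, 5.2, 5.1, 4.8]

# Null hypothesis: changing that variable has no effect on the measurement.
t_stat, p_value = stats.ttest_ind(control, treated)

alpha = 0.05  # conventional significance threshold
if p_value < alpha:
    print(f"p = {p_value:.4f}: reject the null hypothesis; the change had a detectable effect.")
else:
    print(f"p = {p_value:.4f}: fail to reject the null hypothesis; no detectable effect.")
```

The repetition I mentioned above is what gives a test like this the power to tell a real effect from noise.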

In a research grant proposal (and I’m coming from the NIH perspective here), each aim should independently test your central hypothesis from a different angle. By “angle” I mean using different methods or combinations of methods, or working at different levels (biochemical, molecular, cellular, tissue, organism, ecosystem, etc.). What you learn in each aim will come together to shed light on the system you are studying.

Now, one of those angles might be descriptive, but I would argue that a purely descriptive aim is going to be your weakest aim. Devoting an entire aim to descriptive science breaks two rules of scientific grant writing – descriptive science cannot test your central hypothesis, and your aims must not depend on each other. (Because if one aim fails, there goes the entire proposal, and no agency will be interested in funding something so risky.) Any aim that is descriptive will be dependent on what you find in the other aims.

The same descriptive vs. experimental idea applies to journal articles too. If your article is just descriptive, you’ve got half a manuscript. Sorry, but it’s true. The best, most compelling, field-advancing, paradigm-shifting articles are those that have a clear hypothesis, describe what is known (from descriptive science), and then describe a logical progression of changes made to the known system and what happened. I know you’ve heard this before, but the best paper tells a story, leads the reader into the known system and the hypothesis, and then through each question, discovery, question, discovery, until the Discussion section brings the reader back around and gives some context. I know, some journals will accept purely descriptive articles, but in my experience, those are the smaller, second-tier journals. Not the Cells, Sciences, Natures, etc.

It’s getting more and more competitive out there – for research grant funding and for publishing articles. So get in there. Get your hands dirty. Know your system, then change it and see what happens. Then change it again and see what happens. And if you need help telling your story, getting other people to understand exactly what it is you’re doing, I’ve got your back.

How Do You Solve a Problem Like Peer Review?

The August issue of The Scientist features several articles that discuss the problems with the current anonymous peer-review system for scientific research papers, problems that have become even more obvious as the Web gains popularity as an alternative route for rapid publication of and access to manuscripts.

In I Hate Your Paper, Jef Akst identified three specific problems with the traditional peer-review process, and then presented some alternative strategies that are being tested by various journal editors.

In Peer Review and the Age of Aquarius, Sarah Greene suggests that increased use of the Web by journal publishers, authors, and readers has accelerated change in the traditional manuscript review process. For one, the journal impact factor, which is based on how often a journal’s recent articles have been cited, has been rendered nearly meaningless by the rise of open-access publishing on the Web; in the Internet age, the impact of individual articles might be more appropriately measured in terms of page hits or downloads. The Web has also introduced the concept of post-publication peer review, in which an article is published on the Web first and then undergoes open peer review, with reviewers’ identities and comments published alongside the article.
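For context, the traditional impact factor Greene is talking about comes down to simple arithmetic: citations received in a given year to what the journal published in the previous two years, divided by the number of citable items it published in those two years. A toy calculation (numbers invented):

```python
# Toy calculation of the standard two-year journal impact factor (all numbers invented).
citations_to_prior_two_years = 1200  # citations this year to articles published in the prior two years
citable_items_prior_two_years = 400  # articles and reviews the journal published in those two years

impact_factor = citations_to_prior_two_years / citable_items_prior_two_years
print(f"Impact factor: {impact_factor:.1f}")  # prints 3.0
```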

What does all this mean for the manuscript editor? One commenter on my previous post lamented that the rapid pace of online article submission and publication will mean that more articles will appear online without the benefit of a final review by an editor. I certainly hope that the value of a manuscript editor—either prior to submission for review (the author’s editor) or prior to publication (the copyeditor)—will not be overlooked as review methods are overhauled in the name of speed and efficiency. When the science is eventually lost in sloppy grammar and spelling mistakes, perhaps the pendulum will swing back the other way, and the process will slow down a bit to accommodate a round or two of careful editing.