oslo.town is one of the many independent Mastodon servers you can use to participate in the fediverse.
An online home for the people of Oslo, Norway 🇳🇴 but a gateway to the world.

#peerreview

6 posts · 6 participants · 0 posts today

#Preprint sites #bioRxiv and #medRxiv launch new era of independence
The popular repositories, where life #scientists post research before #peerreview, will be managed by a new organization called #openRxiv.
Until now, the repositories have been managed by Cold Spring Harbor Laboratory. openRxiv will have a board of directors and a scientific and medical advisory board, and is supported by a fresh US$16-million grant from the Chan Zuckerberg Initiative (CZI).
nature.com/articles/d41586-025


New paper out:
Rethlefsen et al. (including me!) (2025). Improving peer review of systematic reviews and related review types by involving librarians and information specialists as methodological peer reviewers: a randomised controlled trial. BMJ Evidence-Based Medicine, bmjebm-2024-113527. Advance online publication. doi.org/10.1136/bmjebm-2024-11
#medlibs #PeerReview #EvidenceSynthesis

BMJ Evidence-Based Medicine · Improving peer review of systematic reviews and related review types by involving librarians and information specialists as methodological peer reviewers: a randomised controlled trial

Objective: To evaluate the impact of adding librarians and information specialists (LIS) as methodological peer reviewers to the formal journal peer review process on the quality of search reporting and risk of bias in systematic review searches in the medical literature.

Design: Pragmatic two-group parallel randomised controlled trial.

Setting: Three biomedical journals.

Participants: Systematic reviews and related evidence synthesis manuscripts submitted to The BMJ, BMJ Open and BMJ Medicine and sent out for peer review from 3 January 2023 to 1 September 2023. Randomisation (allocation ratio 1:1) was stratified by journal and used permuted blocks (block size = 4). Of 2670 manuscripts sent to peer review during study enrolment, 400 met inclusion criteria and were randomised (62 The BMJ, 334 BMJ Open, 4 BMJ Medicine). 76 manuscripts were revised and resubmitted in the intervention group and 90 in the control group by 2 January 2024.

Interventions: All manuscripts followed usual journal practice for peer review, but those in the intervention group had an additional LIS peer reviewer invited.

Main outcome measures: The primary outcomes are the differences in first revision manuscripts between intervention and control groups in the quality of reporting and risk of bias. Quality of reporting was measured using four prespecified PRISMA-S items. Risk of bias was measured using ROBIS Domain 2. Assessments were done in duplicate and assessors were blinded to group allocation. Secondary outcomes included differences between groups for each individual PRISMA-S and ROBIS Domain 2 item. The difference in the proportion of manuscripts rejected as the first decision post-peer review between the intervention and control groups was an additional outcome.

Results: Differences in the proportion of adequately reported searches (4.4% difference, 95% CI: −2.0% to 10.7%) and risk of bias in searches (0.5% difference, 95% CI: −13.7% to 14.6%) showed no statistically significant differences between groups. By 4 months post-study, 98 intervention and 70 control group manuscripts had been rejected after peer review (13.8% difference, 95% CI: 3.9% to 23.8%).

Conclusions: Inviting LIS peer reviewers did not impact adequate reporting or risk of bias of searches in first revision manuscripts of biomedical systematic reviews and related review types, though LIS peer reviewers may have contributed to a higher rate of rejection after peer review.

Trial registration number: Open Science Framework: <https://doi.org/10.17605/OSF.IO/W4CK2>. All anonymised data and materials from this study are available in a public, open access repository on the Open Science Framework (<https://dx.doi.org/10.17605/OSF.IO/ZY547>).

#AI #science #PeerReview

'I do not have definitive proof that the peer review of my manuscript was AI-generated. But the similarities between the comments left by the peer reviewer and the output from the AI models were striking.

AI models make research faster, easier and more accessible. However, their implementation as a tool to assist in peer review requires careful oversight, with current guidance on AI use in peer review being mixed, and its effectiveness unclear.'

theconversation.com/vague-conf

The Conversation · 'Vague, confusing, and did nothing to improve my work': how AI can undermine peer review — Peer review ensures the findings of research are trustworthy. But what happens when it's performed by an AI model?

Coming soon: EASE Editorial School for Journal Editors. Part I begins online Wed 19 March, Part II in the Autumn.

Part I:
19/3: Journal structure & management, Duncan Nicholas
26/3: #PublicationEthics, Joan Marsh
2/4: #PeerReview processes, Serge Horbach
9/4: Journal visibility, promotion, indexing, Are Brean

£80 for EASE members
£160 for non-members
(low-income country discount)

Register:
ease.org.uk/ease-events/traini

Pioneering #CERN scheme will pay publishers more if they hit #openscience targets
#Physics funder will provide financial incentives to encourage practices such as data sharing and transparent #peerreview.
Journals in the field publish work openly and at no cost to authors, in exchange for bulk payments. Under the initiative, CERN will pay more to publishers that adopt policies such as public or open peer review and linking research to data sets, and less to those that don't.
nature.com/articles/d41586-025


@franco_vazza @astro_jcm

This proposal does not mention equations, because it's meant to serve all fields, but it has a question about statistics:

doi.org/10.7717/peerj.17514

PeerJ · Structured peer review: pilot results from 23 Elsevier journals

Background: Reviewers rarely comment on the same aspects of a manuscript, making it difficult to properly assess manuscripts' quality and the quality of the peer review process. The goal of this pilot study was to evaluate structured peer review implementation by: (1) exploring whether and how reviewers answered structured peer review questions, (2) analysing reviewer agreement, (3) comparing that agreement to agreement before implementation of structured peer review, and (4) further enhancing the piloted set of structured peer review questions.

Methods: Structured peer review consisting of nine questions was piloted in August 2022 in 220 Elsevier journals. We randomly selected 10% of these journals across all fields and IF quartiles and included manuscripts that received two review reports in the first 2 months of the pilot, leaving us with 107 manuscripts belonging to 23 journals. Eight questions had open-ended fields, while the ninth question (on language editing) had only a yes/no option. The reviewers could also leave Comments-to-Author and Comments-to-Editor. Answers were independently analysed by two raters, using qualitative methods.

Results: Almost all the reviewers (n = 196, 92%) provided answers to all questions even though these questions were not mandatory in the system. The longest answer (Md 27 words, IQR 11 to 68) was for reporting methods with sufficient details for replicability or reproducibility. The reviewers had the highest (partial) agreement (72%) for assessing the flow and structure of the manuscript, and the lowest for assessing whether the interpretation of the results was supported by data (53%) and whether the statistical analyses were appropriate and reported in sufficient detail (52%). Two thirds of the reviewers (n = 145, 68%) filled out the Comments-to-Author section, of which 105 (49%) resembled traditional peer review reports. These reports contained a Md of 4 (IQR 3 to 5) topics covered by the structured questions. Absolute agreement regarding final recommendations (exact match of recommendation choice) was 41%, which was higher than what those journals had in the period from 2019 to 2021 (31% agreement, P = 0.0275).

Conclusions: Our preliminary results indicate that reviewers successfully adapted to the new review format, and that they covered more topics than in their traditional reports. Individual question analysis indicated the greatest disagreement regarding the interpretation of the results and the conduct and reporting of statistical analyses. While structured peer review did lead to improvement in reviewer final recommendation agreement, this was not a randomised trial, and further studies should be performed to corroborate this. Further research is also needed to determine whether structured peer review leads to greater knowledge transfer or better improvement of manuscripts.

#PeerReview question:
Some journals do double-blind peer review, which is good (IMO). 👍
Some journals also require you to share analysis code, which is also good and for which people usually use #Github. 👍
Most of the time, the authors' GitHub repository is not anonymous... 🤔

Is it possible to anonymise a GitHub repository somehow, or to use another system to share code just for peer review?

Edit: has anyone used Anonymous Github for this?

anonymous.4open.science · Anonymous Github
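One low-tech answer to the question above, sketched here as a hedged example: `git archive` exports only the tracked files of a commit, with no `.git` directory, so no commit history, author names, or emails travel with the bundle. The `demo-analysis` repository and `analysis.py` file below are invented purely for illustration.

```shell
# Minimal sketch: build a throwaway repo, then export an anonymised
# code bundle suitable for double-blind review. Assumes git is installed.
mkdir demo-analysis && cd demo-analysis
git init -q
git config user.email "you@example.com" && git config user.name "You"
echo 'print("analysis")' > analysis.py
git add analysis.py
git commit -qm "analysis code"

# git archive packs only the tracked files of HEAD -- the .git
# directory (and with it all authorship metadata) is excluded.
git archive --format=tar --output=review-code.tar HEAD
```

The resulting archive can be uploaded as supplementary material. Note that this only strips repository metadata: identifying strings inside the files themselves (names in comments, institutional paths) still need a manual check, which is the extra step a service like Anonymous Github automates.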

I can't find the time for in-depth reviewing of manuscripts. I'm considering refusing all reviewing requests and, when I have some time, screening @biorxivpreprint preprints instead. #PeerReview #Preprints
(reposting because editing one word of the text removed the poll??)

I often wonder how academic publishing and peer review can continue to work in the era of the LLM. One specific issue: is the "megajournal" concept now dead? Pioneered by PLOS ONE, it was (loosely) the idea of a journal that will publish anything meeting basic quality criteria. In 2025 that seems naive, because (a) those criteria can be gamed by LLMs, and (b) peer review is unreliable at enforcing them, awash in generated text from both sides. #academicpublishing #peerreview #megajournal

EASE Update - Issue 3
linkedin.com/pulse/ease-update

We chat about Bluesky and poster presentations (and invite YOU to submit one to our Oslo Conference), take a look at some of the recent community action across the Association, highlight the latest publications from European Science Editing, including a new Structured Peer Review Checklist, and more.

#EASEupdate #JournalPublishing #ResearchPublishing
#JournalEditing #ScienceEditing #PeerReview #Conference2025 #PosterPresentations