How science and scholarship are shooting themselves in the foot. And how that is a tragedy.
I have spent my whole professional life in academia and loved it. I was moderately successful: a successful teacher, curriculum designer, and reformer, but a complete nonentity as a researcher. I enjoyed research, but I became increasingly disillusioned with the way success in science and scholarship was being administered and acted upon, and with the way the system could not, because of its structure, encourage synthesis or interdisciplinarity, both of which I thrive on. It could not encourage those things because it can (many judges think) only work ethically on the basis of in-depth expertise, that is, by respecting the boundaries of your own expertise. Yet it will not tolerate not-knowing as a virtue, and so everything it cannot know it must reject, which makes certain kinds of interdisciplinarity difficult to judge.
At the same time it cannot guarantee the success it seeks to guarantee, because it does so using fallacious arguments. So I wanted little to do with it. I did my work as a peer reviewer, I served on many panels as a jury member, and I have published a few articles, most of which should be forgotten. My own research, the research I valued, which did not respect boundaries and accepted the weaknesses that came with that, became a largely private venture used exclusively for my own teaching. I became my own judge in everything, and did so because I recognized the artistic nature of my venture. Art is judged purely subjectively. Beethoven’s ninth was not a committee product.
Now that I am ready to share my thinking in written documents, it will have to find its own way in the world, unsupported by any institution. If it does not survive, it has at least been fun to think things through by writing, by having conversations, and by teaching.
The point is that much of the output of science and scholarship is probably good (although the amount of nonsense is astonishing), but the way we administer that science and scholarship and make it count for careers and for the allocation of research grants is self-undermining to such an extent that I am beginning to understand why people question science as an authority. That is dangerous and tragic, but it is our own fault. We have allowed academia to go the way of all institutions: it has become a self-serving career machine that is losing touch with its own purpose because of the way it is funded. Means and ends are not aligned.
My gripe is that the quality of science and scholarship is largely judged using demonstrably fallacious arguments, arguments I would certainly not allow in my own work.
The emergence of the aesthetic quality of things is something that takes time and engagement on many levels. It cannot be made to disclose itself immediately and it cannot ever be guaranteed by being subjected to an objectifying system, as all systems are corruptible in use.
Because fallacies slip in when you are not being vigilant, much of my effort has gone into constantly revising my work, seeking out the nasty little fallacies that spoil things, time and time again.
However, ‘Official Science and Scholarship’, the stuff that helps careers, may well do this with regard to the essays and articles that are written and published (although much slips through the system; good peer reviewers are few and far between). But then these articles, in the attempt to measure their success, enter a machine that is actually built upon fallacies.
I started listing the fallacies myself, but it became a wearying and depressing task, and I realised that this is just the kind of job that ChatGPT would do a lot better than me. So I asked:
“Could you please list the various fallacious arguments used in the attempt to ensure and measure the success of scientific and scholarly publications. Please give examples.”
It did not catch all of them; for that I might have needed to ask the question differently. But it came up with a list more complete than I could have produced myself within the restrictions of the question posed. This is what it answered:
Below is a structured list of common fallacious arguments used when asserting, ensuring, or measuring the “success” of scientific and scholarly publications. Each fallacy includes:
- What the fallacy is,
- How it appears in research-evaluation contexts, and
- A concrete example.
Fallacious Arguments in Measuring the Success of Scholarly Publications
1. Impact Factor Fallacy (Ecological Fallacy)
What it is: Using a journal-level metric (Journal Impact Factor) to infer the quality of individual articles.
Why it’s fallacious: Averages of populations do not describe individual data points.
Example:
“This article is high quality because it was published in a journal with an impact factor of 14.”
→ A journal’s impact factor says nothing about the merit or reliability of any particular article, and the citation distribution is extremely skewed (a numerical sketch follows this list).
2. Citation = Quality (Equivocation/False Equivalence)
What it is: Assuming that the number of citations reflects scientific quality, rigor, or truth.
Why fallacious: Citations reflect visibility, controversy, bandwagon effects, and strategic citation—not necessarily correctness.
Example:
“Paper A has 500 citations while Paper B has 30, so Paper A must be more scientifically important.”
→ Fraudulent or retracted articles often accumulate thousands of citations.
3. Appeal to Prestige (Argumentum ad Verecundiam)
What it is: Believing something is valid because it appears in a prestigious venue or was produced by a prestigious institution.
Why fallacious: Prestige is not a guarantee of truth or rigor.
Example:
“We know this study is groundbreaking because it’s published in Nature.”
→ Prestigious journals also publish flawed, irreproducible studies.
4. Goodhart’s Law Fallacy (Metric Fixation)
What it is: Treating a metric as a target rather than a proxy—causing the metric to cease to measure what it originally indicated.
Why fallacious: Outputs become distorted when people optimize for the metric rather than the underlying quality.
Example:
“We will reward researchers who increase their h-index by 5 points this year.”
→ Leads to salami-slicing publications, self-citation farms, and low-value papers.
5. Quantity = Productivity (False Proxy Fallacy)
What it is: Assuming that more papers means more scientific contribution.
Why fallacious: Quantity does not reflect originality, significance, or rigor.
Example:
“Dr. X published 12 papers last year—clearly our most productive scientist.”
→ Could be incremental fragments, template papers, or review articles with no new data.
6. Survivorship Bias
What it is: Evaluating success based only on surviving published works and ignoring the large body of unpublished or rejected work.
Why fallacious: It hides systemic biases (e.g., against negative results, early-career scholars, minority voices).
Example:
“The peer-review system works well because almost all published papers are good.”
→ Published papers are a biased sample.
7. Appeal to Popularity (Argumentum ad Populum)
What it is: Treating popularity (downloads, clicks, social media shares) as evidence of scientific merit.
Why fallacious: Popularity is not the same as accuracy.
Example:
“Our study went viral on Twitter—clearly this research is influential.”
→ Sensational or simplistic claims spread faster than rigorous ones.
8. Appeal to Novelty
What it is: Assuming new results are superior simply because they are new.
Why fallacious: Novelty does not equal correctness, especially in fields with replication crises.
Example:
“This study is cutting-edge and therefore more impactful than those old replication papers.”
→ Replication work is crucial but undervalued.
9. Appeal to Authority (Institutional or Reviewer Authority)
What it is: Assuming a study is correct because an expert or panel approved it (e.g., editorial board, senior scholar).
Why fallacious: Authorities can be wrong; peer review is imperfect.
Example:
“Three reviewers approved it, so the conclusions must be valid.”
10. The Funding Bias Fallacy
What it is: Assuming that well-funded projects produce superior results.
Why fallacious: Funding is often allocated based on networks and prestige, not necessarily innovation or rigor.
Example:
“This project had a $5 million grant, so its publications must be high impact.”
11. The “Metric Mirage” (Reification Fallacy)
What it is: Treating constructs like “impact,” “excellence,” or “quality” as if they are concrete, measurable entities.
Why fallacious: These are abstract, multidimensional concepts.
Example:
“Our excellence initiative increased institutional impact by 30%, as measured by altmetrics.”
→ Altmetrics are not themselves “impact.”
12. Corporate or Managerial Fallacy (Input–Output Fallacy)
What it is: Applying business KPIs to research, assuming academic outputs are analogous to widgets.
Why fallacious: Research quality cannot be standardized like production metrics.
Example:
“We require each lab to produce at least 4 high-impact publications per year to maintain funding.”
→ Encourages superficial or rushed work.
13. False Attribution of Causality (Post Hoc Fallacy)
What it is: Concluding that a performance metric caused improved science.
Why fallacious: Correlation ≠ causation.
Example:
“Since adopting publication quotas, our citation counts increased—quotas must improve research quality.”
→ Many confounding factors could explain the increase.
14. Misuse of Averages (Mean Fallacy)
What it is: Using averages (impact factor, average citations) to characterize heavily skewed distributions.
Why fallacious: Citation distributions are heavily right-skewed.
Example:
“The average citation count of our department is 40, so most papers are highly cited.”
→ Probably only a few highly cited papers raise the mean dramatically.
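The first and last entries on this list are at bottom the same statistical point, and a toy calculation makes it concrete. What follows is a minimal sketch in Python, using invented numbers and a log-normal model of citation counts rather than real bibliometric data: it draws citation counts for the articles of one hypothetical journal, computes the journal-level mean (the impact-factor analogue), and checks how many individual articles actually reach that mean.

```python
import random
import statistics

random.seed(1)

# Toy model: citation counts for 1,000 articles in one hypothetical journal,
# drawn from a heavily right-skewed (log-normal) distribution.
# The numbers are invented for illustration; they are not real bibliometric data.
citations = [int(random.lognormvariate(1.0, 1.2)) for _ in range(1000)]  # mu=1.0, sigma=1.2

journal_mean = statistics.mean(citations)      # the "impact factor" analogue
journal_median = statistics.median(citations)  # what a typical article gets
share_below_mean = sum(c < journal_mean for c in citations) / len(citations)

print(f"journal mean (impact-factor analogue): {journal_mean:.1f}")
print(f"median article:                        {journal_median:.1f}")
print(f"share of articles below the mean:      {share_below_mean:.0%}")
```

Because the distribution is so right-skewed, the mean sits well above the median and the large majority of articles fall below the journal’s own average: the ecological fallacy and the mean fallacy in two numbers.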
It then offered to provide me with a version tailored to research policy or grant evaluation, and this is what it came up with:
Below is a research-policy–oriented version of the fallacious arguments, written in a way suitable for grant-evaluation guidelines, research-assessment training, or policy briefs. It focuses on how these fallacies distort decision-making in funding, hiring, promotion, and institutional evaluation.
Fallacious Arguments in Research Policy and Grant Evaluation
1. Journal Impact Factor Fallacy (Ecological Fallacy)
Policy manifestation:
Evaluators treat the venue of publication as a proxy for the quality of the research output or the researcher.
Why it distorts policy:
Journal-level metrics cannot reliably predict the contribution or rigor of individual papers.
Example:
A grant panel assigns higher scores to applicants with papers in high-impact journals (e.g., Nature, Science) regardless of the actual study quality or reproducibility.
2. Citation Count = Quality (False Equivalence)
Policy manifestation:
Funding or promotion decisions rely heavily on raw citation counts or h-index.
Why it distorts policy:
Citations track visibility, networks, and subfield size more than scientific merit.
Example:
An interdisciplinary researcher working in a small niche is ranked as “low impact” because their field has low citation density.
3. Prestige Bias (Appeal to Prestige)
Policy manifestation:
Panels equate institutional prestige, journal prestige, or PI reputation with research excellence.
Why it distorts policy:
Reinforces inequality; penalizes early-career researchers, women, and scholars from less-resourced institutions.
Example:
A proposal from an elite university receives consistently higher scores despite having less methodological detail than a competitor from a regional institution.
4. Goodhart’s Law / Metric Fixation
Policy manifestation:
Institutions adopt metrics as performance targets (e.g., raise the university’s average citation impact by X%).
Why it distorts policy:
Researchers optimize for the metric, not for meaningful scientific contributions.
Example:
Faculty are pressured to increase h-index, leading to hyper-fragmented publications, self-citations, and low-value incremental work.
5. Quantity = Productivity Fallacy
Policy manifestation:
Panels judge applicants by the number of publications/grants rather than significance or rigor.
Why it distorts policy:
Creates incentives for “salami slicing” and favors labs with resources to produce many incremental papers.
Example:
Applicant A with two major foundational papers is scored lower than Applicant B with twelve marginal ones.
6. Survivorship Bias
Policy manifestation:
Policymakers assume current publication and grant systems accurately reflect merit because only “good work” is published and funded.
Why it distorts policy:
Ignores structural biases (gender, race, geography), language barriers, access to resources, and negative-results bias.
Example:
“If this topic were important or promising, someone would already have been funded to study it.”
7. Popularity = Impact (Appeal to Popularity)
Policy manifestation:
Policymakers increasingly incorporate altmetrics or public buzz as “impact indicators.”
Why it distorts policy:
Public visibility correlates with entertainment value, not scientific relevance.
Example:
A high-profile but speculative neuroscience study gets funding because it generated wide media attention.
8. Novelty Bias (Appeal to Novelty)
Policy manifestation:
Grant programs overvalue “innovative,” “disruptive,” or “transformative” proposals at the expense of replication, incremental improvement, and careful methodological work.
Why it distorts policy:
Creates incentives to oversell novelty and undervalues essential but unglamorous research.
Example:
A replication study essential for validating a major claim is rejected because it “lacks innovation.”
9. Authority Bias (Argument from Authority)
Policy manifestation:
Grant reviewers defer excessively to senior scholars or to previous panel decisions, rather than independently evaluating new evidence.
Why it distorts policy:
Entrenches conservative thinking; suppresses emerging ideas.
Example:
“This PI always produces excellent work; we should trust that this project will succeed.”
10. Funding = Merit Fallacy
Policy manifestation:
Past funding success is used as a proxy for research excellence or likelihood of future success.
Why it distorts policy:
Creates a self-reinforcing “rich-get-richer” cycle.
Example:
A PI with prior grants receives a high score because “previous funders trusted them,” while equally strong newcomers are deemed “high risk.”
11. Reification of Metrics (Metric Mirage)
Policy manifestation:
Abstract concepts like “excellence,” “impact,” or “quality” are treated as numerically measurable entities.
Why it distorts policy:
Metrics are imperfect proxies and should not be confused with the underlying constructs.
Example:
An institution claims to have increased “research excellence” by 15% based on a rise in normalized citation scores.
12. Managerial Reductionism (Input–Output Fallacy)
Policy manifestation:
Research is treated as a production process, where outputs (papers, patents, grants) are assumed to scale linearly with inputs (funding, personnel).
Why it distorts policy:
Discourages high-risk/high-reward projects and creative exploration.
Example:
“If we double this lab’s funding, we expect twice as many papers next year.”
13. Post Hoc Fallacy in Performance Metrics
Policy manifestation:
Policy makers incorrectly infer that performance frameworks caused improvements in research quality.
Why it distorts policy:
Overlooks confounders such as global citation trends, field growth, or collaborative patterns.
Example:
“Since implementing our KPI system, citation counts increased — therefore the KPIs improved research quality.”
14. Misuse of Averages and Composite Scores
Policy manifestation:
Average citations, normalized scores, and composite indices are used directly without attention to skewness, field differences, or manipulation.
Why it distorts policy:
Averages hide extreme variability and create incentives to game the metric.
Example:
A university rewards departments whose “average citation impact” exceeds a threshold, even though a few superstar labs inflate the department averages while most papers receive few citations.
I then asked how scientists and scholars use this system to their advantage. This is what I got.
Below is a safe, analytical explanation of how scholars sometimes make use of the fallacies listed earlier to advance their careers.
This is not a guide for unethical behavior—it is a diagnostic overview intended to help you recognize and counter such practices in academic evaluation systems.
How Scholars Exploit Fallacious Arguments to Advance Their Careers
1. Exploiting the Impact Factor Fallacy
Mechanism:
Targeting high-impact journals even when the fit is weak or the work is exaggerated.
Typical behaviors:
- Overselling novelty or significance to appeal to selective journals.
- Prioritizing “flashy” results over methodological rigor.
- Avoiding replication or incremental work because it won’t be accepted in high-IF venues.
Career impact:
“High-IF publication lists” can sway hiring committees and funders who wrongly equate venue prestige with quality.
2. Gaming Citations (Citation = Quality Fallacy)
Mechanism:
Increasing citation counts to appear influential.
Typical behaviors:
- Strategic self-citation or “citation circles” among collaborators.
- Producing many review papers that accumulate citations quickly.
- Engaging in controversial or speculative claims that attract attention.
- Publishing in rapidly growing subfields where citation density is high.
Career impact:
Inflated citation profiles can boost h-index and strengthen grant proposals under metrics-driven evaluation systems.
3. Leveraging Prestige Bias
Mechanism:
Using institutional or network prestige to make work appear more important.
Typical behaviors:
- Collaborating with big-name labs to gain reflected prestige regardless of actual contribution.
- Moving institutions primarily for brand value.
- Relying on reputation to push work through peer review with lighter scrutiny.
Career impact:
Committees often defer to prestige signals, consciously or unconsciously.
4. Optimizing for Goodhart’s Law (Metric Targeting)
Mechanism:
Maximizing metrics that institutions use as proxies for “research quality.”
Typical behaviors:
- Producing many minimal publishable units (“salami slicing”).
- Prioritizing “safe, publishable” projects rather than risky or fundamental ones.
- Publishing in venues that favor quick acceptance and high citation turnover.
- Excessive self-citation to boost metric profiles.
Career impact:
Helps meet performance targets (h-index, grant income, publication counts) that some institutions reward.
5. Exploiting the Quantity = Productivity Fallacy
Mechanism:
Increasing the number of publications to look more productive.
Typical behaviors:
- Splitting one dataset into multiple papers.
- Publishing many low-novelty variations of the same idea.
- Placing students or postdocs on multiple papers to multiply authorship count.
- Targeting conferences or journals with rapid publication cycles.
Career impact:
Creates an impressive-looking CV for committees that reward volume.
6. Benefiting from Survivorship Bias
Mechanism:
Highlighting successes and hiding failed research, negative results, or rejected work.
Typical behaviors:
- Only reporting positive findings.
- Dropping unsuccessful lines of research without publishing null results.
- Selectively publicizing awards, grants, and “wins.”
Career impact:
Creates an illusion of consistent success and reliability.
7. Using Popularity Metrics (Appeal to Popularity)
Mechanism:
Appearing influential through visibility rather than scientific substance.
Typical behaviors:
- Promoting work aggressively on social media.
- Engaging in attention-grabbing presentations or commentary.
- Publishing media-friendly abstracts or press releases.
- Choosing trendy topics likely to attract public interest and altmetrics.
Career impact:
Creates an image of “high impact” in systems that reward altmetric visibility.
8. Exploiting Novelty Bias
Mechanism:
Overclaiming novelty to fit grant or journal preferences.
Typical behaviors:
- Rebranding incremental improvements as “groundbreaking.”
- Using buzzwords (e.g., AI, sustainability, quantum) for topical appeal.
- Avoiding necessary replication and careful theory-building.
Career impact:
Attracts attention from selection panels biased toward “innovation.”
9. Leveraging Authority Bias
Mechanism:
Relying on the perceived expertise of senior collaborators.
Typical behaviors:
- Being added to senior scholars’ papers with minimal contribution.
- Using well-known co-authors to lend credibility and smooth publication.
- Citing authority figures to immunize work from criticism.
Career impact:
Strengthens networks and provides “halo effects.”
10. Using Prior Funding as a Proxy for Merit (Funding = Quality Fallacy)
Mechanism:
Turning past grants into self-reinforcing evidence of competence.
Typical behaviors:
- Using prior funding as justification for new funding (“We have a proven track record”).
- Joining large consortia primarily to be associated with prestigious grants.
- Highlighting grant income as a metric of success.
Career impact:
Amplifies cumulative advantage in funding decisions.
11. Exploiting Reified Metrics (“Excellence” Scores)
Mechanism:
Translating abstract concepts (excellence, impact) into quantifiable performance indicators—then optimizing for them.
Typical behaviors:
- Designing publication strategies to maximize score-weighted outputs.
- Tailoring research statements to fit institutional performance frameworks.
- Presenting normalized citation metrics as evidence of personal brilliance.
Career impact:
Panels often equate metrics with real excellence, even when they don’t align.
12. Managerial Reductionism
Mechanism:
Presenting research as a predictable set of “deliverables” to satisfy evaluation frameworks.
Typical behaviors:
- Overpromising deliverables in grant applications to appear reliable.
- Aligning research with institutional KPIs rather than core scientific needs.
- Producing short-term outputs to match reporting cycles.
Career impact:
Makes applicants appear “low-risk” to decision-makers.
Why Scholars Use These Tactics
- Competitive pressure: Scarce jobs and grants push people to optimize for what is rewarded.
- Misaligned incentives: Institutions reward metrics, not underlying scientific value.
- Cumulative advantage: Success breeds success; early wins snowball.
- Performance cultures: Neoliberal managerialism encourages metric-maximizing behavior.
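For anyone who has never looked under the hood: the h-index that so much of this gaming revolves around is a trivially simple computation, which is exactly what makes it so easy to target. Below is a minimal sketch in Python with two invented, hypothetical publication profiles (none of this is real data): it computes the standard h-index and compares a handful of substantial papers with the same effort salami-sliced into many thin, mutually citing ones.

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that the author has h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical profiles with invented citation counts, for illustration only.
# Researcher A: three substantial papers.
deep_work = [90, 40, 25]        # 155 citations in total

# Researcher B: the same effort salami-sliced into twelve thin papers,
# each nudged upward by a handful of self- and circle-citations.
sliced_work = [14, 13, 12, 12, 11, 10, 10, 9, 9, 8, 8, 7]  # 123 citations in total

print("A (3 substantial papers):", h_index(deep_work))    # h = 3
print("B (12 sliced papers):    ", h_index(sliced_work))  # h = 9
```

The sliced profile ends up with fewer citations in total yet a much higher h-index, which is precisely the distortion that the Goodhart’s-law and quantity fallacies above describe.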
I rest my case. Why should I be part of that? And what we haven’t talked about is the business models of academic journals, which are very dubious indeed. I made my decision early on, paid the small price, but it was more than worth it. I have no regrets. Thankfully, I do not even need to pretend to be an angel. I do not need such a reputation. I can just be human and fallible.