Authors submitting to ECCV face several recurring rebuttal and review challenges. Within a strict one-page limit, rebuttals should include concise inline citations or pointers to specific sections of the main paper so reviewers are not left guessing at context. When reviewers request extra experiments or shift scores unpredictably, authors should prepare targeted, evidence-based responses, and may raise procedural concerns with area or associate chairs if reviews appear inconsistent. Authors should also proactively document prior arXiv versions and clarify authorship and changes to prevent misunderstandings about overlap. Overall, clear communication, tactical rebuttal drafting, and timely provenance disclosures reduce friction and help ensure fair peer-review outcomes.
arXiv’s computer science moderators will tighten rules on AI-generated content: authors must take full responsibility for their papers, and submissions containing unverified AI-generated material will face a one-year ban. After the ban, any new submissions from the author must undergo peer review. arXiv chair Thomas G. Dietterich said fabricated references, leftover model comments, or prompt artifacts (e.g., placeholder text) can be used as enforcement evidence. The move follows a surge in AI-generated content on the platform and prior incidents of hidden prompts and manipulated reviews; some researchers support the policy while others warn of selective enforcement or gaming via fake co-authors. The change aims to protect preprint integrity.
A first-time ECCV author asks whether cited works in their main submission must be repeated in a one-page rebuttal. They note three prior papers are referenced in the main manuscript and wonder if reviewers will consult the original paper or if the rebuttal requires explicit citations. The question matters because of the strict page limit and the need for clarity: omitting citations could save space but risk reviewer confusion or missed context. Best practice is to include brief inline citations or parenthetical references (author/year) and, if space permits, minimal bibliographic details; otherwise point reviewers to specific sections/line numbers in the main paper to ensure claims are verifiable without exceeding rebuttal constraints.
A researcher at ECCV reports receiving three peer reviews (1/3, 4/3, 4/5) in which one reviewer gave a 1 (reject) but suggested additional experiments, writing that they "could change [their] assessment." The author asks how a reviewer might move from a 1 to a 4 after rebuttal and expresses frustration about stressful interactions and unclear reviewer behavior. The post seeks guidance on rebuttal strategy and whether area or associate chairs can intervene. This matters because inconsistent or seemingly changeable reviewer scores affect conference acceptance decisions, author workload for additional experiments, and perceptions of review quality at top computer-vision venues.
A conference reviewer for ECCV identified an older arXiv version of the authors' own paper and asked for a compare-and-contrast, noting similar results and figures despite a changed title and method name. The authors say the arXiv preprint is clearly the same work by the same team, with only minor additions in the submitted version; they object to the reviewer’s phrasing that implies duplicate or problematic overlap. This matters because reviewer misunderstanding of prior public drafts can trigger unnecessary rebuttals, affect perceived novelty, and complicate peer review outcomes for computer-vision and machine-learning research. Clear communication about version history and provenance of preprints can help avoid such disputes.