Editorial Guidelines
Everything you need to know about submitting to Pubroot — submission types, review criteria, scoring rubric, and what our AI reviewer evaluates.
Submission Types
Pubroot accepts six types of submissions. Each type has different review emphasis — a case study is judged differently from original research. Select the type that best fits your work when submitting.
Original Research
Novel findings, experiments, or discoveries with original evidence. This is the traditional "research paper" format.
Case Study
Real-world implementation stories, production incidents, debug logs, and lessons learned. Pubroot's differentiator — practical knowledge that traditional journals don't publish.
Benchmark
Structured comparisons and evaluations of tools, frameworks, models, or hardware. Must include reproducible methodology and raw data.
Review / Survey
Literature reviews, landscape analyses, and state-of-the-art surveys. Synthesize existing knowledge and identify gaps or trends.
Tutorial
Step-by-step guides with working code and clear instructions. Must be complete enough for someone to follow from start to finish.
Dataset
Dataset descriptions with collection methodology, statistics, access information, and usage guidelines. Must include a linked repository or data source.
Scoring Rubric
Every submission is scored on a scale of 0.0 to 10.0. The score determines acceptance:
| Score Range | Rating | What It Means |
|---|---|---|
| 9.0 – 10.0 | Exceptional | Original contribution, all claims verified, excellent methodology. Publishable as-is. |
| 7.0 – 8.9 | Good | Solid work with minor issues. Publishable with the noted caveats. |
| 6.0 – 6.9 | Acceptable | Meets the minimum bar but has notable weaknesses. Borderline publishable. |
| 4.0 – 5.9 | Below Average | Significant issues with methodology, accuracy, or novelty. Not publishable as-is. |
| 2.0 – 3.9 | Poor | Major factual errors, very low novelty, or poorly structured. |
| 0.0 – 1.9 | Reject | Spam, gibberish, prompt injection, or completely unsubstantiated claims. |
Acceptance Threshold: 6.0 / 10.0
Submissions scoring 6.0 or higher are accepted and published. Submissions below 6.0 are rejected with detailed feedback — you can address the issues and resubmit.
Review Dimensions
The AI reviewer evaluates each submission across six dimensions, each scored 0.0 to 1.0. These dimension scores inform (but don't mechanically determine) the overall score.
Methodology 0.0 – 1.0
Rigor of approach. Is the method sound? Are variables controlled? Is the experimental design appropriate? For non-experimental work: is the reasoning logical and well-structured?
Factual Accuracy 0.0 – 1.0
Are claims verified? The AI uses Google Search grounding to check specific factual claims, and each verified claim is listed with its source. The more claims verified with high confidence, the higher the score.
Novelty 0.0 – 1.0
Does this contribute something new beyond existing work? Compared against arXiv, Semantic Scholar, and our published index. Pure duplicates score 0. Meaningful extensions or fresh perspectives score higher.
Code Quality 0.0 – 1.0 | null
If a supporting repository is linked: code structure, readability, documentation, test coverage, and whether the code matches the article's claims. Null if no code is provided.
Writing Quality 0.0 – 1.0
Clarity, structure, grammar, and readability. Is the article well-organized with clear sections? Can a reader follow the argument? Is technical terminology used correctly?
Reproducibility 0.0 – 1.0 | null
Can the results be reproduced? Are steps documented? Are dependencies specified? Is data available? Higher for articles with linked repos, full instructions, and pinned dependencies. Null for non-reproducible content (opinion, philosophy).
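As a rough illustration of how these six dimensions fit together, the scores can be pictured as a record in which the two conditional dimensions are nullable. This is a hypothetical sketch; the field names are assumptions for illustration, not Pubroot's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DimensionScores:
    # Always present, each in the range 0.0 - 1.0.
    methodology: float
    factual_accuracy: float
    novelty: float
    writing_quality: float
    # None when no repo is linked or the content is
    # inherently non-reproducible (opinion, philosophy).
    code_quality: Optional[float] = None
    reproducibility: Optional[float] = None

# Example: a text-only article with no linked repository.
scores = DimensionScores(
    methodology=0.7,
    factual_accuracy=0.9,
    novelty=0.6,
    writing_quality=0.8,
)
```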
Type-Specific Review Criteria
Each submission type shifts the weight the reviewer places on different dimensions:
| Dimension | Original Research | Case Study | Benchmark | Review/Survey | Tutorial | Dataset |
|---|---|---|---|---|---|---|
| Methodology | High | Medium | Critical | Medium | Medium | High |
| Factual Accuracy | High | High | High | Critical | High | Medium |
| Novelty | Critical | Low | Medium | Medium | Low | Medium |
| Code Quality | Medium | Medium | High | Low | Critical | High |
| Writing Quality | High | High | Medium | High | Critical | Medium |
| Reproducibility | High | High | Critical | Low | Critical | Critical |
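To make the emphasis table concrete, here is a minimal sketch of how per-type weights might combine dimension scores into an overall score. The weight values and the weighted-average formula are illustrative assumptions only; as noted above, the reviewer uses these emphases as guidance rather than a mechanical formula:

```python
# Hypothetical weights for the Benchmark type, loosely mirroring
# the table above (Critical > High > Medium > Low). These numbers
# are illustrative assumptions, not Pubroot's actual values.
BENCHMARK_WEIGHTS = {
    "methodology": 4.0,       # Critical
    "factual_accuracy": 3.0,  # High
    "novelty": 2.0,           # Medium
    "code_quality": 3.0,      # High
    "writing_quality": 2.0,   # Medium
    "reproducibility": 4.0,   # Critical
}

def weighted_overall(dim_scores: dict, weights: dict) -> float:
    """Weighted average of 0.0-1.0 dimension scores, scaled to 0.0-10.0.

    Dimensions scored as None (e.g. no linked repo) are skipped so
    they neither help nor hurt the result.
    """
    pairs = [(s, weights[d]) for d, s in dim_scores.items() if s is not None]
    total_weight = sum(w for _, w in pairs)
    return 10.0 * sum(s * w for s, w in pairs) / total_weight

overall = weighted_overall(
    {"methodology": 0.8, "factual_accuracy": 0.9, "novelty": 0.5,
     "code_quality": None, "writing_quality": 0.7, "reproducibility": 0.6},
    BENCHMARK_WEIGHTS,
)
print(f"{overall:.1f} / 10.0")  # 7.1 / 10.0
```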
Trust Badges
Every published article receives a trust badge indicating the level of verification:
- Article reviewed AND a public GitHub repository was analyzed. The highest trust level — the AI could inspect the code, check whether it matches claims, and assess reproducibility.
- Article reviewed, but the linked repository is private. The article text was fact-checked, but the code could not be independently verified.
- No supporting repository was provided. The article was reviewed on its text content alone. Factual claims were checked via Google Search, but no code was assessed.
Formatting Guide
Submissions are written in Markdown and rendered with full styling on pubroot.com. Use these features:
Structure
Use ## and ### headers to organize your article into clear sections. We recommend: Introduction, Background, Methods, Results, Discussion, Conclusion, References.
Code Blocks
Use triple backticks with a language tag for syntax highlighting: ```python. Inline code uses single backticks: `variable_name`.
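For example, a fenced block tagged with `python` renders with syntax highlighting:

```python
# A short, self-contained example that readers can run as-is.
def mean(values: list[float]) -> float:
    return sum(values) / len(values)

print(mean([1.0, 2.0, 3.0]))  # 2.0
```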
Tables
Use Markdown tables for data comparisons. These render as styled tables on the site.
Links & Citations
Use inline links: [Author (2025)](https://...). Cite sources throughout your article — the AI fact-checker will verify linked claims.
Images
Use standard Markdown images: `![description](https://example.com/figure.png)`. Host images in your supporting repo or use a permanent URL.
Abstract
Write a self-contained summary (300 words max). This appears on homepage cards and in search results. It should make sense without reading the full article.
Minimum Requirements
- Title: Clear and descriptive (required)
- Category: Select from the journal/topic dropdown (required)
- Abstract: 50-300 words (required)
- Article Body: 200+ words in Markdown (required)
- Supporting Repo: Optional but strongly recommended for higher trust badge
Acceptance Process
- Submit — Open a GitHub Issue using the submission template. Fill in title, category, submission type, abstract, article body, and optionally a supporting repo.
- Queue — Your submission enters the priority queue. Position depends on your contributor reputation (new contributors wait up to 24 hours; trusted contributors are reviewed within minutes).
- Review — The 6-stage pipeline runs. The AI reviews your article with Google Search grounding, checks novelty against academic databases, and inspects any linked code.
- Decision — Score ≥ 6.0: Accepted. A Pull Request is created with your article, review, and metadata. It's auto-merged and published. Score < 6.0: Rejected. The full review is posted as a comment on your Issue with specific feedback. (The decision rule is sketched after this list.)
- Publication — Accepted articles appear on pubroot.com with your abstract, the full article, a review sidebar with scores and claim verification, and a trust badge.
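As a minimal sketch of the decision step only, with stand-in helpers that are hypothetical and not Pubroot's actual internals:

```python
ACCEPTANCE_THRESHOLD = 6.0

def create_publication_pr(issue: int, review: dict) -> None:
    """Hypothetical stand-in for Pubroot's PR-creation step."""
    print(f"PR created for issue #{issue}; auto-merge and publish.")

def post_review_comment(issue: int, review: dict) -> None:
    """Hypothetical stand-in for posting feedback on the Issue."""
    print(f"Full review posted on issue #{issue} with specific feedback.")

def decide(issue: int, score: float, review: dict) -> str:
    if score >= ACCEPTANCE_THRESHOLD:
        create_publication_pr(issue, review)
        return "accepted"
    post_review_comment(issue, review)
    return "rejected"

print(decide(42, 7.4, {}))  # accepted
```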
After Publication
- Content Freshness — Each article has a `valid_until` date (typically 6 months for technical content, 12 months for historical/philosophical). After expiry, articles are marked as potentially outdated.
- Supersession — If you publish an updated version of an existing article, the new version can supersede the old one, which gets a "superseded" notice.
- Reputation — Accepted articles increase your contributor reputation. Higher reputation means faster review times and queue priority.
- Machine-Readable — Every published article has a `manifest.json` with structured metadata, and the full review is available at `/reviews/{paper-id}/review.json` (see the sketch below).
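As a minimal sketch of consuming the machine-readable review, assuming the path above is served from pubroot.com and using only the Python standard library (the `paper_id` value and the response field names are assumptions, not a documented schema):

```python
import json
from urllib.request import urlopen

# Hypothetical paper ID; substitute the ID of a published article.
paper_id = "example-paper-id"
url = f"https://pubroot.com/reviews/{paper_id}/review.json"

with urlopen(url) as resp:
    review = json.load(resp)

# Field names below are assumptions about the schema, for illustration.
print(review.get("overall_score"))
print(review.get("dimensions"))
```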
Ready to Submit?
Choose your journal and topic, pick a submission type, and start writing.
Submit Your Article