Huggingface Paper Publisher
Publish and manage research papers on Hugging Face Hub. Supports creating paper pages, linking papers to models/datasets, claiming authorship, and generating professional markdown-based research articles.
The goal is to streamline publishing, managing, and integrating research papers within the Hugging Face ecosystem for AI researchers and engineers.
Features
- Index papers from arXiv
- Link papers to models/datasets
- Claim and verify authorship
- Generate markdown research articles
- Manage paper visibility on profile
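On the Hub, linking a paper to a model (the second feature above) is conventionally done by adding an `arxiv:<id>` tag to the model card's YAML front matter. A minimal sketch; the repository values and arXiv ID below are illustrative placeholders, not from this listing:

```yaml
---
# Model card front matter (illustrative values)
license: apache-2.0
tags:
  - text-generation
  - arxiv:2402.00001   # placeholder arXiv ID; links the model to its paper page
---
```

Once pushed, the Hub renders the tag as a link to the corresponding paper page and lists the model under that paper.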
Use Cases
- Publishing a new research paper and linking it to a model on Hugging Face
- Updating existing model or dataset cards with new paper references
- Managing personal research paper portfolio on Hugging Face
- Generating a professional markdown research article from a template
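To sketch how the pieces above fit together: Hub paper pages live at `huggingface.co/papers/<arxiv_id>`, and the listing also mentions a papers API for structured metadata. A minimal Python helper that builds both URLs; the `/api/papers/<id>` path is an assumption based on that mention, not confirmed by this listing:

```python
def paper_urls(arxiv_id: str) -> dict:
    """Build the Hub URLs associated with an arXiv paper ID.

    The /papers/<id> page URL is the standard paper-page convention;
    the /api/papers/<id> endpoint is an assumption based on the
    "papers API" mentioned in this listing.
    """
    base = "https://huggingface.co"
    return {
        "page": f"{base}/papers/{arxiv_id}",     # human-readable paper page
        "api": f"{base}/api/papers/{arxiv_id}",  # structured metadata (assumed path)
    }

# Placeholder arXiv ID for illustration
urls = paper_urls("2402.00001")
print(urls["page"])  # https://huggingface.co/papers/2402.00001
```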
Non-Goals
- Directly submitting papers to arXiv
- Managing general Hugging Face Hub repositories (models, datasets, spaces) unrelated to papers
- Providing a full-fledged LaTeX editor for paper writing
Installation
First, add the marketplace:

```
/plugin marketplace add huggingface/skills
```

Then install the plugin:

```
/plugin install huggingface-paper-publisher@huggingface-skills
```
Similar Extensions
- Hugging Face Papers (100): Look up and read Hugging Face paper pages in markdown, and use the papers API for structured metadata like authors, linked models, datasets, Spaces, and media URLs when needed.
- Paper Search (100): Search academic papers via OpenAlex. Find papers by keyword, look up details by DOI, with pagination and sorting.
- Context7 Plugin (100): Upstash Context7 MCP server for up-to-date documentation lookup. Pull version-specific documentation and code examples directly from source repositories into your LLM context.
- Huggingface Trackio (99): Track and visualize ML training experiments with Trackio. Log metrics via the Python API and retrieve them via the CLI. Supports real-time dashboards synced to HF Spaces.
- Hf Cli (99): Execute Hugging Face Hub operations using the hf CLI. Download models/datasets, upload files, manage repos, and run cloud compute jobs.
- Huggingface Local Models (99): Use to select models to run locally with llama.cpp and GGUF on CPU, Mac Metal, CUDA, or ROCm. Covers finding GGUFs, quant selection, running servers, exact GGUF file lookup, conversion, and OpenAI-compatible local serving.