
Sentencepiece

Skill · Verified · Active

Language-independent tokenizer treating text as raw Unicode. Supports BPE and Unigram algorithms. Fast (50k sentences/sec), lightweight (6MB memory), deterministic vocabulary. Used by T5, ALBERT, XLNet, mBART. Train on raw text without pre-tokenization. Use when you need multilingual support, CJK languages, or reproducible tokenization.

Purpose

To provide a fast, lightweight, and language-independent tokenizer for raw Unicode text, supporting BPE and Unigram algorithms for multilingual and CJK language processing.

Features

  • Language-independent tokenization of raw Unicode text
  • Support for BPE and Unigram tokenization algorithms
  • Fast (50k sentences/sec) and lightweight (6MB memory) performance
  • Deterministic vocabulary for reproducible tokenization
  • Examples for training, encoding, and decoding

Use Cases

  • Building multilingual NLP models
  • Working with CJK languages
  • Ensuring reproducible tokenization across different environments
  • Training models directly on raw text without pre-tokenization

Non-Goals

  • Serving English-centric tasks specifically (though it can be used for them)
  • Acting as a wrapper for other tokenization libraries like HuggingFace Tokenizers or tiktoken
  • Offering complex pre-processing steps beyond basic Unicode normalization

Installation

First, add the marketplace:

/plugin marketplace add Orchestra-Research/AI-Research-SKILLs
/plugin install AI-Research-SKILLs@ai-research-skills

Quality Score

Verified
98/100
Analyzed 1 day ago

Trust Signals

Last commit: 17 days ago
Stars: 8.3k
License: MIT

Similar Extensions

Sentencepiece

99

Language-independent tokenizer treating text as raw Unicode. Supports BPE and Unigram algorithms. Fast (50k sentences/sec), lightweight (6MB memory), deterministic vocabulary. Used by T5, ALBERT, XLNet, mBART. Train on raw text without pre-tokenization. Use when you need multilingual support, CJK languages, or reproducible tokenization.

Skill
davila7

HuggingFace Tokenizers

95

Fast tokenizers optimized for research and production. Rust-based implementation tokenizes 1GB in <20 seconds. Supports BPE, WordPiece, and Unigram algorithms. Train custom vocabularies, track alignments, handle padding/truncation. Integrates seamlessly with transformers. Use when you need high-performance tokenization or custom tokenizer training.

Skill
davila7

HuggingFace Tokenizers

98

Fast tokenizers optimized for research and production. Rust-based implementation tokenizes 1GB in <20 seconds. Supports BPE, WordPiece, and Unigram algorithms. Train custom vocabularies, track alignments, handle padding/truncation. Integrates seamlessly with transformers. Use when you need high-performance tokenization or custom tokenizer training.

Skill
Orchestra-Research

LinkedIn Humanizer

100

Scrubs AI tells from any text draft OR audits a finished post against the 2026 heuristic-algorithm checklist. Multi-level rewriter (forensic / strict / aesthetic / all) plus `--mode audit` for a detection-only pass/fail review covering length, hook, call to action, formatting penalties, and AI vocabulary. Sub-tools: emoji pattern detection, multi-detector distribution tester (GPTZero, Originality.ai, ZeroGPT, Sapling, Copyleaks), rule explainer. Triggers on "humanize", "de-AI", "check this draft", "review before posting", "is this ready".

Skill
sergebulaev

Convert Resume to Markdown

100

Convert a resume PDF to clean markdown for LLM parsing or candidate pipelines.

Skill
iterationlayer

Sentiment Analyzer

100

Analyze sentiment in text using ML models. Use when: analyzing customer reviews; processing NPS feedback; monitoring brand mentions; evaluating campaign responses; categorizing support tickets

Skill
guia-matthieu