
Ruflo RuVLLM

Plugin · Verified · Active
Part of: Ruflo

RuVLLM local inference with chat formatting (Claude/GPT/Gemini/Ollama/Cohere), model configuration, MicroLoRA fine-tuning, and SONA real-time adaptation

2 Skills · 0 MCPs
Purpose

To enable users to run and fine-tune large language models locally, with performance optimization and integration into RAG pipelines.

Features

  • Local LLM inference with RuVLLM
  • Model configuration and optimization
  • MicroLoRA task-specific fine-tuning
  • SONA real-time adaptation
  • Multi-provider chat formatting (Claude, GPT, Gemini, Ollama, Cohere)
  • HNSW routing for RAG context retrieval
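The multi-provider chat formatting listed above can be illustrated with a minimal sketch. The function name and payload shapes here are illustrative assumptions based on each provider's public chat conventions, not RuVLLM's actual API:

```python
# Hypothetical sketch of multi-provider chat formatting; RuVLLM's real
# API and the exact wire formats may differ.

def format_chat(messages, provider):
    """Convert a generic role/content message list into a provider-flavoured payload."""
    if provider in ("claude", "gpt", "cohere"):
        # These providers accept role/content message lists directly.
        return {"messages": [{"role": m["role"], "content": m["content"]}
                             for m in messages]}
    if provider == "gemini":
        # Gemini-style payloads use "contents" with text "parts",
        # and the assistant role is called "model".
        role_map = {"user": "user", "assistant": "model"}
        return {"contents": [{"role": role_map[m["role"]],
                              "parts": [{"text": m["content"]}]}
                             for m in messages]}
    if provider == "ollama":
        # Ollama's chat endpoint also takes role/content messages.
        return {"messages": messages}
    raise ValueError(f"unsupported provider: {provider}")

payload = format_chat([{"role": "user", "content": "Hi"}], "gemini")
```

A single normalized message list can then be routed to whichever provider format the target model expects.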

Use Cases

  • Configuring optimal local LLM models for specific tasks
  • Fine-tuning LLMs with lightweight adapters (MicroLoRA) for specialized domains
  • Implementing real-time adaptation (SONA) for continuous feedback loops
  • Preparing prompts for various LLM providers and integrating RAG context
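The lightweight-adapter idea behind MicroLoRA can be sketched as a generic LoRA-style forward pass, y = Wx + (α/r)·BAx, where the frozen weight W is augmented by a trainable low-rank update B·A. This is an illustrative sketch of the general technique, not RuVLLM's MicroLoRA implementation:

```python
# Generic low-rank adapter (LoRA-style) forward pass: the frozen weight W
# is augmented with a trainable low-rank update B @ A, scaled by alpha / r.
# Illustrative only; RuVLLM's MicroLoRA internals are not shown here.

def matvec(M, x):
    """Multiply matrix M (list of rows) by vector x."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha=2.0):
    r = len(A)                       # adapter rank = number of rows in A
    base = matvec(W, x)              # frozen path: W x
    delta = matvec(B, matvec(A, x))  # low-rank path: B (A x)
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]

# 2x2 frozen weight with a rank-1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]            # 1x2
B = [[0.5], [0.5]]          # 2x1
y = lora_forward(W, A, B, [1.0, 2.0])  # → [4.0, 5.0]
```

Because only A and B are trained, an adapter of rank r adds just r·(d_in + d_out) parameters per layer, which is what makes per-task fine-tuning cheap.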

Non-Goals

  • Cloud-based LLM inference
  • Replacing core Claude Code functionality
  • General-purpose system administration tools

Installation

First, add the marketplace, then install the plugin:

/plugin marketplace add ruvnet/ruflo
/plugin install ruflo-ruvllm@ruflo

Quality Score

Verified · 98/100
Analyzed about 17 hours ago

Trust Signals

Last commit: about 18 hours ago
Stars: 50.2k
License: MIT
