[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"extension-skill-lllllllama-run-train-de":3,"guides-for-lllllllama-run-train":502,"similar-k1747627qg43rqm3jxeynrz0m186ns1q-de":503},{"_creationTime":4,"_id":5,"children":6,"community":7,"display":9,"evaluation":15,"identity":241,"isFallback":226,"parentExtension":246,"providers":247,"relations":253,"repo":256,"tags":498,"workflow":499},1778692740057.788,"k1747627qg43rqm3jxeynrz0m186ns1q",[],{"reviewCount":8},0,{"description":10,"installMethods":11,"name":13,"sourceUrl":14},"Verwaltete Trainingsausführungskompetenz für Deep-Learning-Repositorys. Verwenden Sie diese Kompetenz, wenn ein dokumentierter oder ausgewählter Trainingsbefehl konservativ für die Starteinrichtungsüberprüfung, die Verifizierung kurzer Läufe, den vollständigen Start oder die Wiederaufnahme ausgeführt werden soll. Status, Checkpoints und Metriken werden in standardisierten `train_outputs/` erfasst. Nicht zur Umgebungsverwaltung, explorativen Sweeps, spekulativen Ideenimplementierung oder End-to-End-Orchestrierung verwenden.",{"claudeCode":12},"lllllllama/ai-paper-reproduction-skill","run-train","https://github.com/lllllllama/ai-paper-reproduction-skill",{"_creationTime":16,"_id":17,"extensionId":5,"locale":18,"result":19,"trustSignals":224,"workflow":239},1778692740057.7883,"kn71cd70xf4n8fex31zc0ese8186nw2x","de",{"checks":20,"evaluatedAt":192,"extensionSummary":193,"features":194,"nonGoals":200,"promptVersionExtension":205,"promptVersionScoring":206,"purpose":207,"rationale":208,"score":209,"summary":210,"tags":211,"tier":218,"useCases":219},[21,26,29,32,36,39,44,48,51,54,58,62,65,69,72,75,78,81,84,87,91,95,99,104,108,111,115,118,122,125,128,131,134,137,140,144,148,151,154,158,161,164,167,170,174,177,180,183,186,189],{"category":22,"check":23,"severity":24,"summary":25},"Praktischer Nutzen","Problemrelevanz","pass","Die Beschreibung benennt klar ein konkretes Problem: die konservative Ausführung von Deep-Learning-Trainingsbefehlen zur 
Verifizierung und Erfassung von Status in Forschungsrepositorys.",{"category":22,"check":27,"severity":24,"summary":28},"Alleinstellungsmerkmal","Die Kompetenz bietet eine spezialisierte, konservative Ausführungsschiene für Trainingsbefehle mit strukturierter Erfassung der Ausgabe, die über das Standardverhalten von LLMs hinausgeht.",{"category":22,"check":30,"severity":24,"summary":31},"Produktionsreife","Die Kompetenz scheint für den Produktionseinsatz bereit zu sein und kümmert sich um die Ausführung von Trainings, die Status-Erfassung und die Ausgabe von Informationen im angegebenen Bereich.",{"category":33,"check":34,"severity":24,"summary":35},"Umfang","Prinzip der einzigen Verantwortung","Die Kompetenz konzentriert sich ausschließlich auf die Ausführung und Überwachung von Trainingsbefehlen und hält sich an eine einzige Verantwortung.",{"category":33,"check":37,"severity":24,"summary":38},"Qualität der Beschreibung","Die Beschreibung spiegelt den Zweck der Kompetenz zur konservativen Trainingsausführung und Erfassung von Ausgaben genau wider, einschließlich klarer Anwendungsfälle und Grenzen.",{"category":40,"check":41,"severity":42,"summary":43},"Aufruf","Geltungsbereich von Tools","not_applicable","Dies ist eine Kompetenz, die einen einzelnen Befehl ausführt, keine Sammlung von Tools mit einzelnen Geltungsbereichen.",{"category":45,"check":46,"severity":24,"summary":47},"Dokumentation","Konfigurations- & Parameterreferenz","Schlüsselparameter wie --repo, --command, --timeout, --run-mode und andere sind im Hilfetext des Skripts dokumentiert.",{"category":33,"check":49,"severity":42,"summary":50},"Tool-Benennung","Als Kompetenz gibt es keine benutzerseitigen Tools/Befehle, die auf Namenskonventionen geprüft werden könnten.",{"category":33,"check":52,"severity":24,"summary":53},"Minimale I/O-Oberfläche","Die Kompetenz nimmt spezifische Kommandozeilenargumente entgegen und gibt strukturiertes JSON aus, was einer minimalen I/O-Oberfläche 
entspricht.",{"category":55,"check":56,"severity":24,"summary":57},"Lizenz","Lizenznutzbarkeit","Die Erweiterung ist unter der MIT-Lizenz lizenziert, einer permissiven Open-Source-Lizenz.",{"category":59,"check":60,"severity":24,"summary":61},"Wartung","Aktualität der Commits","Der letzte Commit war am 9. Mai 2026, was innerhalb der letzten 3 Monate liegt.",{"category":59,"check":63,"severity":42,"summary":64},"Abhängigkeitsmanagement","Das Skript scheint nur Standard-Python-Bibliotheken zu verwenden und listet keine externen Drittanbieterabhängigkeiten auf, die verwaltet werden müssten.",{"category":66,"check":67,"severity":24,"summary":68},"Sicherheit","Geheimnisverwaltung","Das Skript handhabt oder exponiert keine Geheimnisse; es konzentriert sich auf die Ausführung eines Befehls und die Erfassung der Ausgabe.",{"category":66,"check":70,"severity":24,"summary":71},"Injection","Das Skript verwendet `shlex.split` zur Analyse des Befehls, was Risiken von Shell-Injection für die Befehlszeichenfolge mildert. 
Es ruft keine externen Daten ab.",{"category":66,"check":73,"severity":24,"summary":74},"Transitive Lieferketten-Granaten","Das Skript ruft zur Laufzeit keinen externen Code oder keine externen Daten ab; alle Abhängigkeiten sind lokale Python-Bibliotheken.",{"category":66,"check":76,"severity":24,"summary":77},"Sandbox-Isolierung","Das Skript operiert innerhalb des bereitgestellten Repository-Pfads und verwendet Standard-Subprozess-Ausführung, wobei Sandbox-Grenzen respektiert werden.",{"category":66,"check":79,"severity":24,"summary":80},"Sandbox-Escape-Primitive","Im Skript wurden keine getrennten Prozess-Spawns oder No-Retry-Schleifen erkannt.",{"category":66,"check":82,"severity":24,"summary":83},"Datenexfiltration","Das Skript führt keine ausgehenden Aufrufe zur Datenübermittlung durch und gibt nur strukturiertes JSON lokal aus.",{"category":66,"check":85,"severity":24,"summary":86},"Versteckte Texttricks","Der Skriptcode und seine zugehörige Dokumentation enthalten keine versteckten Texttricks oder Verschleierungen.",{"category":88,"check":89,"severity":24,"summary":90},"Hooks","Undurchsichtige Codeausführung","Das Python-Skript ist klarer, lesbarer Quellcode und beinhaltet keine Verschleierungstechniken.",{"category":92,"check":93,"severity":24,"summary":94},"Portabilität","Strukturelle Annahme","Das Skript verwendet korrekt relative Pfade und geht von einer Standard-Repository-Struktur aus, wobei es sich auf das bereitgestellte Argument `--repo` bezieht.",{"category":96,"check":97,"severity":24,"summary":98},"Vertrauen","Aufmerksamkeit für Issues","Es gibt 0 offene und 0 geschlossene Issues in den letzten 90 Tagen, was auf keine aktuelle Aktivität, aber auch keine offenen Probleme hinweist.",{"category":100,"check":101,"severity":102,"summary":103},"Versionierung","Release-Management","warning","Das Skript selbst hat keine Versionsnummer, und die Installationsanweisungen des Repositorys beziehen sich hauptsächlich auf die Installation von main (`npx skills 
add ... --all` oder `... --skill ...`), was es schwierig macht, eine bestimmte Version dieses Skripts festzulegen.",{"category":105,"check":106,"severity":24,"summary":107},"Ausführung","Validierung","Das Skript verwendet `shlex.split` zur Befehlsanalyse und `argparse` zur Argumentvalidierung, wodurch die grundlegende Eingabeintegrität sichergestellt wird.",{"category":66,"check":109,"severity":24,"summary":110},"Ungeschützte destruktive Operationen","Die Hauptfunktion des Skripts ist die Ausführung eines vom Benutzer bereitgestellten Befehls, aber es führt keine destruktiven Operationen selbst ohne Benutzereingriff durch.",{"category":112,"check":113,"severity":24,"summary":114},"Codeausführung","Fehlerbehandlung","Das Skript behandelt Fehler wie 'Datei nicht gefunden', Timeouts und Nicht-Null-Exit-Codes ordnungsgemäß und meldet sie in der Ausgabe-JSON.",{"category":112,"check":116,"severity":24,"summary":117},"Protokollierung","Das Skript gibt eine strukturierte JSON-Nutzlast mit Ausführungsdetails und Protokollen an stdout aus, die als Audit-Nachweis dient.",{"category":119,"check":120,"severity":42,"summary":121},"Compliance","DSGVO","Die Kompetenz führt nur Befehle aus und erfasst die Ausgabe; sie verarbeitet keine persönlichen Daten.",{"category":119,"check":123,"severity":24,"summary":124},"Zielmarkt","Die Kompetenz ist ein universelles Trainingsausführungs-Tool und hat keine regionale oder jurisdiktionsspezifische Logik, wodurch es global einsetzbar ist.",{"category":92,"check":126,"severity":24,"summary":127},"Laufzeitstabilität","Das Skript verwendet Standard-Python-Bibliotheken und -Praktiken, was die plattformübergreifende Kompatibilität auf Systemen mit Python 3 gewährleistet.",{"category":45,"check":129,"severity":24,"summary":130},"README","Die README-Datei des Repositorys enthält umfassende Details zu den Kompetenzen, einschließlich Installation und Zweck, und die spezifische SKILL.md-Datei der Kompetenz ist 
klar.",{"category":33,"check":132,"severity":42,"summary":133},"Größe der Tool-Oberfläche","Dies ist eine einzelne Kompetenz, die einen Befehl ausführt, keine Erweiterung, die mehrere Tools bereitstellt.",{"category":40,"check":135,"severity":42,"summary":136},"Sich überschneidende Nahe-Synonym-Tools","Als einzelne Kompetenz gibt es keine sich überschneidenden Tools, die bewertet werden müssten.",{"category":45,"check":138,"severity":24,"summary":139},"Phantom-Funktionen","Alle beworbenen Funktionen in der SKILL.md und README sind im bereitgestellten Python-Skript implementiert.",{"category":141,"check":142,"severity":24,"summary":143},"Installation","Installationsanleitung","Die README bietet klare Installationsanweisungen mit `npx` und enthält erweiterte lokale Befehle zur Installation.",{"category":145,"check":146,"severity":24,"summary":147},"Fehler","Handlungsauffordernde Fehlermeldungen","Fehler wie 'Executable not found' oder 'timeout' werden mit Kontext und einer klaren Angabe des Fehlschlags in der Ausgabe-JSON gemeldet.",{"category":105,"check":149,"severity":24,"summary":150},"Angeheftete Abhängigkeiten","Das Skript basiert auf Standard-Python-Bibliotheken, und das Repository enthält `scripts/install_skills.py`, was auf ein Abhängigkeitsmanagement hindeutet, obwohl eine Lock-Datei für das Skript selbst nicht explizit vorhanden ist.",{"category":33,"check":152,"severity":42,"summary":153},"Dry-Run-Vorschau","Die Kompetenz führt einen vom Benutzer bereitgestellten Befehl aus; eine Dry-Run-Funktion müsste innerhalb dieses Befehls selbst implementiert werden, nicht von diesem Wrapper.",{"category":155,"check":156,"severity":24,"summary":157},"Protokoll","Idempotente Wiederholung & Timeouts","Das Skript erzwingt ein Timeout und meldet es als strukturierten Fehler, und die Ausführung selbst ist so konzipiert, dass sie bei Bedarf von einer übergeordneten Orchestrierungsebene wiederholt 
wird.",{"category":119,"check":159,"severity":24,"summary":160},"Telemetrie-Opt-in","Das Skript sendet keine Telemetrie; die gesamte Ausgabe ist lokales JSON.",{"category":40,"check":162,"severity":24,"summary":163},"Präziser Zweck","Der Zweck der Kompetenz ist präzise definiert: konservative Ausführung ausgewählter Trainingsbefehle mit strukturierter Erfassung von Ausgaben, wobei explizit angegeben wird, wann sie verwendet und wann nicht verwendet werden soll.",{"category":40,"check":165,"severity":24,"summary":166},"Prägnantes Frontmatter","Das SKILL.md-Frontmatter ist prägnant und gibt den Zweck und den Bereich der Kompetenz klar an, ohne übermäßige Schlüsselwörter.",{"category":45,"check":168,"severity":24,"summary":169},"Prägnanter Textteil","Die SKILL.md ist prägnant und lagert tiefere Materialien an separate Referenzdateien wie `training-policy.md` und Skripte aus.",{"category":171,"check":172,"severity":24,"summary":173},"Kontext","Progressive Offenlegung","Die SKILL.md verweist auf externe Dateien wie `references/training-policy.md` und Skripte, was eine progressive Offenlegung zeigt.",{"category":171,"check":175,"severity":42,"summary":176},"Forked Exploration","Diese Kompetenz ist nicht für tiefe Explorationen konzipiert; sie führt einen bestimmten Befehl aus und es wird nicht erwartet, dass sie `context: fork` setzt.",{"category":22,"check":178,"severity":24,"summary":179},"Verwendungsbeispiele","Die README enthält zahlreiche Beispiele für verschiedene Kompetenzen im Repository, einschließlich konzeptioneller Beispiele, die andeuten, wie diese Kompetenz im Kontext aufgerufen würde.",{"category":22,"check":181,"severity":24,"summary":182},"Randfälle","Das Skript behandelt gängige Randfälle wie 'Befehl nicht gefunden', Timeouts und Nicht-Null-Exit-Codes und dokumentiert das Symptom und den Wiederherstellungspfad (gemeldet in der Ausgabe).",{"category":112,"check":184,"severity":42,"summary":185},"Tool-Fallback","Diese Kompetenz ist nicht auf externe 
MCP-Server oder andere Tools angewiesen, die einen Fallback-Mechanismus erfordern würden.",{"category":66,"check":187,"severity":24,"summary":188},"Stoppen bei unerwartetem Zustand","Das Skript ist so konzipiert, dass es die Ausführung stoppt und Fehler (z. B. Befehl nicht gefunden, Timeout, Nicht-Null-Exit) meldet, anstatt in einem unerwarteten Zustand fortzufahren.",{"category":92,"check":190,"severity":24,"summary":191},"Kreuz-Kompetenz-Kopplung","Diese Kompetenz ist in sich geschlossen und stützt sich nicht implizit auf andere Kompetenzen. Ihr Zweck ist deutlich und gut definiert.",1778692620395,"Diese Kompetenz führt einen angegebenen Trainingsbefehl in einem gegebenen Repository aus, erfasst dessen Ausgabe und Status und gibt diese Informationen in einem strukturierten JSON-Format aus. Sie verarbeitet Starteinrichtungsprüfungen, kurze Läufe, vollständige Starts und Wiederaufnahmen und kümmert sich auch um Timeouts und Fehler.",[195,196,197,198,199],"Konservative Ausführung von Trainingsbefehlen","Strukturierte Erfassung von Status, Checkpoints und Metriken","Verarbeitung von Starteinrichtungsprüfungen, Prüfungen kurzer Läufe, vollständigen Starts und Wiederaufnahmen","Ausgabe von Nachweisen in `train_outputs/`","Fehler- und Timeout-Behandlung",[201,202,203,204],"Umgebungsverwaltung oder Herunterladen von Assets","Explorative Sweeps oder spekulative Ideenimplementierung","End-to-End-Orchestrierung von Forschungszielen","Autonome Auswahl von Trainingsbefehlen","3.0.0","4.4.0","Bereitstellung einer vertrauenswürdigen und prüfbaren Möglichkeit zur konservativen Ausführung von Deep-Learning-Trainingsbefehlen, um die Verifizierung und strukturierte Erfassung von Ergebnissen zu gewährleisten.","Die Kompetenz ist gut dokumentiert, produktionsreif und hält sich an bewährte Sicherheitspraktiken. 
Der einzige geringfügige Befund ist das Fehlen einer expliziten Versionierung für das Skript selbst, was bei einzelnen Utility-Skripten innerhalb eines größeren Repositorys üblich ist.",99,"Eine robuste und gut dokumentierte Kompetenz für die konservative Trainingsausführung und Erfassung von Nachweisen im Bereich Deep Learning.",[212,213,214,215,216,217],"deep-learning","training","research","verification","monitoring","python","community",[220,221,222,223],"Überprüfung des Starts von Trainingsbefehlen in einem Forschungsrepository","Ausführung von kurzzeitigen Trainingsläufen zur Verifizierung","Initiierung oder Wiederaufnahme vollständiger Trainingsläufe mit überwachter Nachweisführung","Erfassung strukturierter Protokolle und Checkpoints von Trainingsprozessen",{"codeQuality":225,"collectedAt":227,"documentation":228,"maintenance":231,"security":236,"testCoverage":238},{"hasLockfile":226},false,1778692605425,{"descriptionLength":229,"readmeSize":230},435,22701,{"closedIssues90d":8,"forks":232,"hasChangelog":233,"openIssues90d":8,"pushedAt":234,"stars":235},4,true,1778347974000,75,{"hasNpmPackage":226,"license":237,"smitheryVerified":226},"MIT",{"hasCi":233,"hasTests":233},{"updatedAt":240},1778692740057,{"basePath":242,"githubOwner":243,"githubRepo":244,"locale":18,"slug":13,"type":245},"skills/run-train","lllllllama","ai-paper-reproduction-skill","skill",null,{"evaluate":248,"extract":251},{"promptVersionExtension":205,"promptVersionScoring":206,"score":209,"tags":249,"targetMarket":250,"tier":218},[212,213,214,215,216,217],"global",{"commitSha":252},"HEAD",{"repoId":254,"translatedFrom":255},"kd7629v5mqesxwwe9w7qtfgp7d86n6re","k17bmxf37ewg3r45z7ef99p7z986mf5w",{"_creationTime":257,"_id":254,"identity":258,"providers":259,"workflow":494},1778692391648.3123,{"githubOwner":243,"githubRepo":244,"sourceUrl":14},{"classify":260,"discover":488,"github":491},{"commitSha":252,"extensions":261},[262,340,370,382,402,415,428,441,451,465,476],{"basePath":263,"description"
:264,"displayName":265,"installMethods":266,"rationale":267,"selectedPaths":268,"source":338,"sourceLanguage":339,"type":245},"skills/ai-research-explore","Explore-lane end-to-end orchestrator for the third research scenario: the researcher has already chosen the task family, dataset, benchmark, evaluation method, and provided SOTA references, and wants candidate-only exploration on top of `current_research` with auditable repo understanding, idea gating, and governed experiments written to `explore_outputs/`. Do not use for README-first trusted reproduction, open-ended direction finding, narrow code-only or run-only exploration, passive repo analysis, or implicit experimentation.","ai-research-explore",{"claudeCode":12},"SKILL.md frontmatter at skills/ai-research-explore/SKILL.md",[269,272,275,277,279,281,283,285,288,290,292,294,296,298,300,302,304,306,308,310,312,314,316,318,320,322,324,326,328,330,332,334,336],{"path":270,"priority":271},"SKILL.md","mandatory",{"path":273,"priority":274},"references/ai-research-explore-policy.md","medium",{"path":276,"priority":274},"references/idea-evaluation-framework.md",{"path":278,"priority":274},"references/research-campaign-spec.md",{"path":280,"priority":274},"references/smoke-validation-policy.md",{"path":282,"priority":274},"references/source-mapping-policy.md",{"path":284,"priority":274},"references/sources-naming-policy.md",{"path":286,"priority":287},"scripts/lookup/__init__.py","low",{"path":289,"priority":287},"scripts/lookup/cache_store.py",{"path":291,"priority":287},"scripts/lookup/inventory_writer.py",{"path":293,"priority":287},"scripts/lookup/normalizers.py",{"path":295,"priority":287},"scripts/lookup/providers/__init__.py",{"path":297,"priority":287},"scripts/lookup/providers/arxiv_provider.py",{"path":299,"priority":287},"scripts/lookup/providers/base.py",{"path":301,"priority":287},"scripts/lookup/providers/doi_provider.py",{"path":303,"priority":287},"scripts/lookup/providers/github_provider.py",{"path":3
05,"priority":287},"scripts/lookup/providers/optional_provider.py",{"path":307,"priority":287},"scripts/lookup/providers/url_provider.py",{"path":309,"priority":287},"scripts/lookup/record_schema.py",{"path":311,"priority":287},"scripts/lookup/repo_extractors.py",{"path":313,"priority":287},"scripts/lookup/source_support.py",{"path":315,"priority":287},"scripts/orchestrate_explore.py",{"path":317,"priority":287},"scripts/passes/__init__.py",{"path":319,"priority":287},"scripts/passes/atomic_idea_decomposition.py",{"path":321,"priority":287},"scripts/passes/candidate_idea_generation.py",{"path":323,"priority":287},"scripts/passes/execution_feasibility.py",{"path":325,"priority":287},"scripts/passes/idea_cards.py",{"path":327,"priority":287},"scripts/passes/idea_ranking.py",{"path":329,"priority":287},"scripts/passes/implementation_fidelity.py",{"path":331,"priority":287},"scripts/passes/improvement_bank.py",{"path":333,"priority":287},"scripts/passes/lookup_sources.py",{"path":335,"priority":287},"scripts/passes/source_mapping.py",{"path":337,"priority":287},"scripts/write_outputs.py","rule","en",{"basePath":341,"description":342,"displayName":343,"installMethods":344,"rationale":345,"selectedPaths":346,"source":338,"sourceLanguage":339,"type":245},"skills/ai-research-reproduction","Main orchestrator for README-first AI repo reproduction. Use when the user wants an end-to-end, minimal-trustworthy reproduction flow that reads the repository first, selects the smallest documented inference or evaluation target, coordinates intake, setup, trusted execution, optional trusted training, optional repository analysis, and optional paper-gap resolution, enforces conservative patch rules, records evidence, assumptions, deviations, and human decision points, and writes the standardized `repro_outputs/` bundle. 
Do not use for paper summary, generic environment setup, isolated repo scanning, standalone command execution, silent protocol changes, or broad research assistance outside repository-grounded reproduction.","ai-research-reproduction",{"claudeCode":12},"SKILL.md frontmatter at skills/ai-research-reproduction/SKILL.md",[347,348,350,352,354,356,358,360,362,364,366,368],{"path":270,"priority":271},{"path":349,"priority":287},"assets/COMMANDS.template.md",{"path":351,"priority":287},"assets/LOG.template.md",{"path":353,"priority":287},"assets/PATCHES.template.md",{"path":355,"priority":287},"assets/SUMMARY.template.md",{"path":357,"priority":287},"assets/status.template.json",{"path":359,"priority":274},"references/architecture.md",{"path":361,"priority":274},"references/language-policy.md",{"path":363,"priority":274},"references/output-spec.md",{"path":365,"priority":274},"references/patch-policy.md",{"path":367,"priority":274},"references/research-safety-principles.md",{"path":369,"priority":287},"scripts/orchestrate_repro.py",{"basePath":371,"description":372,"displayName":373,"installMethods":374,"rationale":375,"selectedPaths":376,"source":338,"sourceLanguage":339,"type":245},"skills/analyze-project","Trusted-lane analysis skill for deep learning research repositories. Use when the user wants to read and understand a repository, inspect model structure and training or inference entrypoints, review configs and insertion points, or flag suspicious implementation patterns without modifying code or running heavy jobs. 
Do not use for active command execution, broad refactoring, speculative code adaptation, or automatic bug fixing.","analyze-project",{"claudeCode":12},"SKILL.md frontmatter at skills/analyze-project/SKILL.md",[377,378,380],{"path":270,"priority":271},{"path":379,"priority":274},"references/analysis-policy.md",{"path":381,"priority":287},"scripts/analyze_project.py",{"basePath":383,"description":384,"displayName":385,"installMethods":386,"rationale":387,"selectedPaths":388,"source":338,"sourceLanguage":339,"type":245},"skills/env-and-assets-bootstrap","Environment and assets sub-skill for README-first AI repo reproduction. Use when the task is specifically to prepare a conservative conda-first environment, checkpoint and dataset path assumptions, cache location hints, and setup notes before any run on a README-documented repository. Do not use for repo scanning, full orchestration, paper interpretation, final run reporting, or generic environment setup that is not tied to a specific reproduction target.","env-and-assets-bootstrap",{"claudeCode":12},"SKILL.md frontmatter at skills/env-and-assets-bootstrap/SKILL.md",[389,390,392,394,396,398,400],{"path":270,"priority":271},{"path":391,"priority":274},"references/assets-policy.md",{"path":393,"priority":274},"references/env-policy.md",{"path":395,"priority":287},"scripts/bootstrap_env.py",{"path":397,"priority":287},"scripts/bootstrap_env.sh",{"path":399,"priority":287},"scripts/plan_setup.py",{"path":401,"priority":287},"scripts/prepare_assets.py",{"basePath":403,"description":404,"displayName":405,"installMethods":406,"rationale":407,"selectedPaths":408,"source":338,"sourceLanguage":339,"type":245},"skills/explore-code","Explore-lane code adaptation skill for deep learning research repositories. 
Use when the researcher explicitly authorizes exploratory work on an isolated branch or worktree to transplant modules, adapt a backbone, add LoRA or adapter layers, replace a head, or stitch together low-risk migration ideas with summary-only records in `explore_outputs/`. Do not use for end-to-end exploration orchestration on top of `current_research`, trusted baseline reproduction, conservative debugging, environment setup, or default repository analysis.","explore-code",{"claudeCode":12},"SKILL.md frontmatter at skills/explore-code/SKILL.md",[409,410,412,414],{"path":270,"priority":271},{"path":411,"priority":274},"references/explore-policy.md",{"path":413,"priority":287},"scripts/plan_code_changes.py",{"path":337,"priority":287},{"basePath":416,"description":417,"displayName":418,"installMethods":419,"rationale":420,"selectedPaths":421,"source":338,"sourceLanguage":339,"type":245},"skills/explore-run","Explore-lane experimental execution skill for deep learning research repositories. Use when the researcher explicitly authorizes exploratory runs such as small-subset validation, short-cycle guess-and-check, batch sweeps, idle-GPU search, or quick transfer-learning trials, with results summarized in `explore_outputs/`. Do not use for end-to-end exploration orchestration on top of `current_research`, trusted baseline execution, conservative training verification, default routing, or implicit experimentation.","explore-run",{"claudeCode":12},"SKILL.md frontmatter at skills/explore-run/SKILL.md",[422,423,425,427],{"path":270,"priority":271},{"path":424,"priority":274},"references/execution-policy.md",{"path":426,"priority":287},"scripts/plan_variants.py",{"path":337,"priority":287},{"basePath":429,"description":430,"displayName":431,"installMethods":432,"rationale":433,"selectedPaths":434,"source":338,"sourceLanguage":339,"type":245},"skills/minimal-run-and-audit","Trusted-lane execution and reporting skill for README-first AI repo reproduction. 
Use when the task is specifically to capture or normalize evidence from the selected smoke test or documented inference or evaluation command and write standardized `repro_outputs/` files, including patch notes when repository files changed. Do not use for training execution, initial repo intake, generic environment setup, paper lookup, target selection, or end-to-end orchestration by itself.","minimal-run-and-audit",{"claudeCode":12},"SKILL.md frontmatter at skills/minimal-run-and-audit/SKILL.md",[435,436,438,440],{"path":270,"priority":271},{"path":437,"priority":274},"references/reporting-policy.md",{"path":439,"priority":287},"scripts/run_command.py",{"path":337,"priority":287},{"basePath":442,"description":443,"displayName":444,"installMethods":445,"rationale":446,"selectedPaths":447,"source":338,"sourceLanguage":339,"type":245},"skills/paper-context-resolver","Optional narrow helper skill for README-first AI repo reproduction. Use only when the README and repository files leave a narrow reproduction-critical gap and the task is to resolve a specific paper detail such as dataset split, preprocessing, evaluation protocol, checkpoint mapping, or runtime assumption from primary paper sources while recording conflicts. Do not use for general paper summary, repo scanning, environment setup, command execution, title-only paper lookup, or replacing README guidance by default.","paper-context-resolver",{"claudeCode":12},"SKILL.md frontmatter at skills/paper-context-resolver/SKILL.md",[448,449],{"path":270,"priority":271},{"path":450,"priority":274},"references/paper-assisted-reproduction.md",{"basePath":452,"description":453,"displayName":454,"installMethods":455,"rationale":456,"selectedPaths":457,"source":338,"sourceLanguage":339,"type":245},"skills/repo-intake-and-plan","Narrow helper skill for README-first AI repo reproduction. 
Use when the task is specifically to scan a repository, read the README and common project files, extract documented commands, classify inference, evaluation, and training candidates, and return the smallest trustworthy reproduction plan to the main orchestrator. Do not use for environment setup, asset download, command execution, final reporting, paper lookup, or end-to-end orchestration.","repo-intake-and-plan",{"claudeCode":12},"SKILL.md frontmatter at skills/repo-intake-and-plan/SKILL.md",[458,459,461,463],{"path":270,"priority":271},{"path":460,"priority":274},"references/repo-scan-rules.md",{"path":462,"priority":287},"scripts/extract_commands.py",{"path":464,"priority":287},"scripts/scan_repo.py",{"basePath":242,"description":466,"displayName":13,"installMethods":467,"rationale":468,"selectedPaths":469,"source":338,"sourceLanguage":339,"type":245},"Trusted-lane training execution skill for deep learning research repositories. Use when a documented or selected training command should be run conservatively for startup verification, short-run verification, full kickoff, or resume, with status, checkpoint, and metric capture written to standardized `train_outputs/`. Do not use for environment setup, exploratory sweeps, speculative idea implementation, or end-to-end orchestration.",{"claudeCode":12},"SKILL.md frontmatter at skills/run-train/SKILL.md",[470,471,473,475],{"path":270,"priority":271},{"path":472,"priority":274},"references/training-policy.md",{"path":474,"priority":287},"scripts/run_training.py",{"path":337,"priority":287},{"basePath":477,"description":478,"displayName":479,"installMethods":480,"rationale":481,"selectedPaths":482,"source":338,"sourceLanguage":339,"type":245},"skills/safe-debug","Trusted-lane debug skill for deep learning research work. Use when the user pastes a traceback, terminal error, CUDA OOM, checkpoint load failure, shape mismatch, NaN loss symptom, or training failure and wants conservative diagnosis before any patching. 
Do not use for broad refactoring, speculative adaptation, automatic exploratory patching, or general repository familiarization.","safe-debug",{"claudeCode":12},"SKILL.md frontmatter at skills/safe-debug/SKILL.md",[483,484,486],{"path":270,"priority":271},{"path":485,"priority":274},"references/debug-policy.md",{"path":487,"priority":287},"scripts/safe_debug.py",{"sources":489},[490],"manual",{"closedIssues90d":8,"description":492,"forks":232,"license":237,"openIssues90d":8,"pushedAt":234,"readmeSize":230,"stars":235,"topics":493},"",[],{"classifiedAt":495,"discoverAt":496,"extractAt":497,"githubAt":497,"updatedAt":495},1778692395631,1778692391648,1778692393876,[212,216,217,214,213,215],{"evaluatedAt":500,"extractAt":501,"updatedAt":240},1778692620717,1778692396032,[],[504,535,564,591,621,647],{"_creationTime":505,"_id":506,"community":507,"display":508,"identity":514,"providers":518,"relations":528,"tags":531,"workflow":532},1778693180473.1174,"k17fm8t65dw1y28823kj8ce3bn86mgqg",{"reviewCount":8},{"description":509,"installMethods":510,"name":512,"sourceUrl":513},"Azure Monitor Query SDK for Python. 
Use for querying Log Analytics workspaces and Azure Monitor metrics.\nTriggers: \"azure-monitor-query\", \"LogsQueryClient\", \"MetricsQueryClient\", \"Log Analytics\", \"Kusto queries\", \"Azure metrics\".\n",{"claudeCode":511},"microsoft/agent-skills","azure-monitor-query-py","https://github.com/microsoft/agent-skills",{"basePath":515,"githubOwner":516,"githubRepo":517,"locale":339,"slug":512,"type":245},".github/plugins/azure-sdk-python/skills/azure-monitor-query-py","microsoft","agent-skills",{"evaluate":519,"extract":527},{"promptVersionExtension":205,"promptVersionScoring":206,"score":520,"tags":521,"targetMarket":250,"tier":526},100,[522,216,523,524,525,217],"azure","logs","metrics","sdk","verified",{"commitSha":252},{"parentExtensionId":529,"repoId":530},"k171mfx6atvhq1bkhpky84v4b186n9qd","kd77czgnv00rfjm815pcc5xx5986n5t8",[522,523,524,216,217,525],{"evaluatedAt":533,"extractAt":534,"updatedAt":533},1778695102758,1778693180473,{"_creationTime":536,"_id":537,"community":538,"display":539,"identity":545,"providers":549,"relations":557,"tags":560,"workflow":561},1778695116697.1838,"k17c6fx43mgkj95s4yzww50w5s86nb65",{"reviewCount":8},{"description":540,"installMethods":541,"name":543,"sourceUrl":544},"High-level PyTorch framework with Trainer class, automatic distributed training (DDP/FSDP/DeepSpeed), callbacks system, and minimal boilerplate. Scales from laptop to supercomputer with same code. 
Use when you want clean training loops with built-in best practices.",{"claudeCode":542},"Orchestra-Research/AI-Research-SKILLs","pytorch-lightning","https://github.com/Orchestra-Research/AI-Research-SKILLs",{"basePath":546,"githubOwner":547,"githubRepo":548,"locale":339,"slug":543,"type":245},"08-distributed-training/pytorch-lightning","Orchestra-Research","AI-Research-SKILLs",{"evaluate":550,"extract":556},{"promptVersionExtension":205,"promptVersionScoring":206,"score":209,"tags":551,"targetMarket":250,"tier":526},[552,553,213,554,555,212],"pytorch","lightning","distributed-training","mlops",{"commitSha":252},{"parentExtensionId":558,"repoId":559},"k17155ws9qc0hw7a568bg79sfd86max8","kd70hj1y80mhra5xm5g188j5n586mg18",[212,554,553,555,552,213],{"evaluatedAt":562,"extractAt":563,"updatedAt":562},1778696329359,1778695116697,{"_creationTime":565,"_id":566,"community":567,"display":568,"identity":574,"providers":578,"relations":585,"tags":587,"workflow":588},1778691799740.4905,"k17c27dcgjsqmxeggb19stv4xn86mf1z",{"reviewCount":8},{"description":569,"installMethods":570,"name":572,"sourceUrl":573},"Deep learning framework (PyTorch Lightning). 
Organize PyTorch code into LightningModules, configure Trainers for multi-GPU/TPU, implement data pipelines, callbacks, logging (W&B, TensorBoard), and distributed training (DDP, FSDP, DeepSpeed) for scalable neural network training.",{"claudeCode":571},"K-Dense-AI/claude-scientific-skills","PyTorch Lightning","https://github.com/K-Dense-AI/claude-scientific-skills",{"basePath":575,"githubOwner":576,"githubRepo":577,"locale":339,"slug":543,"type":245},"scientific-skills/pytorch-lightning","K-Dense-AI","claude-scientific-skills",{"evaluate":579,"extract":583},{"promptVersionExtension":205,"promptVersionScoring":206,"score":520,"tags":580,"targetMarket":250,"tier":526},[552,212,581,217,582],"machine-learning","framework",{"commitSha":252,"license":584},"Apache-2.0",{"repoId":586},"kd79rphh5gexy91xmpxc05h5mh86mm9r",[212,582,581,217,552],{"evaluatedAt":589,"extractAt":590,"updatedAt":589},1778693958717,1778691799740,{"_creationTime":592,"_id":593,"community":594,"display":595,"identity":601,"providers":605,"relations":614,"tags":617,"workflow":618},1778695548458.3782,"k17a4rtftm1z500gdcksks32wx86n9p2",{"reviewCount":8},{"description":596,"installMethods":597,"name":599,"sourceUrl":600},"Design and operate a data integrity monitoring programme based on ALCOA+ principles. Covers detective controls, audit trail review schedules, anomaly detection patterns (off-hours activity, sequential modifications, bulk changes), metrics dashboards, investigation triggers, and escalation matrix definition. 
Use when establishing a data integrity monitoring programme for GxP systems, preparing for inspections where data integrity is a focus area, after a data integrity incident requiring enhanced monitoring, or when implementing MHRA, WHO, or PIC/S guidance.\n",{"claudeCode":598},"pjt222/agent-almanac","monitor-data-integrity","https://github.com/pjt222/agent-almanac",{"basePath":602,"githubOwner":603,"githubRepo":604,"locale":339,"slug":599,"type":245},"skills/monitor-data-integrity","pjt222","agent-almanac",{"evaluate":606,"extract":613},{"promptVersionExtension":205,"promptVersionScoring":206,"score":520,"tags":607,"targetMarket":250,"tier":526},[608,609,610,611,216,612],"compliance","gxp","data-integrity","alcoa","anomaly-detection",{"commitSha":252},{"parentExtensionId":615,"repoId":616},"k170h0janaa9kwn7cfgfz2ykss86mmh9","kd7aryv63z61j39n2td1aeqkvh86mh12",[611,612,608,610,609,216],{"evaluatedAt":619,"extractAt":620,"updatedAt":619},1778699562914,1778695548458,{"_creationTime":622,"_id":623,"community":624,"display":625,"identity":631,"providers":634,"relations":641,"tags":643,"workflow":644},1778694578248.1062,"k17e56dzsqh7qked458bjbs0e586n21y",{"reviewCount":8},{"description":626,"installMethods":627,"name":629,"sourceUrl":630},"Query Netdata Cloud via its REST API -- metrics, logs (systemd-journal / windows-events / otel-logs), topology graphs (topology:snmp), network flows (flows:netflow), alerts, dynamic configuration (DynCfg), and generic Functions on a node. Use when the user asks about querying Netdata Cloud, fetching metrics from the cloud, querying logs / topology / netflow / sflow / ipfix through Cloud, listing or modifying configurations via DynCfg, calling agent Functions through Cloud, listing spaces/rooms/nodes, or building a curl command against `app.netdata.cloud`. 
Pairs with the `query-netdata-agents` skill when direct-agent access is needed.",{"claudeCode":628},"netdata/netdata","query-netdata-cloud","https://github.com/netdata/netdata",{"basePath":632,"githubOwner":633,"githubRepo":633,"locale":339,"slug":629,"type":245},"docs/netdata-ai/skills/query-netdata-cloud","netdata",{"evaluate":635,"extract":640},{"promptVersionExtension":205,"promptVersionScoring":206,"score":520,"tags":636,"targetMarket":250,"tier":526},[633,637,216,524,523,638,639],"api","topology","rest",{"commitSha":252},{"repoId":642},"kd70yp91ybn40a638h3hzz6nbd86m2cw",[637,523,524,216,633,639,638],{"evaluatedAt":645,"extractAt":646,"updatedAt":645},1778694825298,1778694578248,{"_creationTime":648,"_id":649,"community":650,"display":651,"identity":657,"providers":661,"relations":669,"tags":671,"workflow":672},1778694240519.7402,"k172jnxq28h65x6zn1p19r731586md2x",{"reviewCount":8},{"description":652,"installMethods":653,"name":655,"sourceUrl":656},"Track skill performance and emerging patterns",{"claudeCode":654},"mshadmanrahman/pm-pilot","meta-observer","https://github.com/mshadmanrahman/pm-pilot",{"basePath":658,"githubOwner":659,"githubRepo":660,"locale":339,"slug":655,"type":245},"skills/productivity/meta-observer","mshadmanrahman","pm-pilot",{"evaluate":662,"extract":668},{"promptVersionExtension":205,"promptVersionScoring":206,"score":520,"tags":663,"targetMarket":250,"tier":526},[216,664,665,666,667],"analytics","productivity","logging","skills",{"commitSha":252},{"repoId":670},"kd728wqst6vwd95ymycxb97nrx86mnsn",[664,666,216,665,667],{"evaluatedAt":673,"extractAt":674,"updatedAt":673},1778694605108,1778694240519]