[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"extension-skill-langchain-ai-cudf-analytics-id":3,"guides-for-langchain-ai-cudf-analytics":236,"similar-k175ck9zvdy7dd1ze11z36cjbx866tek":237},{"_creationTime":4,"_id":5,"children":6,"community":7,"display":9,"evaluation":22,"identity":190,"isFallback":195,"parentExtension":196,"providers":197,"relations":203,"repo":205,"workflow":232},1778053968286.491,"k175ck9zvdy7dd1ze11z36cjbx866tek",[],{"reviewCount":8},0,{"description":10,"installMethods":11,"name":12,"sourceUrl":13,"tags":14},"Use for GPU-accelerated data analysis on datasets, CSVs, or tabular data using NVIDIA cuDF. Triggers when tasks involve groupby aggregations, statistical summaries, anomaly detection, or large-scale data profiling.",{},"cuDF Analytics Skill","https://github.com/langchain-ai/deepagents/tree/HEAD/examples/nvidia_deep_agent/skills/cudf-analytics",[15,16,17,18,19,20,21],"gpu","data-analysis","cudf","pandas","tabular-data","aggregation","statistics",{"_creationTime":23,"_id":24,"extensionId":5,"locale":25,"result":26,"trustSignals":178,"workflow":188},1778054053159.3162,"kn762wbb5vn5fsprbcv8er7jbn867kwy","en",{"checks":27,"evaluatedAt":168,"extensionSummary":169,"promptVersionExtension":170,"promptVersionScoring":171,"rationale":172,"score":173,"summary":174,"tags":175,"targetMarket":176,"tier":177},[28,33,36,39,43,46,50,55,58,61,65,70,73,77,80,83,86,89,92,95,98,102,106,110,114,117,120,123,127,130,133,136,139,142,146,149,152,155,158,161,165],{"category":29,"check":30,"severity":31,"summary":32},"Practical Utility","Problem relevance","pass","The description clearly names a concrete user problem: GPU-accelerated data analysis on datasets using NVIDIA cuDF for specific tasks like aggregations and profiling.",{"category":29,"check":34,"severity":31,"summary":35},"Unique selling proposition","The skill leverages NVIDIA cuDF for GPU acceleration, offering a significant performance boost over standard pandas, which is a clear value beyond a 
simple prompt.",{"category":29,"check":37,"severity":31,"summary":38},"Production readiness","The skill provides initialization code to test GPU availability and fallbacks to pandas, along with clear examples for common data analysis tasks, indicating readiness for use.",{"category":40,"check":41,"severity":31,"summary":42},"Scope","Single responsibility principle","The skill focuses solely on GPU-accelerated data analysis using cuDF, clearly defining its scope and not extending into unrelated domains.",{"category":40,"check":44,"severity":31,"summary":45},"Description quality","The description is concise, readable, and accurately reflects the skill's capabilities and intended use cases.",{"category":47,"check":48,"severity":31,"summary":49},"Invocation","Scoped tools","The skill uses narrow verb-noun operations like `read_csv`, `describe`, `mean`, `corr`, `groupby`, and `sort_values`, which are specific to data analysis tasks.",{"category":51,"check":52,"severity":53,"summary":54},"Documentation","Configuration & parameter reference","info","Parameters like file paths are mentioned, but default values for configurations are not explicitly documented. 
The initialization code is clear but lacks documentation on potential configuration overrides.",{"category":40,"check":56,"severity":31,"summary":57},"Tool naming","Tool names like `read_csv`, `to_pd`, `mean`, `corr`, `groupby`, and `describe` are descriptive and directly relate to data analysis operations.",{"category":40,"check":59,"severity":31,"summary":60},"Minimal I/O surface","Input requirements are primarily file paths, and outputs are structured data (pandas DataFrames/Series), aligning with typical data analysis needs without unnecessary extra fields.",{"category":62,"check":63,"severity":31,"summary":64},"License","License usability","The repository includes a standard MIT License file, indicating permissive usability.",{"category":66,"check":67,"severity":68,"summary":69},"Maintenance","Commit recency","not_applicable","No commit history is available for evaluation.",{"category":66,"check":71,"severity":68,"summary":72},"Dependency Management","No third-party dependencies are explicitly managed within the skill's direct code, beyond the expected cudf/pandas for the task.",{"category":74,"check":75,"severity":68,"summary":76},"Security","Secret Management","The skill does not appear to handle or expose any secrets.",{"category":74,"check":78,"severity":31,"summary":79},"Injection","The skill focuses on data analysis operations and does not appear to load or execute untrusted external data as instructions.",{"category":74,"check":81,"severity":31,"summary":82},"Transitive Supply-Chain Grenades","The skill does not fetch remote content at runtime or include symlinks outside its directory; all necessary operations are self-contained or rely on pre-installed libraries.",{"category":74,"check":84,"severity":31,"summary":85},"Sandbox Isolation","The skill operates on data paths provided as input and does not attempt to modify files or paths outside its designated scope.",{"category":74,"check":87,"severity":31,"summary":88},"Sandbox escape primitives","No 
detached process spawns or deny-retry loops were detected in the provided script.",{"category":74,"check":90,"severity":31,"summary":91},"Data Exfiltration","The skill is focused on local data analysis and does not contain any outbound calls to external services for data submission or telemetry.",{"category":74,"check":93,"severity":31,"summary":94},"Hidden Text Tricks","The bundled files do not contain any hidden-steering tricks, invisible characters, or obfuscated content.",{"category":74,"check":96,"severity":31,"summary":97},"Opaque code execution","The provided Python script is plain, readable source code and does not use obfuscation techniques like base64 payloads or runtime fetching.",{"category":99,"check":100,"severity":31,"summary":101},"Portability","Structural Assumption","The skill assumes input is a file path, which is standard. It does not make assumptions about project organization outside of receiving a path.",{"category":103,"check":104,"severity":68,"summary":105},"Trust","Issues Attention","No issue data is available for this repository.",{"category":107,"check":108,"severity":68,"summary":109},"Versioning","Release Management","No versioning information (e.g., CHANGELOG, release tags, manifest version) is present in the provided files.",{"category":111,"check":112,"severity":53,"summary":113},"Code Execution","Validation","Input file paths are implicitly validated by the `read_csv` function, but explicit schema validation libraries are not used for input arguments or output structures.",{"category":74,"check":115,"severity":68,"summary":116},"Unguarded Destructive Operations","The skill is purely read-only and analytical; it does not perform any destructive operations.",{"category":111,"check":118,"severity":31,"summary":119},"Error Handling","The initialization includes a try-except block for GPU availability, and the `to_pd` function has a fallback for potential conversion errors, indicating basic error 
handling.",{"category":111,"check":121,"severity":53,"summary":122},"Logging","The script includes print statements for GPU unavailability and conversion errors, but there is no structured local audit file for actions or outcomes.",{"category":124,"check":125,"severity":68,"summary":126},"Compliance","GDPR","The skill operates on provided data files and does not handle personal data or interact with third parties.",{"category":124,"check":128,"severity":31,"summary":129},"Target market","The skill is for GPU-accelerated data analysis and has no regional or jurisdictional limitations, making it globally applicable.",{"category":99,"check":131,"severity":31,"summary":132},"Runtime stability","The skill has a fallback to pandas if cuDF is unavailable, ensuring it can still perform data analysis on systems without a compatible GPU.",{"category":47,"check":134,"severity":31,"summary":135},"Precise Purpose","The description clearly states the skill's purpose (GPU-accelerated data analysis with cuDF) and provides specific triggers and use cases (groupby, statistical summaries, anomaly detection).",{"category":47,"check":137,"severity":31,"summary":138},"Concise Frontmatter","The frontmatter is dense and effectively summarizes the core capability (GPU-accelerated data analysis with cuDF) and lists relevant triggers.",{"category":51,"check":140,"severity":31,"summary":141},"Concise Body","The skill body is concise and primarily uses relative paths to demonstrate operations, avoiding excessive length within the main file.",{"category":143,"check":144,"severity":31,"summary":145},"Context","Progressive Disclosure","The SKILL.md outlines the flow and provides code examples inline, which are sufficiently concise for direct use without needing external reference files for this scope.",{"category":143,"check":147,"severity":68,"summary":148},"Forked exploration","This skill is a short-form data analysis tool and does not involve deep exploration or multi-file 
inspection.",{"category":29,"check":150,"severity":31,"summary":151},"Usage examples","Sufficient end-to-end examples are provided for core capabilities like reading data, statistical summaries, groupby aggregations, and anomaly detection, which plausibly produce the claimed output.",{"category":29,"check":153,"severity":53,"summary":154},"Edge cases","The initialization handles GPU unavailability, and the `to_pd` function has a fallback, but explicit documentation of other failure modes (e.g., malformed CSV, missing columns) and their recovery paths is missing.",{"category":111,"check":156,"severity":68,"summary":157},"Tool Fallback","This skill does not rely on external tools like MCP servers; it uses standard Python libraries and cuDF/pandas.",{"category":99,"check":159,"severity":31,"summary":160},"Stack assumptions","The skill assumes a Python runtime with cuDF and pandas installed, and the initialization explicitly checks for cuDF availability, declaring its primary stack assumption.",{"category":162,"check":163,"severity":31,"summary":164},"Safety","Halt on unexpected state","The initialization code halts execution and reports an error if cuDF is unavailable and a fallback to pandas is not desired or fails, preventing unexpected behavior.",{"category":99,"check":166,"severity":31,"summary":167},"Cross-skill coupling","The skill is self-contained for its data analysis tasks and does not implicitly rely on other skills being loaded.",1778054020502,"This skill leverages NVIDIA cuDF to perform accelerated data analysis on CSVs and tabular data, supporting operations like statistical summaries, groupby aggregations, and anomaly detection. It includes a fallback to pandas for systems without GPU support and provides clear examples for its use.","2.0.0","3.4.0","The cuDF Analytics Skill is a high-quality, self-contained tool for GPU-accelerated data analysis. It demonstrates excellent documentation, clear scope, and robust error handling with a fallback mechanism. 
The main areas for improvement are more detailed documentation of edge case recovery paths and explicit mention of configuration defaults.",95,"A high-quality skill for GPU-accelerated data analysis using NVIDIA cuDF, offering robust functionality and clear documentation.",[15,16,17,18,19,20,21],"global","verified",{"codeQuality":179,"collectedAt":180,"documentation":181,"maintenance":183,"popularity":184,"security":185,"testCoverage":187},{},1778054008443,{"descriptionLength":182,"readmeSize":8},214,{},{"smitheryUniqueUsers":8,"smitheryUseCount":8},{"hasNpmPackage":186,"smitheryVerified":186},false,{"hasCi":186,"hasTests":186},{"updatedAt":189},1778054053159,{"githubOwner":191,"githubRepo":192,"locale":25,"slug":193,"type":194},"langchain-ai","deepagents","cudf-analytics","skill",true,null,{"extract":198,"llm":201,"smithery":202},{"commitSha":199,"license":200},"b108c71d0c570e16c7050c1eac482e15dc35a5ed","MIT-0",{"promptVersionExtension":170,"promptVersionScoring":171,"score":173,"targetMarket":176,"tier":177},{"qualityScore":8,"totalActivations":8,"uniqueUsers":8,"useCount":8,"verified":186},{"repoId":204},"kd76dna2fvfbnjvzcpd2cwqnyd865xz7",{"_creationTime":206,"_id":204,"identity":207,"providers":209,"workflow":229},1777995558409.8704,{"githubOwner":191,"githubRepo":192,"sourceUrl":208},"https://github.com/langchain-ai/deepagents",{"discover":210,"github":214},{"sources":211},[212,213],"skills-sh","smithery",{"closedIssues90d":215,"forks":216,"homepage":217,"license":218,"openIssues90d":219,"pushedAt":220,"readmeSize":221,"stars":222,"topics":223},256,3140,"https://docs.langchain.com/deepagents","MIT",142,1778033560000,6232,22320,[192,224,225,226,227,228],"langchain","langgraph","ai","python","typescript",{"discoverAt":230,"extractAt":231,"githubAt":231,"updatedAt":231},1777995558409,1778053970345,{"anyEnrichmentAt":233,"extractAt":234,"githubAt":235,"llmAt":189,"smitheryAt":233,"updatedAt":189},1778053994907,1778053968286,1778053969344,[],[238,265,291,322,346,379
],{"_creationTime":239,"_id":240,"community":241,"display":242,"identity":249,"providers":253,"relations":259,"workflow":261},1778053327521.582,"k17d9qgp73tpxcffh07vf2esjx867rvz",{"reviewCount":8},{"description":243,"installMethods":244,"name":245,"sourceUrl":246,"tags":247},"SQL, pandas, and statistical analysis expertise for data exploration and insights. Use when: analyzing data, writing SQL queries, using pandas, performing statistical analysis, or when user mentions data analysis, SQL, pandas, statistics, or needs help exploring datasets.",{},"Data Analyst","https://github.com/shubhamsaboo/awesome-llm-apps/tree/HEAD/awesome_agent_skills/data-analyst",[248,18,21,16],"sql",{"githubOwner":250,"githubRepo":251,"locale":25,"slug":252,"type":194},"shubhamsaboo","awesome-llm-apps","data-analyst",{"extract":254,"llm":256},{"commitSha":255,"license":218},"a35897449fe8b0fab12e8f0fd9f2e2a40e872ab7",{"promptVersionExtension":170,"promptVersionScoring":171,"score":257,"targetMarket":176,"tier":258},75,"evaluated",{"repoId":260},"kd73kvct1kme7748mpsbddhhmx865wd3",{"anyEnrichmentAt":262,"extractAt":263,"githubAt":262,"llmAt":264,"updatedAt":264},1778053329769,1778053327521,1778053376632,{"_creationTime":266,"_id":267,"community":268,"display":269,"identity":282,"providers":284,"relations":289,"workflow":290},1778053968286.4915,"k175b5jxdjfkk9tbcw1p9c7m69867err",{"reviewCount":8},{"description":270,"installMethods":271,"name":272,"sourceUrl":273,"tags":274},"Use for GPU-accelerated machine learning on tabular data using NVIDIA cuML. 
Triggers when tasks involve classification, regression, clustering, dimensionality reduction, or model training on datasets.",{},"cuML Machine Learning Skill","https://github.com/langchain-ai/deepagents/tree/HEAD/examples/nvidia_deep_agent/skills/cuml-machine-learning",[275,15,276,19,277,278,279,280,281],"machine-learning","cuml","classification","regression","clustering","dimensionality-reduction","data-preprocessing",{"githubOwner":191,"githubRepo":192,"locale":25,"slug":283,"type":194},"cuml-machine-learning",{"extract":285,"llm":286,"smithery":288},{"commitSha":199,"license":218},{"promptVersionExtension":170,"promptVersionScoring":171,"score":287,"targetMarket":176,"tier":177},92,{"qualityScore":8,"totalActivations":8,"uniqueUsers":8,"useCount":8,"verified":186},{"repoId":204},{"anyEnrichmentAt":233,"extractAt":234,"githubAt":235,"llmAt":189,"smitheryAt":233,"updatedAt":189},{"_creationTime":292,"_id":293,"community":294,"display":295,"identity":307,"providers":310,"relations":316,"workflow":318},1778054691785.2554,"k179r3z09h3t0ed62ac4yy0qzn867erz",{"reviewCount":8},{"description":296,"name":297,"sourceUrl":298,"tags":299},"Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. 
When Claude needs to work with spreadsheets (.xlsx, .xlsm, .csv, .tsv, etc) for: (1) Creating new spreadsheets with formulas and formatting, (2) Reading or analyzing data, (3) Modifying existing spreadsheets while preserving formulas, (4) Data analysis and visualization in spreadsheets, or (5) Recalculating formulas","Excel Spreadsheet Operations","https://github.com/answerzhao/agent-skills/tree/HEAD/glm-skills/document-skills/xlsx",[300,301,302,303,18,304,305,16,306],"spreadsheet","excel","xlsx","csv","openpyxl","formulas","visualization",{"githubOwner":308,"githubRepo":309,"locale":25,"slug":302,"type":194},"answerzhao","agent-skills",{"extract":311,"llm":314},{"commitSha":312,"license":313},"aad73edbd0d9ffbc3d6a402b6eafa6dab96d5ebb","Proprietary",{"promptVersionExtension":170,"promptVersionScoring":171,"score":257,"targetMarket":176,"tier":315},"flagged",{"repoId":317},"kd712v2g1pay70swwj0jpv2ggs864zgh",{"anyEnrichmentAt":319,"extractAt":320,"githubAt":319,"llmAt":321,"updatedAt":321},1778054692243,1778054691785,1778054738050,{"_creationTime":323,"_id":324,"community":325,"display":326,"identity":332,"providers":335,"relations":340,"workflow":342},1778054086261.097,"k171dyxchnt284xn9x9heqddch866tmp",{"reviewCount":8},{"description":296,"installMethods":327,"name":328,"sourceUrl":329,"tags":330},{},"XLSX Spreadsheet 
Manipulator","https://github.com/bilalmk/todo_correct/tree/HEAD/.claude/skills/panaversity/xlsx",[301,300,16,227,18,304,331],"libreoffice",{"githubOwner":333,"githubRepo":334,"locale":25,"slug":302,"type":194},"bilalmk","todo_correct",{"extract":336,"llm":338},{"commitSha":337,"license":313},"8b43aa04bd5c53e3cda46469b953684519a84ea7",{"promptVersionExtension":170,"promptVersionScoring":171,"score":339,"targetMarket":176,"tier":315},40,{"repoId":341},"kd75ecf652eb91ha327s8bqbex865z6v",{"anyEnrichmentAt":343,"extractAt":344,"githubAt":343,"llmAt":345,"updatedAt":345},1778054086910,1778054086261,1778054163453,{"_creationTime":347,"_id":348,"community":349,"display":350,"identity":363,"providers":366,"relations":372,"workflow":374},1777995620896.9917,"k17231zep11befm3g43rsa1yv5864trn",{"reviewCount":8},{"description":351,"installMethods":352,"name":354,"sourceUrl":355,"tags":356},"Extension from aliengiraffe/spotdb",{"docker":353},"aliengiraffe/spotdb","SpotDB","https://github.com/aliengiraffe/spotdb",[357,248,358,359,16,360,361,362,303],"database","duckdb","go","sandbox","mcp","api",{"githubOwner":364,"githubRepo":365,"locale":25,"slug":365,"type":194},"aliengiraffe","spotdb",{"extract":367,"llm":369,"smithery":371},{"commitSha":368,"license":218},"cfbbef27f89d18939149790a0fa9ce1ee2c5eac5",{"promptVersionExtension":170,"promptVersionScoring":171,"score":370,"targetMarket":176,"tier":177},98,{"qualityScore":8,"totalActivations":8,"uniqueUsers":8,"useCount":8,"verified":186},{"repoId":373},"kd72fk7ta378vyy81k8hqp5rs5864hzf",{"anyEnrichmentAt":375,"extractAt":376,"githubAt":377,"llmAt":378,"smitheryAt":375,"updatedAt":378},1777995723550,1777995620897,1777995621254,1777995897177,{"_creationTime":380,"_id":381,"community":382,"display":383,"identity":393,"providers":397,"relations":402,"workflow":404},1778054663200.0452,"k1766cb3e1wpzhjb96qe6pky8s867we2",{"reviewCount":8},{"description":384,"installMethods":385,"name":386,"sourceUrl":387,"tags":388},"Deploys swarms of 
sub-agents for massive parallel data processing tasks. Unlike agent-army (which is for code changes), this is for DATA tasks -- processing 1000 documents, analyzing datasets, bulk content generation. Configurable swarm size, task distribution, result aggregation, progress tracking, and error recovery.",{},"Agent Swarm Deployer","https://github.com/onewave-ai/claude-skills/tree/HEAD/agent-swarm-deployer",[389,390,391,16,392],"data-processing","parallel-processing","agent-orchestration","automation",{"githubOwner":394,"githubRepo":395,"locale":25,"slug":396,"type":194},"onewave-ai","claude-skills","agent-swarm-deployer",{"extract":398,"llm":400},{"commitSha":399,"license":218},"eb3d80be32b6cafcf0d5df1c1b8a95df75838271",{"promptVersionExtension":170,"promptVersionScoring":171,"score":401,"targetMarket":176,"tier":177},97,{"repoId":403},"kd71e43dj0b7ak5e55pyshxp4n864t6p",{"anyEnrichmentAt":405,"extractAt":406,"githubAt":405,"llmAt":407,"updatedAt":407},1778054667983,1778054663200,1778055270278]