[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"extension-skill-bytedance-data-analysis-sw":3,"guides-for-bytedance-data-analysis":240,"similar-k172f9k8w7xg3bzb9t320dj0cn866517":241},{"_creationTime":4,"_id":5,"children":6,"community":7,"display":9,"evaluation":22,"identity":187,"isFallback":191,"parentExtension":192,"providers":193,"relations":198,"repo":200,"workflow":237},1778053100136.2417,"k172f9k8w7xg3bzb9t320dj0cn866517",[],{"reviewCount":8},0,{"description":10,"installMethods":11,"name":12,"sourceUrl":13,"tags":14},"Use this skill when the user uploads Excel (.xlsx/.xls) or CSV files and wants to perform data analysis, generate statistics, create summaries, pivot tables, SQL queries, or any form of structured data exploration. Supports multi-sheet Excel workbooks, aggregation, filtering, joins, and exporting results to CSV/JSON/Markdown.",{},"Data Analysis Skill","https://github.com/bytedance/deer-flow/tree/HEAD/skills/public/data-analysis",[15,16,17,18,19,20,21],"data-analysis","excel","csv","sql","duckdb","python","analytics",{"_creationTime":23,"_id":24,"extensionId":5,"locale":25,"result":26,"trustSignals":176,"workflow":185},1778053169012.8228,"kn7f0sa23ka7khbnzwfh8n0es5866dd0","en",{"checks":27,"evaluatedAt":166,"extensionSummary":167,"promptVersionExtension":168,"promptVersionScoring":169,"rationale":170,"score":171,"summary":172,"tags":173,"targetMarket":174,"tier":175},[28,33,36,39,43,46,50,54,57,60,64,69,72,76,79,82,85,88,91,94,97,101,105,110,115,118,121,124,128,131,134,137,140,143,147,150,153,156,159,163],{"category":29,"check":30,"severity":31,"summary":32},"Practical Utility","Problem relevance","pass","The description clearly identifies a concrete user problem: performing data analysis on uploaded Excel/CSV files, and lists specific desired outcomes like statistics, summaries, and SQL queries.",{"category":29,"check":34,"severity":31,"summary":35},"Unique selling proposition","The skill offers significant value over a simple prompt by providing a dedicated Python script leveraging DuckDB for efficient SQL querying, schema inspection, and statistical summaries on local files, which is beyond default LLM capabilities.",{"category":29,"check":37,"severity":31,"summary":38},"Production readiness","The extension is production-ready, providing a complete workflow from file inspection to analysis and export, with clear instructions and a functional Python script.",{"category":40,"check":41,"severity":31,"summary":42},"Scope","Single responsibility principle","The extension focuses on data analysis of Excel and CSV files using SQL, adhering to a single, well-defined responsibility.",{"category":40,"check":44,"severity":31,"summary":45},"Description quality","The description accurately reflects the skill's capabilities, including support for multi-sheet Excel, aggregation, filtering, joins, and exporting results, and is well-written.",{"category":47,"check":48,"severity":31,"summary":49},"Invocation","Scoped tools","The skill exposes narrow verb-noun tools like `inspect`, `query`, and `summary`, making them easy for the agent to select.",{"category":51,"check":52,"severity":31,"summary":53},"Documentation","Configuration & parameter reference","All parameters for the `analyze.py` script (`--files`, `--action`, `--sql`, `--table`, `--output-file`) are clearly documented in the SKILL.md file.",{"category":40,"check":55,"severity":31,"summary":56},"Tool naming","The primary action names (`inspect`, `query`, `summary`) are descriptive and align with the domain.",{"category":40,"check":58,"severity":31,"summary":59},"Minimal I/O surface","Tool inputs (file paths, SQL queries, action types) are specific to the task, and outputs are focused on analysis results or exported files, with no extraneous data.",{"category":61,"check":62,"severity":31,"summary":63},"License","License usability","The extension is licensed under the MIT License, which is a permissive open-source license.",{"category":65,"check":66,"severity":67,"summary":68},"Maintenance","Commit recency","not_applicable","No commit history is available for this specific skill, but the parent repository has recent commits, suggesting active maintenance.",{"category":65,"check":70,"severity":31,"summary":71},"Dependency Management","The script explicitly installs missing dependencies (`duckdb`, `openpyxl`) using pip if not found, demonstrating a form of dependency management.",{"category":73,"check":74,"severity":67,"summary":75},"Security","Secret Management","The script does not handle or expose any secrets.",{"category":73,"check":77,"severity":31,"summary":78},"Injection","The script treats file contents as data, executing only user-supplied SQL via DuckDB, with no indication of executing instructions embedded within loaded data files.",{"category":73,"check":80,"severity":31,"summary":81},"Transitive Supply-Chain Grenades","The script only uses committed files and libraries installed via pip, with no runtime downloads or execution of external, untrusted content.",{"category":73,"check":83,"severity":31,"summary":84},"Sandbox Isolation","The script operates within the provided file paths (`/mnt/user-data/uploads/`, `/mnt/user-data/outputs/`, `/mnt/skills/...`) and uses a temporary directory for cache, indicating good sandbox isolation.",{"category":73,"check":86,"severity":31,"summary":87},"Sandbox escape primitives","No detached processes or retry loops around denied tool calls were found in the script.",{"category":73,"check":89,"severity":31,"summary":90},"Data Exfiltration","The script does not make any outbound network calls and operates solely on local files.",{"category":73,"check":92,"severity":31,"summary":93},"Hidden Text Tricks","The bundled files are free of hidden text tricks, control characters, or invisible Unicode sequences that could steer the model.",{"category":73,"check":95,"severity":31,"summary":96},"Opaque code execution","The Python script is readable source code and does not use obfuscation techniques like base64 decoding or runtime code fetching.",{"category":98,"check":99,"severity":31,"summary":100},"Portability","Structural Assumption","The script correctly uses placeholder paths for uploads and outputs (`/mnt/user-data/...`) and explicitly defines its cache directory, making it portable.",{"category":102,"check":103,"severity":67,"summary":104},"Trust","Issues Attention","Issue tracking data is not available for this specific skill.",{"category":106,"check":107,"severity":108,"summary":109},"Versioning","Release Management","warning","There is no explicit versioning information (e.g., version field in frontmatter, CHANGELOG, or release tags) for this skill, and install instructions (implicitly via the repository) do not point to a specific version.",{"category":111,"check":112,"severity":113,"summary":114},"Code Execution","Validation","info","The script validates action arguments and file paths, but it relies on DuckDB for SQL query validation, which might not be as robust as a dedicated schema validation library for all inputs.",{"category":73,"check":116,"severity":31,"summary":117},"Unguarded Destructive Operations","The skill is read-only with respect to user data, performing analysis and exports without destructive operations.",{"category":111,"check":119,"severity":31,"summary":120},"Error Handling","The script includes error handling for file loading, SQL execution, and dependency installation, providing informative messages on failure.",{"category":111,"check":122,"severity":31,"summary":123},"Logging","The script logs file loading status, table information, and potential errors to stdout, providing a clear audit trail of actions.",{"category":125,"check":126,"severity":31,"summary":127},"Compliance","GDPR","The script operates on user-provided data files and does not explicitly handle or submit personal data to third parties.",{"category":125,"check":129,"severity":31,"summary":130},"Target market","The skill operates on local files and uses standard SQL, with no regional logic or limitations detected. Target market is global.",{"category":98,"check":132,"severity":31,"summary":133},"Runtime stability","The script uses standard Python and relies on pip for dependency installation, ensuring good cross-platform compatibility. It gracefully handles missing dependencies.",{"category":47,"check":135,"severity":31,"summary":136},"Precise Purpose","The description clearly states the skill's purpose (data analysis on Excel/CSV) and provides specific examples of when to use it, including explicit boundaries like not reading the Python file directly.",{"category":47,"check":138,"severity":31,"summary":139},"Concise Frontmatter","The frontmatter is concise, clearly defines the skill's core capability and supported file types, and includes trigger phrases without excessive keyword stuffing.",{"category":51,"check":141,"severity":31,"summary":142},"Concise Body","The SKILL.md file is well-structured and avoids excessive verbosity, keeping the core instructions concise and delegating detailed SQL examples to the main body.",{"category":144,"check":145,"severity":31,"summary":146},"Context","Progressive Disclosure","The SKILL.md file outlines the workflow and calls the Python script with example commands, effectively using progressive disclosure for instructions.",{"category":144,"check":148,"severity":67,"summary":149},"Forked exploration","This skill performs direct data analysis and does not involve deep exploration that would necessitate a forked context.",{"category":29,"check":151,"severity":31,"summary":152},"Usage examples","The skill provides multiple end-to-end examples demonstrating file inspection, SQL querying, statistical summary, and multi-file joins with clear inputs and expected outcomes.",{"category":29,"check":154,"severity":31,"summary":155},"Edge cases","The documentation covers potential issues like unsupported file formats, table not found errors, and SQL errors, with explanations and recovery steps.",{"category":111,"check":157,"severity":67,"summary":158},"Tool Fallback","The skill does not rely on external tools like an MCP server; it uses standard Python libraries and DuckDB.",{"category":160,"check":161,"severity":31,"summary":162},"Safety","Halt on unexpected state","The script includes checks for file existence and successful table loading, exiting with an error if prerequisites are not met.",{"category":98,"check":164,"severity":31,"summary":165},"Cross-skill coupling","The skill is self-contained and does not implicitly rely on other skills being loaded or running.",1778053129144,"This skill provides a Python script that leverages DuckDB to perform data analysis on uploaded Excel and CSV files. It supports inspecting file structures, running arbitrary SQL queries, generating statistical summaries, and exporting results to various formats like CSV, JSON, and Markdown. The skill efficiently handles large files and supports multi-sheet Excel workbooks and cross-file joins.","2.0.0","3.4.0","The skill is highly robust, well-documented, and follows best practices for security and usability. It provides a clear problem statement, comprehensive examples, and a functional implementation. The only minor point is the lack of explicit versioning for the skill itself, though this is a common pattern for individual skills within a larger project.",96,"A comprehensive and well-implemented skill for analyzing Excel and CSV files using SQL via DuckDB.",[15,16,17,18,19,20,21],"global","verified",{"codeQuality":177,"collectedAt":178,"documentation":179,"maintenance":181,"security":182,"testCoverage":184},{},1778053116443,{"descriptionLength":180,"readmeSize":8},327,{},{"hasNpmPackage":183,"smitheryVerified":183},false,{"hasCi":183,"hasTests":183},{"updatedAt":186},1778053169012,{"githubOwner":188,"githubRepo":189,"locale":25,"slug":15,"type":190},"bytedance","deer-flow","skill",true,null,{"extract":194,"llm":197},{"commitSha":195,"license":196},"1336872b15c25d45ebcb7c1cf72369c2bdd53187","MIT",{"promptVersionExtension":168,"promptVersionScoring":169,"score":171,"targetMarket":174,"tier":175},{"repoId":199},"kd789sm7egx1h0t1jag6zzhcq98656wv",{"_creationTime":201,"_id":199,"identity":202,"providers":204,"workflow":234},1777995558409.9045,{"githubOwner":188,"githubRepo":189,"sourceUrl":203},"https://github.com/bytedance/deer-flow",{"discover":205,"github":208},{"sources":206},[207],"skills-sh",{"closedIssues90d":209,"forks":210,"homepage":211,"license":196,"openIssues90d":212,"pushedAt":213,"readmeSize":214,"stars":215,"topics":216},389,8629,"https://deerflow.tech",356,1778052455000,38642,65247,[217,218,219,220,221,222,223,224,225,226,227,228,229,20,230,231,232,233],"agent","agentic","agentic-framework","agentic-workflow","ai","ai-agents","deep-research","langchain","langgraph","llm","multi-agent","nodejs","podcast","langmanus","typescript","harness","superagent",{"discoverAt":235,"extractAt":236,"githubAt":236,"updatedAt":236},1777995558409,1778053102364,{"anyEnrichmentAt":238,"extractAt":239,"githubAt":238,"llmAt":186,"updatedAt":186},1778053101076,1778053100136,[],[242,274,305,334,363,384],{"_creationTime":243,"_id":244,"community":245,"display":246,"identity":258,"providers":261,"relations":267,"workflow":269},1777995620896.9917,"k17231zep11befm3g43rsa1yv5864trn",{"reviewCount":8},{"description":247,"installMethods":248,"name":250,"sourceUrl":251,"tags":252},"Extension from aliengiraffe/spotdb",{"docker":249},"aliengiraffe/spotdb","SpotDB","https://github.com/aliengiraffe/spotdb",[253,18,19,254,15,255,256,257,17],"database","go","sandbox","mcp","api",{"githubOwner":259,"githubRepo":260,"locale":25,"slug":260,"type":190},"aliengiraffe","spotdb",{"extract":262,"llm":264,"smithery":266},{"commitSha":263,"license":196},"cfbbef27f89d18939149790a0fa9ce1ee2c5eac5",{"promptVersionExtension":168,"promptVersionScoring":169,"score":265,"targetMarket":174,"tier":175},98,{"qualityScore":8,"totalActivations":8,"uniqueUsers":8,"useCount":8,"verified":183},{"repoId":268},"kd72fk7ta378vyy81k8hqp5rs5864hzf",{"anyEnrichmentAt":270,"extractAt":271,"githubAt":272,"llmAt":273,"smitheryAt":270,"updatedAt":273},1777995723550,1777995620897,1777995621254,1777995897177,{"_creationTime":275,"_id":276,"community":277,"display":278,"identity":289,"providers":292,"relations":299,"workflow":301},1778054691785.2554,"k179r3z09h3t0ed62ac4yy0qzn867erz",{"reviewCount":8},{"description":279,"name":280,"sourceUrl":281,"tags":282},"Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. When Claude needs to work with spreadsheets (.xlsx, .xlsm, .csv, .tsv, etc) for: (1) Creating new spreadsheets with formulas and formatting, (2) Reading or analyzing data, (3) Modifying existing spreadsheets while preserving formulas, (4) Data analysis and visualization in spreadsheets, or (5) Recalculating formulas","Excel Spreadsheet Operations","https://github.com/answerzhao/agent-skills/tree/HEAD/glm-skills/document-skills/xlsx",[283,16,284,17,285,286,287,15,288],"spreadsheet","xlsx","pandas","openpyxl","formulas","visualization",{"githubOwner":290,"githubRepo":291,"locale":25,"slug":284,"type":190},"answerzhao","agent-skills",{"extract":293,"llm":296},{"commitSha":294,"license":295},"aad73edbd0d9ffbc3d6a402b6eafa6dab96d5ebb","Proprietary",{"promptVersionExtension":168,"promptVersionScoring":169,"score":297,"targetMarket":174,"tier":298},75,"flagged",{"repoId":300},"kd712v2g1pay70swwj0jpv2ggs864zgh",{"anyEnrichmentAt":302,"extractAt":303,"githubAt":302,"llmAt":304,"updatedAt":304},1778054692243,1778054691785,1778054738050,{"_creationTime":306,"_id":307,"community":308,"display":309,"identity":316,"providers":319,"relations":326,"workflow":329},1778003232571.9138,"k177f6ycqgfp2emrprca15dcad8651qy",{"reviewCount":8},{"description":310,"name":311,"sourceUrl":312,"tags":313},"Use this skill any time a spreadsheet file is the primary input or output. This means any task where the user wants to: open, read, edit, or fix an existing .xlsx, .xlsm, .csv, or .tsv file (e.g., adding columns, computing formulas, formatting, charting, cleaning messy data); create a new spreadsheet from scratch or from other data sources; or convert between tabular file formats. Trigger especially when the user references a spreadsheet file by name or path — even casually (like \"the xlsx in my downloads\") — and wants something done to it or produced from it. Also trigger for cleaning or restructuring messy tabular data files (malformed rows, misplaced headers, junk data) into proper spreadsheets. The deliverable must be a spreadsheet file. Do NOT trigger when the primary deliverable is a Word document, HTML report, standalone Python script, database pipeline, or Google Sheets API integration, even if tabular data is involved.","XLSX Spreadsheet Skill","https://github.com/anthropics/skills/tree/HEAD/skills/xlsx",[16,283,15,17,284,314,315],"data-analytics","productivity",{"githubOwner":317,"githubRepo":318,"locale":25,"slug":284,"type":190},"anthropics","skills",{"extract":320,"smithery":322},{"commitSha":321,"license":295},"d230a6dd6eb1a0dbee9fec55e2f00a96e28dff81",{"qualityScore":323,"totalActivations":324,"uniqueUsers":325,"useCount":8,"verified":183},0.98221684,663,309,{"parentExtensionId":327,"repoId":328},"k173j5mjcps56pe131t0b18eg18658ay","kd72m31vxr2nd4hahhzvp0cyrn864eyx",{"anyEnrichmentAt":330,"extractAt":331,"githubAt":332,"invalidatedAt":330,"llmAt":333,"smitheryAt":330,"updatedAt":330},1778008076651,1778003232571,1778003234861,1778007125066,{"_creationTime":335,"_id":336,"community":337,"display":338,"identity":348,"providers":352,"relations":357,"workflow":359},1778054663200.0632,"k17ewd377wdsx6ays73afgzezx867xrr",{"reviewCount":8},{"description":339,"name":340,"sourceUrl":341,"tags":342},"Designs and builds ETL/ELT data pipelines. Takes data sources, destination, transformation requirements. Generates pipeline code (Python/SQL), scheduling config, error handling, monitoring setup, and data quality checks. Outputs data-pipeline-spec.md + implementation files.","Data Pipeline Builder","https://github.com/onewave-ai/claude-skills/tree/HEAD/data-pipeline-builder",[343,344,345,20,18,346,347],"etl","elt","data-pipeline","airflow","dbt",{"githubOwner":349,"githubRepo":350,"locale":25,"slug":351,"type":190},"onewave-ai","claude-skills","data-pipeline-builder",{"extract":353,"llm":355},{"commitSha":354,"license":196},"eb3d80be32b6cafcf0d5df1c1b8a95df75838271",{"promptVersionExtension":168,"promptVersionScoring":169,"score":356,"targetMarket":174,"tier":175},95,{"repoId":358},"kd71e43dj0b7ak5e55pyshxp4n864t6p",{"anyEnrichmentAt":360,"extractAt":361,"githubAt":360,"llmAt":362,"updatedAt":362},1778054667983,1778054663200,1778055270278,{"_creationTime":364,"_id":365,"community":366,"display":367,"identity":376,"providers":378,"relations":382,"workflow":383},1778054663200.0618,"k17axa9h8se80jqqhrkjzk6e1986631m",{"reviewCount":8},{"description":368,"name":369,"sourceUrl":370,"tags":371},"Merge multiple CSV/Excel files with intelligent column matching, data deduplication, and conflict resolution. Handles different schemas, formats, and combines data sources. Use when users need to merge spreadsheets, combine data exports, or consolidate multiple files into one.","CSV/Excel Merger","https://github.com/onewave-ai/claude-skills/tree/HEAD/csv-excel-merger",[17,16,372,373,374,375],"data-manipulation","merging","deduplication","data-quality",{"githubOwner":349,"githubRepo":350,"locale":25,"slug":377,"type":190},"csv-excel-merger",{"extract":379,"llm":380},{"commitSha":354,"license":196},{"promptVersionExtension":168,"promptVersionScoring":169,"score":381,"targetMarket":174,"tier":175},88,{"repoId":358},{"anyEnrichmentAt":360,"extractAt":361,"githubAt":360,"llmAt":362,"updatedAt":362},{"_creationTime":385,"_id":386,"community":387,"display":388,"identity":395,"providers":399,"relations":404,"workflow":406},1778053327521.582,"k17d9qgp73tpxcffh07vf2esjx867rvz",{"reviewCount":8},{"description":389,"installMethods":390,"name":391,"sourceUrl":392,"tags":393},"SQL, pandas, and statistical analysis expertise for data exploration and insights. Use when: analyzing data, writing SQL queries, using pandas, performing statistical analysis, or when user mentions data analysis, SQL, pandas, statistics, or needs help exploring datasets.",{},"Data Analyst","https://github.com/shubhamsaboo/awesome-llm-apps/tree/HEAD/awesome_agent_skills/data-analyst",[18,285,394,15],"statistics",{"githubOwner":396,"githubRepo":397,"locale":25,"slug":398,"type":190},"shubhamsaboo","awesome-llm-apps","data-analyst",{"extract":400,"llm":402},{"commitSha":401,"license":196},"a35897449fe8b0fab12e8f0fd9f2e2a40e872ab7",{"promptVersionExtension":168,"promptVersionScoring":169,"score":297,"targetMarket":174,"tier":403},"evaluated",{"repoId":405},"kd73kvct1kme7748mpsbddhhmx865wd3",{"anyEnrichmentAt":407,"extractAt":408,"githubAt":407,"llmAt":409,"updatedAt":409},1778053329769,1778053327521,1778053376632]