[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"extension-skill-claude-office-skills-etl-pipeline-hi":3,"guides-for-claude-office-skills-etl-pipeline":236,"similar-k178af4x3rgc2d00rcyykvca0x8674dv":237},{"_creationTime":4,"_id":5,"children":6,"community":7,"display":9,"evaluation":22,"identity":201,"isFallback":206,"parentExtension":207,"providers":208,"relations":213,"repo":215,"workflow":233},1778053148350.4407,"k178af4x3rgc2d00rcyykvca0x8674dv",[],{"reviewCount":8},0,{"description":10,"installMethods":11,"name":12,"sourceUrl":13,"tags":14},"Design and automate Extract, Transform, Load data pipelines for data integration and analytics",{},"ETL Pipeline","https://github.com/claude-office-skills/skills/tree/HEAD/etl-pipeline",[15,16,17,18,19,20,21],"etl","data-pipeline","integration","analytics","automation","mcp","data-engineering",{"_creationTime":23,"_id":24,"extensionId":5,"locale":25,"result":26,"trustSignals":190,"workflow":199},1778053561145.646,"kn7d81xerrq42jcyvr0gv78fzs867pax","en",{"checks":27,"evaluatedAt":180,"extensionSummary":181,"promptVersionExtension":182,"promptVersionScoring":183,"rationale":184,"score":185,"summary":186,"tags":187,"targetMarket":188,"tier":189},[28,33,36,40,44,48,52,55,59,63,67,70,71,72,73,74,75,76,77,78,79,80,81,85,88,91,95,98,101,105,108,111,115,119,122,125,128,131,134,137,140,143,147,150,154,158,161,164,167,170,174,177],{"category":29,"check":30,"severity":31,"summary":32},"Invocation","Precise Purpose","pass","The description clearly states the extension's purpose (design and automate ETL pipelines) and names the artifact it operates on (data pipelines) with explicit verbs (design, automate).",{"category":29,"check":34,"severity":31,"summary":35},"Concise Frontmatter","The frontmatter is concise and self-contained, summarizing the core capability effectively within the character limit.",{"category":37,"check":38,"severity":31,"summary":39},"Documentation","Concise Body","The skill body is well-structured and likely 
under 500 lines, with diagrams and code snippets clearly demarcated, suggesting progressive disclosure.",{"category":41,"check":42,"severity":31,"summary":43},"Context","Progressive Disclosure","The SKILL.md outlines the flow and uses code blocks and conceptual diagrams, suggesting that detailed procedures would be delegated to separate files if necessary.",{"category":41,"check":45,"severity":46,"summary":47},"Forked exploration","not_applicable","This skill focuses on pipeline design and automation, not deep code review or research, so 'context: fork' is not applicable.",{"category":49,"check":50,"severity":31,"summary":51},"Practical Utility","Usage examples","The SKILL.md includes detailed code examples for various components like sources, transformations, and load strategies, and a conceptual diagram for the pipeline architecture.",{"category":49,"check":53,"severity":31,"summary":54},"Edge cases","The SKILL.md details error handling, retry mechanisms, and data quality checks, covering potential failure modes and recovery steps.",{"category":56,"check":57,"severity":31,"summary":58},"Code Execution","Tool Fallback","The skill lists its MCP server and tools in the frontmatter and implies a fallback by referencing 'claude-claude' or similar internal tools for basic operations.",{"category":60,"check":61,"severity":31,"summary":62},"Portability","Stack assumptions","The skill implicitly assumes a Python runtime for transformations and SQL for queries, and mentions MCP tools, aligning with typical Claude environments.",{"category":64,"check":65,"severity":31,"summary":66},"Safety","Halt on unexpected state","The 'Error Handling' section specifies retry mechanisms and actions on failure ('log_error', 'send_alert', 'save_failed_records'), indicating a robust approach to unexpected states.",{"category":60,"check":68,"severity":31,"summary":69},"Cross-skill coupling","The skill focuses on ETL pipelines and lists related skills, indicating it operates standalone and 
cross-references rather than implicitly relying on other skills.",{"category":29,"check":30,"severity":31,"summary":32},{"category":29,"check":34,"severity":31,"summary":35},{"category":37,"check":38,"severity":31,"summary":39},{"category":41,"check":42,"severity":31,"summary":43},{"category":41,"check":45,"severity":46,"summary":47},{"category":49,"check":50,"severity":31,"summary":51},{"category":49,"check":53,"severity":31,"summary":54},{"category":56,"check":57,"severity":31,"summary":58},{"category":60,"check":61,"severity":31,"summary":62},{"category":64,"check":65,"severity":31,"summary":66},{"category":60,"check":68,"severity":31,"summary":69},{"category":82,"check":83,"severity":31,"summary":84},"Security","Problem relevance","Description names a concrete user problem / pain point the extension addresses.",{"category":49,"check":86,"severity":31,"summary":87},"Unique selling proposition","The extension provides a comprehensive framework for ETL pipeline design and automation, going beyond simple API wrappers by integrating multiple data sources, transformation logic, and load strategies.",{"category":49,"check":89,"severity":31,"summary":90},"Production readiness","The skill covers the complete lifecycle of ETL pipeline design, including extraction, transformation, loading, orchestration, data quality, and monitoring, making it suitable for production use.",{"category":92,"check":93,"severity":31,"summary":94},"Scope","Single responsibility principle","The extension focuses on the domain of ETL pipeline design and automation, with no indications of extending into unrelated areas.",{"category":92,"check":96,"severity":31,"summary":97},"Description quality","The description is accurate, concise, and readable, effectively reflecting the extension's capabilities.",{"category":29,"check":99,"severity":31,"summary":100},"Scoped tools","The MCP tools listed (postgres_query, mysql_query, etc.) 
are specific verb-noun specialists, facilitating precise agent selection.",{"category":37,"check":102,"severity":103,"summary":104},"Configuration & parameter reference","info","While the SKILL.md provides detailed examples of configurations for sources, transformations, and loading, it does not explicitly document default values or precedence orders for all parameters.",{"category":92,"check":106,"severity":31,"summary":107},"Tool naming","The MCP tools listed (e.g., `postgres_query`, `bigquery_load`) are descriptive and follow a clear verb-noun pattern.",{"category":92,"check":109,"severity":31,"summary":110},"Minimal I/O surface","The input and output sections in the SKILL.md define specific requirements and expected outcomes, suggesting a focused I/O surface.",{"category":112,"check":113,"severity":31,"summary":114},"License","License usability","An MIT License file is present in the repository, indicating a permissive open-source license.",{"category":116,"check":117,"severity":46,"summary":118},"Maintenance","Commit recency","No commit data is available for this check.",{"category":116,"check":120,"severity":46,"summary":121},"Dependency Management","No third-party dependencies are explicitly managed or listed in a way that requires vulnerability checks or update mechanisms.",{"category":82,"check":123,"severity":31,"summary":124},"Secret Management","The configuration examples show placeholders for sensitive information (e.g., connection strings, bearer tokens) and do not echo resolved secrets, suggesting appropriate handling.",{"category":82,"check":126,"severity":31,"summary":127},"Injection","The skill does not appear to load untrusted external data or execute arbitrary code from external sources; it relies on defined configurations and tools.",{"category":82,"check":129,"severity":31,"summary":130},"Transitive Supply-Chain Grenades","The skill appears to bundle all necessary configurations and logic, with no indication of runtime downloads or execution of 
external scripts.",{"category":82,"check":132,"severity":31,"summary":133},"Sandbox Isolation","The skill's operations are confined to data pipeline tasks and do not involve modifying files outside of its defined scope or making OS-specific path assumptions.",{"category":82,"check":135,"severity":31,"summary":136},"Sandbox escape primitives","No detached-process spawns or deny-retry loops are evident in the provided skill description or examples.",{"category":82,"check":138,"severity":31,"summary":139},"Data Exfiltration","The skill focuses on data pipeline operations and does not contain instructions for reading or submitting confidential data to third parties.",{"category":82,"check":141,"severity":31,"summary":142},"Hidden Text Tricks","The bundled content appears free of hidden-steering tricks, control characters, or invisible Unicode sequences.",{"category":144,"check":145,"severity":31,"summary":146},"Hooks","Opaque code execution","The skill description and examples do not show any signs of obfuscated code, base64 payloads, or runtime script fetching.",{"category":60,"check":148,"severity":31,"summary":149},"Structural Assumption","The skill defines its own structures and configurations rather than assuming specific user project layouts.",{"category":151,"check":152,"severity":46,"summary":153},"Trust","Issues Attention","No issue data available for evaluation.",{"category":155,"check":156,"severity":31,"summary":157},"Versioning","Release Management","A version number ('1.0.0') is clearly declared in the SKILL.md frontmatter.",{"category":56,"check":159,"severity":103,"summary":160},"Validation","The skill details data quality checks (null, range, uniqueness, referential integrity, freshness), suggesting validation of data, but does not explicitly mention schema validation for input arguments.",{"category":82,"check":162,"severity":31,"summary":163},"Unguarded Destructive Operations","While ETL processes can be destructive, the skill focuses on design and 
automation, and the provided examples do not show direct destructive operations without configuration or confirmation.",{"category":56,"check":165,"severity":31,"summary":166},"Error Handling","The 'Error Handling' section provides detailed configurations for retries and actions on failure, ensuring errors are caught and reported meaningfully.",{"category":56,"check":168,"severity":31,"summary":169},"Logging","The 'Monitoring & Alerting' section details metrics and alerts, implying logging of pipeline status and errors for auditing.",{"category":171,"check":172,"severity":31,"summary":173},"Compliance","GDPR","The skill operates on data pipelines and does not inherently process personal data without sanitization; specific data handling would depend on user configuration.",{"category":171,"check":175,"severity":31,"summary":176},"Target market","The skill is a general-purpose ETL pipeline tool and does not exhibit any regional or jurisdictional specific logic, thus its target market is global.",{"category":60,"check":178,"severity":31,"summary":179},"Runtime stability","The skill relies on standard Python and SQL capabilities and MCP tools, which are generally stable across common Claude environments.",1778053311848,"This skill provides detailed configurations for connecting to various data sources (Postgres, MySQL, MongoDB, APIs), applying common transformations (cleaning, standardizing, aggregating, joining), and loading data into target warehouses (BigQuery, Snowflake). It also includes pipeline orchestration, data quality checks, monitoring, and alerting configurations.","2.0.0","3.4.0","This skill is exceptionally well-documented and robust, covering all aspects of ETL pipeline design and automation. It passes all critical checks and demonstrates strong adherence to best practices in scope, security, and documentation. 
The only minor point is the lack of explicit default parameter documentation, which is a low-severity informational finding.",95,"A comprehensive and production-ready skill for designing and automating Extract, Transform, Load (ETL) data pipelines.",[15,16,17,18,19,20,21],"global","verified",{"codeQuality":191,"collectedAt":192,"documentation":193,"maintenance":195,"security":196,"testCoverage":198},{},1778053297298,{"descriptionLength":194,"readmeSize":8},94,{},{"hasNpmPackage":197,"smitheryVerified":197},false,{"hasCi":197,"hasTests":197},{"updatedAt":200},1778053561145,{"githubOwner":202,"githubRepo":203,"locale":25,"slug":204,"type":205},"claude-office-skills","skills","etl-pipeline","skill",true,null,{"extract":209,"llm":212},{"commitSha":210,"license":211},"9c4c7d5cd2813a8936bf2c9fdb174ea883b85a11","MIT",{"promptVersionExtension":182,"promptVersionScoring":183,"score":185,"targetMarket":188,"tier":189},{"repoId":214},"kd7fw7xbj58qc2z8whrrjptbed8659db",{"_creationTime":216,"_id":214,"identity":217,"providers":219,"workflow":230},1777995558409.8474,{"githubOwner":202,"githubRepo":203,"sourceUrl":218},"https://github.com/claude-office-skills/skills",{"discover":220,"github":223},{"sources":221},[222],"skills-sh",{"closedIssues90d":8,"forks":224,"license":211,"openIssues90d":225,"pushedAt":226,"readmeSize":227,"stars":228,"topics":229},27,2,1769868236000,29630,98,[],{"discoverAt":231,"extractAt":232,"githubAt":232,"updatedAt":232},1777995558409,1778053155657,{"anyEnrichmentAt":234,"extractAt":235,"githubAt":234,"llmAt":200,"updatedAt":200},1778053151766,1778053148350,[],[238,259,278,305,328,356],{"_creationTime":239,"_id":240,"community":241,"display":242,"identity":252,"providers":254,"relations":257,"workflow":258},1778053148350.4817,"k1799ke3mvvmb9chq1vt0k97k5867cfv",{"reviewCount":8},{"description":243,"installMethods":244,"name":245,"sourceUrl":246,"tags":247},"Build and manage webhook-based integrations for real-time event processing and API 
connections",{},"Webhook Automation","https://github.com/claude-office-skills/skills/tree/HEAD/webhook-automation",[248,249,17,19,250,251,20],"webhook","api","events","engineering",{"githubOwner":202,"githubRepo":203,"locale":25,"slug":253,"type":205},"webhook-automation",{"extract":255,"llm":256},{"commitSha":210,"license":211},{"promptVersionExtension":182,"promptVersionScoring":183,"score":228,"targetMarket":188,"tier":189},{"repoId":214},{"anyEnrichmentAt":234,"extractAt":235,"githubAt":234,"llmAt":200,"updatedAt":200},{"_creationTime":260,"_id":261,"community":262,"display":263,"identity":271,"providers":272,"relations":276,"workflow":277},1778053148350.4333,"k173svty262n3dwffcxfdgz5px866pa1",{"reviewCount":8},{"description":264,"name":265,"sourceUrl":266,"tags":267},"Data pipeline and ETL automation - extract, transform, load workflows for data integration and analytics","Data Pipeline","https://github.com/claude-office-skills/skills/tree/HEAD/data-pipeline",[16,15,268,18,19,269,270],"data-integration","n8n","javascript",{"githubOwner":202,"githubRepo":203,"locale":25,"slug":16,"type":205},{"extract":273,"llm":274},{"commitSha":210,"license":211},{"promptVersionExtension":182,"promptVersionScoring":183,"score":275,"targetMarket":188,"tier":189},92,{"repoId":214},{"anyEnrichmentAt":234,"extractAt":235,"githubAt":234,"llmAt":200,"updatedAt":200},{"_creationTime":279,"_id":280,"community":281,"display":282,"identity":291,"providers":295,"relations":299,"workflow":301},1778054781976.5928,"k1779jg9bachbejc4hfmyejtjx867tzc",{"reviewCount":8},{"description":283,"installMethods":284,"name":285,"sourceUrl":286,"tags":287},"AI Native Camp Day 2: building a Context Sync skill. Build your own skill that collects context from multiple external tools into a single sync document. 
Used for \"Day 2\", \"context sync\", \"sync skill\", \"skill building\", and \"information-gathering skill\" requests.",{},"Day 2: Build Your Own Context Sync Skill","https://github.com/ai-native-camp/camp-1/tree/HEAD/.agents/skills/day2-create-context-sync-skill",[19,20,288,289,290],"skill-building","productivity","data-sync",{"githubOwner":292,"githubRepo":293,"locale":25,"slug":294,"type":205},"ai-native-camp","camp-1","day2-create-context-sync-skill",{"extract":296,"llm":298},{"commitSha":297,"license":46},"9ffaf358dc8c88567d8f0450966b5518071da4f0",{"promptVersionExtension":182,"promptVersionScoring":183,"score":228,"targetMarket":188,"tier":189},{"repoId":300},"kd72seepns71xx9ksxrb02bs1n8645k6",{"anyEnrichmentAt":302,"extractAt":303,"githubAt":302,"llmAt":304,"updatedAt":304},1778054782298,1778054781976,1778054817045,{"_creationTime":306,"_id":307,"community":308,"display":309,"identity":321,"providers":323,"relations":326,"workflow":327},1778053148350.4539,"k179tsfdj29k96rtptbbar2tvn866b71",{"reviewCount":8},{"description":310,"installMethods":311,"name":312,"sourceUrl":313,"tags":314},"Intelligent lead assignment and routing - AI-powered scoring, territory mapping, round-robin distribution, and workload balancing",{},"Lead 
Routing","https://github.com/claude-office-skills/skills/tree/HEAD/lead-routing",[315,316,317,318,319,19,20,320],"sales","crm","lead-management","routing","assignment","ai-powered",{"githubOwner":202,"githubRepo":203,"locale":25,"slug":322,"type":205},"lead-routing",{"extract":324,"llm":325},{"commitSha":210,"license":211},{"promptVersionExtension":182,"promptVersionScoring":183,"score":228,"targetMarket":188,"tier":189},{"repoId":214},{"anyEnrichmentAt":234,"extractAt":235,"githubAt":234,"llmAt":200,"updatedAt":200},{"_creationTime":329,"_id":330,"community":331,"display":332,"identity":342,"providers":346,"relations":350,"workflow":352},1778054663200.0632,"k17ewd377wdsx6ays73afgzezx867xrr",{"reviewCount":8},{"description":333,"name":334,"sourceUrl":335,"tags":336},"Designs and builds ETL/ELT data pipelines. Takes data sources, destination, transformation requirements. Generates pipeline code (Python/SQL), scheduling config, error handling, monitoring setup, and data quality checks. Outputs data-pipeline-spec.md + implementation files.","Data Pipeline 
Builder","https://github.com/onewave-ai/claude-skills/tree/HEAD/data-pipeline-builder",[15,337,16,338,339,340,341],"elt","python","sql","airflow","dbt",{"githubOwner":343,"githubRepo":344,"locale":25,"slug":345,"type":205},"onewave-ai","claude-skills","data-pipeline-builder",{"extract":347,"llm":349},{"commitSha":348,"license":211},"eb3d80be32b6cafcf0d5df1c1b8a95df75838271",{"promptVersionExtension":182,"promptVersionScoring":183,"score":185,"targetMarket":188,"tier":189},{"repoId":351},"kd71e43dj0b7ak5e55pyshxp4n864t6p",{"anyEnrichmentAt":353,"extractAt":354,"githubAt":353,"llmAt":355,"updatedAt":355},1778054667983,1778054663200,1778055270278,{"_creationTime":357,"_id":358,"community":359,"display":360,"identity":373,"providers":376,"relations":381,"workflow":383},1778053126504.8447,"k177gyb2pp9y0hpcx23rd2j0gd866t61",{"reviewCount":8},{"description":361,"installMethods":362,"name":363,"sourceUrl":364,"tags":365},"Smart Money analytics on OKX: leaderboard traders, position tracking, trade records, aggregated consensus signals, and signal history. 
Use this skill when the user asks about smart money, leaderboard, top traders, lead traders, trader ranking, trader positions, trader PnL, trader returns, smart money signal, long/short ratio, capital flow, position conviction, entry price distribution, smart money overview, signal history, trader search, who is trading BTC, recommend traders, best traders, top performers.",{},"OKX CEX Smart Money CLI","https://github.com/okx/agent-skills/tree/HEAD/skills/okx-cex-smartmoney",[366,367,368,18,369,370,371,372],"okx","cex","smartmoney","trader","leaderboard","signals","cli",{"githubOwner":366,"githubRepo":374,"locale":25,"slug":375,"type":205},"agent-skills","okx-cex-smartmoney",{"extract":377,"llm":379},{"commitSha":378,"license":211},"2c10950e7d08ff4a6f92e29aa5a72fc1f6982c3b",{"promptVersionExtension":182,"promptVersionScoring":183,"score":380,"targetMarket":188,"tier":189},100,{"repoId":382},"kd762kyfecgcjapqdqxsv1ngw986551x",{"anyEnrichmentAt":384,"extractAt":385,"githubAt":384,"llmAt":386,"updatedAt":386},1778053126894,1778053126504,1778053154881]