windmill-labs / write-script-duckdb
Install for your project team
Run this command in your project directory to install the skill for your entire team:
```shell
mkdir -p .claude/skills/write-script-duckdb && curl -L -o skill.zip "https://fastmcp.me/Skills/Download/1544" && unzip -o skill.zip -d .claude/skills/write-script-duckdb && rm skill.zip
```
Project Skills
This skill will be saved in .claude/skills/write-script-duckdb/ and checked into git. All team members will have access to it automatically.
Important: review the skill's instructions to verify it before use.
MUST use when writing DuckDB queries.
Skill Content
---
name: write-script-duckdb
description: MUST use when writing DuckDB queries.
---
## CLI Commands
Place scripts in a folder. After writing a script, run:
- `wmill script generate-metadata` - Generate .script.yaml and .lock files
- `wmill sync push` - Deploy to Windmill
Use `wmill resource-type list --schema` to discover available resource types.
# DuckDB
Arguments are declared in comments and referenced with `$name` syntax:
```sql
-- $name (text) = default
-- $age (integer)
SELECT * FROM users WHERE name = $name AND age > $age;
```
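Putting this together, a script can mix a defaulted argument with a required one; a minimal sketch (table and column names are illustrative):

```sql
-- $min_age (integer) = 18
-- $city (text)
SELECT id, name
FROM users
WHERE age >= $min_age AND city = $city;
```

Arguments with a `= default` value are optional at call time; the others must be supplied.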
## Ducklake Integration
Attach Ducklake for data lake operations:
```sql
-- Main ducklake
ATTACH 'ducklake' AS dl;
-- Named ducklake
ATTACH 'ducklake://my_lake' AS dl;
-- Then query
SELECT * FROM dl.schema.table;
```
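Writes go through the same attachment using standard DuckDB DDL/DML. A sketch, assuming the lake is writable and using hypothetical schema and table names:

```sql
ATTACH 'ducklake' AS dl;
-- Create a lake table from query results (names are placeholders)
CREATE TABLE IF NOT EXISTS dl.main.daily_totals AS
SELECT order_date, SUM(amount) AS total
FROM dl.main.orders
GROUP BY order_date;
```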
## External Database Connections
Connect to external databases using resources:
```sql
ATTACH '$res:path/to/resource' AS db (TYPE postgres);
SELECT * FROM db.schema.table;
```
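An attached database can be queried together with script arguments. For example, assuming a Postgres resource at a hypothetical path `f/crm/prod_postgres` and placeholder table names:

```sql
-- $customer_id (integer)
ATTACH '$res:f/crm/prod_postgres' AS crm (TYPE postgres);
SELECT o.id, o.total
FROM crm.public.orders AS o
WHERE o.customer_id = $customer_id;
```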
## S3 File Operations
Read files from S3 storage:
```sql
-- Default storage
SELECT * FROM read_csv('s3:///path/to/file.csv');
-- Named storage
SELECT * FROM read_csv('s3://storage_name/path/to/file.csv');
-- Parquet files
SELECT * FROM read_parquet('s3:///path/to/file.parquet');
-- JSON files
SELECT * FROM read_json('s3:///path/to/file.json');
```
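Results can also be written back with DuckDB's `COPY ... TO` against the same S3 paths, assuming the storage is writable; the input and output paths below are illustrative:

```sql
-- Convert a CSV in default storage to Parquet
COPY (
  SELECT * FROM read_csv('s3:///input/data.csv')
) TO 's3:///output/data.parquet' (FORMAT parquet);
```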