Apache Iceberg
Provides direct access to Apache Iceberg tables stored in AWS, enabling exploration of catalogs, schemas, and tables.
Create or identify an AWS identity with programmatic access
- In the AWS Console > IAM, either create a new IAM user (Programmatic access) or identify an existing role/profile that has read access to your Glue catalog and underlying object storage (S3).
- Recommended minimum permissions: Glue read (e.g., glue:GetDatabases, glue:GetDatabase, glue:GetTables, glue:GetTable) and S3 read/list (e.g., s3:ListBucket, s3:GetObject). You can attach managed policies such as AmazonS3ReadOnlyAccess and an appropriate Glue read policy.
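As a sketch, the recommended minimum permissions above could be granted with an inline policy like the following (the bucket name is a placeholder for your own warehouse bucket; tighten the Glue `Resource` to specific catalogs/databases if your setup allows):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GlueRead",
      "Effect": "Allow",
      "Action": ["glue:GetDatabases", "glue:GetDatabase", "glue:GetTables", "glue:GetTable"],
      "Resource": "*"
    },
    {
      "Sid": "S3Read",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::your-iceberg-bucket",
        "arn:aws:s3:::your-iceberg-bucket/*"
      ]
    }
  ]
}
```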
Obtain AWS credentials (if using an access key)
- For an IAM user: create an access key and copy the Access Key ID and Secret Access Key (you will only see the secret once).
- If using an assumed role / SSO, make sure you have a named profile configured that can assume the role (see next step).
Install & configure the AWS CLI (if not already)
- Install AWS CLI per AWS docs.
- Configure a named profile for the credentials (replace <profile-name>, and enter the Access Key ID / Secret when prompted): aws configure --profile <profile-name>
- When prompted for region, enter your preferred region (or leave it blank; the MCP defaults to us-east-1 if ICEBERG_MCP_REGION is not set).
(Optional) Configure an SSO/assume-role profile
- If you use AWS SSO or a role-assumption flow, create a named profile in ~/.aws/config that performs the SSO login or sets credential_process/role_arn, so that the CLI profile name resolves to usable credentials.
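For illustration, an SSO-backed profile in ~/.aws/config might look like this (every value below is a placeholder for your own SSO portal, account, and role):

```
[profile my-iceberg-profile]
sso_start_url = https://my-org.awsapps.com/start
sso_region = us-east-1
sso_account_id = 123456789012
sso_role_name = IcebergReadOnly
region = us-west-2
```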
Verify the profile can access the Glue catalog / S3
- Example Glue check (replace <profile-name> and <region>): aws glue get-databases --profile <profile-name> --region <region>
- Or verify S3 access: aws s3 ls --profile <profile-name> --region <region>
Open the FastMCP connection interface (use the “Install Now” button)
- Click the repository’s “Install Now” button to open the FastMCP connection/installation UI.
Fill the environment variables in the FastMCP connection interface
- Add an env var named ICEBERG_MCP_PROFILE with the value set to the AWS CLI profile name you configured earlier (e.g., my-iceberg-profile).
- (Optional) Add ICEBERG_MCP_REGION with your AWS region (e.g., us-west-2). If you omit ICEBERG_MCP_REGION, the server will default to us-east-1.
- Notes:
- To use the default AWS profile, either set ICEBERG_MCP_PROFILE to default or leave ICEBERG_MCP_PROFILE blank (the MCP will use the default role/profile if not specified).
- If your profile uses SSO/assume-role, ensure the environment where FastMCP runs can obtain those credentials (use a profile that resolves without interactive prompts).
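If you run the server somewhere you set variables yourself rather than through the FastMCP UI, the same two settings can be supplied as environment variables (the profile name and region below are illustrative):

```shell
# Illustrative values - substitute your own profile name and region.
export ICEBERG_MCP_PROFILE=my-iceberg-profile
export ICEBERG_MCP_REGION=us-west-2   # if omitted, the server defaults to us-east-1
```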
Save / Install the MCP server
- Complete the Install Now flow and deploy the MCP server with those environment variables populated.
Test the MCP connection from your MCP client
- From your MCP client (Claude/Cursor/etc.), run a simple tool command like:
- "List all namespaces in my catalog"
- Confirm the server returns namespace/table information. If authentication fails, re-check the profile name, region, and that the credentials/role have Glue + S3 read permissions.
Troubleshooting pointers
- If you see permission errors, ensure the IAM identity has Glue and S3 read/list permissions.
- If the MCP cannot find the profile, verify the profile exists in the AWS credentials/config files on the machine/environment where FastMCP is running, or use an environment-based credential mechanism that your deployment supports (e.g., instance profile, secret manager).
- If you need a different credential type (instance role, secrets), populate ICEBERG_MCP_PROFILE accordingly or leave blank to use the host’s default credentials.
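A quick way to check whether a profile resolves to usable credentials at all (assumes the AWS CLI is installed; replace <profile-name> with your profile):

```shell
# Prints the account ID and ARN the profile resolves to; credential or
# profile-lookup problems surface here before the MCP server is involved.
aws sts get-caller-identity --profile <profile-name>
```

If this fails with a "profile could not be found" error, the profile is missing from ~/.aws/credentials or ~/.aws/config on the machine where the server runs.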