Playwright Scraper
Leverages Playwright and BeautifulSoup to enable robust web scraping and content extraction, converting scraped pages to Markdown.
Tools
scrape_to_markdown
Scrape a URL and convert the content to markdown
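The listing doesn't include the server's source, but the tool's behavior is straightforward to sketch. Below is a minimal, hypothetical implementation assuming the FastMCP Python framework (mentioned in the hosted-connection option), Playwright for rendering, BeautifulSoup for cleanup, and the markdownify package for HTML-to-Markdown conversion; the actual server may differ.

```python
# Hypothetical sketch of the scrape_to_markdown tool -- not the server's
# actual source. Assumes: fastmcp, playwright, beautifulsoup4, markdownify.
from bs4 import BeautifulSoup
from fastmcp import FastMCP
from markdownify import markdownify
from playwright.sync_api import sync_playwright

mcp = FastMCP("Playwright Scraper")

@mcp.tool()
def scrape_to_markdown(url: str) -> str:
    """Scrape a URL and convert the content to markdown."""
    # Render the page with headless Chromium so JS-driven content loads.
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        html = page.content()
        browser.close()

    # Strip non-content tags before conversion.
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()

    # Convert the cleaned HTML to Markdown (ATX-style headings).
    return markdownify(str(soup), heading_style="ATX")

if __name__ == "__main__":
    mcp.run()
```

Rendering with Playwright rather than fetching raw HTML is what lets the tool handle JavaScript-heavy pages; BeautifulSoup then removes script and style noise so the Markdown output stays clean.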
Quick Start
Choose a connection type: a FastMCP hosted connection, which requires signing in to FastMCP, or a local installation in an MCP client such as Gemini CLI or Claude Code. If the server defines environment variables, provide values for them during setup. Once installation starts, the MCP server should open in your chosen client; if it doesn't open automatically, check that the client application is installed.
Installation Steps:
Make sure your client is installed: visit the Gemini CLI documentation or the Claude Code documentation for installation instructions.
Copy and run the client-specific install command in your terminal.
Configuration
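As a quick check that the server is reachable after configuration, you can call the tool directly with the FastMCP Python client. The endpoint URL below is a hypothetical placeholder; substitute your hosted or local server address.

```python
# Hypothetical smoke test -- the endpoint URL is a placeholder, not the
# server's real address.
import asyncio

from fastmcp import Client

async def main():
    # Connect over HTTP; Client infers the transport from the URL.
    async with Client("http://localhost:8000/mcp") as client:
        result = await client.call_tool(
            "scrape_to_markdown", {"url": "https://example.com"}
        )
        print(result)

asyncio.run(main())
```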
More for Web Scraping
Selenium WebDriver
Enables browser automation through Selenium WebDriver with support for Chrome, Firefox, and Edge browsers, providing navigation, element interaction, form handling, screenshot capture, JavaScript execution, and advanced actions for automated testing and web scraping tasks.
Similar MCP Servers
Playwright Browser Automation
Experience fast, deterministic browser automation with Playwright MCP, a Model Context Protocol server that enables language models to interact with web pages using structured accessibility snapshots instead of images. It provides LLM-friendly automation without relying on vision models, enhancing reliability and speed. Playwright MCP supports diverse browser interactions such as clicking, typing, navigation, file uploads, and tab management. It offers modes optimized for accessibility snapshots or visual screenshots and supports persistent or isolated user profiles. Configurable with rich options, it empowers developers and AI agents to automate complex web tasks efficiently and precisely.
Read Website Fast
Extracts web content and converts it to clean Markdown format using Mozilla Readability for intelligent article detection, with disk-based caching, robots.txt compliance, and concurrent crawling capabilities for fast content processing workflows.