Cloud platform for web scraping, browser automation, AI agents, and data for AI. Use 23,000+ ready-made tools, code templates, or order a custom solution.
We’re building the world’s largest platform for web data and tools for AI builders.

Apify was launched by Jan Čurn and Jakub Balada in 2015 from the Y Combinator Fellowship in Mountain View, California. The original idea was to make it easy for developers to build flexible and scalable web crawlers using front-end JavaScript, thanks to then-new headless browser technology. In 2016, the team moved back to the Czech Republic, raised a seed investment, and started building a company around the product. It soon became obvious that customers’ use cases needed more than a simple JavaScript crawler, so we committed to building the most flexible full-stack platform for web scraping and browser automation.

We're helping people get more value from the web, letting them automate mundane tasks and spend their time on things that matter. We strive to keep the web open as a public good and a basic right for everyone, regardless of how you want to use it, as its creators intended.

Trusted by industry leaders all over the world. Ready-made Actors in Apify Store.

In the press:
- Prague startup Apify raises €2.8M for AI data mining
- Apify Powers Web Insights for Businesses, Cuts Cloud Costs by 25% Using AWS
- Sifted 30 ranks the fastest-growing startups across Central Europe

Apify is a proud supporter of various non-profit projects and organizations:
- Bay Area hub for European startups expanding to the US
- Linux Foundation project advancing agentic AI standards
- Price transparency tool exposing fake e-commerce discounts
- Real estate monitoring service built on Apify
- Prague initiative to become a European AI hub
- Non-profit helping women build careers in IT and data
- Represents Czech startups in government dialogue
- Charity that brings happiness to children from orphanages
- Czech podcast and community for data professionals

Resources for presenting the Apify brand consistently and professionally.
We’re looking for smart, talented, and ambitious people to join our growing team. Join us and help people get more value from the web.
Mentions (30d): 0
Reviews: 0
Platforms: 2
Sentiment: 0% (0 positive)
Industry: information technology & services
Employees: 170
Funding stage: Venture (round not specified)
Total funding: $4.5M
npm packages: 20
Pricing found: $596, $3, $500, $0, $5
What is the best way to capture online mentions and signals daily?
I was talking with a friend who works at a startup, and they would like to track decision makers across a specific set of companies and get daily updates: news mentions, comments, or signals from those people across the web and social media, particularly X. What is the best way to achieve this? A custom solution built with Claude Code, leveraging scrapers (e.g. Firecrawl, Apify)? Or would n8n and similar platforms be better? Any personal experiences with this? Thanks! submitted by /u/W0rldIsMy0yster
Apify not getting installed
This error is showing. What do I do? submitted by /u/Distant_aura
using Claude Code for go-to-market, not just code. context engineering patterns that keep sessions productive.
most posts here are about coding with Claude. I use Claude Code for something different: running an entire go-to-market operation. scraping. enrichment. databases. email infrastructure. content across 5 platforms. sharing what works because the patterns apply to any non-coding use case.

the rate limit problem is a context problem

two people on my team use Claude Code full-time. one builds the product. I build the GTM machine around it. neither of us hits rate limits regularly. three things made the difference:

- CLAUDE.md file at the project root. Claude Code reads it automatically every session. project context, file paths, workflow rules. 15 lines. the agent knows what it's working with before you say anything. eliminates the repetitive "here's my project" preamble that burns context every session.
- scope your sessions. I cd into the specific repo and subdirectory before starting. Claude Code reads the local CLAUDE.md and surrounding files. smaller scope = less context consumed = more useful output per session.
- CLI tools instead of MCP servers where possible. MCP tool definitions load into your system prompt and consume tokens whether you call them or not. a CLI tool takes zero context. Claude Code just runs bash commands. Apify, Supabase, and gcloud all have CLIs. I went from 15 MCP servers to 3.

subagents for the heavy lifting

anything that involves reading a lot of files or exploring a codebase goes to a subagent. the subagent burns through its own context window and reports back a summary. the main session stays clean and focused. batch operations, research, file analysis: all subagents. the main session coordinates and directs.

what I actually run through Claude Code daily

- Apify CLI to scrape competitor follower lists. 10K followers for about $5. cross-reference multiple scrapes to find companies evaluating solutions in your space.
- Python scripts calling the Apollo API for enrichment. 0-credit endpoints for company data and job-change detection. 27K contacts processed with resumable caching.
- Supabase CLI for database operations. push scraped and enriched data. query in natural language through Claude Code.
- Google Sheets sync so non-technical teammates see a spreadsheet, not a terminal.
- content drafting with voice DNA files loaded as context. anti-slop rules catch AI-sounding patterns before publishing.
- 12 email domains managed through Azure Communication Services. warm-up cron jobs running automatically.

all from terminal sessions on a Mac Mini. Claude Code reads the project structure, knows the schemas, knows the voice rules, and executes. I direct.

what doesn't work

- loading every MCP integration you can find. your sessions will crawl.
- long exploratory sessions without subagents. context fills up. output quality drops. the session becomes useless after 30 minutes of heavy file reading.
- generic prompts at the home directory level. "help me with my business" gives you generic output. "cd into this repo, read the CLAUDE.md, and run the enrichment script on this CSV" gives you results.
- skills bloat. 40 custom slash commands means 40 tool definitions in context. most of them you'll never use in a given session. keep it lean. the skills that matter are the ones you actually use weekly.

open sourced the patterns

github.com/shawnla90/gtm-coding-agent

10 chapters. context engineering. token efficiency. CLI vs MCP vs API decision framework. local-first GTM infrastructure. terminal multiplexing. working Apify and Apollo scripts with docs. MIT licensed. built for GTM use cases but the context engineering and session management patterns apply to any Claude Code workflow.

submitted by /u/Shawntenam
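The "resumable caching" pattern mentioned for the enrichment scripts can be sketched in a few lines — a minimal illustration, not the author's actual code; `enrich_contact`, the cache filename, and the payload shape are all hypothetical stand-ins for a real API call:

```python
import json
from pathlib import Path

CACHE_FILE = Path("enrichment_cache.json")

def load_cache() -> dict:
    # Reload previously enriched contacts so an interrupted run can resume.
    if CACHE_FILE.exists():
        return json.loads(CACHE_FILE.read_text())
    return {}

def enrich_contact(email: str) -> dict:
    # Placeholder for a real enrichment API call (e.g. Apollo).
    return {"email": email, "enriched": True}

def enrich_all(emails: list[str]) -> dict:
    cache = load_cache()
    for email in emails:
        if email in cache:
            continue  # already processed on a previous run, skip the API call
        cache[email] = enrich_contact(email)
        # Persist after every contact so progress survives a crash.
        CACHE_FILE.write_text(json.dumps(cache))
    return cache
```

Because the cache is written after every contact, rerunning the script after a failure only pays for the contacts that weren't already processed.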
I built a skill that gives Claude Code access to every major social platform: X, Reddit, LinkedIn, TikTok, Facebook, Amazon
Was tired of my agent not being able to pull real data from social platforms. Every time I needed tweets, Reddit posts, or LinkedIn profiles, I'd either scrape manually or stitch together 5 different APIs with different auth flows.

So I built Monid, a CLI + skill that lets your agent discover data endpoints, inspect schemas, and pull structured data from platforms like X, Reddit, LinkedIn, TikTok, Facebook, and Amazon.

How it works with Claude Code

Just tell Claude Code: "Install the Monid skill from https://monid.ai/SKILL.md"

Then your agent can:

```bash
# Find endpoints for what you need
monid discover -q "twitter posts"

# Check the schema
monid inspect -p apify -e /apidojo/tweet-scraper

# Run it
monid run -p apify -e /apidojo/tweet-scraper \
  -i '{"searchTerms":["AI agents"],"maxItems":50}'
```

The agent handles the full flow: discover → inspect → run → poll for results.

What's supported
- X/Twitter (posts, profiles, search)
- Reddit (posts, comments, subreddits)
- LinkedIn (profiles, company pages)
- TikTok (videos, profiles, hashtags)
- Facebook (pages, posts)
- Amazon (products, reviews)
- More being added

Would love feedback from anyone who tries it. What platforms or data sources would be most useful for your workflows?

submitted by /u/Shot_Fudge_6195
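The discover → inspect → run → poll flow the post describes can be sketched generically — a minimal illustration with stubbed helpers, not Monid's actual API; `start_run` and `get_status` are hypothetical stand-ins for real HTTP calls to a scraping platform:

```python
import time

def start_run(actor: str, payload: dict) -> str:
    # Stub: a real client would POST the input payload and get back a run ID.
    return "run-123"

def get_status(run_id: str, _state={"polls": 0}) -> dict:
    # Stub: pretend the remote run finishes after two status checks.
    _state["polls"] += 1
    if _state["polls"] >= 2:
        return {"status": "SUCCEEDED", "items": [{"text": "example tweet"}]}
    return {"status": "RUNNING"}

def run_and_poll(actor: str, payload: dict, interval: float = 0.01) -> list:
    # Kick off the run, then poll until it reaches a terminal state.
    run_id = start_run(actor, payload)
    while True:
        result = get_status(run_id)
        if result["status"] == "SUCCEEDED":
            return result["items"]
        if result["status"] == "FAILED":
            raise RuntimeError(f"run {run_id} failed")
        time.sleep(interval)  # back off before the next status check
```

A real implementation would also want a timeout or maximum poll count so a stuck run doesn't loop forever.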
Cowork scheduled tasks not accessing MCP connectors - is this a known limitation?
I've been trying to automate a weekly job scraping task using Cowork's scheduled tasks feature, with Apify connected via MCP to do the actual scraping. I'm hitting a wall and can't tell if this is a bug, a limitation, or something I'm configuring wrong.

What works: Apify is connected and works fine in interactive Cowork tasks. If I start a manual task and ask it to use Apify, it calls the Actor, retrieves results, and writes them to a file without issues. No problems there.

What doesn't work: when the same instructions run as a scheduled task, Cowork searches for connectors and finds none. The log literally shows "0 connectors / No connectors found," so it falls back to web search instead. Apify is never called and no credits are used.

What I've already tried:
- All Apify tool permissions are set to Always Allow
- Deleted and recreated the scheduled task
- Ran the scheduled task manually using Run Now: no approval prompts appear, it just skips Apify entirely
- Checked the Anthropic support docs, which say connectors should be available in scheduled tasks

The support doc (https://support.claude.com/en/articles/13854387-schedule-recurring-tasks-in-cowork) says scheduled tasks have access to the same capabilities as regular Cowork tasks, including connected tools. That doesn't appear to be the case in practice.

Has anyone got MCP connectors working in a scheduled task? Is there a setup step I'm missing, or is this a known issue?

submitted by /u/twiddle1977
The developer settings on Claude Desktop won't open
I'm trying to edit the config in Claude Desktop so I can add a few Apify Actors, but every time I try to open the developer config file this pops up. What do I do?? submitted by /u/Cultural-Fondant-281
Made some MCP tools for e-commerce research, figured this crowd might find them useful
I've been using Claude heavily for e-commerce research and kept running into the same problem: getting it to pull real competitive data meant either copy-pasting manually or writing custom code every time. I probably wasted 10 hours before realizing I was an idiot and could just make something to skip that step lol.

So I built three MCP servers and put them on Apify so Claude can just call them directly.

The Shopify one lets Claude analyze any public store without needing an API key. You can ask it things like "what apps is Gymshark running" or "show me Allbirds' full product catalog with pricing" and it just works.

The Amazon one does product research with a scoring system I built that weights demand signals, competition level, price health, and BSR rank. So instead of getting a raw list of results, you get each product scored on how good an opportunity it actually is.

The Google Maps one finds local businesses by industry and location and scores them as sales leads. It also generates an outreach hint for each one based on what data signals drove the score, like "no website, offer web design" or "low rating, offer reputation management."

All three are live now:
• https://apify.com/rothy/shopify-intel-mcp
• https://apify.com/rothy/amazon-intel-mcp
• https://apify.com/rothy/gmaps-intel-mcp

Would be curious if anyone has ideas for other data sources that would be useful to add.

submitted by /u/Rothy12
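The weighted-scoring idea behind the Amazon tool can be sketched as follows — a minimal illustration only; the weights, field names, and normalization are made up for the example and are not the actual formula the server uses:

```python
def opportunity_score(product: dict) -> float:
    """Combine normalized 0-1 signals into a single 0-100 opportunity score.

    The weights below are illustrative, not the real ones used by the
    Amazon MCP server described above.
    """
    weights = {
        "demand": 0.35,        # e.g. estimated monthly sales, scaled to 0-1
        "competition": 0.25,   # 1 - saturation (fewer strong sellers = higher)
        "price_health": 0.20,  # margin room after fees, scaled to 0-1
        "bsr": 0.20,           # inverted best-sellers-rank percentile
    }
    score = sum(weights[k] * product[k] for k in weights)
    return round(score * 100, 1)

# Example: a product strong on demand, middling on everything else.
print(opportunity_score(
    {"demand": 0.9, "competition": 0.5, "price_health": 0.6, "bsr": 0.4}
))  # → 64.0
```

The benefit of collapsing the signals into one number is exactly what the post describes: results arrive pre-ranked by opportunity instead of as a raw list.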
Solo branding agency lead gen?
I’m looking to use Claude as a lead gen tool. Currently I have a Cowork task that is supposed to use Apify to scrape Google Maps and give me a list of 10 businesses with no website and either an email address (using Vibe Prospector) or a social media handle I can contact, but it works maybe half the time. Either Claude tells me Apify isn’t connected, or all the businesses it shows me have websites. Has anyone had luck with my specific use case (website/branding)? What did you do? I’m trying to lower the cost of entry, as I have zero client pipeline and am reserving funds. submitted by /u/ianfgraphics
Repository Audit Available
Deep analysis of apify/apify-sdk-python — architecture, costs, security, dependencies & more
Yes, Apify offers a free tier.
Key features include:
- TikTok Scraper
- Google Maps Scraper
- Instagram Scraper
- Website Content Crawler
- Amazon Scraper
- Facebook Posts Scraper
- Marketplace of 23,000+ Actors
- Build and deploy your own
Based on 13 social mentions analyzed, 0% of sentiment is positive, 100% neutral, and 0% negative.