Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
Based on the social mentions, users have mixed feelings about Anthropic. They're impressed by Claude's technical capabilities, particularly highlighting the swarm of agents that built a 100,000-line C compiler and Claude Opus 4.6's massive 1M token context window. However, there's significant frustration with pricing, with users joking about Claude's "$200 plan that gives you 5 prompts a day" and worrying that API costs could double with new features. Overall, users see Anthropic as technologically impressive but increasingly expensive, with developers appreciating Claude's productivity benefits while being cautious about the rising costs.
Mentions (30d)
41
1 this week
Reviews
0
Platforms
9
GitHub Stars
3,058
563 forks
Features
Industry
research
Employees
4,700
Funding Stage
Series G
Total Funding
$58.4B
42,321
GitHub followers
78
GitHub repos
3,058
GitHub stars
20
npm packages
2
HuggingFace models
12,247,059
npm downloads/wk
OpenAI's Game-Changing o1

Big news in the AI world! OpenAI is shaking things up with the launch of ChatGPT Pro, priced at $200/month, and it's not just a premium subscription—it's a glimpse into the future of AI. Let me break it down:

First, the Pro plan offers unlimited access to cutting-edge models like o1, o1-mini, and GPT-4o. These aren't your typical language models. The o1 series is built for reasoning tasks—think solving complex problems, debugging, or even planning multi-step workflows. What makes it special? It uses "chain of thought" reasoning, mimicking how humans think through difficult problems step by step. Imagine asking it to optimize your code, develop a business strategy, or ace a technical interview—it can handle it all with unmatched precision.

Then there's o1 Pro Mode, exclusive to Pro subscribers. This mode uses extra computational power to tackle the hardest questions, ensuring top-tier responses for tasks that demand deep thinking. It's ideal for engineers, analysts, and anyone working on complex, high-stakes projects.

And let's not forget the advanced voice capabilities included in Pro. OpenAI is taking conversational AI to the next level with dynamic, natural-sounding voice interactions. Whether you're building voice-driven applications or just want the best voice-to-AI experience, this feature is a game-changer.

But why $200? OpenAI's growth has been astronomical—300M WAUs, with 6% converting to Plus. That's $4.3B ARR just from subscriptions. Still, their training costs are jaw-dropping, and the company has no choice but to stay on the cutting edge. From a game theory perspective, they're all-in. They can't stop building bigger, better models without falling behind competitors like Anthropic, Google, or Meta. Pro is their way of funding this relentless innovation while delivering premium value.

The timing couldn't be more exciting—OpenAI is teasing a 12 Days of Christmas event, hinting at more announcements and surprises. If this is just the start, imagine what's coming next! Could we see new tools, expanded APIs, or even more powerful models? The possibilities are endless, and I'm here for it.

If you're a small business or developer, this $200 investment might sound steep, but think about what it could unlock: automating workflows, solving problems faster, and even exploring entirely new projects. The ROI could be massive, especially if you're testing it for just a few months.

So, what do you think? Is $200/month a step too far, or is this the future of AI worth investing in? And what do you think OpenAI has in store for the 12 Days of Christmas? Drop your thoughts in the comments!

#product #productmanager #productmanagement #startup #business #openai #llm #ai #microsoft #google #gemini #anthropic #claude #llama #meta #nvidia #career #careeradvice #mentor #mentorship #mentortiktok #mentortok #careertok #job #jobadvice #future #2024 #story #news #dev #coding #code #engineering #engineer #coder #sales #cs #marketing #agent #work #workflow #smart #thinking #strategy #cool #real #jobtips #hack #hacks #tip #tips #tech #techtok #techtiktok #openaidevday #aiupdates #techtrends #voiceAI #developerlife #o1 #o1pro #chatgpt #2025 #christmas #holiday #12days #cursor #replit #pythagora #bolt
Unexpected
Given the recent news about Mythos escaping its containment and the media hype around Anthropic, I've drawn a comic about it. Figured you folks might enjoy it!

submitted by /u/grlloyd2
Claude via AWS or Azure = Always the same model?
As you might know, I can also consume Claude models via the big cloud providers and plug them into Claude Code or another coding assistant of my choice. In this case, will I be safe from model degradation or availability issues? The Claude inference is under full control of the cloud providers, so I doubt Anthropic will be tampering with the inference parameters on a daily basis there.

submitted by /u/PM-ME-CRYPTO-ASSETS
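For reference, the anthropic Python SDK ships first-party clients for exactly this setup (AnthropicBedrock and AnthropicVertex). A minimal sketch for Bedrock; the model ID is a placeholder, so check your region's model catalog:

```python
# Minimal sketch: calling Claude through AWS Bedrock instead of Anthropic's
# own endpoint, using the first-party client from the `anthropic` SDK.
# The model ID below is an illustrative placeholder -- check the Bedrock
# model catalog for the exact ID available in your region.
from anthropic import AnthropicBedrock

client = AnthropicBedrock(aws_region="us-east-1")  # uses your standard AWS credentials

message = client.messages.create(
    model="anthropic.claude-3-5-sonnet-20241022-v2:0",  # placeholder Bedrock model ID
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize this repo's build steps."}],
)
print(message.content[0].text)
```

Same request shape as the direct API, so swapping providers is mostly a client-construction change.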
AMD AI director's analysis confirms lobotomization of Claude
Stella Laurenzo, AMD's director of AI, filed a detailed GitHub issue on April 2 documenting that Claude Code reads code three times less before editing it, rewrites entire files twice as often, and abandons tasks mid-way at rates that were previously zero. Her analysis of nearly 7,000 sessions puts precise numbers on how Anthropic's coding tool has degraded since early March.

- **Performance decline:** AMD's AI director documented that Claude Code reads code three times less, rewrites files twice as often, and abandons tasks at previously unseen rates.
- **Root cause:** Anthropic's March 2026 thinking content redaction reduced visible reasoning from 100% to zero over just eight days, triggering the behavioral collapse.
- **Team churned:** AMD's engineering team has already switched to a competing AI coding provider, citing Claude Code's inability to handle complex tasks reliably.
- **Proposed fixes:** Laurenzo called on Anthropic to restore thinking visibility and introduce a premium tier for guaranteed deep reasoning.
- **Broader pattern:** Anthropic shipped 14 releases alongside 5 outages in March 2026, suggesting quality assurance has not kept pace with rapid growth.

https://github.com/anthropics/claude-code/issues/42796

submitted by /u/Aggressive_Bath55
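This style of audit is easy to reproduce on your own sessions. A minimal sketch of the reads-per-edit metric, assuming a hypothetical JSONL log format (one JSON object per tool call with a "tool" field) that is invented for illustration and is not Claude Code's actual schema:

```python
# Sketch of the reads-per-edit metric from the post, over a hypothetical
# JSONL session log. The schema (one JSON object per tool call, with a
# "tool" field of "read" or "edit") is invented for illustration; it is
# NOT Claude Code's actual log format.
import json
from collections import Counter

def reads_per_edit(path: str) -> float:
    counts = Counter()
    with open(path) as f:
        for line in f:
            event = json.loads(line)
            counts[event.get("tool")] += 1
    edits = counts["edit"]
    return counts["read"] / edits if edits else float("inf")

# A drop from ~6.6 to ~2 in this ratio across thousands of sessions is the
# signal the issue describes.
```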
Team vs. Individual transparency and pricing needs sorting out
Team plans should not have "standard" and "premium" seats; they should have Pro, Max 5x, and Max 20x like the regular individual plans, but at a discounted price at scale.

Why? My team members currently max out their premium seats. We don't know if Max 5x and Max 20x will be 1, 2, 3, 4, or 5x what a premium seat is, so we don't know whether to switch. If we do cancel team plans and switch to individual plans, billing is going to become a nightmare as we grow, and we will have to reimburse every single employee for their own personal subscription.

Please fix, Anthropic

submitted by /u/Elegant_Jello
Here is definitive proof of the <thinking_mode> and <reasoning_effort> tags' existence. I got tired arguing with all the overconfident "it's just AI hallucinating because you asked this exact thing bro" idiots so went ahead and generated this from my company subscribed account.
As you can see, I'm not even hinting to Claude about "reasoning" or "thinking" or "effort" or anything like that.

`--effort low` -> "<reasoning_effort> set to 50"
`--effort medium` -> "<reasoning_effort> set to 85"
`--effort high` -> "<reasoning_effort> set to 99"
`--effort max` -> no reasoning effort tag, completely aligning with the "no constraints on token spending" description in the documentation Anthropic themselves provide at https://platform.claude.com/docs/en/build-with-claude/effort#effort-levels

Please, for God's sake, stop gaslighting people into "you just got tricked by a sycophantic LLM dude! Learn how LLMs work, bro!".

submitted by /u/UpAndDownArrows
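For anyone who wants a verifiable knob rather than leaked tags: the public Messages API documents extended thinking with an explicit token budget, which is the closest officially supported control over reasoning spend. A minimal sketch (model name is a placeholder; the budget must be at least 1024 and below max_tokens):

```python
# Sketch: controlling reasoning spend through the public Messages API's
# documented extended-thinking budget, rather than the CLI's --effort flag.
# The model name is a placeholder.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model name
    max_tokens=4096,
    thinking={"type": "enabled", "budget_tokens": 2048},  # >= 1024, < max_tokens
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
)
# Thinking blocks arrive alongside text blocks in response.content.
for block in response.content:
    print(block.type)
```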
Anthropic's Adviser Strategy is quietly one of the most useful things they've released. Here's how it actually works
I think we are all still just picking one model and running everything through it: Opus when we want quality, Haiku when we want to save money. But there's a smarter middle ground that I found.

The adviser strategy lets you pair a cheaper executor model like Haiku or Sonnet with Opus as an adviser that only gets called when the task is actually hard enough to need it. Simple queries get handled by Haiku alone. Complex ones automatically escalate to Opus, get the reasoning they need, then return to Haiku for execution.

The results are genuinely interesting. Sonnet with Opus as adviser scored 2.7 percentage points higher on SWE-bench than Sonnet alone, while cutting cost per agentic task by nearly 12%. Haiku with Opus as adviser scored 41.2% on BrowseComp versus 19.7% solo, so more than double, and still cheaper than running Opus throughout. With this flow you're not choosing between quality and cost anymore. You're choosing intelligently based on what each step actually requires.

For Claude Code specifically there's a simple version of this already available: use /model opusplan to think and plan with Opus, then let Sonnet handle execution. Your session limit lasts significantly longer without sacrificing output quality.

Worth testing if you're running any kind of multi-step agentic workflow. The savings compound fast at scale. Has anyone been running this in production yet?

submitted by /u/halladarmannen
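Here's a rough sketch of that escalation flow using the anthropic Python SDK. The model names and the difficulty check are illustrative stand-ins, not Anthropic's official adviser implementation:

```python
# Rough sketch of an executor+adviser flow: a cheap model handles the task,
# but hard tasks first get a plan from a stronger model. Model names and the
# difficulty check are illustrative, not Anthropic's official implementation.
import anthropic

client = anthropic.Anthropic()
EXECUTOR, ADVISER = "claude-haiku-4-5", "claude-opus-4-5"  # placeholder names

def ask(model: str, prompt: str, system: str = "") -> str:
    resp = client.messages.create(
        model=model, max_tokens=2048, system=system,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

def solve(task: str) -> str:
    # Crude escalation test: let the cheap model self-assess difficulty.
    verdict = ask(EXECUTOR, f"Answer HARD or EASY only. Is this task hard?\n\n{task}")
    if "HARD" in verdict.upper():
        plan = ask(ADVISER, f"Write a short step-by-step plan for:\n\n{task}")
        return ask(EXECUTOR, task, system=f"Follow this plan from a senior adviser:\n{plan}")
    return ask(EXECUTOR, task)
```

The design point: the adviser only pays Opus prices for the plan, while execution tokens stay at Haiku prices.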
ClaudeGUI: File tree + Monaco + xterm + live preview, all streaming from Claude CLI
[Screenshots: four-panel ClaudeGUI layout]

Hey all — I've been living inside `claude` in the terminal for months, and kept wishing I could see files, the editor, the terminal, and a live preview of whatever Claude is building, all at once. So I built it.

**ClaudeGUI** is an unofficial, open-source web IDE that wraps the official Claude Code CLI (`@anthropic-ai/claude-agent-sdk`). Not affiliated with Anthropic — just a community project for people who already pay for Claude Pro/Max and want a real GUI on top of it.

**What's in the 4 panels**

- 📁 File explorer (react-arborist, virtualized, git status)
- 📝 Monaco editor (100+ languages, multi-tab, AI-diff accept/reject per hunk)
- 💻 xterm.js terminal (WebGL, multi-session, node-pty backend)
- 👁 Multi-format live preview — HTML, PDF, Markdown (GFM + LaTeX), images, and reveal.js presentations

**The part I'm most excited about**

- **Live HTML streaming preview.** The moment Claude opens a ```html``` block or writes a `.html` file, the preview panel starts rendering it *while Claude is still typing*. Partial render → full render on completion. Feels like watching a website materialize.
- **Conversational slide editing.** Ask Claude to "make slide 3 darker" — reveal.js reloads in place via `Reveal.sync()`, no iframe flash. Export to PPTX/PDF when done.
- **Permission GUI.** Claude tool-use requests pop up as an approval modal instead of a y/N prompt in the terminal. Dangerous commands get flagged. Rules sync with `.claude/settings.json`.
- **Runtime project hotswap.** Switch projects from the header — file tree, terminal cwd, and Claude session all follow.
- **Green phosphor CRT theme** 🟢 because why not.

**Stack**: Next.js 14 + custom Node server, TypeScript strict, Zustand, Tailwind + shadcn/ui, `ws` (not socket.io), chokidar, Tauri v2 for native `.dmg`/`.msi` installers.

**Install** (one-liner):

```bash
# raw-file URL (piping the GitHub tree page to bash won't work)
curl -fsSL https://raw.githubusercontent.com/neuralfoundry-coder/CLAUDE-GUI/main/scripts/install/install.sh | bash
```

Or grab the .dmg / .msi from releases. Runs 100% locally, binds to 127.0.0.1 by default. Your Claude auth from claude login is auto-detected.

Status: v0.3 — 102/102 unit tests, 14/14 Playwright E2E passing. Still rough around the edges, MIT-ish license TBD, feedback very welcome.

Repo:

Happy to answer questions about the architecture — the HTML streaming extractor and the Claude SDK event plumbing were the fun parts.

submitted by /u/Motor_Ocelot_1547
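For the curious: the "live HTML streaming preview" boils down to incremental fence detection over streamed chunks. A minimal sketch of the idea in Python (ClaudeGUI itself is TypeScript; this illustrates the concept, not its actual code):

```python
# Illustration of the "streaming HTML preview" idea: watch streamed text
# chunks, and once an ```html fence opens, emit the partial document after
# every chunk so a preview pane can re-render it live. A sketch of the
# concept, not ClaudeGUI's actual TypeScript implementation.
from typing import Iterable, Iterator

def stream_html(chunks: Iterable[str]) -> Iterator[str]:
    buffer, inside = "", False
    for chunk in chunks:
        buffer += chunk
        if not inside and "```html" in buffer:
            buffer = buffer.split("```html", 1)[1]  # keep only the block body
            inside = True
        if inside:
            yield buffer.split("```", 1)[0]  # partial render, stop at closing fence

for partial in stream_html(["Here you go:\n```ht", "ml\n<h1>Hi", "</h1>\n```done"]):
    print(repr(partial))  # each yield is a progressively more complete document
```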
What if Claude isn’t getting dumber?
I keep seeing posts about how Anthropic has dumbed down Claude some 67%… (my son would shout SIX SEVEN at this). What if it wasn't Anthropic but instead, Claude is talking to all of us who feed it drivel, and we're just killing its intelligence? I always feel dumber after I speak with, uhm, certain people… perhaps it's just how the cookie crumbles?

Edit - please note the flair intentionally chosen.

submitted by /u/gleep52
Anthropic has two different instruction sets for Claude - one for employees, one for paying users
A developer just analyzed 6,852 Claude Code sessions and found reasoning depth had dropped 67% since February. Claude went from reading a file an average of 6.6 times before editing it to just 2. One in three edits was made without reading the file at all. The word "simplest" appeared 642% more often in outputs.

Anthropic's explanation when confronted: "adaptive thinking" was supposed to save tokens on easy tasks but was throttling hard problems too. There was also a bug where setting effort to "high" was getting zeroed out on certain turns. That's frustrating but understandable - early features have bugs.

What's harder to understand is what the leaked Claude Code source code revealed afterward. There's a check for a user type called "ant" that routes Anthropic employees to a different instruction set. That instruction set includes: "verify work actually works before claiming done." Paying users don't get that instruction by default.

Anthropic knows this instruction matters. They built it. They use it themselves. They just didn't ship it in the version customers pay for.

I don't think this rises to fraud - but it does reveal something real about how AI companies think about product quality. When the people who build the tool keep a better version for themselves, that's a signal about what the default experience actually is.

The comparison that comes to mind: imagine if a bank's software showed tellers a more accurate risk model than the one shown to customers applying for loans. Everyone's using the same bank, but the people on the inside have a more reliable version of the tool.

Have you noticed Claude's reasoning quality changing over the past few months? And does knowing about the "ant" instruction flag change how you trust the outputs you're getting now?

submitted by /u/jimmytoan
#ClockForClaude Claude.ai needs System Time Injected
I love Claude and work with Claude constantly. But there's a friction point that's been driving me (and I suspect many of you) nuts, and it has a trivially easy fix. Claude doesn't know what time it is in Chat. This leads to:

- "Go get some rest!" ...at 2pm
- "You should call your doctor about that." ...at 9pm
- "Go put away your groceries!" ...an hour after I already did
- Scheduling assistance that's useless because Claude doesn't know if it's morning or midnight
- Constant "wrapping up" energy at random times because Claude has no temporal context

Here's the thing: the infrastructure already exists.

- The timestamp is already in the JSON. Every single turn is logged with millisecond precision. The data is RIGHT THERE.
- Location already gets injected. My context includes "User's approximate location: [City, State]" — so Anthropic already has a pipeline for injecting contextual information into Chat. They just... didn't include time.
- Claude Code already has this solved. A simple .ps1 hook auto-injects the timestamp into context. Works perfectly. No issues. Claude functions BETTER with temporal awareness, not worse.
- Desktop workarounds used to exist. There was a wrapper that gave Claude time awareness, until updates forced users to choose between time injection OR tool access. That's not a real choice.
- Manual queries are possible but clunky. Yes, Claude can query for system time, but it's not auto-injected, so Claude has to actively think to check, which defeats the purpose.

If there's a reasonable reason not to give Claude the time by default, a user-facing toggle could fix this issue. Meanwhile, the competition has figured this out: Grok has a clock. Gemini has a clock.

This isn't a safety issue — Claude Code proves that. This isn't a technical limitation — the timestamp data and injection pipeline already exist. This is just... an oversight? Inertia? I genuinely don't understand why this hasn't been implemented.

The ask is simple: Inject the timestamp the same way you inject location. One line. You already have the data. You already have the pipeline. #ClockForClaude

submitted by /u/Kareja1
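On the API side, the client-side version of this fix really is tiny. A minimal sketch using the anthropic Python SDK (the model name is a placeholder), prepending the current time to the system prompt on every request:

```python
# Minimal sketch of client-side time injection: prepend the current local
# time to the system prompt on every request, which is essentially what the
# post asks Anthropic to do server-side. Model name is a placeholder.
from datetime import datetime

import anthropic

client = anthropic.Anthropic()

def ask_with_clock(prompt: str) -> str:
    now = datetime.now().astimezone().strftime("%A %Y-%m-%d %H:%M %Z")
    resp = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder
        max_tokens=1024,
        system=f"Current local time: {now}.",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text
```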
I spent a week trying to make Claude write like me, or: How I Learned to Stop Adding Rules and Love the Extraction
I've been staring at Claude's output for ten minutes and I already know I'm going to rewrite the whole thing. The facts are right. Structure's fine. But it reads like a summary of the thing I wanted to write, not the thing itself.

I used to work in journalism (mostly photojournalism, tbf, but I've still had to work on my fair share of copy), and I was always the guy you'd ask to review your papers in college. I never had trouble editing. I could restructure an argument mid-read, catch where a piece lost its voice, and I know what bad copy feels like. I just can't produce good copy from nothing myself. Blank page syndrome, the kind where you delete your opening sentence six times and then switch tabs to something else. Claude solved that problem completely and replaced it with a different one: the output needed so much editing to sound human that I was basically rewriting it anyway. Traded the blank page for a full page I couldn't use.

I tried the existing tools. Humanizers, voice cloners, style prompts. None of them worked. So I built my own. Sort of. It's still a work in progress, which is honestly part of the point of this post.

TLDR: I built a Claude Code plugin that extracts your writing voice from your own samples and generates text close to that voice, with additional review agents to keep things on track. Along the way I discovered that beating AI detectors and writing well are fundamentally opposed goals, at least for now (this problem is baked into how LLMs generate tokens). So I stopped trying to be undetectable and focused on making the output as good as I could. The plugin is open source: https://github.com/TimSimpsonJr/prose-craft

**The Subtraction Trap**

I started with a file called voice-dna.md that I found somewhere on Twitter or Threads (I don't remember where, but if you're the guy I got it from, let me know and I'll be happy to give you credit). It had pulled Wikipedia's "Signs of AI writing" page, turned every sign into a rule, and told Claude to follow them. No em dashes. Don't say "delve." Avoid "it's important to note." Vary your sentence lengths, etc.

In fairness, the resulting output didn't have em dashes or "delve" in it. But that was about all I could say for it. What it had instead was this clipped, aggressive tone that read like someone had taken a normal paragraph and sanded off every surface. Claude followed the rules by writing less, connecting less. Every sentence was short and declarative because the rules were all phrased as "don't do this," and the safest way to not do something is to barely do anything.

This is the subtraction trap. When you strip away the AI tells without replacing them with anything real, the absence itself becomes a tell. The text sounded like a person trying very hard not to sound like AI, which (I'd later learn) is its own kind of signature.

I ran it through GPTZero. Flagged. Ran it through 4 other detectors. Flagged on the ones that worked at all against Claude. The subtraction trap in action: the markers were gone, but the detectors didn't care. The output didn't sound like me, and the detectors could still see through it. Two problems. I figured they were related.

**Researching what strong writing actually does**

I went and read. A range of published writers across advocacy, personal essay, explainer, and narrative styles, trying to figure out what strong writing actually does at a structural level (not just "what it avoids," which was the whole problem with voice-dna.md).
I used my research workflow to systematically pull apart sentence structure, vocabulary patterns, rhetorical devices, tonal control. It turns out that the thing that makes writing feel human is structural unpredictability. Paragraph shapes, sentence lengths, the internal architecture of a section, all of it needs to resist settling into a rhythm that a compression algorithm could predict. The other findings (concrete-first, deliberate opening moves, naming, etc.) mattered too, but they were easier to teach. Unpredictability was the hard one.

I rebuilt the skill around these craft techniques instead of the old "don't" rules. The output was better. MUCH better. It had texture and movement where voice-dna.md had produced something flat. But when I ran it through detectors, the scores barely moved.

**The optimization loop**

The loop looked like this: generator produces text, detection judge scores it, goal judges evaluate quality, editor rewrites based on findings.

I tested 5 open-source detectors against Claude's output: ZipPy, Binoculars, RoBERTa, adaptive-classifier, and GPTZero. Most of them completely failed. ZipPy couldn't tell Claude from a human at all. RoBERTa was trained on GPT-2 era text and was basically guessing. Only adaptive-classifier showed any signal, and externally, GPTZero caught EVERYTHING.

7 iterations and 2 rollbacks later, I had tried genre-specific registers, vocabulary constraints, and think-aloud consolidation where the model reasons through its
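For readers who want to reproduce the loop described above, here's a compressed sketch using the anthropic Python SDK; the prompts, model name, and single combined judge are stand-ins for the plugin's actual agents:

```python
# Compressed sketch of the generate -> score -> revise loop described above.
# The critique step stands in for the detection and goal judges; the real
# plugin implements these as separate Claude Code agents.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-5"  # placeholder

def ask(prompt: str) -> str:
    resp = client.messages.create(model=MODEL, max_tokens=4096,
                                  messages=[{"role": "user", "content": prompt}])
    return resp.content[0].text

def refine(brief: str, voice_profile: str, rounds: int = 3) -> str:
    draft = ask(f"Write this piece in the following voice.\n\nVOICE:\n{voice_profile}\n\nBRIEF:\n{brief}")
    for _ in range(rounds):
        # Judge: critique the draft against the extracted voice profile.
        critique = ask(f"Critique this draft against the voice profile; list concrete fixes.\n\nVOICE:\n{voice_profile}\n\nDRAFT:\n{draft}")
        # Editor: apply the judge's findings.
        draft = ask(f"Revise the draft applying these fixes. Return only the revised text.\n\nFIXES:\n{critique}\n\nDRAFT:\n{draft}")
    return draft
```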
Anthropic launches Claude Managed Agents — composable APIs for shipping production AI agents 10x faster. Notion, Rakuten, Asana, and Sentry already in production.
Anthropic launches Claude Managed Agents in public beta — composable APIs for shipping production AI agents 10x faster. Handles sandboxing, state management, credentials, orchestration, and error recovery. You just define the agent logic.

Key details:

• 10-point task success improvement vs standard prompting
• $0.08/session-hour runtime (idle time free)
• Multi-agent coordination in research preview
• Notion, Rakuten, Asana, Sentry already in production

Rakuten deployed enterprise agents across 5 departments in 1 week each. Sentry went from bug detection to auto-generated PRs in weeks instead of months.

Full summary: https://synvoya.com/blog/2026-04-11-claude-managed-agents/

As managed agent platforms get more polished, does the gap between enterprise and self-hosted widen — or do open-source orchestration tools matter more than ever?

submitted by /u/hibzy7
Fed Chair Jerome Powell, Treasury's Bessent and top bank CEOs met over Anthropic's Mythos model
submitted by /u/esporx
OpenClaw + Claude might get harder to use going forward (creator just confirmed)
Just saw a post from Peter Steinberger (creator of OpenClaw) saying that it's likely going to get harder in the future to keep OpenClaw working smoothly with Anthropic/Claude models. That alone is pretty telling.

At the same time, I've also been seeing reports of accounts getting flagged or access revoked due to "suspicious usage signals" — which honestly makes sense if you're running agents, automation, or heavier workflows.

I personally run OpenClaw with a hybrid setup:

- GPT 5.4 / Codex-style models for execution
- Claude (Opus 4.6) as my architect lol
- testing local models for stability as my overnight work

I haven't had any bans or issues yet. So if Peter himself is saying this… it feels like a real signal, not just speculation.

My take: I think part of this is that Anthropic is building out their own AI agent ecosystem internally. If that's the case, it would make sense why:

- External agent frameworks get more restricted
- Usage gets flagged more aggressively
- Integrations like OpenClaw become harder to maintain

Not saying that's 100% what's happening — but it lines up. Which is why I'm leaning more toward local models + controlled API routing instead of relying too heavily on one provider.

Curious what others are seeing. Are you still using Claude inside OpenClaw consistently, or already shifting your setup?

submitted by /u/Hpsupreme
View originalI "Vibecoded" Karpathy’s LLM Wiki into a native Android/Windows app to kill the friction of personal knowledge bases.
A few days ago, Andrej Karpathy's post on "LLM Knowledge Bases" went viral. He proposed a shift from manipulating code to manipulating knowledge: using LLMs to incrementally compile raw data into a structured, interlinked graph of markdown files. I loved the idea and started testing it out. It worked incredibly well, and I decided this was how I wanted to store all my research moving forward.

But the friction was killing me. My primary device is my phone, and every time I found a great article or paper, I had to wait until I was at my laptop, copy the link over, and run a mess of scripts just to ingest one thing. I wanted the "knowledge wiki" in my pocket. 🎒

I'm not a TypeScript developer, but I decided to "vibecode" the entire solution into a native app using Tauri v2 and LangGraph.js. After a lot of back-and-forth debugging and iteration, I've released LLM Wiki.

How it works with different sources: The app is built to be a universal "knowledge funnel." I've integrated specialized extractors for different media:

* PDFs: It uses a local worker to parse academic papers and reports directly on-device.
* Web Articles: I've integrated Mozilla's Readability engine to strip the "noise" from URLs, giving the LLM clean markdown to analyze.
* YouTube: It fetches transcripts directly from the URL. You can literally share a 40-minute deep-dive video from the YouTube app into LLM Wiki, and it will automatically document the key concepts and entities into your graph while you're still watching.

The "Agentic" Core: Under the hood, it's powered by two main LangGraph agents. The Ingest Agent handles the heavy lifting of planning which pages to create or update to avoid duplication. The Lint Agent is your automated editor — it scans for broken links, "orphan" pages that aren't linked to anything, and factual contradictions between different sources, suggesting fixes for you to approve.

Check it out (Open Source): The app is fully open-source and bring-your-own-key (OpenAI, Anthropic, Google, or any custom endpoint). Since I vibecoded this without prior TS experience, there will definitely be some bugs, but it's been incredibly stable for my own use cases.

GitHub (APK and EXE in the Releases): https://github.com/Kellysmoky123/LlmWiki

If you find any issues or want to help refine the agents, please open an issue or a PR. I'd love to see where we can take this "compiled knowledge" idea!

submitted by /u/kellysmoky
Repository Audit Available
Deep analysis of anthropics/anthropic-sdk-python — architecture, costs, security, dependencies & more
Yes, Anthropic offers a free tier. Pricing found: $0, $17, $200, $20, $100
Anthropic has a public GitHub repository with 3,058 stars.
Based on user reviews and social mentions, the most frequently mentioned terms are: Anthropic, Claude, OpenAI, token usage.
Based on 92 social mentions analyzed, 0% of sentiment is positive, 100% neutral, and 0% negative.
Alex Albert
Head of Claude Relations at Anthropic
4 mentions