GitBook is a technical documentation platform that connects your docs, product and users into a self-improving knowledge loop that answers user questions.
The State of Docs 2026 is here.

We're on a mission to redefine how development teams make decisions and centralize knowledge. Rooted in open source, we build with transparency, collaboration and a deep respect for community.

Documentation has been stuck in the past while software development has evolved at breakneck speed. Teams ship faster than ever, but their docs lag behind — becoming outdated, fragmented, and disconnected from the products they're meant to support. Knowledge should move as fast as your product. That's why we've reimagined documentation — where humans and AI work together to create, refine, and share what teams know.

What started as a simple documentation platform has evolved into an intelligent knowledge layer that connects your team, your tools, and your users. We don't think of GitBook as just a tool — it's a platform for building and maintaining knowledge. Documentation that's easier to write, smart enough to maintain itself, and powerful enough to become part of your product experience.

Today, we're used by more than two million people — including teams like Zoom, FedEx, Nvidia, Snyk, and Google. GitBook helps them focus on what they do best: building products that move the world forward, while their documentation evolves alongside them.

We're not here to write corporate commandments. GitBook isn't built on slogans or posters — it's built on the way we work every day: writing things down, questioning assumptions, and shipping improvements that make knowledge easier to trust. We believe teams do their best work when their thinking is clear, shared, and easy to find. That's why we focus on simplicity, thoughtful craft, and meaningful transparency. Not as branding — but because they help people make better decisions. We move fast, but with intention. We ship small, steady improvements. We use AI to amplify good judgment, not replace it.
And we care about the details: the structure of a page, the clarity of a sentence, the feedback that unblocks someone else tomorrow. GitBook is our product, but it's also our own way of working made visible: a belief that documentation, done right, quietly powers everything a team builds.

Ready for a new challenge and want to work with a bunch of talented and compassionate folks? We'd love to hear from you!

440 N Barranca Ave #7171, Covina, CA 91723, USA. EIN: 320502699
Mentions (30d): 0
Reviews: 0
Platforms: 2
Sentiment: 0% (0 positive)
Features
Industry: information technology & services
Employees: 42
Funding Stage: Seed
Total Funding: $2.1M
Pricing found: $25, $0.20, $65, $249
I made a Claude skill that builds learning paths from official docs instead of random blog links
Even though Claude is impressive and can do a lot out of the box, I like staying informed about how things actually work under the hood. Even if it's just curiosity, I want to understand the technology I'm using, not just trust the output.

The problem is, whenever I asked AI for learning resources and forgot to specify where I wanted them from, I kept getting random responses from "innovative" sources. A Medium post from 2021. Some guy's YouTube playlist. A paid course recommendation. No structure, no sense of what to read first or whether any of it was current.

So I made a skill called Mentor. Give it a topic, it gives you a phased learning path built mostly from official docs.

The thing I care about: source hierarchy. Official docs first, always. Vendor and maintainer content second. Community posts only when official docs have a real gap — and it has to say why it's including them.

It picks up your background from context too. I said "teach me Rust, I've been writing Go for 3 years" and it skipped the beginner stuff, framed ownership through Go's garbage collector, and ordered the Rust Book chapters in a way that makes sense if you already know systems programming.

Something I haven't seen in other tools: every resource gets tagged with how to approach it. "Read now" means you need this before the next step. "Skim" means get the shape of it. "Hands-on" means clone it and build something. "Bookmark as reference" means you'll want it later but not right now. Most lists just hand you 15 links and say good luck.

Broad topics (Rust, Kubernetes) get a 4-phase structure. Narrow topics (Terraform modules, GitLab CI caching) get compressed. It doesn't force everything into the same shape.

Repo: https://github.com/ayhammouda/mentor. The .skill file is on the release page: claude skill add mentor.skill. MIT licensed. 4 example outputs in the repo if you want to see what it produces before installing.
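To make the source hierarchy and approach tags concrete, here is roughly what one phase of an output could look like. The field names and shape are my illustration, not the skill's actual format:

```python
# Hypothetical shape of one phase in a Mentor learning path.
# Field names and values are invented for illustration.
phase = {
    "phase": 1,
    "name": "Foundations",
    "resources": [
        {"url": "https://doc.rust-lang.org/book/",
         "source_tier": "official",
         "approach": "read now",
         "why": "canonical starting point"},
        {"url": "https://github.com/rust-lang/rustlings",
         "source_tier": "maintainer",
         "approach": "hands-on",
         "why": "exercises that pair with the Book"},
    ],
}

# The source hierarchy: official ranks before maintainer, which ranks
# before community content.
tiers = [r["source_tier"] for r in phase["resources"]]
print(tiers)  # ['official', 'maintainer']
```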
Curious about topics where this breaks down, especially where official docs are bad enough that "official first" is the wrong call.

submitted by /u/ahammouda
[P] Dante-2B: I'm training a 2.1B bilingual fully open Italian/English LLM from scratch on 2×H200. Phase 1 done — here's what I've built.
The problem

If you work with Italian text and local models, you know the pain. Every open-source LLM out there treats Italian as an afterthought — English-first tokenizer, English-first data, maybe some Italian sprinkled in during fine-tuning. The result: bloated token counts, poor morphology handling, and models that "speak Italian" the way a tourist orders coffee in Rome. I decided to fix this from the ground up.

What is Dante-2B

A 2.1B parameter, decoder-only, dense transformer. Trained from scratch — no fine-tune of Llama, no adapter on Mistral. Random init to coherent Italian in 16 days on 2× H200 GPUs.

Architecture:
- LLaMA-style with GQA (20 query heads, 4 KV heads — a 5:1 ratio)
- SwiGLU FFN, RMSNorm, RoPE
- d_model=2560, 28 layers, d_head=128 (optimized for Flash Attention on H200)
- Weight-tied embeddings, no MoE — all 2.1B params active per token
- Custom 64K BPE tokenizer built specifically for Italian + English + code

Why the tokenizer matters

This is where most multilingual models silently fail. Standard English-centric tokenizers split l'intelligenza into l, ', intelligenza — 3 tokens for what any Italian speaker sees as 1.5 words. Multiply that across an entire document and you're wasting 20-30% of your context window on tokenizer overhead.

Dante's tokenizer was trained on a character-balanced mix (~42% Italian, ~36% English, ~22% code) with a custom pre-tokenization regex that keeps Italian apostrophe contractions intact. Accented characters (à, è, é, ì, ò, ù) are pre-merged as atomic units — they're always single tokens, not two bytes glued together by luck. Small detail, massive impact on efficiency and quality for Italian text.

Training setup

Data: ~300B token corpus. Italian web text (FineWeb-2 IT), English educational content (FineWeb-Edu), Italian public domain literature (171K books), legal/parliamentary texts (Gazzetta Ufficiale, EuroParl), Wikipedia in both languages, and StarCoderData for code.
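The apostrophe-preserving pre-tokenization can be sketched with a regex. This is a minimal illustration of the idea, not the project's actual pattern:

```python
import re

text = "l'intelligenza artificiale"

# English-style pre-tokenization treats the apostrophe as its own token:
naive = re.findall(r"[a-zA-Zàèéìòù]+|'", text)
# An Italian-aware pattern keeps elided articles attached to the next word:
aware = re.findall(r"[a-zA-Zàèéìòù]+'[a-zA-Zàèéìòù]+|[a-zA-Zàèéìòù]+", text)

print(naive)  # ['l', "'", 'intelligenza', 'artificiale'], 4 pieces
print(aware)  # ["l'intelligenza", 'artificiale'], 2 pieces
```

Fewer pre-tokenization pieces means BPE merges can learn whole Italian contractions, which is where the 20-30% context saving comes from.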
Everything pre-tokenized into uint16 binary with quality tiers.

Phase 1 (just completed): 100B tokens at seq_len 2048. DeepSpeed ZeRO-2, torch.compile with reduce-overhead, FP8 via torchao. Cosine LR schedule 3e-4 → 3e-5 with 2000-step warmup. ~16 days, rock solid — no NaN events, no OOM, consistent 28% MFU.

Phase 2 (in progress): Extending to 4096 context with 20B more tokens at reduced LR. Should take ~4-7 more days.

What it can do right now

After Phase 1 the model already generates coherent Italian text — proper grammar, correct use of articles, reasonable topic continuity. It's a 2B, so don't expect GPT-4 reasoning. But for a model this size, trained natively on Italian, the fluency is already beyond what I've seen from Italian fine-tunes of English models at similar scale. I'll share samples after Phase 2, when the model has full 4K context.

What's next

- Phase 2 completion (est. ~1 week)
- HuggingFace release of the base model — weights, tokenizer, config, full model card
- SFT phase for instruction following (Phase 3)
- Community benchmarks — I want to test against Italian fine-tunes of Llama/Gemma/Qwen at similar sizes

Why I'm posting now

I want to know what you'd actually find useful. A few questions for the community:

- Anyone working with Italian NLP? I'd love to know what benchmarks or tasks matter most to you.
- What eval suite would you want to see? I'm planning perplexity on held-out Italian text + standard benchmarks, but if there's a specific Italian eval set I should include, let me know.
- Interest in the tokenizer alone? The Italian-aware 64K BPE tokenizer might be useful even independently of the model — should I release it separately?
- Training logs / loss curves? Happy to share the full training story with all the numbers if there's interest.

About me

I'm a researcher and entrepreneur based in Rome.
PhD in Computer Engineering, I teach AI and emerging tech at LUISS University, and I run an innovation company (LEAF) that brings emerging technologies to businesses. Dante-2B started as a research project to prove that you don't need a massive cluster to train a decent model from scratch — you need good data, a clean architecture, and patience.

Everything will be open-sourced. The whole pipeline — from corpus download to tokenizer training to pretraining scripts — will be on GitHub. Happy to answer any questions. 🇮🇹

Discussion also on r/LocalLLaMA here

submitted by /u/angeletti89
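The cosine schedule with warmup from the Phase 1 setup can be sketched as follows. Only the peak LR, floor, and warmup length come from the post; the total step count here is illustrative:

```python
import math

peak_lr, min_lr = 3e-4, 3e-5   # schedule endpoints quoted in the post
warmup, total = 2000, 50_000   # warmup from the post; total is illustrative

def lr_at(step):
    if step < warmup:
        # linear warmup from 0 to the peak learning rate
        return peak_lr * step / warmup
    # cosine decay from peak_lr down to min_lr over the remaining steps
    t = (step - warmup) / (total - warmup)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * t))

print(lr_at(0), lr_at(warmup), lr_at(total))  # 0.0, then 3e-4, then 3e-5
```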
I built CLI-Anything-WEB — a Claude Code plugin that generates complete Python CLIs for any website (17 CLIs so far: Amazon, Airbnb, TripAdvisor, Reddit, YouTube...)
Point it at a URL, Claude Code captures the live HTTP traffic, and generates a production-grade Python CLI with commands, tests, REPL mode, and --json output — fully automated across 4 phases.

How it works

- Phase 1 (capture): Records live browser traffic via playwright-cli
- Phase 2 (methodology): Analyzes endpoints, designs architecture, generates CLI code
- Phase 3 (testing): Writes unit + E2E tests (40-60+ per CLI, all passing)
- Phase 4 (standards): 3 parallel Claude agents do compliance review, then publishes

17 CLIs generated so far

No-auth public scraping: Amazon, Airbnb, TripAdvisor, Reddit, YouTube, Hacker News, GitHub Trending, Pexels, Unsplash, ProductHunt, FutBin, Google AI
Auth-required: NotebookLM, Google AI Studio, Booking.com, ChatGPT, CodeWiki

Example — built Amazon search in one pipeline run

```bash
cli-web-amazon search "crash cart adapter" --json
cli-web-amazon bestsellers electronics --json
cli-web-amazon product get B002CLKFTQ --json
```

Open source: https://github.com/ItamarZand88/CLI-Anything-WEB

The entire pipeline runs inside Claude Code using a 4-phase skill system. Anti-bot bypass is handled with curl_cffi impersonation (Chrome/Safari iOS) — no Playwright needed at runtime. Each CLI is a standalone pip-installable package.

Happy to answer questions about the skill system, anti-bot patterns, or how the testing phase works.

submitted by /u/zanditamar
I reverse-engineered why Claude Code burns through your usage so fast. 7 bugs that stack on top of each other — and the worst one activates when Extra Usage kicks in
**Edit: yes I used Claude to help research this, that's literally the point — using the tool to investigate the tool. The findings are real and verified from the public npm package. If you can't be bothered to read it, have your Claude read it for you. GitHub issue with technical details: anthropics/claude-code#43566**

I'm a Max 20x subscriber. On April 1st I burned 43% of my weekly quota in a single day on a workload that normally takes a full week. I spent the last few days tracing why. Here's what I found.

There are 7 bugs that stack on top of each other. Three are fixed, two are mitigable, two are still broken. But the worst one is something nobody's reported yet.

**The big one: Extra Usage kills your cache**

There's a function in cli.js that decides whether to request 1-hour or 5-minute cache TTL from the server. It checks if you're on Extra Usage. If you are, it silently drops to 5 minutes. Any pause longer than 5 minutes triggers a full context rebuild at API rates, charged to your Extra Usage balance.

The server accepts 1h when you ask for it. I verified this. The client just stops asking the moment Extra Usage kicks in.

For a 220K context session that means roughly $0.22 per turn with 1h cache vs $0.61 per turn with 5m. That's 2.8x more expensive per turn at the exact moment you start paying per token. Your $30 Extra Usage cap buys ~48 turns instead of the ~135 it could buy with 1h cache.

The death spiral: cache bugs drain your plan usage faster than normal, plan runs out, Extra Usage kicks in, client detects it and drops cache to 5m, every bathroom break costs a full rebuild, Extra Usage evaporates, you're locked out until the 5h reset. Repeat.

A one-line patch to the function (making it always return true) fixes it. The server happily gives you 1h. It's overwritten by updates though.

**The other 6 layers (quick summary)**

1. The native installer binary ships with a custom Bun runtime that corrupts the cache prefix on every request. npm install fixes this. Verify with file $(which claude): it should be a symlink, not an ELF binary.
2. Session resume dropped critical attachment types from v2.1.69 to v2.1.90, causing full cache misses on every resume. 28 days, 20 versions. Fixed in v2.1.91.
3. Autocompact had no circuit breaker. Failed compactions retried infinitely. An internal source comment documented 1,279 sessions with 50+ consecutive failures. Fixed in v2.1.89.
4. Tool results are truncated client side (Bash at 30K chars, Grep at 20K). The stubs break cache prefixes. These caps are in your local config at ~/.claude.json under cachedGrowthBookFeatures and can be inspected.
5. (the Extra Usage one above)
6. The client fabricates fake rate limit errors on large transcripts. Shows model: synthetic with zero tokens. No actual API call made. Still unfixed.
7. Server-side compaction strips tool results mid-session without notification, breaking cache. Can't be patched client side. Still unfixed.

These multiply, not add. A subscriber hitting 1+3+5 simultaneously could burn through their weekly allocation in under 2 hours.

**What you can do**

Switch to npm if you're on the native installer. Update to v2.1.91. If you're comfortable editing minified JS you can patch the cache TTL function to always request 1h.

**What I'm not claiming**

I don't know if the Extra Usage downgrade is intentional or an oversight. Could be cost optimization that didn't account for second-order effects. I just know the gate exists, the server honors 1h when asked, and a one-line patch proves the restriction is client side.

**Scope note**

This is all from the CLI. But the backend API and usage bucket are shared across claude.ai, Cowork, desktop and mobile. If similar caching logic exists in those clients it could affect everyone.

GitHub issue with full technical details: anthropics/claude-code#43566

submitted by /u/UnfairFortune9840
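The per-turn figures in the post imply the turn counts directly; a quick sanity check of the arithmetic, with the dollar figures taken from the post and the rounding mine:

```python
cap = 30.00            # Extra Usage cap in dollars (from the post)
cost_1h_cache = 0.22   # per turn, 220K context, 1h cache TTL
cost_5m_cache = 0.61   # per turn, same context, 5m cache TTL

# How many turns the cap buys under each TTL (post says ~135 vs ~48)
print(int(cap / cost_1h_cache))                 # 136
print(int(cap / cost_5m_cache))                 # 49
# Cost multiplier per turn when the client drops to the 5m TTL
print(round(cost_5m_cache / cost_1h_cache, 1))  # 2.8
```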
Solo dev + Claude: From side project to press coverage in under 2 months - here's what I learned
Hey everyone, I wanted to share my experience building NV-UV, a free companion app for undervolting NVIDIA RTX 50-series GPUs, almost entirely with Claude over the past ~2 months and 100+ sessions. I started in early February as a small side project for my own RTX 5090, and it kind of snowballed from there.

What NV-UV does

It's a WPF/C#/.NET 9.0 desktop app that makes GPU undervolting accessible - lower power draw, lower temps, same performance. It integrates with MSI Afterburner and includes features like automatic per-game UV profiles (587 games in the database), a built-in stress test scanner, crash detection that automatically adjusts your settings, and full DE/EN localization. It's currently in Open Alpha.

Press coverage

The project got picked up by several major tech outlets, which I honestly did not expect:

VideoCardz (2 articles)
https://videocardz.com/newz/nv-uv-brings-one-click-undervolting-to-geforce-rtx-50-gpus
https://videocardz.com/newz/nv-uv-enters-open-alpha-for-geforce-rtx-50-series-rtx-5060-and-laptop-support-planned

PCGH (Germany's biggest PC hardware magazine)
https://www.pcgameshardware.de/Grafikkarten-Grafikkarte-97980/News/NV-UV-Undervolting-Tool-fuer-Geforce-RTX-5000-1521437/
https://www.pcgameshardware.de/Grafikkarten-Grafikkarte-97980/Specials/NV-UV-Untervolting-Tool-startet-Alpha-Test-1523449/

KitGuru
https://www.kitguru.net/components/graphic-cards/joao-silva/new-nv-uv-utility-aims-to-simplify-undervolting-for-rtx-50-series-gpus/

Hardwareluxx
https://www.hardwareluxx.de/index.php/news/hardware/grafikkarten/68478-nv-uv-undervolting-der-geforce-rtx-50-karten-per-mausklick.html

PCGH even gave NV-UV its own dedicated subforum for the Open Alpha.

How Claude was involved - it wasn't just code

This is what I think makes this interesting for this community. Yes, Claude wrote and refactored most of the C# code with me. But it went way beyond that:

Documentation - I had never used GitBook before. Claude walked me through the setup, told me "do this, then that", I asked 2-3 questions, and it just worked. I didn't even need to read the docs first - that came later naturally as I started working with it. It was literally like having a buddy who knows the tool and just tells you what to click. Same thing for the full user guide (DE + EN), tester guides, and a detailed handbook.

Discord community - Claude helped me set up and grow a Discord community by summarizing the technical details of each build into announcements and changelogs that testers could actually understand. 86+ members now.

Community posts - I write all my forum posts and support answers myself, but Claude helps me polish them, especially the English ones. My native language is German, so having Claude correct and refine my English texts while keeping my voice is a huge help. The posts are mine, Claude just makes sure my English doesn't suck.

Architecture decisions - We discussed approaches together - sacrificial process architecture, encryption strategies, pipe protocols - Claude would lay out options, I'd pick the direction.

Debugging - I'd paste logs, we'd discuss what went wrong, I'd come back with "that doesn't work, let's try it this way" - and often that's what cracked it. It was a real back and forth, not just Claude spitting out answers.

I'm not a professional developer, but I do have some coding background from my job - enough to read code, understand logs, and know when something smells off. I review everything, I dig through logs myself, I make the calls.

One thing I want to be clear about: every single feature in NV-UV - the UV-Pilot, Game Replay, the stress test scanner, the preset system, the OCS import - those are all 100% my ideas. Claude can't come up with stuff like that because it doesn't know what GPU users actually need.

But it wasn't just "build me this". I'd define features in detail, Claude would come back with options, and then I'd think it through - "if we do it this way, this and that could go wrong", "we need to optimize this", "isn't there another approach?", "let's analyze this deeper before we commit". Sometimes we'd go back and forth for an entire session before landing on the right architecture. The vision and the decisions are mine, the implementation is teamwork.

But let's be real: Claude is basically that one friend who happens to have a CS degree and somehow always has time to help. I work in Visual Studio as my editor, no AI agent or copilot, just the Claude chat window. It's basically the manual version of what people call "vibe coding" - in reality it's a mix of both. Sometimes he does a lot on his own, sometimes I go through it more carefully and review things. Sometimes with bug fixing he's almost too fast, so I have to pull him back a bit. Every now and then he fixes something just to make it compile, and I only catch that later, but most of the time it works really well. Claude also handles a lot of the heavy lifting, like fixing compiler errors on his own.
Claude AI Cheat Sheet
Most people use Claude like a chatbot. But Claude is actually a full AI workspace if you know how to use it. I broke the entire system down in this Claude AI Cheat Sheet:

Claude Models

Use the right model for the job.
• Opus 4.5 → Hard reasoning, research, complex tasks
• Sonnet 4.5 → Daily writing, analysis, editing (best default)
• Haiku 4.5 → Fast, cheap tasks and quick prompts

All models support 200K context, which means you can feed large documents and projects.

Prompting Techniques

The quality of your output depends on the structure of your prompt. Some of the most effective techniques:
• Role playing
• Chained instructions
• Step-by-step prompting
• Adding examples
• Tree of thought reasoning
• Style-based instructions

The best combo usually is: Role + Examples + Step by Step.

Role → Task → Format Framework

One of the simplest ways to improve prompts. Example structure:

Act as [Role]
Perform [Task]
Output in [Format]

Example:

Act as a marketing expert
Create a content strategy
Output in a table or bullet points

Prompt Learning Methods

Different prompt styles produce different outputs.
• Open ended → broad exploration
• Multiple choice → force clear decisions
• Fill in the blank → structured responses
• Comparative prompts → X vs Y analysis
• Scenario prompts → role based thinking
• Feedback prompts → review and improve content

Prompt Templates

You can dramatically improve results using structured prompting. Three core styles:
• Zero shot → no examples
• One shot → one example provided
• Few shot → multiple examples

More examples usually means better outputs.

Projects

Projects turn Claude into a knowledge workspace. You can:
• Upload files as knowledge
• Organize chats by topic
• Add custom instructions
• Share with teams
• Maintain long context across work

Artifacts

Artifacts allow Claude to generate interactive outputs like:
• Code
• Documents
• Visualizations
• HTML or Markdown apps

You can read, edit, and run them directly inside the chat.

MCP + Connectors

MCP (Model Context Protocol) connects Claude to external tools. Examples:
• Google Drive
• Gmail
• Slack
• GitHub
• Figma
• Asana
• Databases

This allows Claude to work with real data and workflows.

Claude Code

Claude can also act as a coding agent inside the terminal. It can:
• Read entire codebases
• Write and test code
• Run commands
• Integrate with Git
• Deploy projects

Reusable Skills + Hooks

Claude supports reusable markdown instructions called Skills. Plus automation hooks like:
• PreToolUse
• PostToolUse
• Stop
• SubagentStop

These help control workflows and outputs.

Prompt Starters

Some prompts work almost everywhere:
• "Act as [role] and perform [task]."
• "Explain this like I am 10"
• "Compare X vs Y in a table."
• "Find problems in this document."
• "Create a step-by-step plan for [goal]."
• "Summarize in 3 bullet points."

Study the cheat sheet once. Your prompting will immediately level up.

submitted by /u/Longjumping_Fruit916
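The Role → Task → Format framework is easy to turn into a reusable helper. A trivial sketch; the function name is mine:

```python
def build_prompt(role, task, fmt):
    # Role → Task → Format, per the cheat sheet's framework
    return f"Act as {role}. {task}. Output in {fmt}."

print(build_prompt("a marketing expert", "Create a content strategy", "a table"))
# Act as a marketing expert. Create a content strategy. Output in a table.
```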
Every programming language abstracts the one below it. Markdown is next.
Lately I have been pushing to see how far you can go with AI and coding by creating the Novel Engine, which lets you build books like an IDE lets you compile code into an app. Here is something I have learned from the process.

Every programming language is a meta-language — an abstraction over the layer below. C abstracts assembly. Python abstracts C. LLMs are the next layer: natural language abstracts Python. The pattern didn't break. It continued.

I have a 400-line 'program' called intake: a markdown file that stores a prompt you can attach to a context with a feature request file. Intake accepts documents in natural language and produces one to many encapsulated session prompts, and outputs another program prompt as markdown that runs the sessions, committing code per step. Each feature program has a top-level control prompt that loops until all sessions are executed and the feature is complete. It has state, control loops, and handles failures like a session crash. It can resume when terminated and does not have to start from the beginning because the context is stored on disk.

What this means in practice is you can give me a text request for a feature, and I can turn it into a shipped feature by running only two prompts: intake, and the master program prompt it produces.

The intake markdown file has shipped production features on Novel Engine. Some of the features include document version control, a helper agent that helps the user navigate the app, and an onboarding guide with tooltips. The intake source file is on GitHub. Feature requests go in. Completed features come out.

submitted by /u/HuntConsistent5525
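The resume-from-disk behavior described above boils down to persisting completed-session state between runs. A minimal sketch of the pattern; the file name and structure are my invention, not the actual intake format:

```python
import json
from pathlib import Path

STATE = Path("intake_state.json")   # hypothetical on-disk progress file

def run_sessions(sessions):
    # Load progress left behind by a previous (possibly crashed) run
    done = json.loads(STATE.read_text()) if STATE.exists() else []
    for name in sessions:
        if name in done:
            continue                 # resume: skip already-completed sessions
        # ... execute the session prompt here, committing code per step ...
        done.append(name)
        STATE.write_text(json.dumps(done))   # persist after every session
    return done
```

Because state is written after each session, killing the process and rerunning picks up exactly where it left off.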
DIY technical book with Claude Code
I wanted a long-read book about intermediate Claude Code features. Something in EPUB I could read on an e-reader. Couldn't find one, so I built one myself with Claude Code.

The process:

1. Asked Claude to collect all Anthropic docs onto a local drive as reference material.
2. Used Claude, Gemini, and Perplexity to deep-research real-world examples of how people use Claude Code, specifically in front-office finance (my field).
3. Asked Claude Code to write a technical book using standard non-fiction structure (think Malcolm Gladwell or Michael Lopp): each chapter opens with a recap, follows with the technical features from Anthropic's docs, then real-world examples of that feature in practice, then closes with the most important points from the chapter.

What I learned about the process itself:

If you just ask Claude Code to write a book from all that material, it reads everything, compacts, and produces something mediocre. Borderline dumb. If you tell it to "use agents" without constraints, it spawns one agent per chapter, hits the session limit (I was on Pro at the time), and all the work from all agents is lost. The fix: tell Claude to run no more than 2-3 agents at a time, wait for them to finish, then launch the next batch.

The book is about 1.5 months old now, which means it's basically obsolete. AI-related books are a bit like newspapers. But the complete Claude assignment file with everything mentioned above is on the GitHub repo, so if you want to build your own version, you don't have to reinvent the wheel: https://github.com/vkorost/weekend-diy-book

The same approach can be used for any book - collect the materials, formulate the assignment, let Claude do the rest.

submitted by /u/vkorost
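The batching fix generalizes to any fan-out workflow. A sketch of the "no more than 3 at a time, wait, then launch the next batch" pattern, where the chapter-writing function is a stand-in for dispatching one agent:

```python
from concurrent.futures import ThreadPoolExecutor

chapters = [f"chapter_{i}" for i in range(1, 9)]

def write_chapter(name):
    # stand-in for dispatching one Claude agent and waiting for its result
    return f"{name}: drafted"

results = []
for i in range(0, len(chapters), 3):          # batches of at most 3
    batch = chapters[i:i + 3]
    with ThreadPoolExecutor(max_workers=3) as pool:
        results.extend(pool.map(write_chapter, batch))
    # the `with` block joins the whole batch before the next one starts

print(len(results))  # 8
```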
Open-sourced a Claude Code plugin that captures books (Kindle/Apple Books/PDF) and turns them into structured Markdown
I kept losing insights from books I read. Highlighting is useless — you never go back to it. So I built this for myself and figured others might find it useful.

It's a Claude Code plugin. You open a book on your Mac, run a command, and it:

- Screenshots every page automatically (macOS screencapture + CGWindowList via Swift)
- Runs OCR with macOS Vision. Pages that Vision can't read well (especially vertical Japanese text) get re-read by Claude Code agents via multimodal image reading
- Analyzes the full text and generates structured Markdown files organized by theme — not by chapter order

The output is multiple topic files (500+ lines each) with tables, blockquotes, cross-references, plus a hub file with wikilinks. Built for Obsidian but it's just standard Markdown.

Some implementation details if you're curious:

- No external API keys — all AI work runs through Claude Code agents
- Theme count scales with book size (4-6 for a short book, up to 25 for 500+ pages)
- Parallel agents generate topic files simultaneously
- End-of-book detection via perceptual image hashing (3 consecutive identical pages = done)
- Supports Mac Kindle, Apple Books, Kindle Cloud Reader (Playwright), and scanned PDFs (Poppler)
- Requires macOS. The OCR uses a Swift CLI that compiles on first run

GitHub: https://github.com/masterleopold/book-capture

submitted by /u/masterleopold
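The end-of-book check ("3 consecutive identical pages = done") can be sketched with a toy difference hash over grayscale pixels. The plugin's actual hashing may differ; this only illustrates the idea:

```python
def dhash(pixels):
    # toy perceptual hash: one bit per horizontal-neighbor comparison
    return tuple(row[x] < row[x + 1]
                 for row in pixels
                 for x in range(len(row) - 1))

def capture_done(page_hashes, streak=3):
    # stop paging when the last `streak` captured pages hash identically
    return len(page_hashes) >= streak and len(set(page_hashes[-streak:])) == 1

page_a = [[10, 20, 30], [30, 20, 10]]   # stand-ins for grayscale page pixels
page_b = [[30, 20, 10], [10, 20, 30]]

hashes = [dhash(page_a), dhash(page_b), dhash(page_b), dhash(page_b)]
print(capture_done(hashes))  # True: the last three pages are identical
```

Hashing (rather than comparing raw screenshots byte-for-byte) keeps the check cheap and robust to tiny rendering noise.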
Claude Code on Windows: 6 critical bugs closed as "not planned" — is Anthropic aware that 70% of the world and nearly all enterprise IT runs Windows?
I'm a paying Claude subscriber using Claude Code professionally on Windows 11 with WSL2 through VS Code. I've hit a wall. Not with the AI — Claude is brilliant. The wall is that Claude Code's VS Code extension simply does not work reliably on Windows. Here's what I've documented:

- The VS Code extension freezes on ANY file write or code generation over 600 lines. Just shows "Not responding" and dies. Filed as #23053 on GitHub — Anthropic closed it as "not planned" and locked it.
- The March 2026 Windows update (KB5079473) crashes every WSL2 session at 4.6GB heap exhaustion.
- Claude Code spawns PowerShell 38 times on every WSL startup — 30 seconds of input lag before you can even type.
- Memory leaks grow to 21GB+ during normal sessions with sub-agents.
- Path confusion between WSL and Windows causes silent failures.
- Extreme CPU/memory usage makes extended sessions on WSL2 impossible.

Every single one of these is tagged "platform:windows" on GitHub. Several are closed as stale or "not planned." Meanwhile, Mac users report none of these issues. Because Anthropic builds and tests on Macs.

I get it — Silicon Valley runs on MacBooks. But the rest of the world doesn't. The Fortune 500 runs on Windows. Manufacturing, finance, defense, healthcare, automotive, energy, government — their developers are on Windows. Their IT policies mandate Windows. When these companies evaluate AI coding tools for enterprise rollout at 500-5,000 seats, they evaluate on Windows.

GitHub Copilot works on Windows. Cursor works on Windows. Amazon Q works on Windows. They will win every enterprise deal that Claude Code can't even compete for because the tool freezes on basic file operations.

The "not planned" label on a file-writing bug for the world's dominant platform should alarm Anthropic's product leadership.

I've filed a detailed bug report on GitHub today. I'm posting here to ask: am I alone? Are other Windows users hitting these same walls?
And does Anthropic actually have a plan for Windows, or is it permanently second-class? I believe Claude is the best AI available. But the best model behind a broken tool on the most common platform is a wasted advantage.

---

cc: u/alexalbert2 u/birch_anthropic — Anthropic, 95K people are watching this thread. Windows users deserve a response.

submitted by /u/Critical_Ladder3127
I built a Claude Code plugin that generates Python CLIs for any website by capturing HTTP traffic
After months of building, I'm open-sourcing CLI-Anything-Web — a Claude Code plugin that turns any web app into a command-line tool.

How it works:

1. You run /cli-anything-web https://some-website.com
2. Playwright opens a browser and captures all HTTP traffic while you use the site
3. Claude analyzes the API (REST, GraphQL, RPC, whatever) and generates a full Python CLI
4. You get cli-web- on your PATH — with auth, REPL mode, --json output, and tests

What you get:

- Click commands with --json on everything (so Claude can use the CLIs as tools)
- Interactive REPL mode
- Browser-based auth (Google SSO, OAuth, cookies)
- Handles Cloudflare, AWS WAF, Google batchexecute, and more

The repo ships with 10 reference CLIs I generated: Reddit, Booking.com, Google Stitch (AI design), Google AI Mode, NotebookLM, Pexels, Unsplash, Product Hunt, FUTBIN, and GitHub Trending.

The coolest part: generated CLIs come with Claude Code skills, so Claude automatically uses them to answer your questions. Ask "what's trending on GitHub?" and it runs the CLI for you.

GitHub: https://github.com/ItamarZand88/CLI-Anything-WEB

Would love feedback — especially on what websites you'd want CLIs for.

submitted by /u/zanditamar
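The "--json on everything" convention looks roughly like this in Click. This is a generic sketch, not code from the plugin; the command name and result fields are invented:

```python
import json

import click

@click.command()
@click.argument("query")
@click.option("--json", "as_json", is_flag=True, help="emit machine-readable output")
def search(query, as_json):
    # placeholder results; a generated CLI would call the captured API here
    results = [{"title": f"result for {query}"}]
    if as_json:
        click.echo(json.dumps(results))   # agents and scripts consume this
    else:
        for item in results:
            click.echo(item["title"])     # humans read this

if __name__ == "__main__":
    search()
```

Exposing the same flag on every command is what lets Claude treat each CLI as a structured tool instead of scraping human-oriented output.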
I built an open-source context framework for Codex CLI (and 8 other AI agents)
Codex is incredible for bulk edits and parallel code generation. But every session starts from zero — no memory of your project architecture, your coding conventions, your decisions from yesterday.

What if Codex had persistent context? And what if it could automatically delegate research to Gemini and strategy to Claude when the task called for it?

I built Contextium — an open-source framework that gives AI agents persistent, structured context that compounds across sessions. I'm releasing it today.

What it does for Codex specifically

Codex reads an AGENTS.md file. Contextium turns that file into a context router — a dynamic dispatch table that lazy-loads only the knowledge relevant to what you're working on. Instead of a static prompt, your Codex sessions get:

- Your project's architecture decisions and past context
- Integration docs for the APIs you're calling
- Behavioral rules that are actually enforced (coding standards, commit conventions, deploy procedures)
- Knowledge about your specific stack, organized and searchable

The context router means your repo can grow to hundreds of files without bloating the context window. Codex loads only what it needs per session.

Multi-agent delegation is the real unlock

This is where it gets interesting. Contextium includes a delegation architecture:

- Codex for bulk edits and parallel code generation (fast, cheap)
- Claude for strategy, architecture, and complex reasoning (precise, expensive)
- Gemini for research, web lookups, and task management (web-connected, cheap)

The system routes work to the right model automatically based on the task. You get more leverage and spend less. One framework, multiple agents, each doing what they're best at.
What's inside

- Context router with lazy loading — triggers load relevant files on demand
- 27 integration connectors — Google Workspace, Todoist, QuickBooks, Home Assistant, and more
- 6 app patterns — briefings, health tracking, infrastructure remediation, data sync, goals, shared utilities
- Project lifecycle management — track work across sessions with decisions logged and searchable via git
- Behavioral rules — not just documented, actually enforced through the instruction file

Works with 9 AI agents: Claude Code, Gemini CLI, Codex, Cursor, Windsurf, Cline, Aider, Continue, GitHub Copilot.

Battle-tested

I've used this framework daily for months: 100+ completed projects, 600+ journal entries, 35 app protocols running in production. The patterns shipped in the template are the ones that survived sustained real-world use. Plain markdown. Git-versioned. No vendor lock-in. Apache 2.0.

Get started

```bash
curl -sSL contextium.ai/install | bash
```

Interactive installer with a gum terminal UI — picks your agent, selects your integrations, optionally creates a GitHub repo, then launches your agent ready to go.

GitHub: https://github.com/Ashkaan/contextium
Website: https://contextium.ai

Happy to answer questions about the Codex integration or the delegation architecture.

submitted by /u/Ashkaan4
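A context router is essentially a trigger-to-file dispatch table. A minimal sketch of the lazy-loading idea; the trigger words and file paths here are invented for illustration, not Contextium's actual layout:

```python
# Hypothetical trigger -> context-file routing table
ROUTES = {
    "deploy": "context/deploy-procedures.md",
    "billing": "context/quickbooks-integration.md",
    "style": "context/coding-standards.md",
}

def route_context(task_description):
    """Return only the context files whose triggers appear in the task."""
    task = task_description.lower()
    # Everything not matched stays out of the context window entirely
    return [path for trigger, path in ROUTES.items() if trigger in task]

print(route_context("Deploy the new billing service"))
# ['context/deploy-procedures.md', 'context/quickbooks-integration.md']
```

The repo can then hold hundreds of context files; each session pays the token cost only for the handful the router selects.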
Yes, GitBook AI offers a free tier. Pricing found: $25, $0.20, $65, $249
Key features include a connected, personalized AI Assistant and a connected layer for product knowledge.
Based on 17 social mentions analyzed, 0% of sentiment is positive, 100% neutral, and 0% negative.