Claude Code eats my tokens reading files. So I made Gemini CLI do it for free.
I have a Google Pro plan for free, thanks to my telecom provider. So I built a simple MCP bridge that lets Opus delegate work to Gemini workers: reading codebases, summarizing docs, bulk research, all inside Flash's 1M-token context window. Opus stays the brain; Gemini does the legwork, for $0. Instead of burning an Opus message on "read this project and find the complex files," Claude sends ~50 tokens to Gemini and gets a compact answer back: 250 tokens instead of 500,000. ~200 lines of Python, 15-minute setup, no API keys, just Gemini CLI's free OAuth. Works with Claude Desktop and Claude Code. It's rough; I built it solo and I'm not a dev. But it's been my daily driver for weeks. If you try it and want to make it better, issues and PRs welcome. https://github.com/ankitdotgg/making-gemini-useful-with-claude submitted by /u/HanDunker27
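Not the repo's actual code, but the core delegation step can be sketched in a few lines of Python. The `gemini -p` prompt flag and the function names here are assumptions; the real bridge wraps something like this in an MCP server so Claude can call it as a tool:

```python
import subprocess

def ask_gemini(prompt: str, run=subprocess.run) -> str:
    """Hand a bulk-reading task to the Gemini CLI and return its answer.

    `run` is injectable so the bridge can be tested without the CLI installed.
    """
    result = run(
        ["gemini", "-p", prompt],  # assumes the Gemini CLI's -p/--prompt flag
        capture_output=True, text=True, timeout=300,
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr.strip())
    return result.stdout.strip()

def summarize_codebase(path: str, question: str, run=subprocess.run) -> str:
    # Opus spends ~50 tokens on instructions; Gemini reads the files.
    prompt = (
        f"Read the project at {path} and answer briefly: {question} "
        "Reply in under 200 words."
    )
    return ask_gemini(prompt, run=run)
```

The key design point is that Opus never sees the file contents at all; only the compact answer travels back into the expensive context.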
Claude Sonnet default vs Haiku with extended thinking — similar quality, but is there a real cost difference?
I've been using Claude Haiku on default for both planning and building. Whenever something feels more complex or needs broader context, I switch to Claude Sonnet, also on default. Until now I never paid much attention to the extended thinking setting; I just left it off and worked that way. This strategy has kept my token use low instead of just blasting Sonnet on everything. Today I ran into a case where Haiku on default gave me an answer that was technically workable but missed the bigger context of the problem. The solution might have fixed the immediate issue but would have caused problems elsewhere in the code. Caught it before moving forward, thankfully. I rolled back and reran the same planning/analysis using Sonnet. That version understood the problem properly and gave me the right solution. I repeated the comparison a few times and the pattern held: Sonnet consistently understood the broader context, while Haiku default kept missing it and suggesting weaker fixes. Then I tried Haiku with extended thinking enabled. That changed things. Haiku with extended thinking was suddenly able to follow the broader context much better and started producing answers much closer to Sonnet quality. I repeated it several times and it stayed on point. So now I'm thinking of shifting my workflow to:
- Haiku default for simpler, routine work
- Haiku with extended thinking for tasks that need more reasoning or wider context
- Sonnet only when truly needed (and Opus for the rare, genuinely hard problems)
My question: in real-world use, how much am I actually saving with Haiku + extended thinking compared to just using Sonnet? At that point, is Haiku extended still meaningfully cheaper, or does it get close enough that I might as well use Sonnet? Also curious how others decide when to step up from Haiku default → extended thinking → Sonnet → Opus. Where are your personal thresholds? submitted by /u/Working-Middle2582
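For a rough sense of the gap, back-of-the-envelope math helps. The per-token prices below are assumptions (verify against Anthropic's current pricing page), and extended thinking is billed as extra output tokens, so it narrows the gap without closing it:

```python
# Assumed list prices in $ per million tokens (input, output); verify before relying on them.
PRICES = {"haiku": (1.00, 5.00), "sonnet": (3.00, 15.00)}

def cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the assumed list prices."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Same 20k-token planning request; thinking budget of 8k tokens is illustrative.
haiku_default  = cost("haiku", 20_000, 2_000)
haiku_extended = cost("haiku", 20_000, 2_000 + 8_000)
sonnet_default = cost("sonnet", 20_000, 2_000)
print(haiku_default, haiku_extended, sonnet_default)
```

Under these assumed prices, Haiku with a hefty thinking budget still comes in under Sonnet without thinking, but the margin shrinks as the thinking budget grows; on long outputs the two can converge.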
Built “Convince Them” at the Claude Builder Club Hackathon
Hello everyone, I’m a freshman at UIUC, and I’m currently attending the Claude Builder Club Hackathon. My project is called Convince Them. It’s a retro desktop-style AI experience where you get 10 minutes to pitch your startup idea to a panel of investor personas. They question your assumptions in real time, push back on your pitch, and at the end of the session they tell you what check they would write. I wanted to build something that feels more intense and realistic than a normal chatbot, so the whole experience is framed like a live pressure test rather than a casual conversation. Here is the link: https://www.convincethem.org/ Thanks for trying! submitted by /u/Slight_Scholar_388
Do any international salespeople or business developers use Claude?
Hey everyone! I work for a clothing brand and they gave me Claude as an AI to help with some of my tasks. I'd like to know if any of you are international salespeople or business developers, and if so, how do you use it to help you? (Prospecting, management, data tracking, audits, etc.) Thank you! submitted by /u/elterryble
I think I accidentally made gstack and superpowers obsolete
Ok, that's probably an exaggeration. But hear me out. For the past 2 years I've been building a TypeScript workflow compiler called Flow Weaver. You annotate TypeScript functions with inputs, outputs, and connections. The compiler validates the graph and generates standalone TypeScript. No runtime dependency. No YAML. Just code. While developing it I noticed that with Claude Code I could create those workflows and evolve them rapidly and easily. So easily that I started to forget its syntax: I just ask Claude and it generates the workflow through a conversation for me. The iteration loop is so tight that I just don't have to touch it anymore. Here is what that looks like visually (just a random example): https://preview.redd.it/3da1fnfromug1.png?width=2104&format=png&auto=webp&s=f144a68043ec4e991e56b4135795fa812b73f2ee Then it hit me: what if Flow Weaver was the deterministic layer we have been missing? Think about it. How many written workflow descriptions do you have scattered around? CLAUDE.md. Skills. Hooks. Memory files. All describing a process you want the AI to follow. And you even go out of your way to install gstack and superpowers on top. But they're all just text. The LLM decides whether to follow them. Is this you? https://preview.redd.it/0tlx8jg8rmug1.png?width=1038&format=png&auto=webp&s=9c92e5aa01b207b0ce68f7e1ffe2293f64e17327 Here is what I envisioned as a use case for Flow Weaver; this is simply a teaser I made to showcase what I just said. https://reddit.com/link/1siwil7/video/ks6ay9ullmug1/player What you're seeing is a compiled Flow Weaver workflow running inside Claude Code. You say "Review this PR" and Claude Code runs the workflow:
- Gather Context runs first: a pure function, no AI, deterministic
- The workflow hits Wait for Agent and execution pauses. It sends a structured prompt back to Claude Code with the full context
- Claude analyzes the code, finds the issues, returns the result
- The workflow resumes; Format Comments structures the output
- Done
Claude Code doesn't control the order. It starts the workflow, is then handed control back to resolve the workflow's prompt, does the thinking, and returns control with the response. The workflow decides what happens with the result. That's the difference. Not another prompt telling Claude what to do, fighting all the other contradictory rules you may have added. A compiler enforcing it. Want to share a workflow with your team or friends? It's just a TypeScript file. No Flow Weaver runtime needed. If you think I'm onto something, I'd love to hear it. But if this is just my imagination, let me know that too. Also, if any of this is confusing, allow me to explain; ask away. Thank you. It's free to use, source-available on GitHub, free for individuals and teams up to 15. GitHub | Website | Discord | X/Twitter The project is still early; if any of this resonates, come shape it. submitted by /u/Moraispgsi
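The pause/resume handshake the post describes is a general pattern, not tied to Flow Weaver's syntax. A plain Python generator can sketch it (step names are illustrative, not Flow Weaver's actual API): the workflow owns the control flow and the agent only answers the one prompt it is handed:

```python
def review_workflow(diff: str):
    """Deterministic workflow that pauses exactly once to consult the agent."""
    # Step 1: "Gather Context" -- pure function, no AI.
    context = f"PR diff ({len(diff.splitlines())} lines):\n{diff}"
    # Step 2: "Wait for Agent" -- yield a structured prompt and pause here.
    analysis = yield f"Review this code and list issues.\n{context}"
    # Step 3: "Format Comments" -- the workflow, not the agent, shapes the output.
    return [f"[review] {line}" for line in analysis.splitlines() if line]

def run(gen, agent):
    """Driver: start the workflow, route its prompt to the agent, resume."""
    prompt = next(gen)           # runs deterministically until the first yield
    try:
        gen.send(agent(prompt))  # hand the agent's answer back, resume
    except StopIteration as done:
        return done.value        # the workflow's final, formatted result

comments = run(review_workflow("x = eval(user_input)\n"),
               agent=lambda p: "eval on user input is unsafe")
print(comments)
```

The driver is what Claude Code plays in the video: it gets to think only at the single yield point, and the compiled workflow decides everything before and after.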
Why Claude? Why me?
I’ve been using Claude for a while, mainly for coding, brainstorming ideas, and refining projects. It became a huge part of how I learn and build. Today I got an email saying my account was suspended because I’m 13. I understand companies have to follow policies and legal requirements. But it still feels frustrating to lose access to a tool that genuinely helped me learn faster, think better, and build things. This isn’t about bypassing rules — I get why they exist. What I’m wondering is:
- Are there any legitimate ways for younger users to access tools like this (with parental consent, etc.)?
- How are other younger developers dealing with these restrictions?
- What alternatives are you using for coding + brainstorming?
Would appreciate any advice or perspective from the community. submitted by /u/arnxv-coder
arXiv cs.CY endorsement request for adaptive scheduling paper
Hi everyone, I'm a 17-year-old student from India currently in Class 12, preparing for the JEE exam. Over the past few months I wrote a research paper on adaptive exam scheduling, arguing that student discipline is stochastic and that exam prep should be treated as a control problem, not a planning problem. I built a simulation that shows priority-directed adaptive scheduling gets 85.7% coverage of high-priority topics vs 42.9% for a static schedule, even starting at half the daily study hours. Here's the abstract: Every existing tool for exam preparation shares the same assumption: that discipline can be measured and reported back to the student, and that awareness alone will change behaviour. This assumption does not hold. This paper takes a different position: discipline is a stochastic variable to be accommodated, and exam preparation is a control problem rather than a planning problem. The proposed system closes a feedback loop around observed student behaviour through a behavioural tracker, a scheduling engine driven by a topic priority function and dependency graph, and a psychological reset condition that eliminates the backlog accumulation that causes students to abandon existing planners entirely. Computational simulation across three conditions shows that priority-directed adaptive scheduling achieves 85.7% coverage of high-priority topics against 42.9% for a static schedule, despite beginning at half the daily study hours. Paper and simulation code: https://github.com/NikhileshAR/stochastic-discipline-sim I've initiated my arXiv submission under cs.CY (Computers and Society) and I need an endorsement to complete it. If you are a registered arXiv author who has submitted to cs.CY or any related CS category in the last 5 years, you can endorse me by clicking this link: https://arxiv.org/auth/endorse?x=CKTPPA or enter code CKTPPA at arxiv.org/auth/endorse.php It takes about 30 seconds. I would be really grateful. Thank you. 
Nikhilesh A R submitted by /u/theleadcreator
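The paper's core move, picking whatever highest-priority reachable topic fits the hours the student actually showed up with rather than following a fixed calendar, can be sketched as a toy model (not the paper's actual simulation; topic names and numbers here are invented):

```python
def adaptive_schedule(topics, daily_hours):
    """Greedy priority-directed scheduler.

    topics: {name: (priority, hours_needed, prerequisites)}
    daily_hours: the observed (stochastic) study hours per day.
    Returns the set of topics completed.
    """
    done, remaining = set(), dict(topics)
    for hours in daily_hours:
        while remaining:
            # Highest-priority topic whose prerequisites are met and fits today.
            ready = [(p, n) for n, (p, h, pre) in remaining.items()
                     if pre <= done and h <= hours]
            if not ready:
                break
            _, name = max(ready)
            hours -= remaining.pop(name)[1]
            done.add(name)
    return done

topics = {
    "limits":      (3, 2, set()),
    "derivatives": (5, 2, {"limits"}),
    "integrals":   (5, 2, {"derivatives"}),
    "history":     (1, 2, set()),
}
# The student shows up with fewer hours than planned; high-priority
# topics still get covered first, and low-priority ones are dropped.
print(sorted(adaptive_schedule(topics, daily_hours=[2, 2, 2])))
```

Because the scheduler re-decides every day from observed hours, there is no backlog to accumulate, which is the "psychological reset" the abstract describes.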
What eats more tokens: Opus 1M high effort or regular Opus max effort?
Hello, between effort level and context size, which eats more tokens? What's more efficient to use? Thank you. submitted by /u/mombaska
PhD or Masters for Computational Cognitive Science [R]
First, in the US: how does the Masters differ from the PhD? The field is niche, so not many universities offer a Masters in the first place, but for those of you in one, what is it like? For those doing a PhD: what kind of research is projected to blow up or become the trend 2 years from now? How does the funding look, with the administration cuts and in general? Around the globe: same questions. More personally, what drew you all to this field? Which field did you find most surprising that also overlaps with CCS? Thank you. Source: starry-eyed undergrad discovering Tenenbaum’s papers. submitted by /u/Friendly_Schedule_36
Claude Code x n8n
Hi everyone, today I wanted to ask what you think about the MCP and the n8n skills in Claude Code. Do you use them? Are they worth it? What do you think? Can they replace us? Thank you all. submitted by /u/emprendedorjoven
Best workflow for AI Agent-driven Content Refresh? (n8n + Claude/Haiku vs. Others)
Hey everyone, I’m looking to build an automated workflow to "refresh" my existing blog posts and I’m curious how you all would architect this. My goal is to take an existing article from my WordPress site and have an AI agent perform a deep SEO and quality audit before rewriting it. Specifically, I want the agent to:
1. Extract & Analyze: identify long-tail keywords, keyword density, and content gaps in my original post.
2. Competitor Research: compare my content against top-ranking competitors for the same topic.
3. Optimization: calculate the average keyword density from the top results and identify "missing" high-interest subtopics.
4. Rewrite: generate a final version that improves the original quality, hits the target SEO metrics, and fills the identified gaps.
5. Publish: auto-update or post the final version directly back to WordPress.
My questions for the experts here:
- Are you building this kind of multi-step logic using n8n with agents?
- Which LLMs are you finding most reliable for this? I’m considering Claude 3.5 Sonnet for the heavy lifting or Haiku for the extraction phases to save on tokens.
- Is there a better way to handle the "competitor comparison" step within the workflow?
Would love to hear about your stacks or any specific nodes/tools you're using to keep the content sounding human while hitting those SEO benchmarks. Thanks! submitted by /u/JosetxoXbox
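For what it's worth, the Extract & Analyze and Optimization calculations are cheap to do locally in a code node before any LLM call. A rough sketch (naive regex tokenization; a real pipeline would stem and strip stopwords):

```python
import re
from collections import Counter

def keyword_density(text: str, top_n: int = 5):
    """Top_n words with their density (share of total words)."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words)
    return [(w, round(c / total, 3)) for w, c in Counter(words).most_common(top_n)]

def content_gaps(my_text: str, competitor_texts: list, top_n: int = 10):
    """Frequent competitor terms that never appear in my article."""
    mine = set(re.findall(r"[a-z']+", my_text.lower()))
    combined = Counter()
    for t in competitor_texts:
        combined.update(re.findall(r"[a-z']+", t.lower()))
    return [w for w, _ in combined.most_common(top_n) if w not in mine]
```

Feeding the resulting numbers and gap lists into the rewrite prompt gives the LLM concrete targets instead of asking it to estimate density itself, and keeps the expensive model calls for the rewrite step only.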
AI tools for studies
I am considering buying the paid (premium) version of an AI tool. I feel like ChatGPT is very general. Can you guys recommend an AI that is better than ChatGPT or Gemini for studies? I want to use it as a guide for A Levels. Thank you! submitted by /u/cokeyboi54
Export only recv'd audio files
I want the non-audio conversations that I have had. I exported my data and it only gave me the audio files. How do I get the actual transcript files? Thank you. submitted by /u/DBOHGA
I need a real-time deepfake filter
I was suspended on Instagram for supposedly acting like a bot. It’s asking for real-time face video authentication for an appeal. I find this to be an invasion of my privacy and I’d rather use a nonexistent person to do this. Thanks! submitted by /u/Roshambo_Roshambo
View originalBased on 45 social mentions analyzed, 0% of sentiment is positive, 100% neutral, and 0% negative.
The Verge AI
Publication at The Verge
1 mention