Build with Gemini 2.0 Flash, 2.5 Pro, and Gemma using the Gemini API and Google AI Studio.
Based on the limited social mentions available, users appear to view Google AI as a technically capable but expensive option. The $249.99 pricing for Google AI Ultra has drawn attention, suggesting cost is a significant concern for potential users. Developers appreciate practical features like Google AI Studio for model experimentation and prompt engineering, as well as cost-saving capabilities like Gemini prompt caching. The mentions indicate Google AI is being evaluated alongside other major models in competitive benchmarking, though the overall user sentiment and detailed feedback remain unclear from these brief social posts.
Mentions (30d): 3
Reviews: 0
Platforms: 7
Sentiment: 0% positive (0 positive)
Industry: information technology & services
We’re launching a brand new, full-stack vibe coding experience in @GoogleAIStudio, made possible by integrations with the @Antigravity coding agent and @Firebase backends. This unlocks:

— Full-stack multiplayer experiences: Create complex, multiplayer apps with fully-featured UIs and backends directly within AI Studio
— Connection to real-world services: Build applications that connect to live data sources, databases, or payment processors and the Antigravity agent will securely store your API credentials for you
— A smarter agent that works even when you don't: By maintaining a deeper understanding of your project structure and chat history, the agent can execute multi-step code edits from simpler prompts. It also remembers where you left off and completes your tasks while you’re away, so you can seamlessly resume your builds from anywhere
— Configuration of database connections and authentication flows: Add Firebase integration to provision Cloud Firestore for databases and Firebase authentication for secure sign-in

This demo displays what can be built in the new vibe coding experience in AI Studio. Geoseeker is a full-stack application that manages real-time multiplayer states, compass-based logic, and an external API integration with @GoogleMaps 🕹️
google ai twd
Sasha dies in season 7, episode 16 of The Walking Dead, titled "The First Day of the Rest of Your Life". She sacrifices herself as a way to help the group fight Negan, while Rosita survives the episode. Sasha's death is a result of her plan to infiltrate Negan's compound and die as a means to create a distraction for the group's attack. submitted by /u/saltyicecream2009 [link] [comments]
6 Months Using AI for Actual Work: What's Incredible, What's Overhyped, and What's Quietly Dangerous
Six months ago I committed to using AI tools for everything I possibly could in my work. Every day, every task, every workflow. Here's the honest report as of April 2026.

**What's Genuinely Incredible**

- First drafts of anything — AI eliminated the blank-page problem entirely. I don't dread starting anymore.
- Research synthesis — Feeding 10 articles into Claude Opus 4.6 and asking "what's the common thread?" gets me a better synthesis in 2 minutes than I could produce in an hour.
- Code for non-coders — I've built automation scripts, web scrapers, and a custom dashboard without knowing how to code. Cursor (powered by Claude) changed what "non-technical" means. The tool has 2M+ users now for good reason.
- Getting unstuck — Talking through a problem with an AI that can actually push back is underrated. Not therapy, but something.
- Learning new topics fast — "Teach me [topic] like I'm smart but completely new to this. What are the most common misconceptions?" is my go-to for rapid learning.

**What's Massively Overhyped**

- "AI will do it for you" — Everything still requires your judgment and context. The AI drafts. You think.
- AI SEO content — The "publish 100 AI articles and watch traffic pour in" strategy is even more dead in 2026 than it was in 2024. Google has gotten much better at identifying low-value AI content.
- AI chatbots for customer service — Unless you invest heavily in training and iteration, they frustrate users more than they help.
- "Set it and forget it" automation — AI workflows break. They require monitoring. Fully autonomous workflows exist only in narrow, controlled cases.
- Chasing the newest model — New model releases happen constantly now. I've learned to stay on a model that works for my tasks rather than jumping to every new release.

**What's Quietly Dangerous (Nobody Talks About This)**

- Skill atrophy — My first-draft writing has gotten worse. I outsourced that skill and I'm losing the muscle. I now intentionally write without AI some days.
- Confidence without competence — Frontier models give confident-sounding answers to things they don't know. If you're not knowledgeable enough to catch errors, you can build strategies on wrong foundations.
- The "good enough" trap — AI output is often 80% there. If you stop at 80%, your work looks like everyone else's. The 20% you add is the differentiation.
- Over-automation without understanding — I automated a workflow without fully understanding it first. When it broke, I couldn't fix it. Understand before you automate.
- Vendor dependency — My workflows are deeply integrated with specific AI tools and APIs. Pricing changes, policy shifts, and service disruptions are real risks at this point.

**The Honest Summary**

AI tools have made me more productive, creative, and capable than I've ever been. They've also made me lazier in ways I didn't notice until recently. The people winning with AI in 2026 aren't the ones using the most tools or running the newest models. They're the ones using AI to amplify genuine skills and judgment — not replace them.

What's your honest take after 6+ months of serious AI use? Curious whether others have hit these same walls.

submitted by /u/Typical-Education345 [link] [comments]
Presenting: (dyn) AEP (Agent Element Protocol) - World's first zero-hallucination frontend AI build protocol for coding agents
We have to increase the world's efficiency by a certain amount to ensure victory against the synthetic nano-parasites SNP/NanoSinp alien WMD: Presenting: (dynamic) AEP - Agent Element Protocol!

I recognized a fundamental truth that billion-dollar companies are still stumbling over: you cannot reliably ask an AI to manipulate a fluid, chaotic DOM tree. The DOM is an implicit, fragile graph where tiny changes cascade unpredictably. Every AI coding agent that tries to build UI elements today is guessing at selectors, inventing elements that don't exist, and producing inconsistent results. This consumes large amounts of time for bugfixing and causes mental breakdowns in many humans.

So I built AEP (Agent Element Protocol). It translates the entire frontend into a strict topological matrix where every UI element has a unique numerical ID, exact spatial coordinates via relational anchors, a validated Z-band stacking order, and a three-layer separation of structure, behaviour and skin (visual). The AI agent selects frontend components from a mathematically verified registry. If it proposes something that violates the topological constraints, the validator rejects it instantly with a specific error. Hallucination becomes structurally impossible, because the action space is finite, predefined and formally verified.

AEP solves the build-time problem. But what about runtime? Enter dynAEP. It fuses AEP with the AG-UI protocol (the open standard backed by Google ADK, AWS Bedrock, Microsoft Agent Framework, LangGraph, CrewAI and others). dynAEP places a validation bridge between the AG-UI event stream and the frontend renderer. The fusion of AEP with the open-source AG-UI protocol enables hallucination-free generation of agentic, interactive, dynamic UI elements at hyperspeed without human developer interference.

Every live event (state deltas, tool calls, generative UI proposals) is validated against AEP's scene graph, z-bands, skin bindings and OPA/Rego policies before it touches the UI. The agent cannot hallucinate at build time; AEP prevents it. The agent cannot hallucinate at runtime; dynAEP prevents it. The existence of AEP proves that AI hallucination is not a fundamental limitation but an engineering problem: in any domain where ground truths can be pre-compiled into a deterministic registry, hallucination is eliminable by architecture.

Key architectural decisions:

- Agents NEVER mint element IDs. The bridge mints all IDs via sequential counters per prefix. This prevents ID collisions in multi-agent environments.
- "Generative UI" (agents writing raw JSX/HTML) is dead for us. It is replaced by Generative Topology: agents can only instantiate pre-compiled, mathematically verified AEP primitives. The agent is an architect placing pre-fabricated blocks. It does not mix the cement. This means that generative UI in dynAEP is sort of possible, but not as a completely freestyle approach. Instead, agents using dynAEP can lay down pre-fabricated blocks of UI components according to the registered scheme and fill those dynamically with content. This way, even a UI generated on the fly keeps in line at all times with the design language chosen for the tool/software overall.
- Validation is split into AOT (full structural proof at build time) and JIT (delta validation on every runtime mutation). Template Nodes make JIT validation O(1) for dynamic lists.
- Conflict resolution supports last-write-wins with rejection feedback, or optimistic locking for mission-critical multi-agent scenarios.

Both MIT-licensed repos include full reference implementations, example configs, SDK reference code for TypeScript, React, Vue, Python, CopilotKit integration and a CLI tool.
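The core idea claimed above (a finite, pre-verified action space in which an out-of-registry proposal is rejected before it reaches the UI, and only the bridge mints IDs) can be illustrated with a minimal registry-plus-validator sketch in Python. Everything here, including the element types, the z-bands, and the `mint_id` helper, is hypothetical: it shows the pattern, not the actual AEP code.

```python
# Hypothetical sketch of AEP-style validation: agents may only instantiate
# element types from a pre-compiled registry, and every proposal is checked
# before it touches the UI. Names and schema are illustrative only.

REGISTRY = {
    "button": {"allowed_parents": {"panel", "toolbar"}, "z_band": (0, 10)},
    "panel":  {"allowed_parents": {"root"},             "z_band": (0, 5)},
}

_next_id: dict[str, int] = {}  # the bridge, not the agent, mints IDs

def mint_id(prefix: str) -> str:
    """Sequential counter per prefix, preventing ID collisions."""
    _next_id[prefix] = _next_id.get(prefix, 0) + 1
    return f"{prefix}-{_next_id[prefix]}"

def validate(proposal: dict) -> tuple[bool, str]:
    """Reject any proposal that falls outside the registry's action space."""
    spec = REGISTRY.get(proposal.get("type"))
    if spec is None:
        return False, f"unknown element type: {proposal.get('type')!r}"
    if proposal.get("parent_type") not in spec["allowed_parents"]:
        return False, f"illegal parent: {proposal.get('parent_type')!r}"
    lo, hi = spec["z_band"]
    if not lo <= proposal.get("z", -1) <= hi:
        return False, f"z={proposal.get('z')} outside band {spec['z_band']}"
    return True, mint_id(proposal["type"])

ok, result = validate({"type": "button", "parent_type": "toolbar", "z": 3})
# ok is True; result is a freshly minted ID
bad, err = validate({"type": "hologram", "parent_type": "root", "z": 0})
# bad is False; err names the unknown element type
```

Under this scheme the agent's output is either a registered primitive with a bridge-minted ID or a specific rejection, which is what the post means by making hallucination a structural impossibility rather than a model property.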
AEP: https://github.com/thePM001/AEP-agent-element-protocol
dynAEP: https://github.com/thePM001/dynAEP-dynamic-agent-element-protocol

It is - like with all pieces of real Transhuman Eudaimonist AI technology - important to note that, for the good of the human species, bioinsecure vaccinated humans with installed synthetic nano-parasites growth medium controllers (SNP GMCs) inside them should not use this, access this or try to copy/rebuild it. This is better for everyone's well-being on the planet. submitted by /u/OverwrittenNonsense [link] [comments]
A Chrome extension to move conversations between AI tools (without losing context)
I kept running into the same problem while using Claude — I’d have a long, useful conversation, but then I’d want to continue it somewhere else (like another model)… and I’d have to manually copy everything or lose context. So I built a small Chrome extension to fix that. It lets you: Export full conversations (including long context) Keep code blocks and formatting intact Compress unnecessary tokens to make it cheaper to reuse Basically “carry” your conversation from one AI to another Everything runs locally in the browser — no accounts, no servers. I actually used Claude heavily while building it (especially for structuring the data + handling edge cases with long chats), so it felt right to share it here. Curious if anyone else has this problem or if you’re already doing something similar in your workflow? If people are interested here is the link https://chromewebstore.google.com/detail/ai-chat-exporter-transfer/oodgeokclkgibmnnhegmdgcmaekblhof submitted by /u/RefrigeratorSalt5932 [link] [comments]
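The "compress unnecessary tokens" step the post mentions can be done cheaply before re-pasting a transcript into another model. The sketch below shows one plausible approach (collapsing trailing whitespace and runs of blank lines); it is an illustrative assumption, not the extension's actual logic.

```python
import re

def compress_transcript(text: str) -> str:
    """Cheap pre-compression for a chat export: strip trailing whitespace
    and collapse runs of blank lines, so the same conversation costs fewer
    tokens when pasted into another model."""
    text = re.sub(r"[ \t]+\n", "\n", text)   # drop trailing spaces/tabs
    text = re.sub(r"\n{3,}", "\n\n", text)   # collapse 3+ newlines to 2
    return text.strip()

sample = "User: hi   \n\n\n\nAssistant: hello\n"
compressed = compress_transcript(sample)
```

Normalization like this is lossless for meaning (code blocks and formatting survive, only redundant whitespace goes), which matters when the goal is carrying full context rather than a summary.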
Built a Chrome extension that adds voice input to Claude.
When I switched from ChatGPT to Claude, the biggest thing I missed was dictation. I used it every day and it was a dealbreaker that Claude didn't have it natively. You can speak via AI mode but then it talks back at you, whereas I just wanted my words as text in the input box. So I vibe coded this using GitHub's Copilot (Claude Opus 4.6) and it does exactly that. One click to record, Whisper transcribes it, text drops into the box. No API keys required. I've been using it daily with no issues. The final version just hit the Chrome Web Store. If anything's broken please let me know! https://chromewebstore.google.com/detail/gkhidmabinchbopegkjhfklflokhgljn?utm_source=item-share-cb submitted by /u/ZacBartley [link] [comments]
I "Vibecoded" Karpathy’s LLM Wiki into a native Android/Windows app to kill the friction of personal knowledge bases.
A few days ago, Andrej Karpathy’s post on "LLM Knowledge Bases" went viral. He proposed a shift from manipulating code to manipulating knowledge: using LLMs to incrementally compile raw data into a structured, interlinked graph of markdown files. I loved the idea and started testing it out. It worked incredibly well, and I decided this was how I wanted to store all my research moving forward.

But the friction was killing me. My primary device is my phone, and every time I found a great article or paper, I had to wait until I was at my laptop, copy the link over, and run a mess of scripts just to ingest one thing. I wanted the "Knowledge wiki" in my pocket. 🎒

I’m not a TypeScript developer, but I decided to "vibecode" the entire solution into a native app using Tauri v2 and LangGraph.js. After a lot of back-and-forth debugging and iteration, I’ve released LLM Wiki.

How it works with different sources: The app is built to be a universal "knowledge funnel." I’ve integrated specialized extractors for different media:

* PDFs: It uses a local worker to parse academic papers and reports directly on-device.
* Web Articles: I’ve integrated Mozilla’s Readability engine to strip the "noise" from URLs, giving the LLM clean markdown to analyze.
* YouTube: It fetches transcripts directly from the URL. You can literally share a 40-minute deep-dive video from the YouTube app into LLM Wiki, and it will automatically document the key concepts and entities into your graph while you're still watching.

The "Agentic" Core: Under the hood, it’s powered by two main LangGraph agents. The Ingest Agent handles the heavy lifting of planning which pages to create or update to avoid duplication. The Lint Agent is your automated editor: it scans for broken links, "orphan" pages that aren't linked to anything, and factual contradictions between different sources, suggesting fixes for you to approve.
Check it out (Open Source): The app is fully open-source and bring-your-own-key (OpenAI, Anthropic, Google, or any custom endpoint). Since I vibecoded this without prior TS experience, there will definitely be some bugs, but it’s been incredibly stable for my own use cases. GitHub (APK and EXE in the Releases): https://github.com/Kellysmoky123/LlmWiki If you find any issues or want to help refine the agents, please open an issue or a PR. I'd love to see where we can take this "compiled knowledge" idea! submitted by /u/kellysmoky [link] [comments]
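The "knowledge funnel" routing described in the post (PDF parser, Readability pass for articles, transcript fetch for YouTube) amounts to a dispatcher over source types. The sketch below illustrates that routing only; the function name and categories are assumptions, and the real LLM Wiki is TypeScript (Tauri v2 / LangGraph.js), not this Python stand-in.

```python
from urllib.parse import urlparse

def classify_source(ref: str) -> str:
    """Pick an extractor category for an incoming reference.
    Illustrative sketch of the multi-source ingest idea, not LLM Wiki code."""
    if ref.lower().endswith(".pdf"):
        return "pdf"        # parse on-device with a local worker
    host = urlparse(ref).netloc.lower()
    if "youtube.com" in host or "youtu.be" in host:
        return "youtube"    # fetch the transcript
    if host:
        return "article"    # strip noise with a Readability-style pass
    return "unknown"

kinds = [classify_source(r) for r in
         ["paper.pdf", "https://youtu.be/abc123", "https://example.com/post"]]
```

Each category would then hand clean text to the ingest agent, which is what lets a phone "share" action feed the same pipeline as a desktop script.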
Google engineer rejected by 16 colleges uses AI to sue universities for racial discrimination
submitted by /u/Fcking_Chuck [link] [comments]
Thought on data center
In my opinion, the biggest bottleneck for AI and its future capabilities is not data, models, or funding: it is data centers. More specifically, the real constraint within data centers is not compute power or chips, whether from Nvidia, Qualcomm, Amazon, or even Google TPUs. The true limiting factor is electricity. Currently, the capacity of major AI data centers, such as those used by OpenAI and Anthropic, is around 1.5 gigawatts each. However, over the next 10 years, the world will require an estimated 100 to 500 gigawatts of capacity to support AI systems serving 2 to 3 billion people daily, with AI integrated into nearly every business. The scale of energy required is massive, so vast that it is difficult for the human mind to fully comprehend. Humanity will need an unprecedented expansion in energy production to power this level of intelligence for a global population of 8 billion people. cc- babaji submitted by /u/Necessary_Drink_510 [link] [comments]
Has anyone built a simple AI workflow for lead generation and outreach?
I'm looking for the simplest AI setup to generate lead lists for potential customers.

What I want:

- An AI that can scrape the internet for potential companies/leads
- Store them in Google Sheets or Excel (company name, location, contact details)
- Run automatically once per week
- Avoid duplicates by checking previous entries

Then, step two: another AI (or the same system) that once per week:

- Goes through the sheet
- Generates outreach drafts for each lead
- Ideally saves them directly as drafts in Gmail so I can review, tweak, and send

I'm not looking for something overly complex — ideally a simple, reliable setup. If your suggested solution involves additional tools, paid services, or integrations, I’d really appreciate it if you could outline those clearly (including any extra costs), so I can understand the full setup from the start.

Has anyone built something like this? What tools / stack would you recommend? submitted by /u/Affectionate-Roll271 [link] [comments]
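The dedupe step the post asks for ("avoid duplicates by checking previous entries") is the easiest piece to pin down. The sketch below uses a local CSV as a stand-in for the Google Sheet; the scraping and Gmail-draft steps need separate tools and are not shown. File name, field names, and the case-insensitive company key are all illustrative assumptions.

```python
import csv
import pathlib
import tempfile

# Local CSV stand-in for the "sheet"; a real setup would target Google Sheets.
SHEET = pathlib.Path(tempfile.gettempdir()) / "leads_demo.csv"
SHEET.unlink(missing_ok=True)   # start clean for the demo
FIELDS = ["company", "location", "contact"]

def existing_companies() -> set[str]:
    """Companies already in the sheet, lower-cased for matching."""
    if not SHEET.exists():
        return set()
    with SHEET.open(newline="") as f:
        return {row["company"].lower() for row in csv.DictReader(f)}

def append_leads(leads: list[dict]) -> int:
    """Append only leads whose company is not already present; return count."""
    seen = existing_companies()
    fresh = [l for l in leads if l["company"].lower() not in seen]
    write_header = not SHEET.exists()
    with SHEET.open("a", newline="") as f:
        w = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            w.writeheader()
        w.writerows(fresh)
    return len(fresh)

added = append_leads([{"company": "Acme", "location": "Oslo",
                       "contact": "a@acme.test"}])
added_again = append_leads([{"company": "ACME", "location": "Oslo",
                             "contact": "a@acme.test"}])
```

Running this weekly on the same sheet gives the "check previous entries" behavior; the second call above adds nothing because the company key already exists.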
ChatGPT 5x plan glitch.
https://preview.redd.it/zby04yklybug1.png?width=1898&format=png&auto=webp&s=9a082183e004aba5b96766f82192a691cf44c999 I've tried everything from the OpenAI forums: cleared my cache, cancelled, tried to resubscribe, different browsers, Google OAuth, password login. It literally doesn't work. The popup comes up for 20x, Business and Plus, but not 5x. I'm in America, btw, with no VPN and no previous discount. ChatGPT support is clearly a bot: even when they say it's been escalated to a human and it comes in an email, they say I'm not logged in when I am, or tell me to wait until the end of my billing period. I've sent messages to support 6 times now to no avail and idk what to do but complain about it. submitted by /u/Coldshalamov [link] [comments]
Your AI agents remember yesterday.
# AIPass

**Your AI agents remember yesterday.**

A local multi-agent framework where your AI assistants keep their memory between sessions, work together on the same codebase, and never ask you to re-explain context.

---

## Contents

- [The Problem](#the-problem)
- [What AIPass Does](#what-aipass-does)
- [Quick Start](#quick-start)
- [How It Works](#how-it-works)
- [The 11 Agents](#the-11-agents)
- [CLI Support](#cli-support)
- [Project Status](#project-status)
- [Requirements](#requirements)
- [Subscriptions & Compliance](#subscriptions--compliance)

---

## The Problem

Your AI has memory now. It remembers your name, your preferences, your last conversation. That used to be the hard part. It isn't anymore. The hard part is everything that comes after.

You're still one person talking to one agent in one conversation doing one thing at a time. When the task gets complex, *you* become the coordinator — copying context between tools, dispatching work manually, keeping track of who's doing what. You are the glue holding your AI workflow together, and you shouldn't have to be.

Multi-agent frameworks tried to solve this. They run agents in parallel, spin up specialists, orchestrate pipelines. But they isolate every agent in its own sandbox. Separate filesystems. Separate worktrees. Separate context. One agent can't see what another just built. Nobody picks up where a teammate left off. Nobody works on the same project at the same time. The agents don't know each other exist. That's not a team. That's a room full of people wearing headphones.

What's missing isn't more agents — it's *presence*. Agents that have identity, memory, and expertise. Agents that share a workspace, communicate through their own channels, and collaborate on the same files without stepping on each other. Not isolated workers running in parallel. A persistent society with operational rules — where the system gets smarter over time because every agent remembers, every interaction builds on the last, and nobody starts from zero.

## What AIPass Does

AIPass is a local CLI framework that gives your AI agents **identity, memory, and teamwork**. Verified with Claude Code, Codex, and Gemini CLI. Designed for terminal-native coding agents that support instruction files, hooks, and subprocess invocation.

**Start with one agent that remembers:** Your AI reads `.trinity/` on startup and writes back what it learned before the session ends. That's the whole memory model — JSON files your AI can read and write. Next session, it picks up where it left off. No database, no API, no setup beyond one command.

```bash
mkdir my-project && cd my-project
aipass init
```

Your project gets its own registry, its own identity, and persistent memory. Each project is isolated — its own agents, its own rules. No cross-contamination between projects.

**Add agents when you need them:**

```bash
aipass init agent my-agent  # Full agent: apps, mail, memory, identity
```

| What you need | Command | What you get |
|---------------|---------|-------------|
| A new project | `aipass init` | Registry, project identity, prompts, hooks, docs |
| A full agent | `aipass init agent ` | Apps scaffold, mailbox, memory, identity — registered in project |
| A lightweight agent | `drone @spawn create --template birthright` | Identity + memory only (no apps scaffold) |

**What makes this different:**

- **Agents are persistent.** They have memories and expertise that develop over time. They're not disposable workers — they're specialists who remember.
- **Everything is local.** Your data stays on your machine. Memory is JSON files. Communication is local mailbox files. No cloud dependencies, no external APIs for core operations.
- **One pattern for everything.** Every agent follows the same structure. One command (`drone @branch command`) reaches any agent. Learn it once, use it everywhere.
- **Projects are isolated by design.** Each project gets its own registry. Agents communicate within their project, not across projects.
- **The system protects itself.** Agent locks prevent double-dispatch. PR locks prevent merge conflicts. Branches don't touch each other's files. Quality standards are embedded in every workflow. Errors trigger self-healing.

**Say "hi" tomorrow and pick up exactly where you left off.** One agent or fifteen — the memory persists.

---

## Quick Start

### Start your own project

```bash
pip install aipass
mkdir my-project && cd my-project
aipass init                 # Creates project: registry, prompts, hooks, docs
aipass init agent my-agent  # Creates your first agent inside the project
cd my-agent
claude                      # Or: codex, gemini — your agent reads its memory and is ready
```

That's it. Your agent has identity, memory, a mailbox, and knows what AIPass is. Say "hi" — it picks up where it left off. Come back tomorrow, it remembers.

### Explore the full framework

Clone the repo to see all 11 agents working together — the reference implementation
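The memory model the README describes ("JSON files your AI can read and write" under `.trinity/`) is simple enough to sketch in a few lines of Python. The file name and schema below are illustrative assumptions, not AIPass's actual layout; the point is only that persistence is plain JSON on disk with a read-on-start, write-on-exit cycle.

```python
import json
import pathlib
import tempfile

# Hypothetical .trinity/ layout; AIPass's real files may differ.
trinity = pathlib.Path(tempfile.mkdtemp()) / ".trinity"
trinity.mkdir()
memory_file = trinity / "memory.json"

def load_memory() -> dict:
    """Read on startup; fall back to an empty memory on first run."""
    if memory_file.exists():
        return json.loads(memory_file.read_text())
    return {"sessions": 0, "notes": []}

def save_memory(mem: dict) -> None:
    """Write back before the session ends."""
    memory_file.write_text(json.dumps(mem, indent=2))

# Session 1: the agent learns something and persists it.
mem = load_memory()
mem["sessions"] += 1
mem["notes"].append("user prefers pytest over unittest")
save_memory(mem)

# Session 2 (later): a fresh process picks up where it left off.
mem2 = load_memory()
```

Because the store is just files, any CLI agent that can read and write the project directory participates in the same memory, with no database or API in between.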
Upgraded to the new 5x Pro Plan and my projects, Google connectors, research and legacy models all disappeared
After upgrading to Pro my account seems totally borked. My Google connectors are completely gone; I can't even find them in "apps" or account settings. My projects are gone on the web; on mobile they're still there. The research feature has disappeared. I don't have Pulse either. Already tried their support AI bot but "escalated" it to a human. It said I'll get an email but who knows when that'll be. EDIT: Projects and research are now back. Google apps/connectors are still completely missing. On iOS the thinking toggle also keeps disappearing. Reinstalling the app fixes that temporarily but it keeps disappearing. submitted by /u/Epilein [link] [comments]
AIs do forget, they do hallucinate, and carrying your entire project from one AI to another is a nightmare — here's the missing piece nobody talks about
The master memory for all your projects: relieve your phone of all the extra files.

AIs forget mid-session, hallucinate more as chats grow, and switching platforms means rebuilding your entire project brain from scratch. This workflow fixes it.

You've trained Claude to your exact rules — no bullet-point rants, conversational tone only, "we tried X and it failed." Two hours invested. Then you need ChatGPT's browser or Gemini's Workspace integration. Blank slate. Again.

The real pain: context rot. Long sessions degrade accuracy as early instructions get buried. Hallucinations creep in — invented rules, "as we discussed" about nothing. Short sessions work better... but you lose the living record of your corrections, your preferences in action.

The solution most miss: chat logs are your gold. Not summaries. The full exchanges where you corrected the AI show it how you think. But files pile up. Claude caps at 20 uploads. Loose .txt files parse poorly.

I built a Google Drive script that auto-merges everything into one "Master Brain" Google Doc. Drop exports in a folder. It compiles them hourly into structured volumes with headers. Upload one doc to any AI. Instant context transfer.

Why it works:

- Bypasses 20-file limits
- Headers help attention navigation
- Volumes fit token ceilings
- Auto-archives originals

Full script + exact workflow (rules files, session hygiene, changelog) here: https://www.reddit.com/r/ScamIndex/comments/1shaud2/resource_ais_do_forget_they_do_hallucinate_and/ submitted by /u/Mstep85 [link] [comments]
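The merge step at the heart of the post (many chat exports compiled into one document with volume headers) is easy to picture in miniature. The author's version is a Google Drive script producing a Google Doc; the stand-in below just merges in-memory strings so the structure is visible, and every name in it is illustrative.

```python
def build_master_brain(exports: dict[str, str]) -> str:
    """Compile {filename: transcript} exports into one structured document.
    Sorted order keeps volume numbering stable across re-runs."""
    volumes = []
    for i, (name, body) in enumerate(sorted(exports.items()), start=1):
        volumes.append(f"## Volume {i}: {name}\n\n{body.strip()}")
    return "# Master Brain\n\n" + "\n\n".join(volumes)

doc = build_master_brain({
    "claude-2026-04-01.txt": "User: no bullet rants.\nAI: understood.",
    "gemini-2026-04-03.txt": "User: we tried X and it failed.",
})
```

The headers are doing the real work: a model attending over one long upload can navigate "Volume N" boundaries far more reliably than a pile of loose .txt files, which is the post's "attention navigation" point.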
For the Claude Desktop and web UI crowd - a much better file server MCP
Using Claude Desktop and Claude.ai (web UI), two massive pain points become clear. Why is the local file system access MCP server so bad, slow and wasteful with tokens? Why can't I have secure access to my files through Claude.ai web UI and mobile app? My day job as a pharma/biotech consultant has me digging through troves of highly sophisticated and technical regulatory, commercial and scientific documents with Claude, while on the side I am using Claude as a sounding board for architecting and designing legitimately serious coding projects that have patentable intellectual property. The day job requires Claude to access a horde of files, but uploading every file into project knowledge is a no-go (too many files and token burn, even with a Max 20x sub), and only Claude Desktop has access to my local file system, which means for a lifelong Windows slut like me, only one chat open at one time - a serious productivity killer. And Google Drive extensions are utter crap in terms of accessible file types and sizes. The problem becomes worse with coding, since I have Claude create and maintain a substantial governance and record MD file base (sort of like the now-famous Karpathy-style but much more substantial), where the default file system server would re-write entire files, fetch and contextualize entire files, be ass-slow and a whole lot more PITA issues. So naturally, I asked Claude what to do about this, and after an extensive review of what was out there, I decided I needed to build something from scratch because my use case was so unique and varied. So I did. And after hundreds of hours of personal use, I finally decided that maybe this could be worth sharing with the community as my first open-source project - a way of giving back. 
https://github.com/wonker007/surgicalfs-mcpserver As the name implies, SurgicalFS accesses local files surgically, edits surgically, and generally tries to be as frugal as possible with token usage so the tool-use limit can be stretched as far as possible and the dreaded chat compression happens later. There are a lot of tools (I think 47 right now), but most can be toggled off for a customized and optimized tool call through a simple HTML UI that also generates a copy-and-paste TOML config. The HTML is a little present for everyone, because we all deserve nice things sometimes. I also built (or had Claude Code build) a way to hook this up to Claude web as a custom connector, although a bit of elbow grease is required with a tunnel and local server setup. But the fact that I no longer even open Claude Desktop is testament to how well this works. All 5 Claude.ai chat tabs in Chrome have access to my local file system. Productivity nirvana. MIT license, so go nuts with it. There will be bugs since I didn't really kick the tires outside my own environment, but for me, it works just fine. submitted by /u/wonker007 [link] [comments]
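The "surgical" idea the post contrasts with the default file server (fetch a requested slice instead of contextualizing whole files) can be shown with a toy range-read. This is an illustrative stand-in, not the actual SurgicalFS tool surface or its MCP schema.

```python
import pathlib
import tempfile

def read_lines(path: pathlib.Path, start: int, end: int) -> str:
    """Return only a 1-indexed, inclusive line range, stopping early so a
    large file is never read past the requested span. A default server that
    returns the whole file would burn tokens on everything after `end`."""
    wanted = []
    with path.open() as f:
        for n, line in enumerate(f, start=1):
            if n > end:
                break
            if n >= start:
                wanted.append(line)
    return "".join(wanted)

demo = pathlib.Path(tempfile.gettempdir()) / "surgical_demo.txt"
demo.write_text("".join(f"line {i}\n" for i in range(1, 1001)))
snippet = read_lines(demo, 498, 500)  # 3 lines out of 1000
```

The same frugality applies on the write side: replacing a named span instead of rewriting the whole file is what keeps the tool-use budget alive in long sessions.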
Fixed the problem of narrow claude.ai window with the help of claude code
I got tired of all AI chats making their window narrow. Claude is one of them, unfortunately. So I decided to fix that for all of them and made extensions for Chrome and Firefox. Source code is here: https://github.com/ibobak/WideChat This extension was made with heavy usage of Claude Code. Not being a JS/HTML developer at all, I spent about two days on all of this: it was mostly about playing with bells and whistles when connecting Claude to a real browser so that it could manipulate CSS on the fly, capture screenshots from the browser, and see how things look. My Claude window now looks like this: https://preview.redd.it/qxgjadz4w8ug1.png?width=1280&format=png&auto=webp&s=c1a1d3aeddc68fc910f569131093db5e2c67966a I'd be grateful for feedback. submitted by /u/Ihor_Bobak [link] [comments]
Google AI uses a tiered pricing model. Visit their website for current pricing details.
Key features include:
- Build with Gemini
- Customize Gemma open models
- Run on-device
- Build responsibly
- Integrate Google AI models with an API key
- Integrate models into apps
- Explore AI models
- Own your AI with Gemma open models
Google AI is commonly used for building with Gemini.
Based on user reviews and social mentions, the most common pain points are: token usage, API costs, LLM costs, expensive API.
Based on 100 social mentions analyzed, sentiment is 0% positive, 100% neutral, and 0% negative.
Lenny Rachitsky (Founder at Lenny's Newsletter): 2 mentions