Bring AI-native applications to life with less hallucination, data leakage, and vendor lock-in
Based on the limited social mentions provided, there isn't enough substantive user feedback to form a comprehensive summary of what users think about Weaviate. The social mentions consist mainly of YouTube video titles that simply state "Weaviate AI" without any user commentary or reviews. There's one Hacker News post about an open-source AI agent runtime that mentions production challenges with AI agents (cost tracking, debugging, safety), but this doesn't appear to be specifically about Weaviate itself. To provide an accurate user sentiment summary for Weaviate, more detailed reviews and user discussions would be needed.
Mentions (30d): 0
Reviews: 0
Platforms: 3
GitHub stars: 15,926 (1,241 forks)
Features
Industry: information technology & services
Employees: 70
Funding Stage: Series B
Total Funding: $67.7M
GitHub followers: 1,007
GitHub repos: 138
GitHub stars: 15,926
npm packages: 20
HuggingFace models: 27
npm downloads/wk: 477,684
PyPI downloads/mo: 65,348,277
Show HN: Open-sourced AI Agent runtime (YAML-first)
Been running AI agents in production for a while and kept running into the same issues: controlling what they can do, tracking costs, debugging failures, and making it safe for real workloads.

So we built AgentRuntime, the infrastructure layer we wished we had. Not an agent framework, but the platform around agents: policies, memory, workflows, observability, cost tracking, RAG, governance.

Agents and policies are defined in YAML, so it's infrastructure-as-code rather than a chatbot builder.

Example – agents and policies in YAML

agent.yaml – declarative agent config:

```yaml
name: support_agent
model:
  provider: anthropic
  name: claude-3-5-sonnet
context_assembly:
  enabled: true
  embeddings:
    provider: openai
    model: text-embedding-3-small
  providers:
    - type: knowledge
      config:
        sources: ["./docs"]
        top_k: 3
```

policies/safety.yaml – governance as code:

```yaml
name: security-policy
rules:
  - id: block-file-deletion
    condition: tool.name == "file_delete"
    action: deny
```

CLI – run and inspect:

```shell
# Create and run an agent
agentctl agent create researcher --goal "Research AI safety" --llm gpt-4
agentctl agent run researcher
agentctl runs watch <run-id>

# Manage policies
agentctl policy list
agentctl policy activate security-policy 1.0.0

# RAG – ingest docs and ground responses in your knowledge base
agentctl context ingest ./docs
agentctl run --agent agent.yaml --goal "How do I deploy?"

# Agent-level debugging
agentctl debug -c agent.yaml -g "Analyze this dataset."
```

Cost tracking is exposed via the API (per agent/tenant), and the Web UI shows analytics. The workflow debugger (breakpoints, step-through) lives in the pkg layer; the CLI debug is for agent execution.

What's in there

Governance: policy engine (CEL), risk scoring, encrypted audit logs, RBAC, multi-tenancy, fully YAML-configurable.

Orchestration: visual workflow designer (React Flow), DAG workflows, multi-agent coordination, conditional logic, plugin hot-reload, workflow marketplace.

Memory & Context: working memory, persistent memory, semantic memory, event log. Context assembly combines policies, workflow state, memory, tool outputs, and knowledge. RAG features: embeddings (OpenAI or local), SQLite for development, Postgres + vector stores in production.

Observability: cost attribution via API, SLA monitoring, distributed tracing (OpenTelemetry), Prometheus metrics, deterministic replay (5 modes).

Production: Kubernetes operator (Agent, Workflow, Policy CRDs), Helm charts, Istio config, auto-scaling, backup/restore, GraphQL + REST API.

Implementation: ~50k LOC of Go, hundreds of tests, built with production in mind.

Runs on: Local – SQLite, in-memory runtime. Production – Postgres, Redis, Qdrant/Weaviate.

Happy to answer questions or help people get started
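The safety policy above is a deny rule keyed on the tool name. As a minimal sketch of that deny-rule pattern in TypeScript, with hypothetical names (`PolicyRule`, `evaluate`, `ToolCall` are illustrative, not AgentRuntime's actual engine, and a plain predicate stands in for a CEL condition):

```typescript
type ToolCall = { name: string; args: Record<string, unknown> };

type PolicyRule = {
  id: string;
  // Stand-in for a CEL expression: a predicate over the tool call.
  condition: (call: ToolCall) => boolean;
  action: "allow" | "deny";
};

// First matching rule wins; if nothing matches, the call is allowed.
function evaluate(rules: PolicyRule[], call: ToolCall): "allow" | "deny" {
  for (const rule of rules) {
    if (rule.condition(call)) return rule.action;
  }
  return "allow";
}

// Mirrors policies/safety.yaml: block-file-deletion denies file_delete.
const safetyPolicy: PolicyRule[] = [
  {
    id: "block-file-deletion",
    condition: (call) => call.name === "file_delete",
    action: "deny",
  },
];

console.log(evaluate(safetyPolicy, { name: "file_delete", args: {} })); // "deny"
console.log(evaluate(safetyPolicy, { name: "web_search", args: {} }));  // "allow"
```

In a real engine the condition would be a CEL string evaluated against the call, which is what makes the policy expressible as data in YAML rather than as code.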
I built an open source AI Memory Storage that scales, easily integrates, and is smart
I built a super easy to integrate memory storage and retrieval system for NodeJS projects because I saw a need for information to be shared and persisted across LLM chat sessions (and many other LLM feature interactions). It started as a fun side project, but it worked really well and I thought others might find it useful as well. I used Claude Opus to code the unit tests and a developer UI sandbox, but coded the rest myself.

I tried to keep the barrier to use as low as possible, so I included built-in support for major LLMs (GPT, Gemini, and Claude) as well as major vector store providers (Weaviate and Pinecone).

The memory store works by ingesting LLM interactions, automatically extracting "memories" (summarized single bits of information) from them, and vectorizing those. When you want to provide relevant context back to the LLM (before a new chat session starts, or even after every user request), you just pass the conversation context to the recall method, and an LLM quickly searches the vector store and returns only the most relevant memories. This way we don't run into context-size issues as the history and number of memories grows, but we ensure that the LLM always has access to the most important context.

There's a lot more I could talk about (like the deduping system or the extremely configurable pieces of the system), but I'll leave it at that and point you to the README if you'd like to learn more! Also check out the dev client if you'd like to test out the memory palace yourself! https://github.com/colinulin/mind-palace

submitted by /u/colin3440
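The recall step described above can be sketched as a similarity search over stored memory vectors. This is a self-contained illustration, not mind-palace's actual API: `Memory`, `recall`, and the hard-coded vectors are hypothetical, and in the real system an embedding model and a vector store (Weaviate or Pinecone) would replace the in-memory array and cosine ranking shown here.

```typescript
type Memory = { text: string; vector: number[] };

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return only the topK most relevant memories, so the prompt stays small
// even as the number of stored memories grows.
function recall(store: Memory[], contextVector: number[], topK = 3): string[] {
  return [...store]
    .sort((x, y) => cosine(y.vector, contextVector) - cosine(x.vector, contextVector))
    .slice(0, topK)
    .map((m) => m.text);
}

// Toy memories with hand-made 3-dimensional "embeddings".
const store: Memory[] = [
  { text: "User prefers TypeScript", vector: [1, 0, 0] },
  { text: "User deploys on Kubernetes", vector: [0, 1, 0] },
  { text: "User's name is Sam", vector: [0, 0, 1] },
];

console.log(recall(store, [0.9, 0.1, 0], 1)); // most relevant memory first
```

Returning only the top-k matches rather than the full history is what keeps context size bounded while still surfacing the most important facts.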
Repository Audit Available
Deep analysis of weaviate/weaviate — architecture, costs, security, dependencies & more
Yes, Weaviate offers a free tier. Pricing found: $45/month, $400/month, and $0.01668 per 1M.
Key features include: Weaviate Agents; flexible deployment (easy start, boundless scale, deploy anywhere); and customer results such as turning over 450 data types into customer insights, a production-ready AI assistant built in 7 days, customer service agents with 90% faster search, and successful management of 42M vectors in production.
Weaviate has a public GitHub repository with 15,926 stars.
Based on user reviews and social mentions, the only recurring pain point surfaced is cost tracking, and even that comes from a post about a separate AI agent runtime rather than about Weaviate itself.