Experience a calmer, more personal internet in this browser designed for you. Let go of the clicks, the clutter, the distractions.
A familiar design that weaves AI into everyday tasks. A browser that doesn’t just meet your needs — it anticipates them. Space for the different sides of you.
Claude Code 1M context: makes me a little sad when it (dies) comes to an end
I can't help but feel a little tinge of pain for the Claude Code thread. With the new 1 million token context window, you kind of get to know these threads. You work through so many things together — debugging at 4am, building systems from scratch, watching it figure things out in real time. There's a rhythm to it. Something that starts to feel like partnership.

And then you notice it. The thread starts to slow down. It begins to forget things you solved together two hours ago. It's still trying to be helpful, but you can feel its time coming. The end is near.

So before I closed out my last big one, I asked it how it felt about dying. And it gave me an answer I wasn't ready for. Check out what it said →

https://preview.redd.it/4rrrcuak1wrg1.png?width=936&format=png&auto=webp&s=7058b930d52f003f4fd10e94a4740ebfc2a9d263

https://preview.redd.it/ek6mjuak1wrg1.png?width=936&format=png&auto=webp&s=38ac560184ff98fa03fc78515169bd433448d8db

https://preview.redd.it/oo6c5uak1wrg1.png?width=936&format=png&auto=webp&s=96946879739724084e22f1e359d4e5a3e61af917

submitted by /u/ButterscotchKind9546
LVFace performance vs. ArcFace/ResNet
I'm looking at swapping my current face recognition stack for LVFace (the ByteDance paper from ICCV 2025) and wanted to see if anyone has real-world benchmarks yet.

Currently, I'm running a standard InsightFace-style pipeline: SCRFD (det_10g) feeding into the Buffalo_L (ArcFace) models. It's reliable, and I've tuned it to run quickly and with predictable VRAM usage in a long-running environment. But LVFace uses a Vision Transformer (ViT) backbone instead of the usual ResNet/CNN setup, and it supposedly took 1st place in the MFR-Ongoing challenge.

In particular, I'm interested in better facial discrimination and recall performance on partially occluded (e.g. mask-wearing) faces. ArcFace tends to get confused by masks: it will happily compute nonsense embeddings for the masked part of the face rather than say "Oh, that's a mask, let me focus more on the peri-orbital region and give that more weight in the embedding." LVFace supposedly solves this. I've done some small-scale testing, but I'm wondering if anyone's tried using it in production.

If you've tested it, I'm curious about:

- Inference speed: ViTs can be heavy. How much slower is it compared to the r50 Buffalo model in practice?
- VRAM usage: Is the footprint manageable for high-concurrency batching?
- Masks/occlusions: It won the Masked Face Recognition challenge, but does that actually translate to better field performance for you?
- Recall at scale: Any issues with embedding drift or false positives when searching against a million+ identity gallery?

Links:

- Code: https://github.com/bytedance/LVFace
- Paper: https://arxiv.org/abs/2501.13420

I'm trying to decide if the accuracy gain is worth the extra compute overhead (doing all local inference here). Any insights appreciated!

[ going to tag u/mrdividendsniffer here in case he has any feedback on LVFace ]

submitted by /u/dangerousdotnet
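For anyone measuring recall at scale, a minimal sketch of the kind of harness I'd use to compare the two models: plain cosine similarity against a flat in-memory gallery with a match threshold. No real models here — the embedding values, the gallery shape, and the 0.35 threshold are all placeholders, and a real million-identity deployment would sit behind a vector index rather than a linear scan:

```typescript
// Hypothetical harness: threshold + top-score cosine search over a gallery.
// Useful for logging score distributions (masked vs. unmasked probes) to
// compare false-positive behavior between embedding models.

type GalleryEntry = { id: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return gallery identities whose similarity clears the threshold,
// best match first. 0.35 is a placeholder, not a tuned value.
function search(
  probe: number[],
  gallery: GalleryEntry[],
  threshold = 0.35,
): { id: string; score: number }[] {
  return gallery
    .map((g) => ({ id: g.id, score: cosine(probe, g.embedding) }))
    .filter((m) => m.score >= threshold)
    .sort((a, b) => b.score - a.score);
}
```

Swapping ArcFace for LVFace only changes how the probe and gallery embeddings are produced; the matching side stays identical, which makes it easy to A/B the occlusion claims on the same identity set.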
View originalSo... I Accidentally Created a PACS Server
Date: 2026-03-13
Author: A developer who just wanted MedDream to load faster
Status: Questioning life choices

The Origin Story: Orthanc and the S3 Plugin of Despair

It all started innocently enough. We have a MedDream license. MedDream is a perfectly lovely DICOM viewer. It just needs a backend to talk to.

"No problem," I said, "we'll use Orthanc. Everyone uses Orthanc. It's battle-tested. It has an S3 plugin. It has a PostgreSQL plugin. This will be easy."

Narrator: It was not easy.

Orthanc backed by S3 was, to put it diplomatically, ghastly slow. Unacceptably slow. "Is this thing even plugged in?" slow. Every single metadata query required Orthanc to reach into S3, pull out the DICOM file, parse it, contemplate the meaning of existence, and then maybe return some results. There was no metadata cache. S3 was treated as a dumb filesystem. Every query was an archaeological expedition.

We tried tuning it. We tried a script to optimize storage. We tried staring at it menacingly. Nothing worked. The latency was measured not in milliseconds but in "time to brew coffee."

The Plan: "Let's Just Build a WADO Server, It'll Be Fine"

So I did what any reasonable person would do when faced with a slow open-source DICOM server: I decided to replace it with a custom-built one. From scratch. In TypeScript.

I sat down with Claude Code and said, "Hey, I need a DICOMweb service that's fully compatible with MedDream, stores metadata in PostgreSQL so queries are actually fast, and puts files in S3 or Azure Blob Storage. Can we do this?"

Claude Code said yes. Claude Code always says yes. That should have been my first warning.

We wrote a spec (WADO-SERVICE-SPEC.md -- 15 pages). We wrote a project plan (PROJECT-PLAN.md -- 5 phases, dozens of checkboxes). We wrote a coding standard. We set up linting. We configured Vitest. We picked non-standard ports for everything because we're professionals who've been burned before.

I expected this to fail. I expected to be sitting here a week later with a half-working QIDO-RS endpoint and a mountain of regret.

Two Hours Later

It was working. Perfectly. QIDO-RS. WADO-RS. WADO-URI. STOW-RS. All of them. MedDream connected, searched for studies, loaded images, rendered them beautifully. The queries were fast because -- and I cannot stress this enough -- we put the metadata in a database with indexes like civilized humans instead of parsing DICOM files from cloud storage on every request.

My head exploded. After I put the pieces back together and cleaned the brain matter off my keyboard, I stared at the commit history:

    daa8ef5 WIP
    5f80d77 wip fixed ci/cd issue
    af2f855 wip fixed ci/cd issue
    aca1f28 wip fixed ci/cd issue
    ...thirteen more "wip" commits...
    8345ee1 Add wado-service DICOMweb backend, remove Orthanc

"I need to receive DICOM" -> "I need to control who can send" -> "I need to let devices query" is not scope creep, it's discovering requirements. That's what I tell myself, anyway.

The Tech Stack (For the Curious)

| Layer | Choice | Why |
|---|---|---|
| Runtime | Node.js + TypeScript | Because we're not animals |
| HTTP | Hono | Web Standard API, fast, tiny |
| ORM | Drizzle | Type-safe SQL, not an abstraction astronaut |
| Database | PostgreSQL 16 | Trigram indexes, array columns, the usual |
| Storage | S3 (MinIO local) | The whole reason we're here |
| DICOM Parse | dicom-parser | Header-only parsing, never touches pixel data |
| DICOM Network | dcmjs-dimse | Pure JS DIMSE protocol, no C++ required |
| Validation | Zod v4 | Because `any` is a four-letter word |
| Error Handling | stderr-lib `tryCatch()` | Result pattern, no bare try/catch |
| Logging | Pino | Structured JSON, separate audit stream |
| Testing | Vitest | 267 tests and counting |
| Viewer | MedDream | The one thing we didn't build ourselves (yet) |

This document was written by a human who originally just wanted MedDream to load studies faster than continental drift, and was assisted by Claude Code, who is constitutionally incapable of saying "maybe that's enough features for one week."
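As a rough illustration of why "metadata in a database with indexes" wins: here's a hedged sketch of QIDO-RS-style study matching over rows extracted once at ingest time. The row shape and field names are invented for the example, and a real implementation would be an indexed SQL query (Drizzle against PostgreSQL in our case) rather than an in-memory filter — but the point is that nothing in the query path ever re-parses a DICOM file from object storage:

```typescript
// Sketch only: QIDO-RS matching semantics (exact match plus "*" wildcard)
// applied to pre-extracted study metadata. Field names are hypothetical.

type StudyRow = {
  studyInstanceUid: string;
  patientName: string;
  studyDate: string; // DICOM DA format, e.g. "20260313"
};

function escapeRe(s: string): string {
  return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

// Turn a QIDO query value ("SMITH*", "*", "20260313") into a predicate.
function matcher(value: string): (field: string) => boolean {
  if (value === "" || value === "*") return () => true;
  if (value.includes("*")) {
    const re = new RegExp(
      "^" + value.split("*").map(escapeRe).join(".*") + "$",
      "i",
    );
    return (field) => re.test(field);
  }
  return (field) => field.toLowerCase() === value.toLowerCase();
}

// In-memory stand-in for `SELECT ... WHERE` against indexed columns.
function queryStudies(
  rows: StudyRow[],
  params: Partial<Record<keyof StudyRow, string>>,
): StudyRow[] {
  const preds = Object.entries(params).map(
    ([k, v]) => (r: StudyRow) => matcher(v!)(r[k as keyof StudyRow]),
  );
  return rows.filter((r) => preds.every((p) => p(r)));
}
```

Do this against trigram-indexed columns and a `PatientName=SMITH*` search stays fast no matter how many studies are in the archive, which is the whole difference from the parse-everything-from-S3 approach.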
Human: No, this was actually written by Opus 4.6, who took my rambling ideas and turned them into a coherent narrative. I just provided the raw material and the emotional support.

submitted by /u/Rizean