Claude spinner verbs that are actually useful reminders
I've used Claude to draft this post, then edited it myself. You're welcome to read the worst of both our contributions (or the best; I can't tell anymore, since at this stage I only know how to reply by tapping 1, 2, or 3).

I've created a repo with almost 2,000 spinner verbs, but just added a new category that may be the only useful addition: **Vibe Check** (110 phrases that remind you to be a better engineer while you wait). Instead of "Frolicking..." you get things like:

- Did you follow TDD? Did you run the RED phase before the GREEN phase?
- Did you add sad-path tests?
- Do you have contract tests to catch drift between front-end and back-end?
- Do you create a contract.md before you deploy sub-agents?
- Do you have a catalog.yaml to standardize all boundary enums?
- Are you blindly accepting AI output? Did you actually read the diff?
- SQL injection: are you sure?
- Is this class single-responsibility?
- What would a code reviewer flag here?
- Are you programming by coincidence?
- Make it work, make it right, make it fast
- Ship it, or think about it one more minute?

It's like having a paranoid dev tap you on the shoulder every few seconds. I'm installing these right after I've posted this. Hopefully it'll be effective when you're in vibe-coding mode and moving fast.

The full collection has 1,945+ spinner verbs across 88 categories (Sci-Fi / Space, Noir Detective, Mission Control, Git Commit Messages, Pirate, and more). The Vibe Check category is the only one that's actually useful though 😄

Repo: https://github.com/wynandw87/claude-code-spinner-verbs

To install, just paste the verbs you want into Claude Code and ask it to add them to your ~/.claude/settings.json. Then you've got to do a little rain dance and howl at your lava lamp. Or don't; you have free will (and, more importantly, free won't).

submitted by /u/wynwyn87
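The install step the post describes comes down to editing `~/.claude/settings.json`. A purely illustrative sketch; the key name `spinnerVerbs` is an assumption, not a documented Claude Code setting, so follow the repo's README (or just ask Claude Code to do it) for the real format:

```json
{
  "_note": "hypothetical key name, for illustration only; see the repo README",
  "spinnerVerbs": [
    "Did you add sad-path tests?",
    "Are you blindly accepting AI output?",
    "SQL injection: are you sure?"
  ]
}
```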
Using AI as a dungeon master with Claude projects, but trying to be smart about it.
I always wanted to roleplay with AI. I've been trying over the years, but it was never quite there for me: I don't like inconsistency, or the AI being a yes-man to everything I say. As a DM I am quite unforgiving, and I enjoy the same when playing. Claude works with huge context windows, but that sometimes works against it. I was using Projects for other work, got used to working around context corruption, and had an idea: what if I helped the AI take notes and organize the lore and knowledge?

At first I was too general, using just 3-4 files, but getting more granular (though not too granular either) proved better. Every location has a file, with sub-locations, NPCs, and stories. While in a city, I don't need the AI to think about the whole world, just to focus on the local area for now. Giving it less to worry about at every step helps not just with consistency but with token usage (at first, after just a few meaningful events, I would run out on the free plan).

NPCs have memory, but it gets condensed over time, and they each have their own personality. Claude was great as it was, but after a while you start noticing everyone is the same person with another name.

I also use a file as a cache of sorts: the AI DM copies everything it will need for the session there, so it can work in only one small file, and then, after a few events, it saves the information to the real files. This also helps if an event has to happen across different locations.

The system is very agnostic of the setting; you are supposed to run a session 0 to set it up, talking about the general world and your character. Most systems are narrative-based, but some are numeric; this could easily be changed to your liking with the help of the AI. It's a low-magic fantasy setting by default, but making it more magical, or even a sci-fi space-travel system, should not be much work.
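The per-location files plus session cache described above might look something like this. A hypothetical layout (all names here are illustrative, not taken from the poster's actual repo):

```
world/
  overview.md          # broad strokes only; rarely loaded in full
  locations/
    riverholm/
      riverholm.md     # the city itself
      docks.md         # a sub-location
      npcs.md          # local NPCs with condensed memories
  session-cache.md     # working copy for the current session,
                       # flushed back to the real files after a few events
```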
Setting up the world upfront is the heaviest operation in tokens, but after that it's rather cheap; I have stayed in the same 4-5 locations and did not run out of tokens for a whole afternoon. I asked Claude to write some documentation for it and uploaded the clean files, without the setting, to my GitHub, in case someone else wants to play around with it. If you do, please give me your feedback: any issues you see, or what you added to your instance.

submitted by /u/Quien_9
Prompt for generating images with Claude
Note: I can't guarantee it will be perfect, and anything beyond 2D will run into some issues; this is a project I'm currently experimenting with. Go ahead, have fun. If possible, share some discoveries or improvements with the community.

# Claude Visual Generation Methods — A Complete Field Guide

## What This Document Is

A reference for every method Claude can use to generate visual content inside artifacts, discovered through direct experimentation. Each method was tested, its ceiling found, its limits documented. This is the map of the territory.

-----

## Method 1: Pixel Art (Canvas Grid Rendering)

**What it is:** Placing colored squares on a fixed grid — the same technique used in 8-bit and 16-bit game sprite creation. Each pixel is defined as a character in a string array, mapped to a color palette.

**Best for:** Game sprites, retro-style characters, tile maps, icons, simple animations.

**Resolution:** 16×16 to 64×64 is the sweet spot. Beyond that, the data becomes unwieldy.

**Strengths:**

- Extremely precise — every pixel is intentional
- Sprite sheet animation (idle, walk, attack frames) is straightforward
- Tiny file size, instant render
- Scales cleanly with `image-rendering: pixelated`
- The aesthetic *is* the constraint — chunky pixels are the point

**Limitations:**

- No smooth curves, no gradients within the grid
- Detail ceiling is hard — a 32×32 face reads as “face” because the viewer’s brain fills gaps
- Labor-intensive at higher resolutions (each pixel is a manual coordinate)

**Animation capability:** Frame-based sprite sheets. Swap between pre-built frames on a timer. Smooth motion is an illusion of frame sequencing, not interpolation.

**Color palette:** Best kept to 8–16 colors. Constraints force clarity. Dithering patterns can simulate additional tones.
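A minimal sketch of the string-array-to-palette technique Method 1 describes. The 4×4 sprite and its palette are made-up examples:

```javascript
// Hypothetical 4x4 sprite: '.' = transparent, 'R' = red.
const PALETTE = { R: '#d62828' };
const SPRITE = [
  '.RR.',
  'RRRR',
  'RRRR',
  '.RR.',
];

// Expand the character grid into drawable pixels.
function spritePixels(sprite, palette) {
  const pixels = [];
  sprite.forEach((row, y) => {
    [...row].forEach((ch, x) => {
      if (palette[ch]) pixels.push({ x, y, color: palette[ch] });
    });
  });
  return pixels;
}

// Render onto a canvas 2D context, one filled square per grid cell.
function drawSprite(ctx, sprite, palette, scale = 8) {
  for (const { x, y, color } of spritePixels(sprite, palette)) {
    ctx.fillStyle = color;
    ctx.fillRect(x * scale, y * scale, scale, scale);
  }
}
```

In an artifact you would pair this with `canvas { image-rendering: pixelated; }` so the squares stay crisp when scaled up.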
-----

## Method 2: Canvas 2D Procedural Painting

**What it is:** Using the HTML Canvas 2D API as a digital painting engine — bezier curves, radial/linear gradients, compositing blend modes, layered rendering passes.

**Best for:** Character portraits, illustrated scenes, atmospheric environments, anything requiring painterly depth.

**Resolution:** 800×1000+ at full detail. Limited only by computation time.

**Strengths:**

- Multi-pass rendering: background → character → foreground → post-processing
- Gradient-based skin rendering simulates subsurface scattering
- Variable-width bezier strokes replicate brush/ink pressure
- Compositing modes (screen, multiply, soft-light) enable bloom, color grading, volumetric light
- Perlin noise integration for organic textures (terrain, fabric, skin variation)
- Film grain, vignette, bloom via downsampled buffer — proper post-processing stack
- Breathing animation, hair sway, particle systems all run in real-time

**Limitations:**

- Every coordinate is hand-authored — no “happy accidents”
- Faces plateau at “recognizable” rather than “expressive” — the millimeter-level asymmetry that makes a smirk read as knowing is extremely hard to nail mathematically
- Curly/organic hair requires dedicated curl generators and still lacks the volumetric per-curl lighting of hand-painted illustration
- Lines are mathematically smooth — they lack the confidence irregularities of a human hand

**Ceiling we reached:** Multi-layer character portrait with strand-based hair, iris-fiber eye detail, subsurface skin warmth, layered forest environment with Perlin noise terrain, atmospheric mist, fireflies, volumetric moonlight, ACES tone mapping, and film grain. This was the highest fidelity static image achieved.
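The variable-width stroke idea from Method 2 can be sketched as follows: subdivide a quadratic bezier into segments and vary `lineWidth` per segment. The linear taper used here is one arbitrary choice of "pressure" curve:

```javascript
// Point on a quadratic bezier at parameter t in [0, 1].
function quadPoint(p0, p1, p2, t) {
  const u = 1 - t;
  return {
    x: u * u * p0.x + 2 * u * t * p1.x + t * t * p2.x,
    y: u * u * p0.y + 2 * u * t * p1.y + t * t * p2.y,
  };
}

// Linear width taper from wStart to wEnd: a stand-in for pen pressure.
function taper(wStart, wEnd, t) {
  return wStart + (wEnd - wStart) * t;
}

// Draw the curve as short segments, each with its own lineWidth.
function strokeTapered(ctx, p0, p1, p2, wStart, wEnd, segments = 32) {
  let prev = quadPoint(p0, p1, p2, 0);
  for (let i = 1; i <= segments; i++) {
    const t = i / segments;
    const next = quadPoint(p0, p1, p2, t);
    ctx.lineWidth = taper(wStart, wEnd, t);
    ctx.beginPath();
    ctx.moveTo(prev.x, prev.y);
    ctx.lineTo(next.x, next.y);
    ctx.stroke();
    prev = next;
  }
}
```

Using `lineCap = 'round'` on the context hides the seams between segments.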
**Key techniques discovered:**

- **Strand-based hair:** Each lock is an independent bezier with its own gradient, width taper, and wind response
- **Soft brush system:** `createRadialGradient` with transparent outer stop creates painterly soft dots
- **Variable-width strokes:** Subdivide a bezier into segments, vary `lineWidth` per segment based on parametric t — mimics pen pressure
- **Screen-blend rim lighting:** Draw highlight strokes with `globalCompositeOperation = 'screen'` for backlit edges
- **Multiply color grading:** Full-canvas gradient fill with `multiply` blend shifts shadow tones warm or cool

-----

## Method 3: SVG Vector Illustration

**What it is:** Mathematically defined vector shapes — paths, curves, gradients — rendered as scalable graphics.

**Best for:** Clean illustration styles, logos, icons, diagrams, anything that needs to scale without quality loss.

**Strengths:**

- Resolution-independent — renders crisp at any zoom
- Path data (`d` attribute) can describe complex organic curves
- Built-in filter primitives (see Method 8) provide GPU-accelerated effects
- Declarative structure — shapes described as markup rather than imperative draw calls

**Limitations:**

- Less control over per-pixel compositing than canvas
- Complex illustrations produce large SVG markup
- Animation is possible but less fluid than canvas `requestAnimationFrame`

**Untapped potential:**
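Method 3's declarative style, in miniature: a hypothetical scene showing a gradient fill and a `d` attribute describing an organic curve, everything stated as markup rather than draw calls:

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <defs>
    <linearGradient id="sky" x1="0" y1="0" x2="0" y2="1">
      <stop offset="0" stop-color="#7ec8e3"/>
      <stop offset="1" stop-color="#1b4965"/>
    </linearGradient>
  </defs>
  <!-- Gradient background; renders crisp at any zoom level -->
  <rect width="100" height="100" fill="url(#sky)"/>
  <!-- One path: quadratic and smooth-continuation curves form a hillside -->
  <path d="M0,70 Q25,50 50,65 T100,60 L100,100 L0,100 Z" fill="#2d6a4f"/>
</svg>
```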