7 Comments
Virul D:

For frontend testing and debugging, Browserbase MCP will be really helpful, as it can show the coding agent the issue with a screenshot.
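For reference, a minimal sketch of wiring it into an MCP client config — the package name and env var names here are assumptions on my part, so check Browserbase's docs for the real ones:

{
  "mcpServers": {
    "browserbase": {
      "command": "npx",
      "args": ["@browserbasehq/mcp-server-browserbase"],
      "env": {
        "BROWSERBASE_API_KEY": "<your-api-key>",
        "BROWSERBASE_PROJECT_ID": "<your-project-id>"
      }
    }
  }
}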

Zach Silveira:

I’ve found that Claude Code has a smart pre-work process where it converts anything you say into a structured to-do list. It’s great reinforcement of how you should already approach what you tell it to do.

Josh France:

I’ve found a similar rule of thumb: the quality of the spec determines the quality of the autocomplete. Feeding the model rich scaffolding (error, intent, expected output) feels like onboarding a junior eng who never sleeps.
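A minimal sketch of what that scaffolding can look like as a prompt builder — the field names here are my own, not from any particular tool:

// Assemble the error, the intent, and the expected output into one prompt
const buildPrompt = (error, intent, expected, code) =>
  [
    `Error: ${error}`,
    `Intent: ${intent}`,
    `Expected output: ${expected}`,
    'Fix the code below accordingly:',
    code,
  ].join('\n');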

Magnus Stjernstrom:

This is awesome! I built a tool that checks prompts against these and suggests improvements: https://promptchecker.withcascade.ai/

Paul:

One thing is missing here: prompt your LLM to ask for any context it deems relevant, and remind it that you yourself might make errors it should point out. o3 regularly asks for clarification and for specific files it suspects exist in my codebase, ever since I added this to my ChatGPT personalization.
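If it helps, something like this in the custom-instructions field — the wording is my own, not a quote from any docs:

"Before answering, ask for any files or context you think are relevant but missing. Treat my statements as fallible: if something I said looks wrong, say so instead of working around it."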

Matt:

Readable code is more important here:

// Spread to avoid mutating filteredProducts, then sort by name
const sortedProducts = [...filteredProducts].sort((a, b) =>
  sortOrder === 'asc'
    ? a.name.localeCompare(b.name)
    : b.name.localeCompare(a.name)
);

Matt (Jun 8, edited):

Or, if you want to get technical, make it a hashmap :)

// Map each sort order to its comparator and index into it directly
const comparators = {
  asc: (a, b) => a.name.localeCompare(b.name),
  desc: (a, b) => b.name.localeCompare(a.name),
};

const sortedProducts = [...filteredProducts].sort(comparators[sortOrder]);

Forget the conditional altogether.
