Neither? I'm surprised nobody has said it yet. I turned off AI autocomplete, and sometimes use the chat to debug or generate simple code but only when I prompt it to. Continuous autocomplete is just annoying and slows me down.
imiric 3 hours ago [-]
This is the way.
All this IDE churn makes me glad to have settled on Emacs a decade ago. I have adopted LLMs into my workflow via the excellent gptel, which stays out of my way but is there when I need it. I couldn't imagine switching to another editor because of some fancy LLM integration I have no control over. I have tried Cursor and VS Codium with extensions, and wasn't impressed. I'd rather use an "inferior" editor that's going to continue to work exactly how I want 50 years from now.
Emacs and Vim are editors for a lifetime. Very few software projects have that longevity and reliability. If a tool is instrumental to the work that you do, those features should be your highest priority. Not whether it works well with the latest tech trends.
bandoti 58 minutes ago [-]
Emacs' diff tools alone are a reason to use the editor. I switch between macOS, Linux, and Windows frequently, so I settled on Emacs and am happy with that choice as well.
zkry 2 hours ago [-]
Ironically, LLMs have made Emacs even more relevant. The medium LLMs work in (text) happens to match up with how Emacs represents everything (text in buffers). This opens up Emacs to becoming the agentic editor par excellence. Just imagine: some macro magic around a defcommand and voilà, the agent can do exactly what a user can. If only such a project could have funding like Cursor does...
throwanem 2 hours ago [-]
Nothing could be worse for the modern Emacs ecosystem than for the tech industry finance vampires ("VCs," "LPs") to decide there's blood enough there to suck.
Fortunately, alien space magic seems immune, so far at least. I assume they do not like the taste, and no wonder.
imiric 1 hour ago [-]
Why should the Emacs community care whether someone decides to build a custom editor with AI features? If anything this would bring more interest and development into the ecosystem, which everyone would benefit from. Anyone not interested can simply ignore it, as we do for any other feature someone implements into their workflow.
tough 5 minutes ago [-]
what i find interesting is why nobody is building llms trained on using the shell and PTY to their full potential
right now it's dumb unix piping only
I want an AI that can use emacs or vim with me
imiric 1 hour ago [-]
I'm not sure why you were downvoted. You're right that buffers and everything being programmable makes Emacs an ideal choice for building an AI-first editor. Whether that's something that a typical Emacs user wants is a separate issue, but someone could certainly build a polished experience if they had the resources and motivation. Essentially every Emacs setup is someone's custom editor, and AI features are not different from any other customization.
elAhmo 2 hours ago [-]
Cursor/Windsurf and similar IDEs and plugins are more than autocomplete on steroids.
Sure, you might not like it and think you as a human should write all the code, but the common experience across the industry in recent months is that teams using tools like this have seen productivity greatly increase.
It is not unreasonable to think that someone deciding not to use tools like this will not be competitive in the market in the near future.
hn_throw2025 1 hour ago [-]
I think you’re right, and perhaps it’s time for the “autocomplete on steroids” tag to be retired, even if something approximating that is happening behind the scenes.
I was converting a bash script to Bun/TypeScript the other day. I was doing it the way I am used to… working on one file at a time, only bringing in the AI when helpful, reviewing every diff, and staying in overall control.
Out of curiosity, threw the whole task over to Gemini 2.5Pro in agentic mode, and it was able to refine to a working solution. The point I’m trying to make here is that it uses MCP to interact with the TS compiler and linters in order to automatically iterate until it has eliminated all errors and warnings. The MCP integrations go further, as I am able to use tools like Console Ninja to give the model visibility into the contents of any data structure at any line of code at runtime too. The combination of these makes me think that TypeScript and the tooling available is particularly suitable for agentic LLM assisted development.
Quite unsettling times, and I suppose it’s natural to feel disconcerted about how our roles will become different, and how we will participate in the development process. The only thing I’m absolutely sure about is that these things won’t be uninvented with the genie going back in the bottle.
Ahh yes, software development, the discipline that famously has difficult-to-measure metrics and difficulty with long-term maintenance. Months indeed.
LandR 55 minutes ago [-]
I use Rider with some built in AI auto-complete. I'd say its hit rate is pretty low!
Sometimes it auto-completes nonsense, but sometimes I think I'm about to tab on auto-completing a method like FooABC and it actually completes it to FoodACD, both return the same type but are completely wrong.
I have to really be paying attention to catch it selecting the wrong one. I really, really hate this. When it works it's great, but every day I'm closer to just turning it off out of frustration.
wrasee 1 hour ago [-]
I think you're arguing against a straw man.
I don’t think the point was “don’t use LLM tools”. I read the argument here as about the best way to integrate these tools into your workflow.
Similar to the parent, I find interfacing with a chat window sufficiently productive and prefer that to autocomplete, which is just too noisy for me.
blitzar 3 hours ago [-]
I bind a shortcut to "cursor tab" and enable or disable it as needed. If only the AI were smart enough to learn when I do and don't want it (like Clippy in the MS days); when you're manually toggling it on/off, clear patterns emerge (to me at least) as to when I do and don't want it.
jonwinstanley 3 hours ago [-]
How do you do that? Sorry if it's obvious - I've looked for this functionality before and didn't spot it
blitzar 3 hours ago [-]
Bottom right says "cursor tab" you can manually manipulate it there (and snooze for X minutes - interesting feature). For binding shortcuts - Command/Ctrl + Shift + P, then look for "Enable|Disable|Whatever Cursor Tab" and set shortcuts there.
Old fashioned variable name / function name auto complete is not affected.
I considered a small macropad to enable / disable with a status light - but honestly don't do enough work to justify avoiding work by finding / building / configuring / rebuilding such a solution. If the future is this sort of extreme autocomplete in everything I do on a computer, I would probably go to the effort.
jonwinstanley 2 hours ago [-]
Thanks!
The thing that bugs me is when I'm trying to use tab to indent with spaces, but I get a suggestion instead.
I tried to disable caps lock, then remap tab to caps lock, but no joy
InsideOutSanta 3 hours ago [-]
Yeah, I use IntelliJ with the chat sidebar. I don't use autocomplete, except in trivial cases where I need to write boilerplate code. Other than that, when I need help, I ask the LLM and then write the code based on its response.
I'm sure it's initially slower than vibe-coding the whole thing, but at least I end up with a maintainable code base, and I know how it works and how to extend it in the future.
rco8786 1 hours ago [-]
This is where I landed too. Used Cursor for a while before realizing that it was actually slowing me down because the PR cycle took so much longer, due to all the subtle bugs in generated code.
Went back to VSCode with a tuned down Copilot and use the chat or inline prompt for generating specific bits of code.
admiralrohan 3 hours ago [-]
That is interesting. Which tech are you using?
Are you getting irrelevant suggestions? Those autocompletes are meant to predict the things you are about to type.
jonwinstanley 3 hours ago [-]
Agreed, I've found for JS the suggestions are remarkably good
owendarko 51 minutes ago [-]
You could also use these AI coding features on a plug-and-play basis with an IDE extension.
For example, VS Code has Cline & Kilo Code (disclaimer: I help maintain Kilo).
Jetbrains has Junie, Zencoder, etc.
nsteel 2 hours ago [-]
I can't even get simple code generation to work for VHDL. It just gives me garbage that does not compile. I have to assume this is not the case for the majority of people using more popular languages? Is this because the training data for VHDL is far more limited? Are these "AIs" not able to consume the VHDL language spec and give me actual legal syntax at least?! Or is this because I'm being cheap and lazy by only trying the free ChatGPT, and I should be using something else?
vitro 2 hours ago [-]
I turned autocomplete off as well. Way too many times it was just plain wrong and distracting. I'd like it to be on for method documentation only, where it worked well once the method was completed, but so far I haven't been able to customize it this way.
medhir 3 hours ago [-]
+100. I’ve found the “chat” interface most productive as I can scope a problem appropriately.
Cursor, Windsurf, etc tend to feel like code vomit that takes more time to sift through than working through code by myself.
aqme28 2 hours ago [-]
I thought Cursor was dumb and useless too when I was just using autocomplete.
It's the "agent chat" on the sidebar that is where it really shines.
aldanor 2 hours ago [-]
We have an internal ban policy on Copilot for IP reasons, and while I was... missing it initially, now just using Neovim without any AI feels fine. Maybe I'll add avante.nvim for a built-in chat box, though.
whywhywhywhy 3 hours ago [-]
Having it as tab was a mistake. Tab completion for snippets is fine because it's at the end of a line; tab completion in empty text space means you always have to be aware of whether it's in autocomplete context or not before setting an indent.
xnorswap 3 hours ago [-]
AI autocomplete can be infuriating if like me, you like to browse the public methods and properties by dotting the type. The AI autocomplete sometimes kicks in and starts writing broken code using suggestions that don't exist and that prevents quickly exploring the actual methods available.
I have largely disabled it now, which is a shame, because there are also times it feels like magic, and I can see how it could be a massive productivity lever if it had a tighter confidence threshold before kicking in.
prisenco 2 hours ago [-]
If I can, I map it to ctrl-; so I can bring it up when I need it.
But I found once it was optional I hardly ever used it.
I use Deepseek or others as a conversation partner or rubber duck, but I'm perfectly happy writing all my code myself.
Maybe this approach needs a trendy name to counter the "vibe coding" hype.
anshumankmr 1 hour ago [-]
It sometimes works really well, but I have at times been hampered by its autocomplete.
I always forget syntax for things like ssh port forwarding. Now just describe it at the shell:
$ ssh (take my local port 80 and forward it to 8080 on the machine betsy) user@betsy
or maybe:
$ ffmpeg -ss 0:10:00 -i somevideo.mp4 -t 1:00 (speed it up 2x) out.webm
I press ctrl+x x and it will replace the english with a suggested command. It's been a total game changer for git, jq, rsync, ffmpeg, regex..
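For reference, the expansions I'd expect for the two prompts above look roughly like this (my own guesses at the correct flags, not the tool's actual output):

```shell
# Local port 80 forwarded to port 8080 on betsy (binding port 80 locally needs root)
ssh -L 80:localhost:8080 user@betsy

# One minute of video starting at 0:10:00, sped up 2x (video via setpts, audio via atempo)
ffmpeg -ss 0:10:00 -i somevideo.mp4 -t 1:00 -filter:v "setpts=0.5*PTS" -filter:a "atempo=2.0" out.webm
```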
For more involved stuff there's screen-query: Confusing crashes, strange terminal errors, weird config scripts, it allows a joint investigation whereas aider and friends just feels like I'm asking AI to fuck around.
nicce 3 hours ago [-]
This never accesses any extra data and works only when explicitly asked? I find the terminal the most important part from a privacy perspective, and I haven't tried any LLM integration yet...
kristopolous 3 hours ago [-]
It is intentionally non-agentic and only runs when invoked.
As for extra data, it sends uname and the proc name when it captures, such as "nvim" or "ipython", and that's it.
raverbashing 4 hours ago [-]
Yeah
AI autocomplete is a feature, not a product (to paraphrase SJ)
I can understand Windsurf getting the valuation as they had their own Codeium model
$B for a VSCode fork? Lol
nicce 3 hours ago [-]
Microsoft always seems to be the winner. Maybe they predicted all this, and for that reason made the core extensions closed source.
unsupp0rted 3 hours ago [-]
Asking HN this is like asking which smartphone to use. You'll get suggestions for obscure Linux-based modular phones that weigh 6 kilos and lack a clock app or wifi. But they're better because they're open source or fully configurable or whatever. Or a smartphone that a fellow HNer created in his basement and plans to sell soon.
Cursor and Windsurf are both good, but do what most people do and use Cursor for a month to start with.
mohsen1 1 hour ago [-]
haha so on point! In the HN world, backends are written in Rust with formal proofs and frontends are in pure JS and maybe Web Components. In the real world, however, a lot of people are using different tech.
jasongill 25 minutes ago [-]
"The clock app isn't missing! You just have to cross-compile it from source and flash a custom firmware that allows loading it!"
danpalmer 6 hours ago [-]
Zed. They've upped their game in AI integration, and so far it's the best one I've seen (excluding what I use at work). Cursor and VS Code+Copilot always felt slow and janky; Zed is much less janky and feels like pretty mature software, and I can just plug in my Gemini API key and use that for free/cheap instead of paying for the editor's own integration.
Overall Zed is super nice and the opposite of janky, but I still found a few of the defaults were off, and Python support was still missing in a few key ways for my daily workflow.
frainfreeze 1 hours ago [-]
Zed doesn't even run on my system and the relevant github issue is only updated by people who come to complain about the same issue.
xmorse 42 minutes ago [-]
I am using Zed too, it still has some issues but it is comparable to Cursor. In my opinion they iterate even faster than the VSCode forks.
submeta 4 hours ago [-]
It consumes lots of resources on an M4 MacBook. I'd love to test it, though, if it didn't freeze my MacBook.
Edit:
With the latest update to 0.185.15 it works perfectly smooth. Excellent addition to my setup.
_bin_ 3 hours ago [-]
I'll second the zed recommendation, sent from my M4 macbook. I don't know why exactly it's doing this for you but mine is idling with ~500MB RAM (about as little as you can get with a reasonably-sized Rust codebase and a language server) and 0% CPU.
I have also really appreciated something that felt much less janky, had better vim bindings, and wasn't slow to start even on a very fast computer. You can completely botch Cursor if you type really fast. On an older mid-range laptop, I ran into problems with a bunch of its auto-pair stuff of all things.
drcongo 3 hours ago [-]
Yeah, same. Zed is incredibly efficient on my M1 Pro. It's my daily driver these days, and my Python setup in it is almost perfect.
I don't think Zeta is quite up to Windsurf's completion quality/speed.
I get that this would go against their business model, but maybe people would pay for it. It could, in theory, be the fastest completion since it would run locally.
xmorse 44 minutes ago [-]
Running models locally is very expensive in terms of memory and scheduling requirements. Maybe instead they should host their model on the Cloudflare AI network, which is distributed all around the world and can have lower latency.
fastball 5 hours ago [-]
For the agentic stuff I think every solution can be hit or miss. I've tried claude code, aider, cline, cursor, zed, roo, windsurf, etc. To me it is more about using the right models for the job, which is also constantly in flux because the big players are constantly updating their models and sometimes that is good and sometimes that is bad.
But I daily drive Cursor because the main LLM feature I use is tab-complete, and here Cursor blows the competition out of the water. It understands what I want to do next about 95% of the time when I'm in the middle of something, including comprehensive multi-line/multi-file changes. Github Copilot, Zed, Windsurf, and Cody aren't at the same level imo.
solumunus 4 hours ago [-]
If we’re talking purely auto complete I think Supermaven does it the best.
fastball 3 hours ago [-]
Cursor bought Supermaven last year.
xmorse 43 minutes ago [-]
It still works
SafeDusk 6 hours ago [-]
I am betting on myself.
I built a minimal agentic framework (with editing capability) that works for a lot of my tasks with just seven tools: read, write, diff, browse, command, ask and think.
One thing I'm proud of is the ability to have it be more proactive in making changes and taking next action by just disabling the `ask` tool.
I won't say it is better than any of the VSCode forks, but it works for 70% of my tasks in an understandable manner. As for the remaining stuff, I can always use Cursor/Windsurf in a complementary manner.
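The comment above describes a seven-tool agent with editing capability; the dispatch part of such a loop can be sketched in a few lines. Note this is my own minimal reconstruction: the JSON calling convention and the stub tool implementations are assumptions, not the actual framework.

```python
import json

def make_agent(tools):
    """Return a step function that routes one model tool-call to an implementation."""
    def step(model_reply):
        call = json.loads(model_reply)            # e.g. {"tool": "read", "args": {...}}
        name, args = call["tool"], call.get("args", {})
        if name not in tools:                     # disabling a tool (like `ask`) is just
            return f"error: unknown tool {name}"  # leaving it out of this dict
        return tools[name](**args)
    return step

# Stub implementations of the seven tools named above
files = {"notes.txt": "hello"}
tools = {
    "read":    lambda path: files.get(path, ""),
    "write":   lambda path, text: files.update({path: text}) or "ok",
    "diff":    lambda a, b: "\n".join(l for l in b.splitlines()
                                      if l not in a.splitlines()),
    "browse":  lambda url: f"fetched {url}",
    "command": lambda cmd: f"ran {cmd}",
    "ask":     lambda prompt: f"user approved: {prompt}",
    "think":   lambda note: "noted",
}

step = make_agent(tools)
print(step('{"tool": "read", "args": {"path": "notes.txt"}}'))  # prints: hello
```

A real loop would feed each tool result back into the model until it stops emitting tool calls; dropping `ask` from the dict is what makes the agent "more proactive", as described.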
Aider! Use the editor of your choice and keep your coding assistant separate. Plus, it's open source and will stay like this, so there's no risk of seeing it suddenly become expensive or disappear.
mbanerjeepalmer 5 hours ago [-]
I used to be religiously pro-Aider. But after a while, those little frictions of flicking back and forth between the terminal and VS Code, and adding and dropping files from the context myself, have worn down my appetite to use it. The `--watch` mode is a neat solution but harms performance: the LLM gets distracted by deleting its own comment.
Roo is less solid but better-integrated.
Hopefully I'll switch back soon.
fragmede 5 hours ago [-]
I suspect that if you're a vim user those friction points are a bit different. For me, Aider's git auto-commit and /undo command are what sell it at this current juncture of technology. OpenHands looks promising, though rather complex.
movq 4 hours ago [-]
The (relative) simplicity is what sells aider for me (it also helps that I use neovim in tmux).
It was easy to figure out exactly what it's sending to the LLM, and I like that it does one thing at a time. I want to babysit my LLMs and those "agentic" tools that go off and do dozens of things in a loop make me feel out of control.
ayewo 3 hours ago [-]
I like your framing about “feeling out of control”.
For the occasional frontend task, I don’t mind being out of control when using agentic tools. I guess this is the origin of Karpathy’s vibe coding moniker: you surrender to the LLM’s coding decisions.
For backend tasks, which is my bread and butter, I certainly want to know what it’s sending to the LLM so it’s just easier to use the chat interface directly.
This way I am fully in control. I can cherry pick the good bits out of whatever the LLM suggests or redo my prompt to get better suggestions.
Oreb 4 hours ago [-]
Approximately how much does it cost in practice to use Aider? My understanding is that Aider itself is free, but you have to pay per token when using an API key for your LLM of choice. I can look up for myself the prices of the various LLMs, but it doesn't help much, since I have no intuition whatsoever about how many tokens I am likely to consume. The attraction of something like Zed or Cursor for me is that I just have a fixed monthly cost to worry about. I'd love to try Aider, as I suspect it suits my style of work better, but without having any idea how much it would cost me, I'm afraid of trying.
m3adow 3 hours ago [-]
I'm using Gemini 2.5 Pro with Aider and Cline for work. I'd say when working for 8 full hours without any meetings or other interruptions, I'd hit around $2. In practice, I average at $0.50 and hit $1 once in the last weeks.
didgeoridoo 13 minutes ago [-]
Wow my first venture into Claude Code (which completely failed for a minor feature addition on a tiny Swift codebase) burned $5 in about 20 minutes.
Probably related to Sonnet 3.7’s rampant ADHD and less the CLI tool itself (and maybe a bit of LLMs-suck-at-Swift?)
bluehatbrit 2 hours ago [-]
I'd be really keen to know more about what you're using it for, how you typically prompt it, and how many times you're reaching for it. I've had some success at keeping spend low but can also easily spend $4 from a single prompt so I don't tend to use tools like Aider much. I'd be much more likely to use them if I knew I could reliably keep the spend down.
m3adow 12 minutes ago [-]
I'll try to elaborate:
I'm using VSC for most edits, tab-completion is done via Copilot, I don't use it that much though, as I find the prediction to be subpar or too wordy in case of commenting.
I use Aider for rubber-ducking and implementing small to mid-scope changes. Normally, I add the required files, change to architect or ask mode (depends on the problem I want to solve), explain what my problem is and how I want it to be solved. If the Aider answer satisfies me, I change to coding mode and allow the changes.
No magic, I have no idea how a single prompt can generate $4. I wouldn't be surprised if I'm only scratching on the surface with my approach though, maybe there is a better but more costly strategy yielding better results which I just didn't realize yet.
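The workflow described above reads roughly like this as an Aider session (/add, /ask, /architect, /code, and /undo are Aider's in-chat commands; the model flag and file names here are illustrative):

```shell
aider --model gemini                  # start Aider; model name illustrative
> /add service.py test_service.py    # add the required files to the context
> /ask why does this handler retry twice on timeout?
> /architect switch the retry to exponential backoff
> /code implement the plan above     # change to coding mode and allow the changes
> /undo                              # Aider auto-commits; roll back if unhappy
```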
beacon294 3 hours ago [-]
This is very inexpensive. What are your workflow and cost-saving techniques? I can spend $10/h or more with very short sessions and few files.
m3adow 2 hours ago [-]
Huh, I didn't configure anything for saving, honestly. I just add the whole repo and do my stuff.
How do you get to $10/h? I probably couldn't even provoke this.
I assume we have a very different workflow.
anotheryou 4 hours ago [-]
Depends entirely on the API.
With deepseek: ~nothing.
tuyguntn 3 hours ago [-]
is deepseek fast enough for you? For me the API is very slow, sometimes unusable
anotheryou 3 hours ago [-]
To be honest, I'm using Windsurf with OpenAI/Google right now; I used DeepSeek with Aider when it was still less crowded.
My only problem was DeepSeek occasionally not answering at all, but generally it was fast (non-thinking, that is).
jbellis 1 hour ago [-]
I love Aider, but I got frustrated with its limitations and ended up creating Brokk to solve them: https://brokk.ai/
Compared to Aider, Brokk
- Has a GUI (I know, tough sell for Aider users but it really does help when managing complex projects)
- Builds on a real static analysis engine, so its equivalent of the repomap doesn't get hopelessly confused in large codebases
- Has extremely useful git integration (view git log, right click to capture context into the workspace)
- Is also OSS and supports BYOK
I'd love to hear what you think!
aitchnyu 5 hours ago [-]
Yup, choose your model and pay as you go, like commodities such as rice and water. The others played games with me to minimize context and use cheaper models (3 modes, daily credits, only sometimes using the most expensive model, etc.).
Also, the --watch mode is the most productive way of using your editor; no need for extra textboxes with robot faces.
fragmede 5 hours ago [-]
FWIW, Gemini-*, which is available in Aider, isn't pay-as-you-go (PAYG) but postpaid, which means you get a bill at the end of the month, rather than the OpenAI-style model of charging up credits before you can use the service.
camkego 3 hours ago [-]
I guess this is a good reason to consider things like openrouter. Turns it into a prepaid service.
pembrook 5 hours ago [-]
For a time Windsurf was way ahead of Cursor in full agentic coding, but now I hear Cursor has caught up. I have yet to switch back to try Cursor again, but I'm starting to get frustrated with Windsurf being restricted to gathering context only 100-200 lines at a time.
So many of the bugs and poor results it introduces are simply due to improper context. When you forcibly give it the necessary context, you can clearly see it's not a model problem but a problem with the approach of gathering disparate 100-line snippets at a time.
Also, it struggles with files over 800-ish lines, which is extremely annoying.
We need some smart DeepSeek-style innovation in context gathering, since hardware and the cost of tokens are the real bottleneck here.
victorbjorklund 3 hours ago [-]
I'm with Cursor for the simple reason that it is, in practice, unlimited. Honestly, the slow requests after the 500 per month are fast enough. Will I stay with Cursor? No, I'll switch the second something better comes along.
mdrzn 3 hours ago [-]
Same. Love the "slow but free" model; I hope they can continue providing it. I love paying only $20/m instead of paying by usage.
I've been building SO MANY small apps and web apps in the latest months, best $20/m ever spent.
k4rli 4 minutes ago [-]
€20 seems totally subsidized considering the amount of tokens. They're pricing cheaply to be competitive, but users will jump to the next one when they inevitably hike the price up.
xiphias2 3 hours ago [-]
I'm on Cursor with Claude 3.7.
Somehow other models don't work as well with it. "auto" is the worst.
Still, I hate it when it deletes all my unit tests to "make them pass".
rvnx 3 hours ago [-]
Cursor is acceptable because for the price it's unbeatable. Free, unlimited requests are great. But by itself, Cursor is not anything special. It's only interesting because they pay Claude or Gemini from their pockets.
Ideally, things like RooCode + Claude are much better, but you need an infinite money glitch.
herbst 3 hours ago [-]
On weekends, the slow requests are regularly faster than the paid requests.
reynaldi 3 hours ago [-]
VS Code with GitHub Copilot works great, though they are usually a little late to add features compared to Cursor or Windsurf. I use the 'Edit' feature the most.
Windsurf I think has more features, but I find it slower compared to others.
Cursor is pretty fast, and I like how it automatically suggests completions even when moving my cursor to a line of code (unlike others, where you need to 'trigger' it by typing some text first).
Honorable mention: Supermaven. It was the first and fastest AI autocomplete I used. But it's no longer updated since they were acquired by Cursor.
lemontheme 5 hours ago [-]
OP probably means to keep using VS Code. Honestly, the best thing you can do is just try each for a few weeks. Feature comparison tables only say so much, particularly because the terminology is still in a state of flux.
I’ve personally never felt at home in vscode. If you’re open to switching, definitely check out Zed, as others are suggesting.
can16358p 35 minutes ago [-]
While I haven't used Windsurf, I've been using Cursor and I LOVE it: especially the inline autocomplete is like reading my mind and making the work MUCH faster.
I can't say anything about Windsurf (as I haven't tried yet) but I can confidently say Cursor is great.
erenst 5 hours ago [-]
I’ve been using Zed Agent with GitHub Copilot’s models, but with GitHub planning to limit usage, I’m exploring alternatives.
Now I'm testing Claude Code’s $100 Max plan. It feels like magic - editing code and fixing compile errors until it builds. The downside is I’m reviewing the code a lot less since I just let the agent run.
So far, I’ve only tried it on vibe coding game development, where every model I’ve tested struggles. It says “I rewrote X to be more robust and fixed the bug you mentioned,” yet the bug still remains.
I suspect it will work better for backend web development I do for work: write a failing unit test, then ask the agent to implement the feature and make the test pass.
Also, give Zed’s Edit Predictions a try. When refactoring, I often just keep hitting Tab to accept suggestions throughout the file.
energy123 5 hours ago [-]
Can you say more to reconcile "It feels like magic" with "every model I’ve tested struggles."?
erenst 4 hours ago [-]
It feels like magic when it works and it at least gets the code to compile. Other models* would usually return broken code, especially when using a new release of a library. All the models use the old function signatures, but Claude Code then sees the compile error and fixes it.
Compared to Zed Agent, Claude Code is:
- Better at editing files. Zed would sometimes return the file content in the chatbox instead of updating it. Zed Agent also inserted a new function in the middle of the existing function.
- Better at running tests/compiling. Zed struggled with the nix environment, and I don't remember it going through the update code -> run code -> update code feedback loop.
With this you can leave Claude Code alone for a few minutes, check back, and give additional instructions. With Zed Agent it was more constant monitoring, copy-pasting, and manually verifying everything.
*I haven't tested many of the other tools mentioned here, this is mostly my experience with Zed and copy/pasting code to AI.
I plan to test other tools when my Claude Code subscription expires next month.
kioku 3 hours ago [-]
It might seem contrary to the current trend, but I've recently returned to using nvim as my daily driver after years with VS Code. This shift wasn't due to resource limitations but rather the unnecessary strain from agentic features consuming high amounts of resources.
heymax054 1 hours ago [-]
90% of their features could fit inside a VS Code extension.
There are already a few popular open-source extensions doing 90%+ of what Cursor is doing: Cline, Roo Code (a fork of Cline), and Kilo Code (a fork of Roo Code, and something I help maintain).
wrasee 1 hour ago [-]
The other 10% being what differentiates them in the market :)
heymax054 1 hour ago [-]
Of course. Are they useful enough, though, for people to install entirely new software?
gkbrk 13 minutes ago [-]
Since installing entirely new software is just downloading Cursor.AppImage from the official website and double-clicking on it, it's not a large hassle for most users.
If you're on Arch, there's even an AUR package, so it's even less steps than that.
benterix 3 hours ago [-]
For daily work - neither. They basically promote the style of work where you end up with mediocre code that you don't fully understand, and with time the situation gets worse.
I get much better results by asking specific questions to a model that has a huge context (Gemini) and analyzing the generated code carefully. That's the opposite of the style of work you get with Cursor or Windsurf.
Is it less efficient? If you are paid by LoC, sure. But for me, quality and long-term maintainability are far more important. And the Tab autocomplete feature especially was driving me nuts, being wrong roughly half the time and basically just interrupting my flow.
chrisvalleybay 5 hours ago [-]
Cursor has had the best UX and results for me so far. Trae's way of adding context is way too annoying. Windsurf has minor UI issues all over. Options that are extensions in VS Code don't cut it in terms of providing fantastic UI/UX, because the API doesn't support it.
khwhahn 3 hours ago [-]
I wish your own coding were just augmented, like somebody looking over your shoulder. The problem with current AI coding is that you don't know your code base anymore. Basically, I want somebody helping me figure out stuff faster, update documentation, etc.
I really like Zed. Have not tried any of the mentioned by op.
I feel like Zed is getting to where it can replace Sublime Text completely (but it's not there yet).
bilekas 5 hours ago [-]
Zed is an editor firstly. The OP mentioned options which are basically AI development "agents".
mirekrusin 4 hours ago [-]
AI aided development has first class support in Zed.
I.e., it's not a "plugin" but a built-in ecosystem developed by the core team.
Speed of iterations on new features is quite impressive.
Their latest agentic editing update basically brought claude code cli to the editor.
Most corporations don't have direct access to arbitrary LLMs, but through Microsoft's GitHub Copilot they do, and you can use models through Copilot and other providers like Ollama, which is great for work.
With their expertise (the team behind pioneering tech like Electron, Atom, Teletype, Tree-sitter, building their own GPU-based cross-platform UI, etc.) and velocity, it seems that they're positioned to outpace the competition.
Personally I'd say that their tech is maybe two orders of magnitude more valuable than windsurf?
bilekas 3 hours ago [-]
I don't dispute that Zed is great – I'm actually using it myself – but it's an editor first and foremost. The OP, to me at least, seems to be asking more about AI agent comparisons.
drcongo 3 hours ago [-]
Cursor and Windsurf are both forks of VS Code, an editor.
bilekas 1 hours ago [-]
Yes very observant, modified forks with their agents built in. Zed does not have any built in, sublime does not have agents built in but if you like you can continue this disingenuous discussion.
mirekrusin 29 minutes ago [-]
Zed has it built in; it's called "agentic editing" [0] and behaves like the Claude Code CLI and other agents – MCP-based editing, iterating on tests until they pass, etc. – where you leave it in a background window and can do something else while waiting for the completion notification, or you can follow it to see what changes it is making.
It's not only that they have it built in, but it currently seems to be the best open replacement for tools like the Claude Code CLI, because you can use an arbitrary LLM with it, e.g. from Ollama, and you have great extension points (MCP servers, rules, slash commands, etc.).
I was under impression Zed had native LLM integration, built into the editor?
killerstorm 5 hours ago [-]
Cursor: Autocomplete is really good. At the time I compared them, it was without a doubt better than GitHub Copilot autocomplete. Cmd-K – insert/edit snippet at cursor – is good when you use good old Sonnet 3.5. Agent mode is, honestly, quite disappointing; it doesn't feel like they put a lot of thought into prompting and wrapping LLM calls. Sometimes it just fails to submit code changes, which is especially bad as they charge you for every request. Also I think they overcharge for Gemini, and the Gemini integration is especially poor.
My reference for agent mode is Claude Code. It's far from perfect, but it uses sub-tasks and summarization using smaller haiku model. That feels way more like a coherent solution compared to Cursor. Also Aider ain't bad when you're OK with more manual process.
Windsurf: Have only used it briefly, but agent mode seems somewhat better thought out. For example, they present possible next steps as buttons. Some reviews say it's even more expensive than Cursor in agent mode.
emrah 2 hours ago [-]
> Neither? I'm surprised nobody has said it yet. I turned off AI autocomplete ...
This represents one group of developers and is certainly valid for that group. To each their own
For another group, where I belong, AI is a great companion! We can handle the noise and development speed is improved as well as the overall experience.
I prefer VSCode and GitHub Copilot. My opinion is this combo will eventually eat all the rest, but that's beside the point.
Agent mode could be faster – sometimes its thinking is rather slow – but it's not a big deal. This mode is all I use these days. Integration with the code base is a huge part of the great experience.
CuriouslyC 2 hours ago [-]
Personally, if you take the time to configure it well, I think Aider is vastly superior. You can have 4 terminals open in a grid and be running agentic coding workflows on them and 4x the throughput of someone in Cursor, whereas Cursor's UI isn't really amenable to running a bunch of instances and managing them all simultaneously. That plus Aider lets you do more complex automated Gen -> Typecheck -> Lint -> Test workflows with automated fixing.
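A sketch of the kind of automated gen -> lint -> test loop described above, using aider's own lint/test hooks (flag names are from aider's documentation; the actual lint and test commands shown are placeholders for whatever your stack uses):

```shell
# One of several terminals: aider re-runs lint and tests after each edit
# and feeds failures back to the model until they pass.
# "ruff check --fix" and "pytest -q" are example commands, not requirements.
aider --model sonnet \
      --auto-lint --lint-cmd "ruff check --fix" \
      --auto-test --test-cmd "pytest -q" \
      --message "implement the feature described in TODO.md"
```

Running one such invocation per terminal, each scoped to different files, gives the parallel-grid workflow the comment describes.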
adamgroom 58 minutes ago [-]
I like Cursor; the autocomplete is great most of the time, and as others have said, you can use a shortcut to disable it.
The agents are a bit beta: they can't solve bugs very often, and will write a load of garbage if you let them.
jsumrall 6 hours ago [-]
Amazon Q. Claude Code is great (the best imho, what everything else measures against right now), and Amazon Q seems almost as good and for the first week I've been using it I'm still on the free tier.
The flat pricing of Claude Code seems tempting, but it's probably still cheaper for me to go with usage pricing. I feel like loading my Anthropic account with the minimum of $5 each time would last me 2-3 days depending on usage. Some days it wouldn't last even a day.
I'll probably give Open AI's Codex a try soon, and also circle back to Aider after not using it for a few months.
I don't know if I misunderstand something with Cursor or Copilot. It seems so much easier to use Claude Code than Cursor, as Claude Code has many more tools for figuring things out. Cursor also required me to add files to the context, which I thought it should 'figure out' on its own.
quintes 3 hours ago [-]
I remember asking Amazon Q something and it wouldn't reply because of a security policy or something. As far as I can remember, it was a legitimate question about an IAM policy I was trying to configure. I went back to Google search and figured it out.
wordofx 5 hours ago [-]
> I don't know if I misunderstand something with Cursor or Copilot. It seems so much easier to use Claude Code than Cursor, as Claude Code has many more tools for figuring things out. Cursor also required me to add files to the context, which I thought it should 'figure out' on its own.
Cursor can find files on its own. But if you point it in the right direction it has far better results than Claude Code.
jatins 5 hours ago [-]
This is the first time I'm seeing someone say good things about Amazon Q.
Do they publish any benchmark sheet on how it compares against others?
retinaros 5 hours ago [-]
It is currently in the top 3 on SWE-bench Verified.
It went through multiple stages of upgrades, and I would say at this stage it is better than Copilot. Fundamentally it is as good as Cursor or Windsurf, but lacks some features and cannot match their speed of release. If you're on AWS, though, it's a compelling offering.
eisfresser 6 hours ago [-]
Windsurf at the moment. It can now run multiple "flows" in parallel, so I can set one cascade off to look into a bug somewhere while another cascade implements a feature elsewhere in the code base. The LLMs spit out their tokens in the background, and I drop in eventually to review and accept or ask for further changes.
ximeng 5 hours ago [-]
Cursor offers this too - open different tabs in chat and ask for different changes; they’ll run in parallel.
frainfreeze 1 hours ago [-]
Until you change the model in one of the tabs – and all other tabs (and editor instances!) get their model changed, stop what they're doing, lose context, etc. There is also a bug where if you have two editors working on two codebases, they get lost and start working on the same thing; I suppose there is some kind of background workspace that gets mixed up.
mirekrusin 5 hours ago [-]
Zed has this background flow as well, you can see in the video [0] from their latest blog post.
Since this topic is closely related to my new project, I’d love to hear your opinion on it.
I’m thinking of building an AI IDE that helps engineers write production-quality code quickly when working with AI. The core idea is to introduce a new kind of collaboration workflow.
You start with the same kind of prompt, like “I want to build this feature...”, but instead of the model making changes right away, it proposes an architecture for what it plans to do, shown from a bird’s-eye view on a 2D canvas.
You collaborate with the AI on this architecture to ensure everything is built the way you want. You’re setting up data flows, structure, and validation checks. Once you’re satisfied with the design, you hit play, and the model writes the code.
Thoughts? Do you think this workflow has a chance of being adopted?
michuk 3 hours ago [-]
Looks like an antidote for "vibe coding", like it. When are you planning to release something that could be tried? Is this open source?
dkaleta 3 hours ago [-]
I believe we can have a beta release in September, and yes, we plan to open-source the editor.
PS. I’m stealing the ‘antidote to “vibe coding”’ phrase :)
rkuodys 4 hours ago [-]
I quite liked the video. Hope you get to launch the product and I could try it out some day.
The only thing I kept thinking about was: if a correction is needed, you have to make it fully by hand – find everything and map it. But if the first try was way off, I would like to enter a correction from the "midpoint", so instead of fixing 50%, I would be left with maybe 10 or 20. Don't know if you get what I mean.
dkaleta 4 hours ago [-]
Yes, the idea is to ‘speak/write’ to the local model to fix those little things so you don’t have to do them by hand. I actually already have a fine-tuned Qwen model running on Apple’s MLX to handle some of that, but given the hard YC deadline, it didn’t make it into the demo.
Eventually, you’d say, ‘add an additional layer, TopicsController, between those two files,’ and the local model would do it quickly without a problem, since it doesn’t involve complicated code generation. You’d only use powerful remote models at the end.
ciaranmca 4 hours ago [-]
Just watched the demo video and thought it is a very interesting approach to development, I will definitely be following this project. Good Luck.
pk97 48 minutes ago [-]
The age of swearing allegiance to a particular IDE/AI tool is over. I keep switching between Cursor and GH Copilot, and for the most part they are very similar offerings. Then there's v0, Claude (for its Artifacts feature) and Cline, which I use quite regularly for different requirements.
admiralrohan 3 hours ago [-]
Using Windsurf since the start and I am satisfied. Didn't look beyond it. Focused on actually doing the coding. It's impossible to keep up with daily AI news and if something groundbreaking happens it will go viral.
Artgor 5 hours ago [-]
Claude Code.
And... Junie in Jetbrains IDE. It appeared recently and I'm really impressed by its quality. I think it is on the level of Claude Code.
Euphorbium 5 hours ago [-]
I think it uses Claude Code by default; it is literally the same thing, with a different (better) interface.
myflash13 4 hours ago [-]
Really interesting. Source?
cdrx 2 hours ago [-]
Junie in Ask mode:
> Which LLM are you?
> I am Claude, an AI assistant created by Anthropic. In this interface, I'm operating as "Junie," a helpful assistant designed to explore codebases and answer questions about projects. I'm built on Anthropic's large language model technology, specifically the Claude model family.
Jetbrains wider AI tools let you choose the model that gets used but as far as I can tell Junie doesn't. That said, it works great.
jbellis 1 hours ago [-]
That just means it's using Sonnet, not that it's using Claude Code.
frainfreeze 56 minutes ago [-]
Even that doesn't have to be true, LLMs often impersonate other popular models.
jbellis 52 minutes ago [-]
usually OpenAI, but yes
jfoster 2 hours ago [-]
I'm not sure the answer matters so much. My guess is that as soon as one of them gains any notable advantage over the other, the other will copy it as quickly as possible. They're using the same models under the hood.
whywhywhywhy 3 hours ago [-]
Cursor is good for basic stuff but Windsurf consistently solves issues Cursor fails on even after 40+ mins of retries and prompting changes.
Cursor is very lazy about looking beyond the current context, or even at the context at all; sometimes it feels like it's trying to one-shot a guess without looking deeper.
The bad thing about Windsurf is that the plans are pretty limited, and the unlimited "cascade base" felt dumb the times I used it, so ultimately I use Cursor until I hit a wall, then switch to Windsurf.
auggierose 1 hours ago [-]
How about Cursor vs. Windsurf vs. (Claude Desktop + MCP)?
Haven't tried out Cursor / Windsurf yet, but I can see how I can adapt Claude Desktop to specifically my workflow with a custom MCP server.
anonymoushn 6 hours ago [-]
I evaluated Windsurf on a friend's recommendation around half a year ago and found that it could not produce any useful behavior on files above a thousand lines or so. I understand this is mostly a property of the model, but it's certainly also a property of the editor's approach of just tossing the entire file in, yeah? I haven't tried any of these products since then, but it might be worth another shot, since Gemini might be able to handle these files.
sumedh 2 hours ago [-]
Windsurf has improved a lot in the last few months.
eadz 5 hours ago [-]
A year is a long time. Even in the past few months it has improved a lot.
pbowyer 6 hours ago [-]
I've had trials for both running and tested both on the same codebases.
Cursor works roughly how I've expected. It reads files and either gets it right or wrong in agent mode.
Windsurf seems restricted to reading files 50 lines at a time, and often will stop after 200 lines [0]. When dealing with existing code I've been getting poorer results than Cursor.
As to autocomplete: perhaps I haven't set up either properly (for PHP), but the autocomplete in both is good for pattern-matching changes I make, and terrible for anything that requires knowledge of what methods an object has, the parameters a method takes, etc. They both hallucinate wildly, so I end up doing bits of editing in Cursor/Windsurf while keeping the same project open in PhpStorm to make use of its intellisense.
I'm coming to the end of both trials and the AI isn't adding enough over Jetbrains PhpStorm's built in features, so I'm going back to that until I figure out how to reduce hallucinations.
Recently I started using Cursor to add a new feature to a small codebase for work, after a couple of years in which I didn't code. It took me a couple of tries to figure out how to work with the tool effectively, but it worked great! I'm now learning how to use it with TaskMaster; it's such a different way to build and play with software. Oh, one important note: I went with Cursor also because of the pricing – despite being confusing in terms of fast vs slow requests, it smells less consumption-based.
Windsurf. The context awareness is superior compared to cursor. It falls over less and is better at retrieving relevant snippets of code. The premium plan is cheaper too, which is a nice touch.
jonwinstanley 3 hours ago [-]
Has anyone had any joy using a local model? Or is it still too slow?
On something like a M4 Macbook Pro can local models replace the connection to OpenAi/Anthropic?
frainfreeze 60 minutes ago [-]
For advanced autocomplete (not code generation, though it can do that too), basic planning, looking things up instead of web search, review & summary, even one-shotting smaller scripts, the 32B Q4 models have proved very good for me (24GB VRAM RTX 3090). All LLM caveats still apply, of course. Note that setting up a local LLM in Cursor is a pain because they don't support localhost. Ngrok, or a VPS and reverse SSH, solve that though.
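A sketch of that VPS-plus-reverse-SSH workaround, assuming a local Ollama server on its default port (11434); the hostname and user are placeholders:

```shell
# Reverse tunnel: expose the local Ollama port on a public VPS, since Cursor
# can't call localhost directly (its requests originate from Cursor's servers).
# The VPS's sshd needs "GatewayPorts yes" so the forwarded port binds publicly.
ssh -N -R 0.0.0.0:11434:localhost:11434 user@vps.example.com
# Then point Cursor's OpenAI base URL at http://vps.example.com:11434/v1
# (Ollama exposes an OpenAI-compatible API under /v1).
```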
brahyam 3 hours ago [-]
Cursor. Good price, the predictive next edit is great, it's good enough with big code bases, and with auto mode I don't even spend all my premium requests.
I've tried VSCode with Copilot a couple of times and it's frustrating: you have to point out individual files for edits, and project-wide requests are a pain.
My only pain is the workflow for developing mobile apps, where I have to switch back and forth between Android Studio and Xcode, as the VSCode extensions for mobile are not so good.
webprofusion 4 hours ago [-]
I just use Copilot (across VS Code, VS etc), it lets you pick the model you want and it's a fixed monthly cost (and there is a free tier). They have most of the core features of these other tools now.
Cursor, Windsurf et al have no "moat" (in startup speak), in that a sufficiently resourced organization (e.g. Microsoft) can just copy anything they do well.
VS code/Copilot has millions of users, cursor etc have hundreds of thousands of users. Google claims to have "hundreds of millions" of users but we can be pretty sure that they are quoting numbers for their search product.
- Fully agentic, controllable, and transparent. The agent does all the work but keeps you in the loop; you can take back control anytime and guide it.
- Not an IDE, so it doesn't compete with the VSCode forks. The interface is just a chatbox.
- More like Replit – but full-stack focused. You can build backend services.
- Videos are up at youtube.com/@nonbios
michelsedgh 2 hours ago [-]
The best thing about Cursor is that for $20 you basically get unlimited requests. I know you get "slower" after a certain number of requests, but honestly you don't feel it being slow, and reasoning models take so long to answer anyway that you send the prompt and go do other stuff. So the slowness doesn't matter, and you get basically unlimited compute, you know?
hliyan 5 hours ago [-]
I think each responder should include their level of coding proficiency with their answer – or at least whether they are able to (or even bother to) read the code that the tool generates. Preferences will vary wildly based on it.
snthpy 6 hours ago [-]
Cline?
triptych 6 hours ago [-]
I love Cline and use it every day. It works the way I think and makes smart decisions about features.
vladstudio 6 hours ago [-]
Cline!
powerapple 5 hours ago [-]
I tested Windsurf last week; it installed all dependencies into my global Python... it didn't know best practices for Python and didn't create any virtual env. I am disappointed. My Cursor experience was slightly better. Still, one issue I had was how to make sure it does not change the parts of the code I don't want it to change. Every time I asked it to do something to A, it rewrote B in the process – very annoying.
Which model are you running locally?
Is it faster than waiting for Claude's generation?
What gear do you use?
Daedren 3 hours ago [-]
Considering Microsoft is closing down on the ecosystem, I'd pick VSCode with Copilot over those two.
It's a matter of time before they're shuttered or their experience gets far worse.
jonwinstanley 3 hours ago [-]
Unlikely they'll disappear. I currently use Cursor but am happy to change if a competitor is markedly better
sumedh 2 hours ago [-]
MS is slow to release new features though.
ChocolateGod 4 hours ago [-]
I use Windsurf but it's been having ridiculous downtime lately.
I can't use Cursor because I don't use Ubuntu which is what their Linux packages are compiled against and they don't run on my non-Ubuntu distro of choice.
m1117 53 minutes ago [-]
Cursor is a better vibe IMO
taherchhabra 4 hours ago [-]
Claude Code is the best so far; I am using the $200 plan. In terms of the feature matrix, all the tools are almost the same, with some hits and misses, but speed is where Claude Code wins.
websap 6 hours ago [-]
Currently using cursor. I've found cursor even without the AI features to be a more responsive VS Code. I've found the AI features to be particularly useful when I contain the blast radius to a unit of work.
If I am continuously able to break down my work into smaller pieces and build a tight testing loop, it does help me be more productive.
random42 3 hours ago [-]
How do you define a unit of work for your purposes?
geoffbp 6 hours ago [-]
Vs code with agent mode
vb-8448 1 hours ago [-]
pycharm + augment code + Gemini/Claude to generate the prompt for augment code.
warthog 2 hours ago [-]
Windsurf - the repo code awareness is much higher than Cursor.
3 hours ago [-]
MangoCoffee 3 hours ago [-]
VScode + Github Copilot Pro. $10 per month to try out AI code assist is cheap enough
rcarmo 6 hours ago [-]
Neither. VS Code or Zed.
skrhee 6 hours ago [-]
Zed! I find it to be less buggy and generally more intuitive to use.
shaunxcode 6 hours ago [-]
neither : my pen is my autocomplete
n_ary 2 hours ago [-]
I agree with /u/welder. Preferably neither. Both of these are custom and runs the risk of being acquired and enshittified in future.
If you are using VScode, get familiar with cline. Aider is also excellent if you don’t want to modify your IDE.
Additionally, Jetbrains IDEs now also have built-in local LLMs and their auto-complete is actually fast and decent. They also have added a new chat sidepanel in recent update.
The goal is NOT to change your workflow or dev env, but to integrate these tools into your existing flow, despite what the narrative says.
sharedptr 6 hours ago [-]
Personally copilot/code assist for tab autocomplete, if I need longer boilerplate I request it to the LLM. Usually VIM with LSP.
Anything that’s not boilerplate I still code myself.
speedgoose 5 hours ago [-]
I’m using Github Copilot in VScode Insiders, mostly because I don’t want yet another subscription. I guess I’m missing out.
ebr4him 6 hours ago [-]
Both, most times one works better than the other.
weiwenhao 2 hours ago [-]
Tab completion is a nightmare when it produces unexpected code.
dvtfl 5 hours ago [-]
If you don't mind not having DAP and Windows support, then Zed is great.
tacker2000 4 hours ago [-]
Hijacking this thread: what's the best AI tool for Neovim?
vasachi 6 hours ago [-]
I’d just wait a bit. At current rate of progress winner will be apparent sooner rather than later.
brokegrammer 2 hours ago [-]
Lately I switched to using a triple monitor setup and coding with both Cursor and Windsurf. Basically, the middle monitor has my web browser that shows the front-end I'm building. The left monitor has Cursor, and right one has Windsurf. I start coding with Cursor first because I'm more familiar with its interface, then I ask Windsurf to check if the code is good. If it is, then I commit. Once I'm done coding a feature, I'll also open VScode in the middle monitor, with Cline installed, and I will ask it to check the code again to make sure it's perfect.
I think people who ask the "either or" question are missing the point. We're supposed to use all the AI tools, not one or two of them.
throwaway4aday 2 hours ago [-]
Why not just write a script that does this but with all of the model providers and requests multiple completions from each? Why have a whole ass editor open just for code review?
brokegrammer 1 hours ago [-]
It's not just an "editor". Both Windsurf and Cursor do some tricks with context so that the underlying LLM doesn't get confused. Besides, writing a script sounds hard, no need to spend the extra energy when you can simply open a tool. Anyway, that's how I code, feel free to do whatever you prefer.
octocop 3 hours ago [-]
Use vim
sidcool 6 hours ago [-]
VS Code with Copilot.
urbandw311er 4 hours ago [-]
This is the way.
Getting great results both in chat, edit and now agentic mode. Don’t have to worry about any blocked extensions in the cat and mouse game with MS.
manojkumarsmks 6 hours ago [-]
Using cursor.. pretty good tool.
Pick one and start.
coolcase 5 hours ago [-]
Still on Codeium, lol! Might give Aider another spin. It has never been quite good for my needs, but tech evolves.
abeyer 4 hours ago [-]
vi
Alifatisk 5 hours ago [-]
Trae.ai actually, otherwise Windsurf
deafpolygon 2 hours ago [-]
I use neovim now, after getting tired of the feature creep and the constant chasing of shiny new features.
AI is not useful when it does the thinking for you; it's just advanced snippets at that point. I only use LLMs to explain things or to clarify a topic that doesn't make sense to me right away. That's where they show their real strength.
Using AI for autocomplete? I turn it off.
LOLwierd 5 hours ago [-]
Zed!! The base editor is just better than VSCode.
And they just released agentic editing.
anotheryou 4 hours ago [-]
Windsurf, no autocomplete.
You should also ask whether people have actually used both :)
adocomplete 6 hours ago [-]
Amp.
Early access waitlist -> ampcode.com
dotemacs 3 hours ago [-]
This is a product by Sourcegraph https://sourcegraph.com who already have a solution in this space.
Is this something wildly different to Cody, your existing solution, or just a "subtle" attempt to gain more customers?
kondu 4 hours ago [-]
I'd love to try it, could you please share an invite? My email is on my profile page.
kixpanganiban 4 hours ago [-]
Interesting! Do you have an invite to spare? My email is in my bio
da_me 6 hours ago [-]
Cursor for personal projects and Just Pycharm for work projects.
bundie 4 hours ago [-]
None
karbon0x 4 hours ago [-]
Claude Code
outside1234 4 hours ago [-]
VSCode with Github Copilot!
retinaros 5 hours ago [-]
I am using both. Windsurf feels more complete and less clunky. They are very close though, and the pace of major updates is crazy.
I don't like CLI-based tools for coding, and I don't understand why they are being shilled. Claude Code is maybe better at coding from scratch because it is only raw power, eating tokens like there's no tomorrow, but it is the wrong interface for building anything serious.
Right now it's dumb Unix piping only.
I want an AI that can use Emacs or Vim with me.
Sure, you might not like it and think you as a human should write all the code, but the frequent experience in the industry over the past months is that productivity in teams using tools like this has greatly increased.
It is not unreasonable to think that someone who decides not to use tools like this will not be competitive in the market in the near future.
I was converting a bash script to Bun/TypeScript the other day. I was doing it the way I am used to… working on one file at a time, only bringing in the AI when helpful, reviewing every diff, and staying in overall control.
Out of curiosity, threw the whole task over to Gemini 2.5Pro in agentic mode, and it was able to refine to a working solution. The point I’m trying to make here is that it uses MCP to interact with the TS compiler and linters in order to automatically iterate until it has eliminated all errors and warnings. The MCP integrations go further, as I am able to use tools like Console Ninja to give the model visibility into the contents of any data structure at any line of code at runtime too. The combination of these makes me think that TypeScript and the tooling available is particularly suitable for agentic LLM assisted development.
Quite unsettling times, and I suppose it’s natural to feel disconcerted about how our roles will become different, and how we will participate in the development process. The only thing I’m absolutely sure about is that these things won’t be uninvented with the genie going back in the bottle.
Sometimes it auto-completes nonsense, but sometimes I think I'm about to tab on auto-completing a method like FooABC and it actually completes it to FoodACD, both return the same type but are completely wrong.
I have to really be paying attention to catch it selecting the wrong one. I really, really hate this. When it works it's great, but every day I'm closer to just turning it off out of frustration.
I don’t think the point was “don’t use LLM tools”. I read the argument here as about the best way to integrate these tools into your workflow.
Similar to the parent, I find interfacing with a chat window sufficiently productive and prefer that to autocomplete, which is just too noisy for me.
Old fashioned variable name / function name auto complete is not affected.
I considered a small macropad to enable/disable it with a status light – but honestly I don't do enough work to justify avoiding work by finding/building/configuring/rebuilding such a solution. If the future is this sort of extreme autocomplete in everything I do on a computer, I would probably go to the effort.
The thing that bugs me is when I'm trying to use tab to indent with spaces but I get a suggestion instead.
I tried disabling caps lock and remapping tab to caps lock, but no joy.
I'm sure it's initially slower than vibe-coding the whole thing, but at least I end up with a maintainable code base, and I know how it works and how to extend it in the future.
Went back to VSCode with a tuned down Copilot and use the chat or inline prompt for generating specific bits of code.
Are you getting irrelevant suggestions? Those autocompletes are meant to predict the things you are about to type.
For example, VS Code has Cline & Kilo Code (disclaimer: I help maintain Kilo).
Jetbrains has Junie, Zencoder, etc.
Cursor, Windsurf, etc tend to feel like code vomit that takes more time to sift through than working through code by myself.
I have largely disabled it now, which is a shame, because there are also times it feels like magic and I can see how it could be a massive productivity lever if it needed a tighter confidence threshold to kick in.
But I found once it was optional I hardly ever used it.
I use Deepseek or others as a conversation partner or rubber duck, but I'm perfectly happy writing all my code myself.
Maybe this approach needs a trendy name to counter the "vibe coding" hype.
I always forget syntax for things like ssh port forwarding. Now just describe it at the shell:
$ ssh (take my local port 80 and forward it to 8080 on the machine betsy) user@betsy
or maybe:
$ ffmpeg -ss 0:10:00 -i somevideo.mp4 -t 1:00 (speed it up 2x) out.webm
I press ctrl+x x and it will replace the english with a suggested command. It's been a total game changer for git, jq, rsync, ffmpeg, regex..
For more involved stuff there's screen-query: confusing crashes, strange terminal errors, weird config scripts – it allows a joint investigation, whereas aider and friends just feel like I'm asking AI to fuck around.
For extradata it sends uname and the procname when it captures such as "nvim" or "ipython" and that's it.
AI autocomplete is a feature, not a product (to paraphrase SJ)
I can understand Windsurf getting the valuation, as they had their own Codeium model.
$B for a VSCode fork? Lol
Cursor and Windsurf are both good, but do what most people do and use Cursor for a month to start with.
Overall Zed is super nice and the opposite of janky, but I still found a few of the defaults were off, and Python support was still missing in a few key ways for my daily workflow.
Edit:
With the latest update to 0.185.15 it works perfectly smooth. Excellent addition to my setup.
I have also really appreciated something that felt much less janky, had better vim bindings, and wasn't slow to start even on a very fast computer. You can completely botch Cursor if you type really fast. On an older mid-range laptop, I ran into problems with a bunch of its auto-pair stuff of all things.
I don't think Zeta is quite up to windsurf's completion quality/speed.
I get that this would go against their business model, but maybe people would pay for this - it could in theory be the fastest completion since it would run locally.
But I daily drive Cursor because the main LLM feature I use is tab-complete, and here Cursor blows the competition out of the water. It understands what I want to do next about 95% of the time when I'm in the middle of something, including comprehensive multi-line/multi-file changes. Github Copilot, Zed, Windsurf, and Cody aren't at the same level imo.
I built a minimal agentic framework (with editing capability) that works for a lot of my tasks with just seven tools: read, write, diff, browse, command, ask and think.
One thing I'm proud of is the ability to have it be more proactive in making changes and taking next action by just disabling the `ask` tool.
I won't say it is better than any of the VSCode forks, but it works for 70% of my tasks in an understandable manner. As for the remaining stuff, I can always use Cursor/Windsurf in a complementary manner.
It is open source; have a look at https://github.com/aperoc/toolkami if it interests you.
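The design described above — a handful of named tools plus a loop that feeds tool results back to the model — can be sketched in a few lines. This is a hypothetical illustration of the pattern, not the actual toolkami code; the tool names match the comment, everything else is invented for the example:

```python
import subprocess
from pathlib import Path

# Three of the seven tools; diff, browse, ask and think would
# register in TOOLS the same way.
def read(path):
    """Return a file's contents."""
    return Path(path).read_text()

def write(path, text):
    """Overwrite a file with new text."""
    Path(path).write_text(text)
    return f"wrote {path}"

def command(cmd):
    """Run a shell command and capture its stdout."""
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

TOOLS = {"read": read, "write": write, "command": command}
# Dropping "ask" from TOOLS is what makes the agent fully proactive:
# it can no longer pause to ask the user, so it just keeps acting.

def run_agent(llm_step, max_turns=10):
    """Feed tool results back to the model until it stops requesting tools.

    llm_step(history) is a stand-in for the LLM call: given the history of
    (tool, args, result) tuples so far, it returns the next tool call as
    (name, args), or None when it is done.
    """
    history = []
    for _ in range(max_turns):
        call = llm_step(history)
        if call is None:
            break
        name, args = call
        result = TOOLS[name](*args)
        history.append((name, args, result))
    return history
```

With a scripted `llm_step` you can exercise the loop without any model at all, which also makes the dispatch logic easy to test.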
Roo is less solid but better-integrated.
Hopefully I'll switch back soon.
It was easy to figure out exactly what it's sending to the LLM, and I like that it does one thing at a time. I want to babysit my LLMs and those "agentic" tools that go off and do dozens of things in a loop make me feel out of control.
For the occasional frontend task, I don’t mind being out of control when using agentic tools. I guess this is the origin of Karpathy’s vibe coding moniker: you surrender to the LLM’s coding decisions.
For backend tasks, which is my bread and butter, I certainly want to know what it’s sending to the LLM so it’s just easier to use the chat interface directly.
This way I am fully in control. I can cherry pick the good bits out of whatever the LLM suggests or redo my prompt to get better suggestions.
Probably related to Sonnet 3.7’s rampant ADHD and less the CLI tool itself (and maybe a bit of LLMs-suck-at-Swift?)
I'm using VS Code for most edits. Tab completion is done via Copilot, though I don't use it much, as I find the predictions subpar or too wordy in the case of comments. I use Aider for rubber-ducking and implementing small- to mid-scope changes. Normally, I add the required files, change to architect or ask mode (depending on the problem I want to solve), and explain what my problem is and how I want it solved. If Aider's answer satisfies me, I change to coding mode and allow the changes.
No magic; I have no idea how a single prompt can generate $4. I wouldn't be surprised if I'm only scratching the surface with my approach, though; maybe there's a better but more costly strategy yielding better results that I just haven't realized yet.
I assume we have a very different workflow.
With deepseek: ~nothing.
My only problem was deepseek occasionally not answering at all, but generally it was fast (non thinking that was).
Compared to Aider, Brokk:
- Has a GUI (I know, tough sell for Aider users but it really does help when managing complex projects)
- Builds on a real static analysis engine so its equivalent to the repomap doesn't get hopelessly confused in large codebases
- Has extremely useful git integration (view git log, right click to capture context into the workspace)
- Is also OSS and supports BYOK
I'd love to hear what you think!
Also the --watch mode is the most productive interface of using your editor, no need of extra textboxes with robot faces.
So many of the bugs and poor results that it can introduce are simply due to improper context. When forcibly giving it the necessary context you can clearly see it’s not a model problem but it’s a problem with the approach of gathering disparate 100 line snippets at a time.
Also, it struggles with files over ~800 lines, which is extremely annoying.
We need some smart deepseek-like innovation in context gathering since the hardware and cost of tokens is the real bottleneck here.
I've been building SO MANY small apps and web apps in the latest months, best $20/m ever spent.
Somehow other models don't work as well with it. "auto" is the worst.
Still, I hate it when it deletes all my unit tests to "make them pass".
Ideally, something like RooCode + Claude is much better, but you need an infinite money glitch.
Windsurf I think has more features, but I find it slower compared to others.
Cursor is pretty fast, and I like how it automatically suggests completion even when moving my cursor to a line of code. (Unlike others where you need to 'trigger' it by typing a text first)
Honorable mention: Supermaven. It was the first and fastest AI autocomplete I used. But it's no longer updated since they were acquired by Cursor.
I’ve personally never felt at home in vscode. If you’re open to switching, definitely check out Zed, as others are suggesting.
I can't say anything about Windsurf (as I haven't tried yet) but I can confidently say Cursor is great.
Now I'm testing Claude Code’s $100 Max plan. It feels like magic - editing code and fixing compile errors until it builds. The downside is I’m reviewing the code a lot less since I just let the agent run.
So far, I’ve only tried it on vibe coding game development, where every model I’ve tested struggles. It says “I rewrote X to be more robust and fixed the bug you mentioned,” yet the bug still remains.
I suspect it will work better for backend web development I do for work: write a failing unit test, then ask the agent to implement the feature and make the test pass.
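The test-first loop described here is concrete enough to sketch: write the failing test before the feature exists, hand it to the agent, and let it iterate until green. `slugify` below is a made-up feature purely for illustration:

```python
import re

# Step 1: write the failing test first. At this point slugify()
# doesn't exist yet; the agent's job is to make this pass.
def test_slugify_basic():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaced  out  ") == "spaced-out"

# Step 2: what the agent might produce to satisfy the test.
def slugify(text):
    """Lowercase, drop punctuation, collapse whitespace into hyphens."""
    text = re.sub(r"[^a-z0-9\s-]", "", text.lower())
    return re.sub(r"[\s-]+", "-", text).strip("-")

test_slugify_basic()  # green: the feature is done
```

The appeal for agents is that the test gives them an unambiguous stopping condition: run the suite, read the failure, edit, repeat.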
Also, give Zed’s Edit Predictions a try. When refactoring, I often just keep hitting Tab to accept suggestions throughout the file.
Compared to Zed Agent, Claude Code is:
- Better at editing files. Zed would sometimes return the file content in the chatbox instead of updating it. Zed Agent also inserted a new function in the middle of an existing function.
- Better at running tests/compiling. Zed struggled with the nix environment, and I don't remember it entering the update code -> run code -> update code feedback loop.
With this you can leave Claude Code alone for a few minutes, check back and give additional instructions. With Zed Agent it was more of a constantly monitoring / copy pasting and manually verifying everything.
*I haven't tested many of the other tools mentioned here, this is mostly my experience with Zed and copy/pasting code to AI.
I plan to test other tools when my Claude Code subscription expires next month.
There are already a few popular open-source extensions doing 90%+ of what Cursor is doing - Cline, Roo Code (a fork of Cline), Kilo Code (a fork of Roo Code and something I help maintain).
If you're on Arch, there's even an AUR package, so it's even less steps than that.
I get much better result by asking specific question to a model that has huge context (Gemini) and analyzing the generated code carefully. That's the opposite of the style of work you get with Cursor or Windsurf.
Is it less efficient? If you are paid by LoCs, sure. But for me the quality and long-term maintainability are far more important. And especially the Tab autocomplete feature was driving me nuts, being wrong roughly half of the time and basically just interrupting my flow.
I.e. it's not a "plugin" but a built-in ecosystem developed by the core team.
Speed of iterations on new features is quite impressive.
Their latest agentic editing update basically brought claude code cli to the editor.
Most corporations don't have direct access to arbitrary LLMs, but they do through Microsoft's GitHub Copilot. You can use models through Copilot and other providers like Ollama, which is great for work.
With their expertise (team behind pioneering tech like electron, atom, teletype, tree sitter, building their own gpu based cross platform ui etc.) and velocity it seems that they're positioned to outpace competition.
Personally I'd say that their tech is maybe two orders of magnitude more valuable than windsurf?
It's not only that they have it built in; it currently seems to be the best open replacement for tools like the Claude Code CLI, because you can use an arbitrary LLM with it (e.g. from Ollama) and you have great extension points (MCP servers, rules, slash commands, etc.).
[0] https://zed.dev/agentic
My reference for agent mode is Claude Code. It's far from perfect, but it uses sub-tasks and summarization using smaller haiku model. That feels way more like a coherent solution compared to Cursor. Also Aider ain't bad when you're OK with more manual process.
Windsurf: Have only used it briefly, but agent mode seems somewhat better thought out. For example, they present possible next steps as buttons. Some reviews say it's even more expensive than Cursor in agent mode.
This represents one group of developers and is certainly valid for that group. To each their own
For another group, where I belong, AI is a great companion! We can handle the noise and development speed is improved as well as the overall experience.
I prefer VSCode and GitHub Copilot. My opinion is this combo will eventually eat all the rest, but that's beside the point.
Agent mode could be faster, sometimes it is rather slow thinking but not a big deal. This mode is all I use these days. Integration with the code base is a huge part of the great experience
The agents are a bit beta, it can’t solve bugs very often, and will write a load of garbage if you let it.
The flat pricing of Claude Code seems tempting, but it's probably still cheaper for me to go with usage pricing. I feel like loading my Anthropic account with the minimum of $5 each time would last me 2-3 days depending on usage. Some days it wouldn't last even a day.
I'll probably give Open AI's Codex a try soon, and also circle back to Aider after not using it for a few months.
I don't know if I misunderstand something with Cursor or Copilot. It seems so much easier to use Claude Code than Cursor, as Claude Code has many more tools for figuring things out. Cursor also required me to add files to the context, which I thought it should figure out on its own.
Cursor can find files on its own. But if you point it in the right direction it has far better results than Claude code.
Do they publish any benchmark sheet on how it compares against others?
It went through multiple stages of upgrades, and I would say at this stage it is better than Copilot. Fundamentally it is as good as Cursor or Windsurf, but it lacks some features and cannot match their speed of release. If you're on AWS, though, it's a compelling offering.
[0] https://zed.dev/blog/fastest-ai-code-editor
I’m thinking of building an AI IDE that helps engineers write production quality code quickly when working with AI. The core idea is to introduce a new kind of collaboration workflow.
You start with the same kind of prompt, like “I want to build this feature...”, but instead of the model making changes right away, it proposes an architecture for what it plans to do, shown from a bird’s-eye view in the 2D canvas.
You collaborate with the AI on this architecture to ensure everything is built the way you want. You’re setting up data flows, structure, and validation checks. Once you’re satisfied with the design, you hit play, and the model writes the code.
Website (in progress): https://skylinevision.ai
YC Video showing prototype that I just finished yesterday: https://www.youtube.com/watch?v=DXlHNJPQRtk
Karpathy’s post that talks about this: https://x.com/karpathy/status/1917920257257459899
Thoughts? Do you think this workflow has a chance of being adopted?
PS. I’m stealing the ‘antidote to “vibe coding”’ phrase :)
The only thing I kept thinking about was: if a correction is needed, you have to make it fully by hand, finding and mapping everything yourself. If the first try was way off, I'd like to enter a correction from a "midpoint". So instead of fixing 50%, I'd be left with maybe 10 or 20. Don't know if you get what I mean.
Eventually, you’d say, ‘add an additional layer, TopicsController, between those two files,’ and the local model would do it quickly without a problem, since it doesn’t involve complicated code generation. You’d only use powerful remote models at the end.
> Which LLM are you?
> I am Claude, an AI assistant created by Anthropic. In this interface, I'm operating as "Junie," a helpful assistant designed to explore codebases and answer questions about projects. I'm built on Anthropic's large language model technology, specifically the Claude model family.
JetBrains' wider AI tools let you choose the model that gets used, but as far as I can tell, Junie doesn't. That said, it works great.
Cursor is very lazy about looking beyond the current context, or even at context at all; sometimes it feels like it's trying to one-shot a guess without looking deeper.
Bad thing about Windsurf is the plans are pretty limited and the unlimited “cascade base” feels dumb the times I used it so ultimately I use Cursor until I hit a wall then switch to Windsurf.
Haven't tried out Cursor / Windsurf yet, but I can see how I can adapt Claude Desktop to specifically my workflow with a custom MCP server.
Cursor works roughly how I've expected. It reads files and either gets it right or wrong in agent mode.
Windsurf seems restricted to reading files 50 lines at a time, and often will stop after 200 lines [0]. When dealing with existing code I've been getting poorer results than Cursor.
As to autocomplete: perhaps I haven't set up either properly (for PHP), but the autocomplete in both is good for pattern-matching changes I make, and terrible for anything that requires knowledge of what methods an object has, the parameters a method takes, etc. They both hallucinate wildly, so I end up doing bits of editing in Cursor/Windsurf while having the same project open in PhpStorm to make use of its intellisense.
I'm coming to the end of both trials and the AI isn't adding enough over Jetbrains PhpStorm's built in features, so I'm going back to that until I figure out how to reduce hallucinations.
0. https://www.reddit.com/r/Codeium/comments/1hsn1xw/report_fro...
BTW there's a new OSS competitor in town that hit the front page a couple of days ago - Void: Open-source Cursor alternative https://news.ycombinator.com/item?id=43927926
On something like a M4 Macbook Pro can local models replace the connection to OpenAi/Anthropic?
I've tried VSCode with Copilot a couple of times and it's frustrating: you have to point out individual files for edits, and project-wide requests are a pain.
My only pain is the workflow for developing mobile apps where I have to switch back and forth between Android Studio and Xcode as vscode extensions for mobile are not so good
Cursor, Windsurf et al have no "moat" (in startup speak), in that a sufficiently resourced organization (e.g. Microsoft) can just copy anything they do well.
VS code/Copilot has millions of users, cursor etc have hundreds of thousands of users. Google claims to have "hundreds of millions" of users but we can be pretty sure that they are quoting numbers for their search product.
- We are in public beta and free for now.
- Fully Agentic. Controllable and Transparent. Agent does all the work, but keeps you in the loop. You can take back control anytime and guide it.
- Not an IDE, so we don't compete with the VSCode forks. The interface is just a chatbox.
- More like Replit - but full stack focussed. You can build backend services.
- Videos are up at youtube.com/@nonbios
My best experience so far is v0.dev :)
It's a matter of time before they're shuttered or their experience gets far worse.
I can't use Cursor because I don't use Ubuntu which is what their Linux packages are compiled against and they don't run on my non-Ubuntu distro of choice.
If I am continuously able to break down my work into smaller pieces and build a tight testing loop, it does help me be more productive.
If you are using VScode, get familiar with cline. Aider is also excellent if you don’t want to modify your IDE.
Additionally, Jetbrains IDEs now also have built-in local LLMs and their auto-complete is actually fast and decent. They also have added a new chat sidepanel in recent update.
The goal is NOT to change your workflow or dev env, but to integrate these tools into your existing flow, despite what the narrative says.
Anything that’s not boilerplate I still code it
I think people who ask the "either or" question are missing the point. We're supposed to use all the AI tools, not one or two of them.
Getting great results both in chat, edit and now agentic mode. Don’t have to worry about any blocked extensions in the cat and mouse game with MS.
AI is not useful when it does the thinking for you. It's just advanced snippets at that point. I only use LLMs to explain things or to clarify a topic that doesn't make sense right away to me. That's when it shows it's real strength.
Using AI for autocomplete? I turn it off.
and they just released agentic editing.
you should also ask if people actually used both :)
Early access waitlist -> ampcode.com
Is this something wildly different to Cody, your existing solution, or just a "subtle" attempt to gain more customers?
I don't like CLI-based tools for coding. I don't understand why they're being shilled. Claude Code is maybe better at coding from scratch, because it's pure raw power eating tokens like there's no tomorrow, but it is the wrong interface for building anything serious.