I mostly use LM Studio now because it's much better optimised for local use than Ollama — the speed difference is 50%+. There are also some very good vision LLMs coming out, like Qwen3-VL.
I was wondering if maybe now is the time to integrate this option? I used UI.Vision for a long time, but now I want to automate things in web browsers that are just too variable, so I need an LLM to think between the steps. And I want it local because of token costs and security (sensitive data). Thx in advance, keep up the great work! Cheers ^^